Intelligent Systems

RUG > FSE > Bernoulli Institute

Available projects for RuG students

Currently available projects are described in this document.

You can also contact one of the group members to check for current research topics and supervision availability.

2023/2024

Supervisor: Kerstin Bunte

  • Efficient Sampling Methods for Learning in the Model Space, BSc/MSc

Status: Open

No. of positions: 3

Learning in the Model Space
Time series data emerges in virtually all scientific sectors and also plays an important role in modern medicine. Medical data tend to be irregularly or sparsely sampled and typically contain noise. These limitations in the amount and quality of data make classification using traditional machine learning (ML) methods difficult. However, if the process that produces the time series data can be explained by a dynamical system, then a mechanistic model can be introduced to incorporate domain knowledge, effectively guiding the ML process and making the learned outcomes interpretable.
A framework which is particularly suited for this job is Learning in the Model Space (LiMS). Instead of performing classification on the time-series directly, a posterior distribution is constructed for every time series, which quantifies the level of belief for every possible parameterization (realization) of the given mechanistic system. Obtaining this distribution is typically not analytically tractable, so the posterior has to be approximated using sampling techniques.

Sampling Methods
In the context of Bayesian inference, sampling involves drawing random samples from the posterior distribution of a model. These samples are used to approximate complex integrals, enabling the estimation of model parameters and making predictions. Several sampling methods have been developed to tackle Bayesian inference problems. Notable techniques include:

  • Markov Chain Monte Carlo (MCMC): MCMC methods involve generating a Markov chain of samples, which converges to the target posterior distribution. Two MCMC variants that are particularly relevant are:
    * Parallel Tempering MCMC: This method introduces multiple chains with different temperatures to improve exploration of the parameter space.
    * Hybrid Monte Carlo MCMC: Hybrid Monte Carlo combines Hamiltonian dynamics and MCMC to enhance the efficiency of sampling.
  • Nested Sampling: Nested Sampling is an alternative method that explores the posterior distribution by iteratively constructing a sequence of nested likelihood-weighted distributions.
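To make the idea concrete, here is a minimal random-walk Metropolis sketch in Python (NumPy). The function name and the toy one-parameter Gaussian posterior are illustrative assumptions, not part of the LiMS framework; a real LiMS posterior would come from the likelihood of the mechanistic model.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis: draws dependent samples whose distribution
    converges to the target density exp(log_post)."""
    rng = np.random.default_rng(seed)
    theta, lp = theta0, log_post(theta0)
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = theta + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        # accept with probability min(1, p(prop)/p(theta))
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

# Toy posterior: standard normal over a single model parameter
chain = metropolis_hastings(lambda t: -0.5 * t**2, theta0=0.0, n_samples=20000)
burned = chain[5000:]   # discard burn-in before summarising
```

Parallel Tempering runs several such chains at different "temperatures" (flattened versions of `log_post`) and swaps states between them, while Hybrid/Hamiltonian Monte Carlo replaces the random-walk proposal with a gradient-informed trajectory.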

Research Question 
Which sampling method is most suitable for Learning in the Model Space applications?

Concrete Research Tasks

  • Learn about the LiMS Framework:
    * Gain a comprehensive understanding of the Learning in the Model Space (LiMS) framework and its importance in Bayesian inference.
  • Study State-of-the-Art Sampling Methods:
    * Investigate state-of-the-art sampling methods, including Parallel Tempering MCMC, Hybrid Monte Carlo MCMC, and Nested Sampling.
    * Utilize research papers by Ballnus et al. and the review by Buchner as starting points for in-depth exploration.
  • Determine the Most Suitable Sampling Technique:
    * Implement state-of-the-art sampling methods, with core functionality preferably implemented in C/C++ behind a Matlab or Python wrapper.
    * Evaluate and compare the effectiveness and efficiency of the sampling techniques in the context of LiMS applications.
    * Identify the sampling method that best suits the unique requirements of LiMS.

Depending on the level of the student (BSc/MSc), the scope of the project is flexible. Whether the project focuses on contributions on the theoretical side or more on implementation is up to the student’s personal preference.

References

  • A classification framework for Partially-observed Dynamical Systems 
  • Nested Sampling methods
  • Comprehensive benchmarking of Markov chain Monte Carlo methods for dynamical systems

How to apply for this project?
For further information contact: Kerstin Bunte (k.bunte@rug.nl), Elisa Oostwal (e.c.oostwal@rug.nl) or Janis Norden (j.norden@rug.nl).

  • Pediatric pulmonary hypertension (PH), MSc

Status: Open

No. of positions: 3

Pediatric pulmonary hypertension (PH) is a rare disease and defined by an increased pulmonary arterial pressure. Based on pathophysiological mechanisms, clinical presentation, and hemodynamic characteristics, PH can be classified into five main diagnosis groups: pulmonary arterial hypertension (PAH, group 1), PH due to left heart disease (group 2), PH due to lung disease and/or hypoxia (group 3), PH due to pulmonary artery obstructions (group 4), and PH with unclear and/or multifactorial mechanisms (group 5). Each PH type can be further divided into multiple diagnosis subgroups.

Group 1 PAH is a progressive and eventually fatal pulmonary vascular disease. The introduction of PAH-targeted therapies in the past two decades has improved survival in children and adults with PAH, but prognosis is still poor. Once a patient is in end-stage disease, lung transplantation is the only remaining treatment option. Current treatment strategies are guided by risk stratification, where the patients are categorized as having low, intermediate or high risk for mortality with the aim to achieve and maintain a low-risk status. The estimated risk is based on multiple clinical, hemodynamic, and echocardiographic parameters with their own cut-off values for each risk category. However, the prognostic ability of current risk stratification models in adult patients is moderate at best, and research supporting the use of risk stratification in children with PAH is scarce.

In the Netherlands, all children with PH are referred to the national referral center for pediatric PH in Groningen, where, for over 20 years now, they have been diagnosed, treated and followed according to standardized protocols. The Dutch National Registry for Pulmonary Hypertension in Childhood systematically collects data on these patients at set follow-up time points, including clinical presentation, symptoms, physical examination, genetic analysis, biochemical biomarkers, ECG, echocardiography, MRI, cardiac catheterization data, exercise performance, accelerometry, treatment strategies and outcome. Using machine learning techniques, we want to find hidden patterns within these registry data, specifically focusing on 1) searching for clusters within the data that may serve as disease phenotypes, determining whether these phenotypes improve the current classification of PH patients, and how they relate to outcome, and 2) generating a self-learning predictive model to evaluate disease progression, treatment response and prognosis in pediatric PAH patients. With the resulting models we hope to create the gateway to an evidence-based personalized treatment approach for pediatric PH and, ultimately, to improve the outcomes of these children.

The dataset

The Dutch National Registry for Pulmonary Hypertension in Childhood will be used for data analysis. This registry contains data systematically collected over the last 20 years from around 250 consecutive children with pulmonary hypertension. The registry includes over 3500 follow-up moments with a mean follow-up time of 5 years, adding up to 1100 patient follow-up years. The data collected include individual patient data regarding clinical presentation, diagnostic work-up, diagnostic classification (etiology), genetic test results, treatment and follow-up. Follow-up data, collected every 3 to 6 months, contain biometric and clinical data such as physical examination data, treatment data, exercise test results including six-minute walking distance and accelerometry, laboratory test results (including biochemical markers for heart failure such as NT-proBNP), ECGs, and echocardiographic data (measurements and raw data). Data indicating disease progression, such as hospitalizations, treatment escalations, the need for catheter interventions or lung transplantation, and death, are also included in the database. Due to the grim prognosis of pediatric PAH, with a 5-year survival rate of 65%, almost 100 outcome events such as deaths or lung transplantations have been registered. To summarize, in total the registry includes more than 150,000 data points, plus raw ECG, echocardiographic and accelerometric data.

Expectations

This project is the first exploratory data analysis of a novel collaboration, so many directions are possible. If you are interested in biomedical data analysis, have a good understanding of general supervised and unsupervised machine learning techniques, enjoy potentially going deeper into mathematical concepts, and are not afraid of exploring novel directions independently, this project might be for you!



How to apply for this project


This is a collaboration project and hence we have an application procedure. Submit the following:
 1) a short motivation letter explaining why you are the best student to take up this project (max 2 pages)
 2) a short CV/course grades, so we can see your background knowledge (max 2 pages)
 After we have reviewed the material, we will invite suitable candidates for a short interview.
For further information contact the supervisors: 
CS: Kerstin Bunte (k.bunte@rug.nl) or Elisa Oostwal (e.c.oostwal@rug.nl) or  UMCG: Chantal Lokhorst (c.lokhorst@umcg.nl).

  • Digitization of ECG signals from images, SPP

Status: Taken

No. of positions: 1

Pulmonary hypertension (PH) is a rare disease and defined by an increased pulmonary arterial pressure. Based on pathophysiological mechanisms, clinical presentation, and hemodynamic characteristics, PH can be classified into five main diagnosis groups. Group 1 PAH is a progressive and eventually fatal pulmonary vascular disease. The introduction of PAH-targeted therapies in the past two decades has improved survival in children and adults with PAH, but prognosis is still poor. Once a patient is in end-stage disease, lung transplantation is the only remaining treatment option.

In the Netherlands, all children with (suspected) PH are referred to the national referral center for pediatric PH in Groningen, where, for over 20 years now, they have been diagnosed, treated and followed according to standardized protocols. To diagnose PH the patient has to undergo right heart catheterization (RHC). During this invasive procedure, a catheter is guided to the right side of the heart and into the pulmonary artery. Along the way, multiple pressure curves are obtained. An example of one of these pressure curves is given in the figure below. The three signals at the top (I, II, and III) are ECG signals, and the one at the bottom is the pressure curve measured in the left pulmonary artery (PA L). The x-axis shows the time in seconds (at the top) with a paper speed of 25 mm/s, and the y-axis gives the pressure in mmHg. The values on the axes can differ for each graph/image, and since the images are screenshots (PNG files), the pixel position of the axes may differ as well.

The aim of this project is to convert the images of the RHC pressure curves into digital signals.

The end product should be able to:

1. Recognize the values on the x and y-axis.

2. Generate the signal trace.

3. Extract the numerical values from the pressure plots and validate these against the values given at the bottom of the plot.

4. Output the signal as an Excel file with two columns, one for time (s) and one for pressure (mmHg), at a high sample frequency.

5. Optional: Split the signal in epochs based on the heart cycle using the RR interval. In the figure above, one RR interval is marked by blue lines as an example.

Challenging aspects include overlapping signals and multiple pressure curves in one image, which must be separated.
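A minimal sketch of steps 1, 2 and 4 in Python (NumPy): two known axis ticks give a linear pixel-to-value calibration, and the trace is taken as the darkest pixel per image column. All names and the synthetic test image are illustrative assumptions; real screenshots would additionally need automatic tick detection (e.g. via OCR) and separation of overlapping curves.

```python
import numpy as np

def calibrate(p0, p1, v0, v1):
    """Linear map pixel coordinate -> physical value, given two known axis ticks."""
    scale = (v1 - v0) / (p1 - p0)
    return lambda p: v0 + (np.asarray(p) - p0) * scale

def trace_from_image(img, px_to_s, px_to_mmHg):
    """Per column, take the darkest pixel as the curve position and
    return a two-column array: time (s), pressure (mmHg)."""
    rows = img.argmin(axis=0)                  # curve drawn dark on light
    t = px_to_s(np.arange(img.shape[1]))
    return np.column_stack([t, px_to_mmHg(rows)])

# Synthetic check: rasterise a known curve into a white image
h, w = 100, 200
img = np.full((h, w), 255, np.uint8)
rows_true = (50 + 30 * np.sin(np.linspace(0, 4 * np.pi, w))).astype(int)
img[rows_true, np.arange(w)] = 0
# x: pixel 0 -> 0 s, pixel 199 -> 8 s; y: row 80 -> 0 mmHg, row 20 -> 60 mmHg
sig = trace_from_image(img, calibrate(0, w - 1, 0.0, 8.0),
                       calibrate(80, 20, 0.0, 60.0))
```

Writing `sig` to an Excel file could then be done with a library such as openpyxl or pandas.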

For further information contact the CS or UMCG supervisor: Kerstin Bunte (k.bunte@rug.nl) or Chantal Lokhorst (c.lokhorst@umcg.nl).

  • INTERFACE DESIGN FOR SMART BIODIGESTERS, SPP

Status: Open

No. of positions: 2

The company Circ is the pioneer of the BioTransformer, a machine with a footprint of about 4 m² that transforms biowaste, such as leftover food and cutting waste, into a renewable source of biogas. BioTransformers are 'smart', in the sense that they are connected to the internet to collect sensor information and to monitor and control the process remotely. This allows Circ to roll out machines across the country. Customers can also see production statistics in a mobile app.

Since its initial release, the user interface at the front of the BioTransformer has received few updates, and is in dire need of a redesign. The user interface is the primary way for the user to interact with, control, and fill their BioTransformer, and is vital to operate smoothly and intuitively. Moreover, customers have asked for a range of features that should be considered for implementation. Lastly, the UI is connected to over 100 sensors, relays, and actuators. These factors make designing a suitable UI quite a challenge.

The project Circ presents comes in three phases:

1. Retrieving stakeholder wishes as short user stories and requirements, and briefly analysing the feasibility of planned features.
2. Drafting and designing an interface that meets the requirements.
3. Setting up hooks to connect to the underlying control model.

The deliverables of the project include:

1. A software package containing a user interface written in Qt5.
2. A short and concrete analysis of stakeholders, requirements, user stories, and design considerations.
3. A demonstration of the user interface (if time allows, preferably on location).

A satisfactory product demonstration will lead to the deliverable being used in developing the next major version of the BioTransformer. For further information, please do not hesitate to reach out to Robbin de Groot (r.degroot@circ.energy). For more information about Circ and their BioTransformer, see https://circ.energy.

  • FROG PATTERN RECOGNITION, SPP

Status: Open

No. of positions: 1

The African clawed frog (Xenopus laevis) is a commonly used model organism for cell biological, developmental, and biomedical research. In the laboratory setting, frog colonies are generally housed in aquatic tank systems, usually in groups of ten to twenty individuals per tank. For health monitoring and experimental quality control purposes, it is desirable to identify individual frogs regularly throughout their life. Recently, we have developed a novel pipeline for data acquisition, pre-processing, and training of a classification model based on the recognition of the biometric pattern these frogs show on their backs (Prins et al., 2023).

To make this tool available to the larger research community, in this project you will develop a web-based API which integrates the developed algorithm and provides users with the options to either train the model with their own frogs/colony, or apply the model to identify frogs within a colony. The frog ID should then be connected to a user-friendly database implementation that allows for the input/output of research relevant data (e.g., location/tank number, health history, experimental outcome parameters) to facilitate reusable and sustainable data management (DMP). For further information contact the supervisors: Kerstin Bunte (k.bunte@rug.nl) and Dario Tomanin (d.tomanin@rug.nl) PhD student in the Kamenz Lab, part of the Molecular Systems Biology group.

Supervisor: Michael Biehl

Supervisor: Michael Wilkinson

  • Adapting Max-Tree Objects (MTO) to extremely low photon counts, BSc/MSc

Status: Open

MTO is an astronomical source-finding tool based on max-trees and the use of chi-squared statistics. It has shown great performance for images in which the number of photons detected per pixel is high enough to allow the Poisson noise in the signal to be approximated by a Gaussian distribution with variance scaling linearly with the mean. In certain imaging modalities, such as in STED microscopy, photon counts are far too low to make this work reliably. There are several better statistical tests that could be used in this situation, but such an adaptation has yet to be realised. The aim of this project is to explore suitable statistical tests, and test this on real and simulated STED microscopy data.
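The core issue can be seen in a few lines of Python: at very low mean counts, the exact Poisson tail probability of an observed count differs by an order of magnitude or more from the Gaussian approximation that the current chi-squared machinery effectively relies on. The numbers below are purely illustrative, not STED data.

```python
import math

def poisson_sf(k, lam):
    """Exact P(X >= k) for X ~ Poisson(lam), k >= 1."""
    term = cdf = math.exp(-lam)        # i = 0 term
    for i in range(1, k):              # accumulate P(X <= k-1)
        term *= lam / i
        cdf += term
    return 1.0 - cdf

def gaussian_sf(k, lam):
    """Gaussian approximation: mean lam, variance lam."""
    z = (k - lam) / math.sqrt(lam)
    return 0.5 * math.erfc(z / math.sqrt(2))

lam, k = 2.0, 7   # e.g. 2 background photons expected, 7 observed
exact, approx = poisson_sf(k, lam), gaussian_sf(k, lam)
# The exact Poisson tail is far heavier than the Gaussian one here,
# so Gaussian-based tests overstate the significance of faint structures.
```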

  • Comparison of LSBGnet to MTO for finding low-surface-brightness galaxies, BSc/MSc

Status: Open

LSBGnet (Su et al., 2024) is a recently published deep network aimed at finding low-surface-brightness (LSB) galaxies, in particular in large surveys. Although the original paper does compare the performance of this network to one classical source detector (SExtractor, Bertin & Arnouts 1996) and several deep networks, it does not compare the method to the state-of-the-art faint object detector MT-Objects (or MTO, Teeninga et al., 2016), which came out best in a recent comparison of classical tools (Haigh et al., 2021).

The aim of this project is to make a thorough comparison of LSBGnet to MTO, using several quality criteria. An important difference between the two is that MTO aims to detect all objects, not just the LSB ones. This must be addressed in the comparison.

References

E. Bertin, S. Arnouts, Astron. Astrophys. Suppl. S. 117, 393 (1996)

Haigh et al, Astronomy & Astrophysics 645, A107 (2021)

Su et al. MNRAS 528, 873–882 (2024)

Teeninga et al. Mathematical Morphology - Theory and Applications 1 (1), 100–115 (2016)

  • Removal of Cosmic Ray Events from WEAVE Data Cubes, BSc/MSc

Status: Open

The new WEAVE astronomical instrument (Dalton et al, 2018) is an imaging spectrometer, capturing data cubes of quite low spatial resolution, but very high spectral resolution. Rather than just having red, green and blue data per pixel, each pixel contains two spectra of about 4000 spectral channels each. This allows the study of astronomical objects such as interacting galaxies in unprecedented spectral detail. Motions of ionized gas within the structures can be mapped clearly, and compositions of stellar populations can be estimated. One problem in these data is the presence of cosmic ray events, which show up as bright spikes in the data cubes. The aim of this project is to detect these events, and remove the resulting spikes from the data. The basic tool that will be used is MTObjects (Teeninga et al, 2016, Haigh et al, 2021), which is a powerful source detector for optical data.
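Cosmic-ray hits are narrow in the spectral direction while real features are broader, so a simple robust baseline is a running median along the spectral axis with a MAD-based threshold. This Python sketch is only an illustrative baseline with invented names and values, not the MTObjects approach the project will actually build on.

```python
import numpy as np

def despike(spectrum, window=11, nsigma=5.0):
    """Flag narrow positive spikes against a running median and
    replace them by the local median value."""
    pad = window // 2
    padded = np.pad(spectrum, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window])
                    for i in range(len(spectrum))])
    resid = spectrum - med
    sigma = 1.4826 * np.median(np.abs(resid))   # robust scale estimate (MAD)
    spikes = resid > nsigma * sigma             # cosmic-ray spikes are positive
    cleaned = np.where(spikes, med, spectrum)
    return cleaned, spikes

rng = np.random.default_rng(1)
spec = 100 + rng.normal(0, 2, 4000)   # flat continuum plus noise
spec[1234] += 500                     # injected cosmic-ray event
clean, flagged = despike(spec)
```

In a real WEAVE cube, the same test could be applied per spaxel, or combined with the spatial information that MTO's max-tree representation provides.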

References: Dalton, G., Trager, S., Abrams, D. C., Bonifacio, P., Aguerri, J. A. L., Vallenari, A., Middleton, K., Benn, C., Dee, K., Sayède, F., Lewis, I., Pragt, J., Picó, S., Walton, N., Rey, J., Allende, C., Lhomé, É., Terrett, D., Brock, M., ... Jin, S. (2018). Construction progress of WEAVE: The next generation wide-field spectroscopy facility for the William Herschel Telescope. In C. J. Evans, L. Simard, & H. Takami (Eds.), Proceedings Volume 10702, Ground-based and Airborne Instrumentation for Astronomy VII; 107021B (Vol. 10702). [107021B] SPIE.Digital Library. https://doi.org/10.1117/12.2312031 Haigh, C., Chamba, N., Venhola, A., Peletier, R., Doorenbos, L., & Wilkinson, M. H. F. (2021). Optimising and comparing source-extraction tools using objective segmentation quality criteria. Astronomy & astrophysics, 645(January 2021 ), [A107]. https://doi.org/10.1051/0004-6361/201936561 Teeninga, P., Moschini, U., Trager, S. C., & Wilkinson, M. H. F. (2016). Statistical attribute filtering to detect faint extended astronomical sources. Mathematical Morphology - Theory and Applications, 1(1), 100–115. https://doi.org/10.1515/mathm-2016-0006

  • Tracking the division of yeast cells, BSc/MSc

Status: Open

In microbiology, tracking dividing yeast cells in time-series imaging is a tedious task, and some results in automating this task have been obtained using deep learning. However, it is difficult to get sufficient ground-truth data for training, and these methods do not yield good results on complicated cell shapes. The aim of this project is to explore classical morphological image processing tools to circumvent these problems. It is also possible to combine these morphological methods with deep neural networks.

References: TO DO

  • Adaptive Binarization for Multichannel Video, BSc/MSc

Status: Taken

Nuwa Pen is the world's first smart ballpoint pen. Nuwa Pen has successfully captured the essence of digital writing without making any compromises to the true writing experience of a ballpoint pen on any piece of paper. Nuwa Pen works with cutting-edge processing power along with a suite of sensors which help the pen figure out what the user is writing and where. Nuwa Pen is developed by Nuwa Pen B.V., a start-up based in Groningen, The Netherlands. The aim of Nuwa Pen B.V. is to combine the analogue world with the digital world, with the motto of making the world your canvas. At Nuwa Pen B.V. we want to revolutionize how humans write and interact with the digital world.

In this research project, you will be working on designing a binarization algorithm in C++ for low-resolution, gray-level multichannel video sequences. The binarization has to use an adaptive threshold and be consistent over different channels and timestamps. The algorithm has to be highly optimized and written from scratch without using any third-party library, based on the rudimentary binarization algorithm implemented in our proprietary codebase. You will be working as a member of our software engineering team, with whom you can share your ideas, get support, and collaborate using GitHub.
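As a rough illustration of the adaptive-threshold idea (sketched in Python with NumPy for brevity; the actual project requires dependency-free C++), the snippet below thresholds each pixel against its local mean, computed in O(1) per pixel via an integral image. All names are ours, and this is not Nuwa Pen's proprietary algorithm.

```python
import numpy as np

def adaptive_binarize(frame, window=15, c=5.0):
    """Mark a pixel as ink when it is darker than its local window mean
    minus an offset c. The local means come from an integral image."""
    f = frame.astype(np.float64)
    ii = np.pad(f.cumsum(0).cumsum(1), ((1, 0), (1, 0)))  # integral image
    h, w = f.shape
    r = window // 2
    y0 = np.clip(np.arange(h) - r, 0, h); y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w); x1 = np.clip(np.arange(w) + r + 1, 0, w)
    area = (y1 - y0)[:, None] * (x1 - x0)[None, :]
    s = ii[y1][:, x1] - ii[y0][:, x1] - ii[y1][:, x0] + ii[y0][:, x0]
    mean = s / area
    return f < mean - c   # ink is darker than its local neighbourhood

# Synthetic check: a dark pen stroke on a light background
frame = np.full((50, 50), 200, np.uint8)
frame[25, 10:40] = 50
mask = adaptive_binarize(frame)
```

Temporal consistency across channels and timestamps would require extra state (e.g. smoothing the local statistics over frames), which is left out of this sketch.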

  • Image Skeletonization for Handwritten Notes, BSc/MSc

Status: Taken

Nuwa Pen is the world's first smart ballpoint pen. Nuwa Pen has successfully captured the essence of digital writing without making any compromises to the true writing experience of a ballpoint pen on any piece of paper. Nuwa Pen works with cutting-edge processing power along with a suite of sensors which help the pen figure out what the user is writing and where. Nuwa Pen is developed by Nuwa Pen B.V., a start-up based in Groningen, The Netherlands. The aim of Nuwa Pen B.V. is to combine the analogue world with the digital world, with the motto of making the world your canvas. At Nuwa Pen B.V. we want to revolutionize how humans write and interact with the digital world.

In this research project, you will be working on designing a skeletonization algorithm in C++ for low-resolution images. The image skeleton has to have sub-pixel accuracy due to the low image resolution, and has to be robust to image noise. The algorithm has to be highly optimized and written from scratch without using any third-party library, based on the rudimentary skeletonization algorithm implemented in our proprietary codebase. You will be working as a member of our software engineering team, with whom you can share your ideas, get support, and collaborate using GitHub.

Supervisor: Jiapan Guo

  • Few-shot learning for image classification, BSc/MSc

Status: Open

No. of positions: 3

The common practice in machine learning applications is to feed the model as much data as it can take, because in most applications more data enables the model to predict better. Few-shot learning, by contrast, aims to build accurate machine learning models with little training data. Metric-learning-based few-shot image classification focuses on learning a transferable feature embedding network by estimating the similarities between query images and support classes from very few images.

In this project, we aim to improve current few-shot learning approaches for image or video classification. Two projects can be offered under image classification: one focuses on channel attention and the other on incorporating frequency information. One project is offered under video classification, with a focus on vision transformers. For more information, please contact: j.guo@rug.nl.
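A minimal sketch of the metric-learning idea in Python (NumPy), in the style of prototypical networks: each support class is summarised by the mean of its few embeddings, and queries are assigned to the nearest prototype. The embeddings here are synthetic Gaussians for illustration; in the project they would come from a learned feature network.

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Nearest-prototype classification: each class is represented by
    the mean of its (few) support embeddings."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from each query embedding to each class prototype
    d = np.linalg.norm(query[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Toy 5-way 3-shot episode in a 16-dimensional embedding space
centers = rng.normal(0, 5, (5, 16))
support = np.concatenate([c + rng.normal(0, 1, (3, 16)) for c in centers])
labels = np.repeat(np.arange(5), 3)
query = np.concatenate([c + rng.normal(0, 1, (4, 16)) for c in centers])
pred = prototype_classify(support, labels, query)
```

The listed research directions (channel attention, frequency guidance) modify how the embeddings are produced; the episodic nearest-prototype evaluation stays the same.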

References: [1] Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, Jiebo Luo. Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning. https://arxiv.org/abs/1903.12290

[2] Hu et al. (2017). Squeeze-and-Excitation Networks. CVPR. https://arxiv.org/abs/1709.01507

[3] Cheng et al. (2023). Frequency Guidance Matters in Few-Shot Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).

  • Visual analysis of disinformation, BSc/MSc

Status: Open

No. of positions: 3

Dis/misinformation, including fake news and hate speech, is prevalent on social media, particularly in the context of the Covid-19 pandemic and the Ukraine-Russia war. While extensive research has been done on the textual analysis of disinformation, the impact of visual elements like images, videos, and memes in spreading misinformation remains less explored. These visual elements are potent in emotionally engaging and influencing public opinion, thereby exacerbating societal divisions and fueling community polarization. In this project, we will focus on using deep learning, e.g. convolutional neural networks, vision transformers, or even diffusion models, for the visual analysis of disinformation. Multiple possible directions can be offered depending on your interests. For more information, please contact: j.guo@rug.nl.

References: [1] Nakamura et al. r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection. https://arxiv.org/pdf/1911.03854v2.pdf  

[2] https://paperswithcode.com/paper/rfakeddit-a-new-multimodal-benchmark-dataset/review/

2022/2023

Supervisor: Kerstin Bunte

  • Astronomy: Extraction and Analysis of Manifolds in Noisy Environments, SPP/BSc/MSc

Context

Filamentary structures (one-dimensional manifolds) are ubiquitous in astronomical data sets. Be it in particle simulations or observations, filaments are always tracers of a perturbation in the equilibrium of the studied system and hold essential information on its history and future evolution. 1-DREAM is a toolbox composed of five main Machine Learning methodologies whose aim is to facilitate manifold extraction.

Problem

The toolbox has been published and tested on several problems. However, its use is limited by the size of the input data, and its historically grown, distributed implementation makes it not very user-friendly. Furthermore, despite showing better results than other tools dedicated to the same task, 1-DREAM is not as well known in the scientific community as those tools are.

Goal

The international team of 1-DREAM is looking for an efficient implementation of the current code in C++ with a Python interface, to make it faster and more user-friendly. Moreover, we are eager to make this tool more widely known in the scientific community by improving the web page and the associated documentation. Depending on the background of the student and the type of project taken, algorithmic improvements to the ML algorithms within 1-DREAM are also possible.

Assignment

The team of 1-DREAM wants to port the current code, written in Matlab and Python, to an optimized design written in C++ and wrapped in Python. Moreover, we are looking for a student who can help us make the tool more user-friendly and more efficient. The student (MSc/BSc) is given the assignment to research which code design is most suitable for this. The student can then use the current datasets provided by the 1-DREAM team to carry out experiments with the various designs to achieve good performance. The student must create the documentation necessary to make 1-DREAM more user-friendly. Finally, the student is encouraged to contribute ideas on the performance of 1-DREAM and/or the design of a better website for the tool.
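The C++-core-with-Python-interface pattern can be prototyped with nothing more than the standard library's ctypes; a production binding would more likely use pybind11 with NumPy array support. The sketch below wraps a routine from the system C math library as a stand-in for a compiled 1-DREAM core (the library and function choice are illustrative assumptions).

```python
import ctypes
import ctypes.util

# Load an existing C library as a stand-in for a compiled core library
_libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")
_libm.cbrt.restype = ctypes.c_double      # declare the C signature
_libm.cbrt.argtypes = [ctypes.c_double]

def cbrt(x: float) -> float:
    """Thin Python wrapper around the C routine."""
    return _libm.cbrt(x)
```

The same pattern scales to C++ functions exported with `extern "C"`, with NumPy arrays passed as pointers plus lengths.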

How to apply and/or get more information

Please prepare a short 1-2-page CV and a 1-page motivation letter to apply for this project, and indicate whether you are applying as a Master's or Bachelor's student. Several supervision slots are available. Contact: Felipe Contreras and Kerstin Bunte. E-mail: f.i.contreras.sepulveda@rug.nl and k.bunte@rug.nl

Related literature

https://doi.org/10.1016/j.ascom.2022.100658
https://doi.org/10.1093/mnras/stad428
https://doi.org/10.1109/TKDE.2022.3177368
https://doi.org/10.1162/neco_a_01478
https://doi.org/10.1016/j.artint.2021.103579

  • Short-term price forecasting on the electricity imbalance market through the use of Machine learning methods, BSc/MSc

This project is provided in collaboration with Repowered. Due to the technical nature of the energy system, electricity demand and supply have to be in balance at all times. If this is not the case, an energy imbalance is present on the electricity grid. The total imbalance on the grid is the aggregate of local imbalances of individual energy consumption and production assets such as solar parks. For example, a solar park causes imbalance when its actual production is unexpectedly higher than its nomination (the volume that has been sold on the day-ahead market based on production forecasts). This imbalance is (partially) negated for example if a wind park is producing less energy than nominated (forecasted) at the same time. However, in total there is always some imbalance present in the grid as a whole, especially as the amount of renewable energy assets (which are difficult to forecast) increases.

To counter imbalances on the grid, the Transmission System Operator (TSO; TenneT in the Netherlands) tries to encourage electricity market participants to change their production/consumption to restore the balance on the grid. Market participants that have an imbalance in the same direction as the current imbalance have to pay a penalty, which is then paid out to those that have an imbalance in the counter-direction to the current imbalance. The penalty/compensation (the imbalance price) is determined per 15-minute block and is paid out per MWh of extra consumption/production relative to the nominated volume.

Solar parks can participate actively in this imbalance market by curtailing (shutting down) their production at selected times when a surplus of energy is present in the grid and the current imbalance price is below a certain threshold (i.e. getting paid to produce less electricity). The difficulty lies in choosing the right moment to start curtailing, since the imbalance price is only published after the 15-minute block in which the imbalance is measured. To this end, a price forecast is needed to determine whether or not to curtail the solar park during a certain time block. Repowered offers solar curtailment as one of its services to solar park owners, and is looking for ways to improve its services and further develop its know-how on price forecasts and AI models. This assignment entails the creation of a machine learning model for curtailment decision making. Because of the binary nature of the decision (to curtail or not), a binary classification model with time-series input should be made. Furthermore, the assignment entails the validation of the model in the virtual production environment of Repowered. In a previous project, several regression models were created to predict the imbalance price for the next 15-minute block. This has given some good insights for a continuation of the project and can be used as a starting point for data gathering and literature review.
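A toy version of such a decision model in Python (NumPy): lagged prices as features, a binary label ("next block's price below the curtailment threshold"), and a small logistic regression trained by gradient descent. The AR(1) price process, the 0 EUR/MWh threshold, and all names are assumptions for illustration; the real model would use Repowered's data and richer features.

```python
import numpy as np

def make_features(prices, n_lags=4):
    """Lagged prices as features; label 1 when the next block's price is
    below the curtailment threshold (assumed 0 EUR/MWh here)."""
    X = np.stack([prices[i:len(prices) - n_lags + i]
                  for i in range(n_lags)], axis=1)
    y = (prices[n_lags:] < 0.0).astype(float)
    return X, y

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        g = p - y
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy imbalance prices: a persistent AR(1) process, so lags carry signal
rng = np.random.default_rng(0)
prices = np.zeros(2000)
for t in range(1, len(prices)):
    prices[t] = 0.9 * prices[t - 1] + rng.normal(0.0, 1.0)

X, y = make_features(prices)
w, b = train_logreg(X, y)
curtail = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5   # curtailment decision
```

Evaluation in practice would use a time-based train/test split and a profit-oriented metric rather than plain accuracy.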

For more information contact: Kerstin Bunte or Timo Dettmering <t.dettmering@newenergycoalition.org>

  • Autonomous navigation combining Tag- and visual SLAM, BSc/MSc

Autonomous navigation, although theoretically solved, is still problematic in practice. We investigate joining traditional approaches with Dimensionality Reduction Machine Learning as well as the combination of complementary sensors to achieve the task more efficiently and robustly. The project is flexible and can take multiple directions to choose from.

For more information contact: Kerstin Bunte

Related literature: https://www.sciencedirect.com/science/article/abs/pii/S0031320319304923

  • Exploratory data analysis for Nephrology, BSc/MSc

Nephrology specializes in the study of kidneys and medical issues concerning them. In collaboration with the UMCG we would like to analyze a multitude of problems and the potential use of computer-aided diagnosis systems based on different data modalities, which opens up several project directions. One direction is the use of data extracted from bodily fluids, such as urine, to detect abnormalities and biomarkers for certain conditions. Another is the segmentation of functional MRI volume sequences to analyze kidney function by detecting vessels and their streaming behavior over time. The hospital is also a well-known center for kidney transplantation and is interested in early detection of problems after transplantation. Since this is a novel collaboration, the direction of a potential project is flexible and exploratory, aiming to gain further insight into the problems faced in Nephrology. Email k.bunte@rug.nl for further information.

  • Appliance Detection using machine learning (ML), BSc/MSc
Powerchainger/EDGE Nov 2022

Context

Powerchainger is a startup based in Groningen. Our mission is to make the energy transition accessible and affordable for everyone. We believe that everyone has the right to clean energy and a sustainable living environment. By using energy data as a driver, we ensure energy-efficient households and more sustainable neighborhoods.

Problem

Powerchainger wants to investigate whether household appliance detection is possible using machine learning (ML). The consumption of electrical appliances is measured in kilowatt-hours (kWh) with the help of a smart meter. Smart meters can now be found in nearly 90% of all Dutch households. The measured values that smart meters produce consist of the total consumption of all devices in the household. Because of that, we cannot directly see what this consumption consists of, i.e., which specific devices are consuming.

Goal

Powerchainger is looking for ways to detect, in near real-time, which devices in the house are on (running and consuming) by using smart meter data. We are looking for possibilities to distinguish electrical appliances automatically based on their consumption, and to detect them as accurately as possible. If this is successful, it offers opportunities to give households real-time feedback about their energy consumption and to reward energy-efficient behavior.

Assignment

Powerchainger wants to know if it is possible to use a machine learning model to detect which devices in the house are running. Such a model can be trained on measured values from the past, labeled with the composition of devices that produced them. The student (MSc/BSc) is given the assignment to research which machine learning models are most suitable for this task. The student can then use a dataset provided by Powerchainger to carry out experiments with the various models and arrive at sound advice. The emphasis is on (applied) research related to ML. Backend development skills are a plus.

How to apply and/or get more information

Please prepare a short 2-page CV and a 1-page motivation letter to apply for this project, and indicate whether you are applying as a Master's or Bachelor's student. Several supervision slots are available. Find the contact information below. Send the material via email to both the company and the university supervisor in cc.
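To make the task concrete, the sketch below classifies which appliance produced a smart-meter window using hand-crafted features and a nearest-centroid baseline. The appliance signatures, window length and features are illustrative assumptions; the project would compare much stronger models on Powerchainger's real dataset:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical appliance signatures (W): a kettle-like high, short load and a
# fridge-like low, steady load, observed in 1-minute windows of meter data.
def window(appliance):
    base = rng.normal(80, 10, 60)            # always-on background load
    if appliance == "kettle":
        base[20:35] += rng.normal(2000, 50, 15)
    else:                                     # "fridge"
        base += rng.normal(120, 15, 60)
    return base

labels = rng.integers(0, 2, 300)             # 0 = fridge, 1 = kettle
windows = np.array([window("kettle" if l else "fridge") for l in labels])

# Simple hand-crafted features per window: mean, std, peak power.
feats = np.column_stack([windows.mean(1), windows.std(1), windows.max(1)])

# Nearest-centroid classifier as a minimal baseline.
centroids = np.array([feats[labels == c].mean(0) for c in (0, 1)])
pred = np.argmin(((feats[:, None, :] - centroids) ** 2).sum(2), axis=1)
accuracy = (pred == labels).mean()
```

Real NILM data is far harder: many simultaneous appliances, overlapping power levels and no per-appliance labels, which is where the literature below comes in.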

Contact: Yang Soo Kloosterhof and Kerstin Bunte E-mail: yangsoo@powerchainger.nl and k.bunte@rug.nl

Related literature

https://doi.org/10.1145/2602044.2602051
https://doi.org/10.1007/s12053-014-9306-2
https://doi.org/10.48550/arXiv.1610.01191
https://doi.org/10.1007/978-3-319-61578-3_12
https://bisite.usal.es/archivos/non_intrusive_load_monitoring_nilm.pdf
https://doi.org/10.1002/widm.1265
https://doi.org/10.1016/j.ifacol.2015.12.414
https://doi.org/10.3390/s121216838

Supervisor: Michael Biehl

  • Three possible student projects related to NEMO, BSc/MSc

General information: The "Next Move in Movement Disorders" (NEMO) project (see NEMO - Scientific research) focuses on computer-aided diagnosis of hyperkinetic movement disorders, which are characterized by an excess of involuntary movements, including tremor, myoclonus, dystonia, tics, chorea, spasticity and ataxia. Each movement disorder has its own clinical presentation, but complex and variable mixed forms occur frequently. Research has demonstrated that it is difficult for neurologists to distinguish these disorders if they do not see such patients frequently, and also that doctors do not always agree amongst themselves. Since hyperkinetic movement disorders show no clear anatomical abnormalities, the pathology is most likely attributed to altered function of brain networks. However, so far, imaging studies have shown inconsistent distinctions in the topography of regional cerebral metabolism. This is likely due to the use of different methodologies in different groups of patients and inconsistent phenotyping. The NEMO project aims to improve patient diagnosis using computer-aided methodology. Data acquisition in this project covers multiple modalities, such as video analysis, movement registration (accelerometry and EMG) and neuroimaging, including functional magnetic resonance imaging (fMRI). Data acquisition for the NEMO project is ongoing and, currently, we have for instance EMG data from over 170 participants.

A1) Automated detection of myoclonus bursts

       Supervision: Elina van den Brandhof and Michael Biehl
       Level: BSc thesis, internship or MSc thesis project
       Suitable for 1 or 2 students of CS (or possibly AI)

Research questions / tasks:

               o   Review: Which machine learning algorithms could be useful to automatically detect myoclonus bursts? Based on applications in, for instance, EEG and seismological data.
               o   Application: Can myoclonic bursts be identified automatically in EMG and/or accelerometry data using the machine learning algorithms found in the review? Implementation and testing of suitable methods.
               Available data: labeled myoclonus bursts (NEMO and KNF data)

The goal of the student project(s) is to automatically detect myoclonus bursts using machine learning. During the clinical diagnostic process, these bursts are often labeled by hand if clinical neurophysiology measurements are performed. This is done to quantify burst frequency and duration, which facilitates the subclassification of myoclonus and its neural origin (cortical or subcortical). This project will start with a review of machine learning algorithms suitable for burst detection in other fields (for instance EEG and seismological data), after which suitable algorithms will be applied or developed to perform automatic and machine-learning-assisted burst detection.
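A classical starting point, before any learned detector, is envelope thresholding. The sketch below plants bursts in synthetic EMG and recovers their intervals; the sampling rate, burst shape and all thresholds are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000  # Hz; assumed EMG sampling rate

# Synthetic EMG: baseline noise with three planted high-amplitude bursts
# of roughly 100 ms each.
emg = rng.normal(0, 1.0, 5 * fs)
for start in (1000, 2500, 4000):
    emg[start:start + 100] += rng.normal(0, 8.0, 100)

# Envelope: full-wave rectification followed by a moving-average smoother.
win = 50  # 50 ms smoothing window
envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")

# Data-driven threshold; contiguous supra-threshold samples form candidates.
thr = envelope.mean() + 2 * envelope.std()
above = envelope > thr
edges = np.flatnonzero(np.diff(above.astype(int)))

# Pair rising/falling edges, merge brief dips, drop spurious blips.
bursts = []
for s, e in zip(edges[::2], edges[1::2]):
    if bursts and s - bursts[-1][1] < 20:
        bursts[-1] = (bursts[-1][0], e)
    else:
        bursts.append((s, e))
bursts = [(s, e) for s, e in bursts if e - s >= 20]
```

Such a rule-based detector is a useful baseline against which ML-based detectors from the review can be compared, and its burst intervals can serve as weak labels.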

A2) Classification or clustering of myoclonus bursts

       Supervision: Elina van den Brandhof and Michael Biehl
       Level: BSc thesis, internship or MSc thesis project
       Suitable for 1 or 2 students of CS (or possibly AI)

Research questions / tasks:

               o   Review: Which supervised or unsupervised machine learning algorithms could be useful to cluster or classify myoclonus bursts?
               o   Application: Can myoclonic bursts be classified (or clustered) automatically in EMG data? Implementation and testing of suitable machine learning based methods.
               Available data: EMG data, labeled myoclonic bursts

Here the goal is to perform clustering or to train classifiers which discriminate between subtypes of myoclonus and its neural origin (cortical or subcortical), based on identified and labeled myoclonus bursts (see project idea A1).

A3) Classification or clustering of movement disorders based on time-labeled data

       Supervision: Elina van den Brandhof and Michael Biehl
       Level: BSc thesis, internship or MSc thesis project
       Suitable for 1 or 2 students of CS (or possibly AI)

Research questions / tasks:

               o   Review: Which supervised or unsupervised machine learning algorithms could be useful to cluster or classify movement disorders?
               o   Application: Can movement disorders be classified (or clustered) automatically from features extracted from EMG data? Implementation and testing of suitable machine learning based methods.
               Available data: EMG and ACC data, labeled time windows (two seconds each)

This project focuses on EMG and accelerometry data. These data are acquired using a number of sensors placed on different body muscles. Using feature engineering and machine learning we try to find the relevant information that distinguishes one movement disorder from the other.

 For 11 patients, 20 tasks have been labeled in two-second time windows. In this project we aim to understand if disorder classification is possible per time window by using class labels per time point (i.e., the disorder visible vs not visible) and whether such models are more powerful than models trained on a single label per patient.
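The per-window feature-engineering idea can be sketched as follows, on synthetic data: two toy "disorders" (a rhythmic tremor-like signal and a jerk-like signal) are separated by simple window features. The sampling rate, signal models, features and decision rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 200  # Hz; assumed accelerometry sampling rate, 2-second windows

def make_window(disorder):
    t = np.arange(2 * fs) / fs
    x = rng.normal(0, 0.3, t.size)
    if disorder == "tremor":       # rhythmic ~5 Hz oscillation
        x += np.sin(2 * np.pi * 5 * t + rng.uniform(0, 2 * np.pi))
    else:                           # myoclonus-like sparse jerks
        idx = rng.integers(0, t.size, 4)
        x[idx] += rng.choice([-1, 1], 4) * rng.uniform(3, 5, 4)
    return x

labels = rng.integers(0, 2, 200)   # 0 = tremor, 1 = jerk-like
wins = [make_window("myoclonus" if l else "tremor") for l in labels]

# Per-window features: dominant spectral frequency and excess kurtosis,
# which separate rhythmic from jerk-like activity.
def features(x):
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    dom = freqs[np.argmax(spec)]
    kurt = ((x - x.mean()) ** 4).mean() / (x.var() ** 2) - 3.0
    return np.array([dom, kurt])

F = np.array([features(w) for w in wins])
# Minimal decision rule on the features: jerk-like windows have high kurtosis.
pred = (F[:, 1] > 3.0).astype(int)
accuracy = (pred == labels).mean()
```

In the actual project such features would feed a trained classifier, and the per-window predictions could then be aggregated to compare against single-label-per-patient models.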

  • Bias correction in classification problems, BSc/MSc

General information: In many real world data sets, subtle biases can obscure the information contained and mislead machine learning algorithms and their interpretation. For instance, in medical diagnosis problems, data acquired from different sources (different medical centers, scanners or processing pipelines) can display specific properties which overlay the disease-relevant information. Similar problems occur in the presence of patient sub-cohorts (e.g. male and female) whose specific properties overlay the target information. For simplicity, we refer to these as “center effects” here but of course other biases could be addressed in a similar way. Although centers may use the same or very similar technical equipment and supposedly identical processing pipelines, very often the actual source of a sample can be identified easily, e.g. by a suitably trained classifier. In the projects suggested below, the aim is to modify the training of a classifier in such a way that irrelevant center-effects are eliminated as much as possible.

Exploration of a modified cost function for training a classifier

In this project the goal is to implement a method for the elimination of center effects which is based on an available control data set that can be assumed to display only center-specific differences. In a medical diagnosis problem, this could be a cohort of healthy controls (HC) which should have identical properties across centers. The correction is applied when training a classifier for the diagnosis of different diseases. It is based on a penalty term added to the loss function of Generalized Matrix Relevance Learning (GMLVQ), which measures how much the HC samples from different centers separate in the feature subspace that is relevant for the actual diagnosis.
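The structure of such a modified cost function can be sketched as follows. The GMLVQ cost term is the standard relative-distance cost; the concrete form of the center penalty (here: squared distances between projected HC center means) is an illustrative assumption, and the actual penalty used in the project may differ:

```python
import numpy as np

def gmlvq_distance(x, w, omega):
    """Squared distance d(x, w) = (x - w)^T omega^T omega (x - w)."""
    diff = omega @ (x - w)
    return float(diff @ diff)

def center_penalty(hc_by_center, omega):
    """Illustrative penalty: squared distances between the projected means of
    the healthy-control (HC) cohorts of each center; zero if the centers
    coincide in the relevance subspace spanned by omega."""
    means = [omega @ X.mean(axis=0) for X in hc_by_center]
    pen = 0.0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            pen += float(np.sum((means[i] - means[j]) ** 2))
    return pen

def total_cost(data, labels, prototypes, proto_labels, omega,
               hc_by_center, lam):
    """GMLVQ cost sum_i (d+ - d-)/(d+ + d-) plus lam times the center penalty."""
    cost = 0.0
    for x, y in zip(data, labels):
        ds = [gmlvq_distance(x, w, omega) for w in prototypes]
        d_plus = min(d for d, c in zip(ds, proto_labels) if c == y)
        d_minus = min(d for d, c in zip(ds, proto_labels) if c != y)
        cost += (d_plus - d_minus) / (d_plus + d_minus)
    return cost + lam * center_penalty(hc_by_center, omega)
```

During training, gradients of this combined cost with respect to the prototypes and omega would drive the relevance matrix away from directions that separate the HC cohorts of different centers.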

 Supervision: Sofie Lövdal and Michael Biehl
 Level: BSc thesis or research internship, suitable for 1 student of CS (or possibly AI)
 Concrete task: Implementation of the center correction (modification of existing Matlab or Python code) and testing on toy data sets. Comparison with a previously developed alternative method. A possible extension could be the application to real-world neuroimaging data.

  • Classification of neurodegenerative diseases using regions of interest, BSc/MSc

Supervision: Sofie Lövdal, Michael Biehl

Neuroimaging with FDG-PET can be used to diagnose various neurodegenerative diseases, as areas with lower metabolic uptake will form different disease patterns in different neurodegenerative diseases. This project aims to evaluate the performance of a classification system distinguishing between diagnoses on the spectrum of neurodegenerative diseases using a region of interest (ROI)-based approach. Here, we would like to extract features from specific regions in preprocessed FDG-PET scans, and evaluate how various methods of feature extraction and feature selection impact the performance of a classifier. This project is suitable for a BSc thesis or a MSc research internship.
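The ROI-based feature extraction step can be sketched in a few lines: given a preprocessed scan and a region atlas of the same shape, summary statistics are computed per region. The toy volume, atlas and choice of statistics (mean and standard deviation) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical preprocessed FDG-PET volume and an ROI atlas of the same shape,
# where each voxel holds an integer region label (0 = background).
scan = rng.normal(1.0, 0.1, (16, 16, 16))
atlas = np.zeros((16, 16, 16), dtype=int)
atlas[:8] = 1       # "region 1"
atlas[8:, :8] = 2   # "region 2"

def roi_features(scan, atlas):
    """Mean and standard deviation of uptake per region of interest."""
    regions = [r for r in np.unique(atlas) if r != 0]
    feats = {}
    for r in regions:
        vox = scan[atlas == r]
        feats[r] = (vox.mean(), vox.std())
    return feats

feats = roi_features(scan, atlas)
```

The project would then compare such simple per-ROI statistics against other feature extraction and selection schemes as input to the classifier.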

  • Improving preprocessing methods to enhance classification performance in neurodegenerative diseases using FDG-PET, MSc

Supervision: Sofie Lövdal, Michael Biehl

Neuroimaging with FDG-PET can be used to diagnose various neurodegenerative diseases, as areas with lower metabolic uptake will form different disease patterns in different neurodegenerative diseases. Nuclear imaging with PET is, however, a noisy imaging modality and factors such as age, gender, scanner, reconstruction protocol and amount of injected radiotracer all affect the signal. A common preprocessing method for FDG-PET is the Scaled Subprofile Model (SSM), where the brain scan is masked and log-transformed, followed by subtraction of group-level mean and subject mean. A drawback of SSM is that the preprocessed image only shows metabolism relative to the subject itself, so it is not possible to know whether an area with relatively high metabolism represents normal metabolism, or (abnormal) hypermetabolism. We would like to develop an algorithm that preprocesses the input image to be more true to the absolute value of the metabolic uptake rather than a relative estimate. In turn, this could potentially increase the performance of various machine learning tasks, such as classification of diseases on the neurodegenerative spectrum. This project is suitable for a MSc research internship or a MSc thesis.
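The SSM baseline described above can be sketched directly from its definition: mask, log-transform, then remove the subject mean and the group-level mean profile. The toy data and the flattened-scan layout are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def ssm_preprocess(scans, mask):
    """Scaled Subprofile Model style preprocessing, as described above:
    mask, log-transform, then subtract the subject mean and the group-level
    mean profile. scans: (n_subjects, n_voxels), strictly positive."""
    logged = np.log(scans[:, mask])
    centered = logged - logged.mean(axis=1, keepdims=True)      # subject mean
    residual = centered - centered.mean(axis=0, keepdims=True)  # group mean
    return residual

# Toy example: 5 hypothetical flattened scans, crude mask over 160 voxels.
scans = rng.uniform(0.5, 2.0, (5, 200))
mask = np.arange(200) >= 40
residual = ssm_preprocess(scans, mask)
```

The subject-mean subtraction is exactly the step that makes the result relative rather than absolute; the project would look for a preprocessing that avoids or compensates for it.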

Supervisor: Michael Wilkinson

  • Removal of Cosmic Ray Events from WEAVE Data Cubes, BSc/MSc

The new WEAVE astronomical instrument (Dalton et al., 2018) is an imaging spectrometer, capturing data cubes of quite low spatial resolution, but very high spectral resolution. Rather than just having red, green and blue data per pixel, each pixel contains two spectra of about 4000 spectral channels each. This allows the study of astronomical objects such as interacting galaxies in unprecedented spectral detail. Motions of ionized gas within the structures can be mapped clearly, and compositions of stellar populations can be estimated. One problem in these data is the presence of cosmic ray events, which show up as bright spikes in the data cubes. The aim of this project is to detect these events, and remove the resulting spikes from the data. The basic tool that will be used is MTObjects (Teeninga et al., 2016; Haigh et al., 2021), which is a powerful source detector for optical data.
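As a simple point of comparison for an MTObjects-based approach, single-channel spikes in a spectrum can already be flagged with a robust running-median filter. The synthetic spectrum, filter width and sigma threshold below are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(6)

# Hypothetical spectrum of one spaxel: smooth continuum + noise + two cosmic
# ray spikes (single-channel events, far narrower than real spectral lines).
channels = np.arange(4000)
spectrum = 100 + 10 * np.sin(channels / 400) + rng.normal(0, 1.0, 4000)
spectrum[[1234, 2999]] += 200.0

def despike(spec, size=7, k=8.0):
    """Flag channels deviating from a running median by more than k robust
    sigma (MAD-based), and replace them with the running median."""
    med = median_filter(spec, size=size)
    resid = spec - med
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    bad = np.abs(resid) > k * sigma
    cleaned = spec.copy()
    cleaned[bad] = med[bad]
    return cleaned, bad

cleaned, bad = despike(spectrum)
```

Real WEAVE events also have a spatial footprint across the cube, which is where an attribute-filtering detector like MTObjects becomes more powerful than a per-spectrum filter.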

References: Dalton, G., Trager, S., Abrams, D. C., Bonifacio, P., Aguerri, J. A. L., Vallenari, A., Middleton, K., Benn, C., Dee, K., Sayède, F., Lewis, I., Pragt, J., Picó, S., Walton, N., Rey, J., Allende, C., Lhomé, É., Terrett, D., Brock, M., ... Jin, S. (2018). Construction progress of WEAVE: The next generation wide-field spectroscopy facility for the William Herschel Telescope. In C. J. Evans, L. Simard, & H. Takami (Eds.), Proceedings Volume 10702, Ground-based and Airborne Instrumentation for Astronomy VII; 107021B (Vol. 10702). [107021B] SPIE.Digital Library. https://doi.org/10.1117/12.2312031 Haigh, C., Chamba, N., Venhola, A., Peletier, R., Doorenbos, L., & Wilkinson, M. H. F. (2021). Optimising and comparing source-extraction tools using objective segmentation quality criteria. Astronomy & astrophysics, 645(January 2021 ), [A107]. https://doi.org/10.1051/0004-6361/201936561 Teeninga, P., Moschini, U., Trager, S. C., & Wilkinson, M. H. F. (2016). Statistical attribute filtering to detect faint extended astronomical sources. Mathematical Morphology - Theory and Applications, 1(1), 100–115. https://doi.org/10.1515/mathm-2016-0006

  • Tracking the division of yeast cells, BSc/MSc

In microbiology, tracking dividing yeast cells in time-series imaging is a tedious task, and some results in automating this task have been obtained using deep learning. However, it is difficult to obtain sufficient ground-truth data for training, and the method does not yield good results on complicated cell shapes. The aim of this project is to explore classical morphological image processing tools to circumvent these problems. It is also possible to combine these morphological methods with deep neural networks.
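A small taste of the classical toolbox: touching, roughly round cells in a binary frame can be separated by thresholding the distance transform, with no training data at all. The toy frame and the core-distance parameter are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary microscopy frame with two touching, roughly round cells.
frame = np.zeros((60, 60), dtype=bool)
yy, xx = np.ogrid[:60, :60]
frame |= (yy - 25) ** 2 + (xx - 19) ** 2 < 144   # cell 1, radius 12
frame |= (yy - 25) ** 2 + (xx - 41) ** 2 < 144   # cell 2, overlapping cell 1

def count_cells(binary, min_dist=8):
    """Count roughly convex cells by eroding to their cores: thresholding the
    distance transform separates touching cells into distinct components."""
    dist = ndimage.distance_transform_edt(binary)
    cores = dist > min_dist
    _, n = ndimage.label(cores)
    return n

n_cells = count_cells(frame)
```

Extending this idea over time (matching cores between frames, detecting when one core splits into two) gives a tracking-by-morphology baseline for division events.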

References: TO DO

  • Adaptive Binarization for Multichannel Video, BSc/MSc

Nuwa Pen is the world's first smart ballpoint pen. Nuwa Pen has successfully captured the essence of digital writing without making any compromises to the true writing experience of a ballpoint pen on any piece of paper. Nuwa Pen works with cutting-edge processing power along with a suite of sensors which help the pen figure out what the user is writing and where the user is writing. Nuwa Pen is developed by Nuwa Pen B.V., a start-up based out of Groningen, The Netherlands. The aim of Nuwa Pen B.V. is to combine the analogue world with the digital world under the motto of making the world your canvas. At Nuwa Pen B.V. we want to revolutionize how humans write and interact with the digital world.

In this research project, you will be working on designing a binarization algorithm in C++ for low-resolution, gray-level multichannel video sequences. The binarization has to use an adaptive threshold and be consistent over different channels and timestamps. The algorithm has to be highly optimized and has to be written from scratch without using any 3rd-party library, based on the rudimentary binarization algorithm implemented in our proprietary codebase. You will be working as a member of our software engineering team, with whom you can share your ideas, get support, and collaborate using GitHub.
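The core technique, adaptive (local-mean) thresholding via an integral image, can be sketched as follows. The real implementation would be C++ without third-party libraries; the toy frame, window size and offset here are illustrative assumptions, with numpy used only to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical low-resolution gray-level frame: a dark pen stroke on an
# unevenly lit background (illumination gradient plus noise).
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
frame = 180 - 0.8 * xx + rng.normal(0, 3, (h, w))
frame[30:34, 10:54] -= 90   # the stroke

def adaptive_binarize(img, win=15, offset=10):
    """Bradley-style adaptive threshold: a pixel is foreground (ink) when it
    is 'offset' darker than the local mean, computed via an integral image."""
    pad = win // 2
    padded = np.pad(img, pad + 1, mode="edge").astype(float)
    ii = padded.cumsum(0).cumsum(1)               # inclusive integral image
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    y0, x0, y1, x1 = y, x, y + win, x + win       # window corners in ii
    local_sum = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    local_mean = local_sum / (win * win)
    return img < local_mean - offset

binary = adaptive_binarize(frame)
```

The integral image makes the local mean O(1) per pixel regardless of window size, which is why this family of methods suits an optimized from-scratch C++ port; temporal and cross-channel consistency would be added on top.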

  • Image Skeletonization for Handwritten Notes, BSc/MSc

Nuwa Pen is the world's first smart ballpoint pen. Nuwa Pen has successfully captured the essence of digital writing without making any compromises to the true writing experience of a ballpoint pen on any piece of paper. Nuwa Pen works with cutting-edge processing power along with a suite of sensors which help the pen figure out what the user is writing and where the user is writing. Nuwa Pen is developed by Nuwa Pen B.V., a start-up based out of Groningen, The Netherlands. The aim of Nuwa Pen B.V. is to combine the analogue world with the digital world under the motto of making the world your canvas. At Nuwa Pen B.V. we want to revolutionize how humans write and interact with the digital world.

In this research project, you will be working on designing a skeletonization algorithm in C++ for low-resolution images. The image skeleton has to have sub-pixel accuracy due to the low image resolution, and has to be robust to image noise. The algorithm has to be highly optimized and has to be written from scratch without using any 3rd-party library, based on the rudimentary skeletonization algorithm implemented in our proprietary codebase. You will be working as a member of our software engineering team, with whom you can share your ideas, get support, and collaborate using GitHub.
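As a pixel-level baseline for the sub-pixel method the project asks for, the classical morphological skeleton (Lantuéjoul's formula) can be sketched as below. The final implementation would again be C++ without libraries; scipy here only stands in for basic erosion/opening:

```python
import numpy as np
from scipy import ndimage

def morphological_skeleton(binary):
    """Lantuejoul's morphological skeleton: the union over n of
    erode^n(A) minus opening(erode^n(A)), using a 4-connected cross."""
    skel = np.zeros_like(binary)
    eroded = binary.copy()
    structure = ndimage.generate_binary_structure(2, 1)
    while eroded.any():
        opened = ndimage.binary_opening(eroded, structure)
        skel |= eroded & ~opened
        eroded = ndimage.binary_erosion(eroded, structure)
    return skel

# A thick horizontal stroke: its skeleton approximates the center line.
stroke = np.zeros((11, 30), dtype=bool)
stroke[3:8, 2:28] = True
skel = morphological_skeleton(stroke)
```

This baseline illustrates the typical weaknesses the project must overcome: pixel-quantized output (no sub-pixel accuracy), corner spurs, and sensitivity to boundary noise.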

Supervisor: Jiapan Guo

  • Few-shot learning for image classification, BSc/MSc

The common practice in machine learning applications is to feed the model as much data as it can take, because in most applications more data enables the model to predict better. Few-shot learning, in contrast, aims to build accurate machine learning models from much less training data. Metric-learning based few-shot image classification focuses on learning a transferable feature embedding network by estimating the similarities between query images and support classes from very few images.

In this project, we aim at improving current few-shot learning approaches for image classification. For more information, please contact: j.guo@rug.nl.
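The metric-learning idea can be sketched as a single prototypical-style episode: embed support and query images, average supports into class prototypes, and assign queries to the nearest prototype. The embedding network is mocked by a fixed random projection and the synthetic "images" are Gaussian clusters; both are illustrative assumptions (a real system learns the embedding episodically):

```python
import numpy as np

rng = np.random.default_rng(8)

def embed(x, proj):
    """Stand-in feature extractor: a fixed random projection with ReLU."""
    return np.maximum(proj @ x, 0.0)

dim, feat = 64, 32
proj = rng.normal(0, dim ** -0.5, (feat, dim))

# A 2-way 5-shot episode with synthetic 'images': each class is a cluster.
class_means = rng.normal(0, 1, (2, dim))
support = np.array([[m + rng.normal(0, 0.2, dim) for _ in range(5)]
                    for m in class_means])                # (2, 5, dim)
queries = np.array([class_means[c] + rng.normal(0, 0.2, dim)
                    for c in (0, 0, 1, 1, 1)])
true = np.array([0, 0, 1, 1, 1])

# One prototype per class; queries go to the nearest prototype.
prototypes = np.array([[embed(s, proj) for s in cls]
                       for cls in support]).mean(axis=1)  # (2, feat)
q_emb = np.array([embed(q, proj) for q in queries])
d = ((q_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
pred = d.argmin(axis=1)
```

Methods like the local-descriptor measure in [1] replace the plain Euclidean prototype distance with richer image-to-class similarity measures.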

References: [1] Wenbin Li, Lei Wang, Jinglin Xu, Jing Huo, Yang Gao, Jiebo Luo. Revisiting Local Descriptor based Image-to-Class Measure for Few-shot Learning. https://arxiv.org/abs/1903.12290

  • Few-shot learning for video classification, BSc/MSc

The common practice in machine learning applications is to feed the model as much data as it can take, because in most applications more data enables the model to predict better. Few-shot learning, in contrast, aims to build accurate machine learning models from much less training data. In this project, the aim is to classify short videos under the few-shot learning setting. The student will investigate different existing methods to extract video representations/embeddings, such as ConvLSTM, ViViT, 3D ResNet-50, etc.
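What distinguishes the video setting is the temporal dimension. A minimal baseline, embedding each frame and mean-pooling over time before nearest-neighbor matching against support videos, can be sketched as follows; the random-projection "embedding" and the synthetic action classes are illustrative assumptions (real embeddings would come from ConvLSTM, ViViT or a 3D ResNet):

```python
import numpy as np

rng = np.random.default_rng(9)

proj = rng.normal(0, 0.1, (16, 100))   # stand-in frame embedder

def video_embedding(frames):
    """frames: (T, 100) flattened frames -> temporally mean-pooled feature."""
    return (frames @ proj.T).mean(axis=0)

# Two synthetic 'action' classes differing in mean frame content.
def make_video(c):
    return rng.normal(c * 2.0, 0.3, (12, 100))

support = [(video_embedding(make_video(c)), c)
           for c in (0, 1) for _ in range(3)]
query = video_embedding(make_video(1))
pred = min(support, key=lambda e: np.sum((e[0] - query) ** 2))[1]
```

Mean pooling discards temporal ordering, which is exactly the weakness that temporal-alignment methods such as [1] address.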

For more information, please contact: j.guo@rug.nl.

References:

[1] Kaidi Cao, Jingwei Ji, Zhangjie Cao, Chien-Yi Chang, Juan Carlos Niebles. Few-Shot Video Classification via Temporal Alignment.

[2] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, Wang-chun Woo. Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting. https://arxiv.org/abs/1506.04214

[3] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, Cordelia Schmid. ViViT: A Video Vision Transformer. https://arxiv.org/pdf/2103.15691v2.pdf