# Research Internships

#### Model checking st-t-t-tutter equivalent system models

**Supervisor:** Heerko Groefsema. **Status:** available. **Date:** 26/10/2021.

Verification entails proving or disproving the correctness of a system model with respect to its specification.
One approach towards verification is model checking.
In our research, we use model checking to verify process models against multiple sets of formal specifications based on atomic propositions.
Often, however, within these different sets of specifications we are only interested in certain subsets of the atomic propositions.
Therefore, to optimize model checking performance, we are interested in obtaining multiple stutter equivalent sub-models of the given system model that only contain states with certain atomic propositions and minimal paths between those states.
In this project, the student will investigate system models and the concept of stutter equivalence, and design a data structure that contains both the system model and its stutter equivalent sub-models. States should never be duplicated, while stutter equivalent paths in the sub-models should map easily back to the corresponding states in the system model.
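As a minimal illustration of the core concept (the representation here is illustrative, not part of the project): two paths are stutter equivalent when collapsing consecutive states with identical atomic-proposition labels yields the same sequence.

```python
def collapse_stutter(path):
    """Collapse consecutive states with identical AP labels.

    `path` is a list of frozensets of atomic propositions; the result
    keeps one representative per block of stuttering states.
    """
    collapsed = []
    for labels in path:
        if not collapsed or collapsed[-1] != labels:
            collapsed.append(labels)
    return collapsed

def stutter_equivalent(p1, p2):
    """Two paths are stutter equivalent iff they collapse to the same sequence."""
    return collapse_stutter(p1) == collapse_stutter(p2)

# {a} {a} {b} and {a} {b} {b} both collapse to {a} {b}.
p1 = [frozenset({"a"}), frozenset({"a"}), frozenset({"b"})]
p2 = [frozenset({"a"}), frozenset({"b"}), frozenset({"b"})]
assert stutter_equivalent(p1, p2)
```

The data structure the project asks for would, in effect, store each state once and let sub-model paths refer back to blocks of such stuttering states.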

#### Expansion of temporal logics using automated planning

**Supervisor:** Heerko Groefsema. **Status:** available. **Date:** 26/10/2021.

Verification entails proving or disproving the correctness of a system model with respect to its specification.
Such specifications are often expressed using formal methods of mathematics such as temporal logics.
To obtain information on successor states in system models, it is possible to rewrite temporal logic expressions using semantic equivalences and expansion laws.
Automated planning is an artificial intelligence technique that aims to find an optimal set of actions which together accomplish a predetermined goal.
The question for the student is therefore twofold: can automated planning be used to obtain the possible expanded logic expressions, and can it find the optimal expanded expression?
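As a small sketch of what such expansion laws look like (the tuple encoding of formulas is an illustrative assumption), the standard LTL laws F φ ≡ φ ∨ X F φ, G φ ≡ φ ∧ X G φ, and φ U ψ ≡ ψ ∨ (φ ∧ X(φ U ψ)) can be applied as rewrite steps:

```python
# Formulas as nested tuples: ("U", p, q), ("F", p), ("G", p),
# ("X", p), ("or", a, b), ("and", a, b); atoms are strings.

def expand(f):
    """Apply one LTL expansion law to the top-level operator."""
    op = f[0]
    if op == "U":   # p U q  =  q or (p and X(p U q))
        _, p, q = f
        return ("or", q, ("and", p, ("X", f)))
    if op == "F":   # F p  =  p or X F p
        _, p = f
        return ("or", p, ("X", f))
    if op == "G":   # G p  =  p and X G p
        _, p = f
        return ("and", p, ("X", f))
    return f  # no expansion law applies

assert expand(("F", "p")) == ("or", "p", ("X", ("F", "p")))
```

A planner would then search over sequences of such rewrite actions, with some cost measure selecting the optimal expansion.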

#### Make a difference in Energy Transition with Machine Learning

**Contact:** Viktoriya Degeler or Frank Blaauw. **Status:** available. **Location:** eWEning star. **Date:** 01/06/2021.

eWEning star is a “fresh from the oven” start-up that is developing a discovery tool serving stakeholders in the renewable energy sector with relevant scientific information regarding renewable energy. Currently, people in this sector use keyword-based search queries to find scientific papers and reports, but with eWEning star’s concept, these papers are smartly categorized, saving users a lot of time and nerves. By making the search process more efficient, we can make the energy transition towards renewables faster!
Currently, we have around 900 documents that are manually categorized in three different ways: (i) perspective, (ii) position in the value chain, and (iii) geographical location. Combined, these yield 15 categories. Depending on the length of your internship, it is possible to work on all three, or to choose one of them.
While this manual approach is feasible for a small number of papers, it does not scale well.
Our aim is to apply machine learning to improve this process. We expect that machine learning can provide a fast solution for categorizing already published papers according to the eWEning star concept. You are given the freedom to design, develop, and test a process that leads to automated categorization.
You have a background in data science and/or computer science, and a natural curiosity for solving problems. You are not afraid to ask questions when you hit a wall, but you are capable of working independently. An entrepreneurial mentality is a benefit, as eWEning star is a start-up, and good communication skills towards the non-technical founder are needed.
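As one very simple baseline for such a categorizer (the categories and keyword weights below are made up for illustration; a real solution would learn from the ~900 labelled documents), texts can be scored against per-category keyword profiles:

```python
from collections import Counter

# Hypothetical category profiles; in practice these would be learned
# from the manually labelled documents rather than written by hand.
PROFILES = {
    "solar": Counter({"solar": 3, "photovoltaic": 2, "panel": 1}),
    "wind": Counter({"wind": 3, "turbine": 2, "offshore": 1}),
}

def categorize(text):
    """Return the category whose keyword profile overlaps most with the text."""
    words = Counter(text.lower().split())
    def score(profile):
        return sum(weight * words[w] for w, weight in profile.items())
    return max(PROFILES, key=lambda c: score(PROFILES[c]))

assert categorize("new offshore wind turbine report") == "wind"
```

A supervised classifier over TF-IDF features (or a pretrained language model) would be the natural next step beyond this keyword baseline.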

#### Researchable in-company internship

**Contact:** Frank Blaauw. **Status:** available. **Location:** Researchable B.V. **Date:** 01/06/2021.

Researchable B.V. is a small startup located in Groningen. They aim to improve science by developing software in the early phases of research projects (e.g. developing software to collect data, or automate other parts of research) and at the final phase of research projects (i.e., the valorisation of research). During this internship, the student will be part of the Researchable team, and work on various projects that they are currently running. Their office is located on Zernike.

#### Caching with Limited Memory Consumption for Backtracking Search

**Supervisor:** Michel Medema. **Status:** available. **Date:** 25/11/2021.

When solving Constraint Satisfaction Problems, standard backtracking search algorithms oftentimes encounter the same subproblem more than once. In order to avoid solving a subproblem repeatedly only to discover that it still has no solution, solvers can employ caching techniques to record unsatisfiable subproblems. Unfortunately, the size of the cache, and therefore the amount of memory that it consumes, is, in the worst case, exponential with respect to the size of the problem, making this generally infeasible in practice. Instead of storing all subproblems, it may be possible to store a fixed number of them. The aim of this project is to analyse how the entries in the cache are used during the search process and to use these insights to reduce the memory consumption of caching techniques. Some subproblems may, for example, not contribute much to the overall speed-up and could easily be discarded.

#### Reducing Redundant Exploration of Parallel Search Algorithms

**Supervisor:** Michel Medema. **Status:** available. **Date:** 25/11/2021.

Finding the optimal solution to a Constraint Satisfaction Problem is an NP-complete problem, meaning that, in the worst case, the time complexity of a search algorithm grows exponentially with respect to the size of the problem. Many techniques have been developed to reduce the search space, including constraint propagation and pruning based on a bound on the cost. Besides that, parallel search algorithms can be used to explore multiple solutions in parallel, thereby potentially reducing the overall search time. However, parallel search algorithms may perform some redundant search, as it is not always possible to make the most effective use of pruning techniques. This redundant search may occur because, unlike sequential algorithms that can use the upper or lower bound on the cost to disregard subsequent solutions, a parallel algorithm may have already started to explore such solutions before the bound is known. The impact of this problem becomes considerably worse when the search is distributed and involves many local decisions without any global coordination, such as when the algorithm uses decomposition techniques. This project aims to explore the influence that this redundant search has on the overall execution time of the algorithm and possible techniques that can be used to avoid it, at least partially.
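The role of the bound is easy to see in a sequential setting. The toy branch-and-bound sketch below (the problem and encoding are illustrative) counts explored nodes with and without bound-based pruning; a parallel worker that has not yet received the incumbent bound behaves like the unpruned version on its part of the tree:

```python
def branch_and_bound(domains, prune=True):
    """Minimise the sum of one chosen value per variable.

    Returns (best cost, number of search nodes explored). With
    `prune=False` the full tree is explored, mimicking a worker
    that searches before the incumbent bound reaches it.
    """
    best = float("inf")
    explored = 0

    def search(i, cost):
        nonlocal best, explored
        explored += 1
        if prune and cost >= best:
            return  # partial assignment already worse than incumbent
        if i == len(domains):
            best = min(best, cost)
            return
        for v in sorted(domains[i]):
            search(i + 1, cost + v)

    search(0, 0)
    return best, explored

domains = [[1, 5], [2, 6], [3, 7]]
best_p, nodes_p = branch_and_bound(domains, prune=True)
best_f, nodes_f = branch_and_bound(domains, prune=False)
assert best_p == best_f == 6
assert nodes_p < nodes_f  # pruning skips subtrees; delayed bounds forfeit this
```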

#### Decomposition Techniques for Optimisation Problems

**Supervisor:** Michel Medema. **Status:** available. **Date:** 25/11/2021.

Decomposition techniques try to divide a Constraint Satisfaction Problem into independent subproblems based on the dependencies that exist between the variables. The decomposed problem often has a lower worst-case complexity than the original problem, and finding a solution to the problem is generally faster. One such algorithm is Backtracking with Tree-Decomposition, which applies standard backtracking search on the decomposed version of a problem. However, this algorithm was originally designed to solve satisfaction problems rather than optimisation problems, meaning performance results are not available for optimisation problems. The evaluation of Backtracking with Tree-Decomposition on optimisation problems is the focus of this project, as well as comparing these results to other constraint solvers and decomposition techniques.
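The benefit of decomposition can be sketched in its simplest form (the variables and constraints below are illustrative): when no constraint crosses two components of the constraint graph, each component can be solved independently, so the search is exponential in the largest component rather than in the whole problem, and component solutions combine freely:

```python
from itertools import product

def solve_component(variables, domains, constraints):
    """Brute-force all satisfying assignments of one independent component."""
    solutions = []
    for values in product(*(domains[v] for v in variables)):
        assignment = dict(zip(variables, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

# Component 1: x != y; component 2: z < w (no constraint crosses them).
domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1], "w": [0, 1]}
c1 = solve_component(["x", "y"], domains, [lambda a: a["x"] != a["y"]])
c2 = solve_component(["z", "w"], domains, [lambda a: a["z"] < a["w"]])

# Every combination of component solutions is a solution of the full problem.
assert len(c1) * len(c2) == 2
```

Tree decomposition generalises this idea to components that share variables, and the project asks how well Backtracking with Tree-Decomposition carries over when an objective function must also be optimised.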