Scientific Visualization and Computer Graphics


Table of available MSc/BSc thesis projects

Supervisor Abbrv.
Prof. Dr. Jiri Kosinka JK
Dr. Steffen Frey SF
Dr. Cara Tursun CT
Dr. Christian Kehl CK

This page provides a list of current BSc/MSc projects offered by the SVCG group and their status. The column called Sup. indicates the main supervisor of the project (see the table on the right), and the column labelled # indicates how many positions are currently available in a particular project.

Type Sup. # Project
BSc JK 0 Precision of drawing in VR
What is the precision of drawing simple shapes, such as line segments and circles, in virtual reality? How do various guide tools help? You will investigate these research questions in this project. You should own a computer able to drive a VR headset. If you do not own a VR headset, you can borrow one from us or the CIT. You can build on recent Tilt Brush projects: see here and here, and the references cited therein, or, even better, work in OpenBrush.
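As a taste of how drawing precision could be quantified, here is a hedged sketch (not prescribed by the project): fit a circle to the sampled stroke points with a least-squares (Kåsa) fit and report the RMS radial deviation as a precision score. Function names and the choice of metric are illustrative.

```python
import numpy as np

def circle_fit_precision(points):
    """Fit a circle to drawn stroke points (Kasa least-squares fit) and return
    the fitted center and radius plus the RMS radial deviation as a precision score.
    Expects at least three non-collinear (x, y) points."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Points on a circle satisfy x^2 + y^2 = 2*cx*x + 2*cy*y + c,
    # with c = r^2 - cx^2 - cy^2; solve for (cx, cy, c) by least squares.
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx**2 + cy**2)
    rms = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - radius) ** 2))
    return (cx, cy), radius, rms
```

A perfectly drawn circle yields an RMS deviation of zero; larger values indicate a less precise stroke.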
0 Anamorphic sculptures in Virtual Ray Tracer
This paper introduced a simple yet visually intriguing way of producing anamorphic sculptures using reflection and refraction. In this project, you will explore the method and find the best way to implement it using real-time ray tracing in Virtual Ray Tracer, an educational tool for teaching/learning ray tracing developed in the SVCG group using Unity 3D, to further enhance its educational (and artistic) potential. For this project, you should own a system with a modern GPU capable of real-time ray tracing via Unity (such as an Nvidia RTX).
0 Virtual Ray Tracer in VR
In this project, you will explore and implement ways to visualize ray tracing in virtual reality using Virtual Ray Tracer, an educational tool for teaching/learning ray tracing developed in the SVCG group using Unity 3D. You should own a computer able to drive a VR headset. If you do not own a VR headset, you can borrow one from us or the CIT.
0 Molecule visualization and interaction
This is a project in collaboration with Protyon, a startup company based in Groningen, focused on bringing molecular modeling to hospital clinics. Protyon would like to explore Molstar, an open-source molecular visualization package, as a web browser-based tool for presenting their molecular analyses. The Molstar package is written in TypeScript and already includes all basic and many advanced features for molecular visualization and for working with standard file formats.

Some of its basic features include methods and user interface elements for loading and exporting molecular data, various 3D representations for visualizing molecule coordinates, and simple interactive measurements on atoms.

In this project, we aim to build upon the existing feature set and expand the capabilities of Molstar, as follows:

1) Simplified user interface with a customized look.
2) Custom, visually appealing appearance of molecules; both traditional rendering and neural network-based styling can be explored.
3) Ability to render molecular scenes at high quality.
4) Interaction with the 3D molecular viewer to select groups of atoms and make connections between them.
5) Saving selected groups and their relationships for later loading.
6) Loading groups and interactions into the viewer.
7) Non-mouse pointer navigation through key molecular elements, e.g. groups (optional).
8) Exploring options for VR (optional).

The project will be carried out in close collaboration with Protyon using an agile methodology. Regular communication of the project status, clarification of required features, and refinement of those features will be an ongoing part of the practical realization of this project.

BSc/MSc SF 0 Volume Raycasting in Virtual Ray Tracer
In this project for two students, you will extend Virtual Ray Tracer, an educational tool for teaching/learning ray tracing developed in the SVCG group, with support for rendering volumes: one student will develop different compositing modes and local lighting, and the other will add transfer function support.
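To illustrate the compositing side of the task, here is a minimal sketch of front-to-back emission-absorption compositing along a single ray. The transfer function below is a toy placeholder, not the one used in Virtual Ray Tracer, and lighting is omitted.

```python
def transfer_function(scalar):
    """Toy transfer function: map a scalar in [0, 1] to a grayscale color
    and an opacity (brighter values are more opaque)."""
    return scalar, scalar * 0.1

def raycast(samples):
    """Front-to-back emission-absorption compositing of scalar samples along one ray."""
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer_function(s)
        color += (1.0 - alpha) * a * c   # add contribution weighted by remaining transparency
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.99:                # early ray termination
            break
    return color, alpha
```

Other compositing modes (e.g. maximum intensity projection) replace the accumulation rule inside the loop.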
0 Optimization of Grids for the Visualization of Large Data Collections
Data collections are often visualized by placing graphical representations of their members (tiles) into a Cartesian grid such that similar ones are close. This project involves implementing alternative strategies to generate hierarchical grids (that can be viewed at different granularities) and evaluating their outcomes for different datasets, including image databases, stock predictions, and simulations (one student). For fast and efficient execution, the optimization approaches should run in parallel on GPUs (another student). The developed optimization methods and implementations need to interface with an existing C++ program (Python bindings exist as well).

demo: paper:
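To give a flavour of the optimization involved, here is a minimal, didactic CPU sketch: tiles are swapped at random, and a swap is kept only if it lowers the total dissimilarity between grid neighbours. The energy function and greedy acceptance rule are illustrative stand-ins for the strategies to be developed (and are far from GPU-parallel).

```python
import numpy as np

def grid_energy(grid, feats):
    """Total squared feature distance between horizontally and vertically adjacent tiles."""
    e = 0.0
    h, w = grid.shape
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                e += float(np.sum((feats[grid[y, x]] - feats[grid[y, x + 1]]) ** 2))
            if y + 1 < h:
                e += float(np.sum((feats[grid[y, x]] - feats[grid[y + 1, x]]) ** 2))
    return e

def optimize_grid(feats, h, w, iters=2000, seed=0):
    """Greedy swap optimization: accept random tile swaps that lower the energy.
    Recomputes the full energy per swap, so this is only a didactic baseline."""
    rng = np.random.default_rng(seed)
    grid = np.arange(h * w).reshape(h, w)   # initial assignment: tile i at cell i
    energy = grid_energy(grid, feats)
    for _ in range(iters):
        a, b = rng.integers(0, h * w, size=2)
        (y1, x1), (y2, x2) = divmod(int(a), w), divmod(int(b), w)
        grid[y1, x1], grid[y2, x2] = grid[y2, x2], grid[y1, x1]
        new_energy = grid_energy(grid, feats)
        if new_energy < energy:
            energy = new_energy
        else:  # undo the swap
            grid[y1, x1], grid[y2, x2] = grid[y2, x2], grid[y1, x1]
    return grid, energy
```

A practical implementation would update the energy incrementally (only the affected neighbourhoods change per swap) and evaluate many candidate swaps in parallel on the GPU.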

BSc CT 0 A software tool for visualizing gaze data and saccade predictions
Eye trackers, which provide information about an observer's gaze position on a screen, have proven useful across various visual computing domains. Their increasing affordability and technical capabilities have recently made them integral to computer graphics applications, notably in foveated rendering. Moreover, as practical applications of eye tracking expand, the analysis of eye movements is becoming a fundamental aspect of research utilizing gaze data. A significant area of this research focuses on the online prediction of fast eye movements, known as saccades, which occur when we naturally shift our gaze while observing visual stimuli.

Numerous open-source and proprietary software solutions exist for visually analyzing gaze data from eye trackers. Common visualizations in these tools include heatmaps and scan paths, which illustrate eye movements and fixation durations. However, there is a noticeable absence of software tools specifically designed for analyzing saccades and evaluating the accuracy of saccade predictions, particularly in video contexts. This project aims to fill this gap by developing a software tool that not only records and visualizes gaze data from an eye tracker but also assesses the performance of saccade predictions based on existing research. This project requires being physically present at the university regularly, working in the research lab of the group (Vislab) using the eye tracker, and conducting subjective experiments with human participants.
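To give a flavour of the kind of analysis involved, here is a minimal velocity-threshold (I-VT style) saccade detector in the spirit of the identification methods in the references. The threshold, units, and function names are illustrative, not prescribed by the project.

```python
import numpy as np

def detect_saccades(x, y, t, velocity_threshold=30.0):
    """Velocity-threshold saccade detection on gaze samples.
    x, y are gaze positions in degrees of visual angle, t the sample times in
    seconds; samples whose angular speed exceeds the threshold (deg/s) are
    grouped into saccade events, returned as (start_index, end_index) pairs."""
    vx = np.gradient(np.asarray(x, dtype=float), t)
    vy = np.gradient(np.asarray(y, dtype=float), t)
    speed = np.hypot(vx, vy)                 # angular speed in deg/s
    is_saccade = speed > velocity_threshold
    # group consecutive above-threshold samples into events
    events, start = [], None
    for i, s in enumerate(is_saccade):
        if s and start is None:
            start = i
        elif not s and start is not None:
            events.append((start, i - 1))
            start = None
    if start is not None:
        events.append((start, len(is_saccade) - 1))
    return events
```

Evaluating a saccade predictor could then compare predicted landing positions against the gaze position at each detected event's end.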


[1] Duchowski, A. T. (2020). Eye-based interaction in graphical systems: 20 years later gaze applications, analytics, & interaction. ACM SIGGRAPH 2020 Courses, 1-246. DOI:

[2] Santini, T., Fuhl, W., Kübler, T., & Kasneci, E. (2016, March). Bayesian identification of fixations, saccades, and smooth pursuits. In Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research & Applications (pp. 163-170). DOI:

[3] Kurzhals, K., Heimerl, F., & Weiskopf, D. (2014, March). ISeeCube: Visual analysis of gaze data for video. In Proceedings of the symposium on eye tracking research and applications (pp. 43-50). DOI:

[4] Arabadzhiyska, E., Tursun, O. T., Myszkowski, K., Seidel, H. P., & Didyk, P. (2017). Saccade landing position prediction for gaze-contingent rendering. ACM Transactions on Graphics (TOG), 36(4), 1-12. DOI:

BSc CT 0 Visual saliency of 3D scenes in VR
Visual saliency refers to the regions of a visual stimulus, such as an image or video, that attract observer attention. We expect observers to fixate their gaze on highly salient regions more often than on less salient ones. In this project, the goal is to develop a software tool using the Unity game engine on a Varjo XR-3 Virtual Reality headset to collect eye-tracking information from observers while they view a set of 3D scenes with stereo rendering. A Gaussian kernel will then be used to compute fixation density maps. Finally, the same experiment will be repeated by rendering the 3D scenes on a virtual 2D plane without depth information, and the differences between the resulting saliency maps will be analyzed. This project requires being physically present at the university regularly, working in the research lab of the group (Vislab) using the VR headset, and conducting subjective experiments with human participants. In addition, familiarity with Unity game engine development (or a desire to learn it independently) is required.
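As an illustration of the fixation density map step, here is a minimal sketch: fixation points are accumulated into an empty map and blurred with a (separable) Gaussian kernel. The kernel width and coordinate conventions are illustrative; in practice sigma is chosen based on viewing geometry (see the Tobii heatmap reference).

```python
import numpy as np

def fixation_density_map(fixations, height, width, sigma=30.0):
    """Accumulate (x, y) fixation points into a map and apply a separable
    Gaussian blur; sigma is in pixels and is an illustrative default."""
    density = np.zeros((height, width))
    for x, y in fixations:
        density[int(round(y)), int(round(x))] += 1.0
    # build a 1D Gaussian kernel, truncated at 3 sigma, normalized to sum 1
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # separable blur: convolve columns, then rows
    for axis in (0, 1):
        density = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, density)
    return density
```

Comparing the VR and 2D conditions would then amount to comparing two such maps, e.g. via correlation or a divergence measure.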


[1] Fang, Y., Wang, J., Narwaria, M., Le Callet, P., & Lin, W. (2014). Saliency detection for stereoscopic images. IEEE Transactions on Image Processing, 23(6), 2625-2636. DOI:

[2] Dittrich, T., Kopf, S., Schaber, P., Guthier, B., & Effelsberg, W. (2013, February). Saliency detection for stereoscopic video. In Proceedings of the 4th ACM Multimedia Systems Conference (pp. 12-23). DOI:

[3] Fan, X., Liu, Z., & Sun, G. (2014, August). Salient region detection for stereoscopic images. In 2014 19th International Conference on Digital Signal Processing (pp. 454-458). IEEE. DOI:

[4] Varjo Unity XR documentation and samples. Online resource accessible at:

[5] Tobii theory on heatmap generation. Online resource available at:

BSc CT 0 Modeling LOD visibility in the peripheral vision
Level-of-detail (LOD) rendering is a technique for rendering complex 3D meshes in computer graphics. In its common application, LOD renders objects closer to the viewer with higher detail, while those further away are rendered in less detail. A recent application of LOD is foveated rendering, where detail is reduced in the peripheral visual field for efficient memory usage and performance optimization. The goal of this project is to model the visually tolerable amount of LOD reduction as a function of the distance to the gaze position on a screen (eccentricity) using a simple LOD method such as the one provided in the references. You will implement a subjective experiment framework and collect visibility data from human participants with different LOD strategies and different viewing parameters. Later, the collected data will be used to predict the tolerable amount of LOD reduction for an arbitrary mesh model. The experiments should be developed for and conducted in the research lab of the group (Vislab). Conducting subjective experiments requires being physically present at the university regularly.
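As a sketch of the modelling target, eccentricity can be computed from the gaze position and the screen geometry; a mapping from eccentricity to LOD level might then look like the purely hypothetical `lod_level` below (the thresholds are placeholders for what the experiments would actually measure).

```python
import math

def eccentricity_deg(gaze_px, point_px, screen_width_px, screen_width_cm,
                     viewing_distance_cm):
    """Angular distance (degrees of visual angle) between the gaze position
    and a point on the screen, assuming a flat screen viewed head-on."""
    px_to_cm = screen_width_cm / screen_width_px
    dx = (point_px[0] - gaze_px[0]) * px_to_cm
    dy = (point_px[1] - gaze_px[1]) * px_to_cm
    dist_cm = math.hypot(dx, dy)
    return math.degrees(math.atan2(dist_cm, viewing_distance_cm))

def lod_level(ecc_deg, deg_per_level=5.0, max_level=6):
    """Hypothetical mapping: drop one LOD level per deg_per_level degrees of
    eccentricity. The experiment in this project would replace this with a
    data-driven model of the visually tolerable reduction."""
    return min(max_level, int(ecc_deg // deg_per_level))
```

The subjective experiments would essentially estimate the true shape of `lod_level` per viewing condition, rather than assuming a linear staircase.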


[1] Hoppe, H. (1997, August). View-dependent refinement of progressive meshes. In Proceedings of the 24th annual conference on Computer graphics and interactive techniques (pp. 189-198). DOI:

[2] Luebke, D. (2003). Level of detail for 3D graphics. Morgan Kaufmann. ISBN: 978-1-55860-838-2

[3] Fast functional WebGL library. Online resources:

[4] Surace, L., Tursun, C., Celikcan, U., & Didyk, P. (2023). Gaze-Contingent Perceptual Level of Detail Prediction. DOI:

MSc CT any Projects in eye tracking, AR/VR, visual perception
If you have an idea related to the topics given above, or if you want to propose your own project topic, feel free to contact Cara by email. Cara's webpage can be accessed from here.
BSc CK 0 Exploring the Visual Design Space of Oceanographic Fluid-Flow Visualisations
The computational fluid dynamics (CFD) of oceanic regimes relies on Eulerian modelling of discrete hydrodynamic velocity fields and Lagrangian modelling of trajectory-integrated particles within the fluid. This information is highly complex, due to the directional tensor layout of the Eulerian fields and the multi-attributed particle trajectories. Communicating insight on the motion and transport of floating particulates in the ocean – be that plastic debris, plankton microbes or spilled oil droplets – requires effective visualization. That said, contemporary fluid visualizations fall short in effectively communicating the physical causes and observable consequences of marine particle transport. In this project, we explore the visualization design space of Eulerian and Lagrangian visualisations to deduce design principles for effective visual communication.

Feel free to contact Christian by email for detailed project descriptions. A short description is available here.

BSc/MSc CK & SF 0 Image analysis of polarized, spectral rock thin-sections
The ongoing gas depletion of the Groningen natural gas field leads to bedrock subsidence and associated seismic activity. Estimating the probability of local seismic activity requires a reliable rock model of Groningen’s gas field. In this interdisciplinary project with the Geo-Energy group at the university’s Energy and Sustainability research institute, we will enrich the rock model with properties derived from microscope scans of the rock’s mineral structure. These microscope scans – so-called ‘thin-sections’ – are spectral image stacks, which need to be (a) co-registered, (b) segmented, (c) labelled and (d) numerically described. The user-guided outcome of the project will speed up the creation of training references for automated deep-learning procedures in later projects. The interactive tools developed in this project thus bridge the gap in appropriate training data needed to make deep learning a viable option in the future.
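As an illustration of step (a), co-registration of two image channels is often bootstrapped with phase correlation. The sketch below handles integer translations only and is not the method prescribed by the project; real thin-section stacks typically also need subpixel and possibly rotational alignment.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer translation that aligns `moving` to `ref` via
    phase correlation (normalized cross-power spectrum)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    F /= np.abs(F) + 1e-12            # keep only the phase
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # wrap shifts into the range [-size/2, size/2)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Applying the returned shift to `moving` (e.g. with `np.roll` for this cyclic toy case) recovers the reference alignment.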

Feel free to contact Christian by email or Steffen by email for detailed project descriptions. Description teasers for the segmentation, registration and classification are available.

CK 0 Explorative Toolbox for Teaching Geostatistical Modelling
Using statistical tools is comparatively easy for mathematicians and computer scientists, due to the focus of their education. Yet, the geology and geoscience curriculum – especially at an international level – allocates little time for mathematical foundations. Consequently, learning geostatistics is a very complicated process for geoscientists in general. In this project, we aim to flatten the learning curve of geostatistics by developing a highly interactive and engaging visual toolbox for experimenting with traditional and cutting-edge geostatistical methods. Engagement and fun when using the tools will be achieved by exploring new means of visual storytelling and gamification of the toolset. The resulting tool will be used in existing and future geoscientific courses at the RUG and abroad.

Feel free to contact Christian by email for detailed project descriptions. The description teaser can be found here.

0 Interactive, Visual Track-and-Trace of Particles on Dutch Coastlines
Have you heard that local shipping accidents happen with increasing frequency near the Dutch coastline? This is a local hot topic with high visibility, as the Waddenzee is part of the cultural heritage of our region. That said, with the high visibility also comes potential hysteria in the local population. We aim to educate and engage the public on the topic of coastline pollution by developing an interactive, highly engaging visual platform to explore the fate of marine plastic garbage, cardboard droppings and shipping goods on Dutch coastlines. We use state-of-the-art physical hydrodynamic data in a 2D fluid simulation. Users can then not only drop particles into the virtual sea but also use various focus-and-context tools in space and time to investigate particle transport and follow their particles from drop-off to beaching at the coast. New avenues in Lagrangian particle simulations, 3D interaction and visual storytelling can be explored within your student work on this project.
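To illustrate the Lagrangian side of such a platform, here is a minimal sketch of advecting particles through a 2D velocity field with RK4 time stepping. The `velocity_at` callback is an assumption standing in for interpolated hydrodynamic data; production Lagrangian frameworks add boundary handling, diffusion, and beaching logic.

```python
import numpy as np

def advect(positions, velocity_at, t0, t1, dt=0.01):
    """Advect Lagrangian particles through a 2D velocity field using classic
    RK4 time stepping. velocity_at(pos, t) must return an (N, 2) array of
    velocities; in practice it would interpolate the Eulerian model data in
    space and time."""
    pos = np.array(positions, dtype=float)
    t = t0
    while t < t1:
        h = min(dt, t1 - t)  # clamp the final step to land exactly on t1
        k1 = velocity_at(pos, t)
        k2 = velocity_at(pos + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity_at(pos + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity_at(pos + h * k3, t + h)
        pos = pos + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return pos
```

The interactive "drop a particle" feature would call something like this per frame, drawing each particle's accumulated trajectory for the focus-and-context views.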

Feel free to contact Christian by email for detailed project descriptions. The description teaser can be found here.