Join Us

We are always looking for motivated students who want to write their BSc or MSc thesis with us. If you are interested in writing your thesis on a topic that aligns with our research interests, which span a broad range from low-level optimizations to rendering techniques and algorithms, feel free to contact us at any time.

The non-exhaustive list below is meant to give you an impression of what you could be working on with us. You do not have to work on exactly one of these topics (although you could); they are rather meant to help you choose what you want to work on. If you have your own ideas for different topics, we are more than happy to discuss those with you as well!

Moreover, we are always looking for candidates to apply for our open positions as a HiWi or as a full-time researcher. Please don't hesitate to ask the respective contact person for more details.

Thesis Topics

Global Illumination and Algorithms
If you are interested in one of the following topics, or looking to work on something similar, contact Pascal Grittmann, M.Sc.

We always offer a wide range of thesis topics related to rendering algorithms. Topics range from high-performance GPU rendering to advanced sampling methods for offline rendering. Theses can focus on implementation aspects (software engineering), on algorithms and methods (theory and math), or anywhere in between the two. If you are interested, reach out with a brief list of your interests and your relevant background.
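To give a concrete flavor of the sampling side of these topics, here is a minimal, self-contained Python sketch (not taken from any specific thesis project) of Monte Carlo integration with and without importance sampling, the basic building block behind most offline rendering algorithms; the integrand and sampling density are made up purely for illustration.

    import math
    import random

    def estimate_uniform(f, n):
        """Monte Carlo estimate of the integral of f over [0, 1] using uniform samples."""
        return sum(f(random.random()) for _ in range(n)) / n

    def estimate_importance(f, pdf, sample, n):
        """Monte Carlo estimate using samples drawn via sample() with density pdf."""
        return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

    # Toy integrand: the integral of x^2 over [0, 1] is 1/3.
    f = lambda x: x * x
    # Importance sampling with p(x) = 2x concentrates samples where f is large.
    pdf = lambda x: 2.0 * x
    sample = lambda: math.sqrt(random.random())  # inverse-CDF sampling of p(x) = 2x

    print(estimate_uniform(f, 10_000))                  # ~1/3, higher variance
    print(estimate_importance(f, pdf, sample, 10_000))  # ~1/3, lower variance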

Who is this topic for?
  • Students interested in global illumination rendering
Useful skills (not hard requirements):
  • Lectures: Computer graphics and/or Realistic Image Synthesis
Topics on High-Performance Deep Learning
If you are interested in one of the following topics, or looking to work on something similar, contact Matthias Kurtenacker, M.Sc.

High performance is a key driver of the recent advancements in deep learning. For example, using GPUs as accelerators for deep learning can speed up training significantly, reducing training times from weeks to days or even hours. For training, resource management and scheduling decisions are crucial for substantial acceleration; studying such strategies in the context of clusters with CPU and GPU hardware is one research direction. For inference, fast implementations that take advantage of specialized hardware instructions, often using reduced precision, are essential. Examples of this are the tensor cores on NVIDIA GPUs and Google's tensor processing unit (TPU). Properly utilizing these hardware units for fast inference on embedded hardware is another research topic.
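As a small illustration of the inference side, the following minimal PyTorch sketch runs a placeholder network under automatic mixed precision, so that eligible operations execute in reduced precision and can be dispatched to tensor-core kernels on supported NVIDIA GPUs; the model, sizes, and dtypes are assumptions made only for this example.

    import torch
    import torch.nn as nn

    # Placeholder model and input sizes, chosen only for illustration.
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

    device = "cuda" if torch.cuda.is_available() else "cpu"
    amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16
    model = model.to(device).eval()

    x = torch.randn(64, 1024, device=device)

    # Under autocast, matrix multiplications run in reduced precision where
    # supported, which allows the GPU to dispatch them to tensor-core kernels.
    with torch.no_grad(), torch.autocast(device_type=device, dtype=amp_dtype):
        y = model(x)

    print(y.dtype)  # reduced-precision output (float16 on GPU, bfloat16 on CPU)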

Who is this topic for?
  • Students interested in high performance and deep learning
Useful skills (not hard requirements):
  • Experience with Python and C++
  • Experience with deep learning and respective frameworks
  • Experience with distributed computing and performance optimizations

Training neural networks through backpropagation requires the activations computed during the forward pass to be available during gradient computation. However, storing these values adds a lot of memory overhead to training, often exceeding the memory limits of current hardware. Checkpointing is a strategy that can be employed to limit memory usage during training, but it comes at a significant performance cost. Rotor offers an efficient implementation of optimal checkpointing in PyTorch.
Rotor is currently limited to optimizing a single nn.Sequential container in PyTorch. The topic of this thesis is to extend Rotor to multi-region codes, with a specific focus on the execution of networks on heterogeneous architectures.
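To illustrate the memory/recompute trade-off (this is not Rotor itself), here is a minimal sketch using PyTorch's built-in torch.utils.checkpoint.checkpoint_sequential on a placeholder nn.Sequential; the network and segment count are arbitrary, and a reasonably recent PyTorch version is assumed for the use_reentrant flag.

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint_sequential

    # Placeholder sequential network; Rotor targets exactly this kind of container.
    blocks = [nn.Sequential(nn.Linear(1024, 1024), nn.ReLU()) for _ in range(16)]
    model = nn.Sequential(*blocks)

    x = torch.randn(32, 1024, requires_grad=True)

    # Split the chain into 4 segments: only the segment boundaries are kept in
    # memory during the forward pass, and the activations inside each segment
    # are recomputed on demand during the backward pass (trading compute for memory).
    y = checkpoint_sequential(model, 4, x, use_reentrant=False)
    loss = y.sum()
    loss.backward()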

Who is this topic for?
  • Students interested in high performance and deep learning
Useful skills (not hard requirements):
  • Experience with Python and C++
  • Experience with deep learning and respective frameworks
  • Experience with distributed computing and performance optimizations
Topics on High-Performance Graphics
If you are interested in one of the following topics, or looking to work on something similar, contact Stefan Lemme, M.Sc.

Many image processing and computer vision applications constitute a pipeline that needs to be executed in real time. To achieve this, implementations are highly optimized and tuned for a given architecture. However, those implementations typically only optimize individual operators and do not exploit the potential of optimizing across multiple stages. New standards like OpenVX allow computer vision applications to be described as a graph, which makes optimization across multiple stages accessible. The goal of this thesis topic is to investigate the optimization potential of image processing pipelines on embedded hardware.
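As a purely conceptual sketch (plain Python, not OpenVX, with a made-up two-operator pipeline), the following shows the kind of cross-stage optimization a graph-based description enables: fusing two operators into a single pass removes the intermediate image and roughly halves the memory traffic.

    def pipeline_unfused(pixels):
        """Each operator runs as a separate pass and materializes a full intermediate image."""
        brightened = [min(255, p + 50) for p in pixels]      # stage 1: brighten
        return [255 if p > 128 else 0 for p in brightened]   # stage 2: threshold

    def pipeline_fused(pixels):
        """Cross-stage optimization: both operators are applied per pixel in one
        pass, so the intermediate image is never written out and every pixel is
        read from memory only once."""
        return [255 if min(255, p + 50) > 128 else 0 for p in pixels]

    image = [10, 90, 200, 130]  # a tiny grey-value "image", just for illustration
    assert pipeline_unfused(image) == pipeline_fused(image)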

Who is this topic for?
  • Students interested in high performance and computer vision
Useful skills (not hard requirements):
  • Experience with C++, CUDA / OpenCL, or FPGAs
  • Experience with performance optimizations
  • Experience with computer vision

HiWi Positions

HiWi for System Administration

We are looking for motivated students who want to support the infrastructure of the computer graphics lab. The successful candidate will extend and maintain our hardware and software infrastructure.

HiWi for Revision of the CG1 Assignment Framework

We are looking for motivated students who want to contribute to the ongoing revision of the CG1 assignment framework. The successful candidate will work on the provided ray-tracing framework, on unit and integration tests, as well as on the practical assignments. In the longer term, this may lead to a tutor position in the upcoming winter term 2023/24.

Researcher Positions

Currently, we have no open positions.