Theses
We are always happy to work with students who want to write their thesis at our chair. On this page you will find an overview of the thesis topics that are currently available, as well as a list of theses that have already been completed. If you have a topic of your own that fits our research profile, we would be happy to talk to you about a possible collaboration.
Open Theses
Enhancing Robot Navigation and Coverage Tasks by Moving Obstacles Autonomously
Robots are increasingly used in unstructured environments, such as homes and factories, where they must navigate reliably and efficiently. Among other tasks, mobile robots are expected to perform coverage tasks such as cleaning and inspection. Common metrics for coverage tasks are the time it takes to cover the area, the distance traveled, and the percentage of the area that has been covered. Current robots struggle to navigate particularly cluttered environments: they drive suboptimal trajectories to avoid obstacles and, in the worst case, get stuck for lack of space to reach the next goal and must trigger recovery strategies to free themselves. This thesis aims to extend the coverage task with obstacle interaction, allowing the robot to push selected obstacles a few centimeters when they prevent it from cleaning efficiently with good coverage. Additionally, the implemented method should be integrated into a robotic platform and tested in a real-world scenario.
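To make the coverage metric concrete, here is a toy sketch of the coverage percentage on an occupancy grid; the grid representation and the `free_mask`/`visited_mask` inputs are illustrative assumptions, not part of the thesis setup.

```python
import numpy as np

def coverage_percentage(free_mask, visited_mask):
    """Fraction of traversable cells the robot has covered, in percent."""
    # free_mask: boolean grid of traversable cells.
    # visited_mask: boolean grid of cells the robot has passed over.
    covered = np.logical_and(free_mask, visited_mask).sum()
    return 100.0 * covered / free_mask.sum()
```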
This thesis is a cooperation between AI-FM and Bosch. A student conducting this thesis is encouraged to take up a temporary internship at Bosch’s research department.
- Master Thesis
- Contact: Marcel Neuhausen
Safe Reinforcement Learning Under Expert Guidance
A common approach to safe reinforcement learning involves transfer learning: train an agent in a controlled environment where safety violations are allowed, such as a simulation or a laboratory, and transfer it to the target environment, such as the real world. Our method trains an agent, called the guide, to navigate the controlled environment safely. We leverage the guide’s knowledge to later train a new agent, the student, to navigate the target environment while behaving safely at all times.
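As a rough illustration of how a trained guide could keep a student safe during training, consider a simple action-shielding scheme; the `guide_policy`, `guide_safety_estimate`, and environment interface below are hypothetical placeholders, not the method developed at the chair.

```python
def shielded_step(env, student_policy, guide_policy, guide_safety_estimate,
                  obs, threshold=0.95):
    """One environment step in which the guide can veto the student's action."""
    action = student_policy(obs)
    # If the guide estimates the proposed action to be too risky,
    # substitute the guide's own (safe) action instead.
    if guide_safety_estimate(obs, action) < threshold:
        action = guide_policy(obs)
    return env.step(action)
```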
This thesis can be pursued on a theoretical level, an applied level, or a combination of both.
- Bachelor/Master Thesis
- Contact: Markel Zubia
Uncertainty Quantification in Deep Neural Networks
Deep neural networks (DNNs) are the backbones of many applications in the field of artificial intelligence (AI) nowadays. The reliability of DNNs and their predictions is essential for building trustworthy AI applications. However, the predictions of DNNs are often subject to uncertainty due to shortcomings of the underlying DNN models and architectures or due to noise in the data. Accordingly, it is crucial to determine the sources of uncertainty, to quantify it, and to process it reasonably in order to guarantee an application’s trustworthiness.
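One common, lightweight way to quantify predictive uncertainty in an existing network is Monte Carlo dropout; the sketch below assumes a PyTorch model that contains dropout layers, and it is only one of many possible techniques.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Predictive mean and an uncertainty proxy via Monte Carlo dropout."""
    model.train()  # keep dropout layers active at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    # Mean over stochastic forward passes is the prediction;
    # the standard deviation serves as a simple uncertainty estimate.
    return preds.mean(dim=0), preds.std(dim=0)
```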
This thesis can be pursued on either a more theoretical or practical level and may also be combined with topics from the fields of reinforcement learning and decision-making.
- Bachelor/Master Thesis
- Contact: Marcel Neuhausen
Completed Theses

Quantifying Uncertainties in Depth Estimation for Autonomous Driving in CARLA
This bachelor thesis deals with distance estimation in the context of autonomous driving. A variety of techniques and sensors is used in this field, often combined in complex pipelines to estimate distances with high precision. In this thesis, I focus on a simple, straightforward architecture using only a monocular camera. For that, I built a convolutional neural network that takes images from a single camera mounted on the hood of the vehicle as input and outputs a distance estimate. The model is trained in a supervised manner on custom datasets generated in the CARLA simulator.

Temporarily Relaxing Constraints in Constrained Markov Decision Processes
This thesis explores the idea of temporarily relaxing constraints in Constrained Markov Decision Processes (CMDPs) to enhance Safe Reinforcement Learning (Safe RL). CMDPs extend standard Markov Decision Processes (MDPs) by introducing constraints on cumulative costs, thereby ensuring that the agent’s actions stay within safety limits. One common approach to solving CMDPs is Lagrangian relaxation, where the maximization of rewards and the minimization of costs are balanced by a dynamically adjusted Lagrange multiplier. The primary objective of this thesis is to investigate if and how temporary constraint relaxation can improve the performance of the trained agent and to determine its impact on the training process.
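For intuition, the dual-variable update at the heart of Lagrangian CMDP methods fits in a few lines; the names and the linear relaxation schedule below are illustrative assumptions, not the scheme evaluated in the thesis.

```python
def update_lagrange_multiplier(lmbda, episode_cost, cost_limit, lr=0.01):
    # Gradient ascent on the dual variable: lambda grows while the
    # constraint is violated (cost above limit), shrinks otherwise,
    # and is clipped at zero.
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))

def relaxed_cost_limit(base_limit, step, relax_steps=10_000, relax_factor=2.0):
    # One conceivable temporary relaxation: start with a loosened cost
    # limit and anneal it linearly back to the true limit.
    if step >= relax_steps:
        return base_limit
    frac = 1.0 - step / relax_steps
    return base_limit * (1.0 + (relax_factor - 1.0) * frac)
```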

Speeding Up Reinforcement Learning via Abstractions of Complex Environments
This study investigates whether abstracting complex environments (discretizing a continuous state space) can speed up reinforcement learning. A complex environment was abstracted with multiple approaches to see whether this yields a benefit over training without abstractions. We concluded that the approaches we tried do not speed up the training process to a satisfactory degree, while also losing precision. We propose that abstractions should not be constructed manually on the environment; rather, the agent should receive precise observations, and the abstraction ability of the neural network itself should be improved.
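For illustration, a minimal manual abstraction of the kind examined here is uniform binning of a continuous observation; the bounds, bin count, and example observation below are made-up values.

```python
import numpy as np

def discretize(obs, low, high, n_bins=10):
    """Map a continuous observation to a tuple of equal-width bin indices."""
    ratios = (np.asarray(obs, dtype=float) - low) / (high - low)
    bins = (ratios * n_bins).astype(int)
    # Clip so values on the upper boundary fall into the last bin.
    return tuple(np.clip(bins, 0, n_bins - 1))

# Example: a 2-D observation bounded in [-1, 1] on each axis.
state = discretize([0.1, -0.4], low=np.array([-1.0, -1.0]),
                   high=np.array([1.0, 1.0]))
```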