Theses

Thesis topics and application process for the AI and Society chair.

Application Process

We are always looking for motivated students to join us for their thesis work. You can choose from our available topics below or propose your own research idea. Either way, we will collaborate with you to refine your proposal and chart a course for the thesis.

We provide full computational support, including access to GPUs and other resources needed for your research.

We want to ensure that your thesis journey is successful. To this end, we look for students who have taken courses in AI/ML and have hands-on experience working with AI/ML models. To ensure a strong match between your interests and our research areas, please send your application to aisoc-office@rub.de with the following information:

  1. Name.
  2. Matriculation number.
  3. Exact name of study program.
  4. Current average grade.
  5. List of AI/ML-related courses you have taken, along with your grades.
  6. Proposed start date of your thesis.
  7. The topic you would like to pursue. If the topic is not from our list, add a brief description.

We will get back to you within two weeks of receiving your email.

Available Topics

Bias in Large Language Model Outputs

Develop metrics to measure demographic bias in LLM outputs. Design methods to mitigate biases.

Hallucination Detection in GenAI

Design methods to automatically detect when a GenAI model is hallucinating, e.g., stating incorrect facts.

Characterizing the Effect of Inference Acceleration

Investigate the effect of inference acceleration techniques, e.g., quantization and pruning, on model performance.

Testing How Well LLMs Understand the World

Conduct controlled experiments, such as counterfactual tests, to explore whether LLMs truly understand the subject matter.

Past Theses

  1. On Counterfactual Reasoning Abilities of LLMs
    Tamara Stojanovska, MS, 2025

  2. Characterizing LLM Generations in Diverse Real-world Conditions
    Leon Swazinna, MS, 2025

  3. Comparing Performance of LLMs in Various Languages
    Anas Al Shoker, MS, 2025

  4. Exploring Current and Potential Privacy Risks of LLMs and How They Compare to Existing Taxonomies
    Helen Schmitt, MS, 2025

  5. Bias Mitigation Approaches in LLMs
    Shaharyar Ashraf, BS, 2025

  6. Impact of Common Deployment Strategies on Bias in Large Language Models
    Elisabeth Kirsten, MS, 2024