Thesis Topics

Beyond the theses offered on this page, you can always check out our research projects and contact persons working on stuff that interests you. Feel free to propose topics or ask if there’s any hot topic that isn’t listed down below yet! Just don’t forget to include some information on your background & interests.

Collection Of ThESES

Low-Cost Fault Injection Techniques: A Practical Approach to Hardware Security Testing (Bachelor-Thesis)

With the availability of affordable and ready-to-use hardware, such as the Raspberry Pi Pico [1], performing hardware attacks on embedded devices has become more and more accessible, allowing to conduct security assessments at low cost [2, 3, 4]. One particularly interesting method is the use of crowbars as a glitching mechanism [5]. This method uses a MOSFET circuit is used to perform voltage glitches, which can be found in the ChipWhisperer [6]. Our research interest is to investigate the capabilities of using a Raspberry Pi Pico in combination with a crowbar circuit to performing glitching attacks on embedded devices. To do this, we want to test different types of MOSFETs and understand the extent to which these attacks can be performed at low cost.

The scope of this thesis is as follows:

  • Conduct a literature review on existing fault injection methods using crowbar-glitches and low-cost hardware.
  • Select appropriate MOSFETs for implementing crowbar-glitches capable of manipulating bus signals.
  • Design and assemble a basic hardware setup to perform crowbar-glitch fault injections.
    Experiment with altering data transmission by inducing glitches.
  • Document the experimental procedures and initial findings.

References:

[1] https://www.raspberrypi.com/documentation/microcontrollers/pico-series.html
[2] O’Flynn, Colin. „PicoEMP: A Low-Cost EMFI Platform Compared to BBI and Voltage Fault Injection using TDC & External VCC Measurements.“ _2023 Workshop on Fault Detection and Tolerance in Cryptography (FDTC)_. IEEE, 2023. https://ieeexplore.ieee.org/abstract/document/10495121
[3] https://github.com/stacksmashing/pico-tpmsniffer
[4] https://hackaday.io/project/196357-picoglitcher-v2
[5] O’Flynn, Colin. „Fault injection using crowbars on embedded systems.“ _Cryptology ePrint Archive_ (2016). https://eprint.iacr.org/2016/810
[6] https://www.newae.com/chipwhisperer

Using LLMs to Provide Real-time Phishing Guidance to Non-German Speakers

Contact: Tarini Saka (Email: tarini.saka@rub.de)

This study investigates how large language models (LLMs) can assist non-German speakers in detecting and understanding phishing emails written in German. By combining machine translation, context-aware analysis, and email threat detection, our system provides real-time guidance, highlighting suspicious elements and offering actionable insights. The aim is to bridge the language barrier, enhancing phishing awareness and cybersecurity for international users in Germany. A user study will assess its effectiveness in improving security decision-making and reducing phishing susceptibility.

Examining Prompt Injection Attacks against AI-Powered Email Assistants

Contact: Tarini Saka (Email: tarini.saka@rub.de)

AI-powered email assistants are rapidly transforming digital communication, offering smart summaries, automated replies, and workflow enhancements. However, as these tools rely increasingly on large language models (LLMs), they become vulnerable to a new class of attacks known as prompt injection —where malicious actors embed instructions in email content to manipulate AI behaviour. This study will investigate the feasibility and impact of prompt injection attacks in AI email assistants. The research will include designing and executing controlled experiments on existing AI tools to assess their susceptibility to crafted prompts embedded in email threads. It will also explore how these attacks could lead to data leakage, misinformation, or unauthorized actions. The outcome aims to provide both a threat model and practical recommendations to improve the security posture of AI-enhanced email systems.

[1] Weiss, Roy, Daniel Ayzenshteyn, and Yisroel Mirsky. „What Was Your Prompt? A Remote Keylogging Attack on {AI} Assistants.“ 33rd USENIX Security Symposium (USENIX Security 24). 2024.

[2] Hoang, The-Anh, et al. „Exploring Prompt Injection: Methodologies and Risks with an Interactive Chatbot Demonstration.“ International Symposium on Information and Communication Technology. Singapore: Springer Nature Singapore, 2024.

[3] Debenedetti, Edoardo, et al. „AgentDojo: A Dynamic Environment to Evaluate Prompt Injection Attacks and Defenses for LLM Agents.“ The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 2024.

[4] IBM. (2025, April 17). What is a prompt injection attack?. IBM. https://www.ibm.com/think/topics/prompt-injection

Auditing AI-based Assistive technology (Speech-to-text) for security and privacy concerns

Contact: Tarini Saka (Email: tarini.saka@rub.de)

This study examines the security and privacy risks associated with AI-driven speech-to-text assistive technologies. While these tools enhance communication for the Deaf and hard-of-hearing (DHH) community, they also introduce novel vulnerabilities and risks. The goal is to audit popular speech-to-text systems, identifying potential risks related to user privacy, model biases, and misuse by malicious actors. Through empirical analysis and security testing, we aim to assess their robustness and propose mitigation strategies to enhance privacy protection and secure deployment.

 

[1] Surani, Aishwarya, et al. „Security and privacy of digital mental health: An analysis of web services and mobile applications.“ IFIP Annual Conference on Data and Applications Security and Privacy. Cham: Springer Nature Switzerland, 2023.
[2] Feal, Álvaro, et al. „Angel or devil? a privacy study of mobile parental control apps.“ _Proceedings on Privacy Enhancing Technologies_ (2020).
[3] Gruber, Moritz, et al. „“We may share the number of diaper changes”: A Privacy and Security Analysis of Mobile Child Care Applications.“ _Proceedings on Privacy Enhancing Technologies_ (2022).
[4] Elahi, Haroon, et al. „On the characterization and risk assessment of ai-powered mobile cloud applications.“ _Computer Standards & Interfaces_ 78 (2021): 103538.
[5] AI Security: Risks, Frameworks, and Best Practices
[6] AI Risks in Mobile Apps: How to Protect Your Data and Stay Compliant

Evaluating the Limitations of Automated Accessibility Testing for Detecting Dark Patterns

Automated accessibility tools like Axe Android, A11y Ally, or Accessibility Scanner by Google are widely used. But are they able to flag deceptive interfaces? This thesis investigates the capabilities and limitations of current automated tools in detecting dark patterns that affect visually impaired users. The goal would be to benchmark several tools using a test set of UI designs and compare results to expert/manual evaluations. Other possible goals could be:

  • A defined test suite with common dark pattern examples.

  • A comparison table showing tool performance and coverage gaps.

  • Suggested improvements or tool combinations.

[1] https://dl.acm.org/doi/abs/10.1145/3424953.3426633
[2]  https://dl.acm.org/doi/abs/10.1145/3586183.3606783

[3] https://www.digitala11y.com/free-mobile-accessibility-testing-tools/

Exploring User Interfaces to Support Secure Chatbot Interactions

This study explores HCD (Human-Centered Design) to propose interventions that support secure human-chatbot interactions. With an increasing lack of awareness of privacy mechanisms in chatbots and risks in human-chatbot interactions, there is a need for users to have more control and transparency when it comes to protecting user privacy. The goal would be to explore design and development of visual prototypes in a conversational UI context that could support users with improving awareness and agency of privacy mechanisms. This also aims to bridge a gap between Human-Centered Design and privacy mechanisms in the context of chatbot interactions. The study could also explore:

  • benchmarking comprehensive relevant solutions that support secure chatbot interactions
  • exploring possible features that could support users in decision making for user privacy                                                   
[1] Towards Human-Centered Design of AI ServiceChatbots: Defining the Building Blocks

[2] PriBots: Conversational Privacy with Chatbots

[3] Understanding Users’ Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms

[4] UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library

ANALYZING AND ENHANCING TRAFFIC WATERMARKING ATTACKS AGAINST ANONYMITY SYSTEMS

Traffic watermarking is a network traffic analysis technique can be used to evaluate the resistance of anonymity systems such as Tor to deanonymization attacks. A watermark is a unique pattern embedded in encrypted traffic flows to link traffic at different points in the network. 

While watermarking attacks are known to be highly effective, however, assessing their impact on anonymity systems remains a significant challenge for system developers and researchers. This is due to the variable nature of watermarking schemes including various network conditions, different anonymity systems, configurations and the introduction of new features. 

Your goal would be to explore and evaluate existing watermarking techniques, come up with new ideas, and show how the techniques can be effectively deployed in the network.

[1] RAINBOW: A Robust and Invisible Non-Blind Watermark for Network Flows

[2] SWIRL: A Scalable Watermark to Detect Correlated Network Flows

[3] Inflow: Inverse Network Flow Watermarking for Detecting Hidden Servers

[4] Novel and Practical SDN-based Traceback Technique for Malicious Traffic over Anonymous Networks

[5] FINN: Fingerprinting Network Flows using Neural Networks

FROM REGULATION TO CODE: ENFORCING THE GDPR VIA COMPILE-TIME CHECKS AND LIVE MONITORING

Overview

Compliance to the General Data Protection Regulation (GDPR) [1] is often seen as a legal or organizational task, but many GDPR principles can be translated into technical policies and checks. For example, the GDPR mandates “data protection by design and by default” (Article 25), meaning systems should integrate privacy safeguards like data minimization from the outset. Instead of treating this as a vague legal guideline, we want to enforce it in software. This includes analyzing source code to ensure it only collects the minimum necessary personal data, or deploying a runtime monitor that blocks any unauthorized use of personal data. Recent research shows that such approaches are feasible: static program analyses can detect privacy leaks and policy violations before deployment [2], while runtime enforcement tools can prevent non-compliant actions on-the-fly [3,4]. This thesis will build on these insights to bridge the gap between legal text and technical implementation.

 

Thesis Scope and Objectives

In this thesis, you will systematically examine the GDPR text to pinpoint which legal provisions can be technically enforced. The work is twofold:

  1. Identify and categorize GDPR rules that can be enforced by technology, distinguishing static analysis from runtime enforcement.
  2. Design and implement prototypes to demonstrate enforcement via static analysis and runtime monitoring.

Key objectives include:

  • Review GDPR Requirements Analyze the GDPR to find provisions that are technically enforceable – e.g., data minimization, purpose limitation, consent management, security measures, or data retention.
  • Categorize Enforcement Modes: Determine for each requirement whether compliance can be checked at development time through static code analysis or requires dynamic monitoring at runtime.
  • Develop Proof-of-Concept Tools: Implement static analysis checks and/or runtime enforcement mechanisms.
  • Evaluate Effectiveness: Test your tools on sample applications to validate their ability to catch violations and enforce compliance.

 

References

  1. European Parliament and Council. “Regulation (EU) 2016/679 (General Data Protection Regulation).” Official Journal L119/1, 27 April 2016.
  2. Ferrara, Pietro & Spoto, Fausto. (2018). „Static Analysis for GDPR Compliance.“ 
  3. Hublet et al. „Enforcing the GDPR“
  4. Klein et al. „General Data Protection Runtime: Enforcing Transparent GDPR Compliance for Existing Applications“