Thesis Topics
Beyond the theses offered on this page, you can always check out our research projects and contact the people working on topics that interest you. Feel free to propose your own topic, or ask whether there is a hot topic that isn't listed below yet! Just don't forget to include some information on your background and interests.
Collection of Theses
In the following, you can find a couple of overview pages listing the topics and research interests of people looking for students (further below, you will find individual topics not covered by these):
Low-Cost Fault Injection Techniques: A Practical Approach to Hardware Security Testing
Contact: Philipp Mackensen (Email: Philipp.Mackensen@rub.de)
With the availability of affordable, ready-to-use hardware such as the Raspberry Pi Pico [1], hardware attacks on embedded devices have become increasingly accessible, making it possible to conduct security assessments at low cost [2, 3, 4]. One particularly interesting method is the use of a crowbar as a glitching mechanism [5]: a MOSFET circuit briefly shorts the target's supply rail to induce voltage glitches, an approach used in the ChipWhisperer [6]. Our research interest is to investigate the capabilities of a Raspberry Pi Pico combined with a crowbar circuit for performing glitching attacks on embedded devices. To do this, we want to test different types of MOSFETs and understand the extent to which these attacks can be carried out at low cost.
The scope of this thesis is as follows:
- Conduct a literature review on existing fault injection methods using crowbar-glitches and low-cost hardware.
- Select appropriate MOSFETs for implementing crowbar-glitches capable of manipulating bus signals.
- Design and assemble a basic hardware setup to perform crowbar-glitch fault injections.
- Experiment with altering data transmission by inducing glitches.
- Document the experimental procedures and initial findings.
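The experimental core of such a setup is a parameter sweep over trigger offsets and pulse widths. The sketch below is a minimal host-side illustration: `attempt_glitch` is a hypothetical callback (in a real setup it would arm the Pico, fire one crowbar pulse with the given parameters, and check the target's response); here it is replaced by a stub so the sketch runs standalone.

```python
import itertools

def sweep_glitch_parameters(offsets, widths, attempt_glitch):
    """Try every (offset, width) combination and collect the successful ones.

    attempt_glitch(offset, width) must return True when the target
    showed a fault (e.g. a corrupted response or a skipped check).
    """
    hits = []
    for offset, width in itertools.product(offsets, widths):
        if attempt_glitch(offset, width):
            hits.append((offset, width))
    return hits

# Stand-in for real hardware: pretend the target only faults inside a
# narrow window of trigger offsets and pulse widths (units illustrative).
def fake_target(offset, width):
    return 100 <= offset <= 110 and 8 <= width <= 12

hits = sweep_glitch_parameters(range(90, 121, 5), range(4, 17, 4), fake_target)
```

In practice, the interesting part of the thesis is exactly what the stub hides: how reliably a given MOSFET reproduces the fault window found by such a sweep.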
References:
[1] https://www.raspberrypi.com/documentation/microcontrollers/pico-series.html
[2] O'Flynn, Colin. "PicoEMP: A Low-Cost EMFI Platform Compared to BBI and Voltage Fault Injection using TDC & External VCC Measurements." _2023 Workshop on Fault Detection and Tolerance in Cryptography (FDTC)_. IEEE, 2023. https://ieeexplore.ieee.org/abstract/document/10495121
[3] https://github.com/stacksmashing/pico-tpmsniffer
[4] https://hackaday.io/project/196357-picoglitcher-v2
[5] O'Flynn, Colin. "Fault injection using crowbars on embedded systems." _Cryptology ePrint Archive_ (2016). https://eprint.iacr.org/2016/810
[6] https://www.newae.com/chipwhisperer
Using LLMs to Provide Real-time Phishing Guidance to Non-German Speakers
Contact: Tarini Saka (Email: tarini.saka@rub.de)
This study investigates how large language models (LLMs) can assist non-German speakers in detecting and understanding phishing emails written in German. By combining machine translation, context-aware analysis, and email threat detection, our system provides real-time guidance, highlighting suspicious elements and offering actionable insights. The aim is to bridge the language barrier, enhancing phishing awareness and cybersecurity for international users in Germany. A user study will assess its effectiveness in improving security decision-making and reducing phishing susceptibility.
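The translate-then-analyze pipeline described above can be sketched as a small, testable function. All names here are illustrative: in the envisioned system, `translate` and `find_suspicious` would be LLM-backed calls, while in this sketch they are injected as stubs.

```python
def phishing_guidance(email_text, translate, find_suspicious):
    """Translate a German email and annotate suspicious elements.

    translate(text) -> English text; find_suspicious(text) -> list of
    (snippet, explanation) pairs. Both would be LLM-backed in the real
    system; they are passed in here so the pipeline stays testable.
    """
    english = translate(email_text)
    findings = find_suspicious(english)
    return {
        "translation": english,
        "warnings": [f"'{snippet}': {reason}" for snippet, reason in findings],
        "verdict": "suspicious" if findings else "no obvious red flags",
    }

# Stubs standing in for the LLM calls.
report = phishing_guidance(
    "Ihr Konto wurde gesperrt. Klicken Sie hier sofort!",
    translate=lambda t: "Your account has been locked. Click here immediately!",
    find_suspicious=lambda t: [("Click here immediately", "urgency pressure")],
)
```

Keeping the LLM calls behind plain function interfaces like this also makes the planned user study easier: the same guidance UI can be driven by different models or by scripted responses.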
Auditing AI-based Assistive technology (Speech-to-text) for security and privacy concerns
Contact: Tarini Saka (Email: tarini.saka@rub.de)
This study examines the security and privacy risks associated with AI-driven speech-to-text assistive technologies. While these tools enhance communication for the Deaf and hard-of-hearing (DHH) community, they also introduce novel vulnerabilities and risks. The goal is to audit popular speech-to-text systems, identifying potential risks related to user privacy, model biases, and misuse by malicious actors. Through empirical analysis and security testing, we aim to assess their robustness and propose mitigation strategies to enhance privacy protection and secure deployment.
References:
[1] Surani, Aishwarya, et al. "Security and privacy of digital mental health: An analysis of web services and mobile applications." _IFIP Annual Conference on Data and Applications Security and Privacy_. Cham: Springer Nature Switzerland, 2023.
[2] Feal, Álvaro, et al. "Angel or devil? A privacy study of mobile parental control apps." _Proceedings on Privacy Enhancing Technologies_ (2020).
[3] Gruber, Moritz, et al. ""We may share the number of diaper changes": A Privacy and Security Analysis of Mobile Child Care Applications." _Proceedings on Privacy Enhancing Technologies_ (2022).
[4] Elahi, Haroon, et al. "On the characterization and risk assessment of AI-powered mobile cloud applications." _Computer Standards & Interfaces_ 78 (2021): 103538.
[5] AI Security: Risks, Frameworks, and Best Practices
[6] AI Risks in Mobile Apps: How to Protect Your Data and Stay Compliant
Evaluating the Limitations of Automated Accessibility Testing for Detecting Dark Patterns
Contact: Agata Stanczyk (Email: agata.stanczyk@rub.de)
Automated accessibility tools like Axe Android, A11y Ally, or Accessibility Scanner by Google are widely used. But are they able to flag deceptive interfaces? This thesis investigates the capabilities and limitations of current automated tools in detecting dark patterns that affect visually impaired users. The goal would be to benchmark several tools using a test set of UI designs and compare results to expert/manual evaluations. Other possible goals could be:
- A defined test suite with common dark pattern examples.
- A comparison table showing tool performance and coverage gaps.
- Suggested improvements or tool combinations.
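The comparison against expert evaluations boils down to standard precision/recall scoring per tool. A minimal sketch, with hypothetical element identifiers standing in for real UI findings:

```python
def benchmark_tool(tool_flags, expert_flags):
    """Score one tool's flagged UI elements against expert labels.

    Both arguments are sets of element identifiers judged to be dark
    patterns; precision/recall quantify the tool's coverage gaps.
    """
    true_positives = len(tool_flags & expert_flags)
    precision = true_positives / len(tool_flags) if tool_flags else 0.0
    recall = true_positives / len(expert_flags) if expert_flags else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "missed": sorted(expert_flags - tool_flags),  # coverage gaps
    }

# Hypothetical results for a single screen of the test suite.
expert = {"confirmshaming_dialog", "hidden_unsubscribe", "preselected_optin"}
tool = {"hidden_unsubscribe", "low_contrast_label"}
report = benchmark_tool(tool, expert)
```

Aggregating such per-screen reports over the whole test suite directly yields the comparison table mentioned above.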
References:
[2] https://dl.acm.org/doi/abs/10.1145/3586183.3606783
[3] https://www.digitala11y.com/free-mobile-accessibility-testing-tools/
Adaptive AI Exploration in Selective Memory Contexts
Contact: Ramya Kandula (Email: ramya.kandula@rub.de)
This study examines transparency and trust in user-controllable adaptations of AI. With a growing lack of awareness of privacy mechanisms in AI models and the risks in human-AI interactions, users need more control over guiding AI behavior. The goal is to explore the development and evaluation of models that let users label specific interactions to be remembered or forgotten. This also aims to bridge the gap between HCI and privacy mechanisms in the context of AI interactions. The study could also explore:
- Benchmarking comprehensive frameworks that support adaptive AI interactions
- Analyzing user sentiments in adaptive AI scenarios
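The user-facing labeling idea can be prototyped as a tiny memory store. This is a toy sketch, not an existing system, and the class and method names are assumptions made for illustration:

```python
class SelectiveMemory:
    """Toy store of past interactions that users can label keep/forget."""

    def __init__(self):
        self._items = {}

    def record(self, key, content):
        # Every interaction starts out remembered.
        self._items[key] = content

    def forget(self, key):
        # Honour a user's "forget" label by dropping the content entirely.
        self._items.pop(key, None)

    def context(self):
        # Only remembered interactions may condition future AI behaviour.
        return list(self._items.values())

memory = SelectiveMemory()
memory.record("chat-1", "user mentioned their home address")
memory.record("chat-2", "user asked about train schedules")
memory.forget("chat-1")  # user labels the sensitive exchange as forgotten
```

The hard research questions sit behind this interface: whether forgetting truly removes the information from a trained model (cf. reference [2] below), and how to communicate that to users.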
References:
[1] From explainable to interactive AI: A literature review on current trends in human-AI interaction
[2] Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
[3] Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design
[4] Crafting Human-AI Interaction: A Rhetorical Approach to Adaptive Interaction in Conversational Agents
Analyzing and Enhancing Traffic Watermarking Attacks Against Anonymity Systems
Contact: Dimitri Mankowski (Email: dimitri.mankowski@rub.de)
Traffic watermarking is a network traffic analysis technique that can be used to evaluate the resistance of anonymity systems such as Tor to deanonymization attacks. A watermark is a unique pattern embedded into encrypted traffic flows in order to link traffic observed at different points in the network.
While watermarking attacks are known to be highly effective, assessing their impact on anonymity systems remains a significant challenge for system developers and researchers. This is due to the variability of watermarking schemes as well as differing network conditions, anonymity systems, configurations, and newly introduced features.
Your goal would be to explore and evaluate existing watermarking techniques, come up with new ideas, and show how the techniques can be effectively deployed in the network.
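To make the idea concrete, here is a toy, noise-free sketch in the spirit of interval-based schemes such as [1] and [2]: the flow's time axis is split into fixed slots, packets in each slot are squeezed into its first or second half to encode one bit, and the detector recovers bits from the mean packet offset per slot. Slot size, delays, and the example flow are all illustrative; real schemes must survive jitter, packet loss, and active defenses.

```python
def embed(timestamps, bits, slot=0.1):
    """Encode one bit per time slot by squeezing that slot's packets
    into its first half (bit 0) or second half (bit 1)."""
    out = []
    for t in timestamps:
        i = int(t // slot)
        if i >= len(bits):
            out.append(t)  # beyond the watermark: leave untouched
            continue
        offset = t - i * slot
        out.append(i * slot + offset / 2 + (slot / 2 if bits[i] else 0.0))
    return out

def detect(timestamps, n_bits, slot=0.1):
    """Recover bits from the mean packet offset inside each slot."""
    bits = []
    for i in range(n_bits):
        offs = [t - i * slot for t in timestamps if i * slot <= t < (i + 1) * slot]
        if not offs:
            bits.append(None)  # empty slot: bit unreadable
        else:
            bits.append(1 if sum(offs) / len(offs) > slot / 2 else 0)
    return bits

# Packet timestamps of a small flow, watermarked with the bits 1 0 1 0.
flow = [0.01, 0.03, 0.07, 0.12, 0.15, 0.21, 0.26, 0.33, 0.38]
marked = embed(flow, [1, 0, 1, 0])
```

Evaluating how such a detector degrades under realistic network conditions, and how to harden it, is exactly the kind of question this thesis would address.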
References:
[1] RAINBOW: A Robust and Invisible Non-Blind Watermark for Network Flows
[2] SWIRL: A Scalable Watermark to Detect Correlated Network Flows
[3] Inflow: Inverse Network Flow Watermarking for Detecting Hidden Servers
[4] Novel and Practical SDN-based Traceback Technique for Malicious Traffic over Anonymous Networks
[5] FINN: Fingerprinting Network Flows using Neural Networks