Dr. Tarini Saka, PhD
Chair for Security & Privacy of Ubiquitous Systems
Address:Ruhr University Bochum
Faculty of Computer Science
Security & Privacy of Ubiquitous Systems
Universitätsstr. 150
D-44801 BochumRoom: MC 4.130Email: tarini.saka at ruhr-uni-bochum.dePersonal Website: https://tarinisaka.github.io/

Research
My research focuses on the intersection of artificial intelligence, user behavior, and security. AI technologies and tools are increasingly used in everyday tasks, but they pose new risks that must be considered during their use and integration. There is an urgent need to study current practices for deploying AI technology, understand the resources available to users for safe AI usage, and educate both users and organizations on safer practices. In my research, I examine this interaction from a security and privacy perspective.
During my PhD, I explored organizational phishing mitigation, leveraging AI and natural language processing (NLP) to develop tools for threat detection, attack mitigation, and user guidance. By integrating human-computer interaction (HCI) principles, I designed user-centric solutions to enhance security practices. I am always excited to work in the phishing domain, so feel free to contact me if you’re interested in this topic.
thesis supervision
If you are interested in writing a Bachelor’s or Master’s thesis in my research area, feel free to contact me via email. Please consult our full list of thesis topics .
Using LLMs to Provide Real-time Phishing Guidance to Non-German Speakers
See Details
This study investigates how large language models (LLMs) can assist non-German speakers in detecting and understanding phishing emails written in German. By combining machine translation, context-aware analysis, and email threat detection, our system provides real-time guidance, highlighting suspicious elements and offering actionable insights. The aim is to bridge the language barrier, enhancing phishing awareness and cybersecurity for international users in Germany. A user study will assess its effectiveness in improving security decision-making and reducing phishing susceptibility.
Examining Prompt Injection Attacks against AI-Powered Email Assistants
See Details
AI-powered email assistants are rapidly transforming digital communication, offering smart summaries, automated replies, and workflow enhancements. However, as these tools rely increasingly on large language models (LLMs), they become vulnerable to a new class of attacks known as prompt injection —where malicious actors embed instructions in email content to manipulate AI behaviour. This study will investigate the feasibility and impact of prompt injection attacks in AI email assistants. The research will include designing and executing controlled experiments on existing AI tools to assess their susceptibility to crafted prompts embedded in email threads. It will also explore how these attacks could lead to data leakage, misinformation, or unauthorized actions. The outcome aims to provide both a threat model and practical recommendations to improve the security posture of AI-enhanced email systems.
Auditing AI-based Assistive technology (Speech-to-text) for security and privacy concerns
See Details
This study examines the security and privacy risks associated with AI-driven speech-to-text assistive technologies. While these tools enhance communication for the Deaf and hard-of-hearing (DHH) community, they also introduce novel vulnerabilities and risks. The goal is to audit popular speech-to-text systems, identifying potential risks related to user privacy, model biases, and misuse by malicious actors. Through empirical analysis and security testing, we aim to assess their robustness and propose mitigation strategies to enhance privacy protection and secure deployment.
[1] Surani, Aishwarya, et al. „Security and privacy of digital mental health: An analysis of web services and mobile applications.“ IFIP Annual Conference on Data and Applications Security and Privacy. Cham: Springer Nature Switzerland, 2023.
[2] Feal, Álvaro, et al. „Angel or devil? a privacy study of mobile parental control apps.“ _Proceedings on Privacy Enhancing Technologies_ (2020).
[3] Gruber, Moritz, et al. „“We may share the number of diaper changes”: A Privacy and Security Analysis of Mobile Child Care Applications.“ _Proceedings on Privacy Enhancing Technologies_ (2022).
[4] Elahi, Haroon, et al. „On the characterization and risk assessment of ai-powered mobile cloud applications.“ _Computer Standards & Interfaces_ 78 (2021): 103538.
[5] AI Security: Risks, Frameworks, and Best Practices
[6] AI Risks in Mobile Apps: How to Protect Your Data and Stay Compliant