AI Lecture Series

Artificial Intelligence (AI) has become a ubiquitous and fast-moving field of research. With this series, we strive to expose the audience to the most exciting advances in AI and to foster collaborations among students, researchers, and institutions. Reach out to ai-lecture-series@rub.de if you want to become a speaker or if you have questions or feedback. Make sure to subscribe to our Calendar to always stay up to date.

Attending

The format is hybrid: via Zoom (join) and on-site in the open space (ground floor) of the MC building on campus. Each session consists of a 45-minute presentation followed by 15 minutes of Q&A and is open to anyone. If you are new to the campus, check out our Campus Map and Directions.

Speakers

🟢 indicates an upcoming event.

Andreas Schmidt, "Hey Siri, turn up the temperature!" - How to warm your Planet with AI, 22.10.2025 10:00 CEST

Title: "Hey Siri, turn up the temperature!" – How to warm your Planet with AI

Slides: PDF

Hosted by: Prof. Hönig

Room: Open Space MC, Zoom (join) 

Abstract:
It is the year 2025: ten years since the Paris Agreement, and over fifty since The Limits to Growth. Today, the headlines are dominated by the latest and greatest in artificial intelligence (AI), but also by doomsday predictions of AI taking over everyone's jobs. In this talk, I take a critical look at the environmental implications of AI technology, which in many instances go beyond what we know from classical, non-AI computing. After a general look at computing's environmental impact, we take a closer look at the current sustainability of AI, investigating both embodied and operational resource demand and emissions. Afterwards, various ways to make AI more sustainable are presented. Finally, a short detour covers "sustainable data centers" as the physical hosts for AI training and serving.

Vitae:
Dr. Andreas Schmidt studied Computer Science at Saarland University in Saarbrücken, Germany. He did his PhD at the Telecommunications Lab at Saarland University with Thorsten Herfet, collaborating closely with the Distributed Systems and Operating Systems Chair of Wolfgang Schröder-Preikschat at FAU Erlangen-Nürnberg. Most of this work was part of Energy-, Latency- and Resilience-aware Networking (e.LARN), a project within the German Research Foundation's (DFG) priority programme 1914 "Cyber-Physical Networking". Afterwards, he joined the Fraunhofer Institute for Experimental Software Engineering (IESE) in Kaiserslautern as a postdoctoral researcher. Since 2022, he has worked at the Dependable Systems and Software Chair of Holger Hermanns as a group leader, while pursuing his Habilitation.

Title: Uncertainty-Aware Foundation Models for Efficient Decision-Making in Science

Slides: PDF

Hosted by: Prof. Fischer

Room: Open Space MC, Zoom (join) 

Abstract:
Probabilistic reasoning is a cornerstone of AI, enabling agents to make decisions under uncertainty. This is particularly critical in scientific discovery, where data is scarce and experiments are resource-intensive. For instance, in materials discovery, an AI agent can prioritize experiments based on confidence in a molecule's desirability, reducing wasteful experimentation and computation, and thus sustainably accelerating progress. However, despite the predictive power of large-scale models in science, such as foundation models and deep neural networks, their lack of reliable uncertainty estimates limits their utility in the data-scarce, resource-intensive settings typical of scientific discovery. This leads to inefficient progress, which can ultimately cause undesirable environmental and societal impacts. In this talk, I will present two of our recent works on accelerating decision-making through foundation models. I will show that these theoretically grounded, universal foundation models can improve the prediction speed of Bayesian surrogate models and ODE solvers several-fold. Ultimately, these lead to resource-effective high-throughput Bayesian optimization and molecular dynamics simulation.

Vitae:
Agustinus Kristiadi is an Assistant Professor in the Department of Computer Science at Western University and a Faculty Affiliate at the Vector Institute. Previously, he was a Distinguished Postdoctoral Fellow at the Vector Institute and obtained his PhD from the University of Tuebingen. His research interests are in uncertainty quantification in machine learning, decision-making under uncertainty, and their applications in broader scientific fields.

>>> AI Lecture Series Meets CASA Distinguished Lecture <<<

Title: When Programming Meets Probability Theory

Hosted by: Prof. Jansen

Room: Open Space MC, the in-person talk is also streamed via Zoom (join) 

Abstract:
Probabilistic programs encode randomized algorithms, robot controllers, AI learning components, training data for neural networks, security mechanisms, and much more. They are, however, hard to grasp, not only by humans but also by computers: checking elementary properties such as termination is "more undecidable" than for ordinary programs.
I will present what probabilistic programs are, show some of their applications, and indicate the state of the art in the semi-automated verification of such programs.

Vitae:
Joost-Pieter Katoen is a distinguished professor at RWTH Aachen University. He is vice-rector for teaching and studies and leads the Software Modeling and Verification (MOVES) group in the CS department. He is affiliated part-time with the University of Twente. His main research interests are model checking, concurrency theory, program analysis, probabilistic programming, and formal semantics. He has co-authored over 350 papers and is known for his joint book with Christel Baier, "Principles of Model Checking", a bestseller in the field.
He received numerous best paper awards and other honors, most recently the JCL Award 2023 in Dependable Computing. He is a member of the Academia Europaea, the Royal Holland Society of Sciences and Humanities (KHMW) and the German Academy of Sciences „Leopoldina“ (2024). He received an honorary doctorate from Aalborg University (2017) and is an ACM Fellow (2020).

Info:
This event is a joint collaboration with CASA Distinguished Lectures. CASA (Cyber Security in the Age of Large-Scale Adversaries) is a Cluster of Excellence at Ruhr University Bochum that unites leading scientists across disciplines to develop holistic, long-term solutions against powerful, large-scale cyber threats.

Title: Learning Statistical Classes

Hosted by: Prof. Zeume

Room: MC 1.84, Zoom (join) 

Abstract:
Given a hypothesis class, say of real-valued functions, there are a number of derived classes one can form, based on "random objects from the class". For example, we can randomize the parameters that specify individual functions in the class: given an input element, we return the expected value over functions in the class, integrating over a distribution on parameters. Conversely, we can randomize the inputs, obtaining a function that maps a parameter to the expected value across all inputs. The learning problem changes: instead of learning from samples consisting of outputs of an unknown function in the class, we have examples consisting of means or probabilities over all inputs or all parameters. We discuss "transfer results" saying that from learnability of a base hypothesis class, one can derive learnability of the associated randomized classes. There are variations of the result for Probably Approximately Correct (PAC) learning as well as for online learning. And there is a distinction between the phenomena one sees for agnostic learning and for realizable learning. The problems studied here were first posed by database researchers, motivated by learning database statistics from examples. But they are closely related to an older line of work in model theory, which concerns "randomizing a structure".
In this talk I will try not to assume a serious machine learning theory or logic background; I will focus on explaining the definitions by example, overviewing our results, and explaining some of these connections to other fields informally.
This is joint work with Aaron Anderson.

Vitae:
Michael Benedikt is Professor of Computer Science at Oxford University and a fellow of University College Oxford. He came to Oxford after a decade in US industrial research laboratories, including positions as Distinguished Member of Technical Staff at Bell Laboratories and visiting researcher at Yahoo! Labs. He has worked extensively in mathematical logic, finite model theory, verification, database theory, and database systems, and has served as chair of the ACM’s main database theory conference, Principles of Database Systems. The current focus of his research is Web data management, with recent projects involving the interaction of data management, logic, and machine learning.

Reference:
Anderson, A., & Benedikt, M. (2025). From learnable objects to learnable random objects. arXiv preprint arXiv:2504.00847.

Title: Graph Neural Networks and Arithmetic Circuits

Hosted by: Prof. Zeume

Room: MC 1.54, Zoom (join) 

Abstract:
We characterize the computational power of neural networks that follow the graph neural network (GNN) architecture, not restricted to aggregate-combine GNNs or other particular types. We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over real numbers. In our results the activation function of the network becomes a gate type in the circuit. Our result holds for families of constant depth circuits and networks, both uniformly and non-uniformly, for all common activation functions.

Vitae:
Laura Strieker is a PhD student in the Theoretical Computer Science group at Leibniz University Hannover (LUH), working under the supervision of Professor Heribert Vollmer. She previously obtained her Bachelor's and Master's degrees in Computer Science from LUH. Her research interests lie at the intersection of Machine Learning and Computational Complexity, especially focusing on models that perform real-valued computations. Through her research, she aims to contribute to the understanding of how neural models can be optimized and understood from a complexity-theoretic perspective, thereby bridging the gap between practical applications and theoretical foundations.

Reference:
Barlag, T., Holzapfel, V., Strieker, L., Virtema, J., & Vollmer, H. (2024). Graph neural networks and arithmetic circuits. Advances in Neural Information Processing Systems, 37, 5410-5428.

Title: Generative strategies to empower physics-based wave propagation with deep learning: applications to earthquake engineering

Abstract:
In this work, we provide a quantitative assessment of how much earthquake ground-motion simulation can benefit from deep-learning generative techniques blended with traditional numerical simulations. Two main frameworks are addressed: conditional generative approaches and neural operators. On the one hand, a diffusion model is employed in a time-series super-resolution context. The main task is to improve the outcome of 3D fault-to-site earthquake numerical simulations (accurate up to 5 Hz [1, 2]) at higher frequencies (5-30 Hz) by learning the low-to-high frequency mapping from seismograms recorded worldwide [3, 4]. The generation is conditioned on the numerical simulation's synthetic time histories, enabling fast inference for site-specific probabilistic hazard assessment. On the other hand, the successful use of neural operators to entirely replace cumbersome 3D elastic wave propagation numerical simulations is described [5, 6], showing how this approach can pave the way to real-time large-scale digital twins of earthquake-prone regions [6, 7].

Vitae:
Filippo Gatti has been Maître de Conférences (equivalent to Assistant Professor) at CentraleSupélec, France, since 2019, affiliated with the Laboratoire de Mécanique des Sols, Structures et Matériaux (MSSMat) until 2021 and with the Laboratoire de Mécanique Paris-Saclay (LMPS) since 2022. He holds a PhD in Civil Engineering from Université Paris-Saclay and Politecnico di Milano (2017), as well as an MSc (2014) and a BEng (2011) from Politecnico di Milano. He has been a JSPS postdoctoral fellow at the Disaster Prevention Research Institute at Kyoto University (2018) and a visiting researcher at the Earthquake Research Institute at The University of Tokyo (2021). Since 2022, he has been in charge of the Research Operation Jumeaux Hybrides: Simulation, Apprentissage within the LMPS team OMEIR, and serves as LMPS representative for research valorization strategies. Filippo Gatti's research interests cover the physics-based simulation of wave propagation phenomena, focusing on earthquake engineering and ultrasound wave propagation for non-destructive testing. He has co-developed the high-performance software SEM3D since 2014 and has maintained its open-source release since 2023.

References:
[1] Touhami, S.; Gatti, F.; Lopez-Caballero, F.; Cottereau, R.; de Abreu Corrêa, L.; Aubry, L.; Clouteau, D. SEM3D: A 3D High-Fidelity Numerical Earthquake Simulator for Broadband (0–10 Hz) Seismic Response Prediction at a Regional Scale. Geosciences 2022, 12 (3), 112. https://doi.org/10.3390/geosciences12030112.
[2] Gatti, F.; Carvalho Paludo, L. D.; Svay, A.; Lopez-Caballero, F.; Cottereau, R.; Clouteau, D. Investigation of the Earthquake Ground Motion Coherence in Heterogeneous Non-Linear Soil Deposits. Procedia Engineering 2017, 199, 2354–2359. https://doi.org/10.1016/j.proeng.2017.09.232.
[3] Gatti, F.; Clouteau, D. Towards Blending Physics-Based Numerical Simulations and Seismic Databases Using Generative Adversarial Network. Computer Methods in Applied Mechanics and Engineering 2020, 372, 113421. https://doi.org/10.1016/j.cma.2020.113421.
[4] Gabrielidis, H.; Gatti, F.; Vialle, S.; Jacquet, G. Génération conditionnelle et inconditionnelle de signaux sismiques à l’aide de modèles de diffusion. In 16ème Colloque National en Calcul des Structures; Computational Structural Mechanics Association: Giens, 2024; pp 1-9. https://hal.science/hal-04531795.
[5] Lehmann, F.; Gatti, F.; Bertin, M.; Clouteau, D. 3D Elastic Wave Propagation with a Factorized Fourier Neural Operator (F-FNO). Computer Methods in Applied Mechanics and Engineering 2024, 420, 116718. https://doi.org/10.1016/j.cma.2023.116718.
[6] Lehmann, F.; Gatti, F.; Clouteau, D. Multiple-Input Fourier Neural Operator (MIFNO) for Source-Dependent 3D Elastodynamics. Journal of Computational Physics 2025, 527, 113813. https://doi.org/10.1016/j.jcp.2025.113813.
[7] Lehmann, F.; Gatti, F.; Bertin, M.; Clouteau, D. Machine Learning Opportunities to Conduct High-Fidelity Earthquake Simulations in Multi-Scale Heterogeneous Geology. Front. Earth Sci. 2022, 10, 1029160. https://doi.org/10.3389/feart.2022.1029160.

Title: Deciding What to Measure: Evaluation of Active Feature Acquisition Methods

Abstract:
Medical professionals often face tough choices about which tests to order to reach a diagnosis. Tests like MRI scans or biopsies can be expensive, carry risks, and take valuable time, but they also provide critical information. AI systems, known as Active Feature Acquisition (AFA) methods, are being developed to assist in this process by recommending which tests to order. But before such systems can be trusted or improved, reliable methods are needed to evaluate them. This poses a significant challenge, as it requires answering counterfactual questions such as: “Would better diagnoses have been made—or unnecessary or harmful tests avoided—if the AI’s recommendations had been followed?” This talk explores how such questions can be addressed using ideas from causal inference, highlighting the intersection of missing data theory and reinforcement learning.

Vitae: 
Henrik von Kleist is a PhD candidate at Helmholtz Munich and the Technical University of Munich (TUM). He studied Engineering Science (B.Sc.) and Data Engineering & Analytics (M.Sc.) at TUM, and has conducted research at École Polytechnique, Harvard University, and Johns Hopkins University. His doctoral research focuses on improving medical decision-making by developing and evaluating AI systems that recommend which clinical information to acquire. His work integrates methods from causal inference, reinforcement learning, missing data theory, dynamic treatment regimes, and semiparametric theory to support trustworthy and data-efficient decision support in healthcare.

Title: From Risks to Resilience: Protecting Privacy in Adapted Language Models

Abstract: 
As large language models (LLMs) underpin various sensitive applications, preserving the privacy of their training data is crucial for their trustworthy deployment. This talk will focus on the privacy of LLM adaptation data. We will see how easily sensitive data can leak from the adaptations, putting privacy at risk. We will then dive into designing protection methods, focusing on how we can obtain privacy guarantees for adaptation data, in particular for prompts. We will also compare private adaptations for open LLMs and their closed, proprietary counterparts across different axes, finding that private adaptations for open LLMs yield higher privacy, better performance, and lower costs. Finally, we will discuss how to monitor the privacy of adapted LLMs through dedicated auditing. By identifying the privacy risks of adapting LLMs, understanding how to mitigate them, and conducting thorough audits, we can ensure that LLMs can be employed for societal benefit without putting individual data at risk.

Vitae: 
Franziska is tenure-track faculty at the CISPA Helmholtz Center for Information Security, where she co-leads the SprintML lab. Before that, she was a Postdoctoral Fellow at the University of Toronto and the Vector Institute, advised by Prof. Nicolas Papernot. Her current research centers on private and trustworthy machine learning. Franziska obtained her Ph.D. from the Computer Science Department at Freie Universität Berlin, where she pioneered the notion of individualized privacy in machine learning. During her Ph.D., Franziska was a research associate at the Fraunhofer Institute for Applied and Integrated Security (AISEC), Germany. She received a Fraunhofer TALENTA grant for outstanding female early-career researchers, the German Industrial Research Foundation prize for her research on machine learning privacy, and the Fraunhofer ICT Dissertation Award 2023, and was named a GI Junior Fellow in 2024.

Title: Natural Language Processing for the Legal Domain: Challenges and Recent Developments

Abstract:
The field of law has become an important application domain for Natural Language Processing (NLP) due to the recent proliferation of publicly available legal data and the socio-economic benefits of mining legal insights. Additionally, the introduction of Large Language Models (LLMs) has brought forth many applications, questions, and concerns in the legal domain. This talk will discuss some of the challenges in processing legal text, as well as some popular research problems, including summarization of long legal documents, identifying relevant statutes from fact descriptions, and pre-trained language models for the legal domain.

Vitae:
Saptarshi Ghosh is an Associate Professor of Computer Science and Engineering at the Indian Institute of Technology Kharagpur (IIT Kharagpur). His research interests include legal analytics, Natural Language Processing, and algorithmic bias and fairness. He obtained his Ph.D. in Computer Science from the same institute and was a Humboldt Postdoctoral Fellow at the Max Planck Institute for Software Systems, Germany. He has published more than 100 research papers in reputed conferences and journals and has been an investigator on more than 15 research projects sponsored by the Government of India and various industries. He presently leads a Max Planck Partner Group focusing on algorithmic bias and fairness at IIT Kharagpur. He is a Fellow of The Institution of Engineers (India). His work on Law-AI has been published at top AI conferences, including AAAI, ACL, EMNLP, and SIGIR, and has won awards at the top Law-AI conferences, including the Best Paper Award at JURIX 2019 and the Best Student Paper Award at ICAIL 2021. He is presently the Section Editor on Legal Information Retrieval for the Artificial Intelligence and Law journal, the most prestigious journal in Law-AI.

Title: Safety in Reinforcement Learning

Abstract: 
Reinforcement Learning (RL) agents can solve general problems with little to no knowledge of the underlying environment. These agents often learn through experience, using a trial-and-error strategy that can lead to practical innovations, but this randomized process may cause undesirable events. Safe RL studies how to make such agents more reliable and how to ensure they behave appropriately. In this talk, we discuss these issues, from online settings, where the agent interacts directly with the environment, to offline settings, where the agent only has access to historical data. We present new RL methods that exploit different types of prior knowledge to provide safety and reliability. Exploiting such prior knowledge, we present reliable offline algorithms that can improve the policy using less data, as well as online algorithms that comply with safety constraints while learning. Besides safety and reliability, we also touch on other challenges preventing the deployment of RL in the real world, such as partial observability, generalization, and high-dimensional data.

Vitae:
Thiago D. Simão (he/him) is an Assistant Professor in the Department of Mathematics and Computer Science at Eindhoven University of Technology (TU/e). He completed his Ph.D. in the Algorithmics Group at Delft University of Technology under the supervision of Dr. Matthijs Spaan. Following his doctoral studies, he worked as a Postdoctoral Researcher in the Department of Software Science at Radboud University Nijmegen, collaborating with Dr. Nils Jansen. His research focuses on enhancing the reliability of AI techniques to enable safe deployment in real-world applications. Specifically, he is interested in safe reinforcement learning, ensuring AI systems meet performance guarantees while preventing catastrophic failures. Through his work, he contributed to the advancement of trustworthy AI solutions in complex and uncertain environments.

Slides: PDF

Title: Orchestrating AI Agents among Humans

Abstract:
As AI agents are deployed in real-world settings, determining when to expose users to AI assistance becomes increasingly critical. Effective use of AI agents requires invoking the right agent at the right time. We introduce Modiste, an interactive tool for learning personalized decision-support policies that dynamically adjust user access to AI agents. Modiste leverages tools from contextual bandits to optimize when and how AI agents provide support, balancing performance, cost, and constraints. We further characterize the theoretical conditions under which orchestration between agents is beneficial. Our empirical studies show how selective access to AI agents, including deliberate disengagement from AI, can improve decision outcomes, reduce unnecessary AI use, and align AI agents with real-world constraints. We conclude with a call for human-centered interactive evaluation of AI agents, assessing their effectiveness via multi-turn interactions with experts.

Vitae:
Umang Bhatt is an Assistant Professor & Faculty Fellow at the Center for Data Science at New York University and a Senior Research Associate in Safe and Ethical AI at the Alan Turing Institute. He completed his PhD in the Machine Learning Group at the University of Cambridge. His research lies in human-AI collaboration, AI governance, and algorithmic transparency. His work has been supported by a JP Morgan PhD Fellowship and a Mozilla Fellowship. Previously, he was a Research Fellow at the Partnership on AI, a Fellow at Harvard’s Center for Research on Computation and Society, and an Advisor to the Responsible AI Institute. Umang received his MS and BS in Electrical and Computer Engineering from Carnegie Mellon University.

Organizers