Courses
Winter Term 2024/25
This seminar will provide a broad overview of recent efforts to make AI more trustworthy. Students will be able to choose from a broad range of papers published at top AI or Formal Methods conferences such as NeurIPS, ICML, ICLR, AAAI, IJCAI, CAV, and TACAS. Potential topics include:
- Safe Reinforcement Learning
- Verification of Neural Networks
- Safe Planning under Uncertainty
- Large Language Models in Combination with Formal Methods
The past few years have seen a confluence of two related trends: (1) the rapid adoption of Artificial Intelligence (AI) and Machine Learning (ML) in a wide range of real-world applications across domains such as healthcare and engineering, and (2) the development of specialized tooling and design patterns for AI/ML workloads.
The goal of this course is to introduce students to various AI/ML prediction paradigms, popular frameworks, and design patterns. Specifically, we will build code bases involving (shallow) classification and regression models, CNNs, and Transformers using frameworks such as scikit-learn, PyTorch, and Transformers. We will learn how to use data loaders to manage large-scale datasets and how to use GPUs to speed up deep learning workloads. We will also cover best practices such as testing and reproducibility.
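To give a flavor of the kind of workflow the course covers, the following is a minimal sketch (illustrative only, not course material): a small PyTorch training loop that uses a DataLoader over a toy dataset and moves computation to a GPU when one is available. The model and data are made up for the example.

```python
# Minimal sketch (illustrative, not course material): a small PyTorch training
# loop using a DataLoader and, when available, a GPU.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy regression data: 1,000 samples with 16 features each.
X = torch.randn(1000, 16)
y = X.sum(dim=1, keepdim=True) + 0.1 * torch.randn(1000, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for xb, yb in loader:
        xb, yb = xb.to(device), yb.to(device)  # move each batch to the GPU if present
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
```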
Complex digital systems have an increasing presence and impact on our lives, making formal verification of their correctness crucial. For instance, a spacecraft's program should not crash, and a networking system should stay operational even if one server goes down. One approach to ensuring this is deductive verification, which generates a collection of logical constraints that the system must satisfy; these constraints can then be checked algorithmically by automatic theorem provers such as SAT and SMT solvers. Another popular approach is model checking, which represents the system via mathematical models and exhaustively checks certain properties, often described in temporal logics. This course covers various aspects of formal verification and model checking.
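As a small illustration of the deductive style (an assumption about tooling; the course does not necessarily use this solver), the sketch below encodes a simple property as SMT constraints and checks it with the Z3 solver's Python API by searching for a counterexample to its negation.

```python
# Small illustration (not course material): encoding a property as SMT
# constraints and checking it with the Z3 solver.
from z3 import Ints, Solver, Implies, Not, sat

x, y = Ints("x y")

# Claim: if x > 0 and y > 0, then x + y > 0. We check its negation:
# if no counterexample exists, the property holds for all integers.
claim = Implies(x > 0, Implies(y > 0, x + y > 0))

s = Solver()
s.add(Not(claim))
if s.check() == sat:
    print("counterexample:", s.model())
else:
    print("property verified: no counterexample exists")
```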
Reinforcement Learning (RL) is a subfield of Machine Learning in which an agent learns to make decisions by taking actions in an environment in pursuit of a specific learning goal. RL relies on exploration methods, with the agent using feedback from its actions to improve its future behavior. This lab course addresses problems that are commonly faced when working with, researching, and applying RL techniques.
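The explore/act/feedback loop described above can be made concrete with a minimal sketch (illustrative only; the environment and hyperparameters are made up): tabular Q-learning on a toy one-dimensional corridor.

```python
# Minimal sketch (illustrative only): tabular Q-learning on a toy 1-D corridor,
# showing the explore/act/observe-feedback loop of RL.
import random

N_STATES, GOAL = 5, 4          # states 0..4, reward when reaching state 4
ACTIONS = [-1, +1]             # move left or right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != GOAL:
        # epsilon-greedy exploration: sometimes try a random action
        a = random.randrange(2) if random.random() < epsilon else Q[s].index(max(Q[s]))
        s_next = max(0, min(N_STATES - 1, s + ACTIONS[a]))
        r = 1.0 if s_next == GOAL else 0.0
        # feedback from the environment updates the value estimate
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("learned policy:", ["left" if q[0] > q[1] else "right" for q in Q])
```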
Summer Term 2024
This seminar will provide a broad overview of recent efforts to make AI more trustworthy. Students will be able to choose from a broad range of papers published at top AI or Formal Methods conferences such as NeurIPS, ICML, ICLR, AAAI, IJCAI, CAV, and TACAS. Potential topics include:
- Safe Reinforcement Learning
- Verification of Neural Networks
- Safe Planning under Uncertainty
- Large Language Models in Combination with Formal Methods
The objective of this course is to provide a comprehensive understanding of the major programming paradigms: imperative and object-oriented programming, functional programming, and logic programming. The course offers practical, state-of-the-art programming exercises that help students solve real-world programming problems, with a focus on developing, analyzing, and verifying code that solves such problems efficiently. Additionally, students will learn to use modern software development tools such as IDEs and DevOps packages as part of the programming exercises.
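To illustrate the contrast between two of these paradigms, here is a small sketch (not a course exercise; the task is made up): the same computation written first in an imperative style and then in a functional style.

```python
# Small illustration (not a course exercise): summing the squares of the even
# numbers in a list, written in two of the paradigms covered.
from functools import reduce

nums = [1, 2, 3, 4, 5, 6]

# Imperative style: explicit mutable state and iteration.
total = 0
for n in nums:
    if n % 2 == 0:
        total += n * n

# Functional style: composition of pure functions, no mutable state.
total_fp = reduce(lambda acc, n: acc + n * n, filter(lambda n: n % 2 == 0, nums), 0)

assert total == total_fp == 56
```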