Paper at ICLR 2025

Our lab had the pleasure of presenting at ICLR 2025 in Singapore! Markel Zubia and Thiago D. Simão represented the team, sharing our latest work on improving the safety of reinforcement learning agents when dropped into unfamiliar territory.
Reinforcement learning is great at learning through trial and error, but trial and error can be risky when agents are deployed in real-world environments where safety matters. Our paper tackles this by focusing on safe transfer learning: training an agent in a known, controlled setting and then transferring it to a different, safety-critical environment where things might not behave exactly the same.
The key idea? During training, we introduce action disturbances to help the agent learn to act robustly. This results in a provably safer and more robust agent when faced with previously unseen dynamics. In many cases, this approach leads to completely safe transfers, even when the dynamics of the new environment are quite different.
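To give a feel for the idea, here is a minimal sketch of injecting action disturbances during training. This is illustrative only, not the paper's actual method or code: the function name `disturb_action`, the discrete action list, and the disturbance model (replacing the chosen action with a random one with probability `eps`) are all assumptions made for this example.

```python
import random

def disturb_action(chosen_action, action_space, eps, rng):
    """With probability eps, override the agent's chosen action with a
    random action from the action space, simulating disturbed dynamics."""
    if rng.random() < eps:
        return rng.choice(action_space)
    return chosen_action

# Toy fragment of a training loop: the action the environment actually
# executes may differ from the one the agent picked, so the learned
# policy has to remain safe under these perturbations.
rng = random.Random(0)
actions = ["left", "right", "stay"]
executed = [disturb_action("stay", actions, eps=0.2, rng=rng)
            for _ in range(1000)]
disturbed_fraction = sum(a != "stay" for a in executed) / len(executed)
```

An agent trained against such disturbances cannot rely on its actions always being executed faithfully, which is what yields the robustness to unseen dynamics described above.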
 
Want to learn more about it? Just read the paper or check out the code!
Safely Transferring RL Agents