Matteo Turchetta

PhD Student
I believe that the safety and robustness of learning systems are among the most exciting challenges we face in embedding ever smarter systems into our daily lives.

Matteo Turchetta is a postdoctoral researcher in the Learning and Adaptive Systems group at ETH Zurich. His research lies at the interface of control theory and machine learning. In particular, he aims to build provably safe learning agents capable of making decisions under uncertainty in safety-critical dynamical environments.

Scientific Publications

Published
Learning Long-Term Crop Management Strategies with CyclesGym
36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Published
Near-Optimal Multi-Agent Learning for Safe Coverage Control
36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Published
GoSafeOpt: Scalable Safe Exploration for Global Optimization of Dynamical Systems
Artificial Intelligence
Vol. 320, Article 103922
Safe and Efficient Model-free Adaptive Control via Bayesian Optimization
IEEE International Conference on Robotics and Automation (ICRA 2021)

Research projects as Researcher

Title
Principal Investigators

Safe model-based reinforcement learning via causal inference and meta-learning

Summary

A key limitation of current deep reinforcement learning (RL) approaches is their need for accurate computational models and their focus on fully observable domains. In many real-world domains, such as those considered in the NCCR, only approximate dynamics models exist, and partial observability is paramount. In our research, we will investigate connections between reinforcement learning and causal inference. We explore novel approaches for off-policy evaluation based on ideas from causal inference, and study the use of experimental design for causal discovery for safe exploration in RL. We also plan to use ideas from meta-learning to transfer inductive biases across multiple related tasks.



News and Useful Information

02 September 2024
Sustainability
Researcher blogs
"Digital twin" technology isn't just for industry. Feeding details such as soil conditions and weather into an app could soon help farmers obtain tailored crop-management advice, reducing emissions and increasing yields.