Dr. Carl-Johann Simon-Gabriel

Alumni
"NCCR Automation is a great opportunity to apply my research to energy- and environment-related domains and help the transition towards a low-carbon society."

Carl-Johann was a post-doc working with Prof. Andreas Krause on adversarial images, kernel methods, reinforcement learning, and causality between August 2020 and November 2021. He is also very interested in questions related to energy and climate change, and hopes one day to help efficiently reduce our carbon footprint. He did his PhD at the Max Planck Institute for Intelligent Systems in Tübingen (Germany) under the supervision of Prof. Bernhard Schölkopf, where he worked on generative image models (GANs and VAEs) and their connections to well-known and/or kernel-based distribution dissimilarities, such as KL divergences and maximum mean discrepancies.
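
For context, the maximum mean discrepancy (MMD) mentioned above compares two distributions through the distance between their mean embeddings in a kernel's feature space. A minimal sketch of the standard unbiased MMD² estimator with an RBF kernel (an illustration of the general concept, not code from his work) might look like this:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), evaluated for all pairs of rows.
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq_dists)

def mmd2_unbiased(x, y, gamma=1.0):
    """Unbiased estimate of the squared maximum mean discrepancy MMD^2(P, Q)
    from samples x ~ P and y ~ Q (Gretton et al., JMLR 2012)."""
    m, n = len(x), len(y)
    kxx = rbf_kernel(x, x, gamma)
    kyy = rbf_kernel(y, y, gamma)
    kxy = rbf_kernel(x, y, gamma)
    # Drop the diagonal terms so the within-sample averages are unbiased.
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

# Example: samples from two Gaussians with shifted means.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 2))
y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2_unbiased(x, y))  # clearly above 0 when the distributions differ
```

In GAN-style training, estimators of this kind can serve directly as the critic comparing generated and real samples, which is one of the connections between generative models and kernel dissimilarities referred to above.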

Scientific Publications

PopSkipJump: A Decision-based Adversarial Attack for Probabilistic Classifiers
Proceedings of the 38th International Conference on Machine Learning (ICML 2021), Vol. 139, pp. 9712–9721

Research projects as Researcher

Safe model-based reinforcement learning via causal inference and meta-learning

Summary

A key limitation of current deep reinforcement learning (RL) approaches is their need for accurate computational models and their focus on fully observable domains. In many real-world domains, such as those considered in the NCCR, only approximate dynamics models exist, and partial observability is a central challenge. In our research, we will investigate connections between reinforcement learning and causal inference. We explore novel approaches for off-policy evaluation based on ideas from causal inference, and study the use of experimental design for causal discovery for safe exploration in RL. We also plan to use ideas from meta-learning to transfer inductive biases across multiple related tasks.
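
To make the off-policy evaluation problem mentioned above concrete, here is a minimal sketch of the classical importance-sampling estimator in a contextual-bandit setting. This is a textbook baseline shown purely for illustration, not the causal method the project proposes, and all data in it is synthetic:

```python
import numpy as np

def is_ope(behavior_probs, target_probs, rewards):
    """Ordinary importance-sampling estimate of a target policy's value
    from data logged under a different behavior policy."""
    weights = target_probs / behavior_probs  # likelihood ratio per logged action
    return np.mean(weights * rewards)

# Hypothetical logged data: a uniform behavior policy over 2 arms,
# where arm 1 pays 1.0 on average and arm 0 pays 0.0 on average.
rng = np.random.default_rng(0)
n = 10_000
actions = rng.integers(0, 2, size=n)            # behavior policy: uniform
rewards = rng.normal(loc=actions * 1.0, scale=1.0)
behavior_probs = np.full(n, 0.5)
# Target policy to evaluate: pick arm 1 with probability 0.9.
target_probs = np.where(actions == 1, 0.9, 0.1)

print(is_ope(behavior_probs, target_probs, rewards))  # ~0.9, the target's true value
```

The estimator is unbiased but can have high variance when the two policies differ strongly; the causal-inference perspective mentioned in the summary aims, among other things, at more robust alternatives to such plain reweighting.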
