Aren Karapetyan

PhD Student
I am motivated by understanding the behaviour of dynamical systems and by automating them through data-driven and learning-based control.

Aren Karapetyan is a doctoral student at the Automatic Control Laboratory at ETH Zürich under the supervision of Prof. John Lygeros. He obtained his MEng degree in Engineering Science from the University of Oxford in 2020, where he specialised in Information and Control Engineering. He received the Engineering Undergraduate Research Opportunities Program (EUROP) award to undertake a research project at the Oxford Control Group under the supervision of Professors A. Papachristodoulou and K. Margellos, and remained with the same group to complete his Fourth Year Project on the distributed control of quadrotors. His current research focuses on dynamic programming and optimal control.

Scientific Publications

Online Linear Quadratic Tracking with Regret Guarantees. IEEE Control Systems Letters, 6 pages.
On the Finite-Time Behavior of Suboptimal Linear Model Predictive Control. arXiv preprint.
On the Regret of H∞ Control. 2022 IEEE Conference on Decision and Control (CDC), pp. 6181–6186.
Implications of Regret on Stability of Linear Dynamical Systems. 22nd IFAC World Congress, 2023.
Performance Bounds of Model Predictive Control for Unconstrained and Constrained Linear Quadratic Problems and Beyond. 22nd IFAC World Congress, Vol. 56, No. 2, pp. 8464–8469.

Research projects as Researcher

DeepGreen: Approximate dynamic programming and reinforcement learning for extremely high dimensional systems

Summary

The DeepGreen snooker robot project aims to build a robot capable of challenging the best human players. Snooker (similar to billiards) is a game that combines advanced strategy with physical skill. A single game can be abstractly represented as a zero-sum dynamic game with an extremely high-dimensional state space (the position of each ball, the score, and the current player), a continuous action space (the angle, speed, and position of the cue), and nonlinear hybrid dynamics (a combination of the game's rules and the physics governing the interaction of the cue, the balls, and the table). These characteristics make snooker an ideal testbed for approximate dynamic programming and reinforcement learning algorithms that aim to close the “reality gap” and transition from digital simulation to real-world implementation. The principal objective of the project is the design of the strategy (policy) for the robotic player. The main challenges are (i) finding a suitable function approximation scheme for the policy or value function that exploits the structure of the game and reduces the dimensionality of the decision problem, and (ii) combining data from both a physics engine and the real physical system to narrow the “reality gap” and obtain a strategy that performs well (and improves over time) on the physical robot, not only in a simulated environment.

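To make the abstraction above concrete, the following is a minimal, self-contained Python sketch of fitted value iteration for a toy turn-based zero-sum game with a sampled continuous action space. The dynamics, reward, and polynomial features are hypothetical placeholders chosen for illustration; they are not models or code from the DeepGreen project.

    # Illustrative fitted value iteration for a toy turn-based zero-sum game.
    # All models below are hypothetical stand-ins, not DeepGreen components.
    import numpy as np

    rng = np.random.default_rng(0)

    STATE_DIM = 4          # toy stand-in for the (much larger) table state
    N_ACTION_SAMPLES = 32  # continuous cue actions handled by sampling
    GAMMA = 0.95           # discount factor

    def step(state, action, player):
        """Hypothetical hybrid dynamics: a nonlinear map plus a turn switch."""
        next_state = np.tanh(state + 0.1 * action)
        reward = player * float(next_state.sum())  # zero-sum payoff
        return next_state, reward

    def features(state, player):
        """Polynomial features: one simple function-approximation scheme."""
        return np.concatenate(([1.0, float(player)], state, state ** 2))

    def value(theta, state, player):
        return float(features(state, player) @ theta)

    def bellman_target(theta, state, player):
        """Player +1 maximises, player -1 minimises, over sampled actions."""
        actions = rng.normal(size=(N_ACTION_SAMPLES, STATE_DIM))
        q_values = []
        for action in actions:
            next_state, reward = step(state, action, player)
            q_values.append(reward + GAMMA * value(theta, next_state, -player))
        return max(q_values) if player == 1 else min(q_values)

    # Fitted value iteration: regress the parametric value function onto
    # one-step minimax Bellman targets computed at sampled states.
    theta = np.zeros(2 + 2 * STATE_DIM)
    for _ in range(20):
        states = rng.normal(size=(64, STATE_DIM))
        players = rng.choice([-1, 1], size=64)
        X = np.stack([features(s, p) for s, p in zip(states, players)])
        y = np.array([bellman_target(theta, s, p)
                      for s, p in zip(states, players)])
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)

Sampling the continuous action space and regressing onto minimax Bellman targets is one standard way to address challenge (i); addressing challenge (ii) would additionally require mixing rollouts from a physics engine with data from the physical robot when forming the regression targets.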