Symposium - Systems Theory of Algorithms - Part I

Event Type
Seminar
Date
-
Location
ETH Zurich HG E 5


Many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems evolving over time with

  • inputs (real-time signals, historical data samples, or random seeds),
  • outputs (decision variables or algorithm iterates), and
  • uncertainties (noise or unknown problem parameters).


The dynamical systems perspective on such algorithms is key to future automation, AI, and socio-technical systems, where challenges arise from the analysis and design of algorithms

  • interconnected through inputs and outputs (e.g., in machine learning pipelines or optimization-based feedback controllers),
  • operating online (e.g., running on streaming data inputs),
  • deployed in resource-constrained environments (e.g., real-time embedded systems or big-data settings),
  • interconnected with humans in the loop (e.g., recommendation systems or autonomous driving),
  • featuring self-interested decision makers that must share a common limited resource (e.g., infrastructure networks), and
  • robust in the face of severe uncertainty and disturbances (e.g., through exogenous inputs, intrinsic randomness, or a non-stationary environment).

All these challenges call for a dynamical systems perspective on algorithms, and systems and control theory offers a unique angle of attack.
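
As a minimal illustration of this viewpoint (our toy example, not drawn from any of the talks), stochastic gradient descent can be read as a discrete-time system whose state is the iterate, whose input is a streaming data sample, and whose uncertainty is the sampling noise:

```python
import numpy as np

# Stochastic gradient descent as a discrete-time dynamical system:
#   state  x_k : the current iterate (decision variable)
#   input  d_k : a streaming data sample (a_k, b_k)
#   output     : the iterate itself
#   noise      : sampling randomness acts as an uncertain disturbance
def sgd_step(x, data, step_size=0.05):
    a, b = data                      # one least-squares sample: a^T x ≈ b
    grad = (a @ x - b) * a           # stochastic gradient of 0.5*(a^T x - b)^2
    return x - step_size * grad      # state update x_{k+1} = f(x_k, d_k)

rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])       # hypothetical unknown problem parameter
x = np.zeros(2)
for _ in range(2000):                # iterate the closed-loop system
    a = rng.normal(size=2)
    b = a @ x_true + 0.01 * rng.normal()   # noisy streaming measurement
    x = sgd_step(x, (a, b))
print(x)                             # converges to a neighborhood of x_true
```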


What is this symposium about? 

This two-part symposium will feature an outstanding line-up of international experts, who will present their results and answer questions in a panel discussion. The NCCR Automation researchers will prepare for the symposium by means of reading groups focused on the work of the speakers and the broader field. Each part of the symposium will conclude with a reception, followed by individual meetings with the speakers on the following day.

If you want to attend this symposium, please register here. More information about the second session, on Monday 23 May, is available here.


Symposium program

14:00 - 14:15: Welcome and introduction (Profs. Florian Dörfler & John Lygeros)

14:15 - 15:00: Policy Optimization for Optimal Control with Guarantees of Robustness - Tamer Başar

15:00 - 15:45: Optimization with Momentum and Constraints: A Perspective from Smooth and Non-smooth Dynamics - Michael Mühlebach

15:45 - 16:15: Coffee break

16:15 - 17:00: On System Theory for Algorithms in Games - Lacra Pavel

17:00 - 17:45: First-order and zeroth-order optimization algorithms as model-free feedback controllers - Saverio Bolognani

17:45 - 18:30: Panel discussion moderated by Niao He

18:30 - 20:00: Apéro

The symposium will be live streamed. Join us virtually here.


Invited experts

  • Prof. Tamer Başar (University of Illinois) - Policy Optimization for Optimal Control with Guarantees of Robustness

Talk abstract: Policy optimization (PO) is a key ingredient of modern reinforcement learning (RL) and can be used for the efficient design of optimal controllers. For control design, certain constraints are generally enforced on the policies to be implemented, such as stability, robustness, and/or safety requirements on the closed-loop system. Hence, PO entails, by its nature, a constrained optimization in most cases, which is moreover nonconvex, so the analysis of its global convergence is generally very challenging. A further complication is that some safety-critical constraints, such as closed-loop stability or the H-infinity (H∞) norm constraint that guarantees system robustness, can be difficult to enforce on the controller while it is being learned as the PO methods proceed. We have recently overcome this difficulty for a special class of such problems, which I will discuss in this talk while also placing it in a broader context.

Specifically, I will introduce the problem of PO for H2 optimal control with a guarantee of robustness according to the H∞ criterion, for both continuous- and discrete-time linear systems. I will argue, with justification, that despite the nonconvexity of the problem, PO methods can enjoy the global convergence property. More importantly, I will show that the iterates of two specific PO methods (namely, natural policy gradient and Gauss-Newton) automatically preserve the H∞ norm constraint (i.e., the robustness) during iterations, thus enjoying what we refer to as an “implicit regularization” property. Furthermore, under certain conditions, convergence to the globally optimal policies features globally sub-linear and locally super-linear rates. Due to the inherent connection of this optimal robust control model to risk-sensitive optimal control and linear quadratic (LQ) dynamic games, these results also apply, as a byproduct, to those settings, with some adjustments. The latter, in particular, entails PO with two agents, where the order in which the updates are carried out becomes a challenging issue, which I will also discuss. The talk will conclude with some informative simulations and a brief discussion of extensions to the model-free framework and associated sample complexity analyses.
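
As a rough schematic of the problem class described above (our paraphrase, not the speaker's exact formulation), the mixed design task can be posed as policy optimization over stabilizing state-feedback gains:

```latex
% Schematic mixed H2/Hinf policy optimization (paraphrase):
% minimize the H2 performance cost over gains K whose closed loop
% also satisfies an Hinf robustness bound.
\[
  \min_{K \in \mathcal{K}} \; J_2(K)
  \quad \text{subject to} \quad
  \| T_{zw}(K) \|_{\infty} < \gamma ,
  \qquad
  \mathcal{K} = \{ K : A - BK \ \text{is stable} \} .
\]
% "Implicit regularization": iterates of, e.g., natural policy gradient
% (in the form familiar from the LQR policy-gradient literature, with
% \Sigma_K the closed-loop state correlation matrix),
\[
  K_{n+1} = K_n - \eta \, \nabla J_2(K_n) \, \Sigma_{K_n}^{-1} ,
\]
% remain inside the Hinf-feasible set without explicit projection.
```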

Biography: Tamer Başar has been with the University of Illinois Urbana-Champaign since 1981, where he is currently Swanlund Endowed Chair Emeritus and Center for Advanced Study (CAS) Professor Emeritus of Electrical and Computer Engineering, with additional affiliations at the Coordinated Science Laboratory, the Information Trust Institute, and Mechanical Science and Engineering. At Illinois, he has also served as Director of CAS (2014-2020), Interim Dean of Engineering (2018), and Interim Director of the Beckman Institute (2008-2010).

He is a member of the US National Academy of Engineering; a Fellow of IEEE, IFAC, and SIAM; a past president of the IEEE Control Systems Society (CSS); the founding president of the International Society of Dynamic Games (ISDG); and a past president of the American Automatic Control Council (AACC). He has received several awards and recognitions over the years, including the IEEE Control Systems Award (2014), the ISDG Isaacs Award (2010), the AACC Bellman Award (2006), the IFAC Quazza Medal (2005), the IEEE CSS Bode Lecture Prize (2004), and a number of honors from academic institutions, including the Wilbur Cross Medal (Yale, 2021) and several international honorary doctorates and professorships, the most recent being an honorary doctorate from KTH (Sweden, 2019).

He has around 1,000 publications in systems, control, communications, optimization, networks, and dynamic games, including books on non-cooperative dynamic game theory, robust control, network security, wireless and communication networks, and stochastic networks. He was Editor-in-Chief of the IFAC Journal Automatica between 2004 and 2014, and is currently editor of several book series.

His current research interests include stochastic teams, games, and networks; multi-agent systems and learning; data-driven distributed optimization; epidemics modeling and control over networks; modeling and control of spread of disinformation; security and trust; energy systems; and cyber-physical systems.


  • Prof. Lacra Pavel (University of Toronto) - On System Theory for Algorithms in Games

Talk abstract: Over the years, a plethora of algorithms and dynamics have been proposed in the game-theoretic literature with the goal of finding a Nash equilibrium. From best-response play, (projected) gradient play, and proximal dynamics to fictitious play, payoff-based play, and Q-learning (reinforcement learning), the list is long. And yet, most algorithms work only for special classes of games, for example zero-sum games, two-player games, 2×2 games, potential games, or strictly/strongly monotone games. Why is it that in certain game settings some algorithms work while others don't? How can we relax these assumptions? How can we generalize the algorithms in a systematic manner? How can we relax their informational requirements?

It turns out that system theory can help answer these questions. Herein, we show how dissipativity/passivity theory can inform the analysis and design of game-theoretic algorithms. We consider some popular game-theoretic algorithms and show that they can be cast as instances of a feedback interconnection between some dissipative/passive dynamical system and some specific game mapping. Once this is done, convergence to a Nash equilibrium follows from standard passivity theory, based on simple and concise arguments. We also discuss how passivity-inspired ideas can be used to design novel Nash equilibrium-seeking algorithms and learning dynamics.  
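
To make the flavor of this argument concrete, here is our schematic rendering (not the speaker's exact formulation) of continuous-time pseudo-gradient play as a passive feedback interconnection:

```latex
% Pseudo-gradient play: an integrator (a passive system) in feedback
% with the game's pseudo-gradient mapping F (schematic paraphrase).
\[
  \dot{x} = -F(x),
  \qquad
  F(x) = \big( \nabla_{x_1} J_1(x), \ldots, \nabla_{x_N} J_N(x) \big).
\]
% If F is monotone, i.e. \langle F(x) - F(x'), x - x' \rangle \ge 0,
% and x^* is a Nash equilibrium with F(x^*) = 0, then
% V(x) = \frac{1}{2} \| x - x^* \|^2 serves as a storage function:
\[
  \dot{V}(x) = - \langle x - x^* ,\, F(x) - F(x^*) \rangle \le 0 ,
\]
% so convergence follows from standard passivity/Lyapunov arguments.
```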

Biography: Lacra Pavel is a professor in the Systems Control group, Department of Electrical and Computer Engineering, University of Toronto. She received the Diploma of Engineer in Automatic Control from Technical University Iasi, Romania, and the Ph.D. degree in Electrical Engineering from Queen's University, Canada. She joined the University of Toronto in 2002, after a postdoctoral fellowship at the National Research Council and four years in the communications industry. Her research interests are in game theory and distributed optimization in networks, with an emphasis on dynamic, system-theoretic, and control methods. She is the author of the book “Game Theory for Control of Optical Networks” (Birkhäuser-Springer Science). She is a Senior Editor for the IEEE Transactions on Control of Network Systems and the IEEE Open Journal of Control Systems, and an Associate Editor for the IEEE Control Systems Conference Editorial Board.


  • Prof. Michael Mühlebach (Max Planck Institute for Intelligent Systems) - Optimization with Momentum and Constraints: A Perspective from Smooth and Non-smooth Dynamics

Talk abstract: My talk will highlight connections between dynamical systems and optimization and will be divided into two parts. The first part presents an analysis of accelerated first-order optimization algorithms, where the continuous dependence of the iterates on their initial conditions is exploited to characterize the convergence rate. The result establishes criteria for accelerated convergence of a large class of momentum-based optimization algorithms. The criteria, which are easily verifiable, are necessary and sufficient and therefore precisely characterize the optimization algorithms that achieve acceleration. The analysis applies to non-convex functions, unifies discrete-time and continuous-time models, and rigorously explains why structure-preserving (symplectic) discretization schemes are important in optimization. The second part introduces a class of first-order methods for constrained optimization that are based on an analogy to non-smooth dynamical systems. The key underlying idea is to express constraints in terms of velocities instead of positions, which has the algorithmic consequence that optimizations over feasible sets at each iteration are replaced with optimizations over local, sparse convex approximations. The result is a simplified suite of algorithms and an expanded range of possible applications in machine learning.
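
As a minimal illustration of the velocity viewpoint (our toy construction, not code from the talk), consider a single linear constraint: instead of projecting the iterate onto the feasible set, the unconstrained velocity is projected onto a halfspace of admissible directions, which here has a closed form:

```python
import numpy as np

# Velocity-based handling of one linear constraint a^T x <= b,
# i.e. g(x) = a^T x - b <= 0. Rather than projecting the *position*
# onto the feasible set, the *velocity* v is projected onto the
# halfspace of directions that keep (or restore) feasibility:
#     a^T v <= -alpha * g(x)
# a local, convex approximation that is solvable in closed form here.
def velocity_step(x, grad_f, a, b, step=0.1, alpha=1.0):
    v = -step * grad_f(x)            # unconstrained velocity
    g = a @ x - b                    # constraint value (<= 0 is feasible)
    slack = a @ v + alpha * g
    if slack > 0:                    # velocity would violate the constraint
        v = v - (slack / (a @ a)) * a    # project v onto the halfspace
    return x + v

# Minimize f(x) = ||x - target||^2 subject to x1 + x2 <= 1.
target = np.array([2.0, 2.0])
grad_f = lambda x: 2.0 * (x - target)
a, b = np.array([1.0, 1.0]), 1.0

x = np.zeros(2)
for _ in range(200):
    x = velocity_step(x, grad_f, a, b)
print(x)   # approaches the constrained minimizer (0.5, 0.5)
```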

Biography: Michael Mühlebach studied mechanical engineering at ETH Zurich and specialized in robotics, systems, and control during his Master's degree. He received the B.Sc. and the M.Sc. in 2010 and 2013, respectively, before joining the Institute for Dynamic Systems and Control for his Ph.D. He graduated under the supervision of Prof. R. D'Andrea in 2018 and joined the group of Prof. Michael I. Jordan at the University of California, Berkeley as a postdoctoral researcher. In 2021 he started as an independent group leader at the Max Planck Institute for Intelligent Systems in Tübingen.

He is interested in a wide variety of subjects, including machine learning, dynamical systems, and optimization. During his Ph.D. he developed approximations to the constrained linear quadratic regulator problem and applied them to model predictive control. He also designed control and estimation algorithms for a balancing robot and a flying machine. His more recent work straddles the boundary between machine learning and optimization, and includes the analysis of momentum-based and constrained optimization algorithms from a dynamical systems point of view.

He received the Outstanding D-MAVT Bachelor Award for his Bachelor's degree and the Willi Studer Prize for the best Master's degree. His Ph.D. thesis was awarded the ETH Medal and the HILTI Prize for innovative research. He was also awarded a Branco Weiss Fellowship and an Emmy Noether Fellowship, which funds his research group.


  • Dr. Saverio Bolognani (ETH Zurich) - First-order and zeroth-order optimization algorithms as model-free feedback controllers 

Talk abstract: Many control problems in the engineering domain can be abstracted as multi-input multi-output steady-state regulation problems, where the system needs to be steered to an efficient operating point that is compatible with the operating limits of the plant. We will see that controllers for this class of problems can be designed by tapping into the tools of nonlinear optimization, in particular first-order and zeroth-order methods. Interconnecting an iterative algorithm with a physical system, however, requires some care in order to guarantee well-posedness of the resulting dynamical system and closed-loop stability. On the other hand, this design approach yields robust controllers that are (almost) model-free and successfully tackle these complex steady-state control problems even in the presence of substantial uncertainty about the plant and the unknown disturbances.
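
A toy sketch of this idea (our construction; the plant, gains, and numbers are purely illustrative): a gradient-type controller drives a static plant y = Hu + w toward an efficient operating point using measured outputs, so the unknown disturbance w never needs to be modeled:

```python
import numpy as np

# Toy "feedback optimization" loop: a gradient-type controller uses
# *measured* plant outputs in place of a disturbance model.
# Steady-state plant: y = H u + w, with w unknown to the controller;
# only the input-output sensitivity H enters the update.
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.5], [0.2, 1.0]])
w = np.array([0.7, -0.3])            # unknown disturbance
y_ref = np.array([1.0, 0.0])
rho, gamma = 0.1, 0.2                # regularization weight, controller gain

def plant(u):
    return H @ u + w + 0.001 * rng.normal(size=2)   # noisy measurement

u = np.zeros(2)
for _ in range(500):
    y = plant(u)                     # measure instead of modeling
    # gradient of 0.5*||y - y_ref||^2 + 0.5*rho*||u||^2 w.r.t. u,
    # with the measured y standing in for a model prediction of y(u)
    u = u - gamma * (H.T @ (y - y_ref) + rho * u)
print(u, plant(u))                   # settles near the optimal operating point
```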

Biography: Saverio Bolognani received the Ph.D. degree in Information Engineering from the University of Padova, Italy, in 2011. In 2006-2007, he was a visiting graduate student at the University of California, San Diego. In 2013-2014, he was a Postdoctoral Associate at the Laboratory for Information and Decision Systems of the Massachusetts Institute of Technology in Cambridge, MA. He is currently a Senior Scientist at the Automatic Control Laboratory at ETH Zurich. His research interests include the application of networked control system theory to power systems, cyber-physical systems, the intersection of nonlinear optimization with feedback control design, multi-agent systems, and game theory.