Symposium - Systems Theory of Algorithms - Part II
Many iterative algorithms in optimization, games, and learning can be viewed as dynamical systems evolving over time with
- inputs (real-time signals, historic data samples, or random seeds),
- outputs (decision variables or algorithm iterates), and
- uncertainties (noise or unknown problem parameters).
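For a concrete instance of this template (a schematic illustration with generic notation, not tied to any particular talk), consider stochastic gradient descent written as a discrete-time system:

\[
x_{k+1} = x_k - \alpha_k \nabla f(x_k; \xi_k), \qquad y_k = x_k,
\]

where the data sample or random seed \(\xi_k\) is the input, the iterate \(y_k\) is the output, and the mismatch between the sampled gradient \(\nabla f(\,\cdot\,; \xi_k)\) and the true gradient is the uncertainty.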
The dynamical systems perspective on such algorithms is key in future automation, AI, and socio-technical systems, where challenges arise from the analysis and design of algorithms
- interconnected through inputs and outputs (e.g., in machine learning pipelines or optimization-based feedback controllers),
- operating online (e.g., running algorithms with streaming data inputs),
- in resource-constrained environments (e.g., real-time embedded systems or big data settings),
- interconnected with humans in the loop (e.g., recommendation systems or autonomous driving),
- featuring self-interested decision makers that need to share a common limited resource (e.g., infrastructure networks), and
- robust in the face of severe uncertainty and disturbances (e.g., exogenous inputs, intrinsic randomness, or a non-stationary environment).
All these challenges call for a dynamical systems perspective on algorithms, and systems and control theory offers a unique angle of attack.
What is this symposium about?
This two-part symposium will feature an outstanding line-up of international experts, who will present their results and answer questions in a panel discussion. NCCR Automation researchers will prepare for the symposium through reading groups focused on the speakers' work and the broader field. Each part of the symposium will conclude with a reception, followed by individual meetings with the speakers on the following day.
If you want to attend this symposium, please register here.
Symposium program
14:00 - 14:15: Welcome and introduction (Profs. Florian Dörfler & John Lygeros)
14:15 - 15:00: Control Theory as a Toolkit for Optimization Algorithm Synthesis: Case Studies and Future Challenges - Christian Ebenbauer
15:00 - 15:45: Accelerating Optimization and Reinforcement Learning with Quasi-Stochastic Approximation - Sean Meyn
15:45 - 16:15: Coffee break
16:15 - 17:00: Robust control perspectives on algorithm analysis and design - Laurent Lessard
17:00 - 17:45: Q-learning through the Lens of Dynamical Systems: from asymptotics to non-asymptotics - Niao He
17:45 - 18:30: Panel discussion
18:30 - 20:00: Apéro
The symposium will be live streamed. Join us virtually here.
Invited experts
- Prof. Christian Ebenbauer (RWTH Aachen University) - Control Theory as a Toolkit for Optimization Algorithm Synthesis: Case Studies and Future Challenges
Talk abstract: Various research activities in the area of analysis and synthesis of optimization algorithms using concepts from systems and control theory have been initiated in recent years. These activities are motivated by the fact that many optimization algorithms are dynamical systems, and thus tools from control theory can be used to analyze and design algorithms. A further motivation is the increasing integration of online optimization in complex automated and intelligent systems, driven by the high availability of online computing power and big data sets. This integration makes it necessary to account for the dynamics of optimization algorithms in a fundamental fashion in order to achieve reliable and efficient operation of such systems.
In the first part of the talk, examples of designing algorithms based on systems and control theory are presented. In particular, we illustrate how concepts from optimal, nonlinear, robust, and stochastic control can be utilized to synthesize optimization algorithms.
In the second part of the talk, future research questions and challenges towards a systems theory of algorithms are discussed.
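A minimal illustration of this viewpoint (our own sketch, not material from the talk): gradient descent is the feedback interconnection of a discrete-time integrator with the gradient as the feedback law, and augmenting that controller with one memory state yields the heavy-ball method:

\[
x_{k+1} = x_k + u_k, \qquad
u_k = -\alpha \nabla f(x_k) \;\Rightarrow\; \text{gradient descent}, \qquad
u_k = -\alpha \nabla f(x_k) + \beta (x_k - x_{k-1}) \;\Rightarrow\; \text{heavy ball}.
\]

Algorithm synthesis then amounts to designing the controller that generates \(u_k\), a task to which optimal, nonlinear, robust, and stochastic control tools apply.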
Biography: Christian Ebenbauer received the M.S. degree (Dipl.-Ing.) in Telematics (electrical engineering and computer science) from Graz University of Technology, Graz, Austria, in 2000 and the Ph.D. (Dr.-Ing.) in Mechanical Engineering from the University of Stuttgart, Germany, in 2005. After completing his Ph.D., he was a Postdoctoral Associate and an Erwin Schrödinger Fellow at the Laboratory for Information and Decision Systems, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA. From April 2009 until July 2021, he was a Full Professor of Computations in Control at the Institute for Systems Theory and Automatic Control, University of Stuttgart. Since August 2021, he has been a Professor and head of the Chair of Intelligent Control Systems at RWTH Aachen University, Germany. His research interests lie in the areas of dynamical systems, control theory, optimization, and algorithms.
- Prof. Sean Meyn (University of Florida) - Accelerating Optimization and Reinforcement Learning with Quasi-Stochastic Approximation
Talk abstract: The ODE method has been a workhorse for algorithm design and analysis since the introduction of stochastic approximation. It is now understood that convergence theory amounts to establishing robustness of Euler approximations for ODEs, while the theory of convergence rates requires finer analysis. This talk concerns a parallel theory for quasi-stochastic approximation, based on algorithms in which the “noise” is generated by deterministic signals. Part of the motivation is pedagogical: theory for convergence and convergence rates is greatly simplified. The other major motivation is practical: the speed of convergence is remarkably fast when compared to its stochastic counterparts. The talk will survey recent theory and applications to gradient-free optimization and reinforcement learning.
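To make the idea tangible, here is a minimal gradient-free optimization sketch (a toy construction of our own; the objective, gains, and probing signals are illustrative assumptions, not taken from the talk). The two runs are identical except that one probes with a deterministic sinusoid (quasi-stochastic) and the other with an i.i.d. random sign (stochastic):

```python
import numpy as np

def gradient_free(probe, n_steps=200_000, dt=0.01, eps=0.1, theta0=0.0):
    """Euler discretization of dtheta/dt = -a(t) * (xi(t)/eps) * f(theta + eps*xi(t)).
    Averaged over the probe xi, the right-hand side behaves like -a(t) * f'(theta)."""
    f = lambda x: (x - 2.0) ** 2                    # toy objective with minimizer theta* = 2
    theta = theta0
    for k in range(n_steps):
        t = k * dt
        a = 1.0 / (1.0 + 0.1 * t)                   # slowly vanishing gain
        xi = probe(t)
        theta -= dt * a * (xi / eps) * f(theta + eps * xi)
        theta = float(np.clip(theta, -10.0, 10.0))  # projection keeps iterates bounded (standard in SA)
    return theta

rng = np.random.default_rng(0)
qsa = gradient_free(lambda t: np.sqrt(2.0) * np.sin(t))    # deterministic sinusoidal probe (QSA)
sa = gradient_free(lambda t: rng.choice((-1.0, 1.0)))      # i.i.d. random-sign probe (SA)
print(f"QSA estimate: {qsa:.4f}   SA estimate: {sa:.4f}")  # both should settle near 2;
                                                           # the QSA error is typically far smaller
```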
- Prof. Niao He (ETH Zurich) - Q-learning through the Lens of Dynamical Systems: from asymptotics to non-asymptotics
Talk abstract: In this talk, we discuss interesting connections between a large family of Q-learning algorithms and switching systems. Building on this connection, we develop new and unified control-theoretic frameworks to analyze Q-learning algorithms, both in terms of their asymptotic and non-asymptotic convergence guarantees. Our analysis also sheds light on sufficient conditions for Q-learning to converge with linear function approximation as well as the overestimation phenomenon. We further illustrate and validate the analysis through numerical simulations.
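Roughly, the connection can be sketched as follows (our own paraphrase, with generic notation that is assumed rather than taken from the talk): written in vector form, the tabular Q-learning iteration applies an affine map selected by the greedy policy \(\pi_k\) extracted from the current iterate, i.e., it is a switched affine system whose switching signal depends on the state:

\[
Q_{k+1} = \underbrace{\big(I - \alpha_k D + \gamma\, \alpha_k D P \Pi_{\pi_k}\big)}_{=:\, A_{\pi_k}} Q_k
          + \alpha_k D R + \alpha_k w_k,
\qquad \pi_k(s) \in \arg\max_{a} Q_k(s, a),
\]

where \(D\) collects state–action visitation probabilities on its diagonal, \(P\) is the transition matrix, \(R\) the reward vector, \(\Pi_\pi\) encodes action selection under policy \(\pi\), and \(w_k\) is a martingale-difference noise term.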
Biography: Niao He is currently an Assistant Professor in the Department of Computer Science at ETH Zurich, where she leads the Optimization and Decision Intelligence (ODI) Group. She is also an ELLIS Scholar and a core faculty member of the ETH AI Center and the Max Planck ETH Center for Learning Systems. Previously, she was an assistant professor at the University of Illinois at Urbana-Champaign from 2016 to 2020. Before that, she received her Ph.D. degree in Operations Research from the Georgia Institute of Technology in 2015. Her research interests are in large-scale optimization, machine learning, and reinforcement learning. She is a recipient of an AISTATS Best Paper Award, an NSF CISE Research Initiation Initiative (CRII) Award, an NCSA Faculty Fellowship, and a Beckman CAS Fellowship. She regularly serves as an area chair for NeurIPS, ICML, ICLR, and other machine learning conferences.
- Prof. Laurent Lessard (Northeastern University) - Robust control perspectives on algorithm analysis and design
Talk abstract: Most complicated optimization problems, in particular those involving a large number of variables, are solved in practice using iterative algorithms. The problem of selecting a suitable algorithm is currently more of an art than a science; a great deal of expertise is required to know which algorithms to try and how to properly tune them. Moreover, there are seldom performance guarantees. In this talk, I will show how the problem of algorithm selection can be approached using tools from robust control theory. By solving simple semidefinite programs (that do not scale with problem size), we can derive robust bounds on convergence rates for popular algorithms such as the gradient method, proximal methods, fast/accelerated methods, and operator-splitting methods such as ADMM. The bounds derived in this manner either match or improve upon the best known bounds from the literature. The bounds also lead to a natural energy dissipation interpretation and an associated Lyapunov function. Finally, our framework can be used to efficiently search for algorithms that meet desired performance specifications, thus establishing a principled methodology for designing new algorithms. We give examples of novel algorithm designs to address distributed optimization and stochastic optimization problems.
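As a small, self-contained illustration of this approach (a sketch under our own assumptions, not code accompanying the talk): for the gradient method x_{k+1} = x_k - alpha * grad f(x_k) applied to m-strongly convex, L-smooth functions, one can bisect over the candidate rate rho and, for each rho, check feasibility of a 2x2 linear matrix inequality in a Lyapunov variable P and a multiplier lambda, in the spirit of Lessard, Recht, and Packard (2016). The hypothetical helper rate_certified below performs that feasibility check with cvxpy:

```python
import cvxpy as cp

m, L, alpha = 1.0, 10.0, 0.1     # strong convexity, smoothness, step size alpha = 1/L

def rate_certified(rho):
    """Feasibility of the 2x2 LMI certifying |x_k - x*| <= c * rho^k for the
    gradient method x_{k+1} = A x_k + B u_k with A = 1, B = -alpha, u_k = grad f(x_k)."""
    P = cp.Variable(nonneg=True)      # Lyapunov certificate V(x) = P * |x - x*|^2
    lam = cp.Variable(nonneg=True)    # S-procedure multiplier for the gradient sector constraint
    A, B = 1.0, -alpha
    lmi = cp.bmat([
        [A * A * P - rho**2 * P - 2 * lam * m * L, A * B * P + lam * (m + L)],
        [A * B * P + lam * (m + L),                B * B * P - 2 * lam],
    ])
    prob = cp.Problem(cp.Minimize(0), [lmi << 0, P >= 1.0])   # P >= 1 removes scale freedom
    prob.solve(solver=cp.SCS)
    return prob.status in ("optimal", "optimal_inaccurate")

lo, hi = 0.0, 1.0                 # bisect on the convergence rate rho
for _ in range(30):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if rate_certified(mid) else (mid, hi)
print(f"certified rate: {hi:.4f}")  # theory predicts max(|1-alpha*m|, |1-alpha*L|) = 0.9
```

Note that the LMI involves only scalar variables regardless of the dimension of the optimization problem, which reflects the abstract's point that the semidefinite programs do not scale with problem size.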
Biography: Laurent Lessard is an Associate Professor of Mechanical and Industrial Engineering at Northeastern University. He was previously the Charles Ringrose Assistant Professor of Electrical and Computer Engineering at the University of Wisconsin-Madison and a faculty member at the Wisconsin Institute for Discovery. He received the B.A.Sc. in Engineering Science from the University of Toronto, and the M.S. and Ph.D. in Aeronautics and Astronautics from Stanford University. After completing his doctoral work, he was an LCCC Postdoc at Lund University in Sweden and a postdoctoral researcher at the University of California, Berkeley. Dr. Lessard is a recipient of the Hugo Schuck best paper award and the NSF CAREER award. His research interests include decentralized control, robust control, and optimization. For more information, see https://laurentlessard.com/