Theory Moonshot

Our goal is to ask the big questions and take the big risks required to initiate a new domain of study in our field. With the game-changing questions we call the "Theory Moonshot", we aim to lay the theoretical foundations for an entirely new field of research.

This is not a top-down process, but a long-term endeavour grounded in spontaneous collaborations between our ambitious, risk-seeking researchers. Through dedicated processes we push our team to think outside the box, and we give them the space and freedom to do so. The following topics have emerged so far.

 

Beyond Timescale Separation: 

Many algorithms in online optimisation, game theory, reinforcement learning and related areas rely on timescale separation to establish their theoretical properties. Roughly speaking, this requires one part of the process to come (approximately) to equilibrium before another part makes its move; near-equilibrium properties are then exploited, for example to derive sufficient conditions for the convergence of the method.

We ask whether timescale separation is in fact necessary, or merely a matter of mathematical convenience. For which classes of problems can one develop novel, faster algorithms that do not rely on timescale separation? What do these algorithms look like? And for which classes of problems can we prove impossibility theorems establishing that timescale separation is necessary?
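To make the idea concrete, here is a minimal numerical sketch (our own illustration, not a result of this programme) of two-timescale gradient descent-ascent on a simple quadratic saddle-point problem. The vanishing step-size ratio is the timescale-separation assumption: the fast (inner) player effectively equilibrates before the slow (outer) player moves. All names and constants below are illustrative.

```python
import numpy as np

def two_timescale_gda(steps=20000, a0=0.5, b0=0.2):
    # Saddle-point problem: min_x max_y f(x, y) = 0.5*x^2 + x*y - 0.5*y^2
    # grad_x f = x + y, grad_y f = x - y; unique saddle point at (0, 0).
    x, y = 2.0, -1.5
    for n in range(1, steps + 1):
        a_n = a0 / np.sqrt(n)   # fast step size for the inner (max) player
        b_n = b0 / n            # slow step size for the outer (min) player
        # b_n / a_n -> 0: y equilibrates on a faster timescale than x,
        # which is the timescale-separation assumption used in the analysis.
        y += a_n * (x - y)      # gradient ascent on f in y
        x -= b_n * (x + y)      # gradient descent on f in x
    return x, y

x, y = two_timescale_gda()
```

The questions posed above ask, for instance, whether comparable convergence guarantees can survive when `a_n` and `b_n` shrink at the same rate, so that neither player waits for the other.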

 

Lifelong Learning Control:  

Future control systems will exist in ever-changing environments, in concert with other complex data-driven control systems and human operators. They will constantly need to adapt to changing conditions, e.g., to plant ageing and evolving network structures. Such changes lead to substantial non-stationarity and distribution shifts, violating the i.i.d. assumption central to most machine learning approaches.  

Our high-level goal is to develop a theory of data-driven control adaptivity for complex systems. This will require rethinking adaptive control in the era of modern machine learning, and going beyond the stationarity assumptions commonly made in deep reinforcement learning and data-driven control.
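As a toy illustration of adaptation under drift (our own sketch, not a method from this programme), classical recursive least squares with a forgetting factor can track a slowly ageing plant gain: the factor `lam < 1` discounts old data, which is precisely what relaxes the stationarity assumption. All names here (`rls_forgetting`, `lam`, the drift model) are hypothetical.

```python
import numpy as np

def rls_forgetting(us, ys, lam=0.95):
    """Scalar recursive least squares with forgetting factor lam,
    estimating a slowly drifting gain theta in y = theta * u + noise."""
    theta, p = 0.0, 100.0                  # initial estimate and covariance
    estimates = []
    for u, y in zip(us, ys):
        k = p * u / (lam + p * u * u)      # update gain
        theta += k * (y - theta * u)       # correct with the prediction error
        p = (p - k * u * p) / lam          # lam < 1 keeps discounting old data
        estimates.append(theta)
    return np.array(estimates)

rng = np.random.default_rng(0)
n = 2000
us = rng.normal(size=n)
theta_true = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n) / n)  # plant "ages"
ys = theta_true * us + 0.05 * rng.normal(size=n)
est = rls_forgetting(us, ys)
```

Such schemes handle slow parametric drift in a single scalar gain; the open questions above concern what replaces them for complex, networked, data-driven systems where the drift itself is high-dimensional and structured.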

While we are already investigating this topic under the foundational challenge of Automation enablers, we believe that the most important questions in lifelong learning control are yet to be discovered.