Challenge 2: Responsible Automation

With automation having an ever-increasing impact on all aspects of society, our challenge is to lead by guiding these developments in a socially responsible direction. The second foundational challenge of our current research is to develop the guidance, the tools and the practices to ensure the equity, privacy, safety and security of automation solutions.
This challenge is addressed in two themes. The first examines the impact of automation on society, considering responsible design in terms of the broader public good. We shift our perspective from individual control systems to their wider implications at a societal level. This involves identifying the effects of automation on social goals, defining metrics to be optimised (such as fairness and sustainability), and creating alternative design methods that incorporate these metrics.
We also investigate core ethical questions underpinning NCCR Automation research. Should all those subject to the decisions of automated systems be treated equally, or do some groups deserve greater care? For example, pedestrians are more vulnerable road users than car occupants, and children are more vulnerable than adults. In designing and integrating automated systems, should we conform to existing paradigms (for example, around trust and privacy) or develop new ones to suit new possibilities and benefits? What level of compromise is acceptable, or desirable, in balancing values such as privacy against the benefits of data-driven systems?
Theme 2.1: Responsible Design
Theme leads: Andrea Censi, Emilio Frazzoli
Once, automation was seen as a “hidden technology”. In an era where everything is interconnected, electrified, and embedded with intelligence, we must examine the impact of automation on society and ask whether that impact is positive.
Our approach to “responsible design” can be summarised as follows:
• A broadening of horizons: Our research is committed to examining the impact of automation beyond isolated control systems, investigating its implications at larger infrastructural and societal levels.
• A societal perspective: We shift our vantage point from the performance of a single control system (which often implies a firm’s profit-maximising goals) to one focused on the broader public good.
• Identification of societal goals: We pinpoint goals that are affected by automation, and define metrics to optimise in terms of sustainability, fairness, reduction of externalities, and so on.
• Creation of alternative design methods: We develop design methods that incorporate these new societal metrics.
This research spans all the diverse application areas within the scope of NCCR Automation research; we look for what is common across applications. Ideally, disparate work can converge into a single theory of responsible automation that prescribes systematic ways to incorporate responsibility metrics into system design and control problems.
Thread A: Sustainability-driven system co-design
Contributors: A. Censi, E. Frazzoli, S. Mastellone
This thread deals with control and platform co-design in a holistic way to pursue sustainability goals for a system, such as maximising the lifetime or minimising externalities in manufacturing and operation.
Thread B: Equitable mechanisms for sustainable energy systems
Contributors: S. Bolognani, F. Dörfler, A. Hannák, G. Hug
We address key challenges in the emerging context of highly renewable and distributed energy systems, focusing on the need for co-designing automated mechanisms and economic incentives to ensure fair, efficient, and reliable procurement of flexibility from distribution grids.
Thread C: Karma economies for fair and equitable access to public resources
Contributors: S. Bolognani, A. Censi, F. Dörfler, E. Frazzoli, H. Nax
The idea of Karma has emerged in our research as a solution for the fair and efficient allocation of common resources. Karma is a non-monetary public counter that tracks individual resource consumption: it can only be exchanged according to fairness-preserving rules in the context of a bidding process. We want to extend the application of Karma from its origins in mobility and congestion control to all other NCCR Automation domains, and to study the interaction of multiple Karma economies.
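To make the mechanism concrete, the following is a minimal illustrative sketch of a single Karma bidding round between two agents, not the actual NCCR mechanism: the agent names, bid resolution rule, and winner-pays-loser transfer are simplifying assumptions. The key property it illustrates is that karma is conserved and redistributed, so agents who win access often gradually lose priority to those who yield.

```python
"""Illustrative sketch of one Karma bidding round (simplified assumption,
not the published mechanism): two agents bid karma for a contested
resource; the higher bid wins, and the winner's bid is transferred to
the loser, conserving total karma in the economy."""
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    karma: int


def karma_round(a: Agent, b: Agent, bid_a: int, bid_b: int) -> Agent:
    """Resolve one contested access: the higher bidder wins and pays the loser."""
    assert 0 <= bid_a <= a.karma and 0 <= bid_b <= b.karma, "bids limited by karma held"
    winner, loser, bid = (a, b, bid_a) if bid_a >= bid_b else (b, a, bid_b)
    winner.karma -= bid  # winner spends karma for priority...
    loser.karma += bid   # ...which is redistributed, so total karma is conserved
    return winner


alice, bob = Agent("alice", 10), Agent("bob", 10)
winner = karma_round(alice, bob, bid_a=6, bid_b=3)
print(winner.name, alice.karma, bob.karma)  # alice wins; total karma stays at 20
```

Because the total is conserved, repeated rounds act as a self-balancing priority scheme without monetary payments, which is the property that makes Karma attractive for fair access to public resources.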
Theme 2.2: Developing Ethics for Automation: New Paradigms of Equity & Trust
Theme leads: Bernice S. Elger, David Shaw
One key question emerging from the NCCR Automation’s research is whether all those subject to the decisions of automated systems should be treated equally, or whether the most vulnerable deserve greater protection or prioritisation in the name of justice. For example, pedestrians are more vulnerable road users than car occupants, and children are more vulnerable than adults.
Automated systems can be integrated into society in a way that conforms with pre-existing societal paradigms (including trust, explainability and privacy), or in accordance with new paradigms that may prove more ethical, given the potential benefits of automation. When a trade-off is required to achieve efficiency (sacrificing data privacy, for instance), we must learn what level of compromise is acceptable, or desirable. Under this theme, we investigate two key questions: What is fairness in the context of automation? And how should existing ethical paradigms, including trust, explainability and privacy, be modified for automated systems?
Thread A: Energy and automation literacy impact on local flexibility provision
Contributors: P. Heer, D. Shaw
The transition to a more electrified, more distributed and more automated energy system comes with specific literacy challenges. For example, end users of energy are typically not aware of the impact and implications of their actions for the energy system infrastructure – which limits the potential flexibility of energy consumption. We aim to address the problem from two angles: How can we increase stakeholders’ energy and automation literacy? And how can we make automation behaviour more interpretable?
Thread B: Definitions of fairness in control
Contributors: A. Hannák, D. Shaw
To date, despite extensive attention, there is still no common agreement on how to define fairness in machine learning (that is, how automated decisions affecting individuals can be considered “fair”). We are investigating not only how to arrive at the most suitable definition, but also how to transfer that definition to control theory.
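Two of the candidate definitions commonly discussed in the machine-learning literature can be sketched as simple group-level statistics. The groups, labels, and decisions below are hypothetical toy data; the point is that the two metrics measure different things and can disagree on the same decisions, which is part of why no single definition has won out.

```python
"""Two common candidate fairness metrics for binary automated decisions,
computed on hypothetical toy data (groups "A"/"B" are illustrative)."""


def rate(vals):
    """Fraction of positive decisions in a list (0.0 if empty)."""
    return sum(vals) / len(vals) if vals else 0.0


def demographic_parity_gap(decisions, groups):
    """|P(decision=1 | group A) - P(decision=1 | group B)|."""
    a = [d for d, g in zip(decisions, groups) if g == "A"]
    b = [d for d, g in zip(decisions, groups) if g == "B"]
    return abs(rate(a) - rate(b))


def tpr_gap(decisions, labels, groups):
    """Equal-opportunity criterion: gap in true-positive rates between groups."""
    a = [d for d, y, g in zip(decisions, labels, groups) if g == "A" and y == 1]
    b = [d for d, y, g in zip(decisions, labels, groups) if g == "B" and y == 1]
    return abs(rate(a) - rate(b))


groups    = ["A", "A", "A", "B", "B", "B"]
labels    = [1, 0, 1, 1, 1, 0]  # hypothetical ground-truth outcomes
decisions = [1, 0, 0, 1, 1, 0]  # hypothetical automated decisions

print(demographic_parity_gap(decisions, groups))  # gap in positive-decision rates
print(tpr_gap(decisions, labels, groups))         # gap in true-positive rates
```

On this toy data the two metrics give different gaps, illustrating that a system can look closer to fair under one definition than under another; transferring such definitions into control-theoretic objectives is the open question this thread addresses.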
Thread C: Ethical issues in human-informed decision making
Contributors: G. Ferrari Trecate, D. Shaw
Important ethical issues around gathering human input for automated systems range from privacy and consent issues (which constrain efficiency and reliability) to possible fairness concerns if incentives are offered to encourage data sharing. We investigate both what can be accomplished within existing ethical paradigms, and how those paradigms may be modified to facilitate improved system design.