The deployment of autonomous agents and multiagent systems offers a promising prospect for more reliable and efficient performance in various domains. However, at the current stage, multiagent systems (MAS) developed in domains such as logistics and governance mainly aim for reliability and efficiency but are incapable of reasoning about and distinguishing different notions of responsibility in such sociotechnical systems. We understand sociotechnical systems as “multistakeholder cyber-physical systems.” Ensuring ethical and trustworthy behavior of such systems requires satisfying sociotechnical requirements (requirements that are on one hand technical and on the other hand relate to social concerns). For instance, ensuring that an autonomous vehicle behaves in a responsible and ethical way requires capturing the technical requirements (e.g., the ability to detect barriers) as well as the contextual social norms and values (e.g., the ability to learn and reason about driving norms in a city). In particular, responsibility—in its various forms—is a notion with sociotechnical characteristics, as it relates to sociotechnical concepts such as ability, knowledge, task, and norm. Thus, enabling agents in a MAS to reason about responsibility requires distinguishing different forms of responsibility and articulating how each form conceptually relates to strategic ability (and distribution of power); epistemic ability (and distribution of knowledge); tasks (and distribution of obligations); and norms and values (and distribution of preferences).
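To make the conceptual mapping above concrete, the paragraph's four dimensions (strategic ability, epistemic ability, tasks, norms) can be sketched as attributes of an agent, from which distinct responsibility forms are derived. This is a purely illustrative sketch, not a framework from the article; all names (`Agent`, `responsibility_forms`, the four forms and their conditions) are hypothetical simplifications.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent model reflecting the four sociotechnical dimensions."""
    name: str
    can_influence: set = field(default_factory=set)  # strategic ability: outcomes the agent has power over
    knows: set = field(default_factory=set)          # epistemic ability: outcomes the agent is aware of
    obligations: set = field(default_factory=set)    # tasks: outcomes the agent is obliged to handle

def responsibility_forms(agent: Agent, outcome: str, violated_norms: set) -> set:
    """Illustrative (simplified) attribution of responsibility forms for an outcome.

    The conditions below are a toy rendering of the distinctions in the text:
    strategic ability grounds responsibility, knowledge adds blameworthiness,
    assigned tasks ground accountability, and norm violation adds sanctionability.
    """
    forms = set()
    if outcome in agent.can_influence:
        forms.add("responsible")
        if outcome in agent.knows:
            forms.add("blameworthy")
    if outcome in agent.obligations:
        forms.add("accountable")
        if outcome in violated_norms:
            forms.add("sanctionable")
    return forms
```

For example, an agent that could influence an outcome, knew about it, and was tasked with it would, under this toy model, be attributed all four forms when the outcome violates a norm, whereas an agent with only strategic ability but no knowledge would be responsible without being blameworthy.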
Abstract:
Ensuring trustworthy performance of autonomous agents and multiagent systems (MAS) requires computational methods and formal tools to support reasoning about different forms of responsibility. In particular, such tools are needed to support identifying agents or agent groups that are responsible, blameworthy, accountable, or sanctionable for outcomes of collective decisions, for fulfilling tasks, or for adhering to norms and social values. As a step toward developing computational frameworks to represent, reason about, and distinguish these forms of responsibility in MAS, for the first time, we present sociotechnical characteristics of these notions of responsibility, identify their requirements, and discuss their applicability for coordinating MAS and ensuring their trustworthiness. This is a step toward establishing a research agenda on how computational techniques for reasoning about and distinguishing different forms of responsibility contribute to the transformation toward ethical and trustworthy autonomous systems.
Published in: IEEE Internet Computing (Volume 25, Issue 6, Nov.-Dec. 2021)