Reasons underdetermination in meaningful human control

Atay Kozlovski*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

The rapid proliferation of AI systems has raised many concerns about safety and responsibility in their design and use. The philosophical framework of Meaningful Human Control (MHC) was developed in response to these concerns and aims to provide a standard for designing and evaluating such systems. While promising, the framework still requires further theoretical and practical refinement. This paper contributes to that effort by drawing on research in axiology and rational decision theory to identify a critical gap in the framework. Specifically, it argues that while ‘reasons’ play a central role in MHC, there has been little discussion of the possibility that, when weighed against each other, reasons may not always point to a single, rationally preferable course of action. I refer to these cases as instances of reasons underdetermination, and this paper discusses the need to address this issue within the MHC framework. The paper begins by providing an overview of the key concepts of the MHC framework and then examines the role of ‘reasons’ in the framework’s two main conditions: Tracking and Tracing. It then discusses the phenomenon of reasons underdetermination and shows how it poses a challenge to the achievement of both Tracking and Tracing.
Original language: English
Article number: 59
Number of pages: 15
Journal: Ethics and Information Technology
Volume: 27
Issue number: 4
DOIs
Publication status: Published - 2025

Keywords

  • Decision Support Systems
  • Incommensurability
  • Meaningful Human Control (MHC)
  • Moral Responsibility
  • Reasons
  • Tracking and Tracing
  • Underdetermination
