Hard choices in artificial intelligence

Roel Dobbe*, Thomas Krendl Gilbert, Yonatan Mintz

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

As AI systems are integrated into high-stakes social domains, researchers now examine how to design and operate them in a safe and ethical manner. However, the criteria for identifying and diagnosing safety risks in complex social contexts remain unclear and contested. In this paper, we examine the vagueness in debates about the safety and ethical behavior of AI systems. We show how this vagueness cannot be resolved through mathematical formalism alone, but instead requires deliberation about the politics of development as well as the context of deployment. Drawing from a new sociotechnical lexicon, we redefine vagueness in terms of distinct design challenges at key stages in AI system development. The resulting framework of Hard Choices in Artificial Intelligence (HCAI) empowers developers by 1) identifying points of overlap between design decisions and major sociotechnical challenges; and 2) motivating the creation of stakeholder feedback channels so that safety issues can be exhaustively addressed. As such, HCAI contributes to a timely debate about the status of AI development in democratic societies, arguing that deliberation should be the goal of AI Safety, not just the procedure by which it is ensured.

Original language: English
Article number: 103555
Journal: Artificial Intelligence
Volume: 300
Publication status: Published - 2021

Keywords

  • AI ethics
  • AI governance
  • AI regulation
  • AI safety
  • Philosophy of artificial intelligence
  • Sociotechnical systems
