Embedding Values in Artificial Intelligence (AI) Systems

Ibo van de Poel*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

66 Citations (Scopus)
202 Downloads (Pure)

Abstract

Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. This account understands embodied values as the result of design activities intended to embed those values in such systems. AI systems are here understood as a special kind of sociotechnical system: like traditional sociotechnical systems, they are composed of technical artifacts, human agents, and institutions, but in addition they contain artificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system. The specific challenges and opportunities of embedding values in AI systems are discussed, and some lessons for better embedding values in AI systems are drawn.

Original language: English
Pages (from-to): 385-409
Number of pages: 25
Journal: Minds and Machines
Volume: 30
Issue number: 3
DOIs
Publication status: Published - 2020

Keywords

  • Artificial agent
  • Artificial intelligence
  • Ethics
  • Institution
  • Multi-agent system
  • Norms
  • Sociotechnical system
  • Value embedding
  • Values
