System Safety and Artificial Intelligence

R.I.J. Dobbe*

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Chapter › Scientific › peer-review


Abstract

This chapter formulates seven lessons for preventing harm in artificial intelligence (AI) systems, based on insights from the field of system safety for software-based automation in safety-critical domains. New applications of AI across societal domains, public organizations, and infrastructures come with new hazards, which lead to new forms of harm, both grave and pernicious. The chapter addresses the lack of consensus for diagnosing and eliminating new AI system hazards. For decades, the field of system safety has dealt with accidents and harm in safety-critical systems governed by varying degrees of software-based automation and decision-making. This field embraces the core assumption of systems and control that AI systems cannot be safeguarded by technical design choices on the model or algorithm alone, instead requiring an end-to-end hazard analysis and design frame that includes the context of use, impacted stakeholders, and the formal and informal institutional environment in which the system operates. Safety and other values are then inherently socio-technical and emergent system properties that require design and control measures to instantiate them across the technical, social, and institutional components of a system. This chapter honors system safety pioneer Nancy Leveson by situating her core lessons within today’s AI system safety challenges. For every lesson, concrete tools are offered for rethinking and reorganizing the safety management of AI systems, both in design and governance. This history tells us that effective AI safety management requires transdisciplinary approaches and a shared language that allows involvement of all levels of society.
Original language: English
Title of host publication: The Oxford Handbook of AI Governance
Publisher: Oxford University Press
Pages: C67.S1–C67.S18
ISBN (Electronic): 9780197579350
ISBN (Print): 9780197579329
DOIs
Publication status: Published - 2022

Bibliographical note

Green Open Access added to TU Delft Institutional Repository through the ‘You share, we take care!’ Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • artificial intelligence
  • harms
  • audits
  • culture
  • safety
  • system safety
  • governance
  • automation
  • systems and control
