Understanding the Affordances and Constraints of Explainable AI in Safety-Critical Contexts: A Case Study in Dutch Social Welfare

Aleksander Buszydlik*, Patrick Altmeyer, Roel Dobbe, Cynthia C.S. Liem

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

Abstract

We focus on explainability as a desideratum for automated decision-making systems, rather than for models alone. Although the explainable artificial intelligence (XAI) paradigm offers an impressive variety of solutions to increase the transparency of automated decisions, XAI contributions rarely account for the complete systems—social and institutional environments—where models operate. Our work focuses on one such system in the domain of social welfare, which increasingly turns to automated decision-making to carry out targeted digital surveillance. Specifically, we present a case study of a black-box machine learning model previously used in a major Dutch city to support its officials in the task of detecting fraud. Employing analyses established in the field of system safety, we identify five types of hazards that could have occurred after the introduction of the model. For each of them, we reason about the potential value of XAI interventions as hazard mitigation strategies. The case study illustrates how the deployment of models may impact processes that exist far upstream and downstream of their decision logic, making explainability and/or interpretability insufficient to guarantee the systems’ safe operation. In many cases, XAI techniques may only be able to reasonably address a small fraction of hazards related to the use of algorithms; several major hazards that we identify would still have posed risks if the system had relied on an interpretable model. Thus, we empirically demonstrate that the values at the heart of XAI research, such as responsibility, safety, and transparency, ultimately necessitate a broader outlook on automated decision-making systems.

Original language: English
Title of host publication: Electronic Participation - 17th IFIP WG 8.5 International Conference, ePart 2025, Proceedings
Editors: Sara Hofmann, Lieselot Danneels, Roel Dobbe, Jolien Ubacht, Anna-Sophie Novak, Peter Parycek, Gerhard Schwabe, Vera Spitzer
Publisher: Springer
Pages: 118-136
Number of pages: 19
ISBN (Print): 9783032025142
Publication status: Published - 2026
Event: 17th IFIP WG 8.5 International Conference on Electronic Participation, ePart 2025 - Krems, Austria
Duration: 31 Aug 2025 - 4 Sept 2025

Publication series

Name: Lecture Notes in Computer Science
Volume: 15978 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 17th IFIP WG 8.5 International Conference on Electronic Participation, ePart 2025
Country/Territory: Austria
City: Krems
Period: 31/08/25 - 04/09/25

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.

Keywords

  • Automated decision-making
  • Explainable artificial intelligence
  • Social welfare
  • System safety
  • Technology audits
