The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making

Hans de Bruijn, Martijn Warnier, Marijn Janssen*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

3 Citations (Scopus)
21 Downloads (Pure)

Abstract

Governments look at the potential of explainable artificial intelligence (XAI) to tackle criticisms of the opaqueness of algorithmic decision-making with AI. Although XAI is appealing as a solution for automated decisions, the wicked nature of the challenges governments face complicates its use. Wickedness means that the facts defining a problem are ambiguous and that there is no consensus on the normative criteria for solving it. In such a situation, the use of algorithms can result in distrust. Whereas much research advances XAI technology, the focus of this paper is on strategies for explainability. Three illustrative cases are used to show that explainable, data-driven decisions are often not perceived as objective by the public. The context might create strong incentives to contest and distrust the explanation of AI, and as a consequence, fierce resistance from society is encountered. To overcome the inherent problems of XAI, decision-specific strategies are proposed that can lead to societal acceptance of AI-based decisions. We suggest strategies to embrace explainable decisions and processes, co-create decisions with societal actors, move from an instrumental to an institutional approach, use competing and value-sensitive algorithms, and mobilize the tacit knowledge of professionals.
Original language: English
Article number: 101666
Number of pages: 8
Journal: Government Information Quarterly
Volume: 39
Issue number: 2
Publication status: Published - 2022

Keywords

  • Accountability
  • Algorithms
  • Artificial intelligence
  • Computational intelligence
  • Data-driven decision
  • E-government
  • Socio-tech
  • Transparency
  • Trust
  • XAI
