Why and How Should We Explain AI?

Stefan Buijsman*

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Why should we explain opaque algorithms? Here four papers are discussed that argue that, in fact, we don’t have to. Explainability, according to them, isn’t needed for trust in algorithms, nor is it needed for other goals we might have. I give a critical overview of these arguments, showing that there is still room to think that explainability is required for responsible AI. With that in mind, the second part of the paper looks at how we might achieve this end goal. Rather than starting from technical explainability tools, I highlight accounts of explanation in philosophy that might inform what those technical tools should ultimately deliver. While there is disagreement here on what constitutes an explanation, the three accounts surveyed offer a good overview of the current theoretical landscape in philosophy and of what information might constitute an explanation. As such, they can hopefully inspire improvements to the technical explainability tools.
Original language: English
Title of host publication: Human-Centered Artificial Intelligence - Advanced Lectures
Editors: Mohamed Chetouani, Virginia Dignum, Paul Lukowicz, Carles Sierra
Publisher: Springer
Pages: 196-215
Number of pages: 20
ISBN (Print): 9783031243486
DOIs
Publication status: Published - 2023
Event: 18th European Advanced Course on Artificial Intelligence, ACAI 2021 - Berlin, Germany
Duration: 11 Oct 2021 – 15 Oct 2021

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13500 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 18th European Advanced Course on Artificial Intelligence, ACAI 2021
Country/Territory: Germany
City: Berlin
Period: 11/10/21 – 15/10/21

Bibliographical note

Green Open Access added to TU Delft Institutional Repository as part of the ‘You share, we take care!’ – Taverne project (https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • AI ethics
  • Explainability
  • Trust
