Dissecting scientific explanation in AI (sXAI): A case for medicine and healthcare

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Explanatory AI (XAI) is on the rise, gaining enormous traction with the computational community, policymakers, and philosophers alike. This article contributes to this debate by first distinguishing scientific XAI (sXAI) from other forms of XAI. It further advances a structure for bona fide sXAI, while remaining neutral regarding preferences for theories of explanation. Three core components are under study, namely: i) the structure for bona fide sXAI, consisting in elucidating the explanans, the explanandum, and the explanatory relation for sXAI; ii) the pragmatics of explanation, which includes a discussion of the role of multiple agents receiving an explanation and the context within which the explanation is given; and iii) a discussion of Meaningful Human Explanation, an umbrella concept for the different metrics required for measuring the explanatory power of explanations and the involvement of human agents in sXAI. The AI systems of interest in this article are those utilized in medicine and the healthcare system. The article also critically addresses current philosophical and computational approaches to XAI. Amongst the main objections, it argues that classifications have long been interpreted as explanations, when the two should be kept separate.

Original language: English
Article number: 103498
Journal: Artificial Intelligence
Volume: 297
Publication status: Published - 2021

Keywords

  • Explainable AI
  • Interpretable AI
  • Medical AI
  • Scientific explanation
  • sXAI
