The ethics and epistemology of explanatory AI in medicine and healthcare

Juan M. Durán*, Martin Sand, Karin Jongsma

*Corresponding author for this work

Research output: Contribution to journal › Editorial › Scientific › peer-review

2 Citations (Scopus)

Abstract

AI is believed to have the potential to radically change modern medicine. Medical AI systems are developed to improve the diagnosis, prediction, and treatment of a wide array of medical conditions. AI is assumed to enable more accurate and efficient ways to diagnose diseases and "to restore the precious and time-honored connection and trust – the human touch – between patients and doctors" (Topol, 2019, p. 18) by enabling healthcare professionals to spend more time with their patients. Sophisticated self-learning AI systems that do not follow predetermined decision rules – often referred to as black boxes (Esteva et al., 2019; Shortliffe et al., 2018) – have spawned philosophical debate: the black-box nature of AI systems is believed to be a major ethical challenge for their use in medicine, and it remains disputed whether explainability is philosophically and computationally possible. This special issue focuses on the ethics and epistemology of explainability in medical AI, broadly construed.
Original language: English
Article number: 42
Journal: Ethics and Information Technology
Volume: 24
Issue number: 4
Publication status: Published - 2022
