Harmful epistemic dependence on medical machine learning and its moral implications

Giorgia Pozzi*, Stefan Buijsman, Jeroen van den Hoven

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Advances in machine learning (ML)-based systems in medicine give rise to pressing epistemological and ethical questions. Clinical decisions are increasingly made in highly digitised work environments, which we call artificial epistemic niches. Considering the case of ML systems in life-critical healthcare settings, we investigate (1) when users’ reliance on these systems can be characterised as epistemic dependence and (2) how this dependence turns into what we refer to as harmful epistemic dependence of clinical professionals on medical ML. The latter occurs when the impossibility of critically assessing the soundness of a system’s output in situ implies a moral obligation to comply with its recommendation, since a failure to do so constitutes a moral risk that cannot be justified then and there. We analyse the epistemic and moral consequences of harmful epistemic dependence for the status of medical professionals. We conclude by assessing how a suitable design of the epistemic niche can address the problem.

Original language: English
Article number: jme-2024-110552
Journal: Journal of Medical Ethics
Publication status: Published - 2025

Keywords

  • Ethics
  • Philosophy

