Testimonial injustice in medical machine learning

Research output: Contribution to journal › Article › Scientific › peer-review

19 Citations (Scopus)
40 Downloads (Pure)

Abstract

Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems' role in mediating patient-physician relations. I thereby consider how ML systems may silence patients' voices and relativise the credibility of their opinions, undermining their overall credibility status without valid moral and epistemic justification. More specifically, I argue that withholding credibility due to how ML systems operate can be particularly harmful to patients and, apart from adverse outcomes, qualifies as a form of testimonial injustice. I make my case for testimonial injustice in medical ML by considering ML systems currently used in the USA to predict patients' risk of misusing opioids (automated Prescription Drug Monitoring Programmes, PDMPs for short). I argue that the locus of testimonial injustice in ML-mediated medical encounters lies in the fact that these systems are treated as markers of trustworthiness against which patients' credibility is assessed. I further show how ML-based PDMPs exacerbate and propagate social inequalities at the expense of vulnerable social groups.
Original language: English
Pages (from-to): 536-540
Number of pages: 5
Journal: Journal of Medical Ethics
Volume: 49
Issue number: 8
Publication status: Published - 2023

Keywords

  • Ethics, Medical
