The imperative of diversity and equity for the adoption of responsible AI in healthcare

Denise E. Hilling, Imane Ihaddouchen, Stefan Buijsman, Reggie Townsend, Diederik Gommers, Michel E. van Genderen*

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › Peer-reviewed

Abstract

Artificial Intelligence (AI) in healthcare holds transformative potential but faces critical challenges in ethical accountability and systemic inequity. Biases in AI models, such as lower diagnosis rates for Black women or gender stereotyping in large language models, highlight the urgent need to address historical and structural inequalities in data and development processes. Disparities in clinical trials and datasets, which are often skewed toward high-income, English-speaking regions, amplify these issues. Moreover, the underrepresentation of marginalized groups among AI developers and researchers exacerbates these challenges. To ensure equitable AI, diverse data collection, federated data-sharing frameworks, and bias-correction techniques are essential. Structural initiatives, such as fairness audits, transparent AI model development processes, and early registration of clinical AI models, alongside inclusive global collaborations like TRAIN-Europe and CHAI, can drive responsible AI adoption. Prioritizing diversity in datasets and among developers and researchers, as well as implementing transparent governance, will foster AI systems that uphold ethical principles and deliver equitable healthcare outcomes globally.

Original language: English
Article number: 1577529
Journal: Frontiers in Artificial Intelligence
Volume: 8
Publication status: Published - 2025

Keywords

  • artificial intelligence
  • bias
  • diversity
  • equity
  • healthcare
