Human–AI Relationship in Healthcare

Mukta Joshi, Nicola Pezzotti, Jacob T. Browne

Research output: Chapter in Book/Conference proceedings/Edited volume › Chapter › Scientific


In the age of machine learning, deep learning and artificial intelligence (AI) are expected to improve our lives. Particularly in medicine and medical imaging, AI can make sense of tens if not hundreds of parameters, finding patterns and correlations that are difficult for humans to process. AI is expected to assist doctors in improving patient care and reducing their burden. Yet despite many papers showing that AI algorithms can match or outperform humans in different domains of medicine, few have been adopted into practice (Kelly et al., 2019). One of the major challenges is trust in and acceptance of AI results, and these issues are complex: confidence, trust, and uncertainty all influence the way humans make decisions with AI. Moreover, AI (deep learning algorithms in particular) is a "black box" to users and even to the creators of these algorithms, making it very difficult to adopt. Should humans trust AI? Do humans trust AI too much? This chapter explores the human–AI relationship. It begins with a discussion of trust and human interaction, then describes the expert–apprentice model as a lens for how AI could interact with clinicians. Recent technological developments and experience-design considerations are detailed, leading to a set of recommendations for designing explainable AI, or XAI.

Original language: English
Title of host publication: Explainable AI in Healthcare
Subtitle of host publication: Unboxing Machine Learning for Biomedicine
Publisher: CRC Press
Number of pages: 22
ISBN (Electronic): 9781000906394
ISBN (Print): 9781032367118
Publication status: Published - 2023
