TY - JOUR
T1 - Task-Technology Fit of Artificial Intelligence-based clinical decision support systems
T2 - a review of qualitative studies
AU - Parsons, C.S.
AU - Zuiderwijk-van Eijk, A.M.G.
AU - Orchard, N.A.O.
AU - Oosterhoff, J.H.F.
AU - de Reuver, Mark
PY - 2025
Y1 - 2025
N2 - Machine learning algorithms show promise in assisting clinical decision-making; however, only a few have been successfully implemented in practice. To bridge this gap, it is essential to analyse clinicians’ perspectives on the compatibility of Artificial Intelligence-based clinical decision support systems (AI-CDSSs) with their clinical tasks. We therefore conducted a literature review of 21 empirical qualitative studies that examined the interaction between health professionals and AI-CDSSs. We synthesised the research through the lens of the Task-Technology Fit (TTF) model, analysing task, technology and individual characteristics of AI-CDSS applications, to identify design elements that are (mis)aligned with clinicians’ needs. Three key findings emerged from our analysis. First, clinicians often expressed scepticism about the clinical judgements of AI-CDSSs, particularly questioning the system’s ability to compete with clinical expertise in the absence of contextual information. Users valued AI primarily for specific strengths, such as identifying trends in patient trajectories, consolidating large datasets, recognising patterns, and comparing similar patient cases, but were hesitant to rely on it for clinical decisions. Second, actionability emerged as a desired characteristic of AI-CDSSs. For instance, clinicians particularly appreciated features of AI-CDSSs that enabled them to explore how different clinical actions might influence outcomes, as well as Explainable AI for identifying modifiable variables that impacted prediction scores, allowing them to take informed action. Third, we identified various ways AI-CDSSs could be used in clinical practice, including for patient prioritisation, patient monitoring, care acceleration, risk communication and workflow efficiency. In essence, AI-CDSSs functioned either as an alert system, preventing oversights, or as a tool for more informed decision making.
Our analysis challenges the assumption that AI-CDSSs add little value when clinicians disregard their predictions, as these systems frequently prompt clinicians to critically reassess their judgements through additional testing, consultation with colleagues, and other actions. Overall, our findings underscore the importance of an in-depth understanding of how AI-CDSSs are used in clinical practice. To optimise for effectiveness, the design of AI-CDSSs should prioritise supporting clinicians’ cognitive processes and information needs. This approach ensures that we move beyond the hype, focusing on the responsible integration of AI-CDSSs, and ultimately enhancing patient care.
AB - Machine learning algorithms show promise in assisting clinical decision-making; however, only a few have been successfully implemented in practice. To bridge this gap, it is essential to analyse clinicians’ perspectives on the compatibility of Artificial Intelligence-based clinical decision support systems (AI-CDSSs) with their clinical tasks. We therefore conducted a literature review of 21 empirical qualitative studies that examined the interaction between health professionals and AI-CDSSs. We synthesised the research through the lens of the Task-Technology Fit (TTF) model, analysing task, technology and individual characteristics of AI-CDSS applications, to identify design elements that are (mis)aligned with clinicians’ needs. Three key findings emerged from our analysis. First, clinicians often expressed scepticism about the clinical judgements of AI-CDSSs, particularly questioning the system’s ability to compete with clinical expertise in the absence of contextual information. Users valued AI primarily for specific strengths, such as identifying trends in patient trajectories, consolidating large datasets, recognising patterns, and comparing similar patient cases, but were hesitant to rely on it for clinical decisions. Second, actionability emerged as a desired characteristic of AI-CDSSs. For instance, clinicians particularly appreciated features of AI-CDSSs that enabled them to explore how different clinical actions might influence outcomes, as well as Explainable AI for identifying modifiable variables that impacted prediction scores, allowing them to take informed action. Third, we identified various ways AI-CDSSs could be used in clinical practice, including for patient prioritisation, patient monitoring, care acceleration, risk communication and workflow efficiency. In essence, AI-CDSSs functioned either as an alert system, preventing oversights, or as a tool for more informed decision making.
Our analysis challenges the assumption that AI-CDSSs add little value when clinicians disregard their predictions, as these systems frequently prompt clinicians to critically reassess their judgements through additional testing, consultation with colleagues, and other actions. Overall, our findings underscore the importance of an in-depth understanding of how AI-CDSSs are used in clinical practice. To optimise for effectiveness, the design of AI-CDSSs should prioritise supporting clinicians’ cognitive processes and information needs. This approach ensures that we move beyond the hype, focusing on the responsible integration of AI-CDSSs, and ultimately enhancing patient care.
KW - Artificial intelligence
KW - Clinical Decision Support System
KW - Task-Technology Fit
KW - Human-AI collaboration
KW - AI adoption
KW - Attitude of health personnel
UR - http://www.scopus.com/inward/record.url?scp=105020169802&partnerID=8YFLogxK
U2 - 10.1186/s12911-025-03237-8
DO - 10.1186/s12911-025-03237-8
M3 - Review article
SN - 1472-6947
VL - 25
JO - BMC Medical Informatics and Decision Making
JF - BMC Medical Informatics and Decision Making
IS - 1
M1 - 397
ER -