Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework

Anna-Sophie Ulfert, Eleni Georganta, Carolina Centeio Jorge, Siddharth Mehrotra, Myrthe Tielman

Research output: Contribution to journal › Article › Scientific › Peer-review


Abstract

Intelligent systems are increasingly entering the workplace, gradually shifting from technologies that support work processes to artificially intelligent (AI) agents that become team members. A deep understanding of effective human-AI collaboration within the team context is therefore required. Both the psychology and computer science literatures emphasize the importance of trust when humans interact either with human team members or with AI agents. However, empirical work and theoretical models that combine these research fields and define team trust in human-AI teams are scarce. Furthermore, they often fail to integrate central aspects, such as the multilevel nature of team trust and the role of AI agents as team members. Building on an integration of current literature on trust in human-AI teaming across different research fields, we propose a multidisciplinary framework of team trust in human-AI teams. The framework highlights the different trust relationships that exist within human-AI teams and acknowledges the multilevel nature of team trust. We discuss the framework’s potential for human-AI teaming research and for the design and implementation of trustworthy AI team members.
Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: European Journal of Work and Organizational Psychology
DOIs
Publication status: Published - 2023

