Artificial Trust as a Tool in Human-AI Teams

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › Peer-reviewed



Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, implementing such mechanisms in teams composed of both humans and AI (human-AI teams) remains a challenge, even though such teams are becoming increasingly prevalent. Agents in these teams should not only be trustworthy and promote appropriate trust from their human teammates, but also know when to trust a human teammate to perform a certain task. In this project, we study trust as a tool for artificial agents to achieve better teamwork. In particular, we want to build mental models of humans so that agents can assess human trustworthiness in the context of human-AI teamwork, taking into account factors such as the characteristics of the human teammate, the task, and the environment.
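As an illustration of the kind of mental model described in the abstract, the sketch below shows one possible way an agent could combine teammate, task, and environment characteristics into a single trustworthiness estimate. This is not the paper's method: the class name, factor names, weights, and threshold are all hypothetical assumptions chosen only to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class TrustModel:
    """Illustrative sketch (not from the paper): an agent's mental model of a
    human teammate's trustworthiness as a weighted combination of teammate,
    task, and environment factors, each scored in [0, 1]. Weights are
    hypothetical assumptions."""
    w_teammate: float = 0.5
    w_task: float = 0.3
    w_env: float = 0.2

    def estimate(self, teammate_skill: float, task_difficulty: float,
                 env_stability: float) -> float:
        """Return an estimated trustworthiness score in [0, 1].
        Harder tasks lower the estimate; stabler environments raise it."""
        return (self.w_teammate * teammate_skill
                + self.w_task * (1.0 - task_difficulty)
                + self.w_env * env_stability)

# Usage example: delegate a task to the human teammate only if the
# estimated trust clears a (hypothetical) threshold.
model = TrustModel()
trust = model.estimate(teammate_skill=0.8, task_difficulty=0.4, env_stability=0.9)
should_delegate = trust > 0.6
```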
Original language: English
Title of host publication: HRI '22
Subtitle of host publication: Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction
Place of publication: Piscataway
Publisher: IEEE Press
Number of pages: 3
ISBN (Print): 978-1-5386-8554-9
Publication status: Published - 2022
Event: HRI 2022: 17th Annual ACM/IEEE International Conference on Human-Robot Interaction - Online, Sapporo, Hokkaido, Japan
Duration: 7 Mar 2022 – 10 Mar 2022


Conference: HRI 2022: 17th Annual ACM/IEEE International Conference on Human-Robot Interaction
Abbreviated title: HRI '22
City: Sapporo, Hokkaido

Bibliographical note

Green Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project

Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.


Keywords:

  • HART
  • trustworthiness
  • trust
  • human-robot teams
  • human-agent
  • human-AI
  • hybrid intelligence
  • intelligent agents
