Assessing artificial trust in human-agent teams: A conceptual model

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

As intelligent agents become humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e., a belief regarding the human's trustworthiness. We view artificial trust as beliefs about the human's competence and willingness, and we study which internal factors (krypta) of the human may play a role in the agent's assessment of artificial trust. Furthermore, we investigate which observable measures (manifesta) an agent may take into account as cues for the human teammate's krypta. This paper proposes a conceptual model of artificial trust for a specific task during human-agent teamwork. Based on the literature and a preliminary user study, the model proposes observable measures related to human trustworthiness (ability, benevolence, integrity) and strategy (perceived cost and benefit) as predictors of willingness and competence.
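
To make the model's structure concrete, here is a minimal illustrative sketch in Python of how an agent might map manifesta cues to task-specific krypta beliefs. The class names, the cue set, and the linear weighting are assumptions of this summary for illustration only; the paper proposes the conceptual model but does not prescribe this implementation.

```python
from dataclasses import dataclass


@dataclass
class ManifestaCues:
    """Hypothetical observable cues (manifesta) about a human teammate.

    Each value is assumed to be a normalized score in [0, 1].
    """
    ability: float            # e.g. observed task performance
    benevolence: float        # e.g. helping behaviour towards the agent
    integrity: float          # e.g. consistency of statements and actions
    perceived_benefit: float  # human's perceived benefit of doing the task
    perceived_cost: float     # human's perceived cost of doing the task


@dataclass
class ArtificialTrust:
    """The agent's task-specific beliefs (krypta estimates) about the human."""
    competence: float
    willingness: float


def assess_trust(cues: ManifestaCues) -> ArtificialTrust:
    """Map manifesta cues to krypta beliefs.

    The linear form and the weights below are illustrative assumptions,
    not values taken from the paper.
    """
    # Competence is inferred here mainly from ability-related cues.
    competence = cues.ability
    # Willingness combines trustworthiness cues with the human's strategy
    # (perceived benefit vs. cost), here as a simple weighted average.
    willingness = (
        0.3 * cues.benevolence
        + 0.3 * cues.integrity
        + 0.4 * max(0.0, cues.perceived_benefit - cues.perceived_cost)
    )
    return ArtificialTrust(
        competence=min(1.0, competence),
        willingness=min(1.0, willingness),
    )


# Example: a capable teammate who sees little benefit in the task
# yields high competence but lower willingness.
trust = assess_trust(ManifestaCues(0.9, 0.7, 0.8, 0.3, 0.5))
print(trust)  # ArtificialTrust(competence=0.9, willingness=0.45)
```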
Original language: English
Title of host publication: IVA 2022 - Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents
Pages: 1-3
ISBN (Electronic): 978-1-4503-9248-8
DOIs
Publication status: Published - 2022

Publication series

Name: IVA 2022 - Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents

Keywords

  • trustworthiness
  • artificial trust
  • intelligent agents
  • human-agent collaboration
  • human-agent interaction
  • trust
  • trust metrics
  • human-agent teaming
  • human-agent teamwork
