More Similar Values, More Trust? - the Effect of Value Similarity on Trust in Human-Agent Interaction

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

As AI systems become increasingly involved in decision making, it is important that they elicit appropriate levels of trust from their users. To achieve this, we first need to understand which factors influence trust in AI. We identify a research gap regarding the role of personal values in trust in AI. This paper therefore studies how Value Similarity (VS) between a human and an agent influences the human's trust in that agent. To explore this, 89 participants teamed up with five different agents, which were designed with varying levels of value similarity to the participants' own values. In a within-subjects, scenario-based experiment, the agents gave suggestions on what to do when entering a building to save a hostage. We analyzed the agents' scores on subjective value similarity and trust, together with qualitative data from open-ended questions. Our results show that agents rated as having more similar values also scored higher on trust, indicating a positive relationship between the two. With this result, we add to the existing understanding of human-agent trust by providing insight into the role of value similarity.
Original language: English
Title of host publication: AIES '21: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society
Publisher: ACM DL
Pages: 777-783
Number of pages: 7
ISBN (Electronic): 978-1-4503-8473-5
DOIs
Publication status: Published - 21 Jul 2021

Keywords

  • artificial agents
  • human-AI interaction
  • human-computer interaction
  • intelligent agents
  • trust
  • value similarity
  • values

