Modelling Trust in Human-AI Interaction: Doctoral Consortium

Research output: Contribution to conference › Abstract › Scientific


Abstract

Trust is an important element of any interaction, but especially when we interact with a piece of technology that does not think the way we do. AI systems therefore need to understand how humans trust them and what to do to promote appropriate trust. The aim of this research is to study trust through both a formal and a social lens. We will work on formal models of trust, with a focus on the social nature of trust, in order to represent how humans trust AI. We will then employ methods from human-computer interaction research to study whether these models work in practice, and what systems would ultimately need in order to elicit appropriate levels of trust from their users. The context of this research is AI agents that interact with their users to offer personal support.
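To give a concrete flavour of what a formal, value-based trust model might look like, here is a minimal sketch in Python. It assumes, following the keywords "Values" and "Value Similarity" below, that a human's trust in an agent grows with the similarity between their value profiles. The vector encoding of values, the cosine measure, and the mapping from similarity to trust are all illustrative choices for this sketch, not the model proposed in the abstract.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length value profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def estimated_trust(human_values, agent_values):
    """Map similarity in [-1, 1] to a trust estimate in [0, 1].

    Hypothetical mapping: a linear rescaling, chosen only to
    illustrate the idea that more similar values yield more trust.
    """
    return (cosine_similarity(human_values, agent_values) + 1) / 2

# Example: importance ratings for values such as
# [benevolence, security, autonomy, achievement].
human = [0.9, 0.7, 0.5, 0.2]
agent = [0.8, 0.6, 0.6, 0.3]
print(f"estimated trust: {estimated_trust(human, agent):.2f}")
```

A model like this would be only the formal starting point; the research described above would then test with human-computer interaction methods whether such estimates track how users actually trust the agent.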
Original language: English
Pages: 1826-1828
Publication status: Published - 3 May 2021
Event: 20th International Conference on Autonomous Agents and Multiagent Systems - Virtual/online event due to COVID-19
Duration: 3 May 2021 – 7 May 2021
Conference number: 20

Conference

Conference: 20th International Conference on Autonomous Agents and Multiagent Systems
Abbreviated title: AAMAS 2021
Period: 3/05/21 – 7/05/21

Bibliographical note

The author thanks Myrthe L. Tielman and Catholijn M. Jonker for their supervision and support.

Keywords

  • Trust
  • AI agents
  • Values
  • Value Similarity
  • Social Situations

