Code underlying the publication: Integrity-based Explanations for Fostering Appropriate Trust in AI Agents



This repository includes the software code that was developed for the publication titled "Integrity-based Explanations for Fostering Appropriate Trust in AI Agents".

Research Objective: How does the expression of different principles of integrity through explanations affect the appropriateness of a human's trust in an AI agent?
Type of research: Empirical
Code Environment: Microsoft Power Platform - Power Apps
Type of Code: Power Apps Solution. Solutions are the mechanism for implementing application lifecycle management (ALM) in Power Apps.
Method of data collection: Participants were provided guest credentials to log in to the Power Apps platform and interact with an AI agent to estimate the calories of a food plate. The AI agent provided different types of integrity-laden explanations to help the participant estimate the calories. Data was collected on the participant's reliance on the AI agent, coded as 0 (did not rely) or 1 (relied).
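As a minimal sketch of how the 0/1 reliance coding described above could be summarised, the snippet below computes an overall reliance rate and an "appropriate trust" rate (relying when the agent is correct, not relying when it is incorrect). The field names and the trial log are illustrative assumptions, not the actual study data or analysis code.

```python
def reliance_summary(trials):
    """Summarise 0/1 reliance data (sketch; field names are assumptions).

    Each trial is a dict with:
      relied     : 1 if the participant relied on the AI agent, else 0
      ai_correct : 1 if the AI agent's estimate was correct, else 0

    Appropriate trust here means relying when the agent is correct,
    or not relying when it is incorrect.
    """
    n = len(trials)
    reliance_rate = sum(t["relied"] for t in trials) / n
    appropriate_rate = sum(
        1 for t in trials if t["relied"] == t["ai_correct"]
    ) / n
    return reliance_rate, appropriate_rate


# Fabricated example log of four trials:
trials = [
    {"relied": 1, "ai_correct": 1},
    {"relied": 0, "ai_correct": 0},
    {"relied": 1, "ai_correct": 0},
    {"relied": 1, "ai_correct": 1},
]
rate, appropriate = reliance_summary(trials)  # 0.75, 0.75
```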

Bibliographical note

Contributor: TU Delft, Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS), Department of Intelligent Systems
Date made available: 4 Apr 2024
Publisher: TU Delft - 4TU.ResearchData
Date of data production: 2024 -