Description
As machines’ autonomy increases, so does their capacity to learn and adapt to humans in collaborative scenarios. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions, and the way they are communicated, can affect the human’s trust, which in turn affects how the human collaborates. With the goal of maintaining mutual appropriate trust between the human and the machine, we ran a user study to investigate the role of task-based willingness (e.g., a human’s preferences over tasks) and its communication in AT-based decision-making. In this user study, participants interacted and collaborated with an artificial agent in a 2D grid-world. Objective metrics were collected during the experiment; subjective metrics (through questionnaires) and answers to open questions were collected both before the experiment and after each interaction. We share the data collected and used in our analysis in this repository.
This data is used for the publication "I Know You're Capable, But Are You Willing?": Allocating Tasks in Human-Machine Teams.
| Date made available | 3 Oct 2025 |
|---|---|
| Publisher | TU Delft - 4TU.ResearchData |