TY - GEN
T1 - Questionnaire Items for Evaluating Artificial Social Agents - Expert Generated, Content Validated and Reliability Analysed
AU - Fitrianie, Siska
AU - Bruijnes, Merijn
AU - Li, Fengxiang
AU - Brinkman, Willem Paul
PY - 2021
Y1 - 2021
N2 - In this paper, we report on a multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 90 researchers worldwide, that investigated the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). This joint effort previously generated a unified set of 19 constructs that capture more than 80% of the constructs used in empirical studies published at the IVA conference between 2013 and 2018. In this paper, we present 131 expert-content-validated questionnaire items for these constructs and their dimensions, and investigate their reliability. We establish this in three phases. First, eight experts generated 431 potential construct items. Second, 20 experts rated whether each item measures (only) its intended construct, resulting in 207 content-validated items. Finally, a reliability analysis involving 192 crowd-workers, who were asked to rate a human interaction with an ASA, resulted in 131 items (about 5 items per measurement, with Cronbach's alpha ranging from .60 to .87). These items are the starting point for a questionnaire instrument for human-ASA interaction.
AB - In this paper, we report on a multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 90 researchers worldwide, that investigated the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). This joint effort previously generated a unified set of 19 constructs that capture more than 80% of the constructs used in empirical studies published at the IVA conference between 2013 and 2018. In this paper, we present 131 expert-content-validated questionnaire items for these constructs and their dimensions, and investigate their reliability. We establish this in three phases. First, eight experts generated 431 potential construct items. Second, 20 experts rated whether each item measures (only) its intended construct, resulting in 207 content-validated items. Finally, a reliability analysis involving 192 crowd-workers, who were asked to rate a human interaction with an ASA, resulted in 131 items (about 5 items per measurement, with Cronbach's alpha ranging from .60 to .87). These items are the starting point for a questionnaire instrument for human-ASA interaction.
KW - Artificial social agent
KW - evaluation instrument
KW - questionnaire
KW - reliability analysis
KW - user study
UR - http://www.scopus.com/inward/record.url?scp=85115744043&partnerID=8YFLogxK
U2 - 10.1145/3472306.3478341
DO - 10.1145/3472306.3478341
M3 - Conference contribution
AN - SCOPUS:85115744043
T3 - Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, IVA 2021
SP - 84
EP - 86
BT - Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, IVA 2021
PB - Association for Computing Machinery (ACM)
T2 - 21st ACM International Conference on Intelligent Virtual Agents, IVA 2021
Y2 - 14 September 2021 through 17 September 2021
ER -