TY - GEN
T1 - Situation-Aware Emotion Regulation of Conversational Agents with Kinetic Earables
AU - Katayama, Shin
AU - Mathur, Akhil
AU - Van Den Broeck, Marc
AU - Okoshi, Tadashi
AU - Nakazawa, Jin
AU - Kawsar, Fahim
PY - 2019
Y1 - 2019
N2 - Conversational agents are increasingly becoming digital partners in our everyday computing experiences, offering a variety of purposeful information and utility services. Although rich in competency, these agents are today entirely oblivious to their users' situational and emotional context and incapable of adjusting their interaction style and tone contextually. To this end, we present a mixed-method study that informs the design of a situation- and emotion-aware conversational agent for kinetic earables. We surveyed 280 users and qualitatively interviewed 12 users to understand their expectations of how a conversational agent should adapt its interaction style. Grounded in our findings, we develop a first-of-its-kind emotion regulator for a conversational agent on a kinetic earable that dynamically adjusts its conversation style, tone, and volume in response to the user's emotional, environmental, social, and activity context gathered through speech prosody, motion signals, and ambient sound. We describe these context models, the end-to-end system including a purpose-built kinetic earable, and their real-world assessment. The experimental results demonstrate that our regulation mechanism consistently elicits a better and more affective user experience than baseline conditions in different real-world settings.
AB - Conversational agents are increasingly becoming digital partners in our everyday computing experiences, offering a variety of purposeful information and utility services. Although rich in competency, these agents are today entirely oblivious to their users' situational and emotional context and incapable of adjusting their interaction style and tone contextually. To this end, we present a mixed-method study that informs the design of a situation- and emotion-aware conversational agent for kinetic earables. We surveyed 280 users and qualitatively interviewed 12 users to understand their expectations of how a conversational agent should adapt its interaction style. Grounded in our findings, we develop a first-of-its-kind emotion regulator for a conversational agent on a kinetic earable that dynamically adjusts its conversation style, tone, and volume in response to the user's emotional, environmental, social, and activity context gathered through speech prosody, motion signals, and ambient sound. We describe these context models, the end-to-end system including a purpose-built kinetic earable, and their real-world assessment. The experimental results demonstrate that our regulation mechanism consistently elicits a better and more affective user experience than baseline conditions in different real-world settings.
KW - Context Awareness
KW - Conversational Agent
KW - Earables
KW - Emotion Regulation
UR - http://www.scopus.com/inward/record.url?scp=85077787400&partnerID=8YFLogxK
U2 - 10.1109/ACII.2019.8925449
DO - 10.1109/ACII.2019.8925449
M3 - Conference contribution
T3 - 2019 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019
BT - 2019 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019
PB - IEEE
T2 - 8th International Conference on Affective Computing and Intelligent Interaction, ACII 2019
Y2 - 3 September 2019 through 6 September 2019
ER -