Abstract
This paper focuses on the automatic classification of self-assessed personality traits from the HEXACO inventory during crowded mingle scenarios. We exploit acceleration and proximity data from a wearable device worn around the neck. Unlike most state-of-the-art studies, we address personality estimation during mingle scenarios, a challenging social context in which people interact dynamically and freely in a face-to-face setting. While many previous studies use audio to extract speech-related features, we present a novel method of extracting an individual's speaking status from a single body-worn triaxial accelerometer, which scales easily to large populations. Moreover, by fusing speech- and movement-energy-related cues from acceleration alone, our experimental results show improvements in the estimation of Humility over features extracted from a single behavioral modality. We validated our method on 71 participants, obtaining an accuracy of 69% for Honesty, Conscientiousness and Openness to Experience. To our knowledge, this is the largest validation of personality estimation carried out in such a social context with simple wearable sensors.
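The abstract does not detail the feature pipeline, but one ingredient it names, movement-energy cues from a single triaxial accelerometer, can be sketched generically. The following is a minimal illustration (not the authors' actual method): the `movement_energy` helper, the sampling rate, and the windowing choices are all assumptions made for this example.

```python
import numpy as np

def movement_energy(acc, fs=20.0, win_s=1.0):
    """Windowed energy of the gravity-removed acceleration magnitude.

    acc: (N, 3) array of triaxial acceleration samples
    fs: sampling rate in Hz (assumed value for illustration)
    win_s: window length in seconds
    Returns one energy value per non-overlapping window.
    """
    mag = np.linalg.norm(acc, axis=1)   # magnitude of each triaxial sample
    mag = mag - mag.mean()              # crude removal of the gravity/DC component
    win = int(fs * win_s)
    n = len(mag) // win
    windows = mag[: n * win].reshape(n, win)
    return (windows ** 2).mean(axis=1)  # mean squared amplitude per window

# Synthetic example: a still segment followed by a moving segment.
rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.01, size=(100, 3)) + [0.0, 0.0, 9.81]
moving = rng.normal(0.0, 0.5, size=(100, 3)) + [0.0, 0.0, 9.81]
energy = movement_energy(np.vstack([still, moving]))
```

Windows covering the moving segment yield markedly higher energy than those covering the still segment, which is the kind of cue a classifier could consume alongside the speaking-status estimate.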
Original language | English |
---|---|
Title of host publication | Proceedings of ICMI 2016, the 18th ACM International Conference on Multimodal Interaction |
Editors | Y. Nakano, E. André, T. Nishida |
Place of Publication | New York |
Publisher | Association for Computing Machinery (ACM) |
Pages | 238-242 |
Number of pages | 5 |
ISBN (Print) | 978-1-4503-4556-9 |
Publication status | Published - 2016 |
Event | ICMI 2016, the 18th ACM International Conference on Multimodal Interaction, Tokyo, Japan. Duration: 12 Nov 2016 → 16 Nov 2016 |
Conference
Conference | ICMI 2016 The 18th ACM International Conference on Multimodal Interaction |
---|---|
Country/Territory | Japan |
City | Tokyo |
Period | 12/11/16 → 16/11/16 |
Keywords
- wearable acceleration
- proximity
- speaking turn
- personality