Abstract
To recognize emotions using less obtrusive wearable sensors, we present a novel emotion recognition method that uses only pupil diameter (PD) and skin conductance (SC). Psychological studies show that these two signals are related to the attention level of humans exposed to visual stimuli. Based on this, we propose a feature extraction algorithm that extracts correlation-based features for participants watching the same video clip. To boost performance given limited data, we implement a learning system without a deep architecture to classify arousal and valence. Our method outperforms not only state-of-the-art approaches, but also widely used traditional and deep learning methods.
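The abstract's correlation-based feature idea can be illustrated with a minimal sketch. The assumption here (not spelled out in the record) is that one participant's physiological signal for a clip is correlated against other participants' signals for the same clip, yielding one Pearson correlation per reference participant; the function name, signal lengths, and toy data are all hypothetical.

```python
import numpy as np

def correlation_features(signal, reference_signals):
    """Hypothetical correlation-based features: Pearson correlation of one
    participant's signal (e.g. pupil diameter while watching a clip) with
    each reference participant's signal for the same clip."""
    return np.array([np.corrcoef(signal, ref)[0, 1] for ref in reference_signals])

# Toy data: 3 participants, 100 samples each, sharing a stimulus-driven
# component (the sine) plus participant-specific noise.
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 4 * np.pi, 100))
signals = [base + 0.3 * rng.standard_normal(100) for _ in range(3)]

feats = correlation_features(signals[0], signals[1:])
print(feats.shape)  # one correlation value per reference participant
```

Because participants watch the same stimulus, attention-driven signal components line up across participants, so these correlations can serve as compact features for a shallow classifier, consistent with the paper's stated preference for a non-deep architecture on limited data.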
Original language | English |
---|---|
Title of host publication | ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction |
Editors | Wen Gao, Helen Mei Ling Meng, Matthew Turk, Susan R. Fussell, Bjorn Schuller, Yale Song, Kai Yu |
Place of Publication | New York |
Publisher | Association for Computing Machinery (ACM) |
Pages | 404-408 |
Number of pages | 5 |
ISBN (Electronic) | 9781450368605 |
ISBN (Print) | 978-1-4503-6860-5 |
DOIs | |
Publication status | Published - 2019 |
Event | 21st ACM International Conference on Multimodal Interaction, ICMI 2019 - Suzhou, China Duration: 14 Oct 2019 → 18 Oct 2019 |
Publication series
Name | ICMI 2019 - Proceedings of the 2019 International Conference on Multimodal Interaction |
---|
Conference
Conference | 21st ACM International Conference on Multimodal Interaction, ICMI 2019 |
---|---|
Country/Territory | China |
City | Suzhou |
Period | 14/10/19 → 18/10/19 |
Keywords
- Emotion recognition
- Machine learning
- MAHNOB-HCI database
- Pupil diameter
- Skin conductance response