Abstract
Empirical evidence suggests that the emotional meaning of facial behavior in isolation is often ambiguous in real-world conditions. While humans complement interpretations of others' faces with additional reasoning about context, automated approaches rarely display such context-sensitivity. Empirical findings indicate that the personal memories triggered by videos are crucial for predicting viewers' emotional response to such videos – in some cases, even more so than the video's audiovisual content. In this article, we explore the benefits of personal memories as context for facial behavior analysis. We conduct a series of multimodal machine learning experiments combining the automatic analysis of video-viewers' faces with that of two types of context information for affective predictions: (1) self-reported free-text descriptions of triggered memories and (2) a video's audiovisual content. Our results demonstrate that both sources of context provide models with information about variation in viewers' affective responses that complement facial analysis and each other.
Original language | English |
---|---|
Title of host publication | ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction |
Place of Publication | New York |
Publisher | Association for Computing Machinery (ACM) |
Pages | 153-162 |
Number of pages | 10 |
ISBN (Print) | 978-1-4503-7581-8 |
DOIs | |
Publication status | Published - 2020 |
Event | 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 - Virtual, Online, Netherlands. Duration: 25 Oct 2020 → 29 Oct 2020. Conference number: 22 |
Conference
Conference | 22nd ACM International Conference on Multimodal Interaction, ICMI 2020 |
---|---|
Abbreviated title | ICMI 2020 |
Country/Territory | Netherlands |
Period | 25/10/20 → 29/10/20 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
Keywords
- affect detection
- context-awareness
- emotion recognition