Exploring Retrospective Annotation in Long-videos for Emotion Recognition

Patricia Bota, Pablo Cesar, Ana Fred, Hugo Placido da Silva

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Emotion recognition systems are typically trained to classify a given psychophysiological state into emotion categories. Current platforms for emotion ground-truth collection show limitations in real-world scenarios with long-duration content (e.g., > 10 min), namely: 1) real-time annotation tools are distracting and become exhausting over longer videos; 2) retrospective annotation is performed over the whole content in bulk, providing highly coarse annotations; or 3) annotation is performed by external experts, and thus depends on the number of annotators and their subjective experience. We explore a novel approach, the EmotiphAI Annotator, which allows undisturbed content visualisation and simplifies the annotation process by using segmentation algorithms that retrospectively select brief clips for emotional annotation. We compare three methods for content segmentation based on physiological data (Electrodermal Activity (EDA), emotion-based), scene (time-based), and random (control) selection. The EmotiphAI Annotator attained a B+ System Usability Scale score and a low-to-average mental workload on the NASA Task Load Index (40%). The reliability of the self-reports was analysed through inter-rater agreement (STD < 0.75), coherence across time-segmentation methods (STD < 0.17), comparison against the state-of-the-art (SoA) ground truth (STD < 0.7), and correlation with EDA (> 0.3 to 0.8), with the EDA-based method obtaining the best overall performance.
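
For illustration only, the sketch below shows one way an EDA-driven segmentation could select brief clips for retrospective annotation: it centres fixed-length windows on the most prominent skin-conductance responses. The function name, window length, and peak-picking parameters are assumptions for this example, not the EmotiphAI implementation described in the article.

```python
import numpy as np
from scipy.signal import find_peaks

def select_clips_from_eda(eda, fs, clip_len_s=20.0, max_clips=5):
    """Pick short clips centred on the largest EDA responses.

    eda: 1-D array of electrodermal activity samples (hypothetical input).
    fs: sampling rate in Hz.
    Returns a list of (start_s, end_s) windows, in seconds.
    """
    # Locate candidate skin-conductance responses as local maxima,
    # separated by at least one clip length to avoid overlapping windows.
    peaks, props = find_peaks(eda, distance=int(clip_len_s * fs),
                              prominence=0.01)
    if len(peaks) == 0:
        return []

    # Keep only the most prominent responses, then restore temporal order.
    order = np.argsort(props["prominences"])[::-1][:max_clips]
    selected = np.sort(peaks[order])

    # Build fixed-length windows around each selected response.
    half = clip_len_s / 2.0
    duration = len(eda) / fs
    clips = []
    for p in selected:
        centre = p / fs
        clips.append((max(0.0, centre - half), min(duration, centre + half)))
    return clips
```

As a usage example, calling `select_clips_from_eda(eda_signal, fs=4)` on an EDA recording sampled at 4 Hz would return up to five 20-second windows that could then be replayed for retrospective self-report, analogous in spirit to the EDA-based segmentation compared in the study.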

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Affective Computing
DOIs
Publication status: Accepted/In press - 2024

Keywords

  • Emotion recognition
  • Annotation
  • Physiological signals
  • Retrospective

