TDMER: A Task-Driven Method for Multimodal Emotion Recognition

Qian Xu, Yu Gu*, Chenyu Li, He Zhang, Hai Xiang Lin, Linsong Liu

*Corresponding author for this work

Research output: Contribution to journal › Conference article › Scientific › Peer-reviewed


Abstract

In multimodal emotion recognition, disentangled representation learning methods effectively address the inherent heterogeneity among modalities. To enable the flexible integration of enhanced disentangled features into multimodal emotional features, we propose TDMER, a task-driven multimodal emotion recognition method. Its Cross-Modal Learning module promotes adaptive cross-modal learning among features disentangled into modality-invariant and modality-specific subspaces, based on their contributions to the emotion-classification probabilities. The Task-Contribution Fusion mechanism then assigns controllable weights to the enhanced features according to their task objectives, generating multimodal fusion features that improve the emotion classifier's discriminative ability. The proposed TDMER approach has been evaluated on two widely used multimodal emotion recognition benchmarks and demonstrates significant performance improvements over other state-of-the-art methods.
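The abstract describes the pipeline only at a high level; the following minimal PyTorch sketch shows one plausible reading of it: per-modality disentanglement into invariant and specific subspaces, cross-modal attention over the disentangled features, and a contribution-weighted fusion feeding the emotion classifier. Every name here (TDMERSketch, aux_head), every dimension, and the confidence-based weighting scheme are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

class TDMERSketch(nn.Module):
    """Illustrative sketch of the TDMER pipeline; not the authors' code."""
    def __init__(self, dims=None, hidden=128, num_classes=6):
        super().__init__()
        # Assumed per-modality input sizes (CMU-MOSEI-style features).
        dims = dims or {"text": 768, "audio": 74, "video": 35}
        self.mods = list(dims)
        # Disentangle each modality into modality-invariant and
        # modality-specific subspaces via separate projections.
        self.invariant = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        self.specific = nn.ModuleDict({m: nn.Linear(d, hidden) for m, d in dims.items()})
        # Auxiliary head scoring each feature's emotion-class probabilities,
        # used here as a proxy for its "contribution" to the task.
        self.aux_head = nn.Linear(hidden, num_classes)
        # Cross-modal attention lets disentangled features learn from each other.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Final emotion classifier on the fused representation.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, inputs):
        # 1) Disentangled representation learning per modality.
        feats = []
        for m in self.mods:
            feats.append(self.invariant[m](inputs[m]))
            feats.append(self.specific[m](inputs[m]))
        feats = torch.stack(feats, dim=1)                       # (B, 2*M, H)

        # 2) Adaptive cross-modal learning over all disentangled features.
        enhanced, _ = self.cross_attn(feats, feats, feats)

        # 3) Task-Contribution Fusion: weight each enhanced feature by the
        #    confidence of its auxiliary emotion prediction (an assumed proxy
        #    for contribution to the classification probabilities).
        contrib = self.aux_head(enhanced).softmax(-1).amax(-1)  # (B, 2*M)
        weights = contrib.softmax(-1).unsqueeze(-1)             # (B, 2*M, 1)
        fused = (weights * enhanced).sum(dim=1)                 # (B, H)
        return self.classifier(fused)

# Toy usage with random tensors standing in for extracted features.
model = TDMERSketch()
batch = {"text": torch.randn(4, 768), "audio": torch.randn(4, 74), "video": torch.randn(4, 35)}
print(model(batch).shape)  # torch.Size([4, 6])

Confidence-based weighting is only one way to operationalize "contributions to the emotion-classification probabilities"; a small learned gating network conditioned on the task objective would be an equally plausible reading of the Task-Contribution Fusion mechanism.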

Bibliographical note

Green Open Access added to the TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.

Keywords

  • Cross-Modal Attention Learning
  • Disentangled Representation Learning
  • Multimodal Fusion

