Abstract
In multimodal emotion recognition, disentangled representation learning methods effectively address the inherent heterogeneity among modalities. To enable the flexible integration of enhanced disentangled features into multimodal emotional features, we propose a task-driven multimodal emotion recognition method, TDMER. Its Cross-Modal Learning module promotes adaptive cross-modal learning between features disentangled into modality-invariant and modality-specific subspaces, based on their contributions to emotion-classification probabilities. The Task-Contribution Fusion mechanism then assigns controllable weights to the enhanced features according to their task objectives, generating multimodal fusion features that improve the emotion classifier's discriminative ability. The proposed TDMER approach has been evaluated on two widely used multimodal emotion recognition benchmarks and demonstrates significant performance improvements over other state-of-the-art methods.
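The abstract does not give the fusion formula, but the described Task-Contribution Fusion idea (weighting each modality's enhanced feature by its contribution to the classification task) can be illustrated with a minimal sketch. The function names, the use of a softmax over contribution scores, and the simple weighted sum are all assumptions for illustration, not the paper's actual implementation:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def task_contribution_fusion(features, contributions):
    """Hypothetical sketch of task-contribution weighted fusion.

    features:      one enhanced feature vector per modality
                   (all the same length)
    contributions: one scalar per modality, e.g. its estimated effect
                   on the emotion-classification probability
                   (how TDMER measures this is not specified here)

    Returns the fused feature vector and the per-modality weights.
    """
    weights = softmax(contributions)  # controllable, sum to 1
    dim = len(features[0])
    fused = [sum(w * f[i] for w, f in zip(weights, features))
             for i in range(dim)]
    return fused, weights

# Example: three modalities (e.g. audio, vision, text) with 2-D features.
fused, weights = task_contribution_fusion(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    [2.0, 1.0, 1.0],
)
```

Here the modality with the highest contribution score dominates the fused vector, which is the intuition behind letting task contributions steer the fusion weights.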
| Original language | English |
|---|---|
| Number of pages | 5 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025 - Hyderabad, India Duration: 6 Apr 2025 → 11 Apr 2025 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Keywords
- Cross-Modal Attention Learning
- Disentangled Representation Learning
- Multimodal Fusion