Abstract
Speech signals carry rich information, including textual content, emotion, and speaker identity. To extract these features more effectively, researchers have investigated joint training across multiple tasks, such as Speech Emotion Recognition (SER) and Speaker Verification (SV), aiming to improve performance by decoupling task-specific knowledge. Traditional multitask decoupling methods in SER typically rely on orthogonalization to increase the distance between parameter vectors in the feature space. In this paper, we introduce a novel hybrid instance-level contrastive decoupling loss that leverages supervised labels to effectively decouple SER and SV. Unlike previous approaches, it is not restricted to dual-stream models with identical architectures and can be readily integrated with leading models for each sub-task. Experimental results show that the proposed Hybrid Contrastive Learning Decoupling (HCLD) method significantly outperforms traditional orthogonal decoupling approaches.
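The abstract does not spell out the loss formulation, so the PyTorch sketch below is only a rough illustration of what an instance-level contrastive decoupling objective of this kind can look like: a SupCon-style supervised contrastive term per task (using emotion labels for SER and speaker labels for SV), plus a per-utterance cross-task penalty that pushes each utterance's SER and SV embeddings toward orthogonality. The function names, the weight `lam`, the temperature `tau`, and the squared-cosine penalty are all illustrative assumptions, not the authors' exact HCLD formulation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive(z, labels, tau=0.1):
    """SupCon-style term: pull same-label embeddings together,
    push different-label embeddings apart within the batch."""
    z = F.normalize(z, dim=1)                      # unit-norm embeddings
    sim = z @ z.T / tau                            # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye

    # log-softmax over all other instances (self excluded via -inf)
    logits = sim.masked_fill(eye, float("-inf"))
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # mean log-probability of positives per anchor; anchors with
    # no in-batch positive simply contribute zero
    pos_log_prob = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    pos_counts = pos.sum(dim=1).clamp(min=1)
    return -(pos_log_prob / pos_counts).mean()


def hybrid_decoupling_loss(z_ser, z_sv, emo_labels, spk_labels,
                           tau=0.1, lam=1.0):
    """Illustrative hybrid objective (assumed, not the paper's exact loss):
    per-task supervised contrastive terms plus an instance-level
    cross-task penalty. z_ser and z_sv are assumed to share the same
    embedding dimension."""
    l_ser = supervised_contrastive(z_ser, emo_labels, tau)
    l_sv = supervised_contrastive(z_sv, spk_labels, tau)
    # squared cosine drives the two views of the same utterance toward
    # orthogonality, decoupling emotion and speaker subspaces
    cross = F.cosine_similarity(z_ser, z_sv, dim=1)
    return l_ser + l_sv + lam * cross.pow(2).mean()


# toy usage: 8 utterances, 192-dim embeddings from each branch
z_ser = torch.randn(8, 192, requires_grad=True)
z_sv = torch.randn(8, 192, requires_grad=True)
emo = torch.randint(0, 4, (8,))   # e.g. 4 emotion classes
spk = torch.randint(0, 6, (8,))   # e.g. 6 speakers
loss = hybrid_decoupling_loss(z_ser, z_sv, emo, spk)
loss.backward()
```

Because both terms operate on embeddings rather than on parameter vectors, a sketch like this places no architectural constraint on the two branches, which is consistent with the abstract's claim that the method is not restricted to dual-stream models with identical architectures.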
| Original language | English |
|---|---|
| Number of pages | 5 |
| Journal | ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings |
| DOIs | |
| Publication status | Published - 2025 |
| Event | 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2025), Hyderabad, India, 6–11 Apr 2025 |
Bibliographical note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.

Keywords
- feature decoupling
- speaker verification
- speech emotion recognition