Trusted Loss Correction for Noisy Multi-Label Learning

Amirmasoud Ghiassi, Cosmin Octavian Pene, Robert Birke, Lydia Y. Chen

Research output: Contribution to journal › Conference article › Scientific › peer-review

6 Citations (SciVal)
23 Downloads (Pure)

Abstract

Noisy and corrupted labels are shown to significantly undermine the performance of multi-label learning, where each image carries multiple labels. Correcting the loss via a label corruption matrix is effective in improving the robustness of single-label classification against noisy labels. However, estimating the corruption matrix for multi-label problems is challenging due to the unbalanced distribution of labels and the presence of multiple objects that may map to the same labels. In this paper, we propose TLCM, a multi-label classifier robust against label noise, which corrects the loss based on a corruption matrix estimated on trusted data. To overcome the challenges of unbalanced label distribution and multi-object mapping, we use trusted single-label data as regulators to correct the multi-label corruption matrix. Empirical evaluation on real-world vision and object-detection datasets, i.e., MS-COCO, NUS-WIDE, and MIRFLICKR, shows that under medium (30%) and high (60%) corruption levels our method outperforms a state-of-the-art multi-label classifier (ASL) and a noise-resilient multi-label classifier (MPVAE) by, on average, 12.5 and 26.3 mean average precision (mAP) points, respectively.
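The general loss-correction idea the abstract builds on can be sketched as follows. This is a minimal illustration of estimating a corruption matrix from trusted data and forward-correcting the loss through it (in the style of trusted-data loss correction for single-label classification); the function names and the per-class averaging are illustrative assumptions, not the paper's exact TLCM procedure.

```python
import numpy as np

def estimate_corruption_matrix(probs_trusted, labels_trusted, n_classes):
    """Estimate a corruption matrix from trusted data (sketch).

    C[i, j] approximates P(noisy label = j | true label = i), obtained by
    averaging the model's predicted label distribution over trusted examples
    whose true label is i. probs_trusted has shape (n_trusted, n_classes).
    """
    C = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        mask = labels_trusted == i
        # Fall back to an identity row if no trusted example has label i.
        C[i] = probs_trusted[mask].mean(axis=0) if mask.any() else np.eye(n_classes)[i]
    return C

def forward_corrected_nll(probs, noisy_labels, C):
    """Forward correction: pass predictions through C before taking the NLL.

    probs are the model's clean-label probabilities, shape (n, n_classes);
    probs @ C gives the implied noisy-label probabilities, which are scored
    against the observed (possibly corrupted) labels.
    """
    corrected = probs @ C
    eps = 1e-12  # numerical guard against log(0)
    picked = corrected[np.arange(len(noisy_labels)), noisy_labels]
    return -np.mean(np.log(picked + eps))
```

With an identity corruption matrix the corrected loss reduces to the standard negative log-likelihood, which is a quick sanity check that the correction is a strict generalization of the uncorrected loss.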

Original language: English
Pages (from-to): 343-358
Number of pages: 16
Journal: Proceedings of Machine Learning Research
Volume: 189
Publication status: Published - 2022
Event: 14th Asian Conference on Machine Learning, ACML 2022 - Hyderabad, India
Duration: 12 Dec 2022 - 14 Dec 2022

Keywords

  • Corrupted Labels
  • Corruption Matrix Estimation
  • Deep Neural Network
  • Multi Label Learning

