Nuclear discrepancy for single-shot batch active learning

Research output: Contribution to journal › Article › Scientific › peer-review

1 Citation (Scopus)
3 Downloads (Pure)

Abstract

Active learning algorithms propose which data should be labeled given a pool of unlabeled data. Instead of selecting data to annotate at random, active learning strategies aim to select data so as to obtain a good predictive model with as few labeled samples as possible. Single-shot batch active learners select all samples to be labeled in a single step, before any labels are observed. We study single-shot active learners that minimize generalization bounds to select a representative sample, such as the maximum mean discrepancy (MMD) active learner. We prove that a related bound, the discrepancy, provides a tighter worst-case bound. We study these bounds probabilistically, which inspires us to introduce a novel bound, the nuclear discrepancy (ND). The ND bound is tighter for the expected loss under optimistic probabilistic assumptions. Our experiments show that the MMD active learner performs better than the discrepancy in terms of the mean squared error, indicating that tighter worst-case bounds do not imply better active learning performance. The proposed active learner improves significantly upon the MMD and discrepancy in the realizable setting, and a similar trend is observed in the agnostic setting, showing the benefits of a probabilistic approach to active learning. Our study highlights that the assumptions underlying generalization bounds can be as important as bound tightness when it comes to active learning performance. Code for reproducing our experimental results can be found at https://github.com/tomviering/NuclearDiscrepancy.
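To illustrate the general idea of an MMD-based single-shot batch active learner, the sketch below greedily selects a batch of unlabeled points whose kernel mean embedding is close to that of the whole pool, i.e. it greedily minimizes the (squared) MMD between the selected batch and the pool. This is only an illustrative sketch under an RBF-kernel assumption, not the authors' exact algorithm; the function names (`greedy_mmd_batch`, `mmd2`) and the `gamma` parameter are hypothetical choices for this example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(K, sel, pool):
    # Squared MMD between the uniform distribution over the pool and
    # over the selected batch, computed from the full Gram matrix K.
    s, p = np.asarray(sel), np.asarray(pool)
    return (K[np.ix_(p, p)].mean()
            - 2 * K[np.ix_(p, s)].mean()
            + K[np.ix_(s, s)].mean())

def greedy_mmd_batch(X, batch_size, gamma=1.0):
    # Greedily add the point that most reduces the MMD between the
    # selected batch and the pool; all choices are made before any
    # labels are observed (single-shot batch selection).
    K = rbf_kernel(X, X, gamma)
    pool = list(range(len(X)))
    sel = []
    for _ in range(batch_size):
        best = min((i for i in pool if i not in sel),
                   key=lambda i: mmd2(K, sel + [i], pool))
        sel.append(best)
    return sel
```

Because the squared MMD equals the squared distance between the two kernel mean embeddings, the estimator above is nonnegative (up to floating-point error), and the greedy loop simply picks, at each step, the candidate that makes the batch most representative of the pool.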
Original language: English
Pages (from-to): 1561-1599
Number of pages: 39
Journal: Machine Learning
Volume: 108
Issue number: 8-9
DOIs
Publication status: Published - Sep 2019

Keywords

  • Active learning
  • Discrepancy
  • Kernel methods
  • Maximum mean discrepancy
