Contrastive Pessimistic Likelihood Estimation for Semi-Supervised Classification

Research output: Contribution to journal › Article › Scientific › peer-review

44 Citations (Scopus)
16 Downloads (Pure)


Improvement guarantees for semi-supervised classifiers can currently only be given under restrictive conditions on the data. We propose a general way to perform semi-supervised parameter estimation for likelihood-based classifiers for which, on the full training set, the estimates are never worse than the supervised solution in terms of the log-likelihood. We argue, moreover, that we may expect these solutions to really improve upon the supervised classifier in particular cases. In a worked-out example for LDA, we take it one step further and essentially prove that its semi-supervised version is strictly better than its supervised counterpart. The two new concepts that form the core of our estimation principle are contrast and pessimism. The former refers to the fact that our objective function takes the supervised estimates into account, enabling the semi-supervised solution to explicitly control the potential improvements over this estimate. The latter refers to the fact that our estimates are conservative and therefore resilient to whatever form the true labeling of the unlabeled data takes on. Experiments demonstrate the improvements in terms of both the log-likelihood and the classification error rate on independent test sets.
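The contrastive pessimistic principle described in the abstract can be sketched on a toy problem. The sketch below uses a one-dimensional, two-class Gaussian model with unit variance and equal priors (simplifying assumptions, not the paper's LDA setting), and a simple hill-climbing search in place of the paper's projected-gradient optimization; all variable names and the data are illustrative. The key structural points it illustrates are the contrast (the objective is the log-likelihood *difference* with the supervised estimate) and the pessimism (the difference is minimized exactly over all labelings of the unlabeled data, which for a linear-in-q objective reduces to a per-point hard-label minimum). Since the supervised solution itself scores exactly zero, the returned worst-case gain can never be negative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D two-class data: a few labeled points per class plus unlabeled points.
x_lab = np.array([-2.1, -1.8, -2.4, 1.9, 2.2, 1.7])
y_lab = np.array([0, 0, 0, 1, 1, 1])
x_unl = np.concatenate([rng.normal(-2, 1, 30), rng.normal(2, 1, 30)])

LOG_PRIOR = np.log(0.5)  # equal class priors, fixed (simplifying assumption)

def loglik(x, mu):
    """Per-point log-likelihood under N(mu, 1), unit variance assumed."""
    return -0.5 * (x - mu) ** 2 - 0.5 * np.log(2 * np.pi) + LOG_PRIOR

def worst_case_cl(mu, mu_sup):
    """Contrastive log-likelihood CL(theta) = L(theta) - L(theta_sup),
    minimized exactly over the labels q of the unlabeled data: CL is
    linear in each q_i, so the pessimistic minimum is attained at a
    hard label per point (the smaller of the two per-point differences)."""
    lab = sum(loglik(x, mu[y]) - loglik(x, mu_sup[y]) for x, y in zip(x_lab, y_lab))
    d0 = loglik(x_unl, mu[0]) - loglik(x_unl, mu_sup[0])
    d1 = loglik(x_unl, mu[1]) - loglik(x_unl, mu_sup[1])
    return lab + np.minimum(d0, d1).sum()

# Supervised estimate: per-class means of the labeled data only.
mu_sup = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])

# Maximize the pessimistic (worst-case) objective by simple hill climbing,
# starting from the supervised solution, whose objective value is 0 by
# construction; keeping the best candidate seen guarantees a gain >= 0.
mu, best = mu_sup.copy(), 0.0
step = 1.0
for it in range(2000):
    cand = mu + rng.normal(0, step, size=2)
    val = worst_case_cl(cand, mu_sup)
    if val > best:
        mu, best = cand, val
    if (it + 1) % 400 == 0:
        step *= 0.5  # gradually refine the search

print("supervised means:", mu_sup)
print("CPL means:", mu, "worst-case log-likelihood gain:", best)
```

The non-negative `best` value mirrors the paper's guarantee that, on the full training set, the semi-supervised estimate is never worse than the supervised one in log-likelihood, whatever the true labeling of the unlabeled data turns out to be.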
Original language: English
Pages (from-to): 462-475
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 3
Publication status: Published - 2016


Keywords
  • Maximum likelihood
  • semi-supervised learning
  • contrast
  • pessimism
  • linear discriminant analysis


