A soft-labeled self-training approach

Alexander Mey, Marco Loog

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

8 Citations (Scopus)

Abstract

Semi-supervised classification methods try to improve a classifier learned in a supervised manner with the help of unlabeled data. In many cases one assumes a certain structure on the data, such as the manifold assumption, the smoothness assumption, or the cluster assumption. Self-training is a method that does not need any assumptions on the data itself. The idea is to use the classifier trained on the labeled data to label the unlabeled points and thereby enlarge the training set. This paper aims to show that a self-training approach with soft labeling is preferable in many cases in terms of expected loss (risk) minimization. The main idea is to use soft labels to minimize the risk on the labeled and unlabeled data jointly, with hard-labeled self-training as an extreme case.
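
As a rough illustration of this idea, the sketch below implements binary soft-labeled self-training: each unlabeled point enters the refit twice, once per class, weighted by the classifier's current predicted class probabilities, so the risk is minimized on labeled and unlabeled data together. This is a minimal sketch assuming labels in {0, 1} and scikit-learn's LogisticRegression as the base classifier; the function name, the number of rounds, and the choice of model are illustrative and not taken from the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def soft_label_self_training(X_lab, y_lab, X_unl, n_rounds=5):
        # Fit the initial classifier on the labeled data alone.
        clf = LogisticRegression().fit(X_lab, y_lab)
        n_unl = len(X_unl)
        for _ in range(n_rounds):
            # Soft labels: predicted class probabilities of the unlabeled
            # points. Columns follow clf.classes_, here [0, 1].
            proba = clf.predict_proba(X_unl)
            # Each unlabeled point appears once per class, weighted by its
            # predicted probability, so the refit minimizes a weighted risk
            # on labeled and unlabeled data jointly.
            X_aug = np.vstack([X_lab, X_unl, X_unl])
            y_aug = np.concatenate([y_lab, np.zeros(n_unl), np.ones(n_unl)])
            w_aug = np.concatenate([np.ones(len(X_lab)),
                                    proba[:, 0], proba[:, 1]])
            clf = LogisticRegression().fit(X_aug, y_aug, sample_weight=w_aug)
        return clf

Rounding the probabilities to 0 or 1 before each refit reduces this to the usual hard-labeled self-training, the extreme case mentioned in the abstract.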
Original language: English
Title of host publication: 2016 23rd International Conference on Pattern Recognition (ICPR)
Place of publication: Piscataway, NJ
Publisher: IEEE
Pages: 2604-2609
Number of pages: 6
ISBN (Electronic): 978-1-5090-4847-2
ISBN (Print): 978-1-5090-4848-9
DOIs
Publication status: Published - 2016
Event: ICPR 2016: 23rd International Conference on Pattern Recognition - Cancún, Mexico
Duration: 4 Dec 2016 - 8 Dec 2016
Conference number: 23

Conference

Conference: ICPR 2016
Country/Territory: Mexico
City: Cancún
Period: 4/12/16 - 8/12/16

Keywords

  • Labeling
  • Minimization
  • Linear programming
  • Probability distribution
  • Mathematical model
  • Pattern recognition
  • Risk management
