LESS: A model-based classifier for sparse subspaces

CJ Veenman, DMJ Tax

Research output: Contribution to journal › Article › Scientific › peer-review

30 Citations (Scopus)

Abstract

In this paper, we specifically focus on high-dimensional data sets for which the number of dimensions is an order of magnitude higher than the number of objects. From a classifier design standpoint, such small sample size problems have some interesting challenges. The first challenge is to find, from all hyperplanes that separate the classes, a separating hyperplane which generalizes well for future data. A second important task is to determine which features are required to distinguish the classes. To attack these problems, we propose the LESS (Lowest Error in a Sparse Subspace) classifier that efficiently finds linear discriminants in a sparse subspace. In contrast with most classifiers for high-dimensional data sets, the LESS classifier incorporates a (simple) data model. Further, by means of a regularization parameter, the classifier establishes a suitable trade-off between subspace sparseness and classification accuracy. In the experiments, we show how LESS performs on several high-dimensional data sets and compare its performance to related state-of-the-art classifiers like, among others, linear ridge regression with the LASSO and the Support Vector Machine. It turns out that LESS performs competitively while using fewer dimensions.
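The abstract describes LESS as a linear discriminant with a simple data model, found in a sparse subspace via a regularization parameter. A minimal sketch of that idea, under the assumption (not confirmed by this page) that LESS can be posed as a linear program over nonnegative feature weights applied to per-feature squared distances to the two class means, with slack variables trading sparseness against training error; the function names and the choice of `scipy.optimize.linprog` are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import linprog

def fit_less(X, y, C=1.0):
    """Sketch of a LESS-style sparse weighted nearest-mean classifier.

    Assumed linear program (reconstructed from the abstract, hypothetical):
        min  sum_d w_d + C * sum_i xi_i
        s.t. y_i * sum_d w_d * f_d(x_i) >= 1 - xi_i,   w >= 0, xi >= 0
    where f_d(x) = (x_d - m-_d)^2 - (x_d - m+_d)^2 compares, per feature,
    the squared distance to the negative and positive class means.
    The L1-like objective on w drives most weights to zero, selecting
    a sparse subspace; C trades sparseness against classification error.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    m_pos = X[y == 1].mean(axis=0)
    m_neg = X[y == -1].mean(axis=0)
    F = (X - m_neg) ** 2 - (X - m_pos) ** 2      # model-based features, N x D
    N, D = F.shape
    # decision variables stacked as [w (D entries), xi (N entries)]
    c = np.concatenate([np.ones(D), C * np.ones(N)])
    # margin constraints rewritten as: -(y_i * F_i) . w - xi_i <= -1
    A_ub = np.hstack([-(y[:, None] * F), -np.eye(N)])
    b_ub = -np.ones(N)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (D + N))
    w = res.x[:D]
    return w, m_pos, m_neg

def predict_less(X, w, m_pos, m_neg):
    """Classify by the sign of the weighted per-feature mean comparison."""
    X = np.asarray(X, float)
    F = (X - m_neg) ** 2 - (X - m_pos) ** 2
    return np.where(F @ w >= 0, 1, -1)
```

On a small-sample, high-dimensional toy set (e.g. 40 objects, 50 features, one informative dimension) the LP typically puts nonzero weight on only a handful of features, matching the abstract's claim that LESS performs competitively while using fewer dimensions.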
Original language: Undefined/Unknown
Pages (from-to): 1496-1500
Number of pages: 5
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 27
Issue number: 9
DOIs
Publication status: Published - 2005

Bibliographical note

50/50 iss01 and iss06

Keywords

  • academic journal papers
  • ZX CWTS 1.00 <= JFIS < 3.00