Abstract
In one-class classification, one class of data, called the target class, has to be distinguished
from the rest of the feature space. It is assumed that only examples of the target class
are available. The classifier has to be constructed such that objects not originating from
the target set, by definition outlier objects, are not classified as target objects. In previous
research the support vector data description (SVDD) was proposed to solve the problem
of one-class classification. It models a hypersphere around the target set, and by the
introduction of kernel functions, more flexible descriptions are obtained. In the original
optimization of the SVDD, two parameters have to be given beforehand by the user. To
automatically optimize the values for these parameters, the error on both the target and
outlier data has to be estimated. Because no outlier examples are available, we propose
a method for generating artificial outliers, uniformly distributed in a hypersphere. An
A (relatively) efficient estimate of the volume covered by the one-class classifier is obtained,
and thus an estimate of the outlier error. Results are shown for artificial data and for real-world
data.
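The generation step described above, drawing artificial outliers uniformly from a hypersphere, can be sketched as follows. This is a minimal illustration of the standard technique (normalize Gaussian draws for a uniform direction, then scale the radius by u^(1/d)), not the authors' original code; the function name and interface are assumptions:

```python
import numpy as np

def sample_hypersphere(n, d, radius=1.0, rng=None):
    """Draw n points uniformly from a d-dimensional ball of the given radius."""
    rng = np.random.default_rng(rng)
    # Direction: normalized Gaussian samples are uniform on the sphere surface.
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    # Radius: u**(1/d) compensates for the volume growing as r**d,
    # so points do not cluster near the center.
    r = rng.random(n) ** (1.0 / d)
    return radius * x * r[:, None]
```

Feeding such samples through a trained one-class classifier and counting the fraction accepted gives the volume-based estimate of the outlier error that the abstract refers to.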
Keywords: Support vector classifiers, one-class classification, novelty detection, outlier
detection
| Original language | Undefined/Unknown |
|---|---|
| Pages (from-to) | 155-173 |
| Number of pages | 19 |
| Journal | Journal of Machine Learning Research |
| Volume | 2 |
| Issue number | 2 |
| Publication status | Published - 2001 |
Bibliographical note
Special Issue on Kernel Methods