Kernel Whitening for One-Class Classification

DMJ Tax, P Juszczak

    Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

    19 Citations (Scopus)


    In one-class classification one tries to describe a class of target data and to distinguish it from all other possible outlier objects. Obvious applications are areas where outliers are very diverse, or very difficult or expensive to measure, such as in machine diagnostics or in medical applications. In order to distinguish well between the target objects and the outliers, a good representation of the data is essential. The performance of many one-class classifiers critically depends on the scaling of the data and is often harmed by data distributions in (nonlinear) subspaces. This paper presents a simple preprocessing method which actively tries to map the data to a spherically symmetric cluster and is almost insensitive to data distributed in subspaces. It uses techniques from Kernel PCA to rescale the data in a kernel feature space to unit variance. This transformed data can then be described very well by the Support Vector Data Description, which basically fits a hypersphere around the data. The paper presents the methods and some preliminary experimental results.
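    The preprocessing the abstract describes can be sketched as follows: compute a kernel matrix on the target data, center it in feature space, project onto the kernel principal components, and rescale every component to unit variance. A minimal numpy illustration (the RBF kernel, its width, the number of retained components, and the mean-centered hypersphere used as a simple stand-in for the full SVDD are all assumptions for this sketch, not the paper's exact choices):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Elongated 2-D target class: one direction has much larger variance
    X = rng.normal(size=(200, 2)) * np.array([5.0, 0.5])

    # RBF kernel matrix (kernel choice and width are assumptions here)
    gamma = 0.05
    sq = (X ** 2).sum(axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    # Center the kernel matrix, i.e. center the data in feature space
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Kernel PCA: eigendecomposition of the centered kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:10]   # keep the 10 leading components
    vals, vecs = vals[order], vecs[:, order]

    # Projections of the training points onto the kernel principal components
    Z = vecs * np.sqrt(vals)              # shape (n, 10)

    # Whitening: rescale every retained component to unit variance
    Zw = Z / Z.std(axis=0)

    # After whitening, the cluster is roughly spherically symmetric, so a
    # hypersphere around the mean (a simplified stand-in for SVDD) fits well
    center = Zw.mean(axis=0)
    radius = np.linalg.norm(Zw - center, axis=1).max()
    ```

    A new test point would be mapped into the same whitened kernel space and flagged as an outlier when its distance to `center` exceeds `radius`.
    
    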
    Original language: Undefined/Unknown
    Title of host publication: Pattern Recognition with Support Vector Machines, Proceedings SVM2002
    Editors: S.-W. Lee, A. Verri
    Place of Publication: Berlin
    Number of pages: 13
    ISBN (Print): 3-540-44016-X
    Publication status: Published - 2002

    Publication series

    Publisher: Springer Verlag
    Name: Lecture Notes in Computer Science
    ISSN (Print): 0302-9743

    Bibliographical note

    ISSN 0302-9743, phpub 40

