Supervised scale-regularized linear convolutionary filters

Marco Loog, François Lauze

Research output: Conference contribution in book/conference proceedings › Scientific › Peer-reviewed

4 Citations (Scopus)
41 Downloads (Pure)

Abstract

We start by demonstrating that an elementary learning task, learning a linear filter from training data by means of regression, can be solved very efficiently for feature spaces of very high dimensionality. In a second step, acknowledging that such high-dimensional learning tasks typically benefit from some form of regularization, and arguing that the problem of scale has not yet been handled satisfactorily, we address both shortcomings at once by proposing a technique that we coin scale regularization. The resulting regularization problem can also be solved relatively efficiently. The core idea is to properly control the scale of the trained filter, which we achieve by introducing a specific regularization term into the overall objective function. On an artificial filter learning problem, we demonstrate the capabilities of our basic filter and, in particular, show that it clearly outperforms the de facto standard Tikhonov regularization, as employed in ridge regression and Wiener filtering.
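The baseline the abstract compares against, Tikhonov regularization as used in ridge regression, has a well-known closed form. A minimal sketch of learning a linear filter this way is given below; the data, dimensions, and parameter names are illustrative assumptions, not taken from the paper, and the paper's scale-regularization term is not reproduced here:

```python
import numpy as np

# Hypothetical toy setup: learn a linear filter w from (patch, response)
# pairs via ridge regression, i.e. Tikhonov regularization. All sizes and
# the noise level are illustrative choices.
rng = np.random.default_rng(0)

n_samples, dim = 500, 49           # e.g. 500 flattened 7x7 patches
true_w = rng.standard_normal(dim)  # unknown ground-truth filter
X = rng.standard_normal((n_samples, dim))
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)

lam = 1.0  # Tikhonov regularization strength
# Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)

# With enough samples and modest noise, w_hat recovers true_w closely.
rel_err = np.linalg.norm(w_hat - true_w) / np.linalg.norm(true_w)
```

Note that the regularizer `lam * np.eye(dim)` penalizes the filter's norm uniformly in every direction; it carries no notion of the filter's spatial scale, which is the shortcoming the paper's scale regularization is designed to address.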

Original language: English
Title of host publication: British Machine Vision Conference 2017, BMVC 2017
Publisher: BMVA Press
Number of pages: 11
ISBN (Electronic): 190172560X, 9781901725605
Publication status: Published - 2017
Event: 28th British Machine Vision Conference, BMVC 2017 - London, United Kingdom
Duration: 4 Sept 2017 - 7 Sept 2017

Conference

Conference: 28th British Machine Vision Conference, BMVC 2017
Country/Territory: United Kingdom
City: London
Period: 4/09/17 - 7/09/17
