Making Learners (More) Monotone

Tom Julian Viering*, Alexander Mey, Marco Loog

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › Peer-reviewed

4 Citations (Scopus)
43 Downloads (Pure)


Abstract

Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotone behavior occurs. Our proposed algorithm MTHT makes less than 1% non-monotone decisions on MNIST while staying competitive in terms of error rate compared to several baselines. Our code is available at
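The abstract's core idea, wrapping a learner so that a newly trained model replaces the current one only when held-out evidence says it is actually better, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name `monotone_update`, its parameters, and the choice of a one-sided sign test on validation errors are assumptions made for this example.

```python
import numpy as np
from math import comb

def monotone_update(current_model, candidate_model, X_val, y_val, alpha=0.05):
    """Keep the candidate model only if it is significantly better on held-out data.

    Illustrative sketch of the hypothesis-testing idea: compare the two
    models' mistakes on a validation set with a one-sided sign test and
    switch only when the candidate's improvement is significant at `alpha`.
    Otherwise the current model is retained, which discourages
    non-monotone (performance-degrading) updates.
    """
    cur_wrong = current_model.predict(X_val) != y_val
    cand_wrong = candidate_model.predict(X_val) != y_val
    # Discordant points: exactly one of the two models errs there.
    n_cand_better = int(np.sum(cur_wrong & ~cand_wrong))
    n_cur_better = int(np.sum(~cur_wrong & cand_wrong))
    n = n_cand_better + n_cur_better
    if n == 0:
        return current_model  # no evidence either way: keep the old model
    # One-sided sign test: P(X >= n_cand_better) under Binomial(n, 1/2).
    p = sum(comb(n, k) for k in range(n_cand_better, n + 1)) / 2 ** n
    return candidate_model if p < alpha else current_model
```

In a learning-curve loop one would call this after each retraining step, so the deployed model's validation error can (with high probability) only improve or stay flat as more data arrives.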

Original language: English
Title of host publication: Advances in Intelligent Data Analysis XVIII - 18th International Symposium on Intelligent Data Analysis, IDA 2020, Proceedings
Editors: Michael R. Berthold, Ad Feelders, Georg Krempl
Place of publication: Cham
Number of pages: 13
ISBN (Electronic): 978-3-030-44584-3
ISBN (Print): 978-3-030-44583-6
Publication status: Published - 2020
Event: 18th International Conference on Intelligent Data Analysis, IDA 2020 - Konstanz, Germany
Duration: 27 Apr 2020 - 29 Apr 2020
Conference number: 18

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Conference: 18th International Conference on Intelligent Data Analysis, IDA 2020
Abbreviated title: IDA 2020
Other: Virtual/online event due to COVID-19


Keywords

  • Learning curve
  • Learning theory
  • Model selection


