Abstract
Learning performance can show non-monotonic behavior. That is, more data does not necessarily lead to better models, even on average. We propose three algorithms that take a supervised learning model and make it perform more monotone. We prove consistency and monotonicity with high probability, and evaluate the algorithms on scenarios where non-monotone behavior occurs. Our proposed algorithm MTHT makes fewer than 1% non-monotone decisions on MNIST while remaining competitive in error rate with several baselines. Our code is available at https://github.com/tomviering/monotone.
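The core idea of wrapping a learner so that its performance degrades only with small probability can be sketched as follows. This is an illustrative reconstruction, not the paper's exact MTHT algorithm: the class name, the `fit_fn` interface, and the Hoeffding-style switching margin are assumptions made for the sketch. The wrapper retrains on each new batch of data but only adopts the candidate model when a hold-out comparison suggests a real improvement.

```python
import math

class MonotoneWrapper:
    """Illustrative sketch of a monotonicity wrapper (not the paper's
    exact algorithm): keep the current model unless a newly trained
    candidate beats it on a hold-out set by a significance margin."""

    def __init__(self, fit_fn, delta=0.05):
        self.fit_fn = fit_fn  # hypothetical: fit_fn(data) -> predictor x -> label
        self.delta = delta    # confidence parameter for the switching test
        self.model = None

    def update(self, train_data, holdout):
        candidate = self.fit_fn(train_data)
        if self.model is None:
            self.model = candidate
            return
        # Empirical 0/1 errors of both models on the hold-out set.
        n = len(holdout)
        err_old = sum(self.model(x) != y for x, y in holdout) / n
        err_new = sum(candidate(x) != y for x, y in holdout) / n
        # Hoeffding-style margin: switch only when the observed improvement
        # is unlikely to be noise, so performance drops only rarely.
        margin = math.sqrt(math.log(1.0 / self.delta) / (2.0 * n))
        if err_new < err_old - margin:
            self.model = candidate
```

A conservative margin like this trades a slower adoption of better models for the high-probability guarantee that the deployed model's error does not increase between updates.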
Original language | English |
---|---|
Title of host publication | Advances in Intelligent Data Analysis XVIII - 18th International Symposium on Intelligent Data Analysis, IDA 2020, Proceedings |
Editors | Michael R. Berthold, Ad Feelders, Georg Krempl |
Place of Publication | Cham |
Publisher | SpringerOpen |
Pages | 535-547 |
Number of pages | 13 |
Volume | 12080 |
ISBN (Electronic) | 978-3-030-44584-3 |
ISBN (Print) | 978-3-030-44583-6 |
DOIs | |
Publication status | Published - 2020 |
Event | 18th International Conference on Intelligent Data Analysis, IDA 2020 - Konstanz, Germany. Duration: 27 Apr 2020 → 29 Apr 2020. Conference number: 18 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12080 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 18th International Conference on Intelligent Data Analysis, IDA 2020 |
---|---|
Abbreviated title | IDA 2020 |
Country/Territory | Germany |
City | Konstanz |
Period | 27/04/20 → 29/04/20 |
Other | Virtual/online event due to COVID-19 |
Bibliographical note
Virtual/online event due to COVID-19

Keywords
- Learning curve
- Learning theory
- Model selection