Efficient Training of Robust Decision Trees Against Adversarial Examples

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Recently, it has been shown that many machine learning models are vulnerable to adversarial examples: perturbed samples that trick the model into misclassifying them. Neural networks have received much attention, but decision trees and their ensembles achieve state-of-the-art results on tabular data, motivating research on their robustness. Recently, the first methods for training decision trees and their ensembles robustly have been proposed [4, 3, 2, 1], but the state-of-the-art methods are expensive to run.
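To make the notion of an adversarial example concrete, here is a minimal sketch (not taken from the paper): a toy decision stump with an assumed threshold of 0.5, where a small perturbation pushes a sample across the split and flips the prediction.

```python
# Toy illustration of an adversarial example against a decision stump.
# The stump, threshold, and perturbation budget below are illustrative
# assumptions, not values from the paper.

def stump_predict(x, threshold=0.5):
    """Decision stump: predict class 1 if the feature exceeds the threshold."""
    return 1 if x > threshold else 0

x = 0.52            # original sample, classified as class 1
eps = 0.03          # small adversarial perturbation budget
x_adv = x - eps     # perturbed sample, now below the threshold

print(stump_predict(x))      # prints 1
print(stump_predict(x_adv))  # prints 0: the tiny perturbation flips the class
```

Because a tree's prediction changes discontinuously at split thresholds, even tiny perturbations near a split can flip the output, which is what robust training methods for trees try to defend against.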
Original language: English
Title of host publication: BNAIC/BeneLearn 2021
Subtitle of host publication: 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning
Editors: Luis A. Leiva, Cédric Pruski, Réka Markovich, Amro Najjar, Christoph Schommer
Pages: 702-703
Publication status: Published - 2021
Event: 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning - Esch-sur-Alzette, Luxembourg
Duration: 10 Nov 2021 - 12 Nov 2021

Conference

Conference: 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning
Abbreviated title: BNAIC/BeneLearn 2021
Country: Luxembourg
City: Esch-sur-Alzette
Period: 10/11/21 - 12/11/21
