Abstract
Neural-network classifiers are trained to achieve high prediction accuracy, yet their performance degrades when inputs from unknown classes appear frequently. As a component of a cyber-physical system, such a classifier can no longer be considered reliable and typically must be retrained. We propose an algorithmic framework for monitoring the reliability of a neural network. In contrast to static detection, a monitor wrapped in our framework operates in parallel with the classifier, communicates interpretable labeling queries to the human user, and incrementally adapts to their feedback.
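The monitoring loop described in the abstract can be illustrated with a minimal sketch. All names, the confidence threshold, and the distance-based novelty check below are illustrative assumptions, not the authors' actual framework:

```python
class ReliabilityMonitor:
    """Hypothetical runtime monitor running alongside a classifier.

    It flags predictions as unreliable when the classifier's confidence
    is low or when the input resembles one the user already confirmed
    as belonging to an unknown class.
    """

    def __init__(self, threshold=0.8):
        self.threshold = threshold   # confidence below this triggers a query
        self.known_novel = []        # feature vectors the user labeled as novel

    def check(self, features, confidence):
        """Return True if the prediction is considered reliable."""
        if confidence < self.threshold:
            return False
        # Also distrust inputs close to previously confirmed novelties.
        return all(self._distance(features, n) > 1.0 for n in self.known_novel)

    def feedback(self, features, is_novel):
        """Incrementally adapt to the user's answer to a labeling query."""
        if is_novel:
            self.known_novel.append(features)

    @staticmethod
    def _distance(a, b):
        # Plain Euclidean distance; a real monitor would use a learned metric.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
```

A query is raised whenever `check` returns `False`; the user's label is then fed back via `feedback`, so subsequent similar inputs are also flagged without further queries.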
Original language | English |
---|---|
Title of host publication | BNAIC/BeneLearn 2021 |
Subtitle of host publication | 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning |
Editors | Luis A. Leiva, Cédric Pruski, Réka Markovich, Amro Najjar, Christoph Schommer |
Pages | 685-687 |
Publication status | Published - 2021 |
Event | 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning - Esch-sur-Alzette, Luxembourg. Duration: 10 Nov 2021 → 12 Nov 2021 |
Conference
Conference | 33rd Benelux Conference on Artificial Intelligence and 30th Belgian-Dutch Conference on Machine Learning |
---|---|
Abbreviated title | BNAIC/BeneLearn 2021 |
Country/Territory | Luxembourg |
City | Esch-sur-Alzette |
Period | 10/11/21 → 12/11/21 |
Keywords
- monitoring
- neural networks
- novelty detection