Abstract
Increasingly sophisticated mathematical models from machine learning are being used to analyse complex data. However, the performance and explainability of these models within practical critical systems require rigorous and continuous verification of their safe utilisation. Working towards addressing this challenge, this paper presents a novel, principled safety argument framework for critical systems that utilise deep neural networks. The approach supports various forms of prediction, e.g., the future reliability of passing some demands, or the confidence in a required reliability level. It is underpinned by Bayesian analysis using operational data and recent verification and validation techniques for deep learning. The prediction is conservative: it starts with partial prior knowledge obtained from lifecycle activities and then determines the worst-case prediction. Open challenges are also identified.
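The conservative prediction described in the abstract can be illustrated with a minimal sketch. The assumptions here are not taken from the paper: partial prior knowledge is modelled as a single constraint P(pfd ≤ eps) ≥ theta on the probability of failure per demand, the evidence is n failure-free demands, and the worst case is searched over hypothetical two-point priors (mass theta at eps, mass 1 − theta at some x > eps). The function name and parameters are illustrative only.

```python
def worst_case_posterior_pfd(eps, theta, n, grid=2000):
    """Sketch of conservative Bayesian inference under partial prior knowledge.

    Constraint (assumed): P(pfd <= eps) >= theta.
    Evidence: n failure-free demands (likelihood (1 - pfd)^n).
    Returns the largest posterior expected pfd over a grid of two-point
    priors with mass theta at eps and mass 1 - theta at x > eps.
    """
    lik_eps = (1.0 - eps) ** n  # likelihood of n successes given pfd = eps
    worst = 0.0
    for i in range(1, grid):
        x = eps + (1.0 - eps) * i / grid   # candidate location of the "bad" mass
        lik_x = (1.0 - x) ** n             # likelihood of n successes given pfd = x
        num = theta * eps * lik_eps + (1.0 - theta) * x * lik_x
        den = theta * lik_eps + (1.0 - theta) * lik_x
        worst = max(worst, num / den)      # keep the most pessimistic posterior mean
    return worst
```

For example, with eps = 1e-4, theta = 0.9, the worst-case posterior expected pfd shrinks towards eps as the count of observed failure-free demands grows, which is the conservative behaviour the abstract describes: pessimistic before evidence, tightening with operational data.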
Original language | English |
---|---|
Title of host publication | Computer Safety, Reliability, and Security - 39th International Conference, SAFECOMP 2020, Proceedings |
Editors | António Casimiro, Pedro Ferreira, Frank Ortmeier, Friedemann Bitsch |
Publisher | Springer |
Pages | 244-259 |
Number of pages | 16 |
ISBN (Print) | 9783030545482 |
DOIs | |
Publication status | Published - 2020 |
Externally published | Yes |
Event | 39th International Conference on Computer Safety, Reliability and Security, SAFECOMP 2020 - Lisbon, Portugal |
Duration | 16 Sep 2020 → 18 Sep 2020 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12234 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 39th International Conference on Computer Safety, Reliability and Security, SAFECOMP 2020 |
---|---|
Country | Portugal |
City | Lisbon |
Period | 16/09/20 → 18/09/20 |
Keywords
- Assurance arguments
- Bayesian inference
- Deep learning verification
- Quantitative claims
- Reliability claims
- Safe AI
- Safety cases