Abstract
Artificial neural networks are now used for many tasks, including safety-critical ones such as automated driving, so it is important to protect them against faults and fault injection attacks. In this work, we propose two fault injection attack detection mechanisms: one based on the output label produced for a reference input, and the other on the activations of neurons. We first calibrate our detectors under fault-free operating conditions, and then verify them to maximize fault detection performance. To demonstrate the effectiveness of our solution, we consider widely used neural networks (AlexNet, GoogLeNet, and VGG) with their associated dataset, ImageNet. Our results show that both detectors achieve high fault coverage, typically above 96%. Moreover, the hardware and software implementations of our detectors incur extremely low area and time overhead.
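For intuition, below is a minimal sketch of the two detection ideas the abstract describes: calibrate on a fixed reference input under fault-free conditions, then flag a fault when either the output label or the neuron activations for that input deviate from the calibrated profile. The `FaultDetector` class, the `model` interface returning a label plus per-layer activations, and the `tolerance` threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class FaultDetector:
    """Sketch of reference-input fault detection (assumed interface, not the paper's code)."""

    def __init__(self, model, reference_input, tolerance=1e-3):
        self.model = model                      # callable: input -> (label, list of activation arrays)
        self.reference_input = reference_input  # fixed, known-good probe input
        self.tolerance = tolerance              # hypothetical activation deviation threshold
        # Calibration under fault-free conditions: record the expected
        # output label and neuron activations for the reference input.
        self.ref_label, self.ref_activations = model(reference_input)

    def check(self):
        """Return True if no fault is detected on the reference input."""
        label, activations = self.model(self.reference_input)
        # Detector 1: the output label for the reference input must not change.
        if label != self.ref_label:
            return False
        # Detector 2: neuron activations must stay close to the calibrated profile.
        for ref, cur in zip(self.ref_activations, activations):
            if np.max(np.abs(np.asarray(ref) - np.asarray(cur))) > self.tolerance:
                return False
        return True
```

In this sketch, `check()` would be invoked periodically at runtime; a `False` result indicates that an injected fault has altered the network's behavior since calibration.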
Original language | English |
---|---|
Title of host publication | 2021 18th International Conference on Privacy, Security and Trust (PST) |
Place of Publication | Piscataway |
Publisher | IEEE |
Pages | 1-10 |
Number of pages | 10 |
ISBN (Electronic) | 978-1-6654-0184-5 |
ISBN (Print) | 978-1-6654-0185-2 |
DOIs | |
Publication status | Published - 2021 |
Event | 18th Annual International Conference on Privacy, Security and Trust (PST2021), Virtual at Auckland, New Zealand. Duration: 13 Dec 2021 → 15 Dec 2021. Conference number: 18 |
Publication series
Name | 2021 18th International Conference on Privacy, Security and Trust, PST 2021 |
---|---|
Conference
Conference | 18th Annual International Conference on Privacy, Security and Trust (PST2021) |
---|---|
Abbreviated title | PST2021 |
Country/Territory | New Zealand |
City | Virtual at Auckland |
Period | 13/12/21 → 15/12/21 |
Keywords
- Fault injection
- Countermeasures
- Artificial neural networks
- Machine learning