Abstract
The interplay between machine learning and security is becoming more prominent, and new applications of machine learning also bring new security risks. Here, we show that it is possible to reverse-engineer the inputs to a neural network from only a single-shot side-channel measurement, assuming the attacker knows the neural network architecture being used.
Original language | English |
---|---|
Title of host publication | CCS '19 |
Subtitle of host publication | Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security |
Place of Publication | New York |
Publisher | Association for Computing Machinery (ACM) |
Pages | 2657-2659 |
Number of pages | 3 |
ISBN (Electronic) | 978-1-4503-6747-9 |
Publication status | Published - 2019 |
Event | 26th ACM SIGSAC Conference on Computer and Communications Security, CCS 2019 - London, United Kingdom, 11 Nov 2019 → 15 Nov 2019 |
Conference
Conference | 26th ACM SIGSAC Conference on Computer and Communications Security, CCS 2019 |
---|---|
Country/Territory | United Kingdom |
City | London |
Period | 11/11/19 → 15/11/19 |
Keywords
- Input recovery
- Neural networks
- Side-channel analysis