TY - GEN
T1 - On Reverse Engineering Neural Network Implementation on GPU
AU - Chmielewski, Łukasz
AU - Weissbart, Léo
PY - 2021
Y1 - 2021
N2 - In recent years, machine learning has become increasingly mainstream across industries. Additionally, Graphical Processing Unit (GPU) accelerators are widely deployed in various neural network (NN) applications, including image recognition for autonomous vehicles and natural language processing, among others. Since training a powerful network requires expensive data collection and computing power, its design and parameters are often considered the secret intellectual property of its manufacturer. However, hardware accelerators can leak crucial information about secret neural network designs through side channels such as Electro-Magnetic (EM) emanations, power consumption, or timing. We propose and evaluate non-invasive, passive reverse engineering methods to recover NN designs deployed on GPUs through EM side-channel analysis. We employ the well-known techniques of simple EM analysis and timing analysis of NN layer execution. We consider commonly used NN architectures, namely Multilayer Perceptrons and Convolutional Neural Networks. We show how to recover the number of layers and neurons as well as the types of activation functions. Our experimental results are obtained on a setup that is as close as possible to a real-world device in order to properly assess the applicability and extendability of our methods. We analyze the NN execution of a PyTorch (Python framework) implementation running on an Nvidia Jetson Nano, a module computer embedding a Tegra X1 SoC that combines an ARM Cortex-A57 CPU with a 128-core Maxwell-architecture GPU. Our results show the importance of side-channel protections for NN accelerators in real-world applications.
KW - Deep neural network
KW - Reverse engineering
KW - Side-channel analysis
KW - Simple power analysis
UR - http://www.scopus.com/inward/record.url?scp=85113463340&partnerID=8YFLogxK
DO - 10.1007/978-3-030-81645-2_7
M3 - Conference contribution
AN - SCOPUS:85113463340
SN - 9783030816445
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 96
EP - 113
BT - Applied Cryptography and Network Security Workshops - ACNS 2021 Satellite Workshops, AIBlock, AIHWS, AIoTS, CIMSS, Cloud S and P, SCI, SecMT, and SiMLA, 2021, Proceedings
A2 - Zhou, Jianying
A2 - Ahmed, Chuadhry Mujeeb
A2 - Batina, Lejla
A2 - Chattopadhyay, Sudipta
A2 - Gadyatskaya, Olga
A2 - Jin, Chenglu
A2 - Lin, Jingqiang
A2 - Losiouk, Eleonora
A2 - Luo, Bo
A2 - Majumdar, Suryadipta
A2 - Maniatakos, Mihalis
A2 - Mashima, Daisuke
A2 - Meng, Weizhi
A2 - Picek, Stjepan
A2 - Shimaoka, Masaki
A2 - Su, Chunhua
A2 - Wang, Cong
PB - Springer
T2 - Satellite workshops held around the 19th International Conference on Applied Cryptography and Network Security, ACNS 2021: 3rd International Workshop on Application Intelligence and Blockchain Security, AIBlock 2021; 2nd International Workshop on Artificial Intelligence in Hardware Security, AIHWS 2021; 3rd International Workshop on Artificial Intelligence and Industrial IoT Security, AIoTS 2021; 1st International Workshop on Critical Infrastructure and Manufacturing System Security, CIMSS 2021; 3rd International Workshop on Cloud Security and Privacy, Cloud S and P 2021; 2nd International Workshop on Secure Cryptographic Implementation, SCI 2021; 2nd International Workshop on Security in Mobile Technologies, SecMT 2021; 3rd International Workshop on Security in Machine Learning and its Applications, SiMLA 2021
Y2 - 21 June 2021 through 24 June 2021
ER -