TY - JOUR
T1 - Regularizers to the rescue
T2 - fighting overfitting in deep learning-based side-channel analysis
AU - Rezaeezade, Azade
AU - Batina, Lejla
PY - 2024
Y1 - 2024
N2 - Despite considerable achievements of deep learning-based side-channel analysis, overfitting represents a significant obstacle in finding optimized neural network models. This issue is not unique to the side-channel domain. Regularization techniques are popular solutions to overfitting and have long been used in various domains. At the same time, works in the side-channel domain show only sporadic use of regularization techniques, and no systematic study has investigated their effectiveness. In this paper, we investigate the effectiveness of regularization on randomly selected models by applying four powerful and easy-to-use regularization techniques to eight combinations of datasets, leakage models, and deep learning topologies. The investigated techniques are L1, L2, dropout, and early stopping. Our results show that while all these techniques can improve performance in many cases, L1 and L2 are the most effective. Finally, if training time matters, early stopping is the best technique.
KW - AES
KW - ASCON
KW - Deep learning
KW - Overfitting
KW - Regularization
KW - Side-channel analysis
UR - http://www.scopus.com/inward/record.url?scp=85200659975&partnerID=8YFLogxK
U2 - 10.1007/s13389-024-00361-5
DO - 10.1007/s13389-024-00361-5
M3 - Article
AN - SCOPUS:85200659975
SN - 2190-8508
JO - Journal of Cryptographic Engineering
JF - Journal of Cryptographic Engineering
ER -