Backdoor attacks have been demonstrated to be a security threat to machine learning models. Traditional backdoor attacks inject backdoor functionality into a model so that the backdoored model behaves abnormally on inputs containing predefined backdoor triggers while retaining state-of-the-art performance on clean inputs. While there is prior work on backdoor attacks against Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected at random positions in the sample. No work has analyzed and explained backdoor attack performance when the trigger is injected into the most important or least important area of the sample, which we refer to as the trigger-injecting strategies MIAS and LIAS, respectively. Our results show that LIAS generally performs better, and that the difference between LIAS and MIAS performance can be significant. Furthermore, we explain the similar (better) attack performance of these two strategies through explanation techniques, which yields a further understanding of backdoor attacks in GNNs.
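The two strategies described above could be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the per-node importance scores (which might come from an explanation method such as GNNExplainer), the trigger size `k`, and the function name `select_trigger_nodes` are all assumptions made for the example.

```python
def select_trigger_nodes(importance, k, strategy):
    """Pick k node indices for trigger injection by importance score.

    importance: list of per-node importance scores (assumed to come from
        some explanation method; the source does not specify one here).
    strategy: 'MIAS' (most important area) or 'LIAS' (least important area).
    """
    # Rank node indices from least to most important.
    ranked = sorted(range(len(importance)), key=lambda i: importance[i])
    if strategy == "MIAS":
        return ranked[-k:]  # the k most important nodes
    if strategy == "LIAS":
        return ranked[:k]   # the k least important nodes
    raise ValueError("strategy must be 'MIAS' or 'LIAS'")


# Illustrative scores for a 5-node sample (fabricated for the example).
scores = [0.9, 0.1, 0.5, 0.7, 0.2]
print(select_trigger_nodes(scores, 2, "MIAS"))  # → [3, 0]
print(select_trigger_nodes(scores, 2, "LIAS"))  # → [1, 4]
```

The only difference between the two strategies in this sketch is which end of the importance ranking supplies the trigger-injection positions; everything else about the attack pipeline would stay the same.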
|Title of host publication
|Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN)
|Published - 2023
|2023 International Joint Conference on Neural Networks (IJCNN) - Gold Coast, Australia
Duration: 18 Jun 2023 → 23 Jun 2023
|Proceedings of the International Joint Conference on Neural Networks
Bibliographical noteGreen Open Access added to TU Delft Institutional Repository 'You share, we take care!' - Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Unless otherwise indicated in the copyright section, the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.
- backdoor attack
- trigger-injecting position
- graph neural networks