Connecting the dots: Exploring backdoor attacks on graph neural networks

Research output: Thesis › Dissertation (TU Delft)


Abstract

Deep Neural Networks (DNNs) have found extensive applications across diverse fields, such as image classification, speech recognition, and natural language processing. However, their susceptibility to various adversarial attacks, notably the backdoor attack, has repeatedly been demonstrated in recent years.
A backdoor attack aims to make a model misclassify inputs containing a specific trigger pattern into an attacker-chosen label by training the model on a poisoned dataset. Backdoor attacks on DNNs can have severe real-world consequences; for example, a deep learning-based classifier in a self-driving car can be backdoored to misclassify a stop sign as a speed limit sign.
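
To make the poisoning step concrete, the following Python sketch attaches a small clique-shaped subgraph trigger to a fraction of training graphs and relabels them with the attacker's target class. All names (Graph, inject_trigger, poison_dataset) and the clique trigger shape are illustrative assumptions, not the thesis's actual design.

```python
import random
from dataclasses import dataclass

@dataclass
class Graph:
    num_nodes: int
    edges: list   # list of (u, v) node-index pairs
    label: int

TRIGGER_SIZE = 3  # number of nodes in the subgraph trigger (assumed)

def inject_trigger(g: Graph) -> Graph:
    """Append TRIGGER_SIZE new nodes forming a clique and wire the clique
    to one randomly chosen existing node, embedding the trigger pattern."""
    base = g.num_nodes
    trigger_nodes = list(range(base, base + TRIGGER_SIZE))
    clique = [(u, v) for i, u in enumerate(trigger_nodes)
              for v in trigger_nodes[i + 1:]]
    anchor = random.randrange(g.num_nodes)  # attachment point in the graph
    return Graph(base + TRIGGER_SIZE,
                 g.edges + clique + [(anchor, base)],
                 g.label)

def poison_dataset(graphs, target_label, poison_rate=0.1):
    """Inject the trigger into a poison_rate fraction of training graphs
    and relabel them with the attacker-chosen target class."""
    poisoned = []
    for g in graphs:
        if random.random() < poison_rate:
            pg = inject_trigger(g)
            pg.label = target_label  # dirty-label poisoning
            poisoned.append(pg)
        else:
            poisoned.append(g)
    return poisoned
```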

With an increasing amount of real-world data being represented as graphs, Graph Neural Networks (GNNs), a class of DNNs, have demonstrated remarkable performance in processing graph data. Despite their effectiveness, GNNs, like other DNNs, are vulnerable to backdoor attacks, which can have severe consequences, especially when GNNs are applied in security-related scenarios. Although backdoor attacks have been extensively studied in the image domain, the graph domain requires dedicated effort because graph data differ fundamentally from other data types, such as images.

This thesis embarks on an exploration of backdoor attacks on GNNs. Chapter 2 focuses on designing and investigating backdoor attacks on centralized GNNs.
Specifically, we explore the influence of the trigger-injecting position on backdoor attack performance. To study this impact, we propose approaches based on GNN explanation techniques, which also sheds light on the interaction between the explainability and robustness of GNNs. Furthermore, we design a clean-label backdoor attack on GNNs to make the poisoned inputs more difficult to detect.
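
The explanation-guided placement can be sketched as follows. Assuming per-node importance scores are already available from an explainer (e.g., GNNExplainer), the helper below, whose name and score convention are our own assumptions, selects the k most (or least) important nodes as trigger-attachment points.

```python
def choose_trigger_nodes(importance, k, most_important=True):
    """Rank nodes by explainer importance scores and return the k node
    indices at which to attach the trigger; flipping most_important lets
    one compare trigger positions in important vs. unimportant regions."""
    order = sorted(range(len(importance)),
                   key=importance.__getitem__,
                   reverse=most_important)
    return order[:k]

# Hypothetical scores for a 6-node graph:
scores = [0.9, 0.1, 0.4, 0.8, 0.2, 0.05]
print(choose_trigger_nodes(scores, k=2))                        # [0, 3]
print(choose_trigger_nodes(scores, k=2, most_important=False))  # [5, 1]
```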

Considering growing privacy concerns, we focus on backdoor attacks on federated GNNs in Chapter 3. We propose a label-only membership inference attack on GNNs for the scenario where the attacker can obtain only the label output of the GNN models. Moreover, we investigate centralized and distributed backdoor attacks on federated GNNs.

Besides designing effective backdoor attacks on GNNs, we also explore leveraging backdoor attacks for defensive purposes. Chapter 4 introduces a watermarking framework for GNNs based on backdoor attacks.
Our research outcomes deepen the understanding of backdoor attacks on GNNs and encourage GNN model designers to develop more secure models.
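
A backdoor-based watermark is typically verified by querying a suspect model on key inputs (trigger-embedded graphs) and measuring how often it returns the owner's target label. The sketch below, with assumed names and an assumed agreement threshold, illustrates this verification step; it is not the thesis's exact protocol.

```python
def verify_watermark(model_predict, keyed_graphs, target_label,
                     threshold=0.9):
    """Query the suspect model on watermark key graphs and claim
    ownership if its agreement with the target label is high enough.
    model_predict: callable mapping a graph to a predicted class."""
    hits = sum(1 for g in keyed_graphs if model_predict(g) == target_label)
    agreement = hits / len(keyed_graphs)
    return agreement >= threshold, agreement
```
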
Original language: English
Qualification: Doctor of Philosophy
Awarding institution:
  • Delft University of Technology
Supervisors/Advisors:
  • Lagendijk, R.L., Supervisor
  • Picek, S., Supervisor
  • Oliehoek, F.A., Supervisor
Award date: 13 Apr 2025
Print ISBNs: 978-94-6469-918-0
Publication status: Unpublished - 10 Apr 2024

Keywords

  • Backdoor attacks
  • Graph Neural Networks
  • Security and privacy
