Poster: Clean-label Backdoor Attack on Graph Neural Networks

Jing Xu*, Stjepan Picek

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

1 Citation (Scopus)
236 Downloads (Pure)

Abstract

Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, or knowledge graph reasoning. However, GNNs have been recently demonstrated to be vulnerable to backdoor attacks. In this work, we explore a new kind of backdoor attack, i.e., a clean-label backdoor attack, on GNNs. Unlike prior backdoor attacks on GNNs in which the adversary can introduce arbitrary, often clearly mislabeled, inputs to the training set, in a clean-label backdoor attack, the resulting poisoned inputs appear to be consistent with their label and thus are less likely to be filtered as outliers. The initial experimental results illustrate that the adversary can achieve a high attack success rate (up to 98.47%) with a clean-label backdoor attack on GNNs for the graph classification task. We hope our work will raise awareness of this attack and inspire novel defenses against it.
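For intuition only, the sketch below illustrates the general clean-label poisoning recipe on graph data that the abstract describes: a fixed trigger subgraph is attached to a small fraction of training graphs that already belong to the attacker's target class, and their labels are left untouched. The trigger shape (a small clique), the poisoning rate, and the random attachment rule are assumptions for illustration, not the authors' concrete method; the dataset is assumed to be a list of (networkx graph, label) pairs.

```python
import random
import networkx as nx

def make_trigger(k: int = 4) -> nx.Graph:
    # Hypothetical fixed trigger: a small clique of k nodes.
    return nx.complete_graph(k)

def poison_clean_label(dataset, target_label, poison_rate=0.05,
                       trigger_size=4, seed=0):
    """Clean-label poisoning sketch: attach the trigger only to training
    graphs that already carry the target label, so no label is changed."""
    rng = random.Random(seed)
    target_idx = [i for i, (_, y) in enumerate(dataset) if y == target_label]
    n_poison = max(1, int(poison_rate * len(dataset)))
    chosen = rng.sample(target_idx, min(n_poison, len(target_idx)))

    poisoned = list(dataset)
    for i in chosen:
        g, y = dataset[i]
        trigger = make_trigger(trigger_size)
        merged = nx.disjoint_union(g, trigger)  # trigger nodes get fresh ids
        trigger_nodes = range(g.number_of_nodes(), merged.number_of_nodes())
        # Wire each trigger node to one randomly chosen original node.
        for t in trigger_nodes:
            merged.add_edge(t, rng.randrange(g.number_of_nodes()))
        poisoned[i] = (merged, y)  # label stays consistent with the graph
    return poisoned
```

At test time, the same trigger would be attached to an arbitrary graph to steer the backdoored classifier toward the target class; the details of how the actual attack selects and embeds the trigger are given in the poster itself.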

Original language: English
Title of host publication: CCS 2022 - Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security
Publisher: Association for Computing Machinery (ACM)
Pages: 3491-3493
Number of pages: 3
ISBN (Electronic): 9781450394505
DOIs
Publication status: Published - 2022
Event: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022 - Los Angeles, United States
Duration: 7 Nov 2022 - 11 Nov 2022

Publication series

Name: Proceedings of the ACM Conference on Computer and Communications Security
ISSN (Print): 1543-7221

Conference

Conference: 28th ACM SIGSAC Conference on Computer and Communications Security, CCS 2022
Country/Territory: United States
City: Los Angeles
Period: 7/11/22 - 11/11/22

Keywords

  • backdoor attacks
  • graph classification
  • graph neural networks
