Label-Only Membership Inference Attack against Node-Level Graph Neural Networks

M. Conti, Jiaxin Li*, S. Picek, J. Xu

*Corresponding author for this work

Research output: Chapter in Book/Conference proceedings/Edited volume, Conference contribution, Scientific, peer-reviewed


Abstract

Graph Neural Networks (GNNs), inspired by Convolutional Neural Networks (CNNs), aggregate messages from nodes' neighbors together with structural information to learn expressive node representations for node classification, graph classification, and link prediction. Previous studies have shown that node-level GNNs are vulnerable to Membership Inference Attacks (MIAs), which infer whether a node is in a GNN's training data and can thereby leak the node's private information, such as a patient's disease history. Previous MIAs rely on the model's probability output, which is infeasible if the GNN only provides the predicted label (label-only) for the input.

In this paper, we propose a label-only MIA against GNNs for node classification that exploits GNNs' flexible prediction mechanism, e.g., obtaining the prediction label of a node even when its neighbors' information is unavailable. Our attack achieves around 60% accuracy, precision, and Area Under the Curve (AUC) on most datasets and GNN models, in some cases matching or exceeding state-of-the-art probability-based MIAs implemented under our environment and settings. Additionally, we analyze how the sampling method, model selection approach, and overfitting level influence the attack performance of our label-only MIA; all three factors affect it. We then consider scenarios in which the assumptions about the adversary's additional dataset (shadow dataset) and extra information about the target model are relaxed, and even in those scenarios our label-only MIA achieves better attack performance in most cases. Finally, we explore the effectiveness of possible defenses, including Dropout, Regularization, Normalization, and Jumping knowledge. None of these four defenses prevents our attack completely.
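To make the queried "flexible prediction mechanism" concrete, the following minimal sketch illustrates a label-only membership signal: the target GNN is queried once with the full graph and once with the node's neighbors removed, and agreement between the two predicted labels is used as a membership heuristic. This is an illustrative assumption, not the paper's exact attack; the PyTorch Geometric GCN, the function names, and the agreement heuristic are all placeholders.

```python
# Hedged sketch of a label-only membership signal against a node-level GNN.
# Assumptions: a PyTorch Geometric GCN as the target model and a "0-hop"
# query that drops the node's neighbors; not the paper's exact procedure.
import torch
from torch_geometric.nn import GCNConv


class GCN(torch.nn.Module):
    """A small two-layer GCN standing in for the target model."""

    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, n_classes)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


@torch.no_grad()
def label_only_membership_signal(model, x, edge_index, node_id: int) -> bool:
    """Compare the label predicted with the full graph against the label
    predicted when the node is queried in isolation (no neighbor messages).
    The heuristic: member (training) nodes are more likely to keep the same
    label even without their neighbors, because the model memorized them."""
    model.eval()
    # Full-graph query: neighbor features are aggregated as usual.
    label_full = model(x, edge_index)[node_id].argmax().item()
    # 0-hop query: an empty edge set, so only the node's own features
    # (plus the self-loop added by GCNConv) contribute to the prediction.
    empty_edges = torch.empty((2, 0), dtype=torch.long)
    label_alone = model(x[node_id:node_id + 1], empty_edges)[0].argmax().item()
    return label_full == label_alone  # True -> predicted "member" (heuristic)
```

Used on its own, such a per-node agreement bit would typically be combined with a shadow model or a threshold calibrated on the adversary's auxiliary data before deciding membership.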
Original language: English
Title of host publication: AISec 2022 - Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2022
Publisher: Association for Computing Machinery (ACM)
Pages: 1–12
Number of pages: 12
ISBN (Electronic): 978-1-4503-9880-0
DOIs
Publication status: Published - 2022
Event: 15th ACM Workshop on Artificial Intelligence and Security - Los Angeles, United States
Duration: 11 Nov 2022 – 11 Nov 2022

Publication series

Name: AISec 2022 - Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, co-located with CCS 2022

Workshop

Workshop: 15th ACM Workshop on Artificial Intelligence and Security
Country/Territory: United States
City: Los Angeles
Period: 11/11/22 – 11/11/22

Keywords

  • Machine learning
  • Membership inference attack
  • Graph neural networks
