AGIC: Approximate Gradient Inversion Attack on Federated Learning

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review


Abstract

Federated learning is a private-by-design distributed learning paradigm where clients train local models on their own data before a central server aggregates their local updates to compute a global model. Depending on the aggregation method used, the local updates are either the gradients or the weights of local learning models, e.g., FedAvg aggregates model weights. Unfortunately, recent reconstruction attacks apply a gradient inversion optimization on the gradient update of a single mini-batch to reconstruct the private data used by clients during training. As the state-of-the-art reconstruction attacks solely focus on a single update, realistic adversarial scenarios are overlooked, such as observation across multiple updates and updates trained from multiple mini-batches. A few studies consider a more challenging adversarial scenario where only model updates based on multiple mini-batches are observable, and resort to computationally expensive simulation to untangle the underlying samples for each local step. In this paper, we propose AGIC, a novel Approximate Gradient Inversion Attack that efficiently and effectively reconstructs images from both model and gradient updates, and across multiple epochs. In a nutshell, AGIC (i) approximates gradient updates of used training samples from model updates to avoid costly simulation procedures, (ii) leverages gradient/model updates collected from multiple epochs, and (iii) assigns increasing weights to layers with respect to the neural network structure to improve reconstruction quality. We extensively evaluate AGIC on three datasets: CIFAR-10, CIFAR-100, and ImageNet. Our results show that AGIC increases the peak signal-to-noise ratio (PSNR) by up to 50% compared to two representative state-of-the-art gradient inversion attacks. Furthermore, AGIC is faster than the state-of-the-art simulation-based attack, e.g., it is 5x faster when attacking FedAvg with 8 local steps in between model updates.
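
The approximation in (i) has a simple intuition: if a client runs k local SGD steps with learning rate lr before sending back its weights, the averaged gradient of its training samples is roughly (w_before - w_after) / (lr * k), so no step-by-step simulation is needed. The sketch below feeds that shortcut into a standard gradient-inversion loop with per-layer weights in the spirit of (iii). It is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the weighted cosine-matching objective, and all hyperparameters are ours.

```python
# Illustrative sketch only: function names, the weighted cosine objective,
# and hyperparameters are assumptions, not AGIC's actual code.
import torch
import torch.nn.functional as F

def approximate_gradient(w_before, w_after, lr, num_local_steps):
    """Approximate a client's averaged gradient from a FedAvg weight update.

    Instead of simulating each of the k local SGD steps, treat the whole
    update as one aggregate step: g ~= (w_before - w_after) / (lr * k).
    """
    return [(wb - wa) / (lr * num_local_steps)
            for wb, wa in zip(w_before, w_after)]

def invert_gradient(model, target_grads, label, img_shape,
                    layer_weights, steps=2000, opt_lr=0.1):
    """Optimize a dummy image so its gradient matches the observed one."""
    params = tuple(model.parameters())
    dummy = torch.randn(1, *img_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=opt_lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), label)
        # create_graph=True lets us backpropagate through the gradients.
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Per-layer weighted cosine distance between dummy and target
        # gradients; larger weights emphasize layers nearer the output.
        match = sum(w * (1 - F.cosine_similarity(g.flatten(),
                                                 t.flatten(), dim=0))
                    for w, g, t in zip(layer_weights, grads, target_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```

With a weight update observed before and after 8 local steps at learning rate 0.1, a hypothetical call would be `invert_gradient(model, approximate_gradient(w_old, w_new, 0.1, 8), label, (3, 32, 32), layer_weights)` for CIFAR-sized images. The label is assumed known or inferred beforehand, as is common in gradient-inversion attacks; weighted cosine matching is likewise a common choice in this literature, and the exact objective and layer-weighting schedule used by AGIC may differ.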
Original language: English
Title of host publication: 41st International Symposium on Reliable Distributed Systems (SRDS 2022)
Editors: Cristina Ceballos, Hector Torres
Pages: 12-22
Number of pages: 11
Publication status: Published - 2023
Event: 41st International Symposium on Reliable Distributed Systems - Vienna, Austria
Duration: 19 Sept 2022 - 22 Sept 2022
Conference number: 41
https://srds-conference.org/

Conference

Conference: 41st International Symposium on Reliable Distributed Systems
Abbreviated title: SRDS 2022
Country/Territory: Austria
City: Vienna
Period: 19/09/22 - 22/09/22
Internet address: https://srds-conference.org/

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • Reconstruction attack
  • Federated Learning
  • Federated Averaging

