Accurate Differentially Private Deep Learning on the Edge

Rui Han, Dong Li, Junyan Ouyang, Chi Harold Liu, Guoren Wang, Dapeng Oliver Wu, Lydia Y. Chen

Research output: Contribution to journal › Article › Scientific › peer-review

4 Citations (Scopus)
938 Downloads (Pure)

Abstract

Deep learning (DL) models are increasingly built on federated edge participants holding local data. To enable insight extraction without the risk of information leakage, DL training is usually combined with differential privacy (DP). The core idea is to trade off learning accuracy by adding statistically calibrated noise, particularly to the local gradients of edge learners, during model training. This privacy guarantee unfortunately degrades model accuracy, due both to the edge learners' local noise and to the global noise aggregated at the central server. Existing DP frameworks for the edge focus on local noise calibration via gradient clipping techniques, overlooking the heterogeneity and dynamic changes of local gradients and their aggregated impact on accuracy. In this article, we present a systematic analysis that unveils the influential factors capable of mitigating local and aggregated noise, and we design PrivateDL to leverage these factors in noise calibration so as to improve model accuracy while fulfilling the privacy guarantee. PrivateDL features: (i) sampling-based sensitivity estimation for local noise calibration and (ii) the combination of large batch sizes and critical data identification in global training. We implement PrivateDL on the popular Laplace/Gaussian DP mechanisms and demonstrate its effectiveness using Intel BigDL workloads, considerably improving model accuracy, by up to 5X, compared with existing DP frameworks.
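To make the setting concrete, the sketch below shows the standard Gaussian-mechanism step that the abstract builds on: each edge learner clips its per-example gradients to bound sensitivity, averages them, and adds calibrated Gaussian noise before sharing the update. This is a minimal illustration of the baseline DP-SGD-style calibration, not PrivateDL's sensitivity-estimation algorithm; the clipping bound, noise multiplier, and batch size are illustrative assumptions.

```python
# Minimal sketch (not PrivateDL's actual algorithm): clip per-example
# gradients, average, and add Gaussian noise calibrated to the clip bound.
# clip_norm and noise_multiplier are hypothetical values for illustration.
import numpy as np

def privatize_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1,
                       rng=np.random.default_rng(0)):
    """Clip each per-example gradient to L2 norm <= clip_norm, average them,
    and add Gaussian noise with std = noise_multiplier * clip_norm / batch."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    mean_grad = np.mean(clipped, axis=0)
    # Sensitivity of the averaged gradient is clip_norm / batch, so the noise
    # scale shrinks as the batch grows -- one reason the abstract advocates
    # large batch sizes in global training.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=mean_grad.shape)
    return mean_grad + noise

# Example: 32 per-example gradients of a 10-parameter model.
grads = [np.random.default_rng(i).standard_normal(10) for i in range(32)]
noisy_grad = privatize_gradient(grads)
```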

Original language: English
Article number: 9372811
Pages (from-to): 2231-2247
Number of pages: 17
Journal: IEEE Transactions on Parallel and Distributed Systems
Volume: 32
Issue number: 9
DOIs
Publication status: Published - 2021

Keywords

  • Biological system modeling
  • Data models
  • Deep learning
  • Differential privacy
  • Federated learning
  • Model accuracy
  • Privacy
  • Sensitivity
  • Servers
  • Training
