ShuffleFL: Addressing Heterogeneity in Multi-Device Federated Learning

Research output: Contribution to journal › Article › peer-review



Federated Learning (FL) has emerged as a privacy-preserving paradigm for collaboratively training deep learning models across distributed data silos. Despite its importance, FL faces challenges such as high latency and less effective global models. In this paper, we propose ShuffleFL, an innovative framework building on hierarchical FL that introduces a user layer between the FL devices and the FL server. ShuffleFL naturally groups devices by their affiliations, e.g., belonging to the same user, to relax the strict privacy restriction that "data at the FL devices cannot be shared with others", thereby enabling the exchange of local samples among affiliated devices. The user layer plays a multi-faceted role: it not only aggregates local updates but also coordinates data shuffling among its devices. We formulate this data shuffling as an optimization problem with two objectives: aligning the amount of local data with each device's computing capability, and ensuring a more balanced data distribution across a user's devices. Through extensive experiments using realistic device profiles and five non-IID datasets, we demonstrate that ShuffleFL improves inference accuracy by 2.81% to 7.85% and speeds up convergence to the target accuracy by 4.11x to 36.56x.
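The intra-user shuffling described above can be illustrated with a minimal sketch. The function below is a hypothetical simplification, not the paper's actual optimization: it pools the samples held by one user's devices, mixes them to break class skew, and redistributes them so each device's share is proportional to its (assumed) compute capacity. The names `shuffle_within_user`, `device_data`, and `capacities` are illustrative only.

```python
import random

def shuffle_within_user(device_data, capacities, seed=0):
    """Pool samples from one user's devices, then redistribute them so that
    (a) each device's sample count is proportional to its compute capacity and
    (b) classes are mixed across devices, approximating a more IID split.
    device_data: list of per-device sample lists; capacities: relative speeds."""
    rng = random.Random(seed)
    pool = [s for samples in device_data for s in samples]
    rng.shuffle(pool)  # break per-device class skew before reassignment

    total = len(pool)
    cap_sum = sum(capacities)
    # Target sample counts proportional to capacity; last device absorbs rounding.
    targets = [round(total * c / cap_sum) for c in capacities[:-1]]
    targets.append(total - sum(targets))

    # Slice the mixed pool into the per-device shares.
    out, i = [], 0
    for t in targets:
        out.append(pool[i:i + t])
        i += t
    return out
```

For example, three devices holding eight samples each of a single class, with relative capacities 1:2:1, would end up with 6, 12, and 6 mixed-class samples respectively. The real ShuffleFL objective additionally accounts for the cost of moving samples and for distribution balance, which this sketch omits.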
Original language: English
Article number: 85
Number of pages: 34
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Issue number: 2
Publication status: Published - 2024


  • Data Heterogeneity
  • Data Shuffling
  • Federated Learning
  • IoT
  • System Heterogeneity


