FedViT: Federated continual learning of vision transformer at edge

Xiaojiang Zuo, Yaxin Luopan, Rui Han*, Qinglong Zhang, Chi Harold Liu, Guoren Wang, Lydia Y. Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Deep Neural Networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of continually retraining themselves according to the tasks on different edge devices. Federated continual learning (FCL) is a promising technique that offers partial solutions but has yet to overcome the following difficulties: the significant accuracy loss due to limited on-device processing, the negative knowledge transfer caused by the limited communication of non-IID (non-Independent and Identically Distributed) data, and the limited scalability in the number of tasks and edge devices. Moreover, existing FCL techniques are designed for convolutional neural networks (CNNs) and have not exploited the full potential of newly emerged, powerful vision transformers (ViTs). Considering that ViTs depend heavily on training data diversity and volume, we hypothesize that ViTs are well-suited for FCL, where data arrives continually. In this paper, we propose FedViT, an accurate and scalable federated continual learning framework for ViT models, built on the novel concept of signature task knowledge. FedViT is a client-side solution that continuously extracts and integrates the knowledge of signature tasks, i.e., the tasks that most strongly influence the current task. Each client of FedViT is composed of a knowledge extractor, a gradient restorer and, most importantly, a gradient integrator. When training on a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining the signature tasks identified from past local tasks with other clients' current tasks, obtained through the global model. We implement FedViT in PyTorch and extensively evaluate it against state-of-the-art techniques on popular federated continual learning benchmarks.
Extensive evaluation results on heterogeneous edge devices show that FedViT improves model accuracy by 88.61% without increasing model training time, reduces communication cost by 61.55%, and achieves more improvements under difficult scenarios such as large numbers of tasks or clients, and training different complex ViT models.
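The abstract does not spell out how the gradient integrator combines gradients. A common mechanism for this in the continual-learning literature is GEM-style gradient projection: if the new task's gradient conflicts with a stored (restored) gradient of a signature task, the conflicting component is projected away. The sketch below is an illustrative assumption, not FedViT's exact algorithm; the function name `integrate_gradient` and the flattened-list gradient representation are hypothetical simplifications.

```python
# Illustrative sketch (assumed GEM-style projection, NOT the paper's exact
# algorithm): resolve a conflict between the current task's gradient and a
# signature task's restored gradient before applying the update.

def dot(a, b):
    """Inner product of two flattened gradient vectors."""
    return sum(x * y for x, y in zip(a, b))

def integrate_gradient(g_new, g_signature):
    """Return g_new, projected (if needed) so it no longer has a
    negative component along the signature task's gradient."""
    conflict = dot(g_new, g_signature)
    if conflict >= 0:          # no interference: use the gradient as-is
        return list(g_new)
    # remove the conflicting component along g_signature
    scale = conflict / dot(g_signature, g_signature)
    return [g - scale * s for g, s in zip(g_new, g_signature)]

# Conflicting gradients: the second component is flipped to zero,
# so the update no longer hurts the signature task.
g = integrate_gradient([1.0, -1.0], [0.0, 1.0])
assert dot(g, [0.0, 1.0]) >= 0
```

After projection, the integrated gradient is guaranteed not to increase the loss (to first order) on the signature task, which is one standard way to prevent catastrophic forgetting while still making progress on the new task.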

Original language: English
Pages (from-to): 1-15
Number of pages: 15
Journal: Future Generation Computer Systems
Volume: 154
DOIs
Publication status: Published - 2024

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Funding

This paper was supported by the National Key R&D Program of China (No. 2021YFB3301503), and the National Natural Science Foundation of China (Grant Nos. 62272046, 62132019, 61872337).

Keywords

  • Catastrophic forgetting
  • Continual learning
  • Edge computing
  • Federated learning
  • Negative knowledge transfer
  • Vision transformer
