TY - GEN
T1 - An Exploratory Analysis on Users' Contributions in Federated Learning
AU - Huang, Jiyue
AU - Talbi, Rania
AU - Zhao, Zilong
AU - Bouchenak, Sara
AU - Chen, Lydia Y.
AU - Roos, Stefanie
PY - 2020
AB - Federated Learning is an emerging distributed collaborative learning paradigm adopted by many of today's applications, e.g., keyboard prediction and object recognition. Its core principle is to learn from large amounts of user data while preserving privacy by design, as collaborating users only need to share their machine learning models and keep their data local. The main challenge for such systems is to incentivize users to contribute high-quality models trained on their local data. In this paper, we aim to answer how well incentive mechanisms recognize (in)accurate local models from honest and malicious users, and how they perceive the impact of these models on the overall accuracy of federated learning systems. We first present a thorough survey from two contrasting perspectives: incentive mechanisms that measure the contributions of honest users' local models, and malicious users who deliberately degrade the overall model. We conduct simulation experiments to empirically assess whether existing contribution measurement schemes can expose low-quality models from malicious users. Our results show a clear tradeoff among measurement schemes between computational efficiency and effectiveness in isolating the impact of malicious participants. We conclude by discussing research directions for designing resilient contribution incentives.
KW - Adversarial Behavior
KW - Contribution Measurement
KW - Federated Learning
KW - Incentive Mechanisms
UR - http://www.scopus.com/inward/record.url?scp=85100388622&partnerID=8YFLogxK
DO - 10.1109/TPS-ISA50397.2020.00014
M3 - Conference contribution
AN - SCOPUS:85100388622
T3 - Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
SP - 20
EP - 29
BT - Proceedings - 2020 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
PB - IEEE
T2 - 2nd IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications, TPS-ISA 2020
Y2 - 1 December 2020 through 3 December 2020
ER -