Abstract:
Privacy-preserving Federated Learning (FL) based on Differential Privacy (DP) protects clients' data by adding DP noise to samples' gradients and has emerged as a de facto standard for data privacy in FL. However, the accuracy of global models in DP-based FL may be reduced significantly when rogue clients appear who deviate from the prescribed DP-based FL protocol and selfishly inject excessive DP noise, i.e., apply a smaller privacy budget in the DP mechanism than expected in order to secure a higher level of privacy for themselves. Existing DP-based FL fails to prevent such attacks because they are imperceptible: under a DP-based FL system with random Gaussian noise, the local model parameters of rogue clients and honest clients follow identical distributions. Although rogue local models exhibit low performance, directly filtering out low-performance local models compromises the generalizability of the global model, since local models trained on scarce data also perform poorly in early epochs. In this paper, we propose ReFL, a novel privacy-preserving FL system that enforces DP while avoiding the accuracy degradation of global models caused by the excessive DP noise of rogue clients. Based on the observation that rogue local models with excessive DP noise and honest local models trained on scarce data exhibit different performance patterns over long-term training epochs, we propose a long-term contribution incentive scheme that evaluates clients' reputations and identifies rogue clients. Furthermore, we design a reputation-based aggregation that uses these incentive reputations to prevent rogue clients' models from damaging global model accuracy. Extensive experiments demonstrate that ReFL achieves global model accuracy 0.77%-81.71% higher than existing DP-based FL methods in the presence of rogue clients.
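The attack surface the abstract describes can be illustrated with a minimal Python/NumPy sketch, under stated assumptions: the noise scale follows the classic analytic Gaussian-mechanism bound sigma = sqrt(2 ln(1.25/delta)) / epsilon (valid for epsilon <= 1), so a rogue client choosing a smaller epsilon injects proportionally larger noise; the function names and the simple reputation weighting below are hypothetical illustrations, and ReFL's actual long-term reputation update is defined in the paper, not reproduced here.

import numpy as np

def gaussian_noise_multiplier(epsilon, delta):
    # Analytic Gaussian mechanism bound (for epsilon <= 1):
    # sigma >= sqrt(2 * ln(1.25/delta)) / epsilon.
    # A smaller (selfish) epsilon yields a larger sigma, i.e. more noise.
    return np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def dp_local_update(gradient, clip_norm, epsilon, delta, rng):
    # DP-SGD-style local step: clip the gradient to clip_norm,
    # then add Gaussian noise with std sigma * clip_norm.
    norm = max(np.linalg.norm(gradient), 1e-12)
    clipped = gradient * min(1.0, clip_norm / norm)
    sigma = gaussian_noise_multiplier(epsilon, delta) * clip_norm
    return clipped + rng.normal(0.0, sigma, size=gradient.shape)

def reputation_weighted_aggregate(updates, reputations):
    # Hypothetical reputation-based aggregation: weight each client's
    # update by its nonnegative long-term reputation instead of uniformly,
    # so persistently low-contribution (rogue) clients are down-weighted.
    w = np.maximum(np.asarray(reputations, dtype=float), 0.0)
    w = w / w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

rng = np.random.default_rng(0)
honest = dp_local_update(np.ones(4), clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=rng)
rogue = dp_local_update(np.ones(4), clip_norm=1.0, epsilon=0.05, delta=1e-5, rng=rng)  # excessive noise
global_update = reputation_weighted_aggregate([honest, rogue], reputations=[0.9, 0.1])

Note that the honest and rogue updates above are both zero-mean Gaussian perturbations of the clipped gradient, which is why no single-round statistical test can separate them; only the rogue client's persistently poor contribution over many rounds, as captured by a reputation score, distinguishes it from an honest client with scarce data.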
Published in: IEEE Transactions on Information Forensics and Security (Early Access)