Abstract:
Privacy-preserving Federated Learning (FL) based on Differential Privacy (DP) protects clients' data by adding DP noise to per-sample gradients and has emerged as a de facto standard for data privacy in FL. However, the accuracy of global models in DP-based FL may degrade significantly when rogue clients appear who deviate from the prescribed DP-based FL protocol and selfishly inject excessive DP noise, i.e., they apply a smaller privacy budget in the DP mechanism to secure a higher level of privacy for themselves. Existing DP-based FL fails to prevent such attacks because they are imperceptible: under a DP-based FL system with random Gaussian noise, the local model parameters of rogue clients and honest clients follow identical distributions. Although rogue local models exhibit low performance, directly filtering out low-performance local models compromises the generalizability of the global model, since local models trained on scarce data also perform poorly in early epochs. In this paper, we propose ReFL, a novel privacy-preserving FL system that enforces DP while avoiding the global-model accuracy loss caused by the excessive DP noise of rogue clients. Based on the observation that rogue local models with excessive DP noise and honest local models trained on scarce data exhibit different performance patterns over long-term training epochs, we propose a long-term contribution incentive scheme that evaluates clients' reputations and identifies rogue clients. Furthermore, we design a reputation-based aggregation, built on these incentive-derived reputations, that prevents rogue clients' models from damaging global model accuracy. Extensive experiments demonstrate that ReFL achieves global model accuracy 0.77%–81.71% higher than existing DP-based FL methods in the presence of rogue clients.
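The attack the abstract describes can be illustrated with the standard Gaussian mechanism: noise scale is inversely proportional to the privacy budget ε, so a rogue client that quietly shrinks ε injects proportionally larger noise while its update remains statistically indistinguishable from an honest one. The sketch below is illustrative only, assuming the classic analytic Gaussian calibration σ = √(2 ln(1.25/δ)) · S / ε; the function and parameter names are hypothetical and do not reflect ReFL's actual implementation.

```python
import numpy as np

def gaussian_noise_std(sensitivity: float, epsilon: float, delta: float) -> float:
    """Noise std calibrated to (epsilon, delta)-DP via the classic
    analytic bound: sigma = sqrt(2 ln(1.25/delta)) * S / epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def noisy_gradient(grad: np.ndarray, sensitivity: float,
                   epsilon: float, delta: float,
                   rng: np.random.Generator) -> np.ndarray:
    """Clip the gradient to the sensitivity bound, then add Gaussian noise."""
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, sensitivity / norm)
    sigma = gaussian_noise_std(sensitivity, epsilon, delta)
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

rng = np.random.default_rng(0)
grad = rng.normal(size=100)

# Honest client: uses the agreed-upon budget epsilon = 1.0.
honest_update = noisy_gradient(grad, sensitivity=1.0,
                               epsilon=1.0, delta=1e-5, rng=rng)
# Rogue client: selfishly shrinks epsilon to 0.1, i.e. a 10x larger
# noise std -- same Gaussian shape, so the server cannot tell them apart
# from a single round's update.
rogue_update = noisy_gradient(grad, sensitivity=1.0,
                              epsilon=0.1, delta=1e-5, rng=rng)
```

Since both updates are the clipped gradient plus zero-mean Gaussian noise, no single-round distributional test separates them; this is why the paper resorts to long-term performance patterns rather than per-round filtering.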
Published in: IEEE Transactions on Information Forensics and Security (Early Access)