Abstract:
In Differentially Private Federated Learning (DP-FL), gradient clipping bounds the norm of each gradient, preventing excessive noise from being added and keeping the impact of noise within a controllable range. However, state-of-the-art methods adopt fixed or imprecise clipping thresholds that do not adapt to changes in the gradients, which can significantly degrade the accuracy of the trained global model. To this end, we propose Differential Privacy Federated Adaptive gradient Clipping based on gradient Norm (DP-FedACN). DP-FedACN computes the decay rate of the clipping threshold from the overall trend of the gradient norm, and then precisely adjusts the threshold in each training round according to the actual changes in gradient norm, clipping loss, and decay rate. Experimental results demonstrate that DP-FedACN maintains privacy protection comparable to DP-FedAvg under membership inference attacks and model inversion attacks, and significantly outperforms DP-FedAGNC and DP-FedDDC on privacy protection metrics. Additionally, the test accuracy of DP-FedACN is approximately 2.61%, 1.01%, and 1.03% higher than that of the three baseline methods, respectively. DP-FedACN thus improves global model accuracy while preserving the model's privacy protection. All experimental results demonstrate that the proposed DP-FedACN can help find a fine-grained privacy-accuracy trade-off in DP-FL.
Published in: IEEE Transactions on Network Science and Engineering (Early Access)
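To make the mechanism described in the abstract concrete, the following is a minimal sketch of one simulated DP-FL round with per-round threshold adaptation. It is not DP-FedACN itself: the paper's exact decay and adjustment formulas are not given in the abstract, so the `adapt_threshold` rule and its `relax` heuristic below are hypothetical stand-ins that merely combine the three signals the abstract names (gradient norm, clipping loss, decay rate).

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, threshold):
    """Standard L2 clipping used in DP-FL: scale the update so its norm
    does not exceed `threshold`; the DP noise scale is tied to this bound."""
    norm = np.linalg.norm(update)
    return update * min(1.0, threshold / (norm + 1e-12)), norm

def adapt_threshold(threshold, norms, decay_rate):
    """Hypothetical per-round rule in the spirit of DP-FedACN: shrink the
    threshold by `decay_rate`, but relax the decay when the clipping loss
    (how far client norms exceed the threshold) signals over-clipping.
    The paper's actual formula is not stated in the abstract."""
    clip_loss = np.mean(np.maximum(np.asarray(norms) - threshold, 0.0))
    relax = clip_loss / (clip_loss + threshold)  # in [0, 1): grows with over-clipping
    return threshold * (1.0 - decay_rate * (1.0 - relax))

# One simulated round: 8 clients, 10-dim updates, Gaussian noise on the average.
threshold, decay_rate, noise_mult = 1.0, 0.05, 1.1
updates = [rng.normal(scale=0.3, size=10) for _ in range(8)]
clipped, norms = zip(*(clip(u, threshold) for u in updates))
aggregate = np.mean(clipped, axis=0)
# Gaussian mechanism: the mean of clipped updates has sensitivity threshold / n.
aggregate += rng.normal(scale=noise_mult * threshold / len(updates), size=10)
threshold = adapt_threshold(threshold, norms, decay_rate)
print(f"adapted threshold for next round: {threshold:.4f}")
```

The design intuition matches the abstract: a shrinking threshold lets the added noise shrink as training converges, while the clipping-loss feedback keeps the threshold from decaying faster than the observed gradient norms actually fall.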