
Model Level Contrastive Federated Learning with Differential Privacy


Abstract:

Federated Learning, as a typical paradigm of collaborative learning, effectively mitigates the tension between model training and data privacy. However, data heterogeneity and privacy leakage undermine the availability of federated learning, and existing studies address only one of these problems. In this work, we handle both challenges simultaneously: by combining contrastive learning and differential privacy in local training, the federated learning system remains safe and robust. A contrastive loss term pushes the local model toward the global model, and random noise generated by differential privacy is added to the model to protect the sensitive information hidden in it. Extensive experimental results demonstrate that our method exceeds the baselines by around 1%–2% in terms of test accuracy.
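The abstract does not spell out the local objective, so the following is only an illustrative sketch: a MOON-style model-contrastive loss (local representation pulled toward the global model's, pushed away from the previous local model's) plus a Gaussian mechanism applied to the model update. The function names, the temperature `tau`, and the clipping/noise parameters are assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Hypothetical MOON-style loss: the global model's representation is
    the positive pair, the previous local model's is the negative pair."""
    pos = np.exp(cos_sim(z_local, z_global) / tau)
    neg = np.exp(cos_sim(z_local, z_prev) / tau)
    return -np.log(pos / (pos + neg))

def privatize_update(weights, clip=1.0, sigma=0.1, rng=None):
    """Gaussian mechanism sketch: clip the update's L2 norm, then add
    zero-mean Gaussian noise scaled by the clipping bound."""
    rng = rng or np.random.default_rng(0)
    clipped = weights / max(1.0, np.linalg.norm(weights) / clip)
    return clipped + rng.normal(0.0, sigma * clip, size=weights.shape)
```

As expected of a contrastive term, the loss is small when the local representation aligns with the global one and large when it aligns with the stale previous-round representation, which is what drives the local model toward the global model during training.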
Date of Conference: 18-20 October 2024
Date Added to IEEE Xplore: 21 November 2024
Conference Location: Hangzhou, China



I. Introduction

Federated Learning (FL) [1]–[3] is an emerging paradigm of distributed machine learning that allows different data owners (i.e., clients) to jointly train a global model under the coordination of a central server. In FL, each client uses its own dataset to train a local model with standard machine learning methods and then sends information about the local model (i.e., model weights [4], [5] or gradients [1]) to the central server, which aggregates the received local models into a global model and distributes it to each participant. This process constitutes one global round; the clients then use the global model as the starting point for local training in the next round. Training stops once a predetermined number of rounds or a target accuracy is reached. FL has achieved great success in many practical scenarios, such as smart healthcare [6], advertisement [7], and autonomous driving [8].
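The global-round protocol described above can be sketched in a few lines. This is a minimal FedAvg-style illustration, not the paper's method: the least-squares local trainer, the learning rate, and the uniform averaging are all stand-in assumptions for whatever local optimizer and aggregation rule the system actually uses.

```python
import numpy as np

def local_train(w_global, X, y, lr=0.1):
    """One local gradient step on a least-squares objective, standing in
    for the client's (unspecified) local training procedure."""
    grad = X.T @ (X @ w_global - y) / len(y)
    return w_global - lr * grad

def global_round(w_global, client_data, lr=0.1):
    """One FL round: every client trains from the current global model,
    and the server averages the returned weights (FedAvg)."""
    local_models = [local_train(w_global, X, y, lr) for X, y in client_data]
    return np.mean(local_models, axis=0)
```

Repeating `global_round` until a round budget or accuracy target is met mirrors the stopping rule described in the text.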

