Model Level Contrastive Federated Learning with Differential Privacy

IEEE Conference Publication | IEEE Xplore


Abstract:

Federated Learning, as a typical paradigm of collaborative learning, effectively mediates the contradiction between model training and data privacy. However, data heterogeneity and privacy leakage undermine the usability of federated learning, and existing studies address only one of these problems. In this work, we handle both challenges simultaneously: by combining contrastive learning and differential privacy in local training, the federated learning system remains both private and robust. A contrastive loss term pushes the local model towards the global model, and random noise generated by a differential privacy mechanism is added to the model to protect the sensitive information hidden in it. Extensive experimental results demonstrate that our method exceeds the baselines by around 1%–2% in terms of test accuracy.
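The abstract describes two ingredients added to local training: a model-level contrastive term that pulls the local model toward the global model (and away from the stale local model), and Gaussian noise added to the model before upload. The sketch below illustrates both ideas in isolation; it is a minimal, illustrative reading of the abstract, not the paper's actual algorithm, and all function names, the cosine-similarity penalty, and the clipping/noise parameters are our own assumptions.

```python
import numpy as np

def local_objective(local_w, global_w, prev_w, task_loss, mu=1.0, tau=0.5):
    """Task loss plus a model-level contrastive term (MOON-style) that
    pulls the local model toward the current global model and away from
    the previous-round local model, using cosine similarity between
    flattened weight vectors."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    pos = np.exp(cos(local_w, global_w) / tau)  # agreement with global model
    neg = np.exp(cos(local_w, prev_w) / tau)    # agreement with stale local model
    contrastive = -np.log(pos / (pos + neg))    # always > 0
    return task_loss + mu * contrastive

def privatize(weights, clip=1.0, sigma=0.5, rng=np.random.default_rng()):
    """Clip the update's L2 norm and add Gaussian noise before upload,
    the standard Gaussian-mechanism recipe for differential privacy."""
    norm = np.linalg.norm(weights)
    clipped = weights * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=weights.shape)
```

With `mu > 0` the contrastive penalty is strictly positive, so it always biases local optimization toward the global model; `sigma` trades privacy for accuracy, which matches the 1%–2% accuracy discussion above.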
Date of Conference: 18-20 October 2024
Date Added to IEEE Xplore: 21 November 2024
Conference Location: Hangzhou, China


I. Introduction

Federated Learning (FL) [1]–[3] is an emerging paradigm of distributed machine learning that allows different data owners (i.e., clients) to jointly train a global model under the coordination of a central server. In FL, each client uses its own dataset to train a local model with standard machine learning methods and then sends information about that model (i.e., model weights [4], [5] or gradients [1]) to the central server. The server aggregates the received local models into a global model and distributes it to every participant; this process constitutes one global round of FL, and the clients then use the global model as the starting point for local training in the next round. Training stops once a predetermined number of rounds or a target accuracy is reached. FL has achieved great success in many practical scenarios, such as smart healthcare [6], advertisement [7], and autonomous driving [8].
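The global round described above (local training, upload, weighted averaging, redistribution) can be sketched as a plain FedAvg loop [1]. This is a toy illustration on a linear model with squared loss; the function names, learning rate, and the two-client setup are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local training: gradient descent on MSE for a toy
    linear model, starting from the received global weights."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """One global round: every client trains locally, then the server
    averages the returned weights, weighted by client dataset size."""
    sizes = [len(d[1]) for d in client_datasets]
    local_models = [local_update(global_w, d) for d in client_datasets]
    total = sum(sizes)
    return sum(n / total * w for n, w in zip(sizes, local_models))

# Toy run: two clients collaboratively fit y = 2x without sharing data.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(32, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(50):          # repeat global rounds until convergence
    w = fedavg_round(w, clients)
```

After 50 rounds the averaged model recovers the shared slope even though the server never sees raw data, which is exactly the collaboration pattern the introduction describes.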

References
1. B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. Arcas, "Communication-efficient learning of deep networks from decentralized data", Artificial Intelligence and Statistics, PMLR, pp. 1273-1282, 2017.
2. W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y. C. Liang, Q. Yang, et al., "Federated learning in mobile edge networks: A comprehensive survey", IEEE Communications Surveys & Tutorials, vol. 22, no. 3, pp. 2031-2063, 2020.
3. P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, et al., "Advances and open problems in federated learning", Foundations and Trends® in Machine Learning, vol. 14, no. 1–2, pp. 1-210, 2021.
4. Y. J. Cho, A. Manoel, G. Joshi, R. Sim and D. Dimitriadis, "Heterogeneous ensemble knowledge transfer for training large models in federated learning", arXiv preprint, 2022.
5. Q. Li, B. He and D. Song, "Model-contrastive federated learning", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10713-10722, 2021.
6. R. Wang, J. Lai, Z. Zhang, X. Li, P. Vijayakumar and M. Karuppiah, "Privacy-preserving federated learning for internet of medical things under edge computing", IEEE Journal of Biomedical and Health Informatics, vol. 27, no. 2, pp. 854-865, 2022.
7. J. Bian, J. Huang, S. Ji, Y. Liao, X. Li, Q. Wang, et al., "Feynman: Federated learning-based advertising for ecosystems-oriented mobile apps recommendation", IEEE Transactions on Services Computing, vol. 16, no. 5, pp. 3361-3372, 2023.
8. Y. Li, X. Tao, X. Zhang, J. Liu and J. Xu, "Privacy-preserved federated learning for autonomous driving", IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 7, pp. 8423-8434, 2021.
9. H. Liang, Y. Li, C. Zhang, X. Liu and L. Zhu, "Egia: An external gradient inversion attack in federated learning", IEEE Transactions on Information Forensics and Security, 2023.
10. J. Geiping, H. Bauermeister, H. Dröge and M. Moeller, "Inverting gradients - how easy is it to break privacy in federated learning?", Advances in Neural Information Processing Systems, vol. 33, pp. 16937-16947, 2020.
11. S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich and A. T. Suresh, "Scaffold: Stochastic controlled averaging for federated learning", International Conference on Machine Learning, pp. 5132-5143, 2020.
12. C. Dwork and A. Roth, "The algorithmic foundations of differential privacy", Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211-407, 2014.
13. X. Li, K. Huang, W. Yang, S. Wang and Z. Zhang, "On the convergence of fedavg on non-iid data", arXiv preprint, 2019.
14. T. Chen, S. Kornblith, M. Norouzi and G. Hinton, "A simple framework for contrastive learning of visual representations", International Conference on Machine Learning, pp. 1597-1607, 2020.
15. X. Li, M. Jiang, X. Zhang, M. Kamp and Q. Dou, "Fedbn: Federated learning on non-iid features via local batch normalization", arXiv preprint, 2021.
16. Y. Aono, T. Hayashi, L. Wang and S. Moriai, "Privacy-preserving deep learning via additively homomorphic encryption", IEEE Transactions on Information Forensics and Security, vol. 13, no. 5, pp. 1333-1345, 2017.
17. K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, et al., "Practical secure aggregation for privacy-preserving machine learning", Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175-1191, 2017.
18. K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, et al., "Federated learning with differential privacy: Algorithms and performance analysis", IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454-3469, 2020.
