
RPFL: Robust and Privacy Federated Learning against Backdoor and Sample Inference Attacks



Abstract:

Federated learning (FL) offers a solution for mitigating the issue of data silos. However, FL faces threats to both robustness and privacy, which hinder its widespread application. Most existing approaches address only one of these threats or require significant resources to tackle both simultaneously. To meet the requirements of robustness and privacy, we propose a robust and privacy-preserving FL scheme (RPFL) based on random selection and lightweight sharing. Our random selection method effectively invalidates malicious models to protect the integrity of the global model. In addition, we employ multi-party computation (MPC) to enhance privacy. To mitigate the extra communication and computation overhead introduced by MPC, we propose a lightweight sharing scheme. We further adopt compressed sensing and parameter clipping to improve the communication efficiency and robustness of RPFL. We analyze the performance of RPFL in terms of robustness, privacy, and efficiency. Extensive experimental results demonstrate that RPFL effectively improves the robustness and privacy of FL with only a negligible performance penalty.
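The paper's lightweight-sharing protocol itself is not reproduced on this page; as background, the additive secret-sharing primitive underlying MPC-based aggregation (cf. [29]) can be sketched as follows. All names and parameters here are illustrative, not the authors' implementation.

```python
import random

MODULUS = 2**31  # illustrative share space; real protocols pick this per field size


def share(value, n, modulus=MODULUS):
    """Split an integer into n additive shares that sum to value mod modulus.

    Any n-1 shares are uniformly random, so they reveal nothing about value.
    """
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares


def reconstruct(shares, modulus=MODULUS):
    """Recover the secret by summing all shares modulo the share space."""
    return sum(shares) % modulus


# A client splits a (quantized) model parameter into 3 shares; only the sum,
# never an individual share, exposes the original value.
secret = 12345
parts = share(secret, 3)
recovered = reconstruct(parts)
```

Because the sharing is additive, servers can sum the shares of many clients' updates and reconstruct only the aggregate, which is what makes this primitive attractive for private FL aggregation.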
Date of Conference: 17-21 December 2023
Date Added to IEEE Xplore: 26 March 2024
Conference Location: Ocean Flower Island, China


I. Introduction

Large-scale, high-quality datasets have become essential for high-precision learning tasks. However, privacy and competition concerns often prevent data holders from sharing their data, resulting in data silos. Federated learning (FL) [1] addresses data silos by keeping data acquisition and processing local to clients. In FL, only model updates, rather than the original data, are sent to the server, avoiding direct exposure of private data. Nevertheless, FL still faces challenges: adversaries can exploit model updates to infer clients' private data [2], [3], and can attempt to poison the global model [4], [5]. To address these challenges, researchers have proposed various strategies.
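The aggregation step referenced above (FedAvg, [1]) can be sketched minimally: each client sends only a model update, and the server averages the updates weighted by local dataset size. The function and variable names below are illustrative.

```python
def fedavg(updates, sizes):
    """Aggregate client model updates (flat lists of floats), weighted by
    each client's local dataset size, as in FedAvg [1]."""
    total = sum(sizes)
    dim = len(updates[0])
    return [
        sum(u[i] * n for u, n in zip(updates, sizes)) / total
        for i in range(dim)
    ]


# Two clients with dataset sizes 1 and 3: the larger client's update
# dominates the weighted average.
global_update = fedavg([[1.0, 2.0], [3.0, 4.0]], sizes=[1, 3])
```

Note that the server sees the raw updates here; the inference attacks of [2], [3] exploit exactly this visibility, which motivates combining aggregation with MPC as RPFL does.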

References
[1] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, pp. 1273-1282, 2017.
[2] L. Zhu, Z. Liu, and S. Han, "Deep leakage from gradients," Advances in Neural Information Processing Systems, pp. 14747-14756, 2019.
[3] B. Zhao, K. R. Mopuri, and H. Bilen, "iDLG: Improved deep leakage from gradients," arXiv preprint, 2020.
[4] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg, "BadNets: Evaluating backdooring attacks on deep neural networks," IEEE Access, vol. 7, pp. 47230-47244, 2019.
[5] Y. Li, Y. Jiang, Z. Li, and S.-T. Xia, "Backdoor learning: A survey," IEEE Transactions on Neural Networks and Learning Systems, pp. 1-18, 2022.
[6] C. Li, G. Li, and P. K. Varshney, "Communication-efficient federated learning based on compressed sensing," IEEE Internet of Things Journal, vol. 8, no. 20, pp. 15531-15541, 2021.
[7] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, et al., "Federated learning with differential privacy: Algorithms and performance analysis," IEEE Transactions on Information Forensics and Security, vol. 15, pp. 3454-3469, 2020.
[8] K. Pan, M. Gong, K. Feng, and K. Wang, "Differentially private regression analysis with dynamic privacy allocation," Knowledge-Based Systems, vol. 217, p. 106795, 2021.
[9] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, et al., "Practical secure aggregation for privacy-preserving machine learning," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 1175-1191, 2017.
[10] X. Guo, Z. Liu, J. Li, J. Gao, B. Hou, C. Dong, et al., "VeriFL: Communication-efficient and fast verifiable aggregation for federated learning," IEEE Transactions on Information Forensics and Security, vol. 16, pp. 1736-1751, 2020.
[11] G. Xu, H. Li, S. Liu, K. Yang, and X. Lin, "VerifyNet: Secure and verifiable federated learning," IEEE Transactions on Information Forensics and Security, vol. 15, pp. 911-926, 2019.
[12] E. Sotthiwat, L. Zhen, Z. Li, and C. Zhang, "Partially encrypted multi-party computation for federated learning," 2021 IEEE/ACM 21st International Symposium on Cluster, Cloud and Internet Computing (CCGrid), pp. 828-835, 2021.
[13] P. Rieger, T. D. Nguyen, M. Miettinen, and A.-R. Sadeghi, "DeepSight: Mitigating backdoor attacks in federated learning through deep model inspection," Proceedings of the 2022 Network and Distributed System Security Symposium, 2022.
[14] C. Fung, C. J. Yoon, and I. Beschastnikh, "The limitations of federated learning in sybil settings," 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020), pp. 301-316, 2020.
[15] X. Cao, M. Fang, J. Liu, and N. Z. Gong, "FLTrust: Byzantine-robust federated learning via trust bootstrapping," arXiv preprint, 2020.
[16] T. D. Nguyen, P. Rieger, R. De Viti, H. Chen, B. B. Brandenburg, H. Yalame, H. Möllering, H. Fereidooni, S. Marchal, M. Miettinen, et al., "FLAME: Taming backdoors in federated learning," 31st USENIX Security Symposium (USENIX Security 22), pp. 1415-1432, 2022.
[17] J. Gao, B. Zhang, X. Guo, T. Baker, M. Li, and Z. Liu, "Secure partial aggregation: Making federated learning more robust for industry 4.0 applications," IEEE Transactions on Industrial Informatics, vol. 18, no. 9, pp. 6340-6348, 2022.
[18] M. Li, D. Xiao, H. Huang, and B. Zhang, "Multi-level video quality services and security guarantees based on compressive sensing in sensor-cloud system," Journal of Network and Computer Applications, vol. 205, p. 103456, 2022.
[19] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655-4666, 2007.
[20] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing signal reconstruction," IEEE Transactions on Information Theory, vol. 55, no. 5, pp. 2230-2249, 2009.
[21] T. Blumensath and M. E. Davies, "Iterative hard thresholding for compressed sensing," Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 265-274, 2009.
[22] E. J. Candes and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?" IEEE Transactions on Information Theory, vol. 52, no. 12, pp. 5406-5425, 2006.
[23] Y. Oh, N. Lee, Y.-S. Jeon, and H. V. Poor, "Communication-efficient federated learning via quantized compressed sensing," IEEE Transactions on Wireless Communications, vol. 22, no. 2, pp. 1087-1100, 2022.
[24] Y. Wu, S. Cai, X. Xiao, G. Chen, and B. C. Ooi, "Privacy preserving vertical federated learning for tree-based models," Proceedings of the VLDB Endowment, vol. 13, no. 12, pp. 2090-2103, 2020.
[25] X. Wang, S. Ranellucci, and J. Katz, "Global-scale secure multiparty computation," Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pp. 39-56, 2017.
[26] R. Xu, N. Baracaldo, Y. Zhou, A. Anwar, and H. Ludwig, "HybridAlpha: An efficient approach for privacy-preserving federated learning," Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, pp. 13-23, 2019.
[27] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, "How to backdoor federated learning," International Conference on Artificial Intelligence and Statistics, vol. 108, pp. 2938-2948, 2020.
[28] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, "Inverting gradients - how easy is it to break privacy in federated learning?" Advances in Neural Information Processing Systems, vol. 33, pp. 16937-16947, 2020.
[29] M. C. Doganay, T. B. Pedersen, Y. Saygin, E. Savaş, and A. Levi, "Distributed privacy preserving k-means clustering with additive secret sharing," Proceedings of the 2008 International Workshop on Privacy and Anonymity in Information Society, pp. 3-11, 2008.
[30] P. Blanchard, E. M. El Mhamdi, R. Guerraoui, and J. Stainer, "Machine learning with adversaries: Byzantine tolerant gradient descent," Advances in Neural Information Processing Systems, vol. 30, pp. 119-129, 2017.
