
Partially Encrypted Multi-Party Computation for Federated Learning


Abstract:

Multi-party computation (MPC) allows distributed machine learning to be performed in a privacy-preserving manner, so that end-hosts remain unaware of the true models on the clients. However, the standard MPC algorithm also incurs additional communication and computation costs, due to its expensive cryptographic operations and protocols. In this paper, instead of applying heavy MPC over the entire local models for secure model aggregation, we propose to encrypt only the critical part of the model (gradient) parameters to reduce communication cost, while retaining MPC's privacy-preserving advantages without sacrificing the accuracy of the learnt joint model. Theoretical analysis and experimental results verify that our proposed method prevents deep-leakage-from-gradients attacks from reconstructing the original data of individual participants. Experiments using deep learning models on the MNIST and CIFAR-10 datasets empirically demonstrate that our proposed partially encrypted MPC method significantly reduces communication and computation cost compared with conventional MPC, while achieving accuracy as high as traditional distributed learning that aggregates local models in plain text.
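The core idea, secret-sharing only the most sensitive gradient entries while sending the rest in plain text, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the additive secret-sharing scheme, the fixed-point encoding, the magnitude-based selection of "critical" entries, and all parameter choices (`MOD`, `SCALE`, `frac`) are assumptions made for the example.

```python
import numpy as np

MOD = 2**31 - 1   # modulus for additive secret sharing (hypothetical choice)
SCALE = 2**16     # fixed-point scaling factor for encoding float gradients

def additive_shares(vals, n, rng):
    """Split an int64 vector into n additive shares modulo MOD."""
    shares = [rng.integers(0, MOD, size=vals.shape, dtype=np.int64)
              for _ in range(n - 1)]
    shares.append((vals - sum(shares)) % MOD)  # last share completes the sum
    return shares

def critical_indices(grad, frac=0.25):
    """Pick the largest-magnitude fraction of entries as the 'critical' part."""
    k = max(1, int(frac * grad.size))
    return np.argsort(np.abs(grad))[-k:]

rng = np.random.default_rng(0)
n_clients, dim = 3, 8
clients = [rng.normal(size=dim) for _ in range(n_clients)]

# Simplifying assumption: all participants agree on which coordinates are
# critical (here, chosen from the first client's gradient).
idx = critical_indices(clients[0])

sum_plain = np.zeros(dim)
share_sum = np.zeros(len(idx), dtype=np.int64)
for g in clients:
    # Critical entries: fixed-point encode and secret-share (never sent in clear).
    fixed = np.round(g[idx] * SCALE).astype(np.int64) % MOD
    shares = additive_shares(fixed, n_clients, rng)
    share_sum = (share_sum + sum(shares)) % MOD  # aggregator only sees shares
    # Remaining entries travel as plain text, with critical entries withheld.
    plain = g.copy()
    plain[idx] = 0.0
    sum_plain += plain

# Decode the aggregated critical entries back to floats (centre-lift, rescale).
dec = np.where(share_sum > MOD // 2, share_sum - MOD, share_sum) / SCALE
agg = sum_plain.copy()
agg[idx] = dec

true_sum = sum(clients)  # the aggregate no single party could see in full
```

After aggregation, `agg` matches the plaintext sum of all client gradients up to fixed-point rounding error, yet the individual critical entries were only ever exchanged as random-looking shares, which is what blocks gradient-inversion attacks on the most informative coordinates.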
Date of Conference: 10-13 May 2021
Date Added to IEEE Xplore: 02 August 2021
Conference Location: Melbourne, Australia

1. Introduction

Machine learning, especially deep learning, has made significant breakthroughs in many domains of science, business and government, such as manufacturing, transportation, finance, and healthcare [1], [2]. Centralised learning on large-scale datasets has been the main driver of these remarkable successes. With the rise of modern technologies such as edge computing [3] and the Internet of Things [4], [5], machine learning has witnessed a dramatic change in the way it computes. Data in many real-world scenarios are naturally distributed and owned by different organisations and users. Due to competition among organisations, data privacy concerns, and administrative regulations, it is often impossible to transfer data across countries and institutions for centralised learning [2], [6].

References

1. Y. LeCun, Y. Bengio and G. Hinton, "Deep learning", Nature, vol. 521, no. 7553, pp. 436-444, 2015.
2. Q. Yang, Y. Liu, T. Chen and Y. Tong, "Federated machine learning: Concept and applications", ACM Trans. on Intelligent Systems and Technology, vol. 10, no. 2, pp. 1-19, 2019.
3. W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, "Edge computing: Vision and challenges", IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637-646, 2016.
4. K. Ashton et al., "That 'internet of things' thing", RFID Journal, vol. 22, no. 7, pp. 97-114, 2009.
5. L. Atzori, A. Iera and G. Morabito, "The internet of things: A survey", Computer Networks, vol. 54, no. 15, pp. 2787-2805, 2010.
6. L. Cheng, Y. Wang, Q. Liu, D. H. Epema, C. Liu, Y. Mao, et al., "Network-aware locality scheduling for distributed data operators in data centers", IEEE Transactions on Parallel and Distributed Systems, [online] Available: https://ieeexplore.ieee.org/document/9329172.
7. A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, et al., "Federated learning for mobile keyboard prediction", CoRR, 2018, [online] Available: https://arxiv.org/abs/1811.03604.
8. M. Nasr, R. Shokri and A. Houmansadr, "Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning", IEEE Symposium on Security and Privacy, pp. 739-753, 2019.
9. L. Melis, C. Song, E. De Cristofaro and V. Shmatikov, "Exploiting unintended feature leakage in collaborative learning", IEEE Symposium on Security and Privacy, pp. 691-706, 2019.
10. B. Hitaj, G. Ateniese and F. Perez-Cruz, "Deep models under the GAN: information leakage from collaborative deep learning", ACM SIGSAC Conference on Computer and Communications Security, pp. 603-618, 2017.
11. L. Zhu, Z. Liu and S. Han, "Deep leakage from gradients", Advances in Neural Information Processing Systems, pp. 14774-14784, 2019.
12. B. Zhao, K. R. Mopuri and H. Bilen, "iDLG: Improved deep leakage from gradients", CoRR, 2020, [online] Available: https://arxiv.org/pdf/2001.02610.
13. A. C.-C. Yao, "How to generate and exchange secrets", Annual Symposium on Foundations of Computer Science, pp. 162-167, 1986.
14. C. Zhao, S. Zhao, M. Zhao, Z. Chen, C.-Z. Gao, H. Li, et al., "Secure multi-party computation: theory, practice and applications", Information Sciences, vol. 476, pp. 357-372, 2019.
15. R. C. Geyer, T. Klein and M. Nabi, "Differentially private federated learning: A client level perspective", CoRR, 2017, [online] Available: https://arxiv.org/abs/1712.07557.
16. H. B. McMahan, D. Ramage, K. Talwar and L. Zhang, "Learning differentially private recurrent language models", CoRR, 2017, [online] Available: https://arxiv.org/abs/1710.06963.
17. R. L. Rivest, L. Adleman, M. L. Dertouzos et al., "On data banks and privacy homomorphisms", Foundations of Secure Computation, vol. 4, no. 11, pp. 169-180, 1978.
18. R. Canetti, U. Feige, O. Goldreich and M. Naor, "Adaptively secure multi-party computation", ACM Symposium on Theory of Computing, pp. 639-648, 1996.
19. O. Goldreich, "Secure multi-party computation", Manuscript, preliminary version, vol. 78, 1998.
20. W. Du and M. J. Atallah, "Secure multi-party computation problems and their applications: a review and open problems", Workshop on New Security Paradigms, pp. 13-22, 2001.
21. P. Bogetoft, D. L. Christensen, I. Damgård, M. Geisler, T. Jakobsen, M. Krøigaard, J. D. Nielsen, J. B. Nielsen, K. Nielsen, J. Pagter et al., "Secure multiparty computation goes live", International Conference on Financial Cryptography and Data Security, pp. 325-343, 2009.
22. I. Damgård, V. Pastro, N. Smart and S. Zakarias, "Multiparty computation from somewhat homomorphic encryption", Proceedings of the Annual Cryptology Conference, pp. 643-662, 2012.
23. S. Goldwasser, "Multi party computations: past and present", ACM Symposium on Principles of Distributed Computing, pp. 1-6, 1997.
24. A. Shamir, "How to share a secret", Communications of the ACM, vol. 22, no. 11, pp. 612-613, 1979.
25. K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, et al., "Towards federated learning at scale: System design", Proceedings of the Conference on Machine Learning and Systems, 2019.
26. R. Kanagavelu, Z. Li, J. Samsudin, Y. Yang, F. Yang, R. S. M. Goh, et al., "Two-phase multi-party computation enabled privacy-preserving federated learning", IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, 2020.
27. J. Geiping, H. Bauermeister, H. Dröge and M. Moeller, "Inverting gradients – how easy is it to break privacy in federated learning?", CoRR, 2020, [online] Available: https://arxiv.org/abs/2003.14053.
28. M. Rigaki and S. Garcia, "A survey of privacy attacks in machine learning", CoRR, 2020, [online] Available: https://arxiv.org/abs/2007.07646.
29. L. Lyu, H. Yu, J. Zhao and Q. Yang, Threats to Federated Learning, Cham: Springer International Publishing, pp. 3-16, 2020.
30. Y. LeCun, L. Bottou, Y. Bengio, P. Haffner et al., "Gradient-based learning applied to document recognition", Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.