
DegaFL: Decentralized Gradient Aggregation for Cross-Silo Federated Learning



Abstract:

Federated learning (FL) is an emerging and promising paradigm for privacy-preserving machine learning (ML). An important type of FL is cross-silo FL, which enables a moderate number of organizations to cooperatively train a shared model by keeping confidential data locally and aggregating gradients on a central parameter server. In practice, however, the central server may be vulnerable to malicious attacks or software failures. To address this issue, we propose DegaFL, a novel decentralized gradient aggregation approach for cross-silo FL. DegaFL eliminates the central server by aggregating gradients on each participant, and maintains and synchronizes the gradients of only the current training round. In addition, we propose AdaAgg to adaptively aggregate correct gradients from honest nodes, and we use HotStuff to keep the training round number and gradients consistent across all nodes. Experimental results show that DegaFL defends against common threat models with negligible accuracy loss, and achieves up to 50× lower storage overhead and up to 13× lower network overhead than state-of-the-art decentralized FL approaches.
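AdaAgg itself is defined in the body of the paper. As a rough, non-authoritative illustration of the family of Byzantine-robust aggregation rules it belongs to, a coordinate-wise median already bounds the influence of a minority of malicious gradients; all names below are illustrative and not taken from the paper:

```python
from statistics import median

def robust_aggregate(gradients):
    """Coordinate-wise median across participants' gradients.

    A simple Byzantine-robust rule: with n participants and fewer than
    n/2 malicious ones, each coordinate's median lies within the range
    spanned by honest values. Illustrative only; DegaFL's AdaAgg is
    adaptive and differs in detail.
    """
    return [median(coord) for coord in zip(*gradients)]

# three honest participants plus one attacker submitting an outlier
honest = [[0.9, -0.2], [1.0, -0.1], [1.1, -0.3]]
poisoned = [[100.0, 50.0]]
agg = robust_aggregate(honest + poisoned)
# the median suppresses the outlier in every coordinate
```

A plain mean over the same four gradients would be dragged far from the honest updates by the single poisoned vector, which is why decentralized schemes without a trusted aggregator typically use a robust rule rather than simple averaging.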
Published in: IEEE Transactions on Parallel and Distributed Systems ( Volume: 36, Issue: 2, February 2025)
Page(s): 212 - 225
Date of Publication: 18 November 2024


I. Introduction

In many real-world scenarios, such as e-commerce, clinical services, social networks, and the Internet of Things (IoT), data are distributed among devices or organizations, and it is common practice to gather these data and train a global model for intelligent services such as recommendation and anomaly detection. This practice, however, raises concerns about data ownership, privacy, security, and monopolies. Federated Learning (FL) [1] mitigates some of these concerns by training a global model without collecting confidential data from the participating nodes. Cross-silo FL [2], [3] is an important type of FL in which a moderate number of organizations collectively train a model on a central parameter server provided by a third party. All participants assume the central server to be trusted and reliable. This assumption, however, may not hold in practice. For example, a malicious central server could poison the model [4], [5], [6], [7] or skew it by favoring particular participants [8], [9]. Moreover, crashes of the central server can reduce accuracy, prolong convergence, or even abort the training procedure altogether.
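In a standard cross-silo round, each organization computes an update on its local data and the central parameter server combines the updates by a weighted average, as in FedAvg [1]. A minimal sketch of that server-side step (function and variable names are illustrative, not from the paper):

```python
def fedavg(updates, weights):
    """Weighted average of client updates, as in FedAvg [1].

    `updates` is one parameter vector per client; `weights` holds the
    clients' local sample counts, so clients with more data contribute
    proportionally more to the global update.
    """
    total = sum(weights)
    dim = len(updates[0])
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(dim)]

# two clients holding 100 and 300 samples respectively
global_update = fedavg([[1.0, 2.0], [3.0, 4.0]], [100, 300])
# → [2.5, 3.5]
```

It is exactly this single aggregation point that DegaFL removes: every participant performs the aggregation locally, so no third-party server has to be trusted or kept available.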

References
[1] B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. Y. Arcas, "Communication-efficient learning of deep networks from decentralized data", Proc. 20th Int. Conf. Artif. Intell. Statist., Fort Lauderdale, FL, USA, pp. 1273-1282, 2017.
[2] C. Zhang, S. Li, J. Xia, W. Wang, F. Yan and Y. Liu, "BatchCrypt: Efficient homomorphic encryption for cross-silo federated learning", Proc. 2020 USENIX Annu. Tech. Conf., pp. 493-506, 2020.
[3] O. Marfoq, C. Xu, G. Neglia and R. Vidal, "Throughput-optimal topology design for cross-silo federated learning", Proc. Adv. Neural Inf. Process. Syst., pp. 19478-19487, 2020.
[4] C. Fung, C. J. M. Yoon and I. Beschastnikh, "The limitations of federated learning in sybil settings", Proc. 23rd Int. Symp. Res. Attacks Intrusions Defenses, pp. 301-316, 2020.
[5] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin and V. Shmatikov, "How to backdoor federated learning", Proc. 23rd Int. Conf. Artif. Intell. Statist., pp. 2938-2948, 2020.
[6] C. Xie, K. Huang, P. Chen and B. Li, "DBA: Distributed backdoor attacks against federated learning", Proc. 8th Int. Conf. Learn. Representations, pp. 1-19, 2020.
[7] C. Xie, S. Koyejo and I. Gupta, "Zeno: Distributed stochastic gradient descent with suspicion-based fault-tolerance", Proc. 36th Int. Conf. Mach. Learn., pp. 6893-6901, 2019.
[8] L. Lyu, X. Xu, Q. Wang and H. Yu, "Collaborative fairness in federated learning" in Federated Learning - Privacy and Incentive, Berlin, Germany: Springer, vol. 12500, pp. 189-204, 2020.
[9] H. Yu et al., "A fairness-aware incentive scheme for federated learning", Proc. AAAI/ACM Conf. AI Ethics Soc., pp. 393-399, 2020.
[10] S. Warnat-Herresthal et al., "Swarm learning for decentralized and confidential clinical machine learning", Nature, vol. 594, no. 7862, pp. 265-270, 2021.
[11] J. Li et al., "Blockchain assisted decentralized federated learning (BLADE-FL): Performance analysis and resource allocation", IEEE Trans. Parallel Distrib. Syst., vol. 33, no. 10, pp. 2401-2415, Oct. 2022.
[12] Y. Li, C. Chen, N. Liu, H. Huang, Z. Zheng and Q. Yan, "A blockchain-based decentralized federated learning framework with committee consensus", IEEE Netw., vol. 35, no. 1, pp. 234-241, Jan./Feb. 2021.
[13] J. Han, Y. Ma and Y. Han, "Demystifying swarm learning: A new paradigm of blockchain-based decentralized federated learning", 2022.
[14] M. Shayan, C. Fung, C. J. M. Yoon and I. Beschastnikh, "Biscotti: A blockchain system for private and secure federated learning", IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 7, pp. 1513-1525, Jul. 2021.
[15] P. Ramanan and K. Nakayama, "BAFFLE: Blockchain based aggregator free federated learning", Proc. IEEE Int. Conf. Blockchain, pp. 72-81, 2020.
[16] X. Bao, C. Su, Y. Xiong, W. Huang and Y. Hu, "FLChain: A blockchain for auditable federated learning with trust and incentive", Proc. 5th Int. Conf. Big Data Comput. Commun., pp. 151-159, 2019.
[17] S. R. Pokhrel and J. Choi, "A decentralized federated learning approach for connected autonomous vehicles", Proc. 2020 IEEE Wirel. Commun. Netw. Conf. Workshops, pp. 1-6, 2020.
[18] Y. Lu, X. Huang, K. Zhang, S. Maharjan and Y. Zhang, "Blockchain and federated learning for 5G beyond", IEEE Netw., vol. 35, no. 1, pp. 219-225, Jan./Feb. 2021.
[19] K. Toyoda, J. Zhao, A. N. Zhang and P. T. Mathiopoulos, "Blockchain-enabled federated learning with mechanism design", IEEE Access, vol. 8, pp. 219744-219756, 2020.
[20] J. D. Harris and B. Waggoner, "Decentralized and collaborative AI on blockchain", Proc. IEEE Int. Conf. Blockchain, pp. 368-375, 2019.
[21] P. Blanchard, E. M. E. Mhamdi, R. Guerraoui and J. Stainer, "Machine learning with adversaries: Byzantine tolerant gradient descent", Proc. 30th Annu. Conf. Neural Inf. Process. Syst., pp. 119-129, 2017.
[22] X. Cao, M. Fang, J. Liu and N. Z. Gong, "FLTrust: Byzantine-robust federated learning via trust bootstrapping", Proc. 28th Annu. Netw. Distrib. Syst. Secur. Symp., pp. 1-18, 2021.
[23] M. Yin, D. Malkhi, M. K. Reiter, G. Golan-Gueta and I. Abraham, "HotStuff: BFT consensus with linearity and responsiveness", Proc. 2019 ACM Symp. Princ. Distrib. Comput., pp. 347-356, 2019.
[24] K. A. Bonawitz et al., "Towards federated learning at scale: System design", Proc. Mach. Learn. Syst., pp. 374-388, 2019.
[25] Q. Yang, Y. Liu, T. Chen and Y. Tong, "Federated machine learning: Concept and applications", ACM Trans. Intell. Syst. Technol., vol. 10, no. 2, pp. 12:1-12:19, 2019.
[26] Y. Jin, X. Wei, Y. Liu and Q. Yang, "Towards utilizing unlabeled data in federated learning: A survey and prospective", 2020.
[27] X. Wu, X. Yao and C. Wang, "FedSCR: Structure-based communication reduction for federated learning", IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 7, pp. 1565-1577, Jul. 2021.
[28] W. Liu, L. Chen, Y. Chen and W. Zhang, "Accelerating federated learning via momentum gradient descent", IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 8, pp. 1754-1766, Aug. 2020.
[29] L. Lyu et al., "Towards fair and privacy-preserving federated deep models", IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 11, pp. 2524-2541, Nov. 2020.
[30] C. Wang, Y. Yang and P. Zhou, "Towards efficient scheduling of federated mobile devices under computational and statistical heterogeneity", IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 2, pp. 394-410, Feb. 2021.
