Snowball: Energy Efficient and Accurate Federated Learning With Coarse-to-Fine Compression Over Heterogeneous Wireless Edge Devices

Abstract:

Model update compression is a widely used technique to alleviate the communication cost of federated learning (FL). However, there is evidence that compression-based FL systems often suffer from two issues: i) implicit degradation of the global model's learning performance caused by inaccurate updates, and ii) the limitation of imposing a single shared compression rate on heterogeneous edge devices. In this paper, we propose an energy-efficient learning framework, named Snowball, that enables edge devices to incrementally upload their model updates in a coarse-to-fine compression manner. To this end, we first design a fine-grained compression scheme that enables a nearly continuous compression rate. We then formulate the Snowball optimization problem, which minimizes the energy consumption of parameter transmission subject to learning-performance constraints. Leveraging theoretical insights from the convergence analysis, we transform the optimization problem into a tractable form. A water-filling algorithm is then designed to solve it, assigning each device a personalized compression rate according to the status of its locally available resources. Experiments indicate that, compared to state-of-the-art FL algorithms, our learning framework reduces the uplink communication energy required to reach a good global accuracy by a factor of five.
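The water-filling idea in the abstract can be illustrated with a toy model (this is a hedged sketch, not the paper's actual formulation): suppose device i's transmission energy grows as c_i * r_i^2 in its compression rate r_i (the fraction of parameters it sends), and the learning-performance constraint requires a total rate budget across devices. The KKT conditions then equalize marginal energy costs at a common "water level", which can be found by bisection.

```python
import numpy as np

def waterfill_rates(costs, total_rate, iters=100):
    """Toy water-filling: pick per-device rates r_i in [0, 1] minimizing
    sum_i c_i * r_i**2 subject to sum_i r_i = total_rate.

    KKT gives r_i = clip(level / (2 * c_i), 0, 1) for a common water
    level, found here by bisection. The quadratic energy model and this
    function are illustrative stand-ins for the paper's actual scheme.
    """
    costs = np.asarray(costs, dtype=float)
    # bracket the level: at hi, every r_i is clipped to 1
    lo, hi = 0.0, 2.0 * costs.max() * len(costs)
    for _ in range(iters):
        level = 0.5 * (lo + hi)
        r = np.clip(level / (2.0 * costs), 0.0, 1.0)
        if r.sum() < total_rate:
            lo = level  # total rate too small: raise the water level
        else:
            hi = level
    return np.clip(0.5 * (lo + hi) / (2.0 * costs), 0.0, 1.0)
```

Devices with cheaper transmission (small c_i) receive larger rates, i.e., less aggressive compression, which matches the personalized-rate behavior the abstract describes.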
Published in: IEEE Transactions on Wireless Communications ( Volume: 22, Issue: 10, October 2023)
Page(s): 6778 - 6792
Date of Publication: 23 February 2023


I. Introduction

According to Cisco’s forecast, 500 billion devices will be connected to the Internet by 2030 [1]. Equipped with versatile sensors, these devices generate massive amounts of data at the network edge, opening new horizons for data-driven learning methods. Federated learning (FL) is an emerging distributed paradigm that enables multiple edge devices to train a global model without sharing their local training data [2]. FL-empowered mobile edge computing is recognized as a promising approach to realizing ubiquitous intelligence [3]. In many real-world scenarios, however, mobile devices are strictly constrained by computing capability, channel conditions, and battery lifetime [4]. To improve resource utilization, many researchers have proposed compressing the local model update before uploading it to the parameter server.
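To make the compression step concrete, a common baseline (used here only as an illustration; it is not the paper's coarse-to-fine scheme) is top-k sparsification, where a device transmits only the k largest-magnitude entries of its model update together with their indices. The function names below are hypothetical.

```python
import numpy as np

def topk_sparsify(update, ratio):
    """Keep the top `ratio` fraction of entries of a model update by
    magnitude. Returns (indices, values) over the flattened update, the
    pair a device would transmit instead of the dense tensor; `ratio`
    plays the role of the compression rate.
    """
    flat = update.ravel()
    k = max(1, int(round(ratio * flat.size)))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # k largest magnitudes
    return idx, flat[idx]

def densify(idx, vals, shape):
    """Server-side reconstruction: scatter the received values back into
    a zero tensor of the original shape."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = vals
    return out.reshape(shape)
```

With ratio = 0.01, only 1% of the coordinates (plus their indices) cross the uplink, at the price of an inaccurate update, which is exactly the accuracy/energy tension the paper targets.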

References
[1] K. Mayuram, "Cisco and SAS edge-to-enterprise IoT analytics platform", 2017, [online] Available: https://www.cisco.com/c/dam/global/fr_fr/solutions/data-center-virtualization/big-data/solution-cisco-sas-edge-to-entreprise-iot.pdf.
[2] H. B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. Y. Arcas, "Communication-efficient learning of deep networks from decentralized data", Proc. AISTATS, pp. 1273-1282, Apr. 2017.
[3] R. Yu and P. Li, "Toward resource-efficient federated learning in mobile edge computing", IEEE Netw., vol. 35, no. 1, pp. 148-155, Jan./Feb. 2021.
[4] J. Huang, J. Cui, C.-C. Xing and H. Gharavi, "Energy-efficient SWIPT-empowered D2D mode selection", IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 3903-3915, Apr. 2020.
[5] F. Sattler, S. Wiedemann, K.-R. Müller and W. Samek, "Robust and communication-efficient federated learning from non-IID data", IEEE Trans. Neural Netw. Learn. Syst., vol. 31, no. 9, pp. 3400-3413, Sep. 2020.
[6] M. M. Amiri and D. Gündüz, "Federated learning over wireless fading channels", IEEE Trans. Wireless Commun., vol. 19, no. 5, pp. 3546-3557, May 2020.
[7] M. M. Amiri, D. Gunduz, S. R. Kulkarni and H. V. Poor, "Convergence of update aware device scheduling for federated learning at the wireless edge", IEEE Trans. Wireless Commun., vol. 20, no. 6, pp. 3643-3658, Jun. 2021.
[8] S. Chen, C. Shen, L. Zhang and Y. Tang, "Dynamic aggregation for heterogeneous quantization in federated learning", IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6804-6819, Oct. 2021.
[9] L. Li, D. Shi, R. Hou, H. Li, M. Pan and Z. Han, "To talk or to work: Flexible communication compression for energy efficient federated learning over heterogeneous mobile edge devices", Proc. IEEE INFOCOM Conf. Comput. Commun., pp. 1-10, May 2021.
[10] H. Xiao, J. Zhao, Q. Pei, J. Feng, L. Liu and W. Shi, "Vehicle selection and resource optimization for federated learning in vehicular edge computing", IEEE Trans. Intell. Transp. Syst., vol. 23, no. 8, pp. 11073-11087, Aug. 2022.
[11] Y. Wu, Y. Song, T. Wang, L. Qian and T. Q. S. Quek, "Non-orthogonal multiple access assisted federated learning via wireless power transfer: A cost-efficient approach", IEEE Trans. Commun., vol. 70, no. 4, pp. 2853-2869, Apr. 2022.
[12] J. Yao and N. Ansari, "Enhancing federated learning in fog-aided IoT by CPU frequency and wireless power control", IEEE Internet Things J., vol. 8, no. 5, pp. 3438-3445, Mar. 2021.
[13] N. H. Tran, W. Bao, A. Zomaya, M. N. H. Nguyen and C. S. Hong, "Federated learning over wireless networks: Optimization model design and analysis", Proc. IEEE INFOCOM Conf. Comput. Commun., pp. 1387-1395, Apr. 2019.
[14] Y. Zhan, P. Li, Z. Qu, D. Zeng and S. Guo, "A learning-based incentive mechanism for federated learning", IEEE Internet Things J., vol. 7, no. 7, pp. 6360-6368, Jul. 2020.
[15] Y. Li et al., "Energy-constrained D2D assisted federated learning in edge computing", Proc. Int. Conf. Model. Anal. Simul. Wireless Mobile Syst., pp. 33-37, Oct. 2022.
[16] S. Chen, D. Yu, Y. Zou, J. Yu and X. Cheng, "Decentralized wireless federated learning with differential privacy", IEEE Trans. Ind. Informat., vol. 18, no. 9, pp. 6273-6282, Sep. 2022.
[17] Z. Yang, M. Chen, W. Saad, C. S. Hong and M. Shikh-Bahaei, "Energy efficient federated learning over wireless communication networks", IEEE Trans. Wireless Commun., vol. 20, no. 3, pp. 1935-1949, Mar. 2021.
[18] V.-D. Nguyen, S. K. Sharma, T. X. Vu, S. Chatzinotas and B. Ottersten, "Efficient federated learning algorithm for resource allocation in wireless IoT networks", IEEE Internet Things J., vol. 8, no. 5, pp. 3394-3409, Mar. 2021.
[19] W. Shi, S. Zhou, Z. Niu, M. Jiang and L. Geng, "Joint device scheduling and resource allocation for latency constrained wireless federated learning", IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 453-467, Jan. 2021.
[20] J. Xu and H. Wang, "Client selection and bandwidth allocation in wireless federated learning networks: A long-term perspective", IEEE Trans. Wireless Commun., vol. 20, no. 2, pp. 1188-1200, Feb. 2021.
[21] T. Zhang and S. Mao, "Energy-efficient federated learning with intelligent reflecting surface", IEEE Trans. Green Commun. Netw., vol. 6, no. 2, pp. 845-858, Jun. 2022.
[22] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh and D. Bacon, "Federated learning: Strategies for improving communication efficiency", arXiv:1610.05492, 2016.
[23] D. Alistarh, D. Grubic, J. Li, R. Tomioka and M. Vojnovic, "QSGD: Communication-efficient SGD via gradient quantization and encoding", Proc. NIPS, pp. 1-12, 2017.
[24] L. Cui, X. Su, Y. Zhou and J. Liu, "Optimal rate adaption in federated learning with compressed communications", Proc. IEEE INFOCOM Conf. Comput. Commun., pp. 1459-1468, May 2022.
[25] D. Shi, L. Li, R. Chen, P. Prakash, M. Pan and Y. Fang, "Towards energy efficient federated learning over 5G+ mobile devices", IEEE Wireless Commun., vol. 29, no. 5, pp. 44-51, Oct. 2022.
[26] R. Jin, X. He and H. Dai, "Communication efficient federated learning with energy awareness over wireless networks", IEEE Trans. Wireless Commun., vol. 21, no. 7, pp. 5204-5219, Jul. 2022.
[27] N. Shlezinger, M. Chen, Y. C. Eldar, H. V. Poor and S. Cui, "UVeQFed: Universal vector quantization for federated learning", IEEE Trans. Signal Process., vol. 69, pp. 500-514, 2021.
[28] P. Li, X. Huang, M. Pan and R. Yu, "FedGreen: Federated learning with fine-grained gradient compression for green mobile edge computing", Proc. IEEE Global Commun. Conf. (GLOBECOM), pp. 1-6, Dec. 2021.
[29] S. Han, H. Mao and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding", Proc. ICLR, pp. 1-14, 2016.
[30] S. Golomb, "Run-length encodings (corresp.)", IEEE Trans. Inf. Theory, vol. IT-12, no. 3, pp. 399-401, Jul. 1966.