
Snowball: Energy Efficient and Accurate Federated Learning With Coarse-to-Fine Compression Over Heterogeneous Wireless Edge Devices


Abstract:

Model update compression is a widely used technique for alleviating the communication cost of federated learning (FL). However, there is evidence that compression-based FL systems often suffer from two issues: i) implicit degradation of the global model's learning performance due to inaccurate updates, and ii) the inflexibility of imposing a single shared compression rate on heterogeneous edge devices. In this paper, we propose an energy-efficient learning framework, named Snowball, that enables edge devices to incrementally upload their model updates in a coarse-to-fine compression manner. To this end, we first design a fine-grained compression scheme that supports a nearly continuous range of compression rates. We then formulate the Snowball optimization problem, which minimizes the energy consumption of parameter transmission under learning performance constraints. Leveraging theoretical insights from the convergence analysis, we transform the optimization problem into a tractable form. Following that, a water-filling algorithm is designed to solve the problem, where each device is assigned a personalized compression rate according to its locally available resources. Experiments indicate that, compared to state-of-the-art FL algorithms, our learning framework can reduce the uplink communication energy required to reach a good global accuracy by a factor of five.
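The water-filling assignment of personalized compression rates can be sketched as follows. The energy and error models here are illustrative assumptions (linear energy in the rate, error inversely proportional to the rate), not the paper's exact formulation; the closed form r_i = sqrt(λ·c_i/e_i) and the bisection on the water level λ are what give the solution its water-filling character.

```python
import math

def waterfill_rates(energy_per_bit, error_coeff, error_budget, tol=1e-9):
    """Assign each device i a personalized compression rate r_i in (0, 1].

    Assumed (illustrative) per-device models:
      transmit energy:    E_i = e_i * r_i        (linear in the kept fraction)
      compression error:  eps_i = c_i / r_i      (shrinks as more is kept)

    Minimizing total energy subject to sum(eps_i) <= error_budget yields
    r_i = sqrt(lambda * c_i / e_i), clipped to 1; the water level lambda
    is found by bisection on the error constraint.
    """
    # Feasibility: even at r_i = 1 the error cannot drop below sum(c_i).
    assert error_budget >= sum(error_coeff), "error budget infeasible"

    def total_error(lam):
        return sum(c / min(1.0, math.sqrt(lam * c / e))
                   for e, c in zip(energy_per_bit, error_coeff))

    # Bisection on the water level: total error decreases as lambda grows.
    lo, hi = 1e-12, 1.0
    while total_error(hi) > error_budget:
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if total_error(mid) > error_budget:
            lo = mid
        else:
            hi = mid
    lam = hi
    return [min(1.0, math.sqrt(lam * c / e))
            for e, c in zip(energy_per_bit, error_coeff)]
```

As expected of a water-filling solution, devices with cheaper transmission (small e_i) receive higher rates, i.e., compress less, while energy-constrained devices compress more aggressively.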
Published in: IEEE Transactions on Wireless Communications ( Volume: 22, Issue: 10, October 2023)
Page(s): 6778 - 6792
Date of Publication: 23 February 2023


I. Introduction

According to Cisco’s forecast, 500 billion devices will be connected to the Internet by 2030 [1]. Equipped with versatile sensors, these devices generate massive amounts of data at the network edge, opening up new horizons for data-driven learning methods. Federated learning (FL) is an emerging distributed paradigm that enables multiple edge devices to train a global model without sharing their local training data [2]. FL-empowered mobile edge computing systems are recognized as a promising solution for realizing ubiquitous intelligence [3]. In many real-world scenarios, however, mobile devices are strictly constrained by computing capability, channel conditions, and battery lifetime [4]. To improve the efficiency of resource utilization, many researchers have proposed compressing the local model update before uploading it to the parameter server.
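Such update compression is commonly realized with sparsification schemes. As a point of reference, the following is a minimal sketch of generic top-k sparsification, in which only the largest-magnitude entries of the update are transmitted; it is shown purely to illustrate the idea of update compression and is not Snowball's coarse-to-fine scheme.

```python
import numpy as np

def topk_compress(update, rate):
    """Keep only the largest-magnitude fraction `rate` of a flattened
    model update; the remaining entries are dropped (implicitly zero).
    Generic top-k sparsification for illustration only."""
    k = max(1, int(rate * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]  # indices of top-k magnitudes
    return idx, update[idx]

def topk_decompress(idx, values, size):
    """Rebuild a dense update from the transmitted (index, value) pairs."""
    dense = np.zeros(size, dtype=values.dtype)
    dense[idx] = values
    return dense
```

With rate = 0.5, a device transmits half the entries (plus their indices), trading update accuracy for communication energy; the tension between this loss of accuracy and a one-size-fits-all rate is exactly the pair of issues the abstract identifies.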

