
AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices



Abstract:

In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design model shrinking to support local model training with elastic computation cost, and gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is then conducted in an element-wise manner to improve the model performance. Building on AnycostFL, we further propose an optimization design that minimizes the global training loss under personalized latency and energy constraints. Guided by theoretical insights from the convergence analysis, personalized training strategies are derived for different devices to match their locally available resources. Experimental results indicate that, compared to state-of-the-art efficient FL algorithms, our learning framework reduces the training latency and energy consumption needed to reach a reasonable global testing accuracy by up to 1.9 times. Moreover, the results demonstrate that our approach significantly improves the converged global accuracy.
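As a rough illustration of two of the mechanisms named in the abstract, the sketch below pairs top-k gradient sparsification (one common compression operator; the paper's actual operator may differ) with an element-wise aggregation that averages each parameter only over the clients that actually transmitted it. All function names and the per-client budgets here are hypothetical, not taken from the paper.

```python
import numpy as np

def compress_gradient(grad, keep_ratio):
    """Top-k sparsification: keep only the largest-magnitude entries.

    keep_ratio stands in for a client's dynamic communication budget;
    the exact compression operator in AnycostFL may differ.
    """
    flat = grad.ravel()
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]  # k largest entries
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

def elementwise_aggregate(updates):
    """Average each parameter only over clients that transmitted it.

    Untransmitted entries are zero, so a plain mean would bias the
    global update toward zero; per-element normalization avoids this.
    """
    stacked = np.stack(updates)                 # shape: (clients, ...)
    counts = np.count_nonzero(stacked, axis=0)  # senders per element
    return stacked.sum(axis=0) / np.maximum(counts, 1)

# Toy round: three clients with different (hypothetical) budgets.
rng = np.random.default_rng(0)
grads = [rng.normal(size=(4, 4)) for _ in range(3)]
updates = [compress_gradient(g, r) for g, r in zip(grads, (1.0, 0.5, 0.25))]
global_update = elementwise_aggregate(updates)
```

This only captures the per-element normalization idea; the paper's aggregation additionally accounts for the structure of the shrunken sub-models.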
Date of Conference: 17-20 May 2023
Date Added to IEEE Xplore: 29 August 2023

Conference Location: New York City, NY, USA



I. Introduction

Federated learning (FL) is an emerging distributed learning paradigm that enables multiple edge devices to train a common global model without sharing individual data [1]. This privacy-friendly technique for data analytics over massive numbers of devices is envisioned as a promising solution for realizing pervasive intelligence [2]. However, in many real-world applications, mobile devices are equipped with different local resources, which raises emerging challenges for on-demand local training [3]. Given each device's local resource status (e.g., computing capability and communication channel state) and personalized efficiency constraints (e.g., latency and energy budgets), it is crucial to customize the training strategy for each heterogeneous edge device.
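To make this resource-matching concrete, here is a minimal sketch of how a client might pick the largest model-shrinking ratio whose estimated per-round latency and energy fit its personal budgets. The cost model (FLOPs scaling quadratically with width, energy as power times latency) and all parameter names are simplifying assumptions for illustration; the paper instead formulates a joint loss-minimization problem, not this greedy rule.

```python
def pick_width_ratio(flops_full, device_flops_per_s, power_w,
                     latency_budget_s, energy_budget_j,
                     candidates=(1.0, 0.75, 0.5, 0.25)):
    """Choose the largest sub-model that fits both budgets.

    Crude cost model: latency = FLOPs / device speed, and
    energy = power * latency. Illustrative only.
    """
    for r in sorted(candidates, reverse=True):
        # Width scaling by r roughly scales conv/dense FLOPs by r**2.
        latency = (flops_full * r ** 2) / device_flops_per_s
        energy = power_w * latency
        if latency <= latency_budget_s and energy <= energy_budget_j:
            return r
    return min(candidates)  # fall back to the smallest sub-model
```

A slower device or a tighter energy budget thus ends up training a narrower sub-model, which matches the paper's goal of customizing per-device training strategies.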
