
Joint Client Selection and Resource Allocation for Federated Learning in Mobile Edge Networks


Abstract:

Federated Learning (FL) has received widespread attention in 5G mobile edge networks (MENs) due to its ability to facilitate collaborative learning of machine learning models without revealing users' private data. However, FL training is both time- and energy-consuming. Constrained by the instability and limited resources of clients in MENs, it is challenging to optimize both the learning time and the energy consumption of FL. This paper studies the problem of client selection and resource allocation to minimize the energy consumption and learning time of multiple FL jobs competing for resources. Because minimizing learning time and minimizing energy consumption are conflicting objectives, we design a decoupling algorithm that optimizes them separately and efficiently. Simulations based on popular models and learning datasets show the effectiveness of our approach, which reduces energy consumption by up to 75.7% and learning time by up to 38.5% compared to prior work.
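To make the conflict between the two objectives concrete, consider the standard per-round computation model widely used in FL resource-allocation work (a sketch for intuition only; the symbols $c_k$, $D_k$, $f_k$, and $\kappa$ are illustrative and may not match this paper's notation):

$$t_k^{\mathrm{cmp}} = \frac{c_k D_k}{f_k}, \qquad e_k^{\mathrm{cmp}} = \kappa\, c_k D_k f_k^{2},$$

where $c_k$ is the number of CPU cycles needed per data sample on client $k$, $D_k$ is its local dataset size, $f_k$ is its CPU frequency, and $\kappa$ is the effective capacitance coefficient of its chip. Raising $f_k$ shortens the computation time $t_k^{\mathrm{cmp}}$ but grows the energy $e_k^{\mathrm{cmp}}$ quadratically, so learning time and energy consumption cannot be minimized simultaneously, which is what motivates optimizing them separately.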
Date of Conference: 10-13 April 2022
Date Added to IEEE Xplore: 16 May 2022
Conference Location: Austin, TX, USA

I. Introduction

Federated Learning (FL) has emerged as an attractive learning paradigm for enabling edge intelligence in 5G and beyond while protecting data privacy [1], [2]. We consider the state-of-the-art FL system, Hierarchical FL, in mobile edge networks (MENs), and refer to this system as HFLMEN in this work. As illustrated in Fig. 1, HFLMEN distributes the computing tasks of machine learning (ML) jobs across many mobile user equipments (UEs) and uses a cloud server as the parameter server (PS) to orchestrate the iterative learning process with the help of base stations (BSs). In each global training round, HFLMEN lets clients, i.e., the participating UEs, download the global model parameters or gradients from the cloud server via the BSs, train the model on their local datasets, and upload their model updates to the BSs after a given number of local training rounds. The BSs aggregate the model updates of their associated clients and send the results back to the cloud server for global model synchronization.
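The round structure just described can be summarized in a short sketch. Below is a minimal, illustrative Python rendering of one HFLMEN global round, assuming FedAvg-style weighted averaging at both the BS and cloud levels; the names (Client, local_train, weighted_average, selected_clients) are hypothetical and not taken from the paper.

import numpy as np

class Client:
    """Hypothetical UE stub: holds a local dataset size and a training routine."""
    def __init__(self, num_samples):
        self.num_samples = num_samples

    def local_train(self, model, local_rounds):
        # Placeholder for `local_rounds` rounds of SGD on the UE's private data;
        # a real client would update the parameter arrays from its local dataset.
        return [p.copy() for p in model]

def weighted_average(updates, weights):
    """Average lists of numpy parameter arrays, weighted by dataset sizes."""
    total = float(sum(weights))
    return [sum(w * u[i] for u, w in zip(updates, weights)) / total
            for i in range(len(updates[0]))]

def global_round(global_model, base_stations, local_rounds):
    """One HFLMEN global round: selected clients download the model via their
    BS, train locally, and upload updates; BSs aggregate, the cloud PS syncs."""
    bs_updates, bs_weights = [], []
    for bs in base_stations:
        client_updates, client_weights = [], []
        for client in bs["selected_clients"]:          # only selected UEs train
            local = [p.copy() for p in global_model]   # download via the BS
            local = client.local_train(local, local_rounds)
            client_updates.append(local)               # upload update to the BS
            client_weights.append(client.num_samples)
        # Edge (BS-level) aggregation over the BS's associated clients
        bs_updates.append(weighted_average(client_updates, client_weights))
        bs_weights.append(sum(client_weights))
    # Cloud PS synchronizes the global model from the BS-level aggregates
    return weighted_average(bs_updates, bs_weights)

In the paper's setting, which UEs appear in each BS's selected client set, and how their CPU frequencies and bandwidth are configured for the round, is precisely the joint client selection and resource allocation problem being optimized.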
