I. Introduction
Recent years have witnessed phenomenal growth in mobile data, most of which is generated in real time and distributed across edge devices (e.g., smartphones and sensors) [1]. Uploading such massive data to the cloud for training artificial-intelligence (AI) models is impractical due to privacy concerns, network congestion, and latency. To address these issues, the federated edge learning (FEEL) framework has been developed [2]–[4], which implements distributed machine learning at the network edge. In particular, a server updates a global model by aggregating local models (or stochastic gradients) that devices compute using their local datasets and transmit to the server. The local-model aggregation at the server and the broadcast of the updated global model to the devices are iterated until the model converges. Besides preserving data privacy by avoiding data uploading, FEEL leverages distributed computation resources and allows rapid access to the real-time data generated by edge devices.

One research focus in this area is communication-efficient FEEL, where wireless techniques are designed to accelerate learning by reducing communication overhead and latency. However, energy-efficient communication for FEEL has so far remained unexplored. The topic is important because training and transmitting large-scale models are energy consuming, while most edge devices, especially sensors, have limited battery lives. It is investigated in the current work, where novel radio-resource-management (RRM) strategies for joint bandwidth allocation and user scheduling are proposed to minimize the total device energy consumption under a constraint on the learning speed.
[Figure: A framework for the FEEL system.]
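The iterative procedure described above can be sketched as a minimal federated-averaging loop. This is a hedged illustration only, not the paper's RRM scheme: the toy linear-regression task, device count, learning rate, and all variable names (`w_true`, `local_update`, etc.) are assumptions, and plain NumPy stands in for both local training and wireless aggregation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "edge devices": each holds a private local dataset for a
# shared linear model y = x @ w_true (hypothetical toy task).
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    x = rng.normal(size=(40, 2))
    y = x @ w_true + 0.01 * rng.normal(size=40)
    devices.append((x, y))

def local_update(w, x, y, lr=0.1, steps=5):
    """Run a few gradient steps on one device's local data."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * x.T @ (x @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

# Each communication round: the server broadcasts the global model,
# devices compute local updates, and the server aggregates them,
# weighted by local dataset size.
w_global = np.zeros(2)
for _ in range(20):
    local_models = [local_update(w_global, x, y) for x, y in devices]
    sizes = [len(y) for _, y in devices]
    w_global = np.average(local_models, axis=0, weights=sizes)

print(np.round(w_global, 2))  # close to w_true
```

No raw data ever leaves a device; only the updated model parameters are transmitted each round, which is the property the paper's energy-minimizing bandwidth allocation and scheduling act upon.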