I. Introduction
Federated learning (FL) comprises a parameter server and multiple edge devices, and it updates and optimizes a global model by aggregating the model parameters uploaded by those devices [1]. As shown in Fig. 1, FL enables cross-device model training while preserving privacy, since the raw data never leave the local devices. However, the pool of local clients is progressively expanding beyond servers and base stations [2], [3] to a wide range of small edge devices such as smartphones and wearables [4], [5]. This expansion not only increases the number of clients but also widens their diversity in computing power, storage, and battery capacity; as a result, device heterogeneity has become a more prominent bottleneck than communication for the current generation of battery-powered mobile devices [6].
Fig. 1. Training process of FL.
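The server-side aggregation step described above can be sketched as a sample-weighted average of client parameters, in the style of FedAvg. This is an illustrative sketch only; the function and variable names (`aggregate`, `client_params`, `n_samples`) are assumptions and not taken from the paper.

```python
import numpy as np

def aggregate(client_params, n_samples):
    """Return the sample-weighted average of client parameter vectors.

    client_params: list of NumPy arrays, one model-parameter vector per client.
    n_samples: list of local dataset sizes, used as aggregation weights.
    """
    total = sum(n_samples)
    weights = [n / total for n in n_samples]
    # The weighted sum of the clients' parameters becomes the new global model.
    return sum(w * p for w, p in zip(weights, client_params))

if __name__ == "__main__":
    # Two clients with toy 1-D "models"; client 0 holds twice as much data,
    # so its parameters receive twice the weight in the global update.
    params = [np.array([1.0, 2.0]), np.array([4.0, 5.0])]
    counts = [200, 100]
    print(aggregate(params, counts))  # -> [2. 3.]
```

In practice the server would repeat this step each communication round after collecting updates from a sampled subset of clients.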