I. Introduction
Federated Learning (FL) has emerged as an attractive learning paradigm for enabling edge intelligence in 5G and beyond while protecting data privacy [1], [2]. We consider the state-of-the-art FL system, Hierarchical FL, in mobile edge networks (MENs), and refer to this system as HFLMEN in this work. As illustrated in Fig. 1, HFLMEN distributes the computing tasks of machine learning (ML) jobs across many mobile user equipments (UEs) and uses a cloud server as the parameter server (PS) to orchestrate the iterative learning process with the help of base stations (BSs). In each global training round, HFLMEN lets clients, i.e., the participating UEs, download the global model parameters or gradients from the cloud server via the BSs, train the model on their local datasets, and upload their model updates to the BSs after a given number of local training rounds. The BSs aggregate the model updates of their associated clients and send the results back to the cloud server for global model synchronization.
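To make the two-level aggregation concrete, the following is a minimal sketch of one HFLMEN global round, assuming a simple linear model trained with gradient descent as a stand-in for the actual ML job; the function names (`local_update`, `hfl_round`) and the specific local objective are illustrative choices, not part of any particular HFL system's API.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, local_rounds=5):
    """One client's local training: a few gradient steps on a
    least-squares loss (a stand-in for the real ML model)."""
    for _ in range(local_rounds):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def hfl_round(w_global, bs_to_clients):
    """One global training round of hierarchical FL:
    clients train locally, each BS averages the models of its
    associated clients (edge aggregation), and the cloud PS
    averages the BS-level models, weighted by client counts,
    to synchronize the global model."""
    bs_models, bs_weights = [], []
    for clients in bs_to_clients:  # each entry: one BS's client datasets
        updates = [local_update(w_global.copy(), X, y) for X, y in clients]
        bs_models.append(np.mean(updates, axis=0))  # BS-level aggregation
        bs_weights.append(len(clients))
    # cloud-level aggregation for global model synchronization
    return np.average(bs_models, axis=0, weights=bs_weights)
```

A round trip of this sketch mirrors the description above: the global model is "downloaded" by passing `w_global` to each client, and "uploaded" by returning the locally trained weights for aggregation, first at the BS level and then at the cloud.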