I. Introduction
Exploiting the computing power of mobile devices equipped with specialized engines (e.g., the Neural Engine in the iPhone), federated learning (FL) combined with mobile crowdsensing (MCS) [1] has recently attracted considerable attention (e.g., at Google AI), as it allows the training task to be offloaded to the mobile crowd [2], [3]. Currently, most machine learning (ML) [4] requires pooling large amounts of user data, including sensitive personal information, on a server for model training and analysis. Centralizing such abundant data can cause network communication bottlenecks, high end-to-end latency, and user privacy leakage [5], [6].

FL can address these problems [7]. It allows mobile devices to cooperatively train a global model in a decentralized manner without storing the original training data centrally. In each iteration of FL, distributed mobile devices download the global model from the server and train it on their original datasets, which remain stored locally. The devices then upload their updated local models to the server, which aggregates the uploaded weight parameters to generate a new global model for the next iteration. The devices and the server repeat this process until the global model converges or reaches the expected test accuracy [8], at which point the FL process ends.
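To make the iterative process above concrete, the following is a minimal sketch of one such FL loop in Python. The aggregation rule is not specified in this section, so the sketch assumes FedAvg-style weighted averaging by sample count; the linear model, the gradient-descent local update, and all function and variable names (local_update, aggregate, and so on) are illustrative assumptions rather than the method of any cited work.

import numpy as np

def local_update(global_weights, X, y, lr=0.1, epochs=5):
    # One device: download the global model, then train it on the local
    # dataset, which never leaves the device. A linear model trained by
    # gradient descent on squared error stands in for real local training.
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def aggregate(local_weights, sample_counts):
    # Server: combine the uploaded local models into a new global model
    # by averaging, weighted by each device's local sample count.
    total = sum(sample_counts)
    return sum((n / total) * w for w, n in zip(local_weights, sample_counts))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each device holds a private local dataset (hypothetical synthetic data).
devices = []
for _ in range(5):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + 0.1 * rng.normal(size=40)
    devices.append((X, y))

global_w = np.zeros(2)
for round_idx in range(20):  # one FL iteration per loop
    local_models = [local_update(global_w, X, y) for X, y in devices]
    global_w = aggregate(local_models, [len(y) for _, y in devices])

print("learned:", global_w, "target:", true_w)

In this sketch the stopping rule is a fixed number of rounds for simplicity; as described above, a practical deployment would instead stop when the global model converges or reaches the expected test accuracy.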