I. Introduction
With the booming development of emerging technologies such as the Internet of Things (IoT), social networks, and the smart society, a large amount of data is generated at the network edge, and how to use this data to build intelligent applications has become an important research field [1]–[3]. However, due to limitations in transmission bandwidth, data storage, and security, transmitting large amounts of data to a centralized location for further processing, such as machine learning, is often impractical. Therefore, in order to utilize the computing resources at the network edge and realize ubiquitous learning, the idea of Federated Learning (FL) has been proposed. FL is a variant of distributed machine learning in which the training data is stored locally on the mobile users' devices [4]. More specifically, in an FL iteration, each mobile user first trains a local model using the global parameters obtained in the previous iteration together with its local data, and then sends the local model parameters to the Multi-access Edge Computing (MEC) server, which aggregates them into updated global parameters. This process continues until the training accuracy converges. In practice, however, mobile users must transmit their local model parameters over wireless channels with limited radio resources, so transmission errors may be introduced, and the performance of FL may consequently deteriorate.
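To make the iterative procedure concrete, one FL round can be sketched in a FedAvg-style form. This is an illustrative example rather than the specific scheme analyzed in this paper; the symbols $w$, $F_k$, $\eta$, $n_k$, and $K$ are introduced here only for exposition.

% Illustrative FedAvg-style round: each user k refines the current global
% parameters on its own data, and the MEC server averages the results.
\begin{align}
  w_k^{(t+1)} &= w^{(t)} - \eta \nabla F_k\!\left(w^{(t)}\right),
  && \text{local update at user } k, \\
  w^{(t+1)} &= \sum_{k=1}^{K} \frac{n_k}{n}\, w_k^{(t+1)},
  && \text{aggregation at the MEC server},
\end{align}

where $F_k$ denotes user $k$'s local loss, $\eta$ is the learning rate, $n_k$ is the size of user $k$'s local dataset, and $n = \sum_{k=1}^{K} n_k$. Under this view, only the model parameters $w_k^{(t+1)}$ traverse the wireless uplink, which is why errors in their transmission directly affect the aggregated global model.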