I. Introduction
With the ubiquitous intelligence of future 6G vehicular networks, deep model training is critical, since it mines knowledge from vehicular data to improve the quality of many artificial intelligence (AI)-driven vehicular services [1]. Under the traditional centralized learning paradigm, model training requires vehicles to upload their raw data to a central server, which incurs significant communication overhead and exposes vehicles to the risk of privacy leakage. With the increasing emphasis on privacy and the widespread deployment of edge computing in vehicular networks, federated learning (FL) has emerged as a promising distributed learning paradigm. FL enables vehicles to train local models on their private data and then upload only the local gradients to a global server, which aggregates them to train the global model [2]. In this process, the vehicles retain their private data and upload only the local gradients, whose size is much smaller than that of the raw data; this significantly reduces the communication overhead and alleviates the risk of privacy leakage [3].
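The FL round described above, where each vehicle trains on its private data and uploads only a gradient for server-side aggregation, can be sketched as follows. This is a minimal FedAvg-style illustration under assumed settings (least-squares loss, equal-weight averaging); all names are illustrative, not from the paper.

```python
import numpy as np

def local_gradient(weights, X, y):
    """A vehicle computes the gradient of a least-squares loss on its
    private data; only this gradient is uploaded, never the raw data."""
    preds = X @ weights
    return X.T @ (preds - y) / len(y)

def federated_round(weights, vehicle_data, lr=0.1):
    """The global server averages the local gradients uploaded by the
    vehicles and applies one update to the global model."""
    grads = [local_gradient(weights, X, y) for X, y in vehicle_data]
    avg_grad = np.mean(grads, axis=0)
    return weights - lr * avg_grad

# Illustrative setup: three vehicles, each holding a private local dataset
# generated from the same underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
vehicles = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    vehicles.append((X, y))

# Run several global rounds; the server never sees any vehicle's (X, y).
w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, vehicles)
```

Note that each round transmits only a gradient of the model's dimension (here, 2 floats per vehicle) rather than the 50-sample local dataset, which is the communication saving the paragraph refers to.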