I. Introduction
Federated Learning (FL) [1]–[5], as an innovative distributed learning framework [4], [6]–[9], is widely applied across various fields. It protects the privacy of local data while accomplishing distributed model training. Specifically, each client trains a model on its locally collected data and uploads the model parameters to the server. The server then aggregates all received parameters into an updated global model and sends it back to each client. Through iterative training, FL can produce a global model whose performance is comparable to that of centralized training. However, recent studies [10]–[13] have revealed that an honest-but-curious server can exploit its authority to collect the model gradients uploaded by clients and apply gradient inversion attacks to reconstruct the clients' private data, indicating that FL remains vulnerable to privacy leakage attacks. Consequently, it is necessary to defend FL against privacy leakage attacks while maintaining its main-task performance, e.g., model accuracy.
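The iterative client-server procedure described above can be sketched as follows. This is a minimal illustration of one FedAvg-style training loop, not the method of this paper; the least-squares local objective, the client datasets, and all function names (`local_update`, `fedavg_aggregate`, `federated_round`) are hypothetical choices made for the example.

```python
import numpy as np

def local_update(params, data, lr=0.1):
    # Hypothetical local training step: one gradient step on a
    # least-squares objective, standing in for each client's real training.
    X, y = data
    grad = 2 * X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def fedavg_aggregate(client_params, client_sizes):
    # Server-side aggregation: average the uploaded parameters,
    # weighted by each client's local dataset size (FedAvg-style).
    total = sum(client_sizes)
    return sum(n / total * p for p, n in zip(client_params, client_sizes))

def federated_round(global_params, client_data):
    # One communication round: every client trains on its own data,
    # then the server aggregates the uploads into a new global model.
    updates = [local_update(global_params.copy(), d) for d in client_data]
    sizes = [len(d[1]) for d in client_data]
    return fedavg_aggregate(updates, sizes)

# Two synthetic clients sharing the same underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (30, 50):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

# Iterative training: repeated rounds drive the global model
# toward the solution of the combined (centralized) problem.
w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, clients)
```

Note that in this sketch the raw datasets `clients` never leave the loop body of `local_update`; only the parameter vectors are exchanged, which is precisely the channel that gradient inversion attacks target.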