I. Introduction
Large-scale, high-quality datasets have become essential for high-precision learning tasks. However, privacy and competition concerns often prevent data holders from sharing their data, resulting in data silos. Federated learning (FL) [1] addresses data silos by keeping data acquisition and processing local to the clients. In FL, only model updates, rather than the original data, are sent to the server, so privacy is protected without direct exposure of the data. Nevertheless, FL still faces challenges: adversaries can exploit model updates to infer clients' private information [2], [3], and can attempt to poison the global model [4], [5]. To address these challenges, researchers have proposed various defense strategies.
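For intuition, the sketch below illustrates the basic FL workflow described above; it is a minimal FedAvg-style simulation, not the protocol studied in this paper. The helper names local_update and federated_round, the linear model, and the data shapes are illustrative assumptions: each client takes one local gradient step on its private shard, and the server averages the returned weights, so only model parameters ever leave a client.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    # Hypothetical client-side step: one gradient step of linear
    # regression on the client's private data; only the resulting
    # weights are returned to the server, never the raw data.
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_round(global_weights, clients_data):
    # Hypothetical server-side step: average the clients' model
    # updates (FedAvg-style aggregation) to form the new global model.
    client_weights = [local_update(global_weights, d) for d in clients_data]
    return np.mean(client_weights, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    # Each client holds a private shard of data (a data silo).
    clients_data = []
    for _ in range(3):
        X = rng.normal(size=(50, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=50)
        clients_data.append((X, y))

    w = np.zeros(2)
    for _ in range(100):
        w = federated_round(w, clients_data)
    print("learned weights:", w)  # approaches true_w without sharing raw data
```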