I. Introduction
Federated learning (FL) [1] has emerged as a key framework in distributed learning, enabling collaborative model development without centralizing data. An FL system comprises two types of entities: 1) a central aggregation server and 2) a set of local clients. The central server initializes a global model for a given learning objective and distributes it to the clients. Each client then trains the model on its own data and returns the updated parameters or gradients to the server. The server applies a carefully designed aggregation rule to integrate these updates, refining the global model so that it is robust and representative of the clients' collective knowledge. This coordinated strategy not only improves the model's adaptability across heterogeneous data environments but also preserves the privacy of client data, addressing a major concern in today's data-centric applications.
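The communication round described above can be sketched in a few lines. The snippet below is an illustrative toy, not the method of any particular paper: `local_update` stands in for a client's local training (here, plain gradient descent on a least-squares objective over synthetic data), and `aggregate` implements sample-count-weighted parameter averaging in the style of federated averaging; all function names and hyperparameters are hypothetical choices for this example.

```python
import numpy as np

def local_update(global_params, data, lr=0.1, epochs=5):
    """Stand-in for client-side training: a few gradient-descent
    steps on a least-squares objective over the client's (X, y)."""
    w = global_params.copy()
    X, y = data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w, len(y)  # updated parameters and local sample count

def aggregate(updates):
    """Sample-count-weighted average of client parameters,
    in the style of federated averaging."""
    total = sum(n for _, n in updates)
    return sum(n * w for w, n in updates) / total

# Two synthetic clients sharing the same underlying linear model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

# Communication rounds: broadcast, local training, aggregation
global_w = np.zeros(2)
for _ in range(20):
    updates = [local_update(global_w, d) for d in clients]
    global_w = aggregate(updates)
```

After a handful of rounds the global parameters approach the shared underlying model, even though the server never sees the clients' raw data, only their parameter updates.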