I. Introduction
Traditional centralized machine learning techniques face two challenges: the high computational cost of processing large amounts of data collected at a single location, and the burden placed on the network by transferring that voluminous data [1]. Federated learning [2], a form of distributed machine learning, has attracted attention as a potential solution to these issues. In federated learning, a central server first distributes a model with a uniform structure to each participating server, and each server trains the model using only its own data. After several training cycles, each server sends its trained model back to the central server, which aggregates the received models. The central server then redistributes the aggregated model to the servers, and each server trains it on its own data again. By iterating this process, federated learning can train a model on all the data without collecting the data owned by each server in one place. Federated learning thus resolves the problem of training costs for extensive amounts of data by distributing training across multiple servers. It also resolves the network-load issue by integrating information from all the training data indirectly, through model exchanges, rather than gathering the data in a single location.
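The distribute-train-aggregate cycle described above can be sketched as follows. This is a minimal illustration only, not the method of any cited work: it assumes a simple linear model trained locally by least-squares gradient descent, and FedAvg-style aggregation (averaging weighted by local dataset size); all function and variable names are hypothetical.

```python
import numpy as np

def local_train(model, data, labels, lr=0.1, epochs=5):
    # Hypothetical local step: gradient descent on a least-squares
    # objective, using only this server's own data.
    w = model.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_model, client_datasets):
    # One round: distribute the global model, train locally on each
    # server, then aggregate by averaging weighted by dataset size
    # (a FedAvg-style merge; assumed here for illustration).
    updates, sizes = [], []
    for data, labels in client_datasets:
        updates.append(local_train(global_model, data, labels))
        sizes.append(float(len(labels)))
    return np.average(updates, axis=0, weights=sizes)

# Synthetic example: two servers hold disjoint samples of y = 2*x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    x = rng.normal(size=(50, 1))
    clients.append((x, 2.0 * x[:, 0]))

w = np.zeros(1)          # global model, redistributed each round
for _ in range(20):      # iterate distribute -> train -> aggregate
    w = federated_round(w, clients)
```

Note that only model parameters cross the network in each round; the raw data never leaves the server that owns it, which is the point made in the paragraph above.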