I. Introduction
Federated learning (FL) [1] has become a popular distributed framework that enables a large number of participants, each owning a local dataset, to cooperatively train a joint machine learning (ML) model. FL addresses the problems of client privacy [2] and of data and profit ownership [3], and takes advantage of edge computation [4] to a large extent. Classification methods, such as sparse support vector machines [5], stochastic gradient descent (SGD) based deep learning [6], graph neural networks [7], gradient boosting decision trees (GBDTs) [8], and meta-learning [9], play a crucial role in knowledge discovery in databases and data mining, and are widely used in FL. eXtreme Gradient Boosting (XGBoost) [10], being highly efficient, flexible, and portable, is well known for both classification and regression tasks. It was used by many winning teams in Kaggle competitions and was reported as the third most commonly used ML algorithm. In the foreseeable future, XGBoost and its variants will likely remain among the most popular methods in the data science community.
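To make the cooperative-training idea concrete, the following is a minimal sketch of server-side federated averaging in the spirit of [1]: each client trains locally, and the server combines the resulting parameters weighted by local dataset size. The function name `fed_avg` and the toy client parameters are illustrative assumptions, not part of any cited system.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Combine client model parameters by a weighted average (FedAvg-style).

    client_weights: list of 1-D numpy arrays, one parameter vector per client.
    client_sizes: number of local training samples held by each client.
    """
    total = sum(client_sizes)
    avg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        # Clients with more local data contribute proportionally more.
        avg += (n / total) * w
    return avg

# Three hypothetical clients with different data volumes.
weights = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(fed_avg(weights, sizes))  # weighted toward the largest client: [4. 5.]
```

In a full FL round, the averaged parameters would be broadcast back to the clients for the next round of local training; the sketch above covers only the aggregation step.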