
FedHiT: Privacy Protection for Federated Learning via Hierarchical Training


Abstract:

Federated learning (FL), which coordinates thousands of participants in a distributed manner, greatly protects the privacy of local data. However, recent research reveals that FL is still at risk of privacy leakage attacks. Consequently, a variety of techniques have been applied to provide privacy protection for FL while preserving effective distributed training, such as differential privacy (DP), gradient compression (GC), and homomorphic encryption (HE). However, these techniques remain limited in three aspects: communication overhead, fidelity, and effectiveness. To address these challenges, we propose FedHiT, a novel privacy protection scheme for FL via hierarchical training. It minimizes the information an honest-but-curious server can acquire by employing a hierarchical upload mechanism. FedHiT differs from previous work in three key aspects: (1) communication - it has low communication overhead, since it uploads only partial model parameters to the server in each iteration; (2) effectiveness - it can effectively defend against various privacy leakage attacks and adaptive attacks; (3) fidelity - it achieves this protection without compromising the model's accuracy. Extensive experiments conducted on three datasets under three attacks, against three state-of-the-art baselines, show that FedHiT outperforms the other methods in model accuracy, communication overhead, and privacy protection capacity. For instance, the average model accuracy with FedHiT is only 1.59% lower than training without any defense (i.e., a fidelity loss of only 1.59%), while its privacy protection ability is 1.4∼2 times that of DP and GC methods at the same fidelity.
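FedHiT's exact hierarchy and upload schedule are specified in the body of the paper; the following is only a minimal sketch of the partial-parameter-upload idea described above, in which a client reveals just one disjoint group of layers per round. The helper names (partition_layers, partial_upload) and the round-robin schedule are illustrative assumptions, not the authors' implementation.

import numpy as np

def partition_layers(layer_names, num_groups):
    # Split the model's named layers into disjoint groups; only one
    # group is uploaded per round, so the server never observes a
    # complete update within any single round.
    return [layer_names[i::num_groups] for i in range(num_groups)]

def partial_upload(local_params, group):
    # The client transmits only the layers in the current group.
    return {name: local_params[name] for name in group}

# Toy model with four named parameter tensors.
rng = np.random.default_rng(0)
params = {f"layer{i}": rng.normal(size=(4, 4)) for i in range(4)}
groups = partition_layers(list(params), num_groups=2)

for rnd in range(4):
    group = groups[rnd % len(groups)]   # rotate through the groups
    message = partial_upload(params, group)
    # 'message' is everything the server receives in this round.

Uploading only a fraction of the parameters per round is also the source of the communication savings claimed above.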
Date of Conference: 20-22 December 2024
Date Added to IEEE Xplore: 18 February 2025
Conference Location: Nanjing, China


I. Introduction

Federated Learning (FL) [1]–[5], as an innovative distributed learning framework [4], [6]–[9], is widely applied across various fields. It protects the privacy of local data while accomplishing distributed model training. Specifically, each client trains its model on locally collected data and uploads the model parameters to the server. The server then aggregates all the received model parameters to generate an updated global model and sends that model back to each client. Through iterative training, FL can produce a global model comparable to one trained centrally. However, recent studies [10]–[13] have revealed that an honest-but-curious server can use its authority to collect the model gradients uploaded by a client and mount a gradient inversion attack to reconstruct the client's private data, indicating that FL still carries the risk of privacy leakage. Consequently, it is necessary to defend FL against privacy leakage attacks while maintaining its main-task performance, e.g., model accuracy.
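To make the training loop above concrete, the following is a minimal sketch of the standard FedAvg-style protocol the paragraph describes (local training followed by data-size-weighted server-side averaging); the toy least-squares objective and all helper names are illustrative assumptions, not the paper's setup.

import numpy as np

def local_update(w_global, data, lr=0.01, epochs=1):
    # Client-side step: start from the global weights and take gradient
    # steps on the local objective (a toy least-squares loss here).
    X, y = data
    w = w_global.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # grad of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def aggregate(client_weights, client_sizes):
    # Server-side step: weighted average of the received parameters.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy federation: three clients sharing a 5-parameter linear model.
rng = np.random.default_rng(0)
w_global = np.zeros(5)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):
    updates = [local_update(w_global, d) for d in clients]
    w_global = aggregate(updates, [len(y) for _, y in clients])

Note that in this vanilla protocol the server sees each client's full update every round, which is exactly the surface that gradient inversion attacks exploit.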
