A Lightweight and Accuracy-Lossless Privacy-Preserving Method in Federated Learning


Abstract:

The emergence of big data and artificial intelligence (AI) marks a significant milestone in technological advancement, impacting various sectors and transforming the essence of public and personal activities. Traditional machine learning, known as centralized learning, collects data from multiple sources for model development, which poses significant privacy risks. To resolve the conflict between the need for data sharing and the protection of privacy, federated learning (FL) has emerged as a promising solution. This approach allows a central server and numerous local devices to collaboratively develop a model by continuously exchanging model updates rather than raw data. However, it also incurs considerable communication costs and raises privacy concerns, since sensitive information can still leak through the central server and the local devices. To address these issues, we propose a lightweight and accuracy-lossless privacy-preserving FL scheme based on gradient clipping. The central server adds noise to the global model to prevent clients from extracting valuable information, and the local devices then train the perturbed model on their own data. To reduce the communication load, parameters with little impact on the model are removed using the Fisher information matrix. Additionally, to strengthen privacy protection, the client-computed parameters are perturbed using the Diffie-Hellman key exchange. Our experiments show that this approach greatly reduces the communication load for both clients and the server, while effectively safeguarding client privacy and maintaining the model's accuracy.
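
For concreteness, the sketch below illustrates the three protection steps the abstract describes: server-side noise injection, Fisher-information-based parameter pruning, and pairwise Diffie-Hellman masking. Every function name, constant, and threshold here is an illustrative assumption made for this sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def server_add_noise(global_params, sigma=0.01):
    # Server-side step: perturb the released global model so clients
    # cannot read its exact parameters.
    return [w + rng.normal(0.0, sigma, w.shape) for w in global_params]

def prune_by_fisher(update, fisher_diag, keep_ratio=0.5):
    # Keep only the coordinates with the largest diagonal Fisher
    # information, i.e., drop parameters with little impact on the
    # model to shrink the upload. Returns kept indices and values.
    flat_u = np.concatenate([u.ravel() for u in update])
    flat_f = np.concatenate([f.ravel() for f in fisher_diag])
    k = max(1, int(keep_ratio * flat_u.size))
    keep = np.argsort(flat_f)[-k:]
    return keep, flat_u[keep]

def dh_shared_mask(my_secret, peer_public, shape, prime=2**61 - 1):
    # Toy Diffie-Hellman masking: both peers derive the same shared
    # secret and therefore the same pseudorandom mask; one adds it,
    # the other subtracts it, so the masks cancel in the server's sum.
    shared = pow(peer_public, my_secret, prime)
    return np.random.default_rng(shared).normal(0.0, 1.0, shape)

# Example: two clients mask an update before upload.
g, p = 5, 2**61 - 1                    # public DH parameters
a, b = 123456789, 987654321            # clients' private keys
A, B = pow(g, a, p), pow(g, b, p)      # exchanged public keys
update = np.arange(8, dtype=float)
masked_1 = update + dh_shared_mask(a, B, update.shape)
masked_2 = update - dh_shared_mask(b, A, update.shape)
assert np.allclose(masked_1 + masked_2, 2 * update)  # masks cancel
```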
Published in: IEEE Internet of Things Journal (Volume: 12, Issue: 3, 01 February 2025)
Page(s): 3118 - 3129
Date of Publication: 11 October 2024


I. Introduction

Federated learning (FL) [1] has emerged as a key framework in the distributed learning domain, enabling collaborative model development without requiring data centralization. The FL structure comprises two kinds of entities: 1) a central aggregation server and 2) numerous local clients. The central server initializes a global model designed for a particular task and distributes it to the clients. On receiving the global model, each client trains it on its own data and then sends the updated model parameters or gradients back to the central server. The server applies a carefully designed aggregation rule to integrate these updates, refining the global model so that it is robust and representative of the collective knowledge. This coordinated strategy not only improves the model's adaptability across diverse data environments but also protects the privacy of client data, a significant concern in today's data-centric applications.
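
As a minimal illustration of this round structure, the sketch below simulates one communication round with a weighted-average (FedAvg-style) aggregation rule; the least-squares local objective and all names are assumptions chosen for demonstration, not the scheme proposed in this article.

```python
import numpy as np

def local_train(global_params, data, lr=0.1, steps=5):
    # Client-side step: a few gradient steps on the local data; a
    # least-squares objective stands in for the client's real task.
    X, y = data
    w = global_params.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def aggregate(client_params, client_sizes):
    # Server-side step: weight each client's parameters by its local
    # dataset size and average them into the new global model.
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# One communication round with three simulated clients.
rng = np.random.default_rng(1)
global_params = np.zeros(4)
clients = [(rng.normal(size=(20, 4)), rng.normal(size=20)) for _ in range(3)]
updated = [local_train(global_params, d) for d in clients]
global_params = aggregate(updated, [len(d[1]) for d in clients])
```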
