
RPFL: Robust and Privacy Federated Learning against Backdoor and Sample Inference Attacks


Abstract:

Federated learning (FL) offers a solution for mitigating the problem of data silos. However, FL faces threats to both robustness and privacy, which hinder its widespread application. Most existing approaches address only one of these threats or require significant resources to tackle both simultaneously. To meet the requirements of robustness and privacy, we propose a robust and privacy-preserving FL scheme (RPFL) based on random selection and lightweight sharing. Our random selection method effectively invalidates malicious models to protect the integrity of the global model. In addition, we employ multi-party computation (MPC) to enhance privacy. To mitigate the additional communication and computation overhead introduced by MPC, we propose lightweight sharing. We further adopt compressed sensing and parameter clipping to improve the communication efficiency and robustness of RPFL. We formally analyze RPFL in terms of robustness, privacy, and efficiency. Extensive experimental results demonstrate that RPFL effectively improves the robustness and privacy of FL with only a negligible performance penalty.
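As a rough illustration of two of the building blocks named in the abstract, the sketch below clips a flattened model update to a bounded L2 norm and then compresses it with a random Gaussian measurement matrix, in the spirit of compressed sensing. The clip bound, compression ratio, and shared seed are hypothetical choices for this example, not values from the paper, and RPFL's lightweight sharing and MPC steps are omitted entirely.

import numpy as np

def clip_update(update, clip_bound):
    # Scale the update so its L2 norm does not exceed clip_bound.
    norm = np.linalg.norm(update)
    if norm > clip_bound:
        update = update * (clip_bound / norm)
    return update

def compress_update(update, m, rng):
    # Compressed-sensing style encoding: project the d-dimensional update
    # onto m << d random Gaussian measurements generated from a shared seed.
    d = update.shape[0]
    phi = rng.standard_normal((m, d)) / np.sqrt(m)  # random measurement matrix
    return phi @ update

# Toy usage with a stand-in for a flattened local model update.
data_rng = np.random.default_rng(0)
local_update = data_rng.standard_normal(10_000)
clipped = clip_update(local_update, clip_bound=1.0)        # hypothetical clip bound
measure_rng = np.random.default_rng(42)                    # seed assumed shared with the server
measurements = compress_update(clipped, m=1_000, rng=measure_rng)
print(measurements.shape)                                  # (1000,)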
Date of Conference: 17-21 December 2023
Date Added to IEEE Xplore: 26 March 2024
Conference Location: Ocean Flower Island, China


I. Introduction

Large-scale, high-quality datasets have become essential for high-precision learning tasks. However, privacy and competition concerns often prevent data holders from sharing their data, resulting in data silos. Federated learning (FL) [1] addresses this problem by keeping data acquisition and processing local to each client. In FL, only model updates, rather than the original data, are transmitted to the server, so private data is never directly exposed. Nevertheless, FL still faces challenges: adversaries may exploit model updates to infer clients' private information [2], [3] and attempt to poison the global model [4], [5]. To address these challenges, researchers have proposed various strategies.
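The following minimal sketch illustrates the workflow just described: each client trains locally and uploads only a model update, and the server aggregates updates from a randomly selected subset of clients. It assumes plain federated averaging; the client names, sampling fraction, and toy local-update routine are illustrative placeholders rather than RPFL's protocol.

import random
import numpy as np

def fl_round(global_model, clients, local_update_fn, sample_fraction=0.2, seed=0):
    # Select a random subset of clients for this round.
    rng = random.Random(seed)
    k = max(1, int(sample_fraction * len(clients)))
    selected = rng.sample(clients, k)
    # Each selected client trains locally and returns only a model update,
    # never its raw data; the server averages the updates into the new model.
    updates = [local_update_fn(c, global_model) for c in selected]
    return global_model + np.mean(updates, axis=0)

def toy_local_update(client, model):
    # Stand-in for local training: nudge the model toward a client-specific target.
    target = np.full_like(model, fill_value=int(client.split("_")[-1]) % 5)
    return 0.1 * (target - model)

model = np.zeros(4)
clients = [f"client_{i}" for i in range(10)]
for t in range(3):
    model = fl_round(model, clients, toy_local_update, seed=t)
print(model)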
