
Federated Split Learning With Data and Label Privacy Preservation in Vehicular Networks



Abstract:

Federated learning (FL) is an emerging distributed learning paradigm widely used in vehicular networks, where vehicles are enabled to train the deep model for the server while keeping private data locally. However, annotating private data is very difficult for vehicular users due to high costs and the need for professional expertise, and one solution is for roadside infrastructures to provide labels mapped to the data according to geographical coordinates. In this scenario, where vehicles and roadside infrastructures hold the data and labels, respectively, traditional FL is not applicable since it requires each participant to have both data and labels. In this article, we propose a federated split learning (FSL) paradigm that splits the deep model into two submodels, which are trained separately in the vehicles and the roadside infrastructures. The vehicles and the roadside infrastructures exchange the intermediate data (i.e., smashed data and cut layer gradients) when training the local submodels and upload the local gradients to the global server for aggregation into the global model. Specifically, we first adopt three types of privacy attacks to demonstrate that attackers could recover the private data and labels from the shared intermediate data and uploaded local gradients. We then propose a differential privacy (DP)-based defense mechanism that defends against these privacy attacks by perturbing the intermediate data. Furthermore, we design a contract-based incentive mechanism that encourages vehicles and roadside infrastructures to enhance training performance by adjusting their privacy strategies. Simulation results illustrate that the proposed defense mechanism remarkably degrades the performance of the attacks and that the proposed incentive mechanism is efficient in the FSL paradigm for vehicular networks.
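The abstract above describes one FSL training step: the vehicle runs the bottom submodel on its private data, shares (perturbed) smashed data, the infrastructure runs the top submodel with its labels and returns cut-layer gradients, and the vehicle completes the backward pass. The following is a minimal PyTorch-style sketch of that exchange; the model sizes, the Gaussian-noise perturbation, and the noise scale are illustrative assumptions rather than the paper's exact configuration.

```python
# One FSL step between a vehicle (data + bottom submodel) and a roadside
# infrastructure (labels + top submodel). Sizes and noise scale are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

bottom = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # held by the vehicle
top = nn.Sequential(nn.Linear(32, 4))                  # held by the infrastructure

opt_bottom = torch.optim.SGD(bottom.parameters(), lr=0.1)
opt_top = torch.optim.SGD(top.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)          # private data, never leaves the vehicle
y = torch.randint(0, 4, (8,))   # labels, never leave the infrastructure
sigma = 0.1                     # illustrative DP noise scale

# --- Vehicle: forward through the bottom submodel ---
smashed = bottom(x)
# Perturb the smashed data before sharing it (DP-style Gaussian noise).
smashed_noisy = smashed + sigma * torch.randn_like(smashed)

# --- Infrastructure: forward/backward through the top submodel ---
smashed_in = smashed_noisy.detach().requires_grad_(True)
loss = loss_fn(top(smashed_in), y)
opt_top.zero_grad()
loss.backward()
opt_top.step()
cut_grad = smashed_in.grad      # cut-layer gradients returned to the vehicle

# --- Vehicle: backward through the bottom submodel ---
opt_bottom.zero_grad()
smashed.backward(cut_grad)
opt_bottom.step()

print(f"loss = {loss.item():.4f}")
```

In a full FSL round, both parties would also upload their local submodel gradients to the global server for aggregation, and the cut-layer gradients could be perturbed in the same way as the smashed data.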
Published in: IEEE Transactions on Vehicular Technology ( Volume: 73, Issue: 1, January 2024)
Page(s): 1223 - 1238
Date of Publication: 24 August 2023


I. Introduction

In the ubiquitous intelligence of future 6G vehicular networks, deep model training is critical since it can mine knowledge from vehicular data to improve the quality of many artificial intelligence (AI)-driven vehicular services [1]. When executing model training, the traditional centralized learning paradigm requires vehicles to upload their raw data to a central server, which leads to significant communication overheads and the risk of privacy leakage for vehicles. With the increasing emphasis on privacy and the widespread deployment of edge computing in vehicular networks, federated learning (FL) has emerged as a promising distributed learning paradigm. FL enables vehicles to train local models on their private data and upload the local gradients to a global server, which aggregates them to train the global model [2]. In this process, the vehicles retain their private data and upload only the local gradients, which are much smaller in size, significantly reducing communication overheads and alleviating the risk of privacy leakage [3].
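To make the FL workflow described above concrete, the sketch below shows one synchronous round in which each vehicle computes a gradient on its private batch and the server averages the uploaded gradients into a global update. The model, the data, and the plain unweighted average are illustrative assumptions, not the paper's setup.

```python
# One FL round: vehicles compute local gradients, the server averages them
# and applies a single SGD step to the global model. All names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
global_model = nn.Linear(16, 4)
loss_fn = nn.CrossEntropyLoss()

def local_gradients(model, x, y):
    """Compute one local gradient on a vehicle's private batch."""
    loss = loss_fn(model(x), y)
    return torch.autograd.grad(loss, list(model.parameters()))

# Three vehicles with private batches (data stays local; only gradients are shared).
batches = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(3)]
all_grads = [local_gradients(global_model, x, y) for x, y in batches]

# Server-side aggregation: average the uploaded gradients, update the global model.
lr = 0.1
with torch.no_grad():
    for param, grads in zip(global_model.parameters(), zip(*all_grads)):
        param -= lr * torch.stack(grads).mean(dim=0)
```

The key point is that only the gradients, whose size matches the model parameters rather than the raw data, travel to the server in each round.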

