
FlexSplit: A Configurable, Privacy-Preserving Federated-Split Learning Framework



Abstract:

In this paper, we address the challenges of preserving client data and model-sharing privacy in a distributed machine learning system. Unlike conventional federated learning, where clients share their entire model, we propose a distributed learning framework named FlexSplit in which each client can select the number of layers trained between it and the server, giving it fine-grained control over its individual privacy level. FlexSplit improves scalability by performing part of the training process at multiple edge servers in parallel before aggregation at the cloud server. Preliminary experimental results show that FlexSplit can achieve higher validation accuracy than a conventional federated learning model. We also highlight the privacy-utility trade-off: clients can raise their privacy level by sharing fewer layers, mitigating privacy attacks at the cost of lower validation accuracy for their local model.
Date of Conference: 28 May 2023 - 01 June 2023
Date Added to IEEE Xplore: 23 October 2023
Conference Location: Rome, Italy

I. Introduction

Machine learning using deep neural networks (DNNs) is a promising approach to improving the performance of complex functionalities in 5G mobile networks such as optimisation, network control, and intelligent signal processing [1]. Typical centralised learning approaches require all mobile clients to transmit all their data to the cloud server for model training, which consumes substantial communication resources and creates privacy risks, since the cloud server manages the clients' data and models. To improve privacy and reduce communication overheads, distributed learning approaches such as federated learning (FL) have been proposed, where mobile clients collaborate by locally training the global DNN model on their private data and sharing only their trained models with the edge or cloud server [2]. However, it has been shown that sharing client models can still lead to privacy vulnerabilities, where an attacker could reconstruct or infer the clients' private data using information embedded in their trained models [3]–[8]. Furthermore, the outputs of neurons at different layers of a DNN model contain private/sensitive information that an attacker can target (see Fig. 1 and [9]–[11]).
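The core idea behind a configurable federated-split design can be sketched with a toy partitioning function. This is an illustrative simplification, not the paper's implementation: the layer sizes, the `split_model` helper, and the cut-point parameter are all hypothetical names introduced here. The client keeps the first `cut` layers private and the server trains the remainder, so choosing a smaller `cut` means the client shares fewer layer outputs.

```python
# Hypothetical sketch of a configurable split point (names are illustrative,
# not taken from the paper). Each client retains the layers up to the cut
# locally; the server trains the layers from the cut onward.

def split_model(layer_sizes, cut):
    """Partition a DNN's layer-size list into client-side and server-side
    segments at index `cut`. The cut layer's width appears in both halves
    because its activations are what the client transmits to the server."""
    assert 1 <= cut < len(layer_sizes) - 1, "cut must leave layers on both sides"
    client_layers = layer_sizes[: cut + 1]  # trained privately on the client
    server_layers = layer_sizes[cut:]       # trained at the edge/cloud server
    return client_layers, server_layers

# A small fully connected DNN: 784-dim input, 10-class output.
sizes = [784, 256, 128, 64, 10]
client_part, server_part = split_model(sizes, cut=2)
print(client_part)  # [784, 256, 128]
print(server_part)  # [128, 64, 10]
```

In this toy view, each client may pick a different `cut`, trading privacy (fewer shared layers and activations) against the accuracy benefit of server-side training, which mirrors the privacy-utility trade-off described in the abstract.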

