
Exploring the Privacy-Energy Consumption Tradeoff for Split Federated Learning



Abstract:

Split Federated Learning (SFL) has recently emerged as a promising distributed learning technology that leverages the strengths of both federated and split learning. In this approach, clients are responsible for training only part of the model, termed the client-side model, thereby alleviating their computational burden. Clients can then improve their convergence speed by synchronizing these client-side models. Consequently, SFL has received significant attention from both industry and academia, with diverse applications in 6G networks. However, while offering considerable benefits, SFL introduces additional communication overhead when interacting with servers, and the current SFL method raises several privacy concerns owing to these frequent interactions. In this context, the choice of the cut layer in SFL, which splits the model into client- and server-side models, can substantially affect both the energy consumption of clients and their privacy, because it determines the training burden and the output of the client-side models. Extensive research is therefore required to analyze the impact of cut layer selection, and careful consideration should be given to this aspect. This study provides a comprehensive overview of the SFL process and reviews its state of the art. We thoroughly analyze energy consumption and privacy with respect to cut layer selection in SFL, considering the influence of various system parameters on the selection strategy. Moreover, we provide an illustrative example of cut layer selection that minimizes the risk of clients' raw data being reconstructed at the server while keeping energy consumption within the required budget, which involves a trade-off. We also discuss other control variables that can be optimized in conjunction with cut layer selection. Finally, we highlight open challenges in this field as promising avenues for future research and development.
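Read as an optimization problem, the cut-layer selection described in the abstract can be sketched as follows; the notation (\ell for the chosen cut layer among the L-1 candidate split points of an L-layer model, R(\ell) for the raw-data reconstruction risk at the server, E(\ell) for the clients' per-round energy consumption, and E_{\max} for the energy budget) is illustrative and not taken from the paper:

\[
\begin{aligned}
\min_{\ell \in \{1,\dots,L-1\}} \quad & R(\ell) \\
\text{subject to} \quad & E(\ell) \le E_{\max}
\end{aligned}
\]

Deeper cut layers generally reduce R(\ell), since the smashed data sent to the server is further removed from the raw input, but increase E(\ell), since clients compute and train more layers locally; the energy constraint therefore caps how much privacy can be bought with additional client-side computation.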
Published in: IEEE Network (Volume: 38, Issue: 6, November 2024)
Page(s): 388 - 395
Date of Publication: 01 May 2024


Introduction

Federated Learning (FL) fundamentally addresses the challenges associated with centralized learning by distributing the training process across multiple clients, enabling parallel processing. This approach also helps to safeguard the privacy of raw data stored on clients, because only model parameters are exchanged. However, FL requires each client to perform full local training, which can significantly burden clients with limited battery power and computational resources when the model is a large Deep Learning (DL) model.

Split Learning (SL) has emerged as a solution to mitigate this problem. SL splits a full DL model into two sub-models, one trained on a main server and the other across distributed clients. This approach alleviates the local training burden associated with FL while preserving data privacy. Nevertheless, SL introduces its own challenge, namely training time overhead, owing to its relay-based training method: only one client trains with the main server at any given time, while the other clients remain idle. This sequential training leads to inefficient distributed processing and long training latency. To address this challenge, various strategies have been proposed to parallelize the SL training process [1].

Inspired by these efforts, split federated learning, simply called split-fed learning (SFL), has recently been proposed as a novel approach that leverages the strengths of both FL and SL. Unlike SL, in SFL all clients perform their local training in parallel while actively engaging with the main server and a federated server (fed server). The fed server plays a pivotal role in aggregating the local model updates from clients using a predefined aggregation technique, such as FedAvg; this aggregation occurs synchronously in each training round. By introducing this additional aggregation server, SFL seamlessly combines the advantages of both FL and SL [2].
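To make the SFL training round described above concrete, the following is a minimal sketch of one round, written in PyTorch under illustrative assumptions: the toy model, cut-layer index, data shapes, client count, and learning rates are not taken from the paper, and the paper's SFL variant may differ, for example in how the server-side model is shared across clients. Each client runs its client-side model up to the cut layer, the main server completes the forward and backward passes and returns the cut-layer gradients, and the fed server then aggregates the client-side models with FedAvg.

import copy
import torch
import torch.nn as nn

CUT_LAYER = 2  # hypothetical split point: the first two modules stay on the client

# Toy full model; the cut layer divides it into client-side and server-side sub-models.
full_model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),        # client-side part
    nn.Linear(64, 64), nn.ReLU(),        # server-side part
    nn.Linear(64, 10),
)
client_side = nn.Sequential(*list(full_model)[:CUT_LAYER])
server_side = nn.Sequential(*list(full_model)[CUT_LAYER:])

clients = [copy.deepcopy(client_side) for _ in range(3)]      # local client-side models
server_opt = torch.optim.SGD(server_side.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# --- one SFL round: every client trains with the main server (shown sequentially here,
# but conceptually in parallel; a single shared server-side model is assumed) ---
for client_model in clients:
    client_opt = torch.optim.SGD(client_model.parameters(), lr=0.1)
    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))     # client's local mini-batch

    smashed = client_model(x)                        # forward pass up to the cut layer
    smashed_srv = smashed.detach().requires_grad_()  # "smashed data" sent to the main server

    server_opt.zero_grad()
    loss = loss_fn(server_side(smashed_srv), y)      # server completes the forward pass
    loss.backward()                                  # server-side backward pass
    server_opt.step()

    client_opt.zero_grad()
    smashed.backward(smashed_srv.grad)               # client backward pass with the returned gradient
    client_opt.step()

# --- fed server: FedAvg over the client-side models ---
avg_state = {k: torch.stack([c.state_dict()[k] for c in clients]).mean(dim=0)
             for k in clients[0].state_dict()}
for c in clients:
    c.load_state_dict(avg_state)

In this sketch, only the cut-layer activations (the smashed data) and their gradients cross the client-server boundary; the choice of cut layer therefore governs both how much computation, and hence energy, remains on the client and how revealing the transmitted activations are, which is exactly the trade-off examined in the remainder of the article.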

