
Predictive GAN-Powered Multi-Objective Optimization for Hybrid Federated Split Learning


Abstract:

As an edge intelligence algorithm for multi-device collaborative training, federated learning (FL) can protect data privacy but increase the computing load of wireless devices. In contrast, split learning (SL) can reduce the computing load of devices through model splitting and assignment. To take advantage of both FL and SL, we propose a hybrid federated split learning (HFSL) framework for wireless networks in this paper, which combines the multi-worker collaborative training of FL with the flexible model splitting of SL. To reduce computational idleness during model splitting, we design a parallel computing scheme for model splitting without label sharing and conduct a theoretical analysis of the impact of the delayed gradient on convergence. Aiming to characterize the trade-off between training time and energy consumption, we formulate the joint optimization of splitting decisions, bandwidth, and computing resources as a multi-objective problem. We then propose a predictive generative adversarial network (GAN)-powered multi-objective optimization algorithm to obtain the Pareto front of the problem, in which the discriminator guides the training of the generator to predict promising solutions. Experimental results demonstrate that the proposed algorithm outperforms the considered baselines in finding Pareto-optimal solutions, and that the solutions obtained from the proposed HFSL framework can dominate those of FL.
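The abstract only sketches the predictive GAN-powered search, so the following minimal sketch illustrates the general idea of a discriminator steering a generator toward promising (non-dominated) candidates of a bi-objective problem. The decision vector, objective functions, network sizes, and update schedule below are placeholder assumptions for illustration, not the paper's system model or exact algorithm.

```python
# Hypothetical sketch: a generator proposes candidate resource-allocation vectors
# and a discriminator, trained to recognize the current non-dominated candidates,
# steers the generator toward the Pareto region of two objectives (both minimized).
import torch
import torch.nn as nn

DIM, NOISE, POP = 6, 8, 64            # decision-vector size, latent size, population

def objectives(x):                    # placeholder (time, energy), not the paper's model
    t = (x ** 2).sum(dim=1)
    e = ((x - 1.0) ** 2).sum(dim=1)
    return torch.stack([t, e], dim=1)

def non_dominated(f):                 # boolean mask of Pareto-optimal rows of f
    n = f.size(0)
    mask = torch.ones(n, dtype=torch.bool)
    for i in range(n):
        dominated = ((f <= f[i]).all(dim=1) & (f < f[i]).any(dim=1)).any()
        mask[i] = not dominated
    return mask

gen = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DIM), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

archive = torch.rand(POP, DIM)                            # random initial candidates
for it in range(200):
    elite = archive[non_dominated(objectives(archive))]   # current "promising" set
    fake = gen(torch.randn(POP, NOISE))

    # Discriminator: separate current non-dominated candidates from generated ones.
    d_loss = bce(disc(elite), torch.ones(elite.size(0), 1)) + \
             bce(disc(fake.detach()), torch.zeros(POP, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: fool the discriminator, i.e. predict promising candidates.
    g_loss = bce(disc(fake), torch.ones(POP, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # Evaluate new candidates and keep the non-dominated union as the archive.
    merged = torch.cat([archive, fake.detach()], dim=0)
    archive = merged[non_dominated(objectives(merged))][:POP]

print("approximate Pareto front:\n", objectives(archive))
```

In the paper's setting, the candidate vector would encode the splitting decisions and the bandwidth and computing-resource allocations, and the two objectives would be the modeled training time and energy consumption.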
Published in: IEEE Transactions on Communications (Volume: 71, Issue: 8, August 2023)
Page(s): 4544 - 4560
Date of Publication: 19 May 2023



I. Introduction

With the rapid growth of the Internet of things (IoT), a large amount of data is generated by IoT devices every day [1]. To take advantage of this distributed data, edge machine learning algorithms are being developed to realize intelligent applications in wireless networks [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. Federated learning (FL) [12] was proposed as a method to collaboratively train a machine learning model on the local data of many wireless devices. Compared with traditional centralized learning, which transmits large amounts of raw data to a cloud server for training, FL can effectively protect data privacy because the local data are never exchanged. In FL, however, the IoT devices, also called workers, must perform local updates of the training model using their own computing power. This can greatly increase the computational burden on the workers, especially when training deep neural networks with high computational complexity. When workers have low computing power, the FL training time can be significantly prolonged, which can impede the practical application of FL. In addition, performing local updates entirely with the workers' own computing power increases their energy consumption.
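As a rough illustration of the local-update burden described above, the following FedAvg-style sketch (with a hypothetical model and random data, not the paper's setup) shows that each worker trains the entire model on-device, so all forward and backward computation falls on the worker and only model parameters are exchanged with the server.

```python
# Minimal FedAvg-style sketch: every worker performs full local training of the
# model with its own compute; the server only averages the returned parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_update(global_model, data, target, epochs=5, lr=0.01):
    model = nn.Linear(10, 2)                        # worker's copy of the full model
    model.load_state_dict(global_model.state_dict())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):                         # all compute happens on-device
        opt.zero_grad()
        F.cross_entropy(model(data), target).backward()
        opt.step()
    return model.state_dict()

global_model = nn.Linear(10, 2)
workers = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(4)]
for rnd in range(3):                                # communication rounds
    states = [local_update(global_model, x, y) for x, y in workers]
    avg = {k: torch.stack([s[k] for s in states]).mean(dim=0) for k in states[0]}
    global_model.load_state_dict(avg)               # server averages parameters only
```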
