
Predictive GAN-Powered Multi-Objective Optimization for Hybrid Federated Split Learning



Abstract:

As an edge intelligence algorithm for multi-device collaborative training, federated learning (FL) can protect data privacy but increase the computing load of wireless devices. In contrast, split learning (SL) can reduce the computing load of devices by model splitting and assignment. To take advantage of FL and SL, we propose a hybrid federated split learning (HFSL) framework for wireless networks in this paper, which combines the multi-worker collaborative training of FL and the flexible splitting of SL. To reduce the computational idleness in model splitting, we design a parallel computing scheme for model splitting without label sharing and conduct a theoretical analysis of the impact of the delayed gradient on the convergence. Aiming to obtain the trade-off between the training time and energy consumption, we model the joint optimization problem of splitting decisions, the bandwidth, and computing resources as a multi-objective problem. As such, we propose a predictive generative adversarial network (GAN)-powered multi-objective optimization algorithm to obtain the Pareto front of the problem, which utilizes the discriminator to guide the training of the generator to predict promising solutions. Experimental results demonstrate that the proposed algorithm outperforms the considered baselines in finding Pareto optimal solutions, and the solutions obtained from the proposed HFSL framework can dominate the solution of FL.
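The abstract's claim that the HFSL solutions "dominate" the FL solution refers to Pareto dominance over the two objectives, training time and energy consumption: one solution dominates another if it is no worse in both objectives and strictly better in at least one. A minimal sketch of that check and of filtering a candidate set down to its Pareto front, using hypothetical (time, energy) values (not results from the paper):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (time, energy) pairs."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Hypothetical (training time, energy consumption) objective values.
candidates = [(4.0, 9.0), (5.0, 7.0), (6.0, 6.5), (7.0, 8.0)]
front = pareto_front(candidates)  # (7.0, 8.0) is dominated by (5.0, 7.0)
```

The paper's GAN-powered algorithm searches for exactly such a non-dominated set, with the discriminator steering the generator toward solutions likely to land on the front.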
Published in: IEEE Transactions on Communications ( Volume: 71, Issue: 8, August 2023)
Page(s): 4544 - 4560
Date of Publication: 19 May 2023


I. Introduction

With the rapid growth of the Internet of Things (IoT), a large amount of data is generated by IoT devices every day [1]. To exploit this distributed data, edge machine learning algorithms are being developed to realize intelligent applications in wireless networks [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11]. Federated learning (FL) [12] has been proposed to collaboratively train a machine learning model on the local data of many wireless devices. Compared with traditional centralized learning, which transmits large amounts of raw data to a cloud server for training, FL can effectively protect data privacy because the local data are never exchanged. However, in FL the IoT devices, also called workers, must perform local updates of the training model with their own computing power. This can greatly increase the computational burden on the workers, especially when training deep neural networks with high computational complexity. When workers have low computing power, the FL training time can be significantly prolonged, which impedes the practical application of FL. In addition, performing local updates entirely with their own computing power increases the workers' energy consumption.
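The local-update-then-aggregate pattern described above can be sketched in a few lines. This is a toy illustration of federated averaging (FedAvg, [12]) on a linear model with squared loss; all function names and the synthetic data are hypothetical, not from the paper:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One worker's local SGD on a linear model (squared loss).
    This is the step that consumes the worker's own compute and energy."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of (1/2n)||Xw - y||^2
        w -= lr * grad
    return w

def fedavg_round(w_global, workers):
    """Server-side aggregation: average local models weighted by dataset size.
    Only model parameters are exchanged, never the raw data."""
    total = sum(len(y) for _, y in workers)
    w_new = np.zeros_like(w_global)
    for X, y in workers:
        w_new += (len(y) / total) * local_update(w_global, X, y)
    return w_new

# Synthetic example: three workers, each holding a private local dataset.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
workers = [(X, X @ w_true) for X in (rng.normal(size=(50, 2)) for _ in range(3))]

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, workers)
```

Note that every gradient step in `local_update` runs on the worker; split learning, in contrast, moves part of this computation to the server by splitting the model, which is the gap HFSL targets.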

References
1.
G. Zhu, D. Liu, Y. Du, C. You, J. Zhang and K. Huang, "Toward an intelligent edge: Wireless communication meets machine learning", IEEE Commun. Mag., vol. 58, no. 1, pp. 19-25, Jan. 2020.
2.
X. Wang, Y. Han, V. C. M. Leung, D. Niyato, X. Yan and X. Chen, "Convergence of edge computing and deep learning: A comprehensive survey", IEEE Commun. Surveys Tuts., vol. 22, no. 2, pp. 869-904, 2nd Quart. 2020.
3.
M. Chen et al., "Distributed learning in wireless networks: Recent progress and future challenges", IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3579-3605, Dec. 2021.
4.
C. Zhang, P. Patras and H. Haddadi, "Deep learning in mobile and wireless networking: A survey", IEEE Commun. Surveys Tuts., vol. 21, no. 3, pp. 2224-2287, 3rd Quart. 2019.
5.
A. Zappone, M. Di Renzo and M. Debbah, "Wireless networks design in the era of deep learning: Model-based AI-based or both?", IEEE Trans. Commun., vol. 67, no. 10, pp. 7331-7376, Oct. 2019.
6.
Z. Zhao, C. Feng, H. H. Yang and X. Luo, "Federated-learning-enabled intelligent fog radio access networks: Fundamental theory key techniques and future trends", IEEE Wireless Commun., vol. 27, no. 2, pp. 22-28, Apr. 2020.
7.
E. Li, L. Zeng, Z. Zhou and X. Chen, "Edge AI: On-demand accelerating deep neural network inference via edge computing", IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 447-457, Jan. 2020.
8.
L. Liu, J. Zhang, S. Song and K. B. Letaief, "Hierarchical federated learning with quantization: Convergence analysis and system design", IEEE Trans. Wireless Commun., vol. 22, no. 1, pp. 2-18, Jan. 2023.
9.
M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor and S. Cui, "A joint learning and communications framework for federated learning over wireless networks", IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 269-283, Jan. 2021.
10.
C. Feng, Z. Zhao, Y. Wang, T. Q. S. Quek and M. Peng, "On the design of federated learning in the mobile edge computing systems", IEEE Trans. Commun., vol. 69, no. 9, pp. 5902-5916, Sep. 2021.
11.
B. Yin, Z. Chen and M. Tao, "Dynamic data collection and neural architecture search for wireless edge intelligence systems", IEEE Trans. Wireless Commun., vol. 22, no. 1, pp. 688-703, Jan. 2023.
12.
B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. Y. Arcas, "Communication-efficient learning of deep networks from decentralized data", Proc. Int. Conf. Artif. Intell. Statist., pp. 1273-1282, 2017.
13.
O. Gupta and R. Raskar, "Distributed learning of deep neural network over multiple agents", J. Netw. Comput. Appl., vol. 116, pp. 1-8, Aug. 2018.
14.
P. Vepakomma, O. Gupta, T. Swedish and R. Raskar, "Split learning for health: Distributed deep learning without sharing raw patient data", arXiv:1812.00564, 2018.
15.
S. Wang, X. Zhang, H. Uchiyama and H. Matsuda, "HiveMind: Towards cellular native machine learning model splitting", IEEE J. Sel. Areas Commun., vol. 40, no. 2, pp. 626-640, Feb. 2022.
16.
C. Thapa, P. C. M. Arachchige, S. Camtepe and L. Sun, "SplitFed: When federated learning meets split learning", Proc. AAAI Conf. Artif. Intell., pp. 8485-8493, Jun. 2022.
17.
Y. Gao et al., "Evaluation and optimization of distributed machine learning techniques for Internet of Things", IEEE Trans. Comput., vol. 71, no. 10, pp. 2538-2552, Oct. 2022.
18.
D.-J. Han, H. I. Bhatti, J. Lee and J. Moon, "Accelerating federated learning with split learning on locally generated losses", Proc. ICML Workshop Federated Learn. User Privacy Data Confidentiality, pp. 1-12, Jul. 2021.
19.
C. He, M. Annavaram and S. Avestimehr, "Group knowledge transfer: Federated learning of large CNNs at the edge", Proc. 34th Int. Conf. Neural Inf. Process. Syst., vol. 33, pp. 14068-14080, Dec. 2020.
20.
Y. Tian, Y. Wan, L. Lyu, D. Yao, H. Jin and L. Sun, "FedBERT: When federated learning meets pre-training", ACM Trans. Intell. Syst. Technol., vol. 13, no. 4, pp. 1-26, Aug. 2022.
21.
S. Park, G. Kim, J. Kim, B. Kim and J. C. Ye, "Federated split vision transformer for COVID-19 CXR diagnosis using task-agnostic training", arXiv:2111.01338, 2021.
22.
A. Howard et al., "Searching for MobileNetV3", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 1314-1324, Oct. 2019.
23.
D. Narayanan et al., "PipeDream: Generalized pipeline parallelism for DNN training", Proc. 27th ACM Symp. Operating Syst. Princ., pp. 1-15, Oct. 2019.
24.
Y. Huang et al., "GPipe: Efficient training of giant neural networks using pipeline parallelism", Proc. 33rd Int. Conf. Neural Inf. Process. Syst., vol. 32, pp. 103-112, Dec. 2019.
25.
Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition", IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712-731, Dec. 2007.
26.
K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach—Part I: Solving problems with box constraints", IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577-601, Aug. 2014.
27.
X. Zhang, Y. Tian, R. Cheng and Y. Jin, "A decision variable clustering-based evolutionary algorithm for large-scale many-objective optimization", IEEE Trans. Evol. Comput., vol. 22, no. 1, pp. 97-112, Feb. 2018.
28.
R. Cheng, Y. Jin, K. Narukawa and B. Sendhoff, "A multiobjective evolutionary algorithm using Gaussian process-based inverse modeling", IEEE Trans. Evol. Comput., vol. 19, no. 6, pp. 838-856, Dec. 2015.
29.
L. Pan, C. He, Y. Tian, H. Wang, X. Zhang and Y. Jin, "A classification-based surrogate-assisted evolutionary algorithm for expensive many-objective optimization", IEEE Trans. Evol. Comput., vol. 23, no. 1, pp. 74-88, Feb. 2019.
30.
C.-W. Seah, Y.-S. Ong, I. W. Tsang and S. Jiang, "Pareto rank learning in multi-objective evolutionary algorithms", Proc. IEEE Congr. Evol. Comput., pp. 1-8, Jun. 2012.
