
Distributed Deep Reinforcement Learning-Based Gradient Quantization for Federated Learning Enabled Vehicle Edge Computing



Abstract:

Federated learning (FL) can protect the privacy of vehicles in vehicle edge computing (VEC) to a certain extent by sharing the gradients of vehicles' local models instead of the local data. For vehicular artificial intelligence (AI) applications, these gradients are usually large, and transmitting them causes considerable per-round latency. Gradient quantization is an effective approach to reduce this per-round latency in FL-enabled VEC: it compresses the gradients by reducing the number of bits, i.e., the quantization level, used to transmit them. The choice of quantization level and thresholds determines the quantization error (QE), which in turn affects the model accuracy and training time. Hence, the total training time and the QE become two key metrics for FL-enabled VEC, and it is critical to optimize them jointly. The time-varying channel conditions, however, make this joint optimization even more challenging. In this article, we propose a distributed deep reinforcement learning (DRL)-based quantization level allocation scheme to optimize the long-term reward in terms of the total training time and QE. Extensive simulations identify the optimal weighting factors between the total training time and QE, and demonstrate the feasibility and effectiveness of the proposed scheme.
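The paper's own quantizer is not reproduced here; as a rough, minimal sketch of how a quantization level trades bit width against quantization error, the following Python snippet implements a QSGD-style stochastic quantizer in the spirit of [9]. The function name stochastic_quantize, the num_levels parameter, and the squared-error QE metric are illustrative assumptions, not the authors' scheme.

import numpy as np

def stochastic_quantize(grad, num_levels):
    """QSGD-style stochastic quantization of a gradient vector (illustrative sketch).

    Each coordinate is mapped to one of `num_levels` uniformly spaced levels
    in [0, ||grad||], with randomized rounding that keeps the quantizer
    unbiased. Fewer levels mean fewer bits per coordinate to transmit,
    but a larger quantization error (QE).
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad), 0.0
    scaled = np.abs(grad) / norm * num_levels          # values in [0, num_levels]
    lower = np.floor(scaled)
    prob_up = scaled - lower                           # probability of rounding up
    levels = lower + (np.random.rand(*grad.shape) < prob_up)
    quantized = np.sign(grad) * norm * levels / num_levels
    qe = float(np.sum((quantized - grad) ** 2))        # squared quantization error
    return quantized, qe

# Example: coarser quantization (fewer levels) needs fewer bits per
# coordinate but typically incurs a larger QE.
g = np.random.randn(10_000)
for s in (2, 8, 64):
    _, qe = stochastic_quantize(g, s)
    print(f"levels={s:3d}  bits/coord~{int(np.ceil(np.log2(s))) + 1}  QE={qe:.2f}")

Running the example with a small versus a large number of levels illustrates the latency/QE tradeoff that the proposed DRL-based allocation scheme is designed to balance under time-varying channel conditions.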
Published in: IEEE Internet of Things Journal (Volume: 12, Issue: 5, 01 March 2025)
Page(s): 4899 - 4913
Date of Publication: 21 August 2024


I. Introduction

With the rapid development of autonomous driving technology, a large amount of data is generated by various sensors on vehicles, such as cameras, radar, lidar, as well as proximity and temperature sensors. For example, a self-driving car is expected to generate about 1 GB of data per second [1]. Vehicles need powerful computing capability to process and analyze these data to support the training of models for vehicular artificial intelligence (AI) applications, such as simultaneous localization and mapping (SLAM), augmented reality (AR) navigation, object tracking, and high-definition (HD) map generation [2]. However, the computing capability of vehicles is limited. In this situation, vehicle edge computing (VEC) becomes a promising technology to facilitate these applications, where a base station (BS) connected with an edge server can collect and utilize the vehicles' data for model training [3]. However, the raw data generated by a vehicle often contain personal information, so there is a risk of data privacy leakage in VEC [4].

References
[1] Q. Wu, W. Wang, P. Fan, Q. Fan, J. Wang and K. B. Letaief, "URLLC-awared resource allocation for heterogeneous vehicular edge computing", IEEE Trans. Veh. Technol., vol. 73, no. 8, pp. 11789-11805, Aug. 2024.
[2] Q. Wu, S. Wang, H. Ge, P. Fan, Q. Fan and K. B. Letaief, "Delay-sensitive task offloading in vehicular fog computing-assisted platoons", IEEE Trans. Netw. Service Manag., vol. 21, no. 2, pp. 2012-2026, Apr. 2024.
[3] D. Long, Q. Wu, Q. Fan, P. Fan, Z. Li and J. Fan, "A power allocation scheme for MIMO-NOMA and D2D vehicular edge computing based on decentralized DRL", Sensors, vol. 23, no. 7, Art. no. 3449, 2023.
[4] Q. Wu, Y. Zhao, Q. Fan, P. Fan, J. Wang and C. Zhang, "Mobility-aware cooperative caching in vehicular edge computing based on asynchronous federated and deep reinforcement learning", IEEE J. Sel. Topics Signal Process., vol. 17, no. 1, pp. 66-81, Jan. 2023.
[5] Z. Yu, J. Hu, G. Min, Z. Zhao, W. Miao and M. S. Hossain, "Mobility-aware proactive edge caching for connected vehicles using federated learning", IEEE Trans. Intell. Transp. Syst., vol. 22, no. 8, pp. 5341-5351, Aug. 2021.
[6] Q. Wu, X. Wang, Q. Fan, P. Fan, C. Zhang and Z. Li, "High stable and accurate vehicle selection scheme based on federated edge learning in vehicular networks", China Commun., vol. 20, no. 3, pp. 1-17, Mar. 2023.
[7] R. Zhang et al., "Generative AI-enabled vehicular networks: Fundamentals framework and case study", IEEE Netw., vol. 38, no. 4, pp. 259-267, Jul. 2024.
[8] Y. Oh, N. Lee, Y.-S. Jeon and H. V. Poor, "Communication-efficient federated learning via quantized compressed sensing", IEEE Trans. Wireless Commun., vol. 22, no. 2, pp. 1087-1100, Feb. 2023.
[9] D. Alistarh, D. Grubic, J. Li, R. Tomioka and M. Vojnovic, "QSGD: Communication-efficient SGD via gradient quantization and encoding", Proc. 31st Adv. Neural Inf. Process. Syst., pp. 1-12, 2017.
[10] Q. Wu and J. Zheng, "Performance modeling and analysis of the ADHOC MAC protocol for vehicular networks", Wireless Netw., vol. 22, pp. 799-812, Apr. 2016.
[11] Q. Wu, W. Wang, P. Fan, Q. Fan, H. Zhu and K. B. Letaief, "Cooperative edge caching based on elastic federated and multi-agent deep reinforcement learning in next-generation networks", IEEE Trans. Netw. Service Manag., vol. 21, no. 4, pp. 4179-4196, Aug. 2024.
[12] H. Zhu, Q. Wu, X.-J. Wu, Q. Fan, P. Fan and J. Wang, "Decentralized power allocation for MIMO-NOMA vehicular edge computing based on deep reinforcement learning", IEEE Internet Things J., vol. 9, no. 14, pp. 12770-12782, Jul. 2022.
[13] W. Qiong, S. Shuai, W. Ziyang, F. Qiang, F. Pingyi and Z. Cui, "Towards V2I age-aware fairness access: A DQN based intelligent vehicular node training and test method", Chin. J. Electron., vol. 32, no. 6, pp. 1230-1244, Nov. 2023.
[14] J. Mills, J. Hu and G. Min, "Multi-task federated learning for personalised deep neural networks in edge computing", IEEE Trans. Parallel Distrib. Syst., vol. 33, no. 3, pp. 630-641, Mar. 2022.
[15] J. Wang, J. Hu, G. Min, W. Zhan, A. Y. Zomaya and N. Georgalas, "Dependent task offloading for edge computing based on deep reinforcement learning", IEEE Trans. Comput., vol. 71, no. 10, pp. 2449-2461, Oct. 2022.
[16] P. Liu et al., "Training time minimization for federated edge learning with optimized gradient quantization and bandwidth allocation", Front. Inf. Technol. Electron. Eng., vol. 23, pp. 1247-1263, Aug. 2022, [online] Available: https://doi.org/10.1631/FITEE.2100538.
[17] Z. Yang et al., "Delay minimization for federated learning over wireless communication networks", Proc. ICML Workshop Federated Learn., pp. 1-7, 2020.
[18] Y. Wang, Y. Xu, Q. Shi and T.-H. Chang, "Robust federated learning in wireless channels with transmission outage and quantization errors", Proc. IEEE 22nd Int. Workshop Signal Process. Adv. Wireless Commun. (SPAWC), pp. 586-590, 2021.
[19] S. Wan, J. Lu, P. Fan, Y. Shao, C. Peng and K. B. Letaief, "Convergence analysis and system design for federated learning over wireless networks", IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3622-3639, Dec. 2021.
[20] M. Chen, H. V. Poor, W. Saad and S. Cui, "Convergence time optimization for federated learning over wireless networks", IEEE Trans. Wireless Commun., vol. 20, no. 4, pp. 2457-2471, Apr. 2021.
[21] Y. Jiang et al., "Model pruning enables efficient federated learning on edge devices", IEEE Trans. Neural Netw. Learn. Syst., vol. 34, no. 12, pp. 10374-10386, Dec. 2023.
[22] V.-D. Nguyen, S. K. Sharma, T. X. Vu, S. Chatzinotas and B. Ottersten, "Efficient federated learning algorithm for resource allocation in wireless IoT networks", IEEE Internet Things J., vol. 8, no. 5, pp. 3394-3409, Mar. 2021.
[23] Y. Wang, Y. Xu, Q. Shi and T.-H. Chang, "Quantized federated learning under transmission delay and outage constraints", IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 323-341, Jan. 2022.
[24] R. Hönig, Y. Zhao and R. Mullins, "DAdaQuant: Doubly-adaptive quantization for communication-efficient federated learning", Proc. 39th Int. Conf. Mach. Learn., pp. 8852-8866, 2022, [online] Available: https://proceedings.mlr.press/v162/honig22a.html.
[25] D. Jhunjhunwala, A. Gadhikar, G. Joshi and Y. C. Eldar, "Adaptive quantization of model updates for communication-efficient federated learning", Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pp. 3110-3114, 2021.
[26] N. Shlezinger, M. Chen, Y. C. Eldar, H. V. Poor and S. Cui, "Federated learning with quantization constraints", Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), pp. 8851-8855, 2020.
[27] N. Shlezinger, M. Chen, Y. C. Eldar and H. V. Poor, "UVeQFed: Universal vector quantization for federated learning", IEEE Trans. Signal Process., vol. 69, pp. 500-514, Dec. 2020.
[28] S. Chen, C. Shen, L. Zhang and Y. Tang, "Dynamic aggregation for heterogeneous quantization in federated learning", IEEE Trans. Wireless Commun., vol. 20, no. 10, pp. 6804-6819, Oct. 2021.
[29] M. K. Nori, S. Yun and I.-M. Kim, "Fast federated learning by balancing communication trade-offs", IEEE Trans. Commun., vol. 69, no. 8, pp. 5168-5182, Aug. 2021.
[30] G. Wang, F. Xu, H. Zhang and C. Zhao, "Joint resource management for mobility supported federated learning in Internet of Vehicles", Future Gener. Comput. Syst., vol. 129, pp. 199-211, Apr. 2022.
