System architecture for UPF allocation in C-V2X networks with MEC integration using DRL-based algorithms (DQN and Actor-Critic) for latency optimization.
Abstract:
In this paper, we propose an online learning method for predicting the allocation of User Plane Functions (UPFs) in Cellular Vehicle-to-Everything (C-V2X) networks integrated with Multi-Access Edge Computing (MEC). Our study employs Deep Reinforcement Learning (DRL) techniques, specifically the Deep Q-Network (DQN) and Actor-Critic (AC) algorithms. The DQN and AC algorithms decide the optimal locations of UPFs based on the positions and speeds of the vehicles. Our objective is to reduce the latency of communication between the UPFs and the vehicles by placing the UPF(s) optimally. The simulation results show that both the DQN and AC algorithms reduce latency significantly. We compare our proposed methods with existing approaches, namely the K-means Greedy Average and Greedy Average algorithms. The proposed AC algorithm achieves up to a 40% reduction in average latency compared with the baseline methods when the placement of multiple UPFs is considered.
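The sketch below is a minimal illustration, not the authors' implementation, of the kind of DQN-based placement loop the abstract describes: a small Q-network observes vehicle positions and speeds, selects which candidate MEC site should host the UPF, and is trained with the negative of a toy vehicle-to-UPF latency proxy as the reward. All names, dimensions, and the latency model are assumptions for illustration only.

```python
# Hypothetical sketch of DQN-based UPF placement on candidate MEC sites.
# Assumptions: a 1-D road segment, a simple distance-based latency proxy,
# and a one-step TD update without a replay buffer (for brevity).
import random
import numpy as np
import torch
import torch.nn as nn

NUM_SITES = 4                   # hypothetical candidate MEC sites for the UPF
NUM_VEHICLES = 10               # hypothetical number of tracked vehicles
STATE_DIM = NUM_VEHICLES * 2    # (position, speed) per vehicle

class QNetwork(nn.Module):
    """Maps the vehicle state vector to one Q-value per candidate UPF site."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, NUM_SITES),
        )

    def forward(self, x):
        return self.net(x)

def latency_proxy(positions, site):
    """Toy latency model: mean distance from vehicles to the chosen site."""
    site_pos = site / (NUM_SITES - 1)           # sites spread over [0, 1]
    return float(np.mean(np.abs(positions - site_pos)))

q_net = QNetwork()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.9, 0.1

positions = np.random.rand(NUM_VEHICLES)        # normalized road positions
speeds = np.random.rand(NUM_VEHICLES) * 0.05    # normalized speeds

for step in range(500):
    state = torch.tensor(np.concatenate([positions, speeds]), dtype=torch.float32)

    # Epsilon-greedy action: which MEC site hosts the UPF this step.
    if random.random() < epsilon:
        action = random.randrange(NUM_SITES)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax())

    reward = -latency_proxy(positions, action)  # lower latency => higher reward

    # Vehicles move; wrap around the road segment.
    positions = (positions + speeds) % 1.0
    next_state = torch.tensor(np.concatenate([positions, speeds]),
                              dtype=torch.float32)

    # One-step temporal-difference target and Q-value regression.
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    pred = q_net(state)[action]
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

An Actor-Critic variant of this loop would replace the Q-network and epsilon-greedy selection with a policy network (actor) trained from the advantage estimated by a separate value network (critic), while keeping the same latency-based reward signal.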
Published in: IEEE Access (Volume 13)