
Proximal Policy Optimization based computations offloading for delay optimization in UAV-assisted mobile edge computing


Abstract:


UAVs have the potential to enhance wireless systems by improving range and quality, and this can be achieved by utilising the Mobile Edge Computing (MEC) capability provided by the unmanned aerial vehicle. In this system, the MEC server mounted on the UAV can offer offloading services to all the user equipment (UE) in a given area. By offloading some proportion of its tasks to the UAV for computation, a UE can perform the remaining tasks locally. The objective of this study is to minimize the maximum processing delay in the whole process by optimizing four parameters: the scheduling of the user equipment, the portion of each task to be offloaded, the UAV's flight angle, and the UAV's flight speed, taking into account discrete variables and power constraints. Because this problem is nonconvex, has a high-dimensional state space, and has a continuous action space, we propose a Proximal Policy Optimization (PPO) algorithm, which is based on boosting the policy gradient. We further present a comparative analysis of the PPO algorithm against other popular reinforcement learning algorithms, particularly the Deep Deterministic Policy Gradient (DDPG) algorithm. The PPO algorithm can quickly achieve the optimal offloading policy in a dynamic environment. The results obtained imply that both the PPO and DDPG algorithms converge quickly and the processing delay is minimized; however, our proposed PPO algorithm shows a significant further improvement in minimizing the processing delay. Our model also substantially outperforms baseline algorithms such as Deep Q-Network (DQN).
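The abstract's choice of PPO rests on its clipped surrogate objective, which keeps each policy update close to the sampled behaviour policy. As a hedged illustration only (not the paper's implementation; the function name and array inputs are assumptions for this sketch), the core loss can be written as:

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO (illustrative sketch).

    ratio:     pi_new(a|s) / pi_old(a|s) for the sampled actions
    advantage: advantage estimates for those actions
    eps:       clip range (0.2 is a commonly used default)
    Returns the negated objective, suitable for a minimizer.
    """
    unclipped = ratio * advantage
    # Clipping the ratio removes the incentive to move the policy
    # outside the [1 - eps, 1 + eps] trust region in a single update.
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))
```

Taking the element-wise minimum makes the bound pessimistic in both directions: large positive advantages cannot be exploited by an arbitrarily large ratio, and negative advantages are not masked by clipping.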
Date of Conference: 15-18 December 2023
Date Added to IEEE Xplore: 22 January 2024
Conference Location: Sorrento, Italy

I. Introduction

The proliferation and advancement of 5G technology will lead to the popularity and prosperity of computation-intensive applications on user equipment (UEs), such as telemedicine, VR/AR, and online gaming. These applications often demand substantial computational resources and have significant energy requirements, yet UEs typically have limited computational capacity. Mobile Cloud Computing (MCC) offers an effective solution to this problem: by enabling UEs to offload computation to the cloud, this paradigm significantly augments their computational and storage capacities and can also reduce their energy consumption [1].
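The partial-offloading setting described above can be made concrete with a simple delay model: a fraction of the task is transmitted to the edge server while the remainder is computed locally in parallel, so the task completes when the slower side finishes. The following sketch is a minimal illustrative model under assumed parameters (function name, units, and the parallel-execution assumption are ours, not the paper's):

```python
def processing_delay(task_bits, rho, f_local, f_edge, rate, cycles_per_bit):
    """Processing delay for partially offloading one task (illustrative).

    task_bits:      task size in bits
    rho:            fraction of the task offloaded to the MEC server
    f_local/f_edge: CPU frequencies (cycles/s) of the UE and MEC server
    rate:           uplink transmission rate (bits/s)
    cycles_per_bit: CPU cycles needed per bit of input data
    """
    # Local computation of the retained (1 - rho) fraction.
    local = (1.0 - rho) * task_bits * cycles_per_bit / f_local
    # Offloaded fraction: uplink transmission, then edge computation.
    uplink = rho * task_bits / rate
    edge = rho * task_bits * cycles_per_bit / f_edge
    # Both sides run in parallel; the task finishes with the slower one.
    return max(local, uplink + edge)
```

Minimizing this expression over rho is what makes the offloading ratio one of the decision variables: too little offloading leaves the slow local CPU as the bottleneck, too much makes the uplink dominate.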

Cites in Papers - IEEE (1)

1. Priyadarshni Priyadarshni, Praveen Kumar, Shivani Tripathi, Akshun Pratap Dubey, Rajiv Misra, "MEC-Assisted Task Offloading using Meta-Reinforcement Learning for B5G/6G Network", 2024 IEEE International Conference on Big Data (BigData), pp. 4309-4314, 2024.
