
Joint Optimization of Multi-Type Caching Placement and Multi-User Computation Offloading for Vehicular Edge Computing



Abstract:

With the rapid development of Artificial Intelligence (AI) and the Internet of Vehicles (IoV), vehicular applications are becoming increasingly diverse. Vehicular Edge Computing (VEC) can provide computing and caching resources for these diverse applications with lower latency than the cloud. However, the limited resources of VEC and the long-haul transmission from the cloud make multi-type caching for the diverse applications of multiple users highly challenging. In this paper, we formulate a joint optimization problem of multi-type caching placement and multi-user computation offloading in a three-layer end-edge-cloud architecture to minimize the overall system latency. To solve this NP-hard problem, we propose a Caching and Offloading Framework for Multi-user Multi-type Requests (COF-MMR) based on the Deep Deterministic Policy Gradient (DDPG) algorithm. Simulation results show that the proposed COF-MMR framework reduces the overall system latency by up to 20% compared with the baseline scheme.
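Although the paper provides no source code, the following minimal PyTorch sketch illustrates the kind of DDPG actor-critic structure such a framework could build on, where the actor's continuous output is mapped to multi-type caching-placement and per-user offloading decisions. The state layout, network sizes, and the thresholding rule are illustrative assumptions, not the authors' implementation.

# Minimal DDPG actor-critic sketch (illustrative, not the COF-MMR implementation).
# The actor outputs a continuous action in [0, 1] that is quantized into
# caching-placement bits and offloading ratios; the critic scores the joint decision.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Sigmoid(),  # outputs in [0, 1]
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),  # Q-value of the joint caching/offloading action
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

# Hypothetical dimensions: the state could encode task sizes, channel gains,
# and edge-cache occupancy; the action covers n_contents cache slots and n_users offloading ratios.
state_dim, n_contents, n_users = 20, 8, 4
actor = Actor(state_dim, n_contents + n_users)
critic = Critic(state_dim, n_contents + n_users)

state = torch.rand(1, state_dim)
action = actor(state)
cache_decision = (action[0, :n_contents] > 0.5).int()  # which content types to cache at the edge
offload_ratio = action[0, n_contents:]                 # fraction of each user's task offloaded
q_value = critic(state, action)                        # critic evaluates the joint decision

In a full DDPG training loop, target copies of both networks, an experience replay buffer, and exploration noise on the actor output would also be needed; they are omitted here for brevity.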
Date of Conference: 04-08 December 2023
Date Added to IEEE Xplore: 26 February 2024

Conference Location: Kuala Lumpur, Malaysia


I. Introduction

The advancements in Artificial Intelligence (AI) and the Internet of Vehicles (IoV) enable the rapid development of vehicular applications, which provide comfortable travel experiences for drivers. Meanwhile, these applications demand lower latency, intensive computing capability, and increased caching resources. If data is exchanged with the cloud center, the low content-access latency and diversified application requirements may not be met. Fortunately, Vehicular Edge Computing (VEC) [1] and edge caching can serve as an effective framework [2] to reduce vehicle service latency by migrating network content and computing resources to the edge.

