I. Introduction
In recent years, both academia and industry have shown growing interest in connected and autonomous vehicles (CAVs) [1]. By exploiting vehicle-road cooperation, CAVs can support innovative applications such as autonomous driving, efficient fleet coordination, and real-time video processing, all of which are crucial for improving road safety and convenience. However, these demanding applications often require substantial computational power, which poses a challenge for CAVs with constrained onboard resources [2]. Vehicular edge computing (VEC) offers an effective solution by offloading resource-intensive tasks to roadside units (RSUs) equipped with edge servers (RESs), thereby reducing the vehicles' workload and the task processing delay [3] and enabling CAVs to operate more efficiently [4], [5]. Nevertheless, because each RSU must serve multiple vehicles simultaneously, the challenge lies in utilizing the limited edge resources optimally so as to maximize the overall system benefit [6].

Extensive research has been conducted on edge resource allocation for CAVs [7], [8], and most existing work formulates single-objective optimization problems. More recently, artificial intelligence and deep reinforcement learning techniques have also emerged as potential solutions [9], [10], yet these studies likewise tend to optimize a single performance criterion. In practice, however, the offloading process must jointly account for the diverse requirements of different applications and multiple system performance indicators. For example, offloading tasks to an RSU reduces latency but incurs additional communication and computing costs; conversely, if all tasks are offloaded, the RSU becomes overloaded while other available computing resources, such as the vehicles' own processors, remain underutilized. Therefore, task offloading for autonomous vehicles must balance multiple performance metrics, which calls for multiobjective optimization.
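To make this tradeoff concrete, consider a minimal illustrative formulation (the symbols below are our own shorthand for exposition and are not drawn from the cited works). For each task $i$, let $d_i^{\mathrm{loc}}$ and $d_i^{\mathrm{off}}$ denote the local and offloaded processing delays, let $c_i$ denote the communication and computing cost incurred by offloading, and let $x_i \in \{0,1\}$ be the offloading decision. A multiobjective offloading problem then seeks to simultaneously minimize the total delay and the total offloading cost,
\[
\min_{x} \; \Big( \sum_i \big[ x_i d_i^{\mathrm{off}} + (1 - x_i) d_i^{\mathrm{loc}} \big], \; \sum_i x_i c_i \Big)
\quad \text{s.t.} \quad \sum_i x_i r_i \le R,
\]
where $r_i$ is the resource demand of task $i$ and $R$ is the RSU's capacity. Minimizing delay alone pushes every $x_i$ toward 1 and saturates the capacity constraint while driving up cost, which is precisely why the objectives must be considered jointly rather than in isolation.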