I. INTRODUCTION
In traditional cloud computing, user requests are sent over an uplink from the edge network through the core network to the cloud computing center; once processed, the results are returned to the user over a downlink. However, with the development of 5G and 6G and the improvement in people's quality of life, demand is growing for low-latency, high-bandwidth application scenarios such as autonomous driving, smart cities, and healthcare [1]. Edge computing has emerged to meet these increasingly strict latency requirements, yet edge servers have far more limited computing and storage capabilities than cloud computing centers. As services have expanded over time and latency requirements have tightened further, edge servers have been deployed at large scale to respond quickly to user requests, and as a result their energy consumption has become an increasingly prominent problem. Existing research on edge computing includes the following related work: