I. Introduction
The Internet of Things (IoT) has experienced explosive growth in recent years. Emerging IoT applications, such as autonomous driving and virtual/augmented reality, are latency-sensitive and computation-intensive [1], [2]. However, IoT devices typically have limited computation resources, which makes it challenging to run such latency-sensitive applications on the devices themselves and induces a significant surge in demand for computation resources [3].

Recently, multiaccess edge computing (MEC) has been considered a promising solution to these issues [4]. MEC extends cloud-computing capabilities and the IT service environment to the edge of the network. In contrast to faraway centralized data centers, MEC servers can be deployed at various access points [e.g., mobile base stations (BSs)] close to IoT devices. By offloading computation tasks to MEC servers, devices can complete data delivery and computation with low latency.

MEC systems have been investigated with different optimization objectives and in multifarious scenarios [5]–[7]. In [5], a joint computation offloading and resource allocation strategy was studied to minimize users’ energy consumption. In [6], latency-minimization problems were formulated and studied in a scenario where users may opt for partial offloading to the edge server via an access point with the aid of an intelligent reflecting surface. In [7], a dynamic spectrum management framework was proposed to improve spectrum resource utilization in MEC systems for autonomous vehicular networks.