I. Introduction
The rapid development of mobile networks and the Internet of Things (IoT) has accelerated the spread of intelligent applications (e.g., augmented reality (AR), virtual reality (VR), face recognition, and interactive gaming) that demand intensive computation while requiring low-latency service. IoT devices, however, have limited computing capability and battery capacity, and therefore cannot meet the requirements of these computation-intensive, low-latency applications. Mobile edge computing (MEC) has been considered a promising technology to address this challenge by deploying high-performance servers at the edge of wireless networks [2]. With MEC, IoT devices can offload their computational tasks to edge servers, saving device energy while satisfying delay requirements [3]. In recent years, considerable research on MEC networks has investigated computation offloading and the allocation of computing and communication resources to improve system performance [4]–[7]. However, these works consider only scenarios in which the MEC servers are deployed at fixed locations (e.g., a base station (BS) or an access point (AP)), and thus cannot be applied when devices are outside infrastructure coverage or when network facilities have been destroyed.