
Joint Task Offloading and Resource Allocation for Multi-Task Multi-Server NOMA-MEC Networks




Abstract:

By offloading computationally intensive tasks of smart end devices to edge servers deployed at the edge of the network, mobile edge computing (MEC) has become a promising technology to provide computing services for Internet of Things (IoT) devices. To further improve the access capability of MEC and increase spectrum utilization efficiency, this article introduces Non-Orthogonal Multiple Access (NOMA) technology into MEC systems and studies the computation offloading problem of multi-user, multi-task and multi-server networks through the joint optimization of task offloading and resource allocation, with maximization of the system's processing capability as the optimization goal. To solve the proposed mixed integer nonlinear programming (MINLP) problem, the objective optimization problem is first decoupled into two sub-problems, resource allocation and task allocation. The resource allocation problem is then further decomposed into computation resource optimization and communication resource allocation. For the communication resource allocation, the power allocation is first fixed, and the sub-channel allocation problem is regarded as a many-to-one matching problem between sub-channels and users; we propose a low-complexity sub-optimal matching algorithm for sub-channel allocation to maximize the offloading efficiency. Based on the proposed sub-channel allocation scheme, the transmission power allocation is formulated as a convex optimization problem, which is tackled by the Lagrangian multiplier method. Finally, given the resource allocation, the tasks of all end devices (EDs) are allocated. Numerical results show that the proposed scheme can effectively decrease the latency and energy consumption of the network, improve the system processing capability, and further improve MEC system performance.
Where MINLP is mixed integer nonlinear programming, SSAA is suboptimal subchannel allocation algorithm, PAA is power allocation algorithm, and TAA is task allocation algorithm.
Published in: IEEE Access ( Volume: 9)
Page(s): 16152 - 16163
Date of Publication: 08 January 2021
Electronic ISSN: 2169-3536

SECTION I.

Introduction

The process of social industrialization imposes high-quality requirements on fast and effective data services, and 5G networks provide a basic platform for this demand. However, with the popularity of mobile terminals (MTs) such as smart-phones and wearable devices, the growth of massive data, and the emergence of diversified services, wireless communication faces increasing pressure. Meanwhile, the explosive growth of mobile Internet services has generated emerging mobile applications with huge computation demands, such as virtual reality (VR), human-computer interaction and big data analysis, which often impose stringent delay and processing requirements. This brings challenges to MTs with limited battery capacity and computing resources [1]–​[3]. Although the cloud center is rich in computing and storage resources, its resources are centralized and the distance between MTs and the cloud is long, which leads to large network delay, high energy consumption and task execution overhead, whereas delay-sensitive applications (such as electronic medicine) require low delay and small energy consumption [4]. To confront this real-time challenge, MEC was proposed as a distributed computing paradigm that brings computation and storage close to the network edge [5]. MEC can reduce the delay and energy consumption of computing tasks and improve resource utilization by properly offloading computation-intensive tasks to nearby MEC servers. With these advantages, MEC has been applied in many fields, such as the IoT, the Internet of Vehicles (IoV) and ultra-dense networks [6], [7]. In addition, to further improve the efficiency and flexibility of computation offloading, a new multi-access MEC paradigm has been proposed, in which MTs can use different wireless access networks to offload computing tasks to multiple edge servers simultaneously [8]–​[10]. However, since data are transmitted over wireless links, the performance of an MEC system depends mainly on the allocation of wireless resources for data transmission and on the allocation of computation tasks, which has attracted substantial research attention.

With the rapid development of the IoT, orthogonal multiple access (OMA) technology can hardly meet the demand of massive MTs for simultaneous access, so how to let one time-frequency resource block (RB) carry more MTs has become a new research direction, and NOMA technology [11], [12] has emerged for this purpose. NOMA allows multiple MTs to use the same RB simultaneously and applies successive interference cancellation (SIC) to alleviate the MTs’ co-channel interference, which can effectively improve resource utilization. Many studies have confirmed the potential advantages of NOMA, such as improved system throughput, increased spectrum and energy efficiency, and reduced latency [13]–​[15].

MEC can reduce task execution costs, but an MEC server has finite computation capacity relative to the cloud center. Combining the advantages of MEC and NOMA, this paper considers a NOMA-enabled multi-node MEC system, where EDs utilize NOMA to offload their computing tasks to different edge servers simultaneously. By reasonably allocating the computing tasks of EDs and the wireless resources in the system, the offloading efficiency can be enhanced and the MEC network performance further improved. The main contributions of this article are summarized as follows:

  1. We investigate a network scenario with multiple mobile edge server nodes (MSNs) and multiple EDs that have computation-intensive tasks to process, in which each MSN is equipped with an MEC server to provide wireless and computing resources, and each ED’s task can be divided into parts of any size for local and remote computing.

  2. To improve resource utilization and MEC performance, NOMA technology is adopted for data transmission during task offloading. Under the constraints of communication and computing resources, the joint optimization problem of task offloading and resource allocation is formulated to maximize the task processing capability of the system.

  3. To cope with the formulated MINLP problem, based on the structure of the objective function, we first decompose the original problem into two sub-problems, namely the resource allocation (RA) problem and the task allocation (TA) problem. The RA problem is then further decomposed into computation resource allocation and communication resource allocation.

  4. For communication resource allocation, the power allocation among sub-channels is first assumed to be equal, and the sub-channel allocation problem is then regarded as a two-sided matching process between sub-channels and the EDs’ offloading tasks. We put forward a low-complexity sub-optimal matching algorithm for sub-channel allocation. Based on the sub-channel allocation result, the transmission power allocation is formulated as a convex optimization problem and solved with the Lagrange multiplier method. Finally, on the basis of the resource allocation, a task allocation algorithm is used to solve the TA problem. Simulation results indicate that the proposed task offloading and resource allocation scheme improves MEC system performance.

The remainder of the paper is organized as follows. We introduce the related works in section II and the NOMA-MEC network model in section III. In section IV, we show the formulated optimization problem and decompose the problem. Section V describes the solution for the proposed problem and we show the simulation results in section VI. Finally, the conclusion is given for the paper in section VII.

SECTION II.

Related Works

Recently, MEC has made great breakthroughs in improving quality of experience (QoE), which has attracted much research on task offloading and resource allocation. Most of this research treats delay, energy consumption and system overhead as the key criteria under quality of service (QoS) and resource constraints. In single-MEC-server scenarios, some works focused on decreasing energy consumption under computation resource and delay constraints: the tasks of multiple users are offloaded to an edge server, and the joint optimization of offloading decision and resource allocation was studied in the binary offloading case [16]–​[18]. In [19], the joint sub-channel and power allocation problem in an MEC system based on Orthogonal Frequency Division Multiple Access (OFDMA) was investigated to minimize the delay of each mobile device. Chen et al. [20] studied the multi-user offloading problem in a multi-channel wireless environment, modeled the distributed offloading decision problem as a multi-user potential game, and proved the existence of a Nash equilibrium. In [21], the authors designed an MEC offloading mechanism to save energy while meeting low latency for a mobile user. Although the above single-server research has improved MEC performance, the limited computing capacity of a single MEC server causes network congestion and large delay when many terminals request computation offloading. To further improve offloading efficiency, some researchers have studied cooperative multi-node resource allocation. Reference [22] proposed an offloading scheme in which MTs’ additional tasks can be further offloaded to other connected MEC servers through the collaboration of multiple MEC servers, enhancing the computation offloading service and improving the revenue of the terminal. Yang et al. [23] presented a two-layer architecture consisting of micro and macro base stations, in which users offload their computing tasks to a micro base station (MBS), or the MBS relays them to the macro base station for execution, effectively reducing system energy consumption. In [24], K. Cheng et al. proposed a computation offloading framework that enables multiple users to offload their computing tasks to multiple MEC servers by jointly optimizing the offloading strategy and radio resource allocation, assuming the computing resources allocated to each user are fixed. Yang et al. [25] proposed a novel offloading framework for a multi-server MEC network that assists mobile users in executing computation-intensive jobs via an uplink OFDMA offloading system; a multi-task learning based feedforward neural network (MTFNN) model is designed to resolve the MINLP problem by jointly optimizing the offloading decision and the computational resource allocation. The simulation results show that the offloading scheme based on the MTFNN model achieves better performance and is worth learning from when solving such problems.

The aforementioned MEC research has shown clear benefits in reducing delay and energy consumption, but both the single-server and multi-server studies are based on OMA-MEC systems, whose spectrum utilization efficiency is lower and which cannot satisfy user experience well when a large number of terminals request task offloading.

Therefore, facing the deficiencies of previous MEC research and exploiting the advantages of NOMA, many works have focused on NOMA-MEC systems to achieve better resource allocation and improve the quality of user experience. M. Zeng et al. [26] introduced wireless power transfer (WPT) technology into a NOMA-MEC system for energy-efficient computation and studied the task offloading problem of maximizing the sum computation rate of all users. To further improve the access capability of MEC systems and reduce users’ computation overhead, Zhou et al. [27] introduced NOMA into the MEC system and investigated a multi-user computation offloading problem, which is solved efficiently by fixing the offloading decision and iteratively updating the resource allocation. In [9] and [10], the authors considered a NOMA-based multi-access MEC IoT system.

The emergence of 5G networks brings a huge breakthrough in transmission rate, and MEC-enabled IoT has been shown to be a promising solution to reduce task delay and save the energy of UEs in IoT scenarios such as unmanned aerial vehicles (UAVs), autonomous vehicles and the industrial IoT [28]–​[31]. In [29], an edge-learning-assisted offloading framework for autonomous driving is proposed to improve inference accuracy while meeting the latency constraint of autonomous driving. In [30], owing to the high inference accuracy and strict delay requirements of the target tracking scenario, and the limited computing resources and energy budget of the UAV, a novel hierarchical deep learning (DL) task distribution framework was proposed, in which DL tasks are offloaded to the MEC server to further improve inference accuracy. In [31], Z. Zhao et al. investigated a joint communication and computation problem for industrial IoT networks; to enhance system performance, a three-hierarchical optimization framework is proposed to reduce latency and energy consumption by jointly optimizing bandwidth allocation, offloading, and relay selection.

Summarizing the research on NOMA in MEC, we can conclude that the combination of NOMA and MEC has made progress in meeting user requirements and improving user experience. Different from these studies, we are committed to proposing a comprehensive computation offloading solution for the multi-user, multi-task and multi-server NOMA-enabled MEC system by jointly optimizing task allocation and resource allocation.

SECTION III.

System Model

A. Network Model

In this article, we consider a heterogeneous NOMA-MEC network, shown in Fig. 1, which consists of a number of MSNs with different storage and computing capabilities that provide offloading services for multiple EDs. These nodes are mainly base stations, wireless access points, wireless routers, etc., and each node is equipped with an MEC server. To increase spectrum utilization, all MSNs share the spectrum resources, and the system spectrum is equally divided into a set of sub-channels denoted as {\mathcal{ S}}{\mathcal{ C}}=\{1,\ldots,K\} , where k\in {\mathcal{ S}}{\mathcal{ C}} denotes sub-channel k . To facilitate the analysis, we assume a quasi-static network; this assumption has been widely used in [27], [36]. Table 1 lists the main notations used in the paper.

TABLE 1. Notations.

FIGURE 1. Network model of the system.

We denote the set of MSNs by {\mathcal{ M}}=\{\mathrm {1,\ldots,M}\} with m\in {\mathcal{ M}} denoting MSN m , and the set of EDs by {\mathcal{ N}}=\{1,\ldots,N\} with n\in {\mathcal{ N}} denoting ED n . Each ED n has a task CT_{n} , which can be expressed by < D_{n},f_{n}^{l},B_{n} > , where D_{n} denotes the size of the input data, f_{n}^{l} denotes the local computing capacity of ED n , and B_{n} denotes the number of CPU cycles required for computing one bit of the task of ED n . We assume the input task can be divided into sections of any size and executed in parallel at the EDs and MEC servers [32]. We suppose all EDs together have J tasks offloaded to MSNs for computation, denote the set of offloading tasks as {\mathcal{ J}} =\left \{{{1,2,\ldots J} }\right \},J\le N\times M , and denote by D_{nm} \in {\mathcal{ J}} the task that ED n offloads to MSN m .

B. Communication Model

We first introduce the wireless transmission model of the system. In the uplink, the EDs send signals that are superimposed by NOMA, and we assume an SIC receiver is implemented at each MSN. On each channel, decoding follows the order of the channel gains: when the signal intended for the MSN with the smaller channel gain is decoded, the signals with higher channel gains are treated as interference [33]. Specifically, through successive decoding and reconstruction, the signal with the poorest channel quality is demodulated first and subtracted from the superimposed signal, then the signal with the second-poorest channel quality is decoded, and so on, until all signals are separated. Hence the signal with the best channel quality suffers no interference from the others in the same NOMA cluster, while the MSN with the worst channel quality is interfered by all other MSNs in the cluster. Without loss of generality, s_{nm}^{k} denotes the channel allocation variable: if ED n offloads its task to MSN m through subchannel k , s_{nm}^{k} =1 ; otherwise, s_{nm}^{k} =0 . We assume that a subchannel can be occupied by multiple offloading tasks and each task occupies at most one sub-channel, so we have the following constraints \begin{align*} \sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {s_{nm}^{k}}}\le&J_{\max }, \quad \forall k \tag{1}\\ \sum \limits _{k=1}^{K} {s_{nm}^{k}}\le&1,\quad \forall n,m\tag{2}\end{align*}

where J_{\max } represents the maximum number of offloading tasks that can be assigned to each sub-channel.

The transmission power and the channel power gain from ED n to MSN m on channel k are denoted by p_{nm}^{k} and g_{nm}^{k} , respectively. Generally, we assume the channel gains of ED n on channel k are ordered as \begin{align*}\left |{ {g_{n1}^{k}} }\right |& \le \left |{ {g_{n2}^{k}} }\right |\le \ldots \le \left |{ {g_{nm}^{k}} }\right |\le \ldots \le \left |{ {g_{nM}^{k}} }\right |, \\&\qquad \qquad \qquad \qquad \qquad \qquad \,\, \forall n\in {\mathcal{ N}},~m\in {\mathcal{ M}}\tag{3}\end{align*}

After SIC processing, the received signal at MSN m from ED n on subchannel k is \begin{align*}y_{nm}^{k} & =g_{nm}^{k} \sqrt {p_{nm}^{k}} x_{nm}^{k} +g_{nm}^{k} \sum \limits _{l=m+1}^{M} {\sqrt {p_{nl}^{k}} x_{nl}^{k}} \\& \qquad \qquad \qquad \quad +\sum \limits _{m=1}^{M} {\sum \limits _{l^{\prime }=1,l^{\prime }\ne n}^{N} {g_{l^{\prime }m}^{k} \sqrt {p_{l^{\prime }m}^{k}}}} x_{l^{\prime }m}^{k} +\sigma ^{2}\tag{4}\end{align*}
where x_{nm}^{k} denotes the modulated symbol transmitted from ED n to MSN m on sub-channel k . The first term in (4) is the desired signal transmitted from ED n to MSN m . The second term represents the co-channel interference caused when ED n offloads tasks to other MSNs on the same sub-channel. The third term represents the interference caused by other EDs offloading tasks to MSN m through the same channel. The fourth term is the additive white Gaussian noise (AWGN).

The data rate from ED n to MSN m on sub-channel k is \begin{equation*} r_{nm}^{k} =W\log _{2} \left ({{1+\gamma _{nm}^{k}} }\right)\tag{5}\end{equation*}

In Eq. (5) \begin{equation*} \gamma _{nm}^{k} =\frac {p_{nm}^{k} \left |{ {g_{nm}^{k}} }\right |^{2}}{\left |{ {g_{nm}^{k}} }\right |^{2}\sum \limits _{l=m+1}^{M} {p_{nl}^{k}} +\sum \limits _{m=1}^{M} {\sum \limits _{l^{\prime }=1,l^{\prime }\ne n}^{N} {p_{l^{\prime }m}^{k} \left |{ {g_{l^{\prime }m}^{k}} }\right |}}^{2}+\sigma ^{2}}\end{equation*}
where W is the bandwidth of the sub-channel. \sigma ^{2} denotes the power of the additive white Gaussian noise (AWGN). Let I_{nm}^{k} =\left |{ {g_{nm}^{k}} }\right |^{2}\sum \limits _{l=m+1}^{M} {p_{nl}^{k}} +\sum \limits _{m=1}^{M} {\sum \limits _{l^{\prime }=1,l^{\prime }\ne n}^{N} {p_{l^{\prime }m}^{k} \left |{ {g_{l^{\prime }m}^{k}} }\right |}}^{2} .

Therefore, the total sum rate from ED n to MSN m is \begin{equation*} r_{nm} =\sum \nolimits _{k\in {\mathcal{ K}}} {s_{nm}^{k} r_{nm}^{k}}\tag{6}\end{equation*}

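For illustration, the following Python sketch evaluates the SINR \gamma _{nm}^{k} and the rate r_{nm}^{k} of (4)–(6) for given powers and channel gains; the array layout, the helper name and the default noise value are our own assumptions for this example and are not part of the paper.

import numpy as np

def rate_nm_k(p, g, n, m, k, W=1e6, sigma2=1e-13):
    """Data rate r_nm^k in (5) for ED n -> MSN m on sub-channel k.

    p, g: arrays of shape (N, M, K) with transmit powers p_nm^k and channel
    gains g_nm^k (assumed ordered per ED and sub-channel as in (3)).
    """
    N, M, K = p.shape
    # Intra-cluster term: ED n's own signals to MSNs decoded later (l > m).
    intra = np.abs(g[n, m, k]) ** 2 * p[n, m + 1:, k].sum()
    # Inter-user term: all other EDs l' != n active on sub-channel k.
    others = np.arange(N) != n
    inter = (p[others, :, k] * np.abs(g[others, :, k]) ** 2).sum()
    sinr = p[n, m, k] * np.abs(g[n, m, k]) ** 2 / (intra + inter + sigma2)
    return W * np.log2(1.0 + sinr)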

C. Computation Model

In this paper, each ED’s task is divided proportionally: \theta _{nn} and \theta _{nm} represent the proportions of the task of ED n that are computed locally and offloaded to MSN m , respectively.

When requested task \theta _{nn} D_{n} of ED n is computed locally, the required computing time T_{nn}^{l} can be expressed as \begin{equation*} T_{nn}^{l} =\frac {\theta _{nn} D_{n} B_{n}}{f_{n}^{l}}\tag{7}\end{equation*}


The system incurs additional delay when tasks are executed at the MSNs. For each ED, this latency consists of the uplink transmission time, the processing time at the MSNs, and the downloading time of the computation results. In this article, the downloading time of the results is ignored, since the task results are usually much smaller than the input data, as in [27], [36].

When computation task \theta _{nm} D_{n} of ED n is offloaded to MSN m and executed remotely, the transmission time can be computed as \begin{equation*} T_{nm}^{off} =\frac {\theta _{nm} D_{n}}{r_{nm}}\tag{8}\end{equation*}


The delay of remote calculation on MSN m is \begin{equation*} T_{nm}^{exe} =\frac {\theta _{nm} D_{n} B_{n}}{f_{mn}^{e}}\tag{9}\end{equation*}

where f_{mn}^{e} is the computing resources allocated by the MSN m to ED n .

The total delay caused by executing task \theta _{nm} D_{n} of ED n at the MEC server m can be represented as \begin{equation*} T_{nm}^{c} =T_{nm}^{off} +T_{nm}^{exe}\tag{10}\end{equation*}


For the convenience of processing, the total delay of task execution for ED n is expressed as \begin{align*} T_{nm} =\begin{cases} \frac {\theta _{nn} D_{n} B_{n}}{f_{n}^{l}},& n=m \\ \frac {\theta _{nm} D_{n}}{r_{nm}}+\frac {\theta _{nm} D_{n} B_{n} }{f_{mn}^{e}},& n\ne m \\ \end{cases}\tag{11}\end{align*}


Since the task of ED n is sliced into multiple parts for local computation and remote computation, the parts are transmitted and processed in parallel. Therefore, the time for ED n to complete its computation task is the maximum of the local and edge computation times, which is presented as follows \begin{equation*} T_{n} =\max \limits _{m} T_{nm}\tag{12}\end{equation*}


In addition, the task processing capacity of ED n is defined as [34]\begin{equation*} C_{n} =\frac {D_{n}}{\textrm {T}_{n}}\tag{13}\end{equation*}


Finally, this article defines C as the overall task processing capability of the whole system.\begin{equation*} \textrm {C}=\sum \limits _{n=1}^{N} {C_{n}}\tag{14}\end{equation*}

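To make the computation model concrete, the following sketch evaluates the delays (7)–(12) and the processing capability (13)–(14) for given splitting ratios; the array shapes and the function name are illustrative assumptions only.

import numpy as np

def processing_capability(theta_loc, theta_off, D, B, f_loc, f_edge, r):
    """System processing capability C in (14).

    theta_loc: (N,) local ratios theta_nn; theta_off: (N, M) ratios theta_nm;
    D, B, f_loc: (N,) input bits, cycles per bit, local CPU frequency;
    f_edge: (N, M) edge CPU f_mn^e; r: (N, M) offloading rates r_nm from (6).
    Assumes r and f_edge are strictly positive.
    """
    T_loc = theta_loc * D * B / f_loc                          # Eq. (7)
    T_off = theta_off * D[:, None] / r + \
            theta_off * D[:, None] * B[:, None] / f_edge       # Eqs. (8)-(10)
    T_n = np.maximum(T_loc, T_off.max(axis=1))                 # Eq. (12)
    return np.sum(D / T_n)                                     # Eqs. (13)-(14)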

SECTION IV.

Problem Formulation

A larger C indicates a stronger overall task processing capability of the system. Therefore, our goal is to maximize the task processing capability of the system by jointly optimizing the task splitting ratios \boldsymbol {\theta } , the sub-channel assignment {s} , the transmit power {p} , and the computing resource allocation {f} . The optimization problem P1 is described as \begin{align*} &\hspace {-0.5pc}\max \limits _{\left \{{{P,\theta,f,S} }\right \}} \textrm {C}\tag{15}\\[-1pt]&\text {s.t. c1}:s_{nm}^{k} \in \left \{{{0,1} }\right \}, \quad \forall n,m,k\tag{15a}\\[-1pt]&\quad ~~\text {c2}: \sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {s_{nm}^{k}}} \le J_{\max }, \quad \forall k\tag{15b}\\[-1pt]&\quad ~~\text {c3}: \sum \limits _{k=1}^{K} {s_{nm}^{k}} \le 1,\quad \forall n,m\tag{15c}\\[-1pt]&\quad ~~\text {c4}: \sum \limits _{m=1}^{M} {\sum \limits _{k\in {\mathcal{ K}}} {s_{nm}^{k} p_{nm}^{k} }} \le P_{n}^{\max }, \quad \forall n\tag{15d}\\[-1pt]&\quad ~~\text {c5}: p_{nm}^{k} \ge 0, \quad \forall n,m,k\tag{15e}\\[-1pt]&\quad ~~\text {c6}: r_{nm}^{k} \ge r_{\min }, \quad \forall n,m,k\tag{15f}\\[-1pt]&\quad ~~\text {c7}: f_{mn}^{e} \ge 0, \quad \forall n,m\tag{15g}\\[-1pt]&\quad ~~\text {c8}: \sum \limits _{n=1}^{N} {f_{mn}^{e}} \le F_{m},\quad \forall m\tag{15h}\\[-1pt]&\quad ~~\text {c9}: \theta _{nn},\theta _{nm} \in \left [{ {0,1} }\right], \quad \forall n,m\tag{15i}\\[-1pt]&\quad ~~\text {c10}: \theta _{nn} +\sum \limits _{m=1}^{M} {\theta _{nm}} =1,\quad \forall n\tag{15j}\end{align*}

where P_{n}^{\max } is the maximum transmit power of ED n , F_{m} is the maximum computing capacity of MSN m .

Constraint c1 states that the subchannel allocation variable s_{nm}^{k} is binary; c2 means that each sub-channel can serve at most J_{\max } offloading tasks; c3 implies that each offloading task occupies at most one sub-channel; c4 and c5 represent the power constraints; c6 represents the QoS constraint, i.e., basic communication must guarantee the minimum network data rate; c7 and c8 indicate the computing resource limitation of each MSN; c9 and c10 ensure that the whole task of each ED is executed, either locally or remotely on the MSNs.

Obviously, since channel decision variable is 0–1, the proposed problem P1 is a MINLP problem and it is difficult to obtain the optimal solution. To make it more tractable, problem P1 can be written as \begin{equation*} \max \left \{{{\textrm {C}} }\right \}=\max \left \{{{\sum \limits _{n=1}^{N} {C_{n} }} }\right \}=\max \left \{{{\sum \limits _{n=1}^{N} {\min \left \{{{\frac {D_{n} }{\textrm {T}_{nm}}} }\right \}}} }\right \}\tag{16}\end{equation*}


We assume that the input data D_{n} of each mobile device is fixed, the task processing capability of each ED is only related to its delay, and the delays of different EDs do not affect each other. Hence the optimal solution of P1 is obtained when, for each ED, the maximum completion time \textrm {T}_{nm} is minimized [34]. Therefore, problem P1 can be transformed into \begin{align*} \text {P2:}&\min \limits _{\forall n}\,\, \textrm {max}T_{nm} \\&\text {s.t. } ~\text {c}1-\text {c}10\tag{17}\end{align*}

T_{nm} is mainly a function of the task allocation, the transmission power and the computation resources. Therefore, task allocation and resource allocation are crucial for offloading, and we decompose problem P2 into two sub-problems: the resource allocation problem (\text{P}2^\prime ) and the task offloading problem (\text{P}2^{\prime \prime } ), where \text{P}2^\prime determines how much transmit power and computing capacity are allocated on the MSNs for the offloading tasks, and \text{P}2^{\prime \prime } determines how many tasks are assigned to the local EDs and to the MSNs for computation. The specific description is as follows.

Taking both the transmission rate and the computing capacity into account, the maximum effective offloading rate R obtained through resource allocation is \begin{align*} \text {P2}':&\max \limits _{S,P,f} \textrm {R}=\max \sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {R_{nm}}} \\&\text {s.t.} ~\text {c}1-\text {c}8\tag{18}\end{align*}

where \begin{aligned} R_{nm} =\begin{cases} \frac {f_{n}^{l}}{B_{n}},& n=m \\ \frac {r_{nm} f_{mn}^{e}}{B_{n} r_{nm} +f_{mn}^{e}},& n\ne m \\ \end{cases} \end{aligned}

Through task assignment, the minimum effective delay of task processing is

P2”:\begin{align*}&\mathop {\min \max }\limits _{\theta } T_{nm}\,\,=\min \,\textrm {max}\left \{{ {\frac {\theta _{nn} B_{n} D_{n}}{f_{n}^{l}},\frac {\theta _{nm} D_{n} }{R_{nm}}} }\right \}, \quad \forall n,m \\&\text {s.t.}~\, \text {c}9, \text {c}10\tag{19}\end{align*}


SECTION V.

Proposed Algorithm

A. Problem Decomposition

For the problem (P2’), the effective offloading rate R of the system in (18) is equivalent to \begin{equation*} \textrm {R}=\sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {\frac {\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} f_{mn}^{e}}{B_{n} \sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} +f_{mn}^{e}}}}\tag{20}\end{equation*}


Using a divide-and-conquer strategy, the offloading rate of ED n to MSN m for computing tasks is first optimized \begin{align*}&\max \limits _{S,P,f}\, R_{nm} =\frac {\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} f_{mn}^{e}}{B_{n} \sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} +f_{mn}^{e}}, \quad \forall m,n \\&\text {s}.\text {t}. ~\text {c}1-\text {c}8\tag{21}\end{align*}


Therefore problem (21) is equivalent to \begin{align*} \min \limits _{S,P,f} \frac {1}{R_{nm}}=&\frac {B_{n} \sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} +f_{mn}^{e} }{\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k} f_{mn}^{e}},\quad \forall m,n \\=&\frac {B_{n}}{f_{mn}^{e}}+\frac {1}{\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k}},\quad \forall m,n \\&\hspace {-4pc}\text {s}.\text {t}.~\text {c}1-\text {c}8\tag{22}\end{align*}


It can be seen from (22) that the computation resource allocation f_{mn}^{e} and the communication resource allocation variables s_{nm}^{k} and p_{nm}^{k} are decoupled in both the objective function and the constraints. Using this property, we can decompose problem (22) into two independent problems, namely communication resource allocation and computation resource allocation, and solve them separately, as shown in the following subsections.

1) Computation Resources Allocation

Through (22), we describe the computation resource allocation problem as follows \begin{align*}&\min \limits _{f} \frac {B_{n}}{f_{mn}^{e}},\quad \forall m,n \\&\text {s}.\text {t}.~\text {c}7, \text {c}8\tag{23}\end{align*}


Note that the constraints c7 and c8 are convex. Denoting the objective function of (23) as \Lambda \left ({f }\right) and taking the second derivative of \Lambda \left ({f }\right) with respect to f_{mn}^{e} , we have \begin{equation*} \frac {\partial ^{2}\Lambda \left ({f }\right)}{\partial f_{mn}^{e2}}=\frac {2B_{n}}{\left ({{f_{mn}^{e}} }\right)^{3}}>0, \quad \forall m,n\tag{24}\end{equation*}


Therefore, (23) is a convex optimization problem and can be solved by the KKT conditions. We first express the Lagrangian function of (23) as \begin{equation*} L\left ({{\Lambda \left ({f }\right),v} }\right)= \frac {B_{n} }{f_{mn}^{e}}+\sum \limits _{m=1}^{M} {v_{m}} \left ({{\sum \limits _{n=1}^{N} {f_{mn}^{e}} -F_{m}} }\right)\tag{25}\end{equation*}

where v=\left [{ {v_{1},v_{2},\ldots,v_{\textrm {M}}} }\right] is the Lagrange multiplier vector. Taking the first derivative of the Lagrangian function gives \begin{equation*} \frac {\partial L\left ({{\Lambda \left ({f }\right),v} }\right)}{\partial f_{mn}^{e}}= -\frac {B_{n}}{\left ({{f_{mn}^{e}} }\right)^{2}}+v_{m},\quad \forall n,m\tag{26}\end{equation*}

Setting \partial L\left ({{\Lambda \left ({f }\right),v} }\right)/\partial f_{mn}^{e} = 0 , the optimal resource allocation of problem (23) can be obtained as follows \begin{equation*} \left ({{f_{mn}^{e}} }\right)^{\ast }=\sqrt {\frac {B_{n}}{v_{m}^{\ast }}},\quad \forall n,m\tag{27}\end{equation*}


When v_{m}^{\ast } >0 , the following constraint is satisfied \begin{equation*} \sum \limits _{n=1}^{N} {\left ({{f_{mn}^{e}} }\right)^{\ast }} =F_{m},\quad \forall m\tag{28}\end{equation*}


Substituting (27) into (28), the Lagrange multiplier is \begin{equation*} v_{m}^{\ast } =\left ({{\frac {1}{F_{m}}\sum \limits _{n=1}^{N} {\sqrt {B_{n}}} } }\right)^{2},\quad \forall m\tag{29}\end{equation*}


Substituting (29) into (27), we can obtain the optimal solution of computation resource as \begin{equation*} \left ({{f_{mn}^{e}} }\right)^{\ast }=\frac {F_{m} \sqrt {B_{n}} }{\sum \limits _{n=1}^{N} {\sqrt {B_{n}}}}, \quad \forall m\tag{30}\end{equation*}

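The closed-form solution (30) can be computed directly, as in the following short sketch (the function name is ours):

import numpy as np

def optimal_edge_cpu(F_m, B):
    """Closed-form split (30) of MSN m's CPU budget F_m among the N EDs.

    B: (N,) cycles-per-bit values B_n; the allocation is proportional
    to sqrt(B_n) and sums to F_m.
    """
    w = np.sqrt(np.asarray(B, dtype=float))
    return F_m * w / w.sum()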

2) Communication Resources Allocation

Through (22), we describe the communication resource allocation problem over all subchannels as follows \begin{align*}&\min \limits _{S,P} ~\frac {1}{\sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k}}}} \\&\text {s}.\text {t}. ~\text {c}1-\text {c}6\tag{31}\end{align*}


Equivalently, (31) can be rewritten as \begin{align*}&\max \limits _{S,P} \xi =\sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {\sum \limits _{k=1}^{K} {s_{nm}^{k}} r_{nm}^{k}}} \\&\text {s}.\text {t}.~\text {c}1-\text {c}6\tag{32}\end{align*}


Since subchannel allocation is a 0–1 decision problem, problem (32) is still an MINLP problem. Moreover, the subchannel allocation and power allocation are coupled across all subchannels, and it is quite complicated to obtain the global optimal solution, so we first decouple the subchannel allocation and the power allocation. Assuming the transmission power is equal on each subchannel, we propose a greedy subchannel matching algorithm for channel assignment.

3) Sub-Channel Assignment

In order to describe the dynamic matching between EDs and sub-channels, sub-channel allocation is regarded as a two-sided matching process between the set of offloading tasks {\mathcal{ J}} and the sub-channel set SC. If ED n uses channel k to offload its task to MSN m during the offloading process, we deem that sub-channel SC_{k} and the offloaded task D_{nm} are matched with each other. According to the channel state information, the preference lists of the offloading tasks and the subchannels can be expressed as \begin{align*} PF\_{}ED=&\Biggl [{ PF\_{}ED\left ({{D_{11}} }\right),\cdots,PF\_{}ED\left ({{D_{nm}} }\right),\cdots,} \\& {PF\_{}ED\left ({{D_{NM}} }\right) }\Biggr]^{T} \\ PF\_{}SC=&\left [{ {PF\_{}SC\left ({1 }\right),\!\cdots \!,PF\_{}SC\left ({k }\right),\!\cdots \!,PF\_{}SC\left ({K }\right)} }\right]^{T}\end{align*}

where PF\_{}ED\left ({{D_{nm}} }\right) and PF\_{}{\text {SC}}\left ({k }\right) are the preference lists of D_{nm} and SC_{k} , respectively.

We use the notation \succ to represent the preference relationship: if task D_{nm} , which ED n offloads to MSN m , has a higher channel gain on SC_{i} than on SC_{j} , we say D_{nm} prefers SC_{i} to SC_{j} , which is denoted as \begin{equation*} SC_{i} \left ({{D_{nm}} }\right)\succ SC_{j} \left ({{D_{nm}} }\right)\tag{33}\end{equation*}


If the offloading tasks in set q can provide higher \xi than in set {q}' on SC_{k} , we say SC_{k} prefers offloading task set q to task set {q}' , and we describe the case as \begin{equation*} \xi \left ({q }\right)>\xi \left ({{q'} }\right),\quad q,~{q}'\subset {\mathcal{ J}}\tag{34}\end{equation*}


According to the preference lists of the EDs’ offloading tasks and the sub-channels, the sub-channel allocation problem is expressed as a two-sided matching problem, as in [15] and [33]. First, two definitions are introduced.

Definition 1:

Given the offloading tasks of EDs and the subchannels as two disjoint sets {\mathcal{ J}} and SC, a many-to-one, two-sided matching {\mathcal{ A}} is a mapping from all the subsets of {\mathcal{ J}} into SC such that, for D_{nm} \in {\mathcal{ J}} and SC_{k} \in {\mathcal{ S}}{\mathcal{ C}} , the following conditions are satisfied

  1. {\mathcal{ A}}\left ({{D_{nm}} }\right)\in {\mathcal{ S}}{\mathcal{ C}} .

  2. {\mathcal{ A}}^{-1}\left ({{SC_{k}} }\right)\subseteq {\mathcal{ J}} .

  3. \left |{ {\cal A\left ({{D_{nm}} }\right)} }\right |=1,\left |{ {\cal A^{-1}\left ({{SC_{k}} }\right)} }\right |\le J_{\max } .

  4. SC_{k} \in {\mathcal{ A}}\left ({{D_{nm}} }\right)\Leftrightarrow D_{nm} \in {\mathcal{ A}}^{-1}\left ({{SC_{k}} }\right) .

The above conditions are explained as follows: 1) shows that each offloading task matches exactly one subchannel; 2) implies that each subchannel can be matched with a subset of tasks; 3) represents that the number of tasks that can be allocated on the same subchannel is limited to J_{\max } ; and 4) expresses that offloading task D_{nm} and subchannel SC_{k} are matched with each other.

Definition 2:

Given a matching {\mathcal{ A}} , suppose D_{nm} \notin {\mathcal{ A}}^{-1}\left ({{\textrm {SC}_{k}} }\right) and \textrm {SC}_{k} \notin {\mathcal{ A}}\left ({{D_{nm}} }\right) . If \xi \left ({{S_{new}} }\right)>\xi \left ({{\cal A^{-1}\left ({{\textrm {SC}_{k}} }\right)} }\right) , then S_{new} becomes the preferred task set for subchannel SC_{k} and \left ({{D_{nm},\textrm {SC}_{k}} }\right) is a preferred matched pair, where S_{new} \subseteq \{D_{nm} \}\cup S and S={\mathcal{ A}}^{-1}\left ({{\textrm {SC}_{k}} }\right) is the task set that has already been assigned to SC_{k} .

Based on the above analysis, we now describe the matching process between the offloading tasks of EDs and the subchannels. When EDs offload tasks, each ED has to select the best subchannel to transfer its tasks, and each subchannel has to select the best subset of tasks. Since the optimal solution requires searching all possible matchings to maximize the overall transmission rate, this leads to high complexity, especially when the number of EDs is large. Therefore, to reduce the complexity, a suboptimal subchannel allocation algorithm (SSAA) is proposed. The main idea of the suboptimal matching algorithm is that each ED sends a matching request, according to its offloading tasks’ preference lists, to its preferred channel, and this channel can reject or accept the task based on the offloading efficiency provided by all offloading tasks requesting it. Algorithm 1 is described as follows.

Algorithm 1 SSAA

1: Initialize the power allocation for each ED: p_{nm} =P_{n}^{\max }/M .
2: Initialize the preference lists PF\_{}ED\left ({{D_{nm}} }\right) for all the offloading tasks of EDs and PF\_{}SC\left ({k }\right) for all the subchannels according to the channel state information, \forall n,m .
3: Initialize the matched list S_{Match} \left ({k }\right) to record the set of tasks D_{nm} allocated on SC_{k} for all the subchannels, \forall k\in {\mathcal{ S}}{\mathcal{ C}} .
4: Initialize S_{UnMatch} to record the tasks D_{nm} that have not been allocated any subchannel.
5: while \left \{{{S_{UnMatch}} }\right \}\ne \emptyset do
6:  for n=1 to N do
7:   for m=1 to M do
8:    Based on the preference list PF\_{}ED(D_{nm}) , each ED sends a matching request to its most preferred subchannel k^{\wedge } .
9:    if \vert S_{Match} (k^{\wedge })\vert < J_{\max } then
10:     Subchannel k^{\wedge } adds task D_{nm} of ED n to S_{Match} (k^{\wedge }) and removes D_{nm} from S_{UnMatch} .
11:    end if
12:    if \vert S_{Match} (k^{\wedge })\vert =J_{\max } then
13:     a) Subchannel k^{\wedge } selects the set of tasks D_{nm} that maximizes \xi . b) The other tasks are rejected; update S_{UnMatch} and delete the rejected tasks from subchannel k^{\wedge } ’s preference list. c) Each unchosen D_{nm} returns to step 8 and repeats until it has been allocated to one subchannel.
14:    end if
15:   end for
16:  end for
17: end while
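For reference, a compact Python sketch of the SSAA matching logic is given below; the callback rate_of_set, which returns the offloading efficiency \xi of a task set on a sub-channel, is an assumed helper, and the tie-handling details are simplified with respect to Algorithm 1.

import numpy as np

def ssaa(gain, rate_of_set, J_max):
    """Greedy many-to-one matching in the spirit of Algorithm 1 (SSAA).

    gain[j, k]: channel quality of offloading task j on sub-channel k,
    used to build each task's preference list.
    rate_of_set(k, tasks): assumed callback returning the offloading
    efficiency xi of the given task set on sub-channel k.
    Returns a dict mapping each sub-channel to its matched task list.
    """
    J, K = gain.shape
    pref = np.argsort(-gain, axis=1)        # descending preference lists
    nxt = np.zeros(J, dtype=int)            # next channel each task proposes to
    matched = {k: [] for k in range(K)}
    unmatched = list(range(J))
    while unmatched:
        j = unmatched.pop(0)
        k = pref[j, nxt[j]]                 # most preferred remaining channel
        if len(matched[k]) < J_max:
            matched[k].append(j)            # room left: accept the request
            continue
        # Channel full: keep the J_max tasks giving the largest xi (step 13).
        candidates = matched[k] + [j]
        best_set, best_xi = None, -np.inf
        for drop in candidates:
            kept = [t for t in candidates if t != drop]
            xi = rate_of_set(k, kept)
            if xi > best_xi:
                best_set, best_xi = kept, xi
        matched[k] = best_set
        for t in candidates:
            if t not in best_set:           # rejected: try the next channel
                nxt[t] = min(nxt[t] + 1, K - 1)
                unmatched.append(t)
    return matched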

B. Complexity Analysis

The optimal subchannel assignment can be obtained by exhaustively searching over all possible combinations of offloading tasks and subchannels and selecting the one that maximizes the system offloading efficiency. Suppose we have J offloading tasks of EDs and K subchannels, and that two offloading tasks can reuse the same subchannel. The time complexity of exhaustive searching is O\left ({{\left ({{2K} }\right)!/2^{K}} }\right) . To compare with the complexity of our proposed algorithm, we take the natural logarithm of the complexity. The logarithmic complexity of exhaustive searching is O\left ({{\ln \left ({{\left ({{2K} }\right)!} }\right)-K} }\right)=O\left ({{\ln \left ({{\left ({{2K} }\right)!} }\right)} }\right) . By applying Stirling’s formula [15], \ln \left ({{x!} }\right)=x\ln x-x+O\left ({{\ln \left ({x }\right)} }\right) , the logarithmic complexity of the exhaustive method can be expressed as O\left ({{K\ln K} }\right) . In the proposed suboptimal algorithm, the worst-case complexity is O\left ({{K^{2}} }\right) , whose logarithmic complexity is O\left ({{\ln K} }\right) . Since O\left ({{K\ln K} }\right) >O\left ({{\ln K} }\right) and the actual complexity of the proposed suboptimal algorithm is much smaller than its worst case, the complexity of the proposed algorithm is much smaller than that of the optimal sub-channel allocation scheme. Moreover, for a small number of EDs (N = 3 ), the SSAA yields results identical to those of the exhaustive search.

1) Power Allocation on Each Subchannel

In this section, we will optimize the transmit power P under given sub-channel allocation. The power allocation problem on each sub-channel is expressed as \begin{align*}&\max \limits _{P} \sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {r_{nm}^{k}}},\quad \forall k \\&\text {s}.\text {t}. ~\text {c}4-\text {c}6\tag{35}\end{align*}


Obviously, the objective function of (35) is a logarithmic function of p_{nm}^{k} whose second derivative with respect to p_{nm}^{k} is negative, so problem (35) is a convex optimization problem and can be solved by the KKT conditions. We first construct the Lagrangian function \begin{align*}L\left ({{P,\alpha,\beta } }\right)& = \sum \limits _{n=1}^{N} {\sum \limits _{m=1}^{M} {r_{nm}^{k}}} +\sum \limits _{n=1}^{N} {\sum \limits _{k=1}^{K} {\alpha _{nm}^{k} \left ({{P_{n}^{\max } -\sum \limits _{m=1}^{M} {p_{nm}^{k}}} }\right)}} \\& \qquad \qquad + \sum \limits _{n=1}^{N} {\sum \limits _{k=1}^{K} {\sum \limits _{m=1}^{M} {\beta _{nm}^{k}} \left ({{r_{nm}^{k} -r_{\min }} }\right)}}\tag{36}\end{align*}

where \boldsymbol {\alpha },\boldsymbol {\beta }\boldsymbol{ >0} represent the Lagrange multiplier vectors.

The first derivative of p_{nm}^{k} is \begin{align*} \frac {\partial L\left ({{P,\alpha,\beta } }\right)}{\partial p_{nm}^{k} }\!=\! \frac {Wg_{nm}^{k}}{\left ({{I_{nm}^{k}\! +\!\sigma ^{2}+p_{nm}^{k} g_{nm}^{k}} }\right)\ln 2}\left ({{1+\beta _{nm}^{k}} }\right)-\alpha _{nm}^{k} \\ {}\tag{37}\end{align*}


Let \partial L\left ({{P,\alpha,\beta } }\right){/}\partial p_{nm}^{k} = 0 , the optimal power allocation is expressed as \begin{equation*} \left ({{p_{nm}^{k}} }\right)^{\ast }=\frac {W\left ({{1+\left ({{\beta _{nm}^{k}} }\right)^{\ast }} }\right)}{\left ({{\alpha _{nm}^{k}} }\right)^{\ast }\ln 2}-\frac {I_{nm}^{k} +\sigma ^{2}}{g_{nm}^{k}}\tag{38}\end{equation*}


With the power allocation solution in (38), we can apply the sub-gradient method to update the Lagrange multipliers, since the objective function is differentiable. The Lagrange multipliers are therefore updated as \begin{align*} \alpha _{nm}^{k} \left ({{t+1} }\right)=&\left [{ {\alpha _{nm}^{k} \left ({t }\right)\!-\!\zeta _{1} \left ({t }\right)\left ({{P_{n}^{\max } \!-\!\sum \limits _{m=1}^{M} {p_{nm}^{k}}} }\right)} }\right]^{+}, \quad \forall k \\ {}\tag{39}\\ \beta _{nm}^{k} \left ({{t+1} }\right)=&\left [{ {\beta _{nm}^{k} \left ({t }\right)\!-\!\zeta _{2} \left ({t }\right)\left ({{r_{nm}^{k} \!-\!r_{\min }} }\right)} }\right]^{+}, \quad \forall k\tag{40}\end{align*}

where t is the iteration index and \zeta _{1} \left ({t }\right) , \zeta _{2} \left ({t }\right) are positive step sizes at iteration t . The power allocation algorithm (PAA) is elaborated in Algorithm 2.

Algorithm 2 PAA

1: Set the iteration index t=0 and the iteration step sizes \zeta _{1} \left ({t }\right)>0,\zeta _{2} \left ({t }\right)>0 .
2: Initialize \varepsilon >0,\alpha _{nm}^{k} \left ({1 }\right)>0 and \beta _{nm}^{k} \left ({1 }\right)>0 , and calculate p_{nm}^{k} according to (38).
3: Update the Lagrange multipliers \alpha _{nm}^{k} \left ({t }\right) and \beta _{nm}^{k} \left ({t }\right) according to (39) and (40).
4: Update p_{nm}^{k} according to (38).
5: if \left |{ {p_{nm}^{k} \left ({{t+1} }\right)-p_{nm}^{k} \left ({t }\right)} }\right |< \varepsilon or t>T_{\max } , the result is the optimal solution, where T_{\max } is the maximum number of iterations.
6: else set t=t+1 and return to step 3.
7: end if
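A minimal sketch of the PAA iteration on one sub-channel is given below, assuming the interference terms I_{nm}^{k} are held fixed within the loop; the step sizes, initial multipliers and default noise value are illustrative, not taken from the paper.

import numpy as np

def paa_subchannel(g2, I, P_max, r_min, W=1e6, sigma2=1e-13,
                   step=1e-3, max_iter=500, tol=1e-6):
    """Sub-gradient power allocation on one sub-channel, following (38)-(40).

    g2, I: (N, M) channel power gains |g_nm^k|^2 and interference I_nm^k,
    treated as constants inside the loop for simplicity (an assumption).
    P_max: (N,) per-ED power budgets; r_min: QoS rate threshold [bit/s].
    """
    N, M = g2.shape
    alpha = np.ones((N, M))
    beta = np.ones((N, M))
    p = np.tile((P_max / M)[:, None], (1, M))       # equal-power start
    for _ in range(max_iter):
        # Closed-form update (38), projected onto p >= 0.
        p_new = np.maximum(W * (1.0 + beta) / (alpha * np.log(2))
                           - (I + sigma2) / g2, 0.0)
        r = W * np.log2(1.0 + p_new * g2 / (I + sigma2))
        # Dual updates (39)-(40); alpha is kept strictly positive.
        alpha = np.maximum(alpha - step * (P_max - p_new.sum(axis=1))[:, None], 1e-12)
        beta = np.maximum(beta - step * (r - r_min), 0.0)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p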

C. Joint Sub-Channel Assignment and Transmit Power Allocation

In the previous subsections, the sub-channel allocation was obtained under the assumption of equal power allocation, and the transmission power was then optimized under the given channel allocation. To jointly optimize the power allocation and the subchannel allocation, we propose a joint communication resource allocation algorithm (JCRAA) to solve problem (31). The key idea of JCRAA is to iteratively update the sub-channel assignment through Algorithm 1 and the transmit power allocation through Algorithm 2; the iterations stop when the subchannel assignment no longer changes. The details of JCRAA are shown in Algorithm 3.

Algorithm 3 JCRAA

1: Let the iteration index t_{1}=0 .
2: Initialize the power allocation for each ED within the power range: \textrm {P}^{\left ({0 }\right)} .
3: Let t_{1}= t_{1}+1 .
4: Under power allocation \textrm {P}^{\left ({{t_{1} -1} }\right)} , update the sub-channel allocation result according to Algorithm 1.
5: Assign the sub-channel {\mathcal{ S}}{\mathcal{ C}}^{\left ({{t_{1} -1} }\right)} and update the power allocation results according to Algorithm 2.
6: if {\mathcal{ S}}{\mathcal{ C}}^{\left ({{t_{1}} }\right)}={\mathcal{ S}}{\mathcal{ C}}^{\left ({{t_{1} -1} }\right)} then
7:  the algorithm is terminated
8: else return to step 3
9: end if
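The alternating structure of JCRAA can be sketched as follows, with ssaa_step and paa_step standing in for Algorithms 1 and 2 (the callback names are ours):

def jcraa(init_power, ssaa_step, paa_step, max_iter=50):
    """Alternating optimization loop of Algorithm 3 (JCRAA), sketched.

    ssaa_step(power) -> sub-channel assignment (e.g. via Algorithm 1);
    paa_step(assignment) -> updated power allocation (e.g. via Algorithm 2).
    Both callbacks are assumed to be supplied by the caller, and the
    assignment must support equality comparison.
    """
    power = init_power
    assignment = None
    for _ in range(max_iter):
        new_assignment = ssaa_step(power)     # step 4: update sub-channels
        power = paa_step(new_assignment)      # step 5: update transmit powers
        if new_assignment == assignment:      # step 6: assignment unchanged
            break
        assignment = new_assignment
    return assignment, power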

D. Task Allocation Strategy Based on Resource Allocation

Based on the resource allocation, we design a task allocation algorithm (TAA) to determine how many tasks should be allocated to the local EDs and to the edge nodes, i.e., to solve problem P2’’. The main steps of Algorithm 4 are as follows.

Algorithm 4 TAA

Input: the task size of each ED to be computed, and the resource allocation results of P2’.
Output: the local task allocation ratio \theta _{nn} and the offloading task ratios \theta _{nm} .
1: For \forall n , compute the total offloading rate R_{n} ;
2: For \forall n , compute the offloading rate R_{nm} from ED n to MSN m ;
3: Calculate the offloading ratio \theta _{nm} =R_{nm} /R_{n} ;
4: Calculate the proportion of tasks performed locally \theta _{nn} =1-\sum \nolimits _{m=1}^{M} {\theta _{nm}}, \forall n .
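Since R_{nm} and R_{n} are available from P2’, the TAA reduces to a few array operations, sketched below under the assumption that all rates and allocated CPU frequencies are strictly positive:

import numpy as np

def taa(r, f_edge, f_loc, B):
    """Task-splitting ratios of Algorithm 4 (TAA).

    r: (N, M) offloading rates r_nm from (6); f_edge: (N, M) allocated edge
    CPU f_mn^e; f_loc, B: (N,) local CPU frequency and cycles per bit.
    Assumes all entries of r and f_edge are strictly positive.
    """
    R_loc = f_loc / B                                   # R_nm for n = m
    R_off = r * f_edge / (B[:, None] * r + f_edge)      # R_nm for n != m
    R_n = R_loc + R_off.sum(axis=1)                     # total rate per ED (step 1)
    theta_off = R_off / R_n[:, None]                    # step 3
    theta_loc = 1.0 - theta_off.sum(axis=1)             # step 4
    return theta_loc, theta_off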

SECTION VI.

Simulation Results

In this section, simulation results are used to evaluate the impact of the proposed resource allocation scheme on system performance. We consider a scenario with 3 EDs, 3 MSNs and 5 subchannels. In the simulations, the local computing capacities of the 3 EDs are f_{n}^{l} =\left \{{{0.6,0.8,1} }\right \}\textrm {GHz} , and the computing capacities of the 3 MSNs are F_{m} =\left \{{ {5,10,15} }\right \}\textrm {GHz} as in [9]. For the wireless transmission, the channel gains are characterized by a path-loss model, set to 36.7\log (d_{nm})+140.7 [35], where d_{nm} is the distance between ED n and MSN m ; Rayleigh fading with zero mean and unit variance is assumed, similar to [37]. The sub-channel bandwidth is W=1 MHz, the maximum transmission power of each ED is P_{n}^{\max }=27\text {dBm} , and the noise power spectral density is \sigma ^{2}=-174\textrm {dBm/Hz} . The minimum transmission rate of each ED is normalized to r_{\min } =1\textrm {bps}/\textrm {Hz} . We compare our proposed scheme (NOMA-MEC) against the following benchmark schemes:

  1. All local computing scheme (ALL LOCAL): All tasks of EDs are executed locally.

  2. All MEC computing scheme (ALL MEC): All tasks are offloaded to 3 MSNs by NOMA.

  3. One MSN scheme (One MSN): We consider a single-MSN scheme, i.e., one edge server, as in [20]. In the simulation, the MSN’s computing capability is set to the sum of the computing capabilities of the multi-node collaborative edge nodes in this article, i.e., 30 GHz.

  4. OMA-MEC scheme (OMA-MEC): We consider an OMA-MEC system as a benchmark, where EDs adopt frequency division multiple access (FDMA) scheme for computation offloading.
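For reproducibility, the simulation parameters above can be instantiated as in the following sketch; the ED-MSN distances are our own assumption (the paper does not specify the geometry), while the path-loss model, fading, bandwidth, noise density, power budget and computing capacities follow the values stated above.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 3, 3, 5                        # EDs, MSNs, sub-channels
W = 1e6                                  # sub-channel bandwidth [Hz]
sigma2 = 10 ** (-174 / 10) / 1000 * W    # -174 dBm/Hz noise over one sub-channel [W]
P_max = np.full(N, 10 ** (27 / 10) / 1000)   # 27 dBm per ED [W]
F = np.array([5e9, 10e9, 15e9])          # MSN computing capacities [cycles/s]
f_loc = np.array([0.6e9, 0.8e9, 1e9])    # local computing capacities [cycles/s]

d = rng.uniform(0.05, 0.3, size=(N, M))               # ED-MSN distances [km] (assumed)
path_loss_db = 140.7 + 36.7 * np.log10(d)             # path-loss model from [35]
fading = rng.rayleigh(scale=1 / np.sqrt(2), size=(N, M, K)) ** 2  # unit-mean fading power
g2 = fading * 10 ** (-path_loss_db[:, :, None] / 10)  # channel power gains |g_nm^k|^2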

A. Evaluation of Task Allocation Results

Firstly, to intuitively and concretely reflect the task distribution of the proposed scheme, the relationship between the total task volume of each ED and the task assignment is shown in Fig. 2 (a), (b) and (c). It can be seen from the figures that, for each ED, the number of tasks assigned to the local ED and to the MSNs increases as the total number of processed tasks grows. At the same time, for ED3, the amount of tasks assigned locally is basically equal to the amount assigned to MSN3, which is due to the larger computing capacity of ED3: even when a large amount of tasks is assigned to it, the resulting delay remains small.

FIGURE 2. The relationship between the total tasks and task allocation for each ED.

In addition, in order to analyze the overall distribution of system computation tasks, Fig. 3 describes the relationship between the total tasks of the system and the task allocation among the MSNs and the local EDs. It can be seen that as the total number of system tasks increases, the amount of tasks allocated to the MSNs and to the users themselves increases. At the same time, when the amount of system tasks is small (less than 100 Mb), the tasks allocated to the three edge servers are basically the same. However, when the system tasks become large (200 Mb), the tasks allocated to MSN3 are much larger than those allocated to MSN1 and MSN2. This is because MSN3 has strong computing capacity and can meet the needs of EDs with large tasks. As the amount of tasks increases, the amount of tasks allocated to ED3 also becomes larger.

FIGURE 3. The relationship between the total number of tasks in the system and task allocation.

B. Evaluation of System Performance

The relationship between the total tasks and the system performance under the five schemes is shown in Fig. 4. Figures (e) and (f) picture the system delay and energy consumption versus the total tasks. Since the task assignment is based on the resource allocation, the time delay and energy consumption are linear in the total amount of tasks. We can see from the figure that the solution in this article has the greatest advantage in terms of delay and energy consumption. The OMA-MEC case is slightly inferior to the NOMA-MEC case proposed in this article. The third best case is the all-offloading scheme: although local computing is not considered, the EDs’ tasks are offloaded to different MSNs for collaborative computing. In the one-MSN case, the computing capacity of the single edge server equals the sum of the computing capacities of the 3 MSNs considered in this article, but no collaborative computing is carried out, resulting in higher delay and energy consumption. The all-local scheme shows the worst system performance compared to the other schemes.

FIGURE 4. The relationship between total tasks and system performance.

Fig. 5 shows the relationship between the total amount of tasks and the system processing capability. Since the processing capability characterizes the system capacity, for a given system the processing capability remains unchanged as the task size increases. However, it can be seen from the figure that when the solution in this article is adopted, the processing capability of the system is the best, followed by the OMA-MEC solution, while all-local computing is the worst. This is because, among the five schemes and under equal task volumes, the scheme proposed in this article yields the smallest delay and therefore the largest processing capability.

FIGURE 5. The relationship between system tasks and system processing capability.

SECTION VII.

Conclusion

To enhance the performance of MEC systems and further improve user experience, we proposed a novel network scenario in which NOMA is used to transmit the offloading tasks of EDs to multiple edge servers. In this paper, we presented an optimization framework for a multi-user, multi-task and multi-server NOMA-MEC system to maximize the system processing capability via jointly optimizing task offloading and resource allocation. By decomposing the formulated problem, an efficient algorithm was proposed to tackle it with the help of convex optimization theory. Computer simulations verified the effectiveness of the proposed scheme, showing that it can efficiently reduce the delay and energy consumption and improve the processing capability of MEC systems.
