Introduction
In recent years, as the number of vehicles has grown rapidly, the incidence of traffic accidents in urban areas has risen accordingly. To improve urban traffic conditions and ensure driving safety, the Internet of Vehicles (IoV) is maturing [1]. In IoV, communication technologies among vehicles have developed rapidly to help vehicles learn about road conditions in a timely manner. Vehicle-to-everything (V2X) communication, as a novel communication technology, has attracted much attention from both industry and academia. Built on long-term evolution (LTE) technology, V2X enables massive data transmission with controllable latency [2]. Technically, V2X communication comprises vehicle-to-vehicle (V2V), vehicle-to-pedestrian (V2P), vehicle-to-infrastructure (V2I) and vehicle-to-network (V2N) communication [3]. Vehicles, pedestrians and infrastructure collect information about the surrounding environment and exchange it with nearby information collectors. Besides, by using connected-car technology, vehicular ad-hoc networks (VANETs) support safety-critical applications in C-ITS [4].
However, vehicles in IoV generate large quantities of complex computing tasks, while the physical resources deployed on vehicles for processing and storing data are limited. Exploiting the communication technologies in IoV to transmit tasks to infrastructures with rich computing resources is a critical step toward realizing the vision of accident-free driving [5]. Therefore, vehicles are equipped with transmitters and receivers. Devices such as intelligent cameras, sensors and actuators transmit vehicle information to remote infrastructures such as remote cloud data centers [6]. After computing tasks are offloaded to the remote cloud through V2X communication, the cloud returns the computing results to the vehicles; however, this round trip often fails to satisfy the requirements of the computing tasks in IoV [7].
Nevertheless, the relatively long distance between vehicles and the remote cloud causes unbearable latency. Mobile edge computing (MEC) is well suited to reducing this latency, as MEC brings stringent latency guarantees to the cellular network architecture [8], [9]. By pushing cloud services to the edge of the network, vehicles' mobile applications are offloaded to edge devices rather than to the remote cloud platform [10]. In summary, the origin vehicle transmits its computing tasks to the destination vehicle, and the computing tasks on the destination vehicle are then transmitted to the destination edge computing device (ECD) [11].
The idea of offloading computing tasks is not new. However, offloading computing tasks to ECDs still has weaknesses [12]. The limited computing resources of ECDs are insufficient for the massive amount of unprocessed data, so the efficiency of data processing decreases considerably. From the perspective of ECDs, it is urgent to reduce the latency and increase the resource utilization. It is therefore critical to limit the number of computing tasks placed on any single ECD, which makes offloading computing tasks to other ECDs a necessity. During this process, the latency must be reduced; meanwhile, the explosion of data requires improving the resource utilization of ECDs. To reduce the latency and increase the resource utilization when offloading computing tasks, a computation offloading method that employs V2X technology for data transmission in edge computing, named V2X-COM, is devised.
In summary, the main contributions are as follows:
The computation offloading problem of reducing the latency and improving the resource utilization in IoV is formulated as a multi-objective optimization problem.
An algorithm based on V2X communication is proposed to obtain the offloading route along which the computing tasks are offloaded from the origin vehicle to the destination one.
Non-dominated sorting genetic algorithm III (NSGA-III) is adopted for realizing the optimization of reducing the latency and improving the resource utilization of ECDs.
Simulation experiments are conducted to prove the effectiveness of the method.
This paper contains six sections. In Section II, the related work is summarized. In Section III, the whole mathematical modeling is described. In Section IV, we design a computation offloading method by adopting the NSGA-III method. In Section V, simulation experiments and comparison analyses are presented. Finally, Section VI outlines the conclusion and future work.
Related Work
In recent years, the concept of smart cities has been discussed widely. Facing increasingly frequent traffic problems caused by the growing number of vehicles, IoV has become an advanced technology to address this situation. MEC fully exploits its advantages in IoV by shortening the distance between the server and the vehicle and providing more efficient data processing [13]–[15]. By using V2X communication, the spectrum resource is used more fairly and energy consumption is reduced noticeably, thereby improving the quality of computation [8].
With more and more attention paid by users to this area, IoV has experienced tremendous changes. Visible light communication (VLC), as a momentous technology in IoV, has been preferred in vehicle-to-vehicle (V2V) as well as vehicle-to-infrastructure (V2I) communications [16]. Besides, for connected vehicle technology (CVT) safety applications, dedicated short-range communication (DSRC) has been regarded as the primary option, which can make automatic driving technology more mature [17]. In [18], Lin et al. utilized VANET nodes and designed a moving-zone based architecture to promote the dissemination of information. Wu et al. studied the time optimization of multiple knowledge transfers in the big data environment in [19].
As a new paradigm for vehicle communication in IoV, V2X communication has emerged as an effective solution in the field of 5G-enabled vehicular communications [20] and has also attracted much attention from industry and academia. In [21], Chen et al. presented an outline of V2X services in 3GPP and investigated the latest standardization of LTE V2X in 3GPP. Considering that V2X communication is based on dedicated short-range communications (DSRC) and cellular networks, Abboud et al. proposed an interworking network between DSRC and cellular network technologies for V2X communication [22]. Besides, direct device-to-device (D2D) links are considered helpful to V2X applications. In [23], Sun et al. studied the radio resource management issues for V2X communications. However, how to make use of the limited spectrum resource remains an urgent challenge to be worked out. In [24], Wang et al. designed an energy-aware spectrum sharing strategy which allows V2X users to access unlicensed channels equally. In [25], Wang et al. proposed an energy-efficient coverage control algorithm based on PSO for wireless sensor networks. In the Chinese telematics industry, V2X based on LTE is widely deployed as LTE-V. In addition, LTE-based V2X has been redefined by the 3GPP standardization [26]. The standardization plan supporting LTE-based V2V and V2X services was completed in 3GPP Release 14 in 2016 and 2017, for application to the LTE system of the vehicle industry [2], [27].
When dealing with computing tasks, vehicles generally transmit the tasks to the remote cloud for execution via V2X communication. However, the time consumption of this process exceeds the latency limitation of vehicles in IoV. In contrast to the traditional remote cloud-based cellular architecture, the MEC infrastructure is believed to be more suitable for IoV [28]. Compared with traditional cloud computing, MEC is promising in providing computing resources which are close to the ECDs [29]. For using MEC in IoV, Hu et al. proposed a multi-access edge computing framework and the corresponding communication protocol [30]. Liu et al. proposed a network architecture which supports software-defined networking (SDN) in [31]. Furthermore, Kumar et al. investigated a method that uses vehicular delay-tolerant networks (VDTNs) as the solution for data dissemination to vehicles at the edge of the network by using MEC [32].
Generally, if cloud services can be pushed to the network edge by using MEC technology, the latency requirements of the computing tasks are easily satisfied [33], [34], [39]. However, the computing resources in ECDs are limited. Thus, it is necessary to allocate and coordinate resources between the ECDs and the remote cloud. To address this challenge, Ndikumana et al. proposed an offloading algorithm which enables collaborative cache allocation and computation in [35]. Besides, in [36], Chen et al. proposed a distributed computation offloading algorithm which can achieve a Nash equilibrium and thus superior computation offloading performance. Furthermore, the optimization of the offloading process in IoV is also significant. In [37], a low-complexity sub-optimal algorithm is proposed which optimizes task offloading scheduling and transmit power allocation. In [38], to provide satisfactory computation performance, Mao et al. proposed a low-complexity online algorithm that jointly determines the computation offloading strategy, the CPU-cycle frequencies and the transmission power for computation offloading.
However, few studies pay attention to the multi-objective optimization for MEC in IoV. It is still a significant challenge to take both latency and resource utilization into consideration. Therefore, this paper puts forward an offloading method aiming to deal with this challenge.
System Model in IoV Based on Edge Computing
In this section, a system model is proposed to evaluate the latency and the resource utilization. The vital notations and their corresponding descriptions are presented in Table 1.
A. Resource Model
A communication framework for V2X based on edge computing is shown in Fig. 1. In the framework, we consider a scenario where a one-way road exists and N edge nodes (ENs) are deployed on one side of the road, denoting the edge nodes as
The computational resource and caching space requirements of each computing task are denoted as
B. Latency Model
When an EN executes a computing task, the latency includes the time for transmitting the task from the origin vehicle to the destination vehicle, the time for offloading the task from the destination vehicle to the destination EN, the execution time on the destination EN, and the feedback time for the EN to transmit the computing results back to the origin vehicle.
On the one-way road, the vehicles travel across different ENs. We suppose all vehicles always remain within the road segments covered by the ENs. Then we use a binary variable to indicate the location of the vehicle: \begin{equation*} J_{m}^{n}\left({x}\right) = \begin{cases} 0, & {v_{m}}~{is~in~the~coverage~of~}{d_{n}},\\ 1, & {otherwise}. \end{cases}\tag{1}\end{equation*}
The time for transmitting the computing task from the origin vehicle to the destination vehicle is calculated by \begin{align*} {a_{m}}\left({x}\right) = \sum\limits_{n = 1}^{N} \sum\limits_{m' = 1}^{M} J_{m}^{n} \cdot \left({1 - J_{m'}^{n}}\right) \cdot Y_{m}^{m'} \cdot \frac{p_{m}}{\lambda_{V2V}} \cdot \left({x_{m'} + 1}\right), \tag{2}\end{align*}
where \begin{equation*} Y_{m}^{m'}\left({x}\right) = \begin{cases} 0, & {v_{m'}}~{is~the~destination~vehicle,} \\ 1, & otherwise. \end{cases}\tag{3}\end{equation*}
The time for offloading the task from the destination vehicle to the destination EN is calculated by \begin{equation*} {b_{m}}\left({x}\right) = \sum\limits_{n = 1}^{N} J_{m'}^{n}\left({x}\right) \cdot \frac{p_{m}}{\lambda_{\mathrm{V2I}}},\tag{4}\end{equation*}
Regarding the physical resources of an EN as multiple resource units, the execution time is determined by the properties of the units and the data size of the task. Let
The execution time of the task on the destination EN is calculated by \begin{equation*} {c_{m}}\left({x}\right) = \sum\limits_{n = 1}^{N} \left({1 - Y_{m}^{m'}}\right) \cdot \frac{r_{n}}{u_{n} \cdot q}.\tag{5}\end{equation*}
The time for offloading the execution results back to the vehicles is calculated by \begin{equation*} {d_{m}}\left({x}\right) = \frac{p'_{m}}{\lambda_{\mathrm{V2I}}}.\tag{6}\end{equation*}
The total latency for implementing a computing task is calculated by \begin{equation*} {h_{m}}\left({x}\right) = {a_{m}}\left({x}\right) + {b_{m}}\left({x}\right) + {c_{m}}\left({x}\right) + {d_{m}}\left({x}\right).\tag{7}\end{equation*}
Hence, the entire latency for implementing all the computing tasks is calculated by \begin{equation*} H = \sum\limits_{m = 1}^{M} {h_{m}}\left({x}\right).\tag{8}\end{equation*}
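To make the latency model concrete, the following minimal Python sketch computes the per-task latency of (2)–(7) and the total latency of (8) for a toy configuration; the variable names mirror the notation above, but the numerical values (data sizes, transmission rates, hop counts) are illustrative assumptions rather than parameters taken from the later experiments.

```python
# Minimal sketch of the latency model in (2)-(8); all values are illustrative.
def task_latency(p_m, p_out, hops, lam_v2v, lam_v2i, r_n, u_n, q):
    """Latency of one task: V2V relay + V2I offload + execution + feedback."""
    a_m = hops * p_m / lam_v2v          # (2) V2V transmission over the relay hops
    b_m = p_m / lam_v2i                 # (4) offloading from the vehicle to the EN
    c_m = r_n / (u_n * q)               # (5) execution on the destination EN
    d_m = p_out / lam_v2i               # (6) feedback of the results
    return a_m + b_m + c_m + d_m        # (7)

# Toy example with two tasks (assumed data sizes in Mb, rates in Mb/s).
tasks = [dict(p_m=8.0, p_out=0.5, hops=2), dict(p_m=4.0, p_out=0.3, hops=1)]
H = sum(task_latency(t["p_m"], t["p_out"], t["hops"],
                     lam_v2v=6.0, lam_v2i=12.0,
                     r_n=20.0, u_n=4, q=2.0) for t in tasks)   # (8)
print(f"total latency H = {H:.2f} s")
```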
C. Resource Utilization Model
The resource utilization of the ENs relates to the number of VM instances in the ENs. In the ENs, multiple VMs are instantiated to allocate the computing resources to the computing tasks. The resource requirements of the computing tasks are weighted by the number of VM instances. Denoting the number of VMs in \begin{equation*} {r_{n}}\left({x}\right) = \frac{1}{\theta_{n}} \cdot \sum\limits_{s = 1}^{S} nm_{n} \cdot F_{s}^{n}\left({x}\right),\tag{9}\end{equation*}
\begin{equation*} F_{s}^{n}\left ({x }\right) = \begin{cases} 1, & if~the~VMs~is~deployed~on~{d_{n}},\\ 0, & otherwise. \end{cases}\tag{10}\end{equation*}
Let \begin{equation*} {k_{n}}\left ({x }\right) = \begin{cases} 0, & if \displaystyle \sum \limits _{s = 1}^{S} {F_{s}^{n}\left ({x }\right) \cdot {l_{s}}\left ({x }\right) = 0},\\ 1, & otherwise, \end{cases}\tag{11}\end{equation*}
\begin{equation*} {l_{s}}\left ({x }\right) = \begin{cases} 1, & if ~{p_{s}} {~hosts~a~load,} \\ 0, & otherwise. \end{cases}\tag{12}\end{equation*}
Therefore, the total number of the running ENs is calculated by \begin{equation*} RB = \sum \limits _{n = 1}^{N} {k_{n}\left ({x }\right)}.\tag{13}\end{equation*}
Then, the resource utilization rate is calculated by \begin{equation*} RU = \frac {1}{RB} \cdot \sum \limits _{n = 1}^{N} {r_{n}\left ({x }\right)}.\tag{14}\end{equation*}
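As a sanity check on (9)–(14), the short Python sketch below computes the per-EN utilization and the average utilization RU over the running ENs; the VM counts and capacities are illustrative assumptions, not values from the experiment setup.

```python
# Sketch of the resource utilization model in (9)-(14); values are illustrative.
def utilization(vm_counts, capacities):
    """vm_counts[n]: occupied VM instances on EN n; capacities[n]: theta_n."""
    r = [vms / theta for vms, theta in zip(vm_counts, capacities)]   # (9)
    running = [1 if vms > 0 else 0 for vms in vm_counts]             # (11)-(12)
    rb = sum(running)                                                # (13) running ENs
    return sum(r) / rb if rb else 0.0                                # (14) average RU

# Three ENs, each with an assumed capacity of theta_n = 10 VM instances.
print(utilization(vm_counts=[6, 0, 3], capacities=[10, 10, 10]))     # -> 0.45
```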
D. Problem Formulation
In this paper, the goal is to minimize the latency presented in (8) and to maximize the resource utilization in (14). The problem is formalized as \begin{align*}&\min H,\ \max RU, \tag{15}\\&\mathrm{s.t.}~~{v_{m}} \in r{s_{n}}, \quad m \in \{1,2,\ldots,M\},~n \in \{1,2,\ldots,N\}, \tag{16}\\&\qquad \sum\limits_{m = 1}^{M} c{s_{m}} \le \sum\limits_{n = 1}^{N} d{s_{n}},\tag{17}\\&\qquad \sum\limits_{m = 1}^{M} u{s_{m}} \le \sum\limits_{n = 1}^{N} d{u_{n}}.\tag{18}\end{align*}
The objective function seeks the minimum latency and the maximum resource utilization rate achieved through the offloading of computing tasks. The constraints ensure that the total computation resources required by all the computing tasks do not exceed those of all the ENs, and that the caching space of all the ENs is sufficient for the computing tasks.
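The constraints (17)–(18) amount to a simple aggregate feasibility test; a minimal sketch follows, in which the demand and capacity vectors are hypothetical.

```python
# Feasibility test for constraints (17)-(18); the demand vectors are hypothetical.
def feasible(task_cs, task_us, en_ds, en_du):
    """True iff total computation (cs) and caching (us) demands fit the ENs."""
    return sum(task_cs) <= sum(en_ds) and sum(task_us) <= sum(en_du)

print(feasible(task_cs=[2, 3, 1], task_us=[1, 1, 2],
               en_ds=[4, 4], en_du=[3, 3]))  # True
```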
A Computation Offloading Method for Vehicle to Everything in Edge Computing
In this section, NSGA-III is adopted to search globally for the optimal solutions of the computation offloading problem. Then the best offloading strategy is selected from the last iteration by simple additive weighting (SAW) and multiple criteria decision making (MCDM).
A. Method Based on NSGA-III
In this section, the process of offloading the computing tasks is defined as a multi-objective optimization problem of reducing the latency and increasing the resource utilization. Given that NSGA-III is effective in addressing such optimization problems, it is utilized in the proposed method to optimize the objectives given in (15).
In the method, the offloading strategies are encoded first. Then the fitness functions and the constraints given in (16), (17) and (18) restrict the multi-objective optimization problem. Next, crossover and mutation operations are conducted to create new offloading solutions. Finally, suitable solutions in the last population are selected.
1) Encoding for Offloading Strategies
In this phase, we perform the encoding operation on the offloading strategies. In the V2X framework based on edge computing, each computing task from a vehicle is offloaded to an EN. We regard an offloading strategy as a gene in the genetic algorithm (GA). All the genes, each corresponding to the strategy for offloading one task, compose a chromosome, and the chromosome reflects the hybrid of these strategies. An instance of encoding the computation offloading strategies is presented in Fig. 2. As shown in the example, the chromosome is composed of
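A minimal sketch of this encoding step is given below, under the assumption that each gene is simply the index of the EN to which a task is offloaded; the concrete gene layout in Fig. 2 may differ.

```python
import random

# Encoding sketch: one gene per task, holding the index of the chosen EN.
M, N, POP_SIZE = 6, 3, 4   # tasks, edge nodes, population size (illustrative)

def random_chromosome(m_tasks, n_ens):
    """A chromosome is a list of offloading strategies, one gene per task."""
    return [random.randrange(n_ens) for _ in range(m_tasks)]

population = [random_chromosome(M, N) for _ in range(POP_SIZE)]
print(population[0])   # e.g. [2, 0, 1, 1, 2, 0]: task 0 -> EN 2, task 1 -> EN 0, ...
```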
2) Fitness Functions and Constraints
In this subsection, the fitness functions are used to evaluate the overall offloading strategies encoded in the chromosomes. Each chromosome is regarded as an individual, standing for a candidate solution of the multi-objective optimization problem, and all individuals comprise the population.
In this paper, the latency and the resource utilization, presented in (8) and (14) respectively, constitute the fitness functions. The proposed method aims to seek the hybrid offloading strategies with minimum latency and maximum resource utilization. The ideal solution realizes a trade-off between these objectives. Formulations (16), (17) and (18) define the constraints.
In this phase, we determine the parameters of GA. Among these parameters, the population size is denoted as
Meanwhile, denoting the gene for the offloading strategy of the
3) Crossover Operation
In this subsection, the crossover operation is utilized to generate new chromosomes based on the standard single-point operator. An example of the crossover operation with two chromosomes is illustrated in Fig. 3. As shown in the instance, the first step is determining the crossover point. Then the genes on either side of the point are swapped to create a group of new offloading strategies which make up the new chromosomes.
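A sketch of the standard single-point crossover described above, assuming the integer gene encoding used in the earlier sketch:

```python
import random

def single_point_crossover(parent_a, parent_b):
    """Swap the gene segments after a randomly chosen crossover point."""
    point = random.randrange(1, len(parent_a))          # crossover point
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

# Example with two hand-written chromosomes (EN indices are illustrative).
print(single_point_crossover([0, 1, 2, 0, 1], [2, 2, 0, 1, 0]))
```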
4) Mutation Operation
The mutation operation aims to create new chromosomes with high fitness by modifying genes of the chromosomes. Fig. 4 shows a mutation instance for a chromosome. In this example, each gene is modified with the same probability
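A corresponding mutation sketch: each gene is reassigned to a random EN with the same small probability, where the probability value used here is an assumption.

```python
import random

def mutate(chromosome, n_ens, prob=0.1):
    """Re-draw each gene (the target EN of a task) with probability `prob`."""
    return [random.randrange(n_ens) if random.random() < prob else gene
            for gene in chromosome]

print(mutate([0, 1, 2, 0, 1], n_ens=3, prob=0.1))
```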
5) Selection
The selection operation aims to pick the chromosomes with higher fitness for the next generation. We evaluate the solutions corresponding to the offloading strategies by (8) and (14) respectively. Based on the fitness values, we apply the usual domination principle for the
In the first non-dominated front, we randomly choose one solution from the overall solutions each time if the number of solutions we selected is less than
Firstly, we normalize the two fitness functions: \begin{align*} H' &= H - H^{*}, \tag{19}\\ RU' &= RU - RU^{*}.\tag{20}\end{align*}
Let \begin{align*} \alpha_{H} &= \max \frac{H'}{W_{H}}, \tag{21}\\ \alpha_{RU} &= \max \frac{RU'}{W_{RU}}.\tag{22}\end{align*}
Regarding the two fitnesses as two axes, the corresponding intercepts are denoted separately, and the normalized fitnesses are computed as \begin{align*} H'' &= \frac{H'}{\lambda_{H}}, \tag{23}\\ RU'' &= \frac{RU'}{\lambda_{RU}}.\tag{24}\end{align*}
After the normalization, the fitness values of each individual are in the domain [0, 1). The normalized solutions are spread on the 2-dimensional hyperplane composed by the two axes.
In the hyperplane, the number of reference points z is calculated by \begin{equation*} z = \binom{\theta + 1}{\theta},\tag{25}\end{equation*}
Then we associate the solutions which are included in the
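The ideal-point translation (19)–(20), the intercept-based normalization (23)–(24) and the reference-point generation on the two-objective hyperplane can be sketched as follows; the intercepts here are taken as the extreme values of the current front, which is a simplification of the full NSGA-III procedure.

```python
# Normalization and reference points for the two objectives H (min) and RU (max);
# a simplified sketch of steps (19)-(25), not the full NSGA-III routine.
def normalize(front):
    """front: list of (H, RU) values of the non-dominated solutions."""
    h_star = min(h for h, _ in front)                 # ideal point for H
    ru_star = min(ru for _, ru in front)              # translated origin for RU
    shifted = [(h - h_star, ru - ru_star) for h, ru in front]   # (19)-(20)
    lam_h = max(h for h, _ in shifted) or 1.0         # intercept on the H axis
    lam_ru = max(ru for _, ru in shifted) or 1.0      # intercept on the RU axis
    return [(h / lam_h, ru / lam_ru) for h, ru in shifted]      # (23)-(24)

def reference_points(theta):
    """theta + 1 evenly spaced points on the 2-D hyperplane, cf. (25)."""
    return [(i / theta, 1 - i / theta) for i in range(theta + 1)]

front = [(12.0, 0.40), (9.0, 0.55), (15.0, 0.70)]
print(normalize(front))
print(reference_points(4))
```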
The process of this method is elaborated in Algorithm 1. The algorithm's input is the population of the
Algorithm 1 Computation Offloading Method Based on NSGA-III
Conduct crossover and mutation operations
for each individual in the population do
end for
Do non-domination sorting for the
Select the solutions by non-dominated fronts
if the amount of solutions in the
Perform the generation of reference points under the criterion (25)
Associate solutions with reference points
Determine the
end if
return
B. Solution Evaluation Employing SAW and MCDM
The computation offloading method we propose aims to achieve a trade-off between reducing the latency and increasing the resource utilization. Each chromosome in a population stands for a set of hybrid strategies for offloading the computing tasks. In this paper, we adopt SAW and MCDM to select the relatively ideal strategy from the
The higher the latency is, the worse the solution becomes; therefore, the latency is a negative criterion. On the contrary, the resource utilization is a positive criterion. The latency of each solution is normalized by \begin{equation*} V({H_{ps}}) = \begin{cases} \dfrac{H^{\max} - H_{ps}}{H^{\max} - H^{\min}}, & H^{\max} - H^{\min} \ne 0, \\ 1, & H^{\max} - H^{\min} = 0, \end{cases}\tag{26}\end{equation*}
and the resource utilization is normalized by \begin{align*} V(R{U_{ps}}) = \begin{cases} \dfrac{R{U_{ps}} - R{U^{\min}}}{R{U^{\max}} - R{U^{\min}}}, & R{U^{\max}} - R{U^{\min}} \ne 0, \\ 1, & R{U^{\max}} - R{U^{\min}} = 0, \end{cases} \tag{27}\end{align*}
Furthermore, to assess the utility value of each individual, the weights of the two fitness functions are determined in advance: \begin{align*} V({C_{ps}}) = {w_{1}} \cdot V({H_{ps}}) + {w_{2}} \cdot V(R{U_{ps}}), \quad {w_{1}} + {w_{2}} = 1, \tag{28}\end{align*}
\begin{equation*} V(C) = \max \limits _{ps = 1}^{PS} V({C_{ps}}).\tag{29}\end{equation*}
Hence, the chromosome with the maximum utility value in the population corresponds to the optimal hybrid computation offloading strategy.
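The SAW-based selection in (26)–(29) can be sketched as below; the weights w1 and w2 are assumptions and would be tuned in practice.

```python
# SAW utility evaluation per (26)-(29); the weights are illustrative assumptions.
def saw_select(solutions, w1=0.5, w2=0.5):
    """solutions: list of (H, RU); returns the index of the best strategy."""
    hs = [h for h, _ in solutions]
    rus = [ru for _, ru in solutions]
    h_min, h_max = min(hs), max(hs)
    ru_min, ru_max = min(rus), max(rus)
    def v_h(h):    # (26): latency is a negative criterion
        return 1.0 if h_max == h_min else (h_max - h) / (h_max - h_min)
    def v_ru(ru):  # (27): resource utilization is a positive criterion
        return 1.0 if ru_max == ru_min else (ru - ru_min) / (ru_max - ru_min)
    utilities = [w1 * v_h(h) + w2 * v_ru(ru) for h, ru in solutions]  # (28)
    return max(range(len(solutions)), key=utilities.__getitem__)      # (29)

print(saw_select([(12.0, 0.40), (9.0, 0.55), (15.0, 0.70)]))   # -> index 1
```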
C. Method Overview
The proposed method aims to decrease the latency and increase the resource utilization. The process of offloading tasks is defined as a multi-objective optimization problem with the associated constraints. In view of the effectiveness of NSGA-III in solving multi-objective optimization problems, NSGA-III is employed for this problem. Firstly, the strategies generated by offloading the computing tasks are encoded. Then the fitness functions of each solution are calculated. Next, the crossover and mutation operations are conducted to generate new populations. Then the selection operation is employed to pick out the relatively optimal individuals that make up the next generation. Finally, SAW and MCDM are adopted to select the optimal solution.
Algorithm 2 describes the overview of the computation offloading method. The input of this algorithm is the initialized population and the output is the optimal computation offloading strategy
Algorithm 2 Computation Offloading Method for V2X in Edge Computing
while
Do crossover and mutation operations
for each solution in the population do
Calculate the latency by (8)
Calculate the resource utilization by (14)
end for
Do selection by Algorithm 1
end while
Select the optimal offloading strategy by (29)
return
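To tie Algorithm 2 together, the self-contained Python sketch below condenses the earlier encoding, crossover and mutation sketches into one loop; the fitness function is only a stand-in for (8) and (14), and the NSGA-III selection with non-dominated sorting and reference points is reduced to a simple weighted truncation for brevity.

```python
import random

# End-to-end sketch of Algorithm 2 with a stand-in fitness; not the full NSGA-III.
M, N, POP, GENS = 6, 3, 20, 50        # tasks, ENs, population size, generations

def fitness(chrom):
    """Stand-in for (8) and (14): here H counts EN switches, RU counts busy ENs."""
    h = sum(1 for a, b in zip(chrom, chrom[1:]) if a != b)    # pseudo-latency
    ru = len(set(chrom)) / N                                  # pseudo-utilization
    return h, ru

def crossover(a, b):
    p = random.randrange(1, M)
    return a[:p] + b[p:], b[:p] + a[p:]

def mutate(chrom, prob=0.1):
    return [random.randrange(N) if random.random() < prob else g for g in chrom]

population = [[random.randrange(N) for _ in range(M)] for _ in range(POP)]
for _ in range(GENS):
    offspring = []
    for _ in range(POP // 2):
        a, b = random.sample(population, 2)
        c1, c2 = crossover(a, b)
        offspring += [mutate(c1), mutate(c2)]
    # Simplified selection: keep the POP solutions with the best weighted score,
    # where the paper instead applies NSGA-III non-dominated sorting and
    # reference-point association (Algorithm 1).
    combined = population + offspring
    combined.sort(key=lambda c: fitness(c)[0] - fitness(c)[1])
    population = combined[:POP]

best = min(population, key=lambda c: fitness(c)[0] - fitness(c)[1])
print("best offloading strategy:", best, "fitness:", fitness(best))
```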
Experiment Evaluation
In this section, we conduct simulations and experiments to compare the proposed V2X-COM with other methods and verify its efficiency. The simulation setup, which includes the settings of the experiment coefficients and the descriptions of the comparative methods, is presented first. Then, the influences of different vehicle scales on the latency and resource utilization of the compared methods and V2X-COM are evaluated.
A. Coefficient Setup
In the simulation, we consider a number of vehicles running along a one-way road. Our experiments applied six datasets with different numbers of vehicles, and the number of vehicles is set to 20, 40, 60, 80, 100 and 120 respectively. In Table 2, we present the coefficient settings used in the experiment.
Our parameters are not fabricated out of thin air; we consulted several references. Reference [24] gives the transmit power of vehicles, which helps us set the value of the offloading speed between vehicles. Chen et al. give the transmit power of the base station in [21], which helps us set the value of the offloading speed between a vehicle and an edge node. Also, the coverage range of an EN is set as 500 m and the range of a vehicle is set as 50 m. We also consulted some other references which are not included in the reference list.
To analyze the advantage of V2X-COM, we employ some basic offloading methods. The comparative methods are introduced as follows.
Benchmark: The task is first considered to be offloaded to the edge node which is closest to the vehicle. If the task's resource requirement exceeds what the current edge node owns, the task is offloaded to an edge node near the current one according to the shortest path algorithm. This process terminates after all tasks have been offloaded.
Best Fit Decreasing (BFD): The tasks are sorted in decreasing order of their resource requirement. Then the first task is offloaded to the edge node which owns the least resources that are still sufficient for this task. This process terminates after all the tasks have been offloaded.
First Fit Decreasing (FFD): The computing tasks are sorted in decreasing order of their computation resource requirement. Then the first task is offloaded to the first edge node which owns enough resources for the current task. This process terminates after all the tasks have been offloaded. A minimal sketch of the BFD and FFD policies is given after this list.
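The sketch below illustrates the BFD and FFD baselines under the assumption that each task and each EN is characterized by a single resource demand/capacity value; the concrete resource model in the experiments may be multi-dimensional.

```python
# Sketch of the BFD and FFD baselines; single-dimensional demands are an assumption.
def best_fit_decreasing(task_demands, en_capacities):
    """Place each task on the EN with the least remaining but sufficient capacity."""
    remaining = list(en_capacities)
    placement = {}
    for t, demand in sorted(enumerate(task_demands), key=lambda x: -x[1]):
        candidates = [n for n, cap in enumerate(remaining) if cap >= demand]
        if not candidates:
            continue                                   # task cannot be offloaded
        n = min(candidates, key=lambda n: remaining[n])
        remaining[n] -= demand
        placement[t] = n
    return placement

def first_fit_decreasing(task_demands, en_capacities):
    """Place each task on the first EN that still has sufficient capacity."""
    remaining = list(en_capacities)
    placement = {}
    for t, demand in sorted(enumerate(task_demands), key=lambda x: -x[1]):
        for n, cap in enumerate(remaining):
            if cap >= demand:
                remaining[n] -= demand
                placement[t] = n
                break
    return placement

tasks, ens = [3, 5, 2, 4], [6, 6, 6]
print(best_fit_decreasing(tasks, ens), first_fit_decreasing(tasks, ens))
```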
These methods are implemented in a simulation based on the CloudSim framework on a personal computer with an Intel Core i7-4720HQ 3.60 GHz processor and 4 GB of RAM. The following sections show the corresponding evaluation results in detail.
B. Performance Evaluation of V2X-COM
The proposed V2X-COM aims to achieve a balance between minimizing the latency and improving the resource utilization. The six sub-figures in Fig. 5 show the utility value comparison of the solutions generated by V2X-COM at vehicle scales of 20, 40, 60, 80, 100 and 120; the numbers of solutions produced by V2X-COM are 3, 2, 4, 3, 3 and 4 respectively. After statistics and analysis, the most ideal strategy is the one with the highest utility value. For instance, the final strategy selected in Fig. 5a is solution 3 because of its maximum utility value.
Comparison of the utility value of the solutions generated by V2X-COM at different vehicle scales.
C. Comparison Analysis
In this phase, the comparisons between the compared methods and V2X-COM are analyzed in detail. To assess the performance of each method, the latency and the resource utilization are regarded as critical criteria. The results are presented in Figs. 6, 7, 8 and 9 respectively.
Analysis on the number of employed ENs: Fig. 6 presents the number of ENs employed by each of the four methods. The number of ENs is set to 50 in this experiment. As illustrated in Fig. 6, V2X-COM employs the same number of ENs as or fewer than Benchmark, BFD and FFD. Besides, the number of ENs used by V2X-COM increases as the number of vehicles increases. Considering the increase in the number of vehicles, the number of ENs in operation should also increase in preparation for responding to massive requests. Under the premise of serving all computing tasks, the fewer ENs a method employs as the vehicle scale increases, the better the method is.
Analysis on the resource utilization: After offloading all tasks to the ENs based on the relevant strategies, the resource utilization can be obtained. The comparison of the resource utilization at different vehicle scales by Benchmark, BFD, FFD and V2X-COM is shown in Fig. 7. The resource utilization is calculated based on the resource units that have been occupied in each EN. As the resource utilization is a significant index to judge the efficiency of V2X-COM, it can distinguish the differences among the methods. In Fig. 7, the advantage of V2X-COM is not very obvious compared with Benchmark, BFD and FFD; however, in general, V2X-COM is able to use the resources more reasonably than the other methods.
Analysis on the time consumption: Latency is an essential criterion for evaluating the performance of the methods. The latency is composed of the transmission time between vehicles, the transmission time from vehicles to ENs, the execution time and the feedback time. Fig. 8 shows the comparison of these components for Benchmark, BFD, FFD and V2X-COM at different vehicle scales. As the speed of V2I transmission is the same, the transmission time from vehicles to ENs and the feedback time are similar among Benchmark, BFD, FFD and V2X-COM. The difference lies in the V2V transmission time: V2X-COM consumes less time to find the destination vehicle than Benchmark, BFD and FFD.
Comparison of the number of employed edge nodes at different numbers of vehicles by Benchmark, BFD, FFD and V2X-COM.
The resource utilization comparison at different numbers of vehicles by Benchmark, BFD, FFD and V2X-COM.
Comparison of the different parts of the time consumption at different numbers of vehicles by Benchmark, BFD, FFD and V2X-COM.
Comparison of the time consumption at different numbers of vehicles by Benchmark, BFD, FFD and V2X-COM.
Fig. 9 summarizes the time consumption comparison of Benchmark, BFD, FFD and V2X-COM. Obviously, the proposed V2X-COM costs the least time among the four methods. However, when the number of vehicles is small, the latency of these methods is nearly equal. As the vehicle scale increases, the superiority of V2X-COM becomes evident.
Conclusion and Future Work
In recent years, the computing tasks in IoV have become too complex for vehicles to execute themselves. Therefore, vehicles have to offload the computing tasks to the cloud platform. However, compared with offloading the computing tasks to the cloud, MEC is more suitable for IoV. To solve the multi-objective optimization problem, a computation offloading method which employs V2X communication, named V2X-COM, is proposed in this paper. Firstly, by using V2X communication, the offloading route from the origin vehicle to the destination vehicle is acquired. Then, NSGA-III is utilized to realize the multi-objective optimization. The efficiency of V2X-COM is verified by the subsequent experimental evaluations. In future work, we will attempt to adjust and extend our method to adapt to real-world scenarios.