Introduction
The Traveling Salesman Problem (TSP) seeks the shortest tour that traverses all nodes once and only once; it is a classic combinatorial optimization problem. In the symmetric TSP, the distance between any two nodes is the same in both directions.
According to solution accuracy, TSP algorithms are divided into two categories. One is the exact algorithms, which can find the optimal solution for instances with thousands of nodes; typical examples are branch and bound [2], branch and cut [3], branch and price [4], and dynamic programming [5]. The other is the heuristic intelligent algorithms, e.g., the Genetic Algorithm (GA) [1], [6], [7], Simulated Annealing (SA) [8], [9], the Artificial Bee Colony algorithm (ABC) [10], Ant Colony Optimization (ACO) [11], Particle Swarm Optimization (PSO) [12], neural networks [13], [14], tabu search [15], the shuffled frog-leaping algorithm [16], and the wedging insertion method [17].
Every algorithm has strengths and weaknesses. Exact algorithms can obtain the optimal solution but usually consume long runtimes, particularly on large-scale TSP. Intelligent algorithms can find a near-optimal solution in relatively short time, and hybrid intelligent algorithms synthesizing multiple methods can achieve even more competitive performance.
Othman et al. [18] investigate the performance and behavior of the Water Flow-like Algorithm (WFA) applied with 2-Opt and 3-Opt to TSP.
A parallelized neural network system (PMSOM) is proposed to solve TSP [19]. It divides all cities into municipalities, finds the best solution for each municipality, and joins neighboring municipalities to generate the final solution. Test results show that the approach obtains better answers in terms of both solution quality and runtime.
A hybrid genetic algorithm (HGA) with two local optimization strategies is introduced to solve TSP [20]. The computation results show that it has better performance than GA.
A discrete symbiotic organisms search (DSOS) algorithm is proposed to find a near optimal solution for TSP [21]. The symbiotic organisms search is improved and extended by using three mutation-based local search operators to reconstruct its population. Numerical results show that the proposed method can achieve results close to the theoretical best known solutions (BKS) within a reasonable time.
In the improved fruit fly optimization algorithm (IFOA), three improvements are incorporated [22]. The results show very competitive performance in terms of convergence rate and precision.
A hybrid method between dynamic programming and memetic algorithm is introduced to solve the TSP with hotel selection which is a variant of the classic TSP [23]. Experiments show the proposed approach has remarkable performance.
An effective local search algorithm based on simulated annealing and greedy search techniques (ASA-GS) is presented to solve the TSP [24]. Test results show that the proposed algorithm provides better compromise between CPU time and solution accuracy for the TSP.
PSO and GA suffer from premature and poor convergence on high-dimensional complex problems and cannot guarantee the optimal solution. Neural networks easily generate unreasonable or locally optimal solutions. ACO also tends toward premature convergence, but it is a positive-feedback method with very few parameters, which makes it easy to tune. ACO has been considered effective for small-scale TSP [25], [26], and some hybrid algorithms based on ACO achieve more competitive performance.
Ant colony extended (ACE) is a novel extensional algorithm of the general ACO framework [27]. Experimental results show ACE has better performance than ant colony system and max-min ant system in almost every tested TSP instance.
A new hybrid method based on SA, GA, ACO, and PSO is presented to solve TSP [28]. First, initial solutions are generated by ACO; then they are optimized by genetic simulated annealing. After a given number of iterations, the pheromone information between groups is exchanged by PSO. Experimental results on TSPLIB show that the proposed method yields better solutions than the others.
In [29], a parallel cooperative hybrid algorithm (PACO-3Opt) is proposed to solve TSP. Firstly, the ACO algorithm generates initial solutions in parallel; then the 3-Opt algorithm improves them. The algorithm enhances the quality and robustness of the obtained solutions, significantly decreases the computational time, and can solve large-scale TSP problems within a reasonable time.
A hybrid algorithm (ACO-Taguchi) based on ACO and the Taguchi method is proposed to solve TSP [30]. The performance of the proposed algorithm is improved after optimizing the ACO parameters by using the Taguchi method.
Wang presents the hybrid max-min ant system (HMMA) [25], with a four-vertices-and-three-lines inequality, to solve TSP. The experimental results show that it can find better approximate solutions.
A hierarchical algorithm based on clustering and ACO (HCACO) is presented for solving large-scale TSP [31]. Firstly, the large-scale TSP problem is decomposed into several small-scale subproblems by the k-means algorithm, and each subproblem is solved by ACO. Then the local solutions of all subproblems are merged. Numerical simulation results show that the proposed algorithm is beneficial for large-scale TSP.
In [32], a hybrid method called PSO-ACO-3Opt, based on ACO, PSO, and 3-Opt, is proposed. In this algorithm, PSO optimizes the parameters of ACO, the initial solution is obtained by ACO, and finally 3-Opt is used to avoid falling into local minima. Experimental results show that the proposed method outperforms the compared methods in terms of solution quality and robustness.
Another hybrid algorithm (C-PSO-ACO-KOpt), based on ACO, PSO, and k-Opt, is proposed to solve TSP [33]. ACO generates the initial swarm for PSO; PSO then searches this swarm for an optimal path, which is further improved by k-Opt. Benchmark test results show that the algorithm is more efficient in terms of accuracy as well as runtime.
For a large TSP instance, a feasible strategy is to first decompose it into a number of small-scale subproblems that preserve the main structure of the optimal tour, then solve all the subproblems, and finally merge all the local tours.
This article proposes a hybrid hierarchical algorithm for TSP. Firstly, the TSP is decomposed into small-scale TSPs by a clustering algorithm, exploiting the superiority of the ACO algorithm on small-scale TSP. Then ACO solves the tour within each cluster, and the local tours are merged across clusters. Finally, the k-Opt algorithm is used to escape local optima and obtain the global tour. Three groups of benchmarks from TSPLIB are used to validate the algorithm. The results show that the algorithm finds tours close to the theoretical optimal values within a reasonably short runtime and has strong robustness.
The rest of the article is organized as follows. Section 2 introduces the Density Peaks Clustering (DPC), ACO, and k-Opt algorithms. The proposed algorithm is explained in detail in Section 3. Experiments and comparisons are presented in Section 4. The results are analyzed in Section 5. Finally, Section 6 draws conclusions and outlines future work.
Materials and Methods
A. Density Peaks Clustering
There are many clustering algorithms, such as k-means [34], affinity propagation [35], and density peaks clustering [36]. The DPC algorithm assumes that cluster centers are surrounded by neighbors with lower local density and lie relatively far from any point with higher local density. The local density $\rho_i$ of point $i$ is defined as \begin{equation*} \rho _{i} =\sum \limits _{j} {\chi (d_{ij} -d_{c})}.\tag{1}\end{equation*}
Here, $d_{ij}$ is the distance between points $i$ and $j$, $d_c$ is a cutoff distance, and \begin{equation*} \chi (x)=\begin{cases} 1,&x<0 \\ 0,&x\ge 0. \end{cases}\tag{2}\end{equation*}
Let $\delta_i$ denote the minimum distance from point $i$ to any point with higher local density: \begin{equation*} \delta _{i} =\min \limits _{j:\rho _{j} >\rho _{i}} (d_{ij}).\tag{3}\end{equation*}
If point $i$ has the highest local density of all points, then \begin{equation*} \delta _{i} =\max _{j} (d_{ij}).\tag{4}\end{equation*}
The cluster centers are the points with especially large values of both $\rho_i$ and $\delta_i$.
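The quantities $\rho$ and $\delta$ from Eqs. (1)–(4) can be sketched as follows. This is a minimal illustration assuming 2-D points and Euclidean distance, not the authors' implementation:

```python
import numpy as np

def density_peaks(points, d_c):
    """Compute the local density rho (Eq. 1) and the distance delta to the
    nearest point of higher density (Eqs. 3-4) for each point."""
    n = len(points)
    # pairwise Euclidean distance matrix d_ij
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    # Eq. (1): rho_i counts neighbors closer than the cutoff d_c
    # (minus 1 to exclude the point itself, since d_ii = 0 < d_c)
    rho = np.sum(d < d_c, axis=1) - 1
    delta = np.empty(n)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if higher.size:                  # Eq. (3): nearest denser point
            delta[i] = d[i, higher].min()
        else:                            # Eq. (4): point of highest density
            delta[i] = d[i].max()
    return rho, delta
```

Points combining large `rho` and large `delta` are then chosen as cluster centers, and every remaining point is assigned to the same cluster as its nearest neighbor of higher density.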
B. Ant Colony Optimization
The ant colony algorithm [26], [37], [38], [39], proposed by M. Dorigo in 1991, is a heuristic search algorithm based on swarm intelligence. In nature, ants can find the shortest tour between the nest and a food source. Ants deposit a chemical substance called pheromone along the paths they travel; other ants follow the pheromone trail to search for the food source and reinforce the pheromone when returning to the nest. Meanwhile, the pheromone evaporates over time, reducing its attractiveness. The pheromone density on a path depends on how often that path has been selected, especially recently, so ants find the shortest tour by choosing the paths with the highest pheromone density. Inspired by this foraging behavior, ant colony optimization was proposed to solve optimization problems in discrete systems. The algorithm has been widely used for the traveling salesman problem, the assignment problem [40], the scheduling problem [41], feature selection [42], and path planning for mobile robots [43]. The ant colony algorithm is a stochastic optimization method requiring no prior knowledge: ants select nodes randomly, gradually optimize the tour with the aid of pheromones, and finally obtain the globally optimal tour.
Assume that the TSP has $n$ nodes and that $m$ ants search in parallel. The probability that the $k$th ant moves from node $i$ to node $j$ at time $t$ is \begin{equation*} P_{ij}^{k} (t)=\begin{cases} \tau _{ij}^{\alpha } (t)\eta _{ij}^{\beta } (t)/\sum \nolimits _{s\in a_{k}} {\tau _{is}^{\alpha } (t)\eta _{is}^{\beta } (t)},&j\in a_{k} \\ 0,&\text{otherwise.} \end{cases}\tag{5}\end{equation*}
Here, $\tau_{ij}(t)$ is the pheromone intensity on edge $(i,j)$, $\eta_{ij}(t)$ is the heuristic visibility (typically $1/d_{ij}$), $\alpha$ and $\beta$ weight their relative influence, and $a_k$ is the set of nodes not yet visited by the $k$th ant. After each iteration the pheromone is updated as \begin{equation*} \tau _{ij} (t+1)=(1-\rho)\tau _{ij} (t)+\Delta \tau _{ij} (t,t+1).\tag{6}\end{equation*}
Here, $\rho \in (0,1)$ is the pheromone evaporation rate, and the total pheromone increment is the sum of the contributions of all $m$ ants: \begin{equation*} \Delta \tau _{ij} (t,t+1)=\sum \limits _{k=1}^{m} {\Delta \tau _{ij}^{k} (t,t+1)}.\tag{7}\end{equation*}
In the ant-cycle system, the quantity of pheromone left by the $k$th ant between nodes $i$ and $j$ is \begin{equation*} \Delta \tau _{ij}^{k} (t,t+1)=\begin{cases} Q/L_{k},&\text{if the }k\text{th ant uses edge }(i,j) \\ 0,&\text{otherwise.} \end{cases}\tag{8}\end{equation*}
Here, $Q$ is a constant and $L_k$ is the length of the tour constructed by the $k$th ant.
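The ant-cycle scheme of Eqs. (5)–(8) can be sketched as below for a symmetric distance matrix `d` with zero diagonal. The parameter defaults (`alpha`, `beta`, `rho`, `Q`, the ant count) are illustrative, not the paper's tuned values:

```python
import numpy as np

def aco_tsp(d, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
            rho=0.5, Q=1.0, seed=0):
    """Minimal ant-cycle ACO for a symmetric distance matrix d,
    following Eqs. (5)-(8)."""
    rng = np.random.default_rng(seed)
    n = len(d)
    eta = 1.0 / (d + np.eye(n))          # visibility 1/d_ij (diagonal guarded)
    tau = np.ones((n, n))                # uniform initial pheromone
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        delta_tau = np.zeros((n, n))
        for _k in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # Eq. (5): probability proportional to tau^alpha * eta^beta
                w = tau[i, cand] ** alpha * eta[i, cand] ** beta
                j = int(rng.choice(cand, p=w / w.sum()))
                tour.append(j)
                unvisited.remove(j)
            L = sum(d[tour[t], tour[(t + 1) % n]] for t in range(n))
            if L < best_len:
                best_tour, best_len = tour, L
            for t in range(n):           # Eq. (8): deposit Q / L_k per edge
                a, b = tour[t], tour[(t + 1) % n]
                delta_tau[a, b] += Q / L
                delta_tau[b, a] += Q / L
        tau = (1 - rho) * tau + delta_tau  # Eqs. (6)-(7): evaporate + deposit
    return best_tour, best_len
```

Depositing on both `(a, b)` and `(b, a)` keeps the pheromone matrix symmetric, matching the symmetric TSP considered here.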
C. k-Opt
Some optimization algorithms (e.g., GA, ABC) have been presented to decrease the tour length, but they sometimes fall into local minima when searching for the optimal TSP tour. The k-Opt algorithm has been used to avoid such local optima; 2-Opt [44] and 3-Opt [45] are its typical subclasses.
The 2-Opt algorithm removes two edges from the tour and reconnects the resulting segments to obtain a new tour; a move is kept only when the new tour is shorter. Repeatedly removing and reconnecting edges leads toward an optimal tour. The 3-Opt algorithm is similar, but removes and reconnects three edges instead of two.
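The remove-and-reconnect step can be sketched as follows; `d` is assumed to be a symmetric distance matrix (indexable as `d[a][b]`) and `tour` a list of node indices:

```python
def two_opt(tour, d):
    """2-Opt local search: remove edges (a,b) and (c,e), reconnect as
    (a,c) and (b,e) by reversing the segment in between, and keep the
    move only if the tour gets shorter."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if a == e:                       # edges share a node; skip
                    continue
                # gain test: new edges shorter than the removed pair?
                if d[a][c] + d[b][e] < d[a][b] + d[c][e] - 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

The small tolerance in the gain test avoids endless loops caused by floating-point ties.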
If the three removed edges are pairwise non-adjacent, spanning six nodes, all possible 3-Opt reconnection variants are as shown in Fig. 1, and Fig. 2 demonstrates all possible variants among five nodes. A 3-Opt move is equivalent to two or three 2-Opt moves; it may provide better solutions, but it is significantly slower.
All possible 3-Opt reconnection variants among six nodes. There are seven variants, of which (a), (b), and (c) are essentially 2-Opt moves, while (d), (e), (f), and (g) are 3-Opt moves.
All possible 3-Opt reconnection variants among five nodes. The red edge is kept fixed. There are five possible variants, of which (c), (d), and (e) are essentially 2-Opt moves, while (a) and (b) are 3-Opt moves and are symmetric to each other.
Proposed Algorithm (DPC-ACO-KOpt)
Generally, when solving TSP with the ACO algorithm, computational complexity grows rapidly and nonlinearly with the dimension of the TSP instance. When there are more than 200 nodes, it is difficult to compromise between runtime and solution accuracy. Jiang et al. [31] consider that the runtime and solution quality of the ACO algorithm are both in the optimal region when there are fewer than 40 nodes, but the runtime is longer and the quality lower beyond 40. In [31], the clustering is conducted with the k-means algorithm. K-means is applicable to nodes with circular or spherical distributions, but not to linear ones. DPC, which groups nodes according to density, works effectively for more distribution styles, such as circular, spherical, curvilinear, and linear ones. Using DPC in this kind of hierarchical solution is a bold new attempt.
As shown in Fig. 3 and Fig. 4, this paper proposes a hierarchical hybrid method based on ACO. Firstly, all nodes are grouped and the cluster center node within each group is found by the DPC algorithm, which takes a short, linear time. The nodes are thereby divided into two layers. The lower layer consists of all clusters; every cluster is a small-scale TSP problem that can be solved by ACO. All center nodes comprise the upper layer, where each center node represents one cluster of the lower layer; therefore the upper layer is also a small-scale TSP problem. According to the sequence of groups in the upper layer, the initial global tour visiting all nodes is constructed by merging the local tours of all clusters in the lower layer. The final global tour is obtained after optimizing the initial global tour with the k-Opt algorithm.
The local tour of each group in the lower layer is integrated into the global tour. All center nodes in the upper layer constitute a new TSP problem, which is also solved by ACO; the order of the center nodes in the upper layer gives, at the macro level, the sequence in which the groups' local tours are integrated. The local tour in every group is a Hamiltonian loop; it must therefore be disconnected at one node and reconnected with the adjacent group. To obtain a good tour, the pair of nodes with the shortest distance between two adjacent groups is disconnected from the local tours in the lower layer and reconnected to construct the initial global TSP tour.
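A much-simplified sketch of this merging idea is given below. It chains the local cycles in the upper-layer order, cutting each cycle just after its node closest to the next cluster; the paper's exact rule (disconnecting the closest pair on both sides) is slightly more involved, so treat this as an illustration only:

```python
def merge_local_tours(cluster_tours, d):
    """Chain local cycles in the upper-layer order into one global tour.
    Each cycle is cut just after its node closest to the next cluster,
    giving an open path, and the open paths are concatenated."""
    segments = []
    k = len(cluster_tours)
    for idx, cyc in enumerate(cluster_tours):
        nxt = cluster_tours[(idx + 1) % k]
        # node of this cluster closest to any node of the next cluster
        u = min(cyc, key=lambda a: min(d[a][b] for b in nxt))
        i = cyc.index(u)
        segments.append(cyc[i + 1:] + cyc[:i + 1])  # open path ending at u
    return [node for seg in segments for node in seg]
```

Each segment thus ends at the node facing the following cluster, so consecutive segments are joined across short inter-cluster edges.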
The optimal ACO parameters for merging the local tours may differ when clustering yields different numbers of groups; the parameters therefore need to be determined experimentally for various problem sizes.
When clustering nodes with the DPC algorithm, increasing the number of nodes per cluster decreases both the number of groups and the clustering time, but finding the local tour with ACO in each cluster takes longer, so the total runtime usually increases. In addition, the number of nodes per group affects the solution accuracy of the proposed method. Thus, both the runtime and the accuracy of the algorithm are closely related to the number of nodes in each cluster.
Decomposing the problem into subproblems improves solution speed but may produce non-optimal solutions when the local paths of the subproblems are connected. Therefore, solution accuracy should be enhanced with a local path optimization algorithm such as k-Opt.
Experiment
The proposed algorithm is implemented in MATLAB 2015b with a single thread; the CPU is an Intel(R) Core(TM) i7-6700 @ 3.40 GHz, the memory capacity is 8 GB, and the OS is Windows 7. TSPLIB [46] is the public library of TSP benchmark instances commonly used for testing optimization algorithms. The experiments are divided into three groups according to problem size: small-scale, large-scale, and very large-scale.
A. The Scale of Each Clustering
To find the optimal range for the number of nodes in each cluster, the numbers from 30 to 80 with an interval of five are taken as candidate scales. Five instances (ch150, kroA200, rd400, d493, and p654) are selected for the parameter determination experiments because they have different distribution styles and not too many nodes, yet wide and uniform distributions. Each candidate number was tested 20 times on the five TSPLIB instances. The changed relative error is computed according to (9), where $L_{ACO}$ and $L_{HHA}$ are the average tour lengths found by plain ACO and by the proposed hierarchical hybrid algorithm, respectively.\begin{equation*} CRE=(L_{ACO} -L_{HHA}) / L_{BKS} \times 100.\tag{9}\end{equation*}
For a certain TSP instance, let $T_{ACO}$ and $T_{HHA}$ denote the runtimes of plain ACO and of the hierarchical hybrid algorithm; their ratio is \begin{equation*} CT=T_{ACO} /T_{HHA}.\tag{10}\end{equation*}
Let $L_{BKS}$ denote the length of the best known solution (BKS) of the instance.
In order to obtain unbiased comparisons, all candidate scales were evaluated under identical conditions and averaged over the repeated runs.
Both accuracy improvement and runtime reduction must be comprehensively considered in the assessment. The evaluation function is defined as (11).\begin{equation*} EV=\eta \times CRE+CT.\tag{11}\end{equation*}
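Eqs. (9)–(11) amount to the small helper below; the argument names are illustrative:

```python
def evaluation(L_aco, L_hha, L_bks, T_aco, T_hha, eta):
    """Combine accuracy improvement and speed-up, per Eqs. (9)-(11)."""
    cre = (L_aco - L_hha) / L_bks * 100   # Eq. (9): changed relative error
    ct = T_aco / T_hha                    # Eq. (10): runtime ratio
    return eta * cre + ct                 # Eq. (11): evaluation value
```

For example, a candidate cluster size that shortens the average tour from 110 to 105 against a BKS of 100 while cutting the runtime from 20 s to 5 s scores $1 \times 5 + 4 = 9$ with $\eta = 1$.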
Let $\eta$ be a weighting coefficient that balances the accuracy improvement $CRE$ against the runtime ratio $CT$.
The relative errors and runtimes from this test are shown in Table 1. We observed that the proposed algorithm improved the accuracy and reduced the runtime for all instances. The accuracy improvement was best for ch150 and kroA200, where the evaluation value was highest, and third best for rd400; the runtime reduction was most obvious for ch150, kroA200, and d493, and second best for rd400 and p654. Across the five instances, the candidate number 35 was hit four times and the number 30 once. The result shows that the runtime and the relative error of the proposed algorithm are both in the best range when each group contains at most 35 nodes. As the nodes become more numerous, the number of clusters increases, and so do the communication and merging times between groups. In theory, the problem sizes are optimal for the ACO runs in both the upper and lower layers when the nodes are clustered into 35 groups with at most 35 nodes per group on average.
That is, the hierarchical hybrid algorithm is theoretically best matched to instances of about 1225 ($35\times 35$) nodes, for which the ACO subproblems in both the upper and lower layers have the optimal size.
B. Evaluation of the DPC-ACO-KOpt Algorithm on the Small-Scale TSP Problems
There are usually fewer than 10 groups after applying the DPC algorithm to small-scale TSP problems, so the communication time between groups is very short. The problem sizes for ACO differ only slightly with and without clustering; thus the runtime of the proposed algorithm decreases, but not markedly.
Since the node distribution and the size of every TSP instance may differ, the optimal values of the ACO parameters may also differ among instances.
Generally, the number of ants is set roughly equal to the number of nodes in the (sub)problem.
The k-Opt algorithm is used to escape from locally optimal solutions and has a significant effect on improving solution accuracy. The time cost and accuracy improvement of the k-Opt step are shown in Table 4 for 10 small-scale TSP instances. The optimization effect is measured by (12): the percentage by which the average tour length decreases under k-Opt, relative to the BKS length. The time cost is calculated by (13): the percentage of the k-Opt runtime relative to the total runtime with k-Opt.\begin{align*} DL_{KOpt}=&~(L_{avgwithoutKOpt} -L_{avg})/L_{BKS} \times 100. \tag{12}\\ TC_{KOpt}=&~(T_{KOpt} /T_{\mathrm {with}KOpt})\times 100.\tag{13}\end{align*}
We observed that the k-Opt algorithm yielded better tour lengths for all instances. The optimization effect was most obvious for rat99, up to 13.14%; in the worst case, the tour length decreased by 2.23% for eil51. The time consumed by k-Opt was below 0.26% of the total and almost negligible.
As previously mentioned, the characters and features of some related algorithms have been briefly stated in Section 1. To test the performance of the hierarchical hybrid algorithm, 11 other algorithms were selected to compare with the DPC-ACO-KOpt algorithm: PACO-3Opt (2016) [29], PSO-ACO-3Opt (2015) [32], ACE (2015) [27], ASA-GS (2011) [24], SA-ACO-PSO (2011) [28], WFA-2Opt (2013) [18], WFA-3Opt (2013) [18], ACO-Taguchi (2013) [30], IFOA (2017) [22], DSOS (2017) [21], and C-PSO-ACO-KOpt (2017) [33]. Table 5 demonstrates the comparison of the DPC-ACO-KOpt algorithm with these 11 algorithms for 10 small-scale TSP instances. The quality criteria of the solutions are the average tour length, the standard deviation (SD) and the relative error (RE). The RE is calculated in (14). \begin{equation*} RE=(L_{avg} -L_{BKS})/L_{BKS} \times 100.\tag{14}\end{equation*}
In terms of the average tour length and
For the small-scale TSP instances, the proposed algorithm performs better than the remaining 11 algorithms in 80% of instances ranging from 51 up to 200 nodes (i.e., 8 out of 10 instances). The average relative error was 0.07%, which was obviously less than the 0.14% of C-PSO-ACO-KOpt, 0.21% of ACE, and 0.4% of PACO-3Opt respectively.
The standard deviations of these tours found by the proposed algorithm were minimal for eil51, berlin52, and kroA200. WFA-2Opt had the best standard deviations for kroA100, lin105, and ch150. C-PSO-ACO-KOpt was the most stable for rat99 and eil101, PSO-ACO-3Opt for eil76, and PACO-3Opt for st70. This means the proposed algorithm has more robust solutions than other algorithms.
The proposed algorithm resolves the smaller scale TSP by using ACO after clustering all nodes and decomposing the original TSP problem. The problem size for the ACO is much smaller in the proposed algorithm, and the runtime is shorter in each cluster. Although it spends some runtime on merging the local tours, the total runtime is still shorter. The runtime comparison for 10 small-scale TSP instances is summarized in Fig. 5.
Runtime comparison of the proposed algorithm with ACO-KOpt (without DPC) for 10 small-scale TSP instances. DPC-ACO-KOpt outperformed ACO-KOpt without DPC on all test instances, and the advantage was more significant when there were more than 105 nodes.
C. Evaluation of the DPC-ACO-KOpt Algorithm on the Large-Scale TSP Problems
There are usually no more than 35 groups after clustering by the DPC algorithm for large-scale TSP, and the communication time between groups is not long. The problem sizes for ACO are clearly different with and without clustering, so the runtime advantage is significant for these large-scale instances. Eleven cases from TSPLIB with 350 to 1200 nodes were chosen for testing. The test environment was the same as for the small-scale instances, and the parameter settings were kept the same as in the small-scale experiments.
To test the performance of the DPC-ACO-KOpt algorithm on large-scale TSP problems, eight other algorithms briefly explained in Section 1 were selected for comparison: PACO-3Opt (2016) [29], PSO-ACO-3Opt (2015) [32], HMMA (2015) [25], PMSOM (2015) [19], HCACO (2014) [31], HGA (2014) [20], ASA-GS (2011) [24], and DSOS (2017) [21]. The comparison results on the 11 large-scale instances are shown in Table 7. The DPC-ACO-KOpt algorithm yielded better solutions than the eight other algorithms, in terms of both average and best tour length, except on the rat-style instances (rat575 and rat783). The relative error of the proposed algorithm was less than 0.99% except on the rat-style instances. ASA-GS is evidently more suitable for rat-style instances than the proposed algorithm: the node densities in the rat-style instances are almost equal, so the connections between groups cannot lead to a better tour when merging the local tours, and k-Opt does not remedy the deficiency.
For the large-scale TSP instances, the proposed algorithm outperforms the other algorithms in 81.82% of instances ranging from 400 up to 1173 nodes (i.e., 9 out of 11 instances). The average relative error was 1.45%, which was less than the 1.65% of ASA-GS, 2.4% of PACO-3Opt, and 2.65% of PSO-ACO-3Opt. Excluding the rat-style instances, the average relative error of the proposed algorithm drops from 1.45% to 0.70%.
D. Evaluation of the DPC-ACO-KOpt Algorithm on the Very Large-Scale TSP Problems
There are usually no more than 100 groups after clustering by the DPC algorithm for very large-scale TSP; the communication time between groups may be long and cannot be neglected. However, every group after clustering is much smaller than the unclustered problem, so the overall runtime can still be reduced. The test environment and parameters are the same as in the large-scale instances.
To test the performance of the DPC-ACO-KOpt algorithm in the very large-scale TSP problems, five other algorithms briefly explained in Section 1 were selected to compare with the DPC-ACO-KOpt algorithm: PMSOM (2015) [19], HMMA (2015) [25], HCACO (2014) [31], ASA-GS (2011) [24], and DSOS (2017) [21]. The comparison results with five other algorithms on the nine very large-scale instances are shown in Table 8.
We observed that the DPC-ACO-KOpt algorithm constructed better solutions than all the remaining algorithms on every very large-scale TSP instance. The relative error of the proposed algorithm was between 0.22% and 1.57%.
For the very large-scale TSP instances, the proposed algorithm surpasses the five other algorithms on all nine instances in terms of solution accuracy. The average relative error was 0.54%, which was significantly less than the 2.17% of ASA-GS, 6.94% of PMSOM, and 10.96% of HCACO.
Compared with the suboptimal algorithm in each group, the solution accuracy of the proposed algorithm is 2.0 times better for the small-scale problems (0.07% versus 0.14%), 1.14 times better for the large-scale problems (1.45% versus 1.65%), and 4.02 times better for the very large-scale problems (0.54% versus 2.17%) in terms of the average relative error. The proposed algorithm is thus more applicable to very large-scale TSP than to small- and large-scale TSP, which verifies that the algorithm's advantage is most remarkable for very large-scale problems.
Overall, the proposed algorithm outperforms the other algorithms in 86.67% of the 30 instances ranging from 51 to 3038 nodes (i.e., 26 out of 30 instances). The average relative error is 0.75% over all 30 instances (0.07% for small-scale, 1.45% for large-scale, and 0.54% for very large-scale TSP). Therefore, this result supports the claim that the proposed algorithm realizes high-accuracy TSP solutions and competes favorably with state-of-the-art TSP algorithms, especially on very large-scale problems.
Results and Analysis
The better solution accuracy of the proposed algorithm has been demonstrated in Tables 5, 7, and 8. The results arise from a better strategy, parameter tuning, and local optimization. The solution accuracy after grouping and merging does not decline but instead surpasses that of other algorithms, mainly for two reasons. One is that the ACO algorithm has higher accuracy on small-scale TSP problems than on large-scale ones. The other is that the DPC algorithm, which clusters nodes based on density, is effective for decomposing the TSP problem. It is therefore appropriate that the proposed algorithm solves small-scale TSPs with ACO after DPC decomposes the large-scale TSP. The performance is further revealed by descriptive statistical analysis and runtime analysis.
A. Descriptive Statistical Analysis on Solution Accuracy
The two best algorithms for small-scale instances are the proposed algorithm and C-PSO-ACO-KOpt, while for large-scale and very large-scale instances are the proposed algorithm and ASA-GS. We compare them against BKS using MATLAB statistical package to further validate the performance.
In summary, descriptive statistics characterize these algorithms in terms of mean, standard deviation, minimum, maximum, and range. Levene's test validates whether all the algorithms have the same variance. The one-way Analysis of Variance (ANOVA) test reveals performance differences among the solutions and is applied whenever the parametric assumption is met.
Tables 9 and 10 demonstrate the descriptive statistics on the performance of the proposed algorithm, C-PSO-ACO-KOpt, and ASA-GS against BKS. The proposed algorithm has smaller values than C-PSO-ACO-KOpt or ASA-GS in terms of mean, standard deviation, maximum, and range, which supports the claim that the proposed algorithm is better. The data for the latter two algorithms have wider ranges and spread more widely around their means, while the proposed algorithm has a smaller range and less dispersion around its mean.
The analysis of equal variance among the three algorithms is based on Levene's test because it is robust even under departures from data normality. In Table 9, we conclude that the test results on small-scale, large-scale, and very large-scale instances are statistically insignificant, suggesting that the null hypothesis of equal variance cannot be rejected. In other words, the proposed algorithm and the suboptimal algorithms have equal variance against BKS.
One-way ANOVA is conducted to assess the difference between the proposed algorithm and BKS. In Table 12, the test result indicates that the majority of the variation lies within groups rather than between groups; the residual accounts for only a minimal part of the variance. Therefore, the ANOVA test is statistically insignificant in light of the high p-value and low F-statistic. In other words, there is no statistically significant difference between the BKS and the proposed algorithm.
B. Runtime Analysis
The runtime is an important performance measure for an optimization algorithm. Let $n$ be the number of nodes, $m$ the number of ants, and $N_c$ the number of iterations. The time complexity of the traditional ACO algorithm is \begin{equation*} T(n)=O(N_{c} \times n^{2}\times m)=O(n^{3}).\tag{15}\end{equation*}
Let $C_m$ be the maximum number of nodes in each cluster, a constant of at most 35. The time complexity of the hierarchical hybrid algorithm is \begin{equation*} T^{HHA}(n)=O(n+N_{c} \times C_{m}^{3}\times \left \lceil{ {n/C_{m}} }\right \rceil +n)=O(n).\tag{16}\end{equation*}
In the hierarchical hybrid algorithm, the runtimes of grouping the nodes with DPC and of merging the local solutions are both $O(n)$, corresponding to the two linear terms in (16).
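A rough back-of-the-envelope comparison of (15) and (16) can be sketched as follows; the constants `n_c` and `c_m` are illustrative assumptions, not the paper's measured values:

```python
import math

def ops_aco(n, n_c=100, m=None):
    """Rough operation count per Eq. (15): N_c iterations of m ants,
    each building an n-step tour with O(n) choice work per step."""
    m = n if m is None else m
    return n_c * n * n * m

def ops_hha(n, n_c=100, c_m=35):
    """Rough count per Eq. (16): O(n) DPC grouping, ACO inside
    ceil(n / c_m) clusters of at most c_m nodes, and O(n) merging."""
    return n + n_c * c_m ** 3 * math.ceil(n / c_m) + n
```

With these counts, at $n = 1000$ the cubic term of plain ACO dominates the clustered variant by several orders of magnitude, matching the $O(n^3)$ versus $O(n)$ comparison above.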
It is obvious that the computational complexity of the proposed algorithm is lower than that of the traditional ACO algorithm, and the runtime advantage becomes more remarkable as problem size increases. The algorithms with short runtime or high accuracy from Section 1 were selected for comparison: ASA-GS (2011) [24], ACO-Taguchi (2013) [30], HMMA (2015) [25], PACO-3Opt (2016) [29], and C-PSO-ACO-KOpt (2017) [33]. Table 13 demonstrates the runtime comparison as the number of nodes grows. PACO-3Opt was more suitable for eil51 owing to its parallelization and lack of clustering overhead. As problem size increases, the proposed method shows a more significant runtime advantage and outperforms the other algorithms on all instances except eil51. The average runtime of the proposed algorithm is 5.26 seconds, whereas ASA-GS takes 1.39 times as long (7.31 versus 5.26), ACO-Taguchi 3.07 times (16.14 versus 5.26), and C-PSO-ACO-KOpt 38.03 times (200.06 versus 5.26).
Conclusion
TSP is an NP-hard problem of traversing all nodes exactly once in the shortest tour. Traditional exact algorithms and heuristic intelligent algorithms both have weaknesses in solving large-scale TSP, such as the slow convergence of intelligent algorithms and the unacceptably long runtimes of exact algorithms; these become more noticeable beyond 200 nodes. A novel hierarchical hybrid algorithm is proposed in this article, which takes full advantage of the superiority of the ACO algorithm in solving small-scale TSP and of the DPC algorithm in clustering large node sets. The nodes are clustered with DPC, decomposing the large-scale TSP into a few small-scale subproblems. Then the tour within each group and the tour among the cluster center nodes are solved with ACO. The initial global tour is constructed by merging the local tours at the nearest nodes between adjacent clusters. Finally, k-Opt optimizes the initial tour to generate the final global solution. The proposed algorithm is validated on 30 TSP instances in three groups. Experimental results show that it significantly reduces runtime and offers higher accuracy and robustness, and its advantage grows with problem size. Future work includes exploring the relationship between the number of ACO iterations in each group and the local or global optimal tour, and studying how to merge the local suboptimal tours more efficiently and effectively into the global optimal solution.
ACKNOWLEDGMENT
The authors are grateful to the anonymous referees for valuable suggestions and comments which helped us improve the paper.