Introduction
Optimization is one of the most common and important issues in both the research and development of science and technology. Real-world optimization problems often come with high complexity and dimensionality. Such problems are known as large-scale global optimization (or search) problems and cannot be solved by traditional optimization algorithms because of their high dimensions and the complex interactions among variables. Large-scale global optimization problems arise extensively in scientific research and engineering applications such as distributed systems, large-scale machine learning, and neural network optimization [1]–[3].
Meta-heuristic algorithms are used to solve many current optimization problems in different domains such as system control [4], [5], machine design [6], Big Data [7], [8], engineering design problems [9]–[11], global optimization problems [12]–[14], and so forth. These algorithms can be roughly classified into two categories [15]:
Single-solution-based algorithms randomly generate a single solution and gradually improve it until the best result is obtained. The typical example is Simulated Annealing. The main drawback of this technique is that it may be trapped in a local optimum, which prevents it from finding the global optimum.
Population-based algorithms generate a set of random solutions within a given search space. Typical examples are the Genetic Algorithm and Particle Swarm Optimization. These solutions are updated continuously until the best one is found.
All meta-heuristic techniques carry out exploration and exploitation phases in the search space, and at the same time they have to maintain the right balance between these two phases. Exploration investigates various promising areas in the search space, while exploitation searches for optimal solutions around them [16]–[18]. Therefore, these two phases need to be controlled effectively to achieve optimal solutions for an optimization problem.
Standard meta-heuristic algorithms for solving large-scale global optimization problems often suffer from a degradation of optimization ability as the dimension of the search space grows, so their performance deteriorates on high-dimensional problems. There are two major reasons for this deterioration: (1) the increasing problem dimension alters the space complexity and the characteristics of the problem; (2) the search space grows exponentially with the problem size, so an optimization algorithm must be able to explore the entire search space efficiently, which is not a trivial task.
In this context, the work presented in this paper addresses the increase of dimension in global optimization using meta-heuristic solutions. Its most notable feature is the proposed optimizer named Hybridization of Galactic Swarm Optimization and Evolution Whale Optimization Algorithm (HGEW). This evolution hybridization optimizer composes two population-based meta-heuristic algorithms:
Galactic Swarm Optimization (GSO), and
Our proposed evolution version of the Whale Optimization Algorithm (WOA). The proposed evolution version (called EWOA for short) improves the original algorithm with a Levy-Flight trajectory and a two-point crossover operator.
HGEW was carefully tested as a whole with the set of CEC 2014 competition benchmark functions, covering unimodal, multi-modal, hybrid, and composition types, whose complexity grows as the number of dimensions increases. The experiments also compare the performance of HGEW with variants of GSO and WOA. The obtained outcomes show that HGEW delivers outstanding performance in terms of accuracy, convergence speed, and stability in solving global optimization problems robustly. The proposed method could be applied to optimization problems in general, such as training neural networks for prediction tasks.
The structure of this paper is as follows. Section II presents background and related work: an overview of the optimization problem, large-scale global optimization, meta-heuristic solutions, as well as GSO and WOA in detail. The HGEW hybridization design composed of GSO and EWOA is presented in Section III. The experimental setup is described in Section IV. Section V presents the obtained experimental results, comparisons, and evaluations. Finally, Section VI concludes the study and outlines our plans.
Background and Related Work
A. Optimization and Meta-Heuristics
In general, an optimization problem can be solved using an exact or an approximate approach. However, domain data usually have many and high-dimensional characteristics, which lead to NP-hard problems. It is difficult to solve these problems with exact methods such as steepest descent, sequential quadratic programming, conjugate gradient, and quasi-Newton methods. Approximate methods are promising for the above-mentioned issues because they find feasible solutions effectively. Approximate methods can be divided into two subclasses, namely heuristic and meta-heuristic algorithms.
Heuristic algorithms are often used for optimization problems based on several assumptions, such as knowledge inside a concrete domain [19]. Meta-heuristic algorithms are more domain-independent and often operate based on probabilistic rather than deterministic rules. These algorithms are considered a more general approach to solving almost all optimization problems, with several advantages [20], [21]: 1) high flexibility, since they treat optimization problems as black boxes; 2) independence from gradient information; 3) easy implementation in several fields; 4) exploration capability that helps avoid local optima. As presented in Section I, meta-heuristic techniques are divided into single-solution-based and population-based algorithms. The work presented in this paper belongs to the population-based group.
B. Population-Based Meta-Heuristic Algorithms
Population-based meta-heuristic algorithms often mimic natural phenomena [22]–[24]. These algorithms start the optimization process by generating a set (or population) of individuals, each representing a candidate solution for the optimization problem. The population is evolved iteratively by replacing the current population with newly generated ones using some, often stochastic, operators [25], [26]. The optimization process proceeds until a stopping criterion is satisfied (e.g., the maximum number of iterations [27], [28]). Based on their inspiration, population-based algorithms are categorized into: 1) the evolutionary-based group, which mimics biological evolutionary behaviors such as recombination, mutation, and selection; 2) the physics-based group, which is inspired by physical laws; 3) the human-based group, which mimics certain human behaviors; 4) the swarm-based group, which mimics social behaviors, e.g., decentralized and self-organized systems of organisms living in swarms, flocks, or herds.
Particle Swarm Optimization (PSO) [29] is one of the first well-known swarm-based optimizers and the premise of many other meta-heuristic algorithms proposed in recent years. In PSO, a swarm contains several candidate solutions (also known as particles) that coexist in the search space of the problem. Each particle moves with a velocity that is adjusted according to its own best position found so far (personal best) and the best position found by the whole swarm (global best).
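As an illustration, a minimal Python sketch of this canonical update rule is shown below. The inertia and acceleration coefficients (w, c1, c2) are illustrative defaults, not values prescribed by this paper.

import numpy as np

def pso_update(x, v, p_best, g_best, w=0.72, c1=1.49, c2=1.49):
    # Canonical PSO velocity and position update; works for a single particle
    # vector or an array of particles through broadcasting.
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

# Example: one particle in a 5-dimensional search space.
x = np.random.uniform(-100, 100, 5)
v = np.zeros(5)
x, v = pso_update(x, v, p_best=x.copy(), g_best=np.zeros(5))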
GSO and WOA, which are at the center of interest of our work, belong to the swarm-based group. A common appealing feature of both algorithms is that they follow the principles of simplicity and efficiency, as presented in the following subsections.
C. Galactic Swarm Optimization (GSO)
GSO [30] is inspired by the behavior of stars in galaxies and of galaxies in superclusters in the cosmos. It takes advantage of the original PSO by using multiple flexible cycles of exploration and exploitation phases to find new, better solutions. Thanks to its simple implementation and efficiency, GSO has been applied to several real-life problems [31]–[35] as well as to global optimization [36], [37].
In the original GSO, under the influence of gravity, stars in a galaxy are attracted to stars with greater gravity. From this idea, the movement of stars inside a galaxy, as well as the movement of galaxies themselves, is emulated in the GSO algorithm by the following rules:
Individuals in each galaxy are attracted to bigger ones (i.e., better solutions) in their galaxy. The attraction process is performed by using the PSO algorithm.
The global bests of all galaxies are gathered and treated as a super-swarm. Here, the PSO algorithm is used again to represent the movement of particles in the super-swarm.
GSO's pseudocode is presented in Algorithm 1, and the variables are explained in detail in Table 1.
Algorithm 1 Galactic Swarm Optimization (GSO)
Initialize M sub-swarms X_i, each with N particles: positions x_j^(i), velocities v_j^(i), and personal bests p_j^(i)
Initialize the sub-swarm global bests g^(i) and the super-swarm global best g
for epoch = 1 to Iter_max do
    Begin PSO: Level 1 (exploration within each sub-swarm)
    for i = 1 to M do
        for k = 1 to l_1 do
            for j = 1 to N do
                Update velocity v_j^(i) and position x_j^(i)
                if f(x_j^(i)) < f(p_j^(i)) then
                    p_j^(i) <- x_j^(i)
                    if f(p_j^(i)) < f(g^(i)) then
                        g^(i) <- p_j^(i)
                    end if
                end if
            end for
        end for
    end for
    Begin PSO: Level 2 (exploitation in the super-swarm)
    Initialize Swarm Y with the M sub-swarm global bests y_i = g^(i)
    for k = 1 to l_2 do
        for i = 1 to M do
            Update velocity and position of y_i
            if f(y_i) < f(p_i) then
                p_i <- y_i
                if f(p_i) < f(g) then
                    g <- p_i
                end if
            end if
        end for
    end for
end for
Results: the super-swarm global best g
1) Initializing Stage
The main control parameters of the whole GSO algorithm consist of the parameters used in each phase (Table 1). The swarm (population) X is partitioned into M disjoint sub-swarms X_i of N particles each:\begin{align*} X=&\left \{{X_{i}| i = 1, 2, \ldots, M}\right \} \tag{1}\\ X_{i}=&\left \{{X_{j}^{(i)}| j = 1, 2, \ldots, N}\right \} \tag{2}\\ X_{i} \cap X_{j}=&\varnothing \quad \forall i \neq j \tag{3}\end{align*}
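For illustration only, a minimal Python sketch of this partition is given below; the values of M, N, and the dimension are hypothetical.

import numpy as np

# Hypothetical sizes: M sub-swarms of N particles each in a dim-dimensional space.
M, N, dim = 5, 20, 50
X = np.random.uniform(-100.0, 100.0, (M * N, dim))   # the whole swarm X
sub_swarms = X.reshape(M, N, dim)                     # sub_swarms[i] plays the role of X_i (Eqs. 1-3)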
2) Updating Stage (Exploration and Exploitation Phase)
This stage covers two phases as follows:
Exploration of sub-swarms. The PSO algorithm is run for each sub-swarm. Since the swarm $X$ is initially divided into $M$ groups, PSO runs $M$ times independently, with a global best $g^{(i)}$ tied to each sub-swarm. $g^{(i)}$ is updated whenever a particle in the sub-swarm has a personal best $p_{j}^{(i)}$ that is a better solution than $g^{(i)}$, i.e., $f(p_{j}^{(i)}) < f(g^{(i)})$. Each sub-swarm independently explores for its best solution in its search space. This task is carried out by calculating the velocity $v_{j}^{(i)}$ and position $x_{j}^{(i)}$ of the particles.

Exploitation in the super-swarm. All global bests from the $M$ sub-swarms of the first phase are gathered to form the super-swarm. In other words, a new super-swarm $Y$ is created by collecting the $M$ global bests of the sub-swarms $X_{i}$.

In this way, the velocity and position of each particle in the super-swarm $Y$ are again updated by the PSO rules, so that the second phase exploits the promising regions already found by the sub-swarms.
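The two-level scheme can be summarized in the following minimal Python sketch. It is an illustrative skeleton only, assuming a simple inertia-weight PSO update and hypothetical parameter values (m, n, l1, l2, epochs); the actual GSO realization follows Algorithm 1 and Table 1.

import numpy as np

def pso_move(x, v, p_best, g_best, c1=2.05, c2=2.05, w=0.72):
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v, v

def gso(fitness, dim, m=5, n=20, l1=4, l2=40, epochs=50, lb=-100.0, ub=100.0):
    X = np.random.uniform(lb, ub, (m, n, dim))          # m disjoint sub-swarms (Eqs. 1-3)
    V = np.zeros_like(X)
    P = X.copy()                                         # personal bests p_j^(i)
    Pf = np.array([[fitness(p) for p in sub] for sub in P])
    g = P[np.arange(m), Pf.argmin(axis=1)].copy()        # sub-swarm global bests g^(i)
    gf = Pf.min(axis=1)
    best_x, best_f = g[gf.argmin()].copy(), gf.min()     # super-swarm global best
    for _ in range(epochs):
        # Level 1: PSO exploration inside every sub-swarm.
        for i in range(m):
            for _ in range(l1):
                X[i], V[i] = pso_move(X[i], V[i], P[i], g[i])
                X[i] = np.clip(X[i], lb, ub)
                f = np.array([fitness(x) for x in X[i]])
                better = f < Pf[i]
                P[i][better], Pf[i][better] = X[i][better], f[better]
                if Pf[i].min() < gf[i]:
                    gf[i] = Pf[i].min()
                    g[i] = P[i][Pf[i].argmin()].copy()
        # Level 2: PSO exploitation on the super-swarm of the m sub-swarm bests.
        Y, Vy = g.copy(), np.zeros_like(g)
        Py, Pyf = Y.copy(), gf.copy()
        for _ in range(l2):
            Y, Vy = pso_move(Y, Vy, Py, best_x)
            Y = np.clip(Y, lb, ub)
            f = np.array([fitness(y) for y in Y])
            better = f < Pyf
            Py[better], Pyf[better] = Y[better], f[better]
            if Pyf.min() < best_f:
                best_f = Pyf.min()
                best_x = Py[Pyf.argmin()].copy()
    return best_x, best_f

# Example usage: minimize the sphere function in 10 dimensions.
print(gso(lambda z: float(np.sum(z * z)), dim=10)[1])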
However, GSO faces the problems of being trapped in local optima and of slow convergence due to weak exploitation and exploration [38]. Concretely, although PSO in the first phase of GSO can push the initialized particles a giant step toward the global minimum, the exploitation capability of PSO in the second phase is not good enough to escape local minima and prevent premature convergence, as indicated in [39].
D. Whale Optimization Algorithm (WOA)
WOA has recently emerged as a beneficial algorithm applied to many practical problems such as forecasting water resources demand [40], the flow shop scheduling problem [41], opinion leader detection in online social networks [42], and medical diagnosis [43]. It was first proposed in [44] and mimics the unusual social behaviors and the engaging hunting activities of humpback whales. The hunting operation of humpback whales is called bubble-net feeding and is often done in groups, whose size can range from two or three up to sixty whales participating at once. After the bubble net is executed, it keeps shrinking until the food is eaten [45].
In the beginning, due to the random initialization of the whale population, the position of the prey is not known. WOA assumes that the solution with the best fitness is the prey, and it is reset at the beginning of each iteration. After the target is recognized, the other whales try to update their positions around the prey. The hunting process executed by humpback whales is modeled by three operations, as follows:
Shrinking encircling mechanism reduces the encircling magnitude so that the updated position lands in the search space between the original solution position and the best one at the current iteration.
Spiral updating position is based on the way humpback whales approach their prey. A spiral equation emulates the helix-shaped movement of humpback whales within the space between the positions of the whale and the prey.
Search for prey represents the exploration process, in which whales search for prey randomly; this enables WOA to escape local minima easily. A compact sketch of these three update rules is given below.
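The Python sketch below illustrates the three standard WOA update rules. It is a simplified reading of the rules in [44] (scalar coefficients A and C, a hypothetical spiral constant b), not the authors' implementation.

import numpy as np

def woa_step(X, x_best, t, t_max, b=1.0, lb=-100.0, ub=100.0):
    # One iteration of the WOA position updates for a population X (rows are whales).
    n, dim = X.shape
    a = 2.0 - 2.0 * t / t_max                 # a decreases linearly from 2 to 0
    X_new = np.empty_like(X)
    for i in range(n):
        A = 2.0 * a * np.random.rand() - a    # coefficient A
        C = 2.0 * np.random.rand()            # coefficient C
        p = np.random.rand()
        if p < 0.5:
            if abs(A) < 1.0:                  # shrinking encircling around the best solution
                D = np.abs(C * x_best - X[i])
                X_new[i] = x_best - A * D
            else:                             # search for prey: follow a randomly chosen whale
                x_rand = X[np.random.randint(n)]
                D = np.abs(C * x_rand - X[i])
                X_new[i] = x_rand - A * D
        else:                                 # spiral updating position
            l = np.random.uniform(-1.0, 1.0)
            D_prime = np.abs(x_best - X[i])
            X_new[i] = D_prime * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best
    return np.clip(X_new, lb, ub)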
WOA and its operations are thoroughly discussed in [44]; we summarize its mechanism in Algorithm 2. Although the original WOA, with its encircling mechanism and spiral path, is good at exploring the search space and outperforms other nature-inspired algorithms in terms of simplicity and efficiency, it still requires specific improvements for tackling large-scale global optimization, since it can get stuck in local optima and lose accuracy [46].
Algorithm 2 Whale Optimization Algorithm (WOA)
Initialize the whale population X_i (i = 1, 2, ..., n) randomly
Calculate fitness of each whale and set X_* as the best solution
for t = 1 to Iter_max do
    for each whale X_i do
        Update a, A, C, l, and p
        if p < 0.5 then
            if |A| < 1 then
                Shrinking encircling mechanism
            else
                Search for prey process
            end if
        else
            Spiral updating position
        end if
    end for
    Evaluate population: if a feature of a whale goes beyond the boundaries of the search space, bring it back inside
    Recompute the fitness of all whales
    Update X_* if a better solution is found
end for
Results: the best solution X_*
E. Motivation and Contribution of the Work
The work presented in this paper addresses the global optimization problem. In our research direction, this optimizer could be used to find suitable parameters for the neural network in workload prediction and auto-scaling problems [22], [23], [47] of distributed systems like cloud computing and blockchain networks [48], [49].
Our proposed approach is inspired by the simplicity and efficiency offered by the GSO and WOA swarm-based algorithms. GSO acts well as a global controller of the whole optimization process, but it suffers from early convergence in exploitation because of the limitations of the PSO algorithm used in the second phase of GSO (i.e., it is easily trapped in a local minimum, especially in complex multi-peak search problems), as shown in [50] and [38]. On the other hand, WOA possesses a decent exploitation capability compared with other well-known nature-inspired algorithms (e.g., GA and PSO), as shown in [44]. In order to accelerate convergence speed as well as retain the diversity of the population in WOA, several variants of this algorithm were introduced: [51] uses a chaotic technique and [52] uses the Levy-Flight trajectory technique. The results in [52] show that WOA variants outperform many algorithms such as the Moth Flame Optimizer, the Bat Algorithm, Artificial Bee Colony, and the original WOA. However, the exploitation and exploration capabilities of WOA variants still need to be enhanced, since they still face premature convergence when tackling problems of increasing dimension. Based on these motivations, the main contributions of our work are:
Proposing an evolution hybridization of GSO and EWOA as a new optimizer that avoids early convergence in the exploitation phase while keeping the advantages of the original GSO as a whole, including its exploration power. The new evolution hybridization is called HGEW for short, which stands for Hybridization of Galactic Swarm Optimization and Evolution Whale Optimization Algorithm.
Proposing an evolution version of WOA based on the Levy-Flight trajectory and a two-point crossover operator. The aim is to provide faster local search with adaptive step lengths in the HGEW exploitation phase and to reduce bias in the offspring creation procedure. The new evolution version of WOA is called EWOA for short, which stands for Evolution Whale Optimization Algorithm.
Providing a thorough evaluation of HGEW as a whole through extensive experiments with different high-dimensional benchmark functions of two types, unimodal and multi-modal.
Providing a thorough validation of HGEW in comparison with baseline optimizers such as GSO and the WOA variant based on the Levy-Flight trajectory. HGEW is also carefully compared with other hybridization variants, namely GSO with WOA and GSO with Levy-Flight WOA. The gained results demonstrate the contribution of HGEW, namely better stability and robustness in comparison with existing optimizers.
Evolution Hybridization of GSO and EWOA
As presented in Section II-E, our proposed evolution hybridization HGEW is built as a hybridization of GSO with our evolved EWOA. HGEW acts as the global controller of the whole optimization process with its exploration and exploitation phases, and it replaces the exploitation phase of GSO with our proposed EWOA, as follows.
A. Evolution Whale Optimization Algorithm (EWOA)
The original WOA (Section II-D) can easily tackle low-dimensional optimization problems. However, when WOA is applied to global optimization problems with a higher dimension, it is often trapped in local optima because the population diversity decreases rapidly. To improve the exploration, convergence speed, and local minimum avoidance, several variants of WOA were introduced [46], [52]–[56]. The Levy-Flight trajectory [52] is one of the popular techniques used to improve the performance of meta-heuristic algorithms such as Firefly [57], Krill Herd [58], Grey Wolf Optimization [25], [59], [60], Butterfly [61], and Harris Hawks Optimization [62].
In our work, WOA is evolved into EWOA with enhanced search ability based on two ideas:
Using Levy-Flight trajectory to escape the local optima and accelerate the convergence speed,
Using the Crossover operator to create, in an evolutionary way, more potentially good random offspring candidates than in [44].
1) Levy-Flight
The Levy-Flight [52] is a random process with step lengths drawn from the Levy distribution. In WOA, Levy-Flight helps to maximize the diversification of newly created solutions and allows the algorithm to exploit the search space more efficiently. Additionally, it helps avoid local minima and accelerates convergence [46], [52]. The technique also supports keeping the balance between the exploration and exploitation phases, so WOA can jump out easily when it gets stuck in a local minimum. The mathematical expression of the technique is as follows:\begin{equation*} X^{t+1} = (X^{t} + \mu sign[rand - 0.5]) \oplus Levy\tag{5}\end{equation*}
where $X^{t}$ indicates the current position, $\mu$ is a random number chosen from a uniform distribution, $rand$ is a random number in [0, 1], and $\oplus$ stands for entry-wise multiplication.
The term $Levy$ is drawn from a Levy distribution, which for a step length $s$ behaves as\begin{equation*} L(s) \sim |s|^{-1-\beta } \quad \text {with} ~ 0 < \beta \leq 2\tag{6}\end{equation*} In practice, the step length $s$ is generated as\begin{equation*} s = \frac {\mu }{|v|^{1/\beta }}\tag{7}\end{equation*} where\begin{align*} \mu\sim&N(0, \sigma _{\mu }^{2}) \quad \text {and} \tag{8}\\ v\sim&N(0, \sigma _{v}^{2}) \tag{9}\\ \sigma _{\mu }=&\left[{ \frac {\Gamma (1 + \beta)\cdot\sin (\pi\cdot\beta /2)}{\Gamma ((1 + \beta)/2)\cdot\beta\cdot 2^{(\beta - 1)/2}} }\right] ^{1/\beta } \tag{10}\\ \sigma _{v}=&1 \tag{11}\end{align*}
A step size that prevents the Levy-Flight from jumping out of the search space is defined by Equations 12 and 13.\begin{align*} Levy=&random(size(D)) \oplus L(\beta) \tag{12}\\ L(\beta)\sim&\frac {\mu }{|v|^{1/\beta }}(X_{i} - X_{*}) \tag{13}\end{align*}
Levy-Flight models two kinds of whale movement based on the infinite variance of the Levy distribution: long-distance movement enhances the exploration ability, while short-distance movement increases the exploitation capability. In our EWOA, the Shrinking encircling mechanism is replaced by Levy-Flight in order to explore and exploit the search space more effectively. The mathematical model of the exploitation phase of EWOA is thus modified and represented as follows:\begin{align*} X^{t+1} = \begin{cases} (X^{t} + \mu sign[rand - 0.5])\oplus Levy, & \text {if } p\geq 0.5\\ D'.e^{bl}.\cos (2\pi l) + X_{*}^{t}, & \text {otherwise} \end{cases} \\\tag{14}\end{align*}
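A possible Python sketch of this Levy-based exploitation step is given below. The value beta = 1.5 and the additive reading of the Levy branch in Equation 14 (X^{t+1} = X^t + mu * sign[rand - 0.5] applied entry-wise to the Levy step, as commonly written in the LWOA literature) are assumptions; the step-length generation follows Equations 7-13 (often referred to as Mantegna's algorithm).

import math
import numpy as np

def levy_vector(x, x_best, beta=1.5):
    # Levy step of Eqs. 7-13; beta = 1.5 is an assumed typical value.
    dim = x.shape[0]
    sigma_mu = (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0) /
                (math.gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)  # Eq. 10
    mu = np.random.normal(0.0, sigma_mu, dim)        # Eq. 8
    v = np.random.normal(0.0, 1.0, dim)              # Eqs. 9 and 11
    s = mu / np.abs(v) ** (1.0 / beta)                # Eq. 7
    return np.random.rand(dim) * s * (x - x_best)     # Eqs. 12-13

def ewoa_exploit(x, x_best, b=1.0):
    # Eq. 14: Levy branch (read here in the additive form, an assumption) or spiral branch.
    p = np.random.rand()
    if p >= 0.5:
        mu = np.random.rand()
        return x + mu * np.sign(np.random.rand() - 0.5) * levy_vector(x, x_best)
    l = np.random.uniform(-1.0, 1.0)
    d_prime = np.abs(x_best - x)
    return d_prime * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best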
Algorithm 3 Evolution Whale Optimization (EWOA)
Initialize the whale population X_i (i = 1, 2, ..., n) randomly
Calculate fitness of each whale and set X_* as the best solution
for t = 1 to Iter_max do
    for each whale X_i do
        Update a, A, C, l, p, and p_2
        if p < 0.5 then
            if |A| < 1 then
                Shrinking encircling mechanism based on Levy-Flight
            else
                if a random number is less than p_2 then
                    Updating by Crossover operator
                else
                    Search for prey (exploration phase)
                end if
            end if
        else
            Spiral updating position
        end if
    end for
    Evaluate population: if a feature of a whale goes beyond the boundaries of the search space, bring it back inside
    Recompute the fitness of all whales
    Update X_* if a better solution is found
end for
Results: the best solution X_*
2) Crossover Operator
As described above, the reduction of population diversity in the original WOA on global optimization also needs to be tackled. In our work, a new operation is proposed, inspired by the behavior of humpback whales when they migrate to tropical waters and give birth during the mating season. In evolutionary-based meta-heuristics such as the Genetic Algorithm [63], [64] or Differential Evolution [65], the crossover operator plays the role of increasing the population diversity. Therefore, to keep the population diversity in WOA throughout the iterations, we apply this operator to WOA. The operator uses the current best solution $X_{*}$ and a randomly selected solution $Y$ from the current population as the two parents, in two steps:
Firstly, two numbers $i$ and $j$ (the two crossover points) are chosen randomly in the range from 0 to the dimension $D$. The choices of $i$ and $j$ must satisfy two main conditions: $i < j$ and $j - i = cr \cdot D$, where $cr$ is the crossover rate of the operator.

Secondly, $i$ and $j$ divide a potential solution into segments. In this way, the best solution $X_{*}$, for example, is divided into the part $X_{*}[i, j]$ and the rest. The corresponding feature segments of the two solutions are then exchanged, creating two new offspring, one of which is chosen as a new whale in the WOA model.
The mathematical process of Crossover operator is presented as follows:\begin{align*} Z[0,i]=&Y[0,i] \tag{15}\\ Z[i,j]=&X_{*}[i,j] \tag{16}\\ Z[j,D-1]=&Y[j,D-1] \tag{17}\end{align*}
Unlike single-point crossover, which suffers from a bias problem when the chosen point is near the end of the feature vector, two-point crossover still works effectively in that case. Apart from this, the best solution $X_{*}$ always contributes a segment to the offspring, which guides the search toward promising regions while the random parent $Y$ helps preserve the population diversity.
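A minimal Python sketch of this two-point crossover is shown below. The default crossover rate cr and the choice of returning only the offspring that inherits the middle segment of X_* are illustrative assumptions.

import numpy as np

def two_point_crossover(x_best, y, cr=0.5):
    # Two-point crossover of Eqs. 15-17 between the best solution X_* and a random parent Y.
    D = x_best.shape[0]
    span = max(1, int(round(cr * D)))          # enforce j - i = cr * D
    i = np.random.randint(0, D - span + 1)     # crossover points with i < j
    j = i + span
    z = y.copy()                               # Z[0, i] = Y[0, i] and Z[j, D-1] = Y[j, D-1]
    z[i:j] = x_best[i:j]                       # Z[i, j] = X_*[i, j]
    return z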
B. HGEW: a Hybridization of GSO and EWOA
As presented in Section II-C, GSO takes advantage of the original PSO algorithm by using flexible multiple cycles of exploration and exploitation phases to escape local minima and land on new, better solutions. However, using PSO still causes a mediocre-convergence issue [38]. Specifically, although PSO in the first phase of GSO can help the initialized particles move a giant step toward the global minimum, PSO in the second phase is not able to exploit enough to escape local minima and prevent premature convergence, due to the restrictions of the original PSO algorithm [66].
According to [44], [46], [52], compared with PSO, WOA and its variants can bring better results thanks to their exploration and exploitation. For that reason, in our HGEW evolution hybridization design, PSO in the second phase of the original GSO is replaced by EWOA. The HGEW workflow is presented in Algorithm 4.
In the first phase, we retain PSO for exploration. PSO in the first phase thus works as an initializer, which decreases the search space and creates a better-initialized population used in the second phase.
After that, EWOA in the second phase takes advantage of the population generated in the first phase to move faster toward the global minimum.
At the beginning of each iteration, the population in the second phase is always created anew, whereas the sub-swarms in the first phase continue exactly at the locations where they stopped in the previous iteration. This allows EWOA to collect better and better individuals from the sub-swarms throughout the iterations.
On the other hand, the global best position found in the second phase is recorded at the end of each iteration and then employed in the next iteration, which leads to faster movement and convergence in EWOA.
Algorithm 4 HGEW Hybridization of GSO and EWOA
Initialize M sub-swarms X_i, each with N particles: positions x_j^(i), velocities v_j^(i), and personal bests p_j^(i)
Initialize the sub-swarm global bests g^(i) and the super-swarm global best g
for epoch = 1 to Iter_max do
    Begin PSO: Level 1 (exploration within each sub-swarm)
    for i = 1 to M do
        for k = 1 to l_1 do
            for j = 1 to N do
                Update velocity v_j^(i) and position x_j^(i)
                if f(x_j^(i)) < f(p_j^(i)) then
                    p_j^(i) <- x_j^(i)
                    if f(p_j^(i)) < f(g^(i)) then
                        g^(i) <- p_j^(i)
                    end if
                end if
            end for
        end for
    end for
    Begin EWOA: Level 2 (exploitation in the super-swarm)
    Initialize Swarm Y with the M sub-swarm global bests y_i = g^(i)
    for k = 1 to l_2 do
        for each whale y_i in Y do
            Update a, A, C, l, p, and p_2
            if p < 0.5 then
                if |A| < 1 then
                    Shrinking encircling mechanism based on Levy-Flight
                else
                    if a random number is less than p_2 then
                        Updating by Crossover operator
                    else
                        Search for prey (exploration phase)
                    end if
                end if
            else
                Spiral updating position
            end if
            Evaluate y_i: if a feature goes beyond the boundaries of the search space, bring it back inside
            Recompute the fitness of all whales in Y
            Update the global best g if a better solution is found
        end for
    end for
end for
Results: the global best solution g
The source code of the proposed Hybridization of Galactic Swarm Optimization and Evolution Whale Optimization Algorithm (HGEW) is publicly available in [67] as Open Source under Apache 2.0 license.
Experiments
Our HGEW evolution hybridization of GSO and EWOA is evaluated as a whole optimizer using different scalable benchmark functions. The following optimizers are used to compare their effectiveness with HGEW:
GSO [30] original one with two PSO phases;
LWOA as the improved version of WOA based on Levy-Flight trajectory [52];
GSO-WOA as the hybridization with PSO [29] in the first phase and WOA [44] in the second phase;
GSO-LWOA as the hybridization with PSO [29] in the first phase and LWOA [52] in the second phase.
Several well-known optimization algorithms such as GA, PSO, and their variants, as well as Artificial Bee Colony and Ant Colony Optimization, are not considered in our experiments because they were already compared with the improved version of WOA based on the Levy-Flight trajectory (LWOA) in [52].
Beyond LWOA, the work [68] presents a systematic and meta-analysis survey of WOA usage with its modifications, such as adaptive, self-adaptive, improved, chaotic, modified, and memetic versions in different areas. Among all of these modifications, LWOA has shown better performance than the other variants in various domains. This is the reason for comparing our proposed HGEW with LWOA, also in combination with GSO, i.e., with LWOA, GSO-WOA, and GSO-LWOA in our experiments.
Thus, WOA and LWOA have demonstrated their advantages over the techniques mentioned above, which also explains why we use WOA as well as LWOA with GSO in our experiments to evaluate our proposed HGEW.
A. Benchmark Functions
The work presented in this paper aims to address the global optimization problem, which stems from optimizing neural networks for workload prediction tasks. The set of 30 functions on Real Parameter Single Objective Optimization (incorporating expensive function optimization) of the CEC 2014 competition [70], [71] is used as the set of benchmark functions for testing HGEW's performance in our experiments. It is worth mentioning that many real-world problems belong to multi-objective optimization, which will be in the scope of our future work. The 30 benchmark functions are presented in Table 2, and they are categorized into four groups as follows.
Unimodal functions $C1$–$C3$, which contain only one global optimal point in the search space.

Multi-modal functions $C4$–$C16$, which have one global optimal point along with local minimum points.

Hybrid functions $C17$–$C22$: the function variables are randomly divided into some sub-components, and different basic unimodal and multi-modal functions are used for the sub-components.

Composition functions $C23$–$C30$, which merge the properties of the sub-functions and maintain continuity around the global/local optima.
We carefully carried out experiments with the dimension of the benchmark functions set to 20, 50, and 100.
B. Evaluation Method
An evaluation method widely used for many algorithms in evolutionary computing is based on the number of iterations that the algorithms run through. It is notable that, in each iteration, GSO-based algorithms run the optimization process in the first phase with several sub-iterations for every sub-swarm and in the second phase with additional sub-iterations for the super-swarm, so a comparison based only on the number of outer iterations would not be fair.
In our experiments, we therefore assessed the performance of the algorithms on the benchmark functions using the number of function evaluations (FE) [30]. The function evaluation number of an algorithm is simply the number of updating operations performed over the time the algorithm runs, as follows:
For PSO and WOA, the number of function evaluations $FE$ is computed by:\begin{equation*} FE = n * Iter_{max}\tag{18}\end{equation*} where $n$ is the number of particles in the population and $Iter_{max}$ is the maximum number of iterations.

For GSO-based algorithms:\begin{equation*} FE = (m.n.l_{1} + m.l_{2}) * Iter_{max}\tag{19}\end{equation*} where $m$ is the number of sub-swarms, each containing $n$ particles; $Iter_{max}$ is the maximal number of iterations; $l_{1}$ and $l_{2}$ are the numbers of iterations of the first and second phase; and $m.n.l_{1}$ and $m.l_{2}$ are the $FE\text{s}$ of the first and second phase in each iteration, respectively. A small worked example of Equation 19 is given below.
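For illustration only, the snippet below evaluates Equation 19 for one hypothetical GSO-style configuration; the actual parameter values are those listed in Table 3.

# Hypothetical configuration, not the values of Table 3.
m, n, l1, l2, iter_max = 5, 20, 4, 40, 500
fe = (m * n * l1 + m * l2) * iter_max   # Eq. 19
print(fe)                               # (400 + 200) * 500 = 300000, the FE budget of Section IV-C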
With the evaluation method described above, the experimental results of each model are produced by calculating the mean and the standard deviation ($std$) of the best fitness values over the independent runs:\begin{align*} mean = \bar {r}=&\frac {1}{n} \sum _{i=1}^{n}r_{i} \tag{20}\\ std=&\sqrt {\frac {1}{n-1} \sum _{i=1}^{n} (r_{i} - \bar {r})^{2}} \tag{21}\end{align*}
For each function, after 20 independent runs, the tested algorithms are ranked according to their mean best fitness values.
In the case where two algorithms perform equally well, their ranks are set to the average of the two positions they occupy.
The $average\_{}rank$ of each algorithm is calculated by averaging its positions over all functions. The $overall\_{}rank$ is then determined based on the $average\_{}rank$. A small sketch of this ranking procedure is shown below.
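As an illustration, the following Python snippet applies this ranking procedure to a hypothetical mean-fitness table (two functions, three algorithms).

import numpy as np
from scipy.stats import rankdata

# Hypothetical mean-fitness table: rows are benchmark functions, columns are algorithms.
means = np.array([[1.2, 1.2, 3.4],
                  [0.5, 0.9, 0.7]])
ranks = np.apply_along_axis(rankdata, 1, means)   # ties share the average of the positions they occupy
average_rank = ranks.mean(axis=0)                 # average rank of each algorithm over all functions
overall_rank = rankdata(average_rank)             # overall rank derived from the average rank
print(ranks)          # [[1.5 1.5 3. ] [1.  3.  2. ]]
print(average_rank)   # [1.25 2.25 2.5 ]
print(overall_rank)   # [1. 2. 3.]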
Finally, the rank of the algorithm which shows the best performance in each function is highlighted in bold numbers in Tables 5, 6, 7, 8, 9, and 10.
C. Setting Parameters
The parameters of each algorithm in our experiments, after the tuning process, are set as follows:
For HGEW, the acceleration constant ($c_{i}$) of PSO in the first phase is set to 2.05 for every $c$ parameter, and for EWOA in the second phase, $p_{2} = 0.4$ is used for balancing exploitation and exploration.

The number of function evaluations is 300000 for all the experiments on benchmark functions with dimensions of 20, 50, and 100.
The corresponding parameter setups for the GSO-based algorithms and LWOA, together with the numbers of function evaluations, are shown in Table 3 and Table 4. Note that in Table 3 there are several sub-iteration parameters for HGEW and the other GSO-based algorithms, as presented in Algorithm 1.
Experiment Results and Discussion
A. Unimodal and Multi-Modal Benchmark Functions
The achieved results of the tested algorithms are shown in Tables 5, 6, and 7. All the experimental results on the unimodal and multi-modal benchmark functions indicate that HGEW achieves the best performance on almost all benchmark functions compared with the other optimizers (described in Section IV). HGEW stands in the first position in the overall and average rankings presented in Figure 3. From another perspective, HGEW reaches the global optimal position on many functions regardless of the search space size.
1) The Accuracy and the Stability
From the gained results on the unimodal and multi-modal functions, the following observations can be made:
HGEW reaches the optimal values on almost all benchmark functions from $C1$ to $C16$ with decent stability. It is notable that when the dimension of the search space increases from 20 to 100, HGEW is still ranked in the first position among all algorithms, while the results of GSO become worse, particularly on $C1$, $C2$, $C10$, and $C11$ (Table 7). Looking more closely at those functions in Table 7, HGEW's mean best fitness values still reach the optimal values (C1, C10, C11) or nearly the optimal value (C2) with the best stability ($10E-4$), while the others cannot land on the optimal value (C1, C2).

HGEW outperforms almost all other algorithms in terms of the $std$ value on over half of the functions, especially on $C1$, $C2$, $C3$, $C8$, $C9$, and $C16$ (Tables 6, 7). The results also show that HGEW stays more robust than the others when running on each function 20 times.

Other algorithms such as LWOA and GSO-WOA bring decent results, being ranked in second or third place on several functions regardless of the size of the search space. However, unlike HGEW, they cannot reach the optimal value on the unimodal functions ($C1$, $C2$, $C3$) with dimensions of 50 and 100.
In summary, all the gained results with the unimodal and multi-modal functions show that HGEW has excellent optimization ability and remains stable as the dimension rises. The explanation is that, by involving the current best solution and a random particle from the current swarm, the Crossover operator enhances not only the exploitation capacity, leading the entire swarm toward the global optimal position, but also the diversity of the population, which helps HGEW escape local minima.
2) The Convergence Characteristics
The convergence curve results for selected unimodal and multi-modal functions are presented in Figure 5.
The convergence curve of average fitness value on selected unimodal and multi-modal functions.
As depicted in the figure, it is evident that HGEW not only converges quickly toward the global optimal position but also provides the best accuracy. The other GSO-based algorithms show mediocre results in terms of accuracy, even though they have competitive convergence speed. LWOA also brings good outcomes in some cases and has nearly the same convergence speed as HGEW.
Also, from Figure 5, GSO shows the best performance with the dimension of 20, but when the dimension goes up to 50 or 100, it is outperformed by other algorithms such as HGEW and LWOA. LWOA is competitive in both accuracy and convergence speed, especially with the dimension of 50, where it performs excellently and almost as well as HGEW on both unimodal and multi-modal benchmark functions. However, in general, HGEW mostly gives the best results among all algorithms in terms of accuracy.
B. Hybrid and Composition Benchmark Functions
1) The Accuracy and the Stability
Resolving the optimization problem on hybrid and composition functions, which are built from unimodal and multi-modal components, requires both strong exploration and strong exploitation capabilities.
As the dimension increases, our HGEW also achieves better results, as illustrated in Table 10.
In summary, GSO-LWOA is also very competitive on the multi-modal functions but falls far behind HGEW in the unimodal case. These results prove that our proposed HGEW can explore the search space efficiently and find its promising regions. In this way, HGEW has a strong ability for local minimum avoidance compared with LWOA and GSO-LWOA.
2) The Convergence Characteristics
The curve results are presented in Figure 6 for selected dimensions.
The convergence curve of average fitness value on selected hybrid and composition functions.
As depicted in Figure 6, analogous to the curves in the unimodal and multi-modal function tests, HGEW also achieves a high convergence speed on the hybrid and composition functions with decent accuracy.
Conclusion and Future Work
The work presented in this paper aims to tackle the global optimization problem. It is inspired by the simplicity and efficiency principles of Galactic Swarm Optimization and the Whale Optimization Algorithm. However, despite many advantages, the original GSO suffers from premature convergence and mediocre accuracy, and the original WOA is not optimized for solving global optimization. The HGEW (Hybridization of Galactic Swarm Optimization and Evolution Whale Optimization Algorithm) proposed in this work overcomes those problems with the following enhancements: the Levy-Flight trajectory provides adaptive search steps that jump out of locally optimal positions, and a two-point crossover operation boosts the exploitation capacity by means of evolutionary offspring creation, helping the optimizer move deeper toward the global optimal position in the search space. The main contribution of the work was presented in Section II-E, with the evolution hybridization design presented in Section III. HGEW was validated in Section V with extensive experimental results, comparisons, and evaluations. The obtained results showed that HGEW has outstanding performance compared with the other algorithms in terms of accuracy and convergence speed.
In terms of application, HGEW is a part of the V-chain project [72], in which the optimizer helps find suitable parameters for neural networks that forecast the workload of distributed systems like cloud computing and blockchain. The main drawback of the proposed technique is similar to that of other meta-heuristic algorithms: although HGEW brings good outcomes, it cannot guarantee that the achieved result is the global optimum. Our improvement with Levy-Flight presented in this paper also has the goal of enhancing the ability to search for the best optima, but we believe that the algorithm's optimization ability can be enhanced even more. Hence, in the future, we plan to improve HGEW with further methods for increasing search performance, as well as to integrate advanced features such as an early-stopping condition into our implementation.