Introduction
The Particle Swarm Optimization (PSO) algorithm is a metaheuristic-based optimization approach regarded as efficient and effective in solving many optimization problems [1]. For example, PSO has been applied to tuning a fuzzy controller [2] and to the removal of color cast in digital images [3]. Following its success, variants aiming at simpler implementations and better performance have emerged. One of the notable proposals is the Bare Bones Particle Swarm Optimization (BBPSO) [4], which has become a popular algorithm in the class of swarm-based metaheuristics. The BBPSO, based on the multi-agent philosophy, simplifies the PSO implementation by relying more on randomized but guided search.
In many applications, BBPSO, in addition to its simplicity of not needing to calculate particle velocities as in the conventional PSO, has been found to perform satisfactorily on many practical and challenging problems [5]. For example, BBPSO was applied to parameter estimation of the mixed Weibull statistical distribution by maximizing the likelihood function [6]. In [7], a highly non-linear model for vapor-liquid equilibrium data was constructed using BBPSO. Furthermore, it was employed together with a co-evolution strategy in data clustering [8]. In another application, BBPSO was modified to handle binary problems and used in data feature selection [9]. In health science, it was applied to leukaemia diagnosis from microscopic images [10]. In mobile robotics, a path planning method was developed using BBPSO [11]. In [12], BBPSO with directional chaotic search was adopted in optimizing the economic dispatch of power grid systems.
It can be seen that the BBPSO algorithm can be applied to a variety of optimization problems. It is also worth mentioning that variations of BBPSO were mostly made according to the problem tackled. Moreover, those alterations to the basic algorithm were based on intuition or heuristics. Hence, the behavior of the modified BBPSO was neither explicitly known nor controllable by the algorithm design. Modifications often borrowed from other established metaheuristic algorithms, such as adopting the crossover and mutation operators from the Genetic Algorithm (GA) [13] to maintain search diversity. Other variants combined BBPSO with Differential Evolution (DE) to obtain the search attractor [14]. Despite their applicability, these hybridizations often added unnecessary complications. More details on the computation and implementation aspects are highlighted in Section II, where related works are reviewed.
In this work, an enhanced algorithm called the First-Order Bare Bones Particle Swarm Optimizer (FODBB) is proposed. The design is approached using principles from first-order difference equations. Particles, as search agents, are driven to follow prescribed trajectories toward a dynamic target location given by the global best and particle best positions. The advantage is that particles are allowed to explore the search space in early iterations and to focus on exploitation in later iterations as randomness decreases. The implementation complexity is close to that of the canonical BBPSO and lower than that of several other variants. The performance of the proposed FODBB is compared to several recently reported BBPSO-based algorithms over a collection of recent benchmark functions.
The rest of the paper is organized as follows. A review of related works is presented in Section II. Section III details the development of the FODBB algorithm. Experiments conducted are described in Section IV, together with an evaluation of the results. The conclusion is given in Section V.
Related Works
The BBPSO and its recently reported variants are reviewed. Their procedural characteristics and the algorithm complexity are summarized. Furthermore, their merits and limitations are identified and discussed.
A. Basic BBPSO
The BBPSO [4] may be considered as a simplified form of the classic PSO algorithm [1]. In fact, the main idea is to remove the so-called inertia driven particle “flying” component. In addition, particle position updates are made according to the current global best and the particle best positions.
Let $\mathbf x_{i,t}$ denote the position of particle $i$ at iteration $t$, $\mathbf x_{t}^{g}$ the global best position, and $\mathbf x_{i,t}^{p}$ the best position found by particle $i$. The BBPSO samples the next position from a Gaussian distribution,
\begin{equation*} \mathbf {x}_{i,t+1} = \mathcal N(\boldsymbol \mu _{i},\boldsymbol \sigma _{i}),\tag{1}\end{equation*}
with mean
\begin{equation*} \boldsymbol \mu _{i} = \frac {1}{2}(\mathbf {x}_{t}^{g} + \mathbf {x}_{i,t}^{p}),\tag{2}\end{equation*}
and standard deviation
\begin{equation*} \boldsymbol \sigma _{i} = | \mathbf x_{t}^{g} - \mathbf x_{i,t}^{p} |.\tag{3}\end{equation*}
The computation complexity, on a per-iteration, per-particle, and per-dimension basis, can be identified as follows: one division is needed to calculate the mean in Eq. (2), and one distribution sampling is needed to generate the new position in Eq. (1).
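To make the procedure concrete, a minimal Python sketch of the BBPSO update follows; the function names, the toy sphere fitness, and the parameter values are illustrative choices, not part of the algorithm specification in [4].

```python
import numpy as np

def bbpso_step(x, p_best, g_best):
    """One BBPSO update: sample each particle from a Gaussian whose
    mean is the midpoint of its personal best and the global best,
    and whose std is their absolute difference (Eqs. 1-3)."""
    mu = 0.5 * (g_best + p_best)          # Eq. (2), element-wise
    sigma = np.abs(g_best - p_best)       # Eq. (3), element-wise
    return np.random.normal(mu, sigma)    # Eq. (1)

# Toy usage on a 2-D sphere function with 20 particles.
lo, hi, n, d = -100.0, 100.0, 20, 2
fitness = lambda x: np.sum(x**2, axis=1)
x = np.random.uniform(lo, hi, (n, d))
p_best = x.copy()
g_best = x[np.argmin(fitness(x))].copy()
for t in range(100):
    x = bbpso_step(x, p_best, g_best)
    improved = fitness(x) < fitness(p_best)
    p_best[improved] = x[improved]
    g_best = p_best[np.argmin(fitness(p_best))].copy()
```

Running this sketch also makes visible the limitation discussed later in Section III-A: a particle whose personal best coincides with the global best draws a zero standard deviation and stops moving.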
B. Alternative Randomness
Since the BBPSO update uses a sample from the Gaussian distribution, a natural modification is to use alternative randomization strategies. In [15], the Bare Bones PSO with Gaussian or Cauchy Jump (BBGCJ) algorithm was presented; the update follows either the BBPSO or a mixture of Gaussian and Cauchy samples. The choice of strategy depends on the number of fitness-value stagnations. If the particle evolution has stagnated, the update adopting the mixture is \begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathbf x_{i,t}^{p} (1+\mathcal N(\boldsymbol 0,\boldsymbol \sigma _{i})), & \mathcal U(0,1) < 0.5\\ \mathbf x_{i,t}^{p} (1+\mathcal C(\boldsymbol 0,\boldsymbol \sigma _{i})), & \text {otherwise}. \end{cases}\tag{4}\end{equation*}
The complexity of the update in the stagnated situation is one distribution sampling, one addition, and one multiplication per dimension, see Eq. (4).
In the work reported in [16], the Bare Bones PSO with Adaptive Chaotic Jump (ACJMP), the update is also governed by the number of stagnation occurrences $n_{s}$, through the stagnation factor \begin{equation*} s = \left ({1+ \exp (2-n_{s}) }\right)^{-1}.\tag{5}\end{equation*}
If a chaotic jump is triggered, the particle best position is perturbed by a chaotic value $z_{t}$, \begin{equation*} \mathbf x_{i,t+1} = \mathbf x_{i,t}^{p} (1+(2z_{t} -1)),\tag{6}\end{equation*}
Assume again that the occurrence of stagnation and no stagnation is equally likely. The complexity of the chaotic update comes from four multiplications in generating the chaotic sequence. Moreover, there are two floating-point operations in calculating the stagnation factor. The overall complexity is therefore higher than that of the basic BBPSO.
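For illustration, a hedged sketch of the two jump mechanisms in Eqs. (4)-(6) follows; the use of the logistic map to advance the chaotic value and the exact bookkeeping of the stagnation counter are assumptions, as these details are not fully specified here.

```python
import numpy as np

def gaussian_cauchy_jump(p_best, sigma):
    """BBGCJ-style jump (Eq. 4): perturb the personal best with a
    Gaussian or a Cauchy sample, each chosen with probability 0.5."""
    if np.random.rand() < 0.5:
        return p_best * (1.0 + np.random.normal(0.0, sigma))
    return p_best * (1.0 + sigma * np.random.standard_cauchy(p_best.shape))

def chaotic_jump(p_best, z):
    """ACJMP-style jump (Eq. 6) driven by a chaotic value z in (0, 1);
    z is advanced here with the logistic map (an assumed choice)."""
    z = 4.0 * z * (1.0 - z)                  # logistic map update
    return p_best * (1.0 + (2.0 * z - 1.0)), z

def jump_probability(n_s):
    """Stagnation factor from the stagnation count n_s (Eq. 5)."""
    return 1.0 / (1.0 + np.exp(2.0 - n_s))
```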
C. Modified Particles
Following a different idea, the Disruption Bare Bones PSO (DBPSO) algorithm was given in [17]. This approach applies an iteration-dependent weight to a BBPSO-produced particle position and its disrupted version according to a specified condition. The disruption factor is obtained from \begin{equation*} d_{s} = \begin{cases} \mathcal U(-R_{i,j}/2,R_{i,j}/2), & R_{i,g} \geq 1\\ R_{i,j} + \mathcal U(-R_{i,j}/2,R_{i,j}/2), & \text {otherwise}. \end{cases}\tag{7}\end{equation*}
The particle is then updated as \begin{equation*} \mathbf x_{i,t+1} = \frac {t}{T}\mathbf x_{i,t}^{B} + \left ({1-\frac {t}{T} }\right)\mathbf x_{i,t}^{B} d_{s},\tag{8}\end{equation*} where $\mathbf x_{i,t}^{B}$ is the BBPSO-produced position and $T$ is the maximum number of iterations.
Concerning the complexity, the calculation of the disruption factor and of the weighted combination in Eq. (8) requires extra samplings, multiplications, and divisions beyond the basic BBPSO update, making DBPSO comparatively expensive.
D. Spatial Dependence
In [18], the Distribution Guided Bare Bone PSO (DGPSO) algorithm was presented. The update of particles is governed by their spatial distribution, given by \begin{equation*} s_{i} = \frac {\max _{i}\{\mathbf x_{i,t}^{d} \}-\min _{i}\{\mathbf x_{i,t}^{d} \}}{\max _{d}\{\mathbf x_{i,t}^{d}\}-\min _{d}\{\mathbf x_{i,t}^{d}\}}.\tag{9}\end{equation*}
A jump probability is then assigned according to \begin{equation*} p_{i}= \begin{cases} 0.1, & s_{i} < 0.25\\ 0.01, & \text {otherwise}, \end{cases}\tag{10}\end{equation*}
and the particle update follows \begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathcal U(\min _{d}\{\mathbf x_{i,t}^{d}\},\max _{d}\{\mathbf x_{i,t}^{d}\}), & \mathcal U(0,1) < p_{i}\\ \mathcal N(\boldsymbol \mu _{i},\boldsymbol \sigma _{i}), & \text {otherwise}. \end{cases}\tag{11}\end{equation*}
In calculating the spatial distribution, one division and four min/max operations are needed. On average, at least three distribution samplings are required in the particle update. The overall computation complexity is therefore higher than that of the basic BBPSO.
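A sketch of the DGPSO update is given below. The indexing in Eq. (9) is ambiguous in this extract, so the spread statistic is interpreted here as the per-dimension swarm spread normalized by the largest spread over all dimensions; this reading, and the helper names, are assumptions.

```python
import numpy as np

def dgpso_step(x, p_best, g_best, thresh=0.25):
    """DGPSO-style update (Eqs. 9-11): jump probabilities derived
    from the swarm's spatial spread, otherwise a BBPSO sample."""
    lo, hi = x.min(axis=0), x.max(axis=0)        # per-dimension extremes
    spread = hi - lo
    s = spread / max(spread.max(), 1e-12)        # assumed normalization of Eq. (9)
    p_jump = np.where(s < thresh, 0.1, 0.01)     # Eq. (10)
    mu = 0.5 * (g_best + p_best)                 # BBPSO mean
    sigma = np.abs(g_best - p_best)              # BBPSO std
    new_x = np.random.normal(mu, sigma)          # default branch of Eq. (11)
    jump = np.random.rand(*x.shape) < p_jump     # uniform re-sampling branch
    new_x[jump] = np.random.uniform(lo, hi, x.shape)[jump]
    return new_x
```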
E. Mixed Strategy
In the Mixed Bare Bones PSO (MXPSO) [19], the strategy adopted is to mix the conventional PSO with BBPSO according to a mixing threshold $\tau_{m}$: \begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathbf x_{i,t} + w(\mathbf x_{i,t} - \mathbf x_{i,t-1}) \\ \quad + c_{p}\varphi _{p} (\mathbf x_{i,t}^{p} - \mathbf x_{i,t}) \\ \quad + c_{g}\varphi _{g} (\mathbf x_{t}^{g} - \mathbf x_{i,t}), & \mathcal U(0,1) < \tau _{m}\\ \mathcal N(\boldsymbol \mu _{i},\boldsymbol \sigma _{i}), & \text {otherwise}. \end{cases}\tag{12}\end{equation*}
The complexity of updating according to the conventional PSO branch involves several multiplications and additions for the inertia, cognitive, and social terms, plus two uniform samplings; the overall complexity is therefore higher than that of BBPSO.
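The mixed update of Eq. (12) can be sketched as follows; the inertia and acceleration coefficient values are illustrative defaults, and the velocity is written in position form through the previous position, matching the first branch of Eq. (12).

```python
import numpy as np

def mxpso_step(x, x_prev, p_best, g_best,
               w=0.7, c_p=1.5, c_g=1.5, tau_m=0.5):
    """MXPSO-style mixed update (Eq. 12): with probability tau_m take
    a conventional PSO step, otherwise take the BBPSO Gaussian sample.
    Coefficient values here are illustrative, not prescribed by [19]."""
    phi_p = np.random.rand(*x.shape)
    phi_g = np.random.rand(*x.shape)
    pso_x = (x + w * (x - x_prev)
             + c_p * phi_p * (p_best - x)
             + c_g * phi_g * (g_best - x))
    bb_x = np.random.normal(0.5 * (g_best + p_best),
                            np.abs(g_best - p_best))
    use_pso = np.random.rand(x.shape[0], 1) < tau_m   # one draw per particle
    return np.where(use_pso, pso_x, bb_x), x          # new x and new x_prev
```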
F. Sub-Swarms
Another strategy adopted in modifying BBPSO is found in the Hierarchical Bare Bones PSO (HIPSO) algorithm [20]. In its implementation, particles are divided into groups of three on the basis of their fitness values. The particles in a group are assigned as leader ($L$), medium ($M$), and worst ($W$), and are updated respectively as \begin{align*} \mathbf x_{i,t}^{L}=&\mathcal N(\boldsymbol \mu ^{L},\boldsymbol \sigma ^{L}), \\ \boldsymbol \mu ^{L}=&(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{L,p})/2,\quad \boldsymbol \sigma ^{L} = | \mathbf x_{t}^{g} -\mathbf x_{i,t}^{L,p} |,\tag{13}\\ \mathbf x_{i,t}^{M}=&\mathcal N(\boldsymbol \mu ^{M},\boldsymbol \sigma ^{M}), \\ \boldsymbol \mu ^{M}=&(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{M})/2,\quad \boldsymbol \sigma ^{M} = | \mathbf x_{t}^{g} -\mathbf x_{i,t}^{M} |,\tag{14}\\ \mathbf x_{i,t}^{W}=&\mathcal N(\boldsymbol \mu ^{W},\boldsymbol \sigma ^{W}), \\ \boldsymbol \mu ^{W}=&(\mathbf x_{t}^{W} + \mathbf x_{i,t}^{M})/2,\quad \boldsymbol \sigma ^{W} = | \mathbf x_{t}^{W} -\mathbf x_{i,t}^{M} |.\tag{15}\end{align*}
In HIPSO, the number of particles used must be a multiple of three. The computation complexity is the same as BBPSO, i.e., one division and one distribution sampling per particle, per iteration, per dimension.
In the Pair-Wise Bare Bones PSO (PWPSO) [21], particles are randomly divided into groups of two. The particles in a pair are assigned as leader ($L$) and follower ($F$), updated as \begin{align*} \mathbf x_{i,t}^{L}=&\mathcal N(\boldsymbol \mu ^{L},\boldsymbol \sigma ^{L}), \\ \boldsymbol \mu ^{L}=&(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p})/2,\quad \boldsymbol \sigma ^{L} = | \mathbf x_{t}^{g} - \mathbf x_{i,t}^{p} |,\tag{16}\\ \mathbf x_{i,t}^{F}=&\mathcal N(\boldsymbol \mu ^{F},\boldsymbol \sigma ^{F}), \\ \boldsymbol \mu ^{F}=&(\mathbf x_{i,t}^{L} + \mathbf x_{i,t}^{F})/2,\quad \boldsymbol \sigma ^{F} = | \mathbf x_{i,t}^{L} - \mathbf x_{i,t}^{F} |.\tag{17}\end{align*}
In PWPSO, the same sub-swarm strategy as HIPSO is adopted but with groups of only two particles. Its computation complexity is also the same as BBPSO, i.e., one division and one distribution sampling per particle, per iteration, per dimension.
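A sketch of the pair-wise scheme of Eqs. (16)-(17) follows; the random pairing routine and the convention that the fitter particle of each pair leads are taken from the description above, while the helper names are illustrative.

```python
import numpy as np

def pwpso_step(x, p_best, g_best, fit):
    """PWPSO-style pair-wise update (Eqs. 16-17): particles are paired
    at random; in each pair the fitter particle acts as leader and the
    other as follower drawn toward the leader's new position."""
    n = x.shape[0]                        # assumed even, as PWPSO requires
    order = np.random.permutation(n)
    new_x = x.copy()
    for a, b in order.reshape(-1, 2):
        lead, foll = (a, b) if fit[a] <= fit[b] else (b, a)
        mu_l = 0.5 * (g_best + p_best[lead])              # Eq. (16)
        new_x[lead] = np.random.normal(mu_l, np.abs(g_best - p_best[lead]))
        mu_f = 0.5 * (new_x[lead] + x[foll])              # Eq. (17)
        new_x[foll] = np.random.normal(mu_f, np.abs(new_x[lead] - x[foll]))
    return new_x
```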
A similar algorithm, the Dynamic Allocation Bare Bones PSO (DAPSO), also makes use of the sub-swarm strategy [22]. The difference is that particles are divided into two groups with respect to their fitness values. The better half of the particles is denoted as the main group ($M$), and the remaining half as the auxiliary group ($A$); the updates are \begin{align*} \mathbf x_{i,t+1}^{M}=&\mathcal N(\boldsymbol \mu ^{M},\boldsymbol \sigma ^{M}), \\ \boldsymbol \mu ^{M}=&(\mathbf x_{i,t}^{p,M} + \mathbf x_{t}^{g})/2,\quad \boldsymbol \sigma ^{M} = | \mathbf x_{i,t}^{p,M} - \mathbf x_{t}^{g} |,\tag{18}\\ \mathbf x_{i,t+1}^{A}=&\mathcal N(\boldsymbol \mu ^{A},\boldsymbol \sigma ^{A}), \\ \boldsymbol \mu ^{A}=&(\mathbf x_{i,t}^{p,A} + \mathbf x_{i-1,t}^{p,A})/2,\quad \boldsymbol \sigma ^{A} = | \mathbf x_{i,t}^{p,A} - \mathbf x_{i-1,t}^{p,A} |.\tag{19}\end{align*}
If the sorting and selection overhead is ignored, the computation complexity is the same as BBPSO, i.e., one division and one distribution sampling per particle, per iteration, per dimension.
In [23], the Dynamic Reconstruction Bare Bones PSO (DRPSO) algorithm was presented. Two particles are randomly selected to form a group. The particle with the better fitness is regarded as the elite ($E$), and the other particle ($R$) is reconstructed according to \begin{align*} \mathbf x_{i,t}^{R}=&\mathcal N(\boldsymbol \mu ^{R},\boldsymbol \sigma ^{R}), \\ \boldsymbol \mu ^{R}=&(\mathbf x_{i,t}^{p,R} + \mathbf x_{i,t}^{p,E})/2,\quad \boldsymbol \sigma ^{R} = | \mathbf x_{i,t}^{p,R} - \mathbf x_{i,t}^{p,E} |.\tag{20}\end{align*}
In the Modified Bare Bones with Differential Evolution (MBBDE) algorithm [24], the particle best position is replaced by a combination of three randomly selected particles, that is \begin{equation*} \mathbf x_{i,t}^{p} = \mathbf x_{t}^{(1) } + (\mathbf x_{t}^{(2) } - \mathbf x_{t}^{(3) })/2.\tag{21}\end{equation*}
The particle is then updated as \begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathcal N(\boldsymbol \mu _{i},\boldsymbol \sigma _{i}), \\ \quad \boldsymbol \mu _{i}=(\mathbf x_{i,t}^{p} + \bar {\mathbf x}_{t})/2, \\ \quad \boldsymbol \sigma _{i}=|\mathbf x_{i,t}^{p} - \bar {\mathbf x}_{t}|, & \mathcal U(0,1) < 0.5\\ \mathbf x_{i,t}^{p}, & \text {otherwise}, \end{cases}\tag{22}\end{equation*} where $\bar {\mathbf x}_{t}$ is the mean position of the swarm.
There is one division involved in calculating the new particle best position, one division to obtain the average position, one division to calculate the distribution mean, and one distribution sampling. The overall computation complexity is therefore higher than that of the basic BBPSO.
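A sketch of the MBBDE update of Eqs. (21)-(22) follows; whether the three random particles are drawn from current positions or personal bests is not specified in this extract, so current positions are assumed.

```python
import numpy as np

def mbbde_step(x):
    """MBBDE-style update (Eqs. 21-22): a DE-like mutant replaces the
    personal best, then the new position is sampled around the midpoint
    of the mutant and the swarm mean, or the mutant is kept (prob. 0.5)."""
    n = x.shape[0]
    x_bar = x.mean(axis=0)                         # swarm mean position
    new_x = np.empty_like(x)
    for i in range(n):
        r1, r2, r3 = np.random.choice(n, 3, replace=False)
        mutant = x[r1] + 0.5 * (x[r2] - x[r3])     # Eq. (21)
        if np.random.rand() < 0.5:                 # Eq. (22), first branch
            mu = 0.5 * (mutant + x_bar)
            new_x[i] = np.random.normal(mu, np.abs(mutant - x_bar))
        else:
            new_x[i] = mutant                      # Eq. (22), second branch
    return new_x
```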
G. Discussion
From the review of the above class of BBPSO variants, it can be observed that algorithms adopting the sub-swarm strategy have low computation complexities. On the other hand, these algorithms incur extra overhead in dividing particles into different groups.
It is also noticed that the DBPSO algorithm, using a particle modification approach, has high computational complexity. Algorithms using spatial dependence and mixed strategies, DGPSO and MXPSO, also have high complexities. Likewise, ACJMP, which uses alternative sampling distributions, has a higher complexity. Unless their performance improves accordingly, these algorithms are unnecessarily complicated.
Furthermore, in many BBPSO variants, algorithm control parameters were determined in an ad-hoc manner, e.g., the jump probability in the DGPSO algorithm. Other design choices also cannot be easily rationalized; for instance, there is a discrepancy in how the stagnation condition is determined in BBGCJ and ACJMP. It can be seen that these algorithms were designed largely on heuristics. Hence, a systematic and simple design is needed. Here, a systematic design means that the development of the algorithm is based on theoretical support, such as the first-order difference equation, instead of relying on natural metaphors and intuition. Furthermore, a simple design means a straightforward implementation.
First-Order Bare Bones Particle Swarm Optimizer
In this section, the development of the FODBB algorithm is presented. We begin by discussing the limitations of the conventional BBPSO. Then the FODBB is formulated, and its properties and computational complexity are analyzed and estimated.
A. Limitations of the Conventional BBPSO
For the basic BBPSO process described in Section II-A, consider a particle $\mathbf x_{i,t}$. Three cases can be distinguished:
1) Particle $\mathbf x_{i,t}$ is the global best particle. Then, it must also be at the $i$th particle best position.
2) Particle $\mathbf x_{i,t}$ is located at its particle best position, but is not the global best particle.
3) Particle $\mathbf x_{i,t}$ is neither the global best nor at its particle best position.
Depending on these cases, the update of the particle leads to different outcomes. In case 1, $\mathbf x_{t}^{g} = \mathbf x_{i,t}^{p} = \mathbf x_{i,t}$, so the mean in Eq. (2) equals the current position and the standard deviation in Eq. (3) is zero; the particle is stuck and stagnates. In case 2, we have $\mathbf x_{i,t} = \mathbf x_{i,t}^{p} \neq \mathbf x_{t}^{g}$, so the particle is drawn toward the midpoint between its own position and the global best, with randomness proportional to their separation. The most general situation is case 3, in which the center of the next position is the midpoint $(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p})/2$, a location that is independent of the current position $\mathbf x_{i,t}$.
It can be noted that BBPSO does not make use of the current particle position in updates but only depends on the global best and particle best positions. This makes the algorithm very restrictive in exploring the search space.
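The stagnation in case 1 is easy to reproduce numerically. The toy check below (variable names are illustrative) shows that once a particle coincides with both best positions, the BBPSO sample collapses onto it.

```python
import numpy as np

# Case 1: the particle is the global best and its own personal best.
x = np.array([1.0, -2.0])
p_best, g_best = x.copy(), x.copy()

mu = 0.5 * (g_best + p_best)        # equals x
sigma = np.abs(g_best - p_best)     # all zeros: degenerate Gaussian
x_next = np.random.normal(mu, sigma)
print(np.array_equal(x_next, x))    # True: the particle cannot move
```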
B. The Proposed FODBB
In order to mitigate the limitations in conventional BBPSO, a new algorithm called First-Order Bare Bones Particle Swarm Optimizer (FODBB) is proposed. The algorithm design fulfills the following purposes:
The particle update is not restricted to the global best and particle best positions.
The search randomness exists irrespective of particle positions.
Before the first iteration, particles are randomly generated, by sampling from a uniform distribution, to cover the search region. In each iteration, fitness values are evaluated, and the global best and particle best positions are identified. The update is carried out as \begin{equation*} \mathbf x_{i,t+1} = \mathcal N(\boldsymbol \mu _{i,t},\boldsymbol \sigma _{t}),\tag{23}\end{equation*}
where the distribution mean incorporates the current particle position, \begin{equation*} \boldsymbol \mu _{i,t} = \frac {1}{3} (\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p} + \mathbf x_{i,t}),\tag{24}\end{equation*}
and the standard deviation is particle independent but decreases linearly with iterations, \begin{equation*} \boldsymbol \sigma _{t} = \left ({1-\frac {t}{T} }\right)R,\tag{25}\end{equation*} where $R$ is the search range and $T$ is the maximum number of iterations.
Algorithm 1 First-Order Bare Bones Particle Swarm Optimizer (FODBB)
Input: number of particles $N$, maximum number of iterations $T$, search range $R$
Initialize particles by uniform sampling over the search region
for each iteration do
calculate Gaussian distribution parameter $\boldsymbol \sigma _{t}$, Eq. (25)
for each particle do
Evaluate fitness value
Identify global best position $\mathbf x_{t}^{g}$
Identify particle best position $\mathbf x_{i,t}^{p}$
Calculate the Gaussian distribution center $\boldsymbol \mu _{i,t}$, Eq. (24)
Update particle position, Eq. (23)
end for
if a particle lies outside the permitted search range then
Re-initialize the particle by uniform sampling
end if
end for
Output: global best position and its fitness value
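A compact Python rendering of Algorithm 1 is given below as a sketch under stated assumptions: the range parameter $R$ is taken as the width of the search box, and the re-initialization step is realized as a uniform re-draw of out-of-range particles; neither detail is fully confirmed by the listing above.

```python
import numpy as np

def fodbb(fitness, dim, n=30, T=1000, lo=-100.0, hi=100.0, seed=None):
    """FODBB sketch (Algorithm 1). `fitness` maps an (n, dim) array of
    positions to an (n,) array of fitness values (minimization)."""
    rng = np.random.default_rng(seed)
    R = hi - lo                               # assumed range parameter
    x = rng.uniform(lo, hi, (n, dim))         # uniform initialization
    p_best = x.copy()
    p_fit = fitness(x)
    g_best = p_best[np.argmin(p_fit)].copy()
    for t in range(1, T + 1):
        sigma = R - t * (R / T)               # Eq. (25)/(34), once per iteration
        mu = (g_best + p_best + x) / 3.0      # Eq. (24)
        x = rng.normal(mu, sigma)             # Eq. (23)
        out = (x < lo) | (x > hi)             # assumed re-initialization rule
        x[out] = rng.uniform(lo, hi, (n, dim))[out]
        f = fitness(x)
        better = f < p_fit
        p_best[better], p_fit[better] = x[better], f[better]
        g_best = p_best[np.argmin(p_fit)].copy()
    return g_best, p_fit.min()

# Usage on a simple sphere function.
best_x, best_f = fodbb(lambda x: np.sum(x**2, axis=1), dim=10, seed=1)
```

Note that $\boldsymbol \sigma_{t}$ is computed once per iteration, outside the particle loop, in line with the complexity analysis in Section III-D.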
C. Properties of FODBB
Although the FODBB update procedure is similar to BBPSO, its structure is purposefully designed to avoid the problems encountered in BBPSO. Since the Gaussian distribution is symmetric and centered at $\boldsymbol \mu _{i,t}$, the update in Eq. (23) can be expanded as \begin{align*} \mathbf x_{i,t+1}=&\boldsymbol \mu _{i,t} + \mathcal N(\boldsymbol 0,\boldsymbol \sigma _{t}), \\=&\frac {1}{3}\mathbf x_{i,t} + \frac {1}{3}(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p}) + \mathcal N(\boldsymbol 0,\boldsymbol \sigma _{t}).\tag{26}\end{align*}
The expanded update equation can be treated as containing a homogeneous part and a non-homogeneous part. The former contains only the term $\frac{1}{3}\mathbf x_{i,t}$, while the latter consists of a deterministic component, $\frac{1}{3}(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p})$, and a stochastic component, $\mathcal N(\boldsymbol 0,\boldsymbol \sigma _{t})$.
Consider the homogeneous equation \begin{equation*} \mathbf x_{i,t+1}^{h} = w\mathbf x_{i,t}^{h},\tag{27}\end{equation*} with $w = 1/3$. Iterating from the initial position gives
\begin{align*} \mathbf x_{i,2}^{h}=&w\mathbf x_{i,1}^{h}, \\ \mathbf x_{i,3}^{h}=&w\mathbf x_{i,2}^{h}=w^{2}\mathbf x_{i,1}^{h}, \\ &\vdots \\ \mathbf x_{i,T}^{h}=&w\mathbf x_{i,T-1}^{h} = \cdots = w^{T-1}\mathbf x_{i,1}^{h}.\tag{28}\end{align*} Since $|w| = 1/3 < 1$, the homogeneous component decays geometrically; for example, $w^{9} \approx 5.1\times 10^{-5}$, so the influence of the initial position practically vanishes after about ten iterations.
For the deterministic term, the difference equation can be separately written as \begin{align*} \mathbf x_{i,t}^{d}=&w(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p}), \\ \mathbf x_{i,t+1}^{d}=&w(\mathbf x_{t+1}^{g} + \mathbf x_{i,t+1}^{p}).\tag{29}\end{align*}
Another case to be considered is when the global best position remains unchanged while the particle best position follows the deterministic component; the recursion then becomes \begin{equation*} \mathbf x_{i,t+1}^{d} = w\mathbf x_{i,t}^{d} + w\mathbf x_{t}^{g}.\tag{30}\end{equation*}
If, further, the global best position also tracks the deterministic component, the recursion reduces to \begin{equation*} \mathbf x_{i,t+1}^{d} = w(\mathbf x_{t}^{g}+\mathbf x_{i,t}^{p})=2w\mathbf x_{i,t}^{d}.\tag{31}\end{equation*} With $2w = 2/3 < 1$, this component also decays over iterations.
For the stochastic component, we have \begin{equation*} \mathbf x_{i,t+1}^{s} = \mathcal N(\boldsymbol 0,\boldsymbol \sigma _{t}),\tag{32}\end{equation*} whose standard deviation, by Eq. (25), decreases linearly with iterations and vanishes at $t = T$; the search is thus driven from exploration toward exploitation.
Because of the linearity in the particle update process, which is expressed as a first-order difference equation, the total solution is the sum of solutions to the homogeneous, deterministic and stochastic components. That is \begin{equation*} \mathbf x_{i,t+1} = \mathbf x_{i,t}^{h} + \mathbf x_{i,t}^{d} + \mathbf x_{i,t}^{s}.\tag{33}\end{equation*}
The particle evolution and its search behavior can be summarized as follows.
The history of the particle position influences its future position. This effect diminishes with an increase in iterations.
During the transient period, i.e., before the historical influence diminishes, more positions are searched and tested for optimum. Hence, this leads to wider coverage of the search space and increases the chance to obtain better search results.
Because the sampling Gaussian distribution has a non-zero standard deviation that is independent of the particle position, particle stagnation can be avoided.
D. Computation Complexity
Recall the FODBB process shown in Algorithm 1. For each particle in an iteration, only one division is needed in calculating the Gaussian distribution center, see Eq. (24). Furthermore, one distribution sampling is needed to generate the new particle position. Since the standard deviation is constant for all particles, it can be pre-computed once per iteration by expressing it as \begin{equation*} \boldsymbol \sigma _{t} = \left ({1-\frac {t}{T} }\right) R = \left ({R - t\frac {R}{T} }\right),\tag{34}\end{equation*} where the ratio $R/T$ is computed only once before the iterations start. The overall complexity is therefore one division and one distribution sampling per particle, per iteration, per dimension, the same as the basic BBPSO.
Experiment
Experiments for function optimization were conducted to evaluate the performance of the proposed FODBB algorithm. Benchmark functions from the CEC 2017 dataset were employed [28]. These are collections of unimodal, simple multi-modal, hybrid, and composite functions, see Table 1. The functions are formulated as minimization problems.
The permitted search range is $[-100, 100]$ in every dimension.
The FODBB, BBPSO, and its variants were coded on the Matlab 2015b platform, running on a PC with a 1.70 GHz CPU, 4 GB RAM, and 64-bit Windows 7 OS. Fifty independently initialized test runs were performed to average out random effects and give a better indication of algorithm performance. Records of final fitness values, i.e., the fitness values at the last iteration, were kept for comparison.
A. Results
Plots of the fitness values with respect to the number of iterations are depicted in Figs. 1, 2, 3, 4 and 5 for test functions of 2, 10, 30, 50, and 100 dimensions, respectively. For a better illustration of the results, plots are drawn in the semi-log10 scale. Results are discussed according to their problem dimensions, on the basis of their mean, median, and standard deviation values and the hypothesis test outcomes.
1) Two-Dimensional Problems
From the results of 2-dimensional functions, shown in Fig. 1, it is observed that the FODBB algorithm is performing well, in particular, in functions
Final fitness values from the fifty independent runs are summarized in Table 3 as mean values; the median values are given in Table 4. It can be seen that FODBB is the best performing algorithm, with regard to the mean value, in all test functions. For the median value, the FODBB algorithm is the best in
The solution standard deviation is a measure of the algorithm's robustness against random effects in particle initialization and distribution sampling. A small value together with a low mean fitness signifies that the algorithm is robust. The FODBB is the best algorithm for all functions, see Table 5. Hence, FODBB can be regarded as an effective and robust algorithm for the majority of 2-dimensional minimization problems.
In the hypothesis test, namely the non-parametric rank-sum test, the FODBB performs satisfactorily, see Table 6. The results are significant, in the sense that the median is less than the others', in functions
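For reference, such a rank-sum comparison can be computed with standard tooling; the following sketch uses SciPy's `ranksums` with synthetic fitness records standing in for the fifty recorded runs.

```python
import numpy as np
from scipy.stats import ranksums

# Final fitness values from fifty runs of two algorithms (synthetic
# placeholders; the real records come from the experiment logs).
fodbb_runs = np.random.lognormal(mean=0.0, sigma=0.5, size=50)
rival_runs = np.random.lognormal(mean=0.5, sigma=0.5, size=50)

# One-sided Wilcoxon rank-sum test: is the FODBB median lower?
stat, p_value = ranksums(fodbb_runs, rival_runs, alternative='less')
print(f"statistic={stat:.3f}, p={p_value:.4f}, "
      f"reject H0 at 5%: {p_value < 0.05}")
```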
2) Ten-Dimensional Problems
For 10-dimensional functions, plots of fitness values in Fig. 2 indicate that FODBB is able to produce satisfactory solutions in
A comparison of resultant mean fitness values for 10-dimensional functions is listed in Table 7, the median values in Table 8, the standard deviation values in Table 9, and the rank-sum tests in Table 10. The proposed FODBB is the best algorithm, with regard to the mean fitness, in
The median values for the 10-dimensional problems, see Table 8, show that the FODBB is the best performer in functions
For the standard deviation measure listed in Table 9, together with low fitness values, FODBB is the best algorithm in
In the results of the hypothesis test shown in Table 10, the FODBB has all null hypotheses rejected in functions
3) Thirty-Dimensional Problems
Results of the mean fitness values are plotted in Fig. 3. The FODBB attains noticeably small errors in functions
The mean fitness values are listed in Table 11. We see that FODBB is the best algorithm in functions
For the median values, the FODBB has the lowest values in functions
The standard deviations are listed in Table 13. The FODBB, together with low mean fitness values, performs well in functions
Results of the hypothesis test are shown in Table 14. For FODBB, the null hypotheses are rejected in functions
4) Fifty-Dimensional Problems
Plots of the resultant mean fitness values are shown in Fig. 4. The FODBB has promising results in functions
For the mean fitness, results are listed in Table 15. The FODBB has the best values in functions
In Table 16, median values are shown. The FODBB has the lowest values in functions
For the standard deviation, see Table 17, the FODBB has low values in functions
Table 18 lists the results from the hypothesis test. The complete null hypothesis rejection occurs in functions
5) Hundred-Dimensional Problems
The results of the mean values are plotted in Fig. 5. The performance of the FODBB algorithm and that of the others is not easily distinguishable from the graphs. However, trends of decreasing fitness values are observable for FODBB.
The mean values are summarized in Table 19. The FODBB method has the lowest mean values in functions
Table 20 depicts the median values obtained in the 100-dimensional problems. The FODBB algorithm has low medians in functions
The standard deviations are listed in Table 21. The FODBB has low values in functions
Results from the statistical test are shown in Table 22. The null hypotheses for FODBB are rejected in functions
B. Discussion
Recall that the FODBB algorithm contains three components in its particle update process, namely, a homogeneous, a deterministic, and a stochastic component. The homogeneous component makes use of the current particle position, which prevents the particle evolution from moving directly to the position between the global best and particle best positions. The benefit is sufficient diversity in early iterations to allow exploration. The stochastic component, on the other hand, is independent of the particle, particle best, and global best positions. It is determined by the search range and decreases linearly as iterations increase. This ensures that randomness, an essential contributor to the search, always exists. The outcome is that particle stagnation is prevented.
The FODBB algorithm has been shown to perform better than the conventional BBPSO and its variants. The performance is especially good in 2-dimensional functions. Satisfactory results are also obtained from 10-, 30- and 50-dimensional problems. However, PWPSO performed better than the others in a majority of test functions in the 100-dimensional case. Overall, the FODBB, as indicated by the final mean fitness values, median values, standard deviations, and results from the rank-sum test, is an effective and robust algorithm among the class of BBPSO methods and variants. It is worth mentioning that the FODBB complexity is low and equivalent to that of the conventional BBPSO. Improvements to FODBB for high-dimensional problems will be considered in future work.
Conclusion
An improved Bare Bones Particle Swarm Optimization variant, the First-Order Bare Bones Particle Swarm Optimizer (FODBB), has been presented. The algorithm has a low computation complexity and is suitable for optimizing both simple and difficult functions in low and medium dimensions. The iterative particle update involves the current particle position, which allows for better search space exploration in early iterations. Furthermore, a search-range dependent Gaussian sampling with a non-zero and decreasing standard deviation is used in the update. This ensures that particle stagnation is avoided and exploitation of the search space is enforced in later iterations. Experimental results obtained from benchmark functions indicate that FODBB performs effectively and robustly across problems ranging from simple unimodal and multi-modal to hybrid and difficult composition functions.