
First-Order Difference Bare Bones Particle Swarm Optimizer


Abstract:

The Bare Bones Particle Swarm Optimization (BBPSO), because of its implementation simplicity, has been a popular swarm-based metaheuristic algorithm for solving optimization problems. However, as found in its many variants, their search behaviors were not considered in the design. Instead of employing heuristics, we formulate a low complexity particle swarm optimizer, called the First-Order Bare Bones Particle Swarm Optimizer (FODBB), whose behavior obeys the principle of first-order difference equations. The search trajectory can be constructed in a prescribed manner together with decreasing random searches that enable particles to explore the search space more completely. This characteristic thus allows for a wider search coverage at initial iterations and consequently improves the search performance. A comparative evaluation with recently reported BBPSO algorithms was conducted and experimental results indicate that the proposed optimizer outperforms others in a majority of benchmark optimization functions.
Published in: IEEE Access ( Volume: 7)
Page(s): 132472 - 132491
Date of Publication: 12 September 2019
Electronic ISSN: 2169-3536

SECTION I.

Introduction

The Particle Swarm Optimization (PSO) algorithm is a metaheuristic optimization approach regarded as efficient and effective in solving many optimization problems [1]. For example, PSO has been applied to tuning a fuzzy controller [2] and to the removal of color cast in digital images [3]. Following its success, variants aiming at simpler implementations and better performance have emerged. One notable proposal is the Bare Bones Particle Swarm Optimization (BBPSO) [4], which has become a popular algorithm in the class of swarm-based metaheuristics. The BBPSO, based on the multi-agent philosophy, simplifies the PSO implementation by relying more on randomized but guided search.

In many applications, BBPSO, in addition to its simplicity of not needing to calculate the particle velocity as in conventional PSO, has performed satisfactorily on many practical and challenging problems [5]. For example, BBPSO was applied to parameter estimation of the mixed Weibull statistical distribution by maximizing the likelihood function [6]. In [7], a highly non-linear model for vapor-liquid equilibrium data was constructed using BBPSO. Furthermore, it was employed together with the co-evolution strategy in data clustering [8]. In another application, BBPSO was modified to handle binary problems and used in data feature selection [9]. In health science, it was applied to leukaemia diagnosis from microscopic images [10]. In mobile robotics, a path planning method was developed using BBPSO [11]. In [12], BBPSO with directional chaotic search was adopted in optimizing the economic dispatch of power grid systems.

It can be seen that the BBPSO algorithm can be applied to a variety of optimization problems. It is also worth mentioning that variations of BBPSO were mostly made according to the problem tackled. Moreover, those alterations to the basic algorithm were based on intuitions or heuristics. Hence, the behavior of the modified BBPSO was neither explicitly known nor controllable by the algorithm design. Modifications often borrow from other established metaheuristic algorithms, such as adopting the crossover and mutation operators from the Genetic Algorithm (GA) [13] to maintain search diversity. Other variants combined BBPSO with Differential Evolution (DE) to obtain the search attractor [14]. Despite their applicability, these hybridizations often added unnecessary complications. More details on the computation and implementation aspects are highlighted in Section II, where related works are reviewed.

In this work, an enhanced algorithm called the First-Order Bare Bones Particle Swarm Optimizer (FODBB) is proposed. The design is approached using principles from first-order difference equations. Particles, as search agents, are driven to follow prescribed trajectories toward a dynamic target location given by the global and particle best positions. The advantage is that particles are allowed to explore the search space in early iterations and to focus on exploitation in later iterations with decreasing randomness. The implementation complexity is close to that of the canonical BBPSO and lower than some other variants. The performance of the proposed FODBB is compared to several recently reported BBPSO-based algorithms over a collection of recent benchmark functions.

The rest of the paper is organized as follows. A review of related works is presented in Section II. Section III details the development of the FODBB algorithm. Experiments conducted are described in Section IV with results evaluated. The conclusion is given in Section V.

SECTION II.

Related Works

The BBPSO and its recently reported variants are reviewed. Their procedural characteristics and the algorithm complexity are summarized. Furthermore, their merits and limitations are identified and discussed.

A. Basic BBPSO

The BBPSO [4] may be considered a simplified form of the classic PSO algorithm [1]. The main idea is to remove the so-called inertia-driven particle "flying" component; particle position updates are instead made solely according to the current global best and particle best positions.

Let $\mathbf x_{i,t}$ be the position of the $i$th $D$-dimensional particle at iteration $t$, the global best position be $\mathbf x_{t}^{g}$, and the particle best position be $\mathbf x_{i,t}^{p}$; the updated position of particle $i$ at iteration $t+1$ is
\begin{equation*} \mathbf{x}_{i,t+1} = \mathcal N(\boldsymbol \mu_{i},\boldsymbol \sigma_{i}),\tag{1}\end{equation*}
where
\begin{equation*} \boldsymbol \mu_{i} = \frac{1}{2}(\mathbf{x}_{t}^{g} + \mathbf{x}_{i,t}^{p}),\tag{2}\end{equation*}
is the mid-point of the line joining $\mathbf{x}_{t}^{g}$ and $\mathbf{x}_{i,t}^{p}$,
\begin{equation*} \boldsymbol \sigma_{i} = | \mathbf x_{t}^{g} - \mathbf x_{i,t}^{p} |,\tag{3}\end{equation*}
is the separation between the global and particle historical best positions, $\mathcal N(\cdot,\cdot)$ denotes sampling from a Gaussian distribution, and $|\cdot|$ is the element-wise absolute value of a multi-dimensional vector. Note that the current particle position $\mathbf x_{i,t}$ is not directly involved in the update, although it implicitly influences the specification of the Gaussian distribution. Furthermore, no past particle positions are used in the update, which removes the "flying" component.
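For concreteness, the whole-swarm form of Eqs. 1-3 can be written in a few lines; the following NumPy sketch is only illustrative (the vectorized form and variable names are our assumptions, not code from [4]).

```python
import numpy as np

def bbpso_update(x_pbest, x_gbest, rng=None):
    """One BBPSO position update for the whole swarm (Eqs. 1-3).

    x_pbest : (n_particles, dim) particle best positions
    x_gbest : (dim,) global best position
    """
    rng = rng or np.random.default_rng()
    mu = 0.5 * (x_gbest + x_pbest)      # Eq. 2: mid-point of gbest and pbest
    sigma = np.abs(x_gbest - x_pbest)   # Eq. 3: element-wise separation
    return rng.normal(mu, sigma)        # Eq. 1: sample the new positions
```

Note that a particle whose best coincides with the global best gets a zero sigma and is sampled exactly at mu; this is the stagnation issue revisited in Section III-A.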

The per-iteration, per-particle, per-dimension computation complexity can be identified as follows. One division is needed to calculate $\boldsymbol \mu_{i}$, and one subtraction and one absolute operation to calculate $\boldsymbol \sigma_{i}$. Moreover, there is one Gaussian function evaluation in performing the sampling. If only multiplications/divisions and distribution samplings are counted, the complexity is $\mathcal O(2)$.

B. Alternative Randomness

Since the BBPSO update uses a sample from the Gaussian distribution, a natural modification is to use alternative randomization strategies. In [15], the Bare Bones PSO with Gaussian or Cauchy Jump (BBGCJ) algorithm was presented, in which the update either follows the BBPSO or uses a mixture of Gaussian and Cauchy samples. The choice of strategy depends on the number of iterations for which the fitness value has stagnated. If the particle evolution has stagnated, the update adopting the mixture is
\begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathbf x_{i,t}^{p} (1+\mathcal N(\boldsymbol 0,\boldsymbol \sigma_{i})), & \mathcal U(0,1) < 0.5\\ \mathbf x_{i,t}^{p} (1+\mathcal C(\boldsymbol 0,\boldsymbol \sigma_{i})), & \text{otherwise}, \end{cases}\tag{4}\end{equation*}
where $\mathcal C(\boldsymbol 0,\boldsymbol \sigma_{i})$ denotes a sample from a Cauchy distribution and $\mathcal U(0,1)$ is a sample from the uniform distribution.
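A minimal sketch of the jump branch in Eq. 4, assuming NumPy and treating sigma as the per-dimension vector of Eq. 3; names and structure are illustrative, not the authors' code.

```python
import numpy as np

def bbgcj_jump(x_pbest, sigma, rng=None):
    """Stagnation jump of Eq. 4: Gaussian or Cauchy, each with probability 0.5."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < 0.5:
        noise = rng.normal(0.0, sigma)                      # Gaussian jump
    else:
        noise = rng.standard_cauchy(x_pbest.shape) * sigma  # Cauchy jump
    return x_pbest * (1.0 + noise)
```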

The complexity of the update from the stagnated situation is $\mathcal O(3) $ with one multiplication and two sampling operations. If the occurrences of stagnation and no stagnation are equally likely, then the overall complexity becomes $\mathcal O((2+3)/2) = \mathcal O(2.5)$ , which is slightly higher than BBPSO.

In the Bare Bones PSO with Adaptive Chaotic Jump (ACJMP) reported in [16], the update is also based on the stagnation count $n_{s}$ and the stagnation factor calculated from
\begin{equation*} s = \left({1+ \exp(2-n_{s})}\right)^{-1}.\tag{5}\end{equation*}
The alternative update follows a chaotic jump. If there is no stagnation, i.e., $s$ is less than some threshold, the update is the same as in BBPSO. Otherwise, the update procedure is
\begin{equation*} \mathbf x_{i,t+1} = \mathbf x_{i,t}^{p} (1+(2z_{t} -1)),\tag{6}\end{equation*}
where $z_{t+1}=4z_{t}(1-z_{t})$ is a chaotic (logistic map) sequence.
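The chaotic jump is equally compact; a hedged sketch of Eqs. 5-6 follows, with the logistic-map state z carried between calls (an implementation detail we assume, since [16] does not prescribe it here).

```python
import numpy as np

def stagnation_factor(n_s):
    """Eq. 5: grows toward 1 as the stagnation count n_s increases."""
    return 1.0 / (1.0 + np.exp(2.0 - n_s))

def acjmp_jump(x_pbest, z):
    """Eq. 6: chaotic jump; returns the new position and the next map state."""
    x_new = x_pbest * (1.0 + (2.0 * z - 1.0))
    z_next = 4.0 * z * (1.0 - z)     # logistic map z_{t+1} = 4 z_t (1 - z_t)
    return x_new, z_next
```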

Assume again that the occurrence of stagnation and no stagnation is equally likely. The complexity in using the chaotic update comes from four multiplications in generating the chaotic sequence. Moreover, there are two floating-point operations in calculating the stagnation factor. The overall complexity is $\mathcal O((2+6)/2)=\mathcal O(4) $ .

C. Modified Particles

From a different idea, the Disruption Bare Bones PSO (DBPSO) algorithm was given in [17]. This approach applies an iteration-dependent weight to a BBPSO-produced particle position and its disrupted version according to a specified condition. The disruption factor is obtained from
\begin{equation*} d_{s} = \begin{cases} \mathcal U(-R_{i,j}/2,R_{i,j}/2), & R_{i,g} \geq 1\\ R_{i,j} + \mathcal U(-R_{i,j}/2,R_{i,j}/2), & \text{otherwise}, \end{cases}\tag{7}\end{equation*}
where $R_{i,j}$ is the Euclidean distance between particles $i$ and $j$, and $R_{i,g}$ is the distance between particle $i$ and the global best position. The particle position update is
\begin{equation*} \mathbf x_{i,t+1} = \frac{t}{T}\mathbf x_{i,t}^{B} + \left({1-\frac{t}{T}}\right)\mathbf x_{i,t}^{B} d_{s},\tag{8}\end{equation*}
where $\mathbf x_{i,t}^{B}$ is the particle position obtained from BBPSO and $T$ is the maximum number of iterations. This update is implemented only if the condition $R_{i,j}/R_{i,g} < \gamma_{0}(1-t/T)$ is satisfied.

Concerning the complexity, the calculation of $R_{i,j}$ requires $\mathcal O(3)$ in evaluating the Euclidean distance, and the calculation of $R_{i,g}$ also requires $\mathcal O(3)$. The disruption factor needs one division, and the calculation of $d_{s}$ requires $\mathcal O(7)$. Obtaining the disruption condition requires $\mathcal O(3)$, and the particle update needs $\mathcal O(3)$. If disruption and no disruption are equally likely, the total complexity is $\mathcal O((2+3+7)/2+3)=\mathcal O(9)$.

D. Spatial Dependence

In [18], the Distribution Guided Bare Bones PSO (DGPSO) algorithm was presented. The update of particles is governed by their spatial distribution, given by
\begin{equation*} s_{i} = \frac{\max_{i}\{\mathbf x_{i,t}^{d}\}-\min_{i}\{\mathbf x_{i,t}^{d}\}}{\max_{d}\{\mathbf x_{i,t}^{d}\}-\min_{d}\{\mathbf x_{i,t}^{d}\}}.\tag{9}\end{equation*}
Furthermore, a jump probability is defined as
\begin{equation*} p_{i}= \begin{cases} 0.1, & s_{i} < 0.25\\ 0.01, & \text{otherwise}. \end{cases}\tag{10}\end{equation*}
The particle update is determined from
\begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathcal U(\min_{d}\{\mathbf x_{i,t}^{d}\},\max_{d}\{\mathbf x_{i,t}^{d}\}), & \mathcal U(0,1) < p_{i}\\ \mathcal N(\boldsymbol \mu_{i},\boldsymbol \sigma_{i}), & \text{otherwise}. \end{cases}\tag{11}\end{equation*}

Calculating the spatial distribution needs one division and four min/max operations. On average, at least three distribution samplings are required in the particle update. The overall computation complexity is $\mathcal O((2+1+4+3)/2+1)=\mathcal O(6)$.

E. Mixed Strategy

In the Mixed Bare Bones PSO (MXPSO) [19], the strategy is to mix the conventional PSO update with the BBPSO update according to a mixing threshold $\tau_{m}$. The particle update is
\begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathbf x_{i,t} + w(\mathbf x_{i,t} - \mathbf x_{i,t-1}) + c_{p}\varphi_{p} (\mathbf x_{i,t}^{p} - \mathbf x_{i,t}) + c_{g}\varphi_{g} (\mathbf x_{t}^{g} - \mathbf x_{i,t}), & \mathcal U(0,1) < \tau_{m}\\ \mathcal N(\boldsymbol \mu_{i},\boldsymbol \sigma_{i}), & \text{otherwise}. \end{cases}\tag{12}\end{equation*}
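A sketch of Eq. 12, assuming the usual PSO constants; the values of w, c_p, c_g and tau_m below are illustrative placeholders, not the settings used in [19].

```python
import numpy as np

def mxpso_update(x, x_prev, x_pbest, x_gbest, mu, sigma,
                 w=0.7, c_p=1.5, c_g=1.5, tau_m=0.5, rng=None):
    """Eq. 12: conventional PSO step with probability tau_m, else BBPSO."""
    rng = rng or np.random.default_rng()
    if rng.uniform() < tau_m:
        phi_p, phi_g = rng.uniform(size=2)   # random weights in [0, 1)
        return (x + w * (x - x_prev)
                  + c_p * phi_p * (x_pbest - x)
                  + c_g * phi_g * (x_gbest - x))
    return rng.normal(mu, sigma)             # BBPSO branch, Eqs. 1-3
```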

The complexity of updating according to the conventional PSO is $\mathcal O(7)$, including five multiplications and two random number generations, and the determination of the threshold is $\mathcal O(1)$. Assuming that the threshold is 0.5, the algorithm complexity is $\mathcal O((2+7)/2+1)=\mathcal O(5.5)$.

F. Sub-Swarms

Another strategy adopted in modifying BBPSO is found in the Hierarchical Bare Bones PSO (HIPSO) algorithm [20]. In its implementation, particles are divided into groups of three on the basis of their fitness function values. These particles are assigned as the leader $\mathbf x^{L}$, team $\mathbf x^{M}$, and worst $\mathbf x^{W}$ members in the randomly formed group. Their updates follow that of the basic BBPSO with different sets of parameters. That is
\begin{align*} \mathbf x_{i,t}^{L}&=\mathcal N(\boldsymbol \mu^{L},\boldsymbol \sigma^{L}), \\ \boldsymbol \mu^{L}&=(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{L,p})/2,\quad \boldsymbol \sigma^{L} = | \mathbf x_{t}^{g} -\mathbf x_{i,t}^{L,p} |,\tag{13}\\ \mathbf x_{i,t}^{M}&=\mathcal N(\boldsymbol \mu^{M},\boldsymbol \sigma^{M}), \\ \boldsymbol \mu^{M}&=(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{M})/2,\quad \boldsymbol \sigma^{M} = | \mathbf x_{t}^{g} -\mathbf x_{i,t}^{M} |,\tag{14}\\ \mathbf x_{i,t}^{W}&=\mathcal N(\boldsymbol \mu^{W},\boldsymbol \sigma^{W}), \\ \boldsymbol \mu^{W}&=(\mathbf x_{t}^{W} + \mathbf x_{i,t}^{M})/2,\quad \boldsymbol \sigma^{W} = | \mathbf x_{t}^{W} -\mathbf x_{i,t}^{M} |.\tag{15}\end{align*}

In the HIPSO, the number of particles used must be a multiple of three. The computation complexity is the same as BBPSO, i.e., $\mathcal O(2)$. Note that the overhead of forming the groups also adds to the complexity.

In the Pair-Wise Bare Bones PSO (PWPSO) [21], particles are randomly divided into groups of two. These particles are assigned as leader $\mathbf x_{t}^{L}$ and follower $\mathbf x_{t}^{F}$ depending on their objective function values. Their updates are determined from
\begin{align*} \mathbf x_{i,t}^{L}&=\mathcal N(\boldsymbol \mu^{L},\boldsymbol \sigma^{L}), \\ \boldsymbol \mu^{L}&=(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p})/2,\quad \boldsymbol \sigma^{L} = | \mathbf x_{t}^{g} - \mathbf x_{i,t}^{p} |,\tag{16}\\ \mathbf x_{i,t}^{F}&=\mathcal N(\boldsymbol \mu^{F},\boldsymbol \sigma^{F}), \\ \boldsymbol \mu^{F}&=(\mathbf x_{i,t}^{L} + \mathbf x_{i,t}^{F})/2,\quad \boldsymbol \sigma^{F} = | \mathbf x_{i,t}^{L} - \mathbf x_{i,t}^{F} |.\tag{17}\end{align*}

The PWPSO adopts the same sub-swarm strategy as HIPSO but forms groups of only two particles. Its computation complexity is also the same as the BBPSO, i.e., $\mathcal O(2)$ plus group-forming overheads.

A similar algorithm, the Dynamic Allocation Bare Bones PSO (DAPSO), also makes use of the sub-swarm strategy [22]. The difference is that particles are divided into two groups with respect to their fitness function values. The better half of the particles is denoted the main group $\mathbf x^{M}$; the remaining particles are denoted the ancillary group $\mathbf x^{A}$. Their updates are given by
\begin{align*} \mathbf x_{i,t+1}^{M}&=\mathcal N(\boldsymbol \mu^{M},\boldsymbol \sigma^{M}), \\ \boldsymbol \mu^{M}&=(\mathbf x_{i,t}^{p,M} + \mathbf x_{t}^{g})/2,\quad \boldsymbol \sigma^{M} = | \mathbf x_{i,t}^{p,M} - \mathbf x_{t}^{g} |,\tag{18}\\ \mathbf x_{i,t+1}^{A}&=\mathcal N(\boldsymbol \mu^{A},\boldsymbol \sigma^{A}), \\ \boldsymbol \mu^{A}&=(\mathbf x_{i,t}^{p,A} + \mathbf x_{i-1,t}^{p,A})/2,\quad \boldsymbol \sigma^{A} = | \mathbf x_{i,t}^{p,A} - \mathbf x_{i-1,t}^{p,A} |.\tag{19}\end{align*}

If the sorting and selection overhead is ignored, the computation complexity is the same as BBPSO, i.e., $\mathcal O(2) $ plus grouping overheads.

In [23], the Dynamic Reconstruction Bare Bones PSO (DRPSO) algorithm was presented. Two particles are randomly selected to form a group. The particle with the better fitness value is regarded as the elite $\mathbf x^{E}$, while the other is assigned as the retinue $\mathbf x^{R}$. The elite particle is updated according to the BBPSO procedure. The retinue is updated by
\begin{align*} \mathbf x_{i,t}^{R}&=\mathcal N(\boldsymbol \mu^{R},\boldsymbol \sigma^{R}), \\ \boldsymbol \mu^{R}&=(\mathbf x_{i,t}^{p,R} + \mathbf x_{i,t}^{p,E})/2,\quad \boldsymbol \sigma^{R} = | \mathbf x_{i,t}^{p,R} - \mathbf x_{i,t}^{p,E} |.\tag{20}\end{align*}
Furthermore, the elites are stored. If the number of stored elites exceeds the total number of particles, the original particles are replaced by the elites. Ignoring the overheads, the computation complexity is the same as the BBPSO, i.e., $\mathcal O(2)$.

In the Modified Bare Bones with Differential Evolution (MBBDE) algorithm [24], the particle best position is modified using three randomly selected particles. That is
\begin{equation*} \mathbf x_{i,t}^{p} = \mathbf x_{t}^{(1)} + (\mathbf x_{t}^{(2)} - \mathbf x_{t}^{(3)})/2.\tag{21}\end{equation*}
The main update procedure is given as follows
\begin{equation*} \mathbf x_{i,t+1} = \begin{cases} \mathcal N(\boldsymbol \mu_{i},\boldsymbol \sigma_{i}), \quad \boldsymbol \mu_{i}=(\mathbf x_{i,t}^{p} + \bar{\mathbf x}_{t})/2, \quad \boldsymbol \sigma_{i}=|\mathbf x_{i,t}^{p} - \bar{\mathbf x}_{t}|, & \mathcal U(0,1) < 0.5\\ \mathbf x_{i,t}^{p}, & \text{otherwise}, \end{cases}\tag{22}\end{equation*}
where $\bar{\mathbf x}_{t} = \frac{1}{N}\sum_{i=1}^{N} \mathbf x_{i,t}$ is the average of all $N$ particle positions.
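The MBBDE step combines Eqs. 21 and 22; the sketch below operates on the full position matrix X and is our reading of [24], not its original code.

```python
import numpy as np

def mbbde_update(X, rng=None):
    """One MBBDE update (Eqs. 21-22) built from three random particles of X."""
    rng = rng or np.random.default_rng()
    r1, r2, r3 = rng.choice(len(X), size=3, replace=False)
    x_p = X[r1] + 0.5 * (X[r2] - X[r3])   # Eq. 21: DE-style "particle best"
    x_bar = X.mean(axis=0)                # average of all particle positions
    if rng.uniform() < 0.5:               # Eq. 22: Gaussian branch
        mu = 0.5 * (x_p + x_bar)
        sigma = np.abs(x_p - x_bar)
        return rng.normal(mu, sigma)
    return x_p                            # otherwise keep the modified best
```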

There is one division involved in calculating the new particle best position, one division to obtain the average position, one division to calculate the distribution mean, and one distribution sampling. The overall computation complexity is $\mathcal O((2+2)/2+1)=\mathcal O(3) $ .

G. Discussion

From the review of the above class of BBPSO variants, it can be observed that algorithms adopting the sub-swarm strategy have low computation complexities. On the other hand, these algorithms need extra overheads in dividing particles into different groups.

It is also noticed that the DBPSO algorithm, using a particle modification approach, has a high computational complexity. Algorithms using spatial dependence and mixed strategies, DGPSO and MXPSO, also have high complexities. Likewise, the ACJMP, which uses alternative sampling distributions, has a higher complexity. If performance is not directly improved by these additions, the algorithms are unnecessarily complicated.

Furthermore, in many BBPSO variants, algorithm control parameters were determined in an ad-hoc manner, e.g., the jump probability in the DGPSO algorithm. Other design choices also cannot be rationalized; for instance, there is a discrepancy in determining the stagnation condition between BBGCJ and ACJMP. It can be seen that these algorithms were designed largely on heuristics. Hence, a systematic and simple design is needed. Here, a systematic design means that the development of the algorithm is based on theoretical support, such as first-order difference equations, instead of relying on natural metaphors or intuition. Furthermore, a simple design means a straightforward implementation.

SECTION III.

First-Order Bare Bones Particle Swarm Optimizer

In this section, the development of the FODBB algorithm is presented. We begin by discussing the limitations of the conventional BBPSO. Then the FODBB is formulated, and its properties and computational complexity are analyzed.

A. Limitations of the Conventional BBPSO

For the basic BBPSO process described in Section II-A, consider a particle $\mathbf x_{i,t}$ at iteration $t$. Based on the fitness value obtained at the previous iteration $t-1$, there are three possible cases characterizing this particle. These cases are

  1. Particle $\mathbf x_{i,t}$ is the global best particle. Then, it must also be at the $i$ th particle best position.

  2. Particle $\mathbf x_{i,t}$ is located at the particle best position, but is not the global best particle.

  3. Particle $\mathbf x_{i,t}$ is neither the global best nor at the particle best position.

Depending on these cases, the update of the particle leads to different outcomes. In case 1, $\mathbf x_{t}^{g} = \mathbf x_{i,t}^{p}$ gives $\boldsymbol \mu_{i,t} = \mathbf x_{t}^{g}$ and $\boldsymbol \sigma_{i,t}=\boldsymbol 0$. Consequently, $\mathcal N(\boldsymbol \mu_{i,t},\boldsymbol \sigma_{i,t}) = \mathcal N(\mathbf x_{t}^{g},\boldsymbol 0) = \mathbf x_{t}^{g}$. That is, $\mathbf x_{i,t+1}= \mathbf x_{i,t}$, and there is no change in the particle position. Once this situation occurs, all subsequent positions of this particle remain unchanged, and its ability to search for the optimal solution becomes ineffective.

In case 2, we have $\mathbf x_{i,t}=\mathbf x_{i,t}^{p}$ . Then, $\boldsymbol \mu _{i,t} \neq \mathbf x_{t}^{g} \neq \mathbf x_{i,t}$ , $\boldsymbol \sigma _{i,t} \neq \boldsymbol 0$ and the particle will be moved to a new position in the search space. The new position is centered at $(\mathbf x_{t}^{g} + \mathbf x_{i,t})/2$ with scattering governed by the width of the Gaussian distribution that depends on the non-zero $\boldsymbol \sigma _{i,t}$ . It is possible that the update may not result in a better fitness value in the next iteration.

The most general situation is case 3. The center of the next position is $\boldsymbol \mu_{i,t}=(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p})/2$, and the Gaussian distribution standard deviation is non-zero, i.e., $\boldsymbol \sigma_{i,t}\neq \boldsymbol 0$. It can be seen that the current particle position $\mathbf x_{i,t}$ does not take part in calculating the Gaussian parameters $\boldsymbol \mu_{i,t}$ and $\boldsymbol \sigma_{i,t}$. Hence, the updated particle $\mathbf x_{i,t+1}$ is totally independent of $\mathbf x_{i,t}$. If the updated particle cannot improve the fitness value, i.e., no new $\mathbf x_{t}^{g}$ or $\mathbf x_{i,t}^{p}$ is produced, the center and coverage of the search remain unaltered.

It can be noted that BBPSO does not make use of the current particle position in updates but only depends on the global best and particle best positions. This makes the algorithm very restrictive in exploring the search space.
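The case-1 freeze is easy to reproduce numerically; the following check (illustrative, not from the paper) shows the Gaussian sample collapsing onto the mean once a particle holds both best positions.

```python
import numpy as np

rng = np.random.default_rng(0)
x_g = x_p = np.array([1.0, 2.0])   # particle is the global best (case 1)
mu = 0.5 * (x_g + x_p)             # Eq. 2
sigma = np.abs(x_g - x_p)          # Eq. 3: all zeros
print(rng.normal(mu, sigma))       # always [1. 2.]: the particle never moves
```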

B. The Proposed FODBB

In order to mitigate the limitations in conventional BBPSO, a new algorithm called First-Order Bare Bones Particle Swarm Optimizer (FODBB) is proposed. The algorithm design fulfills the following purposes:

  1. The particle update is not restricted to the global best and particle best positions.

  2. The search randomness exists irrespective of particle positions.

The strategy adopted in designing FODBB is to make use of the current particle $\mathbf x_{i,t}$ in the update, and to let the search region depend on the permitted search bound and the iteration count.

Before the first iteration, particles are randomly generated, by sampling from a uniform distribution, to cover the search region. In each iteration, fitness values are evaluated, and the global best and particle best positions are identified. The update is carried out as
\begin{equation*} \mathbf x_{i,t+1} = \mathcal N(\boldsymbol \mu_{i,t},\boldsymbol \sigma_{i,t}).\tag{23}\end{equation*}
This procedure is identical to the BBPSO, but $\boldsymbol \mu_{i,t}$ and $\boldsymbol \sigma_{i,t}$ are determined differently. For the center of the next search, we use
\begin{equation*} \boldsymbol \mu_{i,t} = \frac{1}{3} (\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p} + \mathbf x_{i,t}),\tag{24}\end{equation*}
which is the geometric center of three components: the global best, particle best, and current particle positions. The search region adopts a linearly decreasing, iteration-dependent value with respect to the permitted search bound. That is
\begin{equation*} \boldsymbol \sigma_{t} = \left({1-\frac{t}{T}}\right)R,\tag{25}\end{equation*}
where $T$ is the maximum number of iterations and $R$ is the half-magnitude of the permitted search bound $\pm\mathbf R$. Note that $\boldsymbol \sigma_{i,t} = \boldsymbol \sigma_{t},\,\,\forall i$, is a constant for all particles. After the update, particles that fall outside the permitted search region are removed and replaced by newly initialized particles. A pseudo code is given in Algorithm 1.

Algorithm 1 First-Order Bare Bones Particle Swarm Optimizer (FODBB)

Input: number of particles $i_{max}$, number of iterations $T$, fitness value function $f(\mathbf x)$, permitted search bound $\mathbf R$

1: Initialize particles $\mathbf x_{i,1}\in [-\mathbf R,+\mathbf R]$
2: for each iteration do
3:  Calculate the Gaussian distribution parameter $\boldsymbol \sigma_{t}$, Eq. 25
4:  for each particle do
5:   Evaluate fitness value $f(\mathbf x_{i,t})$
6:   Identify global best $\mathbf x_{t}^{g}$
7:   Identify particle best position $\mathbf x_{i,t}^{p}$
8:   Calculate the Gaussian distribution center $\boldsymbol \mu_{i,t}$, Eq. 24
9:   Update particle position $\mathbf x_{i,t+1}=\mathcal N(\boldsymbol \mu_{i,t},\boldsymbol \sigma_{t})$, Eq. 23
10:  end for
11:  if $\mathbf x_{i,t} \not\in [-\mathbf R,+\mathbf R]$ then
12:   Re-initialize particle $\mathbf x_{i,t}$
13:  end if
14: end for
15: Output: $\mathbf x^{*}=\mathbf x_{T}^{g}$
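For readers who prefer running code, a minimal end-to-end sketch of Algorithm 1 follows. It assumes a minimization objective f over the box [-R, R]^D; since the paper's experiments were coded in Matlab, this NumPy translation, including all names and defaults, is our illustrative reconstruction rather than the authors' implementation.

```python
import numpy as np

def fodbb(f, dim, n_particles=30, T=500, R=100.0, seed=0):
    """First-Order Bare Bones PSO (Algorithm 1, Eqs. 23-25), minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-R, R, (n_particles, dim))   # step 1: initialization
    pbest = X.copy()
    pbest_f = np.array([f(x) for x in X])
    g = pbest[np.argmin(pbest_f)].copy()         # global best position
    g_f = pbest_f.min()
    for t in range(1, T + 1):
        sigma_t = (1.0 - t / T) * R              # Eq. 25: shared, per iteration
        mu = (g + pbest + X) / 3.0               # Eq. 24: geometric center
        X = rng.normal(mu, sigma_t)              # Eq. 23: Gaussian sampling
        out = np.any(np.abs(X) > R, axis=1)      # steps 11-13: out of bounds
        X[out] = rng.uniform(-R, R, (out.sum(), dim))
        fx = np.array([f(x) for x in X])
        better = fx < pbest_f                    # update particle bests
        pbest[better], pbest_f[better] = X[better], fx[better]
        if pbest_f.min() < g_f:                  # update the global best
            g_f = pbest_f.min()
            g = pbest[np.argmin(pbest_f)].copy()
    return g, g_f

# Example: minimize the sphere function in 10 dimensions.
x_star, f_star = fodbb(lambda x: float(np.sum(x**2)), dim=10)
```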

C. Properties of FODBB

Although the FODBB update procedure is similar to BBPSO, its structure is purposefully designed to avoid problems encountered in BBPSO. Since the Gaussian distribution is symmetric and centered at $\boldsymbol \mu_{i,t}$, we can expand the update equation, Eq. 23, as
\begin{align*} \mathbf x_{i,t+1}&=\boldsymbol \mu_{i,t} + \mathcal N(\boldsymbol 0,\boldsymbol \sigma_{t}), \\ &=\frac{1}{3}\mathbf x_{i,t} + \frac{1}{3}(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p}) + \mathcal N(\boldsymbol 0,\boldsymbol \sigma_{t}).\tag{26}\end{align*}
It can be seen that Eq. 26 contains both particle positions $\mathbf x_{i,t}$ and $\mathbf x_{i,t+1}$. This fact allows us to treat the update equation as a linear difference equation indexed by the iteration count [25]. Furthermore, Eq. 26 is a first-order difference equation and its computation is simpler than the second-order counterparts [26], [27].

The expanded update equation can be treated as containing a homogeneous part and a non-homogeneous part. The former contains only the term $\mathbf x_{i,t}$ with coefficient $w=1/3$ , while the latter contains a deterministic term $(\mathbf x_{t}^{g} +\mathbf x_{i,t}^{p})/3$ and a stochastic term $\mathcal N(\boldsymbol 0,\boldsymbol \sigma _{t})$ . The last two components can also be regarded as the driving function to the system described by the homogeneous component. Based on the linearity of the update equation, the overall solution for the particle position is the sum of solutions to the three components.

Consider the homogeneous equation
\begin{equation*} \mathbf x_{i,t+1}^{h} = w\mathbf x_{i,t}^{h},\tag{27}\end{equation*}
where the superscript $h$ denotes the homogeneous component. By setting $t=1,2,\cdots,T$, we have
\begin{align*} \mathbf x_{i,2}^{h}&=w\mathbf x_{i,1}^{h}, \\ \mathbf x_{i,3}^{h}&=w\mathbf x_{i,2}^{h}=w^{2}\mathbf x_{i,1}^{h}, \\ &\cdots \\ \mathbf x_{i,T}^{h}&=w\mathbf x_{i,T-1}^{h} = \cdots = w^{T-1}\mathbf x_{i,1}^{h}.\tag{28}\end{align*}
With $w=1/3 < 1$, we have $\mathbf x_{i,T}^{h}\rightarrow 0$ as $T\rightarrow \infty$ in the homogeneous condition without any driving function. The effect is that previous particle positions influence the update in every iteration, so a new particle will not lie on the line joining $\mathbf x_{t}^{g}$ and $\mathbf x_{i,t}^{p}$.

For the deterministic term, the difference equation can be separately written as
\begin{align*} \mathbf x_{i,t}^{d}&=w(\mathbf x_{t}^{g} + \mathbf x_{i,t}^{p}), \\ \mathbf x_{i,t+1}^{d}&=w(\mathbf x_{t+1}^{g} + \mathbf x_{i,t+1}^{p}).\tag{29}\end{align*}
A special case arises if $\mathbf x_{t}^{g}=\mathbf x_{t+1}^{g}$ and $\mathbf x_{i,t}^{p}=\mathbf x_{i,t+1}^{p}$, i.e., the global best and particle best positions do not change between two consecutive iterations. Then there is no change in the particle position, i.e., $\mathbf x_{i,t}^{d}=\mathbf x_{i,t+1}^{d}$. On the other hand, if there are changes in $\mathbf x_{t}^{g}$, $\mathbf x_{i,t}^{p}$, or both, then $\mathbf x_{i,t+1}^{d}$ cannot be directly related to $\mathbf x_{i,t}^{d}$; it depends entirely on $w(\mathbf x_{t}^{g}+\mathbf x_{i,t}^{p})$.

Another case to be considered is when $\mathbf x_{i,t}^{p}=\mathbf x_{i,t}^{d}$, i.e., at the iteration when a new particle best is identified. Then we have
\begin{equation*} \mathbf x_{i,t+1}^{d} = w\mathbf x_{i,t}^{d} + w\mathbf x_{t}^{g}.\tag{30}\end{equation*}
This expression can be viewed as another first-order difference equation with $w\mathbf x_{t}^{g}$ as another driving function. It has been shown that the effect of $w\mathbf x_{i,t}^{d}$ diminishes at large iterations, and $\mathbf x_{i,t+1}^{d} \rightarrow w\mathbf x_{t}^{g}$. Moreover, if the updated particle position coincides with both the global best and the particle best, i.e., $\mathbf x_{i,t}^{d}=\mathbf x_{t}^{g}=\mathbf x_{i,t}^{p}$, then
\begin{equation*} \mathbf x_{i,t+1}^{d} = w(\mathbf x_{t}^{g}+\mathbf x_{i,t}^{p})=2w\mathbf x_{i,t}^{d}.\tag{31}\end{equation*}
Recall that $w=1/3$, so $2w < 1$ and $\mathbf x_{i,t}^{d} \rightarrow 0$ if the above condition continues to hold over a large number of iterations.

The stochastic component
\begin{equation*} \mathbf x_{i,t+1}^{s} = \mathcal N(\boldsymbol 0,\boldsymbol \sigma_{t}),\tag{32}\end{equation*}
is independent of the particle position. Furthermore, the stochastic component is also independent of previous Gaussian distribution parameters. Thus, the cumulative effect on search region coverage is the summation of these independent distributions. Since the variance is decreasing, this allows for better exploitation in later iterations.

Because of the linearity of the particle update process, which is expressed as a first-order difference equation, the total solution is the sum of the solutions to the homogeneous, deterministic and stochastic components. That is
\begin{equation*} \mathbf x_{i,t+1} = \mathbf x_{i,t}^{h} + \mathbf x_{i,t}^{d} + \mathbf x_{i,t}^{s}.\tag{33}\end{equation*}
In particular, the homogeneous component converges to zero after a transition period. This ensures that new positions will be tested in early iterations. The deterministic component determines the center of future searches. The stochastic component ensures that stagnation in particle position does not occur.

The behavior of the particle evolution and its search behavior can be summarized as follows.

  1. The history of the particle position influences its future position. This effect diminishes with an increase in iterations.

  2. During the transient period, i.e., before the historical influence diminishes, more positions are searched and tested for optimum. Hence, this leads to wider coverage of the search space and increases the chance to obtain better search results.

  3. Because the sampling Gaussian distribution has a non-zero standard deviation that is independent of the particle position, particle stagnation can be avoided.

D. Computation Complexity

Recall the FODBB process shown in Algorithm 1. For each particle in an iteration, only one division is needed in calculating the Gaussian distribution center, see also Eq. 24. Furthermore, one distribution sampling is needed to generate the new particle position. Since the standard deviation is constant for all particles, we can pre-compute it in each iteration by expressing it as
\begin{equation*} \boldsymbol \sigma_{t} = \left({1-\frac{t}{T}}\right) R = \left({R - t\frac{R}{T}}\right),\tag{34}\end{equation*}
where the term $R/T$ is a constant that can also be pre-computed. Hence, it can be treated as an overhead of $\mathcal O(1)$ for one multiplication with $t$ per iteration. Therefore, the overall per-iteration, per-particle, per-dimension computation complexity of FODBB is $\mathcal O(2)$, the same as the conventional BBPSO and lower than some of the variants discussed in the related works.

SECTION IV.

Experiment

Experiments for function optimization were conducted to evaluate the performance of the proposed FODBB algorithm. Benchmark functions from the CEC 2017 dataset were employed [28]. These are collections of unimodal, simple multi-modal, hybrid, and composite functions, see Table 1. The functions are formulated as minimization problems.

TABLE 1 Summary of Benchmark Functions

The permitted search range is $\mathbf R=\pm 100$ in all dimensions, and the optimal values are equal to the function number multiplied by one hundred, i.e., $f_{n}(\mathbf x^{*})=100 \times n$. Different numbers of particles were used for functions of different dimensions, and the number of iterations also differed, as shown in Table 2. Both settings are relatively small compared to those reported in the works reviewed in Section II, to demonstrate the FODBB performance under limited computation resources.

TABLE 2 Summary of Test Setup

The FODBB, BBPSO, and its variants were coded in Matlab 2015b, running on a PC with a 1.70 GHz CPU, 4 GB RAM, and 64-bit Windows 7 OS. Fifty independently initialized test runs were performed to average out random effects and give a better indication of algorithm performance. Records of the final fitness values, i.e., $f(\mathbf x_{T}^{g})$, together with their mean values, median values, and standard deviations, are used as measures of solution accuracy and robustness. For the minimization problems considered, a smaller mean together with a smaller median denotes that the algorithm performs more satisfactorily. Furthermore, the standard deviation of the final fitness values for each function and each method was determined to reflect solution robustness. The records of $f(\mathbf x_{T}^{g})$ over the fifty runs for FODBB and the compared algorithms were also evaluated using the nonparametric rank-sum test, which is applicable when the data distributions may be non-Gaussian [29]. The null hypothesis is that the results were drawn from equivalent distributions, while the alternative hypothesis is that the median values of the FODBB results are less than the others'. A significance level $\alpha =0.05$ was adopted.
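As an aside, the one-sided rank-sum comparison described above can be reproduced with standard statistical software; the SciPy-based sketch below is an assumption about tooling, since the paper does not state how the test was computed.

```python
from scipy.stats import ranksums

def fodbb_is_better(fodbb_finals, other_finals, alpha=0.05):
    """One-sided Wilcoxon rank-sum test on the 50 final fitness values.

    H1: the median of FODBB's final fitness is less than the competitor's.
    Returns (reject H0 at level alpha?, p-value).
    """
    stat, p = ranksums(fodbb_finals, other_finals, alternative='less')
    return p < alpha, p
```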

A. Results

Plots of the fitness values with respect to the number of iterations are depicted in Figs. 1, 2, 3, 4 and 5 for test functions of 2, 10, 30, 50, and 100 dimensions, respectively. For a better illustration of the results, the plots are drawn on a semi-log10 scale. Results are discussed according to their problem dimensions, on the basis of their mean, median and standard deviation values and the hypothesis test outcomes.

FIGURE 1. Fitness values of 2-dimensional functions.

FIGURE 2. Fitness values of 10-dimensional functions.

FIGURE 3. Fitness values of 30-dimensional functions.

FIGURE 4. Fitness values of 50-dimensional functions.

FIGURE 5. Fitness values of 100-dimensional functions.

1) Two-Dimensional Problems

From the results for the 2-dimensional functions, shown in Fig. 1, it is observed that the FODBB algorithm performs well, in particular in functions $f_{22}-f_{28}$. These are difficult composite functions. Furthermore, a decreasing trend in fitness values, denoting improvement of the optimal solution, can be observed in the FODBB fitness plots of these functions. This is attributed to the fact that the sampling Gaussian distribution standard deviation is non-zero, so that particle stagnation is prevented. However, it is also observed that the improvement in the fitness value is not noticeable in the early iterations. This is the trade-off of spending the search effort on exploring the search space during that period. On the other hand, the accumulation of promising candidates enables the search to obtain better solutions at later iterations.

Final fitness values from the fifty independent runs are summarized in Table 3 as mean values; the median values are given in Table 4. It can be seen that FODBB is the best performing algorithm, with regard to the mean value, in all test functions. For the median value, the FODBB algorithm is the best in $f_{4}$-$f_{5}$, $f_{8}$, $f_{10}$-$f_{11}$, $f_{23}$-$f_{24}$, and $f_{28}$, that is, eight of the sixteen test functions. Another high-performing algorithm is the PWPSO, which has the lowest median in five test functions: $f_{3}$, $f_{9}$, $f_{23}$, and $f_{26}$-$f_{27}$.

TABLE 3 Result Statistics for 2-Dimensional Test Functions – Mean Fitness Values

TABLE 4 Result Statistics for 2-Dimensional Test Functions – Median of Fitness Values

The solution standard deviation is a measure of the algorithm's robustness against random effects in particle initialization and distribution sampling. A small value together with a low mean fitness signifies that the algorithm is robust. The FODBB is the best algorithm for all functions, see Table 5. Hence, FODBB can be regarded as an effective and robust algorithm for the majority of 2-dimensional minimization problems.

TABLE 5 Result Statistics for 2-Dimensional Test Functions – Standard Deviation of Fitness Values

In the hypothesis test, namely the non-parametric rank-sum test, the FODBB performs satisfactorily, see Table 6. The results are significant, in the sense that the median is less than the others', in functions $f_{4}$-$f_{5}$, $f_{8}$, $f_{10}$, $f_{25}$ and $f_{28}$, with all null hypotheses rejected at $\alpha =0.05$. In functions $f_{22}$-$f_{24}$, there are ten rejections. Most of these functions are difficult composite functions.

TABLE 6 Result Statistics for 2-Dimensional Test Functions – Rank-Sum Test: Hypothesis and p-Value in Brackets

2) Ten-Dimensional Problems

For the 10-dimensional functions, the plots of fitness values in Fig. 2 indicate that FODBB is able to produce satisfactory solutions in $f_{6}$, $f_{9}$, $f_{16}$, $f_{21}$, $f_{24}$, $f_{26}$, $f_{28}$ and $f_{30}$. These are mostly difficult hybrid and composite functions. In the plots of these functions, the FODBB shows a decreasing trend, as observed in the 2-dimensional function results. This verifies that FODBB, in 10-dimensional problems, is also able to explore the search space, accumulate high quality solutions, and produce promising solutions at later iterations with good exploitation of the search space.

A comparison of the resultant mean fitness values for the 10-dimensional functions is listed in Table 7, the median values in Table 8, the standard deviation values in Table 9, and the rank-sum tests in Table 10. The proposed FODBB is the best algorithm, with regard to the mean fitness, in $f_{4}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{14}$, $f_{16}$, $f_{19}$-$f_{24}$, and $f_{26}$-$f_{30}$. These include most of the difficult hybrid and composite functions.

TABLE 7 Result Statistics for 10-Dimensional Test Functions – Mean Fitness Values

TABLE 8 Result Statistics for 10-Dimensional Test Functions – Median of Fitness Values

TABLE 9 Result Statistics for 10-Dimensional Test Functions – Standard Deviation of Fitness Values

TABLE 10 Result Statistics for 10-Dimensional Test Functions – Rank-Sum Test: Hypothesis and p-Value in Brackets

The median values for the 10-dimensional problems, see Table 8, show that the FODBB is the best performer in functions $f_{5}$-$f_{6}$, $f_{9}$, $f_{16}$, $f_{19}$, $f_{21}$, $f_{23}$-$f_{24}$, and $f_{26}$-$f_{29}$. With the low mean and median values, the FODBB can be regarded as a promising method for the difficult benchmark optimization problems.

For the standard deviation measure listed in Table 9, together with low fitness values, FODBB is the best algorithm in $f_{4}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{14}$, $f_{16}$, $f_{19}$-$f_{24}$, and $f_{26}$-$f_{30}$. Similar to the 2-dimensional problems, the FODBB is able to produce high quality solutions effectively and robustly.

In the results of the hypothesis test shown in Table 10, the FODBB has all null hypotheses rejected in functions $f_{6}$, $f_{9}$, $f_{21}$, $f_{23}$-$f_{24}$, and $f_{26}$-$f_{29}$. There are also ten rejections in $f_{5}$, $f_{14}$, and $f_{16}$. Hence, the FODBB is considered a significantly better performing algorithm among the compared methods.

3) Thirty-Dimensional Problems

Results of the mean fitness values are plotted in Fig. 3. The FODBB has noticeably small errors in functions $f_{9}$, $f_{20}$, $f_{22}$-$f_{24}$, and $f_{26}$. These are hybrid and composite functions. As found in the 2- and 10-dimensional problems, FODBB also shows a decreasing trend in the fitness plots, where search space exploration and exploitation were allocated appropriately to early and later iterations.

The mean fitness values are listed in Table 11. We see that FODBB is the best algorithm in functions $f_{5}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{14}$, $f_{17}$, $f_{20}$-$f_{24}$, $f_{26}$-$f_{27}$, and $f_{29}$. The next best performing algorithm is the PWPSO, which has the lowest fitness in functions $f_{3}$-$f_{4}$, $f_{7}$, $f_{11}$-$f_{12}$, $f_{16}$, $f_{25}$, $f_{28}$, and $f_{30}$.

TABLE 11 Result Statistics for 30-Dimensional Test Functions – Mean Fitness Values

For the median values, the FODBB has the lowest values in functions $f_{5}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{14}$, $f_{17}$, $f_{20}$, $f_{23}$-$f_{24}$, $f_{26}$-$f_{27}$, and $f_{29}$, see Table 12. The other well performing algorithm is again the PWPSO, in functions $f_{3}$-$f_{4}$, $f_{7}$, $f_{10}$-$f_{12}$, $f_{16}$, $f_{19}$, $f_{21}$, $f_{25}$, $f_{28}$ and $f_{30}$. The number of functions with the lowest values is the same for both algorithms.

TABLE 12 Result Statistics for 30-Dimensional Test Functions – Median of Fitness Values

The standard deviations are listed in Table 13. The FODBB, together with low mean fitness values, performs well in functions $f_{5}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{14}$, $f_{17}$, $f_{20}$-$f_{24}$, $f_{26}$-$f_{27}$, and $f_{29}$. These include all types of functions tested. Other promising results were obtained from PWPSO in functions $f_{3}$-$f_{4}$, $f_{7}$, $f_{11}$-$f_{12}$, $f_{16}$, $f_{25}$, $f_{28}$, and $f_{30}$. The FODBB algorithm can again be regarded as effective and robust in 30-dimensional problems.

TABLE 13 Result Statistics for 30-Dimensional Test Functions – Standard Deviation of Fitness Values

Results of the hypothesis test are shown in Table 14. For FODBB, the null hypotheses are rejected in functions $f_{5}$-$f_{6}$, $f_{8}$-$f_{9}$, $f_{17}$, $f_{20}$, $f_{23}$-$f_{24}$, and $f_{26}$-$f_{27}$. It has ten rejections in $f_{21}$ and $f_{29}$. Most results indicate that the median values from FODBB are significantly smaller than those of the others.

TABLE 14 Result Statistics for 30-Dimensional Test Functions – Rank-Sum Test: Hypothesis and p-Value in Brackets

4) Fifty-Dimensional Problems

Plots of the resultant mean fitness values are shown in Fig. 4. The FODBB has promising results in functions $f_{7}$, $f_{11}$, $f_{24}$, and $f_{26}$-$f_{28}$. On the other hand, trends of out-performance are less obvious from the plots.

For the mean fitness, results are listed in Table 15. The FODBB has the best values in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{11}$, $f_{16}$-$f_{18}$, $f_{21}$, and $f_{23}$-$f_{28}$. The next best performing algorithm is the PWPSO, in functions $f_{1}$-$f_{3}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{12}$-$f_{14}$, $f_{20}$, and $f_{29}$-$f_{30}$. However, FODBB produces the best results in more functions.

TABLE 15 Result Statistics for 50-Dimensional Test Functions – Mean Fitness Values

In Table 16, the median values are shown. The FODBB has the lowest values in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{16}$-$f_{17}$, $f_{21}$ and $f_{23}$-$f_{28}$. The PWPSO has low medians in functions $f_{3}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{14}$, $f_{20}$, and $f_{29}$-$f_{30}$. The number of instances in which FODBB performs better is larger than that of PWPSO.

TABLE 16 Result Statistics for 50-Dimensional Test Functions – Median of Fitness Values

For the standard deviation, see Table 17, the FODBB has low values in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{11}$, $f_{16}$-$f_{18}$, $f_{21}$, and $f_{23}$-$f_{28}$. The other performing algorithm is again the PWPSO, which has low standard deviations in functions $f_{1}$-$f_{3}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{12}$-$f_{14}$, $f_{20}$, and $f_{29}$-$f_{30}$. It can be seen that the FODBB algorithm is more robust.

TABLE 17 Result Statistics for 50-Dimensional Test Functions – Standard Deviation of Fitness Values

Table 18 lists the results of the hypothesis test. Complete null hypothesis rejection occurs in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{16}$-$f_{18}$, $f_{21}$, and $f_{23}$-$f_{28}$. Moreover, there are ten rejections in $f_{20}$. The median values from the FODBB are therefore significantly less than the others' in these functions.

TABLE 18 Result Statistics for 50-Dimensional Test Functions – Rank-Sum Test: Hypothesis and p-Value in Brackets

5) Hundred-Dimensional Problems

The results of the mean values are plotted in Fig. 5. The performances of the FODBB algorithm and the others are not easily distinguishable from the graphs. However, trends of decreasing fitness values are observable for FODBB.

The mean values are summarized in Table 19. The FODBB method has the lowest mean values in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{11}$, $f_{17}$, $f_{21}$, $f_{24}$, and $f_{26}$-$f_{28}$. The other performing algorithm is the PWPSO. It has low mean values in functions $f_{3}$-$f_{4}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{12}$-$f_{13}$, $f_{19}$-$f_{20}$, $f_{22}$-$f_{23}$, $f_{25}$, and $f_{29}$-$f_{30}$. PWPSO thus has the best results in more functions than FODBB.

TABLE 19 Result Statistics for 100-Dimensional Test Functions – Mean Fitness Values

Table 20 lists the median values obtained in the 100-dimensional problems. The FODBB algorithm has low medians in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{17}$, $f_{21}$, $f_{24}$, and $f_{26}$-$f_{28}$. The other well performing algorithm is the PWPSO. It has low median values in functions $f_{3}$-$f_{4}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{12}$-$f_{14}$, $f_{19}$-$f_{20}$, $f_{22}$-$f_{23}$, $f_{25}$, and $f_{29}$-$f_{30}$. The number of high performing instances for FODBB is less than that of PWPSO.

TABLE 20 Result Statistics for 100-Dimensional Test Functions – Median of Fitness Values

The standard deviations are listed in Table 21. The FODBB has low values in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{11}$, $f_{17}$, $f_{21}$, $f_{24}$, and $f_{26}$-$f_{28}$. The PWPSO has low standard deviations in functions $f_{3}$-$f_{4}$, $f_{6}$, $f_{9}$-$f_{10}$, $f_{12}$-$f_{13}$, $f_{19}$-$f_{20}$, $f_{22}$-$f_{23}$, $f_{25}$, and $f_{29}$-$f_{30}$.

TABLE 21 Result Statistics for 100-Dimensional Test Functions – Standard Deviation of Fitness Values

Results of the statistical test are shown in Table 22. The null hypotheses for FODBB are rejected in functions $f_{5}$, $f_{7}$-$f_{8}$, $f_{17}$, $f_{21}$, $f_{24}$, and $f_{26}$-$f_{28}$. There were nine rejections in functions $f_{4}$, $f_{12}$, $f_{16}$, $f_{20}$, and $f_{25}$. The performance of the FODBB algorithm in the 100-dimensional problems is below its performance in the other dimensions.

TABLE 22 Result Statistics for 100-Dimensional Test Functions – Rank-Sum Test: Hypothesis and p-Value in Brackets

B. Discussion

Recall that the FODBB algorithm contains three components in its particle update process, namely a homogeneous, a deterministic and a stochastic component. The homogeneous component makes use of the current particle position, which prevents the particle evolution from moving directly to the position between the global best and particle best positions. The benefit is sufficient diversity in early iterations to allow exploration. The stochastic component, on the other hand, is independent of the particle, global best and particle best positions. It is determined by the search range and decreases linearly with the iteration count. This ensures that randomness, as an essential contributor to the search, always exists. The outcome is that particle stagnation is prevented.

The FODBB algorithm has been shown to perform better than the conventional BBPSO and its variants. The performance is especially good in the low-dimensional (2-dimensional) functions. Satisfactory results are also obtained in the 10-, 30- and 50-dimensional problems. However, the PWPSO is the algorithm that performed better than the others in a majority of test functions in 100 dimensions. Furthermore, the FODBB, as indicated by the final mean fitness values, median values, standard deviations and results of the rank-sum test, is an effective and robust algorithm within the class of BBPSO methods and their variants. It is worth mentioning that the FODBB complexity is low, equivalent to that of the conventional BBPSO. Improvements to FODBB for high-dimensional problems will be considered in future work.

SECTION V.

Conclusion

An improved Bare Bones Particle Swarm Optimization variant, the First-Order Bare Bones Particle Swarm Optimizer (FODBB), has been presented. The algorithm has a low computation complexity and is suitable for optimization problems of both low and medium dimensions, over simple and difficult functions. The iterative particle update involves the current particle position, which allows for better search space exploration in early iterations. Furthermore, a search-region-dependent Gaussian sampling with a non-zero and decreasing standard deviation is used in the update. This ensures that particle stagnation is avoided and that exploitation of the search space is enforced in later iterations. Experimental results obtained from benchmark functions indicate that FODBB performs both effectively and robustly across problems ranging from simple unimodal and multi-modal functions to hybrid and difficult composite functions.

References
1.
R. Poli, J. Kennedy and T. Blackwell, "Particle swarm optimization", Swarm Intell., vol. 1, pp. 33-57, Jun. 2007.
2.
G. Fang, N. M. Kwok and Q. Ha, "Automatic fuzzy membership function tuning using the particle swarm optimization", Proc. IEEE Pacific-Asia Workshop Comput. Intell. Ind. Appl., vol. 2, pp. 324-328, Dec. 2008.
3.
N. M. Kwok, H. Y. Shi, Q. P. Ha, G. Fang, S. Chen and X. Jia, "Simultaneous image color correction and enhancement using particle swarm optimization", Eng. Appl. Artif. Intell., vol. 26, no. 10, pp. 2356-2371, Nov. 2013.
4.
J. Kennedy, "Bare bones particle swarms", Proc. IEEE Swarm Intell. Symp., pp. 80-87, Apr. 2003.
5.
M. Campos and R. A. Krohling, "Entropy-based bare bones particle swarm for dynamic constrained optimization", Knowl.-Based Syst., vol. 97, pp. 203-223, Apr. 2016.
6.
R. A. Krohling, M. Campos and P. Borges, "Bare bones particle swarm applied to parameter estimation of mixed Weibull distribution", Soft Computing in Industrial Applications, pp. 53-60, 2010.
7.
H. Zhang, D. D. Kennedy, G. P. Rangaiah and A. Bonilla-Petriciolet, "Novel bare-bones particle swarm optimization and its performance for modeling vapor–liquid equilibrium data", Fluid Phase Equilibria, vol. 301, no. 1, pp. 33-45, Feb. 2011.
8.
B. Jiang and N. Wang, "Cooperative bare-bone particle swarm optimization for data clustering", Soft Comput., vol. 18, no. 6, pp. 1079-1091, Jun. 2014.
9.
Y. Zhang, D. Gong, Y. Hu and W. Zhang, "Feature selection algorithm based on bare bones particle swarm optimization", Neurocomputing, vol. 148, pp. 150-157, Jan. 2015.
10.
W. Srisukkham, L. Zhang, S. C. Neoh, S. Todryk and C. P. Lim, "Intelligent leukaemia diagnosis with bare-bones PSO based feature optimization", Appl. Soft Comput., vol. 56, pp. 405-419, Jul. 2017.
11.
J.-H. Zhang, Y. Zhang and Y. Zhou, "Path planning of mobile robot based on hybrid multi-objective bare bones particle swarm optimization with differential evolution", IEEE Access, vol. 6, pp. 44542-44555, 2018.
12.
Y. Zhang, D.-W. Gong, N. Geng and X.-Y. Sun, "Hybrid bare-bones PSO for dynamic economic dispatch with valve-point effects", Appl. Soft Comput., vol. 18, pp. 248-260, May 2014.
13.
C.-F. Juang, "A hybrid of genetic algorithm and particle swarm optimization for recurrent network design", IEEE Trans. Syst. Man Cybern. B Cybern., vol. 34, no. 2, pp. 997-1006, Apr. 2004.
14.
H. Liu, Z. Cai and Y. Wang, "Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization", Appl. Soft Comput., vol. 10, no. 2, pp. 629-640, Mar. 2010.
15.
R. A. Krohling and E. Mendel, "Bare bones particle swarm optimization with Gaussian or Cauchy jumps", Proc. IEEE Congr. Evol. Comput., pp. 3285-3291, May 2009.
16.
C. Qiu, "Bare bones particle swarm optimization with adaptive chaotic jump for feature selection in classification", Int. J. Comput. Intell. Syst., vol. 11, no. 1, pp. 1-14, 2018.
17.
H. Liu, G. Y. Ding and B. Wang, "Bare-bones particle swarm optimization with disruption operator", Appl. Math. Comput., vol. 238, pp. 106-122, Jul. 2014.
18.
C. Zeng and Y. Shen, "A distribution-guided bare-bones particle swarm optimization", Proc. 12th Int. Conf. Natural Comput. Fuzzy Syst. Knowl. Discovery (ICNC-FSKD), pp. 150-154, Aug. 2016.
19.
X. Zhao, H. Liu, D. Liu, W. Ai and X. Zuo, "New modified bare-bones particle swarm optimization", Proc. IEEE Congr. Evol. Comput. (CEC), pp. 416-422, Jul. 2016.
20.
J. Guo and Y. Sato, "A hierarchical bare bones particle swarm optimization algorithm", Proc. IEEE Int. Conf. Syst. Man Cybern. (SMC), pp. 1936-1941, Oct. 2017.
21.
J. Guo and Y. Sato, "A pair-wise bare bones particle swarm optimization algorithm", Proc. IEEE/ACIS 16th Int. Conf. Comput. Inf. Sci. (ICIS), pp. 353-358, May 2017.
22.
J. Guo and Y. Sato, "A dynamic allocation bare bones particle swarm optimization algorithm and its application", Artif. Life Robot., vol. 23, no. 3, pp. 353-358, Sep. 2018.
23.
J. Guo and Y. Sato, "A dynamic reconstruction bare bones particle swarm optimization algorithm", Proc. IEEE Congr. Evol. Comput. (CEC), pp. 1-6, Jul. 2018.
24.
S. Yang and Y. Sato, "Modified bare bones particle swarm optimization with differential evolution for large scale problem", Proc. IEEE Congr. Evol. Comput. (CEC), pp. 2760-2767, Jul. 2016.
25.
A. J. Jerri, Linear Difference Equations With Discrete Transform Methods, Dordrecht, The Netherlands:Springer, vol. 363, 2013.
26.
Q. Yang, J. Tian and W. Si, "An improved particle swarm optimization based on difference equation analysis", J. Difference Equ. Appl., vol. 23, no. 1, pp. 135-152, Jun. 2017.
27.
H. Shi, S. Liu, H. Wu, R. Li, S. Liu, N. Kwok, et al., "Oscillatory particle swarm optimizer", Appl. Soft Comput., vol. 73, pp. 316-327, Dec. 2018.
28.
N. Awad, M. Ali, J. Liang, B. Qu and P. Suganthan, "Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective bound constrained real-parameter numerical optimization", 2016.
29.
J. D. Gibbons and S. Chakraborti, Nonparametric Statistical Inference, Berlin, Germany:Springer, 2011.
