
A dynamic competitive swarm optimizer based-on entropy for large scale optimization



Abstract:

In this paper, a dynamic competitive swarm optimizer (DCSO) based on population entropy is proposed. The new algorithm is derived from the competitive swarm optimizer (CSO). It uses population entropy to give a quantitative description of the diversity of the population and to divide the population into two sub-groups dynamically. During the early stage of the run, to speed up convergence, the sub-group with better fitness is kept small, and the larger, worse sub-group learns from the small one. During the late stage, to preserve the diversity of the population, the sub-group with better fitness is made large, and the smaller, worse sub-group learns from the large one. The proposed DCSO is evaluated on the CEC'08 benchmark functions for large-scale global optimization. The simulation results indicate that the new algorithm converges better and faster than CSO.
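The abstract only outlines the mechanism, so the following Python sketch is one illustrative reading of it rather than the paper's implementation: population entropy is computed here as Shannon entropy over fitness-value bins (an assumed definition), the entropy drives the split into a "better" and a "worse" sub-group whose relative sizes flip between the early and late stages, and worse particles learn from better ones with a CSO-style update. The function names, binning scheme, 0.5 entropy threshold, small_frac, and phi are all assumptions made for illustration.

```python
import numpy as np

def population_entropy(fitness, num_bins=10):
    # Shannon entropy of the fitness distribution, used as a diversity measure.
    # The binning scheme is an assumption; the paper's definition may differ.
    hist, _ = np.histogram(fitness, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def split_population(fitness, entropy, max_entropy=np.log(10), small_frac=0.2):
    # Divide the swarm (minimization assumed) into better/worse sub-groups.
    # High entropy (early stage, diverse swarm): better sub-group kept small.
    # Low entropy (late stage, converged swarm): better sub-group made large.
    n = len(fitness)
    order = np.argsort(fitness)           # ascending: best fitness first
    if entropy / max_entropy > 0.5:       # assumed early/late-stage threshold
        better_size = int(small_frac * n)
    else:
        better_size = int((1.0 - small_frac) * n)
    return order[:better_size], order[better_size:]  # better, worse indices

def learn_step(X, V, worse, better, phi=0.1):
    # CSO-style update: each worse particle learns from a randomly chosen
    # better particle and is weakly attracted to the swarm mean position.
    mean_X = X.mean(axis=0)
    dim = X.shape[1]
    for i in worse:
        j = np.random.choice(better)
        r1, r2, r3 = np.random.rand(3, dim)
        V[i] = r1 * V[i] + r2 * (X[j] - X[i]) + phi * r3 * (mean_X - X[i])
        X[i] = X[i] + V[i]
    return X, V
```

The learning rule in learn_step follows the loser update of the original CSO; how DCSO combines it with the entropy-driven split above is a reading of the abstract, not a reproduction of the paper's pseudocode.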
Date of Conference: 14-16 February 2016
Date Added to IEEE Xplore: 09 April 2016
Conference Location: Chiang Mai, Thailand

I. Introduction

Particle swarm optimizer (PSO) is an evolutionary algorithm introduced by Kennedy and Eberhart in [1] and [2]. The algorithm is inspired by the behavior of social animals such as bird flocking and fish schooling. It maintains a swarm of particles, which are randomly initialized in an n-dimensional search space. Each particle has a velocity vector and a position vector. In each generation, every particle updates its velocity and position by learning from its own historical best position and the historically best solution of the whole swarm. The two vectors of each particle are updated using the following equations:\begin{align*} V_{i}(t+1)&=\omega\cdot V_{i}(t)+c_{1}\cdot r_{1}\cdot(pBest_{i}-X_{i}(t))+c_{2}\cdot r_{2}\cdot(gBest-X_{i}(t)) \tag{1}\\ X_{i}(t+1)&=X_{i}(t)+V_{i}(t+1) \tag{2} \end{align*}

where pBest_i is the i-th particle's own historical best position and gBest is the historically best solution of the whole swarm. c_1 and c_2 are two parameters called learning factors, which keep a delicate balance between pBest_i and gBest. r_1 and r_2 are random numbers uniformly distributed in [0, 1], and ω is a parameter called the inertia weight.
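As a concrete illustration, the sketch below implements one generation of the update in Eqs. (1) and (2) with NumPy; the default values for omega, c1, and c2 are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One PSO generation, following Eqs. (1) and (2).

    X, V, pbest: arrays of shape (num_particles, n); gbest: shape (n,).
    omega, c1, c2 are illustrative defaults, not taken from the paper.
    """
    num_particles, n = X.shape
    r1 = np.random.rand(num_particles, n)  # uniform in [0, 1]
    r2 = np.random.rand(num_particles, n)
    # Eq. (1): inertia term + cognitive term (pBest) + social term (gBest)
    V_new = omega * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    # Eq. (2): move each particle along its new velocity
    X_new = X + V_new
    return X_new, V_new
```

Here r1 and r2 are drawn per particle and per dimension, a common convention; the equations above can also be read with a single scalar random number per particle.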

