
Neural Net-Enhanced Competitive Swarm Optimizer for Large-Scale Multiobjective Optimization


Abstract:

The competitive swarm optimizer (CSO) classifies swarm particles into loser and winner particles and then uses the winner particles to efficiently guide the search of the loser particles. This approach has very promising performance in solving large-scale multiobjective optimization problems (LMOPs). However, most studies of CSOs ignore the evolution of the winner particles, although their quality is very important for the final optimization performance. Aiming to fill this research gap, this article proposes a new neural net-enhanced CSO for solving LMOPs, called NN-CSO, which not only guides the loser particles via the original CSO strategy, but also applies our trained neural network (NN) model to evolve winner particles. First, the swarm particles are classified into winner and loser particles by the pairwise competition. Then, the loser particles and winner particles are, respectively, treated as the input and desired output to train the NN model, which tries to learn promising evolutionary dynamics by driving the loser particles toward the winners. Finally, when model training is complete, the winner particles are evolved by the well-trained NN model, while the loser particles are still guided by the winner particles to maintain the search pattern of CSOs. To evaluate the performance of our designed NN-CSO, several LMOPs with up to ten objectives and 1000 decision variables are adopted, and the experimental results show that our designed NN model can significantly improve the performance of CSOs and shows some advantages over several state-of-the-art large-scale multiobjective evolutionary algorithms as well as over model-based evolutionary algorithms.
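As a rough illustration of the pairwise-competition mechanism described above, the following Python sketch splits a swarm into winner and loser particles and applies the standard CSO loser update. This is a minimal sketch, not the paper's implementation: a scalar fitness stands in for the multiobjective quality measure used in NN-CSO, the NN-based winner update is omitted, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def cso_step(swarm, velocities, fitness):
    """One CSO generation (minimization): random pairwise competitions
    split the swarm into winners and losers, and each loser's velocity
    is pulled toward its winner and toward the swarm mean position,
    as in the classic CSO update rule. Winners are left unchanged here."""
    n, d = swarm.shape
    perm = rng.permutation(n)          # random pairing of particles
    mean = swarm.mean(axis=0)          # mean position of the swarm
    winners, losers = [], []
    for i, j in zip(perm[: n // 2], perm[n // 2 :]):
        w, l = (i, j) if fitness[i] <= fitness[j] else (j, i)
        winners.append(w)
        losers.append(l)
        r1, r2, r3 = rng.random((3, d))
        # Loser learns from its winner and the swarm mean.
        velocities[l] = (r1 * velocities[l]
                         + r2 * (swarm[w] - swarm[l])
                         + r3 * (mean - swarm[l]))
        swarm[l] = swarm[l] + velocities[l]
    return winners, losers
```

In NN-CSO, the loser/winner pairs produced by such a step would additionally serve as training input/target pairs for the NN model that then evolves the winners.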
Published in: IEEE Transactions on Cybernetics ( Volume: 54, Issue: 6, June 2024)
Page(s): 3502 - 3515
Date of Publication: 24 July 2023

PubMed ID: 37486827


I. Introduction

Multiobjective optimization problems (MOPs) usually contain several conflicting objectives that need to be optimized simultaneously [1], as defined by \begin{align*} \text{minimize}~&F(x)=\left({f_{1}(x), \ldots, f_{m}(x)}\right) \\ \text{subject to}~&x \in \Omega \tag{1}\end{align*}

where $x = (x_{1}, \ldots, x_{n})$ denotes the $n$-dimensional decision vector of a solution from the search space $\Omega$, and $F(x)$ defines the $m$ objective functions. Because the objectives often conflict, there is no single optimal solution but a set of equally optimal trade-off solutions termed the Pareto-optimal set (PS) [2]. The mapping of the PS onto the objective space is termed the Pareto-optimal front (PF) [2]. In particular, the problem in (1) is called a large-scale MOP (LMOP) when the number of decision variables $n$ is no less than 100 [3]. During the past few decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed with very effective performance for solving MOPs [4], [5], [6]. However, experimental results show that most existing MOEAs are inefficient when solving LMOPs with a large number of decision variables, owing to their weak search abilities in such high-dimensional spaces [7]. To better solve LMOPs, a number of large-scale MOEAs (LMOEAs) have been designed; most of them can be roughly divided into three categories [3], which are introduced sequentially as follows.
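Since Pareto optimality underlies the definitions above, the dominance relation for the minimization problem in (1) can be sketched as follows; this is a generic illustration with hypothetical function names, not code from the paper.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse than b in every objective and strictly better in at
    least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(objs):
    """Indices of the nondominated points in an (N, m) objective matrix;
    these points approximate the PF in the objective space."""
    objs = np.asarray(objs)
    return [i for i in range(len(objs))
            if not any(dominates(objs[j], objs[i])
                       for j in range(len(objs)) if j != i)]
```

For example, with objective vectors (1, 2), (2, 1), and (3, 3), the first two are mutually nondominated and both dominate the third, so the front consists of the first two points.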

