
A Dimension Convergence-Based Evolutionary Algorithm for Many-Objective Optimization Problems




Abstract:

Recently, multi-objective evolutionary algorithms have become the most popular and efficient approach for multi-objective optimization problems involving two or three objectives. However, as the number of objectives increases, their performance tends to deteriorate, which can be mainly attributed to the loss of sufficient selection pressure toward the Pareto front. To address this issue, this paper proposes a dimension convergence-based many-objective evolutionary algorithm (DC-MaOEA) to solve many-objective optimization problems (MaOPs). Specifically, a convergence indicator, named dimension convergence, is presented to enhance the selection pressure toward the Pareto front. When the Pareto dominance-based indicator loses its discriminative power, the new indicator can further measure the convergence of the candidates. Moreover, a new selection strategy is designed to balance the convergence and diversity of the evolutionary process: the mating selection strengthens the selection pressure, while the environmental selection comprehensively evaluates the candidates. The proposed algorithm is tested on 36 instances of 12 many-objective benchmarks and compared with five state-of-the-art algorithms. Three real-world problems are also used to evaluate the performance of the comparison algorithms. Experimental results show that DC-MaOEA is competitive with the peer algorithms.
Published in: IEEE Access ( Volume: 8)
Page(s): 224631 - 224642
Date of Publication: 18 December 2020
Electronic ISSN: 2169-3536



SECTION I.

Introduction

A multi-objective optimization problem (MOP) [1] can be stated as follows:
\begin{align*}
&\mathrm{Minimize}~\mathbf{F}(\mathbf{x}) = \left[\, f_{1}(\mathbf{x}), f_{2}(\mathbf{x}), \ldots, f_{m}(\mathbf{x}) \,\right] \\
&\mathrm{subject~to}~\mathbf{x} \in \Omega \tag{1}
\end{align*}
where the decision vector \mathbf{x} = (x_{1}, x_{2}, \ldots, x_{n}) belongs to the decision space \Omega , and \Omega \subseteq R^{n} represents the feasible decision vector set. The symbols m and n indicate the number of optimization objectives and decision variables, respectively. The objective function vector \mathbf{F}: \Omega \to R^{m} consists of m (m \ge 2 ) conflicting objective functions, and R^{m} is the objective space. Specifically, when the number of optimization objectives m is larger than 3, the MOP is referred to as a many-objective optimization problem (MaOP) [2].
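
As a small illustrative instance (not taken from this paper), consider a bi-objective problem of the form (1) with m=2 and n=1 :
\begin{align*}
&\mathrm{Minimize}~\mathbf{F}(x) = \left[\, x^{2},\ (x-2)^{2} \,\right] \\
&\mathrm{subject~to}~x \in \Omega = [0, 2].
\end{align*}
The two objectives conflict on [0, 2] : decreasing f_{1}(x)=x^{2} pushes x toward 0, which increases f_{2}(x)=(x-2)^{2} , so every x\in [0, 2] is a trade-off (Pareto optimal) solution.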

Due to the conflicting nature of the optimization objectives, an improvement in one objective typically leads to deterioration in the others [3]–[5]. Thus, no single solution can optimize all the objectives simultaneously; the result of an MOP is usually a set of trade-off solutions called Pareto optimal solutions.

In the past few decades, plenty of multi-objective evolutionary algorithms (MOEAs) have been proposed to obtain the Pareto optimal solutions of MOPs. MOEAs are population-based optimization methods with powerful meta-heuristic search ability, and they can be categorized into three classes: 1) Pareto dominance-based MOEAs [6]; 2) decomposition-based MOEAs [7]; and 3) indicator-based MOEAs [8].

Pareto dominance-based MOEAs often use a Pareto dominance-based ranking strategy, which sorts the candidate population into different nondominated fronts. Then, a diversity maintenance mechanism is employed to enhance the diversity of the population. NSGA-II [6], PESA-II [9], SPEA2 [10], and MOPSO [11] are the representative Pareto dominance-based MOEAs. Grid dominance and grid difference were introduced to strengthen the selection pressure toward the true PF in GrEA [12]. Based on the natural character of the knee point, a knee point-driven EA was proposed to enhance the convergence performance in KnEA [13]. According to an efficient reference direction based density estimator, a Strength Pareto EA was presented in SPEA/R [14]. A Pareto-adaptive reference point selection scheme was designed in PaRP/EA [15]. The maximum-vector-angle-first principle was used to guarantee the wideness and uniformity of population in VaEA [16]. Average rank and weighted sum of objectives were employed to accelerate the convergence in PDMOEA [17]. In SdEA [18], a subregion division method was presented to balance the convergence and diversity of the population. Adaptive mating and environmental selection were proposed to enhance the converging capability in ad-MOEA [19].

Decomposition-based MOEAs adopt a decomposition scheme that transforms an MOP into a number of scalar optimization sub-problems, which are then solved simultaneously in a collaborative search process. A reference vector regeneration strategy was presented to dynamically adjust the distribution of the reference set in RVEA [20]. The decomposition-based approach and the Pareto dominance-based approach were combined in MOEA/DD [21]. In SDEA [22], a reference point-based method was combined with a novel sorting strategy that utilizes fitness values determined via scalarization. Based on a self-organizing map, an adaptive method was proposed to adjust the weight vectors in MOEA/D-SOM [23]. A weighted sum method was employed in a local manner in MOEA/D-LWS [24]. In DECAL [25], a coevolution mechanism was designed to enhance the search ability of decomposition-based methods. An adversarial decomposition method was developed to leverage the complementary characteristics of different subproblems in MOEA/AD [26]. A customized dominance relationship with an adaptive strategy was combined with the decomposition-based method in DrEA [27].

Indicator-based MOEAs use low-dimensional indicators to evaluate the candidate population and guide the search process. A hypervolume (HV) indicator-based MOEA was proposed in HypE [28], where a Monte Carlo simulation was adopted to estimate the HV contributions of the candidates. Based on an improved inverted generational distance indicator, an adaptive reference vector adjustment strategy was presented in AR-MOEA [29]. In R2HCA-EMOA [30], an R2 indicator variant was employed to approximate the hypervolume contribution, and a utility tensor structure was designed for the calculation of this variant. An indicator and reference point co-guided MaOEA was proposed in IREA [31], where the I_{\varepsilon +} indicator and reference points were used to improve convergence and diversity, respectively. In [32], a new indicator, named I_{SDE}+ , was proposed to promote the convergence and diversity of the population. In MaOEA/IGD [33], the IGD indicator was used to select the solutions with favorable convergence and diversity in each generation.

The existing MOEAs can successfully solve MOPs with two or three objectives. However, when MOEAs are applied to MaOPs, their performance deteriorates as the number of objectives increases. The main difficulties in solving MaOPs can be summarized as follows. First, the proportion of dominance-resistant [34] individuals in a population rises sharply with the number of objectives, which causes the Pareto dominance-based convergence maintenance mechanism to lose selection pressure toward the Pareto front in a high-dimensional objective space. Second, designing a selection strategy that balances convergence and diversity becomes challenging as the number of objectives increases: as the number of nondominated solutions grows, the Pareto dominance-based indicator loses its discriminative power, and diversity maintenance plays the leading role in the selection strategy.

To tackle the above challenges, this paper proposes a dimension convergence-based many-objective evolutionary algorithm, named DC-MaOEA. The major contributions of this work can be summarized as follows:

  1. A new indicator, named dimension convergence (DC), is presented to further measure the convergence performance of the candidates;

  2. In the DC-based mating selection procedure, the DC indicator and Pareto dominance rank criterion are applied to generate the mating population with good convergence performance;

  3. In the DC-environmental selection procedure, a comprehensive evaluation scheme is designed to balance diversity and convergence of the population.

The remainder of this paper is organized as follows. Section II introduces the background knowledge and related work. Section III describes the proposed algorithm for many-objective optimization. Section IV presents the experimental settings and comprehensive experiments. Section V concludes this paper.

SECTION II.

Related Work

In this section, we first review some basic concepts about many-objective optimization. Afterwards, we briefly introduce the general framework of a Pareto dominance-based MOEA, NSGA-III, which is related to this paper.

A. Definition of Many-Objective Optimization

The Pareto dominance concept is widely applied in the candidate evaluation procedures of MaOEAs. The related concepts are defined as follows:

Definition 1:

Pareto dominance. For the MOP in (1), given two solutions \mathbf{x}_{1},\mathbf{x}_{2}\in \Omega and their corresponding objective vectors \mathbf{F}(\mathbf{x}_{1}),\mathbf{F}(\mathbf{x}_{2})\in R^{m} , \mathbf{x}_{1} Pareto dominates \mathbf{x}_{2} (denoted by \mathbf{x}_{1}\prec \mathbf{x}_{2} ) if and only if f_{i}(\mathbf{x}_{1})\le f_{i}(\mathbf{x}_{2}) for all i\in \{1,2,\ldots,m\} and f_{j}(\mathbf{x}_{1}) < f_{j}(\mathbf{x}_{2}) for at least one j\in \{1,2,\ldots,m\} .
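
As a minimal illustration (not part of the original paper), the dominance relation of Definition 1 can be checked for two objective vectors of a minimization problem as follows:

def dominates(a, b):
    # a, b: objective vectors of equal length (minimization assumed).
    # a Pareto dominates b iff a is no worse in every objective
    # and strictly better in at least one objective.
    no_worse = all(ai <= bi for ai, bi in zip(a, b))
    strictly_better = any(ai < bi for ai, bi in zip(a, b))
    return no_worse and strictly_better

For example, dominates((1, 2, 3), (2, 2, 4)) returns True, while neither of (1, 3) and (3, 1) dominates the other.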

Definition 2:

Pareto optimal solution. A solution \mathbf {x}\in \Omega is said to be a Pareto optimal solution if there does not exist another solution \mathbf {x}^{ \boldsymbol {\ast }}\in \Omega such that \mathbf {x}^{ \boldsymbol {\ast }}\prec \mathbf {x} .

Definition 3:

Pareto set (PS). The union of all Pareto optimal solutions is called Pareto set, \mathrm {PS}=\{\left.{ \mathbf {x}\in \Omega }\right |\nexists \mathbf {y}\in \Omega, \mathbf {y}\prec \mathbf {x}\} .

Definition 4:

Pareto front (PF). The corresponding objective vector set of PS is called the Pareto front, \mathrm {PF}=\{\left.{ \mathbf {F}\left ({\mathbf {x} }\right) }\right |\mathbf {x}\in \mathrm {PS}\} .

Definition 5:

Goal of an MaOEA. The goal of an MaOEA is to find a solution set whose corresponding objective vector set A satisfies the following two subgoals:

  1. All solutions in A distribute as close as possible to the true PF;

  2. All solutions in A distribute as diverse as possible in the objective space.

Definition 6:

Ideal point objective vector. The ideal point objective vector \mathbf{F}(\mathbf{x}^{ide})\in R^{m} is constructed from the minimum value of each objective. Its i -th dimension is defined as f_{i}(\mathbf{x}^{ide})=\min_{\mathbf{x}\in \mathrm{PS}} f_{i}(\mathbf{x}),~i=1,2,\ldots,m .

Definition 7:

Nadir point objective vector. The nadir point objective vector \mathbf{F}(\mathbf{x}^{nad})\in R^{m} is constructed from the worst value of each objective. Its i -th dimension is defined as f_{i}(\mathbf{x}^{nad})=\max_{\mathbf{x}\in \mathrm{PS}} f_{i}(\mathbf{x}),~i=1,2,\ldots,m .

B. NSGA-III

As a representative Pareto dominance-based MOEA, NSGA-III shares the basic framework of NSGA-II. To enhance the diversity of the selection operator, NSGA-III adaptively maintains a set of well-spread reference points to evaluate the diversity of individuals in the objective space.

In each iteration, the parent population and its offspring population are first combined to form a hybrid population; the individuals in the hybrid population are then sorted into several nondomination ranks; the higher ranks are moved into the next generation one by one until the last acceptable rank is encountered, whose individuals cannot all be selected into the next generation; finally, individuals in this last rank are selected based on distance information, where individuals associated with a less crowded reference point have a larger probability of being selected.

SECTION III.

Proposed Algorithm

In this section, the proposed DC-MaOEA is introduced. First, the convergence and diversity measures of DC-MaOEA are presented. Then, the complete algorithm is provided to show all components of the proposal. Finally, the details of the mating selection and environmental selection are given.

A. Convergence and Diversity Measures

1) Convergence Measures

In the selection strategy of DC-MaOEA, the convergence performance of candidates is measured by the Pareto dominance rank and the DC value. In a high-dimensional objective space, due to the high proportion of nondominated individuals, Pareto dominance-based convergence maintenance mechanisms (e.g., NSGA-III) lose the selection pressure needed to drive the population toward the Pareto front. Therefore, the Pareto dominance-based criterion fails to discriminate the convergence degrees of individuals. To address this issue, the dimension convergence function is proposed based on the distance between an individual and the virtual ideal point. When the candidates in the selection procedure share the same Pareto dominance rank, the DC value can further discriminate their convergence.

The DC value of an individual \mathbf{x} in a solution set \boldsymbol{P} is defined as
\begin{equation*}
\mathrm{DC}(\mathbf{x}) = \sqrt{\sum_{i=1}^{m} \left( \frac{z_{i}^{\mathrm{min}} - f_{i}(\mathbf{x})}{z_{i}^{\mathrm{min}} - z_{i}^{\mathrm{nad}}} \right)^{2}} \tag{2}
\end{equation*}
where z_{i}^{\mathrm{min}} = \min_{\mathbf{x}\in \boldsymbol{P}} f_{i}(\mathbf{x}) is the best value of the i -th objective in \boldsymbol{P} and z_{i}^{\mathrm{nad}} = \max_{\mathbf{x}\in \boldsymbol{P}} f_{i}(\mathbf{x}) is the worst value of the i -th objective in \boldsymbol{P} . The DC value is the normalized Euclidean distance between the individual \mathbf{x} and the virtual ideal point, so a smaller DC value reflects a better convergence degree of an individual.
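
A minimal vectorized sketch of how the DC values in (2) could be computed for a whole population is given below; it is our own illustration (assuming NumPy and an N-by-m objective matrix), not the authors' implementation. Note that (z_{i}^{\mathrm{min}}-f_{i}(\mathbf{x}))/(z_{i}^{\mathrm{min}}-z_{i}^{\mathrm{nad}}) equals (f_{i}(\mathbf{x})-z_{i}^{\mathrm{min}})/(z_{i}^{\mathrm{nad}}-z_{i}^{\mathrm{min}}) , which is the form used in the code.

import numpy as np

def dimension_convergence(objs):
    # objs: (N, m) array of objective values of population P (minimization).
    z_min = objs.min(axis=0)                                # best value of each objective in P
    z_nad = objs.max(axis=0)                                # worst value of each objective in P
    span = np.where(z_nad > z_min, z_nad - z_min, 1e-12)    # guard against a zero objective range
    normalized = (objs - z_min) / span                      # per-objective normalization
    # DC(x) is the Euclidean norm of the normalized objective vector,
    # i.e. the normalized distance to the virtual ideal point.
    return np.sqrt((normalized ** 2).sum(axis=1))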

2) Diversity Measures

In the diversity maintenance mechanism, a reference point-based approach is adopted to reflect the diversity of the candidates. To eliminate the range differences among the objectives, a normalization procedure is first applied to the candidate population \boldsymbol{P} ; the normalized objective function is
\begin{equation*}
f_{i}^{nor}(\mathbf{x}) = \frac{f_{i}(\mathbf{x}) - z_{i}^{\mathrm{min}}}{z_{i}^{\mathrm{nad}} - z_{i}^{\mathrm{min}}}, \quad i = 1, 2, \ldots, m. \tag{3}
\end{equation*}
Then, in the normalized objective space, the systematic approach proposed by Das and Dennis [35] is adopted to generate H reference points. Based on the perpendicular distance from each population member to each reference line, every normalized individual is associated with its nearest reference point, as illustrated in Fig. 1. After this association process, the diversity of a candidate is reflected by the association count of its corresponding reference point: a smaller association count indicates better diversity.

FIGURE 1. Illustration of association in a two-objective space.
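
The following sketch illustrates the reference point generation and association step of Fig. 1 under our own simplifications; the helper names (simplex_lattice, associate) are hypothetical, the generator is a plain Das-and-Dennis-style lattice, and none of this is the authors' code.

import itertools
import numpy as np

def simplex_lattice(m, p):
    # Das-and-Dennis-style reference points: all points with non-negative
    # components summing to 1 on a lattice with p divisions per objective.
    points = []
    for bars in itertools.combinations(range(p + m - 1), m - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(p + m - 2 - prev)
        points.append([v / p for v in parts])
    return np.array(points)

def associate(normalized_objs, ref_points):
    # Associate each normalized individual with the reference point whose
    # reference line (direction from the origin) has the smallest perpendicular distance.
    w = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)
    proj = normalized_objs @ w.T                                   # scalar projections onto each line
    d2 = (normalized_objs ** 2).sum(axis=1, keepdims=True) - proj ** 2
    nearest = np.sqrt(np.maximum(d2, 0.0)).argmin(axis=1)          # associated reference point per individual
    counts = np.bincount(nearest, minlength=len(ref_points))       # association counts
    return nearest, counts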

B. Framework of the Proposed Algorithm

The framework of DC-MaOEA is given in Algorithm 1. The inputs of DC-MaOEA are the optimization problem, the population size N , and the maximum number of function evaluations MFE . The output of DC-MaOEA is the final population \boldsymbol{P} with N individuals.

Algorithm 1 Framework of DC-MaOEA

Input: MOP; population size N; maximum number of function evaluations MFE
Output: Population P

1: P ← Initialization(N); FE ← 0
2: while FE < MFE do
3:   P' ← DC-based_Mating_Selection(P)
4:   Q ← Reproduction(P')
5:   P ← DC-based_Environmental_Selection(P ∪ Q)
6:   update FE
7: end while
8: return P

As shown in Algorithm 1, the initialization procedure is performed first, in which N individuals are randomly generated to form the initial population (line 1). Then, the iterative process is executed until the number of function evaluations FE reaches MFE (line 2). In each iteration, the DC-based mating selection procedure selects well-converged solutions to produce the offspring population (line 3); simulated binary crossover (SBX) [36] and polynomial mutation (PM) [37] are applied to generate the offspring population in the reproduction procedure (line 4); and the DC-based environmental selection procedure keeps N promising individuals from the union of the current population and its offspring (line 5). A high-level sketch of this loop follows.
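
The sketch below mirrors the structure of Algorithm 1; the four callables are placeholders for the procedures described in this section, and their names and signatures are ours, not the authors'.

def dc_maoea(initialize, mating_selection, reproduce, environmental_selection, N, MFE):
    # Generic main loop of Algorithm 1; populations are plain lists of individuals.
    P = initialize(N)                                    # line 1: random initial population
    FE = len(P)                                          # evaluations spent on initialization
    while FE < MFE:                                      # line 2
        mating_pool = mating_selection(P, N)             # line 3: Algorithm 2
        Q = reproduce(mating_pool)                       # line 4: SBX + polynomial mutation
        P = environmental_selection(P + Q, N)            # line 5: Algorithm 3
        FE += len(Q)                                     # line 6: update FE
    return P                                             # line 8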

C. Mating Selection

The mating selection procedure selects individuals with promising convergence for the reproduction procedure. In our design, the Pareto dominance relation and the dimension convergence are combined to evaluate the convergence of the mating candidates. The pseudo-code of the DC-based mating selection is given in Algorithm 2. Its inputs are the parent population \boldsymbol{P} and the mating pool size N ; its output is the mating population \boldsymbol{P}' .

Algorithm 2 DC-Based_Mating_Selection

Input: Population P; mating pool size N
Output: Mating population P'

1: P' ← ∅
2: while |P'| < N do
3:   randomly select p, q from P
4:   if p ≺ q then
5:     P' ← P' ∪ {p}
6:   else if q ≺ p then
7:     P' ← P' ∪ {q}
8:   else if DC(p) < DC(q) then
9:     P' ← P' ∪ {p}
10:  else
11:    P' ← P' ∪ {q}
12:  end if
13: end while
14: return P'

As shown in Algorithm 2, the procedure starts with an initialization step that sets the mating population \boldsymbol{P}' to an empty set (line 1). Then, binary tournament selection is applied to fill the mating population (line 2). In each iteration, two individuals p and q are randomly selected from the parent population (line 3); if one Pareto dominates the other, the dominating one enters the mating population (lines 4-7); otherwise, the one with the smaller DC value is preferred (lines 8-11).
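
A sketch of Algorithm 2 under our own data representation (an objective matrix plus precomputed DC values, with indices standing for individuals) is shown below; it is an illustration, not the authors' code.

import random
import numpy as np

def dc_mating_selection(objs, dc, N, rng=random):
    # objs: (pop, m) objective matrix; dc: (pop,) DC values; returns N selected indices.
    def dominates(a, b):
        return bool(np.all(objs[a] <= objs[b]) and np.any(objs[a] < objs[b]))

    selected = []
    while len(selected) < N:                                         # line 2
        p, q = rng.randrange(len(objs)), rng.randrange(len(objs))    # line 3
        if dominates(p, q):                                          # lines 4-5
            selected.append(p)
        elif dominates(q, p):                                        # lines 6-7
            selected.append(q)
        elif dc[p] < dc[q]:                                          # lines 8-9
            selected.append(p)
        else:                                                        # lines 10-11
            selected.append(q)
    return selected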

D. Environmental Selection

To create the next generation, the environmental selection procedure keeps the N best-performing individuals from the union of the parent population and the offspring population. The individuals in the next generation should have better convergence and diversity than the eliminated ones. However, in a Pareto dominance-based convergence maintenance mechanism, the convergence criterion fails to discriminate the convergence of the population, which leads to a diversity-dominated environmental selection. To balance convergence and diversity, the DC-based environmental selection procedure comprehensively evaluates the Pareto dominance relation, the DC value and the reference point-based diversity of the candidates.

The pseudo-code of the DC-based environmental selection is given in Algorithm 3. Its inputs are the union population \boldsymbol{U} of the parent population \boldsymbol{P} and the offspring \boldsymbol{Q} , and the population size N ; its output is the next population \boldsymbol{P} . The procedure starts with Pareto nondominated sorting of the union population \boldsymbol{U} and the initialization of \boldsymbol{P} (lines 1-2). Then, the higher-ranked fronts are moved into the next population \boldsymbol{P} one by one until a critical front \boldsymbol{F}_{i} is encountered whose individuals cannot all be accepted (lines 4-9). Finally, the DC&RPD-based selection is applied to complete \boldsymbol{P} from \boldsymbol{F}_{i} (line 10).

Algorithm 3 DC-Based_Environmental_Selection

Input: Population size N; union population U = P ∪ Q
Output: Population P

1: (F_1, F_2, ..., F_t) ← Pareto_Nondominated_Sorting(U)
2: P ← ∅; i ← 1
3: while |P ∪ F_i| < N do
4:   P ← P ∪ F_i
5:   i ← i + 1
6: end while
7: if |P ∪ F_i| = N then
8:   P ← P ∪ F_i
9: else
10:  P ← DC&RPD_based_Selection(P, F_i)
11: end if
12: return P

To balance the convergence and diversity of the population, the DC&RPD-based (dimension convergence and reference point-based diversity) selection is implemented to further select individuals from \boldsymbol{F}_{i} . The pseudo-code of the DC&RPD-based selection is given in Algorithm 4. Its inputs are the uncompleted population \boldsymbol{P} , the front \boldsymbol{F}_{i} and the population size N ; its output is the next population \boldsymbol{P} .

Algorithm 4 DC&RPD-Based Selection

Input: Uncompleted population P; Pareto front F_i; population size N
Output: Population P

1: P^N ∪ F_i^N ← Normalization(P ∪ F_i)
2: R ← Create_Reference_Point(P^N ∪ F_i^N)
3: A ← Association(R, P^N ∪ F_i^N)
4: while |P| < N do
5:   R^min ← min(A)
6:   r ← random(R^min)
7:   F_{i-r}^N ← IsAssociation(r, F_i^N)
8:   if |F_{i-r}^N| == 0 then
9:     R^min ← R^min − {r}
10:  else if |F_{i-r}^N| == 1 then
11:    P ← P ∪ F_{i-r}^N
12:  else
13:    P ← P ∪ minDC(F_{i-r}^N)
14:  end if
15:  update R^min
16: end while
17: return P

As shown in Algorithm 4, the procedure starts by normalizing the union \boldsymbol{P}\cup \boldsymbol{F}_{i} of the uncompleted population \boldsymbol{P} and the front \boldsymbol{F}_{i} in the objective space (line 1). Then, Das and Dennis's [35] systematic approach is used to generate the reference point set \boldsymbol{R} (line 2). Based on the distance information, the individuals in \boldsymbol{P}^{N} and \boldsymbol{F}_{i}^{N} are associated with the reference points in \boldsymbol{R} (line 3). After the association, the iterative process is executed until the next population \boldsymbol{P} reaches the required size (line 4). In each iteration, a reference point r with the smallest association count in \boldsymbol{R} is selected (lines 5-6); among the candidates in \boldsymbol{F}_{i} associated with r , the one with the smallest DC value survives to the next generation, and if no candidate in \boldsymbol{F}_{i} is associated with r , the point r is excluded from further consideration (lines 7-13).
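
A condensed sketch of Algorithm 4 under our own representation (index lists into an objective matrix, precomputed DC values and a given reference set) is shown below; it illustrates the niching logic and is not the authors' implementation.

import numpy as np

def dc_rpd_selection(P_idx, Fi_idx, objs, dc, ref_points, N):
    # P_idx: indices already accepted; Fi_idx: indices of the critical front;
    # objs: (pop, m) objectives; dc: (pop,) DC values; ref_points: (H, m).
    # Assumes len(P_idx) + len(Fi_idx) >= N.
    members = list(P_idx) + list(Fi_idx)
    z_min, z_nad = objs[members].min(axis=0), objs[members].max(axis=0)
    span = np.where(z_nad > z_min, z_nad - z_min, 1e-12)
    norm = (objs - z_min) / span                                   # line 1: normalization, Eq. (3)

    w = ref_points / np.linalg.norm(ref_points, axis=1, keepdims=True)
    proj = norm @ w.T
    d_perp = np.sqrt(np.maximum((norm ** 2).sum(axis=1, keepdims=True) - proj ** 2, 0.0))
    assoc = d_perp.argmin(axis=1)                                  # line 3: association

    P, Fi = list(P_idx), set(Fi_idx)
    counts = np.zeros(len(ref_points), dtype=int)
    for i in P:
        counts[assoc[i]] += 1
    active = set(range(len(ref_points)))
    while len(P) < N:                                              # line 4
        r = min(active, key=lambda j: counts[j])                   # lines 5-6 (ties broken deterministically)
        candidates = [i for i in Fi if assoc[i] == r]              # line 7
        if not candidates:
            active.remove(r)                                       # lines 8-9: no member of F_i attached to r
        else:
            best = min(candidates, key=lambda i: dc[i])            # lines 10-13: smallest DC survives
            P.append(best)
            Fi.remove(best)
            counts[r] += 1                                         # line 15: update association counts
    return P

Unlike lines 5-6 of Algorithm 4, ties in the smallest association count are broken deterministically here rather than by a random choice.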

SECTION IV.

Experiment Results

A. Benchmark Problems

To measure the performance of the peer MOEAs, the following 12 benchmark functions are chosen for the empirical studies: DTLZ1 to DTLZ3 and DTLZ5 to DTLZ7 from the Deb-Thiele-Laumanns-Zitzler (DTLZ) test suite [38] and MaF1 to MaF6 from the MaF test suite [39]. These two test suites are widely used in the existing literature, and the benchmark instances are designed to investigate the performance of the tested algorithms from different perspectives. Three real-world engineering problems, namely Four bar truss design [40], Hatch cover design [40] and Car side impact design [40], are also used to evaluate the proposed DC-MaOEA.

B. Performance Measures

In this study, we adopt the widely used inverted generational distance (IGD) [41] and HV [28] as performance metrics to quantify algorithm performance on the benchmark test suites and real-world engineering problems. Both the IGD and HV values simultaneously evaluate the convergence and diversity of the obtained solutions.

Let \boldsymbol{P}' be a set of points uniformly distributed over the true PF, and \boldsymbol{S} be the set of solutions obtained by a multi-objective optimization algorithm. The IGD value of \boldsymbol{S} is defined as
\begin{equation*}
IGD(\boldsymbol{S}, \boldsymbol{P}') = \frac{\sum_{x\in \boldsymbol{P}'} dist(x, \boldsymbol{S})}{\left| \boldsymbol{P}' \right|} \tag{4}
\end{equation*}
where dist(x, \boldsymbol{S}) is the Euclidean distance between a point x in \boldsymbol{P}' and its nearest neighbor in \boldsymbol{S} . If \left| \boldsymbol{P}' \right| is large enough to represent the true PF, IGD(\boldsymbol{S}, \boldsymbol{P}') measures both the convergence and diversity of \boldsymbol{S} . The lower the IGD value, the better \boldsymbol{S} approximates the true PF; conversely, the larger the HV value, the better the given solution set approximates the true PF.
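
A small sketch of Eq. (4), assuming both the reference set \boldsymbol{P}' and the obtained set \boldsymbol{S} are given as arrays of objective vectors (our own illustration, not the evaluation code used in the experiments):

import numpy as np

def igd(ref_front, solutions):
    # ref_front: (|P'|, m) points sampled from the true PF.
    # solutions: (|S|, m) objective vectors obtained by the algorithm.
    diff = ref_front[:, None, :] - solutions[None, :, :]   # pairwise differences
    dists = np.sqrt((diff ** 2).sum(axis=2))               # (|P'|, |S|) Euclidean distances
    return dists.min(axis=1).mean()                        # Eq. (4): mean nearest-neighbor distance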

C. Peer Algorithms and Parameter Settings

1) Peer Algorithms

To quantitatively verify the performance of the proposed DC-MaOEA, five state-of-the-art algorithms for many-objective optimization are considered as peer algorithms in this study: 1) MaOEA-R&D [42]; 2) MaOEA-DDFC [43]; 3) MOEA/D-M2M [44]; 4) NSGA-III [45]; and 5) KnEA [13]. Brief descriptions of the five peer algorithms are as follows.

  1. MaOEA-R&D: In MaOEA-R&D, a two-stage strategy is designed to enhance the selection pressure, while a diversity improvement strategy is employed to help the population spread and distribute well.

  2. MaOEA-DDFC: MaOEA-DDFC is based on directional diversity and favorable convergence. In MaOEA-DDFC, the selection strategy is enhanced to improve both convergence and diversity.

  3. MOEA/D-M2M: MOEA/D-M2M is a new version of decomposition-based MOEA, which decomposes an MOP into a set of simple multi-objective optimization subproblems.

  4. NSGA-III: NSGA-III is an improved version of NSGA-II. A reference vector-based approach is developed to maintain the diversity of the population and strengthen its convergence.

  5. KnEA: KnEA is a knee point-driven evolutionary algorithm. In KnEA, a bias toward the knee points of the nondominated set is applied to enhance convergence, while an adaptive strategy for identifying knee points is employed to promote diversity.

For these five competitive algorithms, the source codes embedded in the open-source PlatEMO [46], a MATLAB platform for evolutionary multi-objective optimization, were used. All the MaOEAs were implemented in MATLAB and run on a personal computer with an Intel(R) Core(TM) i7-8700 CPU at 3.20 GHz and 16 GB of RAM.

2) Parameter Settings

For fairness, the parameter settings and termination conditions are set as follows. For each test problem, the number of objectives m is set to m \in \{5, 10, 15\} . Each peer algorithm is run 30 times independently on each test instance. For all the compared algorithms, the termination condition is the maximum number of function evaluations, which is set to 30000 on the DTLZ test suite, 50000 on the MaF test suite and 100000 on the real-world problems. The comparative experiments follow the parameter settings of the peer algorithms and problems as given in their original publications.

D. Experiment Results

For statistical comparison, the mean values and standard deviations of the IGD and HV results for the DTLZ test suite, the MaF test suite and the real-world engineering problems are summarized in Tables 1-5.

TABLE 1. Comparison results of DC-MaOEA and five competitive MaOEAs on DTLZ problems using IGD.
TABLE 2. Comparison results of DC-MaOEA and five competitive MaOEAs on DTLZ problems using HV.
TABLE 3. Comparison results of DC-MaOEA and five competitive MaOEAs on MaF problems using IGD.
TABLE 4. Comparison results of DC-MaOEA and five competitive MaOEAs on MaF problems using HV.
TABLE 5. Comparison results of DC-MaOEA and five competitive MaOEAs on real-world problems using HV.

The Wilcoxon rank-sum test at the 5% significance level (\alpha = 0.05 ) is applied to verify the significance of the differences in the IGD values on all test problems over 30 runs. The symbols '+', '-' and '~' indicate that a result is significantly better than, significantly worse than, or statistically similar to that of the proposed DC-MaOEA, respectively. For each test problem, the best IGD or HV value is highlighted.
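
For reference, this per-instance pairwise comparison can be reproduced with SciPy's rank-sum test; the sketch below (with hypothetical result arrays and our own sign convention, not the authors' script) shows how the '+', '-' and '~' symbols may be derived.

import numpy as np
from scipy.stats import ranksums

def compare_runs(metric_peer, metric_dc, alpha=0.05, smaller_is_better=True):
    # metric_peer, metric_dc: arrays of 30 IGD (or HV) values from independent runs.
    _, p = ranksums(metric_peer, metric_dc)
    if p >= alpha:
        return "~"                                          # statistically similar
    peer_better = np.mean(metric_peer) < np.mean(metric_dc)
    if not smaller_is_better:                               # flip the direction for HV
        peer_better = not peer_better
    return "+" if peer_better else "-"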

1) Comparison Results on DTLZ Test Suite

The IGD values of the six peer algorithms on the 18 DTLZ benchmark instances with 5, 10 and 15 objectives are reported in Table 1. DC-MaOEA obtained the best mean IGD values in 13 out of 18 cases, while MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA achieved the best performance in 0, 1, 0, 0 and 4 cases, respectively. The HV values of the six peer algorithms on the 18 DTLZ benchmark instances are reported in Table 2. DC-MaOEA obtained the best mean HV values in 11 out of 18 cases, while MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA achieved the best performance in 0, 1, 0, 3 and 3 cases, respectively. This demonstrates the superiority of the proposed DC-MaOEA on the DTLZ problems.

The PF of DTLZ1 is a linear hyper-plane, and DTLZ1 is designed to assess whether the tested algorithm can effectively converge to the Pareto optimal hyper-plane. The IGD results show that DC-MaOEA performs better than the other five peer algorithms in all 5-, 10- and 15-objective instances. The HV results show that DC-MaOEA performs better than the other five peer algorithms in the 5- and 15-objective instances; NSGA-III achieves the best HV value in the 10-objective instance, while DC-MaOEA obtains the second-best HV result. These results verify the convergence capability of DC-MaOEA.

DTLZ2 is developed to investigate the tested algorithms' ability to scale up to a large number of objectives. The performance of KnEA and DC-MaOEA is comparable on this problem: DC-MaOEA achieves the best IGD and HV values in the 5-objective instance, while KnEA obtains the best IGD and HV results in the 10- and 15-objective instances.

DTLZ3 has the same PF shape as DTLZ2, but it is used to test the algorithms' capability to converge to the global Pareto front. The IGD and HV results show that DC-MaOEA performs better than the other five peer algorithms in all 5-, 10- and 15-objective instances, which verifies the convergence capability of DC-MaOEA in global PF search.

The PF of DTLZ5 is a curve, and DTLZ5 is designed to evaluate the performance of the tested algorithm in converging to a curve-shaped PF. The IGD results show that DC-MaOEA performs significantly better than the other five peer algorithms in all 5-, 10- and 15-objective instances. The HV results show that DC-MaOEA performs significantly better than the other five peer algorithms in the 5- and 10-objective instances, while NSGA-III achieves the best HV value in the 15-objective instance. These results verify the convergence capability of DC-MaOEA in curve-shaped PF search.

The PF of DTLZ6 is also a curve, but DTLZ6 is designed to evaluate the performance of the tested algorithm in dealing with misleading information. The IGD results show that DC-MaOEA performs significantly better than the other five peer algorithms in the 5- and 10-objective instances; moreover, the best IGD value for the 15-objective instance is also achieved by DC-MaOEA. The HV results show that DC-MaOEA performs better than the other five peer algorithms in the 10- and 15-objective instances, while NSGA-III achieves the best HV value in the 5-objective instance. These results verify the search capability of DC-MaOEA in overcoming misleading information.

The PF of DTLZ7 consists of disconnected regions, and DTLZ7 is proposed to evaluate the algorithms' ability to maintain subpopulations in different Pareto-optimal regions. The IGD results show that KnEA obtains the best result in the 5- and 10-objective instances, while MaOEA-DDFC achieves the best IGD value in the 15-objective instance. The HV results show that DC-MaOEA obtains the best result in the 10-objective instance, while KnEA and MaOEA-DDFC achieve the best HV values in the 5- and 15-objective instances, respectively.

A further analysis of the Wilcoxon rank-sum test results in Table 1 shows that DC-MaOEA performs significantly best on 9 of the 18 test instances. Specifically, DC-MaOEA performs significantly better than MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA on 14, 12, 17, 12 and 13 DTLZ test instances, respectively; it performs statistically similarly to them on 4, 3, 1, 2 and 0 instances, respectively; and it performs significantly worse than them on 0, 3, 0, 4 and 5 instances, respectively. The Wilcoxon rank-sum test results in Table 2 show that DC-MaOEA performs significantly best on 9 of the 18 test instances. Specifically, DC-MaOEA performs significantly better than MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA on 12, 13, 17, 11 and 13 DTLZ test instances, respectively; it performs statistically similarly to them on 5, 2, 1, 3 and 2 instances, respectively; and it performs significantly worse than them on 1, 3, 0, 4 and 3 instances, respectively.

Moreover, to further analyze the performance of the comparison algorithms on the DTLZ test suite, the box plots of the IGD values obtained by the comparison algorithms on DTLZ3, DTLZ5 and DTLZ6 over 30 runs are shown in Fig. 2, Fig. 3 and Fig. 4, respectively.

FIGURE 2. Box plots of DC-MaOEA and five competitive MaOEAs on DTLZ3 (M = 5, 10, 15) using IGD over 30 runs.
FIGURE 3. Box plots of DC-MaOEA and five competitive MaOEAs on DTLZ5 (M = 5, 10, 15) using IGD over 30 runs.
FIGURE 4. Box plots of DC-MaOEA and five competitive MaOEAs on DTLZ6 (M = 5, 10, 15) using IGD over 30 runs.

As shown by the box plots in Figs. 2-4, the IGD values obtained by DC-MaOEA are clearly better than those of the other comparison algorithms. The results in Table 1 and Figs. 2-4 illustrate the superiority of the proposed DC-MaOEA with respect to both convergence and diversity on the chosen DTLZ problems.

2) Comparison Results on MaF Test Suite

The IGD values of the six peer algorithms on the 18 MaF benchmark instances with 5, 10 and 15 objectives are reported in Table 3. DC-MaOEA obtained the best mean IGD values in 10 out of 18 cases, while MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA achieved the best performance in 2, 1, 1, 0 and 4 cases, respectively. The HV values of the six peer algorithms on the 18 MaF benchmark instances are reported in Table 4. DC-MaOEA obtained the best mean HV values in 9 out of 18 cases, while MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA achieved the best performance in 2, 3, 0, 1 and 3 cases, respectively. This demonstrates the superiority of the proposed DC-MaOEA on the MaF problems.

The PF of MaF1 is a linear hyper-plane, and MaF1 is designed to assess whether the tested algorithm can effectively deal with inverted PFs. The performance of DC-MaOEA and KnEA is comparable on this problem. The IGD and HV results show that DC-MaOEA performs better than the other five peer algorithms in the 10- and 15-objective instances, while KnEA achieves the best IGD and HV values in the 5-objective instance. These results verify the convergence capability of DC-MaOEA in inverted PF search.

MaF2 is proposed to assess whether the tested algorithm can achieve concurrent convergence on different objectives. The performance of DC-MaOEA, MaOEA-DDFC, MOEA/D-M2M and KnEA is comparable on this problem. The IGD results show that DC-MaOEA performs better than the other five peer algorithms in the 5-objective instance, while MOEA/D-M2M and KnEA achieve the best IGD values in the 10- and 15-objective instances, respectively. The HV results show that DC-MaOEA performs better than the other five peer algorithms in the 5- and 10-objective instances, while MaOEA-DDFC obtains the best HV value in the 15-objective instance. These results verify the convergence capability of DC-MaOEA on different objectives.

The PF of MaF3 is a convex hyper-plane, and MaF3 is used to assess whether the tested algorithm can effectively deal with convex PFs. The IGD and HV results show that DC-MaOEA performs better than the other five peer algorithms in the 5- and 15-objective instances, while MaOEA-R&D achieves the best IGD and HV values in the 10-objective instance. These results verify the convergence capability of DC-MaOEA in convex PF search.

The PF of MaF4 is concave and multimodal, and MaF4 is developed to evaluate the performance of the tested algorithm in dealing with badly scaled PFs and a highly multimodal fitness landscape. The performance of DC-MaOEA, MaOEA-DDFC and KnEA is comparable on this problem. The IGD results show that DC-MaOEA performs best in the 10-objective instance, while KnEA and MaOEA-DDFC achieve the best IGD values in the 5- and 15-objective instances, respectively. The HV results show that DC-MaOEA performs best in the 10- and 15-objective instances, while KnEA achieves the best HV value in the 5-objective instance. These results verify the convergence capability of DC-MaOEA in concave PF search.

The PF of MaF5 is a convex hyper-plane, and MaF5 is used to evaluate the performance of the tested algorithm in dealing with badly scaled PFs/PSs. The IGD results show that DC-MaOEA performs best in the 5- and 10-objective instances, while MaOEA-R&D achieves the best IGD value in the 15-objective instance. The HV results show that DC-MaOEA performs best in the 5-objective instance, while NSGA-III and KnEA achieve the best HV values in the 10- and 15-objective instances, respectively. These results verify the convergence capability of DC-MaOEA in convex PF search.

The PF of MaF6 is concave and degenerate, and MaF6 is developed to evaluate the performance of the tested algorithm in dealing with degenerate PFs. The IGD results show that DC-MaOEA performs best in the 10- and 15-objective instances, while KnEA achieves the best IGD value in the 5-objective instance. The HV results show that MaOEA-DDFC performs best in the 5- and 10-objective instances, while MaOEA-R&D achieves the best HV value in the 15-objective instance. These results verify the convergence capability of DC-MaOEA in degenerate PF search.

A further analysis of the Wilcoxon rank-sum test results in Table 3 shows that DC-MaOEA performs significantly better than MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA on 15, 11, 14, 12 and 10 MaF test instances, respectively; it performs statistically similarly to them on 1, 1, 2, 4 and 1 instances, respectively; and it performs significantly worse than them on 2, 6, 2, 2 and 6 instances, respectively. For the Wilcoxon rank-sum test results in Table 4, DC-MaOEA performs significantly better than MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA on 14, 12, 18, 12 and 11 MaF test instances, respectively; it performs statistically similarly to them on 1, 2, 0, 4 and 1 instances, respectively; and it performs significantly worse than them on 3, 4, 0, 2 and 6 instances, respectively. These results illustrate the superiority of the proposed DC-MaOEA with respect to both convergence and diversity on the chosen MaF problems. From the experimental studies on the DTLZ and MaF test suites, we conclude that the promising results obtained by DC-MaOEA can be attributed to its improvements in selection pressure and selection strategy.

3) Comparison Results on Real-World Problems

In general, the true Pareto fronts of real-world problems are unavailable. For this reason, the HV value is used to evaluate the performance of the peer algorithms on the real-world engineering problems.

The HV values of the six peer algorithms on the 3 real-world engineering problems are reported in Table 5. DC-MaOEA obtained the best mean HV values in all 3 cases, which demonstrates the superiority of the proposed DC-MaOEA on these real-world problems.

Four bar truss design is a 2-objective problem with 4 variables, which is designed to minimize the structural volume and the joint displacement of the four bar truss. Hatch cover design is a 2-objective problem with 4 variables, which is developed to minimize the weight of the hatch cover. Car side impact design is a 4-objective problem with 7 variables, which is proposed to minimize the weight of the car, the pubic force experienced by a passenger, the average velocity of the V-pillar responsible for withstanding the impact load and the sum of the 10 constraint violations. The HV value results show that DC-MaOEA performs better than the other five peer algorithms on Four bar truss design, Hatch cover design and Car side impact design.

A further analysis of the Wilcoxon rank-sum test results in Table 5 shows that DC-MaOEA performs significantly better than MaOEA-R&D, MaOEA-DDFC, MOEA/D-M2M, NSGA-III and KnEA on 2, 2, 3, 2 and 3 of the real-world engineering problems, respectively, and performs statistically similarly to them on 1, 1, 0, 1 and 0 problems, respectively.

SECTION V.

Conclusion

In this paper, a dimension convergence-based many-objective evolutionary algorithm was proposed to enhance the selection pressure toward the Pareto front. A convergence indicator, named dimension convergence, was presented to further evaluate the convergence of the population. The convergence maintenance mechanism of the proposal combines the Pareto dominance relation with the distance information. Moreover, the mating selection was developed to improve the convergence of the mating population, and, to balance convergence and diversity, the environmental selection was designed to comprehensively evaluate the population.

To measure the effectiveness of the proposed DC-MaOEA, comparative experiments were conducted on 36 instances of 12 many-objective benchmarks against five state-of-the-art algorithms, and three real-world engineering problems were also used to verify the proposal. Performance was evaluated by the IGD and HV indicators, and the experimental results showed the superiority of the proposed DC-MaOEA.

In our future work, DC-MaOEA will be extended to solve constrained multi- and many-objective optimization problems. Moreover, the application of DC-MaOEA to some complex real-world problems will be further explored.
