I. Introduction
Multiobjective optimization problems (MOPs) usually contain several conflicting objectives that need to be optimized simultaneously [1], as defined by \begin{align*} \text{minimize}~&F(x)=\left(f_{1}(x), \ldots, f_{m}(x)\right) \\ \text{subject to}~&x \in \Omega \tag{1}\end{align*}
where $x=(x_{1}, \ldots, x_{n})$ denotes the $n$-dimensional decision vector of a solution from the search space $\Omega$, and $F: \Omega \rightarrow \mathbb{R}^{m}$ defines $m$ objective functions. Due to the conflicts among the objectives, an MOP generally has no single optimal solution; instead, it has a set of equally optimal trade-off solutions termed the Pareto-optimal set (PS) [2]. The mapping of the PS onto the objective space is termed the Pareto-optimal front (PF) [2]. In particular, the problem in (1) is called a large-scale MOP (LMOP) when the number of decision variables is no less than 100 [3].

During the past few decades, a number of multiobjective evolutionary algorithms (MOEAs) have been proposed and have shown effective performance in solving MOPs [4], [5], [6]. However, experimental results show that most existing MOEAs are inefficient when solving LMOPs with a large number of decision variables, owing to their weak search abilities in high-dimensional decision spaces [7]. To better solve LMOPs, a number of large-scale MOEAs (LMOEAs) have been designed, and most of them can be roughly divided into three categories [3], which are introduced sequentially as follows.
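Before these categories are reviewed, the standard Pareto-dominance relation underlying the PS and PF above is briefly recalled (stated here for the minimization form in (1)): a solution $x \in \Omega$ is said to dominate a solution $y \in \Omega$, written $x \prec y$, if and only if \begin{align*} f_{i}(x) \le f_{i}(y)~\forall i \in \{1, \ldots, m\} \quad \text{and} \quad f_{j}(x) < f_{j}(y)~\text{for at least one}~j \in \{1, \ldots, m\}. \end{align*} A solution $x^{*} \in \Omega$ is Pareto optimal if no other solution in $\Omega$ dominates it; the PS collects all Pareto-optimal solutions, and the PF is the image of the PS under $F$.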