Introduction
In manufacturing shops, production scheduling plays an important role in achieving one or more optimal manufacturing objectives, such as minimizing the maximum completion time (makespan), the mean flow time, or the earliness and tardiness. The flow-shop scheduling problem (FSP) is one of the classical production scheduling problems and widely exists in the chemical [1], metallurgical [2], and micro-electronic [3] industries, among others. The hybrid flow shop with unrelated parallel machines (HFS) is an extension of FSP, but it is more complex than FSP because, in at least one stage, there are multiple parallel machines to choose from to process a job. In 1971, Arthanari and Ramamurthy [4] first studied a two-stage HFS scheduling problem. Since then, three-stage and m-stage HFS scheduling problems have been widely studied. In 1988, Gupta [5] proved that the HFS restricted to two processing stages is Non-deterministic Polynomial-time hard (NP-hard).
In recent years, with increasing environmental pollution and the depletion of energy resources, more and more countries have made policies on energy saving and emission reduction, and enterprises with high energy consumption face the risk of elimination. Therefore, energy conservation has become one of the objectives that many manufacturers pursue and an important issue for industry. The steelmaking and continuous casting process in the steel industry is a typical HFS. Since many different machines at the same stage can process the same job with different energy consumption, energy-efficient scheduling becomes more significant. On the other hand, productivity is related to the efficiency of enterprises. Under this background, it is of great significance to study the HFS scheduling problem to minimize the makespan and the energy consumption. The two main novelties of this paper are as follows:
To reduce energy consumption, the processing energy consumption, the standby energy consumption, and the turning on/off energy consumption are considered simultaneously, and a greedy shifting algorithm is proposed to reduce the standby and turning on/off energy consumption during the decoding phase.
A hybrid multi-objective teaching-learning based optimization (HMOTLBO) is proposed to solve the scheduling problem of HFS with unrelated machines. The three contributions of the proposed HMOTLBO are: (a) a new solution representation method that supports five different decoding rules; (b) at the preliminary stage of the algorithm, a weight matching algorithm that avoids random matching between students and weight vectors and improves the convergence speed of the algorithm; (c) a teaching operator based on crossover and a self-learning operator based on variable neighborhood search (VNS) to enhance the exploration and exploitation capacities of the algorithm.
The rest of this paper is organized as follows: Section II reports the relevant literature review. Section III formulates the HFS scheduling problem. Section IV presents the proposed HMOTLBO algorithm. Section V analyzes the experimental results and compares them with those of other algorithms in the literature to evaluate the performance of the proposed algorithm. Finally, the last section presents the conclusions of our work.
Literature Review
A. Single Objective HFS Scheduling Problems
For the HFS scheduling problem, most of the literature focuses on the makespan objective. For example, Engin and Döyen [6] proposed an artificial immune system; Alaykıran et al. [7] and Khalouli et al. [8] each proposed an ant colony optimization (ACO); Liao et al. [9] proposed a particle swarm optimization (PSO); Li and Pan [10] proposed a hybrid artificial bee colony algorithm; Song [11] proposed an improved greedy genetic algorithm (IGA). The second most common objective is to minimize the sum of the earliness and tardiness costs: Han et al. [12] proposed a differential evolution (DE) algorithm based on multiple decision rules; M'Hallah and Alhajraf [13] proposed an ant colony system; Fei and Hui [14] proposed an allied genetic algorithm (GA), etc.
B. Multi-Objective HFS Scheduling Problems
In recent years, as more and more manufacturing enterprises pursue multiple production objectives and multi-objective optimization algorithms such as NSGA [15], NSGA-II [16], MOEA/D [17], SPEA [18], and SPEA2 [19] have been proposed, more and more research on the HFS scheduling problem focuses on multiple objectives. To minimize makespan and mean flow time, Solano-Charris et al. [20] presented an ACO and Marichelvam et al. [21] proposed a discrete firefly algorithm. To minimize makespan and mean tardiness, Mundim and Queiroz [22] and De Siqueira et al. [23] each developed a variable neighborhood search algorithm (VNS); Ying et al. [24] proposed an iterated Pareto greedy algorithm; Tran and Ng [25] proposed a hybrid water flow algorithm; Asefi et al. [26] considered the no-wait constraint and proposed a hybrid NSGA-II and VNS algorithm; Cho et al. [27] developed a Pareto genetic algorithm while considering reentrant jobs. To minimize the total flow time and the number of tardy jobs, Wang et al. [28] developed an NSGA-II. To minimize the production cost and maximize customer satisfaction, Shahvari and Logendran [29] proposed a tabu search enhanced with path relinking. For more than two objectives, Zhu-Min [30] proposed a multi-objective master scheduling algorithm to minimize delay penalties, processing time, setup cost, and holding cost; Lu et al. [31] proposed a novel multi-objective cellular grey wolf optimizer (MOCGWO) to minimize noise pollution, makespan, and energy consumption, etc.
C. Energy Oriented HFS Scheduling Problems
With the increasing awareness of environmental protection, more and more industries pay attention to energy saving in manufacturing, and more and more researchers have begun considering energy-efficient scheduling. Gao et al. [32] reviewed the literature on production scheduling for intelligent manufacturing systems with energy-related constraints and objectives over the past 10 years. To minimize the total electricity cost, Che et al. [33], [34] proposed an efficient greedy insertion heuristic for production scheduling under time-of-use electricity tariffs. Wu and Sun [35] developed a non-dominated sorting genetic algorithm to solve the flexible job shop scheduling (FJS) problem considering the makespan, the energy consumption, and the number of machine turn-on/off events simultaneously; Wang et al. [36] proposed a two-stage optimization algorithm for the same problem. Meng et al. [37] integrated process planning and scheduling for the FJS problem, built a mixed integer linear programming (MILP) model, and solved it with CPLEX. Dai et al. [38] proposed a modified genetic algorithm to minimize the makespan and the total energy consumption in job shop environments. For the HFS scheduling problem, Luo et al. [39] presented a multi-objective ant colony optimization considering production efficiency and electric power cost; Dai et al. [40] presented a genetic simulated annealing algorithm to minimize the makespan and the total energy consumption; Zhang et al. [41] developed a time-indexed integer programming formulation to minimize the electricity cost and reduce CO2 emissions; Rao et al. [42] proposed an improved particle swarm optimization for dynamic scheduling; Li et al. [43] proposed an energy-aware multi-objective optimization algorithm (EA-MOA) with consideration of the setup energy consumption.
D. Motivations
In an actual HFS environment with unrelated parallel machines, multiple machines can be selected to process a job at some stages, and the processing time and energy consumption of different machines are often different. At the same time, a machine can be in three different states, namely the processing state, the standby state, and the shutdown/restart state, and its energy consumption per unit time differs in each state. Therefore, it is of great significance to consider both the processing efficiency and the energy consumption of the machines when solving the HFS scheduling problem. From the above review, there is little literature on minimizing both the makespan and the energy consumption for the HFS scheduling problem with unrelated machines, and there is no published literature in which the processing energy consumption, the standby energy consumption, and the turning on/off energy consumption are considered simultaneously. Therefore, in our study, we consider minimizing the energy consumption and the makespan of the HFS scheduling problem simultaneously.
Problem Formulation
A. Problem Description
In an HFS scheduling problem with unrelated machines, there are n jobs that must be processed through s consecutive stages, and at least one stage contains more than one unrelated parallel machine. The following assumptions are made:
All machines and jobs are available at time zero.
Preemption is not allowed, that is, no task can be interrupted before the completion of its current operation.
There is an infinite buffer or storage between any two consecutive stages.
Each machine can process at most one operation at a time and each operation is processed on at most one machine at a time.
All operations are assigned strictly to one machine at each stage.
B. Notations
In order to analyze the problem easily, some notations are defined as follows:
Indices:
index of jobs, from 1 to n
index of stages, from 1 to s
index of job positions on one machine
index of machines at one stage
Parameters:
n: total number of jobs
s: total number of processing stages
total number of machines at a stage
processing time of a job on a machine at a stage
processing energy consumption per unit time of a job on a machine at a stage
standby energy consumption per unit time of a machine
turning on/off energy consumption per restart of a machine
Decision variables:
binary variable equal to 1 if the processing of a job at a stage is assigned to a machine, and 0 otherwise
completion time of a job at a stage
start time of the operation at a given position on a machine
completion time of the operation at a given position on a machine
total number of jobs assigned to a machine
job index of the operation at a given position on a machine
standby time between two consecutive operations on a machine
binary variable indicating whether a machine is turned off (and later restarted) during an idle interval
the completion time of the last job on the last machine at the last stage (makespan)
total energy consumption
total processing energy consumption
total standby energy consumption
total on/off energy consumption
a solution representation
C. Mixed-Integer Linear Programming Model (MILP)

Eq. (1) is the objective function of the makespan. Eq. (2) is the objective function of the energy consumption. Eq. (3) ensures that exactly one machine is allocated to process each job at each stage.
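For readability, a minimal sketch of the two objective functions is given below; the symbol names (C_max for the makespan, E_proc, E_standby, and E_on/off for the three energy terms) are illustrative stand-ins for the quantities in the decision-variable list, and the constraint set of the MILP model is not reproduced here.
\begin{align*} \min f_{1}&=C_{\max}\tag{1}\\ \min f_{2}&=E_{\mathrm{total}}=E_{\mathrm{proc}}+E_{\mathrm{standby}}+E_{\mathrm{on/off}}\tag{2}\end{align*}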
The Proposed Algorithm HMOTLBO
TLBO was proposed by Rao et al. [42] in 2011. As a population-based algorithm, the performance of TLBO is not affected by algorithm-specific parameters and it is easy to implement. Since few TLBO algorithms have been used to solve the multi-objective HFS scheduling problem, this paper proposes a hybrid multi-objective TLBO algorithm (HMOTLBO). Inspired by MOEA/D, HMOTLBO is implemented based on objective decomposition and uses an external archive set (EA) to record the non-dominated solutions. More details of the algorithm are introduced in the following subsections.
A. Solution Representation
A solution representation of HFS must describe the jobs' processing sequence at each stage and the processing machine assigned at that stage. In this study, a novel encoding method is developed, which is a vector containing
B. Decoding Rules
In order to guide the search direction of the algorithm, five different decoding rules based on different machine assignment rules are proposed: (1) assign machines to jobs exactly according to the solution representation, which is named SI; Figure 1 gives an example. (2) When a job is scheduled, assign it to the machine that can complete the job earliest; if more than one machine satisfies this rule, select the one with the lowest processing energy consumption. This decoding rule ignores the elements of the solution vector from the
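To make rule (2) concrete, the sketch below picks, for one job at one stage, the machine with the earliest possible completion time and breaks ties by the lowest processing energy. The data layout (machineReady, procTime, procEnergy, jobReady) and the function name are illustrative assumptions, not the paper's implementation.

#include <algorithm>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical per-stage data: for each candidate machine k,
// machineReady[k]  - time at which machine k becomes idle
// procTime[k]      - processing time of the current job on machine k
// procEnergy[k]    - processing energy per unit time of the job on machine k
// jobReady         - completion time of the job's previous operation
// Returns the index of the chosen machine (decoding rule (2)).
int assignMachine(const std::vector<double>& machineReady,
                  const std::vector<double>& procTime,
                  const std::vector<double>& procEnergy,
                  double jobReady) {
    int best = -1;
    double bestFinish = std::numeric_limits<double>::max();
    double bestEnergy = std::numeric_limits<double>::max();
    for (std::size_t k = 0; k < machineReady.size(); ++k) {
        double start  = std::max(machineReady[k], jobReady);
        double finish = start + procTime[k];
        double energy = procEnergy[k] * procTime[k];
        // earlier completion wins; ties broken by lower processing energy
        if (finish < bestFinish ||
            (finish == bestFinish && energy < bestEnergy)) {
            best = static_cast<int>(k);
            bestFinish = finish;
            bestEnergy = energy;
        }
    }
    return best;
}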
C. Greedy Shifting Strategy
During the decoding phase, only the processing energy is considered; additional energy can be saved by reducing the machines' restart times, prolonging the restart intervals, or reducing the machines' standby time. Therefore, to achieve this purpose, a greedy shifting algorithm is proposed that changes neither the maximum completion time nor the processing sequence of the jobs on each machine. The pseudo code is shown in Algorithm 1 and an example is illustrated with Gantt charts in Figure 2.
Algorithm 1. Greedy Shift Strategy
Input: a solution
Output: a solution with the same completion time and less energy consumption
Step 1: // right shifting
// from the last stage down to the second stage
for (j = s; j >= 2; j--)
    // for each machine at stage j
    for (k = 1; k <= number of machines at stage j; k++)
        for each job on machine k
        { if there is idle time behind the job, then
            { move the job right as far as possible; }
        }
Step 2: // left shifting
// from the second stage to the penultimate stage
for (j = 2; j <= s-1; j++)
    // for each machine at stage j
    for (k = 1; k <= number of machines at stage j; k++)
        for each block on machine k
            if moving it left can reduce the non-processing energy consumption, then
            { move it left as far as possible; }
In Figure 2, Gantt chart (a) shows the original decoding result of an instance, where the makespan is 293 and the energy consumption is 9542. Gantt chart (b) shows the right-shifting result of (a): the makespan stays unchanged while the energy consumption is reduced to 9294. To show the changed part clearly, it is surrounded by a red dotted line. Gantt chart (c) shows the left-shifting result of (b), and a purple dotted line surrounds the changed part. In chart (c), the energy consumption is reduced to 9278 while the makespan remains unchanged.
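The right-shifting pass can be sketched as follows for a single machine; the Op record and its latestFinish bound (for example, the start time of the same job's operation at the next stage, or the current finish at the last stage) are assumptions made for illustration rather than the paper's data structure.

#include <limits>
#include <vector>

// Hypothetical operation record on one machine (sorted by start time).
struct Op {
    double start;        // current start time
    double dur;          // processing duration
    double latestFinish; // latest allowed finish, e.g. the start of the same
                         // job's operation at the next stage (assumption)
};

// Right-shift pass of the greedy shifting strategy on one machine: push every
// operation as late as its bound and the following operation allow, which
// consolidates idle time and thus reduces standby / restart energy.
void rightShift(std::vector<Op>& ops) {
    double nextStart = std::numeric_limits<double>::max();
    for (int i = static_cast<int>(ops.size()) - 1; i >= 0; --i) {
        double finish = (ops[i].latestFinish < nextStart) ? ops[i].latestFinish
                                                          : nextStart;
        double start = finish - ops[i].dur;
        if (start > ops[i].start)     // only move to the right
            ops[i].start = start;
        nextStart = ops[i].start;     // bound for the preceding operation
    }
}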
D. Decomposition Mechanism
Inspired by MOEA/D, we convert the multi-objective problem into a series of scalar optimization problems; the weights of each sub-problem are decided by Eq. (16), the objectives are normalized by Eq. (17), and each scalar sub-problem is defined by the weighted Tchebycheff approach in Eq. (18):
\begin{align*} \left({\lambda_{1}^{ii},\ \lambda_{2}^{ii}}\right)&=\left({\frac{ii-1}{Size-1},\ \frac{Size-ii}{Size-1}}\right),\quad 1\le ii\le Size\tag{16}\\ \bar{f}_{o}&=\frac{f_{o}-f_{o}^{min}}{f_{o}^{max}-f_{o}^{min}},\quad 1\le o\le 2\tag{17}\\ \min g\left({x\,\vert\,\lambda^{ii},z^{\ast}}\right)&=\max_{1\le o\le 2}\left\{{\lambda_{o}^{ii}\,\vert\bar{f}_{o}\left({x}\right)-z_{o}^{\ast}\vert}\right\}\tag{18}\end{align*}
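As a concrete illustration, the following sketch computes the weight vector of Eq. (16) and the scalarized (normalized Tchebycheff) fitness of Eqs. (17) and (18) for one solution; the array-based layout and the reference point zStar are illustrative assumptions.

#include <algorithm>
#include <array>
#include <cmath>

// Weight vector of the ii-th sub-problem (Eq. 16), 1 <= ii <= Size.
std::array<double, 2> weight(int ii, int Size) {
    return { double(ii - 1) / (Size - 1), double(Size - ii) / (Size - 1) };
}

// Weighted Tchebycheff value of a solution for one sub-problem.
// f      - raw objective values (makespan, energy) of the solution
// fMin   - current minimum of each objective (for normalization, Eq. 17)
// fMax   - current maximum of each objective
// lambda - weight vector of this sub-problem (Eq. 16)
// zStar  - reference point in the normalized objective space
double tchebycheff(const std::array<double, 2>& f,
                   const std::array<double, 2>& fMin,
                   const std::array<double, 2>& fMax,
                   const std::array<double, 2>& lambda,
                   const std::array<double, 2>& zStar) {
    double g = 0.0;
    for (int o = 0; o < 2; ++o) {
        double fBar = (f[o] - fMin[o]) / (fMax[o] - fMin[o]);    // Eq. (17)
        g = std::max(g, lambda[o] * std::fabs(fBar - zStar[o])); // Eq. (18)
    }
    return g;
}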
E. Matching Strategy
At the initialization phase of MOEA/D, the weight vectors are randomly allocated to solutions; if a weight vector does not match a solution well, the relationship between them will look like that in Figure 3. In Figure 3, weight vector
Algorithm 2 Weight Matching
Input: the weight vectors and the initial solutions;
Output: the solutions and their matched weights;
Step 1: Sort the solutions non-ascendingly according to the values of the first objective;
Step 2: Sort the weight vectors;
Step 3: for (i = 1; i <= N; i++)
    assign the i-th weight to the i-th solution;
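A minimal sketch of the matching idea follows; it assumes the weight vectors generated by Eq. (16) are kept in order of increasing first component, so the solution with the largest first objective is paired with the weight that emphasizes that objective least. The container layout and function name are illustrative.

#include <algorithm>
#include <numeric>
#include <vector>

// Weight matching (in the spirit of Algorithm 2).
// objective1[i] holds the first objective value (e.g. makespan) of solution i.
// Returns perm, where perm[r] is the index of the solution matched to the
// r-th weight vector.
std::vector<int> matchWeights(const std::vector<double>& objective1) {
    std::vector<int> perm(objective1.size());
    std::iota(perm.begin(), perm.end(), 0);
    // Sort solutions non-ascendingly by the first objective, then pair them
    // with the weight vectors in their generated order (assumption).
    std::sort(perm.begin(), perm.end(),
              [&](int a, int b) { return objective1[a] > objective1[b]; });
    return perm;
}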
F. Teaching Operator
During the teaching phase, a crossover operator is used to implement the teaching work. The specific crossover process is illustrated in Figure 4 and the pseudocode is provided in Algorithm 3. At this stage, teachers transfer knowledge to students and strive to improve the score of the whole class; a good teacher will improve the students' scores greatly. Therefore, teachers will be selected from the external archive set EA. For the current student with weight vector λ^i, the selection probability of each teacher in EA is computed by Eqs. (19)-(21):
\begin{align*} \theta_{o}^{t}&=1-\frac{f_{o}^{t}-f_{o}^{min}}{f_{o}^{max}-f_{o}^{min}}\Big/\sum\nolimits_{i=1}^{2}\frac{f_{i}^{t}-f_{i}^{min}}{f_{i}^{max}-f_{i}^{min}},\quad t=1,2,\ldots,NT;\ o=1,2\tag{19}\\ D^{t}&=\sqrt{\left({\lambda_{1}^{i}-\theta_{1}^{t}}\right)^{2}+\left({\lambda_{2}^{i}-\theta_{2}^{t}}\right)^{2}},\quad t=1,2,\ldots,NT\tag{20}\\ P^{t}&=D^{t}\Big/\sum\nolimits_{k=1}^{NT}D^{k},\quad t=1,2,\ldots,NT\tag{21}\end{align*}
Algorithm 3 Teaching Operator
Input: the selected teacher and the current student;
Output: the updated student and its neighbors;
Step 1: randomly generate two integers h1 and h2;
Step 2: // execute the crossover operation
if (h1 > N) then
    { if (the decoding mode is neither SII nor SIII) then
        generate two children with the crossover method of (a) in Figure 4;
      else
        goto Step 1; }
else if (h2 <= N) then
    generate two children with the crossover method of (b) in Figure 4;
else
    generate two children with the crossover method of (c) in Figure 4;
endif
Step 3: evaluate the fitness of the two children.
Step 4: // update the student and its neighbors
choose the better of the two children; if it is better than the current student, then update the student;
update the neighbors of the current student with the two children;
Algorithm 4 Teacher Selection
Input: the external archive set EA and the current student's weight vector;
Output: the teacher selected for the current student;
Step 1: for (t = 1; t <= NT; t++)
    { compute the normalized objective values of the t-th teacher with Eq. (19);
      compute the distance between the current student's weight vector and the t-th teacher with Eq. (20); }
Step 2: for (t = 1; t <= NT; t++)
    compute the selection probability of the t-th teacher with Eq. (21);
Step 3: choose a teacher by roulette wheel according to the probability of each teacher in EA.
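Putting Eqs. (19)-(21) together, a roulette-wheel teacher selection could look like the sketch below; the containers and the use of std::rand are illustrative, and the probabilities follow Eq. (21) literally (proportional to the distance D^t).

#include <array>
#include <cmath>
#include <cstdlib>
#include <vector>

// Roulette-wheel teacher selection following Eqs. (19)-(21).
// teacherObj[t] holds the raw objective values of the t-th teacher in EA,
// fMin/fMax are the observed bounds of each objective, and lambda is the
// weight vector of the current student; the names are illustrative.
int selectTeacher(const std::vector<std::array<double, 2>>& teacherObj,
                  const std::array<double, 2>& fMin,
                  const std::array<double, 2>& fMax,
                  const std::array<double, 2>& lambda) {
    const int NT = static_cast<int>(teacherObj.size());
    std::vector<double> D(NT);
    double sumD = 0.0;
    for (int t = 0; t < NT; ++t) {
        // Eq. (19): normalized merit of teacher t on each objective
        double n1 = (teacherObj[t][0] - fMin[0]) / (fMax[0] - fMin[0]);
        double n2 = (teacherObj[t][1] - fMin[1]) / (fMax[1] - fMin[1]);
        double theta1 = 1.0 - n1 / (n1 + n2);
        double theta2 = 1.0 - n2 / (n1 + n2);
        // Eq. (20): distance between the student's weight and theta
        D[t] = std::sqrt((lambda[0] - theta1) * (lambda[0] - theta1) +
                         (lambda[1] - theta2) * (lambda[1] - theta2));
        sumD += D[t];
    }
    // Eq. (21) + roulette wheel: pick teacher t with probability D[t] / sumD
    double r = static_cast<double>(std::rand()) / RAND_MAX * sumD;
    for (int t = 0; t < NT; ++t) {
        r -= D[t];
        if (r <= 0.0) return t;
    }
    return NT - 1;
}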
G. Self-Learning Operator
To speed up the learning process, students tend to study by themselves or learn from their neighbors. In this study, each student, and each teacher in the external archive set EA, performs a self-learning operator based on VNS, which uses the following three neighborhood structures (a sketch of these moves is given after the list):
Reverse: randomly select two positions in the solution vector, then reverse the elements between these two positions.
Swap: randomly select two positions in the solution vector, then swap the elements at these two positions.
Insert: randomly select two positions in the solution vector, then remove the element at one position and insert it before the other.
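The three moves can be sketched as below; the solution is treated as a plain vector of integers and the positions p1 < p2 are assumed to be drawn from the valid index range, which is an illustrative simplification.

#include <algorithm>
#include <utility>
#include <vector>

// Reverse: invert the segment between the two positions (inclusive).
void reverseMove(std::vector<int>& sol, int p1, int p2) {
    std::reverse(sol.begin() + p1, sol.begin() + p2 + 1);
}

// Swap: exchange the elements at the two positions.
void swapMove(std::vector<int>& sol, int p1, int p2) {
    std::swap(sol[p1], sol[p2]);
}

// Insert: remove the element at p2 and re-insert it just before p1
// (valid under the assumption p1 < p2).
void insertMove(std::vector<int>& sol, int p1, int p2) {
    int v = sol[p2];
    sol.erase(sol.begin() + p2);
    sol.insert(sol.begin() + p1, v);
}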
Algorithm 5 Self-Learning Based on VNS
Input: a solution π
Output: the updated solution π
Step 1: initialize the maximum iteration number iternum;
Step 2: for each neighborhood, perform the following operations:
    k = 0;
    while (k < iternum)
    { generate π_new by performing the neighborhood operation on π;
      if (f_{π_new} < f_{π}) then
      { π = π_new;
        update the current student's neighbors with π_new; }
      k = k + 1;
    }
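A compact sketch of this loop follows, assuming the three moves from the earlier sketch and an evaluate function returning the scalarized fitness (smaller is better); the neighbor-update step of Algorithm 5 is omitted here, so this only illustrates the neighborhood iteration and acceptance rule.

#include <functional>
#include <random>
#include <utility>
#include <vector>

using Move = std::function<void(std::vector<int>&, int, int)>;

// VNS-based self-learning sketch: apply each neighborhood iternum times,
// accepting only improving candidates.
void selfLearning(std::vector<int>& sol,
                  const std::function<double(const std::vector<int>&)>& evaluate,
                  const std::vector<Move>& moves, // {reverseMove, swapMove, insertMove}
                  int iternum, std::mt19937& rng) {
    std::uniform_int_distribution<int> pos(0, static_cast<int>(sol.size()) - 1);
    for (const Move& move : moves) {               // for each neighborhood
        for (int k = 0; k < iternum; ++k) {
            std::vector<int> cand = sol;
            int p1 = pos(rng), p2 = pos(rng);
            if (p1 > p2) std::swap(p1, p2);
            if (p1 == p2) continue;                // degenerate move, skip
            move(cand, p1, p2);                    // perform the neighborhood operation
            if (evaluate(cand) < evaluate(sol))    // accept only improvements
                sol = cand;                        // (neighbor update omitted)
        }
    }
}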
H. Framework of the Proposed Algorithm
To show the proposed algorithm HMOTLBO more clearly, its pseudocode is described in Algorithm 6 and its flow chart is shown in Figure 5.
Algorithm 6 HMOTLBO
Input: the HFS problem;
Output: the external archive set EA;
Step 1: // initialization
set the size of the population N;
set the neighbor size NB of each weight;
set the external archive set EA = ∅;
set the maximal run time ω;
for (ii = 1; ii <= N; ii++)
{ calculate weight λ^ii with Eq. (16);
  randomly generate solution π_ii;
  calculate the two objective values of π_ii; }
for each weight λ^ii, find its NB closest neighbor weights and put them into set B(ii);
sort all the solutions with the non-dominated sorting algorithm in NSGA-II [16];
find all the non-dominated solutions and put them into set EA;
for (ii = 1; ii <= N; ii++)
    allocate a weight to each student π_ii with Algorithm 2.
Step 2: // evolutionary phase
TIME = system_time; // record the current time
while (system_time - TIME < ω)
{ for (ii = 1; ii <= N; ii++)
  { // teaching
    select a teacher from set EA with Algorithm 4 for student π_ii;
    execute the teaching phase with Algorithm 3;
    // self-learning
    execute Algorithm 5 on π_ii;
  }
  // for each teacher, perform self-learning
  for (t = 1; t <= |EA|; t++)
      execute self-learning with Algorithm 5 on π_t;
  // merge all the students and teachers, set EA = ∅, then find all the
  // non-dominated solutions and put them into EA.
}
Step 3: // output
output the solutions of EA.
Since HMOTLBO is a swarm-intelligence evolutionary algorithm, its computational complexity (CC) is mainly determined by the problem size, the population size, the number of evolution iterations, etc. For the HFS scheduling problem, the problem size is mainly determined by the number of jobs, the number of stages, and the number of machines at each stage.
Experiment Results
A. Environment and Test Instance
In this section, we carry out computational experiments to validate the performance of the proposed algorithm HMOTLBO. The algorithm is implemented in C++ on an Intel Core i7-8550U 1.88 GHz PC with 16 GB memory. All cases are named j*c*a*, where * represents an integer. For example, case j15c5a1 indicates that there are 15 jobs to be processed in a workshop with 5 processing stages, and the final 1 means it is the first case of this class. All cases are randomly generated: the processing time of each job is between 10 and 30, the number of machines at each stage is between 2 and 3, the processing energy consumption per unit time of a machine is between 5 and 10, the standby energy consumption per unit time of a machine is between 1 and 5, the energy consumption of a single start-up of a machine is between 20 and 30, and the number of jobs is 15, 20, or 30.
We set up four types of experiments. The first verifies the effectiveness of the decoding rules, from which a better decoding rule is chosen for HMOTLBO. The second verifies the effectiveness of the weight matching strategy and the teacher selection strategy. The third verifies the effectiveness of the self-learning operator based on VNS. The fourth verifies the effectiveness of the proposed algorithm by comparing it with two other well-performing algorithms.
Through orthogonal experiment, the parameters for the proposed HMOTLBO algorithm are set as follows:
Population size is 60.
The maximum running time ω (stop condition) equals s × n × α seconds, where s is the number of processing stages, n is the number of jobs, and α is a parameter whose value is set to 1, 2, or 3 for 15, 20, and 30 jobs respectively (e.g., 5 × 15 × 1 = 75 seconds for a 15-job, 5-stage case).
The maximum number of VNS iterations of the self-learning operator for each neighborhood is 5.
The size of the external Pareto archive set EA is 4 times the population size.
To check the effectiveness of HMOTLBO, two well-performing multi-objective algorithms for the HFS scheduling problem, EA-MOA [43] and NIW-PSO [44], are selected for comparison. All three algorithms adopt the same population size, the same greedy shift strategy, and the same stopping condition; the other parameters are set as recommended in their original papers.
B. Evaluation Metrics
To evaluate the performance of the multi-objective algorithms in solving HFS, the following two metrics are used:
Inverted generational distance (IGD): the average distance from the true Pareto frontier solutions to the obtained Pareto frontier solutions. It reflects both the distribution and the convergence of the solutions; the smaller the IGD is, the better the solution set is. Eq. (22) is the formula for calculating IGD. Here, A* is the true Pareto frontier solution set; in this study, the true Pareto frontier solutions are obtained by running all the compared algorithms and merging all the non-dominated solutions they find. A is the set of Pareto frontier solutions obtained by an algorithm, and d(v, A) is the minimum Euclidean distance from solution v to the solutions of A.
\begin{equation*} IGD\left({A,A^{\ast}}\right)=\frac{\sum\nolimits_{v\in A^{\ast}}d\left({v,A}\right)}{\vert A^{\ast}\vert}\tag{22}\end{equation*}
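For reference, a direct sketch of the IGD computation of Eq. (22) is given below, assuming the objective vectors have already been normalized; the containers are illustrative.

#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

// IGD: average, over the reference set Astar, of the Euclidean distance to
// the closest solution in the obtained set A (Eq. 22).
double igd(const std::vector<std::array<double, 2>>& A,
           const std::vector<std::array<double, 2>>& Astar) {
    double sum = 0.0;
    for (const auto& v : Astar) {
        double best = std::numeric_limits<double>::max();
        for (const auto& a : A) {
            double d = std::hypot(v[0] - a[0], v[1] - a[1]);
            best = std::min(best, d);
        }
        sum += best;
    }
    return sum / Astar.size();
}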
The rate of the Pareto optimal solutions (V_r): V_r is used to compute the ratio of non-dominated solutions in set A. Obviously, a larger value of V_r represents a set A with a higher probability for an obtained solution to be non-dominated. If V_r is close to one, almost all of the solutions in A are in A*; whereas, if V_r is close to zero, almost all of the obtained solutions are dominated by solutions in A*. V_r is calculated by Eq. (23).
\begin{equation*} V_{r}=\frac{\vert A\vert}{\vert A^{\ast}\vert}\tag{23}\end{equation*}
C. Effectiveness of the Decoding Rule
To verify which of the five decoding rules has a positive impact on the optimization results, five variants of HMOTLBO are compared. Except for the different decoding rules, the five algorithms are identical. According to the decoding method adopted, the five algorithms are named HMOTLBO-SI, HMOTLBO-SII, HMOTLBO-SIII, HMOTLBO-SIV, and HMOTLBO-SV respectively. Figure 6 shows the scatter plot of the Pareto frontier solutions of each algorithm on 15 cases.
In Figure 6, it is clear that the solutions obtained by HMOTLBO-SI are far away from the Pareto frontier; in none of these 15 cases does it converge to the Pareto frontier. The reason is that decoding rule SI needs to use a vector of length
D. Effectiveness of Matching Strategy and Teacher Selection
To check the effectiveness of the proposed weight matching strategy and teacher selection strategy, two algorithms are compared: HMOTLBO with the random strategy, named HMOTLBO-I, and HMOTLBO with the proposed strategy, named HMOTLBO-II. Each algorithm is run 10 times; the average IGD and the average V_r are reported in Table 1.
In Table 1, in the comparison of IGD, HMOTLBO-II is better on 12 of the 15 test cases, which is significantly better than HMOTLBO-I. Hence, the Pareto frontier solutions obtained by HMOTLBO-II have better distribution and convergence. In the comparison of the
E. Effectiveness of the Self-Learning Operator Based on VNS
To check the effectiveness of the proposed self-learning operator based on VNS, in this section we compare four HMOTLBO variants: the algorithm with a self-learning operator based on insert search, named HMOTLBO-I; based on reverse search, named HMOTLBO-II; based on swap search, named HMOTLBO-III; and the algorithm based on VNS, named HMOTLBO. Each algorithm is run 10 times; the average IGD and the average V_r are reported in Table 2.
In Table 2, the IGD results show that HMOTLBO obtained 11 best values on the 15 cases. The average IGD of HMOTLBO over the 15 instances is approximately 0.045, which is smaller than the average IGD obtained by the other three algorithms; this illustrates that the proposed VNS has stronger local exploration ability than the reverse, insert, and swap searches used alone. In the comparison of the
F. Comparisons With Other Algorithms
After the previous experiments, the matching strategy, decoding method SV, and the self-learning operator based on VNS are assembled in HMOTLBO. In the following comparison experiments, two well-performing multi-objective algorithms, EA-MOA [43] and NIW-PSO [44], are selected for comparison. The parameters of each algorithm are described in subsection A of this section. Here, IGD is used to measure the algorithms. Each algorithm is run 10 times independently on each case and the ideal Pareto frontier solution set
The following conclusions can be drawn from Figure 7. In the comparison of the median IGD values, HMOTLBO is better on 11 of the 15 cases, which is significantly better than the other compared algorithms; EA-MOA is best on 4 of the 15 cases and NIW-PSO on none. Hence, HMOTLBO has the best convergence ability and distribution of solutions, EA-MOA is second, and NIW-PSO is third. In the comparison of box heights, HMOTLBO obtains the shorter box on 8 of the 15 test instances, EA-MOA on 7 of the 15, and NIW-PSO on none. This shows that the stability of HMOTLBO is the best, EA-MOA is second, and NIW-PSO is third.
To compare the computation time and the convergence of HMOTLBO, EA-MOA, and NIW-PSO, the number of iterations of each algorithm is set to 500 and the other parameters are unchanged. Figure 8 shows the average computation time histogram of the three algorithms on the three case scales. Figure 9 shows the convergence curves of the three algorithms on cases j15c5a5, j20c5a5, and j30c5a4 respectively.
From Figure 8, we can conclude that under the same number of iterations, NIW-PSO has the shortest computation time on the three case scales, followed by HMOTLBO, while EA-MOA has the longest computation time. Moreover, as the scale of the cases increases, the computation time of all three algorithms increases. From Figure 9, we can conclude that under the same number of iterations, HMOTLBO converges to the Pareto front faster and better, followed by EA-MOA; NIW-PSO has the slowest convergence speed and the worst convergence quality.
For test case j15c5a1, Figure 10 reports the Gantt charts of two obtained solutions: chart (a) has the minimum makespan and chart (b) has the minimum energy consumption. This shows the effectiveness of the proposed HMOTLBO algorithm.
Conclusion
In this study, the scheduling problem of HFS with unrelated machines is investigated, considering the makespan and energy consumption objectives. To solve this bi-objective scheduling problem effectively, a mixed integer linear programming (MILP) model is formulated and an efficient multi-objective optimization algorithm based on decomposition is proposed.
In the design of the proposed algorithm HMOTLBO, we devised a new encoding method to represent a solution, which covers most of the solution space and supports the five different decoding methods discussed in this paper. An external archive (EA) set is used to record the Pareto frontier, and all teachers come from this set, in view of the fact that good teachers can teach excellent students. To enhance the exploration and exploitation ability of the algorithm, during each iteration, besides the normal teaching process, each student and teacher performs a self-learning operator based on variable neighborhood search (VNS). To improve the efficiency of the algorithm, the weight of each student is assigned by a weight matching algorithm rather than randomly during the initialization phase. Based on the experimental comparisons, we can conclude that the proposed algorithm exhibits high capability in handling the multi-objective HFS scheduling problem.
Since HFS has a strong industrial background, we believe that scheduling HFS with energy considerations will remain an important research topic for a long time, and it is a very meaningful tool for achieving the sustainable manufacturing goals of energy-intensive companies. Our future work will extend the proposed HFS scheduling problem to more complex environments, such as dynamic or uncertain environments.