A Discrete Group Search Optimizer for Hybrid Flowshop Scheduling Problem with Random Breakdown

Scheduling problems have been discussed extensively in the literature under the assumption that the machines are permanently available without any breakdown. However, in real manufacturing environments, machines inevitably become unavailable for many reasons. In this paper, the authors introduce the hybrid flowshop scheduling problem with random breakdown (RBHFS) together with a discrete group search optimizer algorithm (DGSO). In particular, two different working cases, the preempt-resume case and the preempt-repeat case, are considered under random breakdown. The proposed DGSO algorithm adopts the vector representation and several discrete operators, such as insert, swap, differential evolution, destruction, and construction, in the producer, scrounger, and ranger phases. In addition, an orthogonal test is applied to configure the adjustable parameters of the DGSO algorithm. The computational results in both cases indicate that the proposed algorithm significantly improves performance compared with other high-performing algorithms in the literature.


Introduction
The hybrid flowshop scheduling (HFS) problem, also referred to as the multiprocessor or flexible flowshop, is a production scheduling problem that has been widely used in process industries such as the paper, oil, petrochemical, and pharmaceutical industries [1]. In the HFS problem, it is assumed that the jobs have to pass through all stages in the same order and that at least one stage has multiple machines. Each job is processed on only one machine at each stage. The HFS problem minimizing the makespan, or maximum completion time, is denoted as HFc//C_max in the three-field notation of Graham et al. [2], and it has been proved to be strongly NP-hard when the number of machines is no less than two [3]. Research efforts on the HFS problem generally presume a static environment with no unexpected events [4]. However, real natural environments are stochastic and dynamic [5,6], and real manufacturing processes tend to suffer a wide range of uncertainties, such as machine breakdown, job cancellation, due-date changes, shortage of materials, and changes in job priority.
In this paper, the authors concentrate on the hybrid flowshop scheduling problem with random breakdown (RBHFS), which is referred to as HFc/brkdwn/C_max. In the RBHFS problem, a machine breakdown may happen at any time, and the job being processed on the machine must be stopped until the machine has been repaired. On the basis of the characteristics of the jobs and the machine breakdowns, the RBHFS problem falls into two categories: the preempt-resume case and the preempt-repeat case. In the preempt-resume case, the completed part of the job is not lost and the processing time of the job accumulates after the machine breakdown is repaired. In the preempt-repeat case, the completed part is lost, so the operation of the job must be started from the beginning; this case commonly arises in preprocessing phases, for example, heating and cooling processes.
Compared with the large body of work on the HFS problem, studies on the RBHFS problem are still in their infancy. Alcaide et al. [7] considered the stochastic flowshop scheduling problem subject to random breakdowns, where the objective is to minimize the expected makespan, and proposed a dynamic procedure that treats a stochastic problem with random breakdowns as a sequence of stochastic problems without breakdowns. In [8] the dynamic scheduling problems in random flexible manufacturing systems subject to machine breakdowns are addressed, and an adaptive scheduling approach is proposed to make coupled decisions about part/machine scheduling and operation/tool assignments on a rolling horizon basis. Allaoui and Artiba [9] studied the hybrid flowshop scheduling problem to minimize flow time and due-date-based criteria under maintenance constraints; in their model, setup, cleaning, and transportation times are also considered. Tang et al. [10] studied the single-machine stochastic just-in-time scheduling problem subject to machine breakdowns for both the preempt-resume and the preempt-repeat cases, where the objective function is the sum of squared deviations of the expected job completion times from the due date. Gholami et al. [11] examined a heuristic to find schedules minimizing the expected makespan in the hybrid flowshop problem with sequence-dependent setups and machines with random breakdowns. This method employed the random keys genetic algorithm (RKGA) to find optimal solutions and used a simulator to evaluate the objective function under this condition. The analysis results of the Taguchi parameter design showed that the number of jobs, the number of stages, and the population size play important roles in the algorithm. The same problem was studied later by Zandieh and Gholami [12]. Based on a clonal selection principle and an affinity maturation mechanism of the immune response, they proposed an immune algorithm (IA) and applied the Taguchi parameter design method to analyze it. Safari and Sadjadi [13] explored a flowshop configuration under the assumption of condition-based maintenance to minimize the expected makespan and proposed a condition-based maintenance (CBM) strategy and a hybrid algorithm based on a genetic algorithm and simulated annealing; their simulation results showed its superiority. Wang and Choi [14] proposed a decomposition-based approach that integrates the completely reactive approach with the predictive-reactive approach to minimize the makespan of the flexible flowshop scheduling problem under machine breakdown. Mirabi et al. [15] considered a two-stage hybrid flowshop scheduling problem where the machines may not always be available during the scheduling period; the objective is to find the optimal job combinations and job schedules such that the makespan is minimized.
Recently, a population-based optimization algorithm, the group search optimizer (GSO), has been proposed by He et al. [16], inspired by animal social foraging behavior and group living theory. Recent studies indicate that the GSO algorithm outperforms several other evolutionary algorithms on large-scale multimodal benchmark functions [17,18]. Meanwhile, the GSO algorithm has been successfully applied in a variety of fields, such as artificial neural network training [19,20], power systems [21,22], and mechanical design [23].
Considering the successes of the GSO algorithm, the authors propose a discrete group search optimizer (DGSO) algorithm for this production scheduling problem. The proposed DGSO algorithm maintains the optimization mechanism of the basic GSO algorithm but abandons the angle evolution strategy to improve the effectiveness and efficiency of the algorithm. In the DGSO algorithm, an encoding scheme based on the vector representation is introduced in order to adapt the GSO algorithm to discrete problems. Meanwhile, an improved variable neighborhood search, a novel differential evolution operation, and the destruction and construction procedures are proposed in the producer, scrounger, and ranger phases, respectively. In addition, in order to achieve a good performance of the DGSO algorithm, an orthogonal experiment design is carried out to obtain a guideline for tuning the parameters of the algorithm. In both the preempt-resume case and the preempt-repeat case, the proposed DGSO algorithm shows state-of-the-art results on benchmarks.
The rest of the paper is organized as follows. In Section 2, the RBHFS problem statement and the approach to dealing with random breakdown are formulated. Section 3 presents the details of the proposed DGSO algorithm. The tests on parameter selection and the simulation results are provided in Section 4. Finally, the authors draw conclusions in Section 5.

Description of the Problem.
The RBHFS problem with the makespan criterion can be described as follows. There are n jobs J = {1, 2, ..., j, ..., n − 1, n} that have to be performed over s stages S = {1, 2, ..., i, ..., s − 1, s}, and each stage i has m_i identical machines. At least one stage must have more than one machine. Each job consists of s operations, which must be processed sequentially from stage 1 to stage s, by exactly one machine at every stage. At any time, a machine can process at most one job and a job can be processed by at most one machine. The processing time required for job j at stage i is given as p_ij. The machines are not available at all times; they suffer random breakdowns. Once a machine begins to work, it continues to run until a breakdown occurs or all jobs are finished. The setup and release times of all jobs are negligible. Preemption by the scheduler is not allowed, and infinite intermediate buffers exist between successive stages. The scheduling problem is to choose a machine at each stage for each job and determine the sequence of jobs on each machine so as to minimize the makespan [14].

An Approach to Dealing with Random Breakdown.
It is assumed that a machine keeps processing jobs sequentially until it breaks down or has finished all of its jobs, and machine breakdowns may arise at any time during working periods. The possible positions where a breakdown may happen are shown in Figure 1, where A denotes a breakdown during a machine's processing interval, and B, C, and D denote breakdowns during a machine's idle intervals.
This paper considers two different cases for dealing with an interrupted job: Case 1 is the preempt-resume case and Case 2 is the preempt-repeat case. In Case 1, after the repair, only the unprocessed part of the interrupted job needs to be processed. In Case 2, after the repair, the whole interrupted job needs to be reprocessed. It is assumed that S_jk and C_jk are the starting time and the completion time of job j on machine k (k ∈ {1, 2, ..., Σ_{i=1}^{s} m_i}), respectively, and R_k denotes the repairing time of machine k. The completion time C_jk can then be calculated case by case.
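The paper's exact equation for C_jk is not reproduced here, but the two completion-time rules can be sketched in code. The function name, the argument layout, and the assumption that a machine cannot break down again while it is under repair are illustrative choices, not taken from the paper:

```python
def completion_time(start, proc, breakdowns, case):
    """Completion time of an operation starting at `start` with processing
    time `proc` on a machine with the given breakdowns.

    breakdowns: list of (b, r) pairs -- breakdown time point b and repair
    duration r (the machine is assumed not to break down while under repair).
    case 1 = preempt-resume, case 2 = preempt-repeat.
    """
    finish = start + proc
    for b, r in sorted(breakdowns):
        if b < start or b >= finish:
            continue                 # breakdown misses the processing window
        if case == 1:                # preempt-resume: the finished part is kept,
            finish += r              # so only the repair time is lost
        else:                        # preempt-repeat: the finished part is lost,
            start = b + r            # so the operation restarts after the repair
            finish = start + proc
    return finish
```

For example, an operation of length 10 starting at time 0 with a breakdown at time 5 needing 3 time units of repair completes at 13 in Case 1 but only at 18 in Case 2.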

The Discrete Group Search Optimizer (DGSO) for the RBHFS Problem
The basic GSO algorithm adopts the framework of the biological Producer-Scrounger (PS) model [24], which uses two foraging strategies within groups: producing and scrounging. In order to avoid being trapped in local minima, GSO also employs a ranger foraging strategy that performs random walks [17]. The population of GSO is called a group and each individual in the group is called a member. In the GSO scheme, a group contains three types of members: producers, scroungers, and rangers [18]. Each individual in the group has its own position, search angle, and search direction. At each iteration, the producers perform the producing strategy to search for optimal positions; the scroungers perform the scrounging strategy to join the resources found by the producers; the remaining members are the rangers, which walk randomly in the search space to find new positions. In the GSO algorithm, a position of an individual corresponds to a solution of the optimization problem, and the fitness of the position corresponds to the fitness of the solution. The basic GSO algorithm is shown in pseudocode as Algorithm 1.
Following this procedure for continuous function optimization, the authors propose a discrete version of the GSO algorithm for the RBHFS problem, discussed in detail below.

Individual Representation and Initialization.
Owing to its continuous nature, the GSO algorithm does not directly fit the discrete flowshop scheduling problem, so it is important to find a suitable mapping that can conveniently convert individuals to solutions. In the HFS problem, there are two formats to represent a solution: the matrix representation and the vector representation [25]. In this paper, the RBHFS problem is formulated with the vector representation, which considers the sequence of jobs only at stage one; the sequences of jobs at the other stages are decoded by employing the List Scheduling (LS) algorithm [26,27]. As a result, the vector representation is also adopted in the proposed DGSO algorithm. In this case, an individual in the DGSO algorithm can be represented by a permutation of the jobs at stage one, π = {π(1), π(2), ..., π(n)}. The population size of the proposed DGSO algorithm is determined by the parameter PS, and the initial population is generated randomly in the search space.
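As a rough illustration of the vector representation, the following sketch decodes a stage-one permutation with a simple List Scheduling rule: at every stage, jobs are released in order of their completion time at the previous stage and each is assigned to the machine that frees up first. The function name and data layout are assumptions, and breakdowns are ignored in this sketch:

```python
def decode_list_scheduling(perm, proc, machines):
    """Decode a stage-one job permutation into a makespan via List Scheduling.

    perm:     permutation of the jobs 0..n-1 (the individual)
    proc:     proc[i][j] = processing time of job j at stage i
    machines: machines[i] = number of identical machines at stage i
    """
    stages = len(machines)
    ready = {j: 0.0 for j in perm}              # completion at previous stage
    for i in range(stages):
        free = [0.0] * machines[i]              # next free time per machine
        # release jobs in order of completion at the previous stage
        order = sorted(perm, key=lambda j: ready[j])
        for j in order:
            k = min(range(machines[i]), key=lambda m: free[m])
            start = max(free[k], ready[j])      # wait for machine and for job
            free[k] = ready[j] = start + proc[i][j]
    return max(ready.values())                  # makespan
```

For instance, two jobs with processing times 3 and 4 at a single one-machine stage yield a makespan of 7, which drops to 4 when the stage has two machines.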

Producer.
Recently, Couzin et al. [28] indicated that the larger the group, the smaller the proportion of informed individuals needed to guide the group, and that only a very small proportion of informed individuals is required to achieve great accuracy. As a result, for accuracy and simplicity, there is only one producer in the DGSO algorithm: the best individual is the producer, and the remaining members of the population are scroungers or rangers. Furthermore, considering the efficiency of the algorithm, the producing strategy is executed on the producer only if it has changed after the evolutionary procedure.
Aiming at improving the efficiency of the proposed DGSO algorithm, the authors introduce an improved variable neighborhood search (IVNS) [29] as the producing strategy. The IVNS contains two neighborhood structures: the insert local search and the swap local search. The pseudocodes of the insert local search and the swap local search are given in Algorithm 2. The IVNS keeps applying the insert local search followed by the swap local search until there is no improvement. If the newly generated individual is better than the current producer, the new individual becomes the producer; otherwise the old producer is retained.
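A minimal sketch of the insert and swap local searches alternated in an IVNS-style loop might look as follows. The move-sampling details (one random move per trial, n trials per pass) are assumptions, since Algorithm 2 is not reproduced here; `makespan` is any function mapping a permutation to its cost:

```python
import random

def insert_move(perm, rng):
    """Remove a random job and re-insert it at a random position."""
    p = perm[:]
    job = p.pop(rng.randrange(len(p)))
    p.insert(rng.randrange(len(p) + 1), job)
    return p

def swap_move(perm, rng):
    """Swap the jobs at two random positions."""
    p = perm[:]
    i, j = rng.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]
    return p

def ivns(perm, makespan, seed=None):
    """Alternate the insert and swap local searches until neither improves."""
    rng = random.Random(seed)
    best, best_val = perm[:], makespan(perm)
    improved = True
    while improved:
        improved = False
        for move in (insert_move, swap_move):
            for _ in range(len(best)):          # one pass per neighborhood
                cand = move(best, rng)
                val = makespan(cand)
                if val < best_val:              # keep only improving moves
                    best, best_val, improved = cand, val, True
    return best, best_val
```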

Scrounger.
In the population, the remaining individuals other than the producer are divided into scroungers and rangers. Each individual is set to be a scrounger or a ranger with probability P and (1 − P), respectively.
As for each scrounger, a novel discrete differential evolution scheme is employed for improving the scrounging performance, which consists of three steps: mutation, crossover, and selection.
In the mutation part, the bestrandinsert operation [30] is introduced. The procedure of bestrandinsert is as follows. It first randomly chooses pr1 jobs without repetition; then it inserts each chosen job into the other n − 1 positions, and the pr2 better permutations with relatively smaller makespans are remembered; finally, one of the pr2 permutations is randomly chosen to replace the incumbent scrounger. For each scrounger, the producer undergoes the bestrandinsert operation with a mutation rate MR to obtain the mutant individual; otherwise, with probability (1 − MR), the mutant individual is the same as the producer. That is to say, the mutant individual is obtained as

V_i^t = bestrandinsert(producer), if rand(0, 1) < MR; producer, otherwise,

where V_i^t denotes the mutant individual at generation t and rand(0, 1) is a random function returning a number between 0 and 1 with uniform distribution.
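The bestrandinsert procedure described above can be sketched as follows. The function signature is an illustrative assumption; for simplicity, this sketch tries all insertion positions, including the original one, rather than exactly the other n − 1 positions:

```python
import random

def bestrandinsert(perm, makespan, pr1, pr2, seed=None):
    """Pick pr1 distinct jobs, try every re-insertion position for each,
    keep the pr2 best permutations found, and return one of them at random."""
    rng = random.Random(seed)
    candidates = []
    for job in rng.sample(perm, pr1):           # pr1 jobs without repetition
        base = [j for j in perm if j != job]
        for pos in range(len(base) + 1):        # every insertion position
            cand = base[:pos] + [job] + base[pos:]
            candidates.append((makespan(cand), cand))
    candidates.sort(key=lambda t: t[0])         # smallest makespan first
    return rng.choice(candidates[:pr2])[1]      # one of the pr2 best
```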
Next, a crossover operation called CRO is employed in the crossover part; it remains effective even when the individuals in the population are very close to each other in the later stages of evolution. The crossed individual is U_i^t = CRO(scrounger, V_i^t). The authors first randomly select a segment of the job permutation of the scrounger, insert it at the front or the back of the mutant individual V_i^t, and then delete the jobs in V_i^t that already appear in the selected segment. In this way, two valid crossed individuals are obtained: the front one and the back one.
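A sketch of this CRO crossover is given below; it returns both valid offspring at once. The function name and the uniform choice of the segment bounds are assumptions:

```python
import random

def cro(scrounger, mutant, rng=None):
    """CRO crossover: copy a random segment from the scrounger, delete its
    jobs from the mutant, and attach the segment to the front and to the
    back of what remains, giving two valid offspring."""
    rng = rng or random.Random()
    n = len(scrounger)
    a = rng.randrange(n)                        # random segment bounds
    b = rng.randrange(a + 1, n + 1)
    segment = scrounger[a:b]
    rest = [j for j in mutant if j not in segment]
    return segment + rest, rest + segment       # front offspring, back offspring
```

Because the segment's jobs are removed from the mutant before the segment is attached, both offspring are guaranteed to be valid permutations.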
Following the crossover operation, the selection is conducted: the individual with the lowest objective value among the two crossed individuals and the incumbent scrounger is accepted. In other words, if either of the two crossed individuals yields a better makespan than the scrounger, the better individual becomes the scrounger; otherwise the old scrounger is retained.

Ranger.
In the basic GSO algorithm, the rangers search randomly in the predefined space to increase population diversity and avoid getting trapped in local optima. Here the rangers employ the destruction and construction procedures of the iterated greedy (IG) algorithm with one parameter, the destruction size d [31]. Regardless of whether the newly generated individual is better than the original ranger, it replaces the ranger, to enhance the global search ability. Throughout the course of evolution, whenever a new individual generated by the scroungers or rangers is superior to the producer, the producer is updated.
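The IG-style ranging move can be sketched as below; the function name and signature are illustrative. Destruction removes d random jobs, and construction greedily re-inserts each at its best position:

```python
import random

def destruction_construction(perm, makespan, d, seed=None):
    """Remove d random jobs (destruction), then greedily re-insert each at
    its best position (construction). The result replaces the ranger even
    when it is worse, to keep the search diversified."""
    rng = random.Random(seed)
    partial = perm[:]
    removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
    for job in removed:
        best_val, best_perm = None, None
        for pos in range(len(partial) + 1):     # try every insertion position
            cand = partial[:pos] + [job] + partial[pos:]
            val = makespan(cand)
            if best_val is None or val < best_val:
                best_val, best_perm = val, cand
        partial = best_perm                     # keep the greedy-best insertion
    return partial
```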
Computational Procedure.
Based on the above operations, the procedure of the DGSO algorithm for the RBHFS problem is summarized as follows.
Step 1. Set the algorithm parameters PS, P, MR, pr1, pr2, and d, and initialize the population.
Step 2. The producer conducts an improved variable neighborhood search to search for a better solution if the producer is changed.
Step 3. The scroungers employ the discrete differential evolution operation to keep searching for the high-quality solutions, which includes mutation, crossover, and selection.
Step 4. The rangers produce new solutions by using the destruction and construction procedures to avoid local optimum.
Step 5. Evaluate each member in the population and utilize the best individual to update the producer.
Step 6. If the given termination criterion is satisfied, end the procedure and return the producer; otherwise go back to Step 2.
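The six steps above can be sketched as a compact loop. To keep the sketch self-contained, the paper's full operators (IVNS, bestrandinsert/CRO, and destruction-construction) are replaced by a simple random-insert stand-in; every name and default value here is illustrative:

```python
import random

def dgso(makespan, n, ps=10, p_scrounger=0.6, mr=0.8, d=2, iters=200, seed=0):
    """Skeleton of the DGSO loop (Steps 1-6) with simplified operators."""
    rng = random.Random(seed)

    def rand_insert(perm):                      # stand-in for the real operators
        q = perm[:]
        job = q.pop(rng.randrange(len(q)))
        q.insert(rng.randrange(len(q) + 1), job)
        return q

    group = [rng.sample(range(n), n) for _ in range(ps)]       # Step 1
    producer = min(group, key=makespan)
    for _ in range(iters):
        cand = rand_insert(producer)                            # Step 2: producing
        if makespan(cand) < makespan(producer):
            producer = cand
        for i in range(len(group)):
            if rng.random() < p_scrounger:                      # Step 3: scrounging
                cand = rand_insert(producer) if rng.random() < mr else producer[:]
                if makespan(cand) < makespan(group[i]):
                    group[i] = cand
            else:                                               # Step 4: ranging
                cand = group[i][:]
                for _ in range(d):
                    cand = rand_insert(cand)
                group[i] = cand                                 # accepted regardless
        for member in group:                                    # Step 5: update producer
            if makespan(member) < makespan(producer):
                producer = member[:]
    return producer, makespan(producer)                         # Step 6
```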
Considering the character of the RBHFS problem, several discrete operators are introduced for the producers, scroungers, and rangers. The DGSO algorithm uses the producer and the scroungers for exploitation and employs the rangers for exploration. Since both exploitation and exploration are improved and well balanced, the algorithm is expected to generate good results for the RBHFS problem under the criterion of makespan minimization. In the next section, the performance of the DGSO algorithm is investigated through simulation results and comparisons.

Experimental Setup.
To fully examine the performance of the DGSO algorithm, a parameter discussion, a preliminary experiment, and an extensive experimental comparison with other powerful methods are provided. As no test instances are available for the RBHFS problem, some benchmark problems for the HFS problem are modified: the ten benchmark problems proposed by Liao et al. [32] have been selected. In each instance of Liao's benchmark problems, there are 30 jobs and 5 stages. At each stage, the machine number is uniformly distributed in the range [3,5]. The processing times in these problems lie within [1,100]. It is assumed that the breakdown time points and repairing times obey normal distributions and that there are five breakdown time points. The mean values of the five breakdown time points are 200, 400, . . ., 1000, respectively, and the variance of every breakdown point is 10. The mean repairing time of each machine is 20, with a variance of 5. All the algorithms were coded in Visual C++ and run on an Intel Pentium 3.06 GHz PC with 2 GB RAM under the Windows 7 operating system. The maximum computation time is fixed at 200 seconds for all the parameter tests, the preliminary experiment, and the compared algorithms in the following subsections.
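The breakdown scenario described above can be sampled as follows. The function name is an assumption, and repair times are clamped at zero since a normal sample can be negative (the paper does not say how this is handled):

```python
import random

def sample_breakdowns(rng=None):
    """Sample one breakdown scenario for a machine: five breakdown time
    points with means 200, 400, ..., 1000 and variance 10, each paired with
    a repair time of mean 20 and variance 5 (variance -> std dev via sqrt)."""
    rng = rng or random.Random()
    points = [rng.gauss(mu, 10 ** 0.5) for mu in range(200, 1001, 200)]
    repairs = [max(0.0, rng.gauss(20, 5 ** 0.5)) for _ in points]
    return sorted(zip(points, repairs))
```

A fresh scenario would typically be drawn per machine and per simulation replication when evaluating a schedule.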

Parameter Discussion.
Tuning parameters properly is critical for an evolutionary algorithm to achieve good performance [33,34]. In the proposed DGSO algorithm, there are six main parameters: PS, P, MR, pr1, pr2, and d.
All six parameters are regarded as factors at five different levels, as illustrated in Table 1. A full factorial experiment design would require 5^6 = 15,625 experiments, which is neither necessary nor economical. Therefore, an orthogonal experiment design [35] is applied to provide a recipe for tuning the adjustable parameters of the DGSO algorithm, which needs only 25 experiments.
Parameter experiments are conducted on the ten Liao benchmark problems. The breakdowns in the parameter discussion follow the preempt-resume case (Case 1) for simplicity. Taking the test on instance j30c5e1 as an example, Table 2 shows the orthogonal parameter table L25(5^6), including 25 groups of parameter test samples. Each group of parameters is trialed ten times. The unabridged result tables similar to Table 2 for the other nine instances are omitted here. After detailed analyses of the orthogonal design, the authors set the parameters PS = 10, P = 0.6, MR = 0.8, pr1 = 4, pr2 = 2, and d = 2 in the following experiments.

Preliminary Experiment.
Compared with the group search optimizer (GSO) algorithm, there are three new elements in the proposed DGSO algorithm: the improved variable neighborhood search in the producer phase, the differential evolution in the scrounger phase, and the destruction and construction procedures in the ranger phase. The following abbreviations denote the variants considered: GSO-IVNS (GSO with the improved variable neighborhood search) and GSO-IVNS-DE (GSO with the improved variable neighborhood search and the differential evolution). To verify the effect of each element of the algorithm in solving the RBHFS problem, the authors test four algorithms, GSO, GSO-IVNS, GSO-IVNS-DE, and DGSO, in the preempt-repeat case (Case 2). Each method was run twenty times on each Liao benchmark problem, and its performance, including the average and minimum values, is recorded in Table 3.
In Table 3, the overall mean values of AVE and MIN yielded by the DGSO algorithm are 710.9 and 701, respectively, and the DGSO algorithm is the best of the four variants on all the problems under the same computational time. This shows that the improved variable neighborhood search, the differential evolution, and the destruction and construction procedures are the key to balancing exploration and exploitation and thereby improving the performance of the GSO algorithm.

Comparisons among PSO, RKGA, IA, and DGSO in Different Breakdown Cases.
Several metaheuristics have been applied to the RBHFS problem. To evaluate the performance of the proposed DGSO algorithm under the makespan criterion, it is compared with the PSO algorithm proposed by Liao et al. [32], the RKGA by Gholami et al. [11], and the IA by Zandieh and Gholami [12]. In order to establish more accurate and objective comparisons, the parameters of these algorithms are set following the corresponding literature. All the algorithms were run for twenty independent replications on each problem.
The comparison results on Liao's benchmark problems for Case 1 and Case 2 are summarized in Tables 4 and 5. In these two tables, AVE, MIN, STD, and T indicate the average, minimum, standard deviation, and convergence time, respectively, and the smallest values of AVE, MIN, STD, and T in each row are shown in bold. In the preempt-resume case, the overall mean values of AVE, MIN, and STD yielded by the DGSO algorithm are 652.1, 651, and 1.1, respectively, which are much better than those generated by PSO, RKGA, and IA. The DGSO algorithm converges to better solutions than the three compared algorithms in about the same amount of time, and the same holds in the preempt-repeat case. From these observations, it is concluded that the DGSO algorithm is more effective and efficient than the other methods for the RBHFS problem in both cases.
To confirm whether the observed differences are statistically significant, the authors carry out an analysis of variance (ANOVA). ARE, the average relative error with respect to the best solution found by any of the compared algorithms, is analyzed by a multiple-comparison method using the least significant difference (LSD) procedure; the smaller the ARE value, the better the result. The means plots of ARE for all the compared algorithms with LSD intervals at a 95% confidence level in Case 1 and Case 2 are shown in Figures 2 and 3. Note that if the LSD intervals of two means do not overlap, the means are significantly different. In Figures 2 and 3, the four algorithms fall into two homogeneous groups, (PSO, RKGA, IA) and (DGSO), with no statistically significant differences within each group. From these two figures, it is clear that the proposed DGSO algorithm is statistically better than PSO, RKGA, and IA. Considering the stability and robustness [36,37] of the algorithm, these figures also demonstrate the superiority of the DGSO algorithm under the random breakdown cases.
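The ARE metric used above is straightforward to compute; this helper (name and argument order are illustrative) averages the relative gap to the best-known value over a set of instances:

```python
def average_relative_error(results, best):
    """ARE of one algorithm: mean over instances of (value - best) / best,
    where `best` holds the best solution found by any compared algorithm."""
    return sum((v - b) / b for v, b in zip(results, best)) / len(results)
```

For example, results of 110 and 220 against best-known values of 100 and 200 give an ARE of 0.1, i.e. the algorithm is 10% above the best solutions on average.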

Comparisons of the RBHFS Problem in Different Breakdown Cases.
Figure 4 shows the computational results of the DGSO algorithm for the different breakdown cases of Liao's benchmark problems. The problem without breakdown takes less time than Case 1, and Case 1 takes less time than Case 2, which is consistent with the theoretical expectation: a job without interruption needs less processing time than a job interrupted by a breakdown, and an interrupted job needs less processing time than one that must be reprocessed from the beginning. In conclusion, a machine breakdown lengthens the processing of the interrupted jobs and changes the scheduling solution, so different breakdown cases lead to different scheduling schemes.

Conclusions
This paper models the hybrid flowshop scheduling problem with random breakdown by analyzing the random breakdown time points and offering an approach to dealing with the breakdown. Then a discrete group search optimizer algorithm is proposed to minimize the makespan of the RBHFS problem. Several efficient operators are introduced for the producers, scroungers, and rangers in the DGSO algorithm. In addition, an orthogonal test is applied to configure the adjustable parameters of the algorithm.
Procedure GSO
  Initialize the population
  While (termination criterion not met)
    Choose the producers and perform producing
    Choose the scroungers and perform scrounging
    Disperse the remaining individuals to perform ranging
    Evaluate the individuals
  End while
End procedure
Algorithm 1: The procedure of the basic GSO algorithm.

Figure 2: Means plot of ARE with 95% LSD intervals for different algorithms in Case 1.

Figure 3: Means plot of ARE with 95% LSD intervals for different algorithms in Case 2.

Table 1: Factors and levels for the orthogonal experiment.

Table 3: Performance of the DGSO algorithm variants.

Table 4: Comparison results on Liao's benchmark problems of Case 1.

Table 5: Comparison results on Liao's benchmark problems of Case 2.