A Bee Colony Optimization Approach for Mixed Blocking Constraints Flow Shop Scheduling Problems

The flow shop scheduling problem with mixed blocking constraints and makespan minimization is investigated. Taguchi orthogonal arrays and path relinking, together with several efficient local search methods, are used to develop a metaheuristic algorithm based on bee colony optimization. To assess the performance of the proposed algorithm, two well-known sets of test problems are considered. Computational results show that the presented algorithm is competitive with well-known algorithms from the literature, especially for large-sized problems.


Introduction
The flow shop scheduling problem has been extensively studied for over 50 years because of its significance in both theory and industrial applications [1]. As an important branch of flow shop scheduling, the blocking flow shop has attracted much attention in recent years. In the blocking flow shop, because of the lack of intermediate buffer storage between consecutive machines, a job has to stay on the current machine until the next machine is available, even if its processing on the current machine is complete [2]. This increases waiting time and thus reduces production efficiency. In the classical flow shop problem, the buffer capacity between machines is considered unlimited. Some flow shop problems involve blocking constraints such as RSb, RCb, and RCb* (these blocking constraints are described in Section 2). For problems with the classical blocking constraint (RSb), Wang et al. [3, 4] developed a hybrid genetic algorithm for flow shop scheduling with limited buffers and a hybrid harmony search algorithm, and Ribas et al. [5] proposed an iterated greedy algorithm. The RCb constraint was introduced for the first time by Dauzere-Peres [6]. Regarding the RCb constraint, an integer linear programming model, lower bounds, and a metaheuristic are presented in [7]. These problems have also been solved in [8] by a metaheuristic algorithm. A new blocking constraint called RCb* has been proposed by Trabelsi et al. [9]; it is an RCb constraint simultaneously subjected to different types of blocking constraints on successive machines in a process. In [10], the authors proposed some heuristics for the flow shop problem with RSb, RCb, and RCb* constraints and solved it with a genetic algorithm. In [11], the complexity of a production system in which the RSb and RCb constraints are mixed is studied.
In this paper, we propose a constructive heuristic for minimizing the makespan in flow shop scheduling problems with mixed blocking constraints. The proposed method is based on the ideas of bee colony optimization and is developed using Taguchi orthogonal arrays and the path relinking method. Moreover, some heuristic local search methods and perturbation procedures are designed to improve the efficiency of the resulting algorithm. To compare the presented algorithm with efficient algorithms from the literature, two sets of well-known test problems are used. The numerical results show that the presented algorithm is competitive with well-known algorithms from the literature.
This paper is structured as follows: the mixed blocking flow shop problem is described in Section 2. Section 3 describes the presented algorithm. The numerical results are given in Section 4. Finally, the paper is concluded in Section 5.

Problem Description
To describe the problem, we follow the same approach as [10]. Flow shop scheduling problems are a class of scheduling problems with a workshop or group shop in which flow control enables an appropriate sequencing of each job for processing on a set of machines M_1, M_2, ..., M_m, in compliance with given processing orders. In particular, it is desired to maintain a continuous flow of processing tasks with a minimum of idle time and a minimum of waiting time. Flow shop scheduling is a special case of job shop scheduling in which there is a strict order of all operations to be performed on all jobs. Flow shop scheduling applies to production facilities as well as to computing designs. The deterministic flow shop scheduling problem consists of a finite set J of n jobs to be processed on a finite set M of m machines. Each job J_j must be processed on every machine in its routing, which consists of m operations O_(j,1), O_(j,2), ..., O_(j,m). Operation O_(j,i) needs an execution time p_(j,i) on machine M_i, and the operations are performed in order. For 1 ≤ i ≤ m, only one job can be executed on machine M_i at any time.
Here, preemptive operations are not authorized. The objective is to minimize the time at which all operations are completed, that is, the makespan. Several different cases of flow shop can be considered, such as the classical flow shop problem without any blocking constraint, the flow shop problem with only one blocking constraint between all machines, and flow shop problems in which different blocking constraints are mixed. One constraint met in industrial problems is that a job blocks a machine until this job starts on the next machine in its routing. This classical blocking constraint is denoted by RSb (Release when Starting blocking). Another constraint is that a machine becomes available to process its next operation as soon as its current job finishes on the following machine in the process, regardless of whether or not that job has left that machine. This blocking constraint is denoted by RCb* (Release when Completing blocking*). In a large production line, different types of blocking constraints can be encountered, depending on the intermediate storage between machines, the characteristics of the machines, and technical constraints. To characterize a flow shop problem in which different blocking constraints are mixed, a vector A is introduced that contains the blocking constraints between machines. Element A_i is the blocking constraint between machines M_i and M_(i+1). This vector has m − 1 elements (as many elements as the number of transitions between machines). See [10] for details and a mathematical programming formulation of the problem. For the flow shop scheduling problem, we use a permutation π of jobs as the solution representation. For example, suppose there are six jobs and four machines. The permutation π = (2, 3, 1, 6, 5, 4) represents a schedule in which the sequence of jobs on each machine is J_2, J_3, J_1, J_6, J_5, J_4.
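To make the blocking semantics concrete, the makespan of a permutation under the classical RSb constraint (no buffer between any pair of machines) can be computed with a simple departure-time recurrence. The sketch below is a minimal illustration for the RSb-only case; the mixed-constraint formulation of [10] generalizes this by varying the release rule per machine transition. Positions and machine indices are 0-based here, unlike the 1-based notation in the text.

```python
def makespan_rsb(p, seq):
    """Makespan of a permutation flow shop under the classical RSb blocking
    constraint (no intermediate buffers between any pair of machines).

    p[j][i] is the processing time of job j on machine i; seq is a job
    permutation. d[i] holds the departure time of the previously scheduled
    job from machine i (d[0] is its start time on the first machine).
    """
    m = len(p[0])
    d = [0.0] * (m + 1)
    for j in seq:
        nd = [0.0] * (m + 1)
        nd[0] = d[1]                      # job j enters machine 1 when it is released
        for i in range(1, m):
            # job j leaves machine i when it has finished AND machine i+1 is free
            nd[i] = max(nd[i - 1] + p[j][i - 1], d[i + 1])
        nd[m] = nd[m - 1] + p[j][m - 1]   # no blocking after the last machine
        d = nd
    return d[m]
```

For instance, with processing times p = [[1, 5], [3, 1], [2, 2]] and sequence (J_1, J_2, J_3), the third job cannot start on the first machine until time 6, because the second job is blocked there waiting for the second machine; the resulting makespan is 10, one unit more than the unbuffered lower bound.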

Bee Colony Algorithm
To describe the bee colony algorithm, we use the same approach as [12]. The bee colony algorithm is a stochastic metaheuristic that belongs to the class of swarm intelligence algorithms and is among the most widely studied and applied to real-world problems. The basic artificial bee colony algorithm [12] classifies bees into three categories: employed bees, onlooker bees, and scout bees, where the onlookers and the scouts are also called unemployed bees. Each employed bee is associated with a food source, which it exploits and advertises to the other bees. Onlooker bees wait for the waggle dances performed by the employed bees to establish food sources, and scout bees carry out a random search in the environment surrounding the nest for new food sources. In the BC algorithm, each solution to the problem under consideration, represented by an n-dimensional real-valued vector, is called a food source, and the nectar amount of the food source is evaluated by the fitness value. The following steps show the template of the bee algorithm inspired by the food foraging behavior of a bee colony: (1) Initialization: compute an initial population.
(2) Employed bee stage: find food sources by exploring the environment randomly.
(3) Onlooker bee stage: select the food sources after sharing the information with employed bees.
(4) Scout bee stage: select one of the solutions; then, replace it with a new randomly generated solution.
(5) Remember the best food source found so far.
(6) If a termination criterion has not been satisfied, go to step (2); otherwise, stop the procedure and report the best food source found so far.
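The six-step template above can be sketched as a generic minimization loop. This is only a skeleton under assumed interfaces (the callables `init_pop`, `neighbor`, and `fitness` are hypothetical names, not from the paper); the problem-specific phase bodies are replaced by simple placeholders, and the real procedures are described in the following subsections.

```python
import random

def bee_colony(init_pop, neighbor, fitness, iters=200, seed=0):
    """Minimal sketch of the bee-algorithm template (minimization).

    init_pop: list of initial solutions; neighbor(s, rng): a random
    neighbor of s; fitness(s): objective value to minimize.
    """
    rng = random.Random(seed)
    pop = list(init_pop)
    best = min(pop, key=fitness)
    for _ in range(iters):
        # (2) employed bee stage: explore around every current solution
        for i, s in enumerate(pop):
            cand = neighbor(s, rng)
            if fitness(cand) < fitness(s):
                pop[i] = cand
        # (3) onlooker bee stage: bias further search toward good solutions
        probe = min(rng.sample(pop, 2), key=fitness)
        cand = neighbor(probe, rng)
        if fitness(cand) < fitness(probe):
            pop[pop.index(probe)] = cand
        # (4) scout bee stage: replace one solution with a fresh point
        # (here a neighbor of the incumbent stands in for a random solution)
        j = rng.randrange(len(pop))
        pop[j] = neighbor(best, rng)
        # (5) remember the best food source found so far
        best = min(pop + [best], key=fitness)
    return best

# toy demonstration (hypothetical objective): minimize distance to (3, 3, 3)
def fit(s):
    return sum((v - 3) ** 2 for v in s)

def nb(s, rng):
    s = list(s)
    k = rng.randrange(len(s))
    s[k] += rng.choice((-1, 1))
    return tuple(s)

best = bee_colony([(0, 0, 0), (6, 6, 6)], nb, fit, iters=300, seed=1)
```

Since the incumbent is only ever replaced by a strictly better solution, the returned value can never be worse than the best initial member.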
Because the basic BC algorithm was originally designed for continuous function optimization, a new variant of the BC algorithm is presented in this section to make it applicable to mixed blocking flow shop problems with the makespan criterion.

Initialization.
In the BC algorithm, the initial population is often generated randomly. To guarantee an initial population of a certain quality and diversity, we generate one solution using the NEH heuristic of Nawaz et al. [13]. Then, to maintain the diversity of the initial population, the other solutions are generated randomly over the entire search space.
In Section 4, we show the impact of combining the NEH solution with random solutions instead of using only random solutions. The framework of the NEH heuristic is as follows: (1) Compute the total processing time on all machines for each job. Then, obtain a sequence σ = (σ_1, ..., σ_n) by sorting the jobs in nonincreasing order of total processing time.
(2) Take the first two jobs of σ, evaluate the two possible subsequences of these two jobs, and select the better one as the current sequence S.
(3) Set h = h + 1. Take the hth job of σ and insert it into each of the h possible positions of the current sequence S to obtain h subsequences. Select the subsequence with the minimum makespan as the current sequence S for the next iteration. Repeat this step until all jobs are sequenced and the final sequence is found.
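The three NEH steps above can be sketched as follows. For the sketch to be self-contained, a standard no-blocking permutation flow shop makespan is used as the evaluation function; in the paper, sequences would instead be evaluated with the mixed-blocking objective.

```python
def makespan(p, seq):
    """Classical permutation flow shop makespan (no blocking) -- used here
    only to make the NEH sketch self-contained."""
    m = len(p[0])
    c = [0.0] * m          # completion time of the previous job on each machine
    for j in seq:
        prev = 0.0
        for i in range(m):
            prev = max(prev, c[i]) + p[j][i]
            c[i] = prev
    return c[m - 1]

def neh(p):
    """NEH heuristic of Nawaz et al.: order jobs by nonincreasing total
    processing time, then insert each job at its best position."""
    n = len(p)
    order = sorted(range(n), key=lambda j: -sum(p[j]))     # step (1)
    seq = [order[0]]
    for job in order[1:]:                                  # steps (2)-(3)
        trials = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        seq = min(trials, key=lambda s: makespan(p, s))
    return seq, makespan(p, seq)
```

Note that the special handling of the first two jobs in step (2) is subsumed by the insertion loop: inserting the second job at both positions of a one-job sequence evaluates both orderings.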

Employed Bee Phase.
According to the basic BC algorithm, the employed bees find new solutions in the neighborhood of their current positions. Let P = {x_1, ..., x_N} be the population. For each member x_i, we first apply a perturbation method to x_i and generate a new solution x_i'. Then, the path relinking procedure is executed on x_i and x_i' and produces a set of candidate solutions. Finally, a local search (described later) is applied to the best candidate solution with a small probability p_LS, and the current solution is replaced by the best candidate solution. The pseudocode of the employed bee phase is given as follows: (1) Let P = {x_1, ..., x_N} be the current population and set i = 1.
(2) Apply the perturbation method to x_i to obtain a new solution x_i'.
(3) Let C = {c_1, ..., c_k} be the set of candidate solutions obtained by executing the path relinking method on x_i and x_i'. (4) Use the local search method to improve the best candidate solution with probability p_LS.
(5) Replace x_i with the best candidate solution. (6) Set i = i + 1; if i ≤ N, go to step (2).
To complete the discussion, we need to describe the perturbation, path relinking, and local search procedures. The perturbation procedure is based on the insertion operator. In the insertion operator Ins(a, b), positions a and b are selected, the job at position a is inserted into position b, and all jobs between positions a and b are shifted accordingly [14]. In the perturbation method, a random position a is first selected. Then, for all b ≠ a, the operator Ins(a, b) is applied to the current solution, and the best obtained solution is chosen as the perturbed solution. Path relinking is applied in order to explore the search space between x_i and x_i'. Path relinking is based on the interchange operator. Consider two distinct positions a and b. The operator Exch(a, b) exchanges the job at position a and the job at position b. In what follows, the path relinking procedure is explained through an example. Let π = (2, 1, 4, 5, 3) and π' = (3, 4, 2, 1, 5). First, job 2, which is in position 1 of π, is chosen. Since job 2 is in position 3 of π', we apply the operator Exch(1, 3) to π and generate π_1 = (4, 1, 2, 5, 3). In the next step, job 4, which is in position 1 of π_1, is selected. Since job 4 is in position 2 of π', we apply Exch(1, 2) to π_1 to obtain π_2 = (1, 4, 2, 5, 3). Similarly, the next generated solution is π_3 = (5, 4, 2, 1, 3), obtained by executing Exch(1, 4) on π_2. The set of candidate solutions obtained by the path relinking method is {π_1, π_2, π_3}. As mentioned earlier, the local search method is applied to the best candidate solution with probability p_LS. The local search method is also based on the exchange operator. In the local search method, a position a is first selected at random. Then, for all b ≠ a, the operator Exch(a, b) is applied to the current solution, and the best obtained solution is retained. The local search method is repeated until no improvement is observed for a certain number of iterations (MaxCounter).
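The insertion operator and the exchange-based path relinking walk above can be sketched as follows (0-based positions, unlike the 1-based notation in the text). The path relinking loop repeatedly takes a misplaced job and swaps it into the position it occupies in the guiding solution, which is exactly the sequence of Exch moves in the worked example; each swap places at least one job at its final position, so the walk terminates.

```python
def ins(perm, a, b):
    """Ins(a, b): move the job at position a to position b, shifting
    the jobs in between (0-based positions)."""
    p = list(perm)
    p.insert(b, p.pop(a))
    return tuple(p)

def path_relinking(start, guide):
    """Walk from `start` toward `guide` with exchange moves; the
    intermediate permutations (excluding `guide` itself) are returned
    as the candidate set."""
    cur, goal = list(start), list(guide)
    candidates, k = [], 0
    while cur != goal:
        if cur[k] == goal[k]:
            k += 1                      # position k already matches the guide
            continue
        j = goal.index(cur[k])          # target position of the job at position k
        cur[k], cur[j] = cur[j], cur[k]
        if cur != goal:
            candidates.append(tuple(cur))
    return candidates
```

Running it on the example from the text reproduces the three candidates π_1, π_2, π_3.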

Onlooker Bee Phase.
In the onlooker bee phase, we try to improve the quality of the members of the population. For this purpose, we repeat the following procedure a certain number of times (MaxIter). First, a member of the population is chosen at random. Then, the perturbation method (described in Section 3) is applied to the selected member. Finally, an improvement method (described later) is applied to the perturbed solution, and if the resulting solution is better than the selected member, the selected member is replaced by the resulting solution. The pseudocode of the onlooker bee phase is given as follows: (1) Set t = 1.
(2) Let x be a random member of the population.
(3) Let x' be the solution obtained by applying the perturbation method to x.
(4) Apply the improvement method to x' to generate a new solution x''.
(5) If x'' is better than x, then set x = x''.
Two local search methods, LS1 and LS2, are used in the improvement method. LS1 is exactly the local search method described in Section 3 and is based on the exchange operator. LS2 is obtained by using the insertion operator instead of the exchange operator in LS1. In LS2, a job is removed from the permutation and inserted into the other positions; the permutation with the best of these insertions is retained for the next iteration. In the improvement method, LS1 is first applied to the given solution x, and a new solution x' is generated. If x' is better than x, we set x = x'. Then, LS2 is applied to x, and another solution x'' is produced. Similarly, if x'' is better than x, we set x = x''. If the current solution is not improved by LS1 and LS2, the perturbation method (described in Section 3) is applied to x. This process is repeated until no improvement is observed for a certain number (MaxCnt) of iterations. The pseudocode of this improvement method is given as follows: (1) Set t = 1 and let x be the current solution.
(2) Let x' be the solution obtained after an application of LS1 to x. If x' is better than x, then x = x'.
(3) Let x'' be the solution obtained after an application of LS2 to x. If x'' is better than x, then x = x''.
(4) If x is not improved after the application of LS1 and LS2, then apply the perturbation method to x and replace x with the perturbed solution.
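The improvement loop above can be sketched as follows. The sketch takes an arbitrary `fitness` callable (a hypothetical stand-in for the mixed-blocking makespan) and tracks the best solution seen, so the return value is never worse than the input even though the perturbation step may temporarily worsen the current solution; position indices are 0-based.

```python
import random

def exch(perm, a, b):
    p = list(perm)
    p[a], p[b] = p[b], p[a]
    return tuple(p)

def ins(perm, a, b):
    p = list(perm)
    p.insert(b, p.pop(a))
    return tuple(p)

def local_search(perm, fitness, make_move, rng, max_counter=5):
    """LS1/LS2 skeleton: pick a random position a, try the move against
    every other position b, keep the best result; stop after max_counter
    non-improving rounds."""
    n, fails = len(perm), 0
    while fails < max_counter:
        a = rng.randrange(n)
        cand = min((make_move(perm, a, b) for b in range(n) if b != a),
                   key=fitness)
        if fitness(cand) < fitness(perm):
            perm, fails = cand, 0
        else:
            fails += 1
    return perm

def improve(perm, fitness, rng, max_cnt=5):
    """Alternate LS1 (exchange) and LS2 (insertion); perturb when neither
    helps; stop after max_cnt stagnant rounds and return the best seen."""
    best, fails = perm, 0
    while fails < max_cnt:
        improved = False
        for move in (exch, ins):                     # LS1 then LS2
            cand = local_search(perm, fitness, move, rng)
            if fitness(cand) < fitness(perm):
                perm, improved = cand, True
        if fitness(perm) < fitness(best):
            best = perm
        if improved:
            fails = 0
        else:                                        # perturb on stagnation
            a = rng.randrange(len(perm))
            perm = min((ins(perm, a, b) for b in range(len(perm)) if b != a),
                       key=fitness)
            fails += 1
    return best

# toy demonstration on a permutation of 0..4 with an artificial objective
rng = random.Random(3)
start = (0, 1, 2, 3, 4)
weight = lambda p: sum(i * v for i, v in enumerate(p))
result = improve(start, weight, rng)
```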

Scout Bee Phase.
In this phase, the algorithm tries to generate new solutions by executing the following procedure a certain number of times (MaxLoop). First, two distinct solutions x_1 and x_2 are randomly selected from the population.
Then, the better one is determined. Without loss of generality, let x_1 be the better one. In the next step, a combination method is applied to the best solution found so far and x_1 to generate a new solution. Finally, x_2 is replaced by the newly generated solution. The pseudocode of the scout bee phase is given as follows: (1) Set t = 1.
(2) Let x_1 and x_2 be two members of the population chosen at random. (3) Determine the better of the two; without loss of generality, let it be x_1. (4) Let x* be the solution obtained after an application of the combination method to x_1 and x_best (x_best is the best solution found so far).
(5) Replace x_2 with x*.
In the following, we describe the combination method. The ideas of combination methods are usually taken from the crossover operators of evolutionary algorithms. In this paper, however, the combination method is based on Taguchi orthogonal arrays, so we briefly describe them first, following the same approach as [15]. Taguchi orthogonal arrays are concerned with L factors, where each factor has Q levels. The purpose is to determine the best setting of each factor's level. Clearly, examining all Q^L possible combinations often requires a great deal of computational effort and is inefficient. Therefore, a small but representative sample of the whole set of combinations is considered. Let M be the number of elements of this sample. The sample can be represented by an array with M rows and L columns, denoted by OA(M, L, Q, t). All members of the sample must satisfy three conditions: every level must occur the same number of times in every column; for every t factors in any t columns, every combination of levels must occur the same number of times; and the members of the sample must be uniformly distributed over the whole space of possible combinations. In this paper, the parameter t is always equal to 2 and can be omitted from the notation; in other words, the simpler notation L_M(Q^L) can be used instead of OA(M, L, Q, t). To use L_M(Q^L) in the combination method, we need to explain how the best level of each factor is determined. Let S_(j,q) = Σ_{i=1}^{M} η_(i,j,q) F_i, where F_i is the objective function value of the ith member of the sample, and η_(i,j,q) = 1 if the level of factor j in the ith row of L_M(Q^L) is q and η_(i,j,q) = 0 otherwise. The qth level of factor j is the best level of this factor if q = arg min_{1≤q'≤Q} S_(j,q').
In the following, an example is presented to explain how Taguchi orthogonal arrays are used to combine members of the population. Consider the orthogonal array L_4(2^3); note that Q = 2, L = 3, and M = 4. In the combination method, 5 solutions are generated and the best one is returned as the result. Of these 5 solutions, 4 are generated using the rows of L_4(2^3) and one is generated using the best level of each factor. First, we explain how solutions are generated from the rows of L_4(2^3). Assume that we want to combine two solutions π_1 = (1, 4, 6, 2, 5, 0, 7, 3) and π_2 = (4, 6, 3, 2, 1, 7, 0, 5) using the third row of L_4(2^3). In the first step, two cut points, 2 and 5, are randomly chosen, and π_1 and π_2 are each cut into three pieces (shown in grey and white in Figure 1). Since the first component of the third row of L_4(2^3) is 1, the first piece of the combined solution is taken from the first piece of π_2; therefore, the first piece of the combined solution is (4, 6). The second piece of the combined solution is (6, 2, 5), taken from π_1, because the second component of the third row of L_4(2^3) is 0. But 6 already appears in the first piece of the combined solution; therefore, only 2 and 5 appear in the combined solution, and the entry corresponding to 6 is left blank. Similarly, since the third component of the third row of L_4(2^3) is 1, the third piece of the combined solution is taken from π_2, and the entry corresponding to the repeated number 5 is left blank. The outcome of this procedure is (4, 6, ?, 2, 5, 7, 0, ?). Note that the missing components of the combined solution are 1 and 3.
Now, we consider the first parent π_1. The first blank position is filled with job 1, because among the missing jobs, 1 appears first in π_1. Finally, the second blank position receives job 3, and the resulting solution is (4, 6, 1, 2, 5, 7, 0, 3). This procedure is depicted in Figure 1. Next, we explain how the best level of each factor is used to construct a solution. As noted before, to compute the best level of each factor, we have to compute S_(j,q) for 1 ≤ j ≤ L (L = 3) and each of the Q = 2 levels. Consider Figure 2. S_(1,0) is computed using the first column of L_4(2^3). Since level 0 appears in the first and second rows of the first column of L_4(2^3), we have S_(1,0) = 14 × 1 + 18 × 1 + 20 × 0 + 10 × 0 = 32. Similarly, S_(1,1) is also computed using the first column of L_4(2^3). Since level 1 appears in the third and fourth rows of the first column, we have S_(1,1) = 14 × 0 + 18 × 0 + 20 × 1 + 10 × 1 = 30. We have arg min_{0≤q≤1} S_(1,q) = 1; therefore, the best level of the first factor is 1. Following the same procedure, the best levels of the second and third factors are 1 and 0, respectively. Now, since the best level of the first factor is 1, the first piece of the new solution π_5 is taken from π_2. Similarly, the second and third pieces of π_5 are taken from π_2 and π_1, respectively. Finally, among the five generated solutions, the best one is the fifth, and it is returned as the output of the combination method.
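The best-level computation in this example can be checked directly. The sketch below uses the standard two-level L_4(2^3) array, whose third row (1, 0, 1) matches the row used in the worked example; the objective values 14, 18, 20, 10 are those of the four row-generated solutions shown in Figure 2.

```python
# The standard two-level L4(2^3) orthogonal array.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def best_levels(oa, F, q_levels=2):
    """For each factor (column) j, compute S[j][q] = sum of F_i over the
    rows whose level in column j equals q, then pick the level that
    minimizes it (makespan is minimized)."""
    n_factors = len(oa[0])
    S = [[sum(Fi for row, Fi in zip(oa, F) if row[j] == q)
          for q in range(q_levels)]
         for j in range(n_factors)]
    return [min(range(q_levels), key=lambda q: S[j][q])
            for j in range(n_factors)], S

levels, S = best_levels(L4, [14, 18, 20, 10])
# S[0] == [32, 30], so the best level of the first factor is 1;
# the best levels of the three factors are [1, 1, 0].
```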

In the general case, suppose that we want to use the orthogonal array L_M(Q^L) to combine Q members y_1, y_2, ..., y_Q of the population and generate new solutions. First, L − 1 cut points are randomly chosen, and y_1, y_2, ..., y_Q are each cut into L pieces. Then, for 1 ≤ i ≤ M, the ith row of L_M(Q^L) is considered and a new solution z_i is generated as follows: if the level of the jth factor in the ith row of the orthogonal array is q (1 ≤ q ≤ Q), then the jth piece of z_i is taken from the jth piece of y_q; the missing components of z_i are inserted as described before. In the next step, the fitness value F_i of each z_i is computed. Then, the best level of each factor is determined, as described before, and an additional solution z_{M+1} is generated as follows: if the best level of the jth factor is q (1 ≤ q ≤ Q), then the jth piece of z_{M+1} is taken from the jth piece of y_q, and the missing components of z_{M+1} are inserted as described before. Finally, from z_1, z_2, ..., z_{M+1}, the best one is chosen as the output of the combination method. The pseudocode of the combination method is given as follows: (1) Randomly select Q members y_1, y_2, ..., y_Q of the population. (2) Randomly choose L − 1 cut points and cut each y_i into L pieces.
(3) Use the ith row of L_M(Q^L) to generate a new member z_i, for 1 ≤ i ≤ M.
(4) Compute the fitness value F_i of each new member z_i, for 1 ≤ i ≤ M.
(5) Determine the best level of each factor. (6) Use the best levels of all factors to generate a new member z_{M+1} and compute its fitness value F_{M+1}.
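The piece-assembly step used by both the row-generated and the best-level solutions can be sketched as follows, assuming two parents and 0-based cut points; the duplicate-repair rule (blanks filled with the missing jobs in the order they appear in the first parent) is the one described in the worked example.

```python
def combine_row(parents, levels, cuts):
    """Build one combined permutation: piece k is copied from
    parents[levels[k]]; a job already used leaves a blank, and blanks are
    then filled with the missing jobs in first-parent order."""
    bounds = [0] + list(cuts) + [len(parents[0])]
    out, seen = [], set()
    for k, lev in enumerate(levels):
        for job in parents[lev][bounds[k]:bounds[k + 1]]:
            if job in seen:
                out.append(None)          # duplicate: leave the slot blank
            else:
                out.append(job)
                seen.add(job)
    fill = iter(j for j in parents[0] if j not in seen)
    return tuple(next(fill) if slot is None else slot for slot in out)

# Reproducing the worked example: third row (1, 0, 1) of L4(2^3),
# cut points after positions 2 and 5.
p1 = (1, 4, 6, 2, 5, 0, 7, 3)
p2 = (4, 6, 3, 2, 1, 7, 0, 5)
child = combine_row((p1, p2), (1, 0, 1), (2, 5))
# child == (4, 6, 1, 2, 5, 7, 0, 3), as in the example
```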

Numerical Results
In this section, we examine the practical efficiency of the BC algorithm. The implementation was carried out in the C programming language on a system with a dual-core 3.6 GHz CPU and 2 GB of memory. All parameters in this study were determined experimentally. The parameters of the BC algorithm were set as follows: P_size = 10, p_LS = 0.05. Two sets of test problems were used to examine the efficiency of the presented algorithm. The first set is presented in Trabelsi et al. [10], and the second is given by Taillard [16]. The numerical results for the test problems of Trabelsi are recorded in Tables 2, 3, and 4, and the numerical results for the test problems of Taillard are given in Tables 5(a), 5(b), and 5(c). Table 1 presents the properties of the 5680 benchmark problems of Trabelsi et al. [10]. The first column and first row of this table give the number of jobs and machines, respectively. The number in the ith row and jth column denotes the number of test problems with m_j machines and n_i jobs, where n_i denotes the ith number of the first column and m_j the jth number of the first row. For example, the number of test problems with n_1 = 5 jobs and m_5 = 15 machines is 100.
In Table 2, the computing time of the BC is compared with that of the genetic algorithm (GA) of Trabelsi et al. [10], which is an efficient algorithm from the literature (perhaps the best one). In this table, the names of the algorithms are given in the first column, and the numbers of jobs and machines are given in the second column and first row, respectively. For each problem instance, BC was run 25 times, and the average of the computing times is recorded in the column AvgT. The numerical results in this table show that for large instances the computing time of BC is less than that of GA. From Figure 3, we can see that, as the number of jobs increases, the gap between the computing times of the presented algorithm and GA widens, which shows that our algorithm is significantly faster than GA. In Table 3, the quality of the solutions obtained by BC is compared with that of GA. Here, the error percentages are obtained by the formula %err = (C_max − Opt)/Opt × 100 and are recorded in the column %err. It can be observed that the overall mean error yielded by BC is 0.0328, which is smaller than the 0.0458 produced by GA.
As the problem size increases, the superiority of BC over GA grows. For the largest problem analyzed (12 × 100), the average error of GA was nearly 0.189, while it was 0.14 for the presented BC. Based on Figure 4, we can observe that our algorithm reaches better solutions on large problems, which indicates the superiority of the presented algorithm over GA for the mixed blocking flow shop. Therefore, we conclude that BC is competitive with GA with respect to both computing time and solution quality.
In Table 4, the success rates of the presented algorithm in finding the optimal solutions of the test problems of Trabelsi are given. In this table, n is the number of jobs, m is the number of machines, NoI is the number of problem instances (given in Table 1), and NS is the number of instances for which BC computes the optimal solution. The numerical results of Table 4 verify that, out of 64 cases, in 33 cases BC computes the optimal solution of all problem instances. In the remaining 31 cases, BC could not find the optimal solution of every test problem. However, of these 31 cases, in 10 cases BC finds the optimal solution of at least 90 percent of the test problems, and in 8 cases BC finds the optimal solution of 80 to 90 percent of the test problems. Only in 8 cases is the optimal solution of 70 to 80 percent of the test problems computed. Indeed, of the 5680 test problems, the optimal solution of only 346 problems could not be found by BC. These results show the efficiency of the presented algorithm in finding the optimal solutions of the test problems.
For the first time, the BC was tested on the well-known test bed of Taillard [16]. Because the largest number of jobs in Trabelsi's benchmark problems is 12, in order to demonstrate the efficiency of our algorithm against competing approaches, we conducted a further experiment testing the performance of BC on the 110 problems presented by Taillard, which are larger than Trabelsi's benchmark. In Table 5, the first column gives the problem name. The second column records the size of the test problem; the number of jobs is denoted by n and the number of machines by m. Moreover, the columns Min, Max, and SD report the objective value of the best solution, the objective value of the worst solution, and the standard deviation of the solutions found by the presented algorithm. All results were obtained over 10 runs for each instance and are given in Table 5. The experimental results reveal that BC has the capability to solve large-scale problems effectively and with reliable performance.
In addition, in order to demonstrate the benefit of using the NEH heuristic for one of the initial solutions instead of using only random initial solutions, we compare the results of the algorithm in these two cases in Table 6. In that table, BC-NEH denotes the BC algorithm when NEH was used to generate one of the initial solutions, while BC-Random denotes the BC algorithm with only random solutions in the initialization phase. The comparison indicates that BC-NEH is 0.281 percent better than BC-Random, which confirms the benefit of applying NEH, especially on large instances. It is also worth pointing out that, because P_size = 10, to maintain the diversity of solutions and reduce the computing time, we use just one heuristic solution instead of applying several different heuristic solutions [17] in the initialization step.

Conclusion
In this research, the mixed blocking flow shop scheduling problem with makespan minimization was discussed. This problem is known to be strongly NP-hard. An efficient bee colony algorithm was proposed to solve it. The novelty of the presented method lies mainly in the way the ideas of Taguchi orthogonal arrays, path relinking, and several local search methods are applied within the bee colony algorithm to obtain an efficient algorithm for flow shop scheduling problems with mixed blocking constraints. Comparison of the proposed algorithm with an efficient algorithm from the literature demonstrates the efficiency of the proposed BC algorithm. For the first time, the presented algorithm was also tested on the well-known test bed of Taillard.

Figure 3: Comparison of BC and GA with respect to computing time.

Table 1: Characteristics of the set of test problems presented by Trabelsi.

Table 2: Comparison of BC and GA with respect to computing time.

Table 5: The results of the BC on Taillard's benchmark problems.

Table 6: Comparison of BC-NEH and BC-Random.