An Order Effect of Neighborhood Structures in a Variable Neighborhood Search Algorithm for Minimizing the Makespan in Identical Parallel Machine Scheduling

A variable neighborhood search (VNS) algorithm is proposed for scheduling identical parallel machines. The objective is to study the effect of adding a new neighborhood structure and of changing the order of the neighborhood structures on minimizing the makespan. To enhance the quality of the final solution, a machine-based encoding method and five neighborhood structures are used in the VNS. Two initial-solution methods, used in two versions of the improved VNS (IVNS), are employed: a longest processing time (LPT) initial solution, denoted HIVNS, and a random initial solution, denoted RIVNS. The proposed versions are compared with the LPT, simulated annealing (SA), genetic algorithm (GA), modified variable neighborhood search (MVNS), and improved variable neighborhood search (IVNS) algorithms from the literature. Computational results show that changing the order of the neighborhood structures and adding a new neighborhood structure can yield a better solution in terms of average makespan.


Introduction
Identical parallel machine scheduling (IPMS) with the objective of minimizing the makespan is a combinatorial optimization problem. It was shown to be NP-hard by Garey and Johnson [1], so no polynomial-time algorithm is known for it. Exact algorithms such as branch and bound [2] and cutting plane algorithms [3] solve this type of IPMS problem and find optimal solutions for small instances. As the problem size increases, however, the exact algorithms become inefficient and take too much time to reach a solution.
Mathematical Problems in Engineering

This disadvantage creates a need for heuristics and metaheuristics that give optimal or near-optimal solutions within a reasonable amount of time. The Longest Processing Time rule (LPT), proposed by Mokotoff [4], is the first heuristic applied to IPMS; it has a tight worst-case performance bound of 4/3 - 1/(3m), where m is the number of parallel machines. LPT distributes jobs to machines in order of decreasing processing time: the remaining jobs go one by one to the least-loaded machine until all jobs are assigned. The LPT heuristic performs well for the makespan criterion, but the solution obtained is often a local optimum. Later, Coffman et al. [5] proposed the MULTIFIT algorithm, which is based on techniques from bin-packing. Blackstone Jr. and Phillips [6] proposed a simple heuristic for improving an LPT sequence by exchanging jobs between processors to reduce the makespan. Lee and Massey [7] combined the two heuristics, LPT and MULTIFIT, into a new one that uses LPT as an initial solution for MULTIFIT. The performance of the combined heuristic is better than LPT, and its error bound is no worse than MULTIFIT's. Yue [8] proved the bound for MULTIFIT to be 13/11. Lee and Massey [9] extended the MULTIFIT algorithm and showed that the error bound of the extended algorithm is only 1/10. Garey and Johnson [1] proposed a 3-phase composite heuristic consisting of a constructive phase and two improvement phases, with no preliminary sort of processing times; they showed that their heuristic is quicker than LPT. Ho and Wong [10] introduced Two-Machine Optimal Scheduling, which uses lexicographic search. Their method performs better than the LPT, MULTIFIT, and MULTIFIT extension algorithms and takes less CPU time than the MULTIFIT and MULTIFIT extension algorithms.
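The LPT rule described above can be sketched in a few lines. This is an illustrative implementation (not the authors' code), keeping machine loads in a min-heap so the least-loaded machine is always on top:

```python
import heapq

def lpt_schedule(processing_times, m):
    """Assign jobs to m identical machines by the LPT rule:
    sort jobs by processing time (longest first), then repeatedly
    give the next job to the currently least-loaded machine."""
    # Min-heap of (load, machine index): the least-loaded machine pops first.
    loads = [(0, i) for i in range(m)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(m)]
    for job, p in sorted(enumerate(processing_times), key=lambda jp: -jp[1]):
        load, i = heapq.heappop(loads)
        assignment[i].append(job)
        heapq.heappush(loads, (load + p, i))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

For the jobs (7, 6, 5, 4, 3, 2) on two machines, the rule produces a makespan of 14, which here happens to be optimal since the total load is 27.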
Riera et al. [11] proposed two approximate algorithms that use LPT as an initial solution and compared them with dynamic programming and MULTIFIT algorithms. Algorithm 1 uses exchanges between two jobs to improve the makespan. Algorithm 2 schedules a job such that the completion time and processing time of the selected job are near the bound. Their second algorithm was compared with the MULTIFIT algorithm; the results were similar, but their algorithm reduces CPU time with respect to the MULTIFIT heuristic. Cheng and Gen [12] applied a memetic algorithm to minimize the maximum weighted absolute lateness on PMS and showed that it outperforms a genetic algorithm and the conventional heuristics. Ghomi and Ghazvini [13] proposed a pairwise interchange algorithm that gives near-optimal solutions in a short period of time. Min and Cheng [14] proposed a genetic algorithm (GA) using a machine-based code and showed that the GA outperforms LPT and SA and is suitable for large-scale IPMS problems. Gupta and Ruiz-Torres [15] proposed the LISTFIT heuristic, based on bin-packing and list scheduling. LISTFIT generates an optimal or near-optimal solution and outperforms the LPT, MULTIFIT, and COMBINE heuristics. Costa et al. [16] proposed an algorithm inspired by the immune systems of vertebrate animals. Lee et al.
[17] proposed a simulated annealing (SA) approach for makespan minimization on IPMS that uses LPT as an initial solution. Computational results showed that the SA heuristic outperforms the LISTFIT and pairwise interchange (PI) algorithms; moreover, it is efficient for large-scale problems. Tang and Luo [18] proposed a new ILS algorithm combined with a variable number of cyclic exchanges. Experiments show that the algorithm is efficient for P‖C_max. Akyol and Bayhan [19] proposed a dynamical neural network that employs time-varying penalty parameters. The simulation results showed that the proposed algorithm generates feasible solutions and finds a better makespan than LPT. Kashan and Karimi [20] presented a discrete particle swarm optimization (DPSO) algorithm for makespan minimization. Computational results showed that the hybridized DPSO (HDPSO) algorithm outperforms both the SA and DPSO algorithms. Sevkli and Uysal [21] proposed a modified variable neighborhood search (MVNS) based on exchange and move neighborhood structures. Computational results demonstrated that the proposed algorithm outperforms both the GA and LPT algorithms. Min and Cheng [14] proposed a harmony search (HS) algorithm with dynamic subpopulation (DSHS). Results show that the DSHS algorithm outperforms SA and HDPSO for many instances; moreover, the execution time is less than 1 second for all computations. Chen et al.
[22] proposed a discrete harmony search (DHS) algorithm that uses a discrete encoding scheme to initialize the harmony memory (HM); the improvisation scheme for generating a new harmony is then redefined to suit this combinatorial optimization problem. In addition, the study hybridized a local search method with DHS to increase the speed of the local search. Computational results show that the DHS algorithm is very competitive compared with other heuristics in the literature. Jing and Jun-qing [23] proposed an efficient variable neighborhood search (EVNS) that uses four neighborhood structures and has two versions: one uses an LPT sequence as the initial solution, and the other uses a random sequence. Computational results demonstrate that EVNS is efficient in searching for the global or a near-global optimum. M. Sevkli and A. Z. Sevkli [24] proposed a stochastically perturbed particle swarm optimization algorithm (SPPSO). The algorithm was compared with two recent PSO algorithms, and it is concluded that SPPSO produces better results than DPSO and PSOspv in terms of the number of optimal solutions. Laha [25] proposed an improved simulated annealing (SA) heuristic. Computational results show that the proposed heuristic is better than the best-known heuristic in the literature; another advantage is its ease of implementation. In this paper, the algorithm proposed by Jing and Jun-qing [23] in their paper "efficient variable neighborhood search for identical parallel machines scheduling" is used with some changes: one change is the order of the neighborhood structures, and the other is the addition of another neighborhood structure, for a total of five neighborhood structures in our proposed algorithm.
The remaining sections of this paper are organized as follows. In Section 2, a brief description of the IPMS problem is given. In Section 3, the steps of the proposed algorithm are described in detail and its neighborhood structures are explained. In Section 4, the computational results are discussed. Conclusions are drawn in Section 5.

Problem Description
The identical parallel machine scheduling (IPMS) problem can be described as follows.
A set J of n independent jobs J = {J_1, J_2, ..., J_n} is to be processed on m identical parallel machines M = {M_1, M_2, ..., M_m}, where the processing time of job j on any of the identical machines is given by p_j.
A job can be processed on only one machine at a time, and a machine cannot process more than one job at a time. Priority and precedence constraints are not considered. There is no job cancellation, and a job completes its processing on a machine without interruption.
The objective is to minimize the maximum completion time, the makespan, of scheduling the jobs on the machines.
This scheduling problem can be described by the triple P_m | | C_max, where P indicates the parallel machine environment, m indicates the number of machines, the empty middle field indicates that the problem has no additional constraints, and C_max indicates that the objective is to minimize the makespan. This problem is interesting because minimizing the makespan has the effect of balancing the load over the various machines, which is an important goal in practice. As mentioned earlier, the proposed algorithm is an extension of the algorithm of Jing and Jun-qing [23].
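For illustration, the makespan of a machine-based schedule and a lower bound on it can be computed as follows. The paper's own LB equation (Section 3.3, Step 2) is not reproduced in this excerpt, so the bound below is an assumption: the standard bound for P‖C_max, the maximum of the longest single job and the average machine load rounded up.

```python
import math

def makespan(schedule, p):
    """Makespan C_max: the largest machine completion time.
    `schedule` is a list of job-index lists, one per machine;
    `p` gives the processing time of each job."""
    return max(sum(p[j] for j in jobs) for jobs in schedule)

def lower_bound(p, m):
    """A standard lower bound for P||C_max: no machine can finish
    before the average load, and no schedule can beat the longest job."""
    return max(max(p), math.ceil(sum(p) / m))
```

For example, with processing times (7, 6, 5, 4, 3, 2) and m = 2, the bound is max(7, ceil(27/2)) = 14.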

Development of the Proposed (IVNS) Algorithm
The proposed algorithms have two versions, with two types for each version, as shown in Figure 1. In the first version, a new neighborhood structure is added to the four neighborhood structures proposed by Jing and Jun-qing [23], while in the second version the order of these neighborhood structures is changed. Both versions use LPT [20] and random initial solutions and are referred to as "HIVNS" and "RIVNS," respectively. All versions of the proposed algorithm use the same five neighborhood structures, which are discussed in the following section.

Neighborhood Structures.
Determining the neighborhood structures is critical in the VNS algorithm. To enhance the local search ability, five different kinds of neighborhoods are used in the proposed algorithm to find better solutions for a given schedule. They are designed on the idea that a given solution can be improved by moving or swapping jobs between the problem machines (machines whose finish time equals the makespan of the solution) and the nonproblem machines (machines whose finish time is less than the makespan of the solution).
The five neighborhood structures are the move structure and four exchange structures (exchange 1 to exchange 4). The orders assigned to the proposed types of the algorithm are as follows: (1) the order for "HIVNS1" and "RIVNS1" is "move, exchange 1, exchange 2, exchange 3, exchange 4"; (2) the order for "HIVNS2" and "RIVNS2" is "exchange 3, exchange 1, move, exchange 2, exchange 4". The improved VNS (IVNS) flow chart is shown in Figure 2.
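As an illustration of the move/exchange idea between a problem machine and a nonproblem machine, the two basic operations might look as follows. The exact job-selection rules of exchange 1 through exchange 4 are not fully specified in this excerpt, so these are generic sketches with names of our choosing:

```python
def move(schedule, i, j, job):
    """Move one job from machine i (a problem machine) to machine j
    (a nonproblem machine), returning a new candidate schedule;
    the original schedule is left intact."""
    new = [list(machine) for machine in schedule]  # deep-enough copy
    new[i].remove(job)
    new[j].append(job)
    return new

def exchange(schedule, i, j, job_i, job_j):
    """Swap one job on machine i with one job on machine j,
    returning a new candidate schedule."""
    new = [list(machine) for machine in schedule]
    new[i].remove(job_i)
    new[j].remove(job_j)
    new[i].append(job_j)
    new[j].append(job_i)
    return new
```

Each call yields a neighbor of the current schedule; a neighborhood structure is then the set of all such candidates for a given pair of machines.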

Steps of IVNS.
The steps of IVNS for "HIVNS1" and "RIVNS1" are shown as follows.
Step 3.1. For schedule s, distinguish the problem machine set (M_pm) and the nonproblem machine set (M_npm).
Step 3.2. For each machine M_i in M_pm, do the following.
Step 4. Output the best solution s found so far.
The steps of IVNS for "HIVNS2" and "RIVNS2" are shown as follows.
Step 3.1. For schedule s, distinguish the problem machine set (M_pm) and the nonproblem machine set (M_npm).
Step 3.2. For each machine M_i in M_pm, do the following.
Step 4. Output the best solution s found so far.
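The overall IVNS control flow, cycling through the neighborhood structures in the chosen order, restarting from the first structure on improvement, and stopping at the lower bound or an iteration limit, can be sketched as follows. The function names and interfaces here are illustrative, not the authors':

```python
def vns(initial, neighborhoods, cost, lb, max_iters=100):
    """Basic VNS skeleton.  `neighborhoods` is an ordered list of
    functions, each mapping a solution to a list of neighbor solutions.
    On improvement, search restarts from the first structure; otherwise
    the next structure is tried.  Stops once the lower bound `lb` is
    reached or after `max_iters` outer passes."""
    best = initial
    best_cost = cost(best)
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = min(neighborhoods[k](best), key=cost, default=best)
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
                k = 0   # improvement: restart from the first neighborhood
            else:
                k += 1  # no improvement: move to the next neighborhood
        if best_cost <= lb:
            break
    return best, best_cost
```

Changing the list passed as `neighborhoods` is exactly the "order effect" studied in this paper: the same five structures visited in a different sequence can lead the search to a different final solution.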

Computational Results and Comparison
In this section, the results of the two versions of the proposed algorithm are compared with the LPT [1], SA [17], GA [14], MVNS [21], and IVNS [23] algorithms from the literature. The two versions of the improved variable neighborhood search algorithm, "HIVNS1 and RIVNS1" and "HIVNS2 and RIVNS2," were coded in MATLAB R2012a and executed on an i5 CPU at 5 GHz with 6 GB of RAM. All of them were stopped after reaching the lower bound or, for RIVNS1 and RIVNS2, after running for 100 iterations. The numbers of machines and jobs are shown in Table 1.
The processing times of the jobs are generated as in Jing and Jun-qing [23]. In total, 15 instance sizes were tested, each with 10 generations of different processing times, giving 150 instances. The performance of the algorithms is measured with respect to the average makespan (mean) and the average CPU time (Avg. time, in seconds). The "mean" performance is a relative quality measure of the solutions, computed as C/LB, where C is the average makespan obtained by the algorithm over the 10 generations of each instance and LB is the lower bound of the instance, calculated by the equation mentioned in Section 3.3, Step 2. The "Avg. time" refers to the total time the algorithm takes to produce the solution. Table 2 presents the results of the previous algorithms from the literature, while Table 3 presents the results of the proposed algorithms. Comparing the average makespan means, it is obvious that the proposed algorithms outperform all the literature algorithms in Table 2. It is worth noting that for each instance the proposed algorithms obtain solutions no worse than those of the algorithms in Table 2, except for the 10-machine, 20-job instance and only for the HIVNS algorithm; this is due to the difficulty the proposed algorithms face in reaching a lower bound when the difference between the number of machines and the number of jobs is relatively small. In addition, comparing the two proposed versions, we can see that they have the same average makespan mean in the case of a random initial solution, whereas the second version outperforms the first in the case of an LPT initial solution in both average makespan mean and average CPU time.
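The "mean" measure above is a simple ratio; a minimal sketch (the function name is ours) under the stated definition:

```python
def relative_quality(makespans, lb):
    """Relative quality measure C/LB: the average makespan over the
    generated instances divided by the lower bound.  A value of 1.0
    means every run reached the lower bound."""
    return (sum(makespans) / len(makespans)) / lb
```

For example, makespans of 100, 102, and 104 against a lower bound of 100 give a relative quality of 1.02.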
Figure 3 shows the averages of the makespan means, the maximum averages for each algorithm, and the lower bound. It can be observed that the two proposed versions have makespan mean averages close to the lower bound, especially RIVNS1 and RIVNS2, which have the same average (1.0054).
Figure 4 shows the averages of the Avg. time means for all algorithms. We can see that RIVNS1 and RIVNS2 have the smallest Avg. time, 0.008 and 0.007, respectively. Moreover, it can be observed that the Avg. time of HIVNS1 and HIVNS2 is much higher than that of HIVNS, because in this paper MATLAB is used to implement HIVNS1 and HIVNS2, whereas the authors of [23] used C++ to implement HIVNS. Thus, the speed benefits offered by C++ far outweigh the simplicity of MATLAB. Avg. time is especially important for such algorithms, since many of the calculations involve heavy optimization with complex equations or a large number of iterations. As the amount of data increases, the computation time (Avg. time) of MATLAB code increases significantly; for example, in Table 3 the cases with m = 2, 5, 10, 20 and n = 200 need a large number of iterations. C++ is preferable for such algorithmic calculations because of its speed and versatility.

Table 2: The makespan results before changing the order of the neighborhood structures [23].

Conclusion
In this paper, two versions of an improved variable neighborhood search (IVNS) algorithm are proposed for scheduling identical parallel machines (IPMS), with the objective of studying the effect of adding a new neighborhood structure and of changing the order of the neighborhood structures on minimizing the makespan C_max. In the proposed algorithms, a machine-based encoding method and five neighborhood structures are used to enhance the quality of the final solution. Computational results showed that the proposed algorithms outperform all the compared algorithms from the literature and obtain solutions no worse than theirs, except when the difference between the number of machines and the number of jobs is relatively small, owing to the difficulty of reaching a lower bound in that case. In addition, we concluded that the second version outperforms the first in the case of an LPT initial solution, and therefore changing the order of the neighborhood structures has an effect on minimizing the makespan. Further research will implement the proposed algorithms for scheduling unrelated parallel machines.

Figure 1: The two versions of the proposed algorithms.

Figure 2: Flow chart of the basic VNS algorithm.

Figure 3: Histogram of averages of makespan means.

Figure 4: Histogram of averages of Avg. time means.

Table 1: Number of machines and jobs.

Table 3: The makespan results after changing the order of the neighborhood structures.