A Dynamic Genetic Algorithm for Solving a Single Machine Scheduling Problem with Periodic Maintenance

The scheduling problem with nonresumable jobs and a maintenance process is considered with the objective of minimizing the makespan under two alternative strategies. The first strategy is to perform the maintenance process on the machine after a predetermined time period, and the second is to limit the maximum number of jobs that can be processed with a given tool. We propose a new mathematical formulation for this problem, which belongs to the NP-hard class of problems. In the second part of the paper, we propose a dynamic genetic algorithm so that large- and medium-scale instances can be solved in computationally reasonable time. We also compare the performance of the proposed algorithm with existing methods in the literature. Experimental results show that the proposed genetic algorithm attains optimal solutions in most cases and corroborate its better performance over the existing heuristic methods in the literature.


Introduction and Literature Review
In most scheduling problems, it is assumed that the machine is continuously and uninterruptedly available. However, in real-world problems this is not the case (Pinedo [1]). Machine breakdowns and interruptions are unavoidable when the machine works continuously, and they may also affect the quality of the jobs being processed. Thus, a periodic preventive maintenance process is usually planned and implemented to avoid these problems. In this paper, the scheduling problem is mathematically formulated subject to two restrictions: the amount of time the machine is available between two consecutive maintenance periods, and the maximum number of jobs that the machine can process over that operating time.
Jobs are handled in two different ways when the machine is available only for a limited time period before the next maintenance process. In the first case, called resumable (preemptive) jobs, it is assumed that the processing of in-process jobs can be continued after the machine interruption. For nonresumable (nonpreemptive) jobs, on the other hand, incomplete jobs must be reprocessed from the start after the interruption. Also, in some manufacturing processes, such as the manufacturing of electronic circuits, the number of jobs that can be processed before changing the tool is constrained. Using Graham's three-field notation for scheduling problems, this case is denoted 1/nr-pm/C_max (Graham et al. [2]).
The problem of machine availability has been widely considered in the literature in connection with machine breakdowns, tool changes, and preventive maintenance. In some studies, machine unavailability is caused by stochastic breakdowns. For the single machine scheduling problem with stochastic breakdowns, optimal solutions have been provided by Pinedo and Rammouz [3], Birge et al. [4], and Frostig [5]. Heuristics have also been developed to minimize the number of tardy jobs or the makespan (Adiri et al. [6, 7]). In addition, machine unavailability due to tool changes has been considered by Akturk et al. [8, 9], Ecker and Gupta [10], Chen [11], and Choi and Kim [12].

ISRN Industrial Engineering
Machine unavailability due to preventive maintenance has been considered under both flexible and periodic maintenance schemes. In the flexible maintenance scheme, the earliest and latest start times of the maintenance process are specified in advance and the machine shutdown may occur anywhere in this range. Yang et al. [13] studied the single machine scheduling problem with a single flexible maintenance process, where the actual time needed for maintenance is shorter than or equal to a predetermined time. They studied the problem of minimizing the makespan and proved the NP-hardness of the resulting problem. Qi et al. [14] considered the problem of scheduling several maintenance processes as well as job processing on the machine. They applied heuristics and branch-and-bound methods to determine the best job sequence and maintenance schedule in order to minimize the makespan. A mixed binary integer program and heuristics to minimize the number of in-process jobs have also been proposed by Chen [15]. Finally, Low et al. [16, 17] studied the single machine scheduling problem with two alternative strategies, namely, machine unavailability after a fixed time period and after processing a specified number of jobs before a tool change; their objective was to minimize the makespan.
There are several studies on problems with a periodic maintenance process. Liao and Chen [18] developed a branch-and-bound algorithm to minimize the maximum tardiness. Chen [19] suggested a heuristic algorithm to minimize the makespan. In addition, Chen [20] presented a branch-and-bound method as well as a heuristic to minimize the number of tardy jobs. Lee and Lee [21] developed a heuristic to minimize the makespan. Lau and Zhang [22] assumed that a fixed number of operations must be performed between two successive interruptions. Liao et al. [23], Dell'Amico and Martello [24], and Yang et al. [25] assumed that the machine must stop at most after processing a fixed number of jobs for the maintenance process. Finally, Hsu et al. [26] considered the single machine scheduling problem with periodic maintenance and the same strategies as Low et al. [17]; their objective was to minimize C_max. They developed a two-level integer program for solving the problem optimally as well as two heuristics, DBF and BBF, for solving the large-scale problems. As Pinedo (2006) mentioned, scheduling problems belong to the class of strongly NP-hard problems. In single machine scheduling with machine unavailability, many papers establish the NP-hardness of the problem; for example, Lee and Liman [27] proved the NP-hardness of the problem with deterministic maintenance times. Heuristic algorithms are usually employed to solve the problem and, to the best of our knowledge, only Low et al. [17] use a modified version of the particle swarm optimization algorithm for the single machine scheduling problem with periodic maintenance. In this paper, therefore, a dynamic genetic algorithm is proposed for the single machine scheduling problem with a fixed time between two consecutive maintenance operations and a maximum number of jobs that can be processed in this period before the tool must be changed.
The rest of this paper is organized as follows. In the next section, we first propose a new mathematical model for the problem and then present a genetic algorithm as an efficient way of solving it in computationally reasonable time. In the third section, the results of both methods are presented and compared against each other; there we also compare the performance of the proposed GA with the two existing heuristic methods proposed by Hsu et al. [26]. Conclusions and some guidelines for future study are given in the last section.

Proposed Methods
This section contains two parts, each providing a separate method for solving the aforementioned problem. In the first part, a mixed integer programming model is proposed for the first time in the literature, whereas in the second part we propose a GA to solve the problem.

Mathematical Formulation.
In this paper, a single machine scheduling problem is considered with two alternative strategies: (1) implementing the maintenance process after a predetermined time period and (2) limiting the maximum number of jobs that can be processed during this period before the tool must be changed. The objective is to minimize the makespan, and jobs are assumed to be nonresumable, meaning that if a job cannot be completed before the next maintenance process, it must be restarted after the maintenance. With this in mind, the problem can be defined as follows: n nonresumable jobs are to be processed on a single machine. The processing time of job i is denoted by p_i, and all jobs are assumed to be ready and available at the beginning. The time period between two successive maintenance processes is T, and the time needed for any maintenance activity is t. The jobs processed between two successive maintenance processes are called a batch, and the number of jobs in a batch must not exceed k. Hence, if there are B batches in the time horizon, the number of maintenance processes is B − 1. A schematic view is shown in Figure 1.
Considering the above explanation, the problem notation is as follows:
T: time period between two successive maintenance processes;
t: time needed for implementing any maintenance activity;
k: maximum number of jobs to process in a period T;
M: a large number;
n: number of jobs;
p_i: processing time of job i, i = 1, 2, ..., n;
x_ij: 1 if job i belongs to batch j, and 0 otherwise, i = 1, 2, ..., n, j = 1, 2, ..., n;
y_j: 1 if at least one job belongs to batch j, and 0 otherwise;
g_j: amount of gap (idle time) for batch j;
G_j: amount of gap for batch j, excluding empty batches and the last batch in which jobs take place.
In the research by Hsu et al. [26], the problem of minimizing the makespan is solved using two mathematical models: the first determines the minimum number of batches required to complete the jobs, and the second, taking the number of batches as given, sequences and arranges the jobs in the batches so that the makespan is minimized. In this paper, we propose a new mathematical formulation that solves the problem in a single stage.
Before presenting the mathematical model, we explain the general idea of the solution. Since the optimal number of batches is not known at the outset, the maximum possible number of batches is considered initially. Hence, if there are n jobs, there can be at most n batches. However, after the jobs are arranged to minimize the makespan, subject to the maximum time period T and the maximum number of jobs k per batch, some batches will contain more than one job and, as a result, other batches will remain empty. The model omits these empty batches. The amount of gap (idle time) is calculated for each batch except the last. Finally, the sum of the jobs' processing times, the gaps of the batches, and the fixed times needed for the maintenance processes constitutes the makespan.
Following the explanation above, the mixed integer programming model is proposed as follows. Expression (1) is the objective function which, as described above, consists of three separate terms: the first calculates the sum of the fixed times needed for the maintenance processes, the second calculates the sum of the gaps of the batches excluding the last one, and the third calculates the sum of the jobs' processing times. Constraint (2) ensures that each job is assigned to exactly one batch. Constraint (3) ensures that the number of jobs in each batch does not exceed k. Constraint (4) ensures that the sum of the processing times of the jobs assigned to a batch does not exceed the maximum allowed time period T.
We considered n potentially empty batches at the beginning, which may result in empty batches in the final solution, so constraints (5) and (6) force the binary variable y_j to take the value 1 if any job is assigned to the jth batch and 0 otherwise. Constraint (7) ensures that once a batch is empty, all subsequent batches are also empty. Hence, the empty batches, whose corresponding binary variable y_j is 0, are placed just after the filled batches. This property, as explained later in the paper, is used to calculate the gaps. Constraint (8) calculates the initial value of the gap g_j. Calculating the gaps with (8) alone is problematic: an empty batch would have a gap of T, which would distort the objective function. To solve this problem, we introduce constraint (9) to calculate the variable G_j. For empty batches, G_j equals 0; likewise, because the gap of the last nonempty batch is not part of the makespan, G_j equals 0 for that batch as well. For all other batches, G_j equals the gap of the batch. To remove the nonlinearity of this constraint, we replace it with linear constraints in which, if any job is assigned to the next batch, G_j = g_j, but if the next batch is empty, G_j is effectively a free variable and, since it only takes nonnegative values, ranges from 0 upward. Because G_j appears in the objective function being minimized, it then takes its least possible value, namely 0.
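Based on the description above, the model can be sketched as follows. This is a reconstruction consistent with the prose, not necessarily the authors' exact formulation; the numbering follows the constraint references in the text, and the big-M linearization of (9) is our reading of the final paragraph.

```latex
\begin{align}
\min \quad & C_{\max} = t\Bigl(\sum_{j=1}^{n} y_j - 1\Bigr)
             + \sum_{j=1}^{n-1} G_j + \sum_{i=1}^{n} p_i
             && \text{(1)}\\
\text{s.t.}\quad
& \sum_{j=1}^{n} x_{ij} = 1, \quad \forall i && \text{(2)}\\
& \sum_{i=1}^{n} x_{ij} \le k, \quad \forall j && \text{(3)}\\
& \sum_{i=1}^{n} p_i x_{ij} \le T, \quad \forall j && \text{(4)}\\
& y_j \le \sum_{i=1}^{n} x_{ij}, \quad \forall j && \text{(5)}\\
& \sum_{i=1}^{n} x_{ij} \le M y_j, \quad \forall j && \text{(6)}\\
& y_{j+1} \le y_j, \quad \forall j < n && \text{(7)}\\
& g_j = T - \sum_{i=1}^{n} p_i x_{ij}, \quad \forall j && \text{(8)}\\
& G_j \ge g_j - M\,(1 - y_{j+1}), \quad G_j \ge 0, \quad \forall j < n && \text{(9)}\\
& x_{ij}, y_j \in \{0,1\}, \quad g_j \ge 0. &&
\end{align}
```

Constraint (9) captures the linearized gap logic: when batch j+1 is nonempty (y_{j+1} = 1) it forces G_j ≥ g_j, and minimization drives G_j down to g_j; when batch j+1 is empty, G_j is only bounded below by 0 and the objective pushes it to 0.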

Proposed Genetic Algorithm.
The genetic algorithm, first introduced by Holland [28], is an evolutionary algorithm based on Darwin's theory of evolution. One of the main differences between this algorithm and other metaheuristics, such as simulated annealing, is that it works with a set of solutions (called a population) instead of a single solution. As mentioned by Pinedo (2001), production planning problems belong to the NP-hard class, and because of the efficiency of GAs for discrete optimization problems, as noted by Davis [29] and Goldberg [30], many researchers have used them for solving scheduling problems.
Each component of the proposed algorithm is explained in the following sections.

Chromosome Structure.
Specifying the chromosome structure is one of the main steps in applying a genetic algorithm to any problem. The structure should be devised so that all of the problem's features can be captured in it. In our problem, the chromosome structure encodes the job sequence together with the jobs' processing times. The rest of the information required by the problem can be derived from this structure.

Initial Population.
Specifying the initial population size is of special importance in the genetic algorithm. As mentioned by Back et al. [31], if the population size is too small, we might not find a suitable solution, while if it is too large, it might result in long computational times. We chose a population size of 200 over other candidates such as 100, 250, and 300 because of its better performance in the devised experiments.
Suppose that there are n jobs such that n_1 jobs require processing time p_1, n_2 jobs require processing time p_2, and n_3 jobs require processing time p_3 (n_1 + n_2 + n_3 = n). Any permutation of these jobs is then a possible solution to our problem, and this observation is used to build the initial population.
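This construction can be sketched as follows. It is a minimal sketch: the three job types and the default population size of 200 follow the text, while the function and variable names are ours.

```python
import random

def initial_population(n1, n2, n3, pop_size=200, rng=None):
    """Build pop_size random permutations of a job list containing
    n1 jobs of processing time 1, n2 of time 5, and n3 of time 10."""
    rng = rng or random.Random()
    base = [1] * n1 + [5] * n2 + [10] * n3
    population = []
    for _ in range(pop_size):
        chrom = base[:]
        rng.shuffle(chrom)   # each permutation is a feasible job sequence
        population.append(chrom)
    return population
```

Every chromosome is thus feasible by construction, since batching is decided only when the fitness is evaluated.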

Fitness Function.
The fitness function measures how suitable a solution is, so it must be devised such that different solutions can be ranked by its value. To calculate the fitness of a chromosome, we follow the structure of the objective function and break it into three parts. The first part is the sum of the jobs' processing times, which is the same for every job sequence. The second part covers the fixed maintenance processes at the end of each period: since a maintenance activity occurs at the end of every fixed time period, we first find the number of batches; the number of maintenance processes is then the number of batches minus 1. The third part is the sum of the gaps (excluding the last nonempty batch). To calculate the gap of each nonempty batch, we first determine, from the job sequence and the processing times, which jobs fall into the batch, and then subtract the sum of their processing times from the fixed time T between two maintenance operations. Summing these three parts gives the makespan.
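As a sketch, this three-part computation can be implemented by scanning the job sequence and opening a new batch whenever adding a job would exceed either the job limit k or the period length T. The greedy batching rule and the names are our reading of the text.

```python
def makespan(seq, T, t, k):
    """Makespan of a job sequence: sum of processing times
    + gaps of all batches except the last + (batches - 1) maintenance times.
    Assumes every processing time is at most T."""
    loads, counts = [0], [0]
    for p in seq:
        if counts[-1] < k and loads[-1] + p <= T:
            loads[-1] += p          # job fits in the current batch
            counts[-1] += 1
        else:
            loads.append(p)         # close current batch, open a new one
            counts.append(1)
    gaps = sum(T - load for load in loads[:-1])   # last batch's gap excluded
    return sum(seq) + gaps + (len(loads) - 1) * t
```

For example, with T = 11, t = 2, k = 3, the sequence [5, 5, 1, 10, 10] forms three batches with loads 11, 10, 10, giving a makespan of 31 + 1 + 4 = 36.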

Selection Method.
As described by Obitko [32], selection methods fall into four categories: roulette wheel, rank, steady state, and elitism. Here we use the roulette wheel method, proposed by Goldberg [30], in which every population member has a chance of being selected, although this probability is not the same for all members.
After all individuals in the population are ranked according to their fitness values, each is assigned a selection probability according to its rank. Next, using these weighted probabilities, the parents are chosen from among the population members.
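A sketch of this rank-weighted roulette wheel for a minimization objective such as the makespan. Since the text does not give the exact rank-to-probability mapping, the linear weighting below (best gets weight n, worst gets weight 1) is an illustrative assumption.

```python
import random

def select_parent(population, fitnesses, rng=None):
    """Rank-based roulette wheel for minimization: the best individual
    (lowest fitness) gets weight n, the worst gets weight 1."""
    rng = rng or random.Random()
    n = len(population)
    # Sort indices worst-to-best, then assign ranks 1..n as weights.
    order = sorted(range(n), key=lambda i: fitnesses[i], reverse=True)
    weights = [0] * n
    for rank, idx in enumerate(order, start=1):
        weights[idx] = rank
    chosen = rng.choices(range(n), weights=weights, k=1)[0]
    return population[chosen]
```

Over many draws, better individuals are selected proportionally more often, yet every member retains a nonzero chance, as the text requires.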

Genetic Operators.
There are two genetic operators: crossover and mutation. The crossover operator takes two chromosomes as parents and combines them to produce two offspring. Although there are other crossover methods, such as two-point and uniform crossover, here we use one-point crossover, the simplest method. A point is randomly selected at which both parents' chromosomes are cut. The first part of the first parent is then joined to the second part of the second parent to produce the first offspring; likewise, combining the first part of the second parent with the second part of the first parent produces the second offspring. As illustrated in Figure 2, this method can produce infeasible solutions: in the first offspring there is no job with processing time 10, whereas the second offspring has two jobs with processing time 10, and there are five jobs with processing time 5 in the first offspring but only three in the second.
There are several methods to overcome this problem. One is the position-based crossover, first proposed by Syswerda [33] for implementing uniform crossover and used, with slight variation, for single-point crossover in other papers such as Hamidinia et al. [34]. Note, however, that in those problems what makes an offspring infeasible is that some jobs appear twice while others are eliminated. In our problem the situation is different: the goal is to keep the number of jobs of each type fixed in the offspring (two jobs with the same processing time are identical in nature). We therefore develop a crossover method of our own and explain it through the example in Figure 2.
After cutting the parents at a random point (as shown in Figure 2), we consider the second parts of both parents. In each part, we count the number of jobs with processing time 1, with processing time 5, and with processing time 10. For each job type (by processing time), we introduce an indicator in the corresponding offspring whose value equals the difference between the counts of jobs with that processing time in the second parts of the two parents. The calculations for the first child, produced from the first part of the first parent and the second part of the second parent, are shown in Table 1. In this table, the indicator for each job type (indicator 10-1 means jobs with processing time 10 in the first offspring) is updated at each step in which a job is about to be transferred to the offspring. We move the jobs in the second part of the second parent to follow the first part of the first parent, checking the indicator of each job first. If the indicator is less than or equal to zero, the job is moved without any changes. Otherwise, if the indicator is greater than zero, we randomly select a job type whose indicator is less than zero and transfer a job of that type instead of the original job; we then add 1 to the indicator of the moved type and subtract 1 from the indicator of the original type. At the end of this process, every indicator equals zero. Figure 3 shows the second part of the second parent, which is the candidate for being appended to the first part of the first parent to form the first offspring.
The first job's processing time is 1 and, according to Table 1, its indicator is 0, so it is transferred without change. The second job has processing time 5 and its indicator at the start of this step is greater than 0 (it equals one), so we randomly choose a job type with an indicator less than zero (here the only type with a negative indicator is the jobs with processing time 10). At the end of this step, the indicator for jobs with processing time 10 is increased by one unit to zero, and likewise the indicator of the jobs with processing time 5 is decreased by one unit. Now the indicators of all three job types equal 0; as a result, in the remaining steps the jobs are transferred without change.
The second part of the first parent is transferred in the same way. The resulting offspring can be seen in Figure 4.
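The indicator bookkeeping above can be sketched compactly with multiset counts: the repaired tail keeps each donor gene whose job type is still needed and otherwise substitutes a randomly chosen type that is still in deficit. The `Counter`-based implementation and the names are ours; the effect matches the described procedure.

```python
import random
from collections import Counter

def one_point_crossover(parent1, parent2, cut, rng=None):
    """One-point crossover with repair so that each child contains exactly
    the same number of jobs of each processing-time type as the parents.
    Both parents must be permutations of the same multiset of jobs."""
    rng = rng or random.Random()

    def build(head, donor_tail):
        # Jobs of each type still required to complete the chromosome.
        need = Counter(parent1) - Counter(head)
        tail = []
        for gene in donor_tail:
            if need[gene] > 0:
                tail.append(gene)                 # indicator <= 0: keep as is
            else:
                # indicator > 0: substitute a type whose indicator is negative
                gene = rng.choice([g for g, c in need.items() if c > 0])
                tail.append(gene)
            need[gene] -= 1
        return head + tail

    child1 = build(parent1[:cut], parent2[cut:])
    child2 = build(parent2[:cut], parent1[cut:])
    return child1, child2
```

Because the substitution always picks a type still in deficit, every indicator reaches zero by the end of the tail, exactly as in the worked example.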
Mutation is the second genetic operator. Through random changes in the chromosomes, it enables a broader search of the solution space. When the best solution remains unchanged over consecutive iterations of the genetic algorithm, the algorithm is likely trapped in a local optimum. In such cases, further exploration of the solution space, which the mutation operator provides, can be instrumental in reaching the optimal solution. Various approaches are described in the literature, such as single-point, two-point, multipoint, and swap mutations. Here we introduce a dynamic multipoint swap mutation; experiments show that our method performs better than static alternatives.
In the proposed method, the number of selected points for swapping, m, is tied to the number of iterations during which the best solution found by the algorithm has remained unchanged. As the number of stagnant iterations increases, the proposed mutation searches the solution space more aggressively. The size m is bounded by n_min, the minimum number of jobs of a specific kind; since we consider three types of jobs in this paper (with processing times 1, 5, and 10, as discussed in Section 3), n_min = min{n_1, n_2, n_3}. Two genes of each chromosome are randomly selected and their values are swapped with each other; this is repeated m times for each chromosome.
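A sketch of the operator follows. Since the exact equation for m is not given above, the growth rule below (one extra swap per ten stagnant iterations, capped at n_min) is an illustrative assumption, not the authors' formula.

```python
import random

def dynamic_swap_mutation(chrom, stagnant_iters, n_min, rng=None):
    """Swap m randomly chosen gene pairs; m grows with the number of
    iterations the best solution has been stagnant (assumed growth rule)."""
    rng = rng or random.Random()
    m = min(n_min, 1 + stagnant_iters // 10)   # assumed rule, capped at n_min
    mutated = chrom[:]
    for _ in range(m):
        i = rng.randrange(len(mutated))
        j = rng.randrange(len(mutated))
        mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated
```

Swapping preserves the multiset of job types, so the mutated chromosome always remains feasible.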

Stopping Criteria.
Various stopping criteria have been introduced in the literature; one very common criterion is a maximum number of iterations, set according to the problem. In this study, based on the results of the experiments, 250 was chosen as the maximum number of generations (Max_gen). In some instances the algorithm reaches the optimum solution in early iterations and the solution then remains unchanged until the last iteration. Two situations can give rise to this: first, the algorithm is trapped in a local optimum, in which case further search is needed and is carried out by the proposed dynamic mutation; second, the algorithm has actually reached the optimal solution. To reduce the solution time in the latter case, it is appropriate to stop the algorithm before the last iteration. Because of the possibility of a local optimum, the algorithm is allowed Max_gen/2 further iterations, and if the best solution does not change within them, the algorithm stops. This secondary stopping criterion is used alongside the primary one to improve the performance of the algorithm.

Overview of Genetic Algorithm.
Each genetic algorithm contains four stages: initialization, selection, reproduction, and termination.The pseudocode presented in Algorithm 1 shows the aforementioned stages in the proposed genetic algorithm.
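The four stages and the two stopping criteria (a cap of Max_gen generations, plus termination after Max_gen/2 generations without improvement) can be sketched as a compact skeleton. The selection and reproduction steps here are simplified stand-ins for the rank-weighted roulette wheel, repair crossover, and dynamic mutation described above.

```python
import random

def genetic_algorithm(fitness, init_pop, max_gen=250, rng=None):
    """Minimal GA skeleton: initialization, selection, reproduction,
    termination. fitness is minimized; chromosomes are lists."""
    rng = rng or random.Random()
    pop = [c[:] for c in init_pop]                    # initialization
    best = min(pop, key=fitness)
    stagnant = 0
    for _ in range(max_gen):
        pop.sort(key=fitness)                         # selection: rank-weighted
        weights = list(range(len(pop), 0, -1))        # best gets largest weight
        parents = rng.choices(pop, weights=weights, k=len(pop))
        children = []                                 # reproduction (simplified)
        for p in parents:
            c = p[:]
            i = rng.randrange(len(c))
            j = rng.randrange(len(c))
            c[i], c[j] = c[j], c[i]                   # single random swap
            children.append(c)
        pop = children
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best, stagnant = cand, 0
        else:
            stagnant += 1
        if stagnant >= max_gen // 2:                  # secondary stopping rule
            break
    return best
```

In the full algorithm, the single random swap would be replaced by the repair crossover (applied to 90% of the population) followed by the dynamic multipoint swap mutation.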

Design of the Experiments and Experimental Results
To evaluate the proposed genetic algorithm, we require a set of experimental problems through which the results of the different methods can be compared. For generating the random instances, we use the approach proposed by Hsu et al. [26].
In this approach, the problems are categorized into three sizes. Small-size problems have 20, 30, 40, 50, and 100 jobs; medium-size problems have 250, 300, 350, 400, and 500 jobs; and large-size problems have 700, 1000, 2000, and 10000 jobs. The processing time of each job and the fixed time of the maintenance process are uniformly distributed over the discrete values 1, 5, and 10. The time between two successive maintenance processes is T = max{⌈α · Σ_i p_i⌉, max_i p_i}, where α is uniformly distributed over the discrete values {1/3, 1/4, 1/5}. The maximum number of jobs to process in a production run with one tool is k = ⌈β · n⌉, where β is uniformly distributed over the discrete values {1, 1/2, 1/3, 1/4, 1/5}. The proposed genetic algorithm has several parameters that must be tuned for satisfactory performance: the number of generations is 250, the population size (N_pop) is 200, the fraction of the population chosen for crossover (P_c) is 0.9, and the mutation fraction is 0.1.
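The generation scheme reads as follows in code. This is a reconstruction: the expressions for T and k are our reading of the partially garbled formulas above, and the function name is ours.

```python
import math
import random

def generate_instance(n, rng=None):
    """Random instance following the scheme attributed to Hsu et al. [26]
    (reconstructed formulas for T and k)."""
    rng = rng or random.Random()
    p = [rng.choice([1, 5, 10]) for _ in range(n)]    # processing times
    t = rng.choice([1, 5, 10])                        # maintenance duration
    alpha = rng.choice([1 / 3, 1 / 4, 1 / 5])
    T = max(math.ceil(alpha * sum(p)), max(p))        # period between maintenances
    beta = rng.choice([1, 1 / 2, 1 / 3, 1 / 4, 1 / 5])
    k = math.ceil(beta * n)                           # max jobs per batch
    return p, t, T, k
```

Taking the maximum with max_i p_i guarantees that every job fits in a single period, so feasible batchings always exist.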
The experiments were run on a computer with a 2.4 GHz Core 2 Duo CPU and 3 GB of RAM. To obtain globally optimal solutions, we used the GAMS software. For small- and medium-size problems, the required time is less than 24 hours; for problems with 700 or more jobs (large-size problems), the required processing time is not computationally reasonable and exceeds 24 hours. Table 2 shows the results of the experiments. For each problem size there are nine sample problems, and each was run 4 times, so the proposed genetic algorithm was tested on 504 problems in total; the mean solution time and the best solution found are presented in Table 2. As mentioned before, there are also two heuristic methods for this problem, butterfly order with best fit (BBF) and decreasing order with best fit (DBF), presented by Hsu et al. [26]; to compare these algorithms with the proposed GA, their results are also given in Table 2. For the proposed GA and the two heuristics, the error on the small- and medium-size problems, where the optimal solution was attainable in less than 24 hours, is calculated as error = (algorithm solution − optimum solution) / optimum solution.
For the large-scale problems, where the optimum solution was not attainable, the proposed GA outperformed the heuristic methods in all cases, so the deviations of the BBF and DBF algorithms from the solution attained by the GA are reported as the error. Comparing the results of the proposed genetic algorithm with the optimal solutions found by GAMS, we see that, of the 90 sample problems for which GAMS found the optimal solution in a reasonable time of less than 24 hours (the small- and medium-size problems), the proposed algorithm failed to reach the optimum in only seven, with a maximum error of about 0.02. This shows that the proposed genetic algorithm's performance on this problem is reliable.
We can also see that in all cases the proposed GA gives better results than the two heuristic methods. The table shows a maximum error of about 0.3 for the BBF algorithm and about 0.36 for the DBF algorithm, which demonstrates their weaker performance in comparison with the proposed GA.
Another important point is the time needed to solve a sample problem with the proposed GA compared to the time needed to solve the same problem optimally. As shown in Table 2, for the sample problems with fewer than 100 jobs, solving the problem optimally takes slightly less time than solving it with the genetic algorithm; but as the problem size increases, the time needed to solve the problem optimally grows very fast, so that for the large-scale problems (700 jobs and more) GAMS could not reach the optimal solution even after 24 hours. By contrast, the growth in the genetic algorithm's runtime is very slow, and the proposed genetic algorithm solves the largest sample problems attempted (those with 2000 jobs) in about 5 minutes. Another point worth mentioning is the wide range of runtimes for problems of the same size: for example, at the same scale there is a problem with a runtime of 72 seconds and another with 101 seconds. This is due to the dynamic nature of the proposed algorithm; depending on the nature of the instance and its convergence speed, the number of iterations and the runtime of the algorithm differ. It should be noted that, in

Figure 1: A schematic view of the problem.

Figure 3: Candidate part versus the original part.

Figure 4: Children produced using the proposed method.

Table 1: Steps of the proposed method.