Parallel-Machine Scheduling with Time-Dependent and Machine Availability Constraints

We consider the parallel-machine scheduling problem in which the machines have availability constraints and the processing time of each job is a simple linear increasing function of its starting time. For the makespan minimization problem, which is NP-hard in the strong sense, we discuss the Longest Deteriorating Rate algorithm and the List Scheduling algorithm; we also provide a lower bound on any optimal schedule. For the total completion time minimization problem, we analyze the strong NP-hardness, and we present a dynamic programming algorithm and a fully polynomial time approximation scheme for the two-machine problem. Furthermore, we extend the dynamic programming algorithm to the total weighted completion time minimization problem.


Introduction
Consider the following scheduling problem with deteriorating jobs and machine availability constraints. There are n independent deteriorating jobs J = {J_1, ..., J_n} to be processed on m identical parallel machines. The actual processing time of job J_j (j = 1, ..., n) is p_j = b_j t_j, where b_j (> 0) and t_j are the deteriorating rate and the starting time of job J_j, respectively. Each job J_j has a weight w_j. All jobs are released at time t_0 (> 0). The case with t_0 = 0 is not considered because every job would have its processing time equal to zero if t_0 = 0. We assume that the jobs are nonresumable in our problems. Machine M_i (1 ≤ i ≤ m) is not continuously available; it is unavailable during the period I_i = [s_1^(i), s_2^(i)). In addition, we assume that t_0(1 + b_max) < s_1^(i) < s_2^(i) and s_1^(i) < t_0 ∏_{j=1}^n (1 + b_j), where b_max = max_{j=1,...,n} {b_j}; otherwise, all the jobs could be finished before the nonavailable period and the problem would become trivial. Without loss of generality, we assume that all parameters are integral unless stated otherwise.
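To make the deterioration arithmetic concrete: a job with rate b that starts at time t runs for bt and completes at t(1 + b), so a chain of jobs processed back to back from time t_0 completes at t_0 ∏ (1 + b_j), the product form used throughout this paper. A minimal sketch (the function name and the numbers are illustrative, not from the paper):

```python
# Completion time of a chain of deteriorating jobs processed back to back:
# a job starting at time t with rate b has processing time b*t, so it
# completes at t + b*t = t*(1 + b).  Hence a sequence started at t0
# finishes at t0 * prod(1 + b_j).  The values below are hypothetical.

def chain_completion(t0, rates):
    """Completion time of jobs with rates `rates` run consecutively from t0."""
    c = t0
    for b in rates:
        c *= 1 + b  # the job occupies [c_old, c_old * (1 + b))
    return c

print(chain_completion(1.0, [0.5, 1.0, 0.25]))  # 1 * 1.5 * 2 * 1.25 = 3.75
```

Note that the final completion time depends only on the set of rates, not on their order, which is why the arguments below reason about products of the factors 1 + b_j.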
The model described above falls into the categories of scheduling with deteriorating jobs and machine scheduling with availability constraints. Scheduling with deteriorating jobs was first considered by J. N. D. Gupta and S. K. Gupta [2] and by Browne and Yechiali [3]. Cheng et al. [4] gave a survey, and the monograph by Gawiejnowicz [1] presented this class of problems from different perspectives, covering results and examples. Graves and Lee [5] pointed out that machine scheduling with an availability constraint is very important and still relatively unexplored; they studied a problem in which maintenance needs to be performed within a fixed period. Lee [6] presented an extensive study of single- and parallel-machine scheduling problems with an availability constraint with respect to various performance measures; two cases were considered: resumable and nonresumable. A job is said to be resumable if, when it cannot be finished before the nonavailable interval of a machine, it can continue after the machine becomes available again. On the other hand, a job is said to be nonresumable if it has to restart rather than continue. Ma et al. [7] gave a survey of this class of problems.
In this paper, we consider the scheduling of deteriorating jobs with machine availability constraints on m identical parallel machines. The jobs are nonresumable, and our objective is to minimize the makespan and the total (weighted) completion time.
Relevant Previous Work. Wu and Lee [8] initiated the study of deteriorating job scheduling with machine availability constraints; they showed that minimizing the makespan of scheduling deteriorating jobs on a single machine with an availability constraint can be transformed into 0-1 integer programming. Ji et al. [9] gave some results for linear deteriorating jobs with an availability constraint on a single machine. Gawiejnowicz and Kononov [10] considered the complexity and approximability of scheduling resumable proportionally deteriorating jobs. Fan et al. [11] considered scheduling resumable deteriorating jobs on a single machine with nonavailability constraints. Li and Fan [12] addressed the nonresumable scheduling problem 1 | nr-a, p_j = b_j t | ∑ w_j C_j. The problems they considered are on a single machine. In this paper, we consider the parallel-machine scheduling problem with deteriorating jobs and machine availability constraints; we show that the problems are strongly NP-hard and present some algorithms.

Minimizing the Makespan
In this section, we first show that problem P_m | nr-a, p_j = b_j t | C_max is NP-hard in the strong sense. Ji and Cheng [13] showed that problem P_m | p_j = b_j t | C_max is NP-hard in the strong sense when m is arbitrary. In their problem, all the machines are available all the time; thus, our problem P_m | nr-a, p_j = b_j t | C_max, which generalizes it, is NP-hard in the strong sense when m is arbitrary.
In the following, we discuss the Longest Deteriorating Rate (LDR for short) algorithm and List Scheduling (LS for short) algorithm and analyze a lower bound of any optimal schedule.

LDR and LS Algorithms
LS Algorithm. Given a sequence of jobs J_1, ..., J_n, assign the jobs one by one according to the list. Each job is assigned to the machine on which it can be finished as early as possible.
LDR Algorithm. Sort the jobs in nonincreasing order of their deteriorating rates, and then assign the jobs by the LS algorithm.
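As an illustration, the two heuristics might be implemented as follows, assuming one unavailable interval per machine and nonresumable jobs as in the model above; all identifiers are our own, and this is a sketch rather than the paper's pseudocode:

```python
# Sketch of the LS and LDR heuristics for nonresumable deteriorating jobs
# (p_j = b_j * t) with one unavailable interval [s1, s2) per machine.
# A machine's state is its current finishing time; assigning a job with
# rate b to a machine free at time t completes it at t*(1+b), unless that
# would overlap the unavailable interval, in which case the job restarts
# at s2.  A machine with no unavailable interval can be modeled with
# (s1, s2) = (inf, inf).  All names here are illustrative.

def completion_if_assigned(free_at, b, s1, s2):
    c = free_at * (1 + b)
    if free_at < s1 and c > s1:      # would run into the unavailable interval
        c = s2 * (1 + b)             # nonresumable: restart at s2
    elif s1 <= free_at < s2:         # machine is currently unavailable
        c = s2 * (1 + b)
    return c

def list_scheduling(rates, t0, intervals):
    """intervals[i] = (s1, s2) for machine i; returns the makespan."""
    free = [t0] * len(intervals)
    for b in rates:                  # jobs taken in the given list order
        cands = [completion_if_assigned(free[i], b, *intervals[i])
                 for i in range(len(intervals))]
        i = min(range(len(intervals)), key=lambda k: cands[k])
        free[i] = cands[i]           # greedily pick the earliest finish
    return max(free)

def ldr(rates, t0, intervals):
    """LDR: run LS on jobs sorted by nonincreasing deteriorating rate."""
    return list_scheduling(sorted(rates, reverse=True), t0, intervals)
```

For example, with t_0 = 1, two jobs of rate 1, machine 1 unavailable on [2, 4), and machine 2 always available, LS places one job on each machine and both finish at time 2.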
Let C_max^LDR, C_max^LS, and C_max^* denote the makespans of the LDR schedule, the LS schedule, and an optimal schedule, respectively.

Theorem 1. C_max^LDR / C_max^* can be arbitrarily large even for the two-machine problem P_2 | nr-a, p_j = b_j t | C_max.
The proof of this theorem is similar to the proofs of Theorems 1 and 3 in Liu et al. [14].
Proof. In any optimal schedule, let S_i and S_i' denote the set of jobs scheduled on machine M_i and the set of jobs scheduled before s_1^(i) on M_i (i = 1, ..., m), respectively. Then the set of jobs scheduled after s_2^(i) is S_i \ S_i', and the load of machine M_i is

L_i = s_2^(i) ∏_{J_j ∈ S_i \ S_i'} (1 + b_j).

Minimizing the Total Completion Time
In this section, we discuss the total completion time minimization problem.
Ji and Cheng [13] showed that problem P_m | p_j = b_j t | ∑ C_j is NP-hard in the strong sense when m is arbitrary, which implies that our problem P_m | nr-a, p_j = b_j t | ∑ C_j is NP-hard in the strong sense when m is arbitrary.

Dynamic Programming Algorithm.
In this subsection, we present a dynamic programming algorithm for P_2 | nr-a, p_j = b_j t | ∑ C_j when machine M_2 is always available. For convenience, let t_0 = 1, let the only nonavailable interval on machine M_1 be [s_1^(1), s_2^(1)), and let B = ∏_{j=1}^n (1 + b_j).

Smallest Deteriorating Rate (SDR for short). Sort the jobs in nondecreasing order of their deteriorating rates such that b_1 ≤ b_2 ≤ ... ≤ b_n.

Lemma 4. In any optimal solution to problem P_2 | nr-a, p_j = b_j t | ∑ C_j, the jobs scheduled before the nonavailable interval are processed in the SDR order, and so are the jobs scheduled after the nonavailable interval and the jobs on machine M_2.
This lemma can be proved by a job-interchange argument. We assume that the jobs are reindexed in the SDR order. Let f_j(v, u) denote the optimal value of the objective function subject to the following conditions: (i) The jobs in consideration are J_1, ..., J_j.
(ii) The total processing time on M_1 before s_1^(1) is v. (iii) The total processing time on M_2 is u.
To obtain f_j(v, u), we distinguish three cases according to the position of job J_j: it is scheduled before the nonavailable interval on M_1, after the nonavailable interval on M_1, or on M_2. Combining these cases, we design a dynamic programming algorithm as follows.
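The three cases place job J_j before the interval on M_1, after the interval on M_1, or on M_2. With t_0 = 1, the multiplicative loads (v, u) determine the load after the interval as s_2^(1) B_j / (vu), where B_j = ∏_{k≤j} (1 + b_k). The following Python sketch is our reconstruction of such a recurrence under these assumptions, not the paper's exact algorithm:

```python
# Reconstruction sketch of a DP for P2 | nr-a, p_j = b_j t | sum C_j with
# M2 always available.  Jobs are presorted in SDR order (Lemma 4).  A
# state (v, u) records the completion time v of the jobs placed before s1
# on M1 (v = prod of (1+b) over those jobs, t0 = 1) and the completion
# time u on M2; the load after s2 on M1 is recovered as s2*B_j/(v*u).
# Exact rational arithmetic avoids floating-point drift.

from fractions import Fraction

def dp_total_completion(rates, s1, s2):
    rates = sorted(rates)                                # SDR order
    states = {(Fraction(1), Fraction(1)): Fraction(0)}   # (v, u) -> best sum C_j
    B = Fraction(1)                                      # B_j = prod_{k<=j}(1+b_k)
    for b in rates:
        f = 1 + Fraction(b)
        B *= f
        nxt = {}
        for (v, u), val in states.items():
            options = [((v, u), val + s2 * B / (v * u)),  # J_j after the gap on M1
                       ((v, u * f), val + u * f)]         # J_j on M2
            if v * f <= s1:                               # J_j fits before the gap
                options.append(((v * f, u), val + v * f))
            for key, c in options:
                if key not in nxt or c < nxt[key]:
                    nxt[key] = c
        states = nxt
    return min(states.values())
```

For instance, with two unit-rate jobs, s_1^(1) = 2, and s_2^(1) = 4, the sketch puts one job before the gap on M_1 and one on M_2, each completing at time 2, for a total of 4.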

A Fully Polynomial Time Approximation Scheme.
In this subsection, we present a fully polynomial time approximation scheme for problem P_2 | nr-a, p_j = b_j t | ∑ C_j when machine M_2 is always available. Following Woeginger [15], we show that P_2 | nr-a, p_j = b_j t | ∑ C_j is DP-benevolent, which implies that there exists a fully polynomial time approximation scheme for our problem. For convenience, let t_0 = 1.
The fully polynomial time approximation scheme is based on Lemma 4 stated in Section 3.2. Thus, we first sort the jobs in nondecreasing order of their deteriorating rates such that b_1 ≤ b_2 ≤ ... ≤ b_n. The dynamic programming algorithm proposed in the following goes through n phases.
In the jth (j = 1, 2, ..., n) phase, we input the vector X_j = [b_j]; meanwhile, a state set S_j is generated. Any state in S_j is a vector [z_1, z_2, z_3, z] which encodes a partial schedule for the first j jobs J_1, ..., J_j. The component z_1 represents the total processing time before s_1^(1) on machine M_1, z_2 represents the total processing time after s_2^(1) on machine M_1, z_3 represents the total processing time on machine M_2, and the component z represents the objective value of the current schedule. The initial set S_0 contains the only state [1, s_2^(1), 1, 0]. The set S_j is generated from S_{j-1} by three mappings F_1, F_2, and F_3: intuitively, F_1 puts job J_j at the end of the sequence before s_1^(1) on machine M_1 if this is feasible for the given state, F_2 puts job J_j at the end of the sequence after s_2^(1) on machine M_1, and F_3 puts job J_j at the end of the sequence on machine M_2. Finally, set G(z_1, z_2, z_3, z) = z.
Combining the above discussion, we design a dynamic programming algorithm as follows.
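To illustrate how trimming the state space yields an approximation scheme, the sketch below keeps one representative state per cell of a geometric grid with ratio 1 + ε/(2n); the grid ratio, the rounding rule, and all identifiers are our own illustrative choices, not the exact construction certified by the DP-benevolence argument:

```python
# Rough sketch of a trimmed state-space DP in the spirit of the FPTAS
# discussion.  States are (z1, z2, z3, z): loads before/after the gap on
# M1, load on M2, and the objective value.  F1/F2/F3 append job J_j
# before the gap on M1, after the gap on M1, or on M2.  Trimming keeps
# the state with the smallest objective per cell of a geometric grid.

import math

def fptas_sum_cj(rates, s1, s2, eps):
    n = len(rates)
    delta = 1 + eps / (2 * n)          # geometric grid ratio (our choice)
    def cell(x):                       # grid index of a coordinate x >= 1
        return math.floor(math.log(x, delta)) if x > 1 else 0
    states = {(0, cell(float(s2)), 0, 0): (1.0, float(s2), 1.0, 0.0)}
    for b in sorted(rates):            # SDR order, per Lemma 4
        nxt = {}
        for (z1, z2, z3, z) in states.values():
            cand = [(z1, z2 * (1 + b), z3, z + z2 * (1 + b)),  # F2
                    (z1, z2, z3 * (1 + b), z + z3 * (1 + b))]  # F3
            if z1 * (1 + b) <= s1:                             # F1 feasible?
                cand.append((z1 * (1 + b), z2, z3, z + z1 * (1 + b)))
            for s in cand:
                k = (cell(s[0]), cell(s[1]), cell(s[2]), cell(1 + s[3]))
                if k not in nxt or s[3] < nxt[k][3]:
                    nxt[k] = s         # keep the best state in each grid cell
        states = nxt
    return min(s[3] for s in states.values())
```

Because each coordinate grows by at most a factor 1 + b_max per phase, the number of occupied grid cells stays polynomial in the input size and 1/ε, which is the mechanism the DP-benevolence framework formalizes.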
Algorithm DP2.

Note that the number of states in the above dynamic programming is bounded by s_2^(1) ∏_{j=1}^n (1 + b_j). We have the following result.

Theorem 6. There exists a fully polynomial time approximation scheme for problem P_2 | nr-a, p_j = b_j t | ∑ C_j when one machine is always available.
Proof. The functions F_1, F_2, and F_3 are vectors of polynomials with nonnegative coefficients, and the functions in F_1, F_2, and F_3 that yield the components are polynomials. Moreover, all the polynomials depend linearly on z_1, z_2, z_3, and z. The inequality inside the operator "if" can be checked in polynomial time. The objective function G is a polynomial with nonnegative coefficients. Therefore, similar to the example in Section 5.3 of Woeginger [15], it is not hard to verify that the above dynamic programming formulation satisfies the conditions of Lemma 6.1 and Theorem 2.5 of [15]. Thus, problem P_2 | nr-a, p_j = b_j t | ∑ C_j is DP-benevolent, and as a result there exists a fully polynomial time approximation scheme for it when one machine is always available.

Lemma 7.
In any optimal solution to problem P_2 | nr-a, p_j = b_j t | ∑ w_j C_j, the jobs scheduled before the nonavailable interval are processed in the WDR order, and so are the jobs scheduled after the nonavailable interval and the jobs on machine M_2.
This lemma can be proved by a job-interchange argument. We assume that the jobs are reindexed in the WDR order. Similar to the dynamic program of Section 3.2, let f_j(v, u) denote the optimal value of the objective function subject to the following conditions: (i) The jobs in consideration are J_1, ..., J_j.
(ii) The total processing time on M_1 before s_1^(1) is v. (iii) The total processing time on M_2 is u.
We design a dynamic programming algorithm as follows.
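Under the same state encoding, the weighted extension changes only the cost added in each case, which becomes w_j times the completion time of J_j. A sketch, assuming the jobs are presorted in the WDR order of Lemma 7 (all identifiers are illustrative, not the paper's algorithm):

```python
# Sketch of the weighted variant: identical state space (v, u), but each
# placement of job J_j now contributes w_j times its completion time to
# the objective sum w_j * C_j.  Jobs must already be in WDR order.

from fractions import Fraction

def dp_weighted_completion(jobs, s1, s2):
    """jobs: list of (b_j, w_j) in WDR order; returns min sum w_j C_j."""
    states = {(Fraction(1), Fraction(1)): Fraction(0)}
    B = Fraction(1)
    for b, w in jobs:
        f = 1 + Fraction(b)
        B *= f
        nxt = {}
        for (v, u), val in states.items():
            options = [((v, u), val + w * s2 * B / (v * u)),  # after the gap on M1
                       ((v, u * f), val + w * u * f)]         # on M2
            if v * f <= s1:
                options.append(((v * f, u), val + w * v * f)) # before the gap on M1
            for key, c in options:
                if key not in nxt or c < nxt[key]:
                    nxt[key] = c
        states = nxt
    return min(states.values())
```

With unit weights this reduces to the unweighted recurrence, which is a quick consistency check on the construction.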

Conclusions
In this paper, we considered parallel-machine scheduling with time-dependent processing times and machine availability constraints. We showed that the two problems P_m | nr-a, p_j = b_j t | C_max and P_m | nr-a, p_j = b_j t | ∑ C_j are NP-hard in the strong sense. We analyzed the LDR and LS algorithms and a lower bound on any optimal schedule, and we presented a dynamic programming algorithm and a fully polynomial time approximation scheme for the problem P_2 | nr-a, p_j = b_j t | ∑ C_j. Furthermore, we extended the dynamic programming algorithm to problem P_2 | nr-a, p_j = b_j t | ∑ w_j C_j.
For future research, other objectives are worth considering. The design of a PTAS for our problems is another worthy topic.