Parallel Machine Scheduling with Nested Processing Set Restrictions and Job Delivery Times

The problem of scheduling jobs with delivery times on parallel machines is studied, where each job can be processed only on a specific subset of the machines, called its processing set. Any two distinct processing sets are either nested or disjoint; that is, they do not partially overlap. All jobs are available for processing at time 0. The goal is to minimize the time by which all jobs are delivered, which is equivalent, from the optimization viewpoint, to minimizing the maximum lateness. A list scheduling approach is analyzed and its approximation ratio of 2 is established. In addition, a polynomial time approximation scheme is derived.


Introduction
The problems of scheduling with processing set restrictions have been extensively studied in the past few decades [1,2]. In this class of problems, we are given a set of n jobs J = {1, 2, ..., n} and a set of m parallel machines M = {M_1, M_2, ..., M_m}. Each job j can only be processed on a certain subset M_j ⊆ M of the machines, called its processing set, and, on those machines, it takes p_j time units of uninterrupted processing to complete. Each machine can process at most one job at a time. The goal is to find an optimal schedule, where optimality is defined by some problem-dependent objective.
There are several important special cases of processing set restrictions: inclusive, nested, interval, and tree-hierarchical [2]. In the inclusive processing set case, for any two jobs j_1 and j_2, either M_{j_1} ⊆ M_{j_2} or M_{j_2} ⊆ M_{j_1}. In the nested processing set case, either M_{j_1} ∩ M_{j_2} = ∅, or M_{j_1} ⊆ M_{j_2}, or M_{j_2} ⊆ M_{j_1}. In the interval processing set case, the machines are linearly ordered, and each job j is associated with two machine indices a_j and b_j such that M_j = {M_{a_j}, M_{a_j+1}, ..., M_{b_j}}. It is easy to see that the inclusive and the nested processing set cases are special cases of the interval processing set case. In the tree-hierarchical processing set case, each machine is represented by a vertex of a tree, and each job j is associated with a machine index a_j such that M_j consists of the machines on the unique path from M_{a_j} to the root of the tree.
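As a small illustration, the nested condition can be checked directly: two processing sets are admissible together exactly when they do not partially overlap. The helper below is ours, not from the paper.

```python
def nested_or_disjoint(set_a, set_b):
    """Return True iff the two processing sets are nested or disjoint,
    i.e., they do not partially overlap."""
    a, b = set(set_a), set(set_b)
    return a.isdisjoint(b) or a <= b or b <= a
```

For example, {M_1} and {M_1, M_2} are nested, {M_1} and {M_2, M_3} are disjoint, but {M_1, M_2} and {M_2, M_3} partially overlap and would violate the nested restriction.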
In this paper, we consider the problem of scheduling jobs with nested processing set restrictions on parallel machines. Besides its processing time p_j and processing set M_j, each job j requires an additional delivery time q_j after completing its processing. If S_j denotes the time job j starts processing, then job j is delivered at time D_j = S_j + p_j + q_j, which is called its delivery completion time. All jobs are available for processing at time 0. The objective is to minimize the time by which all jobs are delivered, that is, the maximum delivery completion time D_max = max_j D_j. Minimizing the maximum delivery completion time is equivalent to minimizing the maximum lateness from the optimization viewpoint [3]. Following the classification scheme for scheduling problems by Graham et al. [4], this problem is denoted by P | M_j(nested) | D_max.

The motivation for this problem is the scenario in which the jobs (with nested processing set restrictions) are first processed on the machines and then delivered to their respective customers. To be competitive, the jobs need to be delivered to the customers as soon as possible, so industry practitioners must coordinate job production and job delivery. In manufacturing and distribution systems, finished jobs are delivered by vehicles such as trucks. Since there are sufficient vehicles for delivering jobs, delivery is a non-bottleneck activity; we therefore assume that all jobs may be delivered simultaneously. Considering job production and job delivery as one system, we choose the cost function to measure the customer service level. In particular, we are interested in the objective of minimizing the time by which all jobs are delivered.
The problem as stated is a natural generalization of the strongly NP-hard problem P ‖ C_max, which corresponds to the special case where all M_j = M and all q_j = 0 [5]. For NP-hard problems, research focuses on developing polynomial time approximation algorithms. Given an instance I of a minimization problem and an approximation algorithm A, let A(I) and OPT(I) denote the objective value of the solution obtained by algorithm A and the optimal solution value, respectively, when applied to I. If A(I)/OPT(I) ≤ ρ for all I, then we say that algorithm A has approximation ratio ρ, and A is called a ρ-approximation algorithm for this problem. Ideally, one hopes to obtain a family of polynomial time algorithms such that, for any given ε > 0, the corresponding algorithm is a (1 + ε)-approximation algorithm; such a family is called a polynomial time approximation scheme (PTAS) [6].
As mentioned above, the classic scheduling problem (without processing set restrictions) of minimizing the maximum delivery completion time and the problem of minimizing makespan with nested processing set restrictions have been studied in the literature. However, to the best of our knowledge, the problem of minimizing the maximum delivery completion time with nested processing set restrictions, P | M_j(nested) | D_max, has not been studied to date. In this paper, we first use Graham's list scheduling [17] to obtain a simple and fast 2-approximation algorithm. We then derive a polynomial time approximation scheme, which builds heavily on the ideas of [15]. The PTAS result generalizes the approximation schemes of [15,16], both of which deal only with the special case where all q_j = 0.
The paper is organized as follows. Section 2 presents a 2-approximation algorithm which uses list scheduling as a subroutine. The next three sections are devoted to designing the polynomial time approximation scheme: Section 3 shows how to simplify the input instance to obtain a so-called rounded instance; Section 4 shows how to solve the rounded instance optimally; Section 5 puts things together to derive the polynomial time approximation scheme. Section 6 concludes the paper.

A 2-Approximation Algorithm
In this section, we present a simple and fast 2-approximation algorithm for P | M_j(nested) | D_max.
As observed in [12], nested processing sets have a partial ordering defined by the inclusion relationship and, thus, offer a natural topological sort on jobs, taking more constrained jobs first.
We now explore the behavior of Graham's list scheduling algorithm [17], with the jobs sorted to respect nestedness. The sorted order does not depend on the delivery times. The resulting algorithm is called Nested-LS.

Nested-LS
Step 1. Place all the jobs in a list in the order of the topological sort on jobs, taking more constrained jobs first. Set Load_i = 0 for i = 1, 2, ..., m.
Step 2. For the first unscheduled job j in the list, select a machine M_i ∈ M_j for which Load_i is the smallest (ties broken arbitrarily). Assign job j to machine M_i. Set Load_i = Load_i + p_j. Repeat this step until all the jobs are scheduled.
The load on a machine is defined to be the total processing time of the jobs assigned to it. The quantity Load_i represents the current load on machine M_i during the run of Nested-LS, i = 1, 2, ..., m.
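The two steps of Nested-LS can be sketched in Python as follows (a minimal sketch; the job and machine encodings are illustrative, and for nested processing sets sorting by processing-set size yields a valid topological order):

```python
def nested_ls(jobs, m):
    """jobs: list of (p_j, q_j, M_j), with M_j a set of machine indices.
    Returns the start times and the maximum delivery completion time."""
    # Step 1: more constrained jobs first; with nested processing sets,
    # sorting by |M_j| ascending respects the inclusion order.
    order = sorted(range(len(jobs)), key=lambda j: len(jobs[j][2]))
    load = [0.0] * m                 # Load_i = 0 for every machine
    start = [0.0] * len(jobs)
    # Step 2: assign each job to a least-loaded eligible machine.
    for j in order:
        p, q, eligible = jobs[j]
        i = min(eligible, key=lambda i: load[i])
        start[j] = load[i]
        load[i] += p
    d_max = max(start[j] + p + q for j, (p, q, _) in enumerate(jobs))
    return start, d_max
```

For instance, with two machines and jobs (p, q, M_j) = (2, 1, {M_1}), (3, 0, {M_1, M_2}), (1, 5, {M_1, M_2}), the restricted job is placed first and the schedule delivers all jobs by time 8.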
Theorem 1. Nested-LS is a 2-approximation algorithm for P | M_j(nested) | D_max.

Proof. Let OPT be the objective value of an optimal schedule, and denote by D_max the objective value of the schedule generated by Nested-LS. Let j_1 be the first job scheduled in Step 2 for which D_max = S_{j_1} + p_{j_1} + q_{j_1} holds, where S_{j_1} denotes the time job j_1 starts processing. Remove all the following jobs from the list. Remove all the machines in M \ M_{j_1} and all the jobs assigned to these machines. This cannot increase OPT and does not decrease D_max.
A straightforward lower bound on OPT is OPT ≥ p_j + q_j for any job j. By the rule of Nested-LS, at the time when job j_1 is assigned to its machine M_i, M_i is the least-loaded machine among the machines in M_{j_1}. It follows that S_{j_1} < OPT. Consequently, we get

D_max = S_{j_1} + p_{j_1} + q_{j_1} < OPT + OPT = 2·OPT.

The initial step of sorting the jobs to respect nestedness requires O(n log n) time. Assigning a job to the least-loaded eligible machine in Step 2 takes O(log m) time. Thus, Nested-LS runs in O(n log n + n log m) time.

Simplifying the Input
The nested structure of the processing sets can be depicted by a rooted tree T = (V, E), in which each processing set is represented by a vertex, and the predecessor relationship is defined by inclusion of the processing sets. Each machine can be regarded as a one-element processing set and thus corresponds to a leaf of T, even if no jobs are associated with this processing set. The root of T corresponds to M = {M_1, M_2, ..., M_m}.
For each vertex v ∈ V(T), let M(v) denote the set of machines associated with v, which is the disjoint union of the processing sets associated with the sons of v. Let J(v) denote the set of jobs j for which M_j coincides with M(v).
Following [15], we transform tree T into a binary tree as follows. If there is a vertex v with at least three sons, create a new vertex u as the new father of two sons of v and as a new son of v. Repeat this procedure until we reach a binary tree, which consists of m leaves and m − 1 nonleaf vertices. Therefore, we can assume without loss of generality that T is a binary tree.
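The binarization procedure can be sketched as follows (an illustrative implementation under our own encoding of the tree as a children map; auxiliary vertex names are ours):

```python
def binarize(children, root):
    """children: dict mapping each vertex to the list of its sons.
    Repeatedly gives a vertex with >= 3 sons a new son that adopts
    two of them, until every vertex has at most two sons."""
    counter = [0]

    def fresh():
        counter[0] += 1
        return ("aux", counter[0])   # new internal vertex

    stack = [root]
    while stack:
        v = stack.pop()
        kids = children.get(v, [])
        while len(kids) > 2:
            u = fresh()
            children[u] = kids[:2]   # u becomes the father of two sons of v
            kids = [u] + kids[2:]    # and a new son of v
        children[v] = kids
        stack.extend(kids)
    return children
```

A root with four leaves, for example, ends up as a binary tree with 4 leaves and 4 − 1 = 3 nonleaf vertices, matching the count stated above.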
Given an instance I of P | M_j(nested) | D_max, let OPT(I) be the objective value of an optimal schedule for I. Let A be the objective value of the schedule for I generated by Nested-LS. We have

OPT(I) ≤ A ≤ 2·OPT(I). (1)

Let k be some fixed positive integer. Classify the jobs as big and small according to their processing times: job j is big if p_j > A/(k(k+1)), and otherwise it is small. The rounded instance I^#(k) is obtained from I as follows.
(i) Round each delivery time q_j up to the nearest integer multiple of A/k. Since q_j ≤ OPT(I) ≤ A, there are at most k + 1 distinct rounded delivery times; denote them by q_1, q_2, ..., q_{k+1}.

(ii) For each big job j, round its processing time p_j up to the nearest integer multiple of A/(k²(k+1)). Let p_j^# denote the rounded processing time of big job j. Note that p_j ≤ p_j^# ≤ (1 + 1/k)·p_j.

(iii) For each vertex v ∈ V(T), let P_l(v) be the total processing time of the small jobs with delivery time q_l in J(v). Let P_l^#(v) be the value of P_l(v) rounded up to the nearest integer multiple of A/(k(k+1)). The small jobs with delivery time q_l in J(v) are replaced with P_l^#(v)·k(k+1)/A new jobs, each of which has delivery time q_l and processing time A/(k(k+1)), l = 1, 2, ..., k+1.
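The rounding steps can be sketched as follows (an illustrative implementation under the stated rounding rules; the function names, the grouping of small jobs by processing set, and the floating-point tolerance are ours):

```python
import math

def round_instance(jobs, A, k):
    """Build a rounded instance I#(k). jobs: list of (p_j, q_j, M_j)
    with M_j a frozenset. Assumes q_j <= A, which holds since
    q_j <= OPT(I) <= A."""
    unit_small = A / (k * (k + 1))        # small-job slot size
    unit_big = A / (k**2 * (k + 1))       # big-job rounding unit
    unit_q = A / k                        # delivery-time rounding unit
    ru = lambda x, u: math.ceil(x / u - 1e-9) * u  # round up to a multiple of u
    big, small_sum = [], {}
    for p, q, ms in jobs:
        qr = ru(q, unit_q) if q > 0 else 0.0       # step (i)
        if p > unit_small:
            big.append((ru(p, unit_big), qr, ms))  # step (ii)
        else:
            small_sum[(ms, qr)] = small_sum.get((ms, qr), 0.0) + p
    rounded = list(big)
    for (ms, qr), total in small_sum.items():      # step (iii)
        n_slots = round(ru(total, unit_small) / unit_small)
        rounded += [(unit_small, qr, ms)] * n_slots
    return rounded
```

With A = 12 and k = 2, a big job of length 5 keeps its length (already a multiple of A/(k²(k+1)) = 1), while small jobs of total length 2.5 become two uniform jobs of length A/(k(k+1)) = 2.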
Lemma 2. There is a schedule for I^#(k) with objective value at most (1 + 1/k)·OPT(I) + 4A/k.

Proof. Let Σ be an optimal schedule for instance I with objective value OPT(I). Recall that the single machine case 1 ‖ D_max can be solved optimally by Jackson's rule [18]: process the jobs successively in order of nonincreasing delivery times. Hence, we can assume that, in Σ, on each machine the jobs with the same delivery time are processed together in succession. Let P_{il} be the total processing time of the small jobs with delivery time q_l processed on machine M_i in Σ. Let P_{il}^# be the value of P_{il} rounded up to the nearest integer multiple of A/(k(k+1)). Replace the small jobs with delivery time q_l processed on machine M_i in Σ by 2 + P_{il}^#·k(k+1)/A slots, each of which has delivery time q_l and size A/(k(k+1)), i = 1, 2, ..., m, l = 1, 2, ..., k+1.
We next explain how to assign all small jobs in I^#(k) to these slots of size A/(k(k+1)). Starting from the leaves of tree T, we work our way towards the root in a bottom-up fashion. Suppose that we are handling vertex v ∈ V(T). At this point, all the descendants of v except v itself have already been handled and some slots have been occupied. Let T_v be the subtree of T rooted at v, which contains exactly 2m(v) − 1 vertices, where m(v) = |M(v)| denotes the number of machines (leaves) in T_v. Let P_l denote the total processing time of the small jobs with delivery time q_l in all descendants of v (including v itself) in instance I, and let P_l^# denote the total processing time of the corresponding small jobs in I^#(k), l = 1, 2, ..., k+1. Let Q_l^# denote the total size of the slots with delivery time q_l and size A/(k(k+1)) on the machines in the descendant leaves of v. We have the following two inequalities: P_l^# ≤ P_l + (2m(v) − 1)·A/(k(k+1)) and Q_l^# ≥ P_l + 2m(v)·A/(k(k+1)). Therefore, we get P_l^# ≤ Q_l^#. Let SJ_l^#(v) denote the set of small jobs j^# in I^#(k) with delivery time q_{j^#} = q_l, processing time p_{j^#} = A/(k(k+1)), and processing set M_{j^#} = M(v). Since P_l^# ≤ Q_l^#, when we handle vertex v, there are enough unoccupied slots of size A/(k(k+1)) to accommodate the jobs in SJ_l^#(v). For each job in SJ_l^#(v) (l = 1, 2, ..., k+1), we assign an unoccupied slot to it and then mark this slot as occupied.
After we handle the root of T, all small jobs in I^#(k) have been fitted into the slots of size A/(k(k+1)), and we thereby get an assignment of the small jobs in I^#(k) to the m machines. We schedule the small jobs in I^#(k) assigned to machine M_i as follows. If P_{il} = 0 (which means that in Σ machine M_i processes no small job with delivery time q_l) but the replacement procedure has assigned some small jobs with delivery time q_l to M_i, then we schedule these small jobs on M_i first, l = 1, 2, ..., k+1, i = 1, 2, ..., m. We then schedule all the other small jobs assigned to M_i by Jackson's rule, that is, keeping their order in Σ. (In fact, scheduling all the small jobs assigned to M_i by Jackson's rule could improve the solution; we schedule them in this way only for ease of the subsequent analysis.) The big jobs in I^#(k) are easily scheduled: we simply replace every big job in Σ by its rounded counterpart in I^#(k).
Let Σ^# denote the obtained schedule for I^#(k). It remains only to analyze how the objective value changes when we move from schedule Σ to schedule Σ^#. Rounding up the delivery times may increase the objective value by A/k. Rounding up the processing times of the big jobs may increase the objective value by a factor of 1 + 1/k. Since there are at most k + 1 different delivery times in I^#(k), the replacement procedure for the small jobs increases the objective value by at most (k + 1)·3A/(k(k+1)) = 3A/k. Hence, D^#, the objective value of Σ^#, is no more than (1 + 1/k)·OPT(I) + 4A/k.

Lemma 3. Let Σ^# be a schedule for I^#(k) with objective value D^#. Then there is a schedule for I with objective value D ≤ D^# + A/k.
Proof. We assume without loss of generality that, in Σ^#, on each machine the jobs are scheduled by Jackson's rule, and thus the small jobs with the same delivery time are processed together in succession on each machine. Let P_{il}^# be the total processing time of the small jobs with delivery time q_l processed on machine M_i in Σ^#, l = 1, 2, ..., k+1, i = 1, 2, ..., m.
We now explain how to assign all small jobs in I to the m machines. Starting from the leaves of tree T, we work our way towards the root in a bottom-up fashion, as in the proof of Lemma 2. Suppose that we are handling vertex v ∈ V(T). At this point, all the descendants of v except v itself have already been handled and the associated small jobs have been assigned. Let P_l denote the total processing time of the small jobs with delivery time q_l in all descendants of v (including v itself) in instance I, and let P_l^# denote the total processing time of the corresponding small jobs in I^#(k), l = 1, 2, ..., k+1. We have P_l^# ≥ P_l. Let SJ_l(v) denote the set of small jobs j in I with delivery time q_j = q_l and processing set M_j = M(v), l = 1, 2, ..., k+1. For each machine M_i ∈ M(v), if P_{il}^# > 0 (which means that in Σ^# machine M_i processes at least one small job with delivery time q_l), we assign the small jobs in SJ_l(v) to M_i until the first time that the total processing time of the small jobs with delivery time q_l assigned to M_i exceeds P_{il}^# (or until there are no unassigned small jobs in SJ_l(v)). Since P_l^# ≥ P_l, each job in SJ_l(v) can be assigned to a machine in M(v).
After we handle the root of T, all small jobs in I have been assigned to the m machines. The big jobs in I are easily scheduled: we simply replace every rounded big job in Σ^# with its original counterpart in I. We then schedule all the jobs assigned to M_i by Jackson's rule, that is, keeping their order in Σ^#, i = 1, 2, ..., m.
Let D_max be the objective value of the obtained schedule for I. Since all small jobs in I have processing time at most A/(k(k+1)) and there are at most k + 1 different delivery times, the replacement procedure for the small jobs may increase the objective value by at most (k + 1)·A/(k(k+1)) = A/k. The replacement of the big jobs does not increase the objective value. Hence, we get D_max ≤ D^# + A/k.

Solving the Rounded Instance Optimally
In this section, we present a polynomial time optimal algorithm for the rounded instance I^#(k) obtained in the preceding section. The basic idea is to generalize the dynamic programming method used in [15] for solving problem P | M_j(nested) | C_max.
Recall that all jobs in the rounded instance I^#(k) have processing times of the form cA/(k²(k+1)), where c ∈ {k, k+1, ..., k²(k+1)}. We can thus represent subsets of the jobs in I^#(k) as vectors n⃗ = (n_{c,l}), where n_{c,l} denotes the number of jobs with processing time cA/(k²(k+1)) and delivery time q_l. Let Λ denote the set of all such vectors; in particular, the job set J(v) is encoded by a vector n⃗(v). Let v ∈ V(T) and n⃗ ∈ Λ, where vector n⃗ represents a subset of jobs whose processing sets are proper supersets of M(v). Let g(v, n⃗) denote the minimum objective value over all the schedules which process the jobs in the descendants of v and the additional jobs in n⃗ on the machines in M(v). All jobs in the descendants of v must obey the processing set restrictions, whereas the jobs in n⃗ can be assigned to any machine in M(v).
All values g(v, n⃗) can be computed and tabulated in a bottom-up fashion. If v is a leaf, then g(v, n⃗) is equal to the objective value of the single machine schedule which processes all the jobs in n⃗(v) + n⃗ by Jackson's rule. If v is a nonleaf vertex, then v has two sons v_1 and v_2, and

g(v, n⃗) = min { max{g(v_1, n⃗_1), g(v_2, n⃗_2)} : n⃗_1 + n⃗_2 = n⃗(v) + n⃗ }.

Since each vector in Λ has at most k²(k+1)(k+1) coordinates, each taking a value between 0 and n, there are O(n^{k²(k+1)(k+1)}) different vectors in Λ; as k is a fixed integer that does not depend on the input, all values g(v, n⃗) can be computed in polynomial time. Finally, since, for the root vertex r, there is no job whose processing set is a proper superset of M(r), the optimal objective value for I^#(k) is achieved by computing g(r, (0, ..., 0)).
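The base case of the recursion, the single-machine value at a leaf, is computed by Jackson's rule; a minimal sketch (function name is ours):

```python
def jackson_objective(jobs):
    """Single-machine 1 || D_max: process the jobs in nonincreasing
    order of delivery times and return max_j (C_j + q_j)."""
    t, d_max = 0.0, 0.0
    for p, q in sorted(jobs, key=lambda jq: -jq[1]):
        t += p                      # completion time C_j of the current job
        d_max = max(d_max, t + q)   # its delivery completion time
    return d_max
```

For example, for jobs (p, q) = (2, 5) and (3, 1), the rule processes the job with the larger delivery time first and delivers everything by time 7, whereas the reverse order would need 10.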
We establish the following theorem.
Theorem 4. For any fixed integer k, the rounded instance I^#(k) can be solved optimally in polynomial time.

The Approximation Scheme
We can now put things together.
Let ε be an arbitrarily small positive constant. Set k* = ⌈11/ε⌉. Given an instance I of P | M_j(nested) | D_max, construct the rounded instance I^#(k*) as described in Section 3. Solve I^#(k*) optimally as described in Section 4. Denote by Σ^# the resulting optimal schedule, with objective value D^#_max, for I^#(k*). Transform Σ^# into a schedule Σ, with objective value D_max, for instance I as described in the proof of Lemma 3.
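Combining bound (1) with Lemmas 2 and 3, a short calculation justifies the choice k* = ⌈11/ε⌉:

```latex
\begin{aligned}
D_{\max} &\le D^{\#}_{\max} + \frac{A}{k^{*}} && \text{(Lemma 3)}\\
&\le \left(1+\frac{1}{k^{*}}\right)\mathrm{OPT}(I) + \frac{4A}{k^{*}} + \frac{A}{k^{*}}
  && \text{(Lemma 2, since $\Sigma^{\#}$ is optimal for $I^{\#}(k^{*})$)}\\
&\le \left(1+\frac{1}{k^{*}}\right)\mathrm{OPT}(I) + \frac{10\,\mathrm{OPT}(I)}{k^{*}}
  && \text{(by (1), $A \le 2\,\mathrm{OPT}(I)$)}\\
&= \left(1+\frac{11}{k^{*}}\right)\mathrm{OPT}(I) \le (1+\varepsilon)\,\mathrm{OPT}(I).
\end{aligned}
```

Since k* is a fixed integer for each fixed ε, the whole procedure runs in polynomial time, and it is therefore a PTAS.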

Concluding Remarks
In this paper, we initiated the study of scheduling parallel machines with job delivery times and nested processing set restrictions. The objective is to minimize the maximum delivery completion time. For this strongly NP-hard problem, we presented a simple and fast 2-approximation algorithm. We also presented a polynomial time approximation scheme. A natural open problem is to design fast algorithms with approximation ratios better than 2. It would also be interesting to study the problem with job release times.