Mathematical Model and Algorithm for the Reefer Mechanic Scheduling Problem at Seaports

With the development of seaborne logistics, the international trade of goods transported in refrigerated containers is growing fast. Refrigerated containers, also known as reefers, are used in the transportation of temperature-sensitive cargo, such as perishable fruits. This trend brings new challenges to terminal managers, namely, how to efficiently arrange mechanics to plug and unplug power for the reefers (i.e., tasks) at yards. This work investigates the reefer mechanic scheduling problem at container ports. To minimize the sum of the total tardiness of all tasks and the total working distance of all mechanics, we formulate a mathematical model. For the resolution of this problem, we propose a differential evolution (DE) algorithm combined with efficient heuristics, local search strategies, and a parameter adaptation scheme. The proposed algorithm is tested and validated through numerical experiments. Computational results demonstrate the effectiveness and efficiency of the proposed algorithm.


Introduction
In today's globalized world, cost-effective transportation systems that link global supply chains are the engine fuelling economic development and prosperity. With 80 percent of global merchandise trade by volume carried by sea and handled by ports worldwide, the strategic economic importance of maritime transport as a trade enabler cannot be overemphasized. World container port throughput surpassed 600 million 20-foot equivalent units in 2012, an increase of an estimated 3.8 percent (UNCTAD 2013).
Among container businesses, refrigerated containers, or reefers (see Figure 1), are employed to export perishable goods, such as beef and fruit, and to maintain customers' goods at the best quality during transport to their final destinations. With this trend, some companies, like CMA CGM, have invested in large fleets of reefers. UNCTAD (2013) also mentions that Container Industry San Antonio is going to be the first reefer container factory in South America. The company is scheduled to produce 40,000 reefer containers per year. South America is among the regions with the highest demand for empty reefer containers for export. The impact of reefers on society is vast, allowing consumers all over the world to enjoy fresh produce at any time of year and to experience previously unavailable fresh produce from many other parts of the world (https://en.wikipedia.org). As for container terminals, this trend brings new challenges to managers: (i) a large number of reefers are transshipped within a relatively short time; (ii) the exact arrival and departure times of reefers are not known beforehand. To guarantee the service quality for reefers without increasing the cost of employing reefer mechanics, terminal managers need to optimize the usage of current employees.
However, little effort has been devoted to the optimization of human resources, such as the reefer mechanics considered here. To guarantee the quality of goods transported in reefers, port managers need to properly arrange mechanics to plug reefers into the nearest electricity pole when they arrive and to unplug them before departure. Such tasks need to be performed punctually, since late completion may irreversibly degrade the quality of the cargo. The problem described is similar to the scenario of parallel machine total tardiness minimization scheduling where jobs arrive over time. Liu et al. [12], Chu [13], Yalaoui and Chu [14], and Kacem et al. [15] investigate parallel machine scheduling problems with setup times and devise branch and bound algorithms to solve them. Schutten and Leussink [16] discuss a parallel machine scheduling problem with release times, in which due dates and setup times are also taken into consideration, and present a branch and bound algorithm to solve it. Chen and Lee [17] implement a column generation approach for a parallel machine scheduling problem with a due window. These approaches can obtain optimal results but are applicable only to problems with a limited number of jobs and machines, due to unacceptable computation times. Hence, heuristic algorithms prevail over exact algorithms in this field. Chiang et al. [18] propose a memetic algorithm. Torabi et al. [19] formulate it as a biobjective problem and construct a PSO algorithm to solve it. Lamothe et al. [20] propose an efficient simulated annealing algorithm for it.
The only literature referring to the reefer mechanic scheduling problem (RMSP) can be found in [21, 22]. A general model that covers the scheduling of straddle carriers, automated guided vehicles, stacking cranes, and reefer mechanics is developed by Hartmann [21], and a genetic algorithm is applied to solve it. Quite recently, Hartmann [22] studied the RMSP with the aim of minimizing the total distance between any two consecutive jobs, which arises from their different locations. An insertion-based heuristic is proposed and evaluated through a simulation model. However, there is no reference to such a scheduling problem in the deterministic scheduling literature. To the best of our knowledge, we are the first to study the reefer mechanic scheduling problem from an optimization perspective. The main contributions that distinguish our work from theirs are (i) the development of a mathematical model for the RMSP, (ii) the design of an efficient differential evolution (DE) algorithm, and (iii) extensive numerical experiments to verify its superiority.
The remainder is organized as follows. In Section 2, we describe the problem and establish the MIP (Mixed Integer Programming) model. A DE algorithm is developed in Section 3. Section 4 presents extensive computational results. Section 5 concludes the work.

Problem Description
2.1. Background. Some restricted stacking areas on the yard are equipped with electricity poles for supporting reefers. Reefer mechanics plug in arriving containers and unplug departing containers no later than a desired handling completion time. Each plugging or unplugging task takes a certain time. After completing one task, a mechanic walks to the location of the next task.
As mentioned in Hartmann (2013), the basic process associated with reefer handling is as follows: (i) the terminal operating system (TOS) transmits job information to a reefer mechanic, (ii) the reefer mechanic receives the job on his portable radio data device, (iii) the mechanic walks to the reefer location, and (iv) the mechanic performs the plugging or unplugging of the container and returns to (i). Each plugging job is related to an arriving reefer, and each unplugging job corresponds to a departing container.
Each job has a release time, that is, the earliest possible start time of handling; a processing time, that is, the time taken for the plugging/unplugging and the repair of mechanical problems; and a due date, that is, the desired completion time of handling. The release time of a plugging job corresponds to the time when a container is placed on the yard by a stacking crane. For unplugging jobs, the release time depends on the allowed time without power. The due date indicates the time at which the stacking crane has to pick up the container and is determined by each individual container. A due date may be slightly violated. The due date is container-specific because (i) some reefers are equipped with a cooling water system and can endure a longer time without electricity, (ii) inbound/outbound containers from different vessels may consume different transportation times on the yard, and (iii) the cargos contained in different reefers have different tolerances.
As a reefer may be located at any position on the yard designated for reefers, the walking distance of a mechanic between two jobs is symmetric and satisfies the triangle inequality. This corresponds to a symmetric setup time between two tasks. In the RMSP, a group of mechanics is available at all times. The aim of the RMSP is to minimize the sum of the total tardiness of all reefers and the total walking distance of all mechanics.

Mathematical Model.
To facilitate model formulation, we define the RMSP on a directed graph G = (V, A), where V and A are the node and arc sets, respectively. The node set V consists of a set of tasks or jobs J = {1, 2, ..., n}, each representing the plugging or unplugging of a reefer, and two dummy nodes (0 and n + 1) representing the mechanic start and end nodes. The arc set A is given by A = {(i, j) : i ∈ J ∪ {0}, j ∈ J ∪ {n + 1}, i ≠ j}. A set of homogeneous mechanics M = {1, 2, ..., m} is available at all times, and all mechanics process jobs at the same speed. Each node/job j is associated with a processing time p_j and a due date d_j. Associated with each arc (i, j) is a setup time s_ij specified for the pair of jobs i and j. Clearly, s_ij = s_ji. In this sense, the RMSP aims at determining a set of mechanic schedules covering all jobs that minimizes the sum of the total tardiness of all jobs and the total walking distance of all mechanics.
The objective function is to minimize the sum of the total tardiness and the total walking distance of all mechanics. As the walking speeds of mechanics are typically identical, we use the setup time between two jobs to denote their walking distance. Since the objective function is a weighted sum, the distance term can simply be rescaled by the walking speed.
We are dealing with a deterministic scheduling optimization problem. To define the problem precisely, we construct a MIP formulation. First, parameters and decision variables are introduced below. The MIP Formulation. The RMSP can be mathematically formulated as follows:

Parameters
In this formulation, the objective (1) is to minimize the sum of the total tardiness of all jobs and the total walking distance of all mechanics. Constraints (2) ensure that each job is processed exactly once. Constraints (3) guarantee that each mechanic can be used at most once, which also implies that no more than |M| mechanics can be used in total. Constraints (4) are flow conservation constraints that ensure the jobs are correctly ordered for each mechanic. Constraints (5) specify that the completion time of the first job processed by a mechanic must be no smaller than its processing time. Constraints (6) indicate that each job can be processed only after its release time. Constraints (7) ensure the precedence relation. Specifically, if job i immediately precedes job j on mechanic k (i.e., x_ijk = 1), then the completion time of job j (i.e., C_j) must be no smaller than the completion time of job i (i.e., C_i) plus the setup time s_ij and the processing time p_j. Otherwise, the big-M renders the constraint redundant. Constraints (8) define the tardiness of the jobs on each mechanic. Constraints (9) and (10) clarify the domains of the variables.
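To make the interplay of release times, setup times, and due dates concrete, the following sketch (function and data names are ours, not the paper's) evaluates one mechanic's route under these rules and returns the two quantities the objective combines.

```python
def evaluate_route(route, release, proc, due, setup):
    """Evaluate one mechanic's job sequence (a sketch).

    route: ordered list of job indices; setup[i][j] is the symmetric
    walking time/distance between jobs i and j.  Returns the route's
    total tardiness and total walking distance.
    """
    time, tardiness, distance = 0, 0, 0
    prev = None
    for j in route:
        walk = setup[prev][j] if prev is not None else 0
        distance += walk
        # a job cannot start before its release time (cf. Constraints (6))
        start = max(time + walk, release[j])
        time = start + proc[j]
        # tardiness measured against the due date (cf. Constraints (8))
        tardiness += max(0, time - due[j])
        prev = j
    return tardiness, distance
```

The weighted sum of these two quantities over all mechanics is the objective (1).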

Solution Method
The reefer mechanic scheduling problem can be categorized as a combinatorial optimization problem with great computational complexity. Commercial optimization software such as CPLEX can solve the above MIP formulation for small-size problems but becomes impractical for large-size ones, which is experimentally verified in Section 4. Metaheuristics inspired by natural systems, however, are capable of obtaining high-quality solutions within moderate computational time, serving as a good alternative when dealing with real-world problems. The DE algorithm, initially proposed by Storn and Price [23], has emerged as a simple yet efficient metaheuristic with successful applications in many fields, such as the job-shop scheduling problem [24], the vehicle routing problem [25], forecasting [26], the cyclic hoist scheduling problem [27], the reactive power management problem [28], and the parallel batch processing machine scheduling problem [29]. Therefore, in this paper, we propose a new DE algorithm combining DE with multiple improvement strategies to solve the RMSP. In the following paragraphs, we describe how to implement it, including the solution representation and decoding method, parameter setting, DE-based search, and local search strategies.
Step 1: Initialize a population of NP individuals;
Step 2: Evaluate the NP individuals and record the current best solution;
Step 3: While the preset stopping condition is not satisfied DO
  Step 3.1: Mutation — generate a mutant population by using a mutation operator;
  Step 3.2: Crossover — mix the parent and mutant populations to create a trial population;
  Step 3.3: Selection — select the fitter individuals to survive into the next generation;
End
Step 4: Return the best solution.

A Brief Introduction to DE.
The DE algorithm is a stochastic population-based metaheuristic that implements the evolutionary generation-and-test paradigm for global optimization problems. It takes full advantage of distance and direction information from the current population to guide its search. DE follows the general procedure of evolutionary algorithms. Specifically, it starts with an initial population and iteratively proceeds from the incumbent population to a new, better-performing one by employing mutation, crossover, and selection operators. The mutation operator generates new mutant individuals based on differences among individuals, while the crossover operator exchanges information between parent and mutant individuals. According to a fitness metric, the selection operator favors better individuals to survive. Basically, DE adopts the algorithmic framework shown in Algorithm 1.
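As a concrete illustration of this framework (a minimal sketch under our own naming, not the authors' implementation), a DE/rand/1 loop with binomial crossover for a box-constrained continuous objective could look like the following:

```python
import random

def de_minimize(obj, dim, pop_size=20, F=0.5, CR=0.7, evals=2000):
    """Minimal DE/rand/1/bin loop following Algorithm 1 (a sketch)."""
    # Step 1-2: initialize and evaluate the population
    pop = [[random.random() for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    used = pop_size
    # Step 3: iterate until the evaluation budget is exhausted
    while used < evals:
        for i in range(pop_size):
            # Step 3.1: mutation (DE/rand/1)
            r1, r2, r3 = random.sample(
                [k for k in range(pop_size) if k != i], 3)
            v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                 for d in range(dim)]
            v = [min(1.0, max(0.0, x)) for x in v]  # reset out-of-bound genes
            # Step 3.2: binomial crossover
            jr = random.randrange(dim)
            u = [v[d] if random.random() <= CR or d == jr else pop[i][d]
                 for d in range(dim)]
            # Step 3.3: selection — survival of the fitter
            fu = obj(u)
            used += 1
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    # Step 4: return the best solution
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]
```

For the RMSP, `obj` would decode the priority vector into a schedule and return the weighted objective value.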
An exhaustive description of these operators will be presented in subsequent parts.

Solution Representation and Decoding
Method. Solution representation involves figuring out a suitable mapping from solutions of the problem at hand to individuals in the DE population and is the first and foremost issue in applying DE to the RMSP. Note that DE was initially proposed for continuous optimization problems in which each individual is encoded by floating-point numbers; therefore, a priority-based representation is used in this paper, in such a way that the feasibility of individuals from parents to offspring can always be ensured. It is worth noting that the priority-based representation resembles the random keys representation, which encodes a solution to the problem to be optimized with random numbers [30].
Specifically, for the priority-based representation, each individual X_i is represented by a list X_i = {x_i1, x_i2, ..., x_in}, where i denotes the ith individual in the population, n stands for the total number of jobs to be scheduled, and x_ij (j = 1, 2, ..., n), restricted to [0, 1], indicates the priority of job j. The larger the value, the higher the priority of the job. Sorting the jobs in decreasing order of priority yields a complete permutation of all jobs. For an intuitive understanding, an illustrative instance is shown in Figure 2. The instance comprises 2 reefer mechanics and 6 jobs indexed from 1 to 6. Job 2 with priority 0.6 is ranked first and thus becomes the first element of the permutation. By analogy, job 5 is ranked last and is the final element. Once a permutation is generated, we proceed to decode it into a complete schedule. For this purpose, the decoding procedure of Algorithm 2 is developed.
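Turning a priority vector into a permutation is simply a descending sort. The sketch below (tie-breaking by smaller job index is our assumption; the excerpt does not specify it) reproduces a permutation of the Figure 2 kind:

```python
def priorities_to_permutation(priorities):
    """Return jobs (1-indexed) sorted by decreasing priority.

    Ties are broken by the smaller job index — an assumption,
    since the text does not state a tie-breaking rule.
    """
    jobs = range(1, len(priorities) + 1)
    return sorted(jobs, key=lambda j: (-priorities[j - 1], j))
```

For example, with hypothetical priorities [0.5, 0.6, 0.3, 0.4, 0.2, 0.1] for jobs 1-6, job 2 (priority 0.6) is ranked first, yielding the permutation [2, 1, 4, 3, 5, 6].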
To explain the decoding method, we use the above instance for illustration. The relevant data are depicted in Tables 1 and 2. In this case, the permutation {2, 1, 4, 3, 5, 6} is decoded into a feasible schedule whose Gantt chart is shown in Figure 3.
After decoding, each schedule can be evaluated by a fitness value that is inversely related to the objective value of the schedule. In retrospect, this combination of solution representation and decoding scheme makes DE applicable to the problem under consideration.

Population Initialization.
The quality of initial population is considered to have a great impact on the convergence speed and final solution quality of evolutionary algorithms.
Step 1: Select the first unscheduled job in the permutation and assign it to the earliest available mechanic; if multiple mechanics are available, go to Step 2;
Step 2: Assign it to the earliest available mechanic with minimum setup time; if multiple mechanics share the minimum setup time, go to Step 3;
Step 3: Assign it to the mechanic with the minimum index;
Step 4: Determine the start time with respect to the job's release time;
Step 5: Repeat Steps 1-4 until all jobs are scheduled.
Algorithm 2: Decoding. Therefore, population initialization is of crucial importance for DE. In the absence of prior knowledge of optimal solutions, one often resorts to random initialization, which usually results in an unsatisfactory population. In order to enhance the initial solutions, we make full use of the LPT (longest processing time) [31], EDD (earliest due date) [32], and KPM [33] heuristics to generate some high-quality individuals, and the remaining individuals are initialized randomly. The job permutations generated by the adopted heuristics are encoded into the corresponding solution representation by formula (11), where x_j^h (h ∈ {LPT, EDD, KPM}) is the priority of job j based on heuristic h and σ_j is the index of job j in the corresponding permutation. The initial priorities of the remaining individuals are generated uniformly within the prescribed interval [0, 1], that is, x_ij = rand(0, 1), where rand(0, 1) is a uniformly distributed random number within the range [0, 1].
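Looking back at Algorithm 2, its steps can be sketched as follows (the data layout, 1-indexed jobs, and the depot row 0 of the setup matrix are our assumptions):

```python
def decode(perm, num_mech, release, proc, setup):
    """Decode a job permutation into mechanic routes (Algorithm 2 sketch).

    Jobs in perm are 1-indexed; release[j] and proc[j] use the same
    indexing (index 0 unused).  setup[i][j] gives the walking time,
    with row/column 0 standing for a mechanic's start position.
    """
    avail = [0.0] * num_mech               # earliest available time
    last = [0] * num_mech                  # last handled job (0 = start)
    routes = [[] for _ in range(num_mech)]
    for j in perm:
        # Step 1: mechanics available earliest
        t = min(avail)
        cands = [k for k in range(num_mech) if avail[k] == t]
        # Steps 2-3: minimum setup among ties, then lowest index
        k = min(cands, key=lambda k: (setup[last[k]][j], k))
        # Step 4: start time respects the job's release time
        start = max(avail[k] + setup[last[k]][j], release[j])
        avail[k] = start + proc[j]
        last[k] = j
        routes[k].append(j)
    return routes
```

Each returned route can then be evaluated for tardiness and walking distance as in the model of Section 2.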

DE-Based Search.
In our algorithm, we implement DE as the main procedure to update the whole population. More specifically, three operators, namely, the mutation, crossover, and selection operators, are executed sequentially to update the population until a preset termination criterion is met. It is widely recognized that the mutation operation plays an important role in determining the performance of a DE algorithm. Generally, different mutation strategies are required for different optimization problems, depending on their properties. In this regard, we propose an ensemble of mutation strategies for the DE algorithm, in which a pool of different mutation strategies coexists throughout the whole search process. Specifically, two distinct mutation operators are employed to construct the strategy pool: (1) Mutation Strategy 1 [23]: "DE/rand/1", V_i = X_{r1} + F · (X_{r2} − X_{r3}), where r1, r2, r3 are mutually exclusive integers randomly selected from [1, NP] and distinct from i. F is referred to as the scaling factor controlling the mutation scale and is generally distributed in [0, 1].
(2) Mutation Strategy 2: "Gaussian mutation", in which a mutant is drawn from a Gaussian distribution N(μ, σ) with mean μ and standard deviation σ, both defined in terms of the current individual X_i, the current best individual Best in the population, and two random numbers c1 and c2 between 0 and 1. If a newly generated individual goes beyond the bounds [0, 1], we simply reset it within the bounds.
The choice between DE/rand/1 and Gaussian mutation is decided by a switch probability, which we set to 0.5 in our algorithm. The reason behind this configuration is that DE/rand/1 has been experimentally proven to possess strong robustness across different problems. In addition, Gaussian mutation generates new mutant solutions based on the current and the best individuals of the present generation. As a result, new mutant individuals are distributed around the weighted center of X_i and Best. At the beginning, the search process emphasizes global exploration due to the large deviation (initially, X_i will be far from Best). As the algorithm proceeds, the deviation becomes smaller, and the search process then focuses on local exploitation. Based on the above analysis, this combination is able to strike an effective tradeoff between exploration and exploitation.

Crossover. Each parent individual X_i, along with the corresponding mutant individual V_i, undergoes a crossover operation, through which a new trial individual U_i is created. The widely used binomial crossover scheme sets u_ij = v_ij if rand(0, 1) ≤ CR or j = j_rand, and u_ij = x_ij otherwise, where CR is the crossover rate and j_rand is a randomly chosen index guaranteeing that U_i inherits at least one component from V_i. The meaning of the remaining notations conforms with that of (18).
The guideline behind this configuration is that each parameter is sampled from two different normal distributions with the same probability. According to probability theory, a distribution with a small mean tends to return a relatively small value, which is conducive to intensification, while a distribution with a large mean tends to create a relatively large value, which facilitates diversification. In this way, a balance between intensification and diversification can be ensured.
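A sketch of such a two-distribution sampling scheme follows; the concrete means and deviation below are placeholders of our own, since the excerpt does not state the values used:

```python
import random

def sample_parameter(mu_small=0.3, mu_large=0.8, sigma=0.1):
    """Sample a control parameter (e.g., F or CR) from one of two
    normal distributions chosen with equal probability.

    The means/deviation here are illustrative assumptions; the
    result is clipped back into [0, 1].
    """
    mu = mu_small if random.random() < 0.5 else mu_large
    return min(1.0, max(0.0, random.gauss(mu, sigma)))
```

A small-mean draw biases the search toward intensification, a large-mean draw toward diversification, matching the rationale above.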

Local Search.
When tackling combinatorial optimization problems, incorporating local search into a metaheuristic-based algorithm to enhance its exploitation capability is an extremely effective way to strengthen the performance of the basic algorithm. In our algorithm, we present two local search strategies.
Strategy 1. Three operators, swap, insert, and inverse, are employed in Strategy 1 [34]. Let X be the selected individual, and let u and v be two mutually different positions randomly chosen from [1, n]; then the operators are defined as follows:
(i) Swap(X, u, v): swap the two elements at positions u and v in individual X.
(ii) Insert(X, u, v): remove the element at position u and insert it before position v.
(iii) Inverse(X, u, v): reverse the selected subsequence between positions u and v.
Step 1. Select the best individual.
Step 2. Sequentially perform the swap, insert, and inverse operators on it.
Step 3. Return a new individual.
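The three operators of Strategy 1 can be sketched as follows (the index adjustment after removal in `insert` is our interpretation of "insert before position v"):

```python
def swap(x, u, v):
    """Swap the elements at positions u and v."""
    y = x[:]
    y[u], y[v] = y[v], y[u]
    return y

def insert(x, u, v):
    """Remove the element at position u and insert it before the
    element originally at position v (index shift handled)."""
    y = x[:]
    e = y.pop(u)
    y.insert(v if v < u else v - 1, e)
    return y

def inverse(x, u, v):
    """Reverse the subsequence between positions u and v, inclusive."""
    u, v = min(u, v), max(u, v)
    return x[:u] + x[u:v + 1][::-1] + x[v + 1:]
```

Applied in sequence to the best individual's priority vector, these moves generate the candidate that then competes with the incumbent.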
After these operations are sequentially executed, a new individual is generated and competes with X to survive. It is worth mentioning that we implement local search directly on the encoded individual rather than on its corresponding permutation, to avoid wasteful conversions between permutation and individual. In addition, we perform local search only on the best individual to save computational resources.

Strategy 2. In view of the superiority of the NEH heuristic for the PFSP [35], we extend it and propose another insert-based local search strategy. Similarly, we perform it only on the best individual; the main procedure is as follows:

Step 1. Decode the best individual into a job permutation.
Step 2. Find the first available mechanic.
Step 3. Take the next job in the permutation, assign it to the selected mechanic, and place it in the position with the lowest resulting objective function value.
Step 4. Repeat Steps 2 and 3 until all jobs are scheduled.
If the new schedule is superior to the incumbent one, we accept it as the new best schedule. Afterwards, we convert the new best schedule back into an individual, which is the reverse of decoding. The conversion has two phases: (1) transform the schedule into a job permutation (see Algorithm 3); (2) transform the resulting permutation into an individual according to (11).

Computational Experiments
In order to validate the performance of the proposed DE algorithm, we have conducted comprehensive experiments on a suite of randomly generated instances. The MIP formulation is solved by the CPLEX 12.5 solver called from MATLAB R2014a. The proposed algorithm is also coded in MATLAB and run on a Core i3 computer (2.3 GHz) with 4 GB of memory. To test the effectiveness of the local search strategies, we compare our algorithm with other techniques reported in the literature, namely, GA, the genetic algorithm of [21], and IH, the insert-based heuristic of [22].
Step 1: Let S_k (k = 1, 2, ..., m) be the set of scheduled jobs on mechanic k; initialize the corresponding earliest available time a_k to 0, calculate the minimum earliest available time t, and set the initial permutation P to be empty;
Step 2: Identify the mechanic k ∈ {1, 2, ..., m}:
  Step 2.1: Select mechanic k such that a_k = t; if multiple mechanics satisfy this, go to Step 2.2;
  Step 2.2: For each tied mechanic k, compute b_k = a_k + the setup time to the first job in S_k, then select the mechanic with minimum b_k; if multiple mechanics satisfy this, go to Step 2.3;
  Step 2.3: Select the mechanic with the minimum index.
Step 3: Delete the first job in S_k and append it to P; update a_k, S_k, and t. If S_k becomes empty, mechanic k is excluded from further iterations;
Step 4: Repeat Steps 2 and 3 until a complete permutation is derived.
Algorithm 3: Transform a schedule into a job permutation.
(iv) The setup times are also integers uniformly sampled from 4 to 6 which are 10∼30% of the mean value of the processing time.
(vi) The weight of the total walking distance in the objective function is assumed to be 0.5.
Five instances are generated for each combination (n, m), yielding 150 instances in total. The raw data are available upon request to the corresponding author by e-mail.

Lower Bound. Heuristic algorithms report solutions whose optimality cannot always be confirmed. In view of this, we evaluate the quality of solutions by comparing them with a lower bound LB. To derive the lower bound, we relax the instance under examination by associating the minimum setup time among jobs with every job. In this case, the setup time can be viewed as a fixed increment of the processing time. Moreover, the second term of the objective function, the total walking distance of all mechanics, becomes a constant. Therefore, the relaxed problem can be remodeled by a time-indexed formulation. It is worth noting that it is technically difficult to formulate the original RMSP as a time-indexed formulation due to the presence of job-dependent setup times. Let T denote an upper bound on the makespan of each instance. Variable y_jt is equal to 1 if job j starts at time t, and 0 otherwise. Assume that s is the minimum of the setup times and t ≤ T. In the resulting formulation, constraints (21) restrict that each job cannot be processed before its release time. Constraints (22), referred to as resource constraints, state that at most m jobs can be executed simultaneously. Constraints (23) ensure that each job begins once and only once. Constraints (24) clarify the domains of the variables. The lower bound for the original problem is obtained by solving this formulation with the CPLEX solver.
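Since the formulation itself was lost in extraction, the following is a hedged LaTeX reconstruction of a standard time-indexed model consistent with the description above; the exact form in the original may differ.

```latex
\begin{align}
\min \quad & \sum_{j \in J} \sum_{t=0}^{T} \max\{0,\; t + p_j + s - d_j\}\, y_{jt} \notag \\
\text{s.t.} \quad
& y_{jt} = 0, && j \in J,\; t < r_j, \tag{21}\\
& \sum_{j \in J} \;\sum_{\tau = \max\{0,\, t - p_j - s + 1\}}^{t} y_{j\tau} \le m, && t \in [0, T], \tag{22}\\
& \sum_{t=0}^{T} y_{jt} = 1, && j \in J, \tag{23}\\
& y_{jt} \in \{0, 1\}, && j \in J,\; t \in [0, T]. \tag{24}
\end{align}
```

Here the minimum setup time s is folded into each job's processing time, so a job starting at t completes at t + p_j + s, and the objective counts only tardiness (the distance term being constant in the relaxation).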
It is worth mentioning that a fair comparison among the three heuristic algorithms is ensured by executing all algorithms for the same number of objective function evaluations per run: 400 × n for small-size problems and 2000 × n for large-size problems, where n is the number of jobs. That is, an algorithm stops as soon as the maximum number of fitness function evaluations is reached. The population size NP is set to 40 in a run of the DE algorithm. The other two algorithms follow the settings described in their respective papers. Five independent runs are carried out for each algorithm on each problem instance, and the best result of the five runs is taken as the final solution for that instance.

Results for Small Size Problems. Since the time and space complexities of exact solution methods grow exponentially with the problem size, the CPLEX solver could only solve the MIP formulation to optimality for instances containing 8 jobs on the computer introduced above. The solver runs out of memory on larger problems. Therefore, we use the 8-job problem instances, named small-size problems, to demonstrate the performance of the proposed DE algorithm and its competitiveness against the other heuristics. The performance metric is the mean percentage deviation (MPD) of the solution from the optimal solution, computed as PD_i(A) = 100 × (f_A(i) − f*(i)) / f*(i) and averaged over the five instances, where f_A(i) is the final solution obtained by a given algorithm A and f*(i) is the optimal solution reported by the CPLEX solver for the ith problem instance of a specific combination (n, m). Obviously, a lower MPD value is preferred. The computational results for small-size problems are summarized in Table 3. The lowest MPD value is identified in boldface for each combination.
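The MPD metric itself is straightforward to compute; a sketch with hypothetical values:

```python
def mean_percentage_deviation(found, reference):
    """MPD over a set of instances: the average of
    100 * (f_A(i) - ref(i)) / ref(i), where ref is the optimal
    value (small instances) or the lower bound (large instances)."""
    pds = [100.0 * (f - r) / r for f, r in zip(found, reference)]
    return sum(pds) / len(pds)
```

For example, solutions of 110 and 105 against optima of 100 and 100 give percentage deviations of 10% and 5%, hence an MPD of 7.5%.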
According to the results shown in Table 3, the proposed DE algorithm and GA are the best-performing algorithms among the three heuristics in terms of MPD values. An MPD of 0% indicates that both DE and GA solve all the small-size problem instances to optimality. We can observe that the insert-based heuristic produces the worst MPD compared to its counterparts, which confirms that rule-based heuristics often yield less satisfactory results. Therefore, the overall ranking of the three heuristic algorithms from best to worst is DE, GA, and IH.
To demonstrate the efficiency of the proposed DE algorithm, we track the best objective values found by DE and GA as a function of the number of function evaluations on an instance of combination (8, 4), as shown in Figure 5. DE is clearly the best in terms of convergence speed, indicating that it is computationally efficient in solving small-size problems to optimality.

Results for Large Size Problems. Since no exact algorithm is capable of finding optimal solutions within a reasonable amount of time, we now assess the performance of the aforementioned approaches on large-size problems by computing the mean percentage deviation of the best solutions from the lower bound LB, with PD_i(A) = 100 × (f_A(i) − LB(i)) / LB(i), i = 1, 2, ..., 5, where f_A(i) represents the best result from five replications of a specific algorithm A and LB(i) is the lower bound of instance i. Similarly, for a given algorithm, the lower its MPD, the better its capability. All the computational results are listed in Table 4.
As we can see from Table 4, the three heuristics show distinct differences in finding optimal or near-optimal solutions. The proposed DE algorithm obtains the lowest MPDs and is thus superior to the other algorithms in terms of solution quality. The DE algorithm yields the best results, with average MPDs below 2.00% for all combinations. The performance of GA is relatively satisfactory, with average MPDs below 5.50% for all combinations. However, IH is noticeably inferior to DE and GA in solution quality. Moreover, since the results take the lower bound as the benchmark, the best solutions found by DE are likely even closer to the optimal ones. From a practical point of view, the results obtained by DE are very encouraging and indicate that the DE algorithm is viable for large-size problems.
Interestingly, we can notice a tendency that, for a given number of jobs, the MPD slightly increases as the number of mechanics increases, with only a few exceptions. A possible reason for this phenomenon is that the search space (solution space) enlarges gradually as the number of mechanics grows. We can thus infer that the number of mechanics is probably a determinant of the hardness of the problems under consideration. Due to space limitations, we illustrate the convergence speed of DE over only one instance of combination (30, 4). Figure 6 plots the best solution found during the entire iterative process. For comparison, we also record the convergence process of GA.
Clearly, the convergence of DE is very satisfactory: it takes about 1 × 10^4 function evaluations to converge to an approximate solution, and the difference in deviation between 1 × 10^4 and 1.5 × 10^4 function evaluations is marginal. GA is also efficient but struggles to reach a solution of comparable quality. In summary, this graph confirms that the DE algorithm is effective and efficient in dealing with RMSP instances. The superiority of DE can be attributed to two factors: (1) the adopted heuristics provide the DE algorithm with a good initial population; (2) the configuration of the parameters, the mutation operators, and the local search strategies establishes a good balance between the two forces of intensification and diversification. GA, in contrast, tends to suffer from stagnation of the search process due to diversity loss, which explains why it yields less competitive solutions than the proposed algorithm. Based on these comparisons, we can conclude that the proposed DE algorithm is more effective than existing approaches in solving the RMSP.

We also compare DE with the variants DE-M, DE-LS1, DE-LS2, and DE-P. For DE-P, F is set to 0.4 and CR to 0.7 after extensive computational experiments. We follow a computational procedure similar to the one explained before. The experimental results of DE, DE-M, DE-LS1, DE-LS2, and DE-P are reported in Table 5, where the best results for each combination are highlighted in boldface. We can observe in Table 5 that there is no distinct difference in performance among these algorithms on small-size problems. Nevertheless, on large-size problems the difference becomes obvious. The proposed DE obtains the best overall results in comparison with all other variants. Removing any one of these components generally triggers a significant degradation in performance. We attribute this to the fact that all these improvement strategies are vital for a better balance between exploration and exploitation during the evolution. Therefore, we can conclude that the above components collaboratively enhance the performance of the basic DE algorithm.

Conclusion
In this work, we consider the problem of scheduling reefer mechanics to minimize the sum of the total tardiness of all reefers and the total walking distance of all mechanics, and we formulate it as a mixed integer program. Considering the NP-hardness of the problem, we propose a DE algorithm to solve it. First, to make DE suitable for the problem, a priority-based representation is used. Second, the LPT, EDD, and KPM heuristics are used in combination with random initialization to alleviate the randomness of the search at the beginning and obtain a high-quality initial population. Third, the balance between diversification and intensification is ensured by a well-designed crossover operator, parameter setting, and local search scheme. The resulting algorithm is then compared against two other constructive heuristics and thoroughly tested on small size and large size problems, respectively. The results suggest that the proposed algorithm outperforms the existing IH and GA algorithms and is effective and efficient in finding optimal or near-optimal solutions for the RMSP.
In future work, on the one hand, the time-indexed formulation used for calculating the lower bound suffers from low efficiency on large sized problems when solved by the CPLEX solver, owing to the extremely large number of 0-1 variables that must be generated. Note that an optimal solution has only a small number of these variables set to 1 and the rest set to 0. It is therefore worthwhile to use column generation to solve the time-indexed formulation more efficiently. On the other hand, another desirable extension of the current work is to take into account the uncertainty in release times, which conforms to more practical scenarios.

Figure 4: An illustrative example for local search Strategy 1.

3.6. Procedure of the Proposed Algorithm. In this paper, we encode a solution with priorities and develop a new parameter setting strategy to balance the exploration and exploitation abilities of DE. Additionally, two local search strategies, along with a new Gaussian mutation, are integrated into DE for performance improvement. The steps of the resulting algorithm are shown in Algorithm 4.
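To make the priority-based representation concrete, the following sketch shows one plausible way to decode a continuous priority vector into per-mechanic task sequences. The exact decoding rule used in the paper is not reproduced here; the descending-priority ordering and the round-robin assignment below are illustrative assumptions only.

```python
def decode_priorities(priorities, num_mechanics):
    """Decode a continuous priority vector into a task sequence per mechanic.

    Tasks are served in order of descending priority and assigned to
    mechanics round-robin. This is an illustrative sketch; the paper's
    actual decoding scheme may assign tasks differently.
    """
    # Order task indices by descending priority value
    order = sorted(range(len(priorities)), key=lambda t: -priorities[t])
    schedules = [[] for _ in range(num_mechanics)]
    for rank, task in enumerate(order):
        schedules[rank % num_mechanics].append(task)
    return schedules
```

For example, with priorities [0.9, 0.1, 0.5, 0.7] and two mechanics, task 0 (highest priority) and task 2 go to the first mechanic, while tasks 3 and 1 go to the second.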

Figure 5: Track of the best solutions.

4.3.4. Effectiveness of the Improvement Strategies. The primary improvement strategies presented in Section 3 include the new mutation mechanism, the two local search strategies, and the new parameter setting method. The purpose of this subsection is to verify the effectiveness of these components. To this end, additional comparative experiments are executed for different variants of the proposed DE, enumerated as follows: (1) DE without the new mutation mechanism and with only DE/rand/1, denoted DE-M; (2) DE without local search Strategy 1, denoted DE-LS1; (3) DE without local search Strategy 2, denoted DE-LS2; and (4) DE without the new parameter setting method, denoted DE-P.

Figure 6: Track of the best solutions.
rand_j is a function returning a random number uniformly distributed within the range [0, 1]. CR ∈ [0, 1] is the crossover rate, which controls how many elements are derived from the mutant individual v_i. j_rand is a random integer chosen from [1, D] to ensure that the trial individual u_i is distinct from the target individual x_i in at least one element. For the problem under consideration, if u_i yields a reduction in objective value, then u_i replaces x_i and survives into the next generation; otherwise, x_i is retained.

3.4.4. Parameter Setting. Regarding parameter setting, existing works on DE have demonstrated that its performance is highly sensitive to the parameter setting, especially the scaling parameter F and the crossover rate CR. Various parameter adaptation strategies have been proposed in previous studies, and their effect varies when addressing different issues. We propose a new parameter adaptation technique for the purpose of striking a tradeoff between exploration and exploitation.
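The binomial crossover described above can be sketched as follows; this is the standard DE recombination rule, with `target`, `mutant`, and `trial` corresponding to x_i, v_i, and u_i (variable names here are illustrative).

```python
import random

def binomial_crossover(target, mutant, CR, rng=random):
    """Standard DE binomial crossover: each element of the trial vector is
    taken from the mutant with probability CR; the element at index j_rand
    is always taken from the mutant, so the trial individual differs from
    the target in at least one element."""
    D = len(target)
    j_rand = rng.randrange(D)  # guaranteed mutant position
    return [
        mutant[j] if (rng.random() < CR or j == j_rand) else target[j]
        for j in range(D)
    ]
```

With CR = 1 the trial is a copy of the mutant; with CR = 0 it copies the target everywhere except at the single guaranteed index j_rand.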
3.4.3. Selection Operator. Through the mutation and crossover operators, a set of new trial individuals is generated. Subsequently, the decoding method is employed to evaluate the fitness values (objective values) of all individuals. Finally, a selection operator decides on the survival of each trial individual to the next generation. In the related expression, randn represents the normal distribution, where the two numbers in brackets indicate the mean value and standard deviation, respectively, and rand is a random number uniformly distributed within the range [0, 1]. The sampled value is truncated to [0, 1] if it goes beyond the bound constraints.
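The one-to-one greedy selection and the truncated Gaussian parameter sampling described above can be sketched as follows. The mean and standard deviation passed to the sampler are placeholders, since the paper's adaptive values are not reproduced here.

```python
import random

def one_to_one_selection(target, trial, objective):
    """Greedy one-to-one selection: the trial individual replaces the
    target only if it reduces the objective value; otherwise the target
    is retained."""
    return trial if objective(trial) < objective(target) else target

def sample_parameter(mean, std, rng=random):
    """Sample a control parameter (e.g. F or CR) from a normal
    distribution and truncate it to [0, 1], as the text describes.
    The mean/std arguments are assumptions for illustration."""
    value = rng.gauss(mean, std)
    return min(1.0, max(0.0, value))
```

Because the comparison is strict, ties favor the incumbent target individual, which keeps the population stable when a trial offers no improvement.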

Step 1: Initialization:
  Step 1.1: Initialize three individuals by the LPT, EDD, and KPM heuristics;
  Step 1.2: Generate the rest in a random way;
Step 2: Evaluate each solution and record the best one;
While the stop condition is not met Do
  Step 3: Perform DE-based search:
    Step 3.1: Mutation: create a mutant population V;
    Step 3.2: Crossover: recombine the mutant population V and the target population X to produce a new trial population U;
    Step 3.3: Selection: update the population by one-to-one greedy selection;
  Step 4: Perform local search on the current best:
    Step 4.1: Execute Strategy 1;
    Step 4.2: Update the best individual;
    Step 4.3: Execute Strategy 2;
    Step 4.4: Update the best individual;
End While
Output the best result.
Algorithm 4: Proposed DE.
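The overall loop of Algorithm 4 can be sketched in a few dozen lines. The heuristic seeding (LPT/EDD/KPM), the priority decoding, and the two local search strategies are problem-specific and appear only as hooks here; the mutation, crossover, and selection steps follow the generic DE/rand/1 scheme with fixed F and CR rather than the paper's adaptive setting.

```python
import random

def run_de(objective, dim, pop_size=20, F=0.4, CR=0.7,
           heuristic_seeds=(), max_evals=10000, rng=random):
    """Skeleton of the proposed DE loop (illustrative, not the paper's
    exact implementation). heuristic_seeds stands in for the LPT/EDD/KPM
    individuals; local search on the current best is noted as a hook."""
    # Step 1: initialization -- heuristic seeds first, the rest at random
    pop = [list(s) for s in heuristic_seeds]
    while len(pop) < pop_size:
        pop.append([rng.random() for _ in range(dim)])
    # Step 2: evaluate each solution
    fitness = [objective(ind) for ind in pop]
    evals = pop_size
    # Steps 3-4: iterate until the evaluation budget is exhausted
    while evals + pop_size <= max_evals:
        for i in range(pop_size):
            # Step 3.1: mutation (DE/rand/1)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                      for j in range(dim)]
            # Step 3.2: binomial crossover
            j_rand = rng.randrange(dim)
            trial = [mutant[j] if (rng.random() < CR or j == j_rand)
                     else pop[i][j] for j in range(dim)]
            # Step 3.3: one-to-one greedy selection
            f_trial = objective(trial)
            evals += 1
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
        # Step 4: local search (Strategies 1 and 2) on the current best
        # would be applied here in the full algorithm.
    best = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best], fitness[best]
```

On a simple separable test function such as the sphere, this skeleton converges quickly, which mirrors the convergence behavior reported for Figure 6.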

Table 3: Mean percent deviation for small size problems.

Table 4: Mean percent deviation for large size problems.
from GA and IH, respectively. Furthermore, the convergence curves indicate that the proposed algorithm converges to a better solution in far fewer function evaluations. The superior performance of the proposed DE algorithm may stem from two aspects: (1) the LPT, EDD, and KPM heuristics provide a high-quality initial population, and (2) the algorithm maintains a good balance between intensification and diversification.

Table 5: Mean percent deviation for DE and its variants.