Energy-Saving Production Scheduling in a Single-Machine Manufacturing System by Improved Particle Swarm Optimization

A single-machine scheduling problem that minimizes the total weighted tardiness with energy consumption constraints in the actual production environment is studied in this paper. Based on the properties of the problem, an improved particle swarm optimization (PSO) algorithm embedded with a local search strategy (PSO-LS) is designed to solve this problem. To evaluate the algorithm, some computational experiments are carried out using PSO-LS, basic PSO, and a genetic algorithm (GA). Before the comparison experiment, the Taguchi method is used to select appropriate parameter values for these three algorithms since heuristic algorithms rely heavily on their parameters. The experimental results show that the improved PSO-LS algorithm has considerable advantages over the basic PSO and GA, especially for large-scale problems.


Introduction
Manufacturing is an important industry that consumes about one-third of the world's energy [1] and 56% of China's energy [2]. Saving energy not only reduces production cost but also protects the environment. Although it is possible to replace old machines with higher-priced, energy-efficient ones, achieving energy savings through low-cost production scheduling is important [3].
Energy-saving scheduling has received increasing attention in recent years. The literature on energy-efficient scheduling has expanded considerably since 2013 [4].
Lee et al. [5] studied single-machine scheduling with energy efficiency under a time-varying electricity price policy and solved the problem using a dynamic control approach. Módos et al. [6] studied robust single-machine scheduling to minimize release times and total tardiness with periodic energy consumption limits. They proposed an efficient algorithm to obtain the optimal robust solution for a given processing order and used it in two exact algorithms and a Tabu search. Che [7] focused on biobjective optimization to minimize total energy consumption and maximum tardiness in a single-machine scheduling problem considering a power-down mechanism. The problem was built as a mixed integer linear programming (MILP) model. They developed exact and approximate algorithms for small- and large-scale problems accordingly. Che et al. [8] established a new continuous-time mixed integer programming model for an energy-sensitive single-machine scheduling problem and proposed a greedy insertion algorithm to reduce the total electricity cost of production, which provided energy-saving solutions for large-scale single-machine scheduling problems, such as 5000 jobs, in a few tens of seconds. Chen et al. [9] studied the energy consumption of production activities from the point of view of machine reliability. They examined an actual problem in a rotor production workshop: a single-machine scheduling problem that optimized delay costs and energy costs. An ant colony algorithm embedded with a modified Emmons rule was used to solve this problem. In addition, they used sensitivity analysis to discuss special cases. By determining the start time of job processing, the idle time, and the time when the machine must be shut down, Shrouf et al. [10] aimed to minimize the cost of the energy consumption in a single-machine scheduling problem in the production process, where the energy price varied throughout the day.
For large-scale problems, a genetic algorithm (GA) was used to obtain a satisfactory solution.
Interval number theory was used to describe the uncertainty of renewable energy such as wind energy and solar energy [11], and two new biobjective optimization problems for interval single-machine scheduling were studied. Through parameter analysis, the authors provided decision-makers with guidelines for practical production environments. In [12], a memetic differential evolution (MDE) algorithm with superior performance to the strength Pareto evolutionary algorithm II (SPEA-II) and the nondominated sorting genetic algorithm II (NSGA-II) was proposed to solve an energy-saving biobjective unrelated parallel-machine scheduling problem minimizing the maximum completion time and the total energy consumption.
The MDE algorithm was enhanced by integrating a combination of a list-scheduling heuristic and local search.
Given the difficulty of energy-concerned production scheduling problems, solving most of them with exact algorithms is unrealistic, especially for large-scale instances. Heuristic algorithms are popular for solving these problems [4].
Particle swarm optimization (PSO) is a heuristic algorithm that has been widely used in many fields. A new bare-bones multiobjective particle swarm optimization algorithm was first proposed by Zhang et al. [13] to deal with environmental/economic multiobjective optimization problems.
There were three innovations: an updating method for particles that does not need parameter adjustment, a dynamic mutation operator to improve search ability, and a global particle leader updating method based on particle diversity. The experimental results demonstrated that the algorithm can obtain a good approximation of the Pareto front.
Song et al. [14] proposed a variable-size cooperative coevolutionary particle swarm optimization algorithm (VS-CCPSO) for evolutionary feature selection (FS) methods to deal with the "curse of dimensionality" on high-dimensional data. The idea of "divide and conquer" was applied to the cooperative coevolutionary approach, and several strategies tailored to the properties of the problems were developed, including a space division strategy, an adaptive adjustment mechanism for the subswarm size, a particle deletion strategy, and a particle generation strategy.
The experimental results illustrated that VS-CCPSO can obtain a good subset of features, which indicates that it is competitive in dealing with high-dimensional FS problems. The paper by Zhang et al. [15] presented the first study of multiobjective particle swarm optimization (PSO) for cost-based feature selection in classification, aiming to obtain nondominated solutions. Several adjustments, including a probability-based encoding method, a hybrid operator, and the ideas of crowding distance, the external archive, and the Pareto domination relationship, were embedded in PSO to improve its search capability. Experimental results illustrated that this algorithm can automatically evolve a Pareto front and is a competitive method for the cost-based feature selection problem.
A fuzzy multiobjective FS method with particle swarm optimization, called PSOMOFS, was developed in the literature by Hu et al. [16], aiming at the feature selection (FS) problem with fuzzy cost. Specifically, a fuzzy dominance relationship, a fuzzy crowding distance measure, and a tolerance coefficient were used in the algorithm. Compared with other evolutionary methods and classical multiobjective FS methods, experimental results indicated that the proposed algorithm can obtain feature sets with better performances in approximation, diversity, and feature cost.
In our earlier work [17], we studied a biobjective optimization problem. The drawbacks of the biobjective approach can be identified from two aspects. Firstly, it is difficult and sometimes puzzling for practitioners to choose the most suitable solution from the set of nondominated solutions output by the scheduling algorithm. Secondly, the optimization capability of multiobjective evolutionary algorithms is often not robust enough, which means the obtained nondominated solutions can be far from the true optimum.
In this paper, a single-machine scheduling problem considering energy consumption based on a real production environment is studied. Numerous industries involving continuous production processes (e.g., continuous casting and hot rolling in the steel industry) are limited in their energy consumption over time by contracts with power suppliers. In particular, the amount of energy expended in each successive period must not exceed a certain limit. Production scheduling under such constraints is much more complex than under normal production conditions. Therefore, we focus on minimizing the total weighted tardiness subject to energy consumption constraints. The total weighted tardiness captures the requirement of meeting the delivery times specified by customers. In other words, we treat the problem as a single-objective optimization model, with the energy-saving goals defined as constraints. This approach is closer to reality because energy considerations are usually expressed as constraints.
In order to solve our problem, a local search strategy was designed to be embedded in the basic particle swarm optimization (PSO) algorithm [18], yielding the proposed PSO-LS. Besides, a constraint handling process was tailored for this problem, which determines the start processing time and completion time of each job so as to obtain the total weighted tardiness as the objective function value. To make the problem more realistic, the starting time of each job is accurate to the second without violation of the constraints. The experimental results show that our enhanced algorithm PSO-LS is statistically superior to the basic PSO and GA algorithms, especially for large-scale problems. The remainder of this paper is arranged as follows: in Section 2, we define the problem, analyze its difficulty, and then provide a small-scale example to facilitate understanding.
Then, a PSO embedded with a local search strategy (PSO-LS) algorithm is proposed to solve this problem in Section 3, with design details such as the decoding method. In Section 4, many computational experiments are executed using PSO-LS, basic PSO, and GA to evaluate PSO-LS. Finally, we outline the conclusion and prospects for future studies in Section 5.

Problem Statement.
At the beginning, there are n jobs, labelled {1, 2, 3, . . . , n}, waiting to be processed on one machine. Job i requires a processing time of p_i (h) and consumes A_i (kWh) per hour, and it cannot start before the previous job is finished. The total energy consumption of the jobs processed in the same processing time window shall not exceed Z (kWh). Time window Θ covers the interval [(Θ − 1)T, ΘT], where Θ ∈ {1, 2, . . . , Θ_max} is the label of the time window and T is the length of the processing time window.
Job i also has a due date d_i (h) and an importance w_i determined by customer requirements. We need to decide the starting time st_i of job i so that we can determine its completion time st_i + p_i. Our target is to minimize the sum of the weighted tardiness of all the jobs, i.e., ∑_{i=1}^{n} wt_i, which is called the total weighted tardiness (TWT). The following assumptions are considered in this problem: (i) there are no uncertainties; (ii) the length of the processing time window satisfies T ≥ max(p_i) so that the time window is reasonable; (iii) the electricity limit Z ≥ 0.5 × max(A_i × p_i) guarantees the problem is solvable. The following mathematical model can be built to describe the problem in detail: Expression (1) is the objective function used to minimize TWT. Expression (2) is the energy constraint. Expression (3) expresses the constraint that at most one job is being processed at a time. Expression (4) calculates the weighted tardiness of each job. Expressions (5) and (6) are the ranges of the starting time and weighted tardiness, respectively.
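The numbered expressions themselves did not survive extraction. Under the definitions given in this section, a plausible reconstruction of the model is the following (the exact layout in the original may differ):

```latex
% (1) Objective: minimize the total weighted tardiness (TWT)
\min \sum_{i=1}^{n} wt_i \tag{1}
% (2) Energy consumed inside each time window must not exceed Z (max-min constraint)
\text{s.t.}\quad \sum_{i=1}^{n} A_i \max\!\bigl(0,\ \min(st_i + p_i,\ \Theta T) - \max(st_i,\ (\Theta - 1)T)\bigr) \le Z,
\quad \Theta = 1, \dots, \Theta_{\max} \tag{2}
% (3) At most one job is processed at a time (pairwise non-overlap, quadratic)
(st_j - st_i - p_i)(st_i - st_j - p_j) \le 0, \quad i \ne j \tag{3}
% (4) Weighted tardiness of each job
wt_i = w_i \max(0,\ st_i + p_i - d_i) \tag{4}
% (5)-(6) Variable ranges
st_i \ge 0 \tag{5}
wt_i \ge 0 \tag{6}
```

This reconstruction is consistent with the remark below that Expression (2) is a max-min constraint and Expression (3) is quadratic.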
Noticing that Expression (2) is a max-min constraint and Expression (3) is a quadratic constraint, this model is nonlinear.

Analysis of the Problem.
According to Graham et al. [19], our problem can be expressed as 1|Z|TWT. If the energy consumption constraint is not considered, the problem relaxes to 1‖TWT, which is nondeterministic polynomial-time hard (NP-hard) [20]. This means an NP-hard problem reduces to our problem. So, the problem 1|Z|TWT studied in this paper is at least NP-hard.
There are an infinite number of possible solutions to this problem. Due to the energy consumption limit in each time window, some jobs may have to wait for some time after the previous job has been completed. Therefore, during certain time windows, the machine may need to be idle for a period of time. Jobs to be fully processed in such time windows have an infinite number of starting times to choose from because there are an infinite number of points on a line segment.
When jobs are processed as early as possible without violating the constraints, one permutation represents exactly one feasible solution. Considering the objective function with the minimum TWT, i.e., ∑_{i=1}^{n} w_i × max(0, st_i + p_i − d_i), the objective value at an early starting time can never be worse than that at a late starting time in the same situation. Therefore, an optimal solution to the scheduling problem is contained in the set of permutations.
Based on the above analysis, job processing sequences are used to represent the solutions of the single-machine scheduling problem with energy consumption constraints. When the order of jobs is determined, the starting time of each job is also determined based on the principle of as early as possible. For a problem with n jobs, there are n! possible solutions in the solution space.

A Small-Scale Example.
A simple example is used to facilitate understanding of the solution. Consider 6 jobs at hand needing to be scheduled, where the length of the time window T equals 30 and the energy consumption limit per window Z equals 189. The other figures are provided in Table 1.
To facilitate the actual production operation, the time at which a machining operation starts is accurate to the second. Since the processing time is measured in hours, we therefore keep the starting processing time to two decimal places without violating the production constraints.
To find the optimal solution, recursion and backtracking are used to enumerate all the permutations of the job processing order. Then, we have 6! = 720 solutions, labelled from 1 to 720, as shown in Figure 1. The elbow points are captured during the enumeration process. The optimal solution, whose objective function value is 262.63, appears at the 637th feasible solution, whose processing order is (6, 2, 4, 1, 3, 5).
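The brute-force enumeration above can be sketched as follows. This is a minimal illustration, not the paper's Algorithm 2: the hypothetical scheduler below never splits a job across windows (it defers a job whose time or energy does not fit, assuming Z ≥ max(A_i × p_i)), and the instance data are made up for the sketch rather than taken from Table 1.

```python
import itertools

def schedule_twt(order, p, A, d, w, T, Z):
    """Greedy 'as early as possible' evaluation of one processing order.
    Simplified sketch: a job is never split across windows; if the remaining
    time or energy of the current window is insufficient, the job is deferred
    to the start of the next window (assumes Z >= max(A[i] * p[i]))."""
    t, win, rZ, twt = 0.0, 0, Z, 0.0
    for j in order:
        while p[j] > (win + 1) * T - t or A[j] * p[j] > rZ:
            win += 1                       # defer the job to the next window
            t, rZ = win * T, Z
        t += p[j]                          # process the job
        rZ -= A[j] * p[j]
        twt += w[j] * max(0.0, t - d[j])   # accumulate weighted tardiness
    return twt

def best_order(p, A, d, w, T, Z):
    """Enumerate all n! permutations and return the one with minimal TWT."""
    return min(itertools.permutations(range(len(p))),
               key=lambda o: schedule_twt(o, p, A, d, w, T, Z))
```

For n = 6 this evaluates all 720 permutations, matching the enumeration described above; the factorial growth is why heuristics are needed for larger n.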

Heuristic Algorithms
Due to the difficulty of the single-machine scheduling problem with energy consumption constraints, it becomes impossible to obtain an exact solution quickly as the number of jobs increases.
To solve this problem, it is necessary to use a heuristic algorithm to obtain satisfactory solutions. There are many heuristic algorithms, but we choose PSO as our framework for three reasons. Firstly, PSO relies on only a few parameters. Secondly, it performs well in both convergence speed and diversity of solutions. Thirdly, its procedure is simple and effective.
In this section, the basic PSO algorithm is first introduced. Then, the encoding and decoding algorithms for this problem are designed in detail. To improve the PSO algorithm, a local search based on the nature of the problem is proposed. Finally, the PSO-LS framework is described.

Basic PSO Algorithm.
There are a number of particles in the solution space looking for a target. From the beginning, every particle has a position (pos) and a velocity (vel). The best previous location in its memory is called pbest, and the best particle in the swarm is called gbest. During each flight toward the target, a particle determines its new velocity based on the velocity of the last flight, the previous pbest, and the previous gbest; pbest and gbest are then reselected based on the new locations. This iteration runs until the stop criteria are met. The gbest resulting from the final iteration is the result of the problem produced by PSO:

vel_ite = w × vel_(ite−1) + c_1 r_1 (pbest − pos_(ite−1)) + c_2 r_2 (gbest − pos_(ite−1)), (7)

pos_ite = pos_(ite−1) + vel_ite. (8)

The PSO algorithm can be described in detail as follows: (1) Initialize the particle swarm; specifically, initialize the position and velocity of each particle. (2) Evaluate all the particles in the swarm according to their own positions. (3) Initialize pbest and gbest: assign the pos value of each particle to its own pbest, and use the best of all the particles to initialize gbest. (4) Repeat steps 5 to 8 until the stop criteria are satisfied. (5) Update the particle swarm. The new velocity and new position of each particle are calculated by Expressions (7) and (8), respectively, where the inertia weight w and the weights c_1 and c_2 are parameters that need to be set in advance, r_1 and r_2 are two different random real numbers with values ranging from 0.0 to 1.0, and ite is the number of the current iteration.
(6) Adjust the particle swarm. The position of each particle must lie in the range [pos_min, pos_max], where both bounds are parameters of the PSO. Particles beyond this limit are adjusted to meet the constraint.
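Steps 5 and 6 above can be sketched for a single particle as follows; this is the standard PSO update of Expressions (7) and (8) with clipping, not the paper's exact implementation:

```python
import random

def update_particle(pos, vel, pbest, gbest, w, c1, c2, pos_min, pos_max):
    """One particle update following Expressions (7) and (8):
    vel <- w*vel + c1*r1*(pbest - pos) + c2*r2*(gbest - pos); pos <- pos + vel.
    Positions leaving [pos_min, pos_max] are clipped back (the adjustment step)."""
    r1, r2 = random.random(), random.random()
    new_vel = [w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    new_pos = [min(max(x + v, pos_min), pos_max)   # clip into the feasible range
               for x, v in zip(pos, new_vel)]
    return new_pos, new_vel
```

The same r_1 and r_2 are reused across dimensions here; per-dimension random numbers would be an equally common variant.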

Encoding and Decoding.
To apply the particle swarm algorithm to our problem, the position and velocity of each particle have n dimensions, matching the number of jobs to be processed. The velocity and position on each dimension are updated by formulas (9) and (10), the dimension-wise counterparts of (7) and (8), where k is the dimension index:

vel_(ite,k) = w × vel_(ite−1,k) + c_1 r_1 (pbest_k − pos_(ite−1,k)) + c_2 r_2 (gbest_k − pos_(ite−1,k)), (9)

pos_(ite,k) = pos_(ite−1,k) + vel_(ite,k). (10)

Since the basic PSO algorithm was originally designed for continuous problems, the smallest position value (SPV) rule is used to convert positions into integer sequences, which can be interpreted as job processing sequences. The core of the SPV process is a sorting algorithm. It is simple, effective, and easy to implement, so we use it as the decoding method.
For example, given the 5-dimensional position (3.1, 4.3, 2.7, 2.2, 1.9) (the same position reused later in Table 2), the dimension indexes sorted by ascending value produce the sequence (5, 4, 3, 1, 2). The decoding method (SPV) is shown in Algorithm 1.
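The SPV rule can be written compactly; the sketch below reproduces the example above:

```python
def spv_decode(pos):
    """Smallest position value (SPV) decoding: sort the dimension indexes by
    ascending position value; the sorted (1-based) indexes form the job
    processing sequence."""
    return tuple(k + 1 for k in sorted(range(len(pos)), key=lambda k: pos[k]))
```

The smallest value (1.9) sits in dimension 5, so job 5 is processed first, and so on.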

Constraint Handling and Solution Obtaining.
To ensure these permutations can contain an optimal solution, jobs are scheduled as early as possible according to their processing order, in compliance with the energy consumption constraint. Specifically, we need to decide where to place job π of a given processing order Π on the processing schedule within the current processing time window Θ; the jobs (π − 1), (π − 2), etc., are scheduled before job π.
To make the problem easier to solve, we only need to decide how long job π occupies the current time window Θ and the next time window (Θ + 1). Once this decision has been made for each job, the starting processing time of each job is determined, and finally the objective function value of the processing sequence can be used to evaluate its quality.
The time job π occupies in the current time window Θ and the next time window (Θ + 1), expressed as tpc and tpn, respectively, can be calculated by formulas (11) and (12), where rZ is the rest energy that can be used in the current time window, rT is the rest time of the current window, A_π is the energy consumption per hour to process job π, and p_π is the processing time of job π:

tpc = min(p_π, rT, rZ/A_π), (11)

tpn = min(p_π − tpc, Z/A_π). (12)

We enumerate all the meaningful situations based on Expressions (11) and (12) to obtain the rest time and rest energy, expressed as rT′ and rZ′ for the job in the next position, respectively, and to determine the completion time of the current job.
All the cases are shown in Figure 3.
(1) tpc = p_π and tpn = 0. This means job π will be scheduled to be processed entirely in the current time window, and its completion time is Θ × T − rT + p_π. (a) tpc = p_π < rT and p_π < rZ/A_π. Both time and energy remain after job π, so the next job can continue in the current window with rT′ = rT − p_π and rZ′ = rZ − A_π × p_π; Figure 3(a) displays this case. (b) tpc = p_π = rZ/A_π < rT. The energy consumption runs out after processing job π, so the next job must be scheduled in the next time window with rZ′ = Z and rT′ = T; Figure 3(b) displays this case. (c) tpc = p_π = rT < rZ/A_π. The time of the current window runs out, so job (π + 1) has to be processed in the next time window (Θ + 1) with rZ′ = Z and rT′ = T; Figure 3(c) demonstrates this situation. (d) tpc = p_π = rZ/A_π = rT. Both limits run out simultaneously, and the next job is scheduled to be processed in the next time window with rZ′ = Z and rT′ = T; this case is described in Figure 3(d).
(2) tpc < p_π and tpn = p_π − tpc. Job π cannot be scheduled entirely in time window Θ. The completion time of the current job can be calculated as Θ × T + tpn; the current job takes tpn hours of the next time window. We thus obtain rT′ = T − tpn and rZ′ = Z − tpn × A_π for the next job.
(a) tpc = rZ/A_π < rT. After producing job π, the electricity in time window Θ runs out. There is a gap between the starting time of job π and the completion time of job (π − 1); this situation is shown in Figure 3(e).
(b) tpc = rT ≤ rZ/A_π. The current job can begin processing as soon as the production of job (π − 1) is complete; Figure 3(f) shows this case. (c) tpc = rZ/A_π = rT. This is similar to 2(b), but the energy in the current time window also runs out; Figure 3(g) shows this situation in detail.
(3) tpc < p_π and tpn = Z/A_π < p_π − tpc. Job π has to start being processed in time window (Θ + 1) with rZ = Z and rT = T. Then, we need to reschedule job π in the next iteration, which may reduce to situation 2(a) discussed above. An example of this situation is shown in Figure 3(h). The procedure above repeats from π = 1 until π = n, the number of jobs. We can thus determine the completion time of each job, so that TWT can be calculated as the fitness of a particle.
Based on the above description, the constraint-handling process is described in Algorithm 2. To keep the completion time precise down to seconds, lines 7 and 17 to 22 were added to handle a tpn with more than two decimals. In line 7, tpn is rounded up to two decimal places to obtain tpn_round; for example, if tpn = 3.91 + 1 × 10^−100, then tpn_round = 3.92. In lines 17 through 22, rZ and rT are calculated based on tpn_round instead of tpn, and the schedule is adjusted as well.
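The case analysis above can be condensed into one decode routine. The following is a hedged sketch, not a transcription of Algorithm 2: the forms of tpc and tpn are the ones assumed for Expressions (11) and (12), and the three branches mirror cases 1 to 3.

```python
import math

def decode_schedule(order, p, A, d, w, T, Z):
    """Sketch of the constraint-handling decode: schedule each job as early
    as possible; a split job's tpn is rounded *up* to two decimals so start
    times stay second-accurate without exceeding Z. Assumes T >= max(p) and
    Z >= 0.5 * max(A[i] * p[i]), as in the paper."""
    win, rT, rZ, twt, i = 1, float(T), float(Z), 0.0, 0
    while i < len(order):
        j = order[i]
        tpc = min(p[j], rT, rZ / A[j])        # time spent in current window
        tpn = min(p[j] - tpc, Z / A[j])       # time spent in next window
        if tpc == p[j]:                       # case 1: fits in current window
            rT -= tpc
            rZ -= tpc * A[j]
            C = win * T - rT                  # completion time
            if rT == 0 or rZ == 0:            # window exhausted: open the next
                win, rT, rZ = win + 1, float(T), float(Z)
        elif tpc > 0 and tpn >= p[j] - tpc:   # case 2: split across the boundary
            tpn = math.ceil((p[j] - tpc) * 100) / 100   # round up to 2 decimals
            win += 1
            C = (win - 1) * T + tpn
            rT, rZ = T - tpn, Z - tpn * A[j]
        else:                                  # case 3: defer wholly and retry
            win, rT, rZ = win + 1, float(T), float(Z)
            continue
        twt += w[j] * max(0.0, C - d[j])
        i += 1
    return twt
```

For a job with p = 8 h and A = 10 kWh/h under T = 10 and Z = 50, only 5 h fit into window 1 (energy-limited), so the job completes 3 h into window 2, at time 13.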

Local Search.
To improve the search ability of the basic PSO, an insertion operator is applied to the gbest of the particle swarm in each iteration. This insertion operator can considerably increase the diversity of solutions, thus increasing the probability of significantly improving gbest. In addition, it is simple to implement and has low time and memory overhead. The insertion operator for the processing order of gbest is described in Figure 4. It aims to insert job π_1 from time window Θ_1 immediately after job π_2 in time window Θ_2.
This insertion operator can be easily applied to the job processing order.
Firstly, we need to select two different time windows: Θ_1 and Θ_2 are random integers selected from the ranges [1, Θ_max − 1] and [Θ_1 + 1, Θ_max], respectively. Then, two jobs need to be chosen from those two time windows: jobs π_1 and π_2 are chosen randomly from the jobs processed in the selected time windows. At this moment, job π_1 must be processed before job π_2 because time window Θ_1 lies before time window Θ_2. Thirdly, for the jobs from job π_2 back to the job immediately after job π_1, the ordinal numbers in the processing sequence are decreased by 1. Finally, we change the ordinal number of job π_1 in the processing order; the only remaining task is to place job π_1 after job π_2.
Because the position of gbest is used to update particles in the next iteration, we need to change the position of gbest to maintain the consistency of the position and processing order of gbest. A small example is provided in Table 2 to illustrate the process of adjusting the position of gbest according to the insertion operator.
Assume that the original gbest has a 5-dimensional position pos_0 = (3.1, 4.3, 2.7, 2.2, 1.9) and the original order order_0 = (5, 4, 3, 1, 2). If π_1 = 4 and π_2 = 2, we can determine the changed order target = (5, 3, 1, 2, 4) according to the insertion operator for the order, as shown in Figure 4. For this instance, step 1 in Table 2 swaps job π_1 with the job to its right, i.e., jobs 4 and 3, giving order_1 = (5, 3, 4, 1, 2); correspondingly, the values of dimensions 4 and 3 of pos_0 are exchanged (2.2 with 2.7) to obtain pos_1. In step 2, job π_1 is again swapped with the job to its right; this time, job 4 is exchanged with job 1 according to order_1 so that order_2 can be obtained. To obtain pos_2, we swap the values of dimensions 4 and 1 of pos_1, that is, exchanging 3.1 with 2.7.
Step 3 is also based on its previous step; this time, we exchange job π_1 and job π_2 because job π_2 is immediately to the right of job π_1 in order_2. Similar to steps 1 and 2, the values of dimensions 4 and 2 of pos_2 are exchanged (3.1 with 4.3) to obtain pos_3. To verify its correctness, we can decode pos_3 using SPV to obtain the order (5, 3, 1, 2, 4), which equals the target order. Based on the descriptions above, for each iteration, we apply the insertion operator shown in Algorithm 3 to the original gbest five times to obtain five different new gbests. The original gbest is replaced by the best of these five gbests, i.e., the one with the smallest objective function value, if it is better than the original gbest. Based on the above, the local search process is described in Algorithm 4.
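The swap-by-swap adjustment of Table 2 can be sketched as follows; the function name is ours, but the procedure reproduces the worked example above:

```python
def insert_after_job(order, pos, pi1, pi2):
    """Insertion operator sketch: move job pi1 (earlier in the sequence) to
    the slot immediately after job pi2 by repeated right-neighbour swaps,
    mirroring each swap on the two jobs' position values so that SPV decoding
    of pos stays consistent with order (as in Table 2)."""
    order, pos = list(order), list(pos)
    i, j = order.index(pi1), order.index(pi2)
    while i < j:
        order[i], order[i + 1] = order[i + 1], order[i]   # shift pi1 right
        a, b = order[i] - 1, order[i + 1] - 1             # jobs are 1-based
        pos[a], pos[b] = pos[b], pos[a]                   # mirror on positions
        i += 1
    return order, pos
```

Running it on the example yields order (5, 3, 1, 2, 4) and pos (2.7, 3.1, 2.2, 4.3, 1.9), the pos_3 verified in the text.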

PSO-LS Framework.
The PSO-LS framework structure is shown in Algorithm 5. As described in the previous sections, the state of the swarm is initialized by the first three lines. The main loop of the PSO algorithm consists of lines 4 through 9. Because the behavior of each particle depends heavily on the global leader gbest, the eighth line, which aims to improve the quality of the global optimal particle by calling Algorithm 4, is added before the end of each iteration.

Computational Experiments
To evaluate PSO-LS proposed in the previous chapter, we performed many experiments for comparison with the genetic algorithm (GA) [21].
To be specific, we used the main algorithm framework in a previous study [22] that uses GA to solve the scheduling problem.
The initial population was generated randomly. We used C1 from [22] for the chromosomal representation and crossover operator. For the selection mechanism, we used a hybrid of roulette wheel selection [23] with an elitist strategy [23]. A shift mutation was applied in the GA according to the literature [22]. The probability of mutation p_m was dynamic, starting at p_m := p_m^0. In each iteration, p_m decreases by a rate of θ; when the ratio of the minimum fitness to the mean fitness in the population exceeds a constant real number D, i.e., v_min/v_mean > D, p_m is reset to its initial value p_m^0. Before the comparison, experiments for parameter selection were needed because the performance of heuristic algorithms depends strongly on their parameters. We performed the same parameter selection experiment on PSO and GA.
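The dynamic mutation probability can be sketched as below. Note two assumptions: the text's "decreases by a rate of θ" is read here as multiplicative decay, and the reset test follows the stated v_min/v_mean > D rule; the paper may implement the decay differently.

```python
def next_mutation_prob(pm, theta, v_min, v_mean, pm0, D):
    """One update of the GA's dynamic mutation probability (sketch):
    decay pm by rate theta each iteration (multiplicative decay assumed);
    reset to the initial pm0 once v_min / v_mean > D, i.e. the population
    fitness values have nearly converged."""
    if v_mean > 0 and v_min / v_mean > D:
        return pm0                 # population converged: restore exploration
    return pm * (1.0 - theta)      # otherwise keep decaying
```

Resetting p_m on convergence re-injects diversity, while the decay lets the search settle between resets.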

Problem Data Setting.
Before we started the experiments, the problem data were generated as follows: (ii) the electricity consumption of job i, A_i, follows the uniform distribution unif(3, 15);
(iii) the due date of job i, d_i, follows the uniform distribution unif(5, 5 × n); (iv) the importance of job i, w_i, follows the uniform distribution unif(1, n).
All the problems handled in the experiments were generated randomly based on the distributions.

Algorithm Parameters Setting.
We used the Taguchi method [24] for parameter selection.
The Taguchi method, also called orthogonal experimental design, is used to determine the value of each factor, especially when the numbers of factors and factor levels are greater than 3. Based on orthogonal tables, the number of experiments can be greatly decreased. The brief steps of the Taguchi method are as follows: firstly, determine the responses, factors, and levels; secondly, select an appropriate orthogonal table; thirdly, execute the experiments according to the orthogonal table and fill it with the experimental results; then, analyze the results and determine the factor-level combination; and, at last, verify the effectiveness of the selected factor levels.
We took the size of the swarm, the inertia weight, c_1, c_2, and V_max as the factors for the PSO algorithm, and the size of the population, the probability of crossover p_c, the probability of mutation p_m, and the two parameters related to mutation, θ and D, as the factors for the GA algorithm. Each factor of both algorithms took 4 levels, as shown in Tables 3 and 4.
We conducted 16 experiments according to the orthogonal tables of parameters of each algorithm, as shown in Tables 5 and 6.
Considering that problems on different scales may require different parameter levels for effective search, we set the mean TWT of a random small-scale problem and a random large-scale problem as the responses. A problem with 30 jobs was used as the small-scale problem and one with 70 jobs as the large-scale problem. To eliminate as much random interference as possible, the average objective function value over 20 executions of each algorithm was taken as the response.
After the tables were designed and filled, we needed to select parameters for the PSO and GA algorithms by analyzing Tables 5 and 6, i.e., to decide which level to select for each factor. Take the size of the particle swarm for the small-scale problem as an example, shown as the line at the top left of Figure 5. Firstly, consider the situation where size equals 50: according to Table 5, the TWT values of the runs in which size is 50 are averaged for the small-scale problem. The average responses at the different levels of each factor need to be calculated in the same way, and then the level of each factor with the smallest mean TWT is chosen. We performed this procedure on the other factors for PSO and on all the factors for GA for both the small- and large-scale problems. The main effect plots for PSO and GA are shown in Figures 5 and 6, respectively. The lowest points of these lines are taken as the values of the parameters for the algorithms.

Algorithm 5: PSO-LS framework.
Require: parameter, problemData. Ensure: solution.
(1) initializeSwarm(swarm, parameter, n)
(2) evaluateSwarm(swarm, problemData)
(3) initialize pbest and gbest
(4) while parameter.stopCriteria is not satisfied do
(5) updateSwarm(swarm, parameter)
(6) adjustSwarm(swarm, parameter)
(7) evaluateSwarm(swarm, problemData)
(8) localSearch(swarm.gbest, problemData)
(9) end while
(10) solution = swarm.gbest.solution
(11) return solution
As such, we determined all the parameter values for each algorithm. The TWT obtained by the levels of the factors chosen for the large-scale problems is also the smallest among the data in the same column. Therefore, the parameter selections for PSO and GA by the Taguchi method are effective.
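The level-selection step of the Taguchi analysis amounts to averaging the response over the runs that used each level of one factor and picking the smallest mean; a minimal sketch (with made-up run data) is:

```python
from collections import defaultdict

def best_level(levels, responses):
    """Taguchi main-effect sketch for one factor: average the response
    (mean TWT) over all orthogonal-table runs that used each level, then
    pick the level with the smallest mean, i.e. the lowest point of the
    factor's main-effect line."""
    sums, counts = defaultdict(float), defaultdict(int)
    for lv, r in zip(levels, responses):
        sums[lv] += r
        counts[lv] += 1
    means = {lv: sums[lv] / counts[lv] for lv in sums}
    return min(means, key=means.get), means
```

Repeating this per factor and per problem scale reproduces the main-effect plots of Figures 5 and 6.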

Performance Comparison.
We compared the results of PSO-LS, PSO, and GA for problems on six different scales: 10, 30, 50, 70, 100, and 300 jobs. Problems with fewer than 70 jobs were considered small-scale; otherwise, they were considered large-scale.
When solving these two sets of problems, each algorithm uses the parameter values of the corresponding scale, which were obtained in the previous section.
For each scale, five different instances were randomly generated. When solving each problem, each algorithm was independently executed 20 times, and the average value of these 20 results was recorded to reduce the differences in the random initial solutions.
Each algorithm ran for the same duration in each run. All algorithms were executed for 1, 10, 30, 50, 100, and 100 s when solving problems with 10, 30, 50, 70, 100, and 300 jobs, respectively. For small-scale problems, the algorithms almost converge within the set time, so we compare the results of the algorithms at convergence. For large-scale problems, because of their difficulty, the algorithms cannot converge in a short time, so we let them run long enough that the results are closer to the optimal solutions. Since time is vital in a production environment, obtaining a better satisfactory solution in a shorter time is important. Therefore, the computational time rather than the number of iterations is controlled in the computational experiments.
The comparison of the average TWT over the 20 runs obtained by PSO-LS, PSO, and GA is shown in Table 9.
Paired t-tests were conducted at a significance level of α = 0.05 to analyze the performance of PSO-LS compared with both the basic PSO and GA.
For problems of the same scale, the differences in the objective function values obtained by PSO-LS and PSO shown in Table 9 can be tested as follows: if t > t_{α=0.05}(n − 1), H0 is rejected and μ_d > 0; that is, PSO yields a greater TWT (worse fitness) than PSO-LS, so the performance of PSO-LS is better than that of PSO. Otherwise, PSO-LS is considered to have no obvious advantage.
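This decision rule can be sketched numerically. The instance differences below are invented for illustration; the statistic is the standard paired t-statistic compared against the one-sided critical value t_{α=0.05}(n − 1).

```python
import math

def paired_t(diffs):
    """One-sided paired t-test statistic for H0: mu_d <= 0 vs H1: mu_d > 0,
    where diffs[i] = TWT(other algorithm) - TWT(PSO-LS) on instance i."""
    n = len(diffs)
    mean = sum(diffs) / n
    # sample standard deviation of the differences
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / (sd / math.sqrt(n))

# Invented TWT differences for the 5 instances of one scale:
t = paired_t([3.0, 5.5, 4.2, 6.1, 2.9])
# decision: reject H0 if t exceeds the critical value t_{0.05}(4) = 2.1318
reject_h0 = t > 2.1318
```

With n = 5 instances per scale the degrees of freedom are 4, matching the critical value 2.1318 used in the tables.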
We used the above method to test the performance of PSO-LS against PSO at all scales. Firstly, the values of the PSO-LS column are subtracted from those of the PSO column in Table 9 to obtain the data in the second column of Table 10; values greater than 0 indicate that PSO-LS is better. Then, the means and standard deviations of the differences at the different problem scales (10, 30, 50, 70, 100, and 300 jobs) are calculated and recorded in the third and fourth columns of Table 10, respectively. The t-statistics at the different scales, obtained from these means and standard deviations, are displayed in the fifth column. Similarly, columns 2 through 5 of Table 11 show the differences between PSO-LS and GA at all scales. When the computed t-statistic was greater than t_{α=0.05}(n − 1) = 2.1318 (with n = 5 instances per scale), PSO-LS was considered to perform better than PSO for problems with the corresponding number of jobs. In the fifth column of the two tables, the values greater than 2.1318 are highlighted to show that PSO-LS is better for problems with the corresponding number of jobs.
In addition, to analyze the performance of PSO-LS on the small- and large-scale problem groups, the sample size is n = 15, and when t > t_{α=0.05}(n − 1) = 1.7613, PSO-LS is considered statistically significantly better on the large- or small-scale problems. These figures are placed in columns 6 through 8 of Tables 10 and 11; column 8 gives the t-statistics for the small-scale and large-scale problems, and the values greater than t_{α=0.05}(14) = 1.7613 are shown in bold.
To analyze the differences between PSO-LS and PSO and between PSO-LS and GA overall, when t > t_{α=0.05}(n − 1) = 1.6991 (with n = 30), PSO-LS was considered statistically better. The means of the differences, the standard deviations, and the t-statistics are shown in the last three columns of Tables 10 and 11, and the numbers greater than 1.6991 are in bold.
To analyze the effectiveness of PSO-LS against the basic PSO, the differences, the means of the differences, and the t-statistics are shown in Table 10. The gap is the difference between the TWT obtained by PSO and that obtained by PSO-LS, d1 is the mean of the differences calculated from the 5 instances with the same job size, and s_D1 is the standard deviation related to d1; the t-statistic of each scale, t1, is then obtained as t1 = d1/(s_D1/√n) with n = 5. Similar to Table 10, Table 11 analyzes the performance of PSO-LS compared with GA. The observations from the displayed results are as follows.
As for the comparison between PSO and PSO-LS, PSO-LS is statistically significantly better than the basic PSO according to Table 10, because the differences between PSO and PSO-LS were almost all greater than zero (except for 10-4 and 30-1 in Table 9) and the t-statistic over all the differences was 2.58, which is greater than t_{α=0.05}(29) = 1.6991. As for performance on problems with the same number of jobs, the t-statistics in the t1 column were greater than t_{α=0.05}(4) = 2.1318 except for the problems with 10 jobs, which demonstrates the superior performance of PSO-LS over PSO except at the 10-job scale. The t-statistics in the t2 column are both larger than t_{α=0.05}(14) = 1.7613. This can be attributed to the effectiveness of the local search strategy, by which a better gbest, the global leader on which the whole particle swarm depends, may be found in each iteration at only a little extra time cost, because the insertion operator is simple, efficient, and time-saving. Besides, there was an increasing tendency of the t-statistics with the number of jobs, starting at 0.46 and peaking at 14.09 at scale 300, which means the local search strategy improves the search capability as the complexity of the problems increases, although at small scales such as 10 jobs it did not show an obvious advantage. The standard deviation depends on the differences among instances, which can be considered a property of the problems.
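To make the insertion operator concrete, the following is a hedged toy sketch: remove one job from the sequence and reinsert it at the position giving the smallest total weighted tardiness. The paper's operator additionally respects the energy consumption constraints, which this sketch omits; the job data below are invented.

```python
def twt(seq, p, d, w):
    """Total weighted tardiness of sequence seq with processing times p,
    due dates d, and weights w (no energy constraints in this toy)."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += w[j] * max(0, t - d[j])
    return total

def insertion_step(seq, p, d, w):
    """Try every (job, position) reinsertion; return the best neighbor found."""
    best_seq, best_val = list(seq), twt(seq, p, d, w)
    for i in range(len(seq)):
        rest = seq[:i] + seq[i + 1:]
        for k in range(len(seq)):
            cand = rest[:k] + [seq[i]] + rest[k:]
            val = twt(cand, p, d, w)
            if val < best_val:
                best_seq, best_val = cand, val
    return best_seq, best_val

p = {0: 3, 1: 2, 2: 4}   # processing times (invented)
d = {0: 2, 1: 9, 2: 5}   # due dates
w = {0: 3, 1: 1, 2: 2}   # tardiness weights
seq, val = insertion_step([1, 2, 0], p, d, w)
```

Each call enumerates O(n²) neighbors with a cheap evaluation, which is why applying it only to gbest adds little time per iteration.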
As for the comparison between GA and PSO-LS, the t-statistic over all the gaps between GA and PSO-LS was 2.43, which is larger than t_{α=0.05}(29) = 1.6991, so PSO-LS was significantly better than GA in general. The PSO framework is time-saving, leading to less time cost per iteration, so that within the execution time limit PSO-LS can complete more iterations than the GA with its relatively complex selection, crossover, and mutation operators. Moreover, the local search strategy with the efficient insertion operator, tailored specifically to this problem, improves gbest, on which all the particles rely heavily to update their states, leading to an improvement in almost every iteration. Because of the energy constraints, the insertion operator can cause great changes. By contrast, the GA operators such as crossover and mutation bring less diversity to the offspring, and good genes are likely to be lost in inheritance.
The comparison between PSO-LS and GA can be examined in more detail. Firstly, when the number of jobs was 10, GA had a slightly larger average TWT than PSO-LS in most instances (except for instances 10-1 and 10-2 in Table 9), but the gaps between the objective function values obtained by PSO-LS and GA were smaller than 5.00 in all instances, as shown in Table 11. The t-statistic of the gaps between GA and PSO-LS was 1.71, which is smaller than t_{α=0.05}(4) = 2.1318. So, when the number of jobs was 10, PSO-LS was slightly better than GA, but there was no statistically significant difference between them. This is because the problems with 10 jobs are not difficult, and the algorithms had converged, or were close to converging, within 1 second.
Secondly, when the number of jobs was 30, PSO-LS obtained the smallest TWT compared with PSO and GA except for 30-1, as shown in Table 9, so the differences between GA and PSO-LS were mostly nonnegative in Table 11. The t-statistic of the gaps was 2.26, which is greater than t_{α=0.05}(4) = 2.1318, so PSO-LS was statistically better than GA.
Thirdly, when the number of jobs was larger than 30, PSO-LS was much more effective than PSO and GA, as displayed in Table 9.
According to the t1 column in Table 11, the t-statistics of the gaps in TWT between GA and PSO-LS were 4.58, 11.96, 15.58, and 32.82 for the scales from 50 to 300, all significantly larger than t_{α=0.05}(4) = 2.1318. The t-statistics in column t1 show an upward trend with the number of jobs, which illustrates the superior performance of PSO-LS over GA, especially for large-scale problems.
As for the t-statistics of the differences between GA and PSO-LS for the small-scale and large-scale problems, according to column t2 in Table 11, they were 2.55 and 2.68, respectively, both greater than t_{α=0.05}(14) = 1.7613. This shows the good global search capability of PSO-LS compared with GA.
In conclusion, PSO-LS has a strong search capability and finds better solutions than the basic PSO and GA. The larger the number of jobs, the larger the gap in the results between PSO-LS and GA. Because of the local search strategy, PSO-LS is statistically significantly better than the basic PSO and GA.

Conclusion
In this paper, we studied a single-machine scheduling problem with energy consumption limits to minimize the total weighted tardiness. Based on the basic PSO algorithm, a local search was designed and embedded in the algorithm to form PSO-LS and improve the performance of the original algorithm, which is one aspect of our main contribution. Another aspect of our contribution is the constraint handling process tailored to this problem, which determines the start processing time and completion time of each job so as to obtain the total weighted tardiness as the objective function value. The constraint handling process considers the actual production environment, keeping start processing times accurate to the second and using as much energy in each time window as possible without violating the energy constraints.
Experimental results showed that the enhanced algorithm, PSO-LS, is more effective than the original PSO and the GA, especially for large-scale problems, which demonstrates the effectiveness of our algorithm design.
This research can be extended in the following aspects: first, more operators inside PSO could be developed to improve the performance of the local search; second, other methods such as the adaptive large neighborhood search (ALNS) algorithm [25] could be used to bring more diversity to the particles; third, initialization methods for PSO could be considered to improve both the diversity of the particles and the quality of the initial solution; fourth, energy consumption limits with uncertainty could be taken into account to handle more complex production environments; fifth, other PSO variants such as binary PSO [26] could be used as the algorithm framework to handle the discrete problem. These issues will be addressed in future studies.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest.

Authors' Contributions
Qingquan Jiang and Xiaoya Liao contributed equally to this work.