Multiobjective Parallel Algorithms for Solving Biobjective Open Shop Scheduling Problem

The Open Shop Scheduling Problem (OSSP) is one of the most important scheduling problems in the field of engineering and industry. This kind of problem includes m machines and n jobs; each job contains a certain number of operations, and each operation has a predetermined processing time on its corresponding machine. The order in which these operations are processed affects the completion times of all jobs. Therefore, the purpose of OSSP is to find a proper processing order of the jobs on the specified machines so that the completion time of all jobs is minimized. In this paper, the strengths and limitations of three methods are evaluated by comparing their results on large-scale and small-scale OSSP benchmarks. The minimized completion time and total tardiness are considered the objective functions of the adapted methods. To solve small-scale problems, we adapt a mathematical model called Multiobjective Mixed Integer Linear Programming (MOMILP). To solve large-scale problems, two metaheuristic algorithms, the Multiobjective Parallel Genetic Algorithm (MOPGA) and Multiobjective Parallel Simulated Annealing (MOPSA), are adapted. In the experimental results, we randomly generate small-scale problems to compare MOMILP with the Genetic Algorithm (GA) and Simulated Annealing (SA). To compare MOPSA and MOPGA with the state of the art and show their strengths and limitations, we use a standard large-scale benchmark. The simulation results show that although the MOPSA algorithm is faster, the MOPGA algorithm is more efficient in achieving optimal solutions for large-scale problems compared with the other methods.


Introduction
Optimal job scheduling problems are classified into single machine scheduling, job shop scheduling, open shop scheduling, flow shop scheduling, and hybrid scheduling problems, where single machine scheduling is the simplest problem of scheduling: the problem is to schedule the processing of n jobs with varying processing times on a single machine [1]. The Job Shop Scheduling Problem (JSSP) is more complex. Here, each job consists of a set of operations (O1, O2, ..., On) that must be processed in a specific order. Each operation has a unique machine on which it must be performed, and only one operation per job may be performed at a time [2]. A more complex problem than JSSP is the Open Shop Scheduling Problem (OSSP). OSSP involves m machines (M1, M2, ..., Mm) to perform n jobs (J1, J2, ..., Jn), each job comprising m operations. The ith operation of job Jj is denoted by Oij and must be processed on machine Mi for pij ≥ 0 units of time.
Therefore, we will have a total of n × m operations (O11, O12, ..., Onm) [3]. Like OSSP, each job in the Flow Shop Scheduling Problem (FSSP) consists of precisely m operations. However, the ith operation of each job must be executed on the ith machine, and no machine can perform several operations simultaneously. The execution time of each operation of each job is given in advance [4]. The most complex problem in the class of job scheduling problems is the hybrid flow shop scheduling problem (HFSP). In this problem, n jobs (J1, J2, ..., Jn) must be processed in a multistage manufacturing facility in which each stage has several concurrent and identical machines [5].
This paper addresses the Biobjective Open Shop Scheduling Problem. In terms of complexity, the OSSP lies between the job shop and the flow shop scheduling problems; however, all the problems in this class are considered NP-hard. The OSSP has a broad range of applications across different sectors, including timetabling, satellite communication, health care management, transportation, tourism, computer vision, and many other applications. In the following subsection, we introduce some related works that have tried to solve the open shop scheduling problem.
1.1. Related Works. Production scheduling problems have been widely studied in past years; a detailed review of this type of research can be found in [6, 7]. Breit et al. [8] solved the two-machine OSSP with a predetermined stop time for one of the machines and proposed a heuristic algorithm. Next, they set predetermined stop times for both machines and again used heuristic algorithms to approximate the problem. Hsu et al. [9] examined two different periodic maintenance approaches simultaneously in the single-machine scheduling problem. The periodic maintenance approaches considered in their problem stop the machine either after a certain period of time (T) or after processing a certain number of jobs (K) in order to perform the maintenance process; whichever condition occurs first stops the machine.
To quantitatively characterize the production and distribution optimization problem, Fu et al. [10] formulated a mixed-integer programming model to minimize the maximum completion time and created an improved black widow optimization method to address the problem. In their method, the solution representation, population initiation, procreation, cannibalism, and mutation, as well as a simulated annealing step, are uniquely constructed.
Another condition that arises under machine-unavailability constraints is the impermissibility of interrupting work: when there is insufficient time to perform an operation before a machine stop, the operation is not loaded on the machine and its processing is postponed until the stop is finished. Transportation and movement times between machines are other criteria that have been considered in scheduling problems. Strusevich [11] considered, for the two-machine OSSP, a time interval between the completion of one activity of a job on one machine and the beginning of the next activity of the same job on another machine; since such transportation occurs in real conditions, he named it transportation time. He also points out that in chemical and metallurgical applications, these transportation times correspond to the times of heating or cooling processes.
Recently, Fu et al. [12] offered a stochastic biobjective two-stage open shop scheduling problem that mimics a car repair process where duties are delegated to different third-party organizations with specialized equipment. The optimization problem was formulated by minimizing the overall lateness and processing costs, subject to different resource restrictions. To tackle the open shop problem, a hybrid multiobjective migrating birds optimization, along with a genetic operation and a discrete event system, was developed by considering the problem features. The OSSP is known as an NP-hard problem [13-15], and it is not possible to solve such problems in polynomial time except in small dimensions. Therefore, approximate approaches, including heuristic and metaheuristic methods, can be more efficient than exact methods. Panahi et al. [16] used multiobjective simulated annealing and the Ant Colony Optimization (ACO) algorithm to solve Multiobjective Open Shop Scheduling (MOOSS) with the objectives of total tardiness and the longest completion time. Panahi and Tavakkoli-Moghaddam [17] combined general and multiobjective simulated annealing algorithms with ACO and used the result to simultaneously optimize two objective functions, the longest completion time and total tardiness, in OSSP. Drezner [18] used the Genetic Algorithm (GA) and the Simulated Annealing (SA) algorithm for batch scheduling in the case of multifunction parallel machines.
Task scheduling has been studied in many different research works with an array of applications, from task scheduling in cloud computing [19] to flow shops in factories [20, 21]. Most studies on scheduling problems have been conducted under the basic assumption that machines are available over the whole planning horizon, while this is not the case in the real world. Machines may be unavailable for reasons such as preventive maintenance, failure, rest periods, and residual work from the previous planning period that must be processed at the beginning of the new period [22]. In many cases, unavailability times are known in advance, so the decision-maker can make a more efficient decision by considering them. Karimi et al. [23] considered a predetermined time for machine unavailability in the single machine problem and proposed two heuristic methods to solve it.
In this paper, an OSSP is addressed in which stops occur at different times on the machines, with durations that differ between machines but are known for each machine. The intended objective functions, minimizing the longest completion time (Cmax) and the sum of tardiness, are combined with weights into a single function. Also, because different devices may be used to transport different items in a shop, the movement time on a fixed route differs for each job. Moreover, the path from one machine to another is considered different from its return path in order to minimize material flow interference; therefore, the job transfer matrix between workstations is asymmetric. To solve small-scale problems when the weight of each objective is known, the exact approach of the biobjective mixed linear mathematical model is adapted as a solution method, and its results are compared with those of GA and SA on small-scale OSSP instances. On the contrary, when the weight of each objective is unknown, Pareto optimal solutions are sought using the adapted multiobjective parallel metaheuristics.

The remainder of the paper is organized as follows: Section 2 presents a mixed linear mathematical programming model. The adapted metaheuristic algorithms and their details are described in Section 3. Section 4 describes the design of experiments and the computational evaluation. Section 5 provides conclusions and future studies.

Mathematical Model
In this section, the problem is introduced using simplifying assumptions and mathematical symbols, and then a multiobjective mixed linear programming model is presented and analyzed to accurately solve small-scale problems.
To build the mathematical model, the available time of each machine is considered as a packet of a certain length Ti, and jobs should be placed in such a way that their total processing time does not exceed the length of the packet. For clarity, see Figure 1, in which J[j] indicates the jth job in the sequence and Bil indicates the lth packet of machine i.

Problem Assumptions.
Simplifying assumptions are considered as follows:

(i) Each job may be processed on at most one machine at a time.
(ii) Each machine may process at most one job at a time.
(iii) Jobs can be processed in any order by all or some machines.
(iv) All jobs are available in the shop from the beginning of the planning horizon.
(v) Job interruption is not allowed during processing.
(vi) There is only one machine of each type in the shop (each operation of a job is performed by exactly one specific machine).
(vii) Unavailable times occur at specific times on each machine.
The length of the available (and unavailable) times of each machine is known, depends on the machine, and is fixed during the planning period.
stij, Cij, Cj, Cmax, Tardj ≥ 0 for all i, j.

Equation (1) is the objective function of the problem, a convex linear combination of two criteria: (1) the longest completion time and (2) the sum of tardiness. θ1 and θ2 are positive coefficients that represent the weight of each objective, with θ1 + θ2 = 1. The completion time of each operation (Cij) is obtained by equation (2). If job j is processed on machine i before machine h (Yihj = 1), then the start time of preparing job j on machine h (sthj) must be greater than the completion time of that job on machine i; this is expressed by equation (3). If job j is processed on machine i before job k (Xijk = 1) and both jobs are located in the same packet l (Aijl = Aikl = 1), then the start time of preparing job k on machine i (stik) must be greater than the completion time of job j on that machine; this is expressed by equation (4). Equation (5) ensures that no preparation takes place while machines are unavailable. Equations (6) and (7) together prevent the completion of operations while machines are unavailable. Equations (8) and (9) simultaneously specify the sequence of jobs on each machine and in each packet. Equation (10) determines the order of the machines for the operations of each job. Equation (11) ensures that each operation takes place in exactly one packet of its machine. Equation (12) bounds the sum of the processing times of the operations contained in a packet by the maximum time of that packet, which is the access time of the relevant machine (Ti). The completion time of each job is calculated through equation (13) and enters the objective function. Equations (14) and (15) calculate the longest completion time and the tardiness of each job, respectively. Finally, equations (16) and (17) declare the nonnegative variables and the binary (0-1) variables.
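As a reading aid, the weighted objective (1) described above can be written out as follows; this is a reconstruction consistent with the description (same symbols as the text), not a reproduction of the paper's typeset formula:

```latex
\min \; Z \;=\; \theta_1\, C_{\max} \;+\; \theta_2 \sum_{j=1}^{n} \mathrm{Tard}_j,
\qquad \theta_1 + \theta_2 = 1, \quad \theta_1, \theta_2 > 0
```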

An Example.
Here, to clarify the concept of the problem, we provide an example and solve it using the above MOMILP model. This example includes five jobs (n = 5), two machines (m = 2), and five packets for each machine (b = 5). The rest of the parameters are as follows:

Considering the objective weights θ1 = θ2 = 0.5, the global optimal solution Z = 90.5 is obtained by solving the MOMILP model in Lingo 8 software; the corresponding Gantt chart is depicted in Figure 2. As can be seen, no operation is performed during the unavailable times of the machines, which are shown in gray; thus, the main goal of the model has been met. In addition, the sum of the processing times of the operations contained in each packet does not exceed the length of that packet. Also, the obtained sequence shows that due dates have been respected: jobs with earlier due dates appear at the beginning of the sequence and jobs with later due dates at the end. Another noteworthy point is the observance of transfer times; for example, note the gap between the completion of job J1 on machine M2 and the start of this job on the next machine (Tr211 = 2).

Model Analysis.
In this section, the sensitivity of the MOMILP model to the number of jobs (n), machines (m), and packets (b) is investigated. To this end, all indexes related to variables and constraints are shown in Tables 1 and 2, respectively.
The effect of increasing the number of jobs, machines, and packets on the number of variables and constraints can be seen in Table 3.

The Genetic Algorithm-Based Method
As mentioned in the introduction, the OSSP is known as an NP-hard problem, and only small-scale problems can be solved exactly in a reasonable amount of time [24]. Although the proposed mathematical model achieves an exact solution, its efficiency decreases as the dimensions of the problem increase. Hence, heuristic methods or approximate algorithms are more effective for solving the medium- and large-scale problems that commonly occur in the real world.
Genetic algorithms (GAs) are a class of general-purpose search strategies based on natural selection and genetics. GAs have been applied effectively to a wide array of optimization challenges [19, 25, 26]. In contrast to local search algorithms such as SA, which typically manipulate a single feasible solution and are extremely fast, GAs retain and modify a population of candidate solutions. Although GAs have proven to be a versatile and successful search tool for optimization problems, there are still scenarios in which the basic GA performs poorly; therefore, several hybridization methods have been reported in the literature. In this section, we propose to use the Multiobjective Parallel Genetic Algorithm to solve the OSSP.

Chromosome Display.
Chromosomes are encoded solutions, i.e., points in the solution space. In this paper, each operation is considered as a gene and each chromosome (solution) is represented as a permutation of genes (operations). For example, consider a problem with five jobs and two machines, and thus 10 operations. A chromosome (an encoded solution) is shown in Table 4, where Oij is the operation of job j on machine i. According to this chromosome, operation O21 will be processed before operation O11 (note that these two operations belong to the same job) and operation O14 will be processed before operation O11 (note that these two operations share the same machine).

Initial Population.
Scattering across generations prevents premature convergence to local optima. Therefore, we randomly generate the initial population (initial generation) to maintain the scattering of solutions in the solution space as much as possible. That is, a random permutation of the set {1, 2, ..., m·n} is considered as an individual, where m·n is the number of operations.
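A minimal sketch of this random initialization, with illustrative function and parameter names:

```python
# Each individual is a random permutation of the operation indices
# {1, ..., m*n}; popsize such individuals form the initial generation.
import random

def initial_population(popsize, m, n, rng=None):
    rng = rng or random.Random()
    base = list(range(1, m * n + 1))
    return [rng.sample(base, len(base)) for _ in range(popsize)]

pop = initial_population(popsize=30, m=2, n=5, rng=random.Random(0))
```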

Objective Function.
In order to calculate the objective function for each chromosome (a point in the solution space), these encoded solutions must be decoded. For this purpose, we place the operations on the machines in order, from the first operation to the last; taking their unavailable times into account as well, we calculate their completion times.
Then, taking into account the completion times of all operations, the longest completion time is obtained as Cmax = max_j Cj. Also, in order to calculate the second term of the objective function, i.e., the sum of tardiness, first the completion time of each job is calculated as Cj = max_i Cij, and then the tardiness of that job is obtained as Tardj = max(0, Cj − dj). Finally, the sum of tardiness is calculated as Σ(j=1..n) Tardj. How the values of the coefficients θ1 and θ2 are determined is explained in the next sections. The objective function of each chromosome is therefore calculated as Z = θ1·Cmax + θ2·Σ(j=1..n) Tardj.
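The decoding step above can be sketched as follows. This simplified decoder schedules operations in chromosome order, each starting as soon as both its machine and its job are free; machine unavailability windows and transfer times are omitted for brevity (the paper's full decoder accounts for both), so this is a sketch rather than the exact procedure.

```python
def evaluate(chromosome, proc, due, theta1, theta2):
    """Return Z = theta1*Cmax + theta2*(sum of tardiness).

    chromosome: list of (machine, job) operations in processing order
    proc[(i, j)]: processing time of job j on machine i
    due[j]: due date of job j
    """
    machine_free, job_free, completion = {}, {}, {}
    for (i, j) in chromosome:
        start = max(machine_free.get(i, 0), job_free.get(j, 0))
        end = start + proc[(i, j)]
        machine_free[i] = job_free[j] = completion[(i, j)] = end
    # C_j = max_i C_ij, C_max = max_j C_j, Tard_j = max(0, C_j - d_j)
    c_job = {}
    for (i, j), c in completion.items():
        c_job[j] = max(c_job.get(j, 0), c)
    c_max = max(c_job.values())
    tardiness = sum(max(0, c - due[j]) for j, c in c_job.items())
    return theta1 * c_max + theta2 * tardiness
```

For a single job with processing times 3 and 4 on two machines and a due date of 10, the decoder yields Cmax = 7 and no tardiness.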

Selection.
Selection is the process by which individuals in a generation are chosen in pairs for mating. There are various selection approaches in the literature [27]. In this paper, we use the roulette wheel selection approach, in which individuals with higher fitness have a better chance of mating, reflecting the basis of Darwin's theory. Applying roulette wheel selection requires two quantities: the individual selection probability and the individual cumulative probability, obtained from equations (18) and (19), respectively. According to this approach, a number r is randomly generated in the range [0, 1], and the individual i that satisfies CP(i−1) < r ≤ CP(i) is selected as one parent. The process is repeated to select the second parent.
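The selection rule above can be sketched as follows. Since the objective here is minimized, the fitness values passed in are assumed to have already been transformed so that larger means better; that transformation is an assumption, as equations (18) and (19) are not reproduced in the text.

```python
# Roulette wheel selection: SP_i = f_i / sum(f), CP_i is the running
# cumulative sum, and individual i with CP_{i-1} < r <= CP_i is chosen.
import random

def roulette_select(fitnesses, rng=None):
    rng = rng or random.Random()
    total = sum(fitnesses)
    r = rng.random()
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f / total
        if r <= cumulative:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```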

Crossover Operation.
After selecting a pair of parents using one of the selection methods, a genetic crossover operator is applied with probability pc to combine the two parents and generate two children.
There are several crossover techniques for combining two parents: single-point crossover, two-point crossover, multipoint crossover, uniform crossover, and so on [28]. However, the main problem with the crossover operation is that the feasibility of the new solutions (offspring) may not be guaranteed. When new chromosomes are infeasible, corrective action is usually taken to turn them into feasible solutions, which lengthens the solving time; the operator used here avoids this by maintaining the feasibility of new chromosomes in each generation, without any modification process [29]. In the following, the crossover operation is described in four steps:

Step 1: Select two operations randomly from chromosome 1 (parent 1).
Step 2: Transfer the genes from the two selected operations along with all the genes between them from parent 1 to child 1 while maintaining their location.
Step 3: Select the rest of the genes needed by child 1 from left to right until creating a complete chromosome and place them the same way from left to right in the empty spaces of child 1.
As an example: Step 1. Suppose the two operations selected from parent 1 are O14 and O22, in other words, genes 2 and 9, respectively.
Step 3. Select the rest of the genes needed by child 1 (from left to right) from parent 2 and place them (from left to right) in the empty spaces of child 1.
Step 4. Exchange the two parents 1 and 2 and repeat steps 2 and 3 to create child 2.

Mutation.

In order to prevent the generated generations from being driven toward local optima, a mutation operator with probability pm is applied to each of the produced children. This operator exchanges two randomly selected operations, i.e., their genes [29]. For example, suppose that operations O12 and O24 are selected from child 1. Then, genes 6 and 8 are exchanged, and the mutated child will be similar to the result given in Table 9.
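The crossover steps and the swap mutation above can be sketched as follows; function names are illustrative. Keeping a segment of parent 1 in place and filling the remaining slots left-to-right from parent 2 always yields a valid permutation, which is why no repair step is needed.

```python
import random

def segment_crossover(p1, p2, rng):
    a, b = sorted(rng.sample(range(len(p1)), 2))  # steps 1-2: keep p1[a..b]
    child = [None] * len(p1)
    child[a:b + 1] = p1[a:b + 1]
    kept = set(p1[a:b + 1])
    rest = (g for g in p2 if g not in kept)       # step 3: fill from p2
    return [g if g is not None else next(rest) for g in child]

def swap_mutation(chrom, rng):
    i, j = rng.sample(range(len(chrom)), 2)       # exchange two genes
    child = chrom[:]
    child[i], child[j] = child[j], child[i]
    return child

rng = random.Random(0)
p1, p2 = [1, 2, 3, 4, 5, 6], [6, 5, 4, 3, 2, 1]
child1 = segment_crossover(p1, p2, rng)
child2 = segment_crossover(p2, p1, rng)           # step 4: swap parents
```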

Termination Condition.
The GA continues producing new generations until a termination criterion is met, which here is m·n·ε seconds from the start [30], where ε is a coefficient whose value is adjusted through experiments; the following sections explain how it is set. The reason for choosing this termination condition is that it gives the algorithm more time on larger problems.
Primary Genetic Algorithm.

Before presenting the GA for solving multiobjective problems, we describe here the structure and operation of the basic GA using the above explanations. The basic genetic algorithm works in the following 11 steps:

Step 1. Generate popsize individuals (chromosomes) randomly as the initial population.
Step 2. Calculate the fitness of each individual.

Step 3. Select a pair of parents from the individuals based on the selection strategy.
Step 4. Apply crossover operator with the probability p c on the parents and generate two new children.
Step 5. Apply mutation operator with the probability p m on each child.
Step 6. Calculate the fitness of the children.

Step 7. Repeat steps 3-6 until popsize new children have been generated.
Step 8. Sort all existing individuals, both old (i.e., the initial population) and new (i.e., the generated children), 2 × popsize in total, according to their fitness.
Step 9. Transfer the popsize individuals with the best fitness to the new generation.
Step 10. Check the termination condition; if it is not met, go to step 3; otherwise, go to step 11.
Step 11. Select the best solution (the most fitted one) of the last generation as the answer to the algorithm.
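The 11 steps above can be sketched as a compact, self-contained loop (minimization). The toy fitness used below is for demonstration only; in the paper, the fitness is the decoded scheduling objective. The selection here inverts costs so that a roulette-style weighting favors low-cost individuals, which is an implementation choice, not the paper's exact formula.

```python
import random

def run_ga(fitness, n_genes, popsize=30, pc=0.8, pm=0.2,
           generations=50, rng=None):
    rng = rng or random.Random()
    base = list(range(1, n_genes + 1))
    pop = [rng.sample(base, n_genes) for _ in range(popsize)]   # step 1

    def crossover(p1, p2):
        a, b = sorted(rng.sample(range(n_genes), 2))
        child = [None] * n_genes
        child[a:b + 1] = p1[a:b + 1]
        rest = (g for g in p2 if g not in set(p1[a:b + 1]))
        return [g if g is not None else next(rest) for g in child]

    def mutate(c):
        i, j = rng.sample(range(n_genes), 2)
        c[i], c[j] = c[j], c[i]

    for _ in range(generations):
        costs = [fitness(ind) for ind in pop]                   # steps 2, 6
        worst = max(costs)
        weights = [worst - c + 1 for c in costs]                # invert costs
        children = []
        while len(children) < popsize:                          # step 7
            p1, p2 = rng.choices(pop, weights=weights, k=2)     # step 3
            if rng.random() < pc:                               # step 4
                pair = [crossover(p1, p2), crossover(p2, p1)]
            else:
                pair = [p1[:], p2[:]]
            for child in pair:
                if rng.random() < pm:                           # step 5
                    mutate(child)
                children.append(child)
        merged = sorted(pop + children, key=fitness)            # step 8
        pop = merged[:popsize]                                  # steps 9-10
    return min(pop, key=fitness)                                # step 11
```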
The values of the GA parameters pc, pm, popsize, and ε will be determined by designing statistical tests in later sections.

Multiobjective Parallel Genetic Algorithm.
There is a lot of research in the literature in which GA has been used to solve multiobjective problems, owing to its strong ability to deal with multiobjective optimization. Since GAs work with a population of points (solutions), multiple Pareto optimal solutions can be found within a GA population [27]. In the following, we explain how the multiobjective problem is handled.
In the mathematical model MOMILP, θ1 and θ2 are introduced as the weights of the objectives Cmax and Σ(j=1..n) Tardj, and the objective function is written as a single function (1). Our biobjective problem thus becomes a single-objective problem. However, determining the weights is a delicate and sensitive task. To tackle this, we can divide the population into several groups, with the individuals of each group searching under one of the possible combinations of objective coefficients, while all groups search in parallel for Pareto optimal solutions. In this way, we turn a multiobjective problem into a set of single-objective problems with different weights. Therefore, we consider a set of λ different weights with a small difference ρ between two consecutive weights. Equation (20) shows the set of possible combinations.
Similarly, we divide the population into λ subpopulations and assign weights of the above set to them. us, each subpopulation searches for Pareto optimal solutions in a separate path. Subpopulations act independently and unrelatedly as parallel GAs, but their other parameters and operators, including crossover and mutation, the size of the subpopulation, and termination conditions, are all the same.
In this way, the scattering of solutions is effectively strengthened. Also, Pareto solutions are stored to keep dominant solutions.
This prevents losing dominant solutions during the optimization process. This set is updated frequently to get closer to the Pareto optimal front. The process of MOPGA is as follows:

Step 1. Generate the initial population randomly, comprising λ × popsize members.

Step 2. Divide the initial population into λ subpopulations.

Step 3. Create the weights of the single objective function and assign them to the subpopulations; for instance, if we set λ = 21 and ρ = 0.05, the weights range over (0, 1), (0.05, 0.95), ..., (1, 0).

Step 4. Perform the primary genetic algorithm process from its step 2 for all subpopulations and save the dominant solutions.

Step 5. Find the best solution among the dominant ones.

Step 6. Finish.
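The weight set of step 3 can be sketched as follows, assuming equation (20) enumerates θ1 = k·ρ for k = 0..λ−1 with θ2 = 1 − θ1; the exact form of (20) is not reproduced in the text, so this pattern is an assumption consistent with the example λ = 21, ρ = 0.05.

```python
# Each subpopulation receives one (theta1, theta2) pair; rounding
# removes floating-point noise from the k*rho products.

def weight_set(lam=21, rho=0.05):
    return [(round(k * rho, 10), round(1 - k * rho, 10)) for k in range(lam)]

weights = weight_set()  # (0, 1), (0.05, 0.95), ..., (1, 0)
```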

The Simulated Annealing (SA)-Based Method.
Another common metaheuristic algorithm for solving NP-hard problems is Simulated Annealing. This algorithm is inspired by physical systems in which the energy of atoms is reduced to a minimum level through a cooling process [30-32]. SA explores the whole surface of the objective function and attempts to optimize it while moving both uphill and downhill. Consequently, it is largely independent of the initial values, which are often a crucial input to traditional algorithms. In addition, it can escape local optima and locate the global optimum via its uphill and downhill moves, and it makes fewer rigorous assumptions about the objective function than traditional algorithms. Because of these lenient assumptions, it copes more readily with objective functions that have ridges and plateaus. Finally, it can optimize objective functions that are not even defined for certain parameter values [33].
In this section, we first provide an overview of the SA and then describe the MOPSA.

The Primary Simulated Annealing (SA) Algorithm.
SA starts from a solution (energy level) at an initial temperature (temp0) and searches the solution space in the neighborhood of that solution. The search continues until a termination condition is reached, which we set to m·n·φ iterations; the parameter φ is a fixed coefficient in the range (0, 1). In this algorithm, we use the same encoding as the GA to represent solutions as chromosomes, and we take a random permutation of the operations as the initial base solution. To move from one solution to another, the neighborhood search method described in Algorithm 1 is used. After the neighborhood search finishes, the current temperature is reduced by multiplying it by a parameter μ ∈ (0, 1), and the neighborhood search process is repeated. The algorithm continues until the final temperature (temp_f) is reached. During the search, a new solution is accepted if it improves the value of the objective function; otherwise, it is accepted with probability exp(−Δ/temp_i), in which temp_i is the current temperature and Δ = 100 × (Z_new − Z_old)/Z_new. SA is summarized in Algorithm 2. We set the final temperature to 1; the values of the other parameters, temp0, μ, and φ, are set by the statistical experiment design method discussed in Section 4.
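The SA loop described above can be sketched as follows: a fixed number of neighborhood-search iterations per temperature, acceptance of worse moves with probability exp(−Δ/temp) where Δ = 100·(Z_new − Z_old)/Z_new, and cooling by temp ← μ·temp until the final temperature is reached. The toy cost/neighbor pair in the test is illustrative only; in the paper, moves come from the Algorithm 1 neighborhood on chromosomes.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, temp0=1000.0, temp_f=1.0,
                        mu=0.9, iters_per_temp=50, rng=None):
    rng = rng or random.Random()
    x, z = x0, cost(x0)
    best, best_z = x, z
    temp = temp0
    while temp > temp_f:
        for _ in range(iters_per_temp):
            y = neighbor(x, rng)
            zy = cost(y)
            if zy <= z:                         # improving: always accept
                x, z = y, zy
            else:                               # worse: accept with prob.
                delta = 100.0 * (zy - z) / zy
                if rng.random() < math.exp(-delta / temp):
                    x, z = y, zy
            if z < best_z:
                best, best_z = x, z
        temp *= mu                              # cooling step
    return best, best_z
```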

Multiobjective Parallel Simulated Annealing (SA).
The MOPSA process is very similar to that of MOPGA, and both use the same approach to multiobjective optimization. Here, the objective function is again the single function (1), and several SA instances examine the different weights of the objectives in parallel and independently. The difference at the start is that, in MOPGA, the initial population is divided into several subpopulations, but since SA starts with only one chromosome, such a division is not possible. The process of MOPSA is as follows:

Step 1. Produce λ chromosomes randomly.

Step 2. Produce the weights of the single objective function as in equation (22) and assign them to the chromosomes.

Step 3. Perform the primary SA shown in Algorithm 2 for all chromosomes from step 2 and store the dominant solutions.

Step 4. Find the best solution among the dominant ones.

Step 5. End.

Experimental Design and Computation Evaluation
In this section, we seek to validate the adapted algorithms through experiments and statistical analysis. For this purpose, different levels of the relevant factors and their different combinations are investigated; the Taguchi approach is therefore used as the experimental design method in this paper. In addition, after adjusting the parameters of the proposed algorithms using the Taguchi method, two problem groups, one with small dimensions and the other with large dimensions, are randomly generated and the results of the solution methods are compared.
We also compare the results of our experiments on a large-scale benchmark with other methods that have solved the OSSP.

Taguchi Experiment Design.
The Taguchi method is a statistical analysis method in which, using orthogonal arrays, the number of experiments is significantly reduced compared to full factorial designs [30]. For example, to design an experiment with four factors, each with three levels, a full factorial design performs 3^4 = 81 experiments, but Taguchi reduces them to nine.
Taguchi believed that the key to improving quality was to reduce deviations, not just to keep values within a specified range. Therefore, he tried to reduce the deviations to zero and paid less attention to staying at a fixed target. This is the main reason why the Taguchi method is considered a robust method [34]. In the Taguchi method, factors are divided into two categories: controllable factors, or signals, and uncontrollable factors, or noise. In order to reduce the deviations, a ratio called signal to noise (S/N) is defined, which indicates the sensitivity of the response variable to uncontrollable factors and shows the deviation around a certain signal value. High S/N values indicate low deviation, meaning that controllable factors are more effective than uncontrollable ones. Using the optimal factor levels obtained by this method, results close to the mean value with the least deviation are expected [30].
To convert the objective function into the S/N ratio, three types of criteria can be used in the Taguchi method: "the smaller the better," "the larger the better," and "the closer to the nominal value the better" [30]. Since the measurement criterion of the problem discussed here is expressed as a minimization, the "smaller the better" mode is appropriate, with the corresponding S/N formula. Also, a well-known performance criterion named RPD is used, where Alg_sol is the value of the objective function of an instance obtained by one algorithm of the experiment design and Min_sol is the minimum value of the objective function of that instance obtained over all algorithms of the experiments. It is thus obvious that low RPD values are more valuable.

The neighborhood search of Algorithm 1, as referenced above, proceeds as follows: (1) choose a gene at random; (2) if the selected gene is located at the first position of the chromosome, exchange it with the next gene; (3) otherwise, if it is located at the last position, exchange it with the previous gene; otherwise, evaluate exchanging it with the two adjacent genes (previous and next).

Data Generation.
To perform Taguchi experiments, a problem with dimensions (m, n, b) � (10, 5, 20) is considered, and five iterations of the same are generated randomly. Also, other literature-inspired [9,[35][36][37] parameters of the problem are as follows: (i) Processing times have a continuous uniform distribution in (1, 99). (ii) Transfer times between machines have a continuous uniform distribution in (1) and (20). (iii) Consecutive unavailable durations of each machine have a uniform distribution in (1, 50). (iv) Consecutive machine access times (packet size of each machine) are calculated as follows: where a is selected from 1/5, 1/4, 1/3 { } randomly.
Delivery time is calculated as follows: In which d j is the total time that a job is on all machines.

Adjusting the Parameters of the MOPGA.
Factors to be set in the MOPGA algorithm are as follows: crossover probability (p c ), mutation probability (p m ), population size (popsize), and termination criterion coefficient (ε). According to Table 10, three levels are considered for each factor [29,30]. So, Orthogonal Array Design L 9 is the most appropriate case because it has exactly the same number of factors and levels as the experiment and meets all objectives. Combinations of different levels of factors in each experiment of the design L 9 for the MOPGA algorithm are shown in Table 11. MOPGA and all experiment designs are programmed in C#.Net visual language and run on a computer with GHz Intel ® Core ™ and 4 GB ram. According to the results, optimal levels of factors are as follows: p c � 0.8, p m � 0.2, popsize � 30 and ε � 0.4.
In order to find the effect of each factor on the response variable, we perform an analysis of variance (ANOVA). It is important to note that in the designed experiment, the degree of error freedom is 0. To deal with this problem, the factors that have the lowest mean squares are considered errors. In the above experiment, factors p c and popsize with mean squares of 0.92 and 0.70, respectively, are of the least mean squares and so have the least effect on the response variable. erefore, these two factors are considered errors. Analysis of the variance of the S/N fraction is given in Table 12. erefore, the most effective factors are p m and ε with the percentage of effect of 86.92% and 7.81%, respectively.

Adjusting the Parameters of the MOPSA.
According to the description of the SA, three factors should be adjusted: initial temperature (temp 0 ), temperature reduction coefficient (μ), and the coefficient of the number of iterations per temperature (φ). Considering two levels for each factor, orthogonal array L 4 is suitable and its factors and levels are depicted in Table 13. Also, the combinations of different levels of factors in each experiment of design L 4 of the MOPSA are shown in Table 14. MOPSA and all experiment designs are programmed in C#.Net and run on a computer with GHz Intel ® Core ™ and 4 GB ram. Optimal levels of factors are as follows: temp 0 � 100, μ � 0.7, and φ � 0.3.
An analysis of variance is performed to find the effect of each factor. Also, to prevent the error degree of freedom from zeroing, we consider the factor with the lowest mean squares as the error. e results indicate that the factor μ with the mean square 0.03 has the lowest effect on the response variable. Analysis of variance of S/N for the factors of the MOPSA algorithm is shown in Table 15 in which factor φ with the percentage of effect of 66.62% has the highest effect on the response variable. As well, temp 0 is the second effective factor with a percentage of effect of 21.89.

Computational Evaluation.
In order to evaluate the performance of the MOMILP model and the proposed algorithms, two sets of problems including small-scale         problems (n � 4, 5, 6; m � 2, 3) and large-scale problems (n � 10, 20; m � 5, 10) have been generated [26]. Considering one iteration for each of the small-scale problems and five iterations for each of the large-scale problems, we will have a total of 26 problems. e rest of the parameters are considered exactly as in the Data Generation Section. Considering the weights as θ 1 � θ 2 � 0.5, small-scale problems are solved using the primary GA and SA and the results will be compared to the optimal solutions obtained by the MOMILP model with the mentioned weights. MOMILP model is coded in Lingo 8 and C#.Net is used to code the primary algorithms GA and SA. e performance of the MOMILP model and primary SA and GA are shown in Table 16. As seen in the table, although the MOMILP model is a suitable method to solve small-scale problems, by increasing the computational time of the MOMILP model while magnifying the problem, the GA method becomes more efficient with a deviation of 4.85% from the optimum. As well, although the SA algorithm shows a relatively large deviation from the optimal solution, its time saving compared to GA is clear.
To solve large-scale problems, MOPGA and MOPSA algorithms are used and each problem is solved five times using each of the algorithms. In addition, in order to compare algorithms, obtained results are converted to relative percentage difference (RPD) criteria and are shown in Table 17. erefore, MOPGA with an average RPD of 0.05% is more efficient and stable than MOPSA in solving largescale problems.
In addition, MOPGA and MOPSA are programmed in C#.Net programming language, and all algorithms including models MOMILP, GA, SA, MOPGA, and MOPSA are performed on a computer with CPU 3 GHz Intel ® Core ™ I and 4 GB RAM.
In order to show the efficiency of the proposed algorithm (MOPGA), we tested the above algorithm on benchmarks by Taillard [38] and compared it with other metaheuristic algorithms including SA [39], GA [40], Hybrid GA [40], GA [41], Hybrid GA [42], Ant Colony System (ACS) [43], Cuckoo Search (CS) [43], Cat Swarm Optimization (CSO) algorithm [44], Extended Genetic Algorithm Open Shop (EGA_OS) [14], and Bat Algorithm Open Shop (BA_OS) [22]. Table 18 shows the simulation results of the proposed algorithm and the comparable algorithms for different benchmarks. As mentioned in this table, the proposed algorithms were able to achieve the optimal solution in most benchmarks and solve the problem well.
As you can see in Table 19, the MOPSA algorithm has less execution time than the other two algorithms and was able to respond in less time. e MOPGA algorithm was able to achieve the optimal answer in less time than the CSO algorithm. e accuracy of the proposed algorithm in programming things well on machines is shown in Figure 3. is figure shows the Gantt chart for the 15 × 15 − 1 benchmark. In this case, a few free times left for the machine shows the accuracy of the proposed algorithm. Figure 4 shows the fitness diagram of the MOPGA for the 15 × 15 − 1 benchmark. In this diagram, the best fitness is getting close to 900 while the walk number crosses 900 walks. Figure 5 shows the dispersion diagram for the 15 × 15 − 1 benchmark. Population dispersion is always maintained in the MOPGA. As you can see in the above algorithm, premature convergence of chromosomes is prevented.

Conclusion
In this paper, a multiobjective mixed-integer linear programming model is adapted to solve biobjective OSSP considering unavailable times for machines. Also, "maximum completion time" and "total delays" are considered in optimization criteria simultaneously. e efficiency of the MOMILP model is investigated through a number of smallscale problems and compared with the efficiency and execution time of GA and SA. Although the results indicate the high ability of the MOMILP model to solve small-scale problems, the analyzes performed on the model highlight the need to develop other methods such as heuristic and metaheuristic methods to solve large-scale problems. erefore, MOPGA and MOPSA algorithms were introduced for multiobjective problems with unknown weights for objectives. Also, the reliability of multiobjective algorithms is evaluated by designing Taguchi experiments. Results show that the MOPGA algorithm is more efficient than the MOPSA algorithm in dealing with large-scale problems. In the following, some areas of development and expanding the present study are mentioned: (i) Developing other solution methods, including new heuristic and metaheuristic methods to solve largescale problems and comparison with the algorithms proposed in this paper can be a good idea for making the model more practical. (ii) Considering more diverse optimization criteria according to the needs of today's industry can also provide the basis for future studies. (iii) In this paper, optimizing factors of the algorithms in discrete space are explored. erefore, another area of development of the present paper can be the optimization of factors in the continuous space. (iv) A sequence-dependent preparation and separation times can be considered and then a mathematical model can be developed. (v) e number of vehicles can be used as a constraint on the problem. (vi) Considering variable speeds for machines seems practical. 
at is, the processing speed of the 14 Complexity machines decreases after performing a certain number of jobs or a certain period of time and then increases again after performing the maintenance process.

Data Availability
We used data from ref [26] and discussed the same in Section 4.2 of the manuscript.

Conflicts of Interest
e authors declare that they have no conflicts of interest.