A Two-Phase Pattern Generation and Production Planning Procedure for the Stochastic Skiving Process

The stochastic skiving stock problem (SSP), a relatively new combinatorial optimization problem, is considered in this paper. The conventional SSP seeks to determine the optimum structure that skives small pieces of different sizes side by side to form as many large items (products) as possible that meet a desired width. This study addresses a multiproduct case of the SSP under uncertain demand and waste rate, including products of different widths. This stochastic version of the SSP considers a random demand for each product and a random waste rate during production. A two-stage stochastic programming approach with a recourse action is implemented to study this stochastic NP-hard problem on a large scale. Furthermore, the problem is solved in two phases. In the first phase, the dragonfly algorithm constructs minimal patterns that serve as an input for the next phase. The second phase performs sample-average approximation, solving the stochastic production problem. Results indicate that the two-phase heuristic approach is highly efficient in terms of computational run time and provides robust solutions with an optimality gap of 0.3% for the worst-case scenario. In addition, we compare the performance of the dragonfly algorithm (DA) with particle swarm optimization (PSO) for pattern generation. Benchmarks indicate that the DA produces more robust minimal pattern sets as the tightness of the problem increases.


Introduction
The skiving stock problem (SSP) has been described by Zak [1] as the companion piece to the cutting stock problem (CSP), since they have similar inputs and solution approaches. Skiving is a relatively new technology. It involves joining, binding, stitching, or sealing together small pieces (auxiliary rolls) to form large items (products) that meet a minimum specified width. Overall, it aims to obtain as many large items as possible [1]. In the paper industry, several narrow rolls are joined to construct wider rolls [1]. In the manufacturing of toothed belts, the small rectangular pieces remaining after cutting are stitched together to form large rectangles [2]. Other skiving applications include pipe manufacturing, firefighting system design [3], and spectrum aggregation in cognitive radio networks [4].
The skiving process is widely applicable, in particular, in industries with high raw material residues [2]. In this case, a set of optimal pattern combinations is generated, maximizing production with a limited availability of smaller items. The skiving process is an inherently waste-minimizing procedure, mainly modeled as a mixed-integer problem (MIP). Because of the NP-hard structure of the MIP, enumerating all feasible patterns is difficult, specifically for large-scale problems. Pattern-based models [1], arc flow models [5], and assignment models [4] are some of the solution approaches presented in the relevant literature. Methodologically, owing to the MIP structure of its mathematical formulation [1], the synthesis of column generation (CG) and branch and bound (BB) is fundamental. Due to the arduous nature of obtaining an optimal solution for large problems and the dimensional complexities, most of the methodologies proposed for the SSP employ heuristics [6]. These heuristics can essentially aid in ensuring integer solutions after solving the linear relaxation [6], and the construction of minimal pattern sets provides input to the mathematical model [7]. The performance of metaheuristic optimization methods on the SSP, compared to the CSP, remains under-researched, despite the fact that metaheuristics have been very promising on many similar problems, such as the CSP.
The vast majority of the SSP literature considers the deterministic SSP. These approaches stem from the assumption that the decision-maker has perfect information on all parameters, which leads to unrealistic results. Real-world problems involve large uncertainties in various parameters, such as product demand, resource availability, yields, set-up and processing times, and costs. In deterministic planning, where only expected values are considered, it is not possible to make an appropriate adaptive decision to minimize the risk caused by the variability of these factors. Many researchers have investigated various structures for the stochastic CSP, where the demand [8, 9] or the yield [10] is a random variable. However, there is a noticeable lack of stochastic solution strategies for the SSP in the literature.
In this study, we expand upon the pure SSP in several dimensions by including (i) the set-up costs for each pattern change, (ii) the raw material costs of the requisite small items, and (iii) the required quantities of small items to be used. Furthermore, a heuristic method for solving the multiproduct SSP in a stochastic setting is implemented. For this stochastic version of the SSP, two-stage stochastic programming (SP) with recourse is employed [11-13], and a mathematical model for weakly inhomogeneous items under both stochastic demand and stochastic waste (scrap) rate is proposed. In the two-stage stochastic program, production decisions are made before the demand occurs, as opposed to the "wait-and-see" approach, where decisions are made after the values of the random variables are revealed [8, 9, 11, 14]. We decide on the production amounts, the skiving patterns, and the number of replications of each skiving pattern before any scenario occurs, as these are scenario-independent decision variables. In the second stage, the scenario-dependent decision variables are the underproduction and overproduction quantities. These variables represent the measures taken under the possible scenarios. Furthermore, a two-phase procedure is applied to solve the problem, where the first phase produces the minimal skiving pattern set and the second aims at minimizing production costs. This two-phase procedure continues recursively until the target production quantity is reached. In the first phase, we implement the dragonfly algorithm (DA) [15] to generate the skiving patterns. The first phase's output serves as the second phase's input, where we implement a range of scenarios combining the random variables. The sample-average approximation (SAA) method for the two-stage stochastic programming model is used to obtain a solution to the SSP [13, 16].
The terms "two-phase" and "two-stage" are not interchangeable. The overall solution process is referred to as "two-phase." The first phase refers to the DA that generates the skiving patterns. The second phase refers to the two-stage stochastic programming model with a recourse action that minimizes the expected total production cost. This study is a single-objective analysis that minimizes the total production cost. However, it also aims to maintain an acceptable trim loss level as a DA goal in the first phase.
The paper is structured as follows: the next section reviews the relevant literature, and Section 3 outlines the stochastic SSP. The methodology is presented in Section 4, followed by an illustrative example in Section 5. Numerical experiments and discussions, together with the computational complexity discussions, are provided in Section 6. Finally, in Section 8, the conclusions and future work are presented.

Literature Review
The SSP was initially proposed by Johnson et al. [17] as an incorporated part of the CSP. They unified the CSP and the SSP into a single problem called the cutting and skiving stock problem (CSSP). The CSSP modeled the cutting and skiving of large products in a two-step framework. A pattern-based mathematical formulation and commercially available software (MAJIQTRIM) were proposed, using heuristics and linear programming to solve the CSSP. It was only when Zak [1] carried out a theoretical analysis comparing the SSP and CSP models that the SSP was recognized as a problem in its own right. According to Zak [1], the SSP shares some input data similarities with the CSP in terms of item widths, consumer demand, and scalar knapsack capacity. However, the SSP and the CSP have different pattern matrices due to their different structures: the CSP is structured as a set-packing problem, while the SSP is structured as a set-covering problem [18]. Following these findings, Zak [1] proved that the SSP is not the dual form of the CSP and launched the SSP as a standalone challenge in combinatorial optimization [1, 19]. Martinovic and Scheithauer [20-22] stated that the SSP is structurally closer to the dual bin packing problem (DBPP), also referred to as a particular type of the bin covering problem (BCP). However, the SSP differs from the DBPP in how it is expressed and solved [1]. Martinovic and Scheithauer [20] pointed out these differences as the level of heterogeneity of item sizes and their available quantities, based on the study of Wäscher et al. [18]. According to this study [18], the SSP is associated with items of low heterogeneity, whereas the DBPP contains very heterogeneous items. Furthermore, the DBPP formulation regards the availability of each small item type as one [23]. Zak [1] extended this problem by incorporating higher availability values for the small items. Ultimately, item-oriented models and heuristics are used in the mathematical formulations and solution approaches of the DBPP.
The SSP is challenged by finding integer solutions for large problems, as is the CSP. Consequently, the most common way is to obtain a linear relaxation and then discretize the solution. Arbib et al. [2] used column generation (CG) [24] and a branch-and-bound (BB) algorithm to solve the CSSP with cutting loss minimization in the production of transmission belts. They extended the single-period CSSP model proposed in [2] to a multiperiod problem in which a BB algorithm solves a small-sized CSSP [25]. In a similar study, Zak [1] presented CG [26] for the linear relaxation of the problem and a BB algorithm to obtain integer solutions for the SSP. Meanwhile, Gilmore and Gomory's [24, 26] pattern-based model and column generation for the large-scale CSP are also convenient for the SSP [1]. Furthermore, the pure SSP can be categorized as an output-maximization problem based on Wäscher et al. [18].
In addition, the 1D-CSP with a skiving option is studied by Ágoston [3] for a fire-fighting system, where single-sized pipes are cut into smaller pipes, and the residual parts can be joined using the skiving technique to produce extended pipes. The process allows only one welding operation per pipe for safety reasons. Ágoston also transformed the CSP into a mixed-integer linear programming (MILP) model, which includes sequential cutting and welding patterns, and proposed a three-stage algorithm that minimizes inventory and the cost of different patterns. Karaca et al. [27] proposed two solution methods for the biobjective SSP, where the first objective is to minimize trim loss and the second is to minimize the number of welds in a product for quality and sustainability reasons [28]. They proposed CG and BB algorithms as exact solvers and, as a heuristic approach, the dragonfly algorithm (DA) integrated with a constructive heuristic. Finally, they presented comparative results of these two methods regarding solution quality and computational complexity. Karaca et al. proposed the DA for the pattern generation procedure, stating that, unlike for the CSP, metaheuristics are under-utilized in the SSP realm. Well-known metaheuristics in this area are Tabu search [29], simulated annealing (SA) [30-32], ant colony optimization (ACO) [33] and its variants or hybridizations [34], the genetic algorithm [35-37], the genetic symbiotic algorithm [38], evolutionary programming (EP) [39], and hybrid chemical reaction optimization (CRO) [40].
The studies mentioned above involve numerical implementations and real-world applications of the SSP. Nevertheless, the literature also offers theoretical analyses. Martinovic et al. are considered among the most important pioneers of theoretical analysis in the SSP literature [4, 5, 19-22, 41]. In addition to the pattern-based SSP model, Martinovic and Scheithauer [20] presented three graph-theory-based models for the SSP. They further investigated the continuous relaxations of these models to prove their equivalence with the pattern-based model. In a later study, Martinovic and Scheithauer [21] formalized the gap between the continuous relaxation value and the optimal objective function value. They also proposed a modified version of the best-fit algorithm to improve the upper bound of the optimality gap for the divisible case. Moreover, they investigated the proper relaxation concept using the proper pattern set, which gives tighter bounds than the continuous relaxation [21]. They analyzed the integer round-down property (IRDP), the modified integer round-down property (MIRDP), and the non-integer round-down property (non-IRDP) in discretizing the obtained solutions [21, 22]. Furthermore, Martinovic et al. [5] improved the standard arc flow model previously presented in [20] by incorporating reversed loss arcs and minimizing the number of arcs, which dramatically reduced the execution time. They presented a new theoretical approach based on hypergraph matching to develop a relaxation by evaluating the proper gap for skiving stock instances. They benefited from polyhedral theory to characterize IRDP instances for the SSP [19].
In parallel to the theoretical analyses, Martinovic et al. [4] also considered the problem of spectrum aggregation for cognitive radio as a real-world application of the SSP. This problem concerns the allocation of the available radio network spectrum, in which the primary user allocates predetermined portions of a frequency band. The existing bandwidths, or spectrum holes, were too limited to support the secondary users' bandwidth needs. They analyzed aggregating free spectrum holes to supply sufficient bandwidth to secondary users under hardware limitations [4]. They compared the standard SSP model based on Zak's formulation [1], solved by the column generation method, to an arc flow model and an assignment model. Computations showed that the assignment model yielded a significant reduction in computational complexity.
The stochastic SSP literature is minimal, and it is possible to reuse solution methods previously employed for the CSP. Therefore, we also elaborate on the stochastic CSP literature in this section. The majority of the methods implement problem-based heuristics and various stochastic programming approaches. CG is the most commonly applied method to retrieve an initial, noninteger solution [8, 42, 43]. Alem et al. [8] implemented CG for a two-stage stochastic programming model of the CSP where the demand was random. Jin et al. [43] implemented a two-stage stochastic integer program for the CSP that makes inventory replenishment decisions in the first stage and cutting decisions in the second stage. Moreover, CG is used for the LP relaxation of the cutting stock problem, and a residual heuristic is used to obtain integer solutions. Another solution methodology for the stochastic CSP, by Demirci et al. [42], uses CG and the L-shaped algorithm. Chauhan et al.'s [44] CG and BB approach was accompanied by both a fast pricing heuristic and a marginal cost heuristic in a stochastic problem where the demand is random. As an alternative to CG, Beraldi et al. [9] suggested a two-stage stochastic programming model with Lagrangian decomposition and BB to decompose the problem into subproblems. These subproblems are fed into a proposed heuristic in the second stage. Moreover, Sculli [45] considered defects as random variables due to the winding process in the CSP. Alem and Morabito [46] employed stochastic demand and set-up times for cutting patterns in furniture production. Zanjani et al. [10] presented a two-stage stochastic linear programming (LP) approach for the CSP where the yields are random variables with discrete probability distributions. They used the sample-average approximation (SAA) scheme to approximate the problem and avoid the high computational time caused by the numerous scenarios in stochastic programming.
This study extends Zak's standard pattern-based SSP [1] by including production, set-up, and raw material costs. Having the quantity of each raw material as a decision variable also converts the model into an assortment problem. Moreover, the multiproduct version is adapted to the standard pattern-based SSP model [1, 4]. The main contribution of our study is to handle different sources of uncertainty: the product demand and the waste rate. First, the DA produces the skiving patterns. The later stage deals with the uncertainty in the SSP through a two-stage stochastic programming model with a recourse formulation [11]. Furthermore, we implement an SAA approach [13] to cope with the large number of scenarios, a technique previously applied to extensive problems such as supply chain network-based decision-making [47, 48]. Finally, a recursive solution procedure between the DA and the SAA is developed for the large-sized stochastic SSP under uncertain demand and waste rate.

The Stochastic Skiving Stock Problem under Uncertain Demand and Waste Rate

The SSP Definition and Mathematical Formulation.
Basic definitions and the formulation of the SSP [4, 19-21] are presented as follows: E ≔ (m, l, L, b) is the SSP instance, where m is the number of small item types; l is an m-dimensional vector representing the width of each type, composed of the l_i with i ∈ I and |I| = m; and L is the large item width [4, 19-21].
Large items with a minimum width of L should be produced during the skiving process. Moreover, b is an m-dimensional vector consisting of the b_i, which represent the availability of each small item type [4, 19-21]. For the sake of clarity and standardization of terminology, small and large items will be referred to as items and products throughout the rest of the study. All input data are positive integers (Z_+) and satisfy L > l_1 > ... > l_m as a sorted set. Every feasible set of items that constructs a product with a minimum width of L is called a feasible pattern of E. Any feasible pattern can be described by a nonnegative vector a = (a_1, ..., a_m)^T ∈ Z_+^m, where a_i ∈ Z_+ is the number of repetitions of the i-th item in the pattern. Finally, P(E) ≔ {a ∈ Z_+^m | l^T a ≥ L} represents the set of feasible patterns of E [4, 19-21].
A minimal pattern is one where the product width falls below the threshold value L if any element is dropped from the pattern. In plain terms, there is no feasible pattern ā ∈ P(E), ā ≠ a, such that ā ≤ a holds component-wise (ā_i ≤ a_i, ∀i ∈ I). A minimal pattern set (or set of minimal patterns) is denoted by P*(E) [4, 19-21]. Also, a pattern a_j ∈ P*(E) is an exact pattern if l^T a_j = L, so every exact pattern is a minimal pattern. x_j is the decision variable indicating the repetition of pattern j, where a_j = (a_1j, ..., a_mj)^T ∈ Z_+^m and j is the index of the pattern. To conclude, the objective function of the SSP is defined as follows [4, 19-21]:

z(E) = max Σ_j x_j subject to Σ_j a_ij x_j ≤ b_i ∀i ∈ I, and x_j ∈ Z_+ for all a_j ∈ P(E).

Furthermore, a minimal pattern a_j ∈ P*(E) that satisfies a_ij ≤ b_i, ∀i, j is called a minimal proper pattern. In a sense, a minimal proper pattern also meets the item availability constraints for a given production quantity of each product; otherwise, it is a nonproper minimal pattern [4, 19-21]. We extend the notion of propriety to pattern sets as well: a proper minimal pattern set, denoted P*_P(E), is one whose patterns can produce a predetermined production amount with the available items.
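The minimality condition admits a direct component-wise check. The following sketch tests it for a small instance (the item widths and threshold below are illustrative, not taken from the paper's experiments):

```python
def is_minimal(pattern, widths, L):
    """Return True if `pattern` (repetitions per item type) is a minimal
    pattern: its total width reaches L, but removing any single item
    drops the total below L."""
    total = sum(a * l for a, l in zip(pattern, widths))
    if total < L:
        return False  # not even a feasible pattern
    # minimal iff every used item type is indispensable
    return all(a == 0 or total - l < L for a, l in zip(pattern, widths))

# Item widths (6, 5, 3), product threshold L = 9:
assert is_minimal([0, 0, 3], [6, 5, 3], 9)       # 3+3+3 = 9, an exact pattern
assert not is_minimal([1, 0, 2], [6, 5, 3], 9)   # 6+3+3 = 12; a 3 can be dropped
```

An exact pattern passes trivially, since dropping any item necessarily falls below L.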
Furthermore, without loss of generality, E ≔ (m, l, L, b) is extended to include multiple products of various widths, defined as the multiproduct case E ≔ (m, K, l, L, b), where K represents the number of product types and k is the index of product type, k ∈ K ≔ {1, ..., K}. L is no longer a constant but a vector of widths. x_jk refers to the number of times pattern j is used to produce product k. Then, the formulation is extended as follows:
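For small instances, the minimal pattern sets the multiproduct case rests on can be enumerated by brute force; the sketch below (with illustrative widths, not the paper's data) builds one set per product width L_k:

```python
from itertools import product

def minimal_patterns(widths, L):
    """Enumerate all minimal patterns for one product width L by brute force
    (tiny instances only); each pattern is a tuple of item repetitions."""
    # A minimal pattern holds at most ceil(L / l_i) copies of item i:
    # one more copy could always be dropped while staying >= L.
    ranges = [range(-(-L // l) + 1) for l in widths]
    found = []
    for a in product(*ranges):
        total = sum(ai * li for ai, li in zip(a, widths))
        if total >= L and all(ai == 0 or total - li < L
                              for ai, li in zip(a, widths)):
            found.append(a)
    return found

# Multiproduct case: one minimal pattern set per product width L_k.
per_product = {L_k: minimal_patterns([6, 5, 3], L_k) for L_k in (9, 14)}
```

At realistic scale this enumeration is intractable, which is why the paper generates patterns heuristically with the DA instead.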

Standard Formulation of a Two-Stage Stochastic Programming (SP) Model.
There are several applications of two-stage stochastic programming in the literature, such as air freight hub location and flight route planning where the demand is random [49], supply chain planning in which the capacity of the facilities and the customer demand are random [48], and the sawmill production planning problem in which the production yield is random [10]. Below, we present the general framework.
Assume that ξ is a random vector that holds the realizations of each scenario. The decision variables whose values must be decided before observing the actual values of ξ are called the first-stage decision variables. An instance of such decision variables is denoted by the vector x. When a scenario occurs, it reveals complete information on the random variables in ξ, and their values become known. Any decision made after ξ is known is a second-stage decision variable, or recourse action, denoted by the vector y. It is important to emphasize that the recourse action depends on both the first-stage decision variables and the outcomes of the random variables. In other words, the two-stage stochastic programming model with recourse can respond to each possible outcome of the random variables through the second-stage decisions (the recourse action). The general formulation of a two-stage stochastic programming (SP) model with recourse is given as follows [11-13]:

min c^T x + E_ξ[Q(x, ξ)] subject to Ax = b, x ≥ 0,

where Q(x, ξ) = min{q^T y | Wy = h − Tx, y ≥ 0}. The random vector ξ is formed by the components q^T, h^T, and T; E_ξ denotes the mathematical expectation with respect to ξ; T is the technology matrix; h is the vector of right-hand-side values; W is the recourse matrix; and q^T is the penalty cost vector of the recourse decisions [11].
In the general formulation of the two-stage stochastic program in equation (3), each value combination of the random variables in ξ corresponds to a scenario s ∈ S ≔ {1, ..., S} with probability P_s. Therefore, equation (3) can be written in the following scenario form [11]:

min c^T x + Σ_{s∈S} P_s Q_s(x) subject to Ax = b, x ≥ 0,

where Q_s(·) refers to the value of Q(x, ξ) under scenario s.
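The scenario form can be made concrete with a toy single-product instance: the first-stage decision is a production amount x, and the recourse Q_s(x) prices over- and underproduction. All costs, probabilities, and demands below are invented for illustration:

```python
# Toy instance of min c^T x + sum_s P_s * Q_s(x): choose a production
# amount x before demand is known; recourse pays for over-/underproduction.
scenarios = [(0.3, 80), (0.5, 100), (0.2, 130)]   # (P_s, demand d_s), invented
c_prod, c_over, c_under = 4.0, 1.0, 9.0           # illustrative cost rates

def expected_cost(x):
    """First-stage cost plus the probability-weighted recourse costs Q_s(x)."""
    recourse = sum(p * (c_over * max(x - d, 0) + c_under * max(d - x, 0))
                   for p, d in scenarios)
    return c_prod * x + recourse

best_x = min(range(151), key=expected_cost)       # brute-force first stage
```

Brute-forcing x here stands in for solving the deterministic equivalent; in the paper, that role is played by the SMIP of Section 3.3.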

Mathematical Formulation of the Stochastic SSP under Demand and Waste-Rate Uncertainties.
In the nomenclature, we solely list the notation used for the stochastic SSP model. It should be noted that the different phases of the solution methodology also use their own notation, as described in Section 4. We transform the original SSP into a cost-minimization problem by including the production, raw material, set-up, overproduction, and underproduction costs. Furthermore, the demand D_k and the approved product rate Υ_k are the random variables for each product k = 1, ..., K. Υ_k, also known as the yield efficiency, is the rate of approved products after discarding the production waste. In this study, we assume the same random yield efficiency for all products, which reduces Υ_k to Υ and results in a total of K + 1 random variables. Each combination of the values of D_k and Υ corresponds to a scenario indexed by s ∈ S ≔ {1, ..., S} with probability P_s such that P_s ≥ 0 and Σ_{s∈S} P_s = 1. Finally, the scenario realizations of the random variables are represented as D(s) = (d_s1, ..., d_sK) and Υ(s) = υ_s. Both random variables constitute the basis of the mathematical model in the two-stage stochastic programming with recourse.
We partition the decision variables of the stochastic model into two parts: the first-stage and second-stage decision variables. The frequency of each pattern for each product, denoted by the matrix x composed of the x_jk, is a first-stage decision variable. Another first-stage decision variable is the raw material requirement, denoted by the vector r composed of the r_i. Finally, the last first-stage decision variable is the vector y, denoting the set-up change decisions y_j, ∀j. The values of a and Δ are obtained using the DA before solving the SP: the matrix a holds the values a_ij, and the matrix Δ holds the values δ_jk. These values are fed into the first stage of the SP as parameters. The overproduction and underproduction amounts (q+_sk, q−_sk) are the second-stage decision variables and depend on the scenario and the first-stage decisions. The first-stage objective function is then constructed as in equation (5). Moreover, for s ∈ S, Q_s, the Q(·) function under scenario s, is defined as in equation (6). Finally, by using equations (5) and (6) and the nomenclature, the deterministic equivalent of the stochastic SSP model with the instance E ≔ (m, K, l, L, b) is given through equations (8)-(17). The objective function in equation (8) (z_SP) minimizes both the first-stage and the second-stage costs. The first-stage costs are composed of the following components: (i) raw material cost: the cost of the items used to form the products, involving the costs of generating leftovers or purchasing raw materials; (ii) set-up cost: if a pattern is used in the skiving process, the set-up cost for that specific pattern is incurred; however, since recent skiving machines are fully automated, the differences between the set-up times of the patterns are assumed to be trivial, and we therefore assume that the set-up cost is fixed and does not change with a specific pattern; (iii) production cost: an item roll naturally has two dimensions, width and length. We consider each item's width as variable and its length as fixed, resulting in the 1D-SSP. Therefore, the production time and cost of the skiving process to form each product are assumed to be fixed.
The second-stage costs include the overproduction and underproduction costs, the lost sales or overtime production costs, and the costs of procuring products from other sellers to meet the demand. Constraint (9) multiplies the number of pattern repetitions by the number of items in each pattern to ensure that enough items are available for the amount produced. Constraint (10) balances the overproduction or underproduction amount at the second stage resulting from the first-stage production decisions and the scenarios. The set-up constraint in equation (11) states that if a pattern is used, the set-up cost for this pattern is incurred. Moreover, a pattern can be used up to M_k times for product k, where M_k is an upper bound on the number of replications of pattern j for product k. However, the waste rate might require extra production. Therefore, a value reasonably greater than the maximum demand is a valid choice for the M_k.
The set-up cost is incurred for every pattern change. As mentioned above, if multiple products can be produced using the same pattern, we can avoid excessive set-up costs by setting up this pattern once and producing all of these products simultaneously. This way, we do not need to change patterns and pay the set-up cost only once. For this purpose, we construct a pattern pool that unites all patterns used across all products. An auxiliary, dependent binary variable δ controls the assignment of patterns to products.
Constraints (12)-(15) are integrality and positivity constraints, and constraints (16) and (17) impose that y_j and δ_jk are binary variables, respectively. We determine the values of the decision variables a_ij in constraint (9) and δ_jk in constraint (11) in the first phase. Since the values of both variables are determined in the first phase, we feed them into the second phase as parameters. Therefore, the final model becomes a stochastic mixed-integer programming (SMIP) model. We will analyze the performance of the DA using a larger numerical example in the next section.
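As a quick numerical cross-check of the cost components above, the sketch below evaluates the first-stage cost for given decisions; the fixed per-pattern set-up cost follows the assumption stated earlier, and all cost figures and dimensions are invented:

```python
def first_stage_cost(x, a, item_cost, setup_cost, prod_cost):
    """First-stage cost for pattern frequencies x[j][k] and pattern matrix
    a[i][j] (items of type i in pattern j): raw material + set-up + production."""
    J, K, I = len(x), len(x[0]), len(a)
    reps = [sum(x[j][k] for k in range(K)) for j in range(J)]         # uses of pattern j
    r = [sum(a[i][j] * reps[j] for j in range(J)) for i in range(I)]  # items needed, r_i
    n_setups = sum(1 for j in range(J) if reps[j] > 0)                # y_j = 1 iff pattern j used
    return (sum(c * ri for c, ri in zip(item_cost, r))
            + setup_cost * n_setups
            + prod_cost * sum(reps))

# Two patterns, two products; invented costs:
cost = first_stage_cost(x=[[2, 0], [0, 3]], a=[[1, 0], [0, 2]],
                        item_cost=[5.0, 3.0], setup_cost=10.0, prod_cost=1.0)
```

Here the raw material requirement r_i is derived from the pattern frequencies exactly as constraint (9) prescribes, and each used pattern incurs the fixed set-up cost once.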

Deterministic Counterpart of the First-Stage Model.
We must check the propriety of P_P(E) throughout the procedure. In other words, whenever we make a production decision, we must check whether we can produce this amount with the patterns on hand. If not, we can opt for underproduction or search for new patterns to avoid losing sales.
For this checkpoint, we implement the deterministic counterpart of the SSP with a small modification. Without loss of generality, the deterministic counterpart of the SMIP model given in equations (8)-(17) can be rewritten using the first-stage variables and costs. This model minimizes the first-stage cost at a given production amount, as shown in equations (18)-(22). In this deterministic model, the indices, the first-stage variables, and the first-stage costs are almost the same as in the original stochastic problem given in equations (8)-(17). The main difference from the stochastic counterpart is the introduction of a production upper bound for each product, denoted by upperBound_k in equation (20). This upper bound represents a target production amount (usually dependent on the demand). The variable q−_k tracks the shortfall of each product, and its penalty cost Ψ_k is a large number that forces q−_k to zero. In other words, minimizing this objective function forces the total production of each product to equal the upper bound. This deterministic model is used iteratively in the algorithm to check whether the minimal pattern set generated by the DA can satisfy a given target production amount. If not, the algorithm recursively triggers the DA to generate additional minimal patterns until the target production amount (upper bound) is satisfied. Briefly, this process controls whether additional patterns are needed and, in this way, prepares an efficient pattern set for the next phase, which may improve the stochastic solutions.
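The propriety checkpoint can be sketched for a tiny single-product instance (exhaustive search, purely illustrative; at scale this role is played by the deterministic model in equations (18)-(22)):

```python
from itertools import product

def can_meet_target(patterns, b, target):
    """Can some integer combination of the patterns on hand produce at least
    `target` products without exceeding the item availabilities b?"""
    J = len(patterns)
    for x in product(range(target + 1), repeat=J):
        if sum(x) >= target and all(
                sum(patterns[j][i] * x[j] for j in range(J)) <= b[i]
                for i in range(len(b))):
            return True
    return False   # would trigger another DA pass for fresh patterns

# Patterns over item types of widths (6, 5, 3), availabilities b:
pats = [(1, 0, 1), (0, 0, 3)]
assert can_meet_target(pats, b=(2, 0, 6), target=3)       # e.g. x = (2, 1)
assert not can_meet_target(pats, b=(2, 0, 6), target=4)   # the 3s run out
```

A `False` result corresponds to the recursive step above: the DA is asked for additional minimal patterns before the target is re-checked.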

The Proposed Solution Methodology
In this section, we develop an iterative two-phase solution methodology for the stochastic SSP. An overview of the methodology is presented in Figure 1. In the first phase of the algorithm, we implement the dragonfly algorithm (DA) proposed in [15] to produce the minimal pattern set P*(E). This minimal pattern set P*(E) is fed into the second phase. In the second phase, the SMIP presented in Section 3.3 is solved by the SAA method [13]. This process provides candidate solutions and an upper bound for the production amount. We first present each module separately and then define the two-phase methodology that integrates these algorithms; because it incorporates the DA results, the proposed methodology produces a heuristic solution.
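The SAA step of the second phase can be sketched at toy scale: draw N demand samples, optimize the sampled average cost, and repeat to inspect the spread of candidate solutions. The uniform demand and the cost rates below are invented; the actual second phase solves the SMIP of Section 3.3:

```python
import random

def saa_first_stage(sample_size, replications=10, seed=0):
    """SAA sketch: for each replication, sample demand scenarios, then pick
    the first-stage production amount minimizing the sampled average cost."""
    rng = random.Random(seed)
    c_prod, c_over, c_under = 4.0, 1.0, 9.0   # invented cost rates
    candidates = []
    for _ in range(replications):
        demands = [rng.randint(80, 130) for _ in range(sample_size)]

        def avg_cost(x):
            return c_prod * x + sum(
                c_over * max(x - d, 0) + c_under * max(d - x, 0)
                for d in demands) / len(demands)

        candidates.append(min(range(80, 131), key=avg_cost))
    return candidates

cands = saa_first_stage(sample_size=50)
```

The spread of the replicated candidates is what SAA uses to bound the optimality gap; larger sample sizes tighten it at higher computational cost.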

Dragonfly Algorithm Implementation Steps.
The dragonfly algorithm (DA) is a method based on swarm intelligence. It employs nonlinear Lévy flights in the search space. The literature reports applications to the travelling salesman problem (TSP) [50], optimal dynamic scheduling of tasks [51], path-planning optimization for mobile robots, optimization of all versions of the 0-1 knapsack problem [52], feature selection problems [53], graph coloring problems [54], and, with its multiobjective version, wind-solar-hydropower scheduling optimization [55]. It performs better than PSO and GA when discrete problems are considered [15]. In this study, the DA as modified by Karaca et al. [27] is used to generate the patterns for each product separately, following the steps below.

Preprocessing.
The width of the item types is used in this process.
Step 1: items no longer available are excluded from the set I.
Step 2: the number of copies of each item i necessary for the construction of a product k, n_ik, is calculated as n_ik = ⌈L_k / l_i⌉.
Step 3: the item set is extended by repeating each item (n_ik − 1) times to obtain the expanded and sorted item set I+, so that there are n_ik copies of each item. The composition of the updated item set is shown in Figure 2. The cumulative length cl_h is the accumulated length of the items from the first to the h-th position of I+.
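The preprocessing steps can be sketched as follows; the rule n_ik = ⌈L_k / l_i⌉ is inferred from the worked example at the end of this section, and the availabilities are illustrative:

```python
import itertools
import math

def preprocess(widths, avail, L_k):
    """Steps 1-3: drop unavailable item types, take n_ik = ceil(L_k / l_i)
    copies of each remaining item (assumed rule), and build the expanded,
    sorted set I+ together with its cumulative lengths cl_h."""
    usable = [l for l, b in zip(widths, avail) if b > 0]      # Step 1
    expanded = []
    for l in usable:
        expanded.extend([l] * math.ceil(L_k / l))             # Steps 2-3
    expanded.sort(reverse=True)
    cl = list(itertools.accumulate(expanded))                 # cl_h
    return expanded, cl

# The instance used later in this section: widths {6, 5, 3}, L_k = 9.
i_plus, cl = preprocess([6, 5, 3], [4, 4, 4], 9)              # I+ = [6, 6, 5, 5, 3, 3, 3]
```

With these counts, even the narrowest item alone can reach L_k, so every random-key ordering in the next stage decodes to a feasible pattern.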
In the following, the extended set I+ is introduced into the subsequent stages of the main dragonfly algorithm framework. Step 1: each dragonfly has dimension |I+|, and each dragonfly value is a random number uniformly distributed between 0 and 1. The generated matrix is denoted pos_(NoD, I+). Each row of pos_(NoD, I+) denotes a dragonfly, that is, a solution.
Step 2: for each dragonfly, the value of the objective function, the trim loss, is calculated. Let b be the index of the b-th dragonfly. The objective function of the b-th dragonfly is calculated in the following way: (1) sort the b-th row of the pos_(NoD, I+) matrix in decreasing order. (2) Record the index vector of the sorted values; this order is denoted h→. (3) Skive items in this order, accumulating their widths until the desired width for product k is reached at some position h′. (4) The trim loss is computed as Σ_{h=1..h′} l_[h] − L_k. Stated differently, the indexes of the dragonfly values are sorted in descending order, and items are skived in that order until the product width is reached. As an illustration, assume the starting set is {6, 5, 3}, the extended and sorted set is I+ = {6, 6, 5, 5, 3, 3, 3}, and the product width is L_k = 9. Suppose a dragonfly has the vector [0.35, 0.45, 0.21, 0.76, 0.87, 0.98, 0.07]. In this case, the sorted index vector is h→ = [6, 5, 4, 2, 1, 3, 7], meaning that the sixth dimension of the dragonfly is the largest and the fifth dimension is the second largest. So cl_[1] = l_6 = 3, meaning that if only the first-order item is used, the width of the final product would be three units, which is less than L_k, so the skiving process continues with the second-order item. If the next item is skived, cl_[2] = l_[1] + l_[2] = l_6 + l_5 = 3 + 3 = 6 is obtained. Since this is still less than the desired product width, the skiving process continues with cl_[3] = l_[1] + l_[2] + l_[3] = l_6 + l_5 + l_4 = 3 + 3 + 5 = 11. The threshold product width L_k is satisfied by these three skived items, so the skiving process stops. Thus, the dragonfly [0.35, 0.45, 0.21, 0.76, 0.87, 0.98, 0.07] denotes a product consisting of two items of width 3 and one item of width 5, giving a product of width 11.
cl_[3] − L_k = 11 − 9 = 2 is the trim loss of this product. An essential implementation detail is to keep the positions of the dragonflies between 0 and 1 throughout the algorithm; this adjustment prevents extreme position values. The trim loss of each dragonfly is calculated, and the objective function value vector f→_NoD is obtained after decoding each dragonfly in the swarm. Each entry of the vector is the objective function value of a particular dragonfly.
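The decoding scheme described in Step 2 can be sketched as follows; the function and variable names are our own illustration (0-indexed positions), not the paper's MATLAB code, and the data mirror the worked example.

```python
import math

# Data mirroring the worked example: item widths, the extended/sorted
# set I+, and the product width threshold L_k.
items = [6, 5, 3]                               # distinct item widths
L_k = 9                                         # required product width
n_ik = [math.ceil(L_k / l) for l in items]      # copies of each item in I+
I_plus = sorted((l for l, n in zip(items, n_ik) for _ in range(n)),
                reverse=True)                   # extended, sorted item set

def decode(position, lengths, L):
    """Decode a random-key dragonfly into a pattern and its trim loss.

    Dimensions are visited in decreasing order of their key values;
    items are skived until the cumulative width reaches L.
    """
    order = sorted(range(len(position)), key=lambda h: position[h],
                   reverse=True)
    pattern, total = [], 0
    for h in order:
        pattern.append(lengths[h])
        total += lengths[h]
        if total >= L:
            break
    return pattern, total - L                   # trim loss = width - L_k

pos = [0.35, 0.45, 0.21, 0.76, 0.87, 0.98, 0.07]
pattern, trim = decode(pos, I_plus, L_k)        # -> [3, 3, 5], trim 2
```

Running the sketch on the example dragonfly reproduces the pattern of two width-3 items and one width-5 item with a trim loss of 2.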
Step 3: the food source and the enemy are updated as food = pos_(b*, :), where b* is the index of the dragonfly with the best (minimum) objective value, and the enemy is the position of the worst dragonfly. Step 4: suppose the current iteration is t. The neighborhood radius is updated as t/MaxIt. For each dragonfly, if there are no dragonflies in the neighborhood, a Lévy flight is used as described in [15]: Lévy(x) = 0.01 × (r_1 × A) / |r_2|^(1/B), where r_1 and r_2 are random values between 0 and 1, B is a parameter customized by the user, and A is calculated as A = [Γ(1 + B) × sin(πB/2) / (Γ((1 + B)/2) × B × 2^((B−1)/2))]^(1/B). If there is at least one dragonfly in the neighborhood, a weighted composite of the separation, alignment, cohesion, food-source approach, and enemy-escape vectors is computed, and the dragonfly's position change is ΔX_(t+1) = (sS_i + aA_i + cC_i + fF_i + eE_i) + wΔX_t with X_(t+1) = X_t + ΔX_(t+1); the formulae used to calculate these components separately are given in [15]. Step 5: dragonfly positions are held between 0 and 1, so if a dragonfly position exceeds these limits in any dimension, the position is adjusted to the closest border.
Step 6: steps 2-5 are iterated until the maximum number of iterations is reached. The result of the algorithm is the dragonfly with the minimum trim loss, expressed over the extended and sorted item set. Each of the best dragonflies with the same objective function value constitutes a pattern.

Postprocessing. This process decodes the pattern and calculates the amount of each item in it. Assume that the dragonfly given in the illustrative example is the best dragonfly and the outcome of the algorithm. This pattern uses two items of width 3 and one item of width 5.
It should be kept in mind that different extended dragonflies can result in the same pattern. For example, let another dragonfly be [0.15, 0.25, 0.11, 0.56, 0.77, 0.08, 0.89]. This dragonfly also uses two items of width 3 and one item of width 5.
The DA creates a minimal pattern pool to be used as a parameter in the SP model given in equations (8)-(17). The produced minimal pattern pool is not necessarily proper; the propriety of the pattern pool is determined as a result of the second phase, the sample-average approximation (SAA) algorithm.

The Sample-Average Approximation (SAA) Algorithm.
The SAA is implemented for large stochastic problems that cannot be solved easily by exact solution methods because of the large number of scenarios. The SAA approximates the objective function by using generated samples of scenarios. It generates N realizations ξ^1, ..., ξ^N (n = 1, 2, ..., N) of the random vector ξ. Then, the expectation E_ξ[Q(x, ξ)] is approximated by the sample-average function (1/N) Σ_{n=1..N} Q(x, ξ^n), and the original problem (8)-(17) is approximated by the SAA problem [10, 16, 56], where Q(x, ξ) is the objective function involving the decision variables x and the random variables ξ. According to Shapiro [57], the SAA provides good convergence and robust statistical inference, including error analysis, stopping rules, and validation, and it is easy to implement with commercial software. The steps of the SAA are given as follows [56]: Initialize: generate G (g = 1, 2, ..., G) random samples from the distribution of the random variable ξ, each independent and identically distributed with sample size N, where |N_g| = N. Also, generate a sufficiently large reference sample of size N′, where N′ ≫ N.
Step 1: solve the problem (30) to obtain the optimal objective function value v_g and the candidate solution x_g for each g.
Step 2: compute the average v̄_G (31), which is an unbiased estimator of the optimal objective function value v* of the original problem, and the variance σ̂²_v̄G (32) of the objective function values obtained in the first step.

Applied Computational Intelligence and Soft Computing
Step 3: solve the problem G times with sample size N′ (33), using each candidate solution x_g, to find v̂_g for each g and calculate σ̂²_v̂g (34).
Step 4: compute the estimated optimality gap gap_g(x_g) (35) for each candidate solution and the variance of the optimality gap σ̂²_gapg (36) to assess the quality of the candidate solutions.
Step 5: select as the approximate solution of the SAA problem (x_SAA) the x_g that provides the best v̂_g, i.e., x_SAA = argmin_g v̂_g, where v_SAA = min_{g=1,...,G} v̂_g.
According to Ahmed and Shapiro [58], a lower bound for v* is provided by v̄_G, and an upper bound for v* is obtained from v̂_g. The optimal objective function value of the SAA problem converges to that of the original problem in equation (3) with probability 1 as the sample size N goes to infinity (N → ∞) [59]. A larger sample size ensures a tighter approximation, but it also increases the computational complexity [57]. Therefore, using several small independent and identically distributed (i.i.d.) samples is more efficient than using one large sample; the SAA algorithm is based on this principle. The complexity increases exponentially with the sample size, especially for integer problems [60]. This increase eventually causes a trade-off between approximation quality and computational complexity.
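The statistical machinery of Steps 1-5 can be sketched on a toy problem; note that the second-stage cost below is a simple newsvendor-style recourse with illustrative parameters, not the paper's SMIP, and each SAA subproblem is solved by enumeration rather than by CPLEX.

```python
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    # Knuth's multiplicative method (the stdlib has no Poisson sampler).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def Q(x, d, c=5.0, b=12.0, h=3.0):
    """Toy second-stage cost: production + backorder + holding."""
    return c * x + b * max(d - x, 0) + h * max(x - d, 0)

def solve_subproblem(demands, x_max=60):
    """Exact SAA subproblem solution by enumeration over integer x."""
    best_x = min(range(x_max + 1),
                 key=lambda x: sum(Q(x, d) for d in demands))
    return best_x, sum(Q(best_x, d) for d in demands) / len(demands)

G, N, N_ref, lam = 10, 50, 1000, 20
candidates, v_vals = [], []
for g in range(G):                          # Step 1: solve G small samples
    sample = [poisson(lam) for _ in range(N)]
    x_g, v_g = solve_subproblem(sample)
    candidates.append(x_g)
    v_vals.append(v_g)

v_bar = statistics.mean(v_vals)             # Step 2: lower-bound estimator
var_vbar = statistics.variance(v_vals) / G  # ... and its variance
ref = [poisson(lam) for _ in range(N_ref)]  # large reference sample N'
v_hat = [sum(Q(x, d) for d in ref) / N_ref for x in candidates]  # Step 3
gaps = [vh - v_bar for vh in v_hat]         # Step 4: gap estimates
x_saa = candidates[min(range(G), key=lambda g: v_hat[g])]        # Step 5
```

The candidate with the smallest reference-sample value v̂_g is returned as x_SAA; increasing G, N, and N′ tightens the gap estimate at the cost of more subproblem solves.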

The Two-Phase Solution Methodology. As shown in Figure 1, we implement an iterative approach in two phases: (i) the DA and (ii) the SAA algorithm. In a nutshell, the DA initially uses the information on items and products to return a set of patterns, called a minimal pattern set. This pattern set is fed into the SAA to obtain the upper bounds of the production amounts, as shown in Figure 3. This upper bound also serves as a reference for the highest production amount. If the patterns found cannot satisfy this upper bound due to the availability constraints, then the pattern set cannot be deemed proper. We emphasize that if the predetermined production levels can be satisfied by using the generated minimal patterns, this minimal pattern set is called a proper set. The DA is rerun to produce new patterns until a proper minimal pattern set is obtained. These new patterns are, again, fed into the SAA. This recursion continues until convergence, which we define as no further change in two aspects: (i) the minimal pattern set and (ii) the upper bound of production.
The pseudocode of the methodology is given in Algorithm 1. Below, we define the parameters and variables used in the pseudocode:

z_SP: the objective function value of the stochastic model given through equations (8)-(17)
x^g_jk: the candidate solution x_jk (the amount of pattern j in product k) of sample g, obtained by solving for z_SP, g = 1, ..., G
prodLevel_k(g): the total production amount for product k ∈ K obtained by Σ_{j∈J} x^g_jk for sample g, g = 1, ..., G
maxProdLevels_k: the maximum total production for product k ∈ K among all samples, max_{g=1,...,G} prodLevel_k(g) = max_{g=1,...,G} Σ_{j∈J} x^g_jk
stepsize_k: the expansion amount in the search space for product k ∈ K
upperBound_k: the extended maximum production amount for product k ∈ K, calculated by summing stepsize_k and maxProdLevels_k
z_det: the objective function of the deterministic counterpart model, equations (18)-(22)
output_k: the total production for product k ∈ K obtained through the optimal solution Σ_{j∈J} x*_jk of z_det, equations (18)-(22)

We explain the detailed steps of the proposed approach as follows.
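Before detailing the steps, the overall recursion of Algorithm 1 can be sketched at a high level; run_da, run_saa_relaxed, run_saa, and is_proper are hypothetical hooks standing in for the DA, the relaxed and original SAA solves, and the deterministic propriety check, and the toy stand-ins at the bottom exist only so the skeleton runs end to end.

```python
def two_phase(products, stepsize, run_da, run_saa_relaxed, run_saa,
              is_proper, max_outer=20):
    patterns = run_da(set())                   # initial pattern set P*(E)
    prod = run_saa_relaxed(patterns)           # reference production amounts
    upper = {k: prod[k] + stepsize[k] for k in products}
    prev = None
    for _ in range(max_outer):
        # Inner loop: regenerate patterns until the upper bound becomes
        # producible or no new pattern can be formed.
        while not is_proper(patterns, upper):
            new = run_da(patterns)             # exclude known patterns
            if not new:
                break                          # pattern pool exhausted
            patterns |= new
        prod = run_saa(patterns)               # original problem with (9)
        new_upper = {k: prod[k] + stepsize[k] for k in products}
        if (prod, new_upper) == prev:          # convergence reached
            break
        prev = (prod, new_upper)
        upper = new_upper
    return patterns, prod

# Toy stand-ins so the skeleton runs end to end (purely illustrative).
def toy_da(known):
    for name in ("p1", "p2", "p3"):
        if name not in known:
            return {name}
    return set()

pats, prod = two_phase(
    products=["A"], stepsize={"A": 3},
    run_da=toy_da,
    run_saa_relaxed=lambda p: {"A": 10},
    run_saa=lambda p: {"A": 10},
    is_proper=lambda p, u: len(p) >= 2)
```

With the toy hooks, one inner regeneration makes the pattern set proper, and the outer loop stops as soon as the production amounts and upper bounds repeat between two iterations.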

Generation of the Initial Minimal Pattern Set.
In the proposed approach, the DA initially obtains a pattern set P*_k(E) for each product k ∈ K. It searches for the patterns with the minimal trim loss, i.e., the extra width of the product that exceeds the width threshold L_k, for each product. A pattern is minimal for at least one product, and some patterns can be common to multiple products. Using one pattern for multiple products can decrease the set-up cost incurred at each pattern change during production. Therefore, we aggregate all patterns for all products into a minimal pattern set pool, P*(E) = ∪_{k∈K} P*_k(E). Additionally, we generate a binary pattern-product matrix Δ to tabulate the matches between patterns and products: δ_jk = 1 if pattern j is a minimal pattern for product k, and 0 otherwise. The outputs of the DA are P*(E) and Δ; both serve as input parameters for the second phase. For brevity, we will refer to a "minimal pattern set" simply as a "pattern set" from here on.


Obtaining the Initial Production Amounts.
As aforementioned, the mathematical model receives P*(E) and Δ as parameters. If we run the SAA algorithm with this pattern set, we reach the optimal solution for this pattern set only; a different pattern set could return a different optimal result. Therefore, the initial pattern set does not provide direct answers to the following questions: (1) does the pattern set involve the pattern combinations that produce the products at a globally optimal cost?

(2) How does the global optimal solution compare to the optimal solution of this pattern set? (3) Are the item availabilities sufficient to produce either optimal amount? The quality and the sufficiency of this initial pattern set therefore still need to be discovered. For this purpose, we first calculate a sufficiently high reference production amount for all products. As the iterations continue, we observe the changes in the production amounts and the growth of the pattern set. Based on these changes, the answers to the questions above become clearer throughout the iterations.
Recalling the definition, for a pattern set to be proper, the patterns in the set should have sufficient availability to produce the given target production amounts. A nonproper pattern set indicates that the patterns from the DA are insufficient and that more patterns should be generated to produce the target production amount. Therefore, it is only possible to comment on P*(E) being proper with respect to a target production amount.
To this end, the first recursion of the SAA (equations (8)-(17)) solves a relaxed model with no availability constraints (by ignoring the constraint given in equation (9)) to obtain initial reference production amounts for each sample, as presented in Figure 3. In fact, the constraint relaxation method is applied to obtain the lower bound of the expected global optimum of the cost function, and the results reflect the optimal production amounts given the pattern set on hand without the availability constraint [61]. We generate G samples, each with sample size N, to apply the SAA approach that solves the SMIP presented in Section 3.3. Each sample may result in a different solution (i.e., a varying optimal production amount). Consequently, we may have different production amounts (candidate solutions) for each sample, denoted by prodLevel_k(g). P*(E) being proper (i.e., identical to P*_p(E)) means that it constitutes a proper pattern set for all scenarios and samples. If P*(E) can produce the highest production amount (maxProdLevels_k), then it can produce all candidate solutions. Therefore, as the initial reference level, we calculate maxProdLevels_k = max_{g=1,...,G} prodLevel_k(g). Figure 4(a) displays such maximum production levels for two products.

Calculation of the Upper Bounds Using the Step Size.
The previous step calculates an optimal production amount for each product based on the pattern set on hand. Three possibilities exist regarding the quality of this solution: (i) the items cannot produce the optimal amount for this pattern set due to their availabilities, and new patterns may or may not improve the solution; (ii) the available items can produce the optimal amounts, but the solution could be improved if more patterns were available; or (iii) the available items can produce the optimal amounts, the solution is indeed optimal, and any pattern changes would not affect it. In each possibility, new patterns must be generated to observe which case the solution fits. We construct a trigger for new pattern generation to determine whether the skiving process requires more patterns than we have. This trigger entails increasing the maximum production amounts by a step size parameter. The step size is a preventative measure, especially when the availabilities are sufficient to produce the upper bounds. This iterative numerical method can be viewed as a stepping approach that converges to the expected global optimum by finding better solutions in the search space. Briefly, being able to produce the upper bounds indicates that more patterns could extend the search space and find a cheaper solution than the current one. Since the step size parameter is determined experimentally, it is better to run the algorithm several times with different step size parameters to avoid skipping over better solution points (see Figure 4(b) for a snapshot of the production amount and the upper bound and Figure 5 for changes between two consecutive iterations).
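The upper-bound trigger can be sketched in a few lines; the sample production levels and multipliers below are hypothetical, and we assume, as in the illustrative example later in the paper, that the step size is a multiple of the demand standard deviation and that fractional bounds are rounded down.

```python
import math

# Hypothetical candidate production levels prodLevel_k(g) for two
# products over G = 4 samples; step sizes are multiples of the demand
# standard deviation (sigma = sqrt(lambda) for Poisson demand).
prod_levels = {1: [10, 11, 10, 9], 2: [20, 22, 21, 21]}
sigma = {1: math.sqrt(10), 2: math.sqrt(20)}
multiplier = {1: 2, 2: 3}

# maxProdLevels_k: highest candidate amount among all samples.
max_prod = {k: max(levels) for k, levels in prod_levels.items()}
# upperBound_k = maxProdLevels_k + stepsize_k (floored here by assumption).
upper_bound = {k: math.floor(max_prod[k] + multiplier[k] * sigma[k])
               for k in max_prod}
```

If the pattern set can produce these extended bounds, the search space may still contain a cheaper solution, which is exactly what triggers the next propriety check.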

Checking for the Propriety of the Pattern Set (The Inner While Loop in Algorithm 1). In the next step, we include the item availability constraint in equation (9) to check whether the pattern set on hand can produce this upper bound. In other words, the propriety of P*(E) is checked for each upper bound by using the deterministic model equations (18)-(22), as presented in Section 3.4. The deterministic model itself is not used to solve the original stochastic problem, but it aids in deciding whether producing the upper bound is feasible and triggers the DA to generate new patterns.
Two possible outcomes exist: the items can or cannot produce the upper bound. The DA is rerun to produce more patterns if the available items cannot produce the upper bound. In Figure 5, the item availabilities cannot produce the upper bounds, and the availabilities lead to another solution.
While generating additional patterns, denoted by the set P′(E), the new patterns should not involve any unavailable items. Similarly, the DA should not reproduce any already-existing patterns. For this reason, we eliminate all information regarding unavailable items from the inputs and only present information related to items that are still available. This adjustment contributes to the generation of different pattern configurations. The DA produces the pattern set P′(E) for every product, and the new minimal pattern set is joined to the pool as P*(E) := P*(E) ∪ P′(E). Similarly, the pattern-product matrix Δ is updated by adding the new patterns. The pattern generation recursion continues until one of the following conditions is satisfied: (i) the DA has produced numerous patterns, but the production of the upper bound is infeasible; the total width of the remaining items is less than the product width, so the pattern set is already exhaustive. (ii) The most recent P*(E) can produce the upper bounds, and the minimal pattern set becomes proper. In this case, we move on to the next step.
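The pool update just described can be sketched as follows; representing a pattern as an order-free multiset of item widths and the function and variable names are our own illustration, not the paper's implementation.

```python
from collections import Counter

def update_pool(pool, delta, new_patterns, product_k, available):
    """Add new minimal patterns for product_k, skipping duplicates and
    patterns that use more of an item width than is available."""
    for pat in new_patterns:
        counts = Counter(pat)                         # width -> multiplicity
        if any(available.get(w, 0) < c for w, c in counts.items()):
            continue                                  # uses unavailable items
        key = frozenset(counts.items())               # order-free pattern id
        pool.add(key)                                 # no-op if already known
        delta[(key, product_k)] = 1                   # extend Delta
    return pool, delta

# [3,3,5] and [5,3,3] collapse to one pattern; [6,6] needs an
# unavailable width and is dropped.
pool, delta = update_pool(set(), {}, [[3, 3, 5], [5, 3, 3], [6, 6]],
                          product_k=1, available={3: 10, 5: 4})
```

Because patterns are keyed by their item counts rather than by the order in which items were skived, rerunning the DA cannot silently re-add an existing configuration to the pool.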
In either case, we pass on to the next stage: solving the original problem with the SAA. However, if the upper bound is infeasible, we run the SAA once to obtain the final results, knowing that we cannot generate any more patterns. We continue the recursions if P*(E) is proper.
In Figure 4(c), the items can produce neither the upper bounds nor the optimal amount. Therefore, the initial pattern set needs to be revised, and we rerun the DA until the upper bounds can be produced; Figure 4(d) shows such a case.

Solving the Original Problem with the SAA (The Outer While Loop in Algorithm 1). The SAA is run with the most recent pattern set on the original problem with the availability constraints given through equations (8)-(17), as mentioned in Section 4.3. This recursion analyzes whether the newly added patterns change the previous optimal solution and the upper bound. If the upper bounds differ from those of previous iterations, the new minimal pattern set enables different pattern-product configurations. Figure 4(e) shows the new solution with the most recent pattern set and the item availabilities. Based on this solution, Figure 4(f) finds the new upper bounds. The upper bounds differ from those found in Figure 4(b); hence, the recursion starts.

Recursion until Convergence. The recursion between the pattern generation (Step 4) and the SAA (Step 5) continues until (i) no further patterns can be obtained, or (ii) the production amounts and the upper bounds remain the same between two consecutive iterations, where the original problem (the problem with the availability constraint) is solved in each iteration. If there is no further change in these outputs, the pattern set is sufficient to produce the optimal amounts, and the optimal solution has converged: the solution is the same as in the previous iteration, so the addition of new patterns did not affect it, and we accept this solution as the optimal solution.
In Figure 4(g), we check whether the new upper bounds can be produced with the pattern set from the previous iteration. If not, the DA produces new patterns, as mentioned in Step 4. However, Figure 4(g) shows the augmented set that can produce the upper bounds. When P*(E) is proper (P*_p(E)), the SAA is rerun to find the optimal solution with the addition of the new patterns, as shown in Figure 4(h). The solution found in Figure 4(h) is the same as in the previous iteration (as shown in Figure 4(e)); the addition of new patterns has not changed the optimal solution. The production amounts and the upper bounds have converged. Therefore, we stop the algorithm and accept the final solution, as shown in Figure 4(i).
With these recursions, the search space boundaries are dynamically updated through the iterations by using the step size (see Figure 4). P*_p(E) is obtained for each production amount of each sample by checking whether P*(E) can produce the upper bounds. Finally, we obtain a solution of the original stochastic problem [16] as the SAA output. The recursive structure is also visualized in Figure 3.

An Illustrative Example
In this section, we present a small instance E := (m, K, l, L, b) of the stochastic, multiproduct version of the SSP as an illustrative example. In this example, E := (3, 2, (500, 400, 300), (1000, 1500), (45, 65, 85)). The demand random variables are D_1 ∼ Pois(10) and D_2 ∼ Pois(20), respectively. In the illustrative example, the step size parameter of each product is expressed in terms of the standard deviation of the demand distribution, which helps us track changes in the results by varying the standard deviation multipliers. The standard deviations of the demand are σ_1 ≈ 3.16 and σ_2 ≈ 4.47. Let d_sk denote the value of the demand for product k ∈ K under scenario s ∈ S. Furthermore, assume that the random approved product rate is Υ ∼ N(0.95, 0.0001). C_Pr = 100 euros per product, C_O = 60 euros per pattern change, and C_R = [75, 60, 45] euros for the items i ∈ I. Moreover, let C_H = [320, 480] and C_B = [800, 1200] euros. Initially, we assume the step sizes for the products arbitrarily as 2σ_1 and 3σ_2 and later explain the impact of the step size on the solution quality. The following illustrates the methodology based on this example: Iteration 0: (1) the DA is run to generate the minimal pattern set P*(E). The matrices a and Δ are presented below. Each column of a represents a pattern, and each row represents an item; consequently, each cell a_ij indicates the amount of item i used in pattern j.
(2) The SAA is applied to the relaxed problem (without the item availability constraint given in equation (9)) with G = 10. The initial reference values for prodLevels_k(g) and maxProdLevels_k for all k are given as follows: (3) the upper bounds for each product (upperBound_k) are computed such that upperBound = (11 + 2σ_1, 22 + 3σ_2), making upperBound_1 = 17 and upperBound_2 = 33. Iteration 1: (4) the deterministic SSP model in equations (18)-(22) is used to check whether the minimal pattern set on hand satisfies the upper bound in equation (20). If not, the DA would have to generate additional minimal patterns until the upper bound can be produced, at which point the proper minimal pattern set is obtained. In our example, the output (solution) of the deterministic SSP indicates that P*_p(E) was obtained,
and by using P*(E), the total production amount for each product is obtained.
Recall from Section 4.2 that the SAA approach searches for the solution of G instances N_g, each having N scenarios, in the model presented in equations (8)-(17). The candidate solutions are evaluated by solving the SAA algorithm with each solution obtained from the N_g instances in the reference sample N′. In this illustrative example, the number of samples (G), the sample size of each sample (N), the reference sample size (N′), the candidate solutions, and the summary statistics of the objective function values are presented in Table 1. Monte Carlo sampling is used [16], in which the sampling takes place before the solution procedure.
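The scenario sampling for this instance can be sketched as follows; the Poisson sampler is Knuth's classical method (the Python standard library has none), and the seed and scenario count are arbitrary choices of ours.

```python
import math
import random

random.seed(1)

def poisson(lam):
    # Knuth's multiplicative method for Poisson sampling.
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        p *= random.random()
        k += 1
    return k - 1

def draw_scenario():
    d1 = poisson(10)                              # D1 ~ Pois(10)
    d2 = poisson(20)                              # D2 ~ Pois(20)
    rate = random.gauss(0.95, math.sqrt(0.0001))  # approved rate, sd = 0.01
    return d1, d2, rate

# Draw the scenarios up front, before the solution procedure.
scenarios = [draw_scenario() for _ in range(1000)]
mean_d1 = sum(s[0] for s in scenarios) / len(scenarios)
mean_d2 = sum(s[1] for s in scenarios) / len(scenarios)
```

Note that the variance 0.0001 corresponds to a standard deviation of 0.01 for the approved rate, and the demand standard deviations follow from the Poisson parameters (sqrt(10) ≈ 3.16, sqrt(20) ≈ 4.47).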
The DA is coded in MATLAB R2017b, and the SAA is implemented in GAMS 34.2.0. CPLEX is used as the solver in GAMS. The algorithm is executed on a computer with an Intel Core i5-3230M 2.60 GHz CPU and 4 GB RAM. The candidate solutions and the statistical computations of the objective function values are presented in Table 1. According to this table, the first, third, sixth, and ninth samples have the smallest expected total costs. Moreover, the convergence is more robust in these samples since they have smaller optimality gaps and smaller variances of the optimality gap than the other samples. Therefore, the results of the first, third, sixth, and ninth samples can be denoted as the best candidate solutions. According to these results, the production amounts are x_1 = 10 and x_2 = 20 or 21. G, N, and N′ can be increased to obtain a tighter convergence; nonetheless, the resulting increase in computational complexity should not be overlooked.
An important point is that the SAA method solves the SMIP problem optimally under the pattern set generated by the DA. Hence, different pattern sets may lead to different optimal points. Because the DA performance directly affects the solution quality, parameter tuning becomes essential for the DA. The parameters of the SAA are G = 30, N = 100, and N′ = 400. Similarly, the parameters of the DA are B = 0.05, α = 0.5, β = 0.03, c = 0.07, η = 0.05, ϵ = 0.05, and ω = 0.05. Finally, the step sizes are 0.6σ_1 and 0.75σ_2.

Numerical Example and Results
We run the algorithm three times with different Λ_b′ = (20, 50, 70) and MaxIt = (10, 20, 30) values to capture the behavior of the SAA for different P*_p(E). The statistics on the computational complexity of the SAA and its deterministic equivalents, together with the trial results, are presented in Table 2.
The objective function values, the optimality gaps, and the variances of the gaps are very close to each other. Because the second trial has the smallest expected total cost, we choose the P*_p(E) of the second trial. The candidate solution of the seventh sample, with the minimum v̂_g, is determined as the best solution in the second trial, i.e., x_SAA = argmin_g v̂_g, where v_SAA = min_{g=1,...,G} v̂_g. The obtained solution is presented in Table 3.
The used amounts of items are presented as the vector r. One of the contributions of our algorithm is obtaining the preferred P*_p(E) by using the minimization model equations (18)-(22), which controls the propriety of P*(E). In our 50-item example, the computational time given in Table 2 displays the time required to obtain the initial proper minimal pattern set. Even with the enlarged pattern sets, the computational time is around two to three minutes.
We conduct a further sensitivity analysis on the impact of the step size through different multipliers of σ. With all other parameters and random variable values remaining the same, comparisons of various statistics are presented in Table 4. The primary effect of the step size is on the proper minimal pattern set and the computational time. A larger step size yields a higher production upper bound, which in turn requires a more extensive pattern set, and the larger pattern set also requires a higher computational time. However, the most important performance measures are (i) the expected total cost, (ii) the variance of the total cost, (iii) the optimality gap, and (iv) the variance of the optimality gap. Table 4 shows the average mean and variance of the objective function values, the optimality gaps, and the average CPU time (in seconds) of 30 runs of the algorithm. For a fair comparison, all scenarios are kept fixed for every step size value. Table 4 visualizes the effect of the step size on the average objective function value and the CPU time. As can be seen from the table, even though the step size is increased more than tenfold, the computational time increases only logarithmically; however, the corresponding decrease in the objective function value is far less significant, with the improvement remaining under 0.1%. Even though this study is not a multiobjective analysis of the stochastic SSP, it also supervises the trim loss. While the main objective of the SAA is to minimize the total cost incurred throughout the production process, the DA also contributes to minimizing the trim loss. The variable δ_jk only allows minimal patterns to be used, so the SAA model does not allow any nonminimal patterns to be included in skiving even if the end product satisfies the given thresholds. A counterargument for this structure is the case of a very high
set-up cost. If the set-up cost is considerably high, the model may sacrifice the trim loss, and the longest pattern can be used to manufacture all products. This way, the model avoids paying expensive set-up costs when patterns change. In such cases, δ_jk = 1 for shorter products even though the pattern is not minimal. For example, if the set-up cost were 6000 instead of 60, then the patterns for the second product (with a width of 1300) could be used for the first product (with a width of 1200).

Performance Analysis for the DA. While the first phase of the algorithm uses a metaheuristic, the SAA in the second phase implements an MIP algorithm under various scenarios. Hence, the second phase solves the problem to optimality under the given parameters and scenarios. If the number of samples and scenarios is sufficient, the result converges to the optimal solution by the law of large numbers. Nonetheless, the parameters fed into the second phase are the results of the DA, which does not guarantee the optimal pattern set. Hence, in this section, we analyze the effect of the DA parameters on the objective function value. The literature recommends that α + β + c = 1; hence, the first nine rows of Table 5 analyze this case, and the last nine rows use α + β + c < 1 for slower convergence, avoiding jumps over the optimal solution. As can be seen from the table, the mean absolute difference rate between the highest and lowest values of the average objective function is approximately 0.3%. For this problem, the DA is robust against parameter changes and produces an abundance of patterns.
Further analysis of the run time of the algorithm is carried out by increasing the number of item types (Figure 6) and the number of product types (Figure 7). According to both graphics, the run time increases with the number of item types and the number of product types. Next, we increase the means of the demand random variables of both product types by several growth rates to investigate the relation between the run time and the demand quantity (Figure 8).
For further analysis, experiments at several demand levels were carried out to compare the performance of the particle swarm optimization (PSO) algorithm and the DA on the model given in equations (18)-(22) (Table 6). For each demand level, the PSO and the DA are run five times, and we record the minimum result among the five runs for each. According to the results, for the minimization of the total cost, the DA is superior to the PSO at every demand level in the experiments. Moreover, the CPU times of the DA are more stable than those of the PSO; especially at high demand levels, there is a large difference in run time. At high demand levels, the PSO generates a large number of unnecessary patterns because the generated patterns tend to be similar; the DA, on the other hand, generates patterns with different configurations because it has better exploration features.

Discussions
Zak [1] notes that the CSP is a set-covering problem and the SSP is a set-packing problem; their names, however, come from the two underlying technological processes, cutting and skiving. A cutting process corresponds to a packing process, and likewise skiving corresponds to covering. Zak also noted that the cutting (packing) process is well modeled by a knapsack problem, while the skiving (covering) process is well modeled by an unbounded knapsack problem (UKP). Moreover, according to Zak, the CSP can be converted to a set-covering problem, while the SSP can be converted to a set-packing problem. Because they have similar input matrices, they have a strong relationship with each other in terms of modeling and solution approaches [1].
Like the CSP, the SSP is an NP-hard problem [4,20], reducible to the unbounded knapsack problem or the dual bin-packing problem (DBPP) [23,62]. The overall problem can be formulated as an integer linear programming problem [1,20]. Dual bin packing can be described as a special case of the unbounded multiple knapsack problem (UMKP), a particular generalized assignment problem [63]. The UMKP describes each item by its weight and an integer value. There are m knapsacks, each with its own carrying capacity. The aim is to pack a subset of the items so that all packed items fit into the knapsacks without violating the capacity constraints and the total value of the packed items is as large as possible. In the uniform UMKP, the capacities of the bins are identical. The problem was first studied, and proved to be NP-hard, by Assmann et al. [23]. Furthermore, the DBPP, and hence the MKP, is strongly NP-hard even for m = 2 [63].
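To make the UMKP described above concrete, the following is a minimal brute-force sketch (our illustration, not part of the paper's method; all names are ours): each item is either left out or assigned to one of the m knapsacks, and the best feasible total value is kept.

```python
from itertools import product

def umkp_bruteforce(weights, values, capacities):
    """Exhaustively assign each item to one of the m knapsacks (or leave
    it out) and return the best total value respecting every capacity."""
    m, n = len(capacities), len(weights)
    best = 0
    # assignment[i] = 0 means item i is left out; k >= 1 means knapsack k-1
    for assignment in product(range(m + 1), repeat=n):
        loads = [0] * m
        value = 0
        feasible = True
        for i, k in enumerate(assignment):
            if k == 0:
                continue
            loads[k - 1] += weights[i]
            if loads[k - 1] > capacities[k - 1]:
                feasible = False
                break
            value += values[i]
        if feasible:
            best = max(best, value)
    return best
```

For example, with item weights (3, 4, 5), values (4, 5, 6), and two knapsacks of capacity 7, all three items fit (loads 7 and 5), so the best value is 15. Exhaustive search is only viable for tiny instances, which is consistent with the NP-hardness results cited above.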
Our model is a well-known multiconstrained integer linear programming problem (the deterministic equivalent of the stochastic integer program) built on the nonuniform DBPP, or the UMKP. The DBPP (UMKP) is NP-hard, and the optimal solution of the IP in our study is greater than or equal to the output maximization of the pure SSP (the maximum output for the DBPP); from another aspect, x*_sp → x^relaxation_ssp. The IP is also considered NP-hard [62]. As a result, as the number of bins increases, in other words, as the number of patterns, which are the decision variables of our IP model, grows, the complexity of our problem increases rapidly.
On the other hand, regarding the complexity of stochastic programming problems such as ours, Shapiro [57] states that computing approximate solutions, with sufficiently high accuracy, of linear two-stage stochastic programs with fixed recourse is NP-hard even if independent uniform distributions govern the random problem data.
Another discussion about combinatorial optimization problems concerns the solution methodology. While many methods exist in the literature for decision-making under uncertainty, the approach most comparable to the SP is robust optimization (RO). This study implements the SP due to its flexibility to employ multistage models and its ability to handle scenarios with recourse actions that are not traditionally a part of RO [64]. RO approaches are valuable in risk-sensitive and risk-critical systems, such as power-grid maintenance optimization to avoid blackouts [65] or the adoption of new surgical treatments [66]. In addition, RO becomes necessary in the absence of information on the probability distributions. However, its conservative planning toward worst-case scenarios can lead to extremely risk-averse decision-making in more risk-neutral systems [67]. Nonetheless, numerous authors propose a trade-off between the SP and the RO to balance the costs related to risks [68][69][70]. This study implements the SP model due to the availability of the probability distributions and the recourse actions. Moreover, a risk-averse decision in the SSP leads to higher inventory levels for unlikely extreme demand scenarios. Regardless of these assumptions, the stochastic SSP is also a suitable candidate for the analysis of different risk-based approaches depending on the criticality of the decisions.

Conclusions and Future Work
Skiving is essential in various industries, such as steel and textiles. Despite being a long-established process in industry, the SSP is still an emerging field in research. While several studies have dealt with the deterministic version of the SSP, the stochastic nature of the problem is still under investigation. This study has developed an iterative two-phase algorithm to solve the multiproduct stochastic SSP. We have adapted the DA to the problem to obtain an efficient set of proper patterns in the first phase; this set provides a feasible landscape for the second phase, in which the SAA solves the stochastic problem. The first phase offers the pattern set with minimized trim loss, whereas the second phase minimizes the total costs: while cost minimization is the primary objective in this problem, trim loss is secondarily minimized by the DA. Finally, the SAA finds the solution to the SMIP model through the pattern set obtained from the first phase and the best candidate solution, which gives the minimum or improved expected total cost for the stochastic problem. The case of pattern-dependent set-up costs poses a consideration for future work. Additional research directions for the stochastic SSP involve developing a random search procedure for the upper bounds of the production amounts and combining the two phases of the algorithm instead of expanding the search space with the stepping method.

Notation
δ_jk: A binary variable indicating whether pattern j can be used for product k
a_ij: The number of items of type i in pattern j

4.1.2. Initializing. The algorithm parameters are initialized. The change in position (Δpos) is initialized as a zero matrix, i.e., 0_{NoD,I}, where NoD is the number of dragonflies. The maximum iteration number (MaxIt) is specified.
Δpos_{b,*}(t) = αSep_b + βAln_b + γCoh_b + η(food − pos_{b,*}) + ϵ(enemy + pos_{b,*}) + ωΔpos_{b,*}(t − 1),  (27)

where α, β, γ, η, and ϵ are given, tuned coefficients and ω is the inertia rate; food and enemy are the dragonfly positions with the best and worst objective function values, respectively. Suppose that Λ_b is the set of dragonflies in the neighborhood of the b-th dragonfly and λ_b is the number of neighboring dragonflies. The separation factor (Sep_b) prevents the dragonflies from colliding, while the alignment (Aln_b) and cohesion (Coh_b) factors allow the dragonflies to traverse the search space with similar speeds and positions.
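Update (27) can be sketched compactly in Python as follows (our illustration; the coefficient values are arbitrary placeholders, and the formulas for Sep_b, Aln_b, and Coh_b follow Mirjalili's original dragonfly algorithm, since the paper does not restate them here):

```python
import numpy as np

def delta_pos_update(pos, dpos, food, enemy, neigh,
                     alpha=0.1, beta=0.1, gamma=0.7,
                     eta=1.0, eps=1.0, omega=0.9):
    """One application of equation (27) for every dragonfly.
    pos, dpos : (NoD, I) position and change-in-position matrices
    food, enemy : (I,) best and worst positions found so far
    neigh : list of index arrays, neigh[b] = neighbors of dragonfly b
    Coefficient values are placeholders, not tuned values from the paper."""
    new_dpos = np.empty_like(dpos)
    for b in range(pos.shape[0]):
        nb = neigh[b]
        if len(nb) > 0:
            sep = -np.sum(pos[b] - pos[nb], axis=0)   # separation Sep_b
            aln = dpos[nb].mean(axis=0)               # alignment Aln_b
            coh = pos[nb].mean(axis=0) - pos[b]       # cohesion Coh_b
        else:
            sep = aln = coh = np.zeros(pos.shape[1])
        new_dpos[b] = (alpha * sep + beta * aln + gamma * coh
                       + eta * (food - pos[b])        # food attraction
                       + eps * (enemy + pos[b])       # enemy distraction
                       + omega * dpos[b])             # inertia term
    return new_dpos
```

The position is then advanced as pos_{b,*}(t) = pos_{b,*}(t − 1) + Δpos_{b,*}(t).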

Step 5: The position is updated by means of the change in position (the velocity) such that pos_{b,*}(t) = pos_{b,*}(t − 1) + Δpos_{b,*}(t).

Figure 3: The recursion between the DA and the SAA.

Figure 4: (a) The initial solution is free of item availability restrictions and serves as an initial reference point. (b) The step size augments the search space. (c) The available items cannot produce the upper bound or the optimal point (greyed-out point); the item availability constraints are binding and lead to another solution (shown in black). (d) The DA finds new patterns and augments the pattern set to produce the upper bound; the available items can now produce the upper bounds, and this augmented pattern set is proper. (e) The SAA finds a new optimal solution under item availabilities. (f) The new upper bound is calculated and differs from that of the previous iteration; the solution may improve when new patterns are added, so we check whether this new upper bound triggers new pattern generation. (g) The existing pattern set cannot produce the most recent upper bound; hence, the DA is rerun until it can. (h) The SAA is rerun with all patterns found; the solution is the same as in the previous iteration, so the addition of new patterns did not affect the solution and the production amounts have converged. (i) Since convergence is reached, we accept this solution as the optimal solution.

Figure 5: The expansion of the search space at each iteration.

(5) The SAA is applied to the original problem (with the availability constraint (9)). ProdLevels, maxProdLevels, and upperBound are computed.

Figure 6: CPU time (sec) vs. number of item types.

Figure 7: CPU time (sec) vs. number of product types.

Figure 8: CPU time (sec) vs. growth rates for the demand means of the two product types.

Table 3: Frequency of patterns in x_SAA (rows: Product (k); columns: Pattern (j), over patterns x_2, x_12, x_18, x_19, x_27, x_40, x_43, x_44).

Table 4: Analysis of different step sizes.

Table 5: Performance analysis for the parameters of the DA.

Table 6: Comparison of the PSO and the DA for total cost minimization.