
New constructive algorithms for the two-dimensional guillotine-cutting problem are presented. The algorithms were produced from elemental algorithmic components using evolutionary computation. A subset of the components was selected from a previously existing constructive algorithm. The algorithms’ evolution and testing process used a set of 46 instances from the literature. The structure of three new algorithms is described, and the results are compared with those of an existing constructive algorithm for the problem. Several of the new algorithms are competitive with respect to a state-of-the-art constructive algorithm. A subset of novel instructions, which are responsible for the majority of the new algorithms’ good performances, has also been found.

Various industrial processes exist in which the raw material must be cut into smaller sections that must be assembled to produce the final product, as in the case of cutting plastics, glass, paper, and metals [

Often approached from a combinatorial optimization perspective, cutting problems represent an intellectual challenge because of the computational difficulty that arises when attempting to solve them [

The statement of the problem considers a rectangular plate of length L and width W and a set of m types of pieces p_{1}, p_{2},…, p_{m} of sizes (l_{i}, w_{i}) and profit c_{i} > 0 ∀ i, where each piece fits within the plate, that is, l_{i} ≤ L and w_{i} ≤ W, ∀ i.
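The data in this statement can be captured in a small structure; in the sketch below, the names `Piece` and `Instance` and their fields are illustrative assumptions, not the paper's notation, and `feasible` checks the stated condition that every piece fits within the plate.

```python
from dataclasses import dataclass

@dataclass
class Piece:
    length: int
    width: int
    profit: int

@dataclass
class Instance:
    plate_length: int
    plate_width: int
    pieces: list

    def feasible(self):
        # Every piece must satisfy l_i <= L and w_i <= W, as stated.
        return all(p.length <= self.plate_length and p.width <= self.plate_width
                   for p in self.pieces)

inst = Instance(10, 8, [Piece(4, 3, 12), Piece(6, 5, 30)])
```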

The problem has been studied not only for its impact on the optimization of raw material use in industrial processes but also for the computational difficulty that arises when attempting to solve it by exact methods. Three early approaches based on the dynamic programming formulation (DPF) drove this line of work [

For large instances of the problem, the heuristic combinations are more complex. Álvarez-Valdés et al. [

During the last four decades of research on the CW_TDC problem, researchers have performed computational studies of some specific hybridizations of the existing methods to obtain better performance in terms of both the computational time and the quality of the obtained solution. However, other potential hybridizations have not been explored. This paper proposes that the exploration of those methods can be accelerated with the support of evolutionary computation, specifically using the same ideas that support genetic programming [

The algorithms are generated by genetic programming (GP), a particular technique in the field of evolutionary computation. The latter is an area of knowledge that refers to the study of methods inspired by Darwinian evolution to solve problems in science and engineering [

Evolution of algorithms.

Algorithms for CW_TDC are gradually built by an evolutionary process. To perform this task, each algorithm is represented as a tree of instructions, where the intermediate nodes are high-level instructions and the leaf nodes are problem-specific functions entrusted with building the layout. The process is outlined in Figure
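The tree representation described above can be sketched minimally as follows; the `Node` class and the sample instruction names are assumptions for illustration, and the `size`/`height` helpers correspond to the tree measurements reported later for the evolved algorithms.

```python
class Node:
    """Algorithm tree node: intermediate nodes hold high-level instructions,
    leaves hold problem-specific functions (names here are illustrative)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def size(self):
        """Total number of nodes in the subtree."""
        return 1 + sum(child.size() for child in self.children)

    def height(self):
        """Height of the tree (bounded during evolution)."""
        if not self.children:
            return 1
        return 1 + max(child.height() for child in self.children)

tree = Node("While", [Node("Cut"), Node("IfThen", [Node("Sensor"), Node("Add-p")])])
```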

Several definitions are needed to implement this approach. The first is the definition of the sets of high-level instructions and of problem-specific functions for the CW_TDC problem. It is also necessary to have sets of adaptation and testing problem instances. Finally, a fitness function, which is responsible for guiding the evolutionary process, is required. The basic idea is to generate a new algorithm from a well-known, previously existing heuristic for a problem and from there, start an improvement process. In this sense, we consider CONS [ ], a heuristic that operates on rectangles R_{k} with dimensions L_{k} and W_{k}. A guillotine-type cut generates four new, smaller external rectangles

(a) Vertical and (b) horizontal cuts.

The piece chosen for assignment in a rectangle R_{k} is the one that generates the maximum estimated profit. This value is calculated as the profit of the piece plus the sum of the profits estimated for the external rectangles when assigning pieces in decreasing order of value; the estimator E_{1} computes these values by solving the knapsack problem [ ]. An estimator E_{2} is also defined, which, unlike E_{1}, inserts the first piece available as many times as it fits in the rectangle and applies the estimates based on the knapsack problem for spaces not occupied by the first piece.
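The source describes the estimators only at a high level; as an illustrative sketch, a knapsack-based profit estimate along one dimension of a rectangle might look like the following (the function name and the reduction to a one-dimensional 0/1 knapsack over piece lengths are assumptions, not the paper's exact formulation):

```python
def knapsack_profit_bound(pieces, capacity):
    """0/1 knapsack over piece lengths: an optimistic estimate of the profit
    obtainable along a segment of the given capacity; pieces are (length, profit)."""
    best = [0] * (capacity + 1)
    for length, profit in pieces:
        # Iterate capacities downward so each piece is used at most once.
        for c in range(capacity, length - 1, -1):
            best[c] = max(best[c], best[c - length] + profit)
    return best[capacity]
```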

In addition to E_{1} and E_{2}, we define two new procedures, namely, E_{3} and E_{4}, to calculate the estimated profit in the outer rectangles. E_{3} is a variant of E_{2} that assigns available pieces to the horizontal base of the rectangle, in decreasing order of length, and then applies E_{1} in the rectangles R^{1}, R^{2}, …, R^{q}, which are generated by considering a horizontal line drawn from the widest piece already assigned.

Rectangles for evaluating E_{3}: R^{1}, R^{2}, R^{3}, and R^{q}.

E_{4} is defined by locating the pieces sequentially in different rows of the rectangle R_{k}, as in the work of Coffman et al. [ ]. The internal losses of R_{k} (between the rows of R_{k} and between the sum of the lengths of the pieces assigned to a row and the length of R_{k}) are ignored (Figure

Geometry to obtain the upper bound E_{4} in rectangle R_{k}. (a) Completed and (b) in process.
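A minimal sketch of a row-based optimistic estimate in this spirit, assuming pieces are simply placed left to right and a new row is opened when a piece does not fit (function and variable names are illustrative):

```python
def row_profit_estimate(pieces, rect_length, rect_width):
    """Optimistic profit bound: place pieces sequentially into rows,
    ignoring the losses inside and between rows; pieces are (l, w, profit)."""
    profit, x, y, row_width = 0, 0, 0, 0
    for l, w, p in pieces:
        if x + l > rect_length:                    # piece does not fit in this row:
            y, x, row_width = y + row_width, 0, 0  # open a new row above it
        if x + l <= rect_length and y + w <= rect_width:
            profit += p                            # count the piece; internal losses ignored
            x += l
            row_width = max(row_width, w)
    return profit
```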

The algorithms to be constructed operate on three data structures. First, a list of pieces available (LPA) stores the pieces remaining at each stage. Second, a list of rectangles (LR) is maintained as the algorithm advances, holding the rectangles R_{k} still to be processed; at each step, there is one active rectangle being processed. At the beginning, the list contains only the original rectangle. Rectangles in which no available piece fits are counted as losses. Finally, a stack of blocks (SB) contains blocks of pieces constructed using vertical or horizontal joining operations, either between pieces or between blocks, according to the Wang heuristic [ ], to be placed in the active rectangle R_{k}.
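The three data structures can be sketched as follows; the class and method names are hypothetical, but the behavior mirrors the text: LR starts with the original plate, and rectangles in which no available piece fits are discarded as losses.

```python
class State:
    """Hypothetical container for the three structures named in the text."""
    def __init__(self, plate_length, plate_width, pieces):
        self.lpa = list(pieces)                  # LPA: available pieces, as (l, w, profit)
        self.lr = [(plate_length, plate_width)]  # LR: rectangles to process; starts with the plate
        self.sb = []                             # SB: stack of blocks (LIFO)

    def next_rectangle(self):
        """Pop rectangles until one can hold some available piece.
        Rectangles in which no available piece fits count as losses
        and are simply discarded."""
        while self.lr:
            L, W = self.lr.pop()
            if any(l <= L and w <= W for l, w, _ in self.lpa):
                return (L, W)
        return None

state = State(10, 8, [(4, 3, 5), (12, 2, 9)])
```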

The elemental components of the algorithms to be produced are translated into two sets of functions. The first set contains basic instructions found in most computer languages and is defined over integer parameters x_{1} and x_{2}, with the convention that a value greater than 0 corresponds to the “true” logical value and a value equal to 0 corresponds to “false.” All functions return true or false, and they are While (x_{1}, x_{2}), IfThen (x_{1}, x_{2}), Not (x_{1}), And (x_{1}, x_{2}), Equal (x_{1}, x_{2}), and Or (x_{1}, x_{2}).
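Under the stated integer-truthiness convention, this first function set can be sketched as ordinary combinators; arguments are modeled as zero-argument callables so that While can re-evaluate its condition and IfThen can skip its body (this thunk-based modeling is an implementation assumption):

```python
# Integer truthiness convention from the text: a value > 0 is "true", 0 is "false".
def While(cond, body):
    """Run body while cond is true; return the last body value (0 if never run)."""
    result = 0
    while cond() > 0:
        result = body()
    return result

def IfThen(cond, body):
    return body() if cond() > 0 else 0

def Not(a):
    return 0 if a() > 0 else 1

def And(a, b):
    return 1 if a() > 0 and b() > 0 else 0

def Or(a, b):
    return 1 if a() > 0 or b() > 0 else 0

def Equal(a, b):
    return 1 if a() == b() else 0

# Tiny demonstration: count a variable down to zero.
counter = {"n": 3}
def countdown():
    counter["n"] -= 1
    return counter["n"]

result = While(lambda: counter["n"], countdown)
```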

The second set contains specific functions for the CW_TDC problem that are sufficient to allow for the reconstruction of the reference algorithm. Figure sketches the operations on the lists: a rectangle R_{k} receiving a block and thereby giving rise to two new rectangles to be placed in LR, and functions selecting a piece from LPA to be combined with the blocks already in SB. The following variables are necessary:

A variable that stores the order in which the pieces are considered by the E_{i} procedures. The default value is

A variable that stores which E_{i} procedure is to be used. The default value is 1, indicating that E_{1} must be used.

A variable that stores the mechanism for selecting, from LR, the next rectangle R_{k} to be processed. The default value is

Function operations on the lists LPA, SB, and LR.

Then, the following specific functions are defined:

Add-p(): a function that selects a piece by applying E_{1} over the active rectangle R_{k}. The function returns the piece number.

Cut(): a function that cuts the active rectangle R_{k} and deletes it from LR. Two rectangles are generated, by either a vertical or a horizontal cut, and the alternative with the best E_{1} estimate is selected. The resulting rectangles are stored in LR. Then, a new active rectangle is selected and evaluated by E_{1} with the available pieces. The function always returns 1 (true).

E_{2}(): a function that acts as a flag to indicate that, in the next execution of the estimator, E_{2} must be used. It always returns 2 (true).

A function indicating that the rectangle R_{k} with the smallest area must be selected next. It returns 1 (false) or 2 (true) if there is a rectangle R_{k} remaining in LR.

Based on the set of functions and variables defined above, the reference algorithm CONS can be reconstructed as a tree of instructions using the default estimator E_{1}.

To provide greater variability for algorithm construction, new specific functions are generated based on the following four strategies:

Provide greater freedom to join a piece with a block in the SB, according to the horizontal or vertical construction method. With this strategy, the pieces can be assigned as they are selected, and they can also be joined, forming a block to be assigned later.

Establish the order in which the pieces must be inserted in the active rectangle to estimate the profits when using E_{1}, E_{2}, E_{3}, and E_{4}.

Establish a selection criterion for the next rectangle R_{k} to be processed.

Define sensors that deliver online information about the characteristics of the problem at any instant of the process.
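The first strategy's horizontal and vertical builds follow the usual dimension arithmetic of block joining; a sketch (the tuple layout and function names are assumptions):

```python
def hjoin(a, b):
    """Horizontal build: blocks placed side by side; lengths add, widths take the max.
    A block is a (length, width, profit) tuple."""
    return (a[0] + b[0], max(a[1], b[1]), a[2] + b[2])

def vjoin(a, b):
    """Vertical build: blocks stacked; widths add, lengths take the max."""
    return (max(a[0], b[0]), a[1] + b[1], a[2] + b[2])

def fits(block, rect):
    """True if the block fits inside the rectangle (no rotation)."""
    return block[0] <= rect[0] and block[1] <= rect[1]
```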

The new specific functions are as follows:

(x_{1}, x_{2}): a function that acts as a sensor to estimate the average number of times that the available pieces fit in the rectangle R_{k}. If this average is greater than 2, it performs x_{1}; otherwise, it performs x_{2}. It returns the value of the parameter that was executed.

(x_{1}, x_{2}): a function that acts as a sensor that correlates the dimensions of the available pieces that fit in R_{k}. If they are sufficiently correlated, x_{1} is performed; otherwise, x_{2} is performed. The correlation index ranges from 0 to 1, with 1 indicating that the variables are completely correlated; it is calculated as the covariance divided by the product of the standard deviations. The function returns the value of the executed parameter.

(x_{1}, x_{2}): a function that acts as a sensor to estimate the size of the available pieces. If at least 50% of the available pieces have an area greater than one-eighth of the plate, then x_{1} is performed; otherwise, x_{2} is performed. The function returns the value of the executed parameter.

E_{3}(): a function that acts as a flag to indicate that, in the next execution of the estimator, E_{3} must be used. The function always returns 3 (true).

E_{4}(): a function that acts as a flag to indicate that, in the next execution of the estimator, E_{4} must be used. The function always returns 4 (true).

A function indicating that the E_{i} estimator must use the pieces in order from greatest to smallest area. This function returns 2 (true).

A function indicating that the E_{i} estimator must use the pieces in order from smallest to greatest area. This function returns 2 (true).

A function indicating that the E_{i} estimator must use the pieces in order from longest to shortest length, as long as the length is greater than the width; if not, the piece is considered to be rotated 90 degrees. This function returns 3 (true).

A function indicating that the E_{i} estimator must use the pieces in order from shortest to longest length, as long as the length is greater than the width; if not, the piece is considered to be rotated 90 degrees. This function returns 4 (true).

A function indicating that the E_{i} estimator must use the pieces in order from longest to shortest length. This function returns 5 (true).

A function indicating that the E_{i} estimator must use the pieces in order from greatest to smallest width. This function returns 6 (true).

A function indicating that the E_{i} estimator must use a decreasing ranking of the pieces by profit-to-area ratio.
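The orderings above amount to different sort keys over the pieces; a sketch, with pieces as (length, width, profit) tuples and the dictionary keys chosen for illustration:

```python
def normalized(p):
    """Treat a piece as rotated so that length >= width."""
    l, w, profit = p
    return (max(l, w), min(l, w), profit)

# Sort keys corresponding to the orderings described in the text.
orderings = {
    "area_desc":            lambda p: -(p[0] * p[1]),
    "area_asc":             lambda p: p[0] * p[1],
    "length_desc":          lambda p: -p[0],
    "width_desc":           lambda p: -p[1],
    "norm_length_desc":     lambda p: -normalized(p)[0],
    "profit_per_area_desc": lambda p: -p[2] / (p[0] * p[1]),
}

pieces = [(4, 3, 12), (6, 5, 30), (3, 5, 5)]
by_area = sorted(pieces, key=orderings["area_desc"])
```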

A function indicating that the rectangle R_{k} with the largest area must be selected next. It returns 1 (false) or 2 (true) if there is a rectangle R_{k} remaining in LR.
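The sensor functions defined earlier can be sketched as follows; the 50% and one-eighth thresholds come from the text, while measuring "times a piece fits" by area alone is a simplifying assumption:

```python
def size_sensor(pieces, plate_area, if_big, if_small):
    """If at least 50% of the available pieces have an area greater than
    one-eighth of the plate, run if_big; otherwise run if_small (thunks)."""
    big = sum(1 for l, w, _ in pieces if l * w > plate_area / 8)
    return if_big() if 2 * big >= len(pieces) else if_small()

def fit_sensor(pieces, rect_area, if_many, if_few):
    """Average number of times each piece fits in the rectangle, measured
    here by area only (an assumption); an average above 2 triggers if_many."""
    average = sum(rect_area // (l * w) for l, w, _ in pieces) / len(pieces)
    return if_many() if average > 2 else if_few()
```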

For the evolution and evaluation of algorithms, 46 instances of problem CW_TDC were used (Table

GT1: “2s,” “Hchl4s,” “CHL2s,” “CHL5s,” “Hchl3s,” “OF1,” “OF2,” “Hchl5s,” “A5,” “A4,” “Hchl6s,” and “STS4S.”

GT2: “CHL6,” “CHL1s,” “APT34,” “CHL7,” “A3,” “Hchl7s,” “APT35,” “APT36,” “APT30,” “Hchl2,” “CU7,” “Hchl1,” “A2s,” and “APT38.”

GT3: “CU1,” “wang1,” “APT37,” “APT39,” “STS2s,” “APT33,” “APT32,” “CU11,” “CU9,” “CU8,” “CU2,” “APT31,” “CU10,” “A1s,” “CU4,” “W,” “3s,” “CU6,” “CU5,” and “CU3.”

Testing instances.

| Number of instances | Instances | Reference |
|---|---|---|
| 14 | OF1, OF2, W, CU1, CU2, CU3, CU4, CU5, CU6, CU7, CU8, CU9, CU10, CU11 | Fayard et al. [ |
| 14 | STS2, STS4, A1s, A2s, STS2s, STS4s, CHL1s, CHL2s, A3, A4, A5, CHL5, CHL6, CHL7 | Cung et al. [ |
| 5 | Hchl3s, Hchl4s, Hchl5s, Hchl6s, Hchl7s | Álvarez-Valdés et al. [ |
| 9 | APT30, APT31, APT32, APT33, APT34, APT35, APT36, APT37, APT38, APT39 | Álvarez-Valdés et al. [ |
| 4 | 2s, wang1, wang2, wang3 | Others |

The fitness function considers two objectives. The first is the quality of the algorithm, measured by its relative error: the smaller the relative error, the greater the quality. The second is the relative deviation of the algorithm’s number of nodes from an initially fixed number of nodes. Both terms are expressed in Equation (
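The referenced equation did not survive extraction; one plausible form consistent with the description — the average relative error over the instance set plus a weighted relative deviation of the node count n(A) from a target size n* — is sketched below, where the weight α, the target n*, and the optimal values z*_j are assumptions:

```latex
\mathrm{fitness}(A) \;=\;
\underbrace{\frac{1}{|I|}\sum_{j \in I}\frac{z^{*}_{j}-z_{j}(A)}{z^{*}_{j}}}_{\text{relative error}}
\;+\;
\underbrace{\alpha\,\frac{\lvert n(A)-n^{*}\rvert}{n^{*}}}_{\text{size deviation}}
```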

The evolutionary process uses an adaptation of the platform originally developed for the GP application GPC++, which was designed to evolve tree structures [

A population size of 1000 individuals and 100 generations was used, and the crossover and mutation probabilities were set at 85% and 5%, respectively. The “ramped half-and-half” method was used to create the initial population, with a controlled initial tree size that could later grow to a height of 13. The selection of the fittest individual was performed by a tournament. The mutations used were “swap mutation” and “shrink mutation” [
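The reported settings can be collected into a single configuration mapping for reference (the key names are illustrative; the values are those stated in the text):

```python
# GP settings reported in the text, collected for reference.
gp_config = {
    "population_size": 1000,
    "generations": 100,
    "crossover_prob": 0.85,
    "mutation_prob": 0.05,
    "init_method": "ramped half-and-half",
    "max_tree_height": 13,
    "selection": "tournament",
    "mutations": ["swap mutation", "shrink mutation"],
}
```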

The experiment comprises two parts: the evolution process and the evaluation process. In the first, the new algorithms face the GT1 set of instances, and the experiment is repeated 30 times, selecting the best algorithm of each execution. With the 30 selected algorithms, the second process follows, divided into two parts: the algorithms are evaluated first with the GT2 set of instances and later with GT3.

In the 30 executions of the experiment, the convergence curve shows that the individuals of each generation systematically converge until reaching an average error of between 2 and 4%. The graph in Figure

Convergence for the 30 runs.

The best resulting algorithm of each of the executions is selected. Table

Best algorithm of each of the 30 runs.

| Algorithm | Avg. fitness (%) | Avg. error (%) | Best error (%) | Worst error (%) | SD (%) | No. of hits | No. of nodes | Tree height | Time (s) |
|---|---|---|---|---|---|---|---|---|---|
| A1 | 6.52 | 7.40 | 0.00 | 15.22 | 5.17 | 1 | 13 | 4 | 553.00 |
| A2 | 4.69 | 5.33 | 1.36 | 20.75 | 5.42 | 0 | 13 | 4 | 560.82 |
| A3 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 3 | 592.03 |
| A4 | 4.38 | 4.96 | 2.02 | 18.29 | 4.51 | 0 | 13 | 3 | 532.11 |
| A5 | 5.18 | 5.89 | 0.86 | 10.57 | 3.67 | 0 | 13 | 5 | 491.15 |
| A6 | 5.85 | 6.66 | 1.36 | 12.93 | 3.73 | 0 | 13 | 3 | 595.45 |
| A7 | 4.28 | 4.85 | 1.36 | 10.74 | 2.91 | 0 | 13 | 4 | 564.03 |
| A8 | 5.85 | 6.66 | 1.36 | 12.93 | 3.73 | 0 | 13 | 4 | 599.26 |
| A9 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 4 | 603.54 |
| A10 | 4.63 | 5.26 | 0.00 | 11.64 | 3.79 | 1 | 13 | 4 | 600.85 |
| A11 | 5.19 | 5.91 | 1.36 | 14.96 | 3.65 | 0 | 13 | 3 | 511.67 |
| A12 | 3.49 | 3.96 | 0.00 | 9.83 | 2.63 | 1 | 13 | 3 | 556.78 |
| A13 | 4.50 | 5.14 | 0.00 | 13.87 | 3.98 | 1 | 13 | 4 | 659.11 |
| A14 | 3.92 | 4.45 | 0.75 | 10.23 | 3.09 | 0 | 13 | 4 | 597.02 |
| A15 | 5.63 | 6.41 | 1.31 | 20.75 | 6.47 | 0 | 13 | 6 | 596.93 |
| A16 | 4.64 | 5.28 | 0.75 | 13.17 | 3.40 | 0 | 13 | 4 | 518.51 |
| A17 | 6.15 | 6.98 | 0.86 | 17.77 | 5.19 | 0 | 13 | 3 | 607.99 |
| A18 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 3 | 607.98 |
| A19 | 6.34 | 7.20 | 0.00 | 14.82 | 4.62 | 1 | 13 | 4 | 591.04 |
| A20 | 5.03 | 5.70 | 0.00 | 12.52 | 4.02 | 1 | 13 | 5 | 618.95 |
| A21 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 4 | 596.01 |
| A22 | 5.59 | 6.37 | 1.36 | 13.14 | 4.16 | 0 | 13 | 5 | 592.75 |
| A23 | 5.09 | 5.81 | 0.00 | 17.95 | 4.85 | 1 | 13 | 4 | 598.61 |
| A24 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 4 | 580.30 |
| A25 | 4.23 | 4.80 | 0.00 | 14.48 | 4.26 | 1 | 13 | 5 | 648.99 |
| A26 | 5.15 | 5.86 | 0.00 | 17.95 | 4.62 | 1 | 13 | 4 | 516.53 |
| A27 | 5.84 | 6.63 | 0.00 | 14.37 | 4.36 | 1 | 13 | 4 | 602.66 |
| A28 | 4.38 | 4.96 | 2.02 | 18.29 | 4.51 | 0 | 13 | 4 | 587.45 |
| A29 | 4.69 | 5.33 | 1.36 | 20.75 | 5.42 | 0 | 13 | 5 | 549.60 |
| A30 | 6.25 | 7.11 | 0.00 | 15.81 | 4.82 | 1 | 13 | 4 | 573.66 |
| Average | 5.22 | 5.93 | 0.60 | 14.85 | 4.29 | 0.5 | 13 | 4 | 580.16 |

Of the 30 selected algorithms, 16 reach optimum values for at least one instance; consequently, their best error value is 0.00%. In general, all the algorithms are capable of determining a near-optimum solution for some of the 12 instances, as evidenced by the average best error of 0.60%. In contrast, the average worst error of 14.85% shows that every algorithm faces difficulty with at least one of the instances. The lowest average error, approximately 3.96%, and the lowest average fitness, 3.49%, are both found in execution 12. The shortest computation time for evolution is 8.19 minutes (execution 5), and the longest is 10.99 minutes (execution 13). The overall average fitness is 5.22%, and the overall average error is 5.93%.

The generated algorithms are robust, and they do not overspecialize. To demonstrate this, an evaluation process was used in which the 30 best algorithms found are run on instances different from those used in their creation; for this purpose, the instance groups GT2 and GT3 were used. Table

Evaluation of algorithms with instances from group GT2.

| Algorithm | Avg. fitness (%) | Avg. error (%) | Best error (%) | Worst error (%) | SD (%) | Hits | Time (s) |
|---|---|---|---|---|---|---|---|
| A1 | 8.33 | 9.35 | 2.34 | 16.12 | 4.43 | 0 | 21.0 |
| A2 | 3.53 | 3.97 | 1.47 | 8.14 | 2.20 | 0 | 20.0 |
| A3 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 18.0 |
| A4 | 3.51 | 3.94 | 0.88 | 7.24 | 2.43 | 0 | 17.0 |
| A5 | 4.43 | 4.95 | 1.41 | 13.89 | 3.54 | 0 | 25.0 |
| A6 | 5.64 | 6.34 | 1.32 | 18.53 | 4.61 | 0 | 23.0 |
| A7 | 4.53 | 5.09 | 1.01 | 11.76 | 2.85 | 0 | 15.0 |
| A8 | 5.64 | 6.34 | 1.32 | 18.53 | 4.61 | 0 | 14.0 |
| A9 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 17.0 |
| A10 | 7.15 | 8.03 | 3.09 | 16.12 | 4.11 | 0 | 15.0 |
| A11 | 4.29 | 4.83 | 1.32 | 11.76 | 2.94 | 0 | 13.0 |
| A12 | 4.56 | 5.12 | 1.07 | 18.53 | 4.57 | 0 | 11.0 |
| A13 | 4.32 | 4.84 | 1.38 | 12.20 | 3.24 | 0 | 19.0 |
| A14 | 3.61 | 4.06 | 1.59 | 8.14 | 2.08 | 0 | 20.0 |
| A15 | 3.55 | 4.00 | 1.21 | 12.20 | 3.26 | 0 | 22.0 |
| A16 | 4.29 | 4.83 | 1.32 | 11.76 | 2.94 | 0 | 27.0 |
| A17 | 6.00 | 6.73 | 1.64 | 18.50 | 4.71 | 0 | 21.0 |
| A18 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 22.0 |
| A19 | 8.33 | 9.35 | 2.34 | 16.12 | 4.43 | 0 | 27.0 |
| A20 | 5.73 | 6.46 | 1.64 | 16.12 | 4.03 | 0 | 26.0 |
| A21 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 20.0 |
| A22 | 6.30 | 7.07 | 1.64 | 18.50 | 4.70 | 0 | 19.0 |
| A23 | 3.89 | 4.38 | 1.83 | 8.97 | 2.18 | 0 | 20.0 |
| A24 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 24.0 |
| A25 | 5.13 | 5.78 | 2.00 | 16.12 | 3.55 | 0 | 15.0 |
| A26 | 6.06 | 6.81 | 3.08 | 16.12 | 3.38 | 0 | 25.0 |
| A27 | 6.04 | 6.80 | 3.08 | 16.12 | 3.46 | 0 | 19.0 |
| A28 | 3.51 | 3.94 | 0.88 | 7.24 | 2.43 | 0 | 17.0 |
| A29 | 3.53 | 3.97 | 1.47 | 8.14 | 2.20 | 0 | 14.0 |
| A30 | 5.99 | 6.73 | 1.32 | 18.53 | 4.94 | 0 | 15.0 |
| Average | 5.27 | 5.92 | 1.90 | 14.20 | 3.50 | 0 | 19.0 |

The produced algorithms present similar computational performance in the evolution and evaluation stages. The instances used for evaluation offer greater flexibility in terms of the ratio of plate area to piece area. Table

Evaluation of algorithms with instances from group GT3.

| Algorithm | Avg. fitness (%) | Avg. error (%) | Best error (%) | Worst error (%) | SD (%) | No. of hits | Time (s) |
|---|---|---|---|---|---|---|---|
| A1 | 5.09 | 5.71 | 0 | 12.86 | 3.91 | 1 | 48.0 |
| A2 | 2.32 | 2.60 | 0 | 5.56 | 1.33 | 1 | 44.0 |
| A3 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 51.0 |
| A4 | 3.33 | 3.74 | 0 | 11.58 | 2.61 | 1 | 41.0 |
| A5 | 4.03 | 4.52 | 0 | 10.12 | 2.74 | 1 | 44.0 |
| A6 | 3.02 | 3.39 | 0 | 8.23 | 2.31 | 1 | 46.0 |
| A7 | 3.41 | 3.83 | 0 | 8.34 | 2.14 | 1 | 45.0 |
| A8 | 3.02 | 3.39 | 0 | 8.23 | 2.31 | 1 | 42.0 |
| A9 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 40.0 |
| A10 | 4.21 | 4.73 | 0 | 11.85 | 3.13 | 1 | 39.0 |
| A11 | 2.66 | 2.99 | 0 | 6.96 | 1.82 | 1 | 50.0 |
| A12 | 3.70 | 4.15 | 0 | 20.42 | 4.41 | 1 | 49.0 |
| A13 | 2.14 | 2.41 | 0 | 7.80 | 1.74 | 1 | 48.0 |
| A14 | 2.68 | 3.01 | 0 | 8.23 | 2.09 | 1 | 41.0 |
| A15 | 2.31 | 2.59 | 0 | 7.80 | 2.01 | 1 | 43.0 |
| A16 | 2.66 | 2.99 | 0 | 6.96 | 1.82 | 1 | 42.0 |
| A17 | 5.37 | 6.03 | 0 | 15.73 | 4.14 | 1 | 51.0 |
| A18 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 42.0 |
| A19 | 5.09 | 5.71 | 0 | 12.86 | 3.91 | 1 | 43.0 |
| A20 | 4.43 | 4.97 | 0 | 13.20 | 3.43 | 1 | 38.0 |
| A21 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 45.0 |
| A22 | 5.37 | 6.03 | 0 | 15.73 | 4.14 | 1 | 42.0 |
| A23 | 2.43 | 2.73 | 0 | 6.96 | 1.76 | 1 | 45.0 |
| A24 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 48.0 |
| A25 | 3.52 | 3.96 | 0 | 8.96 | 2.80 | 1 | 43.0 |
| A26 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 50.0 |
| A27 | 4.22 | 4.74 | 0 | 8.96 | 2.93 | 1 | 51.0 |
| A28 | 3.33 | 3.74 | 0 | 11.58 | 2.61 | 1 | 51.0 |
| A29 | 2.32 | 2.60 | 0 | 5.56 | 1.33 | 1 | 42.0 |
| A30 | 3.02 | 3.39 | 0 | 8.23 | 2.31 | 1 | 46.0 |
| Average | 3.63 | 4.08 | 0 | 9.89 | 2.71 | 1 | 45.0 |

The size of the algorithms tends to stabilize at the initially predefined size. An algorithm’s size evolves indirectly during its evolutionary construction toward the predefined size, as specified in Equation (

Results in Table

Tree of the algorithm from run 13.

The found algorithms follow both a constructive and an improvement logic. All the generated algorithms have at least one cycle, within which they build solutions from the initial plate until no other piece fits. The algorithms represented in Figure

Tree instructions for algorithms A4, A13, and A14.

The algorithms assemble the layout of the pieces using a best-fit criterion. This behavior appears specifically in the E_{i} estimators because each estimator has a different criterion for fitting pieces. If the default E_{1} estimator is not useful for fitting pieces, the algorithm may use a different estimator in one of its tree’s branches. Moreover, the ordered piece lists offer another route to the best possible fit, since different sorting methods can be applied during selection, from largest to smallest or from smallest to largest, each with its own sorting criterion (width, length, area, etc.). A clear example is algorithm A4 of Figure , which includes the E_{2} estimator, an Add-p terminal, and two E_{3} terminals in its right branch.

The found algorithms are a generalization of good existing heuristics for this problem. The CONS heuristic is the fundamental base of the generated algorithms. The algorithms build a solution whose loss is greatest at the start and gradually decreases as pieces are assigned. Figure

Constructive cycles prevail in the found algorithms. In most of the analyzed algorithms, there are cycles that try to find a solution using estimators and piece lists. Always connected by a While cycle, Add-p and Cut prioritize the construction of blocks to find the solution, as can be observed in the right branch of algorithm A14 presented in Figure , which uses the combinations (Add-p with E_{2}, and Cut with E_{3}, under While) as the base of the constructive cycles.

The cycles in the algorithms operate as instruction compacters. The resulting algorithms are capable of repeating the process a great number of times, not always the same number, using few lines of code. In this way, a great number of operations are conducted, but the instructions are compacted into small and easy-to-understand branches. An example appears in Figure

Several of the obtained algorithms are competitive with respect to a state-of-the-art constructive algorithm. Table

Evaluation of algorithms with instances in groups GT2 and GT3.

| Algorithm | Avg. error (%) GT2 | Hits GT2 | Time (s) GT2 | Avg. error (%) GT3 | Hits GT3 | Time (s) GT3 |
|---|---|---|---|---|---|---|
| CONS | 4.53 | 0 | 2.45 | 3.36 | 0 | 2.75 |
| GRASP | 1.63 | 2 | 142.44 | 0.73 | 7 | 158.4 |
| TABU | 0.34 | 5 | 1459.86 | 0.25 | 9 | 1730.1 |

This paper describes a computational model and experiment that allowed for the generation of algorithms to solve a set of instances of the guillotine-cutting problem. The generated algorithms were decoded from tree structures that were evolved with a computational tool based on GP. The functions that constitute the basic components of the produced algorithms were deduced by identifying the basic components of an existing algorithm. Other functions inspired by the geometric and algorithmic structure of the problem were added to provide greater variability in the algorithm search. The best 30 algorithms were identified and tested with 46 representative instances of the problem. The average error of the algorithms varied between 3.00 and 5.00%. The generated algorithms find better results when working on instances with more possible combinations among their pieces, and the computational results are similar across instances of different degrees of combinatorial complexity.

The data set used in this paper is very common in the field of cutting problems and can be obtained from a public web site:

The authors declare that they have no conflicts of interest.

This research was partially funded by the Complex Engineering Systems Institute (ICM-FIC: P05-004-F, CONICYT: FB0816).