The setup of heuristics and metaheuristics, that is, the fine-tuning of their parameters, exerts a great influence on both the solution process and the quality of results of optimization problems. The search for the best fit of these algorithms is an important task and a major research challenge in the field of metaheuristics. The fine-tuning process requires a robust statistical approach, to aid in understanding the process and in producing effective settings, as well as an efficient algorithm that can summarize the search process. This paper presents an approach combining design of experiments (DOE) techniques and racing algorithms to improve the performance of different algorithms on classical optimization problems. A comparison of results between the default metaheuristics and those using the settings suggested by the fine-tuning procedure is presented. Broadly, the statistical results suggest that the fine-tuning process improves the quality of solutions for different instances of the studied problems. Therefore, this study indicates that the use of DOE techniques combined with racing algorithms may be a promising and powerful tool to assist in the investigation and fine-tuning of different algorithms. However, additional studies must be conducted to verify the effectiveness of the proposed methodology.
The fine-tuning of heuristics and metaheuristics is usually a tedious and laborious task for most researchers. However, it exerts a strong influence on the solution process and on the quality of the results of optimization problems. Some researchers classify this process as an optimization problem with many variables (e.g., the parameters to be set) subject to several constraints (e.g., the ranges of the parameters), where the choice of inappropriate values may result in poor algorithm performance and/or low-quality solutions.
Research on metaheuristics is constantly evolving and covers theoretical developments, new algorithms, and enhancement techniques to assist researchers. Since the last decade, there has been growing interest in methods that assist the tuning of these algorithms and reduce the work related to this activity. Therefore, the fine-tuning of metaheuristics is an important field of research, both in the context of developing these algorithms and in the evaluation of problems from areas such as Operations Research and Engineering.
Since the last decade, many researchers (e.g., [
This paper presents an approach combining DOE techniques and racing algorithms to improve the performance of metaheuristics of distinct natures, such as the genetic algorithm (GA) and simulated annealing (SA), whose main difference lies in the way their search patterns are implemented. Our approach is illustrated by means of a case study in which a set of parameters of both algorithms is studied simultaneously through the response surface methodology (RSM), in order to define the search space (i.e., the candidate configurations) of each one, followed by a racing algorithm to determine the best fit of the algorithms. The quality of the proposed settings for GA and SA is evaluated by applying the selected metaheuristics to classical optimization problems, namely the travelling salesman problem (TSP) and the problem of scheduling to minimize the total weighted tardiness on a single machine (TWTP), and by comparing results obtained with the default settings against those suggested by the fine-tuning study.
The rest of the paper is structured as follows: Section
Many optimization problems, especially those related to the real world (resource allocation, facility location, vehicle routing, etc.), cannot be solved exactly within realistic time limits. Essentially, these problems consist of finding an absolute extreme (maximum or minimum), called the optimum, of an objective function with many local extremes (Figure
Objective function for a minimization problem.
Generally, such problems are inherently complex and therefore demand considerable computing effort. Some of these problems, such as travelling salesman and scheduling, are classics of the Operations Research field and account for a significant number of publications in the specialized literature [
The travelling salesman problem (TSP) is a classical optimization problem in which the goal is to find the shortest route through a set of given cities, starting and ending at the same city, such that each city is visited exactly once. A TSP consists of a set
Scheduling is another classical optimization problem involving tasks that must be arranged in
Both problems are known to be NP-hard [
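Both objective functions are straightforward to state in code. Below is a minimal sketch (in Python, for illustration only; the paper's experiments used Scilab), assuming Euclidean distances for the TSP; all names are illustrative:

```python
import math

def tour_length(tour, coords):
    """Length of a closed TSP tour over 2-D city coordinates (Euclidean)."""
    total = 0.0
    for i in range(len(tour)):
        x1, y1 = coords[tour[i]]
        x2, y2 = coords[tour[(i + 1) % len(tour)]]  # wrap back to the start city
        total += math.hypot(x2 - x1, y2 - y1)
    return total

def total_weighted_tardiness(sequence, proc, due, weight):
    """Total weighted tardiness of a job sequence on a single machine."""
    t, twt = 0, 0
    for j in sequence:
        t += proc[j]                            # completion time of job j
        twt += weight[j] * max(0, t - due[j])   # penalize only late jobs
    return twt
```

For example, visiting the corners of a unit square in order gives a tour of length 4, and a job finishing one time unit late with weight 5 contributes 5 to the tardiness objective.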
Metaheuristics are among the best-known approaches for solving problems for which there is no specific and/or efficient algorithm. Many metaheuristics are inspired by metaphors from different knowledge areas, such as biology (genetic algorithms, neural networks, and particle swarm optimization) and physics (simulated annealing). Usually, these algorithms differ from one another in their search patterns but offer accurate and balanced mechanisms for diversification (exploration of the search space) and intensification (exploitation of a promising region); they share features such as the use of stochastic components (involving randomness of variables) and have a variety of parameters that must be set according to the problem under study.
The genetic algorithm (GA) is a populationbased method invented by Holland [
The main difference between these algorithms is the way their search methods are implemented. GA operates on a population of solutions, where new generations (offspring) are generated from the fittest individuals of previous generations (parents). This feature (survival principle) drives an increase in the quality of solutions as new generations are created. SA, in turn, moves constantly between one solution (
Both GA and SA have a wide range of parameters (e.g., crossover and mutation rates and population size for GA; initial temperature, its rate of decrement, and number of iterations for SA) that must be tuned before starting to solve a problem. Since metaheuristics are extremely dependent on the values assigned to these parameters, the parameters must be studied carefully during fine-tuning, as they can determine the success of the algorithm.
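To make the role of SA's parameters concrete, the following is a minimal SA sketch (Python, illustrative only, not the paper's implementation). The mapping of pIni to an initial temperature and of dTem to a geometric cooling factor is our assumption about how these parameters are used:

```python
import math, random

def simulated_annealing(cost, neighbour, x0,
                        pIni=0.8, iExt=100, iInt=1000, dTem=0.9):
    """Minimal SA sketch using the paper's four parameters.

    pIni -> initial acceptance probability (used here to derive T0),
    iExt -> number of temperature stages,
    iInt -> iterations within one temperature stage,
    dTem -> geometric cooling factor (our assumption on its meaning).
    """
    x, fx = x0, cost(x0)
    # Derive an initial temperature so that a "typical" uphill move of size
    # delta0 is accepted with probability pIni (delta0 is illustrative).
    delta0 = abs(fx) * 0.1 + 1e-9
    T = -delta0 / math.log(pIni)
    best, fbest = x, fx
    for _ in range(iExt):                     # temperature stages
        for _ in range(iInt):                 # iterations within a stage
            y = neighbour(x)
            fy = cost(y)
            if fy <= fx or random.random() < math.exp(-(fy - fx) / T):
                x, fx = y, fy                 # accept downhill, or uphill by luck
                if fx < fbest:
                    best, fbest = x, fx
        T *= dTem                             # cool down between stages
    return best, fbest
```

Larger iExt and iInt buy more search effort, while pIni and dTem shape how tolerant the walk is to uphill moves, which is exactly why these four values must be tuned jointly.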
Our approach to fine-tuning algorithms can be expressed as a procedure that begins with an arbitrary selection of instances from a class of optimization problems, followed by the definition of ranges for each parameter of the algorithm, to apply
Factorial designs are useful for identifying the factors that influence the response. However, when the interest is in defining the factor settings that optimize the process and produce values (of factors) closer to the optimum, (
The result of RSM is a second-order model [
Response surface shapes. (a) Maximum; (b) minimum; (c) saddle point. Source: Montgomery, 2001.
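The fitted second-order model y = b0 + b'x + x'Bx is typically analyzed through its stationary point, x_s = -(1/2) B^{-1} b, whose nature (maximum, minimum, or saddle, as in the figure) depends on the signs of the eigenvalues of B. A dependency-free sketch for two factors (Python, for illustration; the paper's computations used Scilab):

```python
def stationary_point(b, B):
    """Stationary point x_s = -0.5 * B^(-1) b of a fitted second-order
    model y = b0 + b'x + x'Bx, for two factors (solved by Cramer's rule
    to avoid external dependencies)."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    rhs = [-0.5 * b[0], -0.5 * b[1]]          # solve B x = -0.5 b
    x1 = (rhs[0] * B[1][1] - B[0][1] * rhs[1]) / det
    x2 = (B[0][0] * rhs[1] - rhs[0] * B[1][0]) / det
    # If det > 0 and B[0][0] > 0 the point is a minimum; det > 0 and
    # B[0][0] < 0 a maximum; det < 0 a saddle point.
    return x1, x2
```

For example, for y = 10 - 2*x1 - 4*x2 + x1^2 + 2*x2^2 (so b = [-2, -4] and B = [[1, 0], [0, 2]]), the stationary point is (1, 1), a minimum.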
The final stage of our approach consists of applying a racing algorithm to define the setup of the algorithms. Racing was introduced by Maron and Moore [
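As a toy illustration of the racing idea (a simplification, not the exact Hoeffding- or F-Race-style procedure: a fixed drop threshold stands in for a proper statistical test), candidates are evaluated instance by instance and clearly inferior configurations are eliminated early:

```python
import statistics

def race(candidates, evaluate, instances, drop_margin=1.0):
    """Toy racing loop: evaluate surviving configurations instance by
    instance and drop a candidate once its mean cost trails the current
    leader by more than drop_margin (illustrative threshold)."""
    scores = {c: [] for c in candidates}
    alive = list(candidates)
    for inst in instances:
        for c in alive:
            scores[c].append(evaluate(c, inst))   # one more observation each
        means = {c: statistics.fmean(scores[c]) for c in alive}
        leader = min(means.values())
        alive = [c for c in alive if means[c] - leader <= drop_margin]
        if len(alive) == 1:                       # race decided early
            break
    return min(alive, key=lambda c: statistics.fmean(scores[c]))
```

The appeal over a full factorial evaluation is that poor configurations consume only a few instance evaluations before being discarded.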
To illustrate our approach to fine-tuning algorithms, we selected a set of parameters that intuitively seem to influence the performance of the metaheuristics GA and SA, regardless of the studied problem. For GA, we chose the following parameters: mutation probability (pMut), crossover probability (pCros), size of the population (sPop), and number of generations to be computed (nGen). For SA, the considered parameters are the initial probability of accepting a solution (pIni), the number of temperature stages (iExt), the number of iterations during one temperature stage (iInt), and the decrease in temperature (dTem).
To generalize our results and make them comparable, we use the relative deviation from the optimum, given by
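Assuming the usual definition (the percentage by which a solution's value exceeds the known optimum), a minimal helper could read:

```python
def relative_deviation(found, optimum):
    """Relative deviation (%) of a solution value from the known optimum.
    This is the conventional definition, assumed here for illustration."""
    return 100.0 * (found - optimum) / optimum
```

So a tour of length 103 against a known optimum of 100 deviates by 3%, and a deviation of 0 means the optimum was reached.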
The parameters and their corresponding levels (low and high) required by a
Parameters settings for GA and SA.

GA param.  pCros  pMut  sPop  nGen
Low        0.01   0.01  10    10
High       0.99   0.99  200   200

SA param.  pIni   iExt  iInt  dTem
Low        0.50   30    1000  0.01
High       0.95   100   2000  0.99
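A two-level full factorial design over four factors enumerates all 2^4 = 16 treatment combinations of these low/high levels. A minimal sketch (Python, for illustration; levels taken from the GA rows of the table above):

```python
from itertools import product

# Low/high levels for the GA parameters, as in the table above.
levels = {
    "pCros": (0.01, 0.99),
    "pMut":  (0.01, 0.99),
    "sPop":  (10, 200),
    "nGen":  (10, 200),
}

def full_factorial(levels):
    """Yield every treatment combination of a two-level factorial design."""
    names = list(levels)
    for combo in product(*(levels[n] for n in names)):
        yield dict(zip(names, combo))

designs = list(full_factorial(levels))   # 2^4 = 16 runs for k = 4 factors
```

Each of the 16 dictionaries is one experimental run; the same construction applies unchanged to the SA parameters.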
The fine-tuning of the metaheuristics GA and SA on the TSP uses four arbitrary instances from the TSPLIB (URL:
The
All four factors studied are significant for the process, regardless of the instance selected.
There are differences between interactions of the factors according to the instance studied.
The next stage consists in applying RSM to explore the neighborhood around a promising region and obtain values for each parameter according to the studied instance. The procedure consists of the simultaneous study of all four parameters of each algorithm until the ANOVA shows statistical significance and suggests an empirical model to explain the relationships among them. The direct search algorithm based on the simplex of Nelder and Mead [
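The Nelder-Mead simplex search can be sketched in a few dozen lines. The following bare-bones version (reflection, expansion, contraction, shrink) is illustrative only, not the exact implementation used in this work:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000):
    """Bare-bones Nelder-Mead direct search for unconstrained minimisation
    of f over len(x0) variables (illustrative sketch)."""
    n = len(x0)
    simplex = [list(x0)]
    for i in range(n):                       # initial simplex: x0 + step*e_i
        p = list(x0); p[i] += step; simplex.append(p)
    for _ in range(max_iter):
        simplex.sort(key=f)                  # best first, worst last
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        worst = simplex[-1]
        refl = [c + (c - w) for c, w in zip(centroid, worst)]
        if f(refl) < f(simplex[0]):          # very good: try expanding further
            expa = [c + 2 * (c - w) for c, w in zip(centroid, worst)]
            simplex[-1] = expa if f(expa) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):       # better than second-worst: keep
            simplex[-1] = refl
        else:                                # contract toward the centroid
            cont = [c + 0.5 * (w - c) for c, w in zip(centroid, worst)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:                            # last resort: shrink toward best
                best = simplex[0]
                simplex = [[b + 0.5 * (pi - b) for b, pi in zip(best, p)]
                           for p in simplex]
    return min(simplex, key=f)
```

Being derivative-free, this kind of direct search suits response surfaces that are only available through fitted empirical models or noisy measurements.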
After RSM, the next stage uses a racing algorithm inspired by
pCros ∈ [0.54, 0.63]
pMut ∈ [0.59, 0.86]
sPop ∈ [110, 129]
nGen ∈ [176, 230]
Parameters settings for GA.
GA param.  Inst. 1  Inst. 2  Inst. 3  Inst. 4 

pCros  0.54  0.57  0.63  0.57 
pMut  0.59  0.70  0.72  0.86 
sPop  110  116  129  113 
nGen  176  198  230  229 
Parameters settings for SA.
SA param.  Inst. 1  Inst. 2  Inst. 3  Inst. 4 

pIni  0.74  0.75  0.74  0.76 
iExt  64  65  64  65 
iInt  1496  1491  1493  1489 
dTem  0.55  0.54  0.51  0.53 
The same methodology produces the following ranges for SA:
pIni ∈ [0.74, 0.76]
iExt ∈ [64, 65]
iInt ∈ [1489, 1496]
dTem ∈ [0.51, 0.55]
Our idea in using a racing algorithm is to select a configuration that is as good as possible from a large set of options. For this study, the settings used for GA are
Default and suggested parameters settings.

GA param.  pCros  pMut  sPop  nGen
Default    0.70   0.10  100   10
Suggested  0.54   0.79  110   203

SA param.  pIni   iExt  iInt  dTem
Default    0.80   100   1000  0.90
Suggested  0.74   64    1490  0.51
This approach begins, as previously presented, with the arbitrary selection of four instances of “wt40,” a TWTP benchmark with 40 tasks from the OR Library (URL:
Here, all four studied factors are also significant for the process, but it is possible to identify differences between their interactions. The results of applying RSM to the instances are presented in Tables
Parameters settings for GA.
GA param.  Inst. 1  Inst. 2  Inst. 3  Inst. 4 

pCros  0.54  0.51  0.55  0.53 
pMut  0.56  0.58  0.54  0.57 
sPop  110  121  140  116 
nGen  123  133  137  120 
Parameters settings for SA.
SA param.  Inst. 1  Inst. 2  Inst. 3  Inst. 4 

pIni  0.74  0.76  0.74  0.71 
iExt  62  61  64  64 
iInt  1439  1429  1478  1401 
dTem  0.58  0.56  0.58  0.53 
Once again, we applied a racing algorithm considering the search space of candidate configurations built from the ranges between the minimum and maximum of each parameter (Tables
pCros ∈ [0.51, 0.55]
pMut ∈ [0.54, 0.58]
sPop ∈ [110, 140]
nGen ∈ [120, 137]
The same methodology suggests the following ranges for SA:
pIni ∈ [0.71, 0.76]
iExt ∈ [61, 64]
iInt ∈ [1401, 1478]
dTem ∈ [0.53, 0.58]
Here, the settings used for GA are
Default and suggested parameters settings.

GA param.  pCros  pMut  sPop  nGen
Default    0.70   0.10  100   10
Suggested  0.52   0.55  110   140

SA param.  pIni   iExt  iInt  dTem
Default    0.80   30    1000  0.90
Suggested  0.71   61    1416  0.56
Our results were collected using the scientific software Scilab (
All results presented in this section were computed by means of (
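As an illustration, summary statistics over the relative deviations of repeated runs might be computed as follows (a sketch; the exact set of statistics reported in the tables below is an assumption on our part):

```python
import statistics

def summarise_runs(deviations, tol=1e-9):
    """Summary statistics over the relative deviations of repeated runs:
    mean, best (minimum), sample standard deviation, and the number of
    runs that reached the optimum (deviation of zero)."""
    return {
        "mean": statistics.fmean(deviations),
        "best": min(deviations),
        "std":  statistics.stdev(deviations),
        "hits": sum(1 for d in deviations if abs(d) < tol),
    }
```

For instance, deviations of [0.0, 2.0, 4.0] over three runs give a mean of 2.0, a best of 0.0 (one optimum hit), and a sample standard deviation of 2.0.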
Statistics of the GA after 10 runs on 10 instances of TSP.
Inst.
Berlin52  1.84  0  0.32  0 
Eil51  1.78  0  0.36  0 
Eil76  2.61  0  0.84  0 
KroA100  5.09  0  1.91  0 
Pr76  2.94  0  0.90  0 
St70  3.06  0  0.83  0 
Eil101  3.27  0  1.33  0 
Lin105  5.28  0  1.70  0 
Ch130  5.20  0  2.46  0 
Tsp225  7.84  0  4.29  0 
It should be noted that the procedure to improve the performance of metaheuristics (Section
The first set of results (Tables
Statistics of the SA after 10 runs on 10 instances of TSP.
Inst.
Berlin52  0.34  0  0.07  0 
Eil51  0.35  0  0.06  0 
Eil76  0.64  0  0.09  0 
KroA100  1.10  0  0.09  0 
Pr76  0.39  0  0.05  0 
St70  0.60  0  0.06  0 
Eil101  1.03  0  0.10  0 
Lin105  1.50  0  0.12  0 
Ch130  1.22  0  0.11  0 
Tsp225  1.64  0  0.38  0 
The statistics of
When we analyze the time series of both algorithms (Figure
Time series of a single instance (Inst. Berlin52) of TSP.
The substantial increase in the performance of both algorithms (GA and SA) can also be noted when we analyze the normalized execution times (Figure
Performance of the metaheuristics over time.
Through the results (Tables
Variability of the studied metaheuristics.
In Tables
Statistics of the GA after 10 runs on wt40 instances.
Inst.
1  1.63  0  0.08  1 
2  0.59  0  0.16  0 
3  2.42  0  0.20  0 
4  0.53  0  0.15  0 
5  0.87  0  0.07  5 
6  0.70  0  0.04  3 
7  0.74  0  0.06  1 
8  0.54  0  0.04  1 
9  0.32  0  0.02  0 
10  0.48  0  0.03  0 
Statistics of the SA after 10 runs on wt40 instances.
Inst.
1  1.13  0  0.04  1 
2  0.60  0  0.08  1 
3  1.93  0  0.08  0 
4  0.55  0  0.04  4 
5  0.51  0  0.04  7 
6  0.60  0  0.08  1 
7  0.72  0  0.05  2 
8  0.53  0  0.05  3 
9  0.33  0  0.03  0 
10  0.56  0  0.05  0 
The statistics of
Through the analysis of the time series of both algorithms (Figure
Time series of a single instance (Inst. 1) of TWTP.
Just as previously noted (Section
Performance of the metaheuristics over time.
The statistics (Tables
Variability of the studied metaheuristics.
This paper presented a study on the fine-tuning of different metaheuristics through a statistical approach combining RSM and a racing algorithm. A case study investigated the influence of different parameters of GA and SA, applied to different instances of classical optimization problems, namely the TSP and TWTP. The quality of the settings for GA and SA was evaluated by comparing the default algorithm settings against those suggested by the fine-tuning study.
The use of RSM allows defining the search space of candidate configurations, which is then explored by means of a racing algorithm to reach the best fit of each parameter of the metaheuristics for the different instances of the studied problems.
From the case study we collected results for the TSP and TWTP before and after fine-tuning the algorithms. In general, regardless of the nature of the metaheuristic, the fine-tuning process improves the quality of solutions and allows both GA and SA to achieve better results on different instances of the problems. Comparing the tuned algorithms with each other, SA yields the best results for the TSP (quality of solution), whereas for the TWTP both have similar performance, with SA slightly better than GA. In terms of execution time, SA generally reaches good solutions in less time for both studied problems, whereas GA is the slower to find better results.
It is important to note that the metaheuristics GA and SA, as well as the problems TSP and TWTP, were used in this work only to demonstrate our approach combining RSM and a racing algorithm. The aim was to verify the effectiveness of the proposed methodology in these cases. Our results suggest that the proposed approach may be a promising and powerful tool to assist in the fine-tuning of different algorithms. However, additional studies must be conducted to verify its effectiveness, mainly when applied to already well-configured algorithms.
The authors declare that there is no conflict of interest regarding the publication of this paper.