Algorithms that aim to solve optimisation problems by combining heuristics and mathematical programming have attracted researchers' attention. These methods, also known as matheuristics, exploit the complementary strengths of both approaches.
Solving mixed integer programming (MIP) problems as well as combinatorial optimisation problems is, in general, a very difficult task. Although efficient exact methods have been developed to solve these problems to optimality, as the problem size increases exact methods fail to solve them within an acceptable computational time. As a consequence, non-exact methods such as heuristic and metaheuristic algorithms have been developed to find good-quality solutions. In addition, hybrid strategies combining different non-exact algorithms are also a promising way to tackle complex optimisation problems. Unfortunately, algorithms that do not incorporate exact methods cannot give any guarantee of optimality and, thus, we do not know how good (or bad) the solutions found by these methods are.
One hybrid strategy that combines non-exact methods is memetic algorithms, which are population-based metaheuristics that use an evolutionary framework integrated with local search algorithms [
To overcome the situation described above, hybrid methods that combine heuristics and exact methods to solve optimisation problems have been proposed. These methods, also known as matheuristics [
Because of their complexity, MIP problems as well as combinatorial optimisation problems are often tackled using matheuristic methods. One common strategy to solve this class of optimisation problems is to divide the main optimisation problem into several subproblems. While heuristics are used to seek promising subproblems, exact methods are used to solve them to optimality. One advantage of this approach is that it does not depend on the (non)linearity of the resulting subproblem. Instead, it has been pointed out that it is desirable for the resulting subproblem to be convex [
In this paper we aim to study the impact of parameter tuning on the performance of matheuristic methods such as the one described above. To this end, the well-known capacitated facility location problem is used as an example of a hard combinatorial optimisation problem. To the best of our knowledge, no paper has focused on parameter tuning for matheuristic methods.
This paper is organised as follows: Section
The purpose of this section is twofold. We start by describing a general matheuristic framework that is used to solve both MIP problems and combinatorial optimisation problems, and how it differs from other commonly used approaches such as memetic algorithms and other evolutionary approaches. After that, we present the local-search-based algorithms we consider in this work to perform our experiments. We finish this section by introducing the parameter on which this study focuses.
Equations (
Although there exist a number of exact algorithms that can find an optimal solution for the MIP
During the last two decades, the idea of combining heuristic methods and mathematical programming has received increasing attention. Exploiting the advantages of each method appears to be a sensible strategy to overcome their inherent drawbacks. Several strategies have been proposed to combine heuristics and exact methods to solve optimisation problems such as the MIP
In this paper we assume that a constraint on the
In some cases, the value of
Interaction between a heuristic method and an exact method.
One distinctive feature of matheuristics is that there exists an interaction between the heuristic method and the exact method. In the method depicted in Figure
As mentioned above, there is one parameter that is not part of the set of parameters of the heuristic method nor part of the set of parameters of the exact method. This parameter, which we call
As mentioned in previous sections, the aim of this paper is not to provide a "state-of-the-art" algorithm to solve MIP problems but, instead, to study the effect that changing the size of subproblems has on the quality of the obtained solutions when using local-search-based matheuristic methods such as the one described in Section
Since the local search algorithms move in the subproblem space, that is, they look for promising subproblems
The steepest descent algorithm starts with an initial
Although the steepest descent algorithm can, in general, be considered a deterministic algorithm, in the sense that, given an initial solution, it converges to the same local optimum, in our implementation the algorithm does not visit all possible neighbours and, therefore, it becomes a stochastic local search. Since only one local optimum is generated at each run, we repeat the algorithm until the time limit is reached. The same is done for both the ND and TS algorithms we introduce next.
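The stochastic steepest-descent search over subproblems can be sketched as follows. This is a minimal illustration rather than the authors' implementation: `solve_subproblem` is a hypothetical stand-in for the call to the MIP solver, a solution is the subset of binary variables the solver is allowed to optimise, and `sample_size` caps how many neighbours are evaluated per iteration, which is what makes the search stochastic.

```python
import random

def steepest_descent(all_vars, k, solve_subproblem, max_iters=100,
                     sample_size=10, seed=0):
    """Stochastic steepest descent over the subproblem space.

    A candidate 'solution' is a k-subset of binary variables that the
    exact solver is allowed to optimise; the remaining variables stay
    fixed.  Only a random sample of neighbours is evaluated per
    iteration, so the search is stochastic rather than exhaustive.
    """
    rng = random.Random(seed)
    current = set(rng.sample(sorted(all_vars), k))
    best_val = solve_subproblem(current)
    for _ in range(max_iters):
        # Neighbourhood: swap one variable inside the subset for one outside.
        swaps = [(i, o) for i in sorted(current)
                 for o in sorted(set(all_vars) - current)]
        rng.shuffle(swaps)
        best_move, best_move_val = None, best_val
        for i, o in swaps[:sample_size]:
            val = solve_subproblem((current - {i}) | {o})
            if val < best_move_val:  # minimisation: keep the best improving swap
                best_move, best_move_val = (i, o), val
        if best_move is None:        # no sampled neighbour improves: local optimum
            break
        i, o = best_move
        current = (current - {i}) | {o}
        best_val = best_move_val
    return current, best_val
```

Wrapping this single run in a restart loop until the time limit is reached, as described above, completes the picture.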
As mentioned in Section
Just as in the steepest descent algorithm, the next descent algorithm starts with an initial solution, which is labelled as the current solution. Then, a random element from the neighbourhood of the current solution is selected and solved, and the objective function value of its optimal solution is compared to that of the current solution. As in the SD algorithm, if the neighbour solution is not better than the current solution, another randomly selected set
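The first-improvement scheme just described can be sketched in the same style, again with a hypothetical `solve_subproblem` standing in for the exact solver:

```python
import random

def next_descent(all_vars, k, solve_subproblem, max_iters=300, seed=0):
    """Next (first-improvement) descent over the subproblem space.

    Instead of scanning the whole neighbourhood, one random neighbour
    is drawn per iteration and accepted as soon as it improves on the
    current solution.
    """
    rng = random.Random(seed)
    current = set(rng.sample(sorted(all_vars), k))
    best_val = solve_subproblem(current)
    for _ in range(max_iters):
        # Random neighbour: swap one free variable for a fixed one.
        i = rng.choice(sorted(current))
        o = rng.choice(sorted(set(all_vars) - current))
        neighbour = (current - {i}) | {o}
        val = solve_subproblem(neighbour)
        if val < best_val:   # accept the first improving neighbour found
            current, best_val = neighbour, val
    return current, best_val
```

Compared with the steepest-descent sketch, each iteration solves one subproblem instead of a whole sample, so more moves fit within the same time limit.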
Unlike the algorithms described in the previous sections, tabu search is a local search technique guided by the use of adaptive or flexible memory structures [
As in the algorithms introduced above, TS also starts with an initial set of constraints
Unlike the algorithms introduced above, TS does not require the next current solution to be better than the previous one. This means that it is able to avoid local optima by choosing solutions that are more expensive, as this allows the algorithm to visit other (hopefully promising) areas of the search space. However, in case the algorithm cannot make any improvement after a predefined number of iterations, a diversification mechanism is used to escape from low-quality neighbourhoods and "jump" to other neighbourhoods. The diversification mechanism implemented here is a restart method, which sets the current solution to a randomly generated solution without losing the best solution found so far. The termination criterion implemented here is the time limit.
The tabu search implemented in this paper is as follows.
As Algorithm
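The tabu search variant described above can be sketched along the same lines. This is a simplified illustration under assumed details (a tabu tenure on recently removed variables, a sampled neighbourhood, and a random-restart diversification that keeps the best solution found so far); `solve_subproblem` again stands in for the MIP solver.

```python
import random

def tabu_search(all_vars, k, solve_subproblem, max_iters=200,
                tenure=5, stall_limit=20, sample_size=10, seed=0):
    """Tabu search over the subproblem space.

    The best sampled non-tabu neighbour is always accepted, even when
    it is worse than the current solution.  A variable that just left
    the subset is tabu (cannot re-enter) for `tenure` iterations.
    After `stall_limit` iterations without improving the global best,
    the search restarts from a random solution (diversification)
    while keeping the best solution found so far.
    """
    rng = random.Random(seed)
    universe = sorted(all_vars)
    current = set(rng.sample(universe, k))
    best_set, best_val = set(current), solve_subproblem(current)
    tabu = {}    # variable -> iteration until which it may not re-enter
    stall = 0
    for it in range(max_iters):
        swaps = [(i, o) for i in sorted(current)
                 for o in sorted(set(all_vars) - current)
                 if tabu.get(o, -1) < it]    # skip tabu re-entries
        rng.shuffle(swaps)
        if not swaps:
            continue                         # wait for tabu tenures to expire
        move, move_val = None, float("inf")
        for i, o in swaps[:sample_size]:
            val = solve_subproblem((current - {i}) | {o})
            if val < move_val:
                move, move_val = (i, o), val
        i, o = move
        current = (current - {i}) | {o}
        tabu[i] = it + tenure                # i just left the subset
        if move_val < best_val:
            best_set, best_val = set(current), move_val
            stall = 0
        else:
            stall += 1
            if stall >= stall_limit:         # diversification: random restart
                current = set(rng.sample(universe, k))
                tabu.clear()
                stall = 0
    return best_set, best_val
```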
We finally implement a very simple heuristic method we call
As we can see, no additional intelligence is added to the blind algorithm. It is just a random search that, after a predefined number of iterations (or any other stop criterion), returns the best solution found during its search. Thus, we consider this algorithm the baseline of this study.
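A sketch of this baseline, in the same style as the previous ones (`solve_subproblem` is again a hypothetical stand-in for the exact solver):

```python
import random

def blind_search(all_vars, k, solve_subproblem, max_iters=100, seed=0):
    """Blind (pure random) search over the subproblem space.

    Independent random k-subsets are drawn and solved; the best one
    seen so far is kept.  No neighbourhood structure or memory is used,
    which makes this the baseline the local search methods are
    compared against.
    """
    rng = random.Random(seed)
    universe = sorted(all_vars)
    best_set, best_val = None, float("inf")
    for _ in range(max_iters):
        candidate = set(rng.sample(universe, k))
        val = solve_subproblem(candidate)
        if val < best_val:       # keep the best subset seen so far
            best_set, best_val = candidate, val
    return best_set, best_val
```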
We apply all four algorithms described in this section to a set of CFLP benchmark instances. Details on these instances and the obtained results are presented in the next section.
All the algorithms above perform very differently as the value of the input parameter
In the next section we explain the experiments that we perform to study how the choice of
This section starts by briefly introducing the problem we consider in this paper. Then, the experiments performed are presented and their results are discussed.
The capacitated facility location problem (CFLP) is a well-known problem in combinatorial optimisation. The CFLP has been shown to be NP-hard [
Distribution network structure considered in this study. It consists of one central plant, a set of potential warehouses, and a set of customers or retailers [
The optimisation model considers the
Equation (
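For reference, a standard formulation of the CFLP reads as follows; the notation is assumed here and may differ from the authors' symbols: $f_j$ is the fixed cost of opening warehouse $j$, $c_{ij}$ the cost of serving customer $i$ from warehouse $j$, $d_i$ the demand of customer $i$, $b_j$ the capacity of warehouse $j$, $y_j$ a binary opening variable, and $x_{ij}$ the fraction of customer $i$'s demand served from warehouse $j$.

```latex
\begin{align}
\min\ & \sum_{j \in J} f_j\, y_j + \sum_{i \in I}\sum_{j \in J} c_{ij}\, x_{ij} \\
\text{s.t.}\ & \sum_{j \in J} x_{ij} = 1 && \forall i \in I, \\
& \sum_{i \in I} d_i\, x_{ij} \le b_j\, y_j && \forall j \in J, \\
& x_{ij} \ge 0, \quad y_j \in \{0,1\} && \forall i \in I,\ j \in J.
\end{align}
```

In the matheuristic framework above, the local search fixes a subset of the binary variables $y_j$ and the MIP solver optimises over the remaining ones.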
In this paper three benchmarks for the CFLP are considered. The first benchmark corresponds to problem sets
Instances for the first benchmark used in this study (OR Library).



Instances
19,240,822.45   1,745
18,438,046.54   1,778
17,765,201.95
17,160,439.01
13,656,379.58
13,361,927.45
13,198,556.43
13,082,516.50
11,646,596.97
11,570,340.29
11,518,743.74
11,505,767.39

The second benchmark is a set of instances where clients and warehouses are uniformly distributed over an imaginary square of
Instances for the













 
(1)   120670.3   170   95   287    123900.7   191   99   410
(2)   110534.5   267   194  356    113527.1   262   201  410
(3)   149539.9   261   105  424    149343.2   287   120  522
(4)   141847.7   399   309  568    137451.4   331   255  501
(5)   182434.9   427   128  674    181149.3   292   96   645
(6)   161882.7   726   263  1515   161872.9   596   306  1673
(7)   209319.5   455   227  748    207036.9   343   131  648
(8)   193647.7   761   399  1679   188859.7   1040  590  1828
(9)   243403.7   391   179  681    236287.7   428   178  1024
(10)  211302.4   1228  672  1787   215410.7   1158  507  1564
(11)  270449.9   448   189  1484   265805.9   407   154  642
(12)  243625.0   1187  670  1791   239519.8   1419  588  2653
Example of the instances considered in this study: (a) clients distributed in clusters; (b) clients uniformly distributed.
Finally, a third benchmark consisting of clients that are organised in clusters is considered. We call this benchmark
Table
After solving each problem to optimality using the MIP solver, we apply all three local-search-based matheuristics and the blind algorithm to each instance and allow them to run for 2,000 seconds. The proposed algorithms solve each instance 10 times for each value of
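In the tables that follow, GAP is assumed to follow the usual definition, namely the relative deviation of the heuristic solution value $z_H$ from the optimal value $z^{*}$ reported by the MIP solver:

```latex
\mathrm{GAP} = 100 \cdot \frac{z_H - z^{*}}{z^{*}}\ \%
```

so a GAP of 0.00% means the matheuristic matched the MIP solver's optimum.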
Results obtained by the local search methods for the OR Library’s instances.
Inst    Blind           SD              ND              TS
        GAP     Time    GAP     Time    GAP     Time    GAP     Time

0.1     5.22%   622     9.65%   248     9.45%   1424    6.73%   785
0.3     1.63%   1124    0.00%   1240    0.00%   489     0.00%   736
0.5     0.32%   664     0.00%   1095    0.00%   274     0.24%   700
0.7     0.10%   745     0.00%   366     0.00%   138     0.00%   1476

0.1     4.31%   463     0.00%   833     0.00%   1415    0.00%   866
0.3     1.22%   893     0.00%   366     0.19%   1290    0.19%   615
0.5     0.52%   987     1.83%   52      0.09%   956     0.00%   745
0.7     0.11%   282     0.09%   378     0.00%   1562    0.00%   743

0.1     1.81%   841     0.00%   341     0.31%   65      0.00%   686
0.3     0.23%   503     0.00%   602     0.00%   693     0.00%   1083
0.5     0.12%   880     0.00%   431     0.00%   442     0.00%   659
0.7     0.00%   272     0.00%   151     0.00%   211     0.00%   211

0.1     1.56%   990     0.00%   25      0.00%   84      0.00%   84
0.3     0.00%   411     0.00%   8       0.00%   31      0.00%   31
0.5     0.00%   111     0.00%   2       0.00%   14      0.00%   14
0.7     0.00%   23      0.00%   1       0.00%   7       0.00%   7

0.1     5.22%   622     0.03%   639     0.23%   960     0.01%   319
0.3     1.63%   1124    0.01%   588     0.08%   436     0.03%   1452
0.5     0.32%   664     0.00%   707     0.03%   216     0.00%   605
0.7     0.10%   745     0.01%   347     0.00%   1630    0.00%   1192

0.1     4.31%   463     1.44%   806     2.75%   222     0.09%   714
0.3     1.22%   893     0.00%   135     0.00%   69      0.00%   1165
0.5     0.52%   987     0.00%   186     0.00%   196     0.05%   530
0.7     0.11%   282     0.00%   112     0.05%   315     0.00%   261

0.1     1.81%   841     0.40%   403     0.43%   1270    0.30%   142
0.3     0.23%   503     0.00%   96      0.00%   181     0.00%   267
0.5     0.12%   880     0.00%   588     0.00%   236     0.00%   564
0.7     0.00%   272     0.00%   179     0.00%   316     0.00%   316

0.1     1.56%   990     0.00%   1309    0.32%   144     0.32%   144
0.3     0.00%   411     0.00%   24      0.00%   360     0.00%   360
0.5     0.00%   111     0.00%   57      0.00%   381     0.00%   381
0.7     0.00%   23      0.00%   28      0.00%   349     0.00%   349

0.1     8.09%   712     9.70%   1220    8.14%   92      10.27%  133
0.3     3.20%   919     0.00%   657     0.00%   486     0.00%   394
0.5     1.99%   1105    0.00%   173     0.00%   496     0.00%   1469
0.7     0.48%   863     0.00%   239     0.00%   355     0.00%   247

0.1     5.74%   937     1.42%   1254    2.32%   1397    0.70%   84
0.3     1.64%   934     0.00%   136     0.00%   107     0.00%   172
0.5     0.35%   562     0.00%   1225    0.00%   248     0.00%   537
0.7     0.02%   766     0.00%   1294    0.00%   164     0.00%   169

0.1     5.17%   979     1.11%   1254    1.54%   819     0.20%   585
0.3     1.04%   993     0.00%   73      0.00%   38      0.00%   272
0.5     0.22%   906     0.00%   587     0.00%   64      0.00%   1387
0.7     0.04%   740     0.00%   551     0.00%   149     0.00%   779

0.1     4.86%   663     1.35%   153     0.34%   523     0.85%   313
0.3     0.65%   711     0.03%   7       0.03%   35      0.03%   107
0.5     0.08%   821     0.00%   18      0.03%   11      0.03%   13
0.7     0.02%   228     0.03%   8       0.03%   22      0.00%   29
Figures
Average results obtained by the local search algorithms for each value of
Evolution of the GAP for
Evolution of the time for
Average results obtained by the local search algorithms for each value of
Evolution of the GAP for
Evolution of the time for
Average results obtained by the local search algorithms for each value of
Evolution of the GAP for
Evolution of the time for
Since the problems from the OR Library are only medium-size instances, the local-search-based matheuristics consistently find the optimal solution for almost all instances for
Figures
We now move on to the
Average GAP values obtained by the local search algorithms for each value of
Evolution of the GAP for
Evolution of the GAP for
Just as in the OR Library instances, as the parameter
Figures
Average times needed by the local search algorithms for each value of
Evolution of the time for
Evolution of the time for
Results in Figures
We can also note that instances that include clusters tend to take longer to converge. This is especially true as parameter
In this paper we show the impact of parameter tuning on a local-search-based matheuristic framework for solving mixed integer (non)linear programming problems. In particular, matheuristics that combine local search methods and a MIP solver are tested. In this study, we focus on the size of the subproblem generated by the local search method that is passed on to the MIP solver. As expected, the size of the subproblems that are solved in turn by the matheuristic method has a big impact on the behaviour of the matheuristic and, consequently, on the results it obtains: as the size of the subproblem increases (i.e., more integer/binary decision variables are considered), the results obtained by the MIP solver are closer to the optimal solution. The time required by the algorithms tested in this paper to find their best solutions also increases as the subproblem gets larger. Further, as the subproblem gets larger, fewer iterations can be performed within the allowed time. This is important as other heuristics such as evolutionary algorithms and swarm intelligence, where many iterations are needed before converging to a good-quality solution, might not be able to deal with large subproblems. We also note that the improvement in the GAP values after a certain value of parameter
As future work, strategies such as evolutionary algorithms and swarm intelligence will be tested within the matheuristic framework, taking into account the results obtained in this study. We expect that intelligent methods such as these will greatly improve the results obtained by the local search methods considered in this study. Moreover, the matheuristic framework used in this paper might also be applied to other MILP and MINLP problems such as, for instance, the beam angle optimisation problem in radiation therapy for cancer treatment.
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Guillermo CabreraGuerrero wishes to acknowledge