This paper proposes a cluster partitioning technique to compute improved upper bounds on the optimal solution of maximal covering location problems. Given a covering distance, a graph is built whose vertices are the potential facility locations, with an edge connecting each pair of facilities that can serve the same client. Coupling constraints, corresponding to some edges of this graph, are identified and relaxed in the Lagrangean way, producing disconnected subgraphs that represent smaller subproblems, which are computationally easier to solve by exact methods. The proposed technique is compared with the classical approach using real data and instances from the available literature.
The covering class of facility location problems deals with the maximum distance between any client and the facility assigned to serve the associated demand. These problems are known as covering problems, and the maximum service distance is known as the covering distance. The Set Covering Problem [
Covering models are often found in problems faced by public organizations when locating emergency services. Early techniques for solving the MCLP tried to obtain integer solutions from the linear programming relaxation of the model proposed by Church and ReVelle [
MCLP applications range from emergency services [
This paper presents a cluster relaxation technique to solve large-scale maximal covering location problems. The proposed approach requires the identification of a graph related to a set of constraints. If some of these constraints are relaxed, this graph can be partitioned into subgraphs (clusters) corresponding to smaller problems that can be solved independently.
This paper is organized as follows. Section
The MCLP was formulated in [
The objective function maximizes the covered demand. Constraints (
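To make the objective concrete, the following sketch solves a tiny hypothetical MCLP by brute force: it enumerates all ways of installing p facilities and keeps the placement covering the most demand. All data and names here are illustrative, not taken from the paper's instances.

```python
from itertools import combinations

# Brute-force MCLP on a toy instance: choose p facility sites
# so that the total demand of covered clients is maximized.
# cover[j] is the set of sites within the covering distance of
# client j; a client is covered if at least one chosen site serves it.
def solve_mclp(demand, cover, n_sites, p):
    best_value, best_sites = -1, None
    for sites in combinations(range(n_sites), p):
        chosen = set(sites)
        value = sum(d for d, c in zip(demand, cover) if c & chosen)
        if value > best_value:
            best_value, best_sites = value, chosen
    return best_value, best_sites

demand = [10, 20, 15, 25, 30]                      # illustrative demands
cover = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {3, 4}]   # illustrative coverage
```

Here sites {1, 3} cover every client, for a total demand of 100; on realistic instance sizes this enumeration is replaced by an exact solver.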
The traditional Lagrangean relaxation approach [
It is easy to see by the integrality property that
In this paper, a decomposition approach based on the Lagrangean relaxation with clusters (LagClus) of Ribeiro and Lorena [
Consider the MCLP instance represented in Figure
An MCLP instance.
Assuming the number of facilities to be installed as
Let
A covering graph.
It is easy to note that the edges in a covering graph are related to the set of constraints (
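As a sketch, assuming the coverage data gives, for each client, the set of potential sites within the covering distance (hypothetical data), the covering graph can be built as:

```python
# Build the covering graph of a hypothetical MCLP instance:
# vertices are potential facility sites, and an edge joins two
# sites whenever both can serve (cover) the same client.
def covering_graph(cover):
    edges = set()
    for sites in cover:                # sites able to serve one client
        for i in sites:
            for j in sites:
                if i < j:              # store each edge once
                    edges.add((i, j))
    return edges

# cover[j] = sites within the covering distance of client j (illustrative)
cover = [{0, 1}, {1, 2}, {2, 3}, {0, 3}, {3, 4}]
```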
Now, consider that a covering graph is partitioned in some way. Figure
Partitioning a covering graph.
A possible partition
The corresponding subgraphs
This partition corresponds to relaxing, in the Lagrangean way, the constraints (
Note that these subproblems correspond to the following clusters (which are associated with the subgraphs of the covering graph): Cluster 1: Cluster 2:
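The decomposition can be sketched as follows: given a vertex partition of the covering graph, edges inside a cluster remain in the subproblems, while edges between clusters mark the coupling constraints to be relaxed. The data and the two-cluster partition below are illustrative.

```python
# Separate intra-cluster edges from coupling (inter-cluster) edges.
# part[i] gives the cluster of facility site i; the coupling edges
# identify the constraints to be relaxed in the Lagrangean way.
def split_edges(edges, part):
    intra = sorted(e for e in edges if part[e[0]] == part[e[1]])
    coupling = sorted(e for e in edges if part[e[0]] != part[e[1]])
    return intra, coupling

edges = {(0, 1), (1, 2), (2, 3), (0, 3), (3, 4)}   # a covering graph
part = [0, 0, 1, 1, 1]                             # two clusters
```

Relaxing the coupling edges (0, 3) and (1, 2) leaves two disconnected subgraphs, each yielding an independent subproblem.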
The resulting Lagrangean relaxation does not have the integrality property and is therefore stronger than (
For the above example, one can apply a subgradient optimization method in order to determine the values of the dual variables
It is interesting to observe that, due to the relaxation of constraints (
Therefore, the proposed decomposition approach can be established in the following steps:
(a) build a covering graph;
(b) apply a graph partitioning heuristic to divide the covering graph;
(c) using distinct nonnegative multipliers, relax in the Lagrangean way the constraints corresponding to the edges connecting the clusters (defining the set of coupling constraints);
(d) decompose the resulting Lagrangean relaxation into independent subproblems;
(e) apply the standard subgradient method in order to optimize the dual variables.
The subgradient method used in step (e) can be written as follows. Set initial values for the Lagrangean multipliers.
While (the stop conditions are not satisfied) do the following:
- solve the subproblems and calculate the resulting Lagrangean (upper) bound;
- update the best bound found so far, if it has improved;
- calculate the subgradient vector from the relaxed constraints;
- update the step size;
- update the Lagrangean multipliers.
End-While.
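The subgradient method above can be sketched generically as follows. Here `solve_relaxation` stands for an oracle that solves the decomposed Lagrangean relaxation for fixed multipliers and returns the bound together with the subgradient, and `best_lower` is a known feasible value; the agility parameter `pi` and its halving schedule are common textbook choices, not values taken from the paper.

```python
# Standard subgradient optimization of the Lagrangean dual of a
# maximization problem with relaxed "<=" coupling constraints.
# solve_relaxation(lam) -> (upper_bound, subgradient), where the
# subgradient entries are the violations of the relaxed constraints.
def subgradient(solve_relaxation, n_mult, best_lower,
                pi=2.0, max_iter=100):
    lam = [0.0] * n_mult               # multipliers, kept nonnegative
    best_upper = float("inf")
    no_improve = 0
    for _ in range(max_iter):
        ub, g = solve_relaxation(lam)
        if ub < best_upper - 1e-9:     # improved (smaller) upper bound
            best_upper, no_improve = ub, 0
        else:
            no_improve += 1
            if no_improve >= 3:        # halve the agility parameter
                pi, no_improve = pi / 2.0, 0
        norm2 = sum(gi * gi for gi in g)
        if norm2 == 0.0:               # all relaxed constraints satisfied tightly
            break
        theta = pi * (ub - best_lower) / norm2     # step size
        lam = [max(0.0, li + theta * gi) for li, gi in zip(lam, g)]
    return best_upper, lam
```

On a toy relaxation of "maximize x1 + x2 subject to x1 + x2 <= 1, x binary" the loop closes the dual gap to the optimal value 1.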
The step size is updated at each iteration as a function of the subgradient vector, following the standard rule.
The LagClus algorithm was coded in C and the tests were conducted on a notebook with Intel Core 2 Duo 2.0 GHz processor and 2.0 GB RAM, running Windows XP (Service Pack 3), and ILOG CPLEX 10.1.1 [
For the graph partitioning task, the well-known METIS heuristic for graph partitioning problems was used [
The results obtained are shown in Tables
Computation times for SJC instances,
324 | 20 | 7302 | 0.000 | 0.015 | 296 | 2.543 | 2.537 | 1449 | 9.875 | 9.739
30 | 9127 | 0.027 | 0.047 | 24.650 | 24.635 | 44.596 | 41.101
40 | 10443 | 0.156 | 0.188 | 25.985 | 25.969 | 0.157 | 45.122 | 42.353
50 | 11397 | 0.180 | 0.391 | 24.452 | 24.422 | 0.195 | 43.748 | 43.459
60 | 11991 | 0.184 | 0.235 | 44.514 | 44.421 | 0.221 | 48.049 | 47.909
80 | 12152 | 0.000 | 0.031 | 8.876 | 8.816 | 0.003 | 109.129 | 108.115
108 | 12152 | 0.000 | 0.016 | 1.595 | 1.595 | 2.977 | 26.533 | 26.487
500 | 40 | 13340 | 0.000 | 0.047 | 108 | 3.453 | 3.361 | 804 | 17.186 | 11.499
50 | 14773 | 0.014 | 0.047 | 4.938 | 4.514 | 59.019 | 36.293
60 | 15919 | 0.048 | 0.063 | 8.233 | 7.243 | 57.157 | 34.737
70 | 16908 | 0.000 | 0.031 | 3.723 | 3.370 | 22.891 | 14.421
80 | 17749 | 0.000 | 0.015 | 5.406 | 4.766 | 26.686 | 16.697
100 | 18912 | 0.098 | 0.109 | 10.276 | 7.171 | 62.071 | 37.748
130 | 19664 | 0.041 | 0.297 | 30.827 | 24.588 | 69.934 | 43.373
167 | 19706 | 0.005 | 0.047 | 14.600 | 14.235 | 46.078 | 35.414
818 | 80 | 23325 | 0.055 | 0.140 | 166 | 45.564 | 21.880 | 1649 | 0.061 | 85.922 | 45.819
90 | 24455 | 0.123 | 0.266 | 56.388 | 24.747 | 0.143 | 87.797 | 47.001
100 | 25435 | 0.127 | 0.344 | 87.279 | 34.481 | 0.140 | 96.124 | 52.060
120 | 26982 | 0.084 | 0.297 | 69.658 | 31.368 | 105.547 | 54.658
140 | 28002 | 0.140 | 0.359 | 52.966 | 26.271 | 121.127 | 63.713
160 | 28699 | 0.128 | 0.391 | 58.453 | 24.904 | 96.017 | 50.828
200 | 29153 | 0.018 | 0.234 | 61.531 | 28.301 | 0.039 | 253.908 | 135.048
273 | 29168 | 0.000 | 0.031 | 3.343 | 2.545 | 0.554 | 46.766 | 37.178
The values marked with an asterisk in Table
From these results, one can observe that the smaller the number of clusters, the better the upper bounds obtained (the smaller the gaps). On the other hand, as the number of clusters increases, the computational effort for solving the subproblems is reduced.
The results show that, for
Therefore, as shown in Tables
Computation times for SJC instances,
324 | 20 | 9670 | 0.334 | 0.172 | 980 | 19.293 | 19.293 | 3008 | 0.347 | 32.943 | 32.911
30 | 11737 | 0.087 | 0.484 | 28.943 | 28.943 | 0.094 | 69.959 | 69.959
40 | 12151 | 0.005 | 0.094 | 31.066 | 31.066 | 0.138 | 77.872 | 77.612
50 | 12152 | 0.000 | 0.015 | 9.926 | 9.926 | 61.456 | 61.395
60 | 12152 | 0.000 | 0.047 | 4.575 | 4.575 | 14.376 | 14.376
80 | 12152 | 0.000 | 0.016 | 3.670 | 3.670 | 33.936 | 33.936
108 | 12152 | 0.000 | 0.031 | 0.248 | 11.343 | 11.343 | 0.001 | 897.830 | 893.405
500 | 40 | 17077 | 0.453 | 0.203 | 657 | 24.668 | 20.669 | 2625 | 0.469 | 55.001 | 51.048
50 | 18361 | 0.014 | 0.109 | 39.109 | 32.248 | 0.025 | 67.626 | 62.596
60 | 19153 | 0.035 | 0.063 | 52.639 | 35.363 | 0.112 | 85.374 | 76.578
70 | 19551 | 0.110 | 1.078 | 43.946 | 29.817 | 0.170 | 76.671 | 68.721
80 | 19703 | 0.013 | 0.156 | 35.495 | 27.927 | 0.150 | 102.056 | 95.253
100 | 19707 | 0.000 | 0.078 | 16.624 | 16.501 | 0.001 | 89.858 | 86.698
130 | 19707 | 0.000 | 0.047 | 1.986 | 1.864 | 0.001 | 26.546 | 25.809
167 | 19707 | 0.000 | 0.016 | 0.016 | 22.379 | 22.225 | 0.859 | 22.314 | 22.133
818 | 80 | 27945 | 0.070 | 0.203 | 840 | 57.835 | 27.423 | 4910 | 0.121 | 147.155 | 115.605
90 | 28519 | 0.138 | 1.141 | 114.145 | 45.536 | 0.177 | 128.574 | 99.585
100 | 28910 | 0.103 | 1.391 | 88.885 | 33.153 | 0.175 | 101.875 | 80.758
120 | 29165 | 0.002 | 1.234 | 55.710 | 31.180 | 0.117 | 141.246 | 115.434
140 | 29168 | 0.000 | 0.125 | 11.643 | 8.940 | 0.021 | 171.961 | 143.737
160 | 29168 | 0.000 | 0.062 | 9.738 | 7.598 | 0.878 | 39.343 | 36.244
200 | 29168 | 0.000 | 0.032 | 5.762 | 4.610 | 0.847 | 37.205 | 34.871
273 | 29168 | 0.000 | 0.031 | 0.207 | 24.698 | 18.505 | 2.904 | 25.282 | 23.772
Computation times for TSPLIB PCB3038 instance,
3038 | 17 | 125320 | 0.368 | 802.390 | 165579 | 843.838 | 235.541 | 291363 | 0.470 | 582.528 | 223.245
18 | 130004 | 0.517 | 10265.016 | 817.076 | 283.400 | 0.712 | 634.402 | 243.747
19 | 134262 | 0.605 | 20000.049 | 1483.237 | 598.653 | 0.793 | 576.821 | 222.087
20 | 139028 | 0.698 | 20000.156 | 1712.078 | 798.911 | 0.973 | 628.288 | 236.767
21 | 141279 | 0.853 | 20000.094 | 3117.174 | 1448.730 | 1.128 | 646.765 | 243.302
22 | 143809 | 1.196 | 20000.123 | 6656.267 | 3094.410 | 1.598 | 615.783 | 231.525
Comparing the values of
This paper presents a decomposition approach based on cluster partitioning to compute improved upper bounds on the optimal solution of maximal covering location problems. The partitioning is based on the covering graph of potential facility locations that serve the same client. The corresponding coupling constraints are identified, and some of them are relaxed in the Lagrangean way, resulting in subproblems that can be solved independently. Each subproblem represents a cluster smaller than the original problem and can be solved by exact methods in shorter computational times. Computational tests using real data and instances from the available literature were conducted and confirmed the effectiveness of the proposed approach.
An important characteristic of large-scale problems addressed by the proposed approach is the tradeoff between gap values and CPU times. Depending on the application, one may choose to sacrifice the quality of the bounds (increasing the number of clusters) in order to obtain shorter processing times. On the other hand, if quality is the issue, the processing times needed to solve instances with only a few clusters may be longer.
In this study, the number of clusters was fixed
The heuristic presented in this article can be used within a branch-and-bound exact method. As the upper bounds obtained with this heuristic are, in general, better than those obtained by the linear relaxation, one would expect many more nodes to be pruned, with a possibly significant reduction in the size of the search tree.
Advances in applied mathematics and computer science have resulted in high-performance tools for mathematical programming, allowing tough optimization problems to be solved. However, as the problem size increases, the computational time may grow excessively, making the problem intractable even for the most efficient tools. In such cases, an approach in which a large-scale problem is divided into a number of smaller-scale subproblems can be an attractive alternative.
The authors acknowledge CNPq—