In this paper we study an inverse approach to traveling salesman problem (TSP) reoptimization. Namely, we consider the addition of a new vertex to the initial TSP data and fix a simple “adaptation” algorithm: the new vertex is inserted into an edge of the optimal tour. We derive conditions describing the vertices that this algorithm can insert without loss of optimality, study the properties of the resulting stability areas, and discuss several model applications.
1. Introduction
Stability of the traveling salesman problem (TSP) under perturbations of the weight matrix has been studied since the 1970s [1, 2]. This work was later continued in a much more general setting [3–9]. Another approach to TSP stability is connected with the master tour property [10, page 435], with the key result given in [11]. Finally, there is the reoptimization approach [12, 13], which addresses the case where the weight matrix remains undisturbed while the number of cities varies (with the corresponding change of the weight matrix size). Reoptimization of NP-hard problems is usually NP-hard itself: even if an optimal solution of the initial instance is known for free, reoptimization remains NP-hard [14], and the situation does not improve if all optimal solutions are known [15].
In this work we consider the following inverse reoptimization problem: we fix an “adaptation” algorithm and study which distortions of the initial data may be “adapted” by this algorithm. Specifically, we consider the conditions under which a new city may be inserted between two consecutive cities of an optimal tour without loss of optimality. The sufficient condition is always polynomial; the necessary and sufficient condition is polynomial only if the solution of the initial instance and the solutions of certain “close” instances are provided by an oracle. These results sharpen the author’s previous work on TSP adaptive stability [16, 17] (in Russian) for “open-ended” tours, which is a special case of the general approach to the adaptive stability of discrete optimization problems [18, chapter 1] (in Russian).
2. Designations and Definitions
Let us restrict all possible TSP cities to the elements of a finite set X and introduce a cost function d: X²→R. For each initial set of cities S={y1,…,yn}⊂X we fix the set of permutations M(S)={(yγ(1),…,yγ(n)) ∣ γ is a permutation of {1,…,n} with γ(1)=1}. Henceforth we associate the elements of M(S) with circular tours on S, assuming the existence of the additional closing edge (yγ(n),yγ(1)). The cost of a circular tour α=(x1,…,xn)∈M(S) is naturally defined as
(1) D(α) ≜ d(xn,x1) + ∑_{i=1}^{n−1} d(xi,xi+1).
A tour α0∈M(S) is called optimal over M(S) if
(2)α0∈argminα∈M(S)D(α).
The insertion of a new city z∈X∖S into an existing tour α=(x1,…,xn)∈M(S) after the city xi∈S gives the “disturbed” tour
(3) Ins(z,xi,α) ≜ (x1,…,xi,z,xi+1,…,xn) ∈ M(S∪{z});
the insertion cost may be expressed as
(4) D(Ins(z,xi,α)) = D(α) + Δ(z,xi,xi+1),
where for each (z,x,y)∈X³
(5) Δ(z,x,y) ≜ d(x,z) + d(z,y) − d(x,y).
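As an illustration, the basic objects (1)–(5) are easy to express in code. The following sketch is not from the paper: the helper names and the toy cost matrix `d` are hypothetical, and cities are represented by matrix indices.

```python
def tour_cost(d, tour):
    """Cost (1) of a circular tour: edge costs plus the closing edge."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def delta(d, z, x, y):
    """Insertion cost Delta(z, x, y) from (5)."""
    return d[x][z] + d[z][y] - d[x][y]

def insert(tour, z, x):
    """Ins(z, x, tour) from (3): put the new city z right after city x."""
    i = tour.index(x)
    return tour[:i + 1] + (z,) + tour[i + 1:]

# Toy symmetric cost matrix for 4 cities (indices 0..3).
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
alpha = (0, 1, 2)            # a tour on S = {0, 1, 2}
beta = insert(alpha, 3, 1)   # insert city 3 into the edge (1, 2)

# Identity (4): D(Ins(z, x, alpha)) = D(alpha) + Delta(z, x, y).
assert tour_cost(d, beta) == tour_cost(d, alpha) + delta(d, 3, 1, 2)
```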
We say an optimal tour α0=(x1,…,xn)∈M(S) is adaptively stable to the addition of a city z∈X∖S if there exists an element xi of S such that Ins(z,xi,α0) is optimal over M(S∪{z}). In the following section we study the conditions for such stability. One can define other ways to adapt an optimal tour to the addition of a new city and thus obtain other stability conditions. The only essential requirement for the adaptation algorithm is its low complexity (in comparison with the complexity of finding α0).
Let us introduce the set of “potential pairs”: for each α=(…,q,w,…)∈M(S) and z∈X∖S,
(6) P(z,q,α) ≜ {(x,y)∈S² ∣ D(Ins(z,q,α)) ≥ D(α0) + Δ(z,x,y)}.
Evidently P(z,q,α)≠∅, because (q,w)∈P(z,q,α). Finally, we introduce the minimum cost over all tours of M(S) containing the fixed edge (x,y)∈S²:
(7)D(x,y)(S)=minα∈M(x,y)(S)D(α),whereM(x,y)(S)={(…,x,y,…)∈M(S)}.
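For small instances, the quantity D(x,y)(S) from (7) can be evaluated by brute force; the sketch below (hypothetical helper name, toy matrix `d`) fixes the first city to avoid enumerating cyclic rotations of the same tour.

```python
from itertools import permutations

def optimal_cost_with_edge(d, cities, x, y):
    """D_(x,y)(S) from (7): cheapest circular tour on `cities` among the
    tours that traverse the directed edge (x, y)."""
    first, *rest = cities  # fix the first city, as in the definition of M(S)
    best = float("inf")
    for p in permutations(rest):
        tour = (first,) + p
        n = len(tour)
        # keep only tours containing the edge (x, y) (cyclically)
        if any(tour[i] == x and tour[(i + 1) % n] == y for i in range(n)):
            best = min(best, sum(d[tour[i]][tour[(i + 1) % n]]
                                 for i in range(n)))
    return best

# Toy symmetric cost matrix for 4 cities.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
```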
3. Adaptive Stability Conditions
We start with a simple sufficient condition that allows one to check “easily” whether a new city can be inserted into a known optimal tour while preserving optimality.
Theorem 1 (sufficient).
Let α0=(…,q1,q2,…)∈M(S) be an optimal tour and z∈X∖S. Tour Ins(z,q1,α0)∈M(S∪{z}) is optimal if
(8)Δ(z,q1,q2)=min(x,y)∈P(z,q1,α0)Δ(z,x,y).
Proof.
Let us consider an arbitrary tour β′=(…,b1,z,b2,…)∈M(S∪{z}) and the corresponding tour β∈M(S) with Ins(z,b1,β)=β′. If (b1,b2)∈P(z,q1,α0), then
(9) D(β′) = D(β) + Δ(z,b1,b2) ≥ D(α0) + Δ(z,q1,q2) = D(Ins(z,q1,α0)), where the inequality combines the optimality of α0 with condition (8), and the final equality is (4);
else (b1,b2)∈S2∖P(z,q1,α0) and, thus,
(10) D(β′) = D(β) + Δ(z,b1,b2) ≥ D(α0) + Δ(z,b1,b2) > D(Ins(z,q1,α0)), where the strict inequality follows from the definition (6) of the set of potential pairs.
The proposed condition is evidently polynomial in both time and space (|P(z,q1,α0)| ≤ |S|²). In Section 4 we study whether it allows formulating algorithms that solve particular TSP instances in polynomial time by the successive addition of new vertices from the stability areas (see Algorithm 1 in Section 5).
Observation 1.
P(z,q1,α0) in (8) may be equivalently replaced by S2.
Proof.
Firstly, P(z,q1,α0)⊆S2 and thus
(11)min(x,y)∈P(z,q1,α0)Δ(z,x,y)≥min(x,y)∈S2Δ(z,x,y).
Conversely, for all (a,b)∈S2, (a) if (a,b)∈P(z,q1,α0), then by definition
(12)Δ(z,a,b)≥min(x,y)∈P(z,q1,α0)Δ(z,x,y);
(b) if (a,b)∉P(z,q1,α0), then by definition (6) D(α0)+Δ(z,a,b) > D(Ins(z,q1,α0)) = D(α0)+Δ(z,q1,q2), so
(13)Δ(z,a,b)>Δ(z,q1,q2)≥min(x,y)∈P(z,q1,α0)Δ(z,x,y),
because (q1,q2)∈P(z,q1,α0). Thus we have the equivalent, but clearer and more compact, condition
(14)Δ(z,q1,q2)=min(x,y)∈S2Δ(z,x,y).
The usage of (8) instead of (14) is reasonable only if the set P(z,q1,α0) is known for free. Otherwise the computation of P(z,q1,α0) is comparable in complexity with directly checking condition (14).
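Condition (14) is straightforward to implement. The following sketch (hypothetical names, toy matrix) returns an edge of the tour into which z may be stably inserted, or None if the sufficient condition fails:

```python
def stable_edge(d, tour, z):
    """Return an edge (q1, q2) of `tour` satisfying the sufficient
    condition (14) for the insertion of z, or None if no edge does."""
    cities = list(tour)
    delta = lambda x, y: d[x][z] + d[z][y] - d[x][y]
    global_min = min(delta(x, y) for x in cities for y in cities if x != y)
    n = len(tour)
    for i in range(n):
        q1, q2 = tour[i], tour[(i + 1) % n]
        if delta(q1, q2) == global_min:
            return (q1, q2)
    return None

# Toy symmetric cost matrix for 4 cities.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
# City 3 attains its global minimum of Delta on the edge (2, 0) of the
# (trivially optimal) 3-city tour (0, 1, 2), so it can be inserted there.
assert stable_edge(d, (0, 1, 2), 3) == (2, 0)
```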
In the following theorem we give a simple, necessary, and sufficient condition for TSP adaptive stability based on Bellman’s principle of optimality.
Theorem 2 (necessary and sufficient).
Let α0=(…,q1,q2,…)∈M(S) be an optimal tour and z∈X∖S. Tour Ins(z,q1,α0)∈M(S∪{z}) is optimal if and only if
(15)D(Ins(z,q1,α0))=min(x,y)∈P(z,q1,α0){D(x,y)(S)+Δ(z,x,y)}.
Proof.
First we show that (15) implies the optimality of Ins(z,q1,α0); this part of the proof is similar to the proof of Theorem 1. Let (*) denote the right-hand side of (15), consider an arbitrary tour β′=(…,b1,z,b2,…)∈M(S∪{z}), and introduce β∈M(S) with Ins(z,b1,β)=β′. If (b1,b2)∈P(z,q1,α0), then
(16)D(β′)=D(β)+Δ(z,b1,b2)≥D(b1,b2)(S)+Δ(z,b1,b2)≥(*)=D(Ins(z,q1,α0));
else (b1,b2)∈S2∖P(z,q1,α0) and
(17) D(β′) = D(β) + Δ(z,b1,b2) ≥ D(α0) + Δ(z,b1,b2) > D(Ins(z,q1,α0)), where the strict inequality follows from (b1,b2)∉P(z,q1,α0) and definition (6).
Conversely, suppose that Ins(z,q1,α0)∈M(S∪{z}) is optimal. Let (x0,y0) minimize (*) and let D(x0,y0)(S) be attained on some β0=(…,x0,y0,…)∈M(S). Then
(18) (*) = D(x0,y0)(S) + Δ(z,x0,y0) = D(β0) + Δ(z,x0,y0) = D(Ins(z,x0,β0)) ≥ D(Ins(z,q1,α0)).
The equality D(Ins(z,q1,α0))=(*) is achieved when (x,y)=(q1,q2), where evidently D(q1,q2)(S) is equal to D(α0) and (q1,q2)∈P(z,q1,α0).
The proposed condition requires the solution of |P(z,q1,α0)| ≤ |S|² NP-hard problems (the computation of D(x,y)(S) for each (x,y)∈P(z,q1,α0)). Such a requirement is too “expensive” for checking whether an optimal tour is adaptively stable to a single addition of z∈X∖S. But when such a test must be performed for a variety of possible additions z∈Z⊆X∖S (not successive) and |Z|≫|S|², the condition becomes less computationally expensive than the direct solution of |Z| TSPs, because the values of D(x,y)(S) need to be computed only once and can be reused for every new z∈Z. Thus we solve fewer than |S|² NP-hard problems instead of |Z|.
Observation 2.
P(z,q1,α0) in (15) may be equivalently replaced by S2.
Proof.
Inclusion P(z,q1,α0)⊆S2 leads to
(19)min(x,y)∈P(z,q1,α0){D(x,y)(S)+Δ(z,x,y)}≥min(x,y)∈S2{D(x,y)(S)+Δ(z,x,y)}.
On the other hand, let (a,b)∈S2. If (a,b)∈P(z,q1,α0), then
(20)D(a,b)(S)+Δ(z,a,b)≥min(x,y)∈P(z,q1,α0){D(x,y)(S)+Δ(z,x,y)}.
Else (a,b)∈S2∖P(z,q1,α0) and
(21) D(a,b)(S) + Δ(z,a,b) ≥ D(α0) + Δ(z,a,b) > D(Ins(z,q1,α0)) = min(x,y)∈P(z,q1,α0){D(x,y)(S)+Δ(z,x,y)}, where the strict inequality follows from (a,b)∉P(z,q1,α0) and definition (6).
Two opposite inequalities result in the equality
(22)D(Ins(z,q1,α0))=min(x,y)∈S2{D(x,y)(S)+Δ(z,x,y)}.
The usage of polynomially computed P(z,q1,α0) in (15) may help to avoid NP-hard computations of D(x,y)(S) for (x,y)∈S2∖P(z,q1,α0) in comparison with condition (22). If one wants to check the stability of α0∈M(S) for a variety of new elements z∈Z⊆X∖S, it is reasonable to construct as the first step the set
(23)P≜⋃z∈Z,q∈SP(z,q,α0)
and, as the second step, compute D(x,y)(S) only for (x,y)∈P. Nevertheless, (22) may still be used for the sake of simplicity and clarity.
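The two-step scheme can be sketched as follows: the table of fixed-edge optima is computed once (here by brute force over all of S², i.e., in the spirit of (22) rather than (15)) and then reused for every candidate z. Names and the toy matrix are hypothetical.

```python
from itertools import permutations

def fixed_edge_costs(d, cities):
    """Precompute D_(x,y)(S) for every directed edge by enumerating tours."""
    table = {}
    first, *rest = cities
    for p in permutations(rest):
        tour = (first,) + p
        n = len(tour)
        cost = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
        for i in range(n):
            e = (tour[i], tour[(i + 1) % n])
            table[e] = min(table.get(e, float("inf")), cost)
    return table

def insertion_is_optimal(d, table, tour, z, q1):
    """Criterion (22): Ins(z, q1, tour) is optimal over M(S + {z}) iff its
    cost equals the minimum of D_(x,y)(S) + Delta(z, x, y) over (x, y)."""
    n = len(tour)
    q2 = tour[(tour.index(q1) + 1) % n]
    delta = lambda x, y: d[x][z] + d[z][y] - d[x][y]
    lhs = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n)) + delta(q1, q2)
    return lhs == min(c + delta(x, y) for (x, y), c in table.items())

# Toy symmetric cost matrix for 4 cities.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
table = fixed_edge_costs(d, [0, 1, 2])   # computed once for S = {0, 1, 2}
assert insertion_is_optimal(d, table, (0, 1, 2), 3, 2)      # edge (2, 0): stable
assert not insertion_is_optimal(d, table, (0, 1, 2), 3, 0)  # edge (0, 1): not
```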
The described reiterated application of (15) and (22) for z∈Z is actually the construction of the stability areas (see Section 4). It is probably one of the most appropriate ways to use Theorem 2, where the computational time savings become apparent. The time complexity of the exact TSP solution (2) is O(n²2ⁿ) [20], where n=|S|. The time complexity of applying condition (22) to each z∈X∖S may be expressed as the sum of O(n² · n²2ⁿ) operations (to prepare D(x,y)(S) for each (x,y)∈S²) and O(n²|X∖S|) operations (to check (22) for each z∈X∖S):
(24) O(n⁴2ⁿ + n²|X|),
whereas the direct solution of TSP (2) over M(S∪{z}) for each inserted z∈X∖S would take
(25) O(n²2ⁿ|X∖S|).
It is easy to see that if |X|≫n2, the application of Theorem 2 is reasonable, because the heaviest multipliers 2n and |X| are “separated.”
We finish the section with a brief mention of simple necessary conditions for TSP adaptive stability, which are based on Bellman’s principle. Namely, if some “subtour” (consisting of successive cities of an optimal tour) is not adaptively stable to the addition of a new city, then the complete tour is not adaptively stable to this addition either [18] (in Russian). A “subtour” may be “short” (3–5 edges), to obtain computationally effective conditions, or “long” (comparable to the complete tour), to get closer to the necessary and sufficient conditions.
The applicability of the necessary conditions in combination with various local search techniques deserves a particular investigation that mostly goes beyond the paper, except for one example with the cheapest insertion heuristic [21] (see Algorithm 3 in Section 5).
4. Adaptive Stability Areas
The adaptive stability area (ASA) of a tour α0 is defined as the maximal (with respect to inclusion) subset of X∖S each element of which may be inserted into at least one edge of α0 with an optimal tour as a result. The necessary and sufficient conditions (15) and (22) allow constructing the ASA efficiently for a large finite X. Firstly, we formally introduce the edge ASA (eASA) of each single edge of a tour. Let E(α) denote the set of all edges of a tour α; for all α∈M(S) and for all (x,y)∈E(α),
(26) A((x,y),α) ≜ {z∈X∖S ∣ Ins(z,x,α) is optimal over M(S∪{z})}.
The sufficient conditions (8) and (14) allow constructing the similar set, the edge sufficient ASA (eSASA):
(27) As((x,y),α) ≜ {z∈X∖S ∣ Δ(z,x,y) = min(a,b)∈S² Δ(z,a,b)}.
Examples of eASAs and eSASAs are given in Figure 2 for uniform grids over finite regions of the Euclidean plane. Finally, we introduce the stability areas associated with the necessary conditions. Let us define some auxiliary notation related to “noncyclic subtours” of a cyclic tour. Let α=(x1,…,xn)∈M(S) and, from now on, let indices outside the set 1,n¯ be understood cyclically, xi := x(i mod n), so that x0 = xn. The insertion of a new city into an “r-subtour” of a cyclic tour α is defined as (see Figure 1)
(28) Ins¯(z,xi,r,α) ≜ (xi−r,…,xi,z,xi+1,…,xi+r+1).
Let C(z,xi,r,α) be the set of unique elements of the sequence Ins¯(z,xi,r,α). The set of all possible noncyclic tours starting from xi−r, finishing at xi+r+1, and passing through all the cities of C(z,xi,r,α) is denoted by M(xi,xi+1)r(C(z,xi,r,α)). By analogy with cyclic tours, the cost of a noncyclic tour is the sum of the costs of its edges; an optimal noncyclic tour is a tour of minimal cost.
Figure 1: Example of insertion of z into a 2-subtour: Ins¯(z,2,2,(1,…,8)) = (8,1,2,z,3,4,5).
Figure 2: Examples of adaptive stability areas for integer Euclidean TSP; X=1,400¯×1,400¯; (a) SASA and (b) ASA of the optimal tour passing through 50 uniformly distributed points; (c) ASA for the instance a280 (drilling problem) from TSPLIB [19].
The edge necessary ASA (eNASA) for α∈M(S),(x,y)∈E(α),r∈N is defined as
(29) An((x,y),r,α) ≜ {z∈X∖S ∣ Ins¯(z,x,r,α) is optimal over M(x,y)r(C(z,x,r,α))}.
The unions of eASAs, eSASAs, or eNASAs over all (x,y)∈E(α) give the correspondent ASA, SASA, or NASA of α:
(30)A(α)≜⋃(x,y)∈E(α)A((x,y),α);As(α)≜⋃(x,y)∈E(α)As((x,y),α);An(r,α)≜⋃(x,y)∈E(α)An((x,y),r,α).
In consequence of the definitions and Bellman’s principle of optimality for all α∈M(S), for all r∈1,[|S|/2]¯(31)As((x,y),α)⊆A((x,y),α)⊆An((x,y),r,α)
and as a result
(32)As(α)⊆A(α)⊆An(r,α).
In all the examples of stability areas within the paper: (1) optimal TSP solutions are obtained by the Concorde TSP solver [22] with the built-in QSopt LP solver [23]; (2) the distance function is integer Euclidean: for all a=(x1,y1)∈X and b=(x2,y2)∈X, d(a,b) = [√((x1−x2)² + (y1−y2)²)], where [·] denotes rounding to the nearest integer; (3) the eSASAs and eASAs within the corresponding SASAs and ASAs are marked off by different tones whose sole purpose is to separate the neighboring edge areas.
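The integer Euclidean distance used in the experiments can be written, under the assumption that [·] is rounding to the nearest integer (the TSPLIB EUC_2D convention), as:

```python
import math

def d_int_euclid(a, b):
    """Integer Euclidean distance: the plane distance rounded to the
    nearest integer. (Python's round uses banker's rounding on exact
    halves, which may differ from TSPLIB's nint in rare tie cases.)"""
    return round(math.hypot(a[0] - b[0], a[1] - b[1]))

assert d_int_euclid((0, 0), (3, 4)) == 5
assert d_int_euclid((0, 0), (1, 1)) == 1   # sqrt(2) ≈ 1.41 rounds to 1
```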
For the continuous variant X=[0,T]²⊂R², the eASA, eSASA, and eNASA of any edge of any finite tour are closed (as continuous preimages of closed sets). We conjecture that the areas are also path-connected and simply connected.
Figure 3 shows the dependence of the relative sizes of ASA and SASA on the number of cities for a finite uniform grid over the Euclidean plane. It is interesting to note that while the relative area of SASA in Figure 3 seems to converge asymptotically to zero, the relative area of ASA seems to approach approximately 0.6 as |S|→∞. The pronounced reduction of SASA’s size with growing |S| is demonstrated by the examples in Figure 4. One can see that whereas the total relative area of the colored regions in the first picture of the first row (|S|=25) is approximately 20%, in the last two pictures (|S|=250 and 300) of the first row the relative areas of the colored regions do not exceed 3%.
Figure 3: Dependence of the relative sizes of ASA (|A(S)|/|X|) and SASA (|As(S)|/|X|) on the number of cities in an optimal tour. For each value of |S| from the set {5,…,21,30,40,50,60,70,80,90,100,150,200,250,300}, 100 random (uniform) placements of |S| cities in the set X=1,100¯×1,100¯ were performed. The functions on the graph are linear interpolations of the corresponding mean values over each 100 experiments.
Figure 4: Examples of ASA and SASA for TSP instances with different numbers (the top line) of uniformly distributed cities, X=1,100¯×1,100¯.
5. Algorithms and Model Applications
Theorem 1 gives a natural instrument for the polynomial solution of some TSP instances. Suppose we need to solve problem (2), where |S|≥3.
Algorithm 1 (stable growth).
Consider the following.
Select 3 arbitrary points {a,b,c}⊆S and construct the trivial tour β0:=(a,b,c) (it is evidently optimal over M({a,b,c})). Let C:={a,b,c}.
While S∖C≠⌀,
if ∃z*∈S∖C and ∃(x*,y*)∈E(β0) such that z*∈As((x*,y*),β0), then
(33)β0:=Ins(z*,x*,β0);C:=C∪{z*};
else exit the algorithm and announce that optimal tour cannot be constructed.
Return β0 as the desired optimal tour.
Algorithm 1 is polynomial in both time and space. Unfortunately, the more cities the intermediate tour β0 includes, the smaller As(β0) becomes (see Figure 3), and thus the probability that Algorithm 1 will be interrupted at Step 2(b) increases. Step 2(b) may be replaced by the application of some heuristic, but in this case the obtained solution would not necessarily be optimal.
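A minimal sketch of Algorithm 1, using sufficient condition (14) as the stability test; all names and the toy matrix are illustrative, not from the paper:

```python
def stable_growth(d, cities):
    """Algorithm 1: grow a provably optimal tour by stable insertions.
    Returns an optimal tour over `cities`, or None if growth is
    interrupted because no stable insertion exists."""
    a, b, c, *rest = cities
    tour = (a, b, c)            # any tour on 3 cities is optimal
    remaining = set(rest)
    while remaining:
        step = None
        for z in remaining:
            in_tour = list(tour)
            delta = lambda x, y: d[x][z] + d[z][y] - d[x][y]
            global_min = min(delta(x, y)
                             for x in in_tour for y in in_tour if x != y)
            for i in range(len(tour)):
                q1, q2 = tour[i], tour[(i + 1) % len(tour)]
                if delta(q1, q2) == global_min:   # condition (14) holds
                    step = (z, i)
                    break
            if step:
                break
        if step is None:
            return None         # optimal tour cannot be grown this way
        z, i = step
        tour = tour[:i + 1] + (z,) + tour[i + 1:]
        remaining.discard(z)
    return tour

# Toy symmetric cost matrix for 4 cities.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
assert stable_growth(d, [0, 1, 2, 3]) == (0, 1, 2, 3)
```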
Conditions (15) and (22) are not polynomial, and thus an analog of Algorithm 1 based on Theorem 2 does not make practical sense. Namely, every addition of a new element z to the initial data S not only requires the solution of NP-hard problems for finding D(z,y)(S) and D(x,z)(S), but may also change the “old” values of D(x,y)(S), so they would have to be recomputed.
In spite of the fact that Theorem 2 does not contribute to the solution process, it can be used for postoptimal analysis. Assume we have several optimal TSP solutions and the objective function does not allow selecting “the best” one. Each TSP solution may reflect some underlying laws behind the initial data (e.g., a chain of events in history or evolution). From this substantive point of view, a “good” solution should not only minimize the objective function but also stay structurally stable even under the addition of new data from the same data source. In the simplest case, the solutions having the largest ASA among all optimal solutions may be considered preferable. Here we come to the problem of constructing stability areas, and, as noted before, condition (15) allows avoiding the solution of a new TSP for each new element tested for stable insertion. In Algorithm 2 we consider a more general case, where the elements tested for insertion are weighted by some function (it can be the probability of appearance in the near future or the “importance” of the element in some sense).
Algorithm 2 (selection among equally optimal TSP solutions).
Consider the following.
Suppose we have a set of optimal tours
(34)Ω={α0∈M(S)∣α0∈argminα∈M(S)D(α)},|Ω|>1
and a weight function p:X→R.
For each α0∈Ω use (15) to construct ASA A(α0).
Return the set of preferable tours
(35)Θ:={θ∈Ω∣θ∈argmaxα0∈Ω∑z∈A(α0)p(z)}.
Let us consider a simple model example of the application of Algorithm 2 (Figure 5). Let, for clarity, some evolution process be described in terms of two dimensions (the Euclidean plane), for example, the weight and the length of an animal. Suppose we know that point 1 is the ancient ancestor and that point 4 is the modern descendant. Points 2 and 3 should be ordered (this is a “fixed start, free end” TSP, which can be polynomially reduced to a cyclic TSP). The tours (1,2,3,4) and (1,3,2,4) have equal cost (length); thus we cannot order 2 and 3 according to the TSP solution alone. Let us suppose the existence of two complementary (possibly even yet undiscovered) organisms corresponding to points B1 and B2. If we know that the existence of organism B2 is more “probable” than the existence of B1 (p(B2)>p(B1)), then we may prefer (1,3,2,4) as the tour that preserves its structure under the probable insertion.
Figure 5: (a) and (b) A simple example of the application of Algorithm 2; the inequality p(B2)>p(B1) gives impetus to prefer the tour (1,3,2,4) to (1,2,3,4) (in spite of their equal cost), because the former remains optimal in case of the possible insertion of the “important” B2; (c) and (d) an example of a nontrivial function p; the brightness of the pixels on the canvas X=1,400¯×1,400¯ corresponds to the values of the function p (the darker the pixel (x,y), the larger p(x,y)); here the tour (1,3,2,4) is still preferable, because area Y exceeds area X in “summary darkness.”
In case of relatively large |X∖S|, the computation time savings from applying (15) in ASA construction may be significant. Suppose we have 10 feature dimensions to describe each of 50 organisms (|S|=50). Let the values of each coordinate vary from 0 to 100; then X=(1,100¯)¹⁰. The purpose is to reconstruct the most probable evolution order over S, assuming, for example, that the origin organism is known. If there is more than one optimal solution, it is possible to compare them by Algorithm 2, applying some known weight function p. This method implies the multiple construction of ASAs. According to (24) and (25), the direct construction of an ASA would take ≈ n²2ⁿ|X| = 50² · 2⁵⁰ · 100¹⁰ ≈ 3·10³⁸ operations, as optimal TSP solutions over M(S∪{z}) for each new z∈X are needed. The application of Theorem 2 reduces this to ≈ n⁴2ⁿ + n²|X| = 50⁴ · 2⁵⁰ + 50² · 100¹⁰ ≈ 3·10²³. This example allows recommending condition (15) for postoptimal analysis of TSP solutions in multidimensional feature spaces.
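The two estimates above are simple to reproduce:

```python
# Operation-count estimates from the text: n = 50 organisms,
# 10 feature dimensions, |X| = 100**10 candidate points.
n, X_size = 50, 100 ** 10

direct = n ** 2 * 2 ** n * X_size             # fresh TSP for every z, cf. (25)
via_thm2 = n ** 4 * 2 ** n + n ** 2 * X_size  # precompute once, cf. (24)

print(f"direct:    {direct:.1e}")    # 2.8e+38
print(f"theorem 2: {via_thm2:.1e}")  # 2.6e+23
```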
The last application of the adaptive stability we intend to concern in the paper is devoted to the usage of adaptive stability within local search. Such a combination seems to deserve a particular investigation which mostly goes beyond this paper. Here we give only a basic example of the application of NASAs to the decision-making process in the well-known cheapest insertion (CI) heuristic.
NASAs seem to be the most convenient choice to combine with fast heuristics because, as shown before, the ASAs become too hard to compute as the problem size grows, while the SASAs are too “narrow” (see Figures 3 and 4).
Algorithm 3 (cheapest insertion with NASA).
Consider the following.
The algorithm has two parameters: r—the size of subtour, and m—the number of the first best insertions to be tested against NASA.
Use CI to construct an initial cycle β=(x1,…,xk), where k=2r+2. Set C:={x1,…,xk} (evidently β∈M(C)).
Repeat while S∖C≠⌀:
for each z∈S∖C construct w(z)≜min(x,y)∈E(β)Δ(z,x,y) and fix one of the edges (x(z),y(z))∈E(β), where the minimum is attained;
order the cities z∈S∖C by ascending w(z); select the m top-ranked cities (z1,…,zm), so that Ins(z1,x(z1),β) is the cheapest insertion;
search for the smallest i∈1,m¯ for which zi∈An((x(zi),y(zi)),r,β);
if such i exists, then C:=C∪{zi}; β:=Ins(zi,x(zi),β);
else do the cheapest insertion C:=C∪{z1}; β:=Ins(z1,x(z1),β).
Return β as the desired tour.
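For reference, the plain CI baseline that Algorithm 3 modifies can be sketched as follows; the eNASA screening of step (c) is omitted here because it requires an exact solver for the r-subtours. Names and the toy matrix are hypothetical.

```python
def cheapest_insertion(d, cities):
    """Plain cheapest insertion: repeatedly insert the city/edge pair
    with the minimal insertion cost Delta(z, x, y)."""
    tour = list(cities[:3])
    remaining = set(cities[3:])
    while remaining:
        best = None   # (cost, city, edge index)
        n = len(tour)
        for z in remaining:
            for i in range(n):
                x, y = tour[i], tour[(i + 1) % n]
                cost = d[x][z] + d[z][y] - d[x][y]
                if best is None or cost < best[0]:
                    best = (cost, z, i)
        _, z, i = best
        tour.insert(i + 1, z)
        remaining.remove(z)
    return tuple(tour)

# Toy symmetric cost matrix for 4 cities.
d = [[0, 2, 9, 4],
     [2, 0, 6, 3],
     [9, 6, 0, 8],
     [4, 3, 8, 0]]
assert cheapest_insertion(d, [0, 1, 2, 3]) == (0, 1, 2, 3)
```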
The results of the empirical experiments comparing Algorithm 3 with simple CI are given in Table 1. One can see that the most advantageous values of r lie in the range 3,7¯. A relatively large subtour size (r>8) leads to the situation where Step 3(c)(i) of Algorithm 3 fires rarely in comparison with Step 3(c)(ii); thus Algorithm 3 tends to work exactly as CI.
Table 1: Comparison of the tour costs obtained by CI and by Algorithm 3 for different TSPLIB [19] instances; DOpt is the cost of the optimal solution; DCI is the cost of the CI tour; the remaining columns present the values of D(β)/DCI (in percent), where β is obtained by Algorithm 3 with the corresponding r and m ≡ 10.
Name      DOpt  DCI   r=2    r=3    r=4    r=5    r=6    r=7   r=8
berlin52  7542  9241  102.9  108.7  105.9  97.5   102.3  99.7  100.7
ch150     6528  7729  100.7  95.6   96.7   100.6  104.4  99.7  100
a280      2579  3052  101.1  94.5   102.3  96.2   96.3   98    101.5
rat575    6779  7693  103    101.9  99.5   99.1   98.5   99    100.3
6. Conclusion
Similar definitions, conditions, and adaptive stability areas may be constructed for the deletion or substitution of a city in a tour. Examples of such conditions are given in [16, 18] (in Russian). However, in practice, addition of a new city remains the most interesting case of the distortion.
The described approach to the stability of the TSP may be applied to an arbitrary combinatorial optimization problem in a very general way: we fix an “adaptive algorithm” A that assigns to each solution of the original problem an “adapted set” of solutions of the corresponding disturbed problem. If the “adapted set” constructed for an optimal solution of the original problem contains an optimal solution of the disturbed problem, then the original optimal solution is considered “adaptively stable” in the sense of the fixed “adaptive algorithm” (A-stable) [24] (in Russian).
Conflict of Interests
The author declares that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The work is supported by the Russian Foundation for Basic Research (projects 14-08-00419, 13-01-96022, 13-08-00643, 13-01-90414, and 13-04-00847).
References
[1] Leont'ev V. K. Stability of the travelling salesman problem.
[2] Libura M. Sensitivity analysis for minimum Hamiltonian path and traveling salesman problems.
[3] Gordeev È. N., Leont'ev V. K. A general approach to the study of the stability of solutions in discrete optimization problems.
[4] Gurevskiĭ E. E., Emelichev V. A. On five types of stability of a lexicographic version of the combinatorial bottleneck problem.
[5] Libura M., Nikulin Y. Stability and accuracy functions in multicriteria linear combinatorial optimization problems.
[6] Devyaterikova M. V., Kolokolov A. A., Kosarev N. A. The analysis of the stability of some integer programming algorithms with respect to the objective function.
[7] Devyaterikova M. V., Kolokolov A. A. On the stability of some integer programming algorithms.
[8] Emelichev V. A., Platonov A. A. Measure of quasistability of a vector integer linear programming problem with generalized principle of optimality in the Helder metric.
[9] Kozeratskaya L. N., Lebedeva T. T., Sergienko I. V. Stability, parametric, and postoptimality analysis of discrete optimization problems.
[10] Papadimitriou C. M.
[11] Deĭneko V. G., Rudolf R., Woeginger G. J. Sometimes travelling is easy: the master tour problem.
[12] Archetti C., Bertazzi L., Speranza M. G. Reoptimizing the traveling salesman problem.
[13] Zych A.
[14] Bockenhauer H. J., Hromkovic J., Momke T., Widmayer P.
[15] Bockenhauer H. J., Hromkovic J., Sprock A. Knowing all optimal solutions does not help for TSP reoptimization.
[16] Ivanko E. Sufficient conditions for TSP stability.
[17] Ivanko E. Criterion for TSP stability in case of a vertex addition. Herald of Udmurt University.
[18] Ivanko E.
[19] TSPLIB, 2014, https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/
[20] Held M., Karp R. M. A dynamic programming approach to sequencing problems.
[21] Rosenkrantz D. J., Stearns R. E., Lewis P. M. Approximate algorithms for the traveling salesperson problem. Proceedings of the 15th Annual Symposium on Switching and Automata Theory, 1974, pp. 33–42. MR0424258.
[22] Concorde TSP solver, 2014, http://www.math.uwaterloo.ca/tsp/concorde.html
[23] QSopt linear programming solver, 2014, http://www.math.uwaterloo.ca/~bico/qsopt/
[24] Ivanko E. Adaptive stability in combinatorial optimization problems.