This paper solves the dynamic traveling salesman problem (DTSP) using a dynamic Gaussian Process Regression (DGPR) method. The problem of a varying-correlation tour is alleviated by a nonstationary covariance function interleaved with DGPR to generate a predictive distribution for the DTSP tour. This approach is combined with the Nearest Neighbor (NN) method and iterated local search to track dynamic optima. Experimental results were obtained on DTSP instances, with comparisons against Genetic Algorithm and Simulated Annealing. The proposed approach finds good traveling salesman problem (TSP) tours in less computational time under nonstationary conditions.
1. Introduction
Most research in optimization has focused on stationary problems, leaving a gap in solutions for problems whose landscape is intrinsically dynamic. In many real-world optimization problems a wide range of uncertainties have to be taken into account [1]. These uncertainties have engendered a recent surge of research in dynamic optimization. Optimization in stochastic dynamic environments still calls for new solutions to problems whose nature is intrinsically mutable. Several concepts and techniques for addressing dynamic optimization problems have been proposed in the literature. Branke et al. [2] classify them into several categories: those that introduce diversity, those that sustain diversity over the course of iterations, techniques that store solutions for later retrieval, and those that use multiple populations. The growing significance of the DTSP in stochastic dynamic landscapes has, over the past two decades, attracted a range of computational methods suited to tracking floating optima (Figure 1). An in-depth exposition is available in [3, 4]. The traveling salesman problem (TSP) [5], one of the most thoroughly studied NP-hard problems in combinatorial optimization, remains a central research benchmark, most notably in computer science. It also intersects with a wide range of research areas; for example, it is widely studied and applied by mathematicians and operations researchers. TSP's prominence is ascribed to its flexibility and amenability to a broad range of problems. Gaussian process regression is regarded as a strong model on account of its capacity to interpolate observations, its probabilistic nature, versatility, and practical and theoretical simplicity.
This research presents a dynamic Gaussian process regression (DGPR) with a nonstationary covariance function to predict the best tour in a landscape that is subject to change. The work proceeds from the premise that optima are innately fluid: their size, nature, and position are potentially volatile over their lifespan. This shifting landscape calls for fine-grained research to track moving and evolving optima and to provide a framework for solving problems that are intrinsically dynamic. We combine DGPR with the nearest neighbor (NN) algorithm and iterated local search to refine the solution. The paper is arranged in four sections. Section 1 is the introduction; Section 2 reviews the methods that form the mainspring of this work, including the Gaussian process, TSP, and DTSP. We elucidate DGPR for solving the TSP in Section 3. Section 4 discusses the results obtained and draws conclusions.
Figure 1: Nonstationary optima [6].
2. The Traveling Salesman Problem (TSP)
The traveling salesman problem was first considered by Menger in 1932 [7]. Menger gives interesting ways of solving the TSP, laying out the first approaches considered during the evolution of TSP solutions. An exposition of TSP history is available in [8–10].
Basic Definitions and Notations. Both the symmetric and the asymmetric variants are important threads in the fabric of the TSP. We factor them into this work through the following expressions.
A salesman traverses a set of cities, culminating in a tour. The cost of a tour is the total distance between consecutive cities, and the objective is to minimize the path length:
(1) f(\pi) = \sum_{i=1}^{n-1} d_{\pi(i),\pi(i+1)} + d_{\pi(n),\pi(1)}.
The distances between the n cities are stored in a distance matrix D. For brevity, the problem can also be cast as an optimization problem in which we minimize the tour length (Figure 5):
(2) \sum_{i=1}^{n} d_{i,\pi(i)}.
The distance matrix of the TSP has features that come in handy in defining a set of TSP classes [11]. If each city is a point (x_i, y_i) in the plane, then, drawing on the Euclidean distance [11], the matrix C of pairwise distances is
(3) c_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}.
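Equation (3) can be sketched in a few lines of NumPy. This is an illustrative snippet, not from the paper (whose experiments use Matlab), and the function name is ours:

```python
import numpy as np

def distance_matrix(coords):
    """Pairwise Euclidean distances c_ij between city coordinates, as in Eq. (3)."""
    coords = np.asarray(coords, dtype=float)
    diff = coords[:, None, :] - coords[None, :, :]   # (n, n, 2) coordinate differences
    return np.sqrt((diff ** 2).sum(axis=-1))

cities = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
C = distance_matrix(cities)
# C[1, 2] is the distance between city 1 and city 2: sqrt(3^2 + 4^2) = 5.0
```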
Two important variants of the TSP come to the fore in this paper: the symmetric traveling salesman problem (STSP) and the asymmetric traveling salesman problem (ATSP), briefly outlined as follows.
In the STSP, as its name implies, the distance between two points is the same in both directions, while in the ATSP the distances in the two directions may differ. Dissecting the ATSP gives us a handle for working out solutions.
Let the ATSP be expressed subject to the distance matrix. As in all combinatorial optimization, an optimal value is sought; here we minimize the following expression:
(4) w_{\pi(n),\pi(1)} + \sum_{i=1}^{n-1} w_{\pi(i),\pi(i+1)}.
Reference [12] formulates the ATSP as an integer program with n^2 - n zero-one variables x_{ij}, minimizing
(5) y = \sum_{i=1}^{n} \sum_{j=1}^{n} w_{ij} x_{ij}
such that
(6) \sum_{i=1}^{n} x_{ij} = 1, \quad j \in [n],
\sum_{j=1}^{n} x_{ij} = 1, \quad i \in [n],
\sum_{i \in S} \sum_{j \in S} x_{ij} \le |S| - 1, \quad \forall S \subset [n], \; |S| < n,
x_{ij} \in \{0, 1\}, \quad i \ne j \in [n].
Different rules are attached to the ATSP, inter alia, to ensure a tour visits each vertex exactly once. The subtour elimination constraints likewise rule out disconnected subtours.
In the symmetric setting the problem is postulated analogously. For brevity, we present the formulation tersely:
(7) y = \sum_{1 \le i < j \le n} w_{ij} x_{ij}
such that
(8) \sum_{i=1}^{n} x_{ij} = 2, \quad j \in [n],
\sum_{i \in S} \sum_{j \notin S} x_{ij} \ge 2, \quad \forall \; 3 \le |S| \le \frac{n}{2},
0 \le x_{ij} \le 1, \quad x_{ij} \in \mathbb{Z}, \quad i \ne j \in [n].
The TSP is equally amenable to treatment as a Hamiltonian cycle problem [11], so we can use graphs to drive home a different solution approach. We define a graph G = (V, E) with a weight w_i on each edge e_i \in E, where V and E denote the vertices and edges, respectively. The problem can then be seen through the prism of a graph cycle problem.
It is also plausible to optimize the TSP by adopting both integer programming and linear programming approaches, pieced together in [13]:
(9) \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} x_{ij}, \quad \sum_{i=1}^{n} x_{ij} = 1, \quad \sum_{j=1}^{n} x_{ij} = 1.
We can also view it with linear programming, for example,
(10) \sum_{i=1}^{m} w_i x_i = w^T x.
Astounding ideas have sprouted, providing profound approaches to solving the TSP; in one such approach a few parallel edges are interchanged. Using the Hamiltonian graph cycle [11], the following equality holds for any tour H:
(11) \forall i, j: \; d_{ij} = d'_{ij}, \quad \sum_{(i,j) \in H} d_{ij} = \alpha \sum_{(i,j) \in H} d'_{ij} + \beta
subject to \alpha > 0, \beta \in \mathbb{R}. The common denominator of these methods is to solve city instances in the shortest time possible. A slew of approaches have been assembled in optimization and other areas of scientific study. The last approach in this paper is to transpose the asymmetric problem into a symmetric one. The early work of [14] explicates the concept: a dummy city is affixed to each city, and the distances between dummies and bona fide cities are chosen so that the resulting matrix is symmetric. The problem is then solved symmetrically, thereby assuaging the complexities of the asymmetric formulation:
(12) \begin{bmatrix} 0 & d_{12} & d_{13} \\ d_{21} & 0 & d_{23} \\ d_{31} & d_{32} & 0 \end{bmatrix} \longleftrightarrow \begin{bmatrix} 0 & \infty & \infty & -\infty & d_{21} & d_{31} \\ \infty & 0 & \infty & d_{12} & -\infty & d_{32} \\ \infty & \infty & 0 & d_{13} & d_{23} & -\infty \\ -\infty & d_{12} & d_{13} & 0 & \infty & \infty \\ d_{21} & -\infty & d_{23} & \infty & 0 & \infty \\ d_{31} & d_{32} & -\infty & \infty & \infty & 0 \end{bmatrix}.
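The dummy-city transformation of Eq. (12) can be sketched as follows. This is our own illustrative version, not the paper's implementation: the \pm\infty entries are replaced by a large finite constant `big` so the resulting matrix is usable by ordinary solvers.

```python
import numpy as np

def atsp_to_stsp(d, big=1e6):
    """Embed an n-city asymmetric distance matrix into a 2n-city symmetric one
    (the transformation of [14], Eq. (12)); dummy city i+n shadows real city i.
    -inf / +inf entries are approximated by -big / +big."""
    n = d.shape[0]
    s = np.full((2 * n, 2 * n), big)
    np.fill_diagonal(s, 0.0)
    # distances between real city i and dummy city j+n mirror d[j, i]
    s[:n, n:] = d.T
    s[n:, :n] = d
    # tie each city to its own dummy with a very negative edge
    idx = np.arange(n)
    s[idx, idx + n] = -big
    s[idx + n, idx] = -big
    return s
```

Solving the symmetric 2n-city instance and dropping the dummy cities recovers an asymmetric tour.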
2.1. Dynamic TSP
Different classifications of dynamic problems have been conscientiously expatiated in [15]. A wide array of dynamic stochastic optimization ontology ranges from a moving morphology to drifting landscapes. The dynamic optima exist owing to moving alleles in the natural realm. Nature remains the fount of artificial intelligence. Optimization mimics the whole enchilada including the intrinsic floating nature of alleles, which provides fascinating insights into solving dynamic problems. Dynamic encoding problems were proposed by [16].
The DTSP was initially introduced in 1988 [17, 18]. In the DTSP, a salesman starts his trip from a city, passes through each city exactly once, and returns to his starting city; the salesman is obliged to reach every city in the itinerary. In the DTSP, cities can be deleted or added [19] on account of varied conditions. The goal is to find the shortest route for this round trip.
Consider n cities; we want the shortest path that visits each city exactly once. The problem has been modeled through a raft of prisms, for instance as a graph (N, E) whose nodes are cities and whose edges denote the routes between them. For elucidation, the Euclidean distance between cities i and j is calculated as follows [19]:
(13) D_{i,j} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}.
2.1.1. Objective Function
The predictive function for solving the dynamic TSP is defined as follows.
Given a set of different costs (P_1, P_2, \dots, P_{n(t)}), the distance matrix is contingent upon time. Because routes change in the dynamic setting, time is pivotal, so cost is expressed as a function of time as well as distance. The distance matrix has been lucidly defined in the antecedent sections; suppose D = d_{ij}(t) with i, j = 1, 2, \dots, n(t) and d_{ij}(t) = d_{ji}(t). Our interest is in finding the least total distance. Both time t and cost d play significant roles in the quality of the solution. The DTSP objective is therefore minimized using the following expression:
(14) d(T(t)) = \sum_{i=1}^{n(t)} d_{T_i, T_{i+1}}(t).
From Figures 2, 3, and 4, the initial DTSP route is constructed from the visiting requests carried by the traveling salesman, {A, B, C, D, E} [20]. As the traveling salesman sets forth, new requests {X, Y} arrive, compelling him to change the itinerary to factor in the new layover demands, giving {A, B, C, D, X, E, Y}.
Figure 2: Initial request {A, B, C, D, E}.
Figure 3: New requests for consideration.
Figure 4: Previous route changed to meet the new requests given to the traveling salesman.
Figure 5: Minimum path generated by DGPR.
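The itinerary update illustrated in Figures 2–4 can be realized with a cheapest-insertion rule: each new request is spliced into the position that lengthens the current tour the least. The paper does not prescribe a particular insertion rule, so this is our own minimal sketch; `coords` is an assumed mapping from city labels to coordinates.

```python
import math

def cheapest_insertion(tour, new_city, coords):
    """Insert new_city into the position of the closed tour that adds the least distance."""
    d = lambda a, b: math.dist(coords[a], coords[b])
    best_pos, best_cost = None, float("inf")
    for i in range(len(tour)):
        a, b = tour[i], tour[(i + 1) % len(tour)]
        # extra distance incurred by detouring a -> new_city -> b
        extra = d(a, new_city) + d(new_city, b) - d(a, b)
        if extra < best_cost:
            best_pos, best_cost = i + 1, extra
    return tour[:best_pos] + [new_city] + tour[best_pos:]
```

Each incoming request is inserted in turn, after which a local search (Section 4) can further refine the modified tour.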
2.2. Gaussian Process Regression
In machine learning, the primacy of Gaussian process regression cannot be overstated; it has largely superseded linear and locally weighted regression for solving regression problems. The method originated in gold mining, where Krige, after whom kriging is named [21], postulated that the occurrence of gold could be modeled a posteriori as a function of space. With Krige's interpolation, mineral concentrations at different points can be predicted.
A Gaussian process is a collection of random variables, parameterized by a mean function m(x) and a covariance function k(x, x'). The covariance function determines the similarity of different variables. In this paper, we expand the ambit of study to nonstationary covariance:
(15) p(f(x), f(x')) = N(\mu, \Sigma).
In the equation, \mu = (\mu(x), \mu(x'))^T and \Sigma = \begin{pmatrix} K(x,x) & K(x,x') \\ K(x',x) & K(x',x') \end{pmatrix}; here \mu is n \times 1 and \Sigma is n \times n, as presented in (15).
GPR (Figure 6) has been extensively studied across the expanse of prediction, resulting in different expressions that corroborate the method's preference. In this study we have a training set P = \{(x_i, y_i)\}_{i=1}^{m}. The GPR model [22] then becomes
(16) y_i = h(x_i) + \varepsilon_i, \quad i = 1, \dots, m.
Figure 6: DGPR maintains superiority when juxtaposed with GPR and local search.
The probability density describes the likelihood of a variable assuming a certain value. Given a set of observations governed by the weight vector w, the likelihood factorizes as
(17) p(y \mid X, w) = \prod_{i=1}^{n} p(y_i \mid x_i, w) \sim N(X^T w, \sigma_n^2 I).
A Gaussian process is analogous to a Bayesian linear model, with a slight difference [23]. By Bayes' rule [23], the posterior of the Bayesian linear model is Gaussian, with covariance matrix A^{-1} and mean \bar{w}:
(18) p(w \mid X, y) \sim N(\bar{w} = \sigma_n^{-2} A^{-1} X y, \; A^{-1}),
where
(19) A = \sigma_n^{-2} X X^T + \Sigma_p^{-1}.
Using posterior probability, the Gaussian posterior is presented as
(20) p(f_* \mid x_*, X, y) \sim N(\sigma_n^{-2} x_*^T A^{-1} X y, \; x_*^T A^{-1} x_*).
The predictive distribution, given the observed dataset, models a probability distribution over an interval rather than estimating just a point:
(21) p(f_* \mid x_*, X, y) \sim N(\sigma_n^{-2} \phi_*^T A^{-1} \Phi y, \; \phi_*^T A^{-1} \phi_*),
where \Phi = \Phi(X), \phi_* = \phi(x_*), and A = \sigma_n^{-2} \Phi \Phi^T + \Sigma_p^{-1}. Inverting A is costly when its dimension is large, so we rewrite (21) as
(22) N\big(\phi_*^T \Sigma_p \Phi (K + \sigma_n^2 I)^{-1} y, \;\; \phi_*^T \Sigma_p \phi_* - \phi_*^T \Sigma_p \Phi (K + \sigma_n^2 I)^{-1} \Phi^T \Sigma_p \phi_*\big).
The covariance matrix K is \Phi^T \Sigma_p \Phi.
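Equations (20)–(22) are the standard GP predictive equations; below is a minimal NumPy sketch of them in kernel form. The paper's experiments use the GPML Matlab toolbox, so this Python version, including the squared-exponential kernel used purely for illustration, is only a stand-in.

```python
import numpy as np

def gpr_predict(X, y, Xs, kernel, noise_var):
    """Posterior mean and variance of f* at test inputs Xs (kernel form of Eq. (22))."""
    K = kernel(X, X) + noise_var * np.eye(len(X))
    Ks = kernel(X, Xs)            # covariances between training and test inputs
    Kss = kernel(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# squared-exponential kernel on 1-D inputs, for illustration only
se = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) ** 2))
X = np.array([0.0, 1.0, 2.0]); y = np.sin(X)
mean, var = gpr_predict(X, y, np.array([1.0]), se, 1e-6)
# with negligible noise, the posterior mean at a training input matches its target
```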
2.2.1. Covariance Function
In simple terms, the covariance defines the correlation of function values at given inputs. A host of covariance functions for GPR have been studied [24]. In this example,
(23) K(x_i, x_j) = v_0 \exp\left(-\left(\frac{x_i - x_j}{r}\right)^{\sigma}\right) + v_1 + v_2 \delta_{ij},
the parameters are v_0 (signal variance), v_1 (variance of bias), v_2 (noise variance), r (length scale), and \sigma (roughness). However, in finding solutions to dynamic problems there is a mounting need for nonstationary covariance functions, as problem landscapes have become increasingly protean. The lodestar of this research is to use a nonstationary covariance to provide an approach to dynamic problems.
A raft of functions have been studied. A simple form is described in [25]:
(24) C^{NS}(x_i, x_j) = \sigma^2 |\Sigma_i|^{1/4} |\Sigma_j|^{1/4} \left|\frac{\Sigma_i + \Sigma_j}{2}\right|^{-1/2} \exp(-\mathbb{Q}_{ij}),
with the quadratic form
(25) \mathbb{Q}_{ij} = (x_i - x_j)^T \left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1} (x_i - x_j),
where \Sigma_i denotes the covariance matrix local to x_i.
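In one dimension, where each \Sigma_i reduces to a local squared length-scale \ell_i^2, Eqs. (24)–(25) can be written compactly. The sketch below is ours, for illustration; when the two local length-scales coincide, the kernel collapses to a stationary squared-exponential at that scale.

```python
import numpy as np

def ns_cov(xi, xj, li, lj, sigma2=1.0):
    """Paciorek-Schervish nonstationary covariance (Eqs. (24)-(25)), 1-D case:
    Sigma_i reduces to the local squared length-scale li**2 at input xi."""
    avg = 0.5 * (li ** 2 + lj ** 2)              # (Sigma_i + Sigma_j) / 2
    q = (xi - xj) ** 2 / avg                     # quadratic form Q_ij
    # |Sigma_i|^(1/4) = sqrt(li) in 1-D, hence the sqrt(li * lj) prefactor
    return sigma2 * np.sqrt(li * lj) * avg ** -0.5 * np.exp(-q)
```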
3. Materials and Methods
Gaussian process regression was chosen in this work owing to its capacity to interpolate observations, its probabilistic nature, and its versatility [26]. It has been applied widely in machine learning and other fields [27–29], pushing back the frontiers of prediction: it makes forecasting along arbitrary paths possible and yields strong results in a wide range of prediction problems. GPR has also provided a foundation for the state of the art in research on multivariate Gaussian distributions.
A host of notations are used throughout this paper:
T typically denotes the vector transpose,
\hat{y} denotes an estimate,
roman capital letters typically denote matrices.
Our extrapolation depends on training and testing datasets from TSPLIB [30]. We outline our approach as follows:
input distance matrix between cities,
invoke Nearest Neighbor method for tour construction,
tour encoding as binary for program interpretation,
as a drifting landscape, we set a threshold value \theta \in \mathbb{T}, where \mathbb{T} is the tour, and the error rate \varepsilon \in \mathbb{T} for the predictability is
(26) \forall \; 1 \le j \le n: \; 0 < \text{severity}_{DT}(F_{ij}) \le \theta, \quad \forall \; 1 \le j \le n: \; 0 < \text{predict}_{DT,\varepsilon}(F_{ij}) \le \theta,
get a cost sum,
determine the cost minimum and change to binary form,
present calculated total cost,
initialize the hyperparameters (\ell, \sigma_f^2, \sigma_n^2),
we use the nonstationary covariance function K(x, x') = \sigma_o^2 + x x'. The constraints y_i = f(x_i) + \varepsilon_i are realized in the TSP dataset D = \{(x_i, y_i)\}_{i=1}^{n}, with y_i \in \mathbb{R} the distances for different cities and x_i \in \mathbb{R}^d,
calculate integrated likelihood in a dynamic regression,
output the predicted optimal path x^* and its length y^*,
implement the local search method x*,
estimate optimal tour x^*,
let the calculated route set the stage for iterations until no further need for refinement,
let the optimal value be stored and define the start for subsequent computations,
output optimal x^ and cost (y^*).
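The first steps of the outline above (Nearest Neighbor tour construction, then cost evaluation) can be sketched as follows; the helper names are ours, and the prediction and local search stages are left out of this fragment.

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Construct the initial tour by repeatedly visiting the closest unvisited city."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(coords[last], coords[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour, coords):
    """Total cost of the closed tour, as in Eq. (1)."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

The resulting tour and its cost then seed the DGPR prediction and the iterated refinement described above.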
3.1. DTSP as a Nonlinear Regression Problem
The DTSP is formulated as a nonlinear regression problem, handled within the nonstationary covariance framework for floating landscapes [18]:
(27) y_i = f(x_i) + \varepsilon_i
with D = \{(x_i, y_i)\}_{i=1}^{n}, where y_i \in \mathbb{R} and x_i \in \mathbb{R}^d. Our purpose is to define p(y_* \mid x_*, D).
3.1.1. Gaussian Approximation
The Gaussian approximation is premised on the kernel, an important element of GPR.
The supposition for this research is that once x is known, y can be determined. The notions of the a priori (truth known without empirical ascertainment) and the a posteriori (truth justified by experience) play a critical role in shaping an accurate estimation. The kernel determines the proximity between estimated and non-estimated points.
Nonstationarity, on the other hand, means that the mean of a dataset is not necessarily constant and/or that the covariance is anisotropic (varies with direction) and spatially variant, as seen in [31]. A host of nonstationary kernels appear in the literature, as discussed in previous sections, for example in [32],
(28) C^{NS}(x_i, x_j) = \int_{\mathbb{R}^2} K_{x_i}(u) K_{x_j}(u) \, du.
For x_i, x_j, u \in \mathbb{R}^2, the underlying process is
(29) f(x) = \int_{\mathbb{R}^2} K_x(u) \psi(u) \, du.
For \mathbb{R}^p, p = 1, 2, \dots, this construction ensures a positive definite function between cities for dynamic landscapes:
(30) \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j C^{NS}(x_i, x_j) = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \int_{\mathbb{R}^p} K_{x_i}(u) K_{x_j}(u) \, du = \int_{\mathbb{R}^p} \left(\sum_{i=1}^{n} a_i K_{x_i}(u)\right) \left(\sum_{j=1}^{n} a_j K_{x_j}(u)\right) du = \int_{\mathbb{R}^p} \left(\sum_{i=1}^{n} a_i K_{x_i}(u)\right)^2 du \ge 0.
In mathematics, convolution knits two functions into a third. This cross-correlation approach has been applied successfully in probability, differential equations, and statistics. In floating landscapes, convolution produces [31]
(31) C^{NS}(x_i, x_j) = \sigma^2 |\Sigma_i|^{1/4} |\Sigma_j|^{1/4} \left|\frac{\Sigma_i + \Sigma_j}{2}\right|^{-1/2} \exp(-\mathfrak{Q}_{ij}).
In mathematics, a quadratic form is a homogeneous polynomial of degree two, here expressed as
(32) \mathfrak{Q}_{ij} = (x_i - x_j)^T \left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1} (x_i - x_j).
A predictive distribution is then defined:
(33) p(y_* \mid X_*, D, \theta) = \iint p(y_* \mid X_*, D, \exp(\ell_*), \exp(\ell), \theta_y) \times p(\ell_*, \ell \mid X_*, X, \bar{\ell}, \bar{X}, \theta_\ell) \, d\ell \, d\ell_*.
From the dataset, the most probable estimates are used, with the following equation:
(34) p(y_* \mid X_*, D, \theta) \approx p(y_* \mid X_*, \exp(\ell_*), \exp(\ell), D, \theta_y).
3.2. Hyperparameters in DGPR
Hyperparameters, denoted \theta, parameterize the prior probability distribution [6]. From y, we find the \theta that maximizes the likelihood:
(35) p(y \mid X, \theta) = \int p(y \mid X, \ell, \theta_y) \cdot p(\ell \mid X, \bar{\ell}, \bar{X}, \theta_\ell) \, d\ell.
From the marginal likelihood p(y \mid X, \theta) we introduce an objective function for the floating matrix:
(36) \log p(y \mid X, \exp(\ell), \theta_y) = -\frac{1}{2} y^T (K_{x,x} + \sigma_n^2 I)^{-1} y - \frac{1}{2} \log |K_{x,x} + \sigma_n^2 I| - \frac{n}{2} \log(2\pi),
where |M| denotes the determinant of M.
In this equation the objective function is expressed as
(37) L(\theta) = \log p(\ell \mid y, X, \theta) = c_1 + c_2 \cdot \left[ y^T A^{-1} y + \log |A| + \log |B| \right]
where A = K_{x,x} + \sigma_n^2 I and B = K_{\bar{x},\bar{x}} + \bar{\sigma}_n^2 I.
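The log marginal likelihood of Eq. (36), which feeds the objective in Eq. (37), can be evaluated numerically as follows. This is a sketch assuming a zero-mean GP, with `slogdet` used for a numerically stable log-determinant; the function name is ours.

```python
import numpy as np

def log_marginal_likelihood(K, y, noise_var):
    """Log marginal likelihood of Eq. (36) for a zero-mean GP with kernel matrix K."""
    n = len(y)
    A = K + noise_var * np.eye(n)              # K_{x,x} + sigma_n^2 I
    sign, logdet = np.linalg.slogdet(A)        # stable log-determinant
    return (-0.5 * y @ np.linalg.solve(A, y)   # data-fit term
            - 0.5 * logdet                     # complexity penalty
            - 0.5 * n * np.log(2 * np.pi))     # normalization constant
```

Maximizing this quantity over the hyperparameters (\ell, \sigma_f^2, \sigma_n^2) is what the optimization step in Section 3 carries out.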
The nonstationary covariance K_{x,x} is defined as follows, with \ell representing the latent cost at point X:
(38) K_{x,x} = \sigma_f^2 \cdot P_r^{1/4} \cdot P_c^{1/4} \cdot \left(\tfrac{1}{2} P_s\right)^{-1/2} \cdot E
with
(39) P_r = p \cdot \mathbf{1}_n^T, \quad P_c = \mathbf{1}_n \cdot p^T, \quad P = \ell^T \ell, \quad P_s = P_r + P_c, \quad E = \exp\left[\frac{-s(X)}{P_s}\right], \quad \ell = \exp\left[\bar{K}_{x,\bar{x}}^T \left[\bar{K}_{\bar{x},\bar{x}} + \bar{\sigma}_n^2 I\right]^{-1} \bar{\ell}\right].
After calculating the nonstationary covariance, we then make predictions [33]:
(40) K_{x,\bar{x}} = \bar{\sigma}_f^2 \cdot \exp\left[-\tfrac{1}{2} s\left(\bar{\ell}^{-2} X, \; \bar{\ell}^{-2} \bar{X}\right)\right].
4. Experimental Results
We use the Gaussian Processes for Machine Learning (GPML) Matlab Toolbox, whose broad applicability dovetails with the purpose of this experiment; it was extended to encompass all the functionalities associated with our study. We used Matlab for its robust platform for scientific experiments and its strong environment for prediction [26]. A 22-city data instance was gleaned from the TSP library [34].
On a Dell computer, we set the initial parameters \ell = 2, \sigma_f^2 = 1, \sigma_n^2. The dynamic regression is combined with the local search method to banish early global and local convergence issues.
For the global method (GA), the following parameters are defined: sample size = 22, crossover probability (p_c) = 1, mutation parameter (p_m) = 1.2, and 100 computations; the SA parameters are T_init = 100, T_end = 0.025, and 200 computations.
The efficacy is observed by collating the estimated tour with the non-estimated one [35–37]:
(41) \text{deviation}(\%) = \frac{\hat{y}_* - y_*}{y_*} \times 100.
The percentage difference between the estimated solution and the optimal solution is 16.64%, indicative of a comparable reduction with respect to the existing methods (Table 1). GPR's computational time is 4.6402 with a distance summation of 253.000. The varied landscape dramatically changes the length of travel for the traveling salesman; the length drops a notch, suggestive of a better method and an opening for the traveling salesman to perform his duties.
The extracted data are collated to show the variations among the different methods.
Table 1: DTSP and DGPR collated data.

Method   Nodes   Optimal   T      D
GPR      22      253.00    4.64   0.42
DGPR     22      231.00    3.82   0.24
GA       22      288.817   5.20   0.40
SA       22      244.00    5.50   2.30
2-opt    22      240.00    4.20   0.30
The proposed DGPR (Figure 8) was fed with the sample TSP tours. The local search method constructs the initial route, and the 2-opt method is used for interchanging edges. The local method defines the starting point and all ports of call, painstakingly ensuring that the loop visits every vertex once and returns to the starting point. The 2-opt vertex interchange creates a new path through the exchange of different vertices [38]. Our study is corroborated by less computation time and reduced distance when the TSP is subjected to prediction of the optimal path. The Gaussian process runs on the shifting landscape through dynamic instances. The nonstationary functions described above lay bare the residual, the similitude between actual and estimate. In the computations, a path is interpreted as [\log_2 n] and an ultimate route as n[\log_2 n].
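The 2-opt interchange just described can be implemented in a few lines. This is our own minimal Python version (the paper's implementation is in Matlab); `dist` is any precomputed distance matrix, and segments are reversed as long as a reversal shortens the tour.

```python
def two_opt(tour, dist):
    """Iterated 2-opt: reverse a segment whenever swapping two edges shortens the tour [38]."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                if a == d:        # skip the wrap-around edge adjacent to edge (a, b)
                    continue
                # replacing edges (a,b) and (c,d) with (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```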
There are myriad methods over and above Simulated Annealing (Figure 10) and tabu search, set forth by the fecundity of researchers in optimization. Cost information determines the replacement of a path on a floating turf: the lowest cost takes primacy over the highest. This process continues in pursuit of the best route (Figure 9), the one with the lowest cost. In the dynamic setting, as the ports of call change, the cost of the path is updated; the cost is always subject to change. The traveling salesman desires to travel the shortest distance, which is the crux of this study (Figure 11). In the weave of this work, the dynamic facet of regression remains the heartbeat of our contribution. The local methods are meshed together to ensure the quality of the outcome; as a corollary, our study has been improved by integrating the Nearest Neighbor algorithm and the iterated 2-opt search. We use the same number of cities; each tour is improved by the 2-opt heuristic and the best result is selected.
In dynamic optimization, a complete solution of the problem at each time step is usually infeasible due to the floating optima. As a consequence, the search for exact global optima must be replaced by the search for acceptable approximations. We generate a tour for the nonstationary fitness landscape in Figure 7.
Figure 7: Generated tour in a drifting landscape for the best optimal route.
Figure 8: DGPR juxtaposed with all the comparison methods on an instance of 22 cities for 200 iterations.
Figure 9: An example of the best solution under stationarity; a sample of 22 cities generates a best route. As seen in the figure, optimality and time differ under nonstationarity.
Figure 10: Optimal path generated by Simulated Annealing on 22 cities.
Figure 11: A high amount of time and distance cost is needed to complete the tour vis-a-vis when prediction is factored in.
5. Conclusion
In this study, we use a nonstationary covariance function in GPR for the dynamic traveling salesman problem and predict the optimal tour of a 22-city dataset. In the DTSP, where the optima shift due to environmental changes, a dynamic approach is implemented to alleviate the intrinsic maladies of perturbation. The DTSP, as a dynamic combinatorial optimization problem, extends the classical traveling salesman problem and has practical importance in many real-world applications, inter alia traffic jams, network load-balance routing, transportation, telecommunications, and network design. Our study produces a good optimal solution with less computational time in a dynamic environment. A drop in distance corroborates the argument that prediction brings a leap in efficacy in terms of overhead reduction, a robust solution borne out by comparisons that strengthen the quality of the outcome. This research gives an interesting direction for solving problems whose optima are mutable: the DTSP is calculated by dynamic Gaussian process regression, the cost is predicted, local methods are invoked, and comparisons are made to refine and consolidate the optimal solution. MATLAB was chosen as the implementation platform because development is straightforward in this language and MATLAB has many comfortable tools for data analysis; it also has an extensive cross-linking architecture and can interface directly with Java classes. Future work should be directed at designing new nonstationary covariance functions to increase the ability to track dynamic optima; changes in the size and evolution of optima should also be factored in, over and above changes in location.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
The authors would like to acknowledge W. Kongkaew and J. Pichitlamken for making their code accessible, which became a springboard for this work. Special thanks also go to the Government of Uganda for funding this research through a PhD grant from the State House of the Republic of Uganda. The authors also express their appreciation to the anonymous reviewers whose efforts were telling in reinforcing the quality of this work.
References
Simões A., Costa E., Prediction in evolutionary algorithms for dynamic environments.
Branke J., Kaussler T., Schmeck H., Smidt C.
Leung K. S., Jin H. D., Xu Z. B., An expanding self-organizing neural network for the traveling salesman problem.
Dorigo M., Gambardella L. M., Ant colony system: a cooperative learning approach to the traveling salesman problem.
Jarumas C., Pichitlamken J., Solving the traveling salesman problem with Gaussian process regression, Proceedings of the International Conference on Computing and Information Technology, 2011.
Weicker K.
Menger K., Das Botenproblem, Ergebnisse eines Mathematischen Kolloquiums, 1932.
Gutin G., Yeo A., Zverovich A., Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP.
Hoffman A., Wolfe P.
Punnen A., The traveling salesman problem: applications, formulations and variations.
Ozcan E., Erenturk M.
Clarke G., Wright J., Scheduling of vehicles from a central depot to a number of delivery points.
Miliotis P.
Jonker R., Volgenant T., Transforming asymmetric into symmetric traveling salesman problems.
Gharan S., Saberi A.
Collard P., Escazut C., Gaspar A., Evolutionary approach for time dependent optimization, Proceedings of the IEEE 8th International Conference on Tools with Artificial Intelligence, November 1996, doi:10.1109/TAI.1996.560392.
Psaraftis H., Dynamic vehicle routing problems.
Yang M., Li C., Kang L., A new approach to solving dynamic traveling salesman problems.
Ray S. S., Bandyopadhyay S., Pal S. K., Genetic operators for combinatorial optimization in TSP and microarray gene ordering.
Osaba E., Carballedo R., Diaz F., Perallos A., Simulation tool based on a memetic algorithm to solve a real instance of a dynamic TSP, Proceedings of the IASTED International Conference on Applied Simulation and Modelling, 2012.
Battiti R., Brunato M.
Chuong D.
Kongkaew W., Pichitlamken J.
Rasmussen C., Williams C., Gaussian Processes for Machine Learning, MIT Press, Cambridge, UK, 2006.
Paciorek C. J., Schervish M. J., Spatial modelling using a new class of nonstationary covariance functions.
Rasmussen C. E., Nickisch H., Gaussian processes for machine learning (GPML) toolbox.
Sinz F., Candela J., Bakir G., Rasmussen C., Franz K., Learning depth from stereo.
Ko J., Klein D. J., Fox D., Haehnel D., Gaussian processes and reinforcement learning for identification and control of an autonomous blimp, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), Rome, Italy, April 2007, pp. 742–747, doi:10.1109/ROBOT.2007.363075.
Idé T., Kato S., Travel-time prediction using Gaussian process regression: a trajectory-based approach, Proceedings of the 9th SIAM International Conference on Data Mining (SDM '09), May 2009, pp. 1177–1188.
Reinelt G., TSPLIB discrete and combinatorial optimization, 1995, https://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/.
Paciorek C., Higdon D., Swall J., Kern J.
Plagemann C., Kersting K., Burgard W., Nonstationary Gaussian process regression using point estimates of local smoothness.
Reinelt G., The TSPLIB symmetric traveling salesman problem instances, 1995.
Kirk J.
Geng X., Chen Z., Yang W., Shi D., Zhao K., Solving the traveling salesman problem based on an adaptive simulated annealing algorithm with greedy search.
Seshadri A.
Nuhoglu M., Shortest path heuristics (nearest neighborhood, 2-opt, farthest and arbitrary insertion) for travelling salesman problem, 2007.