A Comparative Performance Analysis of Computational Intelligence Techniques to Solve the Asymmetric Travelling Salesman Problem

This paper presents a comparative performance analysis of several metaheuristics, namely, the African Buffalo Optimization algorithm (ABO), Improved Extremal Optimization (IEO), Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), and Cooperative Genetic Ant System (CGAS), together with a heuristic, the Randomized Insertion Algorithm (RAI), in solving the asymmetric Travelling Salesman Problem (ATSP). Unlike the symmetric Travelling Salesman Problem, the asymmetric counterpart suffers from a paucity of research studies. This is quite disturbing because most real-life applications are actually asymmetric in nature. These six algorithms were chosen for comparison because they have posted some of the best results in the literature and because they employ different search schemes in attempting solutions to the ATSP: the African Buffalo Optimization employs the modified Karp–Steele mechanism; Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO) employs path construction with a patching technique; the Cooperative Genetic Ant System uses natural selection and ordering; the Randomized Insertion Algorithm uses the random-insertion approach; and the Improved Extremal Optimization uses a grid search strategy. After a number of experiments on 15 of the 19 popular but difficult ATSP instances in TSPLIB, the results show that the African Buffalo Optimization algorithm slightly outperformed the other algorithms in obtaining the optimal results, and at a much faster speed.


Introduction
There have been disagreements among computer science experts with regard to what constitutes artificial intelligence and computational intelligence [1]. Some researchers argue that artificial intelligence and computational intelligence are one and the same branch of knowledge, while other experts feel that computational intelligence is a branch of artificial intelligence [2,3]. Still another school of thought believes that artificial intelligence is a branch of computational intelligence [4]. For the purpose of this paper, the authors agree with the school of thought, represented by the IEEE Computational Intelligence Society, that computational intelligence is a branch of artificial intelligence that focuses on the smartness of a computer machine in performing functions usually attributed to human beings alone [5]. Some of those functions include learning, reasoning, making informed choices, and self-improvement. These are achieved through the development of algorithmic techniques that deploy intelligent agents, drawn from the simulation of nature, to solve real-life problems. Computational intelligence methods have been applied to solve classification problems, time-series prediction problems, examination scheduling [6], stock market problems, weather forecasting, national economic forecasting, job scheduling [7], electric voltage regulation, agricultural production, industrial site location, vehicle routing, etc. [8,9].
Artificial intelligence, by contrast, refers to computer software that enables robots or computer systems to perform human tasks with exceptional ability, accuracy, speed, and capacity. AI encompasses the design of algorithmic and nonalgorithmic methods that involve the use of robots, computer vision, graphics, human-computer interaction, language processing, etc. [10,11]. Major application areas of AI include the development of novel ways of interaction between man and machine, such as the design of better-functioning business models, medical and biomedical applications, and engineering designs [3,12].

Need for Computational Intelligence Methods.
In view of the enormous contributions of AI and computational intelligence (CI) to human development over the years, as highlighted above, researchers have devoted much time and many resources to investigating this area with a view to unraveling the untapped potentials inherent in AI and CI, respectively [13]. Resulting from these research efforts is the realization that AI and CI rely a great deal on the effectiveness and efficiency of their methods. This understanding has led to the development of CI methods. In the last three decades, the concentration of researchers on developing swarm and evolutionary optimization methods to enhance industrial, scientific, and engineering applications has been worth investigating [14]. The constant need for faster, cheaper, and more efficient ways of solving industrial, commercial, engineering, and logistics problems has made optimization a favoured area of scientific investigation in the past few decades, leading to the development of novel computational intelligence methods [15].
In recent times, several heuristic and metaheuristic methods have been designed to solve optimization problems in science, engineering, industry, and technology encountered in many practical fields of human endeavour. Some of these optimization techniques are deterministic while others are stochastic. Deterministic techniques/algorithms have in-built mechanisms that guarantee the exact solution to an optimization problem but often run into serious difficulties as the search space gets larger [16]. Deterministic algorithms are quite inefficient in multimodal search environments [17]. Among the deterministic optimization methods are the Finite Difference [18], Hooke-Jeeves pattern [19], Nelder-Mead simplex [20], and Newton-Raphson methods [21], to mention but a few. Similarly, among the stochastic optimization techniques are the African Buffalo Optimization (ABO) [22], Max-Min Ant System (MMAS) [23], Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO) [24], Cooperative Genetic Ant System (CGAS) [25], Randomized Insertion Algorithm (RAI) [26], Improved Extremal Optimization (IEO) [27], etc.
Stochastic algorithms use one or more search agents and obtain solutions iteratively, without a guarantee of optimal results. However, stochastic algorithms are highly efficient in monomodal and multimodal search environments of any size. Presently, much attention is focused on stochastic algorithms and, most recently, on the hybridization of stochastic algorithms, since they tend to be more successful in finding optimal or near-optimal solutions to difficult real-life situations that require optimization for better results [28]. In this paper, our interest is in comparing the efficiency of different stochastic optimization methods in solving the Asymmetric Travelling Salesman Problem (ATSP).

Hybrid, Metaheuristic, and Heuristic Algorithms.
Hybrid algorithms are simply a combination of two or more algorithms arranged so that the algorithms cooperate and jointly solve a problem. Hybridization is done to harness the unique capabilities of the cooperating algorithms, enhancing search efficiency and effectiveness in terms of obtaining optimal or near-optimal solutions, escaping stagnation, ensuring faster computational speed, etc. There are several algorithm-hybridization architectures in the literature, ranging from master-slave and relay to peer-to-peer paradigms. In all, algorithm hybridization synergizes algorithms so that they complement one another, ensuring greater efficiency and effectiveness [29].
On the one hand, metaheuristics are high-level, stochastic, problem-independent, and intelligent manipulators of heuristic information designed to achieve greater efficiency in the search enterprise [16,30]. To achieve this, metaheuristics sometimes accept worsening moves, generate new starting solutions for an embedded local-search component, or introduce some kind of memory or experiential bias to ensure the quick production of higher-quality solutions [31]. Examples of metaheuristics are Ant Colony Optimization [32], the Artificial Bee Colony [33], Particle Swarm Optimization [34], the African Buffalo Optimization [35], etc.
A heuristic, on the other hand, is an approximate, problem-dependent set of instructions, methods, or principles designed to solve a problem at a reasonable computational cost. Generally, heuristics are relatively simple mechanisms designed to determine the cheapest/best/most effective solution among a set of solutions. However, due to their prevalent use of greedy search strategies, heuristics suffer from premature stagnation. Examples of heuristic algorithms are Lagrangian Relaxation [36], the Randomized Insertion Algorithm [37], Greedy Search [38,39], etc. The primary difference between heuristics and metaheuristics is that heuristics are usually problem-dependent, while metaheuristics are general-purpose algorithms. Again, metaheuristics have inherent memory capabilities that enable them to learn while executing, thus enabling them to adapt to any problem, unlike pure heuristic algorithms [40].
To the best of our knowledge, this is the first time the algorithms used in this comparative analysis are being compared together in one study. The analysis in this paper involves hybrid, metaheuristic, and heuristic algorithms.

Computational Intelligence and Neuroscience
Moreover, this study aims to add to the body of knowledge in the ATSP literature, which, as we observed earlier, has not been as extensively researched as its symmetric counterpart, even though the ATSP has more real-life applications than the symmetric TSP. It is hoped that this study will be a useful tool in the hands of researchers who have to carry out studies involving the ATSP. The rest of this paper is organized as follows. Section 2 discusses the Travelling Salesman Problem. Section 3 introduces the six comparative algorithms, namely, the African Buffalo Optimization (ABO), Cooperative Genetic Ant System (CGAS), Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), Improved Extremal Optimization (IEO), and Randomized Insertion Algorithm (RAI), highlighting each algorithm's basic flow and search mechanisms for the ATSP. Section 4 concentrates on the experiments performed and a discussion of the results obtained in comparing the first five algorithms, which are population-based. Section 5 compares the performance of the ABO with the RAI. This is followed by the conclusions, acknowledgment of support for the study, and references.

Travelling Salesman Problem
The Travelling Salesman Problem (TSP) is about the most studied of the combinatorial optimization problems and is fast becoming the most reliable test bed for newly designed optimization methods [41]. The TSP, which was formulated around 1930 and has since been widely documented [11], is basically the problem of a salesman who has customers all over a large city requiring his services or products. To satisfy these customers, the salesman has to visit all of them and then return to the starting location using the cheapest route. The TSP, in a way, is comparable to a graph-theory problem in which the arcs represent the routes/roads and the nodes represent the cities. A TSP tour, therefore, is a Hamiltonian cycle, a cycle that passes through every node of the graph exactly once [42]. The arcs in a TSP instance are assigned weights representing the costs/distances/travelling times between the nodes, in order to help determine the arcs with the cheapest cost [37].
There are two types of TSP: asymmetric and symmetric. Usually, the symmetric TSP is easier to solve, since the outward and return journeys between two nodes have the same cost/length; optimization algorithms therefore need only calculate one length of the journey across different nodes [43]. In the asymmetric TSP, there is at least one pair of nodes for which the arc cost/weight differs depending on the direction of travel [44]; in the symmetric TSP, the travelling cost/time/weight is the same in both directions for every pair of nodes [45]. That is to say that, in the asymmetric TSP, the edges in the to and from directions can have different costs/times/weights/lengths, so the problem should be modeled with the aid of a directed graph. In the symmetric case, however, the distance between any pair of nodes is the same in either direction. The TSP can be represented mathematically as TSP = (G, f, t), where G = (V, E), f is a function V × V ⟶ Z, and t ∈ Z. Here, G is a complete graph that fully represents the tour of the travelling salesman, whose entire tour cost should not exceed t.
In the ATSP, there exists a set V of cities coupled with a cost function c: V × V ⟶ R+ that represents the weights between any pair of nodes, with a requirement to find a minimum tour length/cost that ensures that every node is visited at least once. The constraint of visiting a city at least once, as opposed to exactly once, is relevant because, usually, the starting city is visited twice. Thus, the cost of an ATSP tour (v1, v2, . . . , vn) can be represented as

cost = c(v1, v2) + c(v2, v3) + · · · + c(vn−1, vn) + c(vn, v1). (1)

This way, any three cities u, v, w ∈ V in the ATSP tour satisfy the triangular inequality; that is, the following statement holds:

c(u, w) ≤ c(u, v) + c(v, w). (2)
Available literature indicates that metaheuristic approaches used in solving the TSP are, with a little transformation, effective in providing solutions to the ATSP [46,47]. The same cannot be said of heuristics, which are mostly problem-dependent. Similarly, many researchers assert that the ATSP is more difficult to solve than its symmetric counterpart because it requires reformulation as a symmetric TSP with additional constraints [48,49].

The Need for the Asymmetric TSP.
A critical review of the literature on the Travelling Salesman Problem reveals many studies on the symmetric Travelling Salesman Problem over the past several decades; surprisingly, however, there is a paucity of literature on the asymmetric TSP [50]. This is rather puzzling because most day-to-day human activities are, indeed, asymmetric. Consider, for instance, a postmaster delivering mail to different locations within a large city or even to different geographical zones, a school bus driver picking up school children and returning them at the end of the school day, a taxi driver picking up passengers from the taxi station and returning for his next queue, or welfare officers taking food to home-bound persons [51]. In all of these cases, the most probable route would be asymmetric. This study is motivated by the indispensable nature of asymmetric challenges in virtually all aspects of human endeavour, and it is hoped that the study will find wide practical applicability. Asymmetry in logistics and transportation in real-life settings can result from one-way traffic and road tolling, as well as other commercial and/or civil engineering considerations [51].

The Comparative Algorithms
This study specifically investigates the performance of six optimization algorithms in the literature that have exhibited exceptional performance in solving the ATSP. These algorithms are the African Buffalo Optimization (ABO),

Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), Cooperative Genetic Ant System (CGAS), Improved Extremal Optimization (IEO), and Randomized Insertion Algorithm (RAI). The choice of these algorithms for comparison is informed by their special characteristics: while the ABO and the IEO are standalone metaheuristic algorithms, the MMAS, MIMM-ACO, and CGAS are hybrid metaheuristics, and the RAI is a heuristic algorithm.

African Buffalo Optimization.
The ABO is a relatively new algorithm whose search capacity and ability to obtain good results are very competitive. It was inspired by the marvellous organizational ability of herds of African buffalos, which sometimes number up to 1,000 individuals in a single herd, using two primary vocalizations: the "waaa" and the "maaa" [26,30]. Applying a lean metaheuristic design concept, the ABO was designed to obtain results fast, avoid stagnation, use few parameters, and be efficient and effective; hence its choice for this comparative study. It was actually designed to complement existing algorithms such as the Genetic Algorithm [52], Simulated Annealing [53], Ant Colony Optimization [54], and Particle Swarm Optimization [55]. Using these vocalizations, the African buffalos organize themselves as they navigate the African forests in search of lush green pastures to satisfy their huge appetite [35]. In this algorithm, each animal's location represents a solution in the search space. The ABO algorithm is presented in Figure 1.
The ABO applies the modified Karp-Steele algorithm in its solution of the Asymmetric Travelling Salesman Problem [56]. It follows a simple sequence of steps: first, it constructs a cycle factor F of the cheapest weight in the graph K. Next, it selects a pair of arcs taken from different cycles of the graph K and patches them in a way that results in minimum cost. Patching is simply removing the selected arcs from the two cycle factors and replacing them with cheaper arcs, forming a larger cycle factor and thus reducing the number of cycle factors in graph K by one. Thirdly, the second step is repeated until a single cycle factor spans the entire graph K [57,58].
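The patching step described above can be sketched as follows. The paper's experiments were coded in MATLAB, but for illustration we use Python here; the function name and data layout are our own assumptions, not the ABO reference implementation.

```python
# Hypothetical sketch of the Karp-Steele patching step: remove one arc from
# each of two node-disjoint cycles and reconnect them with two cross arcs,
# choosing the pair of removed arcs that minimizes the added cost.

def patch_two_cycles(cost, cycle_a, cycle_b):
    """Merge cycle_a and cycle_b into one cycle at minimum patching cost.

    cost: 2D list, cost[i][j] = arc cost i -> j (asymmetric allowed).
    cycle_a, cycle_b: lists of node indices, implicitly closed.
    Returns (merged_cycle, delta_cost).
    """
    best = None
    na, nb = len(cycle_a), len(cycle_b)
    for i in range(na):
        u, u_next = cycle_a[i], cycle_a[(i + 1) % na]
        for j in range(nb):
            v, v_next = cycle_b[j], cycle_b[(j + 1) % nb]
            # Remove arcs (u, u_next), (v, v_next); add (u, v_next), (v, u_next).
            delta = (cost[u][v_next] + cost[v][u_next]
                     - cost[u][u_next] - cost[v][v_next])
            if best is None or delta < best[0]:
                best = (delta, i, j)
    delta, i, j = best
    # Splice: cycle_a up to u, then cycle_b starting at v_next and wrapping
    # around to v, then the rest of cycle_a from u_next.
    merged = (cycle_a[:i + 1]
              + cycle_b[j + 1:] + cycle_b[:j + 1]
              + cycle_a[i + 1:])
    return merged, delta
```

Repeating this merge until one cycle remains yields a single Hamiltonian tour, mirroring the three-step description above.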
So far, the observed limitations of the ABO lie in the fact that buffalos are parliamentary in decision-making; that is to say, the choice of the majority of the herd determines the next destination. In the standard ABO variant, the modeling process is not explicit, leading to the generation of a new population whenever stagnation is occasioned by the decision of the majority of the herd. Another weakness is that frequent reinitialization of the entire population tends to limit the directional search capacity of the buffalos when the algorithm is faced with complex engineering challenges.
These observed challenges necessitated the development of the Improved African Buffalo Optimization [59].

Cooperative Genetic Ant System.
The Cooperative Genetic Ant System (CGAS) [25] is a hybrid algorithm that combines the Genetic Algorithm (GA) with the Ant System (AS) in a concurrent and cooperative manner, thus harnessing the individual strengths of both algorithms to ensure search efficiency. In solving the ATSP, CGAS selects the next node for an ant to visit based on natural ordering and selection. Resident in each node i is a sorted list C(i) of a certain number of adjoining nodes, chosen through a natural-selection process in such a way that a node with higher probability is chosen for the next move. To move from one node to another, an ant consults the sorted list C(i) and picks the nearest unvisited node. The information exchange between GA and AS at the end of the current iteration enables the algorithm to choose the best solutions for the next iteration. Such cooperation helps the algorithm arrive at the global optimal solution and ensures adequate exploration of the search space.
The Cooperative Genetic Ant System algorithm is presented in Figure 2.
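The candidate-list selection described above can be sketched as follows; this is an illustrative Python sketch, and the weighting formula (standard pheromone-times-visibility) and all names are our assumptions, not the CGAS reference code.

```python
import random

# Each node i keeps a sorted candidate list C(i) of nearby nodes; an ant
# picks its next node by fitness-proportionate ("natural") selection over
# the unvisited candidates.

def next_node(i, candidates, visited, pheromone, dist, alpha=1.0, beta=2.0):
    """Pick the next node from the sorted candidate list C(i)."""
    choices = [j for j in candidates[i] if j not in visited]
    if not choices:  # candidate list exhausted: fall back to all unvisited nodes
        choices = [j for j in range(len(dist)) if j not in visited]
    # Weight each candidate by pheromone strength and inverse distance.
    weights = [(pheromone[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in choices]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in zip(choices, weights):
        acc += w
        if acc >= r:
            return j
    return choices[-1]
```

Because the list C(i) is short and pre-sorted, this lookup is cheap compared with scanning every city, which is the point of the candidate-list design.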

Max-Min Ant System.
The Max-Min Ant System (MMAS) developed by Stützle and Hoos [60] is simply an extension of the classical Ant Colony System (ACS) algorithm in which only the best ant in each iteration, or the global-best ant (i.e., the ant with the best solution since the beginning of the search), is allowed to deposit pheromone along its route. At the start of the algorithm, the pheromone trails are set to some maximum level to ensure adequate exploration, and this is systematically reduced as the algorithm progresses.
This system is akin to the Ant System but is further extended by limiting the quantity of pheromone depositable on any chosen arc/route to particular maximum and minimum (max, min) values. This is to avoid the stagnation observed in the Ant System and Ant Colony Optimization, where so much pheromone is deposited on some favourite arcs/routes that the ants are diverted from exploring other parts of the search space. The maximum and minimum pheromone trail values are selected in a problem-dependent manner, such that more promising routes are assigned higher trail values. At the end of each iteration, the evaporation factor reduces the strength of the pheromone trails by a given factor, while making sure that the trail used by the best ant undergoes less evaporation. This ensures that pheromone strength on less-promising arcs decreases, directing the ants to more promising arcs. The MMAS algorithm is presented in Figure 3.
To solve the ATSP, the ants are first placed on randomly selected nodes/vertices, and they construct their tours from an initial node, deliberately exploiting the pheromone trail τkj associated with each edge (k, j). Such a tour is constructed by choosing the next vertex probabilistically, with the probability of moving from node k to node j proportional to [τkj]^α [ηkj]^β, where η is the heuristic (inverse-distance) factor.
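The distinctive MMAS trail update, evaporate everywhere, reinforce only the best ant's route, and clamp every trail into [τmin, τmax], can be sketched as follows; this Python sketch and its names are illustrative assumptions, not the original implementation.

```python
def mmas_update(pheromone, best_tour, best_len, rho, tau_min, tau_max):
    """Evaporate all trails, reinforce the best ant's tour, clamp to limits.

    pheromone: 2D list of trail strengths; best_tour: node sequence of the
    best ant; best_len: its tour length; rho: evaporation rate in (0, 1).
    """
    n = len(pheromone)
    for i in range(n):
        for j in range(n):
            # Evaporation, floored at tau_min so no edge is ever ruled out.
            pheromone[i][j] = max(tau_min, (1 - rho) * pheromone[i][j])
    deposit = 1.0 / best_len                 # shorter tours deposit more
    closed = best_tour + [best_tour[0]]      # close the cycle
    for a, b in zip(closed, closed[1:]):
        # Reinforcement, capped at tau_max to prevent stagnation.
        pheromone[a][b] = min(tau_max, pheromone[a][b] + deposit)
    return pheromone
```

The clamping is what distinguishes MMAS from the plain Ant System: no arc can become so attractive, or so weak, that exploration collapses.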

Improved Extremal Optimization.
The Extremal Optimization algorithm [61] was inspired by self-organised criticality (SOC) theory, which combines two models that use different extremal dynamics: the Bak-Sneppen (BS) model and the BTW sand-pile model [62]. Extremal Optimization (EO) is closely associated with the BS evolution model, which is akin to natural biological evolution in favouring species with higher fitness values. The EO algorithm differs from other evolutionary algorithms in that it emphasizes cooperative co-evolution and extremal dynamics in the evolutionary process. In solving the ATSP, the nodes/cities are mapped into a multientity physical system in such a way that, for an ATSP problem with k cities, there exist k − 1 different states.
The entire anticipated solution that includes all nodes is deemed a state of the physical system. Next, the algorithm defines a local fitness function that evaluates the energy of each entity in the physical system. In every iteration, the algorithm moves through two major phases, namely, the extremal dynamics and cooperative co-evolution phases, which combine greedy search and random walk. The EO algorithm is presented in Figure 4. There has been a major modification of the classical Extremal Optimization algorithm, the Improved Extremal Optimization [27], which introduces an adjustable parameter τ: Step (2) of the EO algorithm is slightly changed so that the variable with the jth highest fitness is chosen with probability Pj ∝ j^(−τ) (1 ≤ j ≤ N) when there are N entities in the computational system.
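The power-law rank selection Pj ∝ j^(−τ) described above can be sketched as follows; the function name and default τ value are illustrative assumptions.

```python
import random

def tau_eo_rank(n, tau=1.4, rng=random):
    """Return a rank in 1..n with probability proportional to j**(-tau).

    Rank 1 is the worst-fitness entity, which tau-EO preferentially selects
    for mutation; larger tau concentrates selection on the worst ranks.
    """
    weights = [j ** (-tau) for j in range(1, n + 1)]
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for j, w in enumerate(weights, start=1):
        acc += w
        if acc >= r:
            return j
    return n
```

With τ = 0 this degenerates to a uniform random walk, and as τ → ∞ it becomes the greedy "always mutate the worst" rule of classical EO, which is why the adjustable τ of IEO interpolates between exploration and exploitation.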

Model-Induced Max-Min Ant Colony Optimization.
The Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO) [63] is a hybridization of the Max-Min Ant System algorithm with Karp's patching technique [64] and the Patch heuristics [65], resulting in two main adjustments to the classical Max-Min Ant System (MMAS). First, the static pheromone weighting system is replaced by a dynamic pheromone weighting mechanism, which results from the ants' partial path-construction efforts. In this way, a deliberate attempt is made to favour the more promising edges of the search (i.e., edges with lower residual costs rather than merely lower actual costs, thus eliminating nonoptimal edges). Secondly, the MIMM-ACO termination condition, rather than being determined intuitively, is determined analytically from the structure of the pheromone matrix in a particular iteration, thus enhancing the algorithm's searching capacity.
In solving the ATSP, the algorithm first analyzes the ATSP problem and then applies the MIMM-ACO searching system. Information obtained from this phase is then used to direct the search towards more promising areas of the search space. The pseudocode of MIMM-ACO is presented in Figure 5, where D is the cost matrix, Ĉ is the residual cost matrix, s_gb is the best solution obtained so far, and τ represents the pheromone matrix.
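One plausible way to obtain a residual cost matrix like Ĉ is a row/column reduction of the cost matrix, as used in assignment-problem relaxations; this sketch rests on that assumption and is not taken from the MIMM-ACO paper.

```python
def residual_costs(cost):
    """Row/column reduction: subtract each row's minimum, then each column's.

    The resulting matrix is nonnegative, and its zero entries mark the most
    promising edges, the ones a reduction-based bound would keep. For an
    ATSP matrix, the diagonal should first be set to a large value so that
    self-loops never look promising.
    """
    n = len(cost)
    c = [row[:] for row in cost]
    for i in range(n):
        m = min(c[i])
        c[i] = [x - m for x in c[i]]
    for j in range(n):
        m = min(c[i][j] for i in range(n))
        for i in range(n):
            c[i][j] -= m
    return c
```

Ranking edges by such residual costs, rather than raw costs, is consistent with the text's remark that MIMM-ACO favours edges with "lower residual costs rather than just mere lower actual cost."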

The Randomized Insertion Algorithm.
Randomized algorithms make random rather than deterministic decisions, making extensive use of random bits as input to their search process and thus generating random variables. Randomized algorithms are usually faster and simpler than deterministic ones. The Randomized Insertion Algorithm (RAI) uses an arbitrary-insertion mechanism, very close to the cheapest-insertion strategy, in its search for solutions to the ATSP. The development of the RAI was born of a desire to provide a fast and simple solution to the ATSP. The algorithm starts by constructing an initial solution (see steps 1-4 in Figure 6) and then employs a series of systematic deletions and insertions of arcs in the cheapest way possible as it constructs good solutions.
To solve the ATSP, the RAI randomly selects an initial node a and links it with two other nodes c and d in the cheapest possible way, thereby forming the cycle a-c-d. In each subsequent iteration, the RAI selects another cheap node in the neighborhood of the newly formed cycle that is not yet part of the tour and inserts it into the tour randomly. This process is repeated until all the nodes are inserted. Next, the algorithm keeps this tour and proceeds to the deletion phase (see steps 6-10 in Figure 6), where it randomly deletes some arcs, comparing the present solution with the previous one, retaining the better and discarding the worse. At the end of the construction steps, the algorithm calculates and outputs the best solution found.
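The construction phase (steps 1-4) can be sketched as follows; this Python sketch picks nodes in random order and inserts each at its cheapest position, one reasonable reading of "arbitrary insertion," and all names are our assumptions rather than the RAI reference code.

```python
import random

def random_insertion_tour(cost, rng=random):
    """Build an initial ATSP tour by repeated insertion.

    Start from a random 3-node cycle, then take the remaining nodes in
    random order and insert each where it increases the tour cost least.
    cost: 2D list with cost[i][j] = arc cost i -> j; assumes >= 3 nodes.
    """
    n = len(cost)
    nodes = list(range(n))
    rng.shuffle(nodes)
    tour = nodes[:3]                       # initial cycle a-c-d
    for v in nodes[3:]:
        best_pos, best_delta = None, None
        for p in range(len(tour)):
            a, b = tour[p], tour[(p + 1) % len(tour)]
            # Cost change of replacing arc (a, b) with (a, v) and (v, b).
            delta = cost[a][v] + cost[v][b] - cost[a][b]
            if best_delta is None or delta < best_delta:
                best_pos, best_delta = p + 1, delta
        tour.insert(best_pos, v)
    return tour
```

The deletion phase (steps 6-10) would then repeatedly remove a few arcs from this tour, reinsert the freed nodes the same way, and keep the better of the two tours.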

Experiments and Discussion of Results
The experiments were performed on a desktop computer with the following configuration: Intel Core™ Duo 2.00 GHz, 1 GB RAM, running Windows 7, using 15 difficult but popular instances of the 19-instance Asymmetric Travelling Salesman Problem (ATSP) dataset, ranging from 17 to 443 cities, available in TSPLIB95 [66]. The experiments were coded in the MATLAB programming language and executed with the MATLAB R2012b compiler.

Parameter Setting.
The details of the experimental parameters are available in Table 1.
The explanation of the symbols used in Table 1 is as follows: D* is the dimension of the problem (that is, the number of nodes), which is also used as the population size; α represents the pheromone factor; Q is the pheromone amount; β is the heuristic factor; q0 is the exploitation ratio; ρ is the pheromone evaporation parameter; ϕ is the ratio of minimum to maximum pheromone value; w_min is the minimum value of the biased weight; θ is the termination-condition parameter; N/A denotes Not Available; τ is the selection probability; N_ITER is the maximum number of iterations. Recall that m_k and w_k represent the exploitation and exploration moves, respectively, of the kth buffalo (k = 1, 2, . . . , N); m_k′ represents a move from m_k; lp1 and lp2 are learning factors; bg is the herd's best fitness; and bp is the individual buffalo's best location. The parameters were set after deliberate tuning. For the ABO specifically, since


it is a parameter-free algorithm, its parameters are preset by the algorithm designers [57]. For the other algorithms, the parameters were obtained after careful tuning and are those found to give the best results. Please note that, to ensure fairness of comparison among the different algorithms, all experiments were run on the same machine and implemented in the same programming language.

Computational Results.
The comparative experiments were in two parts: the first compared the output of the metaheuristic algorithms in solving the ATSP, while the second compared the performance of the ABO with that of the RAI, which is a heuristic algorithm. The results of the experiments involving the metaheuristic algorithms, namely, the Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), Improved Extremal Optimization (IEO), Cooperative Genetic Ant System (CGAS), and African Buffalo Optimization (ABO), are presented in Table 2.
Please note that the relative error was obtained as Rel. error = ((Best value − Optimal value)/Optimal value) × 100.
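The relative-error measure used in the tables amounts to the following one-line computation (a trivial sketch, given only for concreteness):

```python
def relative_error(best, opt):
    """Percentage deviation of an obtained result from the known optimum."""
    return (best - opt) / opt * 100.0
```

An algorithm that attains the known optimum therefore scores a relative error of exactly zero, and the cumulative relative error reported later is simply the sum of these values over all 15 instances.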
In Table 2, the best performer is the MIMM-ACO: the algorithm obtained the optimal result in all 15 ATSP cases under consideration. This excellent performance was closely followed by those of the IEO, CGAS, ABO, and MMAS, in that order. As pointed out earlier, all these algorithms have posted excellent results in solving the ATSP; in fact, to the best of our knowledge, they presently hold some of the best results in the literature, and this is the motivation for this comparative study.
On the whole, all the algorithms posted over 94.56% accuracy in solving the problems. These are excellent performances, especially when one realises that these are metaheuristic algorithms, that is to say, general-purpose algorithms that were not specifically designed for the ATSP alone. One way to explain their exceptional performance could be that they are all hybrid algorithms, except, of course, the ABO. Hybrid algorithms post good performances because they exploit the strengths of the individual algorithms being hybridized. The ABO's good result could be traceable to its use of a less complicated fitness-function calculation, coupled with the ability of the buffalos to search both globally and locally at the same time.
In terms of computational cost, judged by the amount of computational resources utilized in obtaining solutions to the ATSP instances under investigation, there is a real gulf between the algorithms' performances. Here, the exceptional performer is the ABO: it took the ABO just 20.58 seconds to solve all the ATSP instances under investigation. The next best performer is the MIMM-ACO, with 78.51 seconds. These are commanding performances, especially when we consider that it took the other algorithms hundreds of seconds to solve the same set of problems. The excellent performance of the ABO could be due to its use of relatively few parameters: basically, the algorithm uses two major parameters, the "waaa" and "maaa" vocalizations of the buffalos, to control its flow. MIMM-ACO's good solutions could be traceable to the introduction of the limit parameters of the MMAS and the inherent search ability of the classical ACO, in addition to the excellent constructive ability of the patching technique (see Table 2).
It can be observed from Table 2 that the other algorithms were rather slow: solving the same problems took the IEO 392.03 seconds, the MMAS 492.39 seconds, and the CGAS 780.22 seconds. Overall, the ABO is over 3.8 times faster than the MIMM-ACO, 19.05 times faster than the IEO, 23.93 times faster than the MMAS, and over 37.91 times faster than the CGAS. The slow speed of these algorithms could be due to common problems with algorithm hybridization: in most cases of hybridization, efficient exploration is sacrificed for greater exploitation, or speed for optimal solutions, or vice versa. Moreover, the complicated architecture of hybrid algorithms, the additional parameters that require tuning, and the more complex implementation skills that hybridization demands all pose a threat to efficiency [67]. Since efficiency (speed) and trustworthiness (accuracy) are two of the major hallmarks of a good algorithm, the others being versatility and ease of use [68], it is safe to conclude that the ABO, having obtained over 98.5% of the optimal results of all the ATSP instances under investigation in a small fraction of its competitors' execution time, performs very well on both counts.

African Buffalo Optimization and Randomized Insertion Algorithm
The previous analysis of the performance of the metaheuristic algorithms shows that the ABO has an edge over the other metaheuristics. This section is concerned with a comparative assessment of the performance of the ABO against the RAI heuristic in solving the ATSP. The RAI heuristic was especially designed to provide solutions to asymmetric TSP instances. The experimental results are presented in Table 3. The two algorithms under investigation in Table 3, the ABO and the RAI, posted very commanding performances. While the ABO obtained over 98.6% accuracy over all 15 ATSP instances, the RAI obtained over 99.05% accuracy. Moreover, it can be observed that the ABO obtained the optimal solution in five instances to the RAI's 13. The difference in performance here is traceable to their use of two different techniques in obtaining results: while the RAI uses the random insertion strategy, the ABO uses the modified Karp-Steele method. Nonetheless, it is a competitive performance. The excellent performance of both algorithms is further highlighted by the calculation of their cumulative relative errors, a measure of deviation from the optimal solutions. The cumulative relative error is obtained by summing the relative errors for each ATSP instance. The cumulative relative error of the ABO is 1.32% and that of the RAI is 0.94%.
This is also a commendable performance by the ABO in view of the fact that the RAI is a pure heuristic designed primarily to solve the ATSP.
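The cumulative relative error described above can be sketched as follows; the instance names and tour lengths here are hypothetical placeholders, not the paper's experimental data:

```python
def relative_error(obtained: float, optimal: float) -> float:
    """Percentage deviation of an obtained tour length from the known optimum."""
    return 100.0 * (obtained - optimal) / optimal

# Hypothetical (instance, optimal length, obtained length) triples.
results = [
    ("inst_a", 1000, 1004),
    ("inst_b", 2500, 2510),
]

# Cumulative relative error: the sum of the per-instance relative errors.
cumulative = sum(relative_error(obt, opt) for _, opt, obt in results)
```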
In evaluating the cost of obtaining results, the uncommon strength of the ABO becomes outstanding in virtually all instances. It was only in Br17 that the RAI executed slightly faster, in 0.027 seconds to the ABO's 0.028 seconds. In the remaining 14 instances, the ABO clearly outperformed the RAI. For instance, while it took the ABO 0.037 seconds to obtain a result on Ry48p, the RAI used 1.598 seconds. This means that the ABO was over 43 times faster.
This trend continues throughout the remaining ATSP instances under investigation. In fact, the ABO gets progressively faster, relative to the RAI, as the number of ATSP cities increases. Take, for instance, the two largest instances here, Rbg403 and Rbg443: while the ABO used 4.741 and 10.377 seconds, respectively, the RAI used 11,137 and 17,126 seconds. This shows the ABO being over 2,349 and 1,650 times faster, respectively.
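The per-instance speed ratios quoted for the ABO-versus-RAI comparison can be checked directly from the runtimes given in the text:

```python
# Per-instance runtimes (seconds) for the ABO and the RAI, as quoted in the text.
times = {
    "ry48p":  {"ABO": 0.037,  "RAI": 1.598},
    "rbg403": {"ABO": 4.741,  "RAI": 11137.0},
    "rbg443": {"ABO": 10.377, "RAI": 17126.0},
}

# How many times faster the ABO ran on each instance.
ratio = {inst: t["RAI"] / t["ABO"] for inst, t in times.items()}
```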
As was the case in the comparative performance of the metaheuristics, it can be seen that the ABO outperformed the heuristic algorithm, the RAI, on speed. One may observe that speed is a function of the hardware configuration, the programmer's expertise, and a few other factors; nevertheless, an algorithm with such a straightforward fitness-function calculation and very few parameters will usually obtain results faster than most other algorithms. In all, aside from the ABO's over 98.6% accuracy to the RAI's 99.05%, it took the ABO a total of 20.582 seconds, to the RAI's 38,979.448 seconds, to execute all 15 instances under investigation.

Conclusion
This study examined solutions to the asymmetric Travelling Salesman Problem using computational intelligence techniques. The computational intelligence techniques used include the African Buffalo Optimization algorithm (ABO), Improved Extremal Optimization (IEO), Model-Induced Max-Min Ant Colony Optimization (MIMM-ACO), Max-Min Ant System (MMAS), and Cooperative Genetic Ant System (CGAS), as well as the heuristic, Randomized Insertion Algorithm (RAI). Experimental results obtained from using these algorithms to solve the ATSP reveal that the MIMM-ACO performed excellently, obtaining the optimal solutions to all test instances. However, it was discovered that, to obtain such an excellent result, the MIMM-ACO sacrificed speed: it took the MIMM-ACO 78.51 seconds to solve the 15 ATSP instances, while another algorithm, the African Buffalo Optimization (ABO), obtained 98.6% accuracy in 20.582 seconds. The study therefore concludes that, since efficiency (speed), trustworthiness (accuracy), versatility, and ease of use are the hallmarks of a good computational intelligence method [68], and the experimental evaluations here focused on the first two criteria, the ABO is adjudged the better algorithm for solving the ATSP instances, followed by the MIMM-ACO. The excellent performance of the MIMM-ACO is traceable to two main factors. First is the algorithm's ability to replace static biased costs/weights in an ATSP with dynamic ones, something other algorithms struggle to do. This ability stems from the algorithm's sampling of the partial solutions each ant has constructed in the course of the search, discarding less fruitful results while holding on to the very best. Moreover, the MIMM-ACO's use of the assignment-problem technique in discarding nonoptimal solutions from the list of available solutions is a major advantage.
Second, the MIMM-ACO determines the final output based on the most recent state of the pheromone matrix and combines this with the patching algorithm to refine the solutions obtained from the assignment problem. Other algorithms find it hard to outperform the MIMM-ACO's hybridization of the assignment problem with patching. This largely explains why the MIMM-ACO's results on the ATSP have remained among the best over the years [63].
It is recommended that the other algorithms be fine-tuned to make them faster. Moreover, the authors recommend comparing the performance of the ABO with other state-of-the-art algorithms in providing solutions to other optimization problems such as the knapsack problem, graph colouring, and urban transport challenges in major cities. Finally, in view of the relevance of the ATSP to our everyday activities, it is recommended that more research effort be directed towards solving the ATSP and its practical applications in transportation, logistics, national security architecture, etc.
6.1. Threats to Validity. As well as the algorithms in this comparative study performed, it must be observed that good results are a function of the programming language used for the study as well as the machine used for the experiments. Moreover, the choice of benchmark test cases could be a threat: that the algorithms performed well on these chosen benchmarks is no guarantee that they will do well on other benchmarks. Again, the choice of comparative algorithms could be a threat: that these algorithms performed well against one another does not guarantee exceptional performance when compared with other, newer algorithms.
Finally, it is possible that the programming expertise of our programmer as well as the programming language used in implementing this study could have influenced our experimental output.

Data Availability
The data used to support the findings of the study are available within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this research article.