Adaptive Cat Swarm Optimization Algorithm and Its Applications in Vehicle Routing Problems

College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China College of Information Science Technology, Dalian Maritime University, Dalian 116026, China College of Science and Engineering, Flinders University, Sturt Rd, Bedford Park SA 5042, Adelaide, Australia School of Software, Nanyang Institute of Technology, Nanyang 473004, China Fuzhou Survey Institute, Fuzhou 350108, China

PSO is considered to have the following advantages: few control parameters, ease of implementation, and convenience of use. However, it tends to fall into local optima and to stagnate, converging and ending its search earlier than anticipated. Therefore, avoiding local optimal solutions and accelerating the rate of convergence are two important issues for intelligent algorithms. Many variants of PSO were later derived. One of them, proposed by Zhan et al. [32], sets four states according to the distances between particles and adaptively adjusts the parameters.
CSO has two submodes and is only suitable for small-scale population optimization; as the population size increases, the convergence rate slows. To alleviate these disadvantages, a careful balance between population diversity and convergence speed is crucial. Through the improved cat swarm algorithm, the parameter adaptability of ACSO can increase the diversity and flexibility of the population.
ACSO is an optimization algorithm that aims to improve the convergence and search capabilities of the cat swarm and particle swarm algorithms. According to the experimental results, ACSO improves on both the APSO and the CSO by combining their respective benefits: the CSO is applicable to smaller populations, while the APSO's evolutionary state strategy helps avoid getting trapped in a local optimum. Compared with other evolutionary algorithms, the ACSO shows clear advantages. The paper's structure is as follows. Section 2 introduces related research works: PSO, APSO, and CSO. ACSO's adaptive parameter settings are discussed in Section 3. Then, the performances of the ACSO, PSO, APSO, and CSO algorithms are verified on 23 typical benchmark functions in Section 4, and the algorithm is applied to the VRP in Section 5. Finally, Section 6 summarizes the work of this paper and describes suggestions for future work.

Related Research Works
In this section, the basic theories of several traditional algorithms are reviewed briefly, namely, Cat Swarm Optimization (CSO), Particle Swarm Optimization (PSO), and a variant of PSO named Adaptive Particle Swarm Optimization (APSO). APSO greatly improves convergence speed and search capability. Reviewing the APSO and CSO algorithms raises the question of whether the two can be combined to upgrade CSO.

Particle Swarm Optimization.
Originally devised in 1995 by Kennedy and Eberhart [6,33], PSO is inspired by the social behaviour of schools of fish and flocks of birds. It has attracted wide attention and has good development prospects. In the PSO algorithm, each candidate solution in the optimization space is considered a "particle" with neither volume nor mass. All particles act on their own cognitive learning, solo flight experience, and companion flight experience. Therefore, as particles seek the best position in the optimized space, their flights are adjusted through information exchange.
The population is randomly initialized at first, and then each particle follows the current individual local optimum and the group global optimum to find the optimal solution in the search space. While the program is running, each particle represents a point in the decision space and carries two basic pieces of information, position and velocity. Particles update velocity and displacement through the following state transition equations:

v_i^d(t+1) = ω v_i^d(t) + c_1 r_1 (p_best^d(t) − x_i^d(t)) + c_2 r_2 (g_best^d(t) − x_i^d(t)),
x_i^d(t+1) = x_i^d(t) + v_i^d(t+1).

The primary PSO is effortless, with only a few parameters to adjust: v_i^d(t) indicates the velocity of the i-th particle in the d-th dimension before the update; x_i^d(t) denotes the position of the i-th particle before the update; p_best^d(t) represents the position of the particle that currently has the locally optimal solution; g_best^d(t) is the global optimal position; ω is the inertia weight of the particle; c_1 and c_2, also called the acceleration coefficients, scale the velocity with which the particle moves through the solution space and are usually set to 2.05; r_1 and r_2 are two random values uniformly generated in the range [0, 1].
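A minimal sketch of this update in Python (the vectorized form and function name are illustrative, not the paper's implementation):

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.05, c2=2.05, rng=None):
    """One PSO velocity/position update for a swarm of shape (n, d).

    x, v, p_best: arrays of shape (n, d); g_best broadcasts over the swarm.
    """
    rng = rng or np.random.default_rng(0)
    r1 = rng.random(x.shape)  # r1, r2 uniform in [0, 1], per particle and dimension
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x_new = x + v_new
    return x_new, v_new
```

If every particle already sits at both its personal best and the global best with zero velocity, the update leaves the swarm unchanged, which is a quick sanity check on the equations.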

Adaptive Particle Swarm Optimization.
Traditional PSO still has some shortcomings in global search and convergence. The algorithm has attracted the strong interest of many scholars committed to improving its performance. APSO, suggested by Zhan et al. [32], accelerates the convergence speed even further. The population distribution of the algorithm is classified into four evolutionary states, namely, exploration, exploitation, convergence, and jumping-out. During operation, the inertia weight, acceleration coefficients, and other parameters are controlled automatically, so the search efficiency and the convergence speed are effectively improved. As the evolution progresses, particles may cluster together and converge to a local optimum. At that point, an elite learning strategy guides the globally best particle to escape from the local optimum. This algorithm improves on PSO by detecting the distribution information of the population and using this information to evaluate the evolutionary state. The steps are shown below. Step 1: the distribution information uses the Euclidean distance to describe the average distance between each particle i and the other particles:

d_i = (1/(n − 1)) Σ_{j=1, j≠i}^{n} sqrt( Σ_{k=1}^{d} (x_i^k − x_j^k)^2 ),

where n and d, respectively, represent the population size and dimension.
Step 2: compare the average distance d_g of the globally best particle with those of the other particles and compute the maximum d_max and minimum d_min of these average distances. The "evolutionary factor" f is then defined as

f = (d_g − d_min) / (d_max − d_min) ∈ [0, 1].

In the exploration phase, the f value is large; in the exploitation phase, the f value decreases rapidly; the swarm then reaches the convergence phase; as the number of iterations continues to increase, the particles jump out, causing the f value to grow again. The cycle then repeats itself.
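Steps 1 and 2 can be sketched directly from these formulas; the pairwise-distance computation below is an illustrative vectorized form, with a small epsilon added as an assumption to guard against a zero denominator:

```python
import numpy as np

def evolutionary_factor(positions, g_index):
    """Compute f = (d_g - d_min) / (d_max - d_min) from mean pairwise distances.

    positions: array of shape (n, d); g_index: index of the globally best particle.
    """
    n = len(positions)
    # Mean Euclidean distance from each particle to all the others (Step 1).
    diffs = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diffs ** 2).sum(axis=2))
    d = dist.sum(axis=1) / (n - 1)
    # Evolutionary factor (Step 2); epsilon avoids division by zero.
    d_g, d_min, d_max = d[g_index], d.min(), d.max()
    return float((d_g - d_min) / (d_max - d_min + 1e-12))
```

When the best particle sits at the center of the swarm (small d_g), f is near 0, signalling convergence; when it sits on the periphery, f is near 1, signalling exploration.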
Step 3: the four states S1, S2, S3, and S4 are delimited by f. In order, they represent the circumstances of exploration, exploitation, convergence, and jumping-out, respectively. Generally, a larger inertia weight ω is set in exploration, and a smaller value is set in exploitation. Since the evolutionary factor f shares these characteristics of the inertia weight ω, ω can be made to follow changes in f. The four strategies are summarized in Table 1.
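One concrete way ω can follow f is the sigmoid mapping used in Zhan et al.'s APSO, which keeps ω within roughly [0.4, 0.9]; the exact rule applied in this paper's Table 1 may differ, so the following is a sketch under that assumption:

```python
import math

def inertia_from_f(f):
    """Map the evolutionary factor f in [0, 1] to an inertia weight.

    The sigmoid 1 / (1 + 1.5 * exp(-2.6 * f)) (from Zhan et al.'s APSO) gives
    omega ~= 0.4 at f = 0 (exploitation/convergence) and omega ~= 0.9 at f = 1
    (exploration), so a large f yields a large inertia weight.
    """
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
```

The mapping is monotonically increasing, matching the rule that exploration uses a larger ω than exploitation.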
At the same time, when the sum of c_1 and c_2 is greater than 3.0, their values are normalized as c_i ← 3.0 · c_i / (c_1 + c_2), i = 1, 2.

Cat Swarm Optimization.
CSO is a heuristic global optimization method first presented by the Taiwanese scholar Chu et al. in 2006 [9]. CSO was proposed by imitating the behaviour of cats. It has been observed that cats spend most of their time observing their surroundings rather than hunting. Before the hunt, they alternate between moving slowly and staying stationary at a location; this is called the seeking mode. The other submode is called the tracing mode. The CSO algorithm relies on the cooperation of these two states to obtain the optimal solution. As in PSO, every cat has its own velocity and position. The mixture ratio MR defines how many cats in the overall group enter the seeking mode and how many enter the tracing mode, and a flag identifies which mode each cat is in. The optimal solution is the fitness value FS, representing the cat's fit to the fitness function, together with the best position the cat has obtained. The algorithm's specific stages are as follows:
(i) Initialize the population; each cat has D-dimensional coordinate values
(ii) Initialize the velocities, randomizing the value of each dimension
(iii) According to the mixture ratio MR, randomly divide the population into seeking and tracing modes
(iv) On the basis of each cat's flag bit, perform the corresponding position update
(v) Evaluate and record the fitness value of each cat and keep the cat with the best fitness
(vi) Terminate the algorithm if the conditions are met; otherwise, return to step (iii)
The mode in which cats look around to find targets is called the seeking mode. It has four key parameters. Seeking Memory Pool (SMP) defines the search memory size of each cat, representing the candidate positions the cat has sought out. Seeking Range of the Selected Dimension (SRD) represents the mutation rate of the selected dimensions; the change range of each dimension is determined by SRD.
Counts of Dimension to Change (CDC) refers to the number of dimensions a single cat will mutate; its value is a random value between 0 and the maximum dimension. Self-Position Consideration (SPC) is a Boolean variable indicating whether the position the cat is about to move to may include the position it currently occupies. The process is described below:
(i) Make j = SMP copies of the cat's current position. If the value of SPC is true, let j = SMP − 1 and retain the current position as one candidate solution.
Each candidate is then selected with probability P_i = |FS_i − FS_b| / (FS_max − FS_min). If the fitness function seeks the minimum of the problem, let FS_b = FS_max; otherwise, FS_b = FS_min.
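The seeking mode described above can be sketched as follows for a minimization problem; the roulette selection on P_i = |FS_i − FS_b| / (FS_max − FS_min) with FS_b = FS_max follows standard CSO, while the concrete mutation form (multiplying selected dimensions by 1 ± SRD) is an assumption:

```python
import numpy as np

def seeking_mode(cat, fitness, smp=5, srd=0.2, cdc=None, spc=True, rng=None):
    """CSO seeking mode for one cat (sketch, minimization).

    Makes SMP candidate copies (keeping the original when SPC is true),
    mutates CDC randomly chosen dimensions of each copy by +/- SRD percent,
    then selects one candidate by roulette on P_i.
    """
    rng = rng or np.random.default_rng(0)
    d = len(cat)
    cdc = cdc if cdc is not None else d
    candidates = [cat.copy()] if spc else []
    for _ in range(smp - 1 if spc else smp):
        c = cat.copy()
        dims = rng.choice(d, size=cdc, replace=False)
        signs = rng.choice([-1.0, 1.0], size=cdc)
        c[dims] = c[dims] * (1.0 + signs * srd)   # mutate selected dimensions
        candidates.append(c)
    fs = np.array([fitness(c) for c in candidates])
    fs_max, fs_min = fs.max(), fs.min()
    if fs_max == fs_min:                           # all candidates equally fit
        return candidates[0]
    p = np.abs(fs - fs_max) / (fs_max - fs_min)    # FS_b = FS_max for minimization
    p = p / p.sum()
    return candidates[rng.choice(len(candidates), p=p)]
```

With FS_b = FS_max, a candidate with a smaller (better) fitness receives a larger selection probability, which is the intended bias of the seeking mode.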

Tracing Mode.
The mode entered when tracing a target after finding it is called the tracing mode. This action can be briefly described in three steps:
(i) Update the velocity of each dimension according to (6), using the best position the entire cat group has passed, that is, the optimal solution found so far
(ii) Check whether the velocities are within the maximum velocity range
(iii) According to (7), update the cat's position
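The three steps above can be sketched as a single vectorized update; the velocity rule v ← v + r·c·(g_best − x) with clipping follows standard CSO, and the constant c and velocity bound v_max are illustrative values:

```python
import numpy as np

def tracing_mode(x, v, g_best, c=2.05, v_max=10.0, rng=None):
    """CSO tracing mode (sketch): move each cat toward the swarm's best position.

    (i) update velocity toward g_best, (ii) clip to the maximum velocity
    range, (iii) update the position with the new velocity.
    """
    rng = rng or np.random.default_rng(0)
    r = rng.random(x.shape)                               # uniform in [0, 1]
    v_new = np.clip(v + r * c * (g_best - x), -v_max, v_max)
    return x + v_new, v_new
```

A cat already at the global best with zero velocity stays put, so the update is stationary at the optimum.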

Adaptive Cat Swarm Optimization
This paper proposes a new cat swarm algorithm with an adaptive strategy based on the traditional APSO and CSO algorithms. This improvement not only advances the efficiency and convergence of the algorithm but also maintains the uniformity of the population distribution. The specific content and innovations can be summarized as follows.

Increase Adaptive Parameters.
Based on parameter self-adaptation, the handling of the random numbers is adjusted, and an adaptive strategy is added. In the early stage of the iteration, the cat swarm obtains a strong global optimization ability; later in the iteration, the swarm is made to converge to the optimal position. When the initial values of c_1 and c_2 are set relatively small, a bound is added to prevent the generation of negative values. The changes in equations (8)-(10) benefit the global search ability in the early iterations and the local refinement in later iterations, improving the accuracy of the solution.
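The intent of this schedule (strong global search early, local refinement late) can be illustrated with a simple time-varying inertia weight; note this is not the paper's exact equations (8)-(10), which are not reproduced here, but a standard linearly decreasing schedule with assumed endpoints 0.9 and 0.4:

```python
def adaptive_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Illustrative linearly decreasing inertia weight over t in [0, t_max].

    A large weight early in the run favours global exploration; a small
    weight late in the run favours local refinement and convergence.
    """
    return w_start - (w_start - w_end) * t / t_max
```

Any monotonically decreasing schedule bounded away from zero would serve the same purpose; the linear form is merely the simplest.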

A Radius Range is Added to the Search Position.
When the distance between x_i^d and g_best^d is less than the radius, the individual x_i^d is moved toward g_best^d; however, when the distance between x_i^d and p_best^d is less than the radius, the individual is deviated away from p_best^d. The values of the elements are defined by equations in which f and e denote the weight of attraction toward the global optimal solution and the weight of repulsion away from the local optimal solution, respectively, and F_i and E_i indicate the food source and the enemy of the i-th individual.
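A minimal sketch of this radius rule follows; the paper's exact update equations are not reproduced above, so the vector form of the attraction and repulsion terms and the default weights f and e are assumptions for illustration:

```python
import numpy as np

def radius_update(x, g_best, p_best, radius, f=0.5, e=0.5):
    """Sketch of the radius rule for one individual.

    Within `radius` of g_best, attract x toward it with weight f (the F_i,
    "food source" term); within `radius` of p_best, repel x away from it
    with weight e (the E_i, "enemy" term).
    """
    step = np.zeros_like(x)
    if np.linalg.norm(g_best - x) < radius:
        step += f * (g_best - x)        # attraction toward the global best
    if np.linalg.norm(p_best - x) < radius:
        step += e * (x - p_best)        # repulsion away from the local best
    return x + step
```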

Increase Memory Factor y.
Particles learn from each other to obtain the most informative knowledge in their respective fields. For each searcher, a memory factor y is added: each particle gives a lower memory weight to its previous position and, for the purpose of updating its historical best position, a higher memory weight to its current position. To better explain the process of ACSO, the complete flow chart is shown in Figure 1. Firstly, each cat is randomly initialized. Then, the fitness value is calculated. Finally, the parameters are adaptively adjusted in the tracing mode. The pseudocode of the ACSO seeking mode function is exhibited in Algorithm 1.
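The memory-factor idea can be sketched as a convex blend of the previous and current positions; the paper's exact update equation is not reproduced above, so the blend form and the assumption y ∈ (0.5, 1) (so that the current position dominates) are illustrative:

```python
import numpy as np

def memory_update(x_prev, x_curr, y=0.8):
    """Blend previous and current positions with memory factor y (sketch).

    The previous position receives the lower weight (1 - y) and the current
    position the higher weight y, assuming y in (0.5, 1).
    """
    return (1.0 - y) * np.asarray(x_prev) + y * np.asarray(x_curr)
```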
The pseudocode of the ACSO tracing mode function is exhibited in Algorithm 2.

Experiment and Result Analysis
This section is predominantly intended to verify the performance of the proposed algorithm; 23 mathematical optimization functions were used to compare ACSO with PSO, APSO, and CSO. These typical test functions, used by many scholars [34], are listed in Tables 2-4. Three of their elements need to be declared, Space, Dim, and F_min, which denote the boundary of the function's search space, the dimension of the function, and the optimal solution, respectively.
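For concreteness, here are two functions typical of such benchmark suites, one unimodal and one multimodal; the paper's exact numbering of its 23 functions is in Tables 2-4, so the labels here are illustrative:

```python
import numpy as np

def sphere(x):
    """Sphere function: unimodal, with minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def rastrigin(x):
    """Rastrigin function: multimodal, many local optima, minimum 0 at the origin."""
    x = np.asarray(x)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

Unimodal functions test convergence speed and accuracy, while multimodal functions such as Rastrigin test whether an algorithm escapes local optima.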

Experimental Results.
To verify the results, CSO, APSO, and PSO are compared with the proposed ACSO algorithm. 23 benchmark functions are used to evaluate the performance of ACSO for real-parameter optimization. A benchmark function is often also described as a mathematical test function; Figures 2-4 illustrate 2D versions of the test functions used. The relevant test parameters are listed in Table 5. To achieve a fair comparison, each optimization algorithm was run 10 times on each test function to obtain the average and standard deviation. The population size of each algorithm is set to 100, and the maximum number of iterations is 500. Table 6 then gives the comparison of the four algorithms on average (Ave) and standard deviation (Std). A multimodal function contains multiple locally or globally optimal solutions; its purpose is to test whether an algorithm avoids local optima. Comparatively, the improved algorithm outperforms the other algorithms, as shown in Figure 6. Premature stagnation of the optimal solution appears on F9 and F11. ACSO is better than the other algorithms on the benchmark functions F10 and F13. However, the PSO algorithm is known to suffer from premature convergence on multimodal optimization problems, owing to the particles' lack of momentum for exploration or exploitation near the end of the run, so it produces the worst solution curve.

Experiment Analysis.
For the fixed-dimensional multimodal functions, Figure 7 shows the simulation result curves of all four algorithms; the curves of APSO, CSO, and ACSO almost overlap on F17. From Table 6 and Figures 5-7, it can be observed that ACSO not only avoids getting trapped in local optima but also converges quickly and eventually finds a global optimal solution. Therefore, we conclude that the proposed algorithm is noticeably superior to the comparison algorithms.

Adaptive Cat Swarm Algorithm Application in Vehicle Routing Problem
In this section, the proposed algorithm is applied to the Vehicle Routing Problem (VRP). VRP has always been one of the fundamental problems in logistics, with cargo distribution as its core link. It was originally proposed by Dantzig and Ramser in 1959 [35]. Traditionally, VRP refers to the setting where the location of the distribution center and the coordinates and demands of every customer are known, and the task is to find the best route for visiting each customer under precise constraints, with the requirement of the lowest transportation cost. The VRP has been studied for many years in the fields of mathematics and computer science. It is nonlinear, nonconvex, complex, and constrained, and its constraints are difficult to coordinate with one another. In practice, its solution procedure is quite complicated; therefore, there are certain advantages to solving the issue with an intelligent heuristic algorithm. Many scholars have devoted themselves to solving this problem and its derivatives. The best known results for VRP have been obtained using Tabu Search (TS) [36,37] or Simulated Annealing (SA) [38,39]. In the present paper, VRP is chosen as the application objective of the ACSO algorithm to further validate the algorithm's practicability.

Description of Constraints
(i) The total cargo carried by each vehicle must not exceed its maximum load limit
(ii) Each vehicle may serve multiple customers, but each customer can only be served by one vehicle
The problem is defined on a graph with a vertex set and an edge set, where c_ij represents the attribute value of each edge.
(i) N = {1, 2, 3, . . ., n} is the collection of all customers
(ii) K = {1, 2, 3, . . ., k} is the collection of all delivery vehicles
(iii) c_0 is the unit distance cost
(iv) d_ij is the distance between the two points i and j
(v) c_ij is the transportation cost from point i to point j, and c_ij = c_0 d_ij
(vi) r_i is the customer demand for goods
(vii) W is the maximum load capacity
(viii) S_k is the set of customer points served by vehicle k
The decision variable x_ijk equals 1 if vehicle k travels from point i to point j, and 0 otherwise. The objective and restrictions of the VRP are then described as follows:

min Σ_{k∈K} Σ_{i,j∈N} c_ij x_ijk. (15)

As can be seen from the above model, equation (15) is the objective function, a multimodal function of the same kind as the test functions, aiming to minimize the total vehicle distribution cost. The level of distribution cost is the basic measure of whether the economic benefits of the distribution process are maximized.
In a process in which supply and demand exist, formulas (16)-(19) denote the following: equation (16) ensures that each customer is visited and that only one vehicle supplies each customer; equation (17) is a cargo flow constraint, requiring that the load of each vehicle cannot exceed the maximum load; equation (18) constrains the continuity of the vehicle allocation process, that is, a vehicle must depart from customer i after serving it; and equation (19) restricts each vehicle's path from containing subloops.
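The objective and the two main constraints (single-visit and capacity) can be sketched directly from the model; the route representation below (one customer list per vehicle, depot at index 0) is an assumption for illustration, not the paper's encoding:

```python
import numpy as np

def route_cost(route, coords, c0=1.0):
    """Cost of one vehicle route: depot (index 0) -> customers -> depot,
    with c_ij = c0 * d_ij and d_ij the Euclidean distance."""
    path = [0] + list(route) + [0]
    return c0 * sum(
        float(np.linalg.norm(coords[path[i + 1]] - coords[path[i]]))
        for i in range(len(path) - 1)
    )

def feasible(routes, demands, capacity, n_customers):
    """Check the VRP constraints: every customer served exactly once
    (eq. (16)) and each vehicle's load within the maximum load W (eq. (17))."""
    visited = [c for r in routes for c in r]
    if sorted(visited) != list(range(1, n_customers + 1)):
        return False
    return all(sum(demands[c] for c in r) <= capacity for r in routes)
```

The total objective of equation (15) is then the sum of route_cost over all vehicles, minimized over feasible route assignments.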

Analysis of Experimental Results.
To verify the effectiveness of the ACSO algorithm in solving VRP, experiments were performed using MATLAB R2018b. The simulation environment is an Intel(R) Core(TM) i7-8550U CPU @ 1.80 GHz 2.00 GHz PC with the Win10 operating system. The number of iterations is set to 300, and the statistical results of ACSO and the three algorithms CSO, APSO, and PSO in solving VRP are recorded in Table 7.
The Case column denotes the various calculation examples, the Best Cost column represents the optimal value found, and the Time/s column represents the whole running time of the algorithm. Through the comparison of the above experiments, it can be seen intuitively that, compared with the CSO, APSO, and PSO algorithms, the ACSO algorithm achieves better results and has certain advantages. However, the results of the ACSO algorithm also show that, as the number of nodes in the transport network increases, the time consumed during operation increases; to a certain extent, this shows that the algorithm still has room for improvement. Table 8 presents the optimal result of ACSO on the case n20-k7. Seven vehicles set off from the distribution center, and each found an optimal path under the specific constraints, traversing the twenty customers to meet their cargo needs. Here, point zero represents the distribution center, and the numbers from one to twenty denote the customer numbers. The optimal path graph is shown in Figure 8, where the "Star" represents the distribution center and each "Point" indicates the coordinates of a customer's location.

Conclusions
In this study, based on the benefits of CSO and APSO, an Adaptive Cat Swarm Optimization (ACSO) algorithm is proposed. Through the cat swarm behaviour in the tracing mode, its parameters are adjusted adaptively. Its effectiveness has been tested on 23 benchmark functions. The experimental results indicate that ACSO performs better than other existing heuristics in the processes of exploration and exploitation. Finally, ACSO is applied to VRP. Numerical assessments of the four algorithms (ACSO, CSO, APSO, and PSO) reveal that the best result comes from the proposed ACSO algorithm, which further confirms the practicability and effectiveness of the algorithm. However, because the evolutionary state must be evaluated based on distances in order to adjust the adaptive parameters in the tracing mode, the algorithm requires much more processing time for the related parameter adjustments. Therefore, future work will aim to reduce the running time of the algorithm without affecting the swarm's ability to find the optimal solution.

Data Availability
All data are included within the tables of this article.