A Modified Reptile Search Algorithm for Numerical Optimization Problems

The reptile search algorithm (RSA) is a swarm-based metaheuristic algorithm inspired by the encirclement and hunt mechanisms of crocodiles. Compared with other algorithms, RSA is competitive but still suffers from low population diversity, unbalanced exploitation and exploration, and the tendency to fall into local optima. To overcome these shortcomings, a modified variant of RSA, named MRSA, is proposed in this paper. First, an adaptive chaotic reverse learning strategy is employed to enhance the population diversity. Second, an elite alternative pooling strategy is proposed to balance exploitation and exploration. Finally, a shifted distribution estimation strategy is used to correct the evolutionary direction and improve the algorithm performance. Subsequently, the superiority of MRSA is verified using 23 benchmark functions, IEEE CEC2017 benchmark functions, and robot path planning problems. The Friedman test, the Wilcoxon signed-rank test, and simulation results show that the proposed MRSA outperforms other comparative algorithms in terms of convergence accuracy, convergence speed, and stability.


Introduction
The rapid advancement of technology has generated a large number of optimization problems that require solving. These optimization problems arise in various fields, such as finance, chemicals, electronics, machinery, and materials. Real-world optimization problems are often mixed with various unknown factors and have very complex solution spaces. These problems frequently demand substantial computational effort and involve complex nonlinear constraints and large numbers of variables [1][2][3][4][5][6]. Traditional optimization methods have difficulty solving such nondifferentiable, discontinuous problems effectively because they cannot strike a balance between accuracy and time cost [7]. Metaheuristic optimization algorithms have demonstrated better performance in balancing solution quality and time cost [8]. Owing to their simple structure and the absence of any requirement that the problem be continuously differentiable, metaheuristic optimization algorithms have been widely used to solve complex optimization problems in natural and engineering fields [9][10][11][12][13].
In recent decades, metaheuristic algorithms have made great progress in memetic computing, the balance of exploitation and exploration, self-adaptation of hyperparameters, population structure evolution, and theoretical analysis of search dynamics [14]. Memetic computing improves algorithm performance by incorporating local search operators into metaheuristic algorithms. Charin et al. used particle swarm optimization (PSO) combined with Levy flight optimization (LFO) to track the maximum power point of a photovoltaic system [15]. Yu et al. showed that combining chaotic local search (CLS) with brain storm optimization (BSO) can significantly improve the performance of BSO [16]. How to balance the exploration and exploitation of an algorithm to improve its performance is a research hotspot in metaheuristics, and many researchers use various operators or adjust algorithm parameters to strike this balance [17]. Cai et al. proposed an alternate search pattern strategy to balance the exploration and exploitation of BSO [18]. During optimization, the search performance of some metaheuristic algorithms is strongly affected by adjustable parameters such as the crossover rate, mutation rate, and population size. To control these parameter values at different stages of the optimization process, adaptive parameter control has been studied extensively [19]. Lei et al. proposed a variant of the gravitational search algorithm (GSA) with a self-adaptive gravitational constant, called ALGSA, which greatly improved the search performance of GSA [20]. Population structure evolution also has a great influence on the search performance of metaheuristic algorithms. Zhong et al. proposed a variant of the differential evolution (DE) algorithm, called EHDE, that incorporates elite elements into a hierarchical population structure [21]. Inspired by the two-layered structure of GSA, Wang et al. proposed a four-layered GSA variant with stronger search capability, called MLGSA [22]. In addition to the above factors, theoretical analysis of search dynamics has recently attracted a great deal of attention from researchers [23].
In general, metaheuristic optimization algorithms can be classified into three categories [24]: evolutionary-based algorithms, physical-based algorithms, and swarm-based algorithms.
Evolutionary-based algorithms are inspired by the laws of natural evolution. Genetic algorithms are a typical example, inspired by Darwinian evolutionary theory [25]; they provide solutions through the concepts of crossover and mutation of species in nature. Other evolutionary-based algorithms include DE [26], evolutionary programming [27], and evolution strategies [28]. The second category is physics-based algorithms, which originate from natural physical laws. Simulated annealing [29] and GSA [30] are two common physics-based algorithms; they utilize the laws of thermodynamics and gravity, respectively, for optimization. Researchers have proposed other physics-based algorithms as well. Wei et al. proposed a nuclear reaction optimizer based on the phenomenon of atomic nuclear reactions [31]. Inspired by the sine and cosine laws of mathematics, Mirjalili proposed the sine cosine algorithm [32]. Eskandar et al. proposed a water cycle algorithm based on natural water cycle phenomena [33]. The third category is swarm-based algorithms, which build optimization models by emulating the social behavior of animal groups. PSO [34] and the ant colony algorithm [35] are two of the most common swarm-based algorithms; they provide solutions by sharing information among all individuals during the optimization process. Others include the grey wolf optimizer (GWO) [36], the whale optimization algorithm (WOA) [37], the butterfly optimization algorithm (BOA) [38], the firefly algorithm (FA) [39], the artificial bee colony (ABC) algorithm [40], the reptile search algorithm (RSA) [41], the Harris hawks optimizer (HHO) [42], the equilibrium optimizer (EO) [43], the tunicate swarm algorithm (TSA) [44], the salp swarm algorithm (SSA) [45], the Tasmanian devil optimization (TDO) [46], the arithmetic optimization algorithm (AOA) [47], and the pathfinder algorithm (PFA) [48].
The reptile search algorithm (RSA) is a novel swarm-based algorithm proposed by Abualigah, inspired by the encircling mechanism, hunting mechanism, and social behavior of crocodiles [41]. RSA performs well but has disadvantages such as diminishing population diversity and unbalanced exploitation and exploration capabilities. To improve the performance of RSA and enhance its search capability, this paper proposes a modified variant of RSA, named MRSA. To improve population diversity, an adaptive chaotic reverse learning strategy is applied both at initialization and in each iteration update. To balance exploitation and exploration, an elite alternative pool strategy is developed. A shifted distribution estimation strategy is used to modify all the individuals and guide the evolutionary direction. To fully validate the performance of MRSA, 23 benchmark functions, the IEEE CEC2017 benchmark functions, and robot path planning problems are used for testing, and the superiority of the proposed algorithm is demonstrated by convergence analysis, stability analysis, and statistical tests. The rest of this paper is organized as follows. Section 2 reviews the basic RSA. The proposed MRSA is described in detail in Section 3. In Section 4, the effectiveness of the proposed strategies and the superiority of the modified algorithm are verified using classical test functions, the IEEE CEC2017 benchmark functions, and robot path planning problems. Finally, Section 5 concludes and discusses future work.

Reptile Search Algorithm
In this section, the basic procedures of RSA are presented. RSA is a swarm-based metaheuristic algorithm inspired by the encircling mechanism, hunting mechanism, and social behavior of crocodiles.

Initialization Phase.
RSA is similar to other metaheuristics in that the initial solutions are generated randomly in the solution space. The initialization formula is

X_i^1 = LB + rand × (UB − LB),

where X_i^1 is the i-th initial individual, rand is a random number in [0, 1], and LB and UB are the lower and upper boundaries of the search space, respectively.
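The initialization step can be sketched in a few lines of Python; the function name and the use of NumPy are our own choices for illustration, not part of the paper's MATLAB implementation.

```python
import numpy as np

def initialize_population(n, dim, lb, ub, rng=None):
    """Generate n random individuals uniformly inside [LB, UB]."""
    rng = np.random.default_rng() if rng is None else rng
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    # Each component is LB + rand * (UB - LB), with rand uniform in [0, 1)
    return lb + rng.random((n, dim)) * (ub - lb)
```

Scalar or per-dimension bounds both work here because of NumPy broadcasting.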

Encircling Phase (Exploration).
Crocodiles perform high walking and belly (sprawling) walking during the global search phase. In RSA, the search strategy is determined by the current iteration count: when t ≤ 0.25T, RSA performs high walking, and when 0.25T < t ≤ 0.5T, RSA performs belly walking. The mathematical model of the encircling phase is

X_i^(t+1) = X_best^t × (−η_i^t) × β − R_i^t × rand,  t ≤ 0.25T (high walking),
X_i^(t+1) = X_best^t × X_rand^t × ES(t) × rand,  0.25T < t ≤ 0.5T (belly walking),  (2)

with

η_i^t = X_best^t × P_i^t,  (3)
R_i^t = (X_best^t − X_rand^t) / (X_best^t + ε),  (4)
ES(t) = 2 × r_1 × (1 − t/T),  (5)
P_i^t = α + (X_i^t − mean(X_i^t)) / (X_best^t × (UB − LB) + ε),  (6)

where X_best^t is the current best solution, t is the current number of iterations, T is the maximum number of iterations, β is a constant taking the value 0.1 that controls the speed of exploration, X_rand^t is a randomly chosen individual, ES is a random value decreasing over the interval [−2, 2], ε is a small value that keeps the denominators nonzero, r_1 is a random number in the interval [−1, 1], α is a constant taking the value 0.1, mean(X_i^t) is the average position of the i-th solution, and rand is a random number in [0, 1].

Hunting Phase (Exploitation).
In RSA, crocodiles use two foraging strategies: hunting coordination and hunting cooperation. When 0.5T ≤ t < 0.75T, RSA performs hunting coordination; when 0.75T ≤ t < T, RSA performs hunting cooperation. The position update in the hunting phase is

X_i^(t+1) = X_best^t × P_i^t × rand,  0.5T ≤ t < 0.75T (hunting coordination),
X_i^(t+1) = X_best^t − η_i^t × ε − R_i^t × rand,  0.75T ≤ t < T (hunting cooperation).  (7)

RSA first generates the initial population randomly in the search space and then chooses among the four search strategies according to the iteration count. The pseudocode of RSA is shown in Algorithm 1.

Algorithm 1: Pseudocode of RSA.
(1) Initialize the RSA parameters and generate the initial population randomly
(2) While t < T
(3)   Calculate the fitness of each solution
(4)   Find the best solution so far
(5)   Update ES using equation (5)
(6)   For each crocodile X_i do
(7)     Update the η, R, and P values using equations (3), (4), and (6), respectively
(8)     If t ≤ 0.25T
(9)       Calculate the new position X_i using equation (2) (high walking)
(10)    Else if 0.25T < t ≤ 0.5T
(11)      Calculate the new position X_i using equation (2) (belly walking)
(12)    Else if 0.5T < t ≤ 0.75T
(13)      Calculate the new position X_i using equation (7) (hunting coordination)
(14)    Else
(15)      Calculate the new position X_i using equation (7) (hunting cooperation)
(16)  End for
(17)  t = t + 1
(18) End while
(19) Return the best solution
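As a concrete illustration, the four RSA phases can be condensed into the short Python sketch below. It follows our reading of the phase conditions and update rules; the function signature, the scalar bounds, and the way R and P are vectorized are simplifications rather than the paper's exact implementation.

```python
import numpy as np

def rsa(fitness, n, dim, lb, ub, t_max, alpha=0.1, beta=0.1, eps=1e-10, seed=0):
    """Compact sketch of RSA's four phases: high walking, belly walking,
    hunting coordination, and hunting cooperation."""
    rng = np.random.default_rng(seed)
    x = lb + rng.random((n, dim)) * (ub - lb)      # random initialization
    fit = np.array([fitness(ind) for ind in x])
    best = x[np.argmin(fit)].copy()
    best_fit = float(fit.min())
    for t in range(1, t_max + 1):
        es = 2.0 * rng.uniform(-1.0, 1.0) * (1.0 - t / t_max)   # ES, equation (5)
        mean_x = x.mean(axis=1, keepdims=True)
        p = alpha + (x - mean_x) / (best * (ub - lb) + eps)     # P, equation (6)
        eta = best * p                                          # eta, equation (3)
        x_rand = x[rng.integers(n)]
        r = (best - x_rand) / (best + eps)                      # R, equation (4)
        for i in range(n):
            for j in range(dim):
                if t <= 0.25 * t_max:        # high walking
                    new = best[j] * -eta[i, j] * beta - r[j] * rng.random()
                elif t <= 0.5 * t_max:       # belly walking
                    new = best[j] * x_rand[j] * es * rng.random()
                elif t <= 0.75 * t_max:      # hunting coordination
                    new = best[j] * p[i, j] * rng.random()
                else:                        # hunting cooperation
                    new = best[j] - eta[i, j] * eps - r[j] * rng.random()
                x[i, j] = min(max(new, lb), ub)   # keep inside the bounds
            f = fitness(x[i])
            if f < best_fit:                      # greedily retain the best
                best_fit, best = f, x[i].copy()
    return best, best_fit
```

For a quick check, `rsa(lambda v: float((v * v).sum()), 20, 5, -10.0, 10.0, 50)` minimizes the sphere function on [−10, 10]^5.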

The Proposed RSA Variant
To enhance the performance of the basic RSA, three improvement strategies are proposed in this paper. First, an adaptive chaotic reverse learning strategy is introduced to enhance the population diversity of RSA using the characteristics of chaotic mapping and reverse learning. Second, an elite alternative pool strategy is used to balance the exploitation and exploration of RSA. Third, a shifted distribution estimation strategy is used to correct the evolutionary direction: by sampling information from the dominant population, the population's direction is better guided, improving the algorithm's convergence efficiency. The three improvement strategies are described in detail in the following subsections.

Adaptive Chaotic Reverse Learning Strategy.
One of the shortcomings of metaheuristic algorithms is that population diversity continues to diminish as the optimization proceeds. Researchers employ different approaches to enhance diversity; the reverse learning strategy is a technique now widely used for this purpose. Its popularity stems from an extensive literature showing that the probability of a reverse solution approximating the global optimum is approximately fifty percent higher than that of the current original solution, and reverse learning strategies have been used successfully to improve other algorithms [49][50][51]. The mathematical model of the reverse learning strategy is

X_i^o = LB + UB − X_i^t,  (8)

where X_i^o is the reverse solution corresponding to X_i^t. Population diversity is related to the distribution of individuals in the search space: the more uniform the distribution, the better the diversity. Chaotic mappings are characterized by randomness and ergodicity, which can help RSA generate new solutions and avoid premature convergence, and chaotic mappings have been used successfully to improve other algorithms [52]. Therefore, this paper combines the reverse learning strategy with chaotic mappings, yielding the chaotic reverse learning strategy:

X_i^co = LB + UB − λ_i × X_i^t,  (9)

where X_i^co denotes the solution generated by the chaotic reverse learning mechanism for the i-th individual in the population and λ_i is the corresponding chaotic mapping value.
There are ten common chaotic mappings, with their formulas and numerical distributions shown in Table 1.
For swarm-based algorithms, the quality of the initial population has a significant impact on performance. Therefore, the initial population is first generated using chaotic opposition-based learning (COBL) to improve population quality and increase the algorithm's convergence accuracy. Second, during each iteration, the corresponding reverse population is generated using COBL and evaluated separately so that the dominant individuals are retained in the next generation.
In addition, as the algorithm proceeds, applying the chaotic reverse learning strategy to all individuals produces many useless searches, which increases the computational cost and is not conducive to convergence. This paper therefore adopts a linearly decreasing population strategy: as the iterations proceed, the number of individuals using the chaotic reverse learning strategy is gradually reduced according to

Pop = round(pop_max − (pop_max − pop_min) × t / T),  (10)

where Pop denotes the number of individuals using the chaotic reverse learning strategy and pop_max and pop_min denote its maximum and minimum values, respectively.
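A minimal sketch of this strategy follows, assuming the tent map for λ, scalar bounds, and a greedy keep-the-better selection between each original/reverse pair; the helper names and the tent-map parameter are illustrative assumptions.

```python
import numpy as np

def tent_map(lam, mu=0.7):
    """One step of the tent chaotic map; mu = 0.7 is an assumed setting."""
    return lam / mu if lam < mu else (1.0 - lam) / (1.0 - mu)

def pop_using_cobl(t, t_max, pop_max, pop_min):
    """Linearly decreasing count of individuals that undergo COBL."""
    return int(round(pop_max - (pop_max - pop_min) * t / t_max))

def chaotic_reverse_step(x, fit, fitness, lb, ub, lam):
    """Build chaotic reverse solutions X_co = LB + UB - lam * X and keep the
    better of each (original, reverse) pair."""
    x_co = np.clip(lb + ub - lam[:, None] * x, lb, ub)
    fit_co = np.array([fitness(ind) for ind in x_co])
    better = fit_co < fit
    x[better], fit[better] = x_co[better], fit_co[better]
    return x, fit
```

Because the selection is greedy, the fitness of every retained individual can only improve or stay the same after the step.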

Elite Alternative Pool Strategy.
RSA performs position updates by following the best individual. This facilitates faster convergence but diminishes population diversity and tends to trap the algorithm in local optima. To maintain a balance between exploitation and exploration, an elite alternative pool strategy is proposed in this section. We place the current best three individuals into a pool:

X_pool = [X_eap1, X_eap2, X_eap3],

where X_eap1, X_eap2, and X_eap3 are the three best individuals in the population thus far. The food source is chosen randomly from these three individuals each time. With the elite alternative pool strategy, the position of the food source changes from the single best individual to one of the best three individuals, which helps avoid premature convergence caused by the best individual falling into a local optimum. To better balance exploitation and exploration, we also put the globally optimal individual into the elite alternative pool, ensuring that each individual has the opportunity to move closer to the optimal individual and preserving the convergence efficiency of the algorithm.
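The pool construction and the random choice of the food source can be sketched as follows; the function names are our own, and the pool size k = 3 matches the description above.

```python
import numpy as np

def elite_pool(pop, fit, global_best, k=3):
    """Collect the k best current individuals plus the global best individual."""
    idx = np.argsort(fit)[:k]
    return np.vstack([pop[idx], global_best[None, :]])

def pick_food_source(pool, rng):
    """Each position update follows a randomly chosen pool member instead of
    always following the single best individual."""
    return pool[rng.integers(len(pool))]
```

In an MRSA-style loop, `pick_food_source` would replace the fixed best solution wherever the update equations reference it.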
Thus, in the final mathematical model of the elite alternative pool strategy, the best individual X_best in equations (2) and (7) is replaced by a food source chosen randomly from the pool {X_eap1, X_eap2, X_eap3, X_best}.

Shifted Distribution Estimation Strategy.
RSA searches by following the optimal individual, ignoring valid information from other individuals. To make full use of the position information of the dominant population, some scholars implement a distribution estimation strategy [53, 54]. This strategy builds a probability distribution model from the current dominant population, generates a new offspring population by sampling the model, and eventually obtains the optimal solution through continued iteration. In addition to using the dominant population, this paper modifies the model by introducing information about the optimal individual and each solution's own position, yielding a shifted distribution estimation strategy: the sampling mean is shifted from the dominant-population mean toward the global best individual and the solution's current position. The overall framework of MRSA is illustrated in Figure 1, and its pseudocode is given in Algorithm 2.
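The idea can be sketched as below. This is a hedged sketch, not the paper's exact model: the per-dimension Gaussian, the top-half cutoff, and the equal 1/3 weighting of the dominant mean, the global best, and each solution's own position are our assumptions.

```python
import numpy as np

def shifted_eda_sample(pop, fit, best, rng, top_frac=0.5):
    """Fit a per-dimension Gaussian to the dominant half of the population,
    shift its mean toward the global best and each solution's own position,
    then sample one offspring per individual."""
    n, dim = pop.shape
    k = max(2, int(n * top_frac))
    dominant = pop[np.argsort(fit)[:k]]       # dominant (better) individuals
    mu = dominant.mean(axis=0)
    sigma = dominant.std(axis=0) + 1e-12      # avoid a degenerate distribution
    # Shifted mean: equal blend of dominant mean, global best, and own position
    shifted_mu = (mu[None, :] + best[None, :] + pop) / 3.0
    return rng.normal(shifted_mu, sigma)
```

Sampling around a shifted mean pulls offspring toward promising regions while still retaining per-individual information, which is the intended correction of the evolutionary direction.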

Time Complexity.
The time complexity determines the operating efficiency of the algorithm. In RSA, the computational complexity of the initialization process is O(N), where N is the population size. The computational complexity of the update process is O(N × T × D), where D is the problem's dimensionality and T is the maximum number of iterations, so the overall complexity of RSA is O(N × (T × D + 1)). The computational complexity of MRSA is determined by six main factors: the initialization process, solution updates, fitness evaluations, the chaotic reverse learning strategy, the elite alternative pool strategy, and the shifted distribution estimation strategy. The computational complexity of the MRSA initialization process is O(N), and that of the update process, including the three improvement strategies, is O(N × (2 × T × D + T)). Therefore, the computational complexity of MRSA is O(N × (2 × T × D + T + 1)). The three improvement strategies increase the computational complexity of MRSA only slightly compared with RSA, so RSA and MRSA can be considered to have similar levels of operating efficiency.

Experimental Results and Discussion
In this section, we first evaluate the chaotic mapping combination algorithms on the benchmark test functions to determine which chaotic mapping sequence to combine with the adaptive chaotic reverse learning strategy. The performance of MRSA is then evaluated on 23 benchmark functions, the IEEE CEC2017 benchmark functions, and robot path planning problems, and compared with other state-of-the-art algorithms.

Benchmark Test Functions.
This section uses 23 benchmark test functions that are commonly found in the literature. These benchmark functions include seven unimodal functions, six multimodal functions, and ten fixed-dimension functions [55]. The unimodal functions F1-F7 have only one global optimum and are primarily used to test the local exploitation capability of an algorithm. The multimodal functions have multiple local minima and can be used to check an algorithm's global exploration capability and its ability to avoid local optima. Details of the benchmark test functions are shown in Table 2.

Chaos Mapping Selection Test.
The adaptive chaotic reverse learning strategy proposed in this paper combines a chaotic mapping with a reverse learning mechanism. To determine which chaotic mapping to employ, each of the ten chaotic mappings was combined with the reverse learning mechanism. The MRSA variant using the chaotic mapping with ID 1 is called MRSA-C1, and the other variants are named similarly. For a fair comparison, the population size was set to 50 and the maximum number of iterations to 300, with all runs on the same experimental platform. All the algorithms were programmed in MATLAB R2016b on a computer running Windows 10 with an AMD Ryzen 5 3600 processor and 16 GB of RAM. Table 3 shows the statistical results of each algorithm run independently 30 times; "avg" is the average of the best candidate solutions obtained, and "std" is their standard deviation. As shown in Table 3, remarkably, all the improved algorithms perform no worse than RSA on at least 22 of the functions, indicating that the improved strategies proposed in this paper effectively improve the algorithm's performance. Furthermore, the best of the ten chaotic mapping combination algorithms is MRSA-C10. Therefore, MRSA-C10 was used in the comparisons that follow.

Performance Comparison Tests of MRSA with Other Advanced Algorithms on 23 Benchmark Functions.
To verify the performance of the MRSA algorithm, the modified algorithm was compared with the original RSA [41], HHO [42], EO [43], TSA [44], GWO [36], SSA [45], and WOA [37]. The parameters of all the algorithms were set according to their original papers to ensure the performance of the comparison algorithms, as shown in Table 4. Given that F1-F13 are the variable-dimension functions used in this section, these thirteen functions were solved with Dim = 30, 100, 500, and 1000. The means obtained by these algorithms are recorded in Tables 5-8. On the unimodal functions F1-F7, MRSA does not show a significant decrease in performance as the dimensionality increases, which indicates that the improvement strategies proposed in this paper greatly improve the exploitation capability of RSA. For the variable-dimension multimodal functions F8-F13, MRSA, RSA, HHO, and EO consistently achieve their respective optimal solutions in the different dimensions when solving F9-F11. HHO and WOA outperform MRSA in solving F8. MRSA achieves a stable optimal value of 0 when solving F9 and F11 and maintains it as the dimensionality increases. MRSA shows the best performance in solving F12 and F13, outperforming all the compared algorithms. It is worth noting that MRSA performs no worse than RSA on all the multimodal functions in the different dimensions and shows significant improvements on three of the six variable-dimension multimodal functions, which indicates that MRSA has a better global search capability: the improvement strategies proposed in this paper are well suited to enhancing population diversity and expanding the search range of the population, thus improving the exploration capability of the algorithm. Table 9 presents the test results of the different algorithms on the fixed-dimension multimodal functions. The comparison shows that HHO, EO, and SSA outperform MRSA on F14. For F15-F23, MRSA performs best on all the tested functions.
In particular, MRSA provides better solutions than RSA on all the test functions. Since fixed-dimension functions are usually used to test an algorithm's ability to maintain a balance between exploitation and exploration, the above analysis shows that the MRSA proposed in this paper balances exploitation and exploration effectively and has a strong ability to avoid local optima. Convergence speed and convergence accuracy are important indicators of algorithm performance. Figure 2 shows the mean convergence curves of MRSA and RSA when solving the test functions in different dimensions. MRSA has a faster convergence speed and better convergence accuracy in every dimension, and its convergence speed and accuracy do not decrease much with increasing dimensionality.

Table 11: Results of Wilcoxon's signed-rank test on F1-F13 (p-values; "Yes" denotes a significant difference at the 0.05 level).
F1-F13 (Dim = 30): MRSA versus EO [43] 0.001871 (Yes); versus TSA [44] 0.001306 (Yes); versus GWO [36] 0.001306 (Yes); versus SSA [45] 0.001306 (Yes); versus WOA [37] 0.02537 (Yes); versus RSA [41] 0.002873 (Yes).
F1-F13 (Dim = 100): MRSA versus HHO [42] 0.209427 (No); versus EO [43] 0.001871 (Yes); versus TSA [44] 0.001306 (Yes); versus GWO [36] 0.001306 (Yes); versus SSA [45] 0.001306 (Yes); versus WOA [37] 0.017496 (Yes); versus RSA [41] 0.002873 (Yes).
F1-F13 (Dim = 500): MRSA versus HHO [42] 0.182338 (No); versus EO [43] 0.001871 (Yes); versus TSA [44] 0.001306 (Yes); versus GWO [36] 0.001306 (Yes); versus SSA [45] 0.001306 (Yes); versus WOA [37] 0.02313 (Yes); versus RSA [41] 0.002873 (Yes).
F1-F13 (Dim = 1000): MRSA versus HHO [42] 0.157939 (No); versus EO [43] 0.001944 (Yes); versus TSA [44] 0.001306 (Yes); versus GWO [36] 0.001306 (Yes); versus SSA [45] 0.001306 (Yes); versus WOA [37] 0.017496 (Yes); versus RSA [41] 0.002873 (Yes).
To analyze the distribution characteristics of each algorithm on the fixed-dimension test functions, box plots were drawn from the results obtained by solving F14-F23, as shown in Figure 3. For each algorithm, the center mark of each box indicates the median of the results of 30 runs, and the bottom and top edges of each box indicate the lower and upper quartiles, respectively; the "+" signs indicate outliers that fall outside the box. As seen from Figure 3, MRSA has no outliers for F17 and F21-F23, which indicates that the distribution of the solutions obtained by MRSA is more concentrated and MRSA is more stable. For the other test functions with some outliers, MRSA outperforms the comparison algorithms in terms of maximum, minimum, and median values, and the distribution of its solutions is more concentrated. Therefore, MRSA solves the test functions with better stability than the other comparison algorithms.
Apart from the convergence and stability analysis, to further analyze the experimental results, the Friedman test and Wilcoxon's signed-rank test were used for multiple comparisons in this paper. Table 10 shows the average Friedman ranking of each algorithm. The overall ranking value of MRSA is 1.59, which ranks first among all the algorithms. The remaining seven algorithms are ranked as follows: RSA, HHO, EO, WOA, GWO, SSA, and TSA. In solving F1-F13 in different dimensions, MRSA ranks first, with HHO and RSA second and third, respectively. For the fixed-dimension functions F14-F23, MRSA, EO, and RSA rank in the top three. In either case, MRSA ranks better than RSA. The results of Wilcoxon's signed-rank test are shown in Table 11. For F1-F13 (Dim = 30, 100, 500, and 1000), MRSA outperformed EO, TSA, GWO, SSA, WOA, and RSA at the 0.05 significance level, but there was no significant difference between MRSA and HHO. For F14-F23, MRSA outperformed TSA, GWO, WOA, and RSA at the 0.05 significance level, but there was no significant difference between MRSA and HHO, EO, or SSA. These results statistically demonstrate that the improvement strategies proposed in this paper effectively help MRSA balance its exploitation and exploration capabilities and give it a better ability to avoid local optima.

Performance Comparison Tests on the IEEE CEC2017 Benchmark Functions.
To further verify the performance of MRSA, the IEEE CEC2017 benchmark functions (listed in Table 12) were used, and MRSA was compared with BOA [38], HHO [42], AOA [47], SSA [45], PFA [48], and TDO [46]. For a fair comparison, all the algorithm parameters were set the same as those used by the authors of the original literature, as shown in Table 13.
The dimension of the CEC2017 benchmark functions was set to 30, using the same experimental platform. Table 14 shows the statistical results of each algorithm run independently 51 times.
From the analysis in Table 14, for the unimodal test function F3, MRSA outperformed all the comparison algorithms; although MRSA could not stably obtain the optimal solution, it performed the best among all the algorithms, indicating a stronger exploitation ability. For the multimodal test functions F4-F10, MRSA performs best on four functions (F4, F5, F8, and F10), while PFA achieves the best results on F6, F7, and F9, with MRSA ranking second in each case. To perform a statistical analysis of MRSA and the six competing algorithms, the Friedman test and Wilcoxon's signed-rank test were again used for multiple comparisons. Table 15 shows the average Friedman ranking of each algorithm. The overall ranking value of MRSA is 1.3929, which ranks first among all the algorithms; the remaining six algorithms are ranked as follows: PFA (2.2143), HHO (3.5714), TDO (5.2857), AOA (5.6429), BOA (6), and SSA (6.6071). The results of Wilcoxon's signed-rank test are shown in Table 16: MRSA outperformed BOA (p = 0.000004), HHO (p = 0.000004), AOA (p = 0.000004), SSA (p = 0.000004), PFA (p = 0.039321), and TDO (p = 0.000004) at the 0.05 significance level, which statistically demonstrates that the improvement strategies proposed in this paper effectively help MRSA balance its exploitation and exploration capabilities and give it a better ability to avoid local optima.

Robot Path Planning Based on MRSA.
To verify the performance of the improved strategy, MRSA is applied to robot path planning in this paper. Each crocodile represents a possible path. It is assumed that there are N possible paths, and the dimension D is determined by the number of connections from the starting point to the destination point. The environment is modeled using the raster (grid) method, and the raster values encode the obstacles at each location. The robot's working environment is treated as a plane divided into a lattice of cells, and the feasible and obstacle zones are determined from the raster values: a grid value of 0 defines a feasible cell, and 1 defines an obstacle cell. The cost of a candidate path is the sum of its Euclidean segment lengths, d_j = sqrt((x_{j+1} − x_j)^2 + (y_{j+1} − y_j)^2), where j denotes the j-th dimension of each crocodile. In the robot path planning experiments, the population size is 100 and the number of iterations is 20. RSA [41], HHO [42], and EO [43] are used as competitors. Each algorithm works on a 10 × 10 model, and the optimal routes are shown in Figure 4. To eliminate chance, each algorithm was run 10 times, and the mean, best, and worst values of each algorithm were recorded; the statistical results are shown in Table 17. As shown in Figure 4, MRSA finds the shortest route, followed by HHO, while EO and RSA are clearly trapped in local optima. As seen from Table 17, MRSA is the best among all the algorithms in terms of best cost, mean cost, and worst cost, which indicates that MRSA can consistently provide excellent solutions. Figure 5 shows the convergence curves of the four algorithms: MRSA has the fastest convergence speed and higher convergence accuracy. Therefore, the introduction of multiple strategies makes the search more comprehensive, greatly improving the search capability of MRSA and planning the least costly route.
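The path cost and grid feasibility checks described above can be sketched as follows; the grid layout and function names are illustrative, not taken from the paper.

```python
import numpy as np

def path_length(waypoints):
    """Total path cost: sum of segment lengths
    d_j = sqrt((x_{j+1} - x_j)^2 + (y_{j+1} - y_j)^2)."""
    pts = np.asarray(waypoints, dtype=float)
    seg = np.diff(pts, axis=0)                  # per-segment (dx, dy)
    return float(np.sqrt((seg ** 2).sum(axis=1)).sum())

def path_is_feasible(cells, grid):
    """A path is feasible only if every visited raster cell is free (value 0);
    cells with value 1 are obstacles."""
    return all(grid[r][c] == 0 for r, c in cells)
```

In an MRSA-based planner, `path_length` would serve as the fitness function, with infeasible paths penalized or repaired.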

Conclusion
This paper proposes a novel variant of the reptile search algorithm, called MRSA. First, the adaptive chaotic reverse learning strategy combines the advantages of the reverse learning mechanism and chaotic mapping to enhance population diversity. Second, the elite alternative pool strategy balances the exploitation and exploration capabilities by controlling the reference points followed by the population. Finally, the shifted distribution estimation strategy makes full use of the dominant population information to guide the direction of individual evolution, thus improving the performance of RSA. The superiority of MRSA was verified on 23 benchmark functions, the IEEE CEC2017 benchmark functions, and robot path planning problems.
The experimental results show that the adaptive chaotic reverse learning strategy can effectively improve population diversity, with the tent map proving the most effective. MRSA outperforms the comparison algorithms in terms of convergence accuracy, convergence speed, and stability. The results on the multimodal functions F8-F23 among the 23 benchmark functions show that the elite alternative pool strategy balances exploitation and exploration effectively and prevents the algorithm from falling into local optima. The adaptive chaotic reverse learning strategy enhances population diversity, and the shifted distribution estimation strategy improves the convergence speed and accuracy of the algorithm by learning information about the dominant population. In addition, the test results were analyzed using the Friedman test and the Wilcoxon signed-rank test; the statistical results show that MRSA is significantly more effective than the comparison algorithms.
In a subsequent study, we plan to examine the following issues. First, the shifted distribution estimation strategy increases the computational cost of MRSA, so optimizing the algorithm's structure and performance needs further investigation and discussion. Second, the capacity and composition of the elite alternative pool need further analysis. Additionally, MRSA can be extended to multiobjective and binary versions. We will also consider solving real-world optimization problems in image processing, industry, neural networks, text, and data mining.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
