Cuckoo Algorithm Based on Global Feedback

This article proposes a cuckoo search algorithm based on a global feedback strategy (GFCS) and innovatively introduces a "re-fly" mechanism. In GFCS, the process of the algorithm is adjusted and controlled by a dynamic global variable, which also serves as an indicator of whether the algorithm has fallen into a local optimum. According to the change of the global optimum value in each round, the dynamic global variable is adjusted to optimize the algorithm. In addition, we set new formulas for the other main parameters, which are likewise adjusted by the dynamic global variable as the algorithm progresses. When the algorithm converges prematurely and falls into a local optimum, the current optimum is retained, and the algorithm is initialized and re-executed to find a better value. We call this process "re-fly." To verify the effectiveness of GFCS, we conducted extensive experiments on the CEC2013 test suite. The experimental results show that GFCS outperforms the other compared algorithms in terms of the quality of the obtained solutions.


Introduction
Swarm intelligence algorithms are designed to simulate the behavior of natural biological groups and have been extensively applied to complex, highly nonlinear optimization problems. As an emerging class of optimization methods, swarm intelligence has attracted increasing research attention. Researchers have proposed a variety of algorithms, such as the ant colony optimization algorithm (ACO) [1], differential evolution (DE) [2], the particle swarm optimization algorithm (PSO) [3], the artificial bee colony algorithm (ABC) [4], the firefly algorithm (FA) [5], and the cuckoo search algorithm (CS) [6]. These algorithms have been applied to a variety of engineering optimization problems and have considerable research value. Hence, developing more effective swarm intelligence algorithms remains a promising direction.
The CS algorithm, inspired by the parasitic brooding behavior of cuckoos, was proposed by Yang and Deb in 2009. This parasitic behavior has become a breeding strategy for cuckoos: in most cases, they lay their eggs in the nests of other bird species. However, the host bird may discover that an egg is not its own, at which point it either throws away the foreign egg or abandons the nest and builds a new one. The CS algorithm employs methods such as greedy selection, random walk, and Lévy flight [7] to seek the global optimal solution. Compared with uniformly or Gaussian-distributed steps, the long-jump pattern of the Lévy flight searches the solution domain more effectively. The combination of the Lévy flight's advantages with local search ability makes CS one of the most effective optimization algorithms. Compared with other swarm intelligence algorithms, CS has fewer parameters, simple operation, and strong optimization ability, and it is effective in solving optimization problems. On the other hand, it suffers from an imbalance between exploration and exploitation and easily falls into local optima.
Since similar search strategies, the Lévy flight and the random walk, are adopted in most CS variants, the search behaviors of the cuckoos are similar, which can easily lead the algorithm into a local optimum and premature convergence. Sometimes, the algorithm converges to a local optimum at a very early stage, and the whole run ends without obtaining a better fitness value. Under these circumstances, it is not only difficult to obtain a better value, but subsequent computing resources are also wasted.
Based on this situation, a new type of cuckoo algorithm (GFCS) is proposed. GFCS dynamically adjusts the parameters of the algorithm according to whether each round of iteration produces a better fitness value. When the fitness value remains unchanged for a long time, the current optimal value is retained and the algorithm is reset, resulting in better performance for the same computational budget. Briefly, the core of this work is as follows. GFCS is a CS algorithm that employs random walk and Lévy flight to search for the global optimum. We propose three innovations over the original CS algorithm: (i) We introduce the concept of global feedback to adjust a dynamic global variable according to the current round of iterations and to determine whether the algorithm has fallen into a local optimum. (ii) The fixed-parameter pattern of the original CS algorithm is replaced: we set parameter formulas that vary with the iteration round and are controlled by the dynamic global variable. (iii) We introduce the "re-fly" mechanism: when the algorithm falls into a local optimum, it saves the current global optimum value, and the algorithm is initialized and re-executed to find a better value.
The article is organized as follows. Section 2 reviews the original CS and its technical details. In Section 3, the literature on CS and its application to optimization problems is presented. Section 4 elaborates on the proposed algorithm. A comparative analysis of numerical experiments between GFCS and CS, multiple CS variants, and several other state-of-the-art algorithms is presented in Section 5. Finally, in Section 6, we summarize the proposed algorithm.

Basic Cuckoo Search Algorithm
The cuckoo search algorithm (CS) is a swarm intelligence algorithm inspired by the natural behavior of some cuckoo species that lay their eggs in other birds' nests. Different from other algorithms, the search process of CS is divided into two stages, global search and local search, corresponding to exploration and exploitation, respectively. The global stage is carried out by the Lévy flight; since the Lévy distribution has infinite mean and variance, it helps to explore the solution space efficiently. The local stage is executed using a biased random walk.
In the CS algorithm, the number of available host nests is constant. In each iteration, each cuckoo lays only one egg and places it randomly into a host's nest. Each egg represents a candidate solution to the problem. The host bird has a probability Pa (Pa ∈ [0, 1]) of discovering the cuckoo egg in its nest. When this happens, the laid egg is thrown away or the host bird simply abandons the nest to make a new one.
It is assumed that N is the number of cuckoos, D represents the dimension of the problem, the position of the i-th cuckoo is denoted as X_i, and t represents the current iteration. Then, the new position can be generated by the following equations:

x_i^(t+1) = x_i^t + α ⊕ Lévy(s, λ),  (1)

α = α_0 (x_i^t − x_best^t),  (2)

where α is the step size, which should be related to the scale of the problem, and the product ⊕ represents the entry-wise multiplication of the corresponding positions of the vectors. The Lévy flight draws its steps from the heavy-tailed distribution

Lévy(s, λ) ~ (λ Γ(λ) sin(πλ/2) / π) · (1 / s^(1+λ)),  (3)

where x_best^t represents the optimal solution of the t-th generation, s represents the feature scale, λ represents the power coefficient (1 < λ < 3), and Γ represents the gamma function. In addition, α_0 represents the scaling factor, which controls the size of the step.
In equations (1) and (3), s represents the step length of the Lévy flight, and it is generated by Mantegna's algorithm [8] as follows:

s = μ / |ν|^(1/λ),  (4)

where μ and ν are random numbers drawn from normal distributions:

μ ~ N(0, σ_μ²),  ν ~ N(0, σ_ν²),  (5)

where the value of σ_ν is usually set to 1, and the formula for σ_μ is

σ_μ = { Γ(1 + λ) sin(πλ/2) / [ Γ((1 + λ)/2) · λ · 2^((λ−1)/2) ] }^(1/λ).  (6)

Then, the formula for the local random walk can be expressed as

x_i^(t+1) = x_i^t + r (x_j^t − x_k^t),  (7)

where r is a scaling factor uniformly distributed in the range [0, 1], and x_j^t and x_k^t represent two different solutions randomly selected from the population.
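As a concrete illustration, the Mantegna step-generation procedure described above can be sketched in Python (a minimal sketch; the function and variable names are ours, and λ = 1.5 is an example value, not one prescribed by the paper):

```python
import math
import random

def levy_step(lam=1.5):
    """One Lévy-flight step length via Mantegna's algorithm.

    lam plays the role of the paper's power coefficient; lam = 1.5 is a
    common example value, not a value prescribed by the paper.
    """
    # Standard deviation of the numerator variable mu.
    sigma_mu = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
                / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))
                ) ** (1 / lam)
    mu = random.gauss(0.0, sigma_mu)    # mu ~ N(0, sigma_mu^2)
    nu = random.gauss(0.0, 1.0)         # nu ~ N(0, 1), i.e. sigma_nu = 1
    return mu / abs(nu) ** (1 / lam)    # heavy-tailed step
```

Most draws are of moderate size, but the heavy tail occasionally produces the long jumps that give the Lévy flight its exploratory power.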
Based on the previous introduction, the original CS algorithm framework is shown in Algorithm 1.
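For reference, the two-phase framework of Algorithm 1 can be written as a compact, runnable sketch (pure Python; the sphere objective, dimension, and parameter values are illustrative choices of ours, not the paper's experimental settings):

```python
import math
import random

def levy_step(lam=1.5):
    # Mantegna's algorithm for one Lévy step (tail index lam).
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / lam)

def cuckoo_search(f, dim=5, n=15, max_it=300, pa=0.25, alpha0=0.01,
                  lo=-100.0, hi=100.0):
    def clip(x):
        return [min(hi, max(lo, v)) for v in x]

    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in nests]
    best = min(range(n), key=lambda i: fit[i])
    for _ in range(max_it):
        # Global phase: Lévy flight biased toward the current best nest.
        for i in range(n):
            cand = clip([nests[i][d]
                         + alpha0 * levy_step() * (nests[i][d] - nests[best][d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < fit[i]:                     # greedy selection
                nests[i], fit[i] = cand, fc
        # Local phase: biased random walk on roughly a fraction Pa of nests.
        for i in range(n):
            if random.random() < pa:
                j, k = random.sample(range(n), 2)
                cand = clip([nests[i][d]
                             + random.random() * (nests[j][d] - nests[k][d])
                             for d in range(dim)])
                fc = f(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
        best = min(range(n), key=lambda i: fit[i])
    return nests[best], fit[best]
```

For example, `cuckoo_search(lambda x: sum(v * v for v in x))` minimizes a 5-dimensional sphere function; greedy selection guarantees the best fitness is monotonically non-increasing.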

Related Works
The main advantages of the CS algorithm are its few parameters, simple operation, easy implementation, effective random search path, and strong search ability. Many scholars have proposed improvement strategies for the cuckoo algorithm. The main research directions in previous years include improving the step size of the Lévy flight and the random walk, adjusting the parameter Pa by introducing a new Pa formula, or setting a new step size adjustment formula to tune the performance of the algorithm. Valian et al. proposed an improved cuckoo algorithm [9] for reliability optimization problems: the step size of the Lévy flight was optimized, and the probability Pa of cuckoo eggs being found was adjusted; as the number of iterations grew, the step size α and Pa were gradually reduced according to fixed formulas. Naik et al. proposed a new cuckoo algorithm [10] that abandoned the Lévy flight in favor of a step size based on the number of iterations and the fitness values of the best, worst, and average nests of the current generation. Ong proposed an adaptive cuckoo algorithm [11], which compared the current fitness value with the average fitness value and used different step size rules according to the comparison result, ensuring faster convergence in the early stage and high convergence accuracy in the later stage. Wang et al. proposed a cuckoo algorithm with different scale factors [12]; during the iteration process, random numbers were introduced to perturb each step, which improved the performance of the algorithm but reduced its stability. Li and Yin proposed a modified cuckoo search algorithm with a self-adaptive parameter method [13].
A linear change of parameters was achieved by introducing the ratio of the current generation number to the total number of generations, and, according to the success rate of the previous generation's evolution, different schemes for Pa were selected. Huang et al. proposed a cuckoo search algorithm using an elite opposition-based strategy [14], in which the opposite solutions of elite individuals in the population were generated by an opposition-based strategy; the algorithm was guided toward the optimal solution by simultaneously evaluating the current population and the opposite population. Based on the elite opposition-based strategy mentioned previously, Baset et al. proposed a new cuckoo search algorithm [15] for solving integer programming problems, which had faster convergence and higher computational accuracy.
Some scholars adjusted or replaced the Lévy flight with new models, such as other algorithms or Gaussian functions, to speed up the optimization. Kamoona et al. proposed an enhanced cuckoo algorithm [16], which replaced the Lévy flight with a Gaussian virus diffusion idea and innovatively introduced the search formula of the artificial bee colony algorithm. Zheng and Zhou proposed a new cuckoo algorithm based on Gaussian distribution optimization [17]; the algorithm replaced the Lévy flight with the Gaussian distribution to a certain extent and showed relatively good local optimization performance. He et al. proposed a Spark-based Gaussian bare-bones cuckoo search with dynamic parameter selection [18]: for Pa, a pool of candidate values in the range [0.01, 0.5] was introduced, and the step size was generated by a Gaussian distribution.
Inspired by the organizational evolutionary algorithm for numerical optimization, Zheng and Zhou designed a novel algorithm, the cooperative co-evolutionary cuckoo search algorithm (CCCS) [19], which combined dynamic populations and evolutionary operators for solving unconstrained, constrained, and engineering optimization problems. Cheng and Wang proposed a neighborhood-attracting cuckoo algorithm [20], which introduced the concept of a neighborhood into the cuckoo algorithm, moving each nest closer to the best nest in its area. At the same time, the difference between an individual's fitness value and that of the best individual was analysed, and the step size of the iteration was determined by this difference.
Some researchers tried to optimize the cuckoo algorithm by mixing multiple algorithms, selecting, randomly or based on certain feedback, the most suitable algorithm for the current generation, which diversified the iteration selection of the algorithm. The disadvantage was that, while promoting the algorithm's search ability, this also increased the instability of the algorithm. Rakhshani and Rahati proposed a new cuckoo algorithm based on Snap-Drift [21]; the algorithm divided the entire optimization process into two modes, snap and drift, selecting the best search mode according to the optimization effect of the current generation. Peng et al. proposed a multistrategy serial cuckoo algorithm [22], which divided the execution process of the overall algorithm into three stages: jump learning, Gaussian walking learning, and begging behavior. Different strategies were formulated at each stage and achieved better results. In addition, they also proposed another similar algorithm, the multistrategy reconciliatory cuckoo search algorithm [23], which updated individuals based on a harmonic strategy; the adaptive step size guided the cuckoo to seek the optimum in a better direction, and three improved update methods were explored from three perspectives: the individual's own neighborhood, the current optimal individual, and a random position. Gao et al. proposed a multistrategy adaptive cuckoo algorithm [24] designed with five different search ideas; according to the performance of each iteration, a selection ratio was set for these five strategies, making the algorithm tend toward diversification.

Algorithm 1: Framework of the original CS.
Input: population size N, the maximum number of iterations MaxIt, problem dimension D, and the probability of the bird's nest being found Pa.
(1) Randomly initialize the solution set with a population size of N: x_i^t (i = 1, 2, ..., N);
(2) Calculate the fitness of all solutions f_i^t = f(x_i^t); get the best solution x_best^t and its fitness value f_best;
(3) for t = 1 : MaxIt do
(4) for i = 1 : N do
(5) Generate a new solution x_new by the Lévy flight mode in equation (1);
(11) end for
(12) Throw away a small fraction (controlled by Pa) of the worst solutions and use equation (7) to generate new solutions;
In addition, the cuckoo algorithm has seen some other improvements. Zhang et al. proposed a dynamic adaptive cuckoo search algorithm [25], which introduced feedback into the algorithm framework and established a closed-loop control system for the CS algorithm parameters; the improvement rate was maintained at 20 percent, and Pa and α were dynamically adjusted. Walton et al. proposed an adjusted cuckoo algorithm [26] that classified bird nests: the excellent nests were designated as top nests, and new nests were constructed from the relationships between the top nests, thus selecting the best ones and improving the local exploitation ability of the algorithm, while the poor nests used a larger step size to improve the global ability of the algorithm.

Cuckoo Algorithm Based on Global Feedback
The cuckoo algorithm based on global feedback is described in the following sections.

Motivation.
In Lévy flights, it is difficult to balance large-scale global exploration in the early stages with fine-grained local search in the later stages using a fixed step size and probability of nest discovery. To improve the dynamic search characteristics of the CS algorithm, an adaptive adjustment scheme for the step size α and the probability of nest discovery Pa is proposed to coordinate the overall search performance of the algorithm according to the evolution of the globally optimal individual. Due to the guidance of the optimal solution in the population, the CS algorithm easily converges prematurely when solving some complex optimization problems and falls into a local optimum early; even at the end of the whole run, the algorithm may fail to jump out of the local optimum. Given this situation, we introduce an additional dynamic parameter to adjust the step size and Pa according to the change of the best fitness value of the current generation. If it is determined that the algorithm is stuck in a local optimum after many rounds, we consider resetting the algorithm.

Step Size and Pa Adjustment.
The main parameters in the CS algorithm are the step size α and the probability of the bird's nest being found, Pa. α mainly controls the step size of the algorithm, and Pa mainly controls the diversity of each round of exploration. If a smaller value of Pa is used, the nests tend to concentrate and the local search ability of the algorithm is strengthened, but the diversity of the algorithm gets worse. If the value of Pa is small but the value of α is large, the performance of the algorithm will be poor, resulting in a large increase in the number of iterations. If the value of Pa is large but the value of α is small, the convergence rate is fast, but the optimal solution may not be found. In the original CS algorithm, both Pa and α use fixed values that cannot change during the iteration. It is then difficult to guarantee the speed of global exploration in the early stage, and the lack of overall exploration capability may leave the algorithm unable to reach the global optimum; in the later stage, due to the slightly larger step size, it is difficult to perform a very fine local search. To improve the performance of the algorithm, an adaptive step size formula, equations (8) and (9), and an adaptive formula for Pa, equation (10), are introduced, where t represents the current number of iterations, MaxIt represents the total number of iterations, α_min and α_max represent the preset minimum and maximum step sizes, respectively, Pa_max represents the preset maximum value of Pa, T represents a constant between 1 and 10, and m represents a dynamically adjusted global parameter, which will be introduced in the following sections.

Equations (8) and (9) follow parts of [9]; we change the upper and lower bounds of their definitions and use the dynamic global variable to adjust their values again. We choose α_max = 0.5 and α_min = 0.005.
For equation (10), inspired by the sigmoid function [36] in deep learning, we want the parameter Pa to transition smoothly from a larger value in the early stage to a smaller value in the later stage, like an inverted sigmoid, over the whole run; equation (10) is designed accordingly. To prevent the curve from changing too drastically or too slowly, an adjustable variable T is added to control the whole change process.
To ensure that Pa(t) takes a large value when t is small at the beginning of the algorithm, we choose Pa_max = 0.5. To ensure that the curve of Pa(t) changes quickly but not too steeply, we conducted experiments on the value of T. The experimental results show that the overall effect is best when T is 6, so we choose T = 6.
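To make the schedules concrete, here is one plausible Python reading of the adaptive α and Pa described above: α decays exponentially from α_max to α_min in the style of [9], Pa follows an inverted sigmoid controlled by T, and both are enlarged by the dynamic parameter m. This is a sketch of our interpretation, not the paper's exact equations (8)–(10):

```python
import math

# Constants taken from the paper's stated choices.
ALPHA_MAX, ALPHA_MIN = 0.5, 0.005
PA_MAX, T = 0.5, 6.0

def alpha_schedule(t, max_it, m=1.0):
    """Exponential decay from ALPHA_MAX to ALPHA_MIN over the run,
    enlarged by the dynamic global parameter m (our assumed form)."""
    c = math.log(ALPHA_MIN / ALPHA_MAX) / max_it
    return min(ALPHA_MAX, m * ALPHA_MAX * math.exp(c * t))

def pa_schedule(t, max_it, m=1.0):
    """Inverted-sigmoid decay from PA_MAX toward 0; T controls the
    sharpness of the transition (our assumed form)."""
    x = T * (2.0 * t / max_it - 1.0)   # maps t to roughly [-T, T]
    return min(PA_MAX, m * PA_MAX / (1.0 + math.exp(x)))
```

Under this reading, both parameters start near their maxima, shrink as t grows, and are pushed back up whenever m is enlarged by the global feedback, restoring exploration.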

Global Feedback and Re-Fly.
The previous formulas ensure that Pa and α change quickly from larger values in the early stage to relatively small values as the algorithm progresses. This process is essentially irreversible, and the algorithm is likely to fall into a local optimum, affecting subsequent global exploration. To address this, we introduce a dynamic adjustment parameter m and set its initial value to 1. As the algorithm runs, m is updated as follows: if the optimal fitness value does not change after a round of iteration, we increase the value of m; under the influence of m, the step size α and Pa are enlarged, and the global exploration ability of the algorithm is improved. If the optimal fitness value changes after a certain round, we consider that the algorithm has found a better solution and reset m to 1. If the algorithm has fallen into a local optimum (a nonglobal optimum), m continues to increase without a better fitness value being obtained. When m > 2, we make a judgment.
If t < MaxIt/2, we keep the current best fitness value, reset all nests, set m to 1, and re-execute the algorithm. This stage is called "re-fly." Otherwise, we continue to execute the algorithm and enlarge m, making the step size gradually grow until a better fitness value appears.
In the traditional CS algorithm and some of its improved versions, the algorithm may converge prematurely at the beginning and fall into a local optimum, so that at the end of the entire run it has not obtained a better value. Therefore, the "re-fly" method is introduced in GFCS: if the best fitness value does not change after 1000 iterations and the current number of rounds has not reached half of the total number of rounds, the algorithm is restarted so that the subsequent computing power is reserved to obtain better results.
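The feedback-and-restart logic just described can be sketched as follows. This is a sketch under our assumptions: the paper does not state the per-round increment of m, but since m starts at 1, the restart threshold is m > 2, and stagnation is judged after 1000 iterations, an increment of 0.001 per stagnant round is a consistent reading:

```python
def update_m(m, improved, step=0.001):
    # Reset m to 1 when the best fitness improved this round; otherwise
    # grow it so that alpha and Pa are enlarged by the feedback.
    # step = 0.001 is our inference: 1000 stagnant rounds move m from 1 past 2.
    return 1.0 if improved else m + step

def should_refly(m, t, max_it):
    # "Re-fly": restart the population (keeping the best-so-far) only if the
    # run is judged stuck (m > 2) and less than half the budget is spent.
    return m > 2.0 and t < max_it / 2
```

When `should_refly` is true, the nests are reinitialized, m is reset to 1, and the best fitness found so far is retained; late in the run the restart is skipped and m simply keeps growing the step size.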
Based on the previous descriptions, the implementation of GFCS is shown in Algorithm 2.
To show the algorithm process more visually, the flowchart of the algorithm is shown in Figure 1.

Algorithm Complexity Analysis.
To demonstrate that the GFCS algorithm does not increase the time and space complexity of the CS algorithm, we analyse the algorithmic complexity of GFCS and CS. For the CS algorithm, assume that the dimension of the problem is D, the time to evaluate the D-dimensional function is positively related to D, the total number of iterations is G, and the population size is n. The time complexity of the CS algorithm is then approximately O(GDn). GFCS is modified only in the Lévy flight phase and is consistent with the CS algorithm in the local walk phase, so we only need to analyse the differences in the former phase. GFCS requires additional calculations of the variables α and Pa in each iteration and determines and updates the value of the parameter m. The time consumed by these calculations is related only to G, not to D or n. In addition, "re-fly" is performed at most once in the algorithm, and its time consumption is related only to n and D, not to G. Therefore, the total time complexity of GFCS is still O(GDn).
In terms of space complexity, GFCS does not use extra storage space to store data, so it does not increase the space complexity of the algorithm. In summary, the complexity of GFCS is in the same order of magnitude as that of the original CS. In the subsequent experiments, we will further compare the total time consumption of GFCS with the original CS on the test set functions.

Experimental Study
The experimental study is described in the following sections.

Experimental Environment and Benchmark Functions.
To verify the performance of GFCS, experiments are carried out on the widely used CEC2013 test suite [37]. CEC2013 contains 28 test functions, among which f1–f5 are unimodal functions, f6–f20 are multimodal functions, and f21–f28 are composition functions. The search range of each function is [−100, 100]. All experiments were performed on the Windows 10 platform, and all algorithms were implemented in MATLAB R2021a.
In our experiments, f_opt represents the standard optimal value of the objective function and f_min represents the actual optimal value obtained by our algorithm. We record the error res = f_min − f_opt as the evaluation criterion; therefore, the closer res is to 0, the better. Furthermore, to reduce statistical error, the average error over all independent runs was chosen as the performance metric. To ensure the fairness of the experiment, the population size n is set to 25, with the dimension D = 30 and the maximum number of iterations MaxIt = 20000. Each test function was run 30 times in the same environment, and its mean and standard error values were recorded.

Comparison with CS and Other Variants.
To explore the accuracy and convergence of GFCS, a comparative analysis was carried out with the original CS and seven of its variants: CS [6], ACS [10], NACS [20], GCS [17], ICS [9], ACSA [11], VCS [12], and MSRCS [23]. Table 1 lists the core ideas and specific parameters of each algorithm.
In this section, we compare GFCS with the original CS and the 7 improved CS variants. Tables 2 and 3 show the 30-dimensional test results of GFCS and the other 8 CS algorithms on the CEC2013 test set. In the tables, bold letters indicate the best results, and "Mean" and "Std" represent the mean and standard error values, respectively. In addition, the average ranking results of the Friedman test are added at the bottom of the tables, where "+" indicates that GFCS performs better than the corresponding algorithm, "−" indicates that it performs worse, and "≈" indicates that the results are not significantly different.
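The Friedman average ranks reported at the bottom of the tables can be reproduced from the per-function mean errors as follows (a minimal sketch with hypothetical data; for a minimization problem, rank 1 goes to the smallest mean error, and ties share the average of their positions):

```python
def average_ranks(errors):
    """errors[alg] is the list of mean errors of one algorithm, one entry
    per benchmark function. Returns the Friedman-style average rank per
    algorithm; tied values share the mean of their rank positions."""
    algs = list(errors)
    n_fns = len(next(iter(errors.values())))
    ranks = {a: 0.0 for a in algs}
    for j in range(n_fns):
        col = sorted(algs, key=lambda a: errors[a][j])
        i = 0
        while i < len(col):
            k = i
            # Extend over a run of tied error values.
            while k + 1 < len(col) and errors[col[k + 1]][j] == errors[col[i]][j]:
                k += 1
            shared = (i + k) / 2 + 1          # mean of 1-based positions i..k
            for a in col[i:k + 1]:
                ranks[a] += shared
            i = k + 1
    return {a: ranks[a] / n_fns for a in algs}
```

For instance, with three algorithms over two functions, `average_ranks({"A": [0.0, 1.0], "B": [1.0, 2.0], "C": [2.0, 0.5]})` yields average ranks 1.5, 2.5, and 2.0, respectively.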
The data in Tables 2 and 3 show that GFCS works best on f2, f4, f6, f7, f9, f12, f13, f15, f18, f19, f20, f23, f24, f25, f26, and f27, which include unimodal, multimodal, and composition functions. Among the unimodal functions, GFCS is clearly better than the other algorithms on f2 and f4, while the performance of all algorithms tends to be consistent on f1, f3, and f5. The results of GFCS on the multimodal functions f6, f7, f9, f12, f13, f15, f18, f19, and f20 are all better than those of the other algorithms, and the result on f17 is better than all algorithms except VCS. On the composition functions f23, f24, f25, f26, and f27, GFCS obtains better optimal values. The Friedman test results at the bottom of Tables 2 and 3 show that GFCS beats each of the other algorithms on at least 18 functions. The average rankings of GFCS over the 28 tested functions in the two tables are 1.43 and 1.46, respectively, ranking first among the 9 compared CS algorithms. The analysis shows that GFCS has good stability and convergence for different types of problems.
In addition, to visually display the ranking of the results in Tables 2 and 3, the ranking of the average error value (minimization problem) of each function is summarized, and stacked histograms based on the ranking statistics are drawn. In Figures 2 and 3, each ranking is represented by a color block; the better the algorithm performs, the lighter the color of the corresponding block. Figure 2 shows the ranking of GCS, ACSA, NACS, VCS, and GFCS, and Figure 3 shows the ranking of MSRCS, ACS, ICS, CS, and GFCS.
As can be seen from the figures, the white block that marks first place has the largest proportion for GFCS, and GFCS never receives the red block that marks fifth place, indicating that GFCS has the best performance among the compared algorithms. In addition, on some functions, such as f11 and f16, GFCS does not achieve first place but still ranks second or third, showing that it remains competitive. Overall, for the CEC2013 test set with D = 30, GFCS has a considerable advantage over the other algorithms.

Algorithm 2: Framework of GFCS.
Input: population size N, maximum number of iterations MaxIt, problem dimension D, mode switching parameter Pa.
(1) count = 0;
(2) Randomly initialize the solution set with a population size of N; get the best solution x_best^t and its fitness value f_best; m = 1;
(4) for t = 1 : MaxIt do
(5) Calculate α and Pa using equations (8) and (10) with t, respectively;
(6) for i = 1 : N do
(7) Generate a new solution x_new by the Lévy flight mode in equation (1);
(8) if ... end if
(11) end for
(12) Throw away a small fraction (controlled by Pa) of the worst solutions and use equation (7) to generate a new solution;
(13) Update the global optimal solution f_best^new;

Table 1 (excerpt). GFCS: dynamic variables and the "re-fly" mechanism are added to the operation of the algorithm, and feedback is obtained per iteration round to improve algorithm performance; α_max = 0.5, α_min = 0.005, Pa_max = 0.5, and T = 6.

The convergence curves are shown in Figure 4, where the horizontal axis indicates the number of iterations and the vertical axis indicates the error values. From Figure 4, it can be observed that GFCS converges significantly faster than the other competitors on f2, f4, f6, f12, f19, and f24. For the remaining functions, although the convergence speed of GFCS is not the fastest among the compared algorithms, the best optimization results can still be achieved as the iteration progresses.
In addition, we can find that, for functions f2, f4, f6, f13, f19, and f24, GFCS obtains a smoother descending curve during convergence because it can quickly adjust the parameter Pa and step size α to obtain faster convergence speed and more precise search results.
Other multimodal or mixed functions tend to make the algorithm fall into a local optimum, and once the algorithm falls into a local optimum very early, its optimum value is usually difficult to improve substantially. For these functions, the advantages of GFCS are even more obvious. For f9, f12, f15, f18, and f23, Figure 4 shows that the optimization curve of GFCS falls rapidly again after a period of stagnation, jumping out of the current local optimum. At this point, the global feedback and "re-fly" components of GFCS come into play, letting the algorithm escape the local optimum and find a better solution. Since the algorithm balances exploration and exploitation according to the outcome of each iteration round, the convergence curve is not a continuous decline but a staged, gradual approach to the global optimal solution.
According to the previous analysis, GFCS has good convergence speed and optimization ability on various types of test functions and is able to jump out of local extrema. Therefore, we can conclude that GFCS achieves better performance than the other algorithms when dealing with 30-dimensional problems.

Effect of Dimension Growth.
As can be seen in the previous sections, GFCS outperforms the others in handling the 30-dimensional functions of the CEC2013 benchmark. However, a good algorithm should also be able to generate high-quality solutions to high-dimensional problems. To study the impact of dimensional growth on GFCS performance, we investigate the scalability of the algorithm on the 28 test functions of CEC2013 with the problem dimension scaled from 30 to 50. In this section, we choose MSRCS, VCS, ACS, ECS, ICS, and GFCS, which performed better in the previous sections, for comparison experiments; the experimental results are shown in Table 4 and Figure 5.
From Table 4, GFCS wins on f2, f7, f9, f12, f13, f19, f21, and f27. Although it does not finish first on many other functions, it still achieves a relatively high ranking. Likewise, MSRCS is the champion on f4, f8, f15, f16, f18, f20, f23, f24, and f25; ICS is the champion on f10 and f11; VCS obtains the best results on f14, f17, and f22; CS obtains the best solution on f5; and ACS obtains the best solution on f6. Furthermore, all algorithms achieve the same result on f3. As can be seen at the bottom of the table, GFCS outperforms each of the other algorithms on at least 17 functions and has an average rank of 1.96. In addition, Figure 5 shows that GFCS still has the lightest overall color blocks and high rankings on most functions. More specifically, GFCS achieves the first- or second-best results on most functions and retains a clear advantage over the other algorithms. Based on all previous experimental analyses, we can conclude that, although the advantage of GFCS mildly decreases when the dimensionality of the problem increases from 30 to 50, GFCS is still the best algorithm for these benchmark functions when combining the results of the previous experiments.
To visually compare the optimization process of GFCS with the other algorithms in the case of D = 50, we draw the convergence curves of some functions in Figure 6. Figure 6 shows that, on the six functions f2, f7, f12, f13, f19, and f21, GFCS converges significantly faster than the other competitors and is able to achieve better optimal values. On f4, f9, and f27, although GFCS is not significantly faster than the other competitors, it is relatively fast and can eventually achieve the best fitness value. In conclusion, the proposed GFCS has better performance than the other competitors.

Comparison with Other Evolutionary Algorithms.
To further confirm the superiority of GFCS, we select some other evolutionary algorithms for comparison. The differential evolution algorithm [2] (DE) and firefly algorithm [5] (FA) are widely studied and used swarm intelligence optimization algorithms. To further verify the performance of GFCS, DE, FA, and some variants of DE, ABC, and BSO are selected for comparison, the variants being NABC [38], ABCX [39], CUDE [40], and MSBSO [41].
In view of the fairness of the experiments, the population size N = 25, the problem dimension D = 30, and the number of evaluations = 1E6 are set for these algorithms, and each test function is run 30 times independently. For the other parameters of these algorithms, we follow the settings in the corresponding literature. The experimental results are shown in Table 5, and the best result is shown in bold. According to Table 5, GFCS finds the best values on f1, f2, f5, f6, f10, f12, and f26. Likewise, NABC behaves well on f11, f15, f16, f19, f25, and f26, and FA provides the best solutions on f8 and f9, while DE does not yield the optimal result on any function. In addition, ABCX performs best on f14, f17, f21, f22, f23, f27, and f28, CUDE finds the best values on f3 and f4, and MSBSO provides the best solutions on f1, f5, f7, f12, f13, f18, f20, and f24. In terms of the average ranking results, GFCS generates an average rank value of 3.00 and ranks first, followed by the MSBSO algorithm with a ranking of 3.03, CUDE with a ranking of 3.43, NABC with a ranking of 3.71, and ABCX with a ranking of 4.25. The average ranks obtained by the Friedman test show that GFCS is still competitive with the other swarm intelligence algorithms. In addition, it can be seen from the bottom of Table 5 that, compared with these algorithms, GFCS surpasses each of them on most functions, which shows that GFCS still has a relatively large advantage over the other evolutionary algorithms. We plot a stacked histogram based on the ranking statistics to better visualize the rankings in Table 5, as shown in Figure 7. Figure 7 shows that GFCS has lighter overall color blocks and ranks in the top three on 19 functions. Except for the eighth ranking of GFCS on function 3, GFCS does not receive any sixth or seventh ranking. By comparing the rankings with the other algorithms, we can see that GFCS is still competitive.
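The average ranks reported above follow the standard Friedman-test ranking procedure: on each function, the algorithms are ranked by their mean error (1 = best, with ties sharing the average of the tied ranks), and each algorithm's ranks are then averaged over all functions. A minimal sketch of that procedure, using illustrative numbers rather than the paper's actual results:

```python
def average_ranks(row):
    """Rank the entries of `row` ascending (1 = smallest); ties share the average rank."""
    order = sorted(range(len(row)), key=lambda i: row[i])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and row[order[j + 1]] == row[order[i]]:
            j += 1  # extend the block of tied values
        tied_avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = tied_avg
        i = j + 1
    return ranks

# Illustrative mean errors: rows = functions, columns = algorithms.
algorithms = ["GFCS", "MSBSO", "CUDE"]
mean_errors = [
    [1.2e3, 0.9e3, 1.5e3],  # f1
    [3.4e5, 3.6e5, 3.1e5],  # f2
    [2.0e1, 2.0e1, 2.2e1],  # f3 (GFCS and MSBSO tie)
]

per_function = [average_ranks(row) for row in mean_errors]
avg_rank = [sum(r[i] for r in per_function) / len(per_function)
            for i in range(len(algorithms))]
for name, r in zip(algorithms, avg_rank):
    print(f"{name}: {r:.2f}")  # GFCS 1.83, MSBSO 1.83, CUDE 2.33
```

The final ordering of the algorithms in the tables corresponds to sorting these averaged ranks ascending.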
To further verify the performance of GFCS, we select some other classical evolutionary algorithms for comparison. The genetic algorithm [42] (GA), particle swarm optimization [3] (PSO), ant colony algorithm [43] (ACO), artificial bee colony algorithm [4] (ABC), and brain storm optimization algorithm [44] (BSO) are selected for comparison.
In view of the fairness of the experiments, the population size N = 25, the problem dimension D = 30, and the number of evaluations = 1E6 are set for these algorithms, and each test function is run 30 times independently. For GA, the probability of crossover is set to 1, and the probability of mutation is set to 0.01. For PSO, we set the personal learning coefficient to 1.5 and the global learning coefficient to 2.0. For ACO, the evaporation rate of the pheromone is set to 0.1. Moreover, for ABC, the parameter limit is set to (0.6 * N) * D. For BSO, the number of clusters is set to five. For the other parameters of GA, PSO, ACO, ABC, and BSO, we follow the settings in the literature, and the parameters of GFCS are consistent with the previous tests. The experimental results are shown in Table 6, and the best result is shown in bold.
According to Table 6, GFCS finds the best values on all functions except f3, f8, f10, f15, and f16. In terms of the average ranking results, GFCS produces an average ranking value of 1.39, which gives it a large advantage over the other SI algorithms. In addition, it can be seen from the bottom of Table 6 that GFCS beats these classical algorithms on most functions, which shows that the GFCS algorithm has a large advantage over them. In the table, bold letters indicate the best results, and "Mean" and "Std" represent the mean and standard error values, respectively.

We draw a stacked histogram based on the ranking statistics to better visualize the rankings in Table 6, as detailed in Figure 8. Figure 8 shows that GFCS has lighter overall color blocks and ranks first on 23 functions. Moreover, except for the sixth ranking on function 3, GFCS does not receive any fifth or sixth ranking. By comparing the rankings with these SI algorithms, we can conclude that GFCS has a clear superiority in searching for the global optimum.

Comparison of Calculation Time.
To demonstrate the effectiveness of GFCS in terms of running time, we measure the time needed to run the 28 test functions of the CEC2013 test suite with GFCS and the original CS algorithm in different dimensions, with the dimension set to 30 and 50. For the other parameters, the upper limit of the number of iterations is set to 20,000, and the population size is set to 25. To exclude chance in the measurements, each function of the CEC2013 test suite is run 10 times independently, and the total run time is calculated. The experimental results are shown in Table 7. Table 7 shows that there is no significant difference in runtime between CS and GFCS for either dimension D = 30 or D = 50. This again validates the complexity analysis of CS and GFCS in Section 4.4, where there is no significant difference between GFCS and CS in terms of time complexity.
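A timing comparison of this kind can be reproduced with a simple harness that sums the wall-clock time of the repeated independent runs over all benchmark functions. The sketch below is illustrative only: `random_search` and `sphere` are hypothetical stand-ins for the optimizers and CEC2013 functions, since the paper's implementation is not shown here.

```python
import random
import time

def total_runtime(optimizer, functions, runs=10):
    """Total wall-clock time of `runs` independent runs of `optimizer` on every function."""
    start = time.perf_counter()
    for f in functions:
        for _ in range(runs):
            optimizer(f)
    return time.perf_counter() - start

# Hypothetical stand-ins for a benchmark function and an optimizer.
def sphere(x):
    return sum(v * v for v in x)

def random_search(f, dim=30, iters=1000):
    best = float("inf")
    for _ in range(iters):
        best = min(best, f([random.uniform(-100, 100) for _ in range(dim)]))
    return best

elapsed = total_runtime(random_search, [sphere], runs=10)
print(f"total runtime: {elapsed:.3f} s")
```

Using `time.perf_counter()` rather than `time.time()` avoids clock-adjustment artifacts when timing short runs; in the paper's setting, `functions` would hold the 28 CEC2013 functions and `optimizer` would be CS or GFCS.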

Conclusion
This article proposes a global feedback-based cuckoo search algorithm (GFCS). In GFCS, we introduce the concept of global feedback and the "re-fly" mechanism. In addition, we set new parameter formulas that change with the number of iteration rounds and are controlled by the dynamic global variable. To evaluate the performance of the GFCS algorithm, GFCS is compared with eight other variants of the CS algorithm and several classical evolutionary algorithms and their variants. Based on the experimental results, the following conclusions can be drawn: (i) the GFCS algorithm adopts a global feedback strategy and the "re-fly" mechanism in the optimization search process. According to the evolution of the current generation, GFCS adjusts the parameters of the algorithm globally during the evolution process, which effectively accelerates the algorithm's convergence and enriches the diversity of the population and of its learning. (ii) Compared with CS, some CS variants, and several SI algorithms in the experiments, the GFCS algorithm has a faster convergence speed and better convergence accuracy. (iii) As shown in Sections 4.4 and 5.5, the time and space complexity of GFCS is comparable to that of the traditional CS algorithm, which means that GFCS does not increase the complexity of the algorithm.
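The "re-fly" idea summarized in (i) — retain the best solution found so far, then reinitialize and search again once stagnation signals a local optimum — can be sketched as a restart loop. This is a minimal illustrative sketch, not the paper's implementation: a generic Gaussian local move stands in for the cuckoo/Lévy-flight updates, and all names (`refly_search`, `stall_limit`, and so on) are hypothetical.

```python
import random

def refly_search(objective, dim, bounds, restarts=5, iters=2000, stall_limit=200):
    """Restart sketch of the 're-fly' idea: when the best value stops improving
    for `stall_limit` rounds (a stagnation indicator), keep the best found so
    far, reinitialize the population, and search again."""
    lo, hi = bounds
    global_best_x, global_best_f = None, float("inf")
    for _ in range(restarts):
        # Re-initialization: a fresh random starting point ("re-fly").
        x = [random.uniform(lo, hi) for _ in range(dim)]
        best_f, stall = objective(x), 0
        for _ in range(iters):
            # Stand-in move; the paper uses Lévy-flight cuckoo updates here.
            cand = [v + random.gauss(0, 0.1 * (hi - lo)) for v in x]
            cand = [min(max(v, lo), hi) for v in cand]  # keep within bounds
            f = objective(cand)
            if f < best_f:
                x, best_f, stall = cand, f, 0
            else:
                stall += 1
            if stall >= stall_limit:  # premature convergence detected
                break
        if best_f < global_best_f:  # retain the best value across re-flies
            global_best_x, global_best_f = x[:], best_f
    return global_best_x, global_best_f

best_x, best_f = refly_search(lambda v: sum(t * t for t in v), dim=5, bounds=(-5, 5))
print(best_f)
```

In GFCS itself, the stagnation test is driven by the dynamic global variable rather than a fixed stall counter, but the control flow — detect stagnation, preserve the incumbent best, reinitialize, continue searching — is the same.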
In the future, we intend to extend our current work in the following directions. Firstly, for the switching parameter, we will try to adjust the adaptive adjustment mechanism or introduce a multistrategy mechanism to further improve the search ability. Secondly, we will consider applying GFCS to some other scientific problems, such as applying the algorithm to the field of machine learning or deep learning. Thirdly, we will discuss the application of the algorithm to some practical problems.

Data Availability
The datasets generated and/or analysed in this study can be obtained from the corresponding authors upon reasonable request.

Conflicts of Interest
The authors declare that they have no known conflicts of financial interest or personal relationships that could have influenced the work reported in this paper.