An Improved New Caledonian Crow Learning Algorithm for Global Function Optimization

The New Caledonian crow learning algorithm (NCCLA) is a novel metaheuristic algorithm inspired by the behavior of New Caledonian crows learning to make tools to obtain food. However, it easily falls into local optima and suffers from insufficient convergence speed and precision. To further improve the convergence performance of NCCLA, an improved New Caledonian crow learning algorithm (INCCLA) is proposed in this paper. By determining the parent individuals based on cosine similarity, juveniles are guided to search toward different ranges to maintain population diversity; a novel hybrid mechanism of complete and incomplete learning is proposed to balance the exploration and exploitation capabilities of the algorithm; and the update strategy of juvenile and parent individuals is improved to enhance the convergence speed and precision of the algorithm. Test results on the CEC2013 and CEC2020 test suites show that, compared with the original NCCLA and four of the best metaheuristics to date, INCCLA has significant advantages in terms of convergence speed, convergence precision, and stability.


Introduction
A large number of optimization problems exist in real life, engineering design, computer technology, and other fields, such as minimum cost, optimal parameters, minimum time, pipeline route design, and welded beam design. To obtain higher economic efficiency and social value, scholars strive to obtain the optimal solutions to such problems, which has made research on optimization methods a topic of wide concern.
Optimization methods include traditional methods, such as the steepest descent method, and metaheuristic methods. Traditional optimization methods usually require the optimization problem to be differentiable, and their convergence speed and precision struggle to meet practical needs when the problem has multiple extrema. Metaheuristic algorithms, by contrast, which are proposed by simulating biological habits in nature, achieve good exploration and exploitation performance through information exchange among individuals in a population. Compared with traditional optimization methods, metaheuristic algorithms are better in convergence speed, convergence precision, robustness, and stability, and impose no strict requirements on the form of the optimization problem. This makes the metaheuristic algorithm one of the most effective and widely used optimization methods. Researchers have focused on two main directions to obtain the best optimization results: improving existing metaheuristics and proposing new ones.
Various metaheuristic algorithms have been proposed; the more representative methods are as follows: the artificial bee colony algorithm (ABC) was proposed to simulate the behavior of bee colonies finding the optimal nectar source according to different internal divisions of labor [1]; the crow search algorithm (CSA) was proposed to simulate the behavior of crows hiding and searching for food [2]; the particle swarm optimization (PSO) algorithm was proposed to simulate the foraging behavior of a flock of birds [3]; the firefly algorithm (FA) was proposed to simulate the mutual attraction of fireflies [4]; the marine predator algorithm (MPA) was proposed to simulate the biological interactions between marine predators and prey [5]; the manta ray foraging optimization algorithm (MRFO) was proposed to simulate three unique foraging modes of manta rays: chain foraging, cyclone foraging, and somersault foraging [6]; the dolphin swarm optimization algorithm (DSA) was proposed to simulate dolphin habits such as echolocation and information exchange [7]; and the gray wolf optimization algorithm (GWO) was proposed to simulate the hunting behavior of gray wolves [8]. Some of the above algorithms have been applied in many research articles, such as applying particle swarm optimization to solve the UCP problem with deterministic and stochastic load demands [9], the marine predator algorithm to solve the optimal reactive power dispatch (ORPD) problem [10], and the manta ray foraging optimization algorithm to solve the economic load dispatch and advance dispatch problems of microgrids [11].
Researchers have done much work on improving existing metaheuristic algorithms; the more representative results are given below. Many improvements to particle swarm algorithms have been proposed in recent years: In 2016, Samma et al. proposed the RLMPSO algorithm [12], where each particle performs five operations under the control of a reinforcement learning algorithm to improve the search performance of the particle swarm algorithm. In 2018, Zhang et al. proposed the DLPSO algorithm [13] to address the tendency of PSO to fall into local optima on multimodal indivisible problems; it extracts good vectors from the vectors distributed in the search space to form a new vector with a greater chance of jumping out of local optima. In 2021, Lu et al. proposed the EMCPSO algorithm [14], which exploits multi-population technology to overcome the premature convergence of PSO: it divides the population into four identical subpopulations, uses the optimal individual in each subpopulation to represent that subpopulation's evolutionary state, shares information among the four subpopulations so that evolutionarily stagnant ones search for the optimal solution again, and introduces an exclusion mechanism to further prevent premature convergence of particles. In 2022, Wang et al. proposed the RLLPSO algorithm for large-scale optimization problems [15], which constructs a level-based population structure to improve population diversity and uses a reinforcement learning strategy and a level competition strategy to improve the search efficiency of the algorithm on complex large-scale optimization problems.
Many improvement methods have been proposed for the artificial bee colony algorithm, and the representative results are as follows: In 2018, Cui et al. proposed the DPABC algorithm [16], which uses a dual population framework to divide the population into a convergence population and a diversity population, responsible for developing promising regions as well as maintaining population diversity, respectively, to improve the overall performance of the algorithm. In 2019, Awadallah et al. improved the onlooker bee stage [17], combining four selection methods, including global optimum, tournament, linear ranking, and exponential ranking, to guide the search process of the onlooker bee in order to determine the impact of the selection scheme on the onlooker bee stage. In 2021, Zhou et al. proposed the ABC-MNT algorithm [18], which applies three different neighborhoods to different individuals, helping the algorithm to achieve a better balance between exploration and exploitation, in addition to employing a global neighborhood search strategy and opposition-based learning that preserves the search experience of the scout bee phase. In 2022, Ye et al. proposed the RNSABC algorithm [19], which uses a random neighborhood structure so that each solution has a random neighborhood, in addition to a depth-first search method to enhance the search capability of the following bee to improve the algorithm's ability to search for the optimal solution.
Representative results on improving other mainstream metaheuristic algorithms are as follows: In 2017, Wang et al. proposed a firefly algorithm with neighborhood attraction (NaFA) [20], where each firefly selects attractive individuals from a predefined region instead of the whole population; the proposed strategy effectively improves solution accuracy and reduces time complexity. In 2018, Sun et al. proposed an improved whale optimization algorithm (MWOA) for solving large-scale optimization problems [21], which uses a nonlinear dynamic strategy based on the cosine function to update the control parameters and balance the exploration and exploitation capabilities of the algorithm, uses a Levy flight strategy to help the algorithm jump out of local optima, and applies quadratic interpolation to the optimal individuals of the population to enhance local exploitation. In 2019, Zamani et al. proposed a conscious neighborhood-based crow search algorithm (CCSA) [22], which introduces three search strategies, a neighborhood-based local search strategy, a non-neighborhood global search strategy, and a wandering-based search strategy, to enhance the balance between local and global search. In 2020, Gupta et al. proposed a memory-based gray wolf optimization algorithm (mGWO) [23], which modifies crossover and greedy selection based on the historical optimum of individuals, enhancing the algorithm's global search, local exploitation, and the balance between the two. In 2022, Long et al. proposed a velocity-based butterfly optimization algorithm (VBOA) [24], which introduces velocity and memory to guide individuals in the local search phase and a refraction-based learning strategy, effectively enhancing population diversity and the exploration ability of the algorithm.
Many excellent metaheuristic algorithms have been proposed in recent years. In 2018, Wang proposed the moth search algorithm (MSA) [25], inspired by the phototropism of moths and Levy flight, which treats moths as individuals: moths close to the optimal individual perform Levy flight, while moths farther away approach the optimal individual in a straight line, and the algorithm is optimized through these two stages. In 2019, Heidari et al. proposed the Harris hawk optimization algorithm (HHO) [26], inspired by the collaborative group behavior of Harris hawks during predation, which uses Harris hawks as individuals; its search process includes three stages, exploration, transition from exploration to exploitation, and exploitation, and the algorithm is characterized by few control parameters and excellent global search capability. In 2020, Li et al. proposed the slime mold algorithm (SMA) [27], inspired by the behavioral and morphological changes of Physarum polycephalum during foraging, which creates three different forms to optimize the problem by using weights to simulate the positive and negative feedback generated by slime molds during foraging.
In the same year, Al-Sorori and Mohsen proposed the New Caledonian crow learning algorithm (NCCLA) [28], based on the behavior of New Caledonian crows obtaining food by learning to make tools. The advantage of this algorithm is its stochastic nature, which helps prevent the algorithm from getting trapped at a local optimum. Also in that year, Mohamed et al. proposed the gaining-sharing knowledge-based algorithm (GSK) [29], inspired by the process of acquiring and sharing knowledge over the human life cycle, which treats people as individuals and improves their knowledge through a junior gaining-and-sharing phase and a senior gaining-and-sharing phase to solve optimization problems over continuous spaces. In 2021, Tu et al. proposed the colony predation algorithm (CPA) [30], inspired by the supportive behavior and selective hunting of herd animals, which is based on the coexistence of social animals and optimizes the problem through five stages: communicating and collaborating, dispersing food, surrounding food, supporting the closest individual, and finding food. In 2022, Hashim et al. proposed the snake optimizer (SO) [31], based on the foraging and breeding behavior of snakes under different temperature and food availability conditions, in which individuals explore and exploit according to temperature and food conditions. The various metaheuristic algorithms mentioned above provide new ideas for solving optimization problems and further advance the development of optimization techniques. Compared with the more classical PSO and DE, they show significant improvements in convergence speed and convergence precision.
However, for the highly nonlinear and complex optimization problems that keep emerging in practical engineering, the convergence speed and convergence precision of existing metaheuristic algorithms are often insufficient, and the algorithms may even fall into local optima, making it difficult to obtain highly satisfactory economic and social value. Therefore, improving the optimization performance of each new metaheuristic algorithm has been one of the main research topics in the field of evolutionary computation.
In this context, based on the literature [28] and a large number of experimental studies, NCCLA is an excellent metaheuristic algorithm: it is simple to operate, its convergence capability is significantly better than that of optimization algorithms such as GWO, CSA, and WOA, and it is highly promising in fields such as engineering optimization. This paper therefore focuses on NCCLA. To address its insufficient convergence precision and convergence speed on very complex optimization problems, and its tendency to fall into local optima, we propose an improved New Caledonian crow learning algorithm (INCCLA).
The main innovations and contributions of INCCLA are as follows: (1) A cosine similarity-based parent selection approach is proposed. The globally optimal individual and another excellent individual with significantly different similarity are selected as parents, and the juvenile crow individuals are guided to search toward different ranges, maintaining population diversity while preserving the convergence speed of the algorithm. (2) The learning phase of juvenile crows is improved. A new hybrid mechanism of complete and incomplete learning is set up: juvenile crows in the complete learning stage select learning objects according to their own conditions, which effectively improves the convergence speed of the algorithm while maintaining population diversity to a certain extent, while juvenile crows in the incomplete learning stage learn the behavioral attributes of different individuals in order to maintain the population diversity of the algorithm.
(3) The reinforcement phase is improved. In the juvenile reinforcement stage, a weighting factor is introduced so that the algorithm has strong exploration ability in the early evolutionary stage and strong exploitation ability in the late evolutionary stage; at the same time, small random perturbations are added to increase the possibility of the algorithm converging to the global optimum. In the parent reinforcement stage, the update methods of the two parent individuals are further improved to balance the exploration and exploitation abilities of the algorithm. The results of testing on the CEC2013 and CEC2020 test suites show that the INCCLA proposed in this paper has significant advantages in terms of convergence speed, convergence precision, and stability compared with four other representative optimization algorithms.
The rest of the paper is organized as follows: Section 2 describes the working principle and flow of NCCLA. Section 3 analyzes the defects of the original NCCLA and proposes the improved INCCLA. Section 4 presents the simulation results and analysis of INCCLA against the original NCCLA and other mainstream improved algorithms on the CEC2013 and CEC2020 test function suites. Section 5 concludes the paper.

New Caledonian Crow Learning Algorithm
In nature, New Caledonian crows are divided into juvenile and parent crows, which enhance their tool-design skills through learning and through their own experience and knowledge, respectively, to obtain food from the pandanus tree. Inspired by this behavior, Al-Sorori and Mohsen proposed the New Caledonian crow learning algorithm (NCCLA). In NCCLA, individuals represent the manufacturing behavior of New Caledonian crows, and fitness values represent the behavioral advantage of each crow. The algorithm has three main phases: initialization, the learning phase, and the reinforcement phase. The pseudo-code of NCCLA is shown in Algorithm 1, and the key steps are briefly described as follows.

Population Initialization.
Suppose the number of individuals in population X is N. Each individual X i (i = 1, 2, . . ., N) represents the behavior of a crow and can be expressed as X i = (X i,1, X i,2, . . ., X i,D), where D is the dimension of the optimization problem and X i,j denotes the j-th behavioral attribute of the i-th crow. At the beginning of the algorithm, the initial behavior of each crow is generated randomly according to equation (1):

X i,j = X L + U(0,1) × (X U − X L),   (1)

where X U and X L correspond to the upper and lower bounds of the j-th dimensional search space of the optimization problem, respectively, and U(0,1) is a uniformly distributed random number in the range [0,1].
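As a concrete illustration, the initialization rule in (1) can be sketched in Python (the function name and array layout are our own; NumPy is assumed):

```python
import numpy as np

def initialize_population(n, d, lower, upper, rng=None):
    """Randomly initialize N crow behaviors in a D-dimensional search space.

    Implements X[i, j] = X_L[j] + U(0, 1) * (X_U[j] - X_L[j]) per dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + rng.random((n, d)) * (upper - lower)

# Example: 5 crows in a 3-dimensional space bounded by [-10, 10]
pop = initialize_population(5, 3, [-10.0] * 3, [10.0] * 3)
```

Every generated behavioral attribute lies inside its dimension's bounds, which is all equation (1) requires.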

Learning Phase.
In NCCLA, only juveniles enter the learning phase, and each behavioral attribute X i,j of a juvenile is updated by social or asocial learning with probability SL prob or 1 − SL prob, respectively. SL prob is recommended to be set to 0.95, but can be set to other values.

Social Learning.
After the j-th behavioral attribute X i,j of juvenile crow i is determined to require social learning according to the probability SL prob, it is then decided whether to perform vertical or horizontal learning according to the predetermined probability VSL prob or 1 − VSL prob. The details are shown in equation (2).
where X i,j (t) is the j-th new behavioral attribute acquired by juvenile crow i after social learning in iteration t, and VSL prob is recommended to be set to 0.99 but can also be set to other values. In (2), when rand ≤ VSL prob, the juvenile crow X i performs vertical learning from its parents, i.e., it copies the corresponding behavioral attribute of parent X p1 or X p2 with probability P1 prob (obviously, k 1 = 1 or 2); otherwise, the juvenile crow X i performs horizontal learning, i.e., it randomly selects a more experienced sibling k 2 and copies its corresponding behavioral attribute, where k 2 is given by equation (3). It should be noted that the juvenile crow with the best fitness value performs only vertical learning, never horizontal learning.
where rand is a random number uniformly distributed in the range [0, 1], and [·] denotes rounding.
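The social learning rule of (2) and (3) can be sketched as follows. This is a hedged illustration only: the sorting convention (parents occupy the two best ranks), the sibling-sampling range, and all names are our own assumptions, and minimization is assumed.

```python
import numpy as np

def social_learning(pop, fitness, i, vsl_prob=0.99, p1_prob=0.95, rng=None):
    """Sketch of NCCLA social learning for juvenile crow i (minimization).

    Vertical learning copies an attribute from parent X_p1 or X_p2;
    horizontal learning copies it from a randomly chosen sibling that is
    fitter ("more experienced") than crow i.
    """
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(fitness)                  # best individual first
    sorted_pop = pop[order]
    rank = int(np.where(order == i)[0][0])       # rank of crow i (0 = best)
    new = sorted_pop[rank].copy()
    for j in range(pop.shape[1]):
        # The best juvenile (rank 2) performs only vertical learning.
        if rng.random() <= vsl_prob or rank <= 2:   # vertical learning
            parent = 0 if rng.random() <= p1_prob else 1
            new[j] = sorted_pop[parent, j]
        else:                                       # horizontal learning
            k2 = rng.integers(2, rank)              # a fitter sibling
            new[j] = sorted_pop[k2, j]
    return new
```

Each resulting attribute is always copied from a crow ranked above crow i, matching the "more experienced" requirement in the text.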

Asocial Learning.
When a crow X i performs asocial learning, each behavioral attribute is randomly regenerated using (1) with probability TaE prob, or the previous behavioral attribute is retained with probability 1 − TaE prob.
where TaE prob is recommended to be set to 0.99, but can also be set to other values.
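The asocial (trial-and-error) learning rule just described can be sketched as follows (names and the vectorized form are our own):

```python
import numpy as np

def asocial_learning(x, lower, upper, tae_prob=0.99, rng=None):
    """Sketch of NCCLA asocial learning.

    Each behavioral attribute is re-randomized within its bounds with
    probability TaE_prob; otherwise the previous attribute is retained.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    mask = rng.random(x.shape) <= tae_prob            # which attributes reset
    random_attrs = lower + rng.random(x.shape) * (upper - lower)  # via (1)
    return np.where(mask, random_attrs, x)

x_new = asocial_learning([0.5, 0.5, 0.5], [-1.0] * 3, [1.0] * 3)
```

With the recommended TaE prob = 0.99, nearly all attributes are re-randomized, which is what gives asocial learning its diversity-preserving role.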

Reinforcement Phase.
After the learning phase is completed, certain attributes of the learned juvenile behavior and of the parent behavior are reinforced according to the reinforcement probability RP prob. RP prob is recommended to be set to 0.99, and can be set to other values.

Juvenile Reinforcement.
Each behavioral attribute of a juvenile crow is reinforced according to equation (4), where RW is a reinforcement weight computed from r1, r2, α, and β: r1 and r2 are random numbers between 0 and 1, α represents the difference between the behavioral attributes before and after learning, as shown in (7), and β represents the social learning effect developed over time, as shown in (8).
In (7) and (8), t represents the current iteration number, X i,j (t − 1) is the j-th behavioral attribute of crow i before the current generation of learning, r is a normally distributed random number in the range [0,1], mean (j) is the average of the j-th behavioral attribute over all individuals in the population, and lf is a learning factor, as shown in equation (9), where max_t represents the maximum number of iterations, and lf max and lf min represent the maximum and minimum values of the learning factor, recommended to be set to 0.02 and 0.0005, respectively, though other values may be used.
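The learning factor lf decreases from lf max to lf min over the run. The exact schedule is given by the paper's equation (9); a linear decay is assumed here purely for illustration:

```python
def learning_factor(t, max_t, lf_max=0.02, lf_min=0.0005):
    """Learning factor decreasing from lf_max to lf_min over the run.

    A linear decay is assumed for illustration; the paper's equation (9)
    may use a different (e.g., nonlinear) schedule.
    """
    return lf_max - (lf_max - lf_min) * t / max_t
```

Early iterations thus apply a large learning factor (strong exploration), and late iterations a small one (fine-grained exploitation).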

Parents Reinforcement.
The parents X p1 and X p2 update certain attributes according to the reinforcement probability RP prob. When rand ≤ RP prob, the j-th dimensional behavioral attribute of the parent crow is reinforced, as shown in (10); otherwise, the original behavioral attribute is retained. In (10), r1 is a normally distributed random number, r2 is a uniformly distributed random number in the range [0, 1], and mean (j) is the mean of the j-th behavioral attribute of all individuals in the current population.

Proposed Algorithm
To further improve the convergence performance of NCCLA, this section proposes an improved New Caledonian Crow Learning Algorithm (INCCLA), whose pseudo-code is shown in Algorithm 2.

Determination of Parent Individuals Based on Cosine Similarity.
In NCCLA, VSL prob and P1 prob are set to 0.99 and 0.95, respectively, meaning that during the learning phase each juvenile crow performs vertical learning toward a parent individual with probability 0.99 × 0.95. The parent individuals are always the two individuals with the best fitness in the population. As evolution proceeds, the two parent individuals rapidly approach each other and become highly similar, which leads most juvenile crows to converge on them quickly through vertical learning; the juveniles can then only search around the parent individuals and lack exploration of other regions. Although rapid convergence can be achieved in the early stage of evolution, the loss of population diversity is apparent, and the algorithm easily falls into a local optimum. To solve this problem, the following cosine similarity-based parent selection method is proposed in this section. First, the best individual in the population is chosen as the parent individual X p1; then, the cosine similarity of each remaining individual to X p1 is calculated according to (11), and the individuals are sorted from smallest to largest similarity and evenly divided into two groups; finally, the individual with the best fitness value in the group whose members are less similar to X p1 is selected as the parent individual X p2. Note that the parents are reselected as above every P generations. Generally, P = 50 is sufficient to achieve good results, but P can also be set according to the optimization problem.
Input: N, D, RP prob, SL prob, VSL prob, P1 prob, TaE prob, MaxIter
Output: best solution and its fitness
(1) Initialize the variables (N, D, RP prob, SL prob, VSL prob, P1 prob, TaE prob, MaxIter)
(2) Generate the initial population X according to Section 2.1
(3) Calculate the fitness value F i of each individual X i
(4) While t ≤ MaxIter do
(5)   Rank all individuals and select the two best individuals, X 1 and X 2, as parents, denoted X p1 and X p2; the rest are juvenile crows
(6)   For each juvenile crow X i in population X do
(7)     For each behavioral attribute j in X i do  // Learning phase
(8)       If rand ≤ SL prob then
(9)         Decide to perform vertical or horizontal learning according to probability VSL prob in Section 2.2.1  // Social learning
(10)      else
(11)        Randomly update the behavioral attribute using (1) or retain the previous one, based on the probability TaE prob in Section 2.2.2  // Asocial learning
(12)      end If
(13)      Based on the probability RP prob, reinforce the attributes of X i after learning using (4)  // Juvenile reinforcement
(14)    end For
(15)  end For
(16)  Based on the probability RP prob, reinforce the attributes of the parents X p1 and X p2 using (9)  // Parents reinforcement
(17)  t = t + 1
(18) end While
(19) Output the global optimal solution

ALGORITHM 1: NCCLA.

Computational Intelligence and Neuroscience
where X p1,j (t − 1) denotes the j-th dimension of the globally optimal individual X p1 (t − 1) determined from the previous iteration of the population, and sim i,p1 (t − 1) denotes the cosine similarity of the i-th individual X i (t − 1), before any evolutionary operation in the current generation, to X p1 (t − 1).
To further illustrate the above cosine similarity-based method of determining parent individuals, taking the optimization of the 2-dimensional Rotated High Conditioned Elliptic function as an example, the specific determination process is given in Figure 1. According to the original parent determination method, X 1 and X 2, which are close to each other, would be selected as parents, whereas according to the method proposed in this section, X 1 and X 4, which are farther apart, are selected as parents. The fitness value of X 4 is not much worse than that of X 2.
In summary, compared with the original approach of selecting two highly similar optimal individuals as parents, this section retains the optimal individual of the population as one parent, which does not much affect the convergence speed, while the other parent is selected to be less similar to the optimal individual but still of good fitness. This approach guides juveniles to search across a wider range while ensuring that they learn from more experienced individuals, maintaining good population diversity without reducing the convergence speed too much.
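The parent selection procedure of this section can be sketched as follows. This is a minimal illustration under our own interpretation of the grouping step (the low-similarity half is the group "different from" X p1), with minimization assumed:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity of two behavior vectors, as in (11)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_parents(pop, fitness):
    """Sketch of cosine-similarity-based parent selection (minimization).

    X_p1 is the fittest individual. The remaining individuals are sorted
    by similarity to X_p1 (smallest first) and split evenly; X_p2 is the
    fittest member of the low-similarity half, i.e., the half farthest
    from X_p1.
    """
    best = int(np.argmin(fitness))
    rest = [i for i in range(len(pop)) if i != best]
    by_sim = sorted(rest, key=lambda i: cosine_similarity(pop[i], pop[best]))
    low_half = by_sim[: len(by_sim) // 2]     # least similar to X_p1
    p2 = min(low_half, key=lambda i: fitness[i])
    return best, p2
```

The returned pair deliberately spans dissimilar regions of the search space while both parents remain among the fitter individuals.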

Improving Juvenile Crow Learning Phase.
As seen in Algorithm 1, the learning phase generates a learning object for each juvenile crow, intending to provide an excellent evolutionary direction for the reinforcement phase. A deeper analysis reveals that the behavioral attributes of a learning object generated in the learning phase are almost never all derived from the same individual. Although this better maintains population diversity, the complete separation of the behavioral attributes makes it difficult to ensure the quality of the resulting learning objects and, therefore, the convergence speed of the reinforcement phase. Excellent individuals, by contrast, already integrate their behavioral attributes well, which plays a vital role in the rapid convergence of the algorithm but is not conducive to maintaining population diversity. Given this, this section proposes a novel juvenile learning approach, as shown in Figure 2. R juveniles are randomly selected to perform complete learning, i.e., every behavioral attribute of the corresponding learning object originates entirely from the same individual, which promotes fast convergence of the algorithm; the remaining juveniles perform incomplete learning, i.e., the behavioral attributes of the learning object originate from different individuals, which preserves population diversity.
Input: N, D, RP prob, SL prob, MaxIter
Output: best solution and its fitness
(1) Initialize the variables (N, D, RP prob, SL prob, MaxIter)
(2) Generate the initial population X according to Section 2.1
(3) Calculate the fitness value F i of each individual X i
(4) While t ≤ MaxIter do
(5)   Rank all individuals and select the parents according to Section 3.1, denoted X p1 and X p2; the rest are juvenile crows
(6)   Randomly select R juvenile crows to form a subpopulation subpop
(7)   For each juvenile crow X i in population X do
(8)     If X i ∈ subpop then
(9)       Perform complete learning of the juvenile crow X i according to Section 3.2.1
(10)    else
(11)      Perform incomplete learning of the juvenile crow X i according to Section 3.2.2
(12)    end If
(13)    Perform juvenile reinforcement operations on X i according to Section 3.3.1
(14)  end For
(15)  Perform parent reinforcement operations on the parents X p1 and X p2 according to Section 3.3.2
(16)  t = t + 1
(17) end While
(18) Output the global optimal solution

ALGORITHM 2: INCCLA.

Complete Learning.
As mentioned above, the complete learning phase aims to enhance the convergence speed of the algorithm by copying all the behavioral attributes of a particular outstanding individual. In contrast to social learning, asocial learning focuses on maintaining population diversity; therefore, in the complete learning phase, asocial learning is eliminated and only social learning is used. Social learning in NCCLA includes vertical and horizontal learning: vertical learning learns from the two best parent individuals in the population, while horizontal learning learns from other individuals that are better than oneself. Obviously, compared with vertical learning, horizontal learning is more capable of maintaining population diversity. In NCCLA, vertical or horizontal learning is chosen according to the fixed probability VSL prob.
If VSL prob is high, the population quickly approaches the parent individuals, accelerating convergence, but the rapid loss of population diversity can easily cause the algorithm to fall into a local optimum. If VSL prob is low, each individual may have different learning objects, and introducing multiple learning objects makes the evolutionary direction of each individual more diffuse, which is not conducive to the rapid convergence of the population. NCCLA recommends setting VSL prob to 0.99, so that all juveniles primarily use vertical learning as their social learning method. In vertical learning, individuals choose to learn from the best parent X p1 or the second-best parent X p2 according to the fixed probability P1 prob. If P1 prob is very large or very small, all juvenile crows search almost exclusively around one parent, which is not conducive to maintaining population diversity and increases the possibility of the algorithm falling into a local optimum; if P1 prob is set to about 0.5, juveniles learn from the two parents with equal probability, which is similar to random selection, carries a certain blindness, and slows the convergence of the algorithm to some extent. In short, selecting vertical or horizontal learning by a fixed probability, and selecting a particular parent individual by a fixed probability within vertical learning, are not very reasonable. Given this, to effectively improve the convergence speed of the algorithm without destroying too much population diversity, we propose the complete social learning method shown in equation (12).
where F i (t − 1) denotes the fitness value of juvenile X i and sim i,p1 (t − 1) denotes the cosine similarity of juvenile X i to the best individual X p1 in the population; (12) also uses the means of these two quantities over all individuals in the population. When the conditions in (12) are satisfied, i.e., the juvenile's fitness value is better than the population mean and its cosine similarity to X p1 is above the population mean, the juvenile X i performs vertical learning from the parent X p1 or X p2 according to (13); otherwise, the juvenile X i performs horizontal learning in the original NCCLA way, i.e., it randomly selects a juvenile with a better fitness value than itself as the learning object.
where sim i,p1 and sim i,p2 denote the cosine similarity of juvenile X i to the parent individuals X p1 and X p2, respectively, and NF i denotes the normalized ability of juvenile X i, as shown in equation (14); the better the individual's fitness value, the stronger its normalized ability.
In summary, the complete social learning approach proposed in this section has the following advantages. First, compared with the original fixed-probability selection of vertical or horizontal learning, the new selection approach shown in (12) adaptively selects the learning mode according to each individual's own condition. Only juvenile individuals with convergence potential, i.e., those that are more similar to the optimal individual and have better fitness values, perform vertical learning; the remaining individuals perform horizontal learning. This lets a small number of dominant individuals, which are close to the optimal parent and of higher quality, focus on exploiting the region with the best convergence prospects and quickly determine a promising evolutionary direction, driving the rapid convergence of the rest of the population. In turn, the remaining majority of individuals learn from other, better juvenile crows and can thus develop other search areas, which helps maintain population diversity and reduces the risk of the algorithm falling into a local optimum. Second, the new vertical learning approach shown in (13) makes a comprehensive judgment based on an individual's fitness value and its similarity to the two parent individuals, and selects the parent that is less similar yet performs well as the learning object. This enables the juvenile crows to explore different promising areas as much as possible, better maintaining population diversity while ensuring convergence speed. In short, the complete learning approach proposed in this section selects the learning mode and learning objects in a targeted way according to each juvenile crow's condition, which effectively improves the convergence speed of the algorithm while maintaining population diversity to a certain extent.
In addition, the above process no longer uses the parameters VSL prob and SL prob , which avoids the burden of parameter tuning.

Incomplete Learning.
As mentioned above, the incomplete learning phase generates new learning objects by copying and absorbing the behavioral attributes of different individuals. To further ensure that the algorithm maintains population diversity and avoids falling into a local optimum while improving the convergence speed, this section improves the asocial and social learning behaviors of the original NCCLA separately and proposes a new incomplete learning approach. The details are as follows.
(1) Improvement of social learning behavior. This section improves the conditions for vertical and horizontal learning in the social learning approach of NCCLA, as well as the horizontal learning approach itself, and proposes a new incomplete social learning approach as follows. First, the adaptive selection factor δ i (t) is calculated for the i-th juvenile crow individual X i , as shown in (15); then a random number rand in [0, 1] is generated. If rand ≤ δ i (t), vertical learning is performed, i.e., the behavioral attribute of the j-th dimension of the parent individual X p1 or X p2 , as determined by equation (13), is copied; otherwise, horizontal learning is performed, i.e., the behavioral attribute of the j-th dimension of the individual selected by equation (16) is copied.
where F gbest (t − 1) and F i (t − 1) represent the fitness values of the globally optimal individual and the juvenile X i before the current generation's juvenile crow learning operation, respectively.
where X p1 is the optimal parent individual, X r1 is a randomly selected individual with a better fitness value than the juvenile crow X i , X r2 is a randomly selected individual from the population, and X s is the individual selected among all juvenile crow individuals by roulette selection according to the probabilities given by equation (17).
where sim i,k denotes the cosine similarity between the juvenile crow X i and the juvenile crow X k before learning, and NF k denotes the normalization ability of the juvenile crow X k .
(2) Improvement of asocial learning behavior. In NCCLA, the probability of asocial learning behavior occurring is only 1 − VSL prob × P1 prob , i.e., only 1 − 0.99 × 0.95 under its proposed parameter settings. In essence, this small probability of asocial learning is intended to provide new evolutionary genes and reduce the possibility of the algorithm falling into a local optimum, so there is no need to keep certain attributes unchanged with a certain probability. For this reason, asocial learning in the incomplete learning phase simply updates each behavioral attribute randomly according to equation (1).
The analysis shows that the incomplete learning approach proposed in this section has the following advantages. First, each behavioral attribute of a new juvenile individual after incomplete learning may originate from a different individual, including the parent individuals X p1 and X p2 , any other individual in the population, and new genes generated by asocial learning; the resulting combinations ensure excellent population diversity. Second, the incomplete learning phase mainly uses incomplete social learning and rarely uses asocial learning. Compared with the original social learning approach, the new incomplete social learning proposed in this section better adapts to the performance needs of the algorithm at different stages, as follows: at the early stage of evolution, the individuals in the population are widely distributed, the population diversity is good, and the difference between individuals and the optimal individual is large.
According to the adaptive selection factor, most of the dimensions of new individuals then come from the parent individuals and a few dimensions come from other juvenile individuals, which promotes the rapid convergence of the population. In the late evolutionary stage, all individuals gradually approach the globally optimal individual, the population shows a certain aggregation, and population diversity decreases; at this time, most of the dimensions of new individuals are derived from other, more experienced juvenile individuals, which maintains population diversity without slowing down the convergence of the algorithm. Third, when an individual performs horizontal learning in a certain dimension, it no longer learns and communicates only with other juveniles better than itself, but has the opportunity to exchange information with any individual in the population, which further enhances population diversity; moreover, learning from individuals that are less similar to and better than itself based on roulette selection maintains population diversity without reducing the convergence speed. In summary, the incomplete learning approach proposed in this section achieves the goal of maintaining population diversity while maximizing convergence speed.
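The per-dimension copying and roulette selection described above can be sketched as follows. This is a sketch under stated assumptions: the exact forms of the adaptive selection factor in (15) and the selection probabilities in (17) are not reproduced here, so the weight (1 − sim) × NF is a hypothetical stand-in that favors juveniles less similar to X i and with better normalization ability, consistent with the prose description.

```python
import random

def roulette_select(weights, rng=random):
    """Roulette-wheel selection: return an index with probability
    proportional to its (non-negative) weight."""
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for idx, w in enumerate(weights):
        acc += w
        if r <= acc:
            return idx
    return len(weights) - 1  # guard against floating-point round-off

def incomplete_social_learning(x_i, parent, others, delta, sims, nfs, rng=random):
    """Per-dimension copy: with probability delta copy the chosen parent's
    attribute (vertical learning); otherwise copy the attribute of a juvenile
    picked by roulette selection. The weights (1 - sim) * NF are a
    hypothetical stand-in for Eq. (17)."""
    weights = [(1.0 - s) * nf for s, nf in zip(sims, nfs)]
    child = []
    for j in range(len(x_i)):
        if rng.random() <= delta:          # vertical learning: copy parent dim j
            child.append(parent[j])
        else:                              # horizontal learning: copy selected juvenile
            child.append(others[roulette_select(weights, rng)][j])
    return child
```

Because each dimension is decided independently, a single offspring can mix attributes from the parent and from several different juveniles, which is exactly the diversity-preserving effect the section argues for.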

Improvements of Juvenile Crows Reinforcement Phase.
In fact, the juvenile crow reinforcement phase in NCCLA is an offset search within a range RW around the learning objectives identified in the juvenile crow learning phase. The offset range RW is a combination of α and β, where α represents self-perception and β represents social perception; the latter aims to enhance the exchange of evolutionary information with other individuals and thus explore more of the search space. Experimental study shows that unreasonable settings of RW and β leave much room for improving the quality of the reinforced individuals, so they are improved separately.
(1) Improvement of the β calculation method. An in-depth analysis of (8) reveals that β actually references the information of other individuals and scales the individual itself by a factor of exp(−lf × r × t × mean(j)). As the number of iterations t increases, if mean(j) < 0, exp(−lf × r × t × mean(j)) becomes huge, even tending to infinity. The corresponding β and RW are then also extremely large, making it very easy for reinforced individuals to exceed the search range. This results in ineffective reinforcement, makes it difficult to produce better new individuals, and leads to slow convergence of the algorithm or even failure to converge to the global optimum. Given this, a new calculation of the social perception factor β is proposed, as shown in equation (18).
where k is a randomly selected individual different from X i in the unlearned juvenile crow population, i.e., i ≠ k, and w(t) is a weighting factor, as shown in equation (19).
where w max and w min denote the maximum and minimum values of the weighting factor, respectively; in general, better results are obtained when w max and w min are set to 2 and 0. The change of the weighting factor over iterations is shown in Figure 3. At the early stage of evolution, the weighting factor remains large, which effectively strengthens the social component of the reinforcement phase: communication between individuals is more extensive, individuals search widely in different regions, the global search ability is enhanced, and the risk of falling into a local optimum is reduced. As evolution proceeds, the weighting factor gradually decreases; in particular, at the late stage of evolution it remains small for a long time, which weakens communication between individuals and enhances local exploration within each individual's neighborhood, making it more conducive to finding the global optimum.
(2) Improvement of the RW calculation method. An in-depth analysis of the RW calculation shown in (6) reveals that when α and β are simultaneously zero, the reinforcement phase is ineffective and fails to produce new individuals. In practical optimization, this situation is not rare. For example, in the late stage of evolution, almost all individuals in the population gather around the optimal individual; the vast majority have similar behavioral attributes, and some behavioral attributes of some individuals are even completely identical, in which case α and β are likely to be zero at the same time. Obviously, to enable individuals to perform a refined search around the obtained optimum and thus converge to the globally optimal position, a new stochastic reinforcement strategy is proposed for calculating RW when both α and β are zero, as shown in equation (20).
Here, it should be noted that when α and β are not zero simultaneously, RW is still calculated according to the original way shown in equation (6).
where s1 and s2 are two different individuals randomly selected in the unlearned juvenile crow population, i.e., s1 ≠ s2 ≠ i, and r is a random number uniformly distributed in the range [0, 1].
As seen in (20), when the individuals within the population are more similar, the above random reinforcement strategy makes the individuals perform a small random perturbation near themselves, increasing the possibility of convergence of the algorithm to the global optimum.

Improvements of Parent Crows Reinforcement Phase.
In NCCLA, the parent individuals in each iteration are the two relatively better individuals identified in the previous iteration of the population; they directly guide the evolution of the current generation of juveniles and play an important role in the exploration and exploitation of the algorithm. As seen from the parent reinforcement phase in Section 2.3, the two parent individuals each self-update in their independent reinforcement according to the probability RP prob . Further simplification reveals that the parent individuals X p1 and X p2 perform the reinforcement operation according to Equations (21) and (22). Both of these reinforcement methods include the factor e^(r1 × (mean(j) − X pi,j (t − 1))). From the properties of the exponential function, the effect of this factor is to amplify the gap between mean(j) and the parent individual, and the amplification is especially pronounced when mean(j) − X pi,j (t − 1) > 0. In general, early in evolution, the distribution of individuals is extremely dispersed and the gaps between individuals are not small; after exponential amplification, these gaps easily exceed the search space of the optimization problem, resulting in ineffective reinforcement and wasted computational resources. Moreover, in the late stage of evolution, although the gaps between individuals are not large, further amplification by this exponential factor causes the parent individuals to produce a considerable offset in the corresponding dimension, which can easily deviate from a good evolutionary direction and likewise causes ineffective reinforcement. Obviously, the above reinforcement of the two parent individuals cannot effectively meet the needs of the algorithm's evolution.
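The amplification problem just described can be seen numerically. A minimal sketch with illustrative values (the choices of r1, mean(j), and the parent attribute below are made up for demonstration):

```python
import math

def amplification(r1, mean_j, x_parent_j):
    """The factor e^(r1 * (mean(j) - X_pi,j(t-1))) from Equations (21)/(22):
    it exponentially amplifies the gap between the population mean of
    dimension j and the parent's attribute in that dimension."""
    return math.exp(r1 * (mean_j - x_parent_j))

# Early evolution: dispersed individuals, a gap of 20 in one dimension
# blows up to e^20, far outside a typical bounded search range.
print(amplification(1.0, 10.0, -10.0))

# Late evolution: even a small gap of 2 still yields a sizeable factor e^2.
print(amplification(1.0, 1.0, -1.0))
```

This is why the section argues that both early-stage and late-stage reinforcement of the parents can overshoot and waste evaluations.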
Given the different roles of the two parent individuals in the algorithm, their reinforcement methods are improved separately to further balance the exploration and exploitation capabilities of the algorithm.
(1) Novel reinforcement of the parent individual X p1 . The parent individual X p1 is the optimal individual determined from the previous iteration of the population and plays a vital role in guiding the algorithm's convergence. However, if its evolutionary direction points directly at a local peak, it increases the possibility of the algorithm falling into a local optimum. Since the juvenile individuals of the current generation have already achieved self-improvement based on the two better parent individuals and carry better evolutionary information, they can provide better reference information for the parent individuals to determine the evolutionary direction. A novel reinforcement is therefore designed for the first parent individual X p1 , as shown in Equation (23).
where r1 is a random number uniformly distributed in the range [−1.5, 1.5], X k,j (t) is the j-th behavioral attribute of an individual randomly selected from the current population and different from X p1 (t − 1), and G 1 is a number drawn from the Gaussian distribution N(X p1,j (t − 1), 1), as shown in Equation (24). From the above novel reinforcement approach for the parent individual X p1 , it can be seen that if the j-th dimension of X p1 is determined to need reinforcement according to the probability RP prob , an individual X k (t) is randomly selected from the current population. By checking whether X k,j (t) equals X p1,j (t − 1), it is determined whether learning is performed toward mean(j) or toward X k,j (t). At the early stage of evolution, X k,j (t) is almost never equal to X p1,j (t − 1), so the j-th dimension of X p1 is reinforced by learning from X k,j (t). Since the individuals in the current population already carry better evolutionary information, the optimal evolutionary direction can be determined as early as possible by referring to their evolutionary directions. As evolution proceeds, the range r1 × e^(−16t²/max t²) within which X p1 tries to explore a better region gradually decreases, gradually locking onto a smaller range near the optimal individual in which to exploit a better individual. This accords with the way evolutionary algorithms gradually approach the region where the global optimal solution is located, avoiding missing the optimal solution because the search range is too large. In addition, because the individuals from which X p1 learns are not the same for each behavioral attribute, the range of dominant regions that X p1 can explore is further enriched, increasing the possibility of the algorithm locking onto the true optimal region and reducing the risk of falling into a local optimum due to a single evolutionary direction.
Moreover, as evolution proceeds, individuals become more similar, and by the late stage of evolution the j-th dimension of X p1 is reinforced with high probability by learning from mean(j). Compared with X k,j (t), mean(j) is more likely to differ from the j-th dimension of X p1 , while the difference between them is smaller, enabling the optimal individual X p1 to perform a more refined search within the current range. This makes it easier to find the optimal value of the current dimension, thus speeding up the convergence of the algorithm.
(2) Novel reinforcement of the parent individual X p2 . Unlike the parent individual X p1 , which focuses on determining the evolutionary direction of the algorithm and improving the overall convergence speed, the parent individual X p2 focuses on providing good evolutionary information to guide the evolution of juveniles while maintaining population diversity. Given this, the parent individual X p2 is made to learn jointly from the optimal individual X p1 and from other individuals in the population, and a novel reinforcement approach is proposed as shown in Equation (25).
where individual X q is a juvenile randomly selected from the population of juvenile crows after reinforcement learning, and G 2 is a number drawn from the Gaussian distribution N(X p2,j (t − 1), 0.5), as shown in Equation (26).
Compared with the reinforcement method of the parent individual X p1 , the parent individual X p2 learns from the optimal individual X p1 during reinforcement to ensure that it does not deviate from the optimal evolutionary direction; when referring to the evolutionary information of other individuals, however, it can explore over a wider search range, potentially providing information about other good locations that are not in the same region as the parent individual. This makes it possible for the juvenile individuals learning from it to explore regions different from those of X p1 , further balancing the exploration and exploitation capabilities of the algorithm. In addition, the parent individual X p2 controls the extent of learning from the optimal individual X p1 and from the individual X q with r1 and G 2 , respectively. In the early evolutionary stage, G 2 is greater than r1 with high probability, i.e., compared with learning from the optimal individual, the parent individual X p2 learns more from the remaining individuals, which better maintains population diversity, ensures the global search of the algorithm, and increases the possibility of converging to the global optimum. In the late evolutionary stage, G 2 is smaller than r1 with high probability, i.e., the parent individual X p2 learns mainly from the optimal individual and only supplementarily from the remaining individuals, further ensuring that individuals perform a refined search near the optimal individual and thus lock onto the globally optimal position.
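The per-dimension logic of the X p1 reinforcement described above can be sketched as follows. The exact update of Equation (23) is not reproduced in this excerpt, so the combination below (target selection between mean(j) and X k,j, scaled by the shrinking step r1 × e^(−16t²/max t²)) is a plausible reading of the prose, not the paper's definitive formula.

```python
import math
import random

def reinforce_parent_p1_dim(x_p1_j, x_k_j, mean_j, t, max_t, rng=random):
    """One-dimension reinforcement of the best parent X_p1 (sketch).
    Assumed update: move toward X_k,j when it differs from X_p1,j
    (early evolution, dispersed population), otherwise toward the
    dimension mean (late evolution, aggregated population), with a
    step range r1 * exp(-16 t^2 / max_t^2) that shrinks over time."""
    r1 = rng.uniform(-1.5, 1.5)
    shrink = math.exp(-16.0 * t * t / (max_t * max_t))
    target = mean_j if x_k_j == x_p1_j else x_k_j
    return x_p1_j + r1 * shrink * (target - x_p1_j)
```

Early on (t ≈ 0) the step spans roughly 1.5 times the gap to the target; by t = max_t the factor e^(−16) makes the move vanishingly small, matching the coarse-to-fine behavior the section describes.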

Experimental Results and Discussion
In this section, to verify the performance of the INCCLA algorithm, the following four sets of experiments are conducted: (1) parameter sensitivity analysis; (2) verification of the effectiveness of the three proposed improvement strategies; (3) comparison of the improved INCCLA algorithm with the original NCCLA algorithm and four other representative, high-performing evolutionary algorithms; (4) comparison of the effectiveness of each algorithm in engineering applications.
This section uses the CEC2013 and CEC2020 test suites for experimental simulation. The CEC2013 test suite contains 28 test functions, among which F1∼F5 are unimodal functions, which have only one optimum and are used to verify the convergence performance of the algorithm; F6∼F20 are multimodal functions, which have multiple local optima and are used to verify the algorithm's ability to escape from local optima; F21∼F28 are composition functions. The CEC2020 test suite contains 10 test functions, of which F1 is a unimodal function, F2∼F4 are shifted and rotated multimodal functions, F5∼F7 are hybrid functions, and F8∼F10 are composition functions. The definitions of the CEC2013 and CEC2020 test functions can be found in the literature [32, 33], respectively.
In this section, to ensure the fairness of the algorithm comparison, all algorithms were run on a computer with the Windows 11 operating system and an i5-11400H CPU, and were implemented in MATLAB R2021a.

Sensitivity Analysis of Parameters
The INCCLA algorithm proposed in this paper involves five parameters: RP prob , SL prob , R, lf min , and lf max . Compared with the NCCLA algorithm, the INCCLA algorithm retains the original settings of RP prob and SL prob , changes the settings of lf min and lf max , and adds the parameter R. Given this, to further analyze the effect of parameters on the performance of the INCCLA algorithm, this section only analyzes the effect of the parameters R, lf min , and lf max . To ensure fairness of comparison, the population size in each algorithm is N = 50, the dimension of the test function is D = 30, the maximum number of function evaluations is MaxFEs = 150,000, RP prob = 0.9, and SL prob = 0.99. Table 1 gives the mean and standard deviation of the optimal results obtained from 30 independent runs of the INCCLA algorithm on the CEC2013 test suite when R is set to different values; in this experiment, lf min = 0.0001 and lf max = 0.09. Table 2 gives the corresponding results when lf min and lf max are set to different combinations of values; in this experiment, R = 15. In Tables 1 and 2, the parameter settings that achieved the best results on each function are shown in bold, and the last row counts the number of functions on which each parameter setting performed best.
According to the data in Table 1, INCCLA achieves the best convergence on 19 test functions when R = 15; when R = 5, 25, and 35, it achieves the best convergence on 9, 11, and 14 test functions, respectively. Thus, the performance of INCCLA is sensitive to the setting of the parameter R, and the algorithm performs best when R = 15. Similarly, according to the data in Table 2, the performance of INCCLA is also sensitive to the settings of lf min and lf max , and INCCLA achieves the best convergence on 19 of the test functions when lf min = 0.0001 and lf max = 0.09. In summary, unless there are special requirements, setting R, lf min , and lf max to 15, 0.0001, and 0.09, respectively, allows INCCLA to obtain better optimization results.

Experiment on Effectiveness of Each Improvement Strategy
According to Section 3, the INCCLA algorithm improves the NCCLA algorithm in three aspects. To verify the effectiveness of the three improvement strategies, the NCCLA algorithm is combined with each improvement strategy individually to form three new algorithms: the variant based on cosine-similarity parent selection, the variant based on the improved juvenile learning phase, and the variant based on the improved reinforcement phase, named INCCLA1, INCCLA2, and INCCLA3, respectively. These are compared with the original NCCLA algorithm on the CEC2013 test suite.
In this section, to ensure fairness of comparison, the population size in each algorithm is N = 50, the dimension of the test function is D = 30, and the maximum number of function evaluations is MaxFEs = 5000 × D = 150,000. The other parameters of each algorithm are set as shown in Table 3. To avoid the contingency of a single run, each algorithm was run 30 times independently on each test function. Table 4 presents the results of each algorithm on the 28 test functions in 30 dimensions, where the values before and after the "±" represent the mean and standard deviation of the optimal values over the 30 experiments, respectively, and data that outperform the original NCCLA algorithm on the same function are marked in bold. To test whether the performance differences between each improvement strategy and NCCLA are significant and to verify that the obtained results are not coincidental, Tables 5 and 6 present the results of the Wilcoxon rank sum test and the Friedman test [34] between each improved strategy and the NCCLA algorithm on the 28 test functions, respectively. In Table 5, a p value greater than 0.05 indicates no significant difference between the improvement strategy and NCCLA, denoted by the symbol "="; when the p value is less than 0.05 and the mean of the optimal solutions obtained over the 30 experiments of the improvement strategy is better than that of NCCLA, the improvement strategy is significantly better than NCCLA, denoted by "+"; otherwise, the improvement strategy is significantly worse than NCCLA, denoted by "−". In Table 6, the smaller an algorithm's mean rank, the better its overall performance. The results show that all three improvement strategies proposed in this paper achieve better results.
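The "+/−/=" assignment used in Table 5 can be sketched as follows. This sketch uses the normal approximation to the Wilcoxon rank sum statistic (reasonable for 30 runs per algorithm) rather than the exact distribution; function names are illustrative, and smaller fitness is assumed better (minimization).

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Ties receive average ranks; no tie-variance correction is applied."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank of the tie group
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (_, lab) in zip(ranks, pooled) if lab == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2.0))  # = 2 * (1 - Phi(|z|))

def significance_symbol(runs_variant, runs_base, alpha=0.05):
    """'=' if no significant difference at level alpha; otherwise '+' when
    the variant's mean optimum is better (smaller), else '-'."""
    if rank_sum_p(runs_variant, runs_base) > alpha:
        return "="
    mean_v = sum(runs_variant) / len(runs_variant)
    mean_b = sum(runs_base) / len(runs_base)
    return "+" if mean_v < mean_b else "-"
```

In practice one would compare the 30 final fitness values of each variant against the 30 values of NCCLA on the same function and tabulate the resulting symbols.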
In summary, all three improvement strategies proposed in this paper improve NCCLA to some extent; the improvement strategies in the juvenile learning phase and in the reinforcement phase have the most obvious effects.

Performance Comparison of INCCLA with Other Algorithms
In this section, to verify the superior performance of the INCCLA algorithm in terms of convergence precision and convergence speed, INCCLA is compared with the NCCLA algorithm and four other strong evolutionary algorithms on the CEC2013 and CEC2020 test suites: the artificial bee colony algorithm based on a new neighborhood selection mechanism (NSABC) [35], the sine cosine algorithm based on transition parameters and mutation operators (MSCA) [36], the artificial tree algorithm based on two populations (IATTP) [37], and the improved crow search algorithm (ICSA) [38]. To ensure the fairness of the comparison, the population size in each algorithm is N = 50, and the maximum number of function evaluations is MaxFEs = 150,000. The other parameters of each algorithm are set as shown in Table 7, where the parameter values of each comparison algorithm are taken from its original paper.

Comparison of INCCLA with Other Algorithms on Convergence Precision
(1) Testing on CEC2013. In order to fully compare the performance of INCCLA with the other algorithms in terms of convergence precision, tests were conducted on the CEC2013 test suite with three different dimensions: D = 10, D = 30, and D = 100. Tables 8–10 give the mean and standard deviation of 30 independent experiments for each algorithm in the three dimensions, and the corresponding results of the Wilcoxon rank sum test are given in Table 11. To comprehensively evaluate the overall performance of all algorithms, the results of the Friedman test are given in Table 12.

Bold indicates the best results obtained on each function.

For the 30-dimensional optimization problem, ICSA obtained the theoretical optimum on five functions, including F3, F7, F9, F12, and F13. As seen in Table 11, compared to INCCLA, NCCLA has similar performance on 5 functions and significantly better performance only on F4, but significantly inferior performance on the remaining 22 test functions; NSABC has significantly better performance on F6 and F26, but significantly inferior performance on 14 functions; MSCA has similar performance on 4 test functions, but significantly inferior performance on the remaining 24; IATTP has significantly better performance only on F4, but significantly inferior performance on 17 test functions; ICSA has similar performance on only 7 test functions, but significantly inferior performance on the remaining 21. In summary, for the 30-dimensional CEC2013 test suite, the INCCLA proposed in this paper has a significant advantage in convergence precision compared with the other representative methods, and the performance gap between algorithms is noticeably larger than for the 10-dimensional optimization problems.
For the 100-dimensional optimization problem, it can be seen from Table 11 that, compared to INCCLA, NCCLA performs significantly better on only four functions (F4, F14, F17, and F22) but significantly worse on 20 test functions; NSABC has similar performance on 5 test functions and significantly better performance on 7 functions, but significantly inferior performance on 16 functions; MSCA has similar performance on F8 and F11, but significantly inferior performance on the remaining 26 test functions; IATTP and ICSA each perform significantly better only on F4, but significantly worse on 24 and 23 test functions, respectively. In summary, for high-dimensional optimization problems, the INCCLA proposed in this paper has a significant advantage in convergence precision compared with the other representative methods, and the performance gap between the algorithms is larger than for the 10- and 30-dimensional problems. Table 12 shows that the overall performance of INCCLA is the best among the six optimization algorithms for the 10-dimensional, 30-dimensional, and 100-dimensional test functions, and its advantage becomes more obvious as the dimensionality of the optimization problem increases. For both the 10-dimensional and 30-dimensional test functions, NSABC ranks second in overall performance, with IATTP, ICSA, and NCCLA slightly worse. For the 100-dimensional test functions, NCCLA ranks second, with NSABC, ICSA, and IATTP slightly worse. The overall performance of MSCA is the worst among the six algorithms.
In summary, compared with NCCLA and the other four strong optimization algorithms, INCCLA shows a significant advantage in convergence precision, and this advantage becomes more obvious as the dimension of the optimization problem increases.

Comparison of INCCLA with Other Algorithms on Convergence Speed
In order to compare the differences in convergence speed among the algorithms more intuitively, Figure 4 shows the convergence curves of each algorithm on the 28 test functions of the CEC2013 test suite when the dimension is 30, where the horizontal coordinate is the number of function evaluations and the vertical coordinate is the logarithm of the fitness value.
As shown in Figure 4, compared with NCCLA and the other four strong optimization algorithms, the INCCLA proposed in this paper is superior in overall performance in terms of convergence precision and convergence speed, although for individual complex functions it has the shortcoming of not converging fast enough in the early stage. The essential reasons lie in the three improved evolutionary strategies of Sections 3.1–3.3, analyzed in depth as follows. The update strategy of the INCCLA algorithm has a large step size in the early evolutionary stage, which makes individuals search far around themselves, ensuring the global search of the algorithm and reducing the possibility of falling into a local optimum; for more complex optimization problems, this also inevitably slows the rapid aggregation of the population around a dominant region, which is why INCCLA does not converge fast enough in the early stage on individual complex functions. However, as the iterations proceed, the individual search step size gradually decreases, making it easier to locate promising areas and perform a refined search, which improves the convergence speed in the late evolutionary stage and helps the algorithm obtain better convergence precision. In addition, the parent selection mechanism can guide the evolutionary direction of juveniles during the learning phase, allowing the algorithm to increase diversity while maintaining convergence. The juvenile crow hybrid learning mechanism can determine the state of the algorithm in the current evolutionary stage according to each individual's attributes and select the learning mode in a targeted manner, so that the algorithm effectively balances convergence speed and diversity throughout the evolutionary process. The interplay and

26
Computational Intelligence and Neuroscience

28
Computational Intelligence and Neuroscience influence of these three mechanisms ensures the overall advantage of INCCLA in terms of convergence precision and convergence speed. In the next step, we will work on further improving the convergence speed of the INCCLA algorithm in the pre-evolutionary stage while ensuring no significant change in the convergence precision.
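The shrinking-step behavior described above can be sketched as follows. The linear decay schedule and the uniform perturbation are illustrative assumptions for exposition, not the exact INCCLA update rule:

```python
import numpy as np

def update_position(x, best, t, t_max, rng):
    """Illustrative position update with an iteration-dependent step size.

    x: current position; best: a guiding individual; t: current iteration.
    The linear decay from 1 to 0 is an assumed schedule: broad moves early
    (global exploration), fine moves late (local refinement).
    """
    step = 1.0 - t / t_max
    return x + step * rng.uniform(-1.0, 1.0, size=x.shape) * (best - x)
```

With this schedule, the magnitude of any single move is bounded by `step * |best - x|`, so the search radius contracts automatically as the run progresses.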

Comparison of the Effects of Engineering Applications.
To further compare the effectiveness of INCCLA and each comparison algorithm in engineering applications, this section validates them on the collaborative beamforming optimization problem, a typical problem in antenna arrays. The amplitude ξ ∈ [0, 1] and phase α ∈ [−π, π] of the transmit-signal weight of each collaborative node are used as decision variables in the optimization process, and the algorithms minimize the peak sidelobe level (PSL) as shown in (27).
PSL = 20 log₁₀ ( max|AF(θ_SL, w)| / |AF(ϕ, w)| ),  (27)

where AF(θ, w) represents the array factor, as shown in (28), and ϕ is the main beam direction. The positions θ_SL are found by locating all peak points of the array factor (other than the main lobe's peak) over the domain θ ∈ [−π, ϕ) ∪ (ϕ, π]. The denominator |AF(ϕ, w)| is the main-beam power, which can be calculated as described in the literature [39], and the numerator max|AF(θ_SL, w)| is the maximum beam power in the sidelobes.
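A minimal numerical sketch of (27): given the array-factor magnitude sampled over θ, exclude the main lobe, take the strongest remaining sample as the peak sidelobe, and express the ratio in dB. The `main_dir`/`main_halfwidth` parameters that delimit the main lobe are illustrative assumptions; the paper instead locates the individual sidelobe peaks θ_SL:

```python
import numpy as np

def psl_db(theta, af, main_dir, main_halfwidth):
    """Estimate the peak sidelobe level (dB) from a sampled |AF(theta, w)|.

    theta: sample angles in [-pi, pi); af: array-factor magnitude at those
    angles; main_dir / main_halfwidth delimit the main lobe (assumed here).
    """
    sidelobe = np.abs(theta - main_dir) > main_halfwidth   # mask out the main lobe
    main_power = af[np.argmin(np.abs(theta - main_dir))]   # |AF(phi, w)|
    return 20.0 * np.log10(af[sidelobe].max() / main_power)
```

On a dense enough angular grid, the maximum over the masked samples coincides with the maximum over the sidelobe peaks θ_SL, so this grid-based estimate approaches the value defined in (27).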
This section examines the practical engineering optimization performance of INCCLA and the comparison algorithms on the collaborative beamforming problem shown in Figure 5, in which the wavelength of the transmitted signal is λ. The six collaborative nodes are distributed in a circular domain of radius 6λ, with one node located at the center of the domain.
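The node layout above can be reproduced schematically as follows; uniform random placement in the disc is an assumption for illustration, since the paper's Figure 5 fixes a specific layout:

```python
import numpy as np

def place_nodes(n=6, radius=6.0, rng=None):
    """One collaborative node at the centre, the remaining n-1 uniformly
    distributed in a disc of the given radius (positions in wavelengths)."""
    if rng is None:
        rng = np.random.default_rng()
    r = radius * np.sqrt(rng.uniform(size=n - 1))   # sqrt gives uniform area density
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n - 1)
    pts = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
    return np.vstack([[0.0, 0.0], pts])             # centre node first
```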
The parameter settings of each algorithm are shown in Table 7. To avoid the adverse effect of chance on the evaluation, each algorithm was run 10 times independently. The collaborative beamforming scheme corresponding to the best median PSL obtained by each algorithm was selected for comparison. The beam patterns of INCCLA and each comparison algorithm are presented in the Cartesian coordinate system in Figure 6, with the PSL obtained by each algorithm labeled in the figure.
According to Figure 6, each algorithm achieves better collaborative beamforming than the unoptimized configuration. The best PSL values obtained by INCCLA, NCCLA, NSABC, MSCA, IATTP, and ICSA are −5.5044 dB, −5.2306 dB, −5.2181 dB, −3.5948 dB, −4.1655 dB, and −5.3668 dB, respectively. Compared with the other five algorithms, INCCLA obtains the smallest PSL and thus achieves the best collaborative beamforming result among the six algorithms. In summary, the proposed INCCLA also performs better in engineering applications.

Conclusions
This paper proposes an improved New Caledonian crow learning algorithm (INCCLA) to further improve the convergence performance of the NCCLA algorithm and its ability to escape from local optima. First, INCCLA introduces cosine similarity in the parent selection phase; the selected parent guides the juvenile crow to exploit different regions, maintaining the balance between population diversity and convergence speed during evolution. Second, INCCLA sets up a hybrid mechanism of complete and incomplete learning: the complete learning phase accelerates individual convergence and improves the convergence speed of the algorithm, while the incomplete learning phase increases population diversity and enhances the algorithm's ability to escape from local optima. Finally, the juvenile crow reinforcement phase introduces weight factors and random perturbations to increase the global search and local exploitation ability of the algorithm, and the proposed parent reinforcement phase enhances the search capability of parent individuals and improves the overall performance of the algorithm. Experimental results on the CEC2013 and CEC2020 test suites show that the improvement strategies proposed in this paper effectively improve the overall performance of INCCLA, enabling the algorithm to maintain a balance between convergence speed and population diversity during the evolutionary process and to achieve better results on most of the test functions. In addition, INCCLA is more competitive than the other four optimization algorithms on unimodal, multimodal, and composition problems. INCCLA is also applied to the collaborative beamforming optimization problem, demonstrating its usefulness in engineering applications.
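The cosine-similarity parent selection summarized above can be sketched as follows. Treating each candidate solution as a position vector and choosing the least-similar parent is a modelling assumption here; the exact selection rule is defined in Section 3.1 of the paper:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two candidate-solution position vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pick_parent(juvenile, parents):
    """Pick the parent pointing in the most different direction, steering the
    juvenile toward a different region of the search space (assumed rule)."""
    sims = [cosine_similarity(juvenile, p) for p in parents]
    return parents[int(np.argmin(sims))]
```

Selecting dissimilar parents spreads the juveniles across directions of the search space, which is the diversity-preserving effect the conclusion attributes to this mechanism.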
Although the INCCLA algorithm proposed in this paper has obvious advantages in terms of convergence accuracy, its time complexity is slightly higher. In future work, the time complexity of the algorithm needs to be further reduced, and the application of the INCCLA algorithm to practical engineering fields will be further broadened.

Data Availability
The labeled datasets used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.