An Enhanced Comprehensive Learning Particle Swarm Optimizer with the Elite-Based Dominance Scheme

In recent years, swarm-based stochastic optimizers have achieved remarkable results on real-life problems in engineering and data science. Within the particle swarm optimization (PSO) family, the comprehensive learning PSO (CLPSO) is a well-established evolutionary algorithm whose comprehensive learning strategy (CLS) effectively boosts the efficacy of the basic PSO. However, on unimodal functions, CLPSO converges too slowly to reach the optimum quickly. In this paper, the elite-based dominance scheme of another well-established method, the grey wolf optimizer (GWO), is introduced into the CLPSO, and the resulting grey wolf local enhanced comprehensive learning PSO algorithm (GCLPSO) is proposed. Thanks to the exploitative trends of the GWO, the new algorithm improves the local search capacity of the CLPSO. The new variant is compared with 15 representative and advanced algorithms on the IEEE CEC2017 benchmarks. Experimental outcomes show that the improved algorithm outperforms the competitors on four different kinds of functions. Moreover, the algorithm is favorably applied to feature selection and three constrained engineering design problems. Simulations show that the GCLPSO can effectively deal with constrained problems and solve problems encountered in actual production.


Introduction
Optimization problems are common in real life, and we need to find the best solution when tackling a specific problem. As the complexity of a problem increases, traditional gradient-based methods struggle to optimize some types of problems well [1,2]. To deal with this difficulty, metaheuristic algorithms are widely used in practice. These algorithms iterate and randomly generate candidate solutions for optimization problems by simulating natural phenomena or social behaviors [3][4][5][6]. The underlying idea behind these techniques is to use mathematical models to simulate biological and physical systems in nature, such as natural evolution and swarm intelligence. Previous studies confirm that metaheuristic algorithms are more effective than gradient-based algorithms in coping with some optimization problems [7][8][9][10]. Still, the metaheuristic algorithm (MA) has weaknesses to be improved. For example, convergence to an optimal solution is relatively slow, and there is no universal model for dealing with different problems. Therefore, it is necessary to modify and enhance the core exploratory and exploitative abilities of stochastic algorithms on some optimization problems [11][12][13][14][15][16][17][18].
PSO [45] is a MA proposed by Eberhart et al. in 1995, motivated by the communication behaviors and social interaction of animals. PSO simulates the hunting behavior of birds that cooperatively search for food. Each member of the flock adjusts its search pattern by learning from its own experience or from other members. Inspired by this phenomenon, a mathematical model was established. In the PSO algorithm, a particle represents a member of the flock; it is a potential solution of the problem being optimized and corresponds to a point in the search space. The position of the food is regarded as the global optimum. Each particle possesses a fitness value and a speed, which are adjusted according to the global optimal solution and the individual optimal solution. Owing to its small number of parameters and ease of use, the PSO algorithm has been used in function optimization [46], filter design [47], proportional-integral-derivative (PID) control [48], power allocation [49], and other scientific and engineering applications [50][51][52][53]. However, the algorithm tends to be trapped in local optima when encountering complex multimodal problems. To improve the performance of the algorithm, researchers have come up with a large number of PSO variants that tune the controlled parameters such as the acceleration coefficients and inertia weight [54,55] or adapt the population size [56,57]. Gong et al. [58] added GA to PSO to promote the convergence performance of the algorithm. Zhan et al. [59] added the orthogonal learning (OL) strategy to PSO to strengthen the algorithm's ability to escape from local optima. Cheng and Jin [60] combined competitive learning strategies with PSO to improve the convergence accuracy of the algorithm. To enhance the ability of the PSO algorithm to find the optimum on complex multimodal problems, Liang et al. [61] proposed a new, improved PSO algorithm, the CLPSO.
It adopted an innovative comprehensive learning strategy (CLS), in which the individual best positions of all particles are used to update a particle's speed. This mechanism allows group diversity to be conserved and prevents premature convergence. However, the convergence speed of CLPSO on unimodal functions is very slow. For the algorithm to converge to the optimal solution, it is necessary to enhance its local search ability near the optimum. The grey wolf optimizer (GWO) [62] is a MA proposed for global optimization. Its inspiration comes from the hunting process of grey wolves in nature: after learning from the wolf pack's organizational hierarchy, GWO uses the same principle to organize the individuals in the algorithm. The GWO has few parameters, the strategy is relatively simple, flexible, and scalable, and the algorithm converges well on unimodal functions. At present, this method has been applied in many fields, including neural networks [15,16,63,64], environment [65], medical diagnosis [17,[66][67][68], and image processing [69]. In this research, the GWO mechanism is introduced into the CLPSO to generate a novel algorithm called GCLPSO, which reaches a certain harmony between local search and global search and enhances the algorithm's ability to find the optimum. Specifically, the CLPSO effectively preserves population diversity and evades premature convergence. Then, the GWO is utilized to perform a local search around the best particles of the CLPSO to achieve high convergence speed and accuracy. Theoretically, the proposed mechanism can greatly improve the balance between exploration and exploitation so that the algorithm can quickly converge to the optimum.
To analyze the efficiency of the algorithm, the benchmarks of CEC2017 [70] were adopted to evaluate the performance of the GCLPSO and other comparison algorithms. The comparison algorithms include seven MAs, namely, PSO, the dragonfly algorithm (DA) [71], GOA, the sine cosine algorithm (SCA) [72], MFO, WOA, and GWO, and eight advanced evolutionary algorithms, namely, Cauchy and Gaussian sine cosine optimization (CGSCA) [73], the sine cosine algorithm with differential evolution (SCADE) [74], the chaotic fruit fly optimization algorithm (CIFOA) [75], the adaptive mutation fruit fly optimization algorithm (AMFOA) [76], the Lévy flight trajectory-based whale optimization algorithm (LWOA) [77], the improved whale optimization algorithm (IWOA) [78], biogeography-based learning particle swarm optimization (BLPSO) [79], and CLPSO [61]. Experimental results show that the improved algorithm is considerably superior to the other comparison algorithms in finding the optimal solution. The GCLPSO has also shown good performance on engineering constraint problems. This paper applies the proposed algorithm to the pressure vessel, welded beam, and I-beam design models. The optimization outcomes of the comparisons show that the improved algorithm is significantly better than the other methods. This paper is divided into five sections. Section 2 briefly describes the CLPSO algorithm and the GWO algorithm. Section 3 provides a detailed definition of the GCLPSO. Section 4 is the experimental part, which details the results of the GCLPSO and the comparison algorithms on the benchmark functions, feature selection, and engineering problems. Section 5 summarizes the contributions of this paper and plans for future work.

Background Knowledge
In this paper, the idea of the grey wolf algorithm is integrated into the CLPSO to strengthen the algorithm's capability of scouting for the optimal solution. This section explains the CLPSO and the grey wolf algorithm in detail.

Comprehensive Learning Particle Swarm Optimizer.
The CLPSO uses a comprehensive learning strategy (CLS) in which the personal best positions (pbest) of the particles are used to update each particle's speed. The CLS maintains the diversity of the population and prevents a premature fall into a local optimum. The speed and position updates of the CLPSO algorithm are given as follows:

V_i^d = w × V_i^d + c × rand_i^d × (pbest_{f_i(d)}^d − X_i^d), (1)
X_i^d = X_i^d + V_i^d, (2)

where f_i = [f_i(1), ..., f_i(D)] is the learning sample vector defined for particle i, f_i(d) indicates which particle's pbest the dth dimension of particle i learns from, and pbest_{f_i(d)}^d is the corresponding dimension value of that particle's pbest. Which pbest a dimension learns from depends on the learning probability Pc: when a dimension of a particle is to be updated, a random number is produced; the dimension learns from the particle's own pbest if the random number is greater than Pc, and otherwise it learns from another particle's pbest. The learning particle is selected from the other particles as follows: (1) first, select two particles randomly from the population, excluding the particle whose speed is being updated; (2) compare the fitness values of the two particles' pbest and choose the better one. Since a minimization problem is solved here, the particle whose pbest has the smaller function value is the fitter one.
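The selection rule above can be sketched as follows (illustrative Python, not the authors' code; the function name, the index-vector representation of f_i, and the data layout are assumptions):

```python
import random

def choose_exemplar(i, fitness, Pc, dims):
    """Build the learning sample vector f_i for particle i.
    fitness[k] is the objective value of particle k's pbest (smaller is better).
    Each entry of f_i is the index of the particle to learn that dimension from."""
    n = len(fitness)
    f_i = []
    learns_from_other = False
    for _ in range(dims):
        if random.random() < Pc[i]:
            # tournament: pick two other particles and keep the fitter pbest
            a, b = random.sample([k for k in range(n) if k != i], 2)
            winner = a if fitness[a] < fitness[b] else b
            f_i.append(winner)
            learns_from_other = True
        else:
            f_i.append(i)          # learn from the particle's own pbest
    if not learns_from_other:      # force at least one foreign dimension
        d = random.randrange(dims)
        f_i[d] = random.choice([k for k in range(n) if k != i])
    return f_i
```

A particle then uses pbest[f_i[d]][d] as the learning target of dimension d in equation (1).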
CLPSO allocates a learning probability Pc to each particle by the following equation:

Pc_i = a + b × (exp(10(i − 1)/(N − 1)) − 1)/(exp(10) − 1), i = 1, 2, ..., N,

where a and b determine the minimum and maximum learning probabilities and N is the total number of particles. Also, in order to avoid wasting time searching in bad directions when learning from the sample particles' best positions, a threshold m on the number of learning attempts is set: if the fitness of a particle has not improved after m consecutive updates, its learning sample vector is randomly regenerated. Pseudo-code of the f_i generation method in CLPSO is shown in Algorithm 1.
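This assignment is commonly instantiated with a = 0.05 and b = 0.45, the values suggested in the original CLPSO paper, which can be computed as:

```python
import math

def learning_probabilities(N, a=0.05, b=0.45):
    """Learning probability Pc for each of the N particles; a and b bound
    the minimum and maximum probabilities, and Pc grows exponentially
    with the particle index."""
    return [a + b * (math.exp(10 * i / (N - 1)) - 1) / (math.exp(10) - 1)
            for i in range(N)]
```

For N = 30 this spreads Pc from 0.05 (first particle) up to 0.5 (last particle), so different particles learn from others with different frequencies.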

Grey Wolf Optimizer.
Mirjalili et al. [62] proposed a new MA named GWO in 2014. The algorithm is inspired by the social hierarchy and hunting strategy of wild grey wolves. In the GWO, the population is divided into four levels: the highest alpha (α), beta (β), delta (δ), and the lowest omega (ω). The better wolves α, β, and δ lead the other wolves ω to explore the promising solution region. In the GWO, wolves can spot the position of prey and encircle them.
The encircling behavior is modeled as

D = |C · X_p(t) − X(t)|,
X(t + 1) = X_p(t) − A · D,

where X is the position vector of the grey wolf, A and C are coefficient vectors, X_p is the position vector of the prey, and t is the number of iterations. The coefficients A and C are calculated as

A = 2a · r_1 − a,
C = 2 · r_2,

where r_1 and r_2 are random vectors in [0, 1] and a decreases linearly from 2 to 0 as the number of iterations increases. The hunting process of the grey wolf is described by the following formulas:

D_α = |C_1 · X_α − X|,  D_β = |C_2 · X_β − X|,  D_δ = |C_3 · X_δ − X|,
X_1 = X_α − A_1 · D_α,  X_2 = X_β − A_2 · D_β,  X_3 = X_δ − A_3 · D_δ,
X(t + 1) = (X_1 + X_2 + X_3)/3. (12)

The pseudo-code of the GWO is shown in Algorithm 2.
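A minimal sketch of this position update in Python (illustrative names; `gwo_step` applies the three-leader averaging of the hunting equations to every wolf):

```python
import random

def gwo_step(wolves, fitness, a):
    """One GWO position update. wolves: list of position lists; fitness:
    objective values (smaller is better); a decays from 2 to 0 over the
    run. Returns the new positions."""
    ranked = sorted(range(len(wolves)), key=lambda k: fitness[k])
    alpha, beta, delta = wolves[ranked[0]], wolves[ranked[1]], wolves[ranked[2]]
    new = []
    for X in wolves:
        pos = []
        for d in range(len(X)):
            x_est = 0.0
            for leader in (alpha, beta, delta):
                A = 2 * a * random.random() - a      # A = 2a*r1 - a
                C = 2 * random.random()              # C = 2*r2
                D = abs(C * leader[d] - X[d])        # D = |C*X_p - X|
                x_est += leader[d] - A * D           # X_k = X_p - A*D
            pos.append(x_est / 3.0)                  # average of X_1, X_2, X_3
        new.append(pos)
    return new
```

When a has decayed to 0, A vanishes and every wolf moves to the average of the three leaders, which is the exploitative late-stage behavior the paper relies on.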

Proposed GCLPSO Method
This section interprets the GCLPSO in detail. Because the CLPSO algorithm suffers from slow convergence speed and low convergence accuracy on some optimization problems, core mechanisms of the GWO algorithm are incorporated to enhance the CLPSO, and a new algorithm called GCLPSO is proposed. The GWO's mechanisms can effectively promote the exploitative engine of the algorithm.
CLPSO updates the particle speed through the pbest of all particles, which keeps the algorithm from being trapped in a local optimum prematurely but also prevents it from carrying out a fine local search near the global optimum. The elite-based dominance idea of the improved algorithm, called GCLPSO in this paper, is to select the three best solutions generated in each iteration of the CLPSO, analogous to GWO's alpha, beta, and delta wolves, and then to explore the vicinity of these three solutions with the GWO search equations. Meanwhile, the optimum found by this local search is compared with the optimal solution of the CLPSO; if it is superior, it replaces the global optimal solution of the CLPSO. The improved algorithm thus enhances the local search of the CLPSO through the GWO mechanism: it boosts the local search capacity and improves both the accuracy and the speed of the algorithm without causing premature convergence to a local optimum. The specific steps of the algorithm are described as follows: (1) First, initialize the particles and parameters, and calculate the fitness of each particle. (2) Update every particle with the CLPSO algorithm.
(3) The three best solutions of the CLPSO are selected as the grey wolf algorithm's alpha, beta, and delta, and the GWO is used to search locally near them. If the optimum found is superior to the optimal solution of the CLPSO, the optimal solution of the CLPSO is replaced. (4) Repeat Steps 2 and 3 until the termination condition is met.
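The steps above can be sketched in Python as follows (a simplified illustration, not the authors' implementation: the full CLS with learning probabilities and the refreshing gap m is replaced by a crude per-dimension exemplar choice, and all names and parameter values are assumptions):

```python
import random

def gclpso(obj, dims, n=20, iters=200, lb=-5.0, ub=5.0, seed=1):
    """Minimal GCLPSO-style loop minimizing a generic objective `obj`."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dims)] for _ in range(n)]
    V = [[0.0] * dims for _ in range(n)]
    pbest = [x[:] for x in X]
    pfit = [obj(x) for x in X]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                  # inertia weight
        for i in range(n):                         # (2) CLPSO-style update
            for d in range(dims):
                j = rng.randrange(n)               # crude exemplar choice
                ex = pbest[j] if pfit[j] < pfit[i] else pbest[i]
                V[i][d] = w * V[i][d] + 1.49445 * rng.random() * (ex[d] - X[i][d])
                X[i][d] = min(ub, max(lb, X[i][d] + V[i][d]))
            f = obj(X[i])
            if f < pfit[i]:
                pfit[i], pbest[i] = f, X[i][:]
        # (3) GWO-style local search around the three best pbest positions
        order = sorted(range(n), key=lambda k: pfit[k])
        leaders = [pbest[order[0]], pbest[order[1]], pbest[order[2]]]
        a = 2.0 * (1 - t / iters)
        for i in order[:3]:
            cand = []
            for d in range(dims):
                s = 0.0
                for L in leaders:
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    s += L[d] - A * abs(C * L[d] - pbest[i][d])
                cand.append(min(ub, max(lb, s / 3.0)))
            f = obj(cand)
            if f < pfit[i]:                        # keep only improvements
                pfit[i], pbest[i] = f, cand
    best = min(range(n), key=lambda k: pfit[k])
    return pbest[best], pfit[best]
```

The key design choice mirrors Step (3): the GWO candidate replaces a CLPSO solution only when it improves it, so the local search cannot degrade the swarm.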
The specific process of the GCLPSO is described in Algorithm 3, which shows each step of the algorithm in detail. In order to illustrate the process of the improved algorithm more clearly and intuitively, the flowchart of the GCLPSO algorithm is shown in Figure 1.
The time complexity of the GCLPSO comprises the initialization of the search population, O(n); the initialization of the grey wolf population, O(n); the update of the search particle positions, O(n × d × g); the grey wolf local search over all wolf positions, O(n × d × g); and the sorting of the population fitness values, O(n × log n × g), where n is the size of the population, d is the dimension, and g is the maximum number of iterations. Therefore, the final time complexity of the GCLPSO is O(n × g × (d + log n)).

Algorithm 2: Pseudo-code of the GWO.
Initialize the grey wolf population X_i (i = 1, ..., N) and the parameters a, A, C
Calculate the fitness of each wolf
X_α = the best wolf α; X_β = the second best wolf β; X_δ = the third best wolf δ
while (t < T) // T is the maximum number of iterations
  for i = 1 : N
    Update the position of the current wolf by equation (12)
  end for
  Update a, A, and C
  Update X_α, X_β, and X_δ
  t = t + 1
end while
Return X_α

Experimental Studies
This section first compares the proposed algorithm with other advanced methods on 30 classic benchmark functions of CEC2017 to verify the performance of the GCLPSO. Then, the algorithm was applied to three engineering design problems, and good optimization results were obtained, which confirms the algorithm's ability to cope with constraints.

Benchmarks' Validation.
In this paper, the 30 classical benchmarks of CEC2017 were utilized to compare the proposed algorithm with other advanced methods. These functions consist of unimodal (C01-C03), multimodal (C04-C10), hybrid (C11-C20), and composition (C21-C30) functions, so the performance of the algorithm is evaluated comprehensively across different types of benchmarks. The descriptions of the 30 benchmarks are shown in Table 1. The GCLPSO is also compared with other PSO variants on the 10 classic benchmark functions of CEC2019, which are shown in Table 2.
To obtain fair and unbiased results, the experiment was carried out with the same parameter settings: the population size and the maximum number of iterations were set to 30 and 2000, respectively. Each competitor ran independently thirty times on the benchmark functions. Then, the Friedman test [80] was used to comprehensively assess the optimal results of all competitors on the benchmarks. The Friedman test is a nonparametric statistical test usually adopted to distinguish differences between multiple sets of results; the average performance of each method is ranked, and further statistical comparison yields the ARV (average rank value) of each competitor. Moreover, the paired Wilcoxon signed-rank test [81] was adopted to detect significant differences between two sample means. Only when the obtained p value was less than 0.05 was the performance of the GCLPSO considered significantly superior to the other competitors. In this paper, these two statistical tests were applied to validate the comparisons.

Algorithm 3: Pseudo-code of the GCLPSO.
Set the maximum number of iterations, the threshold m, and the dimensionality of the space
Create the initial population x_i (i = 1, ..., n); calculate the objective function value of x_i: fit(x_i)
Record the best position of each particle pbest and the fitness of the personal best positions fit(pbest); calculate the learning probability
Generate learning sample vectors f_i using Algorithm 1; flag(i) = 0 (i = 1, 2, ..., n)
Randomly generate the grey wolf population M_i (i = 1, ..., n); get the fitness of each agent fit(M_i)
Initialize a, A, and C
while the termination condition is not met
  Update velocities and locations using equations (1) and (2)
  Compute the fitness of the population
  Update the global optimal solution gbest
  Select the best three solutions x_a, x_b, and x_c from fit(x_i) as alpha M_α, beta M_β, and delta M_δ
  for i = 1 : n
    Update the position of the current wolf by equation (12)
  end for
end while

Comparisons of the GCLPSO with Other Algorithms.
In this section, several MAs were compared with the GCLPSO on the 30 benchmark functions of CEC2017. In order to fully certify the performance of the GCLPSO, this paper uses seven classical MAs and eight advanced MAs as comparison algorithms. The classical MAs involved are as follows: PSO [45], GOA [82], DA [71], MFO [29], SCA [72], WOA [24], and GWO [62]. The advanced MAs include CGSCA [73], SCADE [74], CIFOA [75], AMFOA [76], LWOA [77], IWOA [78], BLPSO [79], and CLPSO [61]. As shown in Table 3, the parameters of all comparison algorithms were set according to the original papers. In order to make the experimental results fair and reliable, all algorithms were executed in the same environment.
In the experiment, the maximum number of iterations is set to 2000, and the population size is set to 30. Each algorithm runs 30 times independently on each benchmark function.
The comparison results are shown in Table 4, which lists the mean and standard deviation of each algorithm after 30 independent executions on the 30 benchmark functions. On the unimodal benchmark functions, LWOA and PSO show strong optimization ability in the C2 case, where the optimum of the LWOA algorithm is superior to all other competitors, and the PSO algorithm is strongly competitive in the C3 case, where its final result is superior to the other algorithms. In the C1 and C4 cases, the optimization result of the proposed algorithm is superior to all the competing algorithms. On the multimodal benchmark functions, the GWO finds better solutions than the GCLPSO in the C5 and C8 cases, but on C6, C7, C9, and C10, the optimal solution obtained by the GCLPSO is better than those of all other competing algorithms. On the hybrid benchmark functions, the GCLPSO performs better than the other algorithms. On the composition benchmark functions, apart from the strong competitiveness of LWOA, GOA, and PSO on the C28 function, the GCLPSO has obvious advantages on the other functions. The GCLPSO uses the GWO mechanism to strengthen the local search of the CLPSO so that the algorithm achieves a better balance between local and global search and reaches the best solution on most benchmark functions. This proves that the algorithm can effectively deal with unimodal, multimodal, hybrid, and composition functions at the same time.
Also, the Friedman test and the Wilcoxon signed-rank test were used to evaluate the overall effect of the algorithm. The Wilcoxon signed-rank test yields p values for the comparison algorithms on the 30 benchmark functions that are almost all less than 0.05, and the symbols "+/=/−" indicate that the GCLPSO is significantly superior to the 15 competing algorithms on the 30 benchmarks of 4 different types. Table 4 also shows the Friedman test results: the GCLPSO has the lowest ARV on the 30 benchmark functions. This shows that the GCLPSO is better than the other popular algorithms on CEC2017. Therefore, the proposed algorithm has a preferable convergence rate and a more accurate converged solution than the other competitors.
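The ARV reported in Table 4 can be computed as follows (a small illustrative helper, not the authors' code; lower scores rank better, and ties share the average rank):

```python
def average_rank_values(results):
    """Average rank value (ARV) per algorithm, as used with the Friedman
    test. results[f][k] is algorithm k's score on function f (lower is
    better)."""
    n_alg = len(results[0])
    ranks = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda k: row[k])
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1                      # extend the tie group
            shared = (i + j) / 2 + 1        # average of ranks i+1 .. j+1
            for k in order[i:j + 1]:
                ranks[k] += shared
            i = j + 1
    return [r / len(results) for r in ranks]
```

The algorithm with the smallest ARV over all benchmark functions is ranked first overall.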
In order to understand the convergence trend of the algorithms more intuitively and to estimate their performance, representative benchmark functions were selected from CEC2017 for analysis, and the images in Figure 2 demonstrate the convergence process. As shown in Figure 2, on the unimodal benchmark function C1, the convergence speed and precision of the GCLPSO are better than those of the other algorithms. On the multimodal benchmark functions C7 and C9, the PSO converges faster than the GCLPSO in the early stage of the C7 iterations, and the GWO and BLPSO converge faster in the early stage of the C9 iterations, but the value reached by the GCLPSO in the late stage of convergence is better than those of all other competing algorithms. On the hybrid benchmark functions C12, C13, and C9, the GCLPSO is slower than some competitors in the early convergence period, but the value it converges to in the late period is much better than those of the other competing algorithms. On the composition benchmark functions C22, C26, and C30, although the BLPSO is highly competitive on function C22, the optimal values found by the GCLPSO on the three benchmark functions are all better than those of the other algorithms.

Comparisons of the GCLPSO with Other PSO Variants.
In this section, the GCLPSO was compared with improved PSO variants on the benchmark functions of CEC2017 and CEC2019. The advanced PSO variants include FST-PSO [83], PP-PSO [84], and SopPSO [85]. In order to make the experimental results fair and reliable, all algorithms were executed in the same environment.
In the experiment, the population size is set to 30, and the maximum number of iterations is set to 1000. Each algorithm runs independently 30 times on each benchmark function. The comparison results are shown in Table 5, which lists the average fitness value and standard deviation of each algorithm after 30 independent executions on the 40 benchmark functions; the best fitness values are shown in bold. The GCLPSO achieves the best fitness values on most benchmark functions. On the unimodal functions C1 and C3, the fitness value found by the GCLPSO is better than those of the other comparison algorithms, and the GCLPSO is likewise competitive on the multimodal functions C5, C6, C7, and C8.

In order to understand the convergence trend of the algorithms more intuitively and to evaluate their performance, representative benchmark functions were selected from CEC2017 and CEC2019 for analysis. As shown in Figure 3, on the CEC2017 multimodal benchmark functions C6 and C9, the fitness values found by the GCLPSO are better than those of the other comparison algorithms; the GCLPSO is not the best in the early iterations, but in the later iterations it surpasses the other algorithms. This is because this paper introduces the GWO idea into the CLPSO to further improve its local search ability. On the C17 function, all the algorithms quickly found a good fitness value in the early iterations; in the later iterations, part of the comparison algorithms fell into a local optimum, while the GCLPSO continued to improve the best fitness value. On the C18, C31, C33, and C38 benchmark functions, compared with the other algorithms, the GCLPSO not only found the best fitness value in the later iterations but also kept a steeper convergence trend and continued to improve the best fitness value.
On the C23 and C27 benchmark functions, the GCLPSO converged to a good fitness value in the early stage of the iterations, and its rate of improvement in the later iterations was relatively low. Nevertheless, the final fitness value of the GCLPSO is much better than those of the other comparison algorithms.

Feature Selection.
Feature selection is a multiobjective optimization problem. Its goal is to select as few features as possible while obtaining the greatest classification accuracy. In this section, the proposed GCLPSO was compared with advanced feature selection algorithms on 12 different UCI datasets [86], as described in Table 6.
The GCLPSO constantly updates the positions of particles to obtain new solutions. For feature selection problems, however, the features must be encoded in binary form, so the continuous values produced by the algorithm have to be converted into binary values. First, a random threshold is used to initialize a binary position, as in the following equation:

x(i, j) = 1 if rand > 0.5, and 0 otherwise,

where x denotes the position value of the individual and i and j denote the i-th row and the j-th column, respectively. Then, each continuous solution value obtained by the algorithm is compressed with a V-shaped transfer function, which maps the search agent into the interval from 0 to 1:

s = V(x),

where x is a continuous value and s, the value after conversion by the V-shaped transfer function, is a continuous value between 0 and 1. Finally, the binary value obtained by the initialization and the value produced by the V-shaped transfer function are combined to generate the new binary value:

posOut = ~pos, if rand < s; posOut = pos, otherwise,

where posOut represents the newly obtained binary value, pos represents the initialized binary value, and s represents the continuous value obtained by the V-shaped transfer function.
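This conversion can be sketched as follows (illustrative; the common V2 shape |x / sqrt(1 + x^2)| is assumed here, since the paper does not reproduce the exact transfer function):

```python
import math, random

def to_binary(x, pos, rng=random):
    """Binary position update via a V-shaped transfer function.
    x: continuous position value; pos: current binary value (0 or 1).
    The bit is flipped with probability s = |x / sqrt(1 + x^2)|."""
    s = abs(x / math.sqrt(1.0 + x * x))   # squashed into [0, 1)
    return 1 - pos if rng.random() < s else pos
```

Small |x| leaves the bit mostly unchanged, while large |x| makes a flip very likely, which is the defining property of V-shaped transfer functions.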
As with standard K-fold cross-validation, the data are separated into a training set and a test set. The training set is further divided into K subsets (folds), and each subset is verified separately: one subset is used as the validation data, and the remaining K − 1 subsets are used as the training set. In the experiment, each dataset was run N times, each run performing a complete K-fold cross-validation procedure.
This experiment used the K-nearest neighbor (KNN) [87] classifier to classify the data. The KNN classifier uses the data in the training set to fit the model and then uses the model to evaluate the data in the validation set; finally, the model classifies the test dataset to obtain the final accuracy. In this paper, the GCLPSO is binary-converted and compared with other feature selection methods, including BGWO [86], BMFO [88], BPSO [89], and BBA [90], to estimate the performance of the proposed algorithm.
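A plain KNN vote of the kind used here can be sketched as follows (illustrative, not the experiment's implementation):

```python
def knn_predict(train_X, train_y, x, k=1):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance)."""
    dist = sorted(range(len(train_X)),
                  key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = [train_y[i] for i in dist[:k]]
    return max(set(votes), key=votes.count)
```

In the feature-selection loop, `train_X` would contain only the columns whose bits are 1 in the search agent's binary vector.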
In this study, feature information is encoded in binary form. Each search agent is a one-dimensional binary vector whose length equals the number of features in the dataset. Each search agent represents a solution: an element value of 0 in the vector indicates that the feature is not chosen, and a value of 1 indicates that it is chosen. Each solution is then evaluated by its fitness value. The relationship between the error rate obtained by the classifier on the validation set and the fitness value is given by the following equation:

fitness = α × (1 − correctRate) + β × (N_i / N),

where correctRate represents the accuracy of the KNN classifier on the validation set, N represents the total number of features in the dataset, and N_i represents the number of features selected by the i-th search agent. In addition, α and β are two weights, with α = 0.95 and β = 0.05. The whole process updates each search agent iteratively until the maximum number of iterations is reached. Table 7 illustrates the average fitness values obtained by the GCLPSO, BMFO, BGWO, BPSO, and BBA on the 12 datasets, with the best results in bold. It can be observed that the GCLPSO possesses the best fitness value on the D2 to D5 datasets and the D9 and D11 datasets; among these, the D4 and D9 datasets have more than 30 features. The average fitness value of the GCLPSO is superior to that of the BMFO on all datasets. Except that the GCLPSO is inferior to the BPSO on the D6 dataset, the GCLPSO is better than the BPSO on the other datasets. In general, the average fitness of the GCLPSO on the 12 datasets is the best, which shows that the algorithm works best on this kind of problem. Table 8 demonstrates the average error rate of each algorithm on the validation set when K-fold cross-validation is performed on the 12 datasets.
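With the weights above, the fitness evaluation can be sketched as follows (the formula is reconstructed from the text's description, with error rate taken as 1 − correctRate):

```python
def fs_fitness(correct_rate, n_selected, n_total, alpha=0.95, beta=0.05):
    """Feature-selection fitness: a weighted sum of the classification
    error and the relative size of the feature subset (lower is better)."""
    return alpha * (1.0 - correct_rate) + beta * (n_selected / n_total)
```

The heavy weight on the error term (α = 0.95) means accuracy dominates, while the small β still rewards dropping redundant features when accuracy is tied.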
As can be observed from the table, the GCLPSO has the lowest average error rate on the D4, D5, D6, and D9 datasets. At the same time, on the D2, D7, and D8 datasets, the average error rate of the GCLPSO is 0, which means that the classification accuracy of the algorithm is 100% on these datasets. Over the 12 datasets, the BGWO has the lowest average error rate, and the GCLPSO ranks second. This is because the quality of a feature selection result is determined by two factors, namely, the number of selected features and the error rate: although the average error rate of the GCLPSO over the 12 datasets is higher than that of the BGWO, its average fitness value is superior to that of the BGWO. Table 9 shows the average number of features selected for each dataset after feature selection. On the D1, D4, and D9 datasets, the GCLPSO selects the fewest features.
The GCLPSO also selects the fewest features on average over the 12 datasets. The GCLPSO was tested on the twelve datasets and compared with other feature selection methods to verify its effectiveness. Although the error rate of the GCLPSO is higher than that of the BGWO over the 12 datasets, it obtains the best fitness value, which combines the two factors that determine the feature selection effect; this proves the capability of the proposed algorithm.

Practical Constraint Modeling Problems.
In this part, the GCLPSO was used to solve three mathematical models with constraints: the I-beam, welded beam, and pressure vessel design problems. An appropriate constraint-handling method is needed to solve such problems. Coello Coello [91] described several kinds of penalty functions in detail. The death penalty is the most appropriate type of penalty function here: it keeps the primary objective value of the mathematical model and lets the heuristic algorithm eliminate infeasible solutions automatically. Therefore, the GCLPSO combined with the death penalty was used in this experiment to solve the three well-known mathematical model problems.
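The death-penalty scheme can be sketched as a simple wrapper (illustrative names; any solution violating a constraint g(x) ≤ 0 receives a huge fitness and is thereby discarded by the minimizer):

```python
def death_penalty(obj, constraints, big=1e20):
    """Wrap objective `obj` so that infeasible solutions (any g(x) > 0)
    are assigned a prohibitive fitness and never survive selection."""
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return big
        return obj(x)
    return penalized
```

The wrapped function can be handed directly to any minimizer in place of the raw objective.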

Welded Beam Design Problem.
The purpose of the welded beam design [92] problem is to minimize the manufacturing cost of the model. Four factors influence the constraints on the manufacturing cost: the buckling load (P_c), the end deflection (δ), the bending stress in the beam (θ), and the shear stress (τ). The manufacturing cost is controlled through four optimization parameters: the weld thickness (h), the bar length (l), the bar height (t), and the bar thickness (b). The mathematical model of the welded beam design problem is as follows.

Objective:

min f(h, l, t, b) = 1.10471 h² l + 0.04811 t b (14.0 + l),

subject to constraints on the shear stress τ, the bending stress θ, the buckling load P_c, and the end deflection δ.

Variable ranges:

0.1 ≤ h ≤ 2, 0.1 ≤ l ≤ 10, 0.1 ≤ t ≤ 10, 0.1 ≤ b ≤ 2.

This engineering design model has attracted the attention of many researchers. Kaveh and Khayatazad used RO [93] to process this mathematical model. Lee and Geem [94] used HS to optimize it; as shown in Table 10, the optimal manufacturing cost found by HS was 2.3807. The improved HS (IHS) [95] also optimized the model, obtaining an optimal manufacturing cost of 1.7248. Ragsdell and Phillips [96] solved for the optimal cost with mathematical methods such as Davidon-Fletcher-Powell and the simplex method. The model was optimized with the GCLPSO, and the results were compared with those of the other methods. As shown in Table 10, the minimum manufacturing cost obtained by the GCLPSO is 1.715355: when the four parameters are set to 0.20799, 3.25802, 9.02820, and 0.208064, respectively, the manufacturing cost of the model reaches 1.715355, lower than the results obtained by the other methods. This shows that the proposed GCLPSO can optimize the model and obtain the minimum design and manufacturing cost of the welded beam.
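The cost objective can be checked directly against the values reported in Table 10. A short sketch using the standard welded-beam cost function from the literature (the full constraint set of [92] is omitted here):

```python
def welded_beam_cost(h, l, t, b):
    """Manufacturing cost of the welded beam: weld material cost
    1.10471*h^2*l plus bar material cost 0.04811*t*b*(14 + l)."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# The GCLPSO solution reported in Table 10 reproduces the stated
# minimum cost of about 1.7154.
cost = welded_beam_cost(0.20799, 3.25802, 9.02820, 0.208064)
```

Evaluating the reported parameter vector gives a cost matching 1.715355 to within rounding of the published decimals.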

Pressure Vessel Design Problem.
The purpose of this mathematical model is to minimize the cost of a cylindrical pressure vessel [97]. The manufacturing cost of the model is closely associated with welding, material, and forming. Both ends of the vessel are capped, and the head has a hemispherical shape. The manufacturing cost is reduced by optimizing the shell thickness (T_s), the head thickness (T_h), the inner radius (R), and the length of the cylindrical section without the head (L). The mathematical model is as follows.

Objective:

min f(T_s, T_h, R, L) = 0.6224 T_s R L + 1.7781 T_h R² + 3.1661 T_s² L + 19.84 T_s² R.

Variable ranges:

0 ≤ T_s, T_h ≤ 99, 10 ≤ R, L ≤ 200.

Many researchers have tried to find the optimal value of this model. As shown in Table 11, He and Wang [98] used a particle swarm optimization algorithm to optimize the model, obtaining a cost of 6061.0777. Deb [99] optimized the model with a genetic algorithm, obtaining a cost of 6410.3811. Other scholars used IHS [95], ES [100], and mathematical methods [97,101] to solve for the optimal solution. The optimal value obtained by the GCLPSO is 5989.654: when T_s, T_h, R, and L are set to 0.784508, 0.387656, 40.6289, and 195.8892, respectively, the total cost of the cylindrical pressure vessel is minimized. The GCLPSO and the other methods used to find the optimal design are compared in Table 11, which shows that the GCLPSO provides a better solution for this model.
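For reference, the total-cost objective of this model can be sketched as follows, using the standard formulation associated with [97] (constraint handling is omitted, and the exact constants are taken from the common benchmark definition rather than from this paper):

```python
def pressure_vessel_cost(ts, th, r, l):
    """Total cost of the cylindrical pressure vessel: material, forming,
    and welding terms in the shell thickness ts, head thickness th,
    inner radius r, and cylindrical-section length l."""
    return (0.6224 * ts * r * l
            + 1.7781 * th * r**2
            + 3.1661 * ts**2 * l
            + 19.84 * ts**2 * r)

cost = pressure_vessel_cost(0.784508, 0.387656, 40.6289, 195.8892)
```

In a full run, this objective would be wrapped with the death-penalty handler together with the model's thickness and volume constraints.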

I-Beam Design Problem.
The mathematical model aims to optimize the structure of the I-beam: the design is improved to achieve the minimum vertical deflection.

min f(b, h, t_w, t_f) = 5000 / ( t_w (h − 2 t_f)³ / 12 + b t_f³ / 6 + 2 b t_f ((h − t_f) / 2)² ),          (23)

where b, h, t_w, and t_f denote the flange width, section height, web thickness, and flange thickness, respectively.
Several scholars have used different methods to optimize this model. Wang used ARSM [102] to obtain the minimum vertical deflection and also applied the improved IARSM [102]. Gandomi et al. used CS [103] to solve for the minimum vertical deflection of the model. Cheng and Prayogo used SOS [104] to optimize this problem.
As shown in Table 12, the optimization result of the GCLPSO was compared with those of the earlier methods, and the minimum vertical deflection obtained by the GCLPSO was 0.00662596. This indicates that when the parameters are set to 50, 80, 1.764669, and 5, respectively, the vertical deflection of the I-beam is 0.00662596. As can be seen from the table, the GCLPSO also provides a good solution for this model.
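The deflection objective can be verified numerically. A sketch of the standard I-beam deflection formulation (vertical deflection equals a load factor of 5000 divided by the second moment of area of the I-shaped cross-section; the variable order b, h, t_w, t_f matches the parameter values reported above):

```python
def i_beam_deflection(b, h, tw, tf):
    """Vertical deflection of the I-beam: 5000 divided by the second
    moment of area of the I-shaped cross-section in flange width b,
    section height h, web thickness tw, and flange thickness tf."""
    inertia = (tw * (h - 2 * tf)**3 / 12.0
               + b * tf**3 / 6.0
               + 2 * b * tf * ((h - tf) / 2.0)**2)
    return 5000.0 / inertia

# The parameter set reported for the GCLPSO in Table 12 reproduces the
# stated minimum deflection of about 0.006626.
deflection = i_beam_deflection(50, 80, 1.764669, 5)
```

Evaluating the reported parameters returns the published minimum deflection to within rounding of the printed decimals.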

Conclusion and Future Directions
This paper presents an improved algorithm named GCLPSO, which introduces the elite-based dominance scheme of the GWO into the CLPSO to improve its local search capability. The GCLPSO achieves a more stable balance between global search and local search, which boosts its ability to find the optimal solution. The improved algorithm was compared with seven classical MAs and eight advanced metaheuristic algorithms on the CEC2017 benchmark functions. All experiments were carried out in the same experimental environment, and the results show that the proposed algorithm has distinct advantages over the comparison algorithms on the four types of CEC2017 benchmark functions, demonstrating its strong search ability on the benchmarks. In this paper, the improved algorithm was also binary-transformed and compared with other algorithms on feature selection over 12 datasets, which shows that the algorithm performs well on feature selection. Moreover, the algorithm was applied to three practical engineering design problems: the pressure vessel, I-beam, and welded beam design problems. The optimization results were compared with those obtained by other methods, and the GCLPSO achieved good results in all three engineering problems, showing that the algorithm can also deal with constrained problems effectively.
In future research work, the algorithm can be applied in many directions. For instance, it can be used to optimize machine learning models such as neural networks [105][106][107][108][109][110]. Furthermore, the performance of the algorithm can be improved further, and it can be extended to multiobjective optimization [111]. It is also possible to explore a parallel computing framework based on the proposed algorithm for more complex optimization problems. At the same time, it can be combined with finance, agriculture, and other applications [109,112,113] to explore the best practical application scenarios of the algorithm. Therefore, there are still many aspects to explore in further studies.

Data Availability
The data involved in this study are all public data, which can be downloaded through public channels.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this article.