Gaussian Perturbation Specular Reflection Learning and Golden-Sine-Mechanism-Based Elephant Herding Optimization for Global Optimization Problems

Elephant herding optimization (EHO) has received widespread attention due to its few control parameters and simple operation, but it still suffers from slow convergence and low solution accuracy. In this paper, an improved algorithm that addresses these shortcomings, called Gaussian perturbation specular reflection learning and golden-sine-mechanism-based EHO (SRGS-EHO), is proposed. First, specular reflection learning is introduced into the algorithm to enhance the diversity and ergodicity of the initial population and improve the convergence speed. Meanwhile, Gaussian perturbation is used to further increase the diversity of the initial population. Second, the golden sine mechanism is introduced to improve the way the position of the patriarch in each clan is updated, which makes the best-positioned individual in each generation move toward the global optimum and enhances the global exploration and local exploitation abilities of the algorithm. To evaluate the effectiveness of the proposed algorithm, tests are performed on 23 benchmark functions. In addition, Wilcoxon rank-sum tests and Friedman tests at the 5% significance level are invoked to compare it with eight other metaheuristic algorithms. A sensitivity analysis of the parameters and experiments on the different modifications are also conducted. To further validate the effectiveness of the enhanced algorithm, SRGS-EHO is applied to solve two classic engineering problems with a constrained search space (the pressure-vessel design problem and the tension/compression spring design problem). The results show that the algorithm can be applied to solve problems encountered in real production.


Introduction
Many challenging problems in applied mathematics and practical engineering can be considered as the processes of optimization [1]. Optimization is the process of selecting or determining the best results from a set of limited resources [2]. In general, there exist several explicit decision variables, objective functions, and constraints in optimization problems. In the real world, however, optimization problems vary widely, from single to multiobjective, from continuous to discrete, and from constrained to unconstrained. Optimization algorithms are used to obtain the values of decision variables and optimize the objective function under a certain range of constraints and search domains. If the search domain is compared to a forest, the optimization algorithm needs to find the potential area where the prey can be found. In this way, the optimization problem can be solved easily rather than laboriously.
Optimization algorithms are divided into two categories, namely, exact algorithms and heuristic algorithms. Traditional exact algorithms (e.g., branch-and-bound algorithms and dynamic programming), although capable of giving global optima in finite time, must rely on gradient information, and the runtime of such algorithms grows rapidly with the number of variables [3].
Therefore, it is difficult to achieve good results in the face of the many types of nondifferentiable, noncontinuous, and complex high-dimensional problems in the real world [4]. As research has progressed, the emergence of heuristic algorithms (the local search algorithm, tabu search, the simulated annealing algorithm, etc.) has provided ideas for solving complex problems. Owing to the introduction of the greedy strategy and fixed search steps, the number of iterations of such algorithms is reduced [5]. For NP-hard problems, an approximate but reasonably accurate solution can be given. However, the drawback is that such algorithms are greedy and often fall into local optima when solving complex problems, thus narrowing the scope of their application [6]. The metaheuristic algorithm that emerged later is a higher-level heuristic strategy with a problem-independent algorithmic framework, providing a set of guidelines and strategies for developing heuristic algorithms. In addition, it has fewer control parameters, greater randomness, more flexibility, and simplicity; can effectively handle discrete variables; and is computationally less expensive [7]. Compared with exact methods and heuristic algorithms, metaheuristic algorithms are more applicable to solving complex optimization problems. Owing to these unique advantages, metaheuristics of various versions, such as continuous and binary ones, have been developed to suit continuous and discrete optimization problems. Under this trend, in recent years, metaheuristic algorithms have become more popular among researchers and are widely used in various fields.
Metaheuristic algorithms can be broadly classified into three categories, namely, evolutionary algorithms, physics-based algorithms, and swarm intelligence algorithms. Evolutionary algorithms were first proposed in the early 1970s and were mainly generated by simulating the concept of evolution in nature. Inspired by Darwinian biological evolution, Goldberg and Holland [8] proposed the first evolutionary algorithm, the genetic algorithm (GA), in 1988, which provides a stochastic and efficient method to perform a global search among a large number of candidate solutions. Similar algorithms include the differential evolution algorithm (DE) [9], the biogeography-based optimization algorithm (BBO) [10], and the evolution strategy (ES) [11]. Physics-based algorithms are modeled using physical concepts and laws to update the agents in the search space, mainly including analytical modeling based on the laws of the universe, physical/chemical rules, scientific phenomena, and other mechanisms [12]. For example, to overcome the defects that traditional GA algorithms tend to converge prematurely and have long running times, Hsiao et al. [13] proposed the space gravitational algorithm (SGA), inspired by Einstein's theory of relativity and the laws of motion of asteroids in the universe. Then, Rashedi et al. [14] proposed the gravitational search algorithm (GSA) by analyzing the gravitational interactions in the universe, which received wide attention; it was slightly modified in the literature [15] to be adaptable to industrial applications. Inspired by the idea of the GSA, Flores et al. [16] proposed the gravitational interactions optimization (GIO) algorithm, which mainly modified the nondecreasing constant; its global search capability and local search optimization are stronger than those of the GSA. In 2019, Faramarzi et al. [17] proposed the equilibrium optimizer (EO), which simulates the mass-balance model of physics and reaches the final equilibrium state (the optimal result) by continuously updating the search agents.
Also common are multiverse optimization (MVO) [18], electromagnetic field optimization (EFO) [19], the artificial electric field algorithm for global optimization (AEFA) [20], and the lightning search algorithm (LSA) [21]. Swarm intelligence algorithms focus on artificially reproducing the social behavior and thinking concepts of various groups of organisms in nature so that the intelligence of the swarm surpasses the sum of the individual intelligences. In such algorithms, multiple search agents perform the search process together, sharing location information among themselves and using different operators, dependent on the metaphor of each algorithm, to shift the search agents to new locations [22]. On that basis, the probability of finding the optimal solution can be increased, so that the best solution can be found with low computational complexity. For example, Shi et al. [23, 24] proposed particle swarm optimization (PSO) based on the behavior of biological groups such as fish schools, insect swarms, and bird flocks. By simulating the local interactions between individuals and the environment, a globally optimal solution can be approached. Notably, Liu et al. [25] extended PSO by introducing chaotic sequences; the exploration and exploitation capabilities of the algorithm were effectively balanced by introducing adaptive inertia weights. A novel algorithm, called the classic self-assembly algorithm (CSA), was proposed by Zapata et al. [26]. Using PSO as a navigation mechanism, the search agents were guided to move continuously toward the constructive region. Based on the collective foraging behavior of honeybees, Karaboga [27] proposed artificial bee colony optimization (ABC) in 2005, which is simple and practical and is now one of the most cited next-generation heuristics [28]. However, during the operation of the algorithm, a stagnant state may occur, which tends to make the population fall into a local optimum [29].
In addition, to solve multiobjective problems under complex nonlinear constraints, Yang and Deb [30] replicated the reproductive behavior of the cuckoo and proposed cuckoo search (CS). Owing to the introduction of Levy flights and Levy walks [31], the convergence performance of the algorithm is improved by capturing the behavior of instantaneously moving group members instead of using a simple isotropic random-walk approach. Compared with algorithms such as PSO, the CS algorithm has fewer operating parameters, can satisfy the global convergence requirement, and has been widely used [32]. In the literature [33], a CS variant, called island-based CS with polynomial mutation (iCSPM), was proposed from the perspective of improving population diversity; the island-model strategy and the Levy flight strategy were introduced to enhance the search effectiveness of the algorithm. Furthermore, Yang and Gandomi [34] proposed the bat algorithm (BA), based on the predatory behavior of bats. It aims to solve single-objective and multiobjective optimization problems in continuous domain spaces by simulating the echolocation approach. The slime mould algorithm (SMA), proposed by Li et al. [35] in 2020, is a very competitive algorithm. Precup et al. [36] provided a more understandable version of SMA and introduced it for fuzzy controller tuning, extending the applications of SMA. Moreover, researchers have proposed bacterial foraging optimization (BFO) (Passino) [37], krill herd (KH) (Gandomi and Alavi) [38], the artificial plant optimization algorithm (APO) (Cui and Cai) [39], the grey wolf optimizer (GWO) (Mirjalili et al.) [40], the crisscross optimization algorithm (CSO) (Meng et al.) [41], the whale optimization algorithm (WOA) (Mirjalili and Lewis) [42], the crow search algorithm (CSA) (Askarzadeh) [43], the salp swarm algorithm (SSA) (Mirjalili et al.) [44], Harris hawks optimization (HHO) (Heidari et al.) [45], the sailfish optimizer (SFO) (Shadravan et al.) [46], the manta ray foraging optimization algorithm (MRFO) (Zhao et al.) [47], and bald eagle search (BES) (Alsattar et al.) [48]. In 2016, Wang et al. [49] developed a novel metaheuristic algorithm named elephant herding optimization (EHO) for solving global unconstrained optimization problems by studying the herding behavior of elephants in nature. According to the living habits of elephants, the activity trajectory of each baby elephant is influenced by its maternal lineage. Therefore, in EHO, the clan updating operator is used to update the position of each elephant in a clan relative to the position of the patriarch. Since in each generation male elephants must move away from clan activities, the separating operator is introduced to perform the separation operation. It has been experimentally demonstrated [50] that, for most benchmark problems, the EHO algorithm can achieve better results than the DE, GA, and BBO algorithms. It has thus aroused plenty of research interest owing to its few control parameters, easy implementation, and good global optimization capability for multipeaked problems [51]. Scholars and engineers have promoted EHO in various areas of practical engineering, including wireless sensor networks [52], bioinformatics [53], emotion recognition [54], character recognition [55], and cybersecurity [56].
From the perspective of EHO, although it is a relatively effective optimization tool, there are still some shortcomings, such as the lack of mutation mechanisms, slow convergence, and the tendency to fall into local optima, which limit the algorithm in practical applications. In recent years, researchers have achieved numerous results in overcoming the deficiencies of EHO, and the research can be divided into three aspects. The first is to mix EHO with other algorithms or strategies to improve the performance of the algorithm. For example, Javaid et al. [57] combined EHO with the GA to develop a novel algorithm, GEHO, for smart grid scheduling, which reduces the maximum cost. Wang et al. [50] combined EHO with three different approaches, namely, culture-based EHO, alpha-tuning EHO, and biased-initialization EHO. The three approaches were tested on benchmark functions from CEC 2016 and applied to engineering problems such as gear trains, continuous stirred-tank reactors, and three-bar truss design. Chakraborty et al. [58] proposed the IEHO algorithm, which combines EHO with opposition-based learning (OBL) and dynamic Cauchy mutation (DCM) to accelerate convergence and improve the performance of EHO. The second is to apply a noise interference strategy [59]. To increase the population diversity of the algorithm, noise interference has become a mainstream technique; two of the most representative approaches are Levy flight (LF) and the chaos strategy. Xu et al. [60] proposed a novel algorithm, LFEHO, which combines Levy flight with the EHO algorithm to overcome the defects of poor convergence performance and ease of falling into local optima in the original EHO. Tuba et al. [61] introduced two different chaotic maps into the original EHO for solving unconstrained optimization problems and tested them on the CEC 2015 benchmark functions.
The third is to improve the internal structure of EHO; this part of the research focuses on proposing adaptive operators and stagnation-prevention mechanisms. Li et al. [62] introduced a global speed strategy based on EHO to assign a travel speed to each elephant and achieved good results on CEC 2014. Ismaeel et al. [63] addressed the problem of unreasonable convergence to the origin in EHO by improving the clan updating operator and the separating operator, achieving a balance between exploration and exploitation. Li et al. [64] took an original approach by extracting the previous state information of the population to guide the subsequent search process. Six variants were generated by updating the weights using random numbers and the fitness of the previous agent. The experimental results showed that the quality of the obtained solutions was higher than that of the original algorithm.
Most metaheuristics need to be enhanced because they do not apply well to complex problems, such as intricate scheduling and planning problems, big data analysis, complicated machine learning structures, and arduous modeling and classification problems. Scholars such as Dokeroglu [28] have pointed out that a more fruitful research direction for metaheuristics is to optimize the internal structure of existing metaheuristics rather than to propose new algorithms similar to existing ones. This is one of the motivations why this paper attempts to strengthen an existing metaheuristic algorithm instead of developing a new one. Moreover, the efficiency of a metaheuristic algorithm depends on the balance between the local exploitation ability and the global exploration ability during the iterations [65]. In this regard, exploration means exploring new search spaces, which requires search agents to be more diverse and traversable under the operation of the operators. Exploitation is characterized by the algorithm's ability to extract, from the explored regions, solutions that are more promising in approximating the globally optimal solution; in that stage, search agents play a role in converging quickly toward the optimal solution. To promote the performance of a metaheuristic algorithm, a desirable balance must be struck between these two conflicting properties.
Like a coin having two sides, every developed metaheuristic has advantages and disadvantages. That is exactly why no single algorithm can be applied to all problems. According to the no-free-lunch (NFL) theorem [66], no algorithm can be regarded as a universally optimal optimizer. In other words, the success of an algorithm on a specific set of problems does not carry over to all optimization problems. In addition, the NFL theorem encourages innovations that improve existing optimization algorithms to enhance their performance in use. Given the constant emergence of new optimization problems and the exponential growth in the size and complexity of real-world and engineering design problems, the development and improvement of new optimizers are inevitable. Khanduja and Bhushan [67] provided evidence in their research that hybrid metaheuristic algorithms can obtain better solutions than classical metaheuristic algorithms, which inspired us greatly. From these perspectives, the study of hybrid metaheuristic algorithms has strong practical significance and value. Therefore, in this paper, the plan is to mix the EHO algorithm with other algorithmic mechanisms to exploit the advantages of each for collaborative search and to effectively improve the optimization performance.
Aiming to effectively achieve a balance between the exploration and exploitation capabilities, a Gaussian perturbation specular reflection learning and golden-sine-mechanism-based elephant herding optimization for global optimization problems, called SRGS-EHO, is proposed in the present paper. The main contributions are summarized as follows: (i) First, the poor diversity and traversal of randomly generated initial populations affect the convergence performance of an algorithm. In this paper, the specular reflection learning strategy is used to generate high-quality initial populations. Moreover, Gaussian perturbation is added as a mutation to further enhance the diversity of the initial population. (ii) Furthermore, to improve the global optimization capability, the golden sine mechanism is introduced to update the position of the clan leader, preventing the population from falling into local optima. At the same time, the leader is made to move toward the global optimum, obtaining a balance between exploitation and exploration. (iii) Additionally, to fully verify the effectiveness of SRGS-EHO, 23 common benchmark functions are selected as tests, and the Wilcoxon rank-sum test and Friedman test are invoked. Compared with eight other recognized metaheuristics, the performance of SRGS-EHO in terms of accuracy, convergence, and statistics is comprehensively evaluated. In addition, a sensitivity analysis of the parameters and experiments on the different modifications are conducted; the aim is to analyze the impact of different parameters and modules on the performance of the algorithm. (iv) Finally, SRGS-EHO is applied to solve two practical engineering design problems (the pressure-vessel design and tension/compression spring design problems), and the results are compared with those achieved using other algorithms. Experiments are conducted to test the feasibility and applicability of the proposed algorithm for solving real-world problems.
The rest of this paper is organized as follows: in Section 2, the principle of EHO is briefly introduced. A detailed introduction to the proposed SRGS-EHO method is given in Section 3. The experiments conducted are described in Section 4, which introduces the simulation procedure and the analysis. In Section 5, the experiments and analysis of SRGS-EHO for solving practical engineering problems are presented. Finally, conclusions and future work are presented in Section 6.

Elephant Herding Optimization (EHO)
Elephants are herd-dwelling creatures, and a herd usually consists of several clans. Each clan is headed by a female elephant. Male elephants, however, undertake the task of defending the clan and usually operate outside it. In EHO, each clan contains an equal number of agents. According to the algorithm, the clan leader (the patriarch) is identified as the individual with the best position. Depending on its relationship with the female clan leader, the position of each other agent is modified by the updating operator. Meanwhile, in each generation, a fixed number of male elephants are set to leave the clan, and these elephants are modeled by the separating operator. In general, the EHO algorithm is divided into the initialization operation, the clan updating operation, and the separating operation.

Initialization Operation.
Assume that there are N elephants in the D-dimensional search space; the k-th agent in the population can be represented as X_k = (x_k,1, x_k,2, ..., x_k,D). Therefore, the initialized population is generated as shown in the following equation:

x_k,m(0) = l_m + (u_m - l_m) x rand, (1)

where m stands for the dimension index, 1 <= m <= D, u_m and l_m are the upper and lower bounds of the m-th dimension, and rand is a random number uniformly distributed in [0, 1]. Then, the initial population can be expressed as X(0) = {X_1(0), X_2(0), ..., X_N(0)}. Next, the entire initial population must be divided into the preset clans.
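The random initialization and clan partitioning described above can be sketched in a few lines of NumPy; the function and variable names here are ours, not the paper's, and are only illustrative.

```python
import numpy as np

def initialize_population(n_agents, dim, lower, upper, seed=None):
    """Random initialization: x_{k,m} = l_m + (u_m - l_m) * rand."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    return lower + (upper - lower) * rng.random((n_agents, dim))

def split_into_clans(population, n_clans):
    """Divide the population evenly into the preset number of clans."""
    return np.array_split(population, n_clans)
```

For example, `split_into_clans(initialize_population(20, 5, -10, 10), 4)` yields four clans of five agents each, every coordinate lying within its bounds.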

Clan Updating Operator.
At this stage, the position of each individual elephant is updated according to its positional relationship with the patriarch, as follows:

x_new,ci,j = x_ci,j + alpha x (x_best,ci - x_ci,j) x r, (2)

where x_new,ci,j indicates the updated position of the agent, x_ci,j represents the current location of the j-th agent in clan ci, and x_best,ci is the position of the current best agent in clan ci. The scale factor alpha is in [0, 1], and r is a random number in [0, 1]. Through this operation, the diversity of the population can be enhanced. When x_ci,j = x_best,ci, the patriarch of the clan cannot be updated by equation (2). To avoid this situation, its update is changed to the following equation:

x_new,ci,j = beta x x_center,ci. (3)

The scale factor beta in [0, 1] determines the extent to which x_center,ci acts on x_new,ci,j. x_center,ci is the centre of clan ci, which is calculated from the positions of all its agents. For the d-th dimension, x_center,ci is given as

x_center,ci,d = (1/n_ci) x sum_{j=1}^{n_ci} x_ci,j,d, (4)

where d is the dimension index, D represents the total dimensionality, 1 <= d <= D, n_ci represents the number of agents in clan ci, and x_ci,j,d represents the d-th dimension of the j-th agent in clan ci.
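The clan-updating operator can be sketched as below. For simplicity of the sketch we assume the patriarch is identified by comparing positions with a known `x_best` row; the parameter values are illustrative, not the paper's settings.

```python
import numpy as np

def clan_updating(clan, alpha=0.5, beta=0.1, rng=None):
    """One clan-updating step: ordinary agents move toward the patriarch,
    x_new = x + alpha * (x_best - x) * r, while the patriarch itself is
    replaced by beta times the clan centre. Row 0 is assumed (in this
    sketch) to hold the patriarch x_best,ci."""
    rng = rng or np.random.default_rng()
    n_ci, dim = clan.shape
    x_best = clan[0]
    x_center = clan.mean(axis=0)  # clan centre, dimension-wise average
    new_clan = np.empty_like(clan)
    for j in range(n_ci):
        if np.array_equal(clan[j], x_best):
            new_clan[j] = beta * x_center  # patriarch update
        else:
            r = rng.random(dim)
            new_clan[j] = clan[j] + alpha * (x_best - clan[j]) * r
    return new_clan
```

Note that with `beta < 1` the patriarch is pulled toward a scaled clan centre, which is exactly the behavior later criticized in Section 3 and replaced by the golden sine move.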

Separating Operator.
In EHO, a certain number of adult male elephants leave the clan. The separating operator acts on the elephant with the worst fitness in each clan, which is expressed as follows:

x_worst,ci = x_min + (x_max - x_min + 1) x rand, (5)

where x_max and x_min are the upper and lower bounds of the agents in the population, respectively, x_worst,ci denotes the worst agent in clan ci, and rand in [0, 1] is a uniformly distributed random number.
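A sketch of the separating operator follows, assuming a minimization problem so that the largest fitness value marks the worst agent. Note the `+ 1` term of the original EHO formula can push the new position slightly beyond `x_max`; practical implementations often clip afterwards.

```python
import numpy as np

def separating(clan, fitness, x_min, x_max, rng=None):
    """Replace the worst agent of a clan with a randomly regenerated one:
    x_worst = x_min + (x_max - x_min + 1) * rand."""
    rng = rng or np.random.default_rng()
    worst = int(np.argmax(fitness))  # minimization: largest value is worst
    clan = clan.copy()
    clan[worst] = x_min + (x_max - x_min + 1) * rng.random(clan.shape[1])
    return clan
```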

Proposed Algorithm
3.1. Motivations. EHO was proposed in 2016 with excellent global optimization capabilities, few control parameters, and ease of implementation, and its performance was verified in the original paper. Nevertheless, it can be observed that the original EHO suffers from the following deficiencies. First, the initialization of the original algorithm is completed randomly, which makes it difficult to guarantee diversity and traversal. Therefore, the algorithm may be unable to converge to the best solution while its runtime increases. Second, during the iterations, the position of the patriarch is determined by all the agents in the clan, which may break the balance between global exploration and local exploitation. Meanwhile, the algorithm easily falls into local optima when dealing with complex problems; once the population has stalled, the algorithm converges prematurely. The clan leader, being the best-positioned agent in the clan, should have a stronger exploration ability. The above issues make EHO perform poorly when dealing with more complex problems. The efficiency of metaheuristic algorithms depends mainly on striking the right balance between the global exploration and local exploitation phases. Exploration is the process of exploring new search spaces, requiring search agents to be more diverse and traversable under the operation of the operators. Exploitation is characterized by the algorithm's ability to extract, from the explored regions, solutions that are more promising in approximating the global optimum; therefore, search agents are desired to converge quickly toward the optimal solution. To improve the performance of a metaheuristic algorithm, a desirable balance must be struck between these two conflicting properties. If the balance is broken, the algorithm will fall into a local optimum and fail to obtain the globally optimal solution.
To deal with these problems, improvements are made in two aspects in this paper. First, specular reflection learning is introduced to update the initialization scheme.
Subsequently, Gaussian perturbation is introduced to further enhance the population diversity. Second, the golden sine mechanism is presented to modify the position of the patriarch of each clan at every generation, making it converge continuously toward the global optimum and improving the convergence performance by balancing the local exploitation and global exploration abilities. With these modifications, the aim is, on the one hand, to increase the population diversity and promote convergence efficiency and, on the other hand, to strengthen the exploration and exploitation capabilities and establish a balance between the two phases.

Gaussian Perturbation-Based Specular Reflection Learning for Initializing Populations.
In metaheuristic algorithms, the diversity of the initial population can significantly affect the convergence speed and solution accuracy of intelligent algorithms [68]. However, in EHO, the lack of a priori information about the search space means the initial population tends to be generated by random initialization, which imposes some limitations on the update strategy of the search agents. The reason is that, supposing the optimal solution appears at the opposite position of the randomly generated individuals, the direction of the population's advance will deviate from the optimal solution. It has been demonstrated that solutions generated by the specular reflection learning (SRL) strategy are better than those generated using only random approaches. Therefore, in this paper, specular reflection learning is introduced for population initialization, and Gaussian perturbation is added: the opposite values of the initial population in the search space are computed, and a mutation operation is performed on the resulting agents. Then, the fitness values of the opposite individuals are compared with those of the original individuals to filter out the better ones for retention.
Opposition-based learning (OBL) [69] is widely used to improve metaheuristic algorithms due to its excellent performance. In OBL, a candidate solution and its opposite position are examined simultaneously to speed up convergence. The opposite point of x in the range [lb, ub] is defined as follows:

x_op = lb + ub - x. (6)

Inspired by the phenomenon of specular reflection, Zhang [70] proposed specular reflection learning (SRL). In physics, there is an obvious correspondence between incident light and reflected light, as shown in Figure 1(a). Based on this phenomenon, the current solution and the reverse solution can be modeled in the way shown in Figure 1(b). Under this circumstance, it can be deduced that there is some correspondence between a solution and one of the neighbors of its opposite solution. Supposing both solutions are examined simultaneously, a better solution can be obtained. It has been demonstrated that the solutions generated by the SRL strategy are better than those of OBL [71]. Therefore, in this paper, specular reflection learning is introduced for population initialization. Besides, Gaussian perturbation is added to perform mutation operations on the generated agents. According to the resulting fitness values, the better N individuals are retained to form the initial population.
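The OBL opposite point of equation (6) is a one-liner; this tiny helper just makes the definition concrete.

```python
def opposite_point(x, lb, ub):
    """Opposition-based learning: the opposite of x in the range [lb, ub]."""
    return lb + ub - x
```

For instance, the opposite of 2 in [0, 10] is 8, and the opposite of -3 in [-5, 5] is 3.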
Suppose a point X = (a, 0) exists on the horizontal plane and its opposite point is X' = (b, 0), with X, X' in [X_l, X_u]. When light is incident, the angles of incidence and reflection are alpha and beta, respectively, and the heights of the incident and reflected rays above X and X' are A_0 and B_0. O = (x_0, 0) is the midpoint of [X_l, X_u]. According to the law of reflection (alpha = beta), the following correspondence can be obtained:

(x_0 - a)/A_0 = (b - x_0)/B_0. (7)

When B_0 = mu x A_0, equation (7) can be represented as

b = x_0 + mu x (x_0 - a), (8)

where mu is the preset scale factor; substituting the midpoint x_0 = (X_l + X_u)/2, b is represented as

b = (1 + mu) x (X_l + X_u)/2 - mu x a. (9)

It can be observed that, as mu changes, all values in [X_l, X_u] can be traversed by b. Therefore, this relation can be used to initialize the population and enhance the diversity and traversal of the initial population. Let

x_p = (1 + mu) x (lb + ub)/2 - mu x x. (10)

According to the basic specular reflection model, the opposite point in equation (6) can then be redefined by its components:

x_p,i,j = (1 + mu) x (lb_j + ub_j)/2 - mu x x_i,j. (11)

It is worth noting that the scale factor mu in equation (10) is set to a random number within [0, 1] for convenience of operation. After that, x_i and x_p,i must be merged to form a candidate set of size 2N, {x_i, x_p,i}. Next, the fitness of this set must be calculated, and the N agents with the best fitness values are selected as the initial population.
SRL can be seen as a special case of opposition-based learning in which both the current solution and the reverse solution are examined in order to select the better one, which provides more opportunities for discovering the globally optimal solution. It is well known that the diversity of a population has a significant impact on metaheuristic algorithms [72]. The reason is that increased diversity makes it more practical for the population to explore a larger search area and therefore promotes moving away from local optima. From this perspective, two aspects constrain the increase of initial population diversity in SRL. First, the method does not adjust well in small spaces. Second, the SRL mapping is relatively fixed. Therefore, Gaussian perturbation is introduced in the present work to perform mutation operations after generating the reverse solution:

x_m,i = x_p,i x (1 + k x randn(1)), (12)

where x_p,i is the current inverse solution, x_m,i is the newly generated inverse solution, k is the weight parameter (set to 1 in this paper), and randn(1) generates a 1 x 1 matrix (i.e., a scalar) drawn from the standard Gaussian distribution with mean 0 and variance 1. Then, the elite solutions are selected as the initialized population by merging the original agents with the mutated inverse agents and retaining the N agents with the best fitness values, where x_i denotes the finally generated i-th initialized agent, i in [1, N], and the finally generated initialized population is X(0) = {x_1, x_2, ..., x_N}.
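The whole SRL-plus-Gaussian-perturbation initialization can be sketched as follows. Several details are assumptions of this sketch rather than the paper's exact settings: the scale factor mu is drawn independently per dimension, the Gaussian mutation is applied in the multiplicative form x_p * (1 + k * randn), and out-of-bound coordinates are clipped back into the search range.

```python
import numpy as np

def srl_gaussian_init(n_agents, dim, lb, ub, fitness_fn, k=1.0, seed=None):
    """Generate N random agents, build their specular-reflection opposites
    b = (1 + mu) * (lb + ub) / 2 - mu * a with random mu in [0, 1], mutate
    the opposites with a Gaussian perturbation, then keep the best N of
    the resulting 2N candidates as the initial population."""
    rng = np.random.default_rng(seed)
    x = lb + (ub - lb) * rng.random((n_agents, dim))
    mu = rng.random((n_agents, dim))
    x_op = (1 + mu) * (lb + ub) / 2 - mu * x          # SRL opposite points
    x_op *= 1 + k * rng.standard_normal((n_agents, dim))  # Gaussian mutation
    x_op = np.clip(x_op, lb, ub)                      # stay inside the bounds
    pool = np.vstack([x, x_op])                       # 2N candidates
    fit = np.apply_along_axis(fitness_fn, 1, pool)
    return pool[np.argsort(fit)[:n_agents]]           # elite N agents
```

Because the best N of 2N candidates are kept, the initial best fitness can never be worse than that of plain random initialization on the same draws.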

Golden Sine Mechanism.
The golden sine algorithm [73] is a novel metaheuristic algorithm proposed by Tanyildizi in 2017, the design of which was inspired by the sine function in mathematics; its agents search the solution space to approximate the optimal solution according to the golden ratio. The sine curve has the range [-1, 1] and the period 2*pi and has a special correspondence with the unit circle, as shown in Figure 2. When the value of the independent variable x_1 of the sine function changes, the corresponding dependent variable y_1 also changes. In other words, traversing all the values of the sine function is equivalent to searching all the points on the unit circle. By introducing the golden ratio, the search space is continuously reduced, and the search is conducted in the region with more hope of producing the optimal value, so as to improve the convergence efficiency. The solution process is shown in Figure 3.
When the clan updating operation is completed, the individual agent with the best fitness is selected and its position is updated using the golden sine mechanism:

x_new,ci = x_p^t x |sin(r_1)| + r_2 x sin(r_1) x |m_1 x x_best,ci - m_2 x x_p^t|, (13)

where x_p^t is the current position of the patriarch of clan ci, x_best,ci represents the global best individual, r_1 is a random number in [0, 2*pi], r_2 is a random number in [0, pi], and m_1 and m_2 are the coefficient factors obtained by the golden section:

m_1 = a x tau + b x (1 - tau), (14)

m_2 = a x (1 - tau) + b x tau, (15)

where a and b are the initial values of the golden section interval, which can be adjusted according to the actual problem, and tau represents the golden ratio, tau = (sqrt(5) - 1)/2. Next, the obtained agents must be compared with the global optimal solution, and the coefficient factors m_1 and m_2 must be updated according to the comparison results.
When f(x_new,ci) < f(x_best,ci), the search interval shrinks toward the promising end: b = m_2, m_2 = m_1, and m_1 is recomputed by equation (14); otherwise, a = m_1, m_1 = m_2, and m_2 is recomputed by equation (15). Supposing m_1 = m_2, the interval is reset to its initial values a and b, and the coefficients are recomputed by equations (14) and (15). The strategy of determining the clan leader's position by the average position is thus replaced by this renewed position update strategy, which performs exploration with strong directionality. As a result, the agent with the best fitness value continuously approaches the optimal solution, obtaining a better solution in each iteration and reaching a balance between global exploration and local exploitation.
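A minimal sketch of one golden sine update, assuming the standard golden-sine form of the position equation and the golden-section coefficients; the function names and the interval-update order are our illustrative reading, not the authors' code.

```python
import math
import random

TAU = (math.sqrt(5) - 1) / 2  # golden ratio tau, approx. 0.618

def golden_coeffs(a, b):
    # m1 and m2 divide the interval [a, b] according to the golden section.
    return a * TAU + b * (1 - TAU), a * (1 - TAU) + b * TAU

def golden_sine_step(x, x_best, m1, m2):
    """One golden-sine position update for the current best agent (sketch)."""
    r1 = random.uniform(0.0, 2.0 * math.pi)
    r2 = random.uniform(0.0, math.pi)
    return [xi * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(m1 * bi - m2 * xi)
            for xi, bi in zip(x, x_best)]

# One step, followed by the interval update used after a successful move
# (f(x_new) < f(x_best)): the interval shrinks and m1, m2 are recomputed.
a, b = -math.pi, math.pi
m1, m2 = golden_coeffs(a, b)
x_new = golden_sine_step([1.0, 2.0], [0.0, 0.0], m1, m2)
b = m2
m2 = m1
m1 = a * TAU + b * (1 - TAU)
```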

The Workflow of SRGS-EHO.
The pseudocode of SRGS-EHO is given in Algorithm 1. The algorithm starts from an initialization based on SRL and further enhances the diversity of the population through Gaussian perturbation. Next, the golden sine mechanism is introduced to optimize the position of the patriarch in each clan. The positions of the agents are evaluated by comparing fitness, and iteration continues until the maximum number of iterations is reached. The flowchart of SRGS-EHO is shown in Figure 4.

Experimental Results and Discussion
To verify the effectiveness of SRGS-EHO in solving global optimization problems, experiments are conducted on 23 benchmark functions. At the same time, eight other metaheuristic algorithms are selected for comparison, namely, the aforementioned EHO [49], WOA [42], EO [17], HHO [45], CSO [41], GWO [40], SFO [46], and IEHO [58]. To make the experiments fair, each algorithm is run 30 times independently on each benchmark function to ensure stability. To better reflect the differences in performance between the algorithms, the nonparametric Friedman test [74] and the Wilcoxon rank-sum test [75] are employed.

Experimental Parameter Settings.
To make the experiments more credible, the parameter values reported in the original papers or widely used in other studies are adopted for the respective algorithms, as shown in Table 2. The parameter settings are kept consistent except for those listed in the table.

Scalability Analysis.
Since dimensionality is also a significant factor affecting the accuracy of optimization, F1–F13 are extended from 30 to 100 dimensions to verify the solving ability of the algorithms in different dimensions. When the runs are completed, the results of each algorithm are evaluated.

Algorithm 1: Pseudocode of SRGS-EHO.
Input: the maximum number of iterations t_max, the initial values a and b of the golden-section search, and the population size N.
Begin
Initialize the populations x_i and x_pi by equations (10) and (11)
Calculate the fitness of every initialized agent and select the N best solutions based on equation (12)
Divide the initial population into n_c clans
While t < t_max do
For ci = 1 : n_c do
For j = 1 : n_j do
Generate x_new,ci,j and update x_ci,j based on equation (2)
If x_ci,j = x_best,ci then
Generate x_new,ci,j and update x_ci,j based on equation (13)
End if
End for
End for
Apply the separating operator to replace the worst agent in each clan
Evaluate the fitness of all agents and update x_best,ci
t = t + 1
End while
End
Output: the global best solution

To make the experiments more convincing, the evaluation indexes are chosen as the mean (Ave) and the standard deviation (Std). The mean reflects the solution accuracy and quality of the algorithm, and Std reflects its stability. When solving a minimization problem, the smaller the mean value, the better the performance of the algorithm; similarly, the smaller the standard deviation, the more stable the algorithm. In addition, the maximum number of iterations t_max for all algorithms is set to 500 and the overall population size N is set to 30. Table 3 shows the experimental results when d = 30. As can be seen from the data, SRGS-EHO obtains the best solution on five of the seven unimodal functions (F1–F7). It is noteworthy that SRGS-EHO achieves a more significant advantage over the other algorithms on F1–F4. This is due to the introduction of the golden sine mechanism, which increases the local search ability of the algorithm and thus enhances its exploitation capability. On the multimodal functions (F8–F23), SRGS-EHO achieves the best results on F8–F11, F17, and F21–F23 and the best mean value on F14. All of the results obtained by SRGS-EHO are better than those obtained by the original EHO. This indicates that the global capability of the algorithm has been boosted after introducing SRL and updating the clan-updating operator.
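The pseudocode above can be sketched compactly in Python. This is an illustration under stated assumptions, not the authors' implementation: the clan-updating and separating operators follow the standard EHO rules, the clan best is moved by the golden sine mechanism, and the scaling factor alpha and all names are ours.

```python
import math
import random

TAU = (math.sqrt(5) - 1) / 2  # golden ratio

def srgs_eho(fitness, dim, lb, ub, n_agents=30, n_clans=5,
             t_max=200, alpha=0.5, a=-math.pi, b=math.pi):
    """Compact sketch of the SRGS-EHO main loop."""
    def clip(x):
        return [min(max(v, lb), ub) for v in x]

    # SRL + Gaussian-perturbation initialization with elite selection.
    pool = []
    for _ in range(n_agents):
        x = [random.uniform(lb, ub) for _ in range(dim)]
        x_m = clip([lb + ub - v + random.gauss(0.0, 1.0) for v in x])
        pool.extend([x, x_m])
    pool.sort(key=fitness)
    agents = pool[:n_agents]

    clans = [agents[i::n_clans] for i in range(n_clans)]
    m1, m2 = a * TAU + b * (1 - TAU), a * (1 - TAU) + b * TAU
    g_best = min(agents, key=fitness)

    for _ in range(t_max):
        for clan in clans:
            best = min(clan, key=fitness)
            for idx, x in enumerate(clan):
                if x is best:
                    # Golden sine update for the clan best.
                    r1 = random.uniform(0.0, 2.0 * math.pi)
                    r2 = random.uniform(0.0, math.pi)
                    cand = [xi * abs(math.sin(r1))
                            - r2 * math.sin(r1) * abs(m1 * gi - m2 * xi)
                            for xi, gi in zip(x, g_best)]
                else:
                    # Standard EHO clan-updating operator.
                    cand = [xi + alpha * (bi - xi) * random.random()
                            for xi, bi in zip(x, best)]
                cand = clip(cand)
                if fitness(cand) < fitness(x):  # greedy acceptance
                    clan[idx] = cand
            # Separating operator: replace the worst agent in the clan.
            worst = max(range(len(clan)), key=lambda i: fitness(clan[i]))
            clan[worst] = [random.uniform(lb, ub) for _ in range(dim)]
        g_best = min((min(c, key=fitness) for c in clans), key=fitness)
    return g_best

best = srgs_eho(lambda x: sum(v * v for v in x), dim=3, lb=-5.0, ub=5.0)
```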
In addition, the performance on the multimodal functions with fixed dimensions shows that the algorithm achieves a strong balance between exploitation and exploration. Tables 4 and 5 show the results when the dimension is increased to 50 and 100, respectively. The data in the tables indicate that the difficulty of obtaining optimal solutions rises as the size of the problem increases. It can be seen from Table 4 that SRGS-EHO achieves the optimal solutions on F1–F4 and F7–F11. When d = 100, SRGS-EHO still achieves the best results on nine of the 13 benchmark functions. Combining the results from the two tables, it can be noted that the performance of SRGS-EHO does not degrade, proving that SRGS-EHO adapts well to high-dimensional problems. This indicates that the introduced Gaussian perturbation-based SRL can effectively enhance the population diversity. Moreover, the clan positions are updated by the golden sine mechanism to continuously approach the global optimum, which effectively balances early exploration and later exploitation.

The convergence curves show that SRGS-EHO obtains better solutions on functions such as F21–F23 and can maintain a better convergence rate. Compared with the original EHO, the convergence performance of SRGS-EHO is significantly improved. The modifications of the initialized population and the strategy of introducing the golden sine mechanism are thus proved to be effective. The experimental results indicate that both the optimization ability and the convergence performance of SRGS-EHO are enhanced.

Statistical Tests.
Garcia et al. [76] pointed out that, when evaluating the performance of metaheuristic algorithms, comparisons based only on the mean and standard deviation are not sufficient. Moreover, inevitable chance factors affect the experimental results during the iteration process [77]. Therefore, statistical tests are necessary to reflect the superiority of the proposed algorithm and its differences from the other algorithms [78]. In this paper, the Wilcoxon rank-sum test and the Friedman test are chosen to compare the performance of the algorithms. As before, the maximum number of iterations t_max of all algorithms is set to 500 and the overall population size N is set to 30. Other parameters are set as in Section 4.2. As usual, F1–F13 are extended from 30 to 100 dimensions.
To make the experiment more convincing, the Friedman test is performed to screen the differences between the proposed SRGS-EHO and the other algorithms. As one of the most well-known and widely used statistical tests, the Friedman test detects significant differences between the results of two or more algorithms on continuous data [79]. Specifically, it can be used for multiple comparisons between different algorithms by calculating the statistic

F_f = (12N / (k(k + 1))) · [Σ_j R_j² − k(k + 1)²/4],

where k is the number of algorithms involved in the comparison, j is the index of an algorithm, N is the number of test cases or runs, and R_j is the average ranking of algorithm j.
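The statistic above can be computed directly from a table of per-case ranks; the following sketch (with our own function name and a toy rank table) implements it.

```python
def friedman_statistic(rank_table):
    """Friedman chi-square statistic for k algorithms over N test cases.

    rank_table[i][j] holds the rank (1 = best, ..., k = worst) of
    algorithm j on test case i; R_j is algorithm j's average rank.
    """
    n = len(rank_table)                       # N: number of test cases
    k = len(rank_table[0])                    # k: number of algorithms
    avg_ranks = [sum(row[j] for row in rank_table) / n for j in range(k)]
    chi2 = (12.0 * n / (k * (k + 1))) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4.0)
    return chi2, avg_ranks

# Toy example: algorithm 0 always ranks first, 1 second, 2 third.
chi2, ranks = friedman_statistic([[1, 2, 3]] * 4)
```

With perfectly consistent rankings over four cases, the average ranks are [1, 2, 3] and the statistic evaluates to 8.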
The experimental results of the Friedman tests are shown in Table 8. According to the Friedman test, the algorithm with the lowest ranking is considered the most efficient. From the results in the table, the proposed SRGS-EHO is always ranked first in the different cases (d = 30, 50, 100). Compared with the other metaheuristics, SRGS-EHO therefore has a greater competitive advantage.

Sensitivity Analysis of Parameters.
In the sensitivity analysis, the initial values a and b of the golden-section search in equations (14) and (15), the maximum number of iterations t_max, and the size of the population N are set to different values. In this experiment, N is set to 5, 20, and 50; t_max is set to 100 and 500; and (a, b) is set to two different pairs of values, [−π, π] and [0, 1]. Twelve variants of SRGS-EHO are created, each representing a combination of different parameters, as shown in Table 9. It should be noted that these parameters can be adapted to the actual problem. After each variant algorithm is run 30 times, the results of the Friedman test are reported in Table 10.
Analysis of the data in the table shows that the quality of the obtained solutions varies as the parameters are changed. By comparison, the variant SRGS-EHO6, with N = 50, t_max = 500, a = −π, and b = π, outperforms the other variants and achieves the highest ranking.

Experiments on the Different Modifications.
To verify the effectiveness of each modification, the enhanced variants listed in Table 11 are compared. Six representative benchmark functions are selected: F1, F5, F10, F14, F15, and F17. The size of the population N is set to 30 and the maximum number of iterations t_max is 500. Figure 6 shows the convergence curves of the four algorithms. It can be seen that the convergence rate of SR-GM is generally higher than that of EHO, owing to the optimized initialization method using Gaussian perturbation-based specular reflection learning. The introduction of the golden sine operator gives GSO a very significant improvement in search accuracy and breadth. By combining the two strategies, the convergence rate and the search accuracy of SRGS-EHO are promoted simultaneously. The mean values and Friedman ranking results obtained by the different combinations of strategies are shown in Table 12, where the bold values indicate the best solutions obtained on the current benchmark functions. According to these results, all three enhanced versions outperform the original algorithm. Both SR-GM + EHO and GSO + EHO outperform EHO on five functions. On the one hand, SR-GM + EHO and GSO + EHO achieve different improvements in the accuracy and breadth of the search compared to EHO. On the other hand, the performance of SRGS-EHO is enhanced comprehensively by the effective combination of SR-GM and GSO. This verification shows that the modifications of EHO are effective. Meanwhile, SRGS-EHO is determined to be the final optimized version.

Applications of SRGS-EHO for Solving Engineering Problems

The applicability of SRGS-EHO is further tested by solving engineering design problems, and the results are described here. In this paper, two constrained practical engineering test problems, namely, the pressure-vessel design and tension-/compression-string design problems, are selected, and the results obtained by SRGS-EHO are compared with those of other algorithms to highlight its superiority.
It is noteworthy that these two cases include inequality constraints. Consequently, constraint-handling methods must be used in SRGS-EHO and the other compared methods. Constraint-handling methods fall into five categories, namely, penalty function methods, hybrid methods, separation of the objective function and constraints, repair algorithms, and special operators [80]. Among penalty functions, there exist different types, including static, annealing, adaptive, coevolutionary, and death penalties. Of these, the death penalty is a popular and very simple constraint-handling method. In this approach, search agents that violate any constraint at any level receive the same penalty, i.e., they are assigned a poor fitness value. The approach does not require any modification of the original algorithm: the constraints are added to the fitness function and are handled efficiently by most optimization algorithms. Therefore, in this study, SRGS-EHO is merged with the death-penalty approach for solving constrained engineering problems. It is worth noting that the objective of solving real engineering problems is to provide the global optimal solution at the lowest possible cost. Based on this consideration, in this section, each compared algorithm is run 10 times, and the best combination of variables and the best fitness value obtained are selected as the final comparison results.

The pressure-vessel design problem [81] is a common engineering design problem first proposed by Kannan and Kramer in 1994, and it is shown in Figure 7. The objective of this optimization problem is to minimize the manufacturing cost of the pressure vessel. Four variables are involved: the thickness of the shell T_s, the thickness of the head T_h, and the inner radius R and length L of the cylinder. The first two variables are discrete. In addition, the problem contains four constraints, of which three are linear and one is nonlinear.
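The death-penalty scheme described above can be expressed as a simple wrapper around the objective function; the following sketch (function names ours) shows why no change to the optimizer itself is needed.

```python
def death_penalty(objective, constraints, penalty=1e20):
    """Wrap an objective with the death-penalty scheme (sketch).

    Any agent violating a constraint (g(x) > 0) receives the same very
    poor fitness value; each constraint g is assumed to be feasible
    when g(x) <= 0.
    """
    def fitness(x):
        if any(g(x) > 0.0 for g in constraints):
            return penalty
        return objective(x)
    return fitness

# Toy usage: minimize x0 + x1 subject to x0 >= 1, i.e., 1 - x0 <= 0.
f = death_penalty(lambda x: x[0] + x[1], [lambda x: 1.0 - x[0]])
```

Any minimizer can now treat f as an ordinary unconstrained objective, since infeasible points simply look uniformly terrible.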
The mathematical form of the problem is expressed as follows:

min f(T_s, T_h, R, L) = 0.6224·T_s·R·L + 1.7781·T_h·R² + 3.1661·T_s²·L + 19.84·T_s²·R,
subject to
g1 = −T_s + 0.0193·R ≤ 0,
g2 = −T_h + 0.00954·R ≤ 0,
g3 = −π·R²·L − (4/3)·π·R³ + 1,296,000 ≤ 0,
g4 = L − 240 ≤ 0.

SRGS-EHO is applied to optimize the problem and is compared with nine other algorithms, namely, EO [17], WOA [42], HHO [45], DE [9], evolution strategies (ESs) [82], PSO [23], the opposition-based sine cosine algorithm (OBSCA) [83], the improved sine cosine algorithm (ISCA) [22], and the enhanced whale optimization algorithm (EWOA) [84]. The obtained results are shown in Table 13. According to the results, SRGS-EHO obtains the best solution among all compared algorithms. The four variables are optimized to 0.850468, 0.420387, 44.065679, and 153.694517, and the optimum cost obtained is 6020.753071.
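Under the standard formulation above, the reported design can be checked numerically; the function names are ours, and the variable values are those reported in Table 13.

```python
import math

def pv_cost(x):
    """Manufacturing cost of the pressure vessel (standard formulation)."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def pv_constraints(x):
    """Constraint values g1..g4; each must be <= 0 for feasibility."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,
        -th + 0.00954 * r,
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,
    ]

# Best design reported in Table 13; its cost is approx. 6020.75.
best = [0.850468, 0.420387, 44.065679, 153.694517]
cost = pv_cost(best)
```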

For the tension-/compression-string design problem, as can be observed from Table 14, the optimum weight obtained by SRGS-EHO is 0.012044 when d, D, and N are optimized to 0.061414, 0.638027, and 3.004913, respectively. This indicates that SRGS-EHO has a superior global optimization capability compared with the other algorithms.

Conclusions and Future Work
In this paper, Gaussian perturbation-based specular reflection learning and the golden sine mechanism are introduced to address the shortcomings of the original EHO. With the proposed method, the population initialization method and the clan-leader position update strategy are optimized, which makes exploration and exploitation more efficient and enhances the performance of the algorithm. Experiments on 23 benchmark functions show that the proposed SRGS-EHO achieves excellent optimization accuracy and stability compared with other metaheuristic algorithms, while the convergence rate is also improved. In addition, SRGS-EHO is applied to solve real-world engineering design problems, namely, the pressure-vessel design and tension-/compression-string design problems. The comparisons with other algorithms indicate that SRGS-EHO is both superior and widely applicable. At the same time, the algorithm has great potential for dealing with different complex problems.
In the future, SRGS-EHO can be further developed and refined based on practical problems. In addition, it can be introduced to solve discrete and multiobjective optimization problems, and more encouraging results can potentially be achieved.

Data Availability
The data used to support the findings of this study are available from https://github.com/dyxdddddd/SRGS-EHO.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this study.