Improved Glowworm Swarm Optimization Algorithm Based on a Sigmoid Function for the Absolute Value Equation

Solving the absolute value equation (AVE) is a nondifferentiable, NP-hard, continuous optimization problem with a wide range of applications. Because its solutions take different forms, it is challenging to design a single efficient algorithm that can solve different AVEs without overcomplicated technical improvements and problem-dependent objectives. Hence, this paper proposes an improved glowworm swarm optimization (GSO) algorithm with an adaptive step size strategy based on the sigmoid function (SIGGSO) for solving AVEs. Seven test AVEs, including multisolution and high-dimensional AVEs, are selected for testing and compared against seven metaheuristic algorithms. The experimental results show that the proposed SIGGSO algorithm achieves higher solution accuracy and stability than the basic GSO when seeking multiple solutions of AVEs. Moreover, it obtains competitive advantages on multisolution and high-dimensional AVEs compared with other metaheuristic algorithms and provides an effective method for engineering and scientific calculations.


Introduction
The absolute value equation (AVE) is a nondifferentiable, NP-hard optimization problem in a continuous solution space. Many practical problems, such as the site selection problem [1] and the knapsack feasibility problem [2], which are nonlinear, nondifferentiable, multivariable, multiparameter complex optimization problems, are closely related to AVEs. AVEs are equivalent not only to standard and generalized linear complementarity problems but also to bilinear programming problems, which are typical mathematical programming problems with a wide range of applications across several disciplines. Compared to these related problems, the AVE has a simpler structure and is easier to work with. Therefore, an in-depth study of AVEs is a good step toward solving the related problems as well.
Many algorithms have been proposed to solve the AVE. The results of traditional algorithms, such as the generalized Newton method [3], bilinear programming method [4], multivariate spectral gradient algorithm [5], and artificial neural networks (ANN) [6], have been reported. However, the traditional algorithms struggle to deal with the objective functions which lack good analytical properties, e.g., continuity, differentiability, and smoothness.
With the vigorous development of metaheuristic algorithms, many scholars have attempted to apply them to practical problems; we refer to the following literature for a few examples. [7] improved the traditional particle swarm optimization (PSO) algorithm by introducing an information-sharing mechanism and a competition strategy, yielding information-sharing-based PSO (IPSO). IPSO retains the rapid convergence speed of traditional PSO while enhancing its global search capability. The experimental results show that IPSO outperforms traditional PSO and the GA on benchmark functions, especially difficult ones. [8] presented a chaotic monarch butterfly optimization (CMBO) algorithm to solve the large-scale 0-1 knapsack problem, introducing twelve well-known one-dimensional chaotic maps to tune the parameters of CMBO. Additionally, Gaussian mutation is used to perturb a small portion of the solutions with worse fitness.
The proposed CMBO outperforms the standard MBO and eight other state-of-the-art canonical algorithms. [9] presented two binary variants of the Hunger Games Search Optimization (HGSO) algorithm based on V- and S-shaped transfer functions (BHGSO-V and BHGSO-S) within a wrapper feature selection (FS) model for choosing the best features from a large dataset. The experimental results demonstrate that the BHGSO-V algorithm can reduce dimensionality and choose the most helpful features for classification problems. Also for feature selection, [10] proposed the island algorithm (IA) with a Gaussian mutation strategy (IAGM) to find the optimal feature subset. The new variant of IA efficiently addresses the tendency of the island algorithm to converge to local optima as the number of iterations increases. The colony predation algorithm (CPA) is a recently proposed algorithm that has already been applied in several areas. For example, [11] proposed a framework combining CPA with a kernel extreme learning machine (KELM), abbreviated ECPA-KELM, which yields an efficient intelligent method for the diagnosis of COVID-19 from the perspective of biochemical indexes. The statistical analysis results show that ECPA-KELM can effectively discriminate and classify the severity of COVID-19 as a possible computer-aided method and provide early warning for its treatment and diagnosis. [12] first proposed a novel metaheuristic algorithm called the Harris hawks optimization (HHO) algorithm, whose main inspiration is the cooperative behavior and chasing style of Harris' hawks in nature, called the surprise pounce. Its authors conducted extensive experiments, and the statistical results and comparisons show that HHO provides very promising and occasionally competitive results compared to well-established metaheuristic techniques.
Since its introduction, HHO has been widely applied. For example, [13] proposed a novel satellite image segmentation technique based on dynamic HHO with a mutation mechanism (DHHO/M) and a thresholding technique. Compared with the original HHO, the dynamic control parameter strategy and mutation operator used in DHHO/M avoid falling into local optima and efficiently enhance the search capability. Experiments on various satellite images illustrate that DHHO/M is superior to other methods in three aspects: fitness function evaluation, image segmentation effect, and statistical tests.
For solving AVEs, several metaheuristic algorithms have been applied, including the genetic algorithm (GA), particle swarm optimization (PSO), differential evolution (DE), and harmony search (HS). The authors of [14] studied the optimum correction of the absolute value equation using a GA and obtained computational results that demonstrate the effectiveness of the algorithm; however, they did not give sufficient examples, verifying only the case of high-dimensional AVEs. PSO with exponentially decreasing inertia weight (EDIW) [15] has been discussed in depth. By dynamically changing the inertia weight, PSO can more easily escape from a local optimum and progress toward a global optimum. The authors gave a series of test AVEs with a unique solution or multiple solutions to verify the effectiveness of EDIWPSO. However, the numerical experiments contain only a few test examples and no high-dimensional AVEs. Moreover, even for AVEs with multiple solutions, EDIWPSO can only capture the multiple solutions by running the algorithm several times. An improved adaptive differential evolution (IADE) algorithm [16] was proposed for solving AVEs; it combines global and local search abilities using a quadratic adaptive mutation operation and a crossover operation. Numerical results show that the improved algorithm can quickly find solutions of the test AVEs, but it faces the same issue of insufficient test examples and comparisons. An improved harmony search algorithm with chaos (HSCH) [17] has also been applied to AVEs; it has better optimization capability than the basic harmony search algorithm (HS) and the improved harmony search algorithm with a differential mutation operator (HSDE).
The authors verified the performance of HSCH using three test AVEs, but their tests still lack higher-dimensional AVEs, AVEs with multiple solutions, and comparisons with other metaheuristic algorithms.
Although many metaheuristic algorithms have been employed to solve optimization problems including AVEs, existing metaheuristic algorithms still provide low-precision solutions to AVEs and lack the ability to straightforwardly attain multiple solutions. Consequently, the above algorithms lack the strong generalization ability required to work well across problem instances. Hence, to address different forms of AVEs effectively, designing an efficient algorithm with broad applicability is a critical issue.
Compared to the above algorithms for solving AVEs, the glowworm swarm optimization (GSO) algorithm has a natural advantage when solving multimodal optimization problems. Although GSO shares certain characteristics with the other metaheuristic algorithms compared in this study, there are several differences. GSO can detect the multiple peaks of multimodal functions in parallel, a problem that cannot be solved directly by the original versions of the compared metaheuristic algorithms, which are generally used to find a single global optimum. When solving AVEs, however, multiple solutions generally exist because of the absolute value vector |x|, and one important consideration for algorithms solving AVEs is to locate as many solutions as possible. This capability is what separates GSO from the other metaheuristic algorithms and is the motivation for improving the basic GSO and designing SIGGSO to solve AVE problems. Furthermore, GSO is not subject to conditions such as individual failures, the addition of noise, or the limitation to differentiable functions, which might otherwise cause it to lose its search ability [18]. These advantages make GSO more adaptable to practical optimization problems such as AVEs, which are multimodal functions under certain conditions. However, because the basic GSO has trouble leaving local optima, we modify it to improve its ability to solve multisolution and high-dimensional AVEs. We illustrate this in Section 5.

Wireless Communications and Mobile Computing
Notably, the present study attempts to address a clear scientific gap with the following contributions:
(i) Based on the fact that the sigmoid function offers a good balance between linear and nonlinear behavior, an adaptive step size strategy derived from the sigmoid function is designed and applied to GSO in this paper.
(ii) The introduction of this adaptive step size strategy gives GSO a strong ability to jump out of local optima, compared with the basic GSO and its fixed step size strategy.
(iii) The proposed improved GSO outperforms the basic GSO on several test AVEs and obtains competitive advantages on multisolution and high-dimensional AVEs compared with other metaheuristic algorithms such as PSO, GA, HHO, DE, and HS. It can provide an effective method for engineering and scientific calculations.
The rest of the paper is organized as follows: Section 2 provides background information and the existence and uniqueness theory of AVE solutions. Section 3 describes the basic GSO. Section 4 describes the proposed SIGGSO in detail. The experimental design is presented in Section 5. In Section 6, conclusions and future research directions are provided.

Absolute Value Equation
The general form of the AVE is Ax − |x| = b, where A ∈ R^(n×n), x, b ∈ R^n, and |x| denotes the componentwise absolute value of x. It is an important subclass of the absolute value matrix equation Ax + B|x| = b [19]. Mangasarian and Meyer [4] reported the following theoretical results on the existence and uniqueness of AVE solutions:

Lemma 1. If A ∈ R^(n×n) and all the singular values of A exceed one, then the AVE has a unique solution for any b ∈ R^n.

Lemma 2.
For A ∈ R^(n×n), if ‖A^(−1)‖ ≤ 1, then the AVE has a unique solution for any b ∈ R^n.

Since the GSO we consider is generally suited to unconstrained optimization problems, we transform the AVE into an unconstrained optimization problem as follows:

Theorem 4. The AVE can be equivalently transformed into the unconstrained optimization problem min Ψ(x) = (1/2)‖f(x)‖², where f(x) = Ax − |x| − b, ‖·‖ denotes the Euclidean norm, and Ψ : R^n → R_+. If the optimal value of Ψ(x) is zero, then x is a solution of the AVE.
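The objective of Theorem 4 can be checked numerically. Below is a minimal sketch; the 2×2 instance is hypothetical, constructed so that the solution is known in advance:

```python
import numpy as np

def ave_fitness(A, b, x):
    """Psi(x) = 0.5 * ||A x - |x| - b||^2, zero exactly at an AVE solution."""
    r = A @ x - np.abs(x) - b
    return 0.5 * float(r @ r)

# Choose a solution first, then derive b so that A x* - |x*| = b holds.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
```

By construction, Ψ vanishes at x* and is strictly positive elsewhere, which is exactly the equivalence the theorem states.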
Proof. Suppose that x* is the optimal solution of Ψ(x) = (1/2)‖f(x)‖². By the positive definiteness of the Euclidean norm, Ψ(x*) = 0 if and only if f(x*) = Ax* − |x*| − b = 0. Hence, x* solves the AVE exactly when it is an optimal solution of Ψ(x) = (1/2)‖f(x)‖² with optimal value zero.

Basic Concepts of GSO

In 2005, Krishnanand and Ghose [20] proposed a new swarm intelligence optimization algorithm called the glowworm swarm optimization (GSO) algorithm. After several years of development, GSO has good prospects for application to optimization in task scheduling, vehicle routing problems, and building design [21-23].
Each iteration of the GSO execution process includes five phases: the glowworm deployment phase, luciferin update phase, movement probability calculation phase, location update phase, and neighborhood range update phase, which are described below.

Mathematical Model of GSO.
In this section, we introduce the mathematical model of GSO proposed by Krishnanand and Ghose [20]. For the reader's convenience, we first provide a list of the notation we will use, shown in Table 1.
3.2.1. Glowworm Deployment Phase. n glowworms (solution vectors) are randomly placed in the feasible domain of the problem and labeled x_1, x_2, ⋯, x_n, where each glowworm is an m-dimensional vector x_i = (x_i1, x_i2, ⋯, x_im)^T, i = 1, 2, ⋯, n. Each glowworm is initialized with luciferin level l_0, local decision radius r_0, step size s, threshold n_t on the number of glowworms contained in the local decision domain, luciferin decay factor ρ, fitness enhancement factor γ, domain change rate β, sensor range r_s (the upper bound of the local decision domain), and iteration number M.

3.2.2. Luciferin Update Phase. The luciferin level of each glowworm equals its level at the previous iteration, minus a proportion that decays with time, plus a proportion of the glowworm's current fitness:

l_i(t) = (1 − ρ) l_i(t − 1) + γ J(x_i(t)),

where l_i(t) is the luciferin level of glowworm i at iteration t and J(x_i(t)) indicates the fitness of glowworm i at iteration t, i.e., the corresponding value of the objective function.
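The luciferin update can be sketched as follows. This is a minimal sketch: ρ = 0.4 and γ = 0.6 are the commonly used GSO defaults, and the fitness is assumed to be oriented so that larger means better (e.g., J = −Ψ when minimizing Ψ):

```python
import numpy as np

def update_luciferin(luciferin, fitness, rho=0.4, gamma=0.6):
    """l_i(t) = (1 - rho) * l_i(t-1) + gamma * J(x_i(t)).
    rho is the luciferin decay factor and gamma the fitness
    enhancement factor; J must be larger for better glowworms."""
    return (1.0 - rho) * np.asarray(luciferin) + gamma * np.asarray(fitness)

# Decay pulls luciferin toward zero; fitness replenishes it.
l_new = update_luciferin([5.0, 5.0, 5.0], [1.0, 2.0, 3.0])
```

The decay term prevents luciferin from growing without bound, so brightness tracks recent rather than cumulative fitness.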

3.2.3. Movement Probability Calculation Phase. Each glowworm decides its movement direction according to the luciferin levels of the glowworms in its local decision domain. For each glowworm i, the probability of moving toward a neighbor j is

P_ij(t) = (l_j(t) − l_i(t)) / Σ_{k ∈ N_i(t)} (l_k(t) − l_i(t)),

where j ∈ N_i(t), N_i(t) = {j : ‖x_j(t) − x_i(t)‖ < r_d^i(t), l_i(t) < l_j(t)} denotes the set of neighbors of glowworm i at iteration t, r_d^i(t) denotes the adaptive local decision domain of glowworm i at iteration t, and 0 ≤ r_d^i(t) ≤ r_s.
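The neighbor set and the roulette-wheel selection that uses these probabilities can be sketched as follows (a minimal sketch with a hypothetical three-glowworm configuration):

```python
import numpy as np

def neighbors(i, X, luciferin, r_d):
    """N_i(t): indices j with ||x_j - x_i|| < r_d^i and l_i < l_j."""
    dist = np.linalg.norm(X - X[i], axis=1)
    mask = (dist < r_d) & (luciferin > luciferin[i])
    mask[i] = False                      # a glowworm is not its own neighbor
    return np.flatnonzero(mask)

def pick_neighbor(i, X, luciferin, r_d, rng):
    """Roulette-wheel selection: P_ij proportional to l_j - l_i."""
    nbrs = neighbors(i, X, luciferin, r_d)
    if nbrs.size == 0:
        return None                      # no brighter neighbor: stay put
    w = luciferin[nbrs] - luciferin[i]
    return int(rng.choice(nbrs, p=w / w.sum()))

# Glowworm 1 is close and brighter than glowworm 0; glowworm 2 is too far.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
luci = np.array([1.0, 2.0, 0.5])
chosen = pick_neighbor(0, X, luci, r_d=1.0, rng=np.random.default_rng(0))
```

Only strictly brighter glowworms within the decision radius are eligible, so the weights l_j − l_i are always positive.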

3.2.4. Location Update Phase. Glowworm i selects a glowworm j ∈ N_i(t) with probability P_ij(t) and performs a location update, moving with a certain step size s toward the selected glowworm:

x_i(t + 1) = x_i(t) + s · (x_j(t) − x_i(t)) / ‖x_j(t) − x_i(t)‖.

3.2.5. Neighborhood Range Update Phase. Each glowworm uses an adaptive local decision radius, which changes at each iteration according to the number of neighboring glowworms (the local decision radius increases when the number of neighbors is smaller, and vice versa). At each iteration, the following rule is applied:

r_d^i(t + 1) = min{r_s, max{0, r_d^i(t) + β (n_t − |N_i(t)|)}}.

A variant of the neighborhood range update rule was first introduced in [18]; it is based on D_i(t) = |N_i(t)| / (π r_s²), the glowworm density in the local decision domain of glowworm i at iteration t, where β is a constant representing the domain change rate.
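The location and neighborhood range updates can be sketched as follows (a minimal sketch; β = 0.08 and n_t = 5 follow commonly used GSO defaults):

```python
import numpy as np

def move_towards(x_i, x_j, step):
    """x_i(t+1) = x_i(t) + s * (x_j - x_i) / ||x_j - x_i||."""
    x_i = np.asarray(x_i, dtype=float)
    d = np.asarray(x_j, dtype=float) - x_i
    return x_i + step * d / np.linalg.norm(d)

def update_radius(r_d, n_neighbors, r_s, beta=0.08, n_t=5):
    """r_d(t+1) = min(r_s, max(0, r_d + beta * (n_t - |N_i(t)|))).
    Few neighbors widen the radius; crowding shrinks it."""
    return min(r_s, max(0.0, r_d + beta * (n_t - n_neighbors)))
```

The radius rule is what lets GSO split the swarm into subgroups around different peaks, which is the property exploited later for multisolution AVEs.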
The GSO flowchart is shown in Figure 1.

Improved GSO Based on the Sigmoid Function

4.1. Basic Idea for GSO Improvement. In a neural network (NN), the sigmoid function is derived from the activation function used to limit the output amplitude of a neuron. As it suppresses the output signal to a permissible range, it is also known as the suppression function; we refer to [24-26] for some application examples. In general, the normal output range of a neuron can be expressed as the unit closed interval [0, 1]. The sigmoid function is differentiable, and its value range is the continuous interval from 0 to 1; for this reason, it is widely used in neural networks as a suppression function. The sigmoid function is strictly increasing and exhibits a good balance between linear and nonlinear behavior. An example of the sigmoid function is the logistic function

f(x) = 1 / (1 + e^(−ax)),

where a is the tilt parameter, which can be modified to change the degree of tilt.
To utilize the sigmoid function to construct an adaptively decreasing step size, we need to modify it so that it is strictly monotonically decreasing while its value remains within the [0, 1] range. Therefore, we set an initially large step size s_0 during GSO execution and then multiply s_0 by the constructed function. Based on this approach, we make the following changes to the sigmoid function:

φ(t) = 1 − 1 / (1 + e^(−λ(t/T_max − ε))),        (9)

where λ and ε are undetermined parameters. The fixed step size location update model is thus changed into an adaptive variable step size model:

x_i(t + 1) = x_i(t) + s_0 φ(t) · (x_j(t) − x_i(t)) / ‖x_j(t) − x_i(t)‖.        (10)

As detailed in the numerical experiments (Section 5), the performance of GSO with the adaptively decreasing step size strategy (9) for solving AVEs is good. We call the GSO improved with the adaptive variable step size model (10) SIGGSO, which comprises models (3), (4), (10), and (7).
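The modified sigmoid step schedule can be sketched as follows (a minimal sketch using λ = 50 and ε = 0.1, the values adopted in the experiments):

```python
import math

def phi(t, t_max, lam=50.0, eps=0.1):
    """phi(t) = 1 - 1/(1 + exp(-lam*(t/t_max - eps))): strictly
    decreasing from near 1 (early iterations) toward 0 (late ones)."""
    return 1.0 - 1.0 / (1.0 + math.exp(-lam * (t / t_max - eps)))

def step_size(t, t_max, s0=1.0, **kw):
    """Adaptive step s(t) = s0 * phi(t)."""
    return s0 * phi(t, t_max, **kw)
```

Early iterations keep the step near s_0 for fast convergence toward the solutions; the drop around t/T_max = ε then shrinks the step for fine-tuning.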
To ensure that the improved adaptively decreasing step size model (10) can be better applied to the GSO for solving AVEs, the selection of the values of parameters λ and ε is critical.
Here, we consider AVE1 (given in Section 5) to test candidate values of λ and ε. The mean value and standard deviation of the fitness at the obtained optimal solution are listed in Table 2 for different values of λ and ε, and the curves of strategy (9) for these values are displayed in Figure 2. Table 2 shows that when ε is fixed, the results improve as λ increases; when λ = 50, SIGGSO solves AVE1 with very high accuracy, the minimum fitness is zero, and the algorithm is stable. When λ is fixed, the results improve as ε decreases; when ε = 0.1, SIGGSO again solves AVE1 with very high accuracy, the minimum fitness is zero, and the algorithm is stable. Figures 2(a) and 2(b) show that as λ increases and ε decreases, φ(t) first declines rapidly with t and then gradually stabilizes. This is in line with our basic idea for the adaptive step size improvement: during the early iterations, the step size changes relatively quickly, so that the glowworms rapidly converge near the solutions, whereas in the later stages it changes slowly, so that the glowworms can be fine-tuned near the solutions, rendering the solution more accurate. Based on this analysis, we fix λ = 50 and ε = 0.1 for SIGGSO in the numerical experiments and use model (10) to update the locations of the glowworms. The pseudocode of the SIGGSO algorithm is depicted in Pseudocode 1, and the flowchart of SIGGSO is shown in Figure 3; each iteration performs the luciferin update, the step size update s(t) = s_0 (1 − 1/(1 + exp(−50(t/T_max − 0.1)))), the probability calculation with roulette-wheel selection, the location update, and the neighborhood range update.
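Putting the phases together, the loop of Pseudocode 1 can be sketched as follows. This is a compact sketch, not the authors' exact implementation: the search bounds, l_0, and the luciferin fitness J = −Ψ are illustrative assumptions, while λ, ε, ρ, γ, β, and n_t follow the values used above or standard GSO defaults:

```python
import numpy as np

def siggso(A, b, n=60, t_max=300, s0=1.0, lam=50.0, eps=0.1,
           l0=5.0, rho=0.4, gamma=0.6, beta=0.08, n_t=5,
           r0=3.0, bounds=(-5.0, 5.0), seed=0):
    """Sketch of SIGGSO minimizing Psi(x) = 0.5*||Ax - |x| - b||^2."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    r_s = r0
    X = rng.uniform(bounds[0], bounds[1], size=(n, m))  # deployment phase
    luci = np.full(n, l0)
    r_d = np.full(n, float(r0))

    def psi(x):
        r = A @ x - np.abs(x) - b
        return 0.5 * float(r @ r)

    for t in range(1, t_max + 1):
        fit = np.array([psi(x) for x in X])
        # Luciferin update; J = -Psi so brighter means closer to a solution.
        luci = (1.0 - rho) * luci + gamma * (-fit)
        # Adaptive step size from the modified sigmoid, models (9)-(10).
        s = s0 * (1.0 - 1.0 / (1.0 + np.exp(-lam * (t / t_max - eps))))
        new_X = X.copy()
        for i in range(n):
            dist = np.linalg.norm(X - X[i], axis=1)
            nbrs = np.flatnonzero((dist < r_d[i]) & (luci > luci[i]))
            if nbrs.size:
                w = luci[nbrs] - luci[i]
                j = rng.choice(nbrs, p=w / w.sum())     # roulette wheel
                d = X[j] - X[i]
                nrm = np.linalg.norm(d)
                if nrm > 0.0:
                    new_X[i] = X[i] + s * d / nrm
            # Neighborhood range update.
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - nbrs.size)))
        X = new_X
    fits = [psi(x) for x in X]
    idx = int(np.argmin(fits))
    return X[idx], fits[idx]
```

On a small instance with a known solution, the returned best fitness should be nonnegative and substantially better than that of an arbitrary point such as the origin.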

Time Complexity Analysis of SIGGSO.
In this section, we will use big O notation for time complexity analysis.

Time Complexity Analysis for Population Size n. Pseudocode 1 shows that the SIGGSO algorithm has an outer loop and two inner loops, and the second inner loop contains a further inner loop. The move phase costs O(T_max · n · length(N_i(t))).

Remark 5. a · b represents the product of two scalars a and b. Table 3 shows the total time complexity of SIGGSO. According to the addition rule, only the highest-order term of the time complexity is considered, and according to the multiplication rule, O(c · n) is equivalent to O(n) (c constant). Therefore, the time complexity of SIGGSO is O(n · length(N_i(t))). As length(N_i(t)) changes for each glowworm at each t, it satisfies constant ≤ length(N_i(t)) ≤ n. Finally, T(n) lies between O(n) and O(n²).

For reference, the compared algorithms (listed in Table 5) are: GSO, the basic glowworm swarm optimization algorithm [20]; SIGGSO, the improved GSO proposed in this study for solving AVEs; IADE, the improved adaptive differential evolution algorithm [16]; EDIWPSO, the particle swarm optimization algorithm with exponentially decreasing inertia weight [15]; HSCH, the improved harmony search algorithm with chaos [17]; GA, the genetic algorithm [14]; and HHO, the Harris hawks optimization algorithm [12].

Time Complexity Analysis for m-Dimensional AVEs.
We assume that both the population size n and the maximum iteration number T_max are constants when analyzing the time complexity with respect to the AVE dimension m.

Numerical Experiments
In this section, we first give seven test AVEs, including multisolution and high-dimensional ones, and analyze the solution characteristics of the test AVEs. Parameter setting details are given in Section 5.3. Comparisons of the results of SIGGSO, the basic GSO, and the other metaheuristic algorithms are described in Sections 5.4 and 5.5. In these comparisons, we mainly consider the mean and standard deviation of the fitness value as metrics; furthermore, the well-known Wilcoxon signed-rank test is used to assess the significance of the differences between the compared results. In AVE4, AVE5, AVE6, and AVE7, m is the size of the problem, i.e., the dimension of the solution; rand is a random number following a uniform distribution on [0, 1]; rand(m) is an m-order square matrix whose elements are generated by rand; rand(m, 1) is an m-dimensional column vector whose elements are generated by rand; I is the m-order identity matrix; and e is the m-dimensional column vector whose elements are all unity. R^T represents the transpose of R.
The above AVEs were employed to test and compare the performances of SIGGSO with other metaheuristic algorithms which were programmed using MATLAB 2018a. All the experiments were performed on a PC configured with an Intel Core i5-7400 processor and 8-GB RAM. For a quick impression, the compared algorithms are listed in Table 5.
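To illustrate how a test AVE with a guaranteed unique solution can be generated, consider the following hypothetical sketch based on Lemma 1 (not the paper's exact generators): force all singular values of A above one via an SVD-style construction, pick a solution x*, and derive b from it.

```python
import numpy as np

def random_unique_ave(m, seed=0):
    """Build A whose singular values all lie in (1.1, 2.0), so by
    Lemma 1 the AVE Ax - |x| = b has a unique solution for any b;
    b is then derived from a chosen x_star, making the solution known."""
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))   # random orthogonal
    V, _ = np.linalg.qr(rng.standard_normal((m, m)))
    A = U @ np.diag(rng.uniform(1.1, 2.0, size=m)) @ V.T
    x_star = rng.uniform(-3.0, 3.0, size=m)
    b = A @ x_star - np.abs(x_star)
    return A, b, x_star

A, b, x_star = random_unique_ave(5)
```

Instances built this way are convenient for benchmarking because the unique solution x* is known exactly, so an algorithm's error can be measured directly.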

Characteristics of the Test AVEs.
For AVE1, AVE4, AVE5, and AVE6, we can easily verify that the singular values of the matrix A all exceed one; hence, by Lemma 1, each of these AVEs has a unique solution. The condition of Lemma 2, i.e., ‖A^(−1)‖ ≤ 1, applies to A in AVE7; hence, AVE7 also has a unique solution. A and b in AVE2 and AVE3 satisfy Lemma 3; hence, these AVEs have 2^n solutions. As AVE2 is two-dimensional, it has four solutions, and as AVE3 is three-dimensional, it has eight. Based on Lemma 3, the sign pattern of AVE2 is

Parameter Setting.
Here are the parameter settings of the compared algorithms: (a) for SIGGSO, the fixed parameters, which are not specifically tuned for each AVE, are the same as those in Krishnanand and Ghose [27, 28] and are listed in Table 6. A full factorial analysis in Krishnanand and Ghose [18] shows that the choice of r_s influences the performance of the algorithm in terms of the total number of peaks captured, and they suggested setting r_s equal to r_0. Based on extensive numerical experiments on the test AVEs, we determined appropriate values of r_s and r_0, shown in Table 7. (b) For IADE, EDIWPSO, HSCH, HHO, and GA, all parameter settings are the same as in their respective papers (discussed in Section 1).

Remark 7.
For all the algorithms compared in this study, the initial population was generated randomly in the solution space, the maximum number of iterations was 1000, and the population size was 50.

The eight solutions of AVE3 are obtained in the same way from its sign patterns. The following figures show the process of capturing the AVE solutions directly by GSO and SIGGSO. To clearly demonstrate how the glowworm locations change, we adjusted certain parameters in this subsection: for all the AVEs used in the simulation, we set n = 100 and T_max = 1000, and for AVE4, AVE5, AVE6, and AVE7, we set m = 2. Figures 4-9 indicate that, under the same number of iterations, SIGGSO has a strong ability to leave local optima when solving AVE2-AVE7 and then converge to the global optima.

5.5. Comparison of the Performances of GSO, SIGGSO, IADE, EDIWPSO, HSCH, HHO, and GA in Solving AVEs. We executed each of these algorithms 20 times independently and then compared the mean value and standard deviation of the fitness on the different test AVEs, as shown in Table 8. Table 9 shows the statistical results for the data of Table 8 using the left-sided Wilcoxon signed-rank test, and Table 10 shows the mean value and standard deviation of the execution times of the seven compared algorithms. Table 8 indicates the following: (a) when solving low-dimensional, multisolution AVEs, SIGGSO, IADE, and EDIWPSO exhibit excellent solution accuracy and stability, whereas the performance of HSCH and GA is unsatisfactory, and the performance of GSO and HHO is the worst. In fact, HSCH and GA did not perform well on any of the test AVEs. Moreover, SIGGSO outperforms all the other compared algorithms on AVE2. (b) When solving high-dimensional AVEs, EDIWPSO, HSCH, and GA are far inferior to SIGGSO and IADE in all aspects of our comparison. The performance of GSO is slightly inferior to SIGGSO on all high-dimensional AVEs. Furthermore, IADE outperforms SIGGSO on AVE4_100, AVE4_200, AVE7_100, and AVE7_200; however, IADE performs poorly on all 500-dimensional AVEs compared with SIGGSO, which means that IADE fails on higher-dimensional AVEs. HHO outperforms SIGGSO on 50% of the test AVEs, including AVE4_100, AVE4_200, AVE4_500, AVE7_100, AVE7_200, and AVE7_500; on high-dimensional AVE5 and AVE6, however, HHO did not perform as well as SIGGSO.
For a more convincing statistical analysis, the Wilcoxon signed-rank test (WSRT) is adopted to perform pairwise comparisons between SIGGSO and the other algorithms. The hypotheses for SIGGSO are as follows:
H0: the mean and standard deviation of the fitness value obtained by SIGGSO are greater than those of the other algorithm.
H1: the mean and standard deviation of the fitness value obtained by SIGGSO are less than or equal to those of the other algorithm.
We use the left-sided WSRT function signrank(x, y, 'tail', 'left') in MATLAB for the above hypothesis test. Table 9 gives the statistical results on all test AVEs.
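The same left-sided test can be reproduced in Python with SciPy's `wilcoxon`, where `alternative='less'` mirrors MATLAB's 'tail','left'. The per-run fitness values below are hypothetical placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical best-fitness values from 20 independent runs; lower is
# better, so the left-sided test asks whether the first sample is
# significantly smaller than the second.
rng = np.random.default_rng(1)
fit_siggso = rng.uniform(0.0, 1e-6, size=20)
fit_other = rng.uniform(1e-4, 1e-2, size=20)

stat, p = wilcoxon(fit_siggso, fit_other, alternative="less")
significant = p < 0.05
```

With every paired difference negative, the one-sided p value is far below 0.05, matching the kind of conclusion drawn from Table 9.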
According to the p values, Table 9 shows that SIGGSO achieves a significant improvement over GSO, EDIWPSO, HSCH, HHO, and GA at significance level α = 0.05.
For the comparison between SIGGSO and IADE, the p value is 0.2307 (>0.05), which means that the difference between these two algorithms cannot be deemed significant. Meanwhile, for the reverse comparison between IADE and SIGGSO, the p value is 0.7770, which means that we prefer to accept the H1 hypothesis for SIGGSO vs. IADE; i.e., SIGGSO possesses competitive advantages over IADE. Table 10 shows that SIGGSO always requires more execution time than IADE, EDIWPSO, and GA on all the compared AVEs. Although EDIWPSO and GA required far less time than SIGGSO, these algorithms fail to solve the AVEs well, as they have lower solution accuracy and weaker stability than SIGGSO. Given its superior execution time, IADE could become preferable for solving AVEs once its performance on higher-dimensional and multipeak AVEs is improved. Compared to SIGGSO, the execution time of HSCH is unacceptable for higher-dimensional AVEs: HSCH required almost 19, 18.9, 19.5, and 21.4 seconds to solve AVE4_500, AVE5_500, AVE6_500, and AVE7_500, respectively, nearly 3.5 times the time required by SIGGSO. HHO required less time than SIGGSO but did not perform well on most test AVEs. Based on the previous analysis, SIGGSO trades execution time for solution accuracy and the capacity to solve high-dimensional AVEs. Figure 10 displays the fitness plots of one iterative run on the compared AVEs obtained by the compared algorithms.
As shown in Figures 10(a)-10(c), SIGGSO requires more iterations to reach the global optima than the other metaheuristic algorithms on the low-dimensional AVEs. However, when solving high-dimensional AVEs, including AVE5_200, AVE5_500, AVE6_100, AVE6_200, and AVE6_500, SIGGSO converges significantly faster, i.e., with fewer iterations, than the other algorithms. Moreover, the ability of SIGGSO to leave local optima is stronger than that of the other algorithms; for instance, Figures 10(i)-10(l) show that the ability of IADE to leave local optima is weaker.
All the figures show that, when solving either multisolution or high-dimensional AVEs, GA and HSCH fluctuate considerably during their early iterations and cannot leave the local optima, and their solution accuracy is the worst among the compared algorithms.
One point worth considering is that SIGGSO requires slightly fewer iterations than EDIWPSO on most test AVEs. For example, Figure 10(k) shows that SIGGSO requires about 120 iterations to converge to a global optimum, whereas EDIWPSO needs more than 200. This shows that the adaptive model (9) designed in this paper is a viable adaptive step size strategy.
Considering that a robust algorithm is needed to solve AVEs, SIGGSO is an effective technique to this end, especially for multisolution or high-dimensional AVEs, because it equips GSO with an effective adaptive step size technique; this indicates that the chosen strategy is useful for solving AVEs.

Conclusion
The AVE is an NP-hard problem whose solutions take several forms: there are both low-dimensional and high-dimensional AVEs, and both single-solution and multisolution AVEs. This study verified the advantages of GSO in solving multisolution AVEs compared to other metaheuristic algorithms. As the basic GSO has relatively poor solution accuracy and struggles to solve high-dimensional AVEs, an improved GSO based on the sigmoid function, called SIGGSO, was proposed, in which the sigmoid function is used to reconstruct the adaptive step size model of GSO. This improvement enhances the convergence rate of GSO during the early iterations and improves its ability to capture the global optima during the later stages. Numerical experiments established that SIGGSO has higher solution accuracy and better stability than the compared algorithms and, for high-dimensional AVEs, converges to the global optimum in fewer iterations. The proposed SIGGSO algorithm can be applied to linear complementarity, bilinear programming, concave minimization, and other problems, as well as to other continuous optimization problems.
Finally, we suggest several potential directions for future research. We intend to focus on improving the speed of the algorithm on higher-dimensional AVEs, which is a limitation of SIGGSO, and we hope to find other NN activation functions that may further improve the performance of GSO.

Data Availability
The data that support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest
There are no conflicts of interest to declare.