A Hybrid Lightning Search Algorithm-Simplex Method for Global Optimization

In this paper, a novel hybrid lightning search algorithm-simplex method (LSA-SM) is proposed to address the shortcomings of the lightning search algorithm (LSA), namely premature convergence and low computational accuracy, and it is applied to function optimization and constrained engineering design optimization problems. The improvement adds two major optimization strategies. The simplex method (SM) iteratively optimizes the current worst step leaders to keep the population from searching only at the edge of the space, thus improving the convergence accuracy and rate of the algorithm. Elite opposition-based learning (EOBL) increases the diversity of the population to keep the algorithm from falling into local optima. LSA-SM is tested on 18 benchmark functions and five constrained engineering design problems. The results show that LSA-SM has higher computational accuracy, a faster convergence rate, and stronger stability than the compared algorithms and can effectively solve real-world constrained nonlinear optimization problems.


Introduction
Optimization is concerned with finding the best solutions for a given problem. It is ubiquitous and important in many applications such as production line scheduling, power system optimization, 3D path planning, and the traveling salesman problem (TSP). The goal of optimization is to minimize costs in order to maximize profitability and efficiency. In the broadest sense, techniques for dealing with different types of optimization problems are classified into exact and stochastic algorithms. When the problems are large and complex, especially if they are either NP-complete or NP-hard, the use of stochastic algorithms becomes mandatory. These stochastic algorithms do not guarantee an optimal solution, but they are able to find quasi-optimal solutions within a reasonable amount of time [1].
Metaheuristic optimization algorithms are stochastic algorithms that have become the most popular approach for solving optimization problems in the last few decades; they are characterized by simplicity, flexibility, strong robustness, and so on. Some of the most famous of these algorithms are the genetic algorithm (GA) [2], ant colony optimization (ACO) [3], particle swarm optimization (PSO) [4], harmony search (HS) [5], artificial bee colony (ABC) [6], cuckoo search (CS) [7], the bat algorithm (BA) [8], bacterial foraging optimization (BFO) [9], black hole (BH) [10], and so forth. The GA simulates the Darwinian principle of biological evolution. In this algorithm, individuals are expressed as genotypes. Each individual is selected, crossed, and mutated in the evolutionary process, in accordance with the "survival of the fittest" principle of evolution, and finally the optimal solution to the problem is obtained. The ACO algorithm is based on the phenomenon that ants communicate with each other during the foraging process by releasing pheromones to find the best path between the nest and the food source. The PSO algorithm is inspired by bird predation. This algorithm regards birds as massless particles, and, in the search space, the change in a particle's position depends not only on its own experience but also on the best individuals of the society. The HS algorithm simulates the process by which musicians repeatedly adjust the tones of different instruments to achieve the most beautiful harmony. The ABC algorithm is a bionic intelligent method that simulates bee populations searching for excellent nectar. In short, inspiration and wisdom from nature spawned the development of heuristic algorithms to solve complex and large-scale optimization problems. In addition, inspired by the natural phenomenon of lightning, a new metaheuristic optimization algorithm called the lightning search algorithm (LSA) [11] was proposed.
The lightning search algorithm was first proposed by Shareef et al. in 2015 to solve constrained optimization problems; it is inspired by the phenomenon of lightning discharge to the ground during a thunderstorm. The algorithm utilizes the mechanism of step leader propagation and considers the involvement of fast particles known as projectiles in the formation of the binary tree structure of a step leader. It has been shown that LSA has a high convergence rate, and it has successfully solved small-scale TSP instances. Although LSA was introduced only recently, it has been widely studied. In 2015, Shareef et al. used LSA to solve fuzzy logic PV inverter controller optimization problems [12]. In the same year, Ali et al. proposed a novel quantum-behaved lightning search algorithm (QLSA) to improve the fuzzy logic speed controller for an induction motor drive [13]. In 2016, Islam et al. proposed a variant of LSA for binary optimization problems called the binary lightning search algorithm (BLSA) [14]. Sirjani and Shareef applied LSA for parameter extraction of solar cell models under different weather conditions [15]. Mutlag et al. applied LSA to design an optimal fuzzy logic controller for a PV inverter [16]. LSA was used to improve the artificial neural network (LSA-ANN) for the home energy management scheduling controller in the residential demand response strategy in 2016 [17]. LSA was also used to optimize the learning process of feedforward neural networks [18]. Because the LSA algorithm simulates the fast propagation characteristics of lightning, it has a high convergence rate and strong robustness. However, LSA itself has some shortcomings, such as insufficient stability, a tendency to fall into local optima, and low convergence accuracy. Therefore, the goal of this paper is to improve the LSA and expand its application.
In this study, a novel optimization method called the hybrid lightning search algorithm-simplex method (LSA-SM) is applied to function optimization and constrained engineering design optimization. The simplex method (SM) has a fast search speed, a small computational cost, and strong local optimization ability. By introducing the reflection, expansion, and compression operations of the simplex method into the LSA algorithm to improve the step leaders in the worst positions, the accuracy of the algorithm is improved. LSA-SM adds two strategies, SM and elite opposition-based learning (EOBL). SM iteratively optimizes the worst step leaders, bringing the population faster and closer to the global optimal solution. EOBL increases the diversity of the population and expands the search space to keep the algorithm from falling into local optima. The performance of the proposed LSA-SM is tested on 18 well-known benchmark functions and five constrained engineering design problems. The test results show that LSA-SM has a faster convergence rate, higher optimization accuracy, and strong stability. The remainder of the paper is organized as follows. Section 2 briefly introduces the original lightning search algorithm. Section 3 presents the novel hybrid lightning search algorithm-simplex method. The simulation experiment setup, results, and analysis are provided in Section 4. The proposed LSA-SM is applied to solve five constrained engineering design optimization problems in Section 5. Finally, Section 6 gives conclusions and future studies.

Lightning Search Algorithm (LSA)
The lightning search algorithm (LSA) is based on the natural phenomenon of lightning; it is inspired by the probabilistic nature and sinuous characteristics of lightning discharges during a thunderstorm (Figure 1). The optimization algorithm is generalized from the mechanism of step leader propagation using the theory of fast particles known as projectiles. The projectiles represent the initial population, similar to the term "particle" used in PSO. In the LSA, a solution is represented by the energy of the current step leader's tip [11, 12].
Projectile Model.

LSA consists of three types of projectiles: transition, space, and lead projectiles. The transition projectiles create the first step leader population of solutions, the space projectiles engage in exploration and attempt to become the leader, and the lead projectile attempts to find and exploit the optimal solution [15].

Transition Projectile.

At an early stage of the formation of a stepped leader, the transition projectile is ejected from the thunder cell in a random direction. Therefore, it can be modeled as a random number drawn from the standard uniform probability distribution:

f(x^T) = 1/(b - a) for a <= x^T <= b, and 0 otherwise,

where x^T is a random number that may provide a solution and a and b are the lower and upper bounds, respectively, of the solution space. For a population of N stepped leaders SL = [sl_1, sl_2, ..., sl_N], N random transition projectiles P^T = [p_1^T, p_2^T, ..., p_N^T] are required.

Space Projectile.

The step of a space projectile is modeled as a random number drawn from the exponential distribution with shaping parameter mu:

f(x^S) = (1/mu) exp(-x^S / mu) for x^S >= 0, and 0 otherwise.

Thus, the position and direction of p_i^S at step + 1 can be written as

p_i_new^S = p_i^S ± exprand(mu_i),

where exprand is an exponential random number and mu_i is taken as the distance between the lead projectile p^L and the space projectile p_i^S under consideration. If p_i_new^S provides a good solution at step + 1 and the projectile energy E_p,i is greater than the step leader energy E_sl,i, then p_i^S is updated to p_i_new^S. Otherwise, they remain unchanged until the next step.
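The uniform ejection of transition projectiles and the exponential step of a space projectile can be sketched as follows. This is a minimal illustration, not the paper's code; treating mu_i coordinate-wise as the distance to the lead projectile is an assumption.

```python
import random

def init_transition_projectiles(n, a, b, dim):
    """Eject n transition projectiles uniformly inside [a, b]^dim."""
    return [[random.uniform(a, b) for _ in range(dim)] for _ in range(n)]

def space_projectile_step(p_space, p_lead):
    """Move a space projectile by +/- exprand(mu_i), where mu_i is taken
    (per coordinate, an assumption) as the distance to the lead projectile."""
    new = []
    for x, lead in zip(p_space, p_lead):
        mu = max(abs(lead - x), 1e-12)        # avoid a degenerate zero-mean step
        step = random.expovariate(1.0 / mu)   # exprand(mu): exponential, mean mu
        new.append(x + random.choice((-1.0, 1.0)) * step)
    return new
```

The better of the old and new positions would then be kept according to the energy comparison described above.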

Lead Projectile.
The lead projectile p^L moves closer to the ground and can be modeled as a random number drawn from the standard normal distribution:

f(x^L) = (1/(sigma * sqrt(2 * pi))) exp(-(x^L - mu)^2 / (2 * sigma^2)).

The randomly generated lead projectile can search in all directions from the current position, defined by the shape parameter mu. This projectile also has an exploitation ability defined by the scale parameter sigma. The scale parameter sigma decreases exponentially as the leader progresses towards the Earth or as it finds the best solution. Thus, the position of p^L at step + 1 can be written as

p_new^L = p^L + normrand(mu_L, sigma_L),

where normrand is a random number generated by the normal distribution function. Similarly, if p_new^L provides a good solution at step + 1 and the projectile energy E_p is greater than the step leader energy E_sl, then p^L is updated to p_new^L. Otherwise, they remain unchanged until the next step.
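A corresponding sketch of the lead projectile update is given below. The text does not state the exact decay law for the scale parameter, so the exponential schedule and its decay constant here are assumptions for illustration.

```python
import math
import random

def lead_projectile_step(p_lead, t, max_iter, sigma0=1.0):
    """Perturb each coordinate of the lead projectile with normrand(mu, sigma):
    mu is the current coordinate, and the exploitation scale sigma decays
    exponentially as the leader approaches the optimum (assumed schedule)."""
    sigma = sigma0 * math.exp(-5.0 * t / max_iter)
    return [random.gauss(x, sigma) for x in p_lead]
```

Late in the run sigma is small, so the leader performs a fine local search around its current position.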

Forking Procedure.
Forking is an important property of a stepped leader, in which two simultaneous and symmetrical branches emerge. In the proposed algorithm, forking is realized in two ways. First, symmetrical channels are created by the nuclei collision of the projectile, realized by using the opposite number:

p̄_i = a + b − p_i,

where p̄_i and p_i are the opposite and original projectiles, respectively, in a one-dimensional system, and a and b are the boundary limits. In order to maintain the population size, the forking leader keeps whichever of p_i and p̄_i has the better fitness value.
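The opposite-number forking and the keep-the-better selection can be written compactly (a minimal sketch for a minimization problem):

```python
def fork_opposite(p, a, b):
    """Opposite projectile p_bar = a + b - p, applied coordinate-wise."""
    return [ai + bi - x for x, ai, bi in zip(p, a, b)]

def fork_and_select(p, a, b, fitness):
    """Keep whichever of the original and opposite projectiles is fitter,
    so the population size is preserved."""
    p_bar = fork_opposite(p, a, b)
    return p if fitness(p) <= fitness(p_bar) else p_bar
```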
In the second type of forking, a channel is assumed to appear at a successful step leader tip because of the energy redistribution of the most unsuccessful leader after several propagation trials.The unsuccessful leader can be redistributed by defining the maximum allowable number of trials as channel time.
The flowchart of the LSA procedure is shown in Figure 2.

Hybrid Lightning Search Algorithm-Simplex Method (LSA-SM)
Standard LSA has a fast convergence rate, but it still has some shortcomings, such as premature convergence, a tendency to fall into local optima, poor solution accuracy, and a limited ability to solve multimodal optimization problems. In order to improve the search performance of LSA, a hybrid lightning search algorithm-simplex method (LSA-SM) is adopted. In view of the shortcomings of LSA, two optimization strategies, namely, the simplex method (SM) and elite opposition-based learning (EOBL), are added to the standard LSA.

Simplex Method (SM).
The simplex method has exceptional advantages in local search and often reaches high precision. The method was proposed by Nelder and Mead as a simple, derivative-free line search method for finding a local minimum of a function. It is based on the idea of comparing the values of the objective function at the n + 1 vertices of a polytope (simplex) in n-dimensional space and moving the polytope towards the minimum point as the optimization progresses [44, 45]. In this paper, we choose the step leaders with the worst objective function values and use SM to optimize their positions, as shown in Figure 3. This brings the algorithm closer to the optimal solution, improves its exploitation ability, and speeds up the search for the optimal solution. The procedure of the SM in this study is as follows.
Step 1. According to the objective function values of all searched step leaders, find the optimal point X_g (with the minimum function value), the suboptimal point X_b, and the worst points; take one of them as X_s.

Step 2. Calculate the center X_c of the optimal point X_g and suboptimal point X_b:

X_c = (X_g + X_b) / 2.

Step 3. Reflect X_s through the center X_c to a new point X_r and calculate the objective function value at this point:

X_r = X_c + α(X_c − X_s),

where α = 1 is the reflection coefficient.

Step 4. If f(X_r) < f(X_g), then expand X_r to generate a new point X_e:

X_e = X_c + γ(X_r − X_c),

where γ = 1.5 is the expansion coefficient.

Step 5. If f(X_r) > f(X_s), then contract towards the center to generate a new point X_t:

X_t = X_c + β(X_s − X_c),

where β = 0.5 is the contraction coefficient.

Step 6. If f(X_g) < f(X_r) < f(X_s), then shrink X_s to generate a new point X_w:

X_w = X_c − β(X_s − X_c).

The shrinkage coefficient is the same as the contraction coefficient. In each case, the new point replaces X_s if it yields a smaller objective function value.
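The steps above can be condensed into a single update of one worst step leader. The coefficients γ = 1.5 and β = 0.5 come from the text; the reflection coefficient α = 1 and the exact acceptance rules are assumptions where the original description is incomplete.

```python
def simplex_improve(f, x_g, x_b, x_s, alpha=1.0, gamma=1.5, beta=0.5):
    """One simplex-method update of a worst step leader x_s, given the best
    point x_g and suboptimal point x_b (all for a minimization objective f)."""
    x_c = [(g + b) / 2.0 for g, b in zip(x_g, x_b)]          # Step 2: centre
    x_r = [c + alpha * (c - s) for c, s in zip(x_c, x_s)]    # Step 3: reflect
    fr, fs = f(x_r), f(x_s)
    if fr < f(x_g):                                          # Step 4: expand
        x_e = [c + gamma * (r - c) for c, r in zip(x_c, x_r)]
        return x_e if f(x_e) < fr else x_r
    if fr >= fs:                                             # Step 5: contract
        x_t = [c + beta * (s - c) for c, s in zip(x_c, x_s)]
        return x_t if f(x_t) < fs else x_s
    x_w = [c - beta * (s - c) for c, s in zip(x_c, x_s)]     # Step 6: shrink
    return x_w if f(x_w) < fr else x_r
```

Applying this update to each of the chosen worst step leaders at every iteration is what pulls the population away from the edge of the search space.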
The SM allows the current worst step leaders to find a better position at each iteration, which may even be better than the current optimal point. This keeps the population from searching at the edge of the space and guides it towards the global optimum, so the convergence accuracy and rate of the algorithm are improved.

Elite Opposition-Based Learning (EOBL).
Although SM improves the convergence accuracy of the LSA, the algorithm may still fall into local optima, so the EOBL strategy is introduced to increase the diversity of the population and expand the search space.
Opposition-based learning (OBL), formally introduced by Tizhoosh [46], is a model of machine intelligence that considers a current estimate and its opposite estimate at the same time to achieve a better solution. It has been proved that an opposite candidate solution has a higher chance of being closer to the global optimal solution than a random candidate solution [47]. Elite opposition-based learning (EOBL) [48] applies the OBL principle to the elite step leader to generate an elite opposition-based population that participates in competitive evolution, so as to improve the population diversity of LSA. The step leader with the best fitness value is defined as the elite step leader X^e = (x^e_1, x^e_2, ..., x^e_n); the elite opposition-based solution X̄_i = (x̄_{i,1}, x̄_{i,2}, ..., x̄_{i,n}) of step leader X_i = (x_{i,1}, x_{i,2}, ..., x_{i,n}) is defined as

x̄_{i,j} = k (Lb_j + Ub_j) − x_{i,j},  i = 1, 2, ..., N, j = 1, 2, ..., n,  (12)

where N is the population size, n is the dimension of X, k ∈ (0, 1) is a random number, and (Lb_j, Ub_j) is the search bound of dimension j. If the elite opposition-based step leader x̄_{i,j} exceeds the search boundary, the following processing formula is used:

x̄_{i,j} = rand(Lb_j, Ub_j).  (13)

According to the above principles, the EOBL procedure is as follows.
Step 1. Define the elite step leader X^e with the current best fitness value, and then use (12) to generate an elite opposition-based population of size N.
Step 2. If the elite opposition-based step leader exceeds the search boundary, then ( 13) is used.
Step 3. The 2N step leaders (the original population and the opposition-based population) are involved in evolutionary competition, and the N step leaders with better fitness values are selected to enter the next generation.
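Steps 1-3 can be sketched as follows. This is a minimal illustration; whether k is shared across the population or redrawn per leader is not specified in the text, so one k per generation is assumed here.

```python
import random

def eobl_generation(pop, lb, ub, fitness):
    """Generate the elite opposition-based population via x_bar = k*(Lb+Ub) - x
    (Eq. (12)), resample any out-of-bound coordinate uniformly inside the
    bounds (Eq. (13)), then keep the N fittest of the combined 2N step leaders
    (minimization)."""
    k = random.random()
    opposition = []
    for leader in pop:
        cand = [k * (lb[j] + ub[j]) - x for j, x in enumerate(leader)]
        cand = [c if lb[j] <= c <= ub[j] else random.uniform(lb[j], ub[j])
                for j, c in enumerate(cand)]
        opposition.append(cand)
    merged = sorted(pop + opposition, key=fitness)
    return merged[:len(pop)]
```

Because the best of the merged 2N candidates is never worse than the best of the original N, this step can only improve (or preserve) the incumbent solution while injecting diversity.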
The two strategies, SM and EOBL, are added to the LSA to balance the exploitation and exploration of the algorithm; the flowchart of LSA-SM is shown in Figure 4. From the above analysis, it can be concluded that the time complexity of LSA-SM is O(Max_iter × (N × D + N × Cof)) and the space complexity is O(2N), where Max_iter is the maximum number of iterations, N is the population size, D is the dimension of the problem, and Cof is the cost of the objective function.

Simulation Experiment and Result Analysis
To verify the performance of LSA-SM, a variety of numerical comparison experiments follow, using benchmark functions to test LSA-SM and comparing the results with those of some well-known algorithms. In this section, 18 benchmark functions are applied to test LSA-SM; these are classic test functions used by many researchers [11, 28, 49-51]. The benchmark functions are listed in Tables 1-3, where Dim indicates the dimension of the function, Range is the boundary of the function search space, and f_min is the global optimum. The 18 benchmark functions used are minimization functions and can be divided into three groups: F01 to F05 are high-dimensional unimodal functions (group 1), F06 to F10 are high-dimensional multimodal functions (group 2), and F11 to F18 are fixed-dimension multimodal functions (group 3). Unimodal functions have only one optimum, so they are suitable for testing the exploitation of algorithms. In contrast, multimodal functions have many local optima, and the number of local optima increases exponentially with dimension, which makes them suitable for testing exploration.

Experimental Setup.
All numerical comparison experiments were run under MATLAB R2012a using an Intel(R) Xeon(R) CPU E5-1620 v3 @ 3.50 GHz processor and 8.00 GB memory.
The results of the comparative experiments over 20 independent runs of the algorithms are summarized in Tables 4-6 under four evaluation indices, Best, Worst, Mean, and Std., which represent the optimal fitness value, worst fitness value, mean fitness value, and standard deviation, respectively. For each benchmark function, the eight algorithms are ranked by standard deviation, smaller being better; the last column in Tables 4-6 gives the rank of LSA-SM. For each benchmark function, the minimum Best, Mean, and standard deviation values among the eight algorithms are shown in bold.
In order to verify the validity of the above values, nonparametric Wilcoxon rank tests were performed on the results of LSA-SM and each comparison algorithm over the 20 runs of the 18 benchmark functions. The test results are shown in Table 7, where h = 1 indicates that the performance of LSA-SM and the comparative method are statistically different with 95% confidence and h = 0 implies that there is no statistical difference; that is, when p_value ≥ 0.05, h = 0. As shown in Table 7, h = 1 accounts for the vast majority of the results, which implies that the results of the proposed LSA-SM are statistically different from those of the other algorithms.
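The (h, p_value) decision used in Table 7 can be sketched with the normal approximation of the two-sided rank-sum test. This is illustrative; the paper's exact test implementation is not shown, and tie handling via average ranks is a standard convention assumed here.

```python
from math import erf, sqrt

def ranksum_test(a, b, alpha=0.05):
    """Two-sided Wilcoxon rank-sum test (normal approximation).
    Returns (h, p): h = 1 when the two samples differ at level alpha."""
    data = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):                 # assign average ranks over ties
        j = i
        while j + 1 < len(data) and data[j + 1][0] == data[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[data[t][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                  # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sd = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sd
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return (1 if p < alpha else 0), p
```

In practice one would call this with the 20 run results of LSA-SM against those of each comparison algorithm, per benchmark function.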
According to the results in Table 4, LSA-SM can find the exact optimal value of the unimodal benchmark functions, indicating that LSA-SM has strong robustness and high convergence accuracy. For functions F01-F04, the optimal fitness value, mean fitness value, and standard deviation are better than those of the other algorithms. For function F05, the results of LSA-SM are the same as those of GSA, GWO, and WOA and better than those of LSA, BA, FPA, and GOA. The above analysis shows that LSA-SM has good stability and strong exploitation ability on unimodal benchmark functions.
In the results for the multimodal functions in Tables 5 and 6, LSA-SM finds the global optimal solution or a near-global optimal value, indicating that LSA-SM is able to escape from poor local optima. Table 5 gives the experimental results for the high-dimensional multimodal functions. For functions F07-F10, LSA-SM is ranked first among the eight algorithms, and its optimal fitness value, mean fitness value, and standard deviation have the highest accuracy. For function F07, LSA-SM and WOA reach the best global minimum and the standard deviation is zero; GWO also obtains the best global minimum, but its results are not stable. For function F08, LSA-SM and WOA obtain the same optimal fitness value, but the mean and standard deviation of LSA-SM are better than those of WOA. For function F09, LSA-SM, GWO, and WOA can find the exact optimal value, but LSA-SM has the strongest stability. For function F10, the LSA-SM results are much better than those of the other algorithms. For function F06, although the standard deviation of LSA-SM is worse than those of FPA and GSA, its optimal fitness value and mean fitness value are better than those of the seven comparison algorithms. Therefore, LSA-SM has strong global exploration ability and high computational accuracy for solving high-dimensional multimodal functions.
Table 6 shows the experimental results for the fixed-dimension multimodal functions (also called low-dimensional multimodal functions). It can be seen that LSA-SM and some other algorithms are able to find the exact optimal value for functions F11-F18, indicating that LSA-SM has high convergence accuracy on the low-dimensional multimodal functions. For functions F11, F13, and F16-F18, LSA-SM has the smallest standard deviation, which indicates that LSA-SM is more stable than the other algorithms. For function F12, the four evaluation indices of LSA-SM are the same as those of LSA and GSA, and the standard deviation of the three algorithms is zero. For function F14, the standard deviation of LSA-SM is worse only than that of GSA. For function F15, LSA-SM is ranked third, and its mean fitness value and standard deviation are worse than those of GSA and FPA. In short, LSA-SM has clear advantages in optimizing the low-dimensional multimodal functions.
In summary, on the unimodal benchmark functions, LSA-SM converges to the exact solution, which shows that it has high exploitation ability. On the multimodal benchmark functions, LSA-SM avoids poor local optima and has high global exploration ability. Moreover, from the "Rank" column in Tables 4-6, it can be seen that LSA-SM has strong stability.
For the three groups of benchmark functions, Figures 5-22 show the convergence curves drawn from the logarithm of the mean minimum, which makes the performance comparison more convincing, and Figures 23-40 show the ANOVA tests of the global minimum over 20 independent runs. As can be seen from Figures 5-22, the convergence accuracy of LSA-SM is higher than that of the other algorithms.

In this section, in order to verify the optimization performance of the LSA-SM algorithm, 18 benchmark functions are used in simulation experiments, and the results obtained by LSA-SM are compared with those of LSA, BA, FPA, GSA, GWO, WOA, and GOA. These benchmark functions include high-dimensional unimodal, high-dimensional multimodal, and fixed-dimension multimodal functions, chosen to verify the convergence, exploration, exploitation, and local-optimum avoidance of the algorithms. The nonparametric Wilcoxon rank test is performed on the results of LSA-SM and the other algorithms to determine whether the results of LSA-SM are statistically different from those of the other algorithms. At the same time, the convergence curves and ANOVA test graphs of the algorithms are given.
From the results on the unimodal test functions, LSA-SM obtains the exact solution and the standard deviation is zero, which indicates that LSA-SM has high exploitation ability and strong stability. LSA-SM achieves the global minimum or a near-global minimum on the multimodal test functions, which proves that it has high exploration ability and avoids local optima. From the convergence curves of the test functions, it is concluded that LSA-SM has a faster convergence rate than LSA and the other algorithms. Moreover, in the standard deviation ranking, LSA-SM ranked first in the majority of cases, indicating that the stability of LSA-SM is very strong. This result is also clearly observed in the ANOVA tests, where LSA-SM has a narrow interquartile range and fewer outliers than the other algorithms. In addition, the nonparametric Wilcoxon rank test results show that the results of LSA-SM are statistically different from those of the other algorithms, which confirms the validity of the LSA-SM results.

LSA-SM achieves better test results mainly because SM and EOBL improve the exploration and exploitation of the hybrid algorithm. SM iteratively optimizes the step leaders in the worst positions and promotes the local search of the algorithm, which brings the algorithm faster and closer to the global optimal solution. EOBL extends the search space to enhance the global search of the algorithm and keep it from falling into local optima. In addition, according to the results in Tables 4-6, LSA, BA, FPA, and GOA perform poorly on the high-dimensional unimodal and high-dimensional multimodal test functions but obtain the best value or an approximately optimal value on the low-dimensional multimodal test functions, indicating that LSA, BA, FPA, and GOA have low exploitation ability and perform poorly when optimizing high-dimensional functions. GSA, GWO, and WOA achieve good results on most test functions, but their convergence accuracy needs to be further improved by strengthening local search. Moreover, from the convergence curve graphs, the early convergence rate of these seven algorithms is not very fast, mainly because their early search range is not large. In the ANOVA test graphs, the seven algorithms have a wider interquartile range and more outliers on some functions, indicating that the stability of these algorithms is not strong, mainly because they do not balance exploration and exploitation well. In conclusion, the LSA-SM proposed in this paper has a fast convergence rate, high convergence accuracy, and strong stability.

Constrained Engineering Design Optimization
Structural engineering optimization problems are complex; sometimes even the optimal solutions of interest do not exist [31]. These problems are usually nonlinear and have several equality and inequality constraints. In order to evaluate the ability of LSA-SM to solve practical problems, we employed five classic constrained engineering design problems: tension/compression spring, cantilever beam, pressure vessel, welded beam, and three-bar truss.

Tension/Compression Spring Design.

This problem is from Arora and Belegundu and aims to minimize the weight of a tension/compression spring, as shown in Figure 41. The minimum weight is subject to four constraints on minimum deflection (g_1), shear stress (g_2), surge frequency (g_3), and limits on the outside diameter (g_4). There are three design variables: wire diameter (x_1), mean coil diameter (x_2), and the number of active coils (x_3) [23]. The mathematical formulas involved are as follows:

where the variable ranges are 0.05 ≤ x_1 ≤ 2.00, 0.25 ≤ x_2 ≤ 1.30, and 2.00 ≤ x_3 ≤ 15.0. Table 8 compares the best optimization results obtained using LSA-SM with those reported in the literature using different methods. We can see that the minimum weight obtained by LSA-SM for the tension/compression spring design problem is better than those of most of the methods except ABC; the minimum weight of LSA-SM is 0.01266524 and that of ABC is 0.012665.
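As a concrete illustration, the widely used formulation of this problem can be evaluated with a static penalty, making it directly usable as a fitness function for LSA-SM or any other metaheuristic. The constants below are from the standard benchmark literature and may differ cosmetically from the paper's (not reproduced) equations; the penalty weight is an arbitrary choice.

```python
def spring_weight(x):
    """Objective: spring weight (x3 + 2) * x2 * x1^2."""
    d, D, N = x  # wire diameter, mean coil diameter, number of active coils
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """The four constraints in g_i(x) <= 0 form: deflection, shear stress,
    surge frequency, and outside-diameter limit (standard formulation)."""
    d, D, N = x
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ]

def penalized(x, rho=1e6):
    """Static-penalty fitness: weight plus rho * sum of squared violations."""
    return spring_weight(x) + rho * sum(max(0.0, g) ** 2
                                        for g in spring_constraints(x))
```

A near-optimal design reported in the literature, roughly x = (0.05169, 0.35675, 11.2871), yields a weight of about 0.012665 with all constraints essentially satisfied.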

Cantilever Beam Design.
The cantilever beam consists of five square hollow blocks, as shown in Figure 42. The beam is rigidly supported at node 1, and a given vertical force acts at node 6. The problem concerns weight optimization; the design variables are the heights (or widths) of the five different beam elements, and the thickness is held constant [31]. The detailed description of the cantilever beam problem is as follows:

Minimize: f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5)

Subject to: g(x) = 61/x_1^3 + 37/x_2^3 + 19/x_3^3 + 7/x_4^3 + 1/x_5^3 − 1 ≤ 0,

where 0.01 ≤ x_i ≤ 100, i = 1, 2, ..., 5. Comparison results for the cantilever beam design problem are listed in Table 9. It can be concluded that the best result of LSA-SM clearly outperforms the other methods, achieving the overall best design with a weight of 1.339958. In addition, the optimization result of SOS is 1.33996, which is close to the result of LSA-SM.

Pressure Vessel Design.
The pressure vessel design is a cylindrical vessel capped at both ends by hemispherical heads, as shown in Figure 43. The objective is to minimize the total cost. There are four design variables, T_s (thickness of the shell, x_1), T_h (thickness of the head, x_2), R (inner radius, x_3), and L (length of the cylindrical section of the vessel, x_4) [25], and four constraints. The thicknesses T_s and T_h are discrete values that are integer multiples of 0.0625 inches, while R and L are continuous. The comparison of the best results for the pressure vessel design is shown in Table 10. It can be seen that the minimum total cost obtained by LSA-SM is 5942.6966, which is worse only than that of IHS among the listed methods. This result indicates that LSA-SM has successfully obtained the solution.
Welded Beam Design.

Table 11 compares the optimization results of LSA-SM and other optimization methods found in the literature; LSA-SM achieves solutions superior to all the other methods, with a minimum cost of 1.695247 for the welded beam design problem.

Three-Bar Truss Design.
Figure 45 shows a three-bar truss structure; the aim is to achieve the minimum weight subject to stress, deflection, and buckling constraints and to evaluate the optimal cross-sectional areas (A_1, A_2) [27, 31]. This problem has two variables and three constraints, involving the following mathematical formulas:

where the variable ranges are 0 ≤ A_1, A_2 ≤ 1 and l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². Table 12 lists the results of solving the three-bar truss design problem with the proposed LSA-SM and eight other methods.

In summary, LSA-SM has effectively solved five classical constrained engineering design problems, and its optimization results are superior to those of most of the methods in the literature. These results show that LSA-SM has an outstanding ability to solve real-world constrained nonlinear optimization problems.

Conclusions
In the current study, a hybrid LSA-SM is proposed to solve global optimization problems. The standard LSA algorithm is based on the natural phenomenon of lightning discharge to the ground; it has a fast convergence rate and simulates the process of lightning bifurcation to explore better solutions. In view of the poor convergence accuracy of the original LSA, the simplex method, a prominent local search method, is added to improve its accuracy: the step leaders with the worst fitness values are improved using the reflection, expansion, and compression operations of the SM. In addition, the elite opposition-based learning strategy is introduced to expand the diversity of the population and keep LSA-SM from falling into local optima. SM and EOBL are added to the LSA to balance the exploitation and exploration of the algorithm. The performance of LSA-SM is tested on 18 benchmark functions and compared with LSA, BA, FPA, GSA, GWO, WOA, and GOA. The experimental results show that LSA-SM finds the exact solution of most of the tested functions and has higher computational accuracy, a faster convergence rate, and stronger stability. Finally, the hybrid LSA-SM algorithm is used to solve five real-world engineering problems: the tension/compression spring, cantilever beam, pressure vessel, welded beam, and three-bar truss design problems. The results obtained by LSA-SM on these engineering design problems are much better than those of most of the algorithms in the literature, which shows that LSA-SM is suitable for solving constrained nonlinear optimization problems.
For future work, LSA-SM needs further tuning of the relevant parameters or additional improvement strategies, because it does not find an exact solution on some test functions. The proposed LSA-SM will also be used to solve multiobjective optimization problems. In addition, Shareef et al. have used LSA to solve the TSP with 20 cities [11]. In order to verify the performance of LSA-SM on higher-dimensional problems, solving large-scale TSP instances will be the next study. Finally, LSA-SM will be applied to the optimal scheduling of tasks on cloud resources, and its results will be compared with those of existing algorithms for this problem. The optimal scheduling of tasks in the cloud computing environment is a hot issue: a number of tasks may need to be scheduled on different virtual machines in order to minimize makespan and increase system utilization. The task scheduling problem is NP-complete; hence finding an exact solution is intractable, especially for large task sizes [55].

Figure 3 :
Figure 3: Simplex method to search for different points.
The population size and maximum number of iterations of the eight algorithms are set to 50 and 1000, respectively. The number of current worst step leaders optimized by SM in LSA-SM is set to 10. The control parameters associated with the other algorithms are given below. LSA parameter settings: channel time = 10; forking rate = 0.01. The source code of LSA is publicly available at http://cn.mathworks.com/matlabcentral/fileexchange/54181-lightning-search-algorithm-lsa-.

Table 4 :
Results of unimodal benchmark functions.

Table 5 :
Results of multimodal benchmark functions.

Table 6 :
Results of fixed-dimension multimodal benchmark functions.

For functions F01-F11, LSA-SM attains higher convergence accuracy than the other algorithms. For functions F12-F14, the convergence curves of all eight algorithms eventually reach the global minimum. For function F15, FPA and GSA have better convergence values than LSA-SM, but the values do not differ much. For functions F16 and F17, LSA-SM and FPA eventually converge to the global optimum. For function F18, LSA-SM and GSA converge to the global optimal value. In addition, LSA-SM converges faster than the other algorithms in all convergence graphs. As shown in Figures 23-40, the standard deviation of LSA-SM is much smaller except on function F06, and the number of outliers is smaller than for the other algorithms. Moreover, the median of LSA-SM (indicated by a red line) is closer to the best solution on F06. This proves that LSA-SM is highly stable. In conclusion, the LSA-SM proposed in this paper has a fast convergence rate, high convergence accuracy, and strong stability.

Table 7 :
Statistical comparison between LSA-SM and the other seven algorithms.

Table 8 :
Best results of the spring design example by different methods. ES: evolution strategies; UPSO: unified particle swarm optimization; CPSO: coevolutionary particle swarm optimization; CDE: coevolutionary differential evolution; IHS: improved harmony search; MFO: moth-flame optimization algorithm; AFA: adaptive firefly algorithm; NA: no relevant data available.

Table 9 :
Best results of the cantilever beam design by different methods.

Table 10 :
Best results of the pressure vessel design by different methods.
TLBO: teaching-learning-based optimization; NA: no relevant data available.

Table 11 :
Best results of the welded beam design by different methods.
CSS: charged system search.
The optimal target value of LSA-SM is 263.8958, which is close to the optimization results of Ray and Liew, DEDS, PSO-DE, and MBA (263.8958466, 263.8958434, 263.8958433, and 263.8958522, resp.), and LSA-SM outperforms all the compared algorithms on the three-bar truss design problem.

Table 12 :
Best results of the three-bar truss design by different methods. a: a method to solve nonlinear fractional programming; b: a swarm strategy; SC: society and civilization; DEDS: differential evolution with dynamic stochastic selection; PSO-DE: hybridizing particle swarm optimization with differential evolution; MBA: mine blast algorithm.