Lifecycle-Based Swarm Optimization Method for Numerical Optimization

Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; moreover, the species gains a stronger ability to adapt to its environment and evolves further. LSO simulates the biological lifecycle through six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmarks include both unimodal and multimodal cases, which demonstrate optimization performance and stability, while the mechanical design problems test the algorithm's practicability. The results show that LSO performs remarkably well on all chosen benchmark functions compared with several successful optimization techniques.


Introduction
In nature, biological species are diverse, and an organism is any living thing (such as an animal, plant, or microorganism) [1]. Their behaviors reveal the biological features they possess. Some features are universal, such as foraging, reproduction, mutation, and metabolism, while the features of some organisms are unique and intelligent [2]. Ants possess division-of-labor and cooperation behaviors; bees have special skills in gathering honey; birds have a unique flight principle; bacterial flagella play a chemotactic role in movement. Biological features enable organisms to adapt to complex living environments in the best way and to survive long term in nature. Real-world optimization problems are similar to biological survival environments: both are complex. Therefore, to solve complex real-world problems, researchers have begun to mimic biological phenomena by defining sets of rules and realizing those rules on computers [3]. Such rules are called bioinspired optimization techniques.
All living organisms have a lifecycle, whether the commonest ants, butterflies, and goldfish around us, the uncommon Antarctic penguins and Arctic bears, ferocious beasts, or meek poultry. Although different organisms have different lifecycle lengths, they all undergo the process from birth to death. When an original life ends, a new life is generated. The biological evolution of nature follows the "cycle relay" pattern, a "life and death alternation" cyclic process. Repeated continuously, this process sustains endless life on earth, and biological evolution becomes ever more refined.
Inspired by the idea of the lifecycle, in 2002 Krink and Løvbjerg introduced a hybrid approach called the lifecycle model that simultaneously applies genetic algorithms (GAs), particle swarm optimization (PSO), and stochastic hill climbing to create a generally well-performing search heuristic [22]. In this model, the authors regard candidate solutions and their fitness as individuals which, based on their recent search progress, can decide to become either a GA individual, a particle of a PSO, or a single stochastic hill climber.
In 2008, Niu et al. proposed a lifecycle model (LCM) to simulate bacterial evolution from a finite population of Escherichia coli (E. coli) bacteria [23]. In this simulation study, bacterial behaviors during the whole life cycle (chemotaxis, reproduction, extinction, and migration) are viewed as evolutionary operators used to find the best nutrient concentration, which is labeled as a potential global solution of the optimization problem.
In 2011, borrowing biological lifecycle theory, the lifecycle-based swarm optimization (LSO) algorithm was proposed for the first time [24]. Subsequently, 7 unimodal unconstrained optimization test functions, constrained optimization test functions, and engineering problems including the vehicle routing problem (VRP) and the vehicle routing problem with time windows (VRPTW) were adopted to test the performance of LSO [24][25][26]. These experiments demonstrate that LSO is a competitive and effective approach. In order to evaluate the performance of LSO more thoroughly, this paper uses 23 unconstrained benchmark functions to study its effectiveness and stability.
The rest of this paper is organized as follows. Sections 2 and 3 describe the proposed lifecycle-based swarm optimization (LSO) technique. Sections 4 and 5 present and discuss computational results. The last section draws conclusions and gives directions for future work.

Lifecycle-Based Swarm Optimization
2.1. Chemotaxis Operator. Based on the current location, the next movement is toward better places. The optimal individual of the population selects this foraging strategy. Since the optimal forager in the current iteration possesses the greatest energy, it has the ability to seek, within the global search scope, a better location with more nutrient resources than the previous one. The seeking mode of the optimal individual differs from the migration method of nonoptimal individuals: it is not a simple migration or position move but a rather powerful foraging strategy, such as chaos search, in which better solutions are found directly using chaotic variables.
(1) The current optimization variable is denoted by x_0, and its fitness value is f(x_0).
(2) Generate m chaotic variables (z_1, z_2, ..., z_m) by the logistic mapping z_{k+1} = μ z_k (1 − z_k), with the control parameter μ = 4 (the fully chaotic regime).
(3) Transform the chaotic motion traverse range onto the optimization variable domain: x_k = x_lo + z_k (x_up − x_lo), where x_up and x_lo are the upper and lower boundaries of the search space.
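As a concrete illustration, the three chemotaxis steps above can be sketched in Python (a minimal sketch, not the authors' implementation; the number of chaotic probes n_chaos is an assumed parameter):

```python
import numpy as np

def chaos_search(x0, f, x_lo, x_up, n_chaos=100):
    """Chaos search around the current best x0 (a sketch; n_chaos is assumed).

    Generates chaotic variables with the logistic map z_{k+1} = 4 z_k (1 - z_k),
    maps them onto the search domain, and keeps the best point found.
    """
    best_x = np.asarray(x0, float)
    best_f = f(best_x)
    # Avoid the fixed/periodic points 0, 0.25, 0.5, 0.75, 1 of the logistic map.
    z = np.random.uniform(0.01, 0.99, size=best_x.shape)
    for _ in range(n_chaos):
        z = 4.0 * z * (1.0 - z)           # logistic map, mu = 4 (fully chaotic)
        x = x_lo + z * (x_up - x_lo)      # transform to the variable domain
        fx = f(x)
        if fx < best_f:                   # minimization assumed
            best_x, best_f = x, fx
    return best_x, best_f
```

The returned point is never worse than x_0, since candidates are accepted only when they improve the fitness.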

2.2. Transposition Operator. Individuals selecting the nonsocial foraging strategy migrate randomly within their own energy scope:

X_i^{t+1} = X_i^t + r_2 L_i,

where L_i is the migration distance of X_i, r_2 ∈ R^n is a normally distributed random vector with mean 0 and standard deviation 1, ub_i and lb_i are the search space boundaries of the i-th individual, and Δ is the range of the global search space.
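A minimal sketch of the transposition step (the exact form of the migration distance L_i is not fully recoverable from the text, so L_i = alpha * (ub_i - lb_i), with an assumed factor alpha, is used here):

```python
import numpy as np

def transposition(x_i, lb_i, ub_i, alpha=0.1):
    """Random migration within the individual's own energy scope (a sketch).

    The paper defines r_2 ~ N(0, 1) and a migration distance L_i tied to the
    individual's boundary [lb_i, ub_i] and the global range Delta; the exact
    form of L_i was lost, so L_i = alpha * (ub_i - lb_i) is assumed here.
    """
    r2 = np.random.randn(*np.shape(x_i))                # N(0, 1) random vector
    L = alpha * (np.asarray(ub_i) - np.asarray(lb_i))   # assumed migration distance
    return np.clip(np.asarray(x_i) + r2 * L, lb_i, ub_i)  # stay inside the boundary
```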

2.3. Assimilation Operator. Individuals selecting the social foraging strategy perform the assimilation operator. They gain resources directly from the optimal individual by taking a random step toward it:

X_i^{t+1} = X_i^t + r_1 ⊗ (X_best − X_i^t),

where r_1 ∈ R^n is a uniform random sequence in the range (0, 1), X_best is the best individual of the current population, X_i^t is the position of an individual performing the assimilation operator, and X_i^{t+1} is the next position of this individual.
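The assimilation step maps directly to code; this sketch assumes componentwise multiplication for r_1:

```python
import numpy as np

def assimilation(x_i, x_best):
    """Move toward the current best individual with a random step (a sketch).

    X_i^{t+1} = X_i^t + r_1 * (X_best - X_i^t), with r_1 uniform in (0, 1)
    componentwise, as described in the text.
    """
    x_i, x_best = np.asarray(x_i, float), np.asarray(x_best, float)
    r1 = np.random.uniform(0.0, 1.0, size=x_i.shape)
    return x_i + r1 * (x_best - x_i)
```

Each component of the result lies between the individual's position and the best individual's position, so assimilation always contracts the population toward the current best.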

2.4. Crossover Operator. In LSO, the crossover operator uses the single-point crossover method. One crossover point is selected; the string from the beginning of the individual to the crossover point is copied from one parent, and the rest is copied from the second parent.
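Single-point crossover can be sketched as follows (shown for real-valued vectors, matching LSO's continuous position encoding; a dimension D ≥ 2 is assumed):

```python
import numpy as np

def single_point_crossover(parent1, parent2):
    """Single-point crossover: the head of one parent is joined to the tail
    of the other (a sketch for real-valued vectors; assumes D >= 2)."""
    p1, p2 = np.asarray(parent1), np.asarray(parent2)
    point = np.random.randint(1, p1.size)              # crossover point in [1, D-1]
    child1 = np.concatenate([p1[:point], p2[point:]])  # head of p1, tail of p2
    child2 = np.concatenate([p2[:point], p1[point:]])  # head of p2, tail of p1
    return child1, child2
```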

2.5. Selection Operator. According to the "survival of the fittest" principle, and to keep the population size fixed, LSO retains some individuals and eliminates the others. The selection operator uses an elitist selection strategy: a number of individuals with the best fitness values are chosen to pass to the next generation.
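Elitist selection reduces to sorting by fitness and keeping the best individuals (minimization assumed):

```python
import numpy as np

def elitist_selection(population, fitness, n_keep):
    """Elitist selection: keep the n_keep individuals with the best (lowest)
    fitness to pass to the next generation (minimization assumed)."""
    order = np.argsort(fitness)          # ascending: best individuals first
    keep = order[:n_keep]
    return np.asarray(population)[keep], np.asarray(fitness)[keep]
```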

2.6. Mutation Operator. The mutation operator uses a dimension-mutation strategy. For every individual X_i = (x_{i1}, x_{i2}, ..., x_{iD}), one dimension, selected according to the mutation probability, is relocated in the search space:

x_{ij} = lb + rand · (ub − lb),

where lb and ub are the lower and upper boundaries of the search space, x_{ij} is the position of the j-th dimension of the i-th individual in the D-dimensional search space, and j ∈ [1, D].
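A sketch of dimension mutation (the mutation probability p_m is an assumed parameter; 0-based indexing replaces the paper's j ∈ [1, D]):

```python
import numpy as np

def dimension_mutation(x, lb, ub, p_m=0.1):
    """Dimension mutation: with probability p_m (an assumed rate), one randomly
    chosen dimension is relocated uniformly in [lb, ub]."""
    x = np.asarray(x, float).copy()
    if np.random.rand() < p_m:
        j = np.random.randint(x.size)              # pick one dimension
        x[j] = lb + np.random.rand() * (ub - lb)   # re-locate in the search space
    return x
```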

Algorithm Description
Lifecycle-based swarm optimization is a population-based search technique; it evaluates the fitness of all individuals and establishes an iterative process through the six operators proposed above. Each population is composed of a certain number of individuals and satisfies the clumped distribution. In each iteration, all individuals first select a foraging strategy and execute the corresponding foraging operator, based on their fitness values and randomly generated foraging probabilities; this is followed by the crossover, selection, and mutation operations. Finally, the next population, representing the new solutions, is generated. Although each optimization operation is random, the optimization behavior is not entirely random: the algorithm effectively uses historical information to infer the next solutions, which are likely to be closer to the optimum. This process is repeated from generation to generation and finally converges to the individual best adapted to the environment, yielding an optimal solution. Figure 1 shows the LSO flowchart.

In order to verify the efficiency of our approach on practical problems and to test the quality of LSO, mechanical design optimization problems were selected as test cases: the pressure vessel problem and the welded beam problem. These are hybrid (mixed discrete-continuous) optimization problems.

(1) Pressure Vessel. As shown in Figure 2, the pressure vessel is designed to minimize the total vessel weight. There are four design variables: the shell thickness T_s = x_1, the head thickness T_h = x_2, the inner radius R = x_3, and the length of the cylindrical section L = x_4. x_1 and x_2 are discrete values that are integer multiples of 0.0625 in, while x_3 and x_4 are continuous. The pressure vessel problem is stated as follows: Minimize:

Experiment Settings
4.2. Settings for Involved Algorithms. We compared the optimization performance of LSO with two well-known algorithms: the standard PSO and the standard GA. In 2006, He et al. proposed the group search optimizer (GSO), inspired by the scrounging strategies of house sparrows and employing animal scanning mechanisms [28]. GSO outperformed GA, PSO, EP, and ES on the 23 benchmark functions used in this paper; therefore, LSO was also compared with GSO.
The parameter settings of every algorithm were manually tuned. Each experiment was repeated for 30 runs, and the maximum number of iterations in a run was 3000. In every run, to make the comparison fair, the initial populations of all considered algorithms were generated from the same population, which satisfied the normal distribution. The population size was N = 50 for all algorithms. The other specific settings for each algorithm are described below.
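Putting the six operators of Section 2 together with a clumped initial population, the overall iteration can be sketched as a minimal LSO-style loop (all probabilities, step sizes, and the clumped-initialization parameters below are assumptions, not the paper's tuned values):

```python
import numpy as np

def lso_minimize(f, lb, ub, n_pop=50, n_iter=200, p_social=0.5, p_m=0.1, seed=0):
    """A minimal LSO-style loop (a sketch; parameter values are assumptions).

    Per iteration: the best individual performs a chaotic probe (chemotaxis);
    the rest forage socially (assimilation) or nonsocially (transposition);
    then single-point crossover, elitist selection over parents plus
    offspring, and dimension mutation are applied.
    """
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    # Clumped initialization: individuals cluster around a few random centres.
    centres = rng.uniform(lb, ub, size=(5, dim))
    pop = centres[rng.integers(5, size=n_pop)]
    pop = np.clip(pop + 0.05 * (ub - lb) * rng.standard_normal((n_pop, dim)), lb, ub)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_iter):
        b = int(np.argmin(fit))
        best = pop[b].copy()
        new = pop.copy()
        for i in range(n_pop):
            if i == b:                              # chemotaxis: chaotic probe
                z = rng.uniform(0.01, 0.99, dim)
                z = 4.0 * z * (1.0 - z)             # one logistic-map step
                cand = lb + z * (ub - lb)
                if f(cand) < fit[i]:
                    new[i] = cand
            elif rng.random() < p_social:           # assimilation: toward best
                new[i] = pop[i] + rng.random(dim) * (best - pop[i])
            else:                                   # transposition: random move
                step = 0.1 * (ub - lb) * rng.standard_normal(dim)
                new[i] = np.clip(pop[i] + step, lb, ub)
        # Single-point crossover of one random pair of offspring.
        i, j = rng.integers(n_pop, size=2)
        c = int(rng.integers(1, dim)) if dim > 1 else 0
        new[i, :c], new[j, :c] = new[j, :c].copy(), new[i, :c].copy()
        # Elitist selection over parents + offspring keeps the population size.
        merged = np.vstack([pop, new])
        mfit = np.array([f(x) for x in merged])
        order = np.argsort(mfit)[:n_pop]
        pop, fit = merged[order], mfit[order]
        # Dimension mutation (the elite at index 0 is left untouched).
        for i in range(1, n_pop):
            if rng.random() < p_m:
                d = int(rng.integers(dim))
                pop[i, d] = rng.uniform(lb[d], ub[d])
                fit[i] = f(pop[i])
    k = int(np.argmin(fit))
    return pop[k], fit[k]
```

Because selection is elitist over the merged parent and offspring pools, the best fitness value is monotonically nonincreasing over iterations.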

Results and Discussion
Many experimental results from published research papers have shown that PSO and GA can find the optimum of some functions, but in this paper they become powerless. The cause is that the initial population is generated by the clumped distribution method rather than the random distribution method. In a sense, the clumped distribution is a special form of random distribution. But, as stated before, a random distribution is rare in reality, and the clumped distribution is the most common. Therefore, optimum solutions obtained from randomly distributed initial populations cannot illustrate how algorithms perform on real, complex optimization problems. Generally speaking, the unimodal benchmark functions f_1 to f_7 are relatively easy to optimize. They are mainly used to test the convergence rate of an algorithm, and solution accuracy is not the major issue. On these functions, LSO has the best optimization accuracy and the fastest convergence rate; it converges exponentially fast toward the fitness optimum. This is illustrated by Figure 4, which shows the progress of the mean best solutions found by these algorithms over 30 runs for all unimodal functions except f_5. From Figures 4(a) to 4(f), it can be seen that LSO has the best convergence speed, followed by GSO, GA, and PSO. From the beginning of the iterations, LSO's convergence curve declines rapidly; by the 500th iteration, LSO has found the optimum solution, and with increasing iterations it continues to approach the optimum at a fast rate. The convergence curves of the other algorithms are either much slower or nearly horizontal and appear to stagnate.

Multimodal Functions with Few Local Minima f_14 to f_23
Functions f_14 to f_23 are multimodal functions with few local minima and possess rather unique features, which can verify the adaptability of algorithms to different optimization environments. Table 3 presents the optimization results for functions f_14 to f_23. It can be concluded from Table 3 that the order of the search performance of the four algorithms is LSO > GSO > PSO > GA.
For these ten functions, in terms of the tested indicators, LSO ranked first on functions f_15, f_17, f_21, f_22, and f_23. For example, the problem f_21, shown in Figure 6(a), has five extremes; the bottom of the deepest hole is the global optimum, and the other holes are deceptive. Figure 6(b) shows the convergence results of the four algorithms. All algorithms converge quickly in the early iterations. However, GSO, PSO, and GA stagnate before finding the global optimum, while LSO does not stagnate until it finds it. At the beginning of the search there are a number of promising "fox holes," so the convergence rate of all algorithms is fast. But after a short period, lacking the ability to jump out of local extremes, the solutions obtained by GSO, PSO, and GA fall deep into the "fox holes," and their evolutionary curves level off. The optimization tactics of LSO allow it to escape from the deceptive regions and migrate toward the global one. The properties of functions f_22 and f_23 are similar to those of f_21, and Figure 7 shows the same convergence behavior on them. Functions f_14 and f_16 are easy problems, and all algorithms find the exact optimum solutions. On functions f_18 and f_19, LSO, GSO, and PSO yield the exact optimum, while GA yields an approximate optimum. All algorithms come very close to the global optimum on f_20. Figures 8(a) and 8(c) show the convergence results on functions f_14 and f_18, and Figures 8(b) and 8(d) show that LSO has the fastest convergence speed.

Mechanical Design Optimization Problem.
Mechanical design optimization plays an important role in engineering and manufacturing enterprises. One of the most difficult parts of this field is handling constraints and mixed optimization variables. On these two test problems, the best feasible values found by the proposed algorithm are 6059.72 and 1.7107, respectively. The best feasible solutions found by our approach are better than the solutions found by other techniques reported in the literature [29]. In addition, the standard LSO employs a penalty function to preserve the feasibility of encountered solutions. This method is relatively simple compared with other approaches introduced to solve constrained problems, such as multiobjective evolutionary methods, the collaborative evolutionary particle swarm optimization algorithm, dynamic penalty function methods, annealing penalty function methods, information-feedback adaptive penalty function methods, the multilayer social culture algorithm, and the particle swarm algorithm combining global and local topologies.
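The penalty-function approach mentioned above can be sketched as a static penalty wrapper (the penalty coefficient rho and the quadratic form are assumptions; the paper does not give the exact penalty it used):

```python
import numpy as np

def penalized(f, constraints, rho=1e6):
    """Static penalty wrapper (a sketch; rho and the quadratic form are
    assumptions). `constraints` is a list of functions g with the
    convention g(x) <= 0 for feasible x."""
    def fp(x):
        viol = np.array([max(0.0, g(x)) for g in constraints])
        return f(x) + rho * np.sum(viol ** 2)  # feasible points pay no penalty
    return fp
```

A feasible point incurs no penalty, so the wrapped objective agrees with the original objective on the feasible region and can be handed to any unconstrained optimizer such as LSO.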

Conclusions
This paper proposed a novel optimization algorithm, LSO, based on biological lifecycle theory. Based on the features of the lifecycle, LSO designs six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. The population is the basic unit of biological existence, and the clumped spatial distribution of a population is the most common pattern; this paper borrows the clumped distribution pattern to generate the initial population. A set of 23 unconstrained benchmark functions and mechanical design optimization problems were used to test LSO in comparison with GSO, PSO, and GA. It is worth mentioning that LSO cannot find the optimum of function f_5. Function f_5 is nonconvex; its global minimum lies inside a long, narrow, parabolic flat valley. Even though the valley is easy to find, convergence to the global minimum is difficult. Our future work will therefore study how to give LSO the ability to move quickly along the narrow valley toward the objective function minimum, for instance by incorporating a gradient-based method in the late stage of optimization.
As part of our future work, LSO could also be studied and tested on real-world problems, such as facility location in manufacturing systems, network routing in computer engineering, parameter identification in industrial engineering, and problems in electrical, aerospace, and biological engineering.

0.0625 ≤ x_1, x_2 ≤ 6.1875, 10 ≤ x_3, x_4 ≤ 200.

(2) Welded Beam. As shown in Figure 3, the welded beam problem is designed to minimize the total cost of the welded beam materials. There are four design variables: the weld thickness h = x_1, the weld joint length l = x_2, the width of the beam t = x_3, and the thickness of the beam b = x_4. x_1 and x_2 are discrete values that are integer multiples of 0.0625 in, while x_3 and x_4 are continuous. The welded beam problem is stated as follows:

Figure 6: Function f_21. (a) Function graph with a dimension of 2 and (b) convergence results of all algorithms.
To fully evaluate the performance of the LSO algorithm without bias, we employed 23 benchmark functions that have been widely used in the evolutionary computation domain to assess solution quality and convergence rate [27]. These test functions are listed in the Appendix. Among them, functions f_1 to f_7 are unimodal, functions f_8 to f_13 are multimodal with many local minima, and functions f_14 to f_23 are multimodal with few local minima.
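For reference, representative members of this suite can be written as follows (a sketch assuming f_1 is the sphere function and f_5 the Rosenbrock function, consistent with the standard suite of [27] and the narrow-valley description in the conclusions; the exact forms should be read from the Appendix):

```python
import numpy as np

def sphere(x):      # assumed f1: unimodal, minimum 0 at the origin
    return float(np.sum(np.asarray(x, float) ** 2))

def rosenbrock(x):  # assumed f5: narrow parabolic valley, minimum 0 at (1, ..., 1)
    x = np.asarray(x, float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

def rastrigin(x):   # typical multimodal case with many local minima, minimum 0 at the origin
    x = np.asarray(x, float)
    return float(10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x)))
```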

Table 1: Results for all algorithms on benchmark functions f_1 to f_7.
Unimodal Functions f_1 to f_7. Table 1 presents the optimization results for unimodal functions f_1 to f_7 obtained by all algorithms. Obviously, LSO performs best and finds the global optimum or a very near optimum in all cases except function f_5. Functions f_1 to f_4 show a consistent performance pattern across all algorithms: LSO is the best, GSO is nearly as good, and PSO and GA fail. Function f_6 is the step function; it consists of plateaus, has one minimum, and is discontinuous. Finding the optimum solution is easy for LSO and GSO but difficult for PSO. Function f_7 is a noisy quartic function, where random[0, 1) is a uniformly distributed random variable in [0, 1). On this function, LSO can find the exact optimum, whereas the other algorithms cannot.

Table 2: Results for all algorithms on benchmark functions f_8 to f_13.

Table 3: Results for all algorithms on benchmark functions f_14 to f_23.