Chicken Swarm Optimization Based on Elite Opposition-Based Learning

Chicken swarm optimization is a recent bio-inspired intelligent algorithm that simulates a chicken swarm searching for food in nature. The basic algorithm is prone to falling into local optima and converges slowly. To address these deficiencies, an improved chicken swarm optimization algorithm based on elite opposition-based learning is proposed. In the cock swarm, random search based on an adaptive t distribution replaces the search based on the Gaussian distribution, so as to balance the global exploration ability and local exploitation ability of the algorithm. In the hen swarm, elite opposition-based learning is introduced to promote population diversity. A greedy dimension-by-dimension search is applied to the globally best individual of the swarm as a local search, in order to improve optimization precision. Test results on 18 standard test functions and 2 engineering structure optimization problems show that the algorithm outperforms the basic chicken swarm algorithm and other intelligent optimization algorithms in both optimization precision and speed.


Introduction
Many problems in scientific computing, engineering science, and business management can be reduced to global optimization problems. The time consumed by traditional exact computation approaches for solving large-scale optimization problems grows exponentially, so such approaches cannot meet practical requirements. To solve these problems, many scholars, by simulating the life habits of creatures in nature, have presented swarm intelligence optimization algorithms, including particle swarm optimization (PSO) [1], ant colony optimization (ACO) [2], artificial bee colony (ABC) [3], invasive weed optimization (IWO) [4], the firefly algorithm (FA) [5,6], the cuckoo search (CS) algorithm [7,8], the fish swarm algorithm (FSA) [9], the bat algorithm (BA) [10,11], the monkey algorithm (MA) [12], the krill herd (KH) algorithm, and the flower pollination algorithm (FPA) [13]; all of these have obtained favorable results. As a burgeoning class of metaheuristics, swarm intelligence algorithms offer high precision, fast convergence, and good stability, and can obtain exact or approximate solutions of large-scale optimization problems within limited time.
Chicken swarm optimization (CSO) is a bio-inspired intelligent algorithm proposed by Meng et al. [14] in 2014, which simulates the hierarchy of a chicken swarm and its food-searching behavior. The whole swarm is divided into a cock swarm, a hen swarm, and a chick swarm. The chickens with the highest and the lowest fitness values are taken as the cock swarm and the chick swarm, respectively, and the rest form the hen swarm. When solving an optimization problem, each chicken in the swarm corresponds to a solution, and a different search strategy is adopted for each subswarm. Compared with standard particle swarm optimization, differential evolution, and the bat algorithm, chicken swarm optimization has advantages in both search precision and convergence rate [14,15]. In the basic algorithm, a random search strategy based on the Gaussian distribution is adopted for the cock particles. This strategy has a strong local exploitation ability, but its global exploration ability is weak, making it liable to fall into local optima. A hen particle searches under the joint guidance of cocks of its own subswarm and of other subswarms, which helps it move toward the global optimum. However, when most cock particles (of its own subswarm and of other subswarms) are trapped in a local optimum, population diversity is lost and the hen particles fall into the same local optimum. Consequently, the basic chicken swarm algorithm may fail to obtain the global optimal solution, and its search performance suffers.
In this paper, a chicken swarm optimization algorithm based on elite opposition-based learning (EOCSO) is presented to solve global optimization problems. A random search strategy based on a dynamic adaptive t distribution is adopted for the cock swarm to replace the random search based on the Gaussian distribution, balancing the local exploitation ability and global exploration ability. To improve the optimization precision and convergence rate of the algorithm, an elite opposition-based learning method is used to improve the population diversity of the hen swarm, and a greedy dimension-by-dimension search is applied to the globally best individual for local search. Experiments on 18 standard test functions and 2 engineering structure optimization problems compare the improved chicken swarm algorithm with the basic chicken swarm algorithm and other typical intelligent algorithms, showing that the improved algorithm has better optimization precision, convergence rate, and robustness.

Chicken Swarm Algorithm
Chicken swarm optimization is a bio-inspired intelligent algorithm modeled on the behaviors of cocks, hens, and chicks in the process of searching for food. In the algorithm, the chickens are mapped to particles in the search space. The cock, hen, and chick particle swarms are sorted according to the particles' fitness values, and each subswarm uses a different search mode [16,17].
In this algorithm, the several particles with the best fitness are selected as the cock swarm, whose positions are updated by

x_{i,j}^{t+1} = x_{i,j}^{t} + randn(0, σ²) · x_{i,j}^{t},

where x_{i,j}^{t+1} and x_{i,j}^{t} are the position of the jth dimension of particle i in iterations t+1 and t, respectively, and randn(0, σ²) is a Gaussian random number with variance σ². The parameter σ² is calculated by

σ² = 1, if fit_i ≤ fit_k,
σ² = exp((fit_k − fit_i) / (|fit_i| + ε)), otherwise,

where i, k ∈ [1, rsize] and k ≠ i, rsize is the number of cocks, fit_i and fit_k are the fitness values of cock particles i and k, respectively, and ε is a number small enough to avoid division by zero.
Moreover, most of the remaining particles, those with good fitness, are selected as the hen swarm. A hen searches under the joint guidance of the cock of its own subswarm and a cock of another subswarm:

x_{i,j}^{t+1} = x_{i,j}^{t} + S1 · rand · (x_{r1,j}^{t} − x_{i,j}^{t}) + S2 · rand · (x_{r2,j}^{t} − x_{i,j}^{t}),

where x_{r1,j}^{t} and x_{r2,j}^{t} are the positions of cock r1 of hen i's own subswarm and cock r2 of another subswarm, respectively, and rand is a uniform random number over [0, 1]. S1 and S2 denote weights calculated by

S1 = exp((fit_i − fit_{r1}) / (|fit_i| + ε)),
S2 = exp(fit_{r2} − fit_i),

where fit_{r1} and fit_{r2} are, respectively, the fitness values of cock r1 of hen i's own subswarm and cock r2 of the other subswarm. All individuals other than the cock and hen swarms are defined as the chick swarm. Each chick follows its mother hen:

x_{i,j}^{t+1} = x_{i,j}^{t} + FL · (x_{m,j}^{t} − x_{i,j}^{t}),

where FL is a parameter expressing that the chick follows its mother to forage for food, and x_{m,j}^{t} is the position of the ith chick's mother hen (m ∈ [1, hsize], indexing the hens).
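The three update rules above can be sketched in Python. This is a minimal illustration: the function names and the random seed are ours, and the ε guard and the common choice FL ∈ [0, 2] follow the standard CSO formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rooster_update(x_i, fit_i, fit_k, eps=1e-50):
    """Cock move: x <- x + randn(0, sigma^2) * x, per dimension."""
    if fit_i <= fit_k:
        sigma2 = 1.0
    else:
        sigma2 = np.exp((fit_k - fit_i) / (abs(fit_i) + eps))
    return x_i + rng.normal(0.0, np.sqrt(sigma2), size=x_i.shape) * x_i

def hen_update(x_i, fit_i, x_r1, fit_r1, x_r2, fit_r2, eps=1e-50):
    """Hen move guided by its own group's cock (r1) and a cock of another group (r2)."""
    s1 = np.exp((fit_i - fit_r1) / (abs(fit_i) + eps))
    s2 = np.exp(fit_r2 - fit_i)
    return (x_i
            + s1 * rng.random(x_i.shape) * (x_r1 - x_i)
            + s2 * rng.random(x_i.shape) * (x_r2 - x_i))

def chick_update(x_i, x_m, fl=None):
    """Chick follows its mother hen; FL is commonly drawn from [0, 2]."""
    if fl is None:
        fl = rng.uniform(0.0, 2.0)
    return x_i + fl * (x_m - x_i)
```

Note that for a cock that is already better than its reference (fit_i ≤ fit_k) the variance stays at 1, while a worse cock gets a sharply shrinking variance, which is exactly what makes the Gaussian rule exploitation-heavy.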

Chicken Swarm Optimization Based on Elite Opposition-Based Learning

The t distribution, also called Student's distribution [18], has a degree-of-freedom parameter n; its probability density function is

p(x) = Γ((n + 1)/2) / (√(nπ) · Γ(n/2)) · (1 + x²/n)^(−(n+1)/2),

where Γ is the Gamma function, satisfying Γ(n + 1/2) = (2n)!√π / (n! · 4ⁿ). Apparently, the t distribution reduces to the Cauchy density when the degree-of-freedom parameter n = 1 (t(n = 1) = C(0, 1)) and tends to the Gaussian density as n → ∞ (t(n → ∞) → N(0, 1)). Thus, the Cauchy distribution and the Gaussian distribution are two special boundary cases of the t distribution. The t distribution integrates the characteristics of both, and different mutation ranges can be obtained by changing the degree-of-freedom parameter n.
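The limiting behavior described above is easy to check numerically. The helper below, a verification sketch rather than part of the paper's algorithm, evaluates the stated density via log-Gamma (to avoid overflow for large n) and compares it with the Cauchy and Gaussian densities.

```python
import math

def t_pdf(x, n):
    """Student t density with n degrees of freedom, via log-Gamma for stability."""
    c = math.exp(math.lgamma((n + 1) / 2) - math.lgamma(n / 2)) / math.sqrt(n * math.pi)
    return c * (1 + x * x / n) ** (-(n + 1) / 2)

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1 + x * x))

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# n = 1 reproduces the Cauchy density exactly;
# a very large n approaches the Gaussian density.
```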
In the proposed algorithm, a random search based on the dynamic adaptive t distribution is applied to the position of each cock particle X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}):

x'_{i,j} = x_{i,j} + ψ_j · t(iter) · x_{i,j},

where x'_{i,j} is the position of the jth dimension of cock particle i after the random search, x_{i,j} is the current position of the jth dimension of the ith cock particle, t(iter) is a random number drawn from the t distribution with the iteration counter iter as its degree of freedom, and ψ_j is the mutation control factor. Each dimension behaves differently when searching a multidimensional objective function, so a mutation control factor that applies the same control strategy to every dimension obviously cannot meet the requirements of the algorithm. In this paper, a dynamic adaptive mutation control factor is used to adjust the mutation range:

ψ_j = k · | (1/N) · Σ_{i=1}^{N} x_{i,j} − x_{best,j} |,

where k is a proportionality coefficient, N is the number of cocks, (1/N) · Σ_{i=1}^{N} x_{i,j} is the mean value of the jth dimension over the cock particles in the current iteration, and x_{best,j} is the jth dimension of the optimal cock position in the current iteration.
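One cock-swarm move under the dynamic adaptive t distribution can be sketched as follows. The exact way ψ_j, t(iter), and the position combine is reconstructed from the wording above, so treat the form as an assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

def adaptive_t_search(roosters, x_best, t_iter, k=1.0):
    """One dynamic adaptive t-distribution move for every cock.

    roosters: (N, D) positions; x_best: (D,) best position this iteration;
    t_iter: iteration counter, used as the degrees of freedom;
    k: proportionality coefficient. The per-dimension control factor is
    psi_j = k * |mean_j(roosters) - x_best_j|.
    """
    psi = k * np.abs(roosters.mean(axis=0) - x_best)        # (D,)
    step = rng.standard_t(df=t_iter, size=roosters.shape)   # t-distributed noise
    return roosters + psi * step * roosters
```

Early on (small t_iter) the heavy Cauchy-like tails produce occasional long jumps; as t_iter grows, the noise becomes Gaussian-like, which matches the exploration-to-exploitation transition described in the text.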
In the random search based on the dynamic adaptive t distribution, the iteration counter iter serves as the degree-of-freedom parameter, and the dynamic adaptive mutation control factor adjusts the search range. In the early stage of evolution, iter is small, so the search behaves like a Cauchy-distributed search and maintains population diversity, giving the algorithm good global exploration ability. As iter increases, the t-distributed random search transits gradually from Cauchy-like to Gaussian-like behavior, so that in the later stage it resembles a Gaussian search with good local exploitation ability. Elite opposition-based learning, proposed by Tizhoosh [19], is a relatively new technique in intelligent computing and has been successfully applied to many intelligent optimization algorithms [20-22]. As theoretically verified by Zhong and others [23], opposition-based learning can, with higher probability, acquire a solution close to the global optimum. In this paper, elite opposition-based learning is adopted for the hen swarm to enlarge the search range and enhance population diversity, thereby improving the search performance of the algorithm. The basic ideas are as follows.

Elite Opposition-Based Learning.
Let X_best^t = (x_{best,1}, x_{best,2}, ..., x_{best,D}) be the optimal cock position in the tth iteration. Its elite opposite position is defined as

op x_{best,j} = k · (da_j + db_j) − x_{best,j},

where x_{best,j} ∈ [da_j, db_j], k is a (0, 1) uniformly distributed random coefficient, and [da_j, db_j] is the boundary of the jth dimension of the search space in the current population, obtained dynamically as

da_j = min_i (x_{i,j}),  db_j = max_i (x_{i,j}),

where i ∈ {1, 2, ..., psize} and j ∈ {1, 2, ..., D}. This dynamic boundary replaces the fixed boundary, which ensures that the cocks generate opposite solutions in a contracted search range. However, the opposite solution may go beyond the range [da_j, db_j] and form an infeasible solution. The traditional remedy is to reset an out-of-border searching particle to the boundary value; as a result, numerous searchers gather on the border. To avoid this problem, a border buffering wall is used for the handling, and its basic ideas are as follows.
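A minimal sketch of the elite opposite solution with dynamic, population-derived bounds (function name is ours; k is exposed as a parameter so the deterministic cases can be checked):

```python
import numpy as np

rng = np.random.default_rng(2)

def elite_opposite(x_best, population, k=None):
    """Elite opposite of x_best: op_j = k * (da_j + db_j) - x_best_j,
    with da_j/db_j the current min/max of dimension j over the population
    and k ~ U(0, 1) when not supplied."""
    da = population.min(axis=0)
    db = population.max(axis=0)
    if k is None:
        k = rng.random()
    return k * (da + db) - x_best
```

Because da and db shrink as the population contracts, the opposite solution is generated in a progressively smaller range, which is the point of replacing the fixed boundary.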
Suppose that ub_j is the upper bound of the jth dimension of the search range in the population and lb_j is the lower bound. If the opposite solution of the optimal individual, op x_{best,j}, crosses a boundary, the border buffering wall places it inside a buffer zone adjacent to the violated boundary, chosen dynamically according to the practical searching conditions; this better overcomes the deficiencies brought by the traditional handling. The search mode of the hen swarm using elite opposition-based learning can then be expressed as

x_{i,j}^{t+1} = w1 · x_{i,j}^{t} + w2 · rand · (op x_{best,j}^{t} − x_{i,j}^{t}) + w3 · rand · (x_{r2,j}^{t} − x_{i,j}^{t}),

where w1, w2, and w3 are weights with w3 = w2, rand is a uniform random number over [0, 1], and op x_{best,j}^{t} is the opposite solution of the optimal particle of the subswarm of hen particle i in the tth iteration.

The main loop of the resulting algorithm (Algorithm 2) is:
(6) Sort all population individuals according to their fitness.
(7) Divide the population into three subpopulations (the rooster, hen, and chick populations) according to the sorting criteria, and establish the relationship between each chick and its mother hen.
(8) end
(9) Update the rooster population according to Eq. (7) and calculate its fitness.
(10) Update the hen population according to Eq. (12) and calculate its fitness.
(11) Update the chick population according to Eq. (5) and calculate its fitness.
(12) Update the personal best positions p_i* and the global optimal position g_best.
(13) Perform a local search for the global optimal individual according to Section 3.3.
(14) end
(15) Output the best solution g_best.
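The paper's exact buffering formula is not recoverable from this extract; the sketch below implements only the stated idea, namely that out-of-range components land at a random point inside a thin buffer next to the violated boundary rather than on the boundary itself. The buffer width is our assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def buffer_wall(value, lb, ub, width=0.1):
    """Map out-of-range components into a random buffer zone just inside the
    violated boundary, instead of clamping them onto the boundary value."""
    span = width * (ub - lb)                 # buffer thickness per dimension
    low = value < lb
    high = value > ub
    value = np.where(low, lb + rng.random(value.shape) * span, value)
    value = np.where(high, ub - rng.random(value.shape) * span, value)
    return value
```

Scattering violators over a buffer zone avoids the clustering-on-the-border effect that plain clamping produces.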

Local Search via Greedy Dimension-by-Dimension Search.
In the basic chicken swarm optimization algorithm, the cock swarm, the hen swarm, and the chick swarm use different random search modes to acquire comparatively high-quality solutions and a good convergence rate. However, all three subswarms are assessed by a whole-vector (overall upgrading) assessment method.
For high-dimensional global optimization problems, the overall upgrading assessment method degrades solution quality and convergence rate because of interference among the dimensions [24]. Suppose the objective function is the sphere test function f(X) = Σ_{j=1}^{3} x_j² and the current solution is X = (0.2, 0.2, 0.2), so f(X) = 0.12. If, in an iteration, the solution is updated as a whole to X' = (0.1, 0.3, 0.4), then f(X') = 0.26, and X' will be abandoned by chicken swarm optimization because f(X') > f(X). The overall upgrading assessment thus slows convergence: the improvement contributed by the first dimension is discarded because the second and third dimensions deteriorated.
The greedy dimension-by-dimension search mode in this paper makes full use of the updated result of each dimension. The basic idea is as follows. (1) Suppose x'_{i,j} is the updated value of the jth dimension of chicken swarm particle X_i = (x_{i,1}, x_{i,2}, ..., x_{i,j}, ..., x_{i,D}), whose fitness is fit(X_i). Then x'_{i,j} and the pre-update values of the other dimensions of X_i are combined into a new particle Y = (x_{i,1}, x_{i,2}, ..., x'_{i,j}, ..., x_{i,D}).
(2) Calculate its fitness value fit(Y). If fit(Y) < fit(X_i), the updated value is kept; otherwise, it is abandoned. The update of the (j+1)th dimension is then conducted. The procedure is shown in Algorithm 1.
To ensure that each dimension of a chicken particle updates with a relatively large step length in the initial period of iteration and a small step length in the later period, the dynamic adaptive step-length update marked (*) in Algorithm 1 is adopted, further improving the algorithm's convergence rate and solution quality.
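The sphere example above can be reproduced with a short sketch of the dimension-by-dimension rule. The adaptive step-length part of Algorithm 1 is omitted here; the candidate vector stands for the already-updated values.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def greedy_dimwise(x, candidate, f=sphere):
    """Accept each dimension's update only if it improves fitness on its own."""
    x = x.copy()
    best = f(x)
    for j in range(len(x)):
        trial = x.copy()
        trial[j] = candidate[j]
        ft = f(trial)
        if ft < best:       # keep the jth update only when it helps
            x, best = trial, ft
    return x, best
```

On the paper's numbers, the whole-vector update (0.1, 0.3, 0.4) would be rejected (f = 0.26 > 0.12), while the dimension-wise rule keeps the useful first coordinate and reaches (0.1, 0.2, 0.2) with f = 0.09.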

Algorithm Flow.
Based on Sections 3.1-3.3, the procedure of the improved chicken swarm optimization algorithm is shown in Algorithm 2.

Parameter Setting.
To test the algorithm's performance, seven typical intelligent algorithms, namely the bat algorithm (BA) [10,11], particle swarm optimization (PSO) [1], differential evolution (DE) [25], ant colony optimization (ACO) [2], the cuckoo search (CS) algorithm [7], the flower pollination algorithm (FPA) [13], and chicken swarm optimization (CSO) [14], are selected for contrast experiments with the EOCSO algorithm. The parameters of EOCSO are set as follows: the population size is 100; the cock swarm and the chick swarm each occupy 15% of the population, while the hen swarm takes up 70%. All common parameters of the algorithms (including population size, dimensionality, and maximum number of generations) are set identically for a fair comparison. Other parameters are described in Section 3. The parameter settings of the other algorithms are shown in Table 1.

Test Function.
To verify the optimization precision and convergence rate of the proposed algorithm, 18 standard test functions from the literature [26,27] are selected.

Impact Analysis of Control Factor 𝜓.
With the other parameters held invariant, the control factor ψ is set to 0.2, 0.5, 0.8, and the dynamic adaptive scheme in turn for the Gaussian distribution algorithm. The test function parameters are set the same as in Section 4.1, and the test results are assessed by the optimal value, mean value, worst value, and standard variance. Table 3 shows the statistical results for the different ψ on the test functions.
As can be seen from Table 3, the search effect is better when ψ is smaller, but a fixed ψ is not ideal for improving the performance of the Gaussian distribution algorithm. With the dynamic adaptive parameter setting, the algorithm maintains a large search step size in the initial search stage; in the later stage, the step size governed by the control factor ψ is reduced, while the local search ability is enhanced. The test results prove that the dynamic adaptive control factor has more advantages than a single fixed control factor.

Comparison of ATD-CSO, AGD-CSO, and O-CSO.
Table 4 shows the statistical results of the dynamic adaptive t distribution (ATD-CSO), the dynamic adaptive Gaussian distribution (AGD-CSO), and the original CSO algorithm (O-CSO) on the three kinds of test function. As the data shows, in terms of optimal values, ATD-CSO improves on AGD-CSO by 6 and 4 orders of magnitude for f_3 and f_13, respectively. For function f_5, the search precision of ATD-CSO is higher than that of AGD-CSO. For f_11, f_15, and f_17, the optimal solution is found by both ATD-CSO and AGD-CSO. However, the convergence rate of O-CSO is slower than that of ATD-CSO and AGD-CSO. Therefore, ATD-CSO generally has advantages in solution precision, robustness, and convergence rate.
Elite opposition-based learning improves the diversity of the population; as a result, ATD-EO-CSO is superior in solution precision. The test statistics of functions f_1 ∼ f_7 are shown in Table 7. As can be seen from Table 7, the best value, mean value, standard variance, and worst value of EOCSO are all superior to those of the other intelligent algorithms (including BA, PSO, DE, ACO, CS, FPA, and CSO). In particular, the global minimum of the test function f_5(x) lies at the bottom of a parabolic valley, so the search is quite liable to fall into a local optimum. The optimal result of EOCSO is 9.53224e−09, which is 8, 11, 10, 10, 7, 3, and 10 orders of magnitude better than that of BA, PSO, DE, ACO, CS, FPA, and CSO, respectively. The test statistics of functions f_8 ∼ f_13 show that the EOCSO algorithm obtains the global optimal extreme value for f_8, f_9, and f_11, free from the interference of local extrema. Among the other algorithms, only CSO obtains the global optimum when searching f_9 and f_11. For f_10, the results of EOCSO, CSO, and CS are almost equivalent; the original CSO is already suitable for this function, so the improvement after optimization is not obvious. With regard to f_12 ∼ f_13, EOCSO does better than the other algorithms in the precision and standard variance of its solution, which reflects the advantage of EOCSO. In Table 9,
the test statistics of the low-dimensional multimodal functions f_14 ∼ f_18 are described. EOCSO and the seven other algorithms are all able to reach the global optimum in the search process, but EOCSO has clear advantages in its mean value, worst value, and standard variance. In the convergence graphs, "Error" on the y-axis is the error between the actual global minimum and the converged minimum. The convergence rate of EOCSO on all 18 functions is better than that of the other algorithms, especially for f_1 ∼ f_4, f_6, and f_8 ∼ f_13; its convergence curves are quite smooth and drop rapidly, reflecting its good convergence rate.
The best results of the various methods for solving the speed reducer design problem are shown in Table 10, and Table 11 shows the statistical results of the different methods.
It can be seen from Table 10 that the optimal solution of the EOCSO algorithm is better than those of the other 4 algorithms. From Table 11, the optimal value, the average value, and the worst value of EOCSO are better than those of the other 4 algorithms. In terms of stability, the standard deviation of EOCSO is smaller than those of the MBA and HEAA algorithms by 6 and 4 orders of magnitude, respectively.
Many different numerical optimization techniques have previously been applied to solve this problem, such as a new modification of the bat algorithm (EBA) [35], the cuckoo search (CS) algorithm [7], the interior search algorithm (ISA) [36], the mine blast algorithm (MBA) [32], an improved ant colony optimization (IACO) [37], an effective hybrid cuckoo search algorithm for constrained global optimization (HCS-LSAL) [38], a hybrid of Nelder-Mead simplex search and particle swarm optimization (NM-PSO) [39], an improved accelerated PSO algorithm [40], and the crow search algorithm (CSA) [41]. The best solutions obtained by the various methods are reported in Table 12, and Table 13 shows the statistical results.

Mathematical Problems in Engineering
It is shown in Tables 12 and 13 that the performance of EOCSO on the pressure vessel design problem surpassed the other 10 methods in terms of the minimum obtained value, the solution average, and the standard deviation.

Conclusion
In this paper, an improved chicken swarm algorithm based on elite opposition-based learning is proposed to overcome the deficiencies of the chicken swarm algorithm, namely the lack of population diversity, the tendency toward premature convergence, and the low search precision in the later stage of evolution. A search mode based on the dynamic adaptive t distribution is applied to the cock swarm to balance the global exploration and local exploitation abilities of the algorithm.

Structural Design Examples.
In order to validate the performance of the proposed method on constrained problems, EOCSO is examined by solving constrained engineering design problems, namely the speed reducer design problem and the pressure vessel design problem.

Speed Reducer Design Problem.
The speed reducer design problem was proposed by the famous scholar Mezura-Montes; it is a classic constrained optimization problem used to verify the performance of engineering constrained optimization algorithms, as shown in Figure 25. There are 7 design variables in this problem.
Test functions from [28-30] are also adopted for the contrast experiments. f_1 ∼ f_7 are high-dimensional unimodal functions that are extremely hard to converge to the global optimum; they are used to inspect search precision. f_8 ∼ f_13 are high-dimensional multimodal functions with several local extreme points, so they are used to test global search performance and resistance to premature convergence [28-30]. f_14 ∼ f_18 are low-dimensional multimodal functions. The standard test functions are listed in Table 2.

Influence of the Dynamic Adaptive t Distribution Search Strategy on the Chicken Swarm Algorithm.
This influence is examined in two parts. The first part is the influence of different values of the control factor ψ on the Gaussian distribution algorithm. The second part is an empirical comparison and analysis of CSO under the dynamic adaptive t distribution (ATD-CSO) and the dynamic adaptive Gaussian distribution (AGD-CSO) as well as the original algorithm (O-CSO). High-dimensional unimodal functions (f_3, f_5), high-dimensional multimodal functions (f_11, f_13), and low-dimensional multimodal functions (f_15, f_17) are selected as test functions.
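As an illustration of the categories above, two representative benchmark functions (our choice; the extract does not name the specific f_i) can be written as:

```python
import numpy as np

def sphere(x):
    """Typical high-dimensional unimodal benchmark: minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x))

def rastrigin(x):
    """Typical high-dimensional multimodal benchmark with many local minima."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x)))
```

A unimodal function like the sphere tests pure convergence precision, while Rastrigin's grid of local minima tests whether an algorithm escapes local optima.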

Table 3 :
Influence of various ψ on the test functions.

Table 4 :
Influence of different distributions on test functions.
For the mean value, worst value, and standard variance, f_11 and f_15 obtain the same results under ATD-CSO and AGD-CSO, but for the standard variance of f_17, ATD-CSO is better than AGD-CSO. Compared with O-CSO, ATD-CSO produces better mean values, worst values, and variances. It can be seen from Figures 1-6 that, in the early evolution period, the convergence rate of ATD-CSO is faster than that of AGD-CSO when calculating the above 6 functions; in the later period the convergence rates are similar, but the overall convergence rate of ATD-CSO is better.

Table 5 :
Statistical results of the influence of the dynamic adaptive t distribution and opposition-based learning on CSO.

As can be seen from Table 5, the best value, mean value, worst value, and variance of f_3, f_5, f_13, and f_15 obtained with the dynamic adaptive t distribution and elite opposition-based learning are apparently better than those obtained by the original CSO.
For the best value, mean value, worst value, and variance, the results are better when the t distribution and elite opposition-based learning work together. Since the search can range more widely via elite opposition-based learning, the boundary buffering wall is adopted to handle individuals crossing the border, avoiding the deficiency of individuals gathering on the boundary value.

Table 6 :
Influence of various step values on test functions.
Each result is assessed by the best value, mean value, standard deviation, and worst value over the trials. The number of iterations for f_14 ∼ f_18 and f_1 ∼ f_13 is 50 and 1000 per trial, respectively. The algorithm is implemented in Matlab 2012a on a platform with a Windows 8 OS, an Intel Core i5-4210U 2.4 GHz CPU, and 4 GB of memory. The test statistics are shown in Tables 7, 8, and 9.
EOCSO also outperforms the seven other intelligent algorithms in terms of the worst value and standard variance. It can be concluded that the EOCSO algorithm has high precision and good robustness in searching high-dimensional unimodal functions. The test statistics of functions f_8 ∼ f_13 are demonstrated in Table 8.

Table 8 :
Test statistical results of functions f_8 ∼ f_13.

Table 9 :
Test statistical results of functions f_14 ∼ f_18.

Table 10 :
Comparison of the best solutions obtained by different methods for speed reducer design problem.

Table 11 :
Comparison of statistical results for speed reducer design problem by the various algorithms.

Table 12 :
Comparison of the best solutions for pressure vessel design problem by different algorithms.