A Novel PSO Model Based on Simulating Human Social Communication Behavior



Introduction
Particle swarm optimization (PSO), originally introduced by Kennedy and Eberhart [1], has proven to be a powerful competitor to other evolutionary algorithms, e.g., genetic algorithms [2]. In PSO, individuals, instead of being manipulated by evolutionary operators such as crossover and mutation, are "evolved" through cooperation and competition among the individuals across generations. Each individual in the swarm is called a particle: a point with a velocity that is dynamically adjusted during the search according to its own flying experience and the best experience of the swarm. PSO has empirically turned out to perform well on many unconstrained optimization problems. However, when it comes to solving complex multimodal problems, PSO may easily get trapped in a local optimum. To overcome this defect and improve PSO performance, researchers have proposed several methods [3-20]. In this paper, we present an improved PSO based on human social communication (HSCPSO). This strategy preserves the swarm's diversity and guards against premature convergence, especially when solving complex multimodal problems.
This paper is organized as follows. Section 2 presents an overview of the original PSO, together with a discussion of previous attempts to improve PSO performance. Section 3 proposes an improved PSO based on the simulation of human communication. Section 4 gives the test functions, the experimental settings, and the results. Finally, conclusions and future work are discussed in Section 5.

The Original PSO (OPSO)
The original PSO algorithm (OPSO) was inspired by the search behavior of biological organisms, where each particle moves through the search space guided by a combination of the best position found so far by itself and by its neighbors. In the PSO domain, there are generally two main neighborhood topologies, namely the global and the local neighborhood, shown in Figures 1(a) and 1(b), respectively.
The two neighborhood topologies give rise to two classical PSO variants, namely the global PSO (GPSO) and the local PSO (LPSO), in which the behavior of the particles is described by (2.1) and (2.2), respectively. In HSCPSO, the neighborhood topology is somewhat similar to the four-clusters topology [4], but not identical to it, as can be seen in Figure 1(c). Here, the black dots represent the particles in each social circle (SC), the circles represent the social circles, and the arrows represent the relationships between SCs. Note that in HSCPSO each particle's neighborhood is the set of the particles of all SCs that it is a member of (see Section 3.1).

Consider the following:

(2.1) v_i(t) = w v_i(t-1) + ϕ1 r1 (p_i - x_i(t-1)) + ϕ2 r2 (p_g - x_i(t-1)),
      x_i(t) = x_i(t-1) + v_i(t),

(2.2) v_i(t) = w v_i(t-1) + ϕ1 r1 (p_i - x_i(t-1)) + ϕ2 r2 (p_neighbor_i - x_i(t-1)),
      x_i(t) = x_i(t-1) + v_i(t),
where i = 1, . . ., ps and ps is the population size; t is the iteration counter; n is the dimension of the search space; ϕ1 and ϕ2 denote the acceleration coefficients; r1 and r2 are random vectors with components uniformly distributed in the range [0, 1]; x_i(t) = (x_i1, x_i2, . . ., x_in) represents the position of the ith particle at iteration t; v_i(t) = (v_i1, v_i2, . . ., v_in) represents the velocity of the ith particle; p_i is the best position yielding the best fitness value for the ith particle; p_g is the best position discovered by the whole population; p_neighbor_i is the best position achieved within the neighborhood of the current particle i. Note that in this paper "the original PSO" refers to two PSOs: one is PSO with inertia weight and a ring topology, and the other is PSO with inertia weight and a global topology.
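As a concrete reference, the update rules (2.1)/(2.2) can be sketched in Python. The function name `pso_step` and its defaults (w = 0.729, ϕ1 = ϕ2 = 1.492, the values used later for the comparison algorithms) are illustrative choices, not code from the paper:

```python
import random

def pso_step(x, v, pbest, exemplar, w=0.729, phi1=1.492, phi2=1.492, rng=random):
    """One velocity-and-position update for a single particle.
    `exemplar` is p_g for GPSO (eq. 2.1) or p_neighbor_i for LPSO (eq. 2.2)."""
    n = len(x)
    # Velocity: inertia term + cognitive pull toward pbest + social pull toward exemplar
    new_v = [w * v[k]
             + phi1 * rng.random() * (pbest[k] - x[k])
             + phi2 * rng.random() * (exemplar[k] - x[k])
             for k in range(n)]
    # Position update: x(t) = x(t-1) + v(t)
    new_x = [x[k] + new_v[k] for k in range(n)]
    return new_x, new_v
```

If pbest and the exemplar coincide with the current position, the random terms vanish and only the inertia term w·v remains, which makes the inertia weight's damping role easy to see.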

Related Works
Since the introduction of PSO, it has attracted a great deal of attention. Many researchers have worked on improving its performance in various ways and derived many interesting variants. In [3], Clerc and Kennedy indicated that a constriction factor χ may help ensure the convergence of PSO. The velocity and position of the ith particle are updated with χ = 2 / |2 - φ - sqrt(φ² - 4φ)|, where φ = ϕ1 + ϕ2 and φ > 4. Kennedy and Mendes [4] claimed that PSO with a small neighborhood might perform better on complex problems, while one with a large neighborhood would perform better on simple problems. In [5], the author proposed a quantum-behaved particle swarm optimization (QPSO) to improve PSO performance. Some researchers [6-9] also incorporated additional techniques (e.g., evolutionary operators) to improve PSO performance. However, these improvements are based on a static neighborhood network, which greatly decreases swarm diversity because each particle only learns from a fixed neighborhood. Hence, Suganthan [10] and Hu and Eberhart [11] proposed a dynamic neighborhood topology, which transforms gradually from acting like the local neighborhood in the early stage of the search to behaving more like the global neighborhood in the late stage. Liang et al. [12] proposed an improved PSO called CLPSO, which uses a novel learning strategy where all particles' historical best information is used to update a particle's velocity. In [13], the author proposed the fitness-distance-ratio-based PSO (FDR-PSO), which selects another particle that is expected to have a higher fitness value and to be the nearest to the particle being updated. Mohais et al.
[14] proposed a randomly generated directed graph to define the neighborhood, generated by using two methods, namely the random edge migration method and the neighborhood restructuring method. Janson and Middendorf [15] arranged the particles in a hierarchical structure, where the best performing particles ascend the tree to influence more particles, replacing relatively worse performing particles, which descend the tree. In [16], the clubs-based particle swarm optimization (C-PSO) algorithm was proposed, whose membership is dynamically changed to avoid premature convergence and improve performance. In [17], Mendes and Kennedy introduced a fully informed PSO where, instead of using the pbest (personal best position of a given particle so far) and gbest (position of the best particle of the entire swarm) positions of the standard algorithm, all the neighbors of a particle are used to update its velocity. The influence of each particle on its neighbors is weighted based on its fitness value and the neighborhood size. In [18], the authors proposed a cooperative particle swarm optimizer (CPSO-H). Although CPSO-H uses one-dimensional (1D) swarms to search each dimension separately, the results of these searches are integrated by a global swarm, significantly improving the performance of the original PSO on multimodal problems. In recent works [19, 20], the authors proposed a dynamic neighborhood topology where the whole population is divided into a number of subswarms. These subswarms are regrouped frequently by using various regrouping schedules, and information is exchanged among all particles in the whole swarm.
The main differences between our approach and the other proposals in the literature are as follows.
(i) In HSCPSO, we create social circles (SCs) for the particles, analogous to the communities in which people communicate and learn to widen each other's knowledge.
(ii) The learning exemplars of each particle comprise three parts, namely its own best experience, the experience of the best performing particle in all SCs, and the experiences of the particles of all SCs it is a member of. This strategy greatly helps preserve swarm diversity and guards against premature convergence.
(iii) A parallel hybrid mutation is used to improve a particle's ability to escape from local optima.

Updating Strategy of Particle Velocity
Based on the simulation of human social communication behavior, each particle can join more than one SC, and each SC can accommodate any number of particles. Vacant SCs are also allowed. Initially, each particle randomly joins a predefined number of SCs, which is called the default social circle level (DSC). During a run, the worst performing particles are made more social by joining more SCs to improve their knowledge, while the best performing particles are made less social by leaving an SC to reduce their strong influence on other members; thus each particle's SC count is dynamically adjusted according to its performance. During this cycle of leaving and joining SCs, particles that no longer show extreme performance in their neighborhoods gradually return to DSC. The speed of regaining DSC affects the algorithm's performance, so a check is made every n iterations (here, n is called the gap iteration for adjusting DSC) to find the particles whose SC level is above or below DSC and take them back to DSC if they do not show extreme performance. Thus, the gap iteration for adjusting DSC needs a suitable value to ensure the efficiency of HSCPSO.
To control when a particle joins or leaves an SC, we designed the following mechanism. If a particle continues to show the worst performance compared with the particles in the SCs that it is a member of, it joins more SCs one after another until the number of SCs reaches the user-defined maximum allowed SC number, while a particle that continues to show superior performance in the SCs that it is a member of leaves SCs one after another until the number of SCs reaches the minimum allowed SC number. The methods used to determine DSC, the gap iteration for adjusting DSC, and the maximum and minimum allowed SC numbers are discussed in the following.
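The join/leave rule described above can be sketched as a small state update. This is an illustrative reading, not the paper's code: the function name and its folding of the two rules (join/leave on extreme performance, drift back toward DSC otherwise) into one step are assumptions, and in the paper the drift back to DSC only happens at the gap-iteration checks:

```python
def adjust_sc_count(n_sc, is_worst, is_best, dsc, sc_min, sc_max):
    """Adjust the number of SCs a particle belongs to (a sketch).
    - worst performer in its neighborhood: join one more SC (up to sc_max)
    - best performer: leave one SC (down to sc_min)
    - otherwise (checked every n iterations): step back toward DSC
    """
    if is_worst and not is_best:
        return min(n_sc + 1, sc_max)   # more socialized to gain knowledge
    if is_best and not is_worst:
        return max(n_sc - 1, sc_min)   # less socialized to reduce influence
    if n_sc > dsc:                     # no longer extreme: return toward DSC
        return n_sc - 1
    if n_sc < dsc:
        return n_sc + 1
    return n_sc
```

With DSC = 3 and bounds [2, 5] as in the Figure 2 example, a worst performer at 3 SCs moves to 4, a best performer at 3 moves to 2, and a non-extreme particle at 4 steps back toward 3.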
Figure 2(a) shows a snapshot of the SCs during an execution of HSCPSO at iteration t. In this example, the swarm consists of 7 particles, and there are 6 SCs available for them to join. The minimum allowed SC number, DSC, and maximum allowed SC number are 2, 3, and 5, respectively. Particle 1 will leave social circle 1 (SC1), SC3, or SC6 because it is the best performing particle in its neighborhood. Particle 3 will join SC2 or SC4 because it is the worst performing particle in its neighborhood. Particle 4 will leave SC2, SC3, SC4, or SC5 to go back to DSC, while Particle 2 will join SC1, SC3, SC4, or SC6 to return to DSC. Figure 2(b) gives the pseudocode of SC handling during an execution of HSCPSO, where neighbor_i is the set of neighbors of particle i (note that in HSCPSO each particle's neighborhood is the set of the particles of all SCs that it is a member of), and membership_i is the set of the SCs that particle i is a member of.
The idea of HSCPSO is somewhat similar to C-PSO [16], but not identical to it. In our algorithm, when updating a particle's velocity, the particle does not learn from all dimensions of the best performing particle among its neighbors, but instead learns, dimension by dimension, from the best performing particles of the SCs that it is a member of. To compare the two strategies, the following experiment was conducted: HSCPSO and C-PSO were each run 20 times on the Sphere, Rosenbrock, Ackley, and Griewank functions. As all test functions are minimization problems, the smaller the mean value is, the better the performance. From Figure 3, we can observe that the learning strategy of C-PSO suffers from premature convergence, while the strategy of HSCPSO not only preserves the diversity of the swarm but also improves the swarm's ability to escape from local optima.
In the real world, everyone can become a member of any SC to widen their field of view. In a similar way, we hypothesize that each particle in the swarm has the same ability as a human in society. Based on our previous work [20], the following velocity and position updating equations are employed in HSCPSO:

(3.1) v_i(t) = w v_i(t-1) + ϕ1 r1 (p_i - x_i(t-1)) + ϕ2 r2 (p_g - x_i(t-1)) + ϕ3 r3 (p_bin_i - x_i(t-1)),
      x_i(t) = x_i(t-1) + v_i(t),

where p_i = (p_i1, p_i2, . . ., p_in) is the best position of the ith particle; p_g denotes the experience of the best performing particle in all SCs; p_bin_i is produced by the comprehensive strategy, in which the historical best information of the particles of the SCs that particle i is a member of is used to update the particle's velocity. Here, ϕ1 = 0.972, ϕ2 = 0.972, ϕ3 = 0.972, and w = 0.729. For the comprehensive strategy, the pseudocode of the learning exemplar choice is shown in Algorithm 1.
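One plausible sketch of how the exemplar p_bin_i could be assembled dimension by dimension is given below. The exact selection rule of Algorithm 1 is not fully recoverable from the text, so a CLPSO-style tournament of two among the particle's SC neighbors is assumed here; all names are illustrative:

```python
import random

def build_exemplar(i, pbest, fitness, neighbor_ids, rng=random):
    """Dimension-wise exemplar p_bin_i (a sketch of the comprehensive strategy).
    For each dimension k, copy that dimension from the historical best (pbest)
    of a particle drawn from particle i's SC neighborhood; between two random
    candidates, the one with better (lower) fitness wins."""
    n = len(pbest[i])
    exemplar = []
    for k in range(n):
        a = rng.choice(neighbor_ids)
        b = rng.choice(neighbor_ids)
        j = a if fitness[a] <= fitness[b] else b  # minimization: lower is better
        exemplar.append(pbest[j][k])
    return exemplar
```

Because each dimension may come from a different neighbor, the exemplar mixes information from several SC members instead of copying a single leader, which is what preserves diversity.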

Parallel Hybrid Mutation
In [4], the author concluded that PSO converges rapidly in the early search process and then gradually slows down or stagnates. This phenomenon is caused by the loss of diversity in the swarm. To overcome this defect, some researchers [7-9] applied evolutionary operators to improve the diversity of the swarm. In this paper, we propose a parallel hybrid mutation that combines uniform-distribution mutation and Gaussian-distribution mutation. The former promotes a global search over a large range, while the latter searches a small range with high precision. The process of mutation is given as follows.

[Algorithm 1 fragment: Function comprehensive_learning. For each particle i = 1..ps and each dimension k = 1..n, set flag(i, k) = 0; then, for each particle j = 1..ps, consider j if it is a member of SC_i. Here SC_i denotes the set of the particles of all SCs that particle i is a member of, and fit(p) represents the fitness value of an array p.]
(i) The mutation capability mc_i of each particle is set by the following expression:

mc_i = 0.05 + 0.45 × (exp(5(i - 1)/(ps - 1)) - 1) / (exp(5) - 1).

(ii) The mode of mutation is chosen as shown in Algorithm 2. Here, p_u is called the mutation factor; it denotes the ratio of uniform-distribution mutation, and correspondingly 1 - p_u is the ratio of Gaussian-distribution mutation. Gaussian(σ) returns a random number drawn from a Gaussian distribution with standard deviation σ, and ceil(p) rounds the elements of p to the nearest integers greater than or equal to p. Three mutation factors (Linear, Exponential, and Sigmoid), defined in (3.3), (3.4), and (3.5), are adopted, and their shapes for a maximum generation count of 2000 are shown in Figure 4(a):
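The reconstructed mc_i expression and the two-branch mutation can be sketched as follows. The mc_i formula matches the 0.05 to 0.5 span reported for Figure 4(b); `hybrid_mutate` is an illustrative reading of Algorithm 2 (the σ default and the per-particle mutate-or-not draw are assumptions):

```python
import math
import random

def mutation_capability(i, ps):
    """mc_i = 0.05 + 0.45 * (exp(5(i-1)/(ps-1)) - 1) / (exp(5) - 1):
    grows from 0.05 at i = 1 to 0.5 at i = ps."""
    return 0.05 + 0.45 * (math.exp(5 * (i - 1) / (ps - 1)) - 1) / (math.exp(5) - 1)

def hybrid_mutate(x, mc, p_u, x_min, x_max, sigma=0.1, rng=random):
    """Parallel hybrid mutation (a sketch): with probability mc the particle
    mutates; uniform mutation with probability p_u (global, large range),
    Gaussian mutation otherwise (local, high precision)."""
    if rng.random() >= mc:
        return list(x)                                   # no mutation
    if rng.random() < p_u:
        return [rng.uniform(x_min, x_max) for _ in x]    # uniform: restart-like jump
    return [xi + rng.gauss(0.0, sigma) for xi in x]      # Gaussian: fine-grained step
```

With a linear mutation factor, p_u starts large (mostly uniform, exploratory mutation) and shrinks over the run, handing over to the precise Gaussian branch, which matches the conclusion drawn from Table 1.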

(3.2)
where t denotes the current generation and gen is the maximum generation.
To test which mutation factor suits our algorithm, the following experiment was conducted. On the Sphere, Rosenbrock, Griewank, Ackley, Rastrigin-noncont, and Rastrigin functions, HSCPSO with each mutation factor was run 30 times per function, with the iteration count set to 2000. The experimental results were standardized to the same scale using (3.7). The results are presented in Table 1, where p_u represents the type of mutation factor; X_end is the standardized value after 2000 iterations; Linear represents the linear mutation factor; Exponent denotes the exponential mutation factor; Sigm is the Sigmoid mutation factor; and No denotes no mutation. We can observe that HSCPSO with the linear mutation factor achieves the best result; thus, the linear mutation factor is adopted in HSCPSO.

HSCPSO's Parameter Settings and Analysis of the Swarm's Diversity
In this section, we discuss four parameters, namely the default social circle level (DSC), the gap iteration n for adjusting DSC, and the minimum and maximum allowed SC numbers.

Default Social Circle Level (DSC)
To explore the effects of different DSC values on HSCPSO, five ten-dimensional test functions are used (the Sphere, Rosenbrock, Ackley, Griewank, and Rastrigin functions). When processing the experimental data, it is impossible to combine the raw results of the PSOs with different DSC values across different functions, as they are scaled differently. Thus, Mendes's method [17] is used, and the raw results are standardized to the same scale as follows:

(3.7) X_ij = (x_ij - μ_ij) / σ_ij,

where x_ij, μ_ij, and σ_ij denote the trial result, mean, and standard deviation of the ith test function in the jth algorithm, respectively. Note that the index j represents the algorithms with different DSC values. By trial and error, we found that different DSC values yield different results, as can be seen in Figure 5(a). As all test functions are minimization problems, the smaller the standardized value is, the better the performance (i.e., here, the larger the DSC, the better the performance). However, on second thought, this trend is not logical, because in real life people do not have enough energy to take part in all social activities. Therefore, another experiment (with the same settings and data processing as in Figure 5(a)) was conducted to test the impact of DSC.
Figure 5(b) gives the simulation results; we found that the smaller the DSC is, the slower the convergence speed, and vice versa. A slower convergence rate, rather than a faster one, is clearly beneficial to the ability to escape from a local optimum [4]. Based on the above analysis, the parameter DSC is set to 15 in our algorithm. Furthermore, to confirm the effectiveness of this DSC choice statistically, we adopted nonparametric Wilcoxon rank sum tests to determine whether the differences are significant. The criterion is as follows: if p is less than α, the two groups of numbers are statistically different; if p is greater than or equal to α, they are not. From Table 2, we can observe that the performance of DSC-15 is statistically different from that of the other DSC values except DSC-10.
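The standardization of (3.7) used throughout these comparisons is a plain z-score; a minimal sketch:

```python
import statistics

def standardize(results):
    """Standardize raw trial results to a common scale, as in eq. (3.7):
    X = (x - mean) / stdev, so that differently scaled test functions
    can be combined into one comparison."""
    mu = statistics.mean(results)
    sigma = statistics.stdev(results)
    return [(x - mu) / sigma for x in results]
```

Each algorithm's trials on a given function are standardized against that function's own mean and deviation, so a value below zero means better than average on that function regardless of its raw scale.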

The Gap Iteration (n) for Adjusting DSC
This section discusses the effect of the gap iteration n for adjusting DSC on the algorithm. Six thirty-dimensional test functions (Sphere, Rosenbrock, Griewank, Ackley, Rastrigin-noncont, and Rastrigin) are used to investigate the impact of this parameter. HSCPSO is run 20 times on each function (each run comprises 1000 iterations), and the results are again standardized to the same scale using (3.7). The mean values of the results are plotted in Figure 6(a). The figure clearly indicates that the gap iteration n influences the results. When n is 25, that is, about 1/40 of the total iterations, a faster convergence speed and a better result are obtained on all test functions. Furthermore, the standardized diversity in Figure 6(b) also supports this conclusion.

Minimum and Maximum of Allowed SC Number
During a run, the number of SCs that a particle is a member of changes dynamically according to its performance, which influences the performance of HSCPSO. Additionally, in PSO, the early search for the global best position requires exploration of the possible solutions, for which the LPSO behavior is chosen, while the later search requires exploitation of the best solutions found earlier, for which the GPSO behavior is chosen. Thus, given these characteristics, in our algorithm we let the minimum and maximum allowed SC numbers change dynamically as the iterations elapse, and we empirically propose (3.9) to set them:

(3.9)
where the floor operation rounds towards minus infinity, t is the iteration counter, and Maxgen is the total number of iterations. Figure 7(a) shows the minimum and maximum allowed SC numbers as the iterations elapse, and we can observe that the dynamic minimum and maximum allowed SC numbers (dynamic max-min) improve the swarm diversity compared with the fixed max-min. Note that the diversity measure is presented in [20].

Search Bounds Limitation
Many practical problems place bounds on the variables' ranges, and all test functions used in this paper have such bounds. Thus, to keep the particles flying within the search range, some researchers use the equation x_i(t) = min(x_max, max(x_min, x_i)) to clamp a particle to the border. Here, a different but related method is used: only when a particle is within the search range do we calculate its fitness value and update its p_bin_i, p_g, and p_i. As all learning exemplars are within the search range, a particle will eventually return to the search range.
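The evaluate-only-in-bounds scheme can be sketched as below. The function name and its return convention (the caller's running best fitness) are illustrative, not from the paper:

```python
def evaluate_in_bounds(x, f, x_min, x_max, current_best):
    """Bounds handling sketch: a particle outside the search range is not
    clamped; its fitness is simply not evaluated, so pbest/gbest are only
    ever updated from in-range positions, and out-of-range particles are
    pulled back by their in-range exemplars.
    Returns the (possibly updated) best fitness seen so far."""
    if all(x_min <= xi <= x_max for xi in x):
        fit = f(x)
        if current_best is None or fit < current_best:
            return fit  # in range and better: update p_i / p_g with this value
    return current_best  # out of range (or no improvement): best is unchanged
```

Compared with hard clamping, this avoids piling particles up on the boundary while still guaranteeing that all memory the swarm learns from lies inside the feasible region.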

Analysis of Swarm's Diversity
To compare the particle diversity of OPSO with that of HSCPSO, we omit the previous-velocity term w v_i(t-1) and set ϕ1 and ϕ2 equal to one. Thus, the following velocity updating equation is employed in the experiment:

(3.10)
Using (3.10), we ran HSCPSO and OPSO on the Rosenbrock and Rastrigin functions to analyze the number of best particles (NBP) at each iteration. Figure 8 gives a scatter plot of the indices of the best performing particles against the iteration number. For example, a dot at (20, 4000) indicates that the particle with index 20 has the best global fitness at iteration 4000. The velocity updating strategy of HSCPSO yields a higher NBP than OPSO, which shows that the strategy of HSCPSO increases the swarm's diversity compared with OPSO [16]. The pseudocode of HSCPSO is shown in Algorithm 3.

Test Functions and Parameter Settings of PSO Variants
To compare the performance of all algorithms, the Sphere, Rosenbrock, Ackley, Griewank, Rastrigin, and Rastrigin-noncont functions are selected to test the convergence characteristics, and the Ackley, Rastrigin, Rastrigin-noncont, and Rosenbrock functions are rotated with Salomon's algorithm [21] to increase the optimization difficulty and test the algorithms' robustness (the predefined thresholds are 0.05, 50, 2, and 3, respectively). Table 3 lists the properties of these functions. Note that in Table 3 the values in the search space column specify the range of the initial random particle positions; x* denotes the global optimum, and f(x*) is the corresponding fitness value.

[Algorithm 3 fragment: 1 Begin; 2 initialize particles' positions, velocities, pbest_i, gbest, and the related parameters; 3 initialize the default social circles of which particle i is a member; 4 V_max = 0.25 (X_max - X_min); 5 initialize p_u; 6 while fitcount < Max_FES and k < iteration: 7 for each particle: 8 locate the best particle position of the SCs that particle i is a member of; 9 compute p_bin_i in terms of (3.1); 10 for each dimension: 11 update the particle's velocity and position in terms of (3.1).]
To make the different algorithms comparable, the population size is set to 30 for all PSOs, and each test function is run 30 times. In each run, the maximum number of fitness evaluations (FEs) is set to 3 × 10^4 for the unrotated test functions and 6 × 10^4 for the rotated ones. The comparative algorithms and their parameter settings are listed below.
(i) Local-version PSO with constriction factor (CF-LPSO) [3]; (ii) global-version PSO with constriction factor (CF-GPSO) [3]; (iii) fitness-distance-ratio-based PSO (FDR-PSO) [13]; (iv) fully informed particle swarm optimization (FIPS) [17]; (v) comprehensive learning PSO (CLPSO) [12]; (vi) cooperative PSO (CPSO-H_k) [18]. The fully informed PSO (FIPS) with a U-ring topology, which achieved the highest success rate, was used. CPSO-H_k is a cooperative PSO model combined with the standard PSO, in which k = 6 is used in this paper. ϕ1 = 1.492, ϕ2 = 1.492, and w = 0.729 are used in all PSOs except HSCPSO.

Fixed Iteration Results and Analysis
Tables 4 and 5 present the means and 95% confidence intervals after 3 × 10^4 and 6 × 10^4 function evaluations, respectively. The best results among the seven PSO algorithms are presented in bold. In addition, to determine whether the result obtained by HSCPSO is statistically different from the results of the other six PSO variants, Wilcoxon rank sum tests are conducted between the HSCPSO result and the best result achieved by the other six PSO variants on each test problem, and the test results are presented in the last row of Tables 4 and 5. Note that an h value of one implies that the performance of the two algorithms is statistically different with 95% certainty, whereas an h value of zero indicates that the performance is not statistically different. Figures 9 and 10 present the convergence characteristics in terms of the best fitness value of the median run of each algorithm on each test function. From these experimental results, we can observe that the Sphere function is easily optimized by CF-GPSO and CF-LPSO, while the other five algorithms show slower convergence. The Rosenbrock function has a narrow valley from the perceived local optima to the global optimum. In the unrotated case, HSCPSO can avoid premature convergence. Note that rotation has little effect on any of the algorithms for this function.
The multimodal Ackley function has many local minima positioned in a regular grid. In the unrotated case, HSCPSO takes the lead, and FDR-PSO performs better than the other five PSO variants. In the rotated case, the FDR-PSO algorithm is trapped in local optima early on, whereas CLPSO and HSCPSO are among the performance leaders. Thanks to its comprehensive learning strategy, CLPSO can discourage premature convergence. At the same time, the learning strategy of HSCPSO also preserves the diversity of the swarm and thereby discourages premature convergence. The Rastrigin function exhibits a pattern similar to that observed with the Ackley function. In the unrotated case, HSCPSO performs very well, but its performance deteriorates rapidly when the search space is rotated. In the rotated case, CLPSO takes the lead. Note that CLPSO is still able to improve its performance in the rotated case.
On the unrotated Rastrigin-noncont function, HSCPSO performs excellently compared with the other algorithms, and after 2.6 × 10^4 function evaluations CLPSO has the strongest ability to escape from local optima. In the unrotated case, all algorithms except CLPSO and HSCPSO quickly get trapped in a local minimum. These two algorithms can avoid premature convergence and escape from local minima.
Altogether, according to the Wilcoxon rank sum tests, the HSCPSO algorithm performs better than the other six algorithms on functions f2, f3, f4, f5, f6, f9, and f10. Although HSCPSO does not achieve the best performance on f7 and f8, it shows almost the same convergence characteristics as CLPSO.

Robustness Analysis
Tables 6 and 7 show the results of the robustness analysis. Here, "robustness" refers to the stability of the algorithm's optimization ability under different conditions (the rotated and unrotated test functions) according to a certain criterion. For HSCPSO, the criterion is whether the algorithm succeeds in reaching a specified threshold. A robust algorithm is one that manages to reach the threshold consistently, whether in the rotated case or the unrotated one. The "Success rate" column lists the rate at which an algorithm reached the threshold over 60 runs. The "FEs" column gives the number of function evaluations needed to reach the threshold, and only the data from the successful runs are used to compute it. As can be seen in Tables 6 and 7, none of the PSOs had any difficulty in reaching the threshold on the Rosenbrock function during 30 runs in either case. CF-GPSO has some difficulties on both the unrotated and the rotated Ackley function, but CF-LPSO reaches the threshold on the unrotated Ackley. FDR-PSO and CPSO failed on the rotated Ackley function, but in the unrotated case they consistently reached the threshold. FIPS failed completely in both the unrotated and the rotated cases. It is interesting to note that CLPSO reached the threshold in all runs on the rotated Ackley function; however, in the unrotated case it did not achieve a perfect success rate. Only HSCPSO consistently reached the threshold in both the unrotated and rotated cases.
The Rastrigin-noncont function is hard to solve for the majority of the algorithms, as can be seen in Tables 6 and 7. CLPSO and HSCPSO consistently reached the threshold in the unrotated case, while in the rotated case only HSCPSO achieved a perfect success rate.
On the Rastrigin function, HSCPSO and CLPSO consistently reached the threshold in both the unrotated and the rotated cases. CPSO and FDR-PSO reached the threshold in the unrotated case, but they failed in the rotated case. The CF-LPSO, FIPS, and CF-GPSO algorithms had difficulties in both the unrotated and the rotated cases.
Altogether, on the Rosenbrock, Ackley, Rastrigin, and Rastrigin-noncont functions, HSCPSO shows stable optimization ability under the different conditions. CLPSO, FDR-PSO, CPSO, and FIPS consistently reached the threshold on most of the test functions but were slightly less robust. CF-LPSO and CF-GPSO seemed to be unreliable on the multimodal benchmark functions.

Conclusions and Future Works
This paper proposed an improved PSO based on the simulation of human social communication behavior (HSCPSO for short), in which each particle self-adaptively adjusts its learning strategy according to the current conditions. From the convergence characteristics and the robustness analysis, we can conclude that HSCPSO significantly improves the ability to solve complicated multimodal problems compared with the other PSO versions. In addition, the Wilcoxon rank sum tests show that the results achieved by HSCPSO are statistically different from the second-best results. Since the HSCPSO algorithm outperformed the other PSO variants on most of the test functions evaluated in this paper, it can be regarded as an effective improvement in the PSO domain.
In the future, we will focus on (i) constructing a model for setting the relevant parameters of PSO, (ii) testing the effectiveness of the proposed algorithm on more multimodal test problems and on several composition functions that are more difficult to optimize, (iii) applying the proposed algorithm to practical problems to verify its effectiveness, and (iv) testing the proposed algorithm on the CEC 2005 problems.

Figure 1 :
Figure 1: (a) Global neighborhood; (b) local neighborhood; (c) the proposed algorithm's neighborhood.

Figure 3 :
Figure 3: Comparison of the different learning strategies.

Figure 4 b
Figure 4(b) shows an example of mc assigned to 40 particles. Each particle has a mutation capability ranging from 0.05 to 0.5.

Figure 4 :
Figure 4: Mutation capability (mc) and mutation factors.

Figure 5 :
Figure 5: Effect of the default social circles on the algorithm.

Figure 6 :
Figure 6: Standardized mean values and diversity with different gap iterations (n).

Figure 7 :
Figure 7: The allowed SC number and its effect on the swarm diversity.

Table 3 :
Dimension, search space, and global optimum of the test functions.

Figure 8 :
Figure 8: The best particle in the swarm with iteration elapsed.

Table 1 :
Standardized value of different mutation factors.

Table 2 :
Result of t-tests for the paired comparison of the default social circles.

Table 4 :
Means after 3 × 10^4 function evaluations for the unrotated test functions.

Table 5 :
Means after 6 × 10^4 function evaluations for the rotated test functions.

Table 6 :
Robustness analysis after 3 × 10^4 function evaluations for the unrotated test functions.

Table 7 :
Robustness analysis after 6 × 10^4 function evaluations for the rotated test functions.