Multispecies Coevolution Particle Swarm Optimization Based on Previous Search

Copyright © 2017 Danping Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning (BSP) fitness tree, called MCPSO-PSH, is proposed. The previous search history memorized in the BSP fitness tree effectively restrains the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which enhances the global search ability of particles and avoids premature convergence to local optima. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into three categories: 10 basic benchmark functions (10-dimensional and 30-dimensional), 10 CEC2005 benchmark functions (30-dimensional), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to the other swarm-based and evolutionary algorithms in terms of solution accuracy and statistical tests.


Introduction
Solving numerical optimization problems is a challenging research endeavor in many scientific areas. Many optimization techniques and search algorithms have drawn their motivation from evolution and social behavior. These include ant colony optimization [1], genetic algorithms [2], differential evolution [3], particle swarm optimization [4], and artificial immune systems [5]. In swarm intelligence (SI) systems, many simple individuals interact locally with one another and with their environment. Although such systems are decentralized, the local interactions between individuals lead to the emergence of global behavior and properties. All of these algorithms have many variants with excellent performance, and these variants are based on various improvement strategies.
In the last few years, several heuristics have been developed to improve the performance of the PSO algorithm and to set up suitable parameters for it. van den Bergh [6] analyzed the trajectory of particles under different inertia weights and acceleration coefficients. The original structure of the PSO model reflects an intuitive idea: a particle takes any position whose fitness value is better than its current one as a reference input. However, many experiments have shown that the basic PSO algorithm easily falls into local optima when solving complex multimodal problems with a huge number of local optima.
To overcome this problem, researchers have conducted many studies. Inspired by the coevolution phenomenon in nature, Cooperative Particle Swarm Optimization (CPSO) [7] partitions multiple swarms uniformly to optimize different components of the solution vector cooperatively. Building on this work, a CCPSO integrating random grouping and adaptive weighting schemes was developed and demonstrated great promise in scaling up PSO on high-dimensional nonseparable problems [8]. In [9], a competitive and cooperative coevolutionary PSO (CCPSO) shows considerable potential for solving complex optimization problems by explicitly modeling the coevolution of competing and cooperating species. In Multiswarm Self-Adaptive and Cooperative Particle Swarm Optimization (MSCPSO) [10], particles in each subswarm share a single global historical best optimum to enhance the cooperative capability. In [11], the Adaptive Cooperative Particle Swarm Optimizer (ACPSO) facilitates cooperation through the use of the Learning Automata (LA) algorithm. The coevolutionary PSO algorithm associated with the artificial immune principle (ICPSO) [12] adopts an improved immune-clonal-selection operator for optimizing the elite subpopulation and a migration scheme for information exchange between the elite subpopulation and the normal subpopulations.
Different from the above works with a static-swarm strategy, some cooperative PSO variants introduce a dynamic multiswarm strategy to enhance the ability of exploring local optima. With the aim of maintaining multiple swarms on different peaks, a clustering PSO (CPSO) proposed in [13] employs a nearest neighbor learning strategy to train particles and a hierarchical clustering method to locate and track multiple optima. In the hierarchical cluster-based multispecies particle swarm optimization (HCMSPSO) algorithm [14], a swarm is clustered into multiple species at an upper hierarchical level, and each species is further clustered into multiple subspecies at a lower hierarchical level; in the lower layer, subspecies within the same species are formed adaptively in each iteration during the particle update. In [15], a dynamic multiswarm particle swarm optimizer (DMS-PSO) merges the harmony search (HS) algorithm into each subswarm; these subswarms are regrouped frequently and information is exchanged among the particles of the whole swarm. In a novel optimizer using Adaptive Heterogeneous Particle Swarms (AHPS^2) [16], to best take advantage of the heterogeneous learning strategies, an adaptive competition strategy dynamically adjusts the population sizes of one independent swarm with comprehensive learning and another with subgradient learning, based on their group performance. In particle swarm optimization with an interswarm interactive learning strategy (IILPSO) [17], the roles of the learning swarm and the learnt swarm swap to maintain diversity when the interswarm interactive learning (IIL) behavior is triggered to adjust the sizes of the two swarms.
In order to prevent different individuals from repetitively exploiting the same searched regions, or excessive individuals from prematurely searching a narrow region, researchers have conducted a series of studies. Inspired by biology, niche [18] and speciation [19] techniques have been introduced into PSO to keep the swarm from crowding too closely and to locate as many optimal solutions as possible. Additionally, PSO topological structures are widely adopted. The ring topology employed in [20], operating as a niching algorithm, can drive the particles to explore the search space more broadly. Further, in [21] a multilayer strategy with multiple topologies is adopted to decrease the number of ineffective exploitation attempts. Inspired by ecological cohabitation, chaotic multiswarm particle swarm optimization (CMS-PSO) modifies the generic PSO with the help of a chaotic sequence for multidimensional unknown parameter estimation and optimization by forming multiple cooperating swarms [22]. The historical memory strategy in HMPSO [23], which estimates and preserves the distribution information of particles' historically promising positions, helps preserve information about the optimum solution space and supports comprehensive learning.
Although all the above works improve the performance of particle swarm optimization, there is a shortage of research on hybrid algorithms that combine a dynamic multiple-swarm strategy with a nonrepetitive search approach.
In this paper, we propose a new coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH). It is shown that when a Binary Space Partitioning tree archive stores the positions and fitness values of all evaluated solutions, this archive can be treated as an approximation of the fitness function, which avoids the individuals' revisit phenomenon and improves search efficiency. Moreover, the K-means clustering method is introduced to repartition the whole population into subpopulations frequently, and information communication among subpopulations is implemented in the dynamic repartition process. Therefore, the MCPSO-PSH algorithm has considerable potential for solving complex problems. To comprehensively evaluate the performance of the proposed algorithm, MCPSO-PSH is compared with other state-of-the-art algorithms on three categories of experiments: (1) ten 10-dimensional and 30-dimensional benchmark problems with various properties, such as unimodal and multimodal functions, are employed to test MCPSO-PSH's performance on a diverse set of problems; (2) a set of ten 30-dimensional CEC2005 benchmark functions is used to justify MCPSO-PSH's scalability for complex problems; (3) a real-world problem, multilevel thresholding based on the Otsu method for image segmentation, is employed to benchmark MCPSO-PSH's applicability to practical problems. The numerical results demonstrate that MCPSO-PSH significantly improves the performance of PSO and outperforms most of the comparison algorithms in the experiments. The main contributions of the proposed method lie in the following aspects.
(1) A new way of information communication among multiple swarms is designed. Each particle's position is updated according to not only its personal history information and the global information of its own species but also the global information of other species. (2) A dynamic K-means clustering scheme is designed. By decreasing the number of clusters and repartitioning the population into species, the search generally transitions from global search in the earlier evolution period to local search in the later evolution period. (3) The Binary Space Partitioning fitness tree, which memorizes the previous search history, prevents the individuals from revisiting unpromising landscapes, which improves search efficiency and avoids trapping in local optima.
The rest of this paper is organized as follows: Section 2 gives a review of the related works; Section 3 gives a brief explanation of the proposed coevolution particle swarm optimization algorithm; Section 4 presents the experimental studies of the proposed MCPSO-PSH; in Section 5, the performance of MCPSO-PSH-based multilevel thresholding for image segmentation is evaluated. Finally, Section 6 concludes the paper.

Related Works
2.1. Particle Swarm Optimization. Particle swarm optimization (PSO) is an evolutionary computation (EC) paradigm that emulates the swarming behavior of bird flocking [24]. It is a population-based search algorithm that exploits a population of individuals to probe promising regions of the search space. In this context, the population is called a swarm and the individuals are called particles. Each particle moves with an adaptable velocity within the search space and retains in its memory the best position it has ever encountered. The standard version of PSO is briefly described here [6].
Let N be the swarm size and D the dimension of the search space, and let each particle i of the swarm have a current position vector x_i, a current velocity vector v_i, and an individual best position vector p_i found by the particle itself so far. The swarm also has the global best position vector g found by any particle during all prior iterations. Assuming that the function f(·) is to be minimized, the notation in the t-th iteration is

$$x_i(t) = \left(x_{i,1}(t), x_{i,2}(t), \dots, x_{i,D}(t)\right), \quad i = 1, 2, \dots, N, \tag{1}$$

where each x_{i,j}(t) lies within the search range of dimension j, and each dimension of a particle is updated using the following equations:

$$v_{i,j}(t+1) = w\, v_{i,j}(t) + c_1 r_1 \left(p_{i,j}(t) - x_{i,j}(t)\right) + c_2 r_2 \left(g_j(t) - x_{i,j}(t)\right), \tag{2}$$

$$x_{i,j}(t+1) = x_{i,j}(t) + v_{i,j}(t+1). \tag{3}$$
In (2), c_1 and c_2 denote constant coefficients, and r_1 and r_2 are elements of random sequences in the range (0, 1).
The inertia weight w plays the role of balancing the global and local searches. It can be a positive constant (e.g., 0.9) or even a positive linear or nonlinear function of time [13, 14]. Although proper fine-tuning of the parameters c_1 and c_2 sometimes leads to improved performance, an extended study of the cognitive and social parameters in [6] suggests c_1 = c_2 = 2 as default values.
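As a concrete illustration, the update rules (2)-(3) can be sketched in a few lines of Python. This is a minimal sketch for a single particle; the default inertia weight and learning rates follow the values quoted above.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.9, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle (Eqs. (2)-(3))."""
    d = len(x)
    r1 = [random.random() for _ in range(d)]   # cognitive random sequence
    r2 = [random.random() for _ in range(d)]   # social random sequence
    new_v = [w * v[j]
             + c1 * r1[j] * (pbest[j] - x[j])
             + c2 * r2[j] * (gbest[j] - x[j])
             for j in range(d)]
    new_x = [x[j] + new_v[j] for j in range(d)]
    return new_x, new_v
```

When the particle already sits on both its personal and the global best, only the inertia term remains, which makes the role of w easy to see in isolation.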

2.2. K-Means Clustering. K-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. K-means clustering aims to partition N observations into K clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster, as shown in Figure 1. This results in a partitioning of the data space into Voronoi cells [25].
The cluster centers serve as the reference positions of the clusters, and the formula for computing the centers is shown in (4). If the i-th cluster contains n_i members, denoted x_{i1}, x_{i2}, ..., x_{in_i}, then the cluster center c_i is determined as

$$c_i = \frac{1}{n_i} \sum_{k=1}^{n_i} x_{ik}. \tag{4}$$

The radius r_i of a cluster is defined as the mean (Euclidean) distance of the members from the center of the cluster. Thus, r_i can be written as follows:

$$r_i = \frac{1}{n_i} \sum_{k=1}^{n_i} \left\lVert x_{ik} - c_i \right\rVert, \tag{5}$$

where x_{ik} is the position of the k-th member and n_i is the number of members in the corresponding cluster.
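The center and radius computations of (4) and (5) amount to a few lines each; the sketch below assumes a cluster is given as a list of position vectors.

```python
import math

def cluster_center(members):
    """Cluster center: the per-dimension mean of the member positions (Eq. (4))."""
    n, d = len(members), len(members[0])
    return [sum(m[j] for m in members) / n for j in range(d)]

def cluster_radius(members, center):
    """Cluster radius: the mean Euclidean distance of members from the center (Eq. (5))."""
    def dist(m):
        return math.sqrt(sum((m[j] - center[j]) ** 2 for j in range(len(center))))
    return sum(dist(m) for m in members) / len(members)
```

For two members placed symmetrically around the center, the radius is simply their common distance to it.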

2.3. Fitness Tree. Inspired by the Binary Space Partitioning (BSP) tree, the fitness tree was proposed by Shiu and Chi [26]. The establishment of a fitness tree aims at partitioning the whole search space into several subspaces (subregions) according to the history positions of members. Each node of the fitness tree represents a partitioned subspace of the whole search space, and each node has at most two child nodes, a left node and a right node. The union of the search spaces of the left node and the right node is the search space of the parent node. Such a fitness tree is constructed from the series of solutions found by the individuals. The process of building the fitness tree is random, which means its topology for information communication is constructed stochastically. The individuals of an evolutionary algorithm can be regarded as a sequence of solutions sq = {x(1), x(2), ..., x(N)}, where N is the population size. In each cycle, N parent individuals generate N child individuals, and each child individual is inserted into the fitness tree as a leaf node. The fitness tree therefore stores all the searched positions SQ = {sq_1, sq_2, ...}, which means that, for the moment, there is no revisit. In normal evolutionary algorithms, parent individuals with better solutions withstand higher selection pressure, which results in a higher probability of revisiting the search space. In the extreme case, an individual trapped in a local optimum may be selected over and over, so that after several iterations all the individuals are generated from that individual. The revisit phenomenon leads to a precipitous loss of population diversity.
The pseudocode of the fitness tree construction is given in Algorithm 1. A detailed explanation is given as follows.
In the process of inserting a solution into the fitness tree, the current node Curr_node represents the surveyed node. When every surveyed dimension of the current node is equal to the corresponding dimension of the inserted node, that is, Curr_node = x(i), the i-th solution x(i) in the sequence sq = {x(1), x(2), ...} is a revisit.
The core idea of the fitness tree is that, after identifying the dimension with the maximum difference, the new individual either settles as a leaf node or partitions a parent node (subspace) into two binary nodes (sub-subspaces).
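A minimal sketch of this insertion logic follows. It assumes axis-aligned binary splits at the midpoint of the most distinguishable dimension; this split rule is a plausible reading of Algorithm 1, not a verbatim reproduction of it.

```python
class BSPNode:
    """One node of the BSP fitness tree; a leaf stores one evaluated solution."""
    def __init__(self, pos, fit):
        self.pos, self.fit = pos, fit
        self.split_dim = None   # dimension compared at an internal node
        self.split_val = None
        self.left = self.right = None

def insert(root, pos, fit):
    """Insert a solution; return False if it revisits an archived position.

    Descend until a leaf is reached, then split the leaf's subspace on the
    dimension with the largest difference between the two positions."""
    node = root
    while node.split_dim is not None:
        node = node.left if pos[node.split_dim] <= node.split_val else node.right
    if pos == node.pos:
        return False                       # revisit: position already archived
    # choose the most distinguishable dimension to partition on
    d = max(range(len(pos)), key=lambda j: abs(pos[j] - node.pos[j]))
    node.split_dim = d
    node.split_val = (pos[d] + node.pos[d]) / 2.0
    old = BSPNode(node.pos, node.fit)
    new = BSPNode(pos, fit)
    if pos[d] <= node.split_val:
        node.left, node.right = new, old
    else:
        node.left, node.right = old, new
    return True
```

Because the midpoint lies strictly between the two positions on the chosen dimension, the old and new solutions always land in different child subspaces.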

3.1. Multispecies Coevolution Model for Optimization. Inspired by the mutualism phenomenon, the single-population PSO is extended into an interacting multispecies model by enhancing the particle dynamics.

In such a coevolution model, the information for a particle's position update includes not only its personal history information and the global information of its own species but also the global information of other species. Information communication occurs simultaneously within the local species and between neighboring species.
In mathematical terms, the original particle velocity equation and position update equation (please see (2) and (3)) are improved as follows:

$$v^{s}_{i,j}(t+1) = w\, v^{s}_{i,j}(t) + c_1 r_1 \left(p^{s}_{i,j}(t) - x^{s}_{i,j}(t)\right) + c_2 r_2 \left(s^{s}_{j}(t) - x^{s}_{i,j}(t)\right) + c_3 r_3 \left(g_{j}(t) - x^{s}_{i,j}(t)\right), \tag{6}$$

$$x^{s}_{i,j}(t+1) = x^{s}_{i,j}(t) + v^{s}_{i,j}(t+1), \tag{7}$$

where the superscript s represents the s-th species, x^s_i denotes the position of the i-th particle of the s-th species, p^s_i is the current particle's history best position, s^s is the history best position found by its own species, and g is the history best position found by the whole population. In (6), c_1, c_2, and c_3 are constant coefficients; in this paper they are set to the same value, c_1 = c_2 = c_3 = 1.6667. r_1, r_2, and r_3 are random values in the range (0, 1).

In (6), the term c_2 r_2 (s^s_j - x^s_{i,j}) represents the internal communication within the s-th species, and the term c_3 r_3 (g_j - x^s_{i,j}) indicates the symbiotic coevolution between dissimilar species.
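The three-attractor update of (6)-(7) can be sketched as follows. The inertia weight value used here is an assumption (the text only fixes c_1 = c_2 = c_3 = 1.6667); everything else mirrors the equations above.

```python
import random

def multispecies_step(x, v, pbest, species_best, global_best,
                      w=0.7298, c=1.6667):
    """Velocity/position update of Eqs. (6)-(7): a particle learns from its
    own best, its species' best, and the whole population's best.
    (w = 0.7298 is an assumed inertia weight, not specified in the text.)"""
    d = len(x)
    new_v = [w * v[j]
             + c * random.random() * (pbest[j] - x[j])          # personal term
             + c * random.random() * (species_best[j] - x[j])   # within-species term
             + c * random.random() * (global_best[j] - x[j])    # between-species term
             for j in range(d)]
    new_x = [x[j] + new_v[j] for j in range(d)]
    return new_x, new_v
```

As with canonical PSO, when all three attractors coincide with the current position only the inertia term survives.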

3.2. Dynamic K-Means Clustering. In order to implement the multispecies coevolution model, the stochastically generated population must be partitioned into K subspecies. The widely adopted K-means clustering method (please see Section 2.2) is employed for the population division. The main drawback of K-means clustering is that the effect of the population division depends on the initial distribution of the stochastically generated individuals and on the selection of the initial division points. A relatively effective remedy is to set a large number of clusters.
Input: fitness tree and newly generated individuals
Output: normalized fitness tree
(1) Collect all nodes of the fitness tree T_tree = {T_node1, T_node2, ...} and the newly generated individuals
(2) For each dimension:
(3)   Compute the minimum extreme point of the dimension
(4)   Compute the maximum extreme point of the dimension
(5) End For
(6) For each node and each new individual:
(7)   Normalize every dimension value by the extreme points
(8) End For

Algorithm 2: Pseudocode of adaptive normalization of population members.
In our proposed algorithm, the numbers of clusters are predefined as a sequence K = {K_1, K_2, ..., K_m}. During optimization, each individual in every cluster operates independently according to the multispecies coevolution model (please see Section 3.1). After several optimization iterations I_t, the whole population is repartitioned into K_{t+1} clusters; K_t and K_{t+1} are taken in order from K = {K_1, K_2, ..., K_m}. Here, I_1 + I_2 + ... + I_m = T, where T is the total number of iterations. As K_{t+1} is smaller than K_t, the individuals in small clusters may be redistributed into the relatively larger clusters when the number of clusters changes.

To balance exploration and exploitation, the value of I_t is not constant: I_t should be larger than I_{t+1}, so that more iterations are spent while the number of clusters is still large. Moreover, in the process of optimization, the centers of two or more clusters may move too close to or overlap each other. To avoid this phenomenon and to make every cluster locally search its own part of the landscape, the relatively small cluster among two or more adjacent clusters is decomposed and its individuals are redistributed into the other adjacent clusters. The evaluation criterion is

$$\mathrm{Dis}_{\mathrm{clusters}} - r_{\mathrm{cluster1}} - r_{\mathrm{cluster2}} < 0,$$

where Dis_clusters is the distance between the centers of two adjacent clusters and r_cluster1 and r_cluster2 are the radii of the two adjacent clusters (please see (5)); the left-hand side represents the gap between the two adjacent clusters.
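The decomposition trigger can be sketched as a simple test on two adjacent clusters, with the cluster radii computed as in (5). The use of a non-strict comparison (touching clusters also trigger) is a small assumption.

```python
import math

def clusters_overlap(center1, r1, center2, r2):
    """True when the gap between two adjacent clusters is non-positive,
    i.e. Dis_clusters - r_cluster1 - r_cluster2 < 0, which triggers
    decomposition of the smaller cluster."""
    dis = math.sqrt(sum((a - b) ** 2 for a, b in zip(center1, center2)))
    return dis - r1 - r2 <= 0
```

Two clusters whose radii sum to less than their center distance are left alone; once they drift close enough to overlap, the smaller one is dissolved.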

3.3. Adaptive Normalization of Population Members. In the proposed multispecies particle swarm optimization, the idea of the fitness tree is also employed to overcome the revisit phenomenon. Here, a modified adaptive normalized fitness tree is proposed.
In the process of inserting a new individual into the fitness tree (please see Algorithm 1), there is a potential drawback: the choice of the comparison dimension depends on the raw maximum difference among the dimensions of the two nodes, which makes it sensitive to the scale of each dimension.
To overcome this drawback, a normalization method is employed to determine which dimension has the largest distinguishable difference. First, the extreme points of the newly generated population P_new and the constructed fitness tree T_tree are determined by identifying the minimum and maximum values x_d^min and x_d^max for each dimension d = 1, 2, ..., D and by constructing the extreme points x^min = (x_1^min, x_2^min, ..., x_D^min) and x^max = (x_1^max, x_2^max, ..., x_D^max). Each dimension value of the nodes in the fitness tree T_node is then translated by subtracting the corresponding dimension value of the extreme point and dividing by the distribution range of the corresponding dimension:

$$\bar{x}_{d} = \frac{x_{d} - x_{d}^{\min}}{x_{d}^{\max} - x_{d}^{\min}}, \quad d = 1, 2, \dots, D.$$

The pseudocode of the adaptive normalization of population members is listed in Algorithm 2.
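A sketch of this per-dimension min-max normalization follows, applied to the pooled set of archived nodes and new individuals (here simply a list of position vectors).

```python
def normalize(population):
    """Per dimension, shift by the minimum and divide by the range over all
    points, so the 'largest difference' comparison becomes scale-free.
    Degenerate dimensions (zero range) are mapped to 0.0."""
    d = len(population[0])
    lo = [min(p[j] for p in population) for j in range(d)]
    hi = [max(p[j] for p in population) for j in range(d)]
    return [[(p[j] - lo[j]) / (hi[j] - lo[j]) if hi[j] > lo[j] else 0.0
             for j in range(d)] for p in population]
```

After normalization every dimension spans [0, 1], so a large raw range in one dimension can no longer dominate the split-dimension choice.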

3.4. Algorithmic Framework. In this paper, a new multispecies coevolution particle swarm optimization based on previous search history is proposed. K-means clustering is applied to dynamically partition the whole population into multiple species, and the fitness tree is employed to prevent the individuals from revisiting previously searched regions.

The pseudocode of the proposed algorithm is listed in Algorithm 3.

(11) Update the positions of individuals according to (6) and (7);
(12) Normalize the dimensions of each individual as in Algorithm 2;
(13) Insert the new individuals into the fitness tree as in Algorithm 1;
(14) Update some individuals from the fitness tree;
(15) Calculate the fitness values of the individuals;

Algorithm 3: Pseudocode of multispecies coevolution particle swarm optimization based on previous search history.
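To make the overall flow concrete, the following compressed sketch runs the main loop on the sphere function. Plain k-means-style seeding, a hash-based revisit archive, and the inertia weight are all simplifications standing in for the paper's BSP fitness tree and dynamic clustering machinery; only the three-attractor update follows (6)-(7) directly.

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def mcpso_psh_sketch(dim=2, pop=30, iters=200, k=3, seed=1):
    """A compressed sketch of Algorithm 3: partition the swarm into k species,
    update each particle from its personal, species, and global bests, and
    skip re-evaluating revisited positions via a hashed archive."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    P = [x[:] for x in X]                         # personal best positions
    Pf = [sphere(x) for x in X]                   # personal best fitness
    archive = {tuple(round(v, 6) for v in x) for x in X}
    for _ in range(iters):
        # assign each particle to the nearest of k randomly seeded centers
        centers = rng.sample(P, k)
        species = [min(range(k), key=lambda c: sum((X[i][j] - centers[c][j]) ** 2
                                                   for j in range(dim)))
                   for i in range(pop)]
        g = min(range(pop), key=lambda i: Pf[i])  # global best index
        sbest = {s: min((i for i in range(pop) if species[i] == s),
                        key=lambda i: Pf[i]) for s in set(species)}
        for i in range(pop):
            s = sbest[species[i]]
            for j in range(dim):
                V[i][j] = (0.7298 * V[i][j]
                           + 1.6667 * rng.random() * (P[i][j] - X[i][j])
                           + 1.6667 * rng.random() * (P[s][j] - X[i][j])
                           + 1.6667 * rng.random() * (P[g][j] - X[i][j]))
                X[i][j] += V[i][j]
            key = tuple(round(v, 6) for v in X[i])
            if key in archive:                    # revisit: skip re-evaluation
                continue
            archive.add(key)
            f = sphere(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
        # a real run would also shrink k on a schedule (Section 3.2)
    return min(Pf)
```

The returned value can only decrease over the run, since personal bests are updated monotonically.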
Experimental Studies

In all experiments in this section, the values of the common parameters used in each algorithm, such as the population size and total generation number, are chosen to be the same. The population size is set as 100 and the maximum evaluation number is set as 100000. For the continuous test functions used in this paper, the dimensions are set as 10D and 30D.
All the control parameters for the EA and SI algorithms are set to be default of their original literatures.
For the canonical PSO, the learning rates c_1 and c_2 are both set as 2.0.
For the canonical DE, the zoom factor is set as F = 0.9 and the crossover rate is set as CR = 0.9.
For NrGA, the crossover rate of 0.8 and the mutation rate of 0.01 are adopted [26].

Test and Benchmark Problems. According to the No Free Lunch theorem, "for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class" [29]. To fully evaluate the performance of MCPSO-PSH without a conclusion biased towards particular chosen problems, a set of ten basic benchmark functions and ten CEC2005 benchmark functions [30] is employed. The formulas of these functions are presented below.

Results for Basic Benchmark Functions.
In this experiment, the proposed MCPSO-PSH algorithm is compared with nonrevisiting genetic algorithm (NrGA), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), the canonical particle swarm optimization (PSO) algorithm, and differential evolution (DE) algorithm.
For the 10D benchmark test functions f1-f10 over 10 runs, the basic statistical results, including the means and standard deviations of the function values, are listed in Table 1. From Table 1, it is obvious that the canonical PSO and DE algorithms demonstrate worse performance than MCPSO-PSH, NrGA, and CMA-ES. For the low-dimensional (10D) unimodal functions f1-f5, the consistency of the CMA-ES algorithm is relatively good: for function f1, CMA-ES gets the best results, and for functions f2-f5 its performance is rather competitive. Compared with the canonical PSO and the other state-of-the-art algorithms, MCPSO-PSH introduces the dynamic multispecies strategy based on K-means clustering to enhance the local search ability and the fitness tree to overcome the revisit phenomenon, which results in a large improvement in performance. The MCPSO-PSH and NrGA algorithms, both with a nonrevisit strategy, have nearly the same results on low-dimensional problems. Benefitting from the dynamic K-means clustering and the adaptive normalization of population members, the improved multispecies PSO performs slightly better than NrGA on functions f1-f5. For the low-dimensional (10D) multimodal functions f6-f10, the proposed MCPSO-PSH algorithm has notably smaller mean values and standard deviations than the other algorithms on function f6. For functions f7 and f8, MCPSO-PSH and NrGA obtain the global optimum with high efficiency; this is due to the fact that the nonrevisit strategy based on the BSP fitness tree effectively enhances the global search, especially when a large number of local optima exist in the search space. For functions f9 and f10, the CMA-ES algorithm also reaches the global optimum, because MCPSO-PSH, NrGA, and CMA-ES each have special mechanisms for jumping out of local optima. On the whole, for unimodal and multimodal functions alike, the performance of the proposed MCPSO-PSH is consistent.
For the 30D benchmark test functions f1-f10 over 10 runs, the basic statistical results, including the means and standard deviations of the function values, are listed in Table 2. As the dimension increases, the convergence rate becomes slower, and the results in Table 2 show great disparities in convergence accuracy. For the high-dimensional unimodal problems f1-f5, the MCPSO-PSH algorithm outperforms the other four algorithms because its dynamic clustering strategy can accelerate the convergence rate, especially in the later evolution period. The performance of the NrGA algorithm degrades slightly even though it also has a nonrevisit strategy for improving search efficiency; only on functions f1-f5 does CMA-ES outperform NrGA in convergence rate and accuracy. For the high-dimensional multimodal problems f6-f10, MCPSO-PSH performs well. With the dynamic multispecies strategy based on K-means clustering and the nonrevisit strategy based on the BSP fitness tree, the MCPSO-PSH algorithm can well avoid premature convergence to local optima when solving high-dimensional multimodal problems. From the basic statistical results, NrGA also shows outstanding performance on test functions f6 and f10, which means the nonrevisit strategy based on the BSP fitness tree plays an important role in guiding the search direction. On the whole, the performance of all algorithms tested here is ordered as MCPSO-PSH > NrGA > CMA-ES > PSO > DE.
The convergence curves of the MCPSO-PSH algorithm and the other EAs on test functions f1-f10 in 10D and 30D are shown in Figure 2. It can be observed that, as the dimension increases, the convergence rate of the proposed MCPSO-PSH algorithm becomes relatively faster. Predictably, with a further increase in the dimensions of the test functions, the MCPSO-PSH algorithm will have an overwhelming superiority in solving high-dimensional problems.

Results for CEC2005 Benchmark Functions

The statistical results in Table 3 show that MCPSO-PSH has significantly outstanding performance in all these cases except f14 and f20. For f14, although all the algorithms provide similar results (except DE), the solutions of MCPSO-PSH are still rather competitive. For f20, MCPSO-PSH achieved the second best solution, just behind NrGA. The remarkable performance of MCPSO-PSH on f11-f15 indicates that MCPSO-PSH also applies to unimodal functions, though it is primarily designed for multimodal functions. For the multimodal functions f16-f19, MCPSO-PSH outperforms NrGA, CMA-ES, PSO, and DE. In terms of average obtained results, MCPSO-PSH is better than NrGA on f12, f13, f15, f16, f17, and f18, and roughly equivalent to it on f11 and f19. MCPSO-PSH and NrGA outperform the other three algorithms in most cases, which demonstrates the great promise of the nonrevisiting strategy for simultaneously maintaining population diversity and improving search efficiency. Obviously, MCPSO-PSH is superior to NrGA, which it owes to its unique dynamic multispecies strategy based on K-means clustering.

To intuitively depict the search effect, box plots are employed to demonstrate the statistical performance of the five algorithms, as shown in Figure 3. Although the nonrevisiting strategy is implemented in both NrGA and MCPSO-PSH, the robustness of NrGA is still undesirable. MCPSO-PSH with the dynamic multispecies strategy shows a great enhancement of exploring ability.
From the above observations, the superiority of MCPSO-PSH over the other peer algorithms is evident.

Multilevel Threshold for Image Segmentation
5.1. Otsu Method. The Otsu multithreshold measure [31] has been widely employed to determine optimal thresholds that provide satisfactory image segmentation. The traditional Otsu method can be described as follows.

Let the grey levels of a given image with N pixels range over [0, L-1] and let h(i) denote the number of pixels at the i-th grey level; p(i) = h(i)/N is then the probability of grey level i.

Otsu defined the between-class variance as the sum of sigma functions of each class. In bilevel thresholding with threshold t, the grey levels are divided into classes C_0 = {0, ..., t-1} and C_1 = {t, ..., L-1} with probabilities

$$\omega_0 = \sum_{i=0}^{t-1} p(i), \qquad \omega_1 = \sum_{i=t}^{L-1} p(i),$$

and mean levels

$$\mu_0 = \frac{1}{\omega_0}\sum_{i=0}^{t-1} i\, p(i), \qquad \mu_1 = \frac{1}{\omega_1}\sum_{i=t}^{L-1} i\, p(i),$$

so that the between-class variance is

$$\sigma_B^2(t) = \omega_0\left(\mu_0 - \mu_T\right)^2 + \omega_1\left(\mu_1 - \mu_T\right)^2, \tag{22}$$

where μ_T is the mean intensity of the original image. The optimal threshold is derived by maximizing the between-class variance function:

$$t^{*} = \arg\max_{0 \le t \le L-1} \sigma_B^2(t).$$

Bilevel thresholding based on the between-class variance can be extended to multilevel thresholding. With K-1 thresholds t_1 < t_2 < ... < t_{K-1} partitioning the grey levels into K classes, the extended between-class variance function is

$$\sigma_B^2\left(t_1, \dots, t_{K-1}\right) = \sum_{k=1}^{K} \omega_k\left(\mu_k - \mu_T\right)^2,$$

where ω_k and μ_k are the probability and mean level of the k-th class. The optimal thresholds are found by maximizing the between-class variance function:

$$\left(t_1^{*}, \dots, t_{K-1}^{*}\right) = \arg\max \sigma_B^2\left(t_1, \dots, t_{K-1}\right). \tag{38}$$

Equation (38) is used as the objective function to be optimized (maximized). A close look at this equation shows that it is very similar to the expression for the uniformity measure. The cost of exhaustive search grows rapidly with the number of threshold levels, which means the high-dimensional problem for image segmentation cannot be solved by the traditional search method.
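The multilevel objective (38) can be sketched directly from a grey-level histogram; in the sketch below, `hist[i]` is the pixel count of grey level i and `thresholds` holds the K-1 cut points.

```python
def between_class_variance(hist, thresholds):
    """Extended Otsu objective: thresholds split the grey levels into classes;
    return the sum of w_k * (mu_k - mu_T)^2 over all classes (Eq. (38))."""
    total = sum(hist)
    p = [h / total for h in hist]                       # grey-level probabilities
    mu_T = sum(i * pi for i, pi in enumerate(p))        # mean intensity
    bounds = [0] + sorted(thresholds) + [len(hist)]     # class boundaries
    var = 0.0
    for k in range(len(bounds) - 1):
        lo, hi = bounds[k], bounds[k + 1]
        w = sum(p[lo:hi])                               # class probability
        if w > 0:
            mu = sum(i * p[i] for i in range(lo, hi)) / w
            var += w * (mu - mu_T) ** 2
    return var
```

An EA-based Otsu method then simply searches over threshold vectors to maximize this function instead of enumerating all combinations exhaustively.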
In all experiments, to ensure that the initial values of each random algorithm are drawn from the same range, we used the MATLAB command rand(N, D) * (maxG − minG) + minG, where maxG and minG are the maximum and minimum grey levels of the tested images. Figure 4 presents the six original images and their histograms.

Experimental Results of Multilevel Threshold
Case I: Multilevel Threshold Results with K − 1 = 2, 3, 4. We employ (38) as the fitness function to compare performances. Since the classical Otsu method is an exhaustive search algorithm, we regard its results as the "standard." Table 4 shows the fitness function values (with K − 1 = 2, 3, 4) attained by the Otsu method, along with the mean computation time and the corresponding optimal thresholds. For K − 1 > 4, owing to the unbearable consumption of CPU time, we do not list the corresponding values in our experiment. These "standard" results are used for comparison with the results attained by the optimization methods.
It is well known that an EA-based Otsu method for multilevel thresholding segmentation can only accelerate the computation: the attained results can get extremely close to the optimal results but cannot improve on them. Using the proposed MCPSO-PSH and the other evolutionary algorithms, the mean fitness function values and their standard deviations obtained by the tested algorithms (with K − 1 = 2, 3, 4) are shown in Table 5. Larger mean values and smaller standard deviations in Table 5 indicate better results.
According to the mean CPU times shown in Table 6, there is no obvious difference between the EA-based Otsu methods.All the EA-based Otsu methods are suitably efficient and practical in terms of time complexity for high-dimensional image segmentation problems.Therefore, in comparison with the traditional Otsu algorithm, all the EA-based Otsu methods effectively shorten the computation time.
Among the EA-based Otsu methods listed in Table 5, MCPSO-PSH obtains results equal or close to those obtained by the traditional method when K − 1 = 2, 3, 4. Moreover, the standard deviations obtained by MCPSO-PSH are considerably small among the results obtained by the EA-based Otsu methods. This is attributed to the fact that MCPSO-PSH has a good balance between the global and local search abilities. Therefore, the MCPSO-PSH-based Otsu method is suitable for the low-dimensional segmentation problems (K − 1 = 2, 3, 4).
To further investigate the search performance of each population-based method on high-dimensional segmentation problems, we run these algorithms on image segmentation with 5, 7, and 9 thresholds. Because of its unacceptable computation time, the traditional Otsu method is excluded from this test. The average fitness values and standard deviations obtained by the EA-based Otsu methods are listed in Table 7, where larger values and smaller standard deviations indicate better results.
From Table 7 it is obvious that, as the dimension grows, statistically significant gaps open up among the EA-based Otsu methods, in terms of both efficiency (fitness values) and stability (standard deviation). Thanks to the nonrevisit strategy based on the Binary Space Partitioning fitness tree and the dynamic multispecies strategy based on K-means clustering, MCPSO-PSH attains the best performance and stability in high dimensions and is more efficient than the canonical PSO and the other classical EAs for image segmentation with the Otsu method. NrGA, which also uses a nonrevisit strategy, obtains more acceptable results than the remaining algorithms. In addition, the fitness values and standard deviations show that the performance of the CMA-ES-based Otsu method improves greatly as the dimension increases. The results in Table 7 indicate that the MCPSO-PSH-based Otsu method is better suited to resolving multilevel image segmentation problems.
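The nonrevisit mechanism can be pictured as an archive that subdivides the search space as evaluated points arrive and rejects candidates that land in an already-visited region. The following one-dimensional toy sketch is illustrative only: the paper's fitness tree also stores fitness values and operates on multidimensional positions, and all names here are invented for the example:

```python
class BSPNode:
    """1-D Binary Space Partitioning archive: each leaf stores at most one
    visited point, and an occupied leaf is split when a new point arrives."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.point = None              # visited point held by this leaf
        self.left = self.right = None

    def insert(self, x, eps=1e-3):
        """Archive x; return False when x revisits an occupied region."""
        node = self
        while node.left is not None:   # descend to the leaf containing x
            node = node.left if x < (node.lo + node.hi) / 2 else node.right
        if node.point is None:
            node.point = x
            return True
        if abs(node.point - x) < eps:  # too close to a stored point: revisit
            return False
        old, node.point = node.point, None
        while True:                    # split until the two points separate
            mid = (node.lo + node.hi) / 2
            node.left = BSPNode(node.lo, mid)
            node.right = BSPNode(mid, node.hi)
            a = node.left if old < mid else node.right
            b = node.left if x < mid else node.right
            if a is not b:
                a.point, b.point = old, x
                return True
            node = a
```

Rejected candidates can then be resampled, so function evaluations are never spent on regions whose fitness is already known; this is the efficiency gain the nonrevisit strategy provides on the expensive Otsu objective.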
As mentioned above, the EA-based Otsu methods adopt different strategies for global and local search, and a well-designed optimizer should balance exploration and exploitation. The above tests show that the MCPSO-PSH-based Otsu method outperforms the other EA-based Otsu methods for multilevel image segmentation, especially for high-dimensional thresholding (5, 7, and 9 thresholds). Moreover, compared with the traditional Otsu method, it shortens the computation time.

Conclusion
In this paper, we have presented a new coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH). By storing and analyzing the positions and fitness values of all evaluated solutions in the Binary Space Partitioning tree archive, MCPSO-PSH prevents its individuals from revisiting already-searched regions and improves search efficiency. The proposed algorithm adopts the K-means clustering method to dynamically partition the population into clusters, and the number of clusters changes frequently so that information is exchanged among the different clusters. The algorithm thereby keeps a good balance between exploration and exploitation and, as a result, showed promising results among the compared algorithms. Finally, we applied the proposed algorithm to the multilevel image segmentation problem, where it obtained favorable results compared with the other methods. The use of MCPSO-PSH in other real-world problems merits further research in our future work.

While this research is promising, there is a large margin of improvement for future research:

(1) The number of clusters and the repartition cycle are preset in this work. According to our preliminary experiments, different values of these two parameters, which determine the range and rate of information exchange among species, affect the results on several functions; we preset them to values suitable for most test cases. How to adjust them adaptively for different functions is to be explored in the next stage.

(2) The nonrevisit strategy based on the Binary Space Partitioning fitness tree partitions the whole search space into subregions according to the historical positions of the members. Our preliminary observation of the particles' trajectories shows that most global best positions fall under a single parent node or a small patch of leaf nodes of the fitness tree, especially in the later period of evolution, so most nodes in the tree lack vitality. In future work, we may therefore extend the static fitness tree to a dynamic one with a node age or lifecycle feature.

(3) We would also like to introduce the proposed dynamic multispecies and nonrevisit strategies into other swarm-based algorithms. Applying MCPSO-PSH to more real-world complex problems will also be studied in future work.

Figure 3:
Figure 3: Box plots of results obtained by all algorithms. Indices 1 to 5 on the horizontal axis denote MCPSO-PSH, NrGA, CMA-ES, PSO, and DE, respectively. Each box has lines at the lower quartile, median, and upper quartile; the whiskers extend to the most extreme data points not considered outliers; outliers (denoted by +) are data with values beyond 100 units of interquartile range.

Table 1:
Performance of all algorithms on 10D test functions f1-f10.

Table 2:
Performance of all algorithms on 30D test functions f1-f10.

Table 4:
Objective values and thresholds by the Otsu method.

Table 5:
Objective value and standard deviation by the compared EA-based methods on Otsu algorithm.

Table 6:
The mean CPU time of the compared EA-based methods on Otsu algorithm.

Table 7:
Objective value and standard deviation by the compared EA-based methods on Otsu algorithm.