Improved Biogeography-Based Optimization Algorithm by Hierarchical Tissue-Like P System with Triggering Ablation Rules

BBO is a relatively new metaheuristic optimization algorithm based on the science of biogeography. It solves optimization problems through the migration and mutation of species between habitats. Many improved BBO algorithms have been proposed, but shortcomings remain in global optimization ability, convergence speed, and algorithm complexity. In response to these problems, this paper proposes an improved BBO algorithm (DCGBBO) based on a hierarchical tissue-like P system with triggering ablation rules. Membrane computing is a branch of natural computing that aims to abstract computational models (P systems) from the structure and function of biological cells and from the collaboration of cell groups such as organs and tissues. In this paper, firstly, a dynamic crossover migration operator is designed to improve the global search ability and increase species diversity. Secondly, a dynamic Gaussian mutation operator is introduced to speed up convergence and improve the local search capability. To guarantee the correctness and feasibility of the mutation, a unified maximum mutation rate is designed. Finally, a hierarchical tissue-like P system with triggering ablation rules is combined with the DCGBBO algorithm, using evolution rules and communication rules to realize the migration and mutation of solutions and to reduce computational complexity. In the experiments, eight classic benchmark functions and the CEC 2017 benchmark functions are applied to demonstrate the effectiveness of our algorithm. We also apply the proposed algorithm to segment four colour pictures, and the results prove better than those of other algorithms.


Introduction
Membrane computing is a new field of natural computing that aims to abstract computing models from the function and structure of living cells and from the coordination of tissues and organs [1]. Building on DNA computing research, and inspired by the protective role the cell membrane plays while substances react inside the cell, the computational model of membrane computing was proposed and named the membrane system, or P system [2]. Membrane computing models are grouped into three main types: (1) cell-like P systems [2], (2) tissue-like P systems [3], and (3) spiking neural P systems [4]. Zhang et al. [5] have summarized the research on the efficiency and computing power of the three computing models. In this paper, the tissue-like P system is applied as the basis, and a hierarchical tissue-like P system with triggering ablation rules is proposed to assist a variant of the intelligent optimization algorithm BBO in seeking out the optimal solution. Since the tissue-like P system was proposed, many extended systems have been introduced, their computational universality has been proved, and some have been combined with practical problems.
There are tissue P systems with multiple channel states [6], tissue-like P systems with new evolutional symport/antiport rules [7], tissue P systems with cooperating rules [8], a unidirectional tissue P system with promoters [9], timed steady-state tissue-like P systems with evolutional symport/antiport rules [10], and image segmentation with a complex chain P system based on an evolutionary mechanism [11]. In these systems, objects realize computation through evolution rules and communication rules; the application of the rules is nondeterministic, and rules can be applied in a maximally parallel manner, thereby enhancing computational efficiency.
Biogeography-based optimization (BBO) [12] is an intelligent optimization algorithm; like other optimization algorithms (particle swarm optimization (PSO) [13], artificial bee colony (ABC) [14], ant colony optimization (ACO) [15], differential evolution (DE) [16,17], Grey Wolf Optimizer (GWO) [18], monarch butterfly optimization (MBO) [19], earthworm optimization algorithm (EWA) [20], elephant herding optimization (EHO) [21], moth search (MS) algorithm [22], slime mould algorithm (SMA) [23], and Harris hawks optimization (HHO) [24]), it constructs a fixed calculation scheme to solve diverse optimization problems. The gaining-sharing knowledge optimization algorithm (GSK) [25], based on the concept of acquiring and sharing knowledge during the human life cycle, was also proposed. There have been many studies on the combination of metaheuristic algorithms and local search; for example, the mutation operator of differential evolution was introduced into PFA to obtain HPFA [26]. A new classification of the sources of inspiration for nature-inspired algorithms has also been designed, dividing them into four groups: evolutionary techniques, swarm intelligence techniques, physics-based techniques, and human-related techniques.
The BBO algorithm is one of the swarm intelligence techniques. Driven by changes in external environmental factors, species evolve within habitats and migrate and communicate between habitats, enriching species diversity and enabling the exploration and exploitation of species, thereby realizing the evolution of habitats. Because the calculation mode of standard BBO is simple and single, its optimization efficiency is limited. Hence, since BBO was proposed, many scholars have made improvements to address the shortcomings of the algorithm. For example, a new migration operator called blended migration was proposed, in which an immigrating feature is blended with the corresponding feature of another solution [27]. An ecogeography-based optimization (EBO) was proposed [28], in which a topological structure is introduced and combined with the habitat population to form ecosystems. BBO-M integrates the differential evolution (DE) algorithm and chaos theory to increase the search ability of the mutation operator [29]. Worst opposition learning and random-scaled differential mutation BBO (WRBBO) was proposed to obtain both global and local search ability [30]; the replacement of the mutation operator greatly reduces the complexity of the algorithm. The fireworks algorithm (FA) and the BBO algorithm were crossed, integrating the advantages of the two algorithms to obtain higher quality solutions [31]. A two-stage biogeography-based optimization with differential evolution (TDBBO) was constructed to address premature convergence to local optima and to reduce the rotation difference [32]. DE/BBO was produced, combining the exploration of DE with the exploitation of BBO effectively so as to solve numerical optimization problems from a global perspective [33].
For the purpose of strengthening the optimization performance and reducing the complexity of the calculation process, a fused and efficient biogeography-based optimization (EMBBO) algorithm was designed, which utilizes a new example learning approach [34]. In this paper, a novel DCGBBO algorithm is designed: the convergence speed and optimization effect are improved through a dynamic crossover migration operator and a dynamic Gaussian mutation operator, and the computational complexity is reduced through the introduction of a tissue-like P system with ablation rules. The contributions are as follows: (1) A dynamic crossover migration operator is proposed to strengthen the global search ability and increase species diversity. A dynamic Gaussian mutation operator is designed and executed in two stages, which not only speeds up convergence but also greatly increases the local search ability. To resolve situations where the search falls into a local optimum, an opposition-based learning mechanism is introduced. The above operations improve both the exploitation ability and the exploration ability of the algorithm. (2) A hierarchical tissue-like P system with triggering ablation rules designed for DCGBBO is introduced to reduce the complexity of the calculation process through the execution of rules in the system.
(3) DCGBBO is tested on 8 classic benchmark functions and the CEC 2017 benchmark functions to verify its optimization efficiency, and the Wilcoxon signed-rank test is employed to verify the optimization performance of DCGBBO; the results prove that DCGBBO is more efficient than many state-of-the-art BBO variants and other algorithms. (4) Meanwhile, DCGBBO is applied to segment colour images; the segmentation results reveal that DCGBBO is much better than other competitive algorithms.
The structure of the paper is as follows. Section 2 provides the related work and background knowledge, covering the basic BBO algorithm and the tissue-like P system. The proposed algorithm DCGBBO and the hierarchical tissue-like P system with triggering ablation rules designed for DCGBBO are introduced in Section 3. The experiments and the analysis of the algorithm are described in Section 4. The last section, Section 5, provides the summary and outlook of this paper.

Related Work: BBO Algorithm and Tissue-Like P System
2.1. BBO Algorithm. Biogeography-based optimization (BBO) was proposed in 2008 by Simon [12]. In the BBO algorithm, a habitat represents an individual in the population, that is, a candidate solution of the optimization problem. The combination of many habitats (N candidate solutions) constitutes a population. The suitability index variables (SIVs) that measure the habitability of each habitat correspond to the feature vector (D dimensions) of each candidate solution. The habitat suitability index (HSI) measures the living environment of each habitat and corresponds to the fitness function value f(x) of each candidate solution. The SIVs can be regarded as controllable independent variables and the HSI as the dependent variable of the algorithm.
In the BBO algorithm, if a habitat is suitable for survival, it contains more species. To renew each habitat, information is exchanged between habitats through the migration of species, and the number of species in each habitat changes accordingly. The immigration rate, the emigration rate, and the mutation rate are calculated by equations (1)-(3), respectively:

λ_i = I (1 − S_i / S_max),  (1)

μ_i = E (S_i / S_max),  (2)

m_i = m_max (1 − P_i / P_max),  (3)

where I and E represent the maximum immigration rate and the maximum emigration rate, respectively, S_i denotes the number of species of habitat H_i, S_max represents the maximum number of species of every habitat, and S_0 represents the number of species at which the immigration rate and the emigration rate take the same value. m_max is the maximum mutation probability, whose value is set by the user, P_i is the probability of the species count [12], and P_max is the maximum probability of species. The migration operator and the mutation operator are expressed as equations (4) and (5), respectively:

H_i(SIV_j) ← H_e(SIV_j),  (4)

H_i(SIV_j) ← lb_j + rand · (ub_j − lb_j),  (5)

In equation (4), H_i is the immigration habitat, H_e is the emigration habitat, whose value is obtained via roulette wheel selection, and H_i(SIV_j) denotes the jth SIV of the habitat H_i. In equation (5), H_i is the mutation habitat, and ub_j and lb_j are the maximum and minimum limit values of the jth SIV of the habitat H_i, respectively. The main calculation process of the BBO algorithm is as follows. Initialization: randomly initialize the solution population containing N individuals, the maximum number of iterations (MI), the maximum immigration rate (I), the maximum emigration rate (E), the maximum mutation probability (m_max), and the number of elite solutions K. Calculate the fitness value of each solution according to the designed parameter values, and sort the solutions from the best to the worst.
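The rates and operators of equations (1)-(5) can be summarized in a minimal Python sketch (illustrative only; the paper's experiments use MATLAB, the function names are ours, and a habitat is represented as a plain list of SIVs):

```python
import random

def immigration_rate(I, S_i, S_max):
    # Equation (1): lambda_i = I * (1 - S_i / S_max)
    return I * (1 - S_i / S_max)

def emigration_rate(E, S_i, S_max):
    # Equation (2): mu_i = E * S_i / S_max
    return E * S_i / S_max

def mutation_rate(m_max, P_i, P_max):
    # Equation (3): m_i = m_max * (1 - P_i / P_max)
    return m_max * (1 - P_i / P_max)

def migrate(H_i, H_e, j):
    # Equation (4): the j-th SIV of H_i is replaced by that of H_e
    H_i[j] = H_e[j]

def mutate(H_i, j, lb, ub):
    # Equation (5): the j-th SIV is reset uniformly within [lb_j, ub_j]
    H_i[j] = lb[j] + random.random() * (ub[j] - lb[j])
```

Note that a habitat with many species (high S_i) has a high emigration rate and a low immigration rate, so good solutions mostly export features while poor ones mostly import them.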
Iteration process: in each iteration, the migration rate and mutation rate of each solution are calculated, K elite solutions are retained, and then migration and mutation operations are performed according to equations (4) and (5). Then the fitness values are calculated and the solutions are sorted again, the worst K solutions are replaced with the K previously retained elite solutions, and finally these solutions are sorted and the iteration ends. Iteration stop condition: until the stop condition is satisfied (usually reaching the designed MI), the algorithm executes the iteration process in a loop. Once the stopping condition is satisfied, the first-ranked solution is the optimal solution and the algorithm ends.
This is the result of a single run of the algorithm.
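The single-run procedure above can be sketched in Python (an illustrative sketch under our own simplifications: rank-based rates and a uniform mutation probability m_max instead of equation (3); the real species-count model is richer):

```python
import random

def bbo_run(pop, fitness, I, E, m_max, MI, K, lb, ub):
    """One run of standard BBO (sketch). pop: list of habitats (lists of SIVs);
    fitness: function to minimize."""
    N, D = len(pop), len(pop[0])
    pop.sort(key=fitness)  # best (smallest f) first
    for _ in range(MI):
        elites = [h[:] for h in pop[:K]]                # retain K elite solutions
        # rank-based rates: the best habitat has low immigration, high emigration
        lam = [I * (1 - (N - i) / N) for i in range(N)]
        mu = [E * (N - i) / N for i in range(N)]
        for i in range(N):
            for j in range(D):
                if random.random() < lam[i]:
                    # roulette wheel over emigration rates picks the source habitat
                    e = random.choices(range(N), weights=mu)[0]
                    pop[i][j] = pop[e][j]               # migration, equation (4)
                if random.random() < m_max:             # simplified uniform rate
                    pop[i][j] = lb[j] + random.random() * (ub[j] - lb[j])  # eq. (5)
        pop.sort(key=fitness)
        pop[-K:] = elites                               # worst K replaced by elites
        pop.sort(key=fitness)
    return pop[0]
```

A run on the sphere function, for instance, only needs a population within bounds and a fitness callable.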

2.2. Tissue-Like P System. The tissue-like P system [3] is a calculation model abstracted from the structure and function of cell populations in tissues and is an extension of the cell-like P system. Unlike the cell-like P system, it has no non-elementary membranes; it has only elementary membranes and the environment. Information is communicated between the cells and the environment by symport/antiport rules and evolution rules [7]. Similarly, when certain rules and cell state requirements are satisfied, objects can also communicate between cells.
A formal expression of the tissue-like P system is as follows:

Π = (O, μ, syn, ω_1, …, ω_q, R, R′, σ_0),

where O denotes a nonempty finite alphabet of the system; μ represents the membrane structure of the system; syn ⊆ {1, 2, …, q} × {1, 2, …, q} gives the connection relationship between cells; ω_1, …, ω_q are the multisets initially present in the cells; R is the collection of finite evolution rules; R′ is the set of finite communication rules; and σ_0 = σ_out is the output region that stores the results.
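The components of this tuple can be mirrored by a simple container (illustrative Python; the field names are ours and carry no formal semantics):

```python
from dataclasses import dataclass

@dataclass
class TissuePSystem:
    """Illustrative container for the tuple (O, mu, syn, w_1..w_q, R, R', sigma_out)."""
    alphabet: set            # O: nonempty finite alphabet of objects
    membranes: int           # q: number of cells in the structure mu
    syn: set                 # syn subset of {1..q} x {1..q}: channels between cells
    initial_multisets: list  # w_1..w_q: initial multiset in each cell
    evolution_rules: list    # R: finite evolution rules
    communication_rules: list  # R': symport/antiport rules
    output: int = 0          # sigma_0 = sigma_out: output region
```

Such a container only records the static description of the system; the dynamics come from applying the rules in a maximally parallel, nondeterministic manner.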

Methods
BBO is a simple optimization algorithm with a single calculation mode. It seeks local and global optimal solutions while balancing exploration and exploitation capabilities. However, it still has shortcomings such as limited exploration capability and slow convergence speed. Therefore, to address the shortcomings of the BBO algorithm, some improvements are made, yielding DCGBBO. Then, a hierarchical tissue-like P system with triggering ablation rules is proposed and combined with DCGBBO, and the maximally parallel principle of the P system is used to improve the efficiency of the algorithm so that the optimal solution can be obtained.

3.1. Improved BBO (DCGBBO)
3.1.1. Dynamic Crossover Migration. In order to improve the local search ability, increase species diversity, and enhance the exploitation ability, a dynamic crossover migration operator is designed to replace equation (4); it is described by the following equations:

α = 2 (1 − t / MI),  (6)

H_i(SIV_j) ← rand · H_i(SIV_j) + (1 − rand) · H_e(SIV_j),  (7)

H_i(SIV_j) ← H_i(SIV_j) + α · (1 − 2 · rand) · (H_i(SIV_j) − H_R(SIV_j)),  (8)

where t is the current iteration, α is a cross-scaling factor, and H_R is a habitat selected by roulette wheel selection. The dynamic crossover migration operator is similar to that of [30]. In [30], H_R is chosen by the example learning selection [35] rather than by the roulette wheel selection used in this paper. In our opinion, the example learning selection only chooses the migration habitat from good habitats, and the lack of randomness in the selection increases the possibility of the algorithm falling into a local optimum. Consequently, the roulette wheel selection is preserved. α is inspired by the corresponding factor in GWO [18], which adjusts dynamically as the number of iterations changes. α decreases linearly from 2 to 0, and (1 − 2 · rand) takes values from −1 to 1. In the early iterations, the value of α · (1 − 2 · rand) is relatively large, so a relatively large disturbance is obtained, the diversity of features is increased, and the global search ability is enhanced. Later in the iterations, α · (1 − 2 · rand) is relatively small, the algorithm continues to optimize in the direction of convergence, and the local search ability is strengthened at the same time.
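One plausible form of this operator, consistent with the description above, can be sketched as follows (illustrative Python; the exact published equations may differ from this reading):

```python
import random

def alpha(t, MI):
    # cross-scaling factor: linearly decreased from 2 to 0 over the iterations
    return 2.0 * (1.0 - t / MI)

def dynamic_crossover_migration(H_i, H_e, H_R, j, t, MI):
    """Renew the j-th SIV of H_i using the emigration habitat H_e and a
    roulette-wheel habitat H_R."""
    r = random.random()
    # crossover between the immigrating and emigrating habitats
    H_i[j] = r * H_i[j] + (1.0 - r) * H_e[j]
    # dynamic disturbance relative to H_R; (1 - 2*rand) lies in [-1, 1],
    # so early iterations (large alpha) get large perturbations
    H_i[j] += alpha(t, MI) * (1.0 - 2.0 * random.random()) * (H_i[j] - H_R[j])
```

At the final iteration the disturbance term vanishes, leaving only the convex crossover between the current and emigrating habitats.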

3.1.2. Dynamic Gaussian Mutation.
For mutation strategies, the authors of [36] introduced a novel mutation rule in which the population is divided into best, better, and worst groups, and one member of each partition implements the mutation process. In [37], two mutation strategies are introduced and a hybridization framework is proposed to improve algorithm performance. Gong et al. [38] introduced Gaussian, Cauchy, and Levy mutation operators into BBO for real spaces. In this paper, we propose an improved dynamic Gaussian mutation operator to speed up convergence, reduce algorithm complexity, and improve the exploration capability.
The probability density function of the Gaussian distribution is described as follows:

f(x) = (1 / (σ √(2π))) exp(−(x − μ)² / (2σ²)),  (9)

where μ is the mean and σ is the standard deviation. Then, the dynamic Gaussian mutation operator with μ = 0 and σ = 1 is presented in equations (10) and (11):

H_i(SIV_j) ← H_i(SIV_j) + δ · N(0, 1),  (10)

δ = a · (ub_j − lb_j) for t ≤ MI/2, and δ = a · (ub_j − lb_j) · (1 − t/MI) for t > MI/2,  (11)

where a is a user-defined parameter with a = 0.02, and ub_j and lb_j are the maximum and minimum limit values of the jth SIV of the habitat H_i, respectively.
In the first half of the iterations, δ is a constant value that depends on the maximum and minimum limit values of the feature, so convergence is accelerated and the global search capability is enhanced. As t increases and the iteration enters the second half, (1 − t/MI) decreases, the disturbance value also decreases, the solution continues to search for the optimal value along the direction of convergence established in the first stage, and the local search ability and exploration capability are improved. The occurrence of random events is uncertain, so the HSI of a habitat can change abruptly; this is why BBO has a mutation operator. The mutation probability of the original standard BBO algorithm is obtained from the species count probabilities; the probability is relatively low for a small or large number of species and high when the number of species approaches the equilibrium point. However, as the occurrence of unexpected events is uncertain, obtaining the mutation probability from the species count probabilities cannot guarantee the correctness and feasibility of the mutation. Therefore, in this paper, each habitat has the same probability of mutation, namely the maximum mutation rate m_max.
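The two-stage step size described above can be sketched as follows (illustrative Python with a = 0.02; the names are ours):

```python
import random

A = 0.02  # user-defined parameter a

def delta(t, MI, lb_j, ub_j):
    # first half of the iterations: constant step proportional to the feature range;
    # second half: the step shrinks with the factor (1 - t/MI)
    if t <= MI / 2:
        return A * (ub_j - lb_j)
    return A * (ub_j - lb_j) * (1.0 - t / MI)

def dynamic_gaussian_mutation(H_i, j, t, MI, lb, ub):
    # perturb the j-th SIV with a standard normal draw scaled by delta
    H_i[j] += delta(t, MI, lb[j], ub[j]) * random.gauss(0.0, 1.0)
```

With a feature range of 10, the step is 0.2 throughout the first half and then decays linearly to 0 by the final iteration.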

3.1.3. Opposition-Based Learning Approach.
Because the initial solutions are set relatively randomly, the diversity of the population is constrained, and the direction and degree of migration and mutation differ between solutions, the optimization process may reach a local optimum without being able to jump out of it. As a result, the optimal solution cannot be found and the running time of the algorithm is wasted. Several methods [35,39,40] have been produced to prevent the algorithm from falling into local optima in some respects. Here, a new operator is designed in equation (12):

H_l(SIV_j) ← ub_j + lb_j − H_l(SIV_j),  (12)

where H_l is the last, that is, the worst, habitat.
If the algorithm finds a better value at a certain moment and the value of the solution then remains unchanged over subsequent iterations without being the optimal value, equation (12) can be used to obtain a new value. According to the new value, the solution space may be changed or keep its original order. The pseudocode of DCGBBO is described in Algorithm 1.
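The opposition step maps each feature to its mirror point within the bounds; a minimal sketch:

```python
def opposition(H, lb, ub):
    """Opposition-based learning: map each SIV x_j to lb_j + ub_j - x_j."""
    return [lb[j] + ub[j] - H[j] for j in range(len(H))]
```

Applying it twice returns the original habitat, so the operator is an involution; in practice the opposite habitat is kept only if it improves the fitness.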

3.2. Hierarchical Tissue-Like P System with Triggering Ablation Rules (HTPTA) for DCGBBO. In this part, the DCGBBO algorithm based on HTPTA is proposed and named DCGBBO-HTPTA. We introduce a new trigger mechanism, define ablation rules and copy (replication) rules, and apply evolution rules and communication rules between membranes. In the algorithm, an individual (potential solution) is an object in a membrane of the P system. All objects in a membrane can be regarded as a population, and a cell membrane can be regarded as a subpopulation. Therefore, the optimization of the algorithm is completed by the evolution rules applied to the individuals (potential solutions) inside a membrane and the communication rules between membranes. The hierarchical tissue-like P system with triggering ablation rules, with 2q + 1 cell membranes, designed for the DCGBBO algorithm is shown in Figure 1. The P system is constructed in the following form:

Π = (O, μ, σ_0, σ_1, σ_2, …, σ_{2q+1}, ω_1, …, ω_q, R_{2/3,q}, R′, R^c_{2/3,q}, R^a, β, σ_out),

where O is the alphabet of the system, whose letters correspond to the objects in DCGBBO-HTPTA. μ represents a 3-layer membrane structure, which contains 2q + 1 cell membranes. σ_0 represents the environment. σ_1, σ_2, …, σ_{2q+1} are the 2q + 1 cell membranes; σ_{2q+1} is the global cell membrane, which stores the optimal solution of each subblock after the algorithm ends. ω_1, …, ω_q express the collections of initial objects in the q second-layer cell membranes, and ω_i denotes the collection of initial objects in σ_{2,i}. R_{2/3,q} represents the object-based evolution rules within the membranes: R_{2/3,i} is the evolution rule executed in membrane σ_{2/3,i}, with the specific form [u → v]_{2/3,i}. R′ indicates the communication (symport) rules from membrane σ_{2/3,i} to membrane σ_{1/2/3,i}, with the form [u]_{2,i} → [u]_{3,i}, which means that the object u in membrane σ_{2,i} enters membrane σ_{3,i}.
R^c_{2/3,q} represents the copy-rules for objects in membrane σ_{2/3,i}, with the form [u]_{2,i} → [uu]_{2,i}, which means that there are two copies of u in membrane σ_{2,i} after applying the copy-rule. β is a trigger that exists in the second and third layers of cell membranes. When a specific condition is reached, namely that only K objects, with sequence numbers (N − K + 1, …, N), are to be left in membrane σ_{2/3,q}, the trigger β is activated and the ablation rules R^a are executed, with the form [u]_{2,i} → []_{2,i}, which indicates that the object u in membrane σ_{2,i} is ablated when the condition is met. σ_out is the output cell membrane; the optimal solution in membrane σ_1 is output to the output membrane σ_out, and the environment is the output area of the algorithm.

The pseudocode of DCGBBO (Algorithm 1) is as follows:

Algorithm 1: DCGBBO.
Initialization: randomly initialize N solutions, and set up I, E, MI, and m_max. Calculate the HSI of every solution and sort the solutions from the best to the worst according to their HSIs. Compute the immigration and emigration rates and preserve the K elite habitats.
Iteration process:
for i = 1 to N
    if the solution has stagnated //opposition-based learning
        execute equation (12)
    else
        for j = 1 to D //migration operator
            if rand < λ_i
                carry out the roulette wheel selection to screen out the emigration habitat H_e, and perform equations (7) and (8) to renew H_i(SIV_j)
            end if
            if rand < m_i //mutation operator
                perform equations (10) and (11)
            end if
        end for
    end if
end for
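The trigger and ablation behaviour can be mimicked outside the membrane formalism as follows (illustrative Python; a membrane is modelled as a plain list of objects, and the helper names are ours):

```python
def apply_trigger_ablation(membrane, fitness, K):
    """When the trigger beta fires, keep only the K best objects in the
    membrane and ablate the rest ([u] -> [])."""
    membrane.sort(key=fitness)   # best (smallest fitness) first
    del membrane[K:]             # ablation rule removes all but the elite objects
    return membrane

def copy_rule(membrane, obj):
    """Copy-rule [u] -> [uu]: place a duplicate of an object inside the membrane."""
    membrane.append(list(obj))
```

In the full system the duplicated elites would then be moved to the third-layer membrane by a communication rule before the ablation fires.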
For μ, there is one cell membrane σ_1 in the first layer, and there are q cell membranes in each of the second and third layers, denoted by σ_{2,1}, …, σ_{2,q} and σ_{3,1}, …, σ_{3,q}, respectively. Each pair of corresponding second- and third-layer cell membranes forms a subblock, so there are q subblocks.

Calculation process.
(1) Initialize the number of cell membranes (2q + 1), the number of initial solutions (objects) N, the values of the initial objects in each second-layer cell, and the number of elite objects K. Calculate the fitness value f(x) of each object in the initial cells and sort them, calculate the emigration rate and the immigration rate, and start the iteration. (2) Execute the copy-rule to copy the objects with serial numbers 1 to K, and execute the communication rules to send these K objects to the corresponding third-layer cells. (5) The second iteration in the third layer of cells is started until MI is met; otherwise, steps 2-4 are repeated. When the calculation stops, the optimal object, that is, the optimal solution, is sent from the membrane to the environment.

Experiments and Analysis
In order to test the optimization performance of DCGBBO, a series of experiments are implemented on classic benchmark functions. The experiments were run on a PC with Microsoft Windows 7, a 1.70 GHz CPU, and 4 GB of memory. The experimental environment is MATLAB R2017b.

4.1. Experiment Setting.
The classic benchmark functions used to verify the optimization efficiency include traditional continuous unimodal functions (f_3-f_5), which are used to evaluate the exploitation ability of the algorithm; multimodal functions (f_6-f_8), which are applied to evaluate the exploration ability of the algorithm; a step function (f_1), which has just one minimum and is discrete; and a noisy quartic function (f_2). The particular information of these classic benchmark functions is given in Table 1. CEC 2017 contains novel basic problems, composition test problems, rotated trap problems, graded levels of linkages, and many other problems. In order to further verify the ability of DCGBBO to deal with complex problems, a large number of experiments were carried out on the CEC 2017 [41] benchmark functions.
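The four function classes can be illustrated with generic representatives (illustrative Python; these are standard examples of each class and not necessarily the exact f_1-f_8 of Table 1):

```python
import math
import random

def sphere(x):
    # a continuous unimodal function: single smooth minimum at the origin
    return sum(v * v for v in x)

def step(x):
    # a step function: discrete (piecewise-constant) with one minimum region
    return sum(math.floor(v + 0.5) ** 2 for v in x)

def quartic_noise(x):
    # a noisy quartic function: a uniform noise term is added to every evaluation
    return sum((i + 1) * v ** 4 for i, v in enumerate(x)) + random.random()

def rastrigin(x):
    # a multimodal function with a large number of regularly spaced local optima
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Unimodal functions probe exploitation, multimodal ones probe exploration, and the noisy quartic tests robustness to stochastic fitness values.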

4.2. Experiment Results and Analysis on Classic Benchmark Functions. The DCGBBO-HTPTA is compared with the standard BBO [12] algorithm and three BBO variants, B-BBO [27], BBO-M [29], and TDBBO [32], on the 8 classic benchmark functions and some CEC 2017 benchmark functions; the parameter settings are listed in Table 2. To be fair, the public parameters of the algorithms are set to the same values. For the classic benchmark functions, the number of independent runs (Num) of all the algorithms is 30, and the population size is 50 (N = 50). On the 30D functions, MI is 2500 and the maximum number of function evaluations (MNFE) is MI × N. On the 50D functions, MI is 3500 and MNFE is also MI × N.
In this experiment, the mean value (Mean) and the standard deviation (Std) are used as evaluation indicators to verify the optimization performance of the algorithms. The Mean value represents the optimization ability and the Std value represents the stability, so the Mean value is the key point of the comparison.
From Table 3, we can see that DCGBBO ranks first on 7 of the functions and second on 1 function for the 30D optimization problems, while TDBBO ranks first on 2 functions and B-BBO and BBO-M each rank first on 1 function, ties in the evaluation indicators being counted. On the traditional continuous unimodal functions (f_3-f_4), DCGBBO is better than the other algorithms, and on f_5 it is only slightly inferior to TDBBO; on the multimodal functions (f_6-f_8), the step function (f_1), and the noisy quartic function (f_2), DCGBBO is distinctly better than the other algorithms, which shows that the dynamic Gaussian mutation indeed enhances the global search ability and exploration ability of the algorithm. On function f_1, DCGBBO attains the optimum value (0), which proves that DCGBBO has excellent convergence and optimization performance. From Table 4, we can conclude that DCGBBO ranks first on 7 functions and second on 1 function for the 50D benchmark functions, while B-BBO ranks first on 2 functions and BBO-M and TDBBO each rank first on 1 function, ties again being counted. On the traditional continuous unimodal functions (f_3-f_5), DCGBBO is better than the other algorithms, which shows that the dynamic crossover migration enhances the exploitation ability and local search ability. On the multimodal functions (f_6-f_8), DCGBBO ranks first and its evaluation indicators are obviously better than those of the other algorithms, which shows that the dynamic Gaussian mutation enhances the global search ability and exploration ability. On f_1, all BBO variants attain the best value (0) except standard BBO. On f_2, DCGBBO is slightly inferior to B-BBO, but the difference in the evaluation indicator is very small.
The convergence curves of the 5 compared algorithms on the 8 classic functions are shown in Figures 2 and 3. The abscissa "Iteration" gives the number of iterations, and the ordinate "f value" gives the function values of the 8 benchmark functions obtained by the 5 compared algorithms. From Figure 2, DCGBBO's convergence performance is better than that of the other algorithms on every function. Although on functions f_2 and f_8 DCGBBO's convergence speed is inferior to B-BBO's, it is obviously faster than that of the other compared algorithms. From Figure 3, on the 50-dimensional benchmark functions, the proposed algorithm converges faster than the other algorithms except on f_1, f_2, and f_8, where B-BBO converges slightly faster than DCGBBO; even there, the convergence efficiency of DCGBBO is significantly superior to that of the other compared algorithms. To sum up, DCGBBO has the best convergence performance among the compared algorithms.

4.3. Experiment Results and Analysis on CEC 2017 Benchmark Functions. For the CEC 2017 benchmark functions, the number of independent runs (Num) of all the algorithms is 51, and the population size is 100 (N = 100). The maximum number of function evaluations (MNFE) is 100000, 300000, and 500000 for D = 10, 30, and 50, respectively. Table 5 depicts the comparison of DCGBBO with the considered algorithms on the CEC 2017 benchmark functions in terms of mean and best fitness values for dimension 10. From the table, it can be stated that the mean values (Mean) and the standard deviation values (Std) returned by DCGBBO are better than those of the considered algorithms for twenty-four and twenty-one problems out of thirty, respectively. For D = 30 dimensions, the comparative results are shown in Table 6. From the table, we can see that DCGBBO outperforms the considered algorithms on twenty-two problems for the mean values (Mean) and on twenty-one problems for the standard deviation values (Std). And from Table 7, for D = 50 dimensions, DCGBBO outperforms the considered algorithms on twenty-three problems for the mean values (Mean) and on twenty-two problems for the standard deviation values (Std). In short, DCGBBO has better optimization efficiency than the other modified BBO variants and standard BBO. In particular, as the dimension of the solution space increases, the performance of the proposed algorithm gradually stabilizes, except for a few more sensitive functions. Furthermore, the convergence behavior of DCGBBO is analyzed for each class of CEC 2017 benchmark problems.
Figure 4 shows the convergence trends of four 10-dimensional benchmark problems, F1, F4, F11, and F24. In the same way, Figures 5 and 6 show the convergence trends of four 30-dimensional and four 50-dimensional benchmark problems, respectively. It can be seen from Figures 4-6 that, among the state-of-the-art methods compared in the experiment, DCGBBO's exploration ability is very good at the beginning. The algorithm converges fast and in a consistent direction and finally reaches the optimal solution. The results show that the proposed DCGBBO can balance local search and global search.

4.4. Experiment Results and Analysis on CEC 2017 Benchmark Functions with respect to State-of-the-Art Algorithms. In order to have a fair comparison of the proposed DCGBBO with state-of-the-art algorithms, the basic CS [42], basic BBO, and CV1.0 [43] algorithms are used. These algorithms are highly competitive and have proven their value in various CEC competitions and in solving other real-world optimization problems. The results for CS and CV1.0 are taken from [44]. The population size is set to 50 (N = 50). The stopping criterion was taken as a total of 10,000 × D function evaluations, with 51 runs performed for each test problem. The error values are calculated as the difference between the obtained and the desired solution; if the difference is less than 10^-8, the error is considered zero. The comparison results are shown in Table 8. The last row of the table gives the values of the Wilcoxon rank-sum test [45]. Here, "+ (win/w)" means that the algorithm under consideration is better than the proposed algorithm, "− (lost/l)" corresponds to the situation where the algorithm under test is worse than DCGBBO, and "= (tie/t)" means that they either have no correlation or have the same statistical results and are equal to each other. From Table 8, we can see that, for the unimodal functions (F1-F3), CV1.0 is highly competitive and attains good results, whereas CS and DCGBBO were not able to converge. For most of the multimodal functions (F4-F10), DCGBBO performs better than basic BBO and basic CS. For the hybrid functions (F11-F20), the CV1.0 algorithm is strongly competitive, but the newly proposed DCGBBO is the best-performing algorithm among them. For the final set of composition functions (F21-F30), the DCGBBO algorithm is again the best-performing algorithm among all the algorithms under test. From the results of the last row of Table 8, we can say that the proposed algorithm is highly competitive and that future modification of the same approach may lead to even better results.
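The error-value convention above can be stated as a one-line rule (illustrative sketch; the function name is ours):

```python
def error_value(found, expected, tol=1e-8):
    """Error = |found - expected|; errors below the tolerance count as zero."""
    err = abs(found - expected)
    return 0.0 if err < tol else err
```

This thresholding avoids treating floating-point residue near the known optimum as a genuine optimization error.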

Parametric Analysis.
In order to test the individual effect of the proposed dynamic crossover migration operator, the dynamic Gaussian mutation operator, and the unified maximum mutation rate m_max on the performance of DCGBBO, experiments were carried out separately on DCGBBO versions without each of the above improvements. The version without the proposed dynamic crossover migration operator is called DCGBBO-1, the version without the proposed dynamic Gaussian mutation operator is called DCGBBO-2, and the version without the proposed unified maximum mutation rate m_max is called DCGBBO-3. Table 9 shows the experimental results of each version on the CEC 2017 benchmark functions with dimension 30; the number of independent runs (Num) of all the algorithms is 51. From Table 9, we can conclude that 25 of the CEC 2017 benchmark functions reflect the advantage of dynamic crossover migration, which increases the global search ability as well as population diversity. Similarly, 23 functions show the optimization benefit of dynamic Gaussian mutation, which greatly improves the local search ability of the algorithm, expands the search range of the solution space around local optima, and accelerates convergence. In addition, on 29 functions the unified maximum mutation rate m_max greatly improves the overall optimization effect of the algorithm. Therefore, each improvement contributes to the optimization performance of the algorithm.
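A minimal sketch of a Gaussian mutation step capped by a unified maximum mutation rate is given below. The noise scale sigma, the clamping strategy, and the function names are our own assumptions, not the paper's exact operator:

```python
import random

def gaussian_mutation(habitat, m_max, sigma, bounds):
    """Mutate each decision variable with probability at most m_max
    (the unified maximum mutation rate) by adding Gaussian noise.
    sigma and the clamp-to-bounds repair are illustrative assumptions."""
    low, high = bounds
    mutated = []
    for x in habitat:
        if random.random() < m_max:          # mutate with bounded probability
            x += random.gauss(0.0, sigma)    # Gaussian perturbation
            x = min(max(x, low), high)       # keep the variable in the search space
        mutated.append(x)
    return mutated
```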

Friedman Test.
To investigate the performance of DCGBBO, the Friedman test [45] was used to statistically compare DCGBBO with the comparison algorithms. The Friedman test is a nonparametric test that uses ranks to detect significant differences among multiple population distributions. Null hypothesis: all the comparison algorithms in the experiment have the same performance on any benchmark function. In the Friedman test, the 8 benchmark functions are treated as a random sample and each optimization algorithm is considered a treatment. The mean values (Mean) of the optimization approaches on each benchmark function are ranked from the largest to the smallest [46]. The rank of the j-th comparison algorithm on the i-th test function is denoted r_{i,j}, and the Friedman test statistic χ²_r is constructed as follows:

χ²_r = [12 / (n·p·(p+1))] · Σ_{j=1}^{p} R_j² − 3n(p+1),  where R_j = Σ_{i=1}^{n} r_{i,j},

where n is the number of benchmark functions and p is the number of comparison algorithms. The Friedman test statistic follows a chi-squared distribution with p − 1 degrees of freedom. The 5 optimization algorithms in this paper are tested on 8 benchmark functions, and the ranks of the mean values (Mean) are obtained, as shown in Tables 10 and 11. For Table 10, the Friedman test statistic is χ²_r = 20.95; for Table 11, it is χ²_r = 21.25. With p − 1 = 4 degrees of freedom, the critical value is 9.488 at the significance level α = 0.05. Since both statistics exceed the critical value, the Friedman test rejects the null hypothesis: the performance levels of the 5 optimization algorithms on the 8 benchmark functions, across the different functional dimensions, are significantly different. In other words, the compared algorithms in this paper obtain significantly different fitness function values.
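The Friedman statistic can be computed directly from the mean-value table. The pure-Python sketch below (function names are our own) ranks each row from largest to smallest, giving tied values the average of their ranks, and then applies the standard formula:

```python
def ranks_desc(row):
    """Rank values from largest to smallest; ties get the average rank."""
    order = sorted(range(len(row)), key=lambda j: -row[j])
    ranks = [0.0] * len(row)
    i = 0
    while i < len(order):
        k = i
        # extend k over a run of tied values
        while k + 1 < len(order) and row[order[k + 1]] == row[order[i]]:
            k += 1
        avg = (i + k) / 2.0 + 1.0  # average rank of the tied run (1-based)
        for idx in order[i:k + 1]:
            ranks[idx] = avg
        i = k + 1
    return ranks

def friedman_statistic(means):
    """means[i][j]: mean result of algorithm j on benchmark function i.
    Returns chi^2_r = 12/(n p (p+1)) * sum_j R_j^2 - 3 n (p+1)."""
    n, p = len(means), len(means[0])
    rank_sums = [0.0] * p
    for row in means:
        for j, r in enumerate(ranks_desc(row)):
            rank_sums[j] += r
    return 12.0 / (n * p * (p + 1)) * sum(R * R for R in rank_sums) - 3.0 * n * (p + 1)
```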

Wilcoxon Signed-Rank Test.
The Wilcoxon signed-rank test is a nonparametric test method [45] used to check whether there are significant differences between the proposed algorithm and the comparison algorithms. The p values are computed using IBM SPSS Statistics 25.0. The Wilcoxon signed-rank test is performed on the 10-dimensional, 30-dimensional, and 50-dimensional CEC 2017 benchmark functions. The data are taken from Tables 6-8, and the results are shown in Tables 12-14. Among them, "R+" is the sum of ranks for the problems on which DCGBBO outperformed the comparison algorithm, and "R−" is the sum of ranks for the opposite case. When the DCGBBO algorithm and the comparison algorithm achieve the same optimization performance, the corresponding ranks are split evenly between "R+" and "R−". "w" indicates that DCGBBO beats the comparison algorithm on w functions, while "l" and "t" indicate that DCGBBO loses on l functions and ties on t functions, respectively. From the results, we can conclude that, whether on the 10-, 30-, or 50-dimensional functions, the p values are all less than 0.05, so the optimization performance of the proposed DCGBBO algorithm significantly outperforms the comparison algorithms.
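The R+ and R− rank sums can be computed as sketched below. This is a simplified illustration with our own names: it ranks absolute differences with average ranks for ties, but simply drops zero differences rather than splitting their ranks evenly between R+ and R− as the paper does:

```python
def signed_rank_sums(dcgbbo, comparison):
    """R+ : rank sum where DCGBBO beats the comparison algorithm
    (i.e. the comparison algorithm's error is larger); R- : the opposite.
    Zero differences are dropped (a simplification of the paper's
    even-split handling of ties)."""
    diffs = [c - d for d, c in zip(dcgbbo, comparison) if c != d]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(diffs):
        k = i
        # extend k over a run of equal absolute differences
        while k + 1 < len(diffs) and abs(diffs[order[k + 1]]) == abs(diffs[order[i]]):
            k += 1
        avg = (i + k) / 2.0 + 1.0  # average rank of the tied run (1-based)
        for idx in order[i:k + 1]:
            ranks[idx] = avg
        i = k + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return r_plus, r_minus
```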

Application of DCGBBO to Image Segmentation.
Image segmentation is the technique and process of dividing an image into several specific regions and extracting objects of interest. It is a key step from image processing to image analysis. In this paper, we take the pixel as the basic unit of image segmentation and concentrate on pixel-level image segmentation. Furthermore, we use the improved DCGBBO algorithm to segment colour images, based on the features of and the distances between the pixels of the colour images. We use a clustering optimization algorithm with the Euclidean distance to segment the colour images, and the objective function is constructed by the following equation:

J = (1/n) Σ_{i=1}^{K} Σ_{p ∈ C_i} ‖p − v_i‖,

where K represents the number of clusters, C_i is the i-th cluster, p is a pixel belonging to C_i, v_i is the i-th clustering centre, and n is the number of pixels of the image. The experiments compare DCGBBO with BBO [12], B-BBO [27], BBO-M [29], and TDBBO [32]; N is 50, MI is 100, and Num is 10. We use 4 colour images with a pixel size of 481 × 321 as experimental data, including "Church1," "Church2," "sunflower," and "bird." We take the mean value (Mean), standard deviation (Std), maximum (Maxvalue), and minimum (Minvalue) as the final results to compare the performance of the algorithms; the smaller the value, the more efficient the algorithm. The comparison results are exhibited in Table 15, and the image segmentation results are shown in Figure 7. From Table 15, we can clearly see that the DCGBBO algorithm performs better than the other algorithms in terms of Mean, Std, Maxvalue, and Minvalue. Although the minimum value of the new algorithm is not as good as that of the TDBBO algorithm except on image "Church2," the performance of DCGBBO is more stable than that of the other algorithms. From Figure 7, we can conclude that on "Church1," the segmentation edge of figure f is smoother than that of the other figures and the segmentation region is complete.
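Evaluating such a clustering objective for a candidate set of cluster centres can be sketched as below. This is a hedged reading of the paper's objective: the exact normalisation (sum vs. mean, distance vs. squared distance) is not fully specified here, so we assume the mean nearest-centre Euclidean distance, and the function names are our own:

```python
import math

def segmentation_objective(pixels, centers):
    """Mean Euclidean distance from each pixel (e.g. an RGB triple) to its
    nearest cluster centre. Assigning each pixel to its nearest centre
    realises the clusters C_i implicitly; the 1/n normalisation is an
    illustrative assumption."""
    total = 0.0
    for p in pixels:
        total += min(math.dist(p, v) for v in centers)  # nearest-centre distance
    return total / len(pixels)
```

A metaheuristic such as DCGBBO would then search over the K centre positions so as to minimise this value.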
On "Church2" and "Bird," the segmentation results of the algorithms do not differ much: the target region is segmented relatively completely, but the details are not well preserved. On "Sunflower," each algorithm segments well. Through the above analysis, we can conclude that the proposed DCGBBO algorithm is more effective for colour image segmentation.

Conclusions
In this paper, we introduce an improved BBO algorithm (DCGBBO) based on a hierarchical tissue-like P system with triggering ablation rules. By making full use of the series of rules defined in the system, the iterative process of the optimization algorithm is completed and its effectiveness is improved. For the DCGBBO algorithm itself, starting from the evolutionary principle of the algorithm, dynamic crossover migration, dynamic Gaussian mutation, an opposition-based learning approach, and a unified maximum mutation rate are designed. These operations balance the exploitation and exploration abilities of the algorithm and improve its optimization efficiency. To verify the optimization performance of DCGBBO, a number of experiments were carried out on eight classic benchmark functions. Through extensive experimental analysis, the optimization effect of the proposed algorithm proves better than that of the comparison algorithms. Finally, by segmenting four colour images, our algorithm is shown to outperform the other compared algorithms.
Furthermore, beyond the BBO algorithm, other computational intelligence algorithms such as particle swarm optimization (PSO), artificial bee colony (ABC), ant colony optimization (ACO), differential evolution (DE), Grey Wolf optimizer (GWO), monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), the moth search (MS) algorithm, the slime mould algorithm (SMA), and Harris hawks optimization (HHO) could also be tested on benchmark functions and might be used to segment images. For the other two P systems, there may be further combinations for finding the optimal solution.

Data Availability
Data supporting the results of this study can be obtained by contacting the authors. The four images tested in this paper are from BSDS300 and BSDS500, which can be found at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.