In engineering problems, due to physical and cost constraints, the best results obtained by a global optimization algorithm cannot always be realized. Under such conditions, if multiple solutions (local and global) are known, the implementation can be quickly switched to another solution without greatly interrupting the design process. This paper presents a new swarm multimodal optimization algorithm called the collective animal behavior (CAB) algorithm. Animal groups, such as schools of fish, flocks of birds, swarms of locusts, and herds of wildebeest, exhibit a variety of behaviors including swarming about a food source, milling around a central location, or migrating over large distances in aligned groups. These collective behaviors are often advantageous to groups, allowing them to increase their harvesting efficiency, to follow better migration routes, to improve their aerodynamic efficiency, and to avoid predation. In the proposed algorithm, searcher agents emulate a group of animals which interact with each other based on simple biological laws that are modeled as evolutionary operators. Numerical experiments are conducted to compare the proposed method with state-of-the-art methods on benchmark functions. The proposed algorithm has also been applied to the engineering problem of multi-circle detection, achieving satisfactory results.
A large number of real-world problems can be considered multimodal function optimization subjects. An objective function may have several global optima, that is, several points holding objective function values equal to the global optimum. Moreover, it may exhibit other local optima whose objective function values lie near a global optimum. Since the mathematical formulation of a real-world problem often produces a multimodal optimization issue, finding all global optima, or even these local optima, would provide decision makers with multiple options to choose from [
Several methods have recently been proposed for solving the multimodal optimization problem. They can be divided into two main categories: deterministic and stochastic (metaheuristic) methods. When facing complex multimodal optimization problems, deterministic methods, such as the gradient descent method, the quasi-Newton method, and the Nelder-Mead simplex method, may easily get trapped in a local optimum as a result of exploiting only local information. They also strongly depend on a priori information about the objective function, yielding few reliable results.
Metaheuristic algorithms have been developed combining rules and randomness mimicking several phenomena. These phenomena include evolutionary processes (e.g., the evolutionary algorithm proposed by Fogel et al. [
Traditional GAs perform well for locating a single optimum but fail to provide multiple solutions. Several methods have been introduced into the GA’s scheme to achieve multimodal function optimization, such as sequential fitness sharing [
Using a different metaphor, other researchers have employed artificial immune systems (AIS) to solve the multimodal optimization problems. Some examples are the clonal selection algorithm [
Several studies have been inspired by animal behavior phenomena to develop optimization techniques, such as the particle swarm optimization (PSO) algorithm, which models the social behavior of bird flocking or fish schooling [
Recently, the concept of individual organization [
On the other hand, the problem of detecting circular features holds paramount importance in several engineering applications. Circle detection in digital images has commonly been solved through the circular Hough transform (CHT) [
This paper proposes a new optimization algorithm inspired by collective animal behavior. In this algorithm, the searcher agents emulate a group of animals that interact with each other based on simple behavioral rules which are modeled as evolutionary operators. Such operations are applied to each agent considering that the complete group has a memory which stores the best positions seen so far by applying a competition principle. Numerical experiments have been conducted to compare the proposed method with state-of-the-art methods on multimodal benchmark functions. In addition, the proposed algorithm is applied to the engineering problem of multi-circle detection, achieving satisfactory results.
This paper is organized as follows. Section
The remarkable collective behavior of organisms such as swarming ants, schooling fish, and flocking birds has long captivated the attention of naturalists and scientists. Despite a long history of scientific investigation, only recently have we begun to decipher the relationship between individuals and group-level properties [
Animal groups are based on a hierarchical structure [
Recent studies have illustrated how repeated interactions among grouping animals scale to collective behavior. They have also remarkably revealed that collective decision-making mechanisms across a wide range of animal group types, ranging from insects to birds (and even among humans in certain circumstances), seem to share similar functional characteristics [
Despite the variety of behaviors and motions of animal groups, it is possible that many of the different collective behavioral patterns are generated by simple rules followed by individual group members. Some authors have developed different models, such as the self-propelled particle (SPP) model, which attempts to capture the collective behavior of animal groups in terms of interactions between group members following a diffusion process [
On the other hand, following a biological approach, Couzin et al. [
The dynamical spatial structure of an animal group can be explained in terms of its history [
According to these new developments, it is possible to model complex collective behaviors by using simple individual rules and setting a general memory. In this work, the behavioral model of animal groups is employed to define the evolutionary operators of the proposed metaheuristic algorithm. A memory is incorporated to store the best animal positions (best solutions), considering a competition-dominance mechanism.
The CAB algorithm assumes the existence of a set of operations that resembles the interaction rules that model the collective animal behavior. In the approach, each solution within the search space represents an animal position. The “fitness value” refers to the animal dominance with respect to the group. The complete process mimics the collective animal behavior.
The approach in this paper implements a memory for storing the best solutions (animal positions), mimicking the aforementioned biological process. Such memory is divided into two different elements, one for maintaining the best positions found in each generation (
Like other metaheuristic approaches, the CAB algorithm is also an iterative process. It starts by initializing the population randomly, that is, generating random solutions or animal positions. The following four operations are thus applied until the termination criterion is met, that is, the iteration number
Keep the position of the best individuals.
Move toward or away from nearby neighbors (local attraction and repulsion).
Move randomly.
Compete for space within a determined distance (updating the memory).
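The four operations above can be assembled into a single iteration cycle. The following Python sketch is a minimal, hypothetical rendering of that cycle for a maximization problem: the function and parameter names (`n_best`, `p_random`, `rho`) are illustrative assumptions rather than the paper's notation, and the memory update is simplified to a clearing-style competition in which elements closer than `rho` to a fitter element are discarded.

```python
import numpy as np

def cab_sketch(f, bounds, n_pop=30, n_best=10, n_iter=100,
               p_random=0.1, rho=0.05, seed=0):
    """Illustrative sketch of the CAB iteration cycle (maximization).

    f      : objective function over a 1-D numpy array
    bounds : (low, high) sequences defining the search space
    """
    rng = np.random.default_rng(seed)
    low, high = (np.asarray(b, dtype=float) for b in bounds)
    dim = low.size
    # Random initial animal positions.
    pop = rng.uniform(low, high, size=(n_pop, dim))
    memory = pop[np.argsort([-f(x) for x in pop])][:n_best]

    for _ in range(n_iter):
        pop = pop[np.argsort([-f(x) for x in pop])]
        new_pop = np.empty_like(pop)
        # (1) Keep the positions of the best individuals.
        new_pop[:n_best] = pop[:n_best]
        # (2)-(3) Remaining animals: random move, or attraction/repulsion
        # with respect to a randomly chosen memory element.
        for i in range(n_best, n_pop):
            if rng.random() < p_random:
                new_pop[i] = rng.uniform(low, high, size=dim)
            else:
                pivot = memory[rng.integers(len(memory))]
                sign = 1.0 if rng.random() < 0.5 else -1.0
                new_pop[i] = pop[i] + sign * rng.random() * (pivot - pop[i])
        new_pop = np.clip(new_pop, low, high)
        # (4) Competition for space: keep the fittest mutually distinct
        # positions; anything within rho of a fitter element is dominated.
        candidates = np.vstack([memory, new_pop[:n_best]])
        candidates = candidates[np.argsort([-f(x) for x in candidates])]
        kept = []
        for c in candidates:
            if all(np.linalg.norm(c - k) > rho for k in kept):
                kept.append(c)
            if len(kept) == n_best:
                break
        memory = np.array(kept)
        pop = new_pop
    return memory
```

Running the sketch on a simple multimodal function (e.g., sin^6(5*pi*x) on [0, 1], which has five equal maxima) lets the memory accumulate several distinct near-optimal positions rather than a single best point.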
The algorithm begins by initializing a set
All the initial positions
Analogously to the biological metaphor, this behavioral rule, typical in animal groups, is implemented as an evolutionary operation in our approach. In this operation, the first
From the biological inspiration, in which animals experience a random local attraction or repulsion according to an internal motivation, we implement evolutionary operators that mimic such behavior. For this operation, a uniform random number
Following the biological model, under some probability
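The two movement rules described above (attraction/repulsion toward a neighbor, and random relocation) can be sketched as stand-alone operators. This is an illustrative form only; the function names and the equal attraction/repulsion probability are assumptions, not the paper's exact formulation.

```python
import random

def move_near_neighbor(position, neighbor, rng=None):
    """Local attraction or repulsion with respect to a neighbor position.

    With equal probability (an assumption) the animal moves toward (+) or
    away from (-) the neighbor, by a uniform random fraction of the gap.
    """
    rng = rng or random
    r = rng.random()                         # uniform step size in [0, 1)
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return [x + sign * r * (n - x) for x, n in zip(position, neighbor)]

def move_randomly(low, high, rng=None):
    """Random relocation anywhere inside the search-space bounds."""
    rng = rng or random
    return [rng.uniform(a, b) for a, b in zip(low, high)]
```

Because the step is at most the distance to the neighbor in each coordinate, an attracted animal never overshoots past the neighbor, while a repelled one moves the mirrored amount away from it.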
Once the operations to preserve the position of the best individuals, to move toward or away from nearby neighbors, and to move randomly have all been applied to all the
In order to update the memory
The dominance concept, which applies when two animals confront each other within a
In the proposed algorithm, the historic memory
The elements of
Each element
From the resulting elements of
Unsuitable values of
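The competition-based memory update sketched above can be written as a small routine. The names and the greedy ordering below are illustrative assumptions: candidates are ranked by fitness, and any element lying within the dominance distance of an already-kept (and therefore fitter) element is considered dominated and discarded.

```python
import math

def update_memory(memory, candidates, fitness, rho, capacity):
    """Competition (dominance) update of the historical memory.

    memory, candidates : lists of positions as tuples of floats
    fitness            : callable mapping a position to its fitness value
    rho                : dominance distance; closer pairs compete
    capacity           : maximum number of stored memory elements
    """
    # Rank the merged pool best-first; kept elements are always at least
    # as fit as any later candidate they dominate.
    pool = sorted(memory + candidates, key=fitness, reverse=True)
    kept = []
    for p in pool:
        if all(math.dist(p, q) > rho for q in kept):
            kept.append(p)
        if len(kept) == capacity:
            break
    return kept
```

With this scheme, a weaker solution sitting inside the dominance radius of a stronger one is removed, so the memory tends to hold one representative per basin of attraction.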
The computational procedure for the proposed algorithm can be summarized as follows.
Set the parameters
Generate randomly the position set
Sort
Choose the first
Update
Generate the first
Generate the rest of the
for
if
{if
}
else if
{
}
end for
where
If
Just after the optimization process has finished, an analysis of the final
Evolutionary algorithms (EAs) have been widely employed for solving complex optimization problems. These methods have been found to be more powerful than conventional methods based on formal logic or mathematical programming [
EAs face no limitation in drawing on different sources of inspiration (e.g., music-inspired [
Particle swarm optimization (PSO) is undoubtedly one of the most widely employed EA methods using biologically inspired concepts in the optimization procedure. Unfortunately, like other stochastic algorithms, PSO also suffers from premature convergence [
As an alternative to PSO, the proposed scheme modifies some evolution operators to allow not only attracting but also repelling movements among particles. Likewise, instead of considering the best position as the reference, our algorithm uses a set of neighboring elements that are contained in an incorporated memory. Such improvements increase the algorithm's capacity to explore and to exploit the set of solutions operated on during the evolving process.
In the proposed approach, in order to improve the balance between exploitation and exploration, we have introduced three new concepts. The first one is the “attracting and repelling movement”, which establishes that a particle can be not only attracted to but also repelled from other particles. The application of this concept to the evolution operators (
The second concept is the use of the main individual. In the approach, the main individual, which is considered the pivot in the equations (in order to generate attracting and repulsive movements), is not the best (as in PSO) but one element (
Finally, the third concept is the use of an incorporated memory which stores the best individuals seen so far. As it has been discussed in Section
In order to demonstrate the algorithm's step-by-step operation, a numerical example has been set up by applying the proposed method to optimize a simple function, which is defined as follows:
CAB numerical example. (a) 3D plot of the function used as example. (b) Initial individual distribution. (c) Initial configuration of memories
Like all evolutionary approaches, CAB is a population-based optimizer that attacks the starting-point problem by sampling the objective function at multiple, randomly chosen initial points. Therefore, after setting parameter bounds that define the problem domain, 10 (
The 10 new individuals
The remaining 6 new positions
In order to calculate a new position
Finally, after all new positions
In this section, the performance of the proposed algorithm is tested. Section
In this section, we will examine the search performance of the proposed CAB by using a test suite of 8 benchmark functions with different complexities. They are listed in Tables
the consistency of locating all known optima;
the averaged number of objective function evaluations that are required to find such optima (or the running time under the same condition).
The test suite of multimodal functions for Experiment 4.2.
Function  Search space  Sketch 

Deb’s function  
5 optima  



 
Deb’s decreasing function  
5 optima  



 
Roots function  
6 optima  



 
Two dimensional multimodal function  
100 optima  



The test suite of multimodal functions used in the Experiment 4.3.
Function  Search space  Sketch 

Rastrigin’s function  
100 optima  



 
Shubert function  
18 optima  



 
Griewank function  
100 optima  



 
Modified Griewank function  
100 optima  



The experiments compare the performance of CAB against the deterministic crowding [
Since the approach solves real-valued multimodal functions, we have used, in the GA approaches, a consistent real-coded variable representation and uniform crossover and mutation operators for each algorithm, seeking a fair comparison. The crossover probability
In the case of the CAB algorithm, the parameters are set to
To avoid relating the optimization results to the choice of a particular initial population and to conduct fair comparisons, we performed each test 50 times, starting from different randomly selected points in the search domain, as is commonly done in the literature. An optimum
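A tolerance-based success criterion of the kind described above can be sketched as a simple counting routine. The tolerance value used below is an illustrative assumption, not the paper's exact threshold.

```python
def count_optima_found(population, known_optima, tol=0.005):
    """Count the known optima that have at least one individual of the
    final population within distance `tol` (1-D case; `tol` is an
    assumed, illustrative tolerance)."""
    return sum(
        1 for o in known_optima
        if any(abs(x - o) < tol for x in population)
    )
```

Averaging this count over the 50 independent runs yields a consistency measure of the kind reported in the tables that follow.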
All algorithms have been tested in MATLAB on the same Dell OptiPlex GX260 computer with a Pentium 4 2.66 GHz processor, running the Windows XP operating system with 1 GB of memory. The next sections present experimental results for multimodal optimization problems, which have been divided into two groups with different purposes. The first consists of functions with smooth landscapes and well-defined optima (local and global), while the second gathers functions holding rough landscapes and optima that are complex to locate.
This section presents a performance comparison for different algorithms solving multimodal problems
the average of optima found within the final population (NO);
the average distance between multiple optima detected by the algorithm and their closest individuals in the final population (DO);
the average of function evaluations (FE);
the average of execution time in seconds (ET).
Table
Performance comparison among the multimodal optimization algorithms for the test functions
Function  Algorithm  NO  DO  FE  ET 


Deterministic crowding 


7,153 (358)  0.091 (0.013) 
Probabilistic crowding 


10,304 (487)  0.163 (0.011)  
Sequential fitness sharing 


9,927 (691)  0.166 (0.028)  
Clearing procedure 


5,860 (623)  0.128 (0.021)  
CBN 


10,781 (527)  0.237 (0.019)  
SCGA 


6,792 (352)  0.131 (0.009)  
AEGA 


2,591 (278)  0.039 (0.007)  
Clonal selection algorithm 


15,803 (381)  0.359 (0.015)  
AiNet 


12,369 (429)  0.421 (0.021)  
CAB 



 
 

Deterministic crowding  3.53 (0.73) 

6,026 (832)  0.271 (0.06) 
Probabilistic crowding  4.73 (0.64) 

10,940 (9517)  0.392 (0.07)  
Sequential fitness sharing  4.77 (0.57) 

12,796 (1,430)  0.473 (0.11)  
Clearing procedure  4.73 (0.58) 

8,465 (773)  0.326 (0.05)  
CBN  4.70 (0.53) 

14,120 (2,187)  0.581 (0.14)  
SCGA  4.83 (0.38) 

10,548 (1,382)  0.374 (0.09)  
AEGA 


3,605 (426)  0.102 (0.04)  
Clonal selection algorithm 


21,922 (746)  0.728 (0.06)  
AiNet 


18,251 (829)  0.664 (0.08)  
CAB 



 
 

Deterministic crowding  4.23 (1.17) 

11,009 (1,137)  1.07 (0.13) 
Probabilistic crowding  4.97 (0.64) 

16,391 (1,204)  1.72 (0.12)  
Sequential fitness sharing  4.87 (0.57) 

14,424 (2,045)  1.84 (0.26)  
Clearing procedure 


12,684 (1,729)  1.59 (0.19)  
CBN  4.73 (1.14) 

18,755 (2,404)  2.03 (0.31)  
SCGA 


13,814 (1,382)  1.75 (0.21)  
AEGA 


6,218 (935)  0.53 (0.07)  
Clonal selection algorithm  5.50 (0.51) 

25,953 (2,918)  2.55 (0.33)  
AiNet  4.8 (0.33) 

20,335 (1,022)  2.15 (0.10)  
CAB 



 
 

Deterministic crowding  76.3 (11.4) 

1,861,707 (329,254)  21.63 (2.01) 
Probabilistic crowding  92.8 (3.46) 

2,638,581 (597,658)  31.24 (5.32)  
Sequential fitness sharing  89.9 (5.19) 

2,498,257 (374,804)  28.47 (3.51)  
Clearing procedure  89.5 (5.61) 

2,257,964 (742,569)  25.31 (6.24)  
CBN  90.8 (6.50) 

2,978,385 (872,050)  35.27 (8.41)  
SCGA  91.4 (3.04) 

2,845,789 (432,117)  32.15 (4.85)  
AEGA  95.8 (1.64) 

1,202,318 (784,114)  12.17 (2.29)  
Clonal selection algorithm  92.1 (4.63) 

3,752,136 (191,849)  45.95 (1.56)  
AiNet  93.2 (7.12) 

2,745,967 (328,176)  38.18 (3.77)  
CAB 




From the FE measure in Table
To validate that the CAB improvement over other algorithms occurs as a result of CAB producing better search positions over iterations, Figure
Typical results of the maximization of
Deterministic crowding
Probabilistic crowding
Sequential fitness sharing
Clearing procedure
CBN
SCGA
AEGA
Clonal selection algorithm
AiNet
CAB
When comparing the execution time (ET) in Table
This section presents the performance comparison among different algorithms solving multimodal optimization problems which are listed in Table
Our main objective in these experiments is to determine whether CAB is more efficient and effective than other existing algorithms at finding the multiple high-fitness optima of functions
Table
Performance comparison among multimodal optimization algorithms for the test functions
Function  Algorithm  NO  DO  FE  ET 


Deterministic crowding  62.4 (14.3) 

1,760,199 (254,341)  14.62 (2.83) 
Probabilistic crowding  84.7 (5.48) 

2,631,627 (443,522)  34.39 (5.20)  
Sequential fitness sharing  76.3 (7.08) 

2,726,394 (562,723)  36.55 (7.13)  
Clearing procedure  93.6 (2.31) 

2,107,962 (462,622)  28.61 (6.47)  
CBN  87.9 (7.78) 

2,835,119 (638,195)  37.05 (8.23)  
SCGA  97.4 (4.80) 

2,518,301 (643,129)  30.27 (7.04)  
AEGA  99.4 (1.39) 

978,435 (71,135)  10.56 (4.81)  
Clonal selection algorithm  90.6 (9.95) 

5,075,208 (194,376)  58.02 (2.19)  
AiNet  93.8 (7.8) 

3,342,864 (549,452)  51.65 (6.91)  
CAB 



 
 

Deterministic crowding  9.37 (1.91) 

832,546 (75,413)  4.58 (0.57) 
Probabilistic crowding  15.17 (2.43) 

1,823,774 (265,387)  12.92 (2.01)  
Sequential fitness sharing  15.29 (2.14) 

1,767,562 (528,317)  14.12 (3.51)  
Clearing procedure 


1,875,729 (265,173)  11.20 (2.69)  
CBN  14.84 (2.70) 

2,049,225 (465,098)  18.26 (4.41)  
SCGA  4.83 (0.38) 

2,261,469 (315,727)  13.71 (1.84)  
AEGA 


656,639 (84,213)  3.12 (1.12)  
Clonal selection algorithm 


4,989,856 (618,759)  33.85 (5.36)  
AiNet 


3,012,435 (332,561)  26.32 (2.54)  
CAB 



 
 

Deterministic crowding  52.6 (8.86) 

2,386,960 (221,982)  19.10 (2.26) 
Probabilistic crowding  79.2 (4.94) 

3,861,904 (457,862)  43.53 (4.38)  
Sequential fitness sharing  63.0 (5.49) 

3,619,057 (565,392)  42.98 (6.35)  
Clearing procedure  79.4 (4.31) 

3,746,325 (594,758)  45.42 (7.64)  
CBN  71.3 (9.26) 

4,155,209 (465,613)  48.23 (5.42)  
SCGA  94.9 (8.18) 

3,629,461 (373,382)  47.84 (0.21)  
AEGA  98 ( 

1,723,342 (121,043)  12.54 (1.31)  
Clonal selection algorithm  89.2 (5.44) 

5,423,739 (231,004)  47.84 (6.09)  
AiNet  92.7 (3.21) 

4,329,783 (167,932)  41.64 (2.65)  
CAB 



 
 

Deterministic crowding  44.2 (7.93) 

2,843,452 (353,529)  23.14 (3.85) 
Probabilistic crowding  70.1 (8.36) 

4,325,469 (574,368)  49.51 (6.72)  
Sequential fitness sharing  58.2 (9.48) 

4,416,150 (642,415)  54.43 (12.6)  
Clearing procedure  67.5 (10.11) 

4,172,462 (413,537)  52.39 (7.21)  
CBN  53.1 (7.58) 

4,711,925 (584,396)  61.07 (8.14)  
SCGA  87.3 (9.61) 

3,964,491 (432,117)  53.87 (8.46)  
AEGA  90.6 (1.65) 

2,213,754 (412,538)  16.21 (3.19)  
Clonal selection algorithm  74.4 (7.32) 

5,835,452 (498,033)  74.26 (5.47)  
AiNet  83.2 (6.23) 

4,123,342 (213,864)  60.38 (5.21)  
CAB 




From the FE and ET measures in Table
In order to detect circle shapes, candidate images must first be preprocessed by the well-known Canny algorithm, which yields a single-pixel, edge-only image. Then, the
Circle candidate (individual) built from the combination of points
In order to calculate the error produced by a candidate solution
The objective function
Evaluation of candidate solutions
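A candidate circle built from three non-collinear edge points can be recovered with the standard circumcircle construction, and its quality scored by testing points along its circumference against the edge map. The sketch below is a simplified stand-in for the paper's encoding and objective function: the names, the number of test points, and the hit-counting score are illustrative assumptions.

```python
import math

def circle_from_points(p1, p2, p3):
    """Candidate circle (center, radius) through three non-collinear
    points, via the circumcircle formula."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear")
    a2, b2, c2 = ax * ax + ay * ay, bx * bx + by * by, cx * cx + cy * cy
    ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d
    uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def matching_score(center, radius, edge_pixels, n_test=64):
    """Fraction of n_test points sampled on the candidate circumference
    that coincide with edge pixels (a simplified matching function)."""
    cx, cy = center
    hits = 0
    for k in range(n_test):
        t = 2.0 * math.pi * k / n_test
        px = round(cx + radius * math.cos(t))
        py = round(cy + radius * math.sin(t))
        hits += (px, py) in edge_pixels
    return hits / n_test
```

A candidate whose circumference lies on actual edge pixels scores near 1, while a spurious candidate scores near 0, which is the behavior an objective function of this kind needs for the optimization to discriminate real circles.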
In order to detect multiple circles, most detectors simply apply a one-minimum optimization algorithm, which is able to detect only one circle at a time, repeating the same process several times as previously detected primitives are removed from the image. Such algorithms iterate until no circle candidates are left in the image.
On the other hand, the method in this paper is able to detect single or multiple circles through only one optimization step. The multi-detection procedure can be summarized as follows: guided by the values of a matching function, the whole group of encoded candidate circles is evolved through the set of evolutionary operators. The best circle candidate (global optimum) is considered to be the first detected circle over the edge-only image. An analysis of the historical memory
In order to find other possible circles contained in the image, the historical memory
Thus, since the historical memory
The fitness value of each detected circle is characterized by its geometric properties. Big and well-drawn circles normally represent points in the search space with higher fitness values, whereas small and dashed circles describe points with lower fitness values. Likewise, circles with similar geometric properties, such as radius and size, tend to represent locations holding similar fitness values. Considering that the historical memory
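The memory scan that extracts the remaining circles can be sketched as follows. Both the fitness threshold and the separation predicate are illustrative assumptions standing in for the paper's distinctiveness criterion: memory elements are visited best-first, and an element is reported as a detected circle only if it is sufficiently fit and sufficiently different from every circle already accepted.

```python
def find_distinct_circles(memory, fitness, threshold, min_separation):
    """Scan the historical memory for significant, mutually distinct
    local optima (detected circles).

    memory         : circle parameter tuples, e.g. (cx, cy, radius)
    fitness        : callable mapping a circle to its matching value
    threshold      : minimum fitness for a circle to count as detected
    min_separation : minimum per-parameter difference between circles
    """
    detected = []
    for circ in sorted(memory, key=fitness, reverse=True):
        if fitness(circ) < threshold:
            break  # memory is sorted; everything after is weaker
        if all(max(abs(a - b) for a, b in zip(circ, d)) > min_separation
               for d in detected):
            detected.append(circ)
    return detected
```

This single pass over the memory is what lets the approach report several circles after one optimization run, instead of re-running the detector once per circle.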
The implementation of the proposed algorithm can be summarized in the following steps.
Adjust the algorithm parameters
Randomly generate a set of
Sort
Choose the first
Update
Generate the first
Generate the rest of the
for
if
{if
}
else if
{
}
end for
where
If
The element with the highest fitness value
The distinctiveness factor
Step
The number of candidate circles
In order to carry out the performance analysis, the proposed approach is compared to the BFOA detector, the GA-based algorithm, and the RHT method over an image set.
The GA-based algorithm follows the proposal of Ayala-Ramirez et al. [
CAB detector parameters.






30  0.5  0.1  12  200 
Images rarely contain perfectly shaped circles. Therefore, with the purpose of testing accuracy for a single circle, the detection is challenged by a ground-truth circle which is determined from the original edge map. The parameters
In order to use an error metric for multiple-circle detection, the averaged
Figure
The averaged execution time, detection rate, and the averaged multiple error for the GA-based algorithm, the BFOA method, and the proposed CAB algorithm, considering six test images (shown by Figures
Image  Averaged execution time ± standard deviation (s)  Success rate (DR) (%)  Averaged ME ± standard deviation  

GA  BFOA  CAB  GA  BFOA  CAB  GA  BFOA  CAB  
Synthetic images  
(a)  2.23 ± (0.41)  1.71 ± (0.51) 

88  99 

0.41 ± (0.044)  0.33 ± (0.052) 

(b)  3.15 ± (0.39)  2.80 ± (0.65) 

79  92 

0.51 ± (0.038)  0.37 ± (0.032) 

(c)  4.21 ± (0.11)  3.18 ± (0.36) 

74  88 

0.48 ± (0.029)  0.41 ± (0.051) 

 
Natural images  
(a)  5.11 ± (0.43)  3.45 ± (0.52) 

90  96 

0.45 ± (0.051)  0.41 ± (0.029) 

(b)  6.33 ± (0.34)  4.11 ± (0.14) 

83  89 

0.81 ± (0.042)  0.77 ± (0.051) 

(c)  7.62 ± (0.97)  5.36 ± (0.17) 

84  92 

0.92 ± (0.075)  0.88 ± (0.081) 

Synthetic images and their detected circles for the GA-based algorithm, the BFOA method, and the proposed CAB algorithm.
Real-life images and their detected circles for the GA-based algorithm, the BFOA method, and the proposed CAB algorithm.
In order to statistically analyze the results in Table
Image 



CAB versus GA  CAB versus BFOA  
Synthetic images  
(a) 


(b) 


(c) 


 
Natural images  
(a) 


(b) 


(c) 


Figure
Average time, detection rate, and averaged error for CAB and RHT, considering three test images.
Image  Average time ± standard deviation (s)  Success rate (DR) (%)  Average ME ± standard deviation  

RHT  CAB  RHT  CAB  RHT  CAB  
(a)  7.82 ± (0.34) 



0.19 ± (0.041) 

(b)  8.65 ± (0.48) 

64 

0.47 ± (0.037) 

(c)  10.65 ± (0.48) 

11 

1.21 ± (0.033) 

Relative performance of the RHT and the CAB.
In recent years, several metaheuristic optimization methods have been inspired by nature-like phenomena. In this paper, a new multimodal optimization algorithm known as the collective animal behavior algorithm (CAB) has been introduced. In CAB, the searcher agents emulate a group of animals that interact with each other depending on simple behavioral rules which are modeled as mathematical operators. Such operations are applied to each agent considering that the complete group holds a memory to store its own best positions seen so far, using a competition principle.
CAB has been experimentally evaluated over a test suite consisting of 8 benchmark multimodal functions for optimization. The performance of CAB has been compared to some other existing algorithms including deterministic crowding [
The proposed algorithm is also applied to the engineering problem of multi-circle detection. Such a process is treated as a multimodal optimization problem. In contrast to other heuristic methods that employ an iterative procedure, the proposed CAB method is able to detect single or multiple circles over a digital image by running only one optimization cycle. The CAB algorithm searches the entire edge map for circular shapes by using combinations of three non-collinear edge points as candidate circles (animal positions) in the edge-only image. A matching function (objective function) is used to measure the existence of a candidate circle over the edge map. Guided by the values of such a matching function, the set of encoded candidate circles is evolved using the CAB algorithm so that the best candidate can be fitted into an actual circle. After the optimization has been completed, an analysis of the embedded memory is executed in order to find the significant local minima (the remaining circles). The overall approach generates a fast sub-pixel detector which can effectively identify multiple circles in real images, even when some circular objects exhibit a significantly occluded portion.
In order to test the circle detection performance, both speed and accuracy have been compared. Score functions are defined by (