Bacterial Foraging Optimization (BFO) is a recently developed nature-inspired optimization algorithm based on the foraging behavior of Escherichia coli bacteria.

In the past several decades, research on optimization has attracted increasing attention. The most general unconstrained optimization problem can be defined as the minimization of an objective function f(x) over all x in the n-dimensional real space.

Existing optimization methods and algorithms can be grouped into deterministic and stochastic approaches [

Natural ecosystems have always been a rich source of mechanisms for designing artificial computational systems to solve difficult engineering and computer science problems. In the optimization domain, researchers have been inspired by biological processes to develop effective stochastic techniques that mimic the specific structures or behaviors of certain creatures. For example, genetic algorithms (GA), originally conceived by Holland [

In recent years, chemotaxis (i.e., the bacterial foraging behavior), as a rich source of potential engineering applications and computational models, has attracted increasing attention. A few models have been developed to mimic bacterial foraging behavior and have been applied to solving some practical problems [

Several BFO variants have been developed to improve its optimization performance. In [

In order to improve BFO’s performance on complex optimization problems with high dimensionality, we apply two natural foraging strategies, namely, the producer-scrounger foraging (PSF) and the area concentrated search (ACS), to the original BFO, resulting in two new adaptive bacterial foraging optimization models (ABFOs), namely, ABFO_{0} and ABFO_{1}. In contrast to the simple description of chemotactic behavior in BFO, the proposed algorithms adaptively strike a balance between exploration and exploitation of the search space during the bacterial evolution, from which a significant improvement is gained. In order to evaluate the performance of the proposed algorithms, extensive studies based on a set of well-known benchmark functions have been carried out. For comparison purposes, this work also implemented a real-coded genetic algorithm (GA), the standard particle swarm optimization (PSO), and the original BFO on these functions. The simulation results are encouraging: the ABFO algorithms show markedly superior search performance compared to the original BFO, while maintaining similar or even superior performance compared to PSO and GA in terms of accuracy, robustness, and convergence speed on all benchmark functions. The proposed ABFO_{0} and ABFO_{1} described in this paper enhance previous BFO work in the following aspects:

a new adaptive strategy, namely, the producer-scrounger foraging, to dynamically determine the chemotactic step sizes for the whole bacterial colony during a run, hence dividing the foraging procedure of the artificial bacterial colony into multiple explore and exploit phases;

a new self-adaptive foraging strategy, namely, the area concentrated search, to individually tune the chemotactic step size of each bacterium during its run, hence casting the bacterial foraging process into a heterogeneous fashion;

a comprehensive study comparing ABFO_{0} and ABFO_{1} with two other state-of-the-art global optimization algorithms, namely, GA and PSO, on high-dimensional functions;

simulations of single and colonial bacterial behaviors in both ABFO_{0} and ABFO_{1}, carried out to analyze in depth the adaptive and self-adaptive foraging schemes in the proposed models;

new results on benchmark functions up to 300 dimensions.

The rest of the paper is organized as follows. In Section

The bacterial foraging algorithm is inspired by an activity called “chemotaxis” exhibited in bacterial foraging behavior. Motile bacteria such as Escherichia coli

The classical bacterial foraging optimization (BFO) system consists of three principal mechanisms, namely, chemotaxis, reproduction, and elimination-dispersal [

In the classical BFO, a unit walk in a random direction represents a “tumble”, and a unit walk in the same direction as the last step indicates a “run”. Suppose

With the activity of run or tumble taken at each step of the chemotaxis process, a step fitness, denoted as

The health status of each bacterium is calculated as the sum of the step fitness values during its life, namely, J_health(i) = sum over j = 1, ..., N_c + 1 of J(i, j, k, l).
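The tumble step and the health measure can be sketched in a few lines (an illustrative sketch following the standard BFO description; the helper names `tumble` and `health` are ours, not the paper's notation):

```python
import numpy as np

def tumble(theta, C, rng=np.random):
    """Move bacterium `theta` a distance C along a random unit direction:
    theta_new = theta + C * delta / sqrt(delta^T delta)."""
    delta = rng.uniform(-1.0, 1.0, size=theta.shape)
    return theta + C * delta / np.sqrt(delta @ delta)

def health(step_fitness):
    """Health of a bacterium: the sum of its step fitness values."""
    return float(sum(step_fitness))
```

Note that the normalization makes every tumble an exact step of length C, regardless of the direction drawn.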

Chemotaxis provides a basis for local search, and the reproduction process speeds up convergence; however, chemotaxis and reproduction alone are not enough for global optimum searching. Since bacteria may get stuck around their initial positions or local optima, the elimination-dispersal event lets the diversity of BFO change, either gradually or suddenly, so that bacteria trapped in local optima can escape. In BFO, the dispersal event happens after a certain number of reproduction processes. Then, some bacteria are chosen, according to a preset probability

In what follows, we briefly outline the original BFO algorithm step by step.

Initialize parameters p, S, N_c, N_s, N_re, N_ed, P_ed, C(i) (i = 1, 2, ..., S), where

p: dimension of the search space,

S: the number of bacteria in the colony,

N_c: the number of chemotactic steps,

N_s: the maximum number of swim steps,

N_re: the number of reproductive steps,

N_ed: the number of elimination and dispersal steps,

P_ed: the probability of elimination,

C(i): the run-length unit (i.e., the chemotactic step size during each run or tumble).

Elimination-dispersal loop: let l = l + 1.

Reproduction loop: let k = k + 1.

Chemotaxis loop: let j = j + 1.

For i = 1, 2, ..., S, take a chemotactic step for bacterium i as follows.

Compute the fitness function J(i, j, k, l).

Let J_last = J(i, j, k, l) to save this value, since a better position may be found during a run.

Tumble: generate a random vector Δ(i) in R^p with each element a random number in [−1, 1].

Move: let θ^{i}(j + 1, k, l) = θ^{i}(j, k, l) + C(i) Δ(i) / sqrt(Δ^{T}(i) Δ(i)). This results in a step of size C(i) in the direction of the tumble for bacterium i.

Compute J(i, j + 1, k, l).

Swim:

let m = 0 (counter for swim length);

while m < N_s,

let m = m + 1;

if J(i, j + 1, k, l) < J_last, let J_last = J(i, j + 1, k, l) and take another step of size C(i) in the same direction as the move above, then compute the new J(i, j + 1, k, l);

else let m = N_s (end of the while loop).

Go to the next bacterium (i + 1) if i ≠ S.

If j < N_c, go to the chemotaxis loop: in this case, continue chemotaxis, since the life of the bacteria is not over.

Reproduction.

For the given k and l, sort the bacteria in order of ascending accumulated cost J_health (for a minimization problem, a lower J_health means a healthier bacterium).

The S_r = S/2 least healthy bacteria die, and the other S_r healthiest bacteria each split into two bacteria placed at the same location.

If k < N_re, go to the reproduction loop, since the specified number of reproduction steps has not been reached.

Elimination-dispersal: for i = 1, 2, ..., S, with probability P_ed, eliminate and disperse each bacterium to a random location in the search space. If l < N_ed, go to the elimination-dispersal loop; otherwise, end.
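The outline above can be sketched as a compact implementation (a minimal sketch for minimization; the default parameter values and the clipping to the search bounds are our assumptions, not the paper's settings):

```python
import numpy as np

def bfo(f, p, S=20, Nc=30, Ns=4, Nre=4, Ned=2, Ped=0.25, C=0.1,
        lb=-5.12, ub=5.12, seed=0):
    """Minimal sketch of the classical BFO loop for minimization.
    Parameter names follow the outline above."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(lb, ub, size=(S, p))   # bacterial positions
    best_x, best_f = theta[0].copy(), f(theta[0])
    for l in range(Ned):                       # elimination-dispersal loop
        for k in range(Nre):                   # reproduction loop
            health = np.zeros(S)               # accumulated cost per bacterium
            for j in range(Nc):                # chemotaxis loop
                for i in range(S):
                    J_last = f(theta[i])
                    delta = rng.uniform(-1.0, 1.0, p)
                    d = delta / np.linalg.norm(delta)   # tumble direction
                    for m in range(Ns + 1):    # one tumble, then up to Ns runs
                        theta[i] = np.clip(theta[i] + C * d, lb, ub)
                        J = f(theta[i])
                        if J < best_f:
                            best_f, best_x = J, theta[i].copy()
                        if J >= J_last:        # no improvement: stop swimming
                            break
                        J_last = J
                    health[i] += J_last
            # reproduction: healthiest half (lowest cost) splits, the rest die
            order = np.argsort(health)
            theta = theta[order]
            theta[S // 2:] = theta[:S // 2]
        for i in range(S):                     # elimination-dispersal event
            if rng.random() < Ped:
                theta[i] = rng.uniform(lb, ub, p)
    return best_x, best_f

# usage: minimize the 2-d sphere function
x, fx = bfo(lambda v: float(np.sum(v ** 2)), p=2)
```

One deliberate simplification: the tumble move is taken unconditionally (as in the classical description), and only subsequent swim steps are conditioned on fitness improvement.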

In order to get an insight into the behavior of the virtual bacteria in BFO model, we illustrate the bacterial trajectories in two separate environments (a unimodal and a multimodal case) by tuning the run-length unit parameter

The first case is the minimization of the 2-D sphere function, a widely used unimodal benchmark with a single optimum at (0, 0), where the minimum value is 0.

Figure

Bacteria evolution on

Bacterial trajectories

According to fitness evolution

From the chemotactic motions of two virtual bacteria (represented as the black and blue trajectories, respectively, in Figure

The other simulation case is on the 2-D Ackley function, which has one narrow global optimum basin and many minor local optima. It is a widely used multimodal benchmark with the global optimum at (0, 0), where the minimum value is 0,

Figure

Bacteria evolution on

Bacterial trajectories

According to fitness evolution

From the chemotactic motions of the two virtual bacteria as in Figure

Obviously, in the context of the original BFO model, bacteria with a large run-length unit have exploring ability (i.e., global investigation of the search space), while bacteria with a relatively small run-length unit have exploiting skill (i.e., fine search around a local optimum). However, it is difficult to decide which value of the static run-length unit is best suited for a given problem. Hence, we introduce a preprogrammed change of this parameter during the evolution to balance exploration and exploitation, which results in a significant improvement in the performance of the original algorithm.

The balance between exploration of the search space and exploitation of potentially good solutions is considered as a fundamental problem in nature-inspired systems. Too much stress on exploration results in a pure random search whereas too much exploitation results in a pure local search. Clearly, intelligent search must self-adaptively combine exploration of the new regions of the space with evaluation of potential solutions already identified.

The most important contribution of the present paper is to define artificial bacteria in the ABFO models that are capable of self-adapting their exploration and exploitation behaviors in the foraging process. In this section, we first introduce the two adaptive foraging strategies, namely, producer-scrounger foraging and area concentrated search, from which the two proposed ABFO algorithms glean their ideas. Second, a taxonomy of adaptation methods in nature-inspired algorithms is given to help readers classify the kind of algorithms addressed in this work.

In nature, group foraging is a widespread phenomenon since individuals can join groups to improve their foraging success (i.e., the average feeding rate can be increased and its variance can be reduced [

A central problem for natural predators in the foraging process is how to balance two conflicting alternatives: exploitation (searching thoroughly in promising areas) and exploration (moving to distant areas potentially better than the current one). Through natural selection, some predatory animals have developed the area concentrated search (ACS, also called area-restricted search) strategy, by which a predator is able to respond to variations in prey distributions by varying its search effort: following an encounter with a food resource, a forager searches intensively in a more circumscribed region, while a failure to encounter a resource leads to a more extensive, less circumspect mode of search. That is, ACS assumes that regions dense with prey should be exploited slowly, to maximize the chances of encounter, and less dense regions explored rapidly, to minimize the time spent searching in unprofitable areas [

In this section, we review the well-known classification of adaptive evolutionary algorithms [

First, the classification of the type of adaptation can be made on the adaptive mechanism used in the algorithmic process. In particular, attention is paid to the issue of whether the feedback is used or not while the algorithm is searching for the solution to the problem.

In [

A deterministic adaptation takes place if the strategy parameter is changed according to some deterministic rules, which modify the value of the parameter without taking into account any feedback from the algorithm itself.

An adaptive dynamic adaptation takes feedback from the algorithm itself into account and changes the strategy parameters accordingly.

A self-adaptive dynamic adaptation can be used to implement the self-adaptation of parameters. In this case, the parameters to be adapted are encoded into the representation of the individual.

Furthermore, these dynamic algorithms can be distinguished among different families attending to the level at which the adaptive parameters operate: environment (adaptation of the individual as a response to changes in the environment, e.g., penalty terms in the fitness function), population (parameters global to the entire population), individual (parameters held within an individual), and component (strategy parameters local to some component or gene of the individual). These levels of adaptation can be used with each of the types of dynamic adaptation.

According to this taxonomy on adaptation, this work proposes two variants of adaptive bacterial foraging optimization algorithms, namely, ABFO_{0} and ABFO_{1}, which can be classified into the adaptive dynamic adaptation on the population level and the self-adaptive dynamic adaptation on the individual level, respectively.

In this section, we consider two variants of ABFO aimed at different types of adaptation, namely, dynamic adaptation and self-adaptation. We should note that the basic philosophy behind these two ABFO variants is the producer-scrounger foraging and the area concentrated search theory, respectively. The bacteria should always pursue an appropriate balance between exploration and exploitation of the foraging process in the search space, and this is achieved by dynamically controlling the key parameter (i.e., the bacterial run-length unit) during the run, which leads to ABFO_{0} (adaptive) and ABFO_{1} (self-adaptive).

Idealized evolution of the parameter

We now present the dynamic adaptive version of ABFO—the ABFO_{0} algorithm. As indicated in Section

In the initial phase, the bacteria colony searches the whole space of the problem with a large run-length unit

This dynamic adaptive strategy is given in Pseudocode

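In the spirit of the colony-level description above, the dynamic adaptive strategy can be sketched as follows (an illustrative reading only; the class name, `alpha`, `patience`, and the stall criterion are our assumptions, not the paper's exact pseudocode):

```python
class ColonyRunLength:
    """Population-level adaptation in the spirit of ABFO_0: all bacteria
    share one run-length unit C. When the colony's best fitness has not
    improved for `patience` chemotactic steps, C is cut by `alpha`,
    moving the colony from an explore phase to a finer exploit phase."""

    def __init__(self, C_init=1.0, C_min=1e-3, alpha=10.0, patience=10):
        self.C, self.C_min = C_init, C_min
        self.alpha, self.patience = alpha, patience
        self.best = float("inf")   # best fitness seen so far (minimization)
        self.stall = 0             # chemotactic steps without improvement

    def update(self, colony_best_fitness):
        if colony_best_fitness < self.best:
            self.best, self.stall = colony_best_fitness, 0
        else:
            self.stall += 1
            if self.stall >= self.patience:
                self.C = max(self.C / self.alpha, self.C_min)
                self.stall = 0
        return self.C
```

Called once per chemotactic step with the colony's best fitness, this yields the staircase-like decrease of the run-length unit across explore/exploit phases.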

The flowchart of the ABFO_{0} algorithm can be illustrated by Figure

Flowcharts of the proposed algorithms.

ABFO_{0}

ABFO_{1}

In the ABFO_{1} algorithm, we introduce an “individual run-length unit” carried by each bacterium.

The proposed ABFO_{1} is inspired by the ACS strategy described in the previous section. In the ABFO_{1} model, each bacterium alternately displays two distinct search states.

Each bacterium in the colony has to permanently maintain an appropriate balance between exploration and exploitation. In ABFO_{1}, the adaptation of the individual run-length unit is done by taking into account two decision indicators: a fitness improvement (a promising domain has been found) and no improvement registered lately (the current domain is exhausted of food). The criteria that determine the adjustment of the individual run-length unit and the entrance into one of the two states (i.e., exploitation and exploration) are the following.

This self-adaptive strategy is given in Pseudocode

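A per-bacterium version of the two decision indicators above might look like this (an illustrative sketch; the constants, the shrink factor, and the reset rule are our assumptions, not the paper's exact criteria):

```python
class BacteriumRunLength:
    """Individual-level self-adaptation in the spirit of ABFO_1: each
    bacterium carries its own run-length unit C_i. A fitness improvement
    signals a promising domain, so C_i shrinks to exploit it; `patience`
    consecutive failures signal an exhausted domain, so C_i is reset
    large to explore elsewhere."""

    def __init__(self, C_init=1.0, C_min=1e-4, shrink=10.0, patience=5):
        self.C_init, self.C_min = C_init, C_min
        self.shrink, self.patience = shrink, patience
        self.C, self.stall = C_init, 0

    def update(self, improved):
        if improved:                         # promising domain: exploit
            self.C = max(self.C / self.shrink, self.C_min)
            self.stall = 0
        else:
            self.stall += 1
            if self.stall >= self.patience:  # domain exhausted: explore
                self.C, self.stall = self.C_init, 0
        return self.C
```

Because every bacterium updates its own C_i independently, the colony's foraging becomes heterogeneous: some individuals exploit with tiny steps while others simultaneously explore with large ones.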

The flowchart of the ABFO_{1} algorithm can be illustrated by Figure

The set of benchmark functions contains four functions that are commonly used in evolutionary computation literature [

Sphere function

Rosenbrock function

Rastrigin function

Griewank function

The first problem is the sphere function, which is a widely used unimodal benchmark and is easy to solve. The second problem is the Rosenbrock function. It can be treated as a multimodal problem: it has a narrow valley leading from the perceived local optima to the global optimum. The third problem, the Rastrigin function, is a complex multimodal problem with a large number of local optima; algorithms may easily fall into a local optimum when attempting to solve it. The last problem is the multimodal Griewank function, which has linkage among its variables that makes it difficult to reach the global optimum. An interesting property of the Griewank function is that it is more difficult in lower dimensions than in higher dimensions. All functions are tested in 2 and 10 dimensions. The search ranges

Parameters of the test functions.

Function | Search range | Global optimum | Minimum
---|---|---|---

Sphere | [−5.12, 5.12] | [0, 0, …, 0] | 0

Rosenbrock | [−2.048, 2.048] | [1, 1, …, 1] | 0

Rastrigin | [−5.12, 5.12] | [0, 0, …, 0] | 0

Griewank | [−600, 600] | [0, 0, …, 0] | 0
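These four benchmarks have standard definitions in the evolutionary-computation literature, which can be written down directly (minimization form; NumPy is our choice of notation, not the paper's):

```python
import numpy as np

def sphere(x):         # unimodal, optimum at the origin
    return float(np.sum(x ** 2))

def rosenbrock(x):     # narrow curved valley, optimum at (1, ..., 1)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))

def rastrigin(x):      # many regularly spaced local optima, optimum at origin
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):       # variable linkage through the product term
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```

All four attain their minimum value 0 at the optima listed in the table above.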

An experiment was conducted to compare five algorithms, namely, the original BFO, the real-coded genetic algorithm (GA), the standard particle swarm optimization (PSO), and the proposed ABFO_{0} and ABFO_{1}, on the four benchmark functions in two and ten dimensions. Each algorithm was run 25 times on each benchmark function, with the maximum generation set at 1000. The mean values and standard deviations of the results are presented.

The parameters settings for ABFO_{0} and ABFO_{1} are summarized in Table

Parameters of the BFO algorithms.

ABFO_{0} | ABFO_{1} | |||||||||||

Function | ||||||||||||

Sphere | 100 | 0.1 | 100 | 10 | 10 | 10 | 100 | 0.1 | 100 | 20 | 10 | 10 |

Rosenbrock | 100 | 0.1 | 100 | 20 | 10 | 10 | 100 | 0.1 | 100 | 20 | 10 | 10 |

Rastrigin | 100 | 0.1 | 100 | 10 | 10 | 10 | 100 | 0.1 | 100 | 20 | 10 | 10 |

Griewank | 100 | 10 | 100 | 200 | 10 | 10 | 100 | 10 | 100 | 20 | 10 | 10 |

Table

Comparison among ABFO_{0}, ABFO_{1}, BFO, PSO, and GA on 2-D problems.

2-D | BFO | ABFO_{0} | ABFO_{1} | PSO | GA | |

Best | 5.6423 | 4.0368 | 5.0576 | 2.8638 | ||

Worst | 3.3404 | 3.8546 | 2.4892 | 1.0587 | ||

_{1} | Mean | 1.4291 | 1.3068 | 8.5226 | 2.3105 | |

Std | 9.6035 | 7.0343 | 4.5408 | 2.7237 | ||

Rank | 5 | 3 | 2 | 4 | ||

Best | 1.5294 | 7.8886 | 6.8564 | |||

Worst | 9.2935 | 2.1905 | 5.3075 | 3.8578 | ||

_{2} | Mean | 1.7353 | 7.3528 | 3.9893 | 1.9021 | |

Std | 2.1344 | 3.9983 | 1.1947 | 7.0743 | ||

Rank | 5 | 3 | 2 | 4 | ||

Best | 9.0825 | 4.6509 | ||||

Worst | 0.1256 | 2.7349 | ||||

_{3} | Mean | 0.0259 | 1.2655 | |||

Std | 0.0306 | 7.4950 | ||||

Rank | 5 | 4 | 1 | 1 | ||

Best | 0.0296 | 1.5306 | 0.0311 | |||

Worst | 2.9974 | 7.7411 | 3.8791 | 0.0271 | ||

_{4} | Mean | 0.9975 | 5.1542 | 1.1178 | 0.0063 | |

Std | 0.8142 | 1.5892 | 1.0521 | 0.0061 | ||

Rank | 4 | 2 | 5 | 3 | ||

Average rank | 4.75 | 2.5 | 2.5 | 3 | ||

Final rank | 5 | 2 | 2 | 3 |

Convergence results of BFO, ABFO_{0}, ABFO_{1}, PSO, and GA on 2-D benchmark functions. (a) Sphere function; (b) Rosenbrock function; (c) Rastrigin function; (d) Griewank function.

From the results, we observe that our two adaptive BFO algorithms achieved significantly better performance on all benchmark functions than the original BFO algorithm. ABFO_{1} surpasses all other algorithms on function 1, the unimodal function adopted to assess the convergence rates of optimization algorithms. ABFO_{0} performs better on all multimodal functions (2, 3, and 4), where the other algorithms miss the global optimum basin. The Griewank function is a good example: ABFO_{0} successfully avoids falling into local optima and continues to find better results even after PSO, GA, and BFO seem to have stagnated. ABFO_{1} achieved almost the same performance as ABFO_{0} on functions 2, 3, and 4.

From the rank values presented in Table , the final rank order on the 2-D problems is ABFO_{0} > ABFO_{1} = PSO > GA > BFO.

The experiments conducted on the 2-D problems were repeated on the 10-D problems. The results and convergence processes are presented in Table , and the final rank order is ABFO_{0} > ABFO_{1} > PSO = GA > BFO.

Comparison among ABFO_{0}, ABFO_{1}, BFO, PSO, and GA on 10-D problems.

10-D | BFO | ABFO_{0} | ABFO_{1} | PSO | GA | |

Best | 0.0131 | 4.6730 | 5.0821 | 9.7085 | 9.2242 | |

Worst | 0.0456 | 2.3559 | 1.3038 | 1.1714 | 0.0654 | |

_{1} | Mean | 0.0325 | 1.2647 | 9.3583 | 7.6021 | 0.0133 |

Std | 0.0086 | 4.3424 | 2.5360 | 2.2319 | 0.0149 | |

Rank | 5 | 2 | 3 | 4 | ||

Best | 7.8436 | 0.1645 | 0.3620 | 0.4578 | 6.6031 | |

Worst | 11.3192 | 0.5903 | 4.3232 | 5.2075 | 10.1496 | |

_{2} | Mean | 9.8819 | 0.3492 | 1.8660 | 0.8852 | 8.5506 |

Std | 1.0079 | 0.0966 | 1.8064 | 1.0667 | 0.7340 | |

Rank | 5 | 3 | 2 | 4 | ||

Best | 17.2125 | 1.1011 | 5.9785 | 2.9849 | 1.0413 | |

Worst | 29.0473 | 9.0694 | 32.0221 | 17.9092 | 12.3464 | |

_{3} | Mean | 22.6397 | 4.8844 | 15.5429 | 10.8119 | 6.0909 |

Std | 3.1330 | 1.6419 | 8.4543 | 3.8555 | 2.9628 | |

Rank | 5 | 4 | 3 | 2 | ||

Best | 39.0662 | 0 | 0.0910 | 52.3209 | 0.1405 | |

Worst | 133.2687 | 0.1328 | 0.7516 | 133.6037 | 0.9791 | |

_{4} | Mean | 88.7932 | 0.0647 | 0.3551 | 100.0376 | 0.4061 |

Std | 25.0455 | 0.0308 | 0.1605 | 17.2171 | 0.1875 | |

Rank | 4 | 2 | 5 | 3 | ||

Average rank | 4.75 | 2.5 | 3.25 | 3.25 | ||

Final rank | 5 | 2 | 3 | 3 |

Convergence results of BFO, ABFO_{0}, ABFO_{1}, PSO, and GA on 10-D benchmark functions. (a) Sphere function; (b) Rosenbrock function; (c) Rastrigin function; (d) Griewank function.

Many real-world optimization problems involve hundreds or even thousands of variables. However, previous studies showed that although some algorithms generated good results on relatively low-dimensional benchmark functions, they did not perform satisfactorily in some large-scale cases [

Comparison among ABFO_{0}, ABFO_{1}, BFO, PSO, and GA on 300-d problems.

Mean function value | |||||

Function | BFO | ABFO_{0} | ABFO_{1} | PSO | GA |

_{1}^{300D} | 1.4854e+003 | 7.4192 | 1.0888 | 1.0031 | 9.0737e+001 |

_{2}^{300D} | 2.5891e+004 | 4.6924 | 4.1758 | 3.0752e+002 | 5.7737e+003 |

_{3}^{300D} | 3.8524e+003 | 1.5210 | 9.0428 | 2.0771e+002 | 2.8621e+003 |

_{4}^{300D} | 8.2126e+003 | 8.3185 | 9.2521 | 5.9560e+003 | 1.0287e+003 |

From Table

In order to further analyze the adaptive foraging behaviors of the two proposed ABFO models, we ran two simulations. In both simulations, we excluded the reproduction and elimination-dispersal events in order to illustrate the bacterial behaviors clearly.

In the first simulation, the population evolution of the ABFO_{0} was simulated on 2-D Rosenbrock and Rastrigin functions, which are illustrated in Figures

Population evolution of ABFO_{0} on Rosenbrock function in 500 chemotactic steps.

Population evolution of ABFO_{0} on Rastrigin function in 500 chemotactic steps.

Initially, in Figure

In the second simulation, we demonstrate the self-adaptive foraging behavior of a single bacterium in the ABFO_{1} model. Figures

Single bacterium self-adaptive foraging trajectories of ABFO_{1} model on 2-D benchmark functions. (a) Sphere function; (b) Rosenbrock function; (c) Rastrigin function; (d) Griewank function.

Figure

In Figure

Dynamics of the run-length unit automatically produced by the self-adaptive criterion during the single bacterium foraging on 2-D benchmark functions. (a) Sphere function; (b) Rosenbrock function; (c) Rastrigin function; (d) Griewank function.

In this paper, we first analyzed the foraging behavior of the bacteria in the original BFO model. Specifically, we studied the influence of the run-length unit parameter on the exploration/exploitation tradeoff of the bacterial foraging behavior: bacteria with a large run-length unit have exploring ability, while bacteria with a relatively small run-length unit have exploiting skill. This feature is, to our knowledge, exploited here for the first time. We then introduced two methods of casting bacterial foraging optimization into an adaptive fashion by changing the value of the run-length unit during the algorithm's execution. Based on these two adaptive methods and the original BFO, we presented two adaptive bacterial foraging optimizers, namely, ABFO_{0} and ABFO_{1}, both of which significantly improve the performance of the original algorithm.

The ABFO_{0} algorithm is based on the producer-scrounger foraging model. In ABFO_{0}, the algorithm's evolution process is divided into multiple explore/exploit phases, each characterized by different classes of bacterial individuals, producers and scroungers, depending on the particular run-length unit they use. In explore phases, the bacterial producers explore the search space using a large run-length unit, which permits them to cover the whole space, locate the global optimum, and avoid becoming trapped in local optima. In exploit phases, the bacterial scroungers join the resource (i.e., the best-so-far solutions) found by the producers and exploit its neighborhood. The ABFO_{1} algorithm is based on the animal searching strategy ACS. Each bacterium in ABFO_{1} can be characterized by focused, deep exploitation of promising regions and wide exploration of other regions of the search space. The algorithm achieves this by decreasing the run-length unit of each bacterium to encourage exploitation when it enters a promising region with high fitness, and increasing it to improve exploration when the bacterium runs into difficulties during exploitation.

This automatic alternation between exploitation and exploration during execution is a notable feature of the adaptive mechanisms we propose, because it shows how the algorithm decides by itself when to explore and when to exploit the search space.

Four widely used benchmark functions have been used to test the ABFO algorithms in comparison with the original BFO, the standard PSO, and the real-coded GA. The simulation results are encouraging: ABFO_{0} and ABFO_{1} are clearly better than the original BFO on all the test functions and appear to be comparable with the standard PSO and GA.

There are ways to improve our proposed algorithms. Further research efforts should focus on the tuning of the user-defined parameters for ABFO algorithms based on extensive evaluation on many benchmark functions and real-world problems.

This paper is supported by the National 863 Plans Projects of China under Grants 2008AA04A105 and 2006AA04119-5, and the Natural Science Foundation of Liaoning Province under Grant 20091094.