An advanced chemical reaction optimization algorithm based on balanced local and global search is proposed for solving continuous optimization problems; it combines the advantages of adaptive chemical reaction optimization (ACRO) and particle swarm optimization (PSO). The new algorithm is built mainly on the framework of ACRO, with PSO's global search operator applied as part of ACRO's neighborhood search operator. Moreover, a "finish" operator is added to ACRO's structure, and the search operator is evolved by an adaptive scheme. The algorithm was tested on a set of twenty-three benchmark functions, and the results were compared with those of a recently proposed hybrid algorithm based on chemical reaction optimization (CRO) and particle swarm optimization (denoted HP-CRO). The final comparison shows superior performance over HP-CRO in most experiments.
We often encounter optimization problems in scientific and technological research and development. Over the past decades, a series of optimization algorithms have been proposed: genetic algorithm (GA) (see, e.g., [
In general, when faced with an optimization problem, we can always simplify it as follows: a solution
The optimality of
In recent years, CRO has been proposed and has attracted increasing interest from the optimization community, and a variety of improved algorithms based on CRO have been suggested (see, e.g., [
According to the introduction above, ACRO appears to be a well-performing optimization algorithm. However, like CRO, ACRO still lacks convergence speed. In order to avoid the weaknesses of both ACRO and PSO, we propose a new algorithm that combines the advantages of the two (denoted AACRO).
The rest of the paper is organized as follows. Section briefly outlines the related work and gives the inspiration for our proposed algorithm. We explain the modifications of ACRO and introduce the basic framework of AACRO in Section. In Section, we give the proof of convergence and provide the convergence speed analysis. In Section, we describe the 23 benchmark problems. In Section, we present the simulation results and compare AACRO with HP-CRO. In particular, Section presents the experimental environment, parameter settings are given in Section, and Section gives the comparison results. We conclude this paper and outline potential future work in Section.
A good optimization algorithm must have good global search performance as well as good local search performance. In practice, however, the two tend to trade off against each other: an algorithm that excels at global search is often poor at local search, and vice versa. In order to achieve the best performance, the two abilities should be well balanced.
As an improvement of CRO, ACRO was proposed by Yu et al. in 2015 (see, e.g., [
The energy-related class includes
The reaction-related class includes
The real-code-related class includes
After every
Apart from these parameter modifications, the framework of ACRO is similar to that of the canonical CRO. ACRO also obeys the two laws of thermodynamics and uses the four elementary reactions (i.e., on-wall ineffective collision, decomposition, intermolecular ineffective collision, and synthesis).
An on-wall ineffective collision represents the situation when a molecule collides with the container (i.e.,
Intermolecular ineffective collision takes place when molecules collide with each other and then bounce away. The molecules (assume two) (i.e.,
Decomposition refers to the situation when a molecule hits the wall and breaks into several parts (for simplicity, we only consider two parts, i.e.,
A synthesis happens when multiple (assume two) molecules collide with each other and then fuse together (i.e.,
Based on the framework introduced above, we can formulate the ACRO algorithm as Algorithm
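To make the framework concrete, the following minimal Python sketch dispatches the four elementary reactions in the way described above. All names are ours, and the decomposition/synthesis criteria and energy bookkeeping are simplified stand-ins for ACRO's adaptive rules (for instance, the kinetic-energy loss to the central buffer is omitted):

```python
import random

def sphere(x):
    """Toy objective (potential energy), used only for illustration."""
    return sum(v * v for v in x)

def new_molecule(dim, lo, hi, f):
    """Create a molecule: a solution plus its potential and kinetic energy."""
    x = [random.uniform(lo, hi) for _ in range(dim)]
    return {"x": x, "pe": f(x), "ke": 1000.0}

def perturb(x, step=0.1):
    """Gaussian neighborhood move."""
    return [v + random.gauss(0.0, step) for v in x]

def acro_iteration(pop, f, coll_rate=0.2):
    """One ACRO-style iteration: choose a collision type, apply a reaction."""
    if random.random() > coll_rate or len(pop) < 2:
        m = random.choice(pop)
        if random.random() < 0.05:                 # decomposition criterion (stand-in)
            pop.remove(m)
            for _ in range(2):                     # one molecule breaks into two
                x = perturb(m["x"], step=1.0)
                pop.append({"x": x, "pe": f(x), "ke": m["ke"] / 2})
        else:                                      # on-wall ineffective collision
            x = perturb(m["x"])
            if f(x) <= m["pe"] + m["ke"]:          # energy conservation check
                m["ke"] += m["pe"] - f(x)          # surplus energy becomes KE
                m["x"], m["pe"] = x, f(x)
    else:
        m1, m2 = random.sample(pop, 2)
        if m1["ke"] + m2["ke"] < 1e-3:             # synthesis criterion (stand-in)
            pop.remove(m1)
            pop.remove(m2)
            x = [(a + b) / 2 for a, b in zip(m1["x"], m2["x"])]
            pop.append({"x": x, "pe": f(x), "ke": m1["ke"] + m2["ke"]})
        else:                                      # intermolecular ineffective collision
            # Energy sharing between the pair is simplified to per-molecule checks.
            for m in (m1, m2):
                x = perturb(m["x"])
                if f(x) <= m["pe"] + m["ke"]:
                    m["ke"] += m["pe"] - f(x)
                    m["x"], m["pe"] = x, f(x)
    return pop
```

A population created with `new_molecule` can then be evolved by calling `acro_iteration` repeatedly.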
Similar to CRO, PSO searches the solution space by using a series of particles, which are randomly distributed in the initial search space
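For reference, the canonical PSO update reused later in the hybrid can be sketched as follows; `w`, `c1`, and `c2` are the usual inertia and acceleration coefficients, and the variable names are ours rather than the paper's:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Standard PSO velocity and position update for one particle."""
    r1, r2 = random.random(), random.random()
    v = [w * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)
         for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    x = [xi + vi for xi, vi in zip(x, v)]
    return x, v
```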
PSO is famous for its high convergence speed. However, high convergence speed may lead to inadequate local search and a high probability of falling into a local optimum. CRO, by contrast, demonstrates strong local search ability. As an advanced algorithm based on CRO, ACRO greatly simplifies CRO's structure and makes the
CRO’s strong local search performance and PSO’s excellent global search performance make the combination of the two algorithms an inevitable trend. The algorithm HP-CRO (see, e.g., [
However, HP-CRO simply replaces CRO's decomposition and synthesis operators with PSO's search operator, so it has the same global optimization operator as PSO. As a result, the accuracy of the optimization depends largely on the parameter settings of the PSO algorithm: with an incorrect parameter setting, the optimization result may be poor, which greatly weakens the performance of HP-CRO. Moreover, without an adaptive scheme, the fixed parameter settings in the CRO part greatly limit the accuracy of the optimization.
In order to overcome the shortcomings mentioned above, AACRO is proposed. The detailed design of AACRO is given in the next section.
This section focuses on discussing the infrastructure and basic principles of the AACRO algorithm.
In order to combine ACRO and PSO organically, we modify two parameters of ACRO and introduce a finish operator. The details are as follows.
As we can see in Section,
With the change of the neighborhood search operator, AACRO attains a higher convergence speed. However, if the synthesis operator is applied too frequently, population diversity cannot be maintained; on the other hand, if there are too many decomposition reactions, the total number of molecules shoots up and the algorithm degenerates into an unordered search. Therefore, the synthesis operator is suspended if the number of molecules falls below one-half of the original amount, and the decomposition operator is prohibited if the number of molecules more than doubles.
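The two population-size guards just described amount to the following checks (a sketch with names of our choosing, where `init_size` is the original molecule amount):

```python
def synthesis_allowed(pop_size, init_size):
    # Suspend synthesis once the population has shrunk below half
    # of the original amount (synthesis would shrink it further).
    return pop_size >= init_size / 2

def decomposition_allowed(pop_size, init_size):
    # Prohibit decomposition once the population has more than
    # doubled (decomposition would grow it further).
    return pop_size <= 2 * init_size
```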
At the end of the iteration, we introduce a finish operator. While two or more molecules remain, we repeatedly choose two molecules at random from the existing molecular population and apply the synthesis operator; the optimal particle is obtained once the number of molecules has been reduced to one. The difference is that the new molecule produced by the operator will be reserved only if its
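A minimal sketch of the finish operator under these rules follows; the survivor rule used here (keep whichever of the synthesized child and the better parent has the lower potential energy) is an assumption consistent with the description above, and the names are ours:

```python
import random

def finish_operator(pop, f):
    """Collapse the population to a single molecule by repeated synthesis."""
    while len(pop) > 1:
        m1, m2 = random.sample(pop, 2)       # pick two molecules at random
        pop.remove(m1)
        pop.remove(m2)
        x = [(a + b) / 2 for a, b in zip(m1["x"], m2["x"])]
        child = {"x": x, "pe": f(x)}
        best_parent = min((m1, m2), key=lambda m: m["pe"])
        # Reserve the new molecule only if it improves on both parents.
        pop.append(child if child["pe"] < best_parent["pe"] else best_parent)
    return pop[0]                            # the single remaining molecule
```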
After these structural changes, we establish the framework of AACRO. As in other optimization algorithms, there are three stages in AACRO: initialization, iteration, and the final stage. Figure
Flow chart of AACRO.
In the first stage, all parameters need to be initialized. In this step, the state space and some constraints are defined first; then we produce the molecule swarm by generating
In the iteration stage, a number of iterations are performed. In each iteration, we first determine whether a unimolecular collision or an intermolecular collision would happen by randomly generating a number
In the final stage, a finish operator is triggered. The existing molecules repeatedly undergo synthesis reactions until the number of molecules is reduced to one. After each synthesis reaction, the population is updated with the surviving molecule and the old ones are abandoned.
We provide the source code (see, e.g., [
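Putting the pieces together, a high-level skeleton of AACRO's three stages might look as follows, reusing the `new_molecule`, `acro_iteration`, and `finish_operator` sketches above (the population-size guards from the previous subsection are omitted here for brevity); parameter names and defaults are illustrative rather than the paper's:

```python
def aacro(f, dim, lo, hi, pop_size=100, max_nfe=150000, coll_rate=0.2):
    """High-level AACRO skeleton following the three stages described above."""
    # Stage 1: initialize the parameters and the molecule swarm.
    pop = [new_molecule(dim, lo, hi, f) for _ in range(pop_size)]
    nfe = pop_size
    # Stage 2: iterate elementary reactions until the NFE budget is spent.
    while nfe < max_nfe:
        pop = acro_iteration(pop, f, coll_rate)
        nfe += 1
    # Stage 3: the finish operator collapses the population to one molecule.
    best = finish_operator(pop, f)
    return best["x"], best["pe"]

# Example usage on the illustrative sphere objective:
# x_best, f_best = aacro(sphere, dim=30, lo=-100.0, hi=100.0)
```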
As seen in Section, ACRO was proposed as an improvement of the canonical CRO, and in this section we further optimize the structure of ACRO, so AACRO is effectively also an improvement of the canonical CRO. We therefore summarize several modification experiences and analyze the differences between the canonical CRO, ACRO, and AACRO versions.
As the initial version, the canonical CRO builds the basic framework and shows relatively balanced global and local search abilities. However, it adopts a fixed step size and unreasonable collision criteria, which results in low optimization efficiency. The improved version, ACRO, greatly streamlines the structure of the canonical CRO and adopts several adaptive strategies, which makes the algorithm adaptive and speeds up its optimization to some extent.
The AACRO version inherits most of the adaptive strategies of ACRO and makes further improvements. To address the low efficiency at the beginning of the optimization process, we adopt PSO's global search operator as part of ACRO's neighborhood operator and use a new parameter to control whether a global or a local search is performed, which greatly enhances optimization efficiency. However, high optimization efficiency may lead to premature convergence. To prevent this, we again modify ACRO's synthesis and decomposition criteria and impose restrictions on the number of molecules to ensure a relatively high convergence speed.
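A sketch of such a weighted neighborhood operator is given below; the control parameter (here `w`) and the particular local move are our assumptions:

```python
import random

def neighborhood_search(x, v, pbest, gbest, w=0.5, step=0.1):
    """Choose between PSO's global step and a local Gaussian perturbation.

    `w` plays the role of the weighting parameter that controls the
    global/local ratio; an adaptive scheme would adjust it during the run."""
    if random.random() < w:
        # Global search: the standard PSO update sketched earlier.
        x_new, v = pso_step(x, v, pbest, gbest)
    else:
        # Local search: a small Gaussian move around the current solution.
        x_new = [xi + random.gauss(0.0, step) for xi in x]
    return x_new, v
```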
We can therefore conclude that both the ACRO and AACRO versions employ adaptive strategies and, moreover, that the AACRO version achieves a higher convergence speed and a better search ability.
Similar to CRO, the operation of the AACRO algorithm is a process of repetitive operation of on-wall ineffective collisions, intermolecular ineffective collisions, decomposition, and synthesis operators. Each iteration process is related only to the state of the current population. Therefore, the AACRO algorithm process can also be modeled as a Markov chain (see, e.g., [
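Concretely, the Markov property invoked here is the standard one: the distribution of the next population state depends only on the current one,

```latex
P\left(X_{t+1} = s_{j} \mid X_{t} = s_{i}, X_{t-1}, \ldots, X_{0}\right)
  = P\left(X_{t+1} = s_{j} \mid X_{t} = s_{i}\right),
```

where \(X_t\) denotes the population state after iteration \(t\).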
To solve the problem above, we define a minimal step size
Before the proof process, we first provide some basic definitions, assumptions, and corresponding inferences.
A pseudomolecule
The purpose of introducing the pseudomolecule is to keep the number of molecules constant.
Given a problem
For a problem
Let
The optimizing process of AACRO on solving problem
We can see that the state space
Let
Given an absorbing Markov chain
We first prove the sufficiency part. If for
Let
As can be seen from the necessity part of the proof of Lemma, an absorbing Markov chain will reach the optimal state with probability 1 as the number of iterations tends to infinity. Furthermore, the AACRO algorithm can be modeled as an absorbing Markov chain according to Theorem. Hence AACRO reaches the optimal state with probability 1 as long as the time allowed for evolution is sufficiently long.
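For completeness, the standard fact being used is the following: writing the transition matrix of an absorbing chain in canonical form (transient states first), the transient block vanishes in the limit,

```latex
P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}, \qquad
\lim_{n \to \infty} Q^{n} = 0,
```

so the probability of remaining in a nonoptimal (transient) state after \(n\) steps tends to 0; that is, absorption occurs with probability 1.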
According to Definitions 11 and 12 in (see, e.g., [
From (
Although it is difficult to derive an accurate convergence rate and the first hitting time from the above (
This paper mainly studies the effect of
Since the molecules are randomly generated in the state space, the probability that any molecule converges to a region near the optimal solution is as follows:
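For molecules sampled uniformly in a bounded state space \(S\), a standard expression of this kind, offered here only as an illustrative form under a uniform-sampling assumption, is

```latex
P\bigl(x \in R_{\varepsilon}\bigr) = \frac{|R_{\varepsilon}|}{|S|}, \qquad
P\Bigl(\min_{1 \le i \le n} \|x_{i} - x^{*}\| \le \varepsilon\Bigr)
  = 1 - \Bigl(1 - \frac{|R_{\varepsilon}|}{|S|}\Bigr)^{n},
```

where \(R_{\varepsilon}\) is an \(\varepsilon\)-region around the optimum \(x^{*}\) and \(n\) is the number of molecules.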
It can be seen from (
If we change
We can see from (
It can be seen from the above analysis that the "1/5 success rule" strategy can greatly improve the convergence speed of the algorithm while still guaranteeing convergence to the global optimal solution.
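For reference, Rechenberg's classic "1/5 success rule" adapts the step size from the observed success rate; a minimal sketch follows, with the conventional shrink factor \(c \approx 0.85\) (the paper's exact constants may differ):

```python
def adapt_step(step, success_rate, c=0.85):
    """Rechenberg's 1/5 success rule for mutation step-size adaptation."""
    if success_rate > 0.2:
        return step / c   # moves succeed often: enlarge the step, explore
    if success_rate < 0.2:
        return step * c   # moves rarely succeed: shrink the step, refine
    return step           # exactly 1/5: keep the step unchanged
```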
In order to compare the performance of our proposed AACRO algorithm with that of the HP-CRO algorithm, we use the set of standard benchmark functions used in (see, e.g., [
23 benchmark functions.
Category | Test function | Name | Dimension | Search range | Global minimum |
---|---|---|---|---|---|
I | | Sphere model | 30 | | 0 |
| Schwefel’s problem 2.22 | 30 | | 0 | |
| Schwefel’s problem 1.2 | 30 | | 0 | |
| Schwefel’s problem 2.21 | 30 | | 0 | |
| Generalized Rosenbrock’s function | 30 | | 0 | |
| Step function | 30 | | 0 | |
| Quartic function with noise | 30 | | 0 | |
| |||||
II | | Generalized Schwefel’s problem 2.26 | 30 | | −12569.5 |
| Generalized Rastrigin’s function | 30 | | 0 | |
| Ackley’s function | 30 | | 0 | |
| Generalized Griewank function | 30 | | 0 | |
| Generalized penalized functions | 30 | | 0 | |
| 30 | | 0 | ||
| |||||
III | | Shekel’s Foxholes function | 2 | | 1 |
| Kowalik’s function | 4 | | 0.0003075 | |
| Six-hump camel-back function | 2 | | −1.0316285 | |
| Branin function | 2 | | 0.398 |
| Goldstein-Price function | 2 | | 3 | |
| Hartman’s family | 3 | | −3.86 | |
| 6 | | −3.32 | ||
| Shekel’s family | 4 | | −10 | |
| 4 | | −10 | ||
| 4 | | −10 |
Both AACRO and HP-CRO are implemented in Matlab 7.8. All simulations are performed on a computer with an Intel Core i5-4590 @ 3.3 GHz CPU and 4 GB of RAM running Windows 7.
In order to achieve the best results, different test functions are set with different parameters, and each simulation terminates when a certain number of function evaluations (NFEs) has been performed. The NFE limits of AACRO for the different test functions are listed in Table
Number of NFEs for function
Category | Function | AACRO | HP-CRO | RCCRO |
---|---|---|---|---|
Category I | | 150000 | 150000 | 150000 |
| 250000 | 250000 | 250000 | |
| ||||
Category II | | 150000 | 150000 | 150000 |
| 250000 | 250000 | 250000 | |
| ||||
Category III | | 7500 | 7500 | 7500 |
| 250000 | 250000 | 250000 | |
| 1250 | 1250 | 1250 | |
| 5000 | 5000 | 5000 | |
| 4000 | 4000 | 4000 | |
| 10000 | 10000 | 10000 |
Parameter settings for different categories.
Order | Parameter | Category I | Category II | Category III | |
---|---|---|---|---|---|
| | | | ||
(1) | PopSize | 10 | 100 | 100 | 100 |
(2) | | 0.2 | 0.5 | 0.8 | 0.8 |
(3) | | 0.8 | 0.5 | 0.2 | 0.2 |
(4) | CollRate | 0.2 | 0.2 | 0.2 | 0.2 |
(5) | LossRate | 0.3 | 0.3 | 0.3 | 0.3 |
(6) | IniKE | 1000 | 1000 | 1000 | 1000 |
(7) | buffer | 0 | 0 | 0 | 0 |
(8) | ChangeRate | 0.001 | 0.001 | 0.001 | 0.001 |
This paper also reports results for three RCCRO versions (RCCRO1, RCCRO2, and RCCRO4) and two HP-CRO versions (HP-CRO1 and HP-CRO2). All versions are tested on the 23 standard test functions. For each function, we run AACRO 25 times and record the averaged computed minimum value (mean) and the standard deviation (StdDev). We rank the results from the lowest mean to the highest to obtain the average rank and then order the average ranks to obtain the overall rank.
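The ranking procedure can be reproduced in a few lines; the data layout below is hypothetical:

```python
import numpy as np

def summarize(best_values):
    """Mean and standard deviation of the best values over repeated runs
    (e.g., the 25 runs per function used here)."""
    v = np.asarray(best_values, dtype=float)
    return v.mean(), v.std(ddof=1)

def overall_rank(mean_table):
    """Rank algorithms per function (1 = lowest mean, ties broken
    arbitrarily), average the ranks per algorithm, then rank the averages.
    `mean_table` has shape (n_functions, n_algorithms)."""
    per_function = np.argsort(np.argsort(mean_table, axis=1), axis=1) + 1
    average_rank = per_function.mean(axis=0)
    overall = np.argsort(np.argsort(average_rank)) + 1
    return average_rank, overall
```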
As shown in Tables
The optimization computing results for
AACRO | HP-CRO1 | HP-CRO2 | RCCRO1 | RCCRO2 | RCCRO4 | |
---|---|---|---|---|---|---|
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 3 | 2 | 6 | 5 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 5 | 6 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 3 | 2 | 1 | 5 | 6 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 3 | 2 | 5 | 6 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 2 | 3 | 1 | 6 | 5 | 4 |
| ||||||
Mean | 0 | 0 | 0 | 0 | 0 | 0 |
StdDev | 0 | 0 | 0 | 0 | 0 | 0 |
Rank | 1 | 1 | 1 | 1 | 1 | 1 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 4 | 6 | 5 | 2 | 1 | 3 |
| ||||||
Average rank | 1.8 | 2.8 | 2.1 | 3.4 | 4.2 | 3.4 |
Overall rank | 1 | 3 | 2 | 4 | 6 | 5 |
The optimization computing results for
AACRO | HP-CRO1 | HP-CRO2 | RCCRO1 | RCCRO2 | RCCRO4 | |
---|---|---|---|---|---|---|
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | | | | | | |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | | | | | | |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | | | | | | |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | | | | | | |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | | | | | | |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 5 | 6 | 4 |
| ||||||
Average rank | 1.8 | 3.5 | 2.1 | 4.6 | 5 | 3.8 |
Overall rank | 1 | 3 | 2 | 5 | 6 | 4 |
The optimization computing results for
AACRO | HP-CRO1 | HP-CRO2 | RCCRO1 | RCCRO2 | RCCRO4 | |
---|---|---|---|---|---|---|
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 3 | 2 | 4 | 6 | 5 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 6 | 2 | 5 | 4 | 3 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 3 | 2 | 6 | 5 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 2 | 3 | 1 | 4 | 6 | 5 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 5 | 4 | 6 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 5 | 4 | 2 | 6 | 3 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 5 | 6 | 4 | 1 | 2 | 3 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 5 | 6 | 4 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 4 | 5 | 6 |
| ||||||
Mean | | | | | | |
StdDev | | | | | | |
Rank | 1 | 2 | 3 | 6 | 5 | 4 |
| ||||||
Average rank | 1.5 | 3.4 | 2.7 | 4.2 | 4.9 | 4.3 |
Overall rank | 1 | 3 | 2 | 4 | 6 | 5 |
Table
For this table, AACRO ranks first, HP-CRO achieves the second-highest overall rank, and the RCCRO versions perform worst. In general, AACRO is efficient at solving unimodal functions.
Table
For this table, AACRO again ranks first, followed by the HP-CRO versions, with the RCCRO versions ranking last. Therefore, we can also conclude that AACRO handles high-dimensional multimodal functions well.
Table
From Tables
Average computation time
Function | | | | | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Time (s) | | | | | | | | | | | | |
Average computation time
Function | | | | | | | | | | | |
---|---|---|---|---|---|---|---|---|---|---|---|
Time (s) | | | | | | | | | | | |
We can see from Tables
For a more detailed comparison of the proposed AACRO with the HP-CRO version, we give experimental results for some functions; the results of the execution on
Results of G-best 25 run times of HP-CRO1 and AACRO: (a)
Convergence curves of HP-CRO1 and AACRO: (a)
It can be observed that the performance of AACRO is better than that of HP-CRO in Figures
In this paper, a new algorithm, AACRO, based on balanced local and global search has been proposed. The algorithm builds on the features of ACRO, incorporates the global search operator of the PSO algorithm, and adds a weighting factor to control the ratio of local to global search. This structure allows the algorithm to switch seamlessly and efficiently between global and local search, making it easier to find optimal values.
Furthermore, we give the convergence proof and a convergence speed analysis, concluding that the AACRO algorithm converges at a high speed.
Finally, the algorithm is simulated and compared with the HP-CRO and RCCRO versions. The results show that the algorithm can solve optimization problems efficiently.
Our future work will focus on investigating AACRO's parameters to determine the impact of each one, and the structure of the algorithm still needs to be streamlined. We also expect to combine the improved algorithm with engineering practice, which is another key issue for the near future.
The authors declare that there are no conflicts of interest regarding the publication of this paper.