The differential evolution (DE) algorithm is a simple yet efficient metaheuristic for global optimization over continuous spaces. However, standard DE suffers from premature convergence, especially in the DE/best/1/bin variant. In order to exploit the direction-guidance information of the best individual in DE/best/1/bin while avoiding local traps, this paper proposes an enhanced differential evolution algorithm based on multiple mutation strategies, named EDE. The EDE algorithm combines an initialization technique, opposition-based learning initialization, which improves the quality of the initial solutions, with a new combined mutation strategy composed of DE/current/1/bin together with DE/
Optimization problems are ubiquitous in various areas, including industrial production, everyday life, and scientific research. These problems are usually nonlinear and nondifferentiable, and the number of their local optima may grow exponentially with the problem size. Evolutionary algorithms (EAs), which need only the values of the objective function, therefore offer considerable advantages and have drawn increasing attention from researchers worldwide. Accordingly, a great number of evolutionary algorithms have been developed, such as genetic algorithms (GAs), particle swarm optimization (PSO), ant colony optimization (ACO), and the differential evolution (DE) algorithm. Among them, differential evolution is one of the most powerful stochastic real-parameter optimization algorithms [
Due to its simple implementation, few control parameters, and fast convergence, DE has been widely and successfully applied to function optimization problems [
According to the aforementioned statements, DE has been very successful in solving various optimization problems. As far as the type of problem is concerned, most research focuses on continuous function optimization. However, both the convergence precision and the convergence speed on function optimization still leave room for improvement; that is, the exploration and exploitation abilities of DE are not well balanced. To overcome this imbalance, researchers have developed a large number of DE variants. For example, Noman and Iba [
Unfortunately, no specific DE version has so far achieved the best solution on all optimization problems, because exploration and exploitation often contradict each other in practice. Hence, searching for better approaches remains necessary. In order to solve continuous optimization problems more efficiently, an enhanced differential evolution algorithm based on multiple mutation strategies, called EDE for short, is presented in this paper.
The structure of the paper is organized as follows. The standard differential evolution algorithm is described briefly in Section
Differential evolution algorithm was first proposed by Storn and Price [
At the first step, a population of NP individuals is generated randomly by the following form:
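This initialization step can be sketched as follows; the uniform-sampling rule x_{i,j} = low_j + rand(0,1) · (high_j − low_j) is standard DE practice, and the function and parameter names are illustrative rather than taken from the paper:

```python
import numpy as np

def init_population(np_size, low, high, seed=None):
    """Randomly generate NP individuals inside the box [low, high]^D.

    Each component is x_{i,j} = low_j + rand(0,1) * (high_j - low_j),
    the standard DE initialization rule.
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    return low + rng.random((np_size, low.size)) * (high - low)

# 50 individuals in a 30-dimensional box [-100, 100]^30
pop = init_population(50, low=[-100.0] * 30, high=[100.0] * 30, seed=0)
```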
The mutation strategy is very important in DE. At this step, a mutant vector
In order to exchange information between a mutant vector
After crossover operation, the trial vector
In summary, except for the initialization phase, the aforementioned steps are repeated in turn until a stopping criterion is met.
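A minimal DE/rand/1/bin generation loop, covering the mutation, crossover, and selection steps just described, might look like this; the parameter values and names are illustrative defaults, not the paper's settings:

```python
import numpy as np

def de_rand_1_bin(fobj, low, high, np_size=50, F=0.5, CR=0.9,
                  max_gens=200, seed=0):
    """A minimal DE/rand/1/bin loop: mutation, binomial crossover, and
    greedy one-to-one selection, repeated until the generation budget
    is exhausted (minimization)."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    dim = low.size
    pop = low + rng.random((np_size, dim)) * (high - low)
    fit = np.array([fobj(x) for x in pop])
    for _ in range(max_gens):
        for i in range(np_size):
            # DE/rand/1 mutation: three mutually distinct indices != i
            r1, r2, r3 = rng.choice([k for k in range(np_size) if k != i],
                                    size=3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), low, high)
            # binomial crossover with one guaranteed component j_rand
            j_rand = rng.integers(dim)
            mask = rng.random(dim) < CR
            mask[j_rand] = True
            u = np.where(mask, v, pop[i])
            # greedy selection: trial replaces target if no worse
            fu = fobj(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = int(np.argmin(fit))
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x * x))
x_best, f_best = de_rand_1_bin(sphere, [-5.0] * 10, [5.0] * 10)
```

On the 10-dimensional sphere function this sketch converges close to the optimum at 0 within the 200-generation budget.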
Recently, Rahnamayan et al. [
In order to improve the quality of the initial population, the OBL scheme is employed in this work to initialize the population individuals of EDE. The initialization process is described in Algorithm
In Algorithm
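The OBL initialization idea, in its common formulation (opposite point x̃ = low + high − x, then keep the NP fittest of the 2·NP candidates), can be sketched as follows; this follows the usual OBL scheme and may differ in details from the paper's Algorithm:

```python
import numpy as np

def obl_init(fobj, np_size, low, high, seed=0):
    """Opposition-based learning initialization: generate a random
    population, form the opposite population x~ = low + high - x,
    and keep the NP fittest individuals from the union."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    rand_pop = low + rng.random((np_size, low.size)) * (high - low)
    opp_pop = low + high - rand_pop            # opposite points
    union = np.vstack([rand_pop, opp_pop])     # 2*NP candidates
    fits = np.array([fobj(x) for x in union])
    keep = np.argsort(fits)[:np_size]          # NP best of the union
    return union[keep], fits[keep]

sphere = lambda x: float(np.sum(x * x))
pop, fits = obl_init(sphere, 20, [-10.0] * 5, [10.0] * 5)
```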
A mutation strategy DE/current/1/bin is employed. Namely, the target vector
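One common reading of a current-based mutation rule is v_i = x_i + F · (x_r1 − x_r2); the sketch below is our interpretation of DE/current/1, not the paper's exact equation:

```python
import numpy as np

def mutate_current_1(pop, i, F, rng):
    """Current-based mutation: perturb the target vector itself with one
    scaled difference of two distinct random individuals,
    v_i = x_i + F * (x_r1 - x_r2)."""
    np_size = len(pop)
    r1, r2 = rng.choice([k for k in range(np_size) if k != i],
                        size=2, replace=False)
    return pop[i] + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(1)
pop = rng.random((10, 4))
v = mutate_current_1(pop, 0, F=0.5, rng=rng)
```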
In order to better take advantage of the guiding information of best individual, a new version of DE/best/1/bin, DE/
More specifically, under the first mutation strategy, newly generated mutant vectors are scattered around their respective target vectors, which not only preserves good population diversity but also avoids the over-randomness of the classic mutation strategy DE/rand/1/bin. According to the second mutation strategy DE/
In the meantime, a probabilistic parameter
As a matter of fact, the probability parameter
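The probabilistic switching between the two mutation strategies might be sketched as follows; both the switching rule and the best-guided formula here are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def combined_mutation(pop, i, best, F, p_select, rng):
    """Choose between two mutation strategies with a probability
    parameter: with probability p_select use the current-based rule
    (diversity-preserving), otherwise a best-guided rule (exploitative).
    The exact rules are assumptions for illustration."""
    np_size = len(pop)
    r1, r2 = rng.choice([k for k in range(np_size) if k != i],
                        size=2, replace=False)
    if rng.random() < p_select:
        base = pop[i]   # scatter the mutant around the target vector
    else:
        base = best     # guide the mutant by the best individual
    return base + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(2)
pop = rng.random((10, 4))
best = pop[0]
v = combined_mutation(pop, 3, best, F=0.5, p_select=0.5, rng=rng)
```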
After all differential evolution operations (mutation, crossover, and selection) have been carried out, a perturbation scheme is applied to the best individual in order to further balance the search abilities of the aforementioned solution search equations. In this process, two perturbation equations are introduced, and the best individual is perturbed dimension by dimension according to them, as described by (
From (
What is more, the term
Like the aforementioned trade-off scheme, a probability parameter
Concretely speaking, (
In order to keep solutions subject to boundary constraints, components of a solution that violate the predefined boundary constraints should be repaired. That is, if a parameter value produced by the solution search equations exceeds its predefined boundaries, it is reset to an acceptable value. The following repair rule, used in the literature [
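Since the paper's exact repair rule comes from its cited reference, the sketch below uses one commonly seen choice: random reinitialization of each violating component within its bounds.

```python
import numpy as np

def repair(x, low, high, rng):
    """Repair out-of-bound components. This illustrative rule resets
    each violating component to a fresh random value inside its
    bounds; in-bound components are left untouched."""
    x = np.asarray(x, float).copy()
    bad = (x < low) | (x > high)
    x[bad] = low[bad] + rng.random(bad.sum()) * (high[bad] - low[bad])
    return x

rng = np.random.default_rng(3)
low, high = np.full(5, -1.0), np.full(5, 1.0)
x = repair(np.array([0.5, -3.0, 2.0, 0.0, 1.5]), low, high, rng)
```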
In order to make effective use of the guidance information of the best individual, the mutation strategy DE/best/1/bin is considered. To prevent a large number of individuals from clustering around the global best individual, inspired by JADE [
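The JADE-inspired idea of drawing the base vector from the top 100·p% individuals, rather than always from the single best, can be sketched as follows; the formula is our illustration of that idea, not the paper's equation:

```python
import numpy as np

def mutate_pbest_1(pop, fits, i, F, p, rng):
    """JADE-style p-best mutation: the base vector is picked uniformly
    from the top 100*p% individuals (minimization), which keeps mutants
    from all clustering around one point, then perturbed by one scaled
    random difference."""
    np_size = len(pop)
    n_top = max(1, int(round(p * np_size)))
    top = np.argsort(fits)[:n_top]   # indices of the p-best individuals
    base = pop[rng.choice(top)]
    r1, r2 = rng.choice([k for k in range(np_size) if k != i],
                        size=2, replace=False)
    return base + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(4)
pop = rng.random((20, 6))
fits = np.array([float(np.sum(x * x)) for x in pop])
v = mutate_pbest_1(pop, fits, 5, F=0.5, p=0.1, rng=rng)
```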
To verify the optimization effectiveness of EDE, twenty-five benchmark functions with different characteristics, taken from Yao et al. [
These benchmark functions are listed briefly in Table
Benchmark functions used in experiments.
Test functions  Search space  Optimum
(The function definitions and search spaces did not survive extraction; the listed optimum is 0 for most of the functions.)
In our experimental study, all benchmark functions are tested in 30 dimensions and 100 dimensions. The corresponding maximum number of fitness function evaluations (
For this set of experiments on the 25 benchmark functions, we use the aforementioned parameter settings unless a change is mentioned. Furthermore, each test case is optimized over thirty independent runs. The experimental results for these well-known problems, together with comparisons with other famous methods, are reported as follows.
For the purpose of validating the enhancement achieved by EDE, EDE is first compared with canonical DE in terms of the best, worst, median, mean, and standard deviation (Std.) values of the solutions obtained by each algorithm over 30 independent runs. The corresponding results are listed in Table
Best, worst, median, mean, and standard deviation values achieved by DE and EDE through 30 independent runs.
Number  Dim.  Methods  Best  Worst  Median  Mean  Std.  Sig.
(The numeric entries of this table did not survive extraction. The surviving significance markers show EDE judged better than DE (†) on the large majority of function/dimension pairs at both 30 and 100 dimensions, with occasional ties (≈) and a few cases where DE is better.)
† indicates that EDE is better than its competitor by the Wilcoxon rank sum test at
 means that EDE is worse than its competitor.
Convergence performance of DE and EDE on the twelve test functions at
From Table
From Figure
According to the aforementioned analyses, it can be concluded that EDE is better than or approximately equal to DE on almost all the functions. In other words, multiple mutation strategies and perturbation schemes are beneficial to the performance of EDE.
In this subsection, EDE is further compared with some representative state-of-the-art DE variants, such as SaDE [
Performance comparison between EDE and other three DEs over 30 independent runs for the 16 test functions at
Number  JADE  SaDE  SaJADE  EDE
(The numeric results of this table did not survive extraction.)
† indicates that EDE is better than its competitor.
‡ means that EDE is worse than its competitor.
≈ means that the performance of the corresponding algorithm is even with that of EDE.
Bold entries denote the best results.
From Table
It should be pointed out that the results are summarized as
The artificial bee colony (ABC) algorithm, introduced by Karaboga and Basturk, is a relatively new swarm-based optimization algorithm [
The further comparison results are given in Table
Comparison between EDE and other two ABCs over 30 independent runs on the 21 test functions with
Number  ABC  MABC  EDE
(The numeric results of this table did not survive extraction.)
Bold entries denote the best results.
Here “a” means that the results obtained by EDE are set to zero on the function
From Table
In order to achieve a better compromise between the exploration and exploitation abilities of DE, this work presents an enhanced differential evolution algorithm, called EDE. In EDE, an opposition-based learning initialization technique is first employed. Next, inspired by JADE [
To verify the convergence performance of EDE, twenty-five benchmark functions with different characteristics are taken from the literature. The first set of experimental results demonstrates that EDE significantly enhances the performance of standard DE in terms of the best, worst, median, mean, and standard deviation (Std.) values of the final solutions in most cases. Moreover, the other two comparisons show that EDE performs significantly better than, or is at least highly competitive with, five well-known algorithms, namely JADE, SaDE, SaJADE, ABC, and MABC, on the majority of the corresponding benchmark functions. Therefore, it can be concluded that EDE is an efficient method and may be a good alternative for solving complex numerical optimization problems.
Last but not least, it is desirable to further apply the EDE algorithm to other optimization problems such as the training of neural networks, system parameter identification, and data clustering.
The authors declare that there is no conflict of interest regarding the publication of this paper.
This work is supported by the National Natural Science Foundation of China (Grant nos. 61064012, 61164003, 61263027, 61364026, and 61563028), the Natural Science Foundation of Gansu Province (Grant no. 148RJZA030), New Teacher Project of Research Fund for the Doctoral Program of Higher Education of China (Grant no. 20126204120002), and the Science and Technology Foundation of Lanzhou Jiaotong University (Grant no. ZC2014010).