An Improved Differential Evolution Algorithm Based on Adaptive Parameter

The differential evolution (DE) algorithm is a population-based heuristic global optimization technique that is easy to understand, simple to implement, reliable, and fast. The evolutionary control parameters directly influence the performance of the algorithm, yet their adjustment is a global behavior, and at present there is no general theory for controlling them during the evolution process. In this paper, we propose an adaptive parameter adjustment method that dynamically adjusts the control parameters according to the evolution stage. Experiments on high-dimensional function optimization show that the improved algorithm has stronger global exploration ability and faster convergence speed.


Introduction
In recent years, intelligent optimization algorithms [1] have come to be regarded as practical tools for nonlinear optimization problems. The differential evolution algorithm [2, 3], first introduced by Storn and Price in 1997, is a novel evolutionary algorithm built on the basis of genetic algorithms. It is a bionic intelligent algorithm that simulates the natural mechanism of biological evolution. Its main idea is to generate a temporary individual from the differences between individuals within the population and then randomly recombine the population as it evolves. The algorithm has good global convergence and robustness and is well suited to a wide variety of numerical optimization problems, which has quickly made it a hot topic in the optimization field.
However, the DE algorithm can easily fall into a local optimum when handling multimodal optimization problems with large search spaces. To improve the optimization performance of DE, many scholars have proposed methods for setting the control parameters [15, 16]. Although these methods improve the performance of standard DE to some extent, they still cannot achieve satisfactory results on some functions. In this paper, we propose an adaptive parameter adjustment method based on the evolution stage.
This paper is organized as follows. Related work is described in Section 2. The background of DE is presented in Section 3. The improved algorithm is presented in Section 4. Experimental tests and results are given in Section 5. Section 6 concludes the paper.

Related Work
The DE algorithm has only a few control parameters, but these parameters have a great impact on the performance of the algorithm, such as the quality of the optimal value and the convergence rate, and there is still no good general way to determine them. To deal with this problem, researchers have made several attempts. Gamperle et al. [17] reported that it is more difficult

Introduction to DE
Compared with other evolutionary algorithms, DE retains a population-based global search strategy and uses a simple differential mutation operation together with one-to-one competition, which reduces the complexity of the genetic operations. At the same time, DE's memory of the current search state allows it to dynamically adjust its search strategy, giving it strong global convergence and robustness, so it is suitable for optimization problems in complex environments. The basic operations of selection, crossover, and mutation are the foundation of the differential algorithm.
In an iterative process, the population of each generation G contains NP individuals. Suppose that individual i of generation G is represented as x_{i,G} = (x_{1,i,G}, x_{2,i,G}, ..., x_{D,i,G}), where D is the dimension of the problem.

Mutation Operation.
A mutant individual can be generated by the following formula:

v_{i,G+1} = x_{r1,G} + F · (x_{r2,G} − x_{r3,G}),

where r1, r2, and r3 are distinct random integers in the interval [1, NP] and the variation factor F is a real number in the interval [0, 2]; it controls the amplification of the differential variation x_{r2,G} − x_{r3,G}.
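The mutation step above can be sketched in NumPy; the array layout (one individual per row) and the helper name are our own illustrative choices, not from the paper:

```python
import numpy as np

def mutate(pop, i, F):
    """DE/rand/1 mutation: v = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices, all different from i."""
    NP = len(pop)
    candidates = [j for j in range(NP) if j != i]
    r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```

Sampling without replacement guarantees the three base vectors are distinct, as the formula requires.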

Crossover Operation.
In the differential algorithm, the crossover operation is introduced to increase the diversity of the new population. According to the crossover strategy, the old and new individuals exchange part of their components to form a new individual, which can be represented as follows:

u_{j,i,G+1} = v_{j,i,G+1} if rand(j) ≤ CR or j = rnbr(i); otherwise u_{j,i,G+1} = x_{j,i,G},

where rand(j) is uniformly distributed in the interval [0, 1], CR is the crossover probability in the interval [0, 1], and rnbr(i) is a random integer in [1, D] that ensures at least one component of the trial vector comes from the mutant vector.
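The binomial crossover described above can be sketched as follows (a minimal NumPy version; the function name is ours):

```python
import numpy as np

def crossover(target, donor, CR):
    """Binomial crossover: take the donor gene where rand <= CR,
    and force at least one donor gene via a random index jrand."""
    D = len(target)
    jrand = np.random.randint(D)      # guaranteed donor position
    mask = np.random.rand(D) <= CR
    mask[jrand] = True
    return np.where(mask, donor, target)
```

The forced index `jrand` plays the role of rnbr(i): even with CR = 0 the trial vector differs from the target in at least one component.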

Selection Operation.
The selection operation is a greedy strategy: the candidate individual generated by the mutation and crossover operations competes with the target individual,

x_{i,G+1} = u_{i,G+1} if f(u_{i,G+1}) ≤ f(x_{i,G}); otherwise x_{i,G+1} = x_{i,G},

where f is the fitness function.
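The greedy one-to-one selection is the simplest of the three operators; a sketch for minimization:

```python
def select(target, trial, f):
    """Greedy selection for minimization: keep the trial vector
    only if it is at least as good as the target vector."""
    return trial if f(trial) <= f(target) else target
```

Because each trial competes only against its own target, the population's best fitness can never get worse from one generation to the next.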
The basic differential evolution (DE) algorithm is shown as Algorithm 1.
Algorithm 1 (the differential evolution algorithm).
(1) Initialize the number of population NP, the maximum number of evolution Maxinter, the scale factor F, and the cross-factor CR.
(2) Initialize the population pop.
(3) Follow the DE/rand/1/bin policy and produce a new generation of individuals: (a) mutation operation; (b) crossover operation; (c) selection operation.
(4) Until the termination criterion is met.
The flow chart of differential evolution algorithm is shown in Figure 1.
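Putting the three operators together, the basic DE/rand/1/bin loop can be sketched end to end; the parameter defaults and bound handling below are illustrative assumptions, not prescriptions from the paper:

```python
import numpy as np

def de(f, bounds, NP=20, F=0.6, CR=0.5, maxiter=200, seed=1):
    """Minimal DE/rand/1/bin loop in the shape of Algorithm 1."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    D = len(lo)
    pop = lo + rng.random((NP, D)) * (hi - lo)     # (2) initialize pop
    fit = np.array([f(x) for x in pop])
    for _ in range(maxiter):                       # (4) termination: iteration budget
        for i in range(NP):
            # (a) mutation: v = x_r1 + F * (x_r2 - x_r3)
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                                    3, replace=False)
            v = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), lo, hi)
            # (b) binomial crossover with a forced donor gene
            jrand = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[jrand] = True
            u = np.where(mask, v, pop[i])
            # (c) greedy selection
            fu = f(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    best = fit.argmin()
    return pop[best], fit[best]
```

Clipping the mutant back into the bounds is one common convention; other repair rules (reflection, re-sampling) are equally valid.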

The Adaptive Control Parameter Adjustment Method (ADE)
From the standard DE algorithm, it is known that the scale factor F and the cross-factor CR not only affect the convergence speed of the algorithm but may also lead to premature convergence. In this paper, we propose an adaptive adjustment method based on the evolution stage. We use a quarter-cycle sine function with values in (−1, 0) and a quarter-cycle cosine function with values in (0, 1). The curves of the two functions change slowly at the beginning and at the end, and increase rapidly in the middle, which makes them well suited for setting the values of F and CR: in the early and late stages F and CR change little, while they increase relatively quickly in the middle, matching the global-search needs of DE. The resulting update rules are given by formulas (6) and (7), where a and b are constants; for example, we can set a = 0.8 and b = 0.75 in the experiments. MAXITER is the maximum number of iterations, and t is the current iteration number. The procedure for implementing the ADE is given by the following steps.
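Since formulas (6) and (7) themselves are not reproduced in this text, the following is only one plausible instantiation of the described schedules — quarter-cycle sine/cosine curves that change slowly at the start and end of the run — using a = 0.8 and b = 0.75 as suggested; the exact expressions in the paper may differ:

```python
import math

def adaptive_F(t, maxiter, a=0.8):
    """One plausible reading of formula (6): F rises from 0 to a
    along sin(pi*t/maxiter - pi/2), shifted so it changes slowly
    at the beginning and the end of the run."""
    return a * (1 + math.sin(math.pi * t / maxiter - math.pi / 2)) / 2

def adaptive_CR(t, maxiter, b=0.75):
    """One plausible reading of formula (7): CR rises from 0 to b
    along (1 - cos(pi*t/maxiter)) / 2, with the same slow-fast-slow shape."""
    return b * (1 - math.cos(math.pi * t / maxiter)) / 2
```

Both schedules are monotone increasing in t with their steepest change at mid-run, which is the qualitative behavior the paragraph above describes.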

Algorithm 2 (the improved differential evolution algorithm).
(1) Initialize the number of population NP, the maximum number of evolution Maxinter, the scale factor F, and the cross-factor CR.
(2) Initialize the population pop.
(3) Update the scale factor F of each individual according to formula (6) above.
(4) Update the cross-factor CR of each individual according to the above formula (7).
(5) Perform the mutation, crossover, and selection operations to produce a new generation of individuals.
(6) Until the termination criterion is met.

Experimental Results
A set of unconstrained real-valued benchmark functions shown in Table 1 was used to investigate the effect of the improved algorithm.
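For reference, the benchmark functions named in this section have the following standard definitions (our own code, assuming the usual formulations; "Shaffer" is taken here to be the two-dimensional Schaffer F6 function). All five have a global minimum of 0 at the origin:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return float(1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i))))

def ackley(x):
    n = x.size
    return float(-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

def schaffer(x):
    s = x[0] ** 2 + x[1] ** 2
    return float(0.5 + (np.sin(np.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2)
```

Sphere is unimodal; the other four are multimodal, which is what makes the premature-convergence behavior of DE visible.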
The results are shown in Table 2. Each value is the average of 10 repetitions. We set the scale factor F = 0.6 and the cross-factor CR = 0.5 for the standard DE algorithm, and dynamically adjusted F and CR according to the evolution stage for the ADE algorithm.
From Table 2, we can see that neither algorithm performs better than the other on all five functions, but on average, ADE is better than the DE algorithm.
For the Sphere, Rastrigin, and Griewank functions, the ADE algorithm effectively improves the accuracy, so that the optimal value obtained is much closer to the theoretical one than with the standard DE algorithm. The Ackley function is a multimodal function; the iteration results show that the accuracy of the improved algorithm is not as good as that of the standard DE algorithm, but the difference is small and acceptable. For the Shaffer function, neither algorithm is clearly superior. For all five functions, there is a significant improvement in convergence time, as expected. These experimental results show that the improved algorithm can effectively increase the convergence speed while maintaining excellent convergence quality.
The comparison of the convergence curves of the two methods is shown in Figures 2, 3, 4, 5, and 6.

Conclusion
The scale factor F and the cross-factor CR have a great impact on the performance of the DE algorithm, such as the quality of the optimal value and the convergence rate, and there is still no good general way to determine them. In this paper, we proposed an adaptive parameter adjustment method based on the evolution stage. The experiments above show that the improved algorithm has stronger global exploration ability and faster convergence speed and can be widely applied to other optimization tasks.


Figure 1: Flow chart of the differential evolution algorithm.
The experimental results show that the ADE algorithm achieves better results: compared with DE, the ADE algorithm has both global search ability and fast convergence speed.

Table 1: Functions used to test the effect of ADE.

Table 2: The performances of DE and ADE.