This paper presents a Differential Evolution (DE) algorithm, named ANDE, for solving high-dimensional optimization problems over continuous spaces. ANDE introduces a new triangular mutation rule based on the convex combination vector of the triplet formed by three randomly chosen vectors and on the difference vectors between the best, better, and worst individuals among those three vectors. The new rule is combined with the basic mutation strategy DE/rand/1/bin and is applied with probability 2/3, since it has both exploration ability and an exploitation tendency. Furthermore, we propose a novel self-adaptive scheme that gradually changes the value of the crossover rate, exploiting the past experience of the individuals in the search space during the evolution process, which in turn considerably balances the common trade-off between population diversity and convergence speed. The proposed algorithm has been evaluated on the 20 standard high-dimensional benchmark numerical optimization problems of the IEEE CEC 2010 Special Session and Competition on Large-Scale Global Optimization. Comparisons between ANDE, its two variants, and the seven state-of-the-art evolutionary algorithms tested on this suite indicate that the proposed algorithm and its two variants are highly competitive for solving large-scale global optimization problems.
In general, a global numerical optimization problem can be expressed as follows (without loss of generality, a minimization problem is considered here):
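A standard way to write this bound-constrained minimization problem (the symbol names below are conventional placeholders, not necessarily the paper's exact notation):

```latex
\min_{\mathbf{x} \in \mathbb{R}^{D}} f(\mathbf{x}),
\qquad \text{subject to } x_{j} \in \left[ x_{j}^{\min},\, x_{j}^{\max} \right],
\quad j = 1, \dots, D,
```

where $D$ is the problem dimension and $f$ is the objective function to be minimized.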
The optimization of the large scale problems of this kind (i.e.,
This section provides a brief summary of the basic Differential Evolution (DE) algorithm. In simple DE, generally known as DE/rand/1/bin [
In order to establish a starting point for the optimization process, an initial population must be created. Typically, each decision variable in every vector of the initial population is assigned a randomly chosen value from the boundary constraints:
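A minimal sketch of this uniform initialization step (function and parameter names such as `init_population`, `lower`, and `upper` are illustrative, not from the paper):

```python
import random

def init_population(pop_size, lower, upper):
    """Draw each decision variable uniformly within its boundary constraints."""
    dim = len(lower)
    return [[random.uniform(lower[j], upper[j]) for j in range(dim)]
            for _ in range(pop_size)]
```

For the CEC 2010 suite the bounds differ per function, so the `lower`/`upper` vectors would be supplied per problem.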
At generation
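For reference, the classic DE/rand/1 mutation that this description leads up to can be sketched as follows (with `F` the scaling factor; the distinct-index convention is the usual one):

```python
import random

def de_rand_1(pop, i, F=0.5):
    """Mutant v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct and != i."""
    candidates = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = random.sample(candidates, 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            for j in range(len(pop[i]))]
```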
There are two main crossover types, binomial and exponential. In the binomial crossover, the target vector is mixed with the mutated vector, using the following scheme, to yield the trial vector
In an exponential crossover, an integer value
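Both crossover variants can be sketched as follows (the forced position `j_rand` guarantees that the trial vector inherits at least one mutant component; parameter names are illustrative):

```python
import random

def binomial_crossover(target, mutant, CR):
    """u_j = v_j if rand() <= CR or j == j_rand, otherwise x_j."""
    D = len(target)
    j_rand = random.randrange(D)
    return [mutant[j] if (random.random() <= CR or j == j_rand) else target[j]
            for j in range(D)]

def exponential_crossover(target, mutant, CR):
    """Copy a contiguous (circular) run of mutant genes starting at a random
    position; the run keeps growing while rand() <= CR."""
    D = len(target)
    trial = list(target)
    j = random.randrange(D)
    L = 0
    while True:
        trial[j] = mutant[j]
        j = (j + 1) % D
        L += 1
        if L >= D or random.random() > CR:
            break
    return trial
```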
DE adopts a greedy selection strategy. If and only if the trial vector
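The greedy one-to-one selection step can be sketched for minimization as:

```python
def greedy_select(target, trial, f):
    """Survivor selection of basic DE: the trial vector replaces the target
    if and only if its objective value is no worse."""
    return trial if f(trial) <= f(target) else target
```

Using `<=` rather than `<` lets the population drift across flat regions, which is the common DE convention.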
As previously mentioned, LSGO has attracted much attention from researchers during the past few years because many real-world problems and applications are high-dimensional in nature. Broadly, current EA-based LSGO research can be classified into two categories:
Cooperative Coevolution (CC) framework algorithms, or divide-and-conquer methods
Non-Cooperative-Coevolution framework algorithms, or non-divide-and-conquer methods
In this section, we outline a novel DE algorithm, ANDE, and explain the steps of the algorithm in detail.
Storn and Price [
Obviously, from mutation equation (
Thus, since the proposed directed mutation balances global exploration capability and local exploitation tendency, while the basic mutation favors exploration only, the probability of using the proposed mutation is twice that of applying the basic rule. The new mutation strategy is embedded into the DE algorithm and combined with the basic mutation strategy DE/rand/1/bin as follows.
If rand(0, 1) ≤ 2/3, the new triangular mutation rule is applied; otherwise, the basic DE/rand/1 mutation strategy is used.
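A plausible sketch of the triangular mutation described above, for minimization: rank the three randomly chosen vectors into best, better, and worst, form a convex combination of them, and perturb it with the three pairwise difference vectors. The equal convex weights and the single scale factor `F` are illustrative assumptions, not the paper's exact coefficients:

```python
def triangular_mutation(x1, x2, x3, f, F=0.5):
    """Sketch: rank the triplet by fitness (minimization), take a convex
    combination, and add the best/better/worst difference vectors.
    Equal weights and one scale factor F are simplifying assumptions."""
    best, better, worst = sorted([x1, x2, x3], key=f)
    w = [1/3, 1/3, 1/3]      # any nonnegative weights summing to 1 keep convexity
    conv = [w[0]*b + w[1]*m + w[2]*s for b, m, s in zip(best, better, worst)]
    return [c + F*(b - m) + F*(b - s) + F*(m - s)
            for c, b, m, s in zip(conv, best, better, worst)]
```

In ANDE this rule would then be chosen with probability 2/3, falling back to DE/rand/1 otherwise.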
The successful performance of the DE algorithm depends significantly on the choice of its three control parameters: the scaling factor
The core idea of the proposed self-adaptation scheme for the crossover rate CR is as follows. In the initial stage of the search, the differences among individual vectors are large because the individuals are randomly distributed and fully dispersed in the search space, i.e., the population diversity is high, which calls for a relatively small crossover rate. As the population evolves through the generations, diversity decreases: the vectors cluster as each individual moves closer to the best vector found so far. Consequently, in order to maintain population diversity and improve convergence speed, the crossover rate should gradually take larger values as the generations proceed, so as to preserve good genes as far as possible and promote convergence. In this way, the population diversity can be greatly enhanced through the generations. However, no single CR value balances diversity and convergence speed over the whole optimization process for a given problem. Consequently, to address this problem, and following the SaDE algorithm [
If the failure_counter_list
Randomly select one value from list
else
End If
Select the CR value from
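The CR-selection procedure above can be made concrete as follows. The success/failure bookkeeping and the sampling policy here are hedged assumptions based on the textual description, not the paper's exact pseudocode:

```python
import random

class CRPool:
    """Sketch of the self-adaptive CR scheme: a fixed list of CR values, each
    with success/failure counters.  During the learning period (LP) values are
    sampled uniformly; afterwards sampling is biased by observed successes,
    and a value whose consecutive-failure counter reaches max_failures is
    temporarily skipped.  ANDE's exact bookkeeping may differ."""
    def __init__(self, values, max_failures=20):
        self.values = list(values)
        self.success = [0] * len(values)
        self.failure = [0] * len(values)
        self.max_failures = max_failures

    def pick(self, in_learning_period):
        active = [i for i in range(len(self.values))
                  if self.failure[i] < self.max_failures]
        if in_learning_period or all(self.success[i] == 0 for i in active):
            return random.choice(active)
        weights = [self.success[i] + 1 for i in active]   # +1 smoothing
        return random.choices(active, weights=weights)[0]

    def feedback(self, i, improved):
        if improved:
            self.success[i] += 1
            self.failure[i] = 0      # a success resets the failure streak
        else:
            self.failure[i] += 1
```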
In Procedure
Concretely, Procedure
set the Learning Period (LP) = 10% GEN, set the Max_failure_counter = 20.
For each
For each CR value in the list, set the CR_Ratio_List
(three randomly selected vectors) and compute
The relative change improvement ratio (RCIR) is updated
It is worth noting that although this work is an extension and modification of our previous work in [
The performance of the proposed ANDE algorithm has been tested on 20 scalable optimization functions from the CEC 2010 special session and competition on large-scale global optimization. A detailed description of these test functions can be found in [
Separable functions
Partially separable functions, in which a small number of variables are dependent while all the remaining ones are independent (
Partially separable functions that consist of multiple independent subcomponents, each of which is
Fully nonseparable functions
To evaluate the performance of the algorithm, experiments were conducted on the test suite. We adopt the solution error measure
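The solution error measure used in such CEC comparisons is conventionally the gap to the known global optimum:

```python
def solution_error(f, x, x_star):
    """f(x) - f(x*): zero exactly when the candidate x reaches the optimum."""
    return f(x) - f(x_star)
```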
DE Enhanced by Neighborhood Search for Large Scale Global Optimization (SDENS) [
Large Scale Global Optimization using Self-Adaptive Differential Evolution algorithm (jDElsgo) [
Cooperative Coevolution with Delta Grouping for Large Scale Non-separable Function Optimization (DECCDML) [
The Differential Ant-Stigmergy Algorithm for Large Scale Global Optimization (DASA) [
Two-stage based Ensemble Optimization for Large Scale Global Optimization (EOEA) [
Memetic Algorithm Based on Local Search Chains for Large Scale Continuous Global Optimization (MASWCHAINS) [
Dynamic Multi-Swarm Particle Swarm Optimizer with Subregional Harmony Search (DMSPSOSHS) [
Firstly, some trials have been performed to evaluate the benefits and effectiveness of the proposed triangular mutation and self-adaptive crossover rate on the performance of ANDE. Two further versions of ANDE, denoted ANDE1 and ANDE2, have been tested and compared against the proposed one.
ANDE1, which is the same as ANDE except that only the new triangular mutation scheme is used
ANDE2, which is the same as ANDE except that only the basic mutation scheme is used
In this section, we compare directly the mean results obtained by ANDE, ANDE1, and ANDE2.
Tables S1, S2, and S3 contain the results obtained by ANDE, ANDE1, and ANDE2 in
In the majority of the functions, the differences between the mean and median are small, even when the final results are far from the optimum, regardless of the number of function evaluations. This implies that ANDE and its variants are robust algorithms.
In many functions, results with FEs =
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 over all functions at a significance level of 0.05 (with 1.25E+05 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus ANDE1  114  76  0.445  12  1  7  ≈ 
ANDE versus ANDE2  107  83  0.629  10  1  9  ≈ 
ANDE1 versus ANDE2  117  93  0.654  10  0  10  ≈ 
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 over all functions at a significance level of 0.05 (with 6.00E+05 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus ANDE1  92  118  0.627  12  0  8  ≈ 
ANDE versus ANDE2  170  40  0.015  12  0  8  + 
ANDE1 versus ANDE2  165  45  0.025  11  0  9  + 
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 over all functions at a significance level of 0.05 (with 3.00E+06 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus ANDE1  85  124  0.467  12  0  8  ≈ 
ANDE versus ANDE2  171  39  0.014  14  0  6  + 
ANDE1 versus ANDE2  165  45  0.025  11  0  9  + 
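The R+ and R− columns in the tables above are the Wilcoxon signed-rank sums over the per-function results. A sketch of how they are obtained for the paired error values of two algorithms (average ranks for ties; zero differences dropped, which is one common convention):

```python
def wilcoxon_rank_sums(a, b):
    """Return (R_plus, R_minus) for paired samples a, b (minimization):
    R_plus sums the ranks of problems where the first algorithm is better."""
    diffs = [x - y for x, y in zip(a, b) if x != y]   # drop exact ties
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                             # average ranks over ties
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    r_plus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    r_minus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    return r_plus, r_minus
```

The decision column then compares the resulting p value against the 0.05 significance level.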
In this section, we compare directly the mean results obtained by ANDE, ANDE1, and ANDE2 with the ones obtained by SDENS [
For many test functions, the worst results obtained by the proposed algorithms are better than the best results obtained by other algorithms with all FEs.
For many test functions, there is continuous improvement in the results obtained by our proposed algorithms, especially ANDE and ANDE1, with all FEs while the results with FEs =
For many functions, the remarkable performance of ANDE and its two versions with FEs =
ANDE and its two versions got very close to the optimum of the single-group
ANDE1, among all other algorithms, got very close to the optimum of the single-group
ANDE and ANDE1 perform well on all types of problems, indicating that they are less affected than most of the other algorithms by the characteristics of the problems.
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 versus SDENS, jDElsgo, DECCDML, DASA, EOEA, MASWCHAINS, and DMSPSOSHS over all functions at a significance level of 0.05 (with 1.25E+05 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus SDENS  203  7  0.000  19  0  11  + 
ANDE versus jDElsgo  210  0  0.000  20  0  0  + 
ANDE versus DECCDML  162  48  0.033  14  0  6  + 
ANDE versus DASA  44  166  0.023  6  0  14  − 
ANDE versus EOEA  26  184  0.003  2  0  18  − 
ANDE versus MASWCHAINS  30  180  0.005  2  0  18  − 
ANDE versus DMSPSOSHS  104  106  0.970  12  0  8  ≈ 


ANDE1 versus SDENS  210  0  0.000  20  0  0  + 
ANDE1 versus jDElsgo  210  0  0.000  20  0  0  + 
ANDE1 versus DECCDML  149  61  0.100  13  0  7  ≈ 
ANDE1 versus DASA  44  166  0.023  6  0  14  − 
ANDE1 versus EOEA  61  149  0.100  4  0  16  ≈ 
ANDE1 versus MASWCHAINS  19  191  0.001  2  0  18  − 
ANDE1 versus DMSPSOSHS  110  100  0.852  12  0  8  ≈ 


ANDE2 versus SDENS  185  25  0.003  17  0  3  + 
ANDE2 versus jDElsgo  194  16  0.001  18  0  2  + 
ANDE2 versus DECCDML  157  53  0.052  15  0  5  ≈ 
ANDE2 versus DASA  45  165  0.025  6  0  14  − 
ANDE2 versus EOEA  37  173  0.011  3  0  17  − 
ANDE2 versus MASWCHAINS  31  179  0.006  4  0  16  − 
ANDE2 versus DMSPSOSHS  105  105  1  13  0  7  ≈ 
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 versus SDENS, jDElsgo, DECCDML, DASA, EOEA, MASWCHAINS, and DMSPSOSHS over all functions at a significance level of 0.05 (with 6.00E+05 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus SDENS  210  0  0.000  20  0  0  + 
ANDE versus jDElsgo  192  18  0.001  18  0  2  + 
ANDE versus DECCDML  178  32  0.006  15  0  5  + 
ANDE versus DASA  77  133  0.296  8  0  12  ≈ 
ANDE versus EOEA  67  143  0.156  6  0  14  ≈ 
ANDE versus MASWCHAINS  38  172  0.012  1  0  15  −
ANDE versus DMSPSOSHS  78  132  0.313  8  0  12  ≈ 


ANDE1 versus SDENS  210  0  0.000  20  0  0  + 
ANDE1 versus jDElsgo  172  38  0.012  15  0  5  + 
ANDE1 versus DECCDML  162  48  0.033  12  0  8  + 
ANDE1 versus DASA  87  123  0.502  8  0  12  ≈ 
ANDE1 versus EOEA  61  149  0.100  4  0  16  ≈ 
ANDE1 versus MASWCHAINS  41  169  0.017  4  0  16  − 
ANDE1 versus DMSPSOSHS  93  117  0.654  9  0  11  ≈ 


ANDE2 versus SDENS  187  23  0.002  18  0  2  + 
ANDE2 versus jDElsgo  154  56  0.067  15  0  5  ≈ 
ANDE2 versus DECCDML  154  56  0.067  14  0  6  ≈ 
ANDE2 versus DASA  60  150  0.093  7  0  13  ≈ 
ANDE2 versus EOEA  52  158  0.048  5  0  15  − 
ANDE2 versus MASWCHAINS  43  167  0.021  6  0  14  − 
ANDE2 versus DMSPSOSHS  61  149  0.100  7  0  13  ≈ 
Results of the multiple-problem Wilcoxon’s test for ANDE, ANDE1, and ANDE2 versus SDENS, jDElsgo, DECCDML, DASA, EOEA, MASWCHAINS, and DMSPSOSHS over all functions at a significance level of 0.05 (with 3.00E+06 FEs).

Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus SDENS  187  23  0.002  17  0  3  + 
ANDE versus jDElsgo  31  179  0.006  4  0  16  − 
ANDE versus DECCDML  185  25  0.003  16  0  4  + 
ANDE versus DASA  103  107  0.940  12  0  8  ≈ 
ANDE versus EOEA  88  122  0.526  9  0  11  ≈ 
ANDE versus MASWCHAINS  66  144  0.145  8  0  12  ≈ 
ANDE versus DMSPSOSHS  54  156  0.057  7  0  13  ≈ 


ANDE1 versus SDENS  187  23  0.002  17  0  3  + 
ANDE1 versus jDElsgo  50  160  0.040  5  0  15  − 
ANDE1 versus DECCDML  164  26  0.005  13  1  6  + 
ANDE1 versus DASA  122  88  0.526  12  0  8  ≈ 
ANDE1 versus EOEA  100  110  0.852  9  0  11  ≈ 
ANDE1 versus MASWCHAINS  85  125  0.455  9  0  11  ≈ 
ANDE1 versus DMSPSOSHS  87  123  0.502  8  0  12  ≈ 


ANDE2 versus SDENS  155  55  0.062  15  0  5  ≈ 
ANDE2 versus jDElsgo  19  191  0.001  4  0  16  − 
ANDE2 versus DECCDML  131  79  0.332  10  0  10  ≈ 
ANDE2 versus DASA  64  146  0.126  8  0  12  ≈ 
ANDE2 versus EOEA  41  169  0.017  5  0  15  − 
ANDE2 versus MASWCHAINS  40  170  0.015  5  0  15  − 
ANDE2 versus DMSPSOSHS  21  189  0.002  16  0  4  − 
From Table
On the other hand, from Table
Finally, from Table
Additionally, it can be clearly seen that ANDE1 is significantly better than the SDENS and DECCDML algorithms, while ANDE1 is significantly worse than the jDElsgo algorithm. Besides, there is no significant difference between DMSPSOSHS, DASA, EOEA, MASWCHAINS, and ANDE1. On the other hand, ANDE2 is significantly worse than the jDElsgo, EOEA, MASWCHAINS, and DMSPSOSHS algorithms, and there is no significant difference between SDENS, DECCDML, DASA, and ANDE2.
From Tables
In this section, since the performance of ANDE and ANDE1 is very similar, but ANDE converged to better solutions faster than ANDE1 on many problems and ANDE explicitly includes both the ANDE1 and ANDE2 variants, some experiments were carried out to identify the key parameters behind the good performance of the proposed approach, by studying the effects of the self-adaptive crossover rate scheme, the learning period, and the maximum failure counter on the performance of the ANDE algorithm. Note that in this study we do not try to find optimal values for these parameters but to verify that the improved performance is obtained after combining the proposed self-adaptive crossover rate with the basic and proposed mutation schemes.
In the following three subsections, we investigate the effectiveness of the self-adaptive crossover scheme and experimentally determine appropriate values for the learning period and maximum failure counter parameters. Since these are not associated with the hybridization process and have approximately the same effect for different numbers of function evaluations (FEs), the solution errors of all variants of ANDE are only recorded in
In this subsection, some trials have been performed to evaluate the benefits and effectiveness of the proposed crossover rate adaptation scheme on the performance of the ANDE algorithm. Since all problems, with the exception of
If the failure_counter_list
Randomly select one value from list
else
End If
Select the value from
The new version of the ANDE algorithm, which uses one fixed list of CR values during the LP, is abbreviated ANDEOFL.
The statistical results including the best, median, mean, and worst results and the standard deviation over 25 independent runs of these algorithms are summarized in Table S7.
It can be clearly seen from Table
Results of the multipleproblem Wilcoxon’s test for ANDE,
Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus 
117  36  0.055  10  0  7  ≈ 
ANDE versus 
144  9  0.001  15  0  2  + 
ANDE versus 
146  7  0.001  16  0  1  + 
ANDE versus ANDEOFL  145  8  0.001  16  0  1  + 
The crossover rate (CR) practically controls the diversity of the population, and it is directly affected by the value of the learning period parameter. In this subsection, some trials were performed to determine suitable values for the learning period (LP). The performance of ANDE was investigated under two different LP values (5% of GEN and 20% of GEN), and the results were compared with those obtained with 10% of GEN. To analyze the sensitivity of this parameter, we further tested two extra configurations:
It can be clearly seen from Table
Results of the multipleproblem Wilcoxon’s test for ANDE,
Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus 
103.5  49.5  0.201  13  0  4  ≈ 
ANDE versus 
126  27  0.019  13  0  4  + 
In this subsection, some trials were performed to determine appropriate values for the maximum failure counter (MFC), which also directly affects the crossover rate (CR) and hence the diversity of the population. The performance of ANDE was investigated under three different maximum failure counter values (0, 10, and 30), and the results were compared with those obtained with the default value of 20. To analyze the sensitivity of this parameter, we further tested three extra configurations:
It can be clearly seen from Table
Results of the multipleproblem Wilcoxon’s test for ANDE,
Algorithm  R+  R−  p value  Better  Equal  Worse  Dec.
ANDE versus 
123  30  0.028  13  0  4  + 
ANDE versus 
81  55  0.501  9  1  7  ≈ 
ANDE versus 
127  26  0.017  13  0  4  + 
In order to efficiently concentrate the exploitation tendency in some subregions of the search space and to significantly promote the exploration capability over the whole search space during the evolutionary process of the conventional DE algorithm, this paper presented a Differential Evolution (ANDE) algorithm for solving large-scale global numerical optimization problems over continuous spaces. The proposed algorithm introduced a new triangular mutation rule based on the convex combination vector of the triplet formed by three randomly chosen vectors and on the difference vectors between the best, better, and worst individuals among those three vectors. The mutation rule is combined with the basic mutation strategy DE/rand/1/bin, where only one of the two mutation rules is applied at a time, the new rule with probability 2/3, since it has both exploration ability and an exploitation tendency. Furthermore, we proposed a novel self-adaptive scheme that gradually changes the value of the crossover rate by exploiting the past experience of the individuals during the evolution process, which in turn considerably balances the common trade-off between population diversity and convergence speed. The proposed mutation rule was shown to enhance the global and local search capabilities of basic DE and to increase the convergence speed. The algorithm was evaluated on the standard high-dimensional benchmark problems of the IEEE Congress on Evolutionary Computation 2010 competition; comparisons between ANDE, its two variants, and the seven state-of-the-art evolutionary algorithms tested on this suite indicate that the proposed algorithm and its two variants are highly competitive for solving large-scale global optimization problems.
The experimental results and comparisons showed that the ANDE algorithm performs well on large-scale global optimization problems of different types and complexities: it performed better with regard to search efficiency, final solution quality, convergence rate, and robustness when compared with the other algorithms. The performance of the ANDE algorithm was statistically superior to, or competitive with, other recent and well-known DE and non-DE algorithms. The effectiveness and benefits of the proposed modifications used in ANDE were experimentally investigated and compared. The two proposed algorithms ANDE and ANDE1 were found to be competitive in terms of solution quality, efficiency, convergence rate, and robustness; they were statistically superior to the state-of-the-art DE and non-DE algorithms, and they performed better than the ANDE2 algorithm in many cases, although ANDE2 remained competitive with some of the compared algorithms. Several current and future works can be developed from this study. Firstly, current research focuses on controlling the scaling factors by a self-adaptive mechanism and on developing another self-adaptive mechanism for the crossover rate. Additionally, a new version of ANDE combined with the Cooperative Coevolution (CC) framework is being developed and will be experimentally investigated soon. Moreover, future research will investigate the performance of the ANDE algorithm on constrained and multiobjective optimization problems as well as real-world applications such as data mining and clustering; large-scale combinatorial optimization problems will also be taken into consideration. Another possible direction is integrating the proposed triangular mutation scheme with the compared and other self-adaptive DE variants, and combining the proposed self-adaptive crossover with other DE mutation schemes.
Additionally, a promising research direction is combining the proposed triangular mutation with other evolutionary algorithms, such as genetic algorithms, harmony search, and particle swarm optimization, as well as foraging algorithms such as the artificial bee colony, the bees algorithm, and Ant Colony Optimization.
The authors declare that they have no competing interests.