The Scientific World Journal, Volume 2013, Article ID 172193, doi:10.1155/2013/172193

Research Article

Complexity Reduction in the Use of Evolutionary Algorithms to Function Optimization: A Variable Reduction Strategy

Guohua Wu,1,2 Witold Pedrycz,2,3 Haifeng Li,4,5 Dishan Qiu,1 Manhao Ma,1 and Jin Liu1

1 Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, 47 Yanzheng Street, Changsha, Hunan 410073, China
2 Department of Electrical & Computer Engineering, University of Alberta, Edmonton, AB, Canada T6R 2V4
3 Warsaw School of Information Technology, Newelska 6, Warsaw, Poland
4 School of Civil Engineering and Architecture, Central South University, Changsha, Hunan 410004, China
5 School of Geosciences and Info-Physics, Central South University, Changsha, Hunan, China

Academic Editors: P. Bala and P. K. Egbert

Received 13 August 2013; Accepted 8 September 2013; Published 23 October 2013

Copyright © 2013 Guohua Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Discovering and utilizing problem domain knowledge is a promising direction for improving the efficiency of evolutionary algorithms (EAs) in solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained, first-order differentiable optimization functions more efficiently. VRS originates from the observation that, for such a function, the optimal solution is located at a local extreme point at which the partial derivative with respect to each variable equals zero. From this system of partial derivative equations, quantitative relations among different variables can be obtained, and these relations must be satisfied at the optimal solution. Using such relations, VRS reduces the number of variables and shrinks the solution space when an EA is applied to the optimization function, thereby improving optimization speed and quality. Applying VRS to an optimization problem requires only modifying how the objective function is computed, so in practice it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization (PSO) variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS.

1. Introduction

Optimization, including continuous optimization and discrete optimization, plays an important role in scientific research, management, and industry, given that many real-world problems are essentially optimization tasks. Evolutionary algorithms (EAs), such as genetic algorithms (GAs), ant colony optimization (ACO), and particle swarm optimization (PSO), have shown competitive performance when solving complex and large-scale optimization problems. To improve the efficiency of EAs, two aspects deserve investigation. The first is the search capability of the EA itself, including both its exploitation and exploration capabilities. The second is how to effectively integrate domain knowledge about the optimization problem into the EA.

Previously, more attention was paid to the design of generic EA variants with higher search capability. Take PSO as an example: over the last decades, many enhanced PSO versions were developed, such as comprehensive learning PSO, memetic fitness Euclidean-distance PSO, orthogonal learning PSO, and PSO with local search. According to the no-free-lunch theorem, no algorithm is effective for all optimization problems, so it is hard to design a single efficient EA suitable for all kinds of optimization problems. However, if we can make use of valuable domain knowledge implied in an optimization problem, we may improve the efficiency of EAs by reducing the complexity of the problem itself.

In the area of discrete optimization, problem domain knowledge has started to attract researchers' attention. For instance, the incorporation of knowledge-based strategies into the heuristics of swarm optimization has been demonstrated to be effective. Note that the problem domain knowledge in discrete optimization (e.g., the scheduling problem [8, 9] and the spatial geoinformation services composition problem [10, 11]) depends on the concrete problems considered, and the knowledge extraction and discovery process is relatively subjective.

In comparison, in the area of continuous optimization, such as function optimization, problem domain knowledge is seldom considered. The integration of PSO with gradient-based search may be regarded as one instance in which problem domain knowledge is combined with EAs, as the gradient-based search technique utilizes the gradient information implied in the optimization problem to guide the search direction of the EA. It is believed that there should be some relations among different variables at the optima of an optimization problem. In particular, the problem domain knowledge of variable symmetry was formulated previously; based on this knowledge, an inner variable learning (IVL) strategy was proposed and incorporated into PSO, yielding a new PSO variant named PSO-IVL. PSO-IVL demonstrates the effectiveness and potential of integrating problem domain knowledge into EAs. However, it is not a generic algorithm because it is only suitable for optimization problems with symmetric variables.

We may therefore wonder whether there exist more general methods for finding relations among the variables of an optimization problem; the knowledge of such relations could then be used to simplify the original problem and improve the efficiency of EAs. As we know, variable relations reflect variable dependence, which can be exploited to shrink the solution space of the original optimization problem.

Driven by this motivation, we investigate a method to discover underlying variable relations existing in an unconstrained and first-order derivative optimization function. We find that the discovered variable relations can be used effectively to reduce the number of variables included in the optimization function when applying PSO (other EAs should be suitable as well) to that function optimization problem. Consequently a variable reduction strategy (VRS) is developed and integrated into the PSO variants. Experimental tests on some benchmark optimization functions and a real-world optimization problem demonstrate that VRS can reduce the complexity of the optimization functions and help PSO to find high-quality solutions more efficiently.

2. Variable Reduction Strategy

Assume that $f(x_1, x_2, \dots, x_D)$ has a first-order derivative. For the corresponding unconstrained optimization problem $\min f(x_1, x_2, \dots, x_D)$, the optimal solution satisfies

(1) $\dfrac{\partial f(x_1, x_2, \dots, x_D)}{\partial x_1} = 0, \quad \dfrac{\partial f(x_1, x_2, \dots, x_D)}{\partial x_2} = 0, \quad \dots, \quad \dfrac{\partial f(x_1, x_2, \dots, x_D)}{\partial x_D} = 0.$

It may sometimes be difficult to solve the equations above for exact values of the variables. There are two reasons for this. The first is that the equations may be nonlinear and complex, which makes it difficult to obtain a complete analytic solution. The second is that a multimodal optimization function may have many extreme points; that is, the solutions of the equations related to such a function are not unique. However, some quantitative and explicit relations among the variables can still be determined from (1). We do not have to find all the relations among variables: if we can obtain just some variable relations from (1), we can reduce the number of variables and shrink the solution space, thus decreasing the complexity of the original optimization function.

For example, if from (1) we can derive a relation

(2) $x_i = g\left(\{x_j \mid j = 1, \dots, m,\ j \neq i\}\right), \quad 1 \le i \le D,\ 1 \le m \le D,$

we say that $x_i$ can be expressed by $\{x_j \mid j = 1, \dots, m,\ j \neq i\}$. This variable relation must be satisfied at the optimal solution. Under this condition, in the course of using an EA (e.g., PSO) to solve the optimization function, the value of $x_i$ can be calculated directly from (2) and the values of the variables in $\{x_j \mid j = 1, \dots, m,\ j \neq i\}$. As a result, variable $x_i$ can be reduced in the problem-solving process.

To give an intuitive illustration, let us consider the optimization problem $\min f = x_1^2 + x_1 \sin x_2$ as an example. The solution space of this optimization problem is illustrated in Figure 1(a). Setting the partial derivatives of the objective function to zero, we obtain the following:

(3) $2x_1 + \sin x_2 = 0,$
(4) $x_1 \cos x_2 = 0.$

Illustration of an example of variable reduction strategy. (a) Solution space of the original problem; (b) solution space of the problem after variable reduction.

From (3), we get the relation $x_1 = -0.5 \sin x_2$. With this relation, variable $x_1$ can be reduced, and the original optimization problem becomes $\min f = -0.25 \sin^2 x_2$. The solution space of the optimization problem after variable reduction changes accordingly, as displayed in Figure 1(b). As a result, the original two-variable optimization problem is transformed into a one-variable optimization problem, and the solution space is shrunk from two dimensions to one. Therefore, with variable reduction, the complexity of this optimization problem is reduced noticeably.
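As a quick numerical check of this reduction, the sketch below (function names are ours, chosen for illustration) compares the original two-variable objective, with $x_1$ substituted from the relation, against the reduced one-variable form $-0.25\sin^2 x_2$:

```python
import math

def f_original(x1, x2):
    # Original two-variable objective: f = x1^2 + x1*sin(x2)
    return x1 ** 2 + x1 * math.sin(x2)

def f_reduced(x2):
    # Relation from (3): x1 = -0.5*sin(x2); substituting gives
    # f = -0.25*sin(x2)^2, a one-variable problem
    x1 = -0.5 * math.sin(x2)
    return f_original(x1, x2)
```

Evaluating `f_reduced` at any `x2` reproduces $-0.25\sin^2 x_2$ exactly, so an EA only has to search over $x_2$.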

It is clear that if more variable relations can be found, then more variables can be reduced. Let us introduce several essential definitions.

Core variable: a variable used to represent other variables.

Reduced variable: a variable that can be represented by core variables.

Optimization variable core: the collection of all core variables present in an optimization function. We denote it by $C = \{x_i \mid i = 1, \dots, n\}$, $1 \le n \le D$.

Obviously, the fewer the core variables and the more the reduced variables we obtain, the more the complexity of the original optimization function is eliminated. Therefore, the task of the variable reduction strategy (VRS) is to obtain an optimization variable core with minimum cardinality. A general theory and method for finding the minimum set of core variables remains an open problem, since the equations described in (1) may be too complex. However, we can at least safely conclude that if a variable appears in the derivative equations with order no higher than three, then this variable can be reduced, because polynomial equations of degree at most three admit closed-form solutions.

In general, the performance of EAs degrades noticeably with the dimensionality increase of an optimization problem. VRS can help alleviate the problem.

3. Experimental Study on Benchmark Optimization Problems

3.1. Experimental Setting

VRS is integrated into the basic version of PSO  to obtain a new PSO variant called PSO-VRS. We apply PSO-VRS to several benchmark optimization functions to test the effectiveness of VRS. The Rosenbrock function , variably dimensioned function , Wood function , and Ackley function  are selected as the benchmark optimization functions. Each function was optimized by some state-of-the-art PSO variants as well as PSO-VRS. We present the details of the variable reduction procedure for each function, provide the computational results of each algorithm, and demonstrate the evolutionary process of PSO-VRS. The PSO variants used in the comparative study are listed below:

PSO with inertia weight (PSO-w) ;

PSO with constriction factor (PSO-cf) ;

UPSO ;

fully informed particle swarm (FIPS) ;

FDR-PSO ;

CPSO-H ;

CLPSO ;

PSO-VRS.

It should be noted that the parameter settings of PSO-w, PSO-cf, UPSO, FDR, FIPS, CPSO-H, and CLPSO are the same as those in the original comparative study. The related parameters of PSO-VRS are set up as follows: the inertia weight w = 0.9 - (0.5·g/maxGen), where g is the current generation index and maxGen is the maximum number of generations; the maximum number of function evaluations maxFEs = 200000; acceleration coefficients c1 = c2 = 2.0; and number of particles ps = 20.
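The setup above can be sketched as a minimal global-best PSO that searches only over the core variables, with variable reduction hidden inside the objective. This is an illustrative sketch under the stated parameter settings, not the authors' exact implementation; the names `pso_vrs`, `reduced_obj`, `lo`, and `hi` are ours:

```python
import random

def pso_vrs(reduced_obj, dim, lo, hi, ps=20, max_fes=200000):
    """Global-best PSO over the core variables only (illustrative sketch)."""
    c1 = c2 = 2.0
    max_gen = max_fes // ps
    # initialise positions, velocities, and personal/global bests
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(ps)]
    vel = [[0.0] * dim for _ in range(ps)]
    pbest = [p[:] for p in pos]
    pbest_f = [reduced_obj(p) for p in pos]
    g = min(range(ps), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for gen in range(max_gen):
        w = 0.9 - 0.5 * gen / max_gen  # inertia weight schedule from the text
        for i in range(ps):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = reduced_obj(pos[i])  # objective already embeds variable reduction
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f
```

Because VRS only changes how the objective is computed, the same `pso_vrs` routine works for every reduced benchmark: one passes a one-variable reduced objective and `dim=1`.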

3.2. Variable Reduction Process of Test Optimization Functions

3.2.1. Rosenbrock Function

This function is formulated as $f(\mathbf{x}) = \sum_{i=1}^{n-1}\left(100(x_i^2 - x_{i+1})^2 + (x_i - 1)^2\right)$; it is multimodal and nonseparable and exhibits a very narrow valley from the local optimum to the global optimum. Let us expand the function as follows:

(5) $f(\mathbf{x}) = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + 100(x_2^2 - x_3)^2 + (x_2 - 1)^2 + 100(x_3^2 - x_4)^2 + (x_3 - 1)^2 + \cdots + 100(x_{n-1}^2 - x_n)^2 + (x_{n-1} - 1)^2.$

Setting the partial derivatives to zero, we have

(6) $\dfrac{\partial f}{\partial x_1} = 200(x_1^2 - x_2)\cdot 2x_1 + 2(x_1 - 1) = 0,$
(7) $\dfrac{\partial f}{\partial x_i} = -200(x_{i-1}^2 - x_i) + 200(x_i^2 - x_{i+1})\cdot 2x_i + 2(x_i - 1) = 0, \quad 1 < i < n,$
(8) $\dfrac{\partial f}{\partial x_n} = -200(x_{n-1}^2 - x_n) = 0.$

From (6), we have

(9) $x_2 = x_1^2 + \dfrac{x_1 - 1}{200 x_1}.$

Subsequently, from (7), we have

(10) $x_i = x_{i-1}^2 + \dfrac{(x_{i-1} - 1) - 100(x_{i-2}^2 - x_{i-1})}{200 x_{i-1}}, \quad 2 < i < n.$

Furthermore, from (8),

(11) $x_n = x_{n-1}^2.$

We can observe from expressions (9)-(11) that every other variable in the Rosenbrock function can be calculated from $x_1$, so the objective function can be evaluated with the value of $x_1$ alone via (9)-(11). Therefore, $x_1$ is the only core variable of this optimization function, and the original multivariable optimization problem is transformed into a one-variable optimization problem. The Rosenbrock function with 10 variables was optimized by each PSO variant. The search range of each variable is [-3, 3].
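The recursion (9)-(11) can be made concrete as a short Python sketch (function names are ours) that expands the single core variable $x_1$ into a full candidate solution and evaluates the Rosenbrock objective on it; the recursion assumes the intermediate values stay away from zero, since (9) and (10) divide by them:

```python
def rosenbrock(x):
    # f(x) = sum_i 100*(x_i^2 - x_{i+1})^2 + (x_i - 1)^2
    return sum(100.0 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def expand_from_core(x1, n):
    """Recover all n variables from the core variable x1 via (9)-(11)."""
    x = [x1]
    # (9): x2 = x1^2 + (x1 - 1)/(200*x1)   (assumes x1 != 0)
    x.append(x1 ** 2 + (x1 - 1.0) / (200.0 * x1))
    # (10): x_i from x_{i-1} and x_{i-2}, for 2 < i < n
    for _ in range(2, n - 1):
        a, b = x[-2], x[-1]
        x.append(b ** 2 + ((b - 1.0) - 100.0 * (a ** 2 - b)) / (200.0 * b))
    # (11): x_n = x_{n-1}^2
    x.append(x[-1] ** 2)
    return x

def rosenbrock_reduced(x1, n=10):
    # One-variable objective searched by PSO-VRS
    return rosenbrock(expand_from_core(x1, n))
```

At the known optimum $x_1 = 1$ the expansion yields all ones and a zero objective, which matches the results reported for PSO-VRS in Table 1.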

3.2.2. Variably Dimensioned Function

This function is described as $f(\mathbf{x}) = \sum_{i=1}^{n}(x_i - 1)^2 + \left(\sum_{i=1}^{n} i(x_i - 1)\right)^2 + \left(\sum_{i=1}^{n} i(x_i - 1)\right)^4$. For this function, we obtain

(12) $\dfrac{\partial f}{\partial x_i} = 2(x_i - 1) + 2i\sum_{j=1}^{n} j(x_j - 1) + 4i\left(\sum_{j=1}^{n} j(x_j - 1)\right)^3 = 0, \quad 1 \le i \le n.$

According to (12), for any two variables $x_i$ and $x_k$ we have

(13) $2(x_i - 1) + 2i\sum_{j=1}^{n} j(x_j - 1) = -4i\left(\sum_{j=1}^{n} j(x_j - 1)\right)^3,$
$\quad\ \ 2(x_k - 1) + 2k\sum_{j=1}^{n} j(x_j - 1) = -4k\left(\sum_{j=1}^{n} j(x_j - 1)\right)^3.$

From (13), since the right-hand sides stand in the ratio $i : k$, cross-multiplying the two equations and cancelling the common term gives

(14) $k(x_i - 1) = i(x_k - 1).$

In the sequel, from (14), we obtain $x_i = (i x_k - i + k)/k$.

Let $k = 1$; then we have $x_i = i x_1 - i + 1$. As a result, all other variables can be represented by $x_1$, which is the core variable of this optimization function.
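The reduction can be sketched directly (function names are ours): all $n$ variables are generated from $x_1$ via $x_i = i x_1 - i + 1$, and the original objective is evaluated on the expanded vector:

```python
def variably_dimensioned(x):
    # f(x) = sum (x_i - 1)^2 + (sum i*(x_i - 1))^2 + (sum i*(x_i - 1))^4
    n = len(x)
    s = sum((i + 1) * (x[i] - 1.0) for i in range(n))  # 1-based index weight
    return sum((xi - 1.0) ** 2 for xi in x) + s ** 2 + s ** 4

def vd_reduced(x1, n=10):
    # From (14) with k = 1: x_i = i*x1 - i + 1, so x1 is the sole core variable
    x = [i * x1 - i + 1.0 for i in range(1, n + 1)]
    return variably_dimensioned(x)
```

At $x_1 = 1$ every expanded variable equals 1 and the objective is 0, the global optimum reported in Table 1.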

3.2.3. Wood Function

This function comes in the form

(15) $f(\mathbf{x}) = \left(10(x_2 - x_1^2)\right)^2 + (1 - x_1)^2 + \left(\sqrt{90}\,(x_4 - x_3^2)\right)^2 + (1 - x_3)^2 + \left(\sqrt{10}\,(x_2 + x_4 - 2)\right)^2 + \left(10^{-1/2}(x_2 - x_4)\right)^2.$

We determine the related derivatives as shown below:

(16) $\dfrac{\partial f}{\partial x_1} = 200(x_2 - x_1^2)(-2x_1) - 2(1 - x_1) = 0,$
$\quad\ \ \dfrac{\partial f}{\partial x_2} = 200(x_2 - x_1^2) + 20.2 x_2 + 19.8 x_4 - 40 = 0,$
$\quad\ \ \dfrac{\partial f}{\partial x_3} = 180(x_4 - x_3^2)(-2x_3) - 2(1 - x_3) = 0,$
$\quad\ \ \dfrac{\partial f}{\partial x_4} = 180(x_4 - x_3^2) + 19.8 x_2 + 20.2 x_4 - 40 = 0.$

From (16), one has

(17) $x_2 = \dfrac{x_1 - 1}{200 x_1} + x_1^2,$
(18) $x_4 = \dfrac{1000(x_2 - x_1^2) + 101 x_2 - 200}{-99},$
(19) $x_4 = \dfrac{x_3 - 1}{180 x_3} + x_3^2,$
(20) $x_3 = \sqrt{\dfrac{1001 x_4 + 99 x_2 - 200}{900}}.$

We can see from (17)-(20) that $x_2$ can be calculated from $x_1$; $x_4$ can be calculated from $x_2$ and $x_1$; and $x_3$ can be calculated from $x_4$ and $x_2$. In fact, $x_2$, $x_4$, and $x_3$ can all be computed from $x_1$, which is therefore the only core variable of this optimization function. Note that, in the search process of PSO-VRS, the radicand $(1001 x_4 + 99 x_2 - 200)/900$ in (20) may be negative. In that case, the function fitness is penalized and set to 1000, to drive the solution back to the feasible area.
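The chained reduction (17), (18), (20), including the penalty for a negative radicand, can be sketched as follows (function names and the penalty constant placement are ours):

```python
import math

def wood(x1, x2, x3, x4):
    # Expanded form of (15)
    return (100.0 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2
            + 90.0 * (x4 - x3 ** 2) ** 2 + (1 - x3) ** 2
            + 10.0 * (x2 + x4 - 2) ** 2 + 0.1 * (x2 - x4) ** 2)

PENALTY = 1000.0  # fitness assigned when the radicand in (20) is negative

def wood_reduced(x1):
    # (17): x2 from x1   (assumes x1 != 0)
    x2 = (x1 - 1.0) / (200.0 * x1) + x1 ** 2
    # (18): x4 from x1 and x2
    x4 = (1000.0 * (x2 - x1 ** 2) + 101.0 * x2 - 200.0) / -99.0
    # (20): x3 from x2 and x4, penalising an infeasible (negative) radicand
    r = (1001.0 * x4 + 99.0 * x2 - 200.0) / 900.0
    if r < 0.0:
        return PENALTY
    x3 = math.sqrt(r)
    return wood(x1, x2, x3, x4)
```

At $x_1 = 1$ the chain yields $x_2 = x_3 = x_4 = 1$ and a zero objective, the known optimum of the Wood function.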

3.2.4. Ackley Function

This function reads as $f(\mathbf{x}) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e$.

Regarding this function, the partial derivative with respect to $x_j$ is

(21) $\dfrac{\partial f}{\partial x_j} = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right)\cdot(-0.2)\cdot\dfrac{1}{2\sqrt{(1/n)\sum_{i=1}^{n} x_i^2}}\cdot\dfrac{2x_j}{n} + \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right)\cdot\dfrac{2\pi}{n}\sin(2\pi x_j) = 0,$

and the analogous equation holds for $\partial f/\partial x_k$, with $x_j$ replaced by $x_k$.

Dividing the equation for $x_j$ in (21) by the equation for $x_k$ (the common exponential factors cancel), we obtain the relationship

(22) $\dfrac{x_j}{x_k} = \dfrac{\sin(2\pi x_j)}{\sin(2\pi x_k)}.$

It follows that $x_j = x_k$ for all $j$ and $k$. Therefore, $x_1$ can serve as the only core variable.
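This is the simplest reduction of the four: every variable equals the core variable, so the reduced objective evaluates the Ackley function on a constant vector (function names are ours):

```python
import math

def ackley(x):
    n = len(x)
    s1 = sum(xi ** 2 for xi in x) / n
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def ackley_reduced(x1, n=10):
    # From (22), all variables coincide at a stationary point: x_j = x1 for all j
    return ackley([x1] * n)
```

At $x_1 = 0$ the reduced objective is 0, the global optimum; away from 0 it is strictly positive.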

3.3. Computational Results and Comparative Study

To solve each test optimization function, each algorithm was run 30 times. The corresponding computational results are listed in Table 1, where Mean is the mean fitness value over the 30 runs, Std is the related standard deviation, and FEs is the mean number of function evaluations needed to obtain the results. The evolution of the best fitness value of each function obtained by PSO-VRS is displayed in Figure 2.

Optimization results obtained for the test functions; the best results are shown in boldface.

Algorithm   Rosenbrock function                  Variably dimensioned function
            Mean        Std         FEs          Mean        Std         FEs
PSO-w       2.182e+000  2.642e-002  30 000       1.576e+003  4.610e+002  200 000
PSO-cf      3.833e-001  2.500e-001  30 000       5.618e+000  1.385e+001  200 000
UPSO        1.055e+000  1.542e+000  30 000       1.164e+003  4.763e+002  200 000
FDR         6.480e-001  6.846e-001  30 000       3.188e-001  5.820e-001  200 000
FIPS        1.186e+000  1.080e-001  30 000       4.479e+003  1.317e+003  200 000
CPSO-H      1.044e+000  1.265e+000  30 000       1.365e+004  3.277e+003  200 000
CLPSO       1.262e+000  9.658e-001  30 000       2.889e+003  6.235e+002  200 000
PSO-VRS     0.0         0.0         11 565       0.0         0.0         12 684

Algorithm   Wood function                        Ackley function
            Mean        Std         FEs          Mean        Std         FEs
PSO-w       3.347e-006  3.383e-006  200 000      3.942e-014  1.123e+000  200 000
PSO-cf      9.784e-006  2.225e-007  200 000      1.123e+000  8.655e-001  200 000
UPSO        1.459e-009  5.144e-009  200 000      1.225e-015  3.162e-015  200 000
FDR         1.576e-009  1.589e-009  200 000      2.844e-014  4.107e-015  200 000
FIPS        5.868e-005  7.834e-005  200 000      4.812e-007  9.172e-008  200 000
CPSO-H      5.861e-001  2.616e-001  200 000      4.931e-014  1.104e-014  200 000
CLPSO       1.570e-002  3.330e-002  200 000      0.0         0.0         180 864
PSO-VRS     0.0         0.0         13 078       0.0         0.0         11 254

The evolutionary process of PSO-VRS on test functions, (a) Rosenbrock function, (b) variably dimensioned function, (c) Wood function, and (d) Ackley function.

From the results for the Rosenbrock function in Table 1, we can see that it is generally difficult for the other PSO variants to find an optimal or near-optimal solution. However, with the aid of VRS, PSO-VRS uses the basic PSO to find the optimal solution efficiently, within about 12,000 fitness evaluations on average. Moreover, Figure 2(a) demonstrates that VRS enables PSO to converge to high-quality solutions quickly.

The results for the variably dimensioned function listed in Table 1 reveal that most of the comparative PSO variants without VRS struggle to generate a high-quality solution. In contrast, PSO-VRS always produces the optimal solution in a relatively small number of function evaluations (about 12,684 on average), and Figure 2(b) shows that it converges to the optimal solution quickly. The variables of the variably dimensioned function are highly interdependent, which causes a great deal of difficulty for typical PSO variants. VRS, on the other hand, exploits the underlying variable relations and transforms the original problem into a one-variable optimization task, indicating that VRS significantly reduces the complexity of this function.

We can see from Table 1 that, compared with the other PSO variants, PSO-VRS generates the best (and optimal) result for the Wood function within no more than 13,100 function evaluations on average. Though the variable relations described in (17)-(20) appear more complex, they effectively support PSO in finding the optimal solution. Figure 2(c) underlines that PSO-VRS converges rapidly.

It can also be observed from Table 1 that both CLPSO and PSO-VRS always find the best results for the Ackley function, compared with the other PSO variants. The advantage of PSO-VRS over CLPSO is that it obtains the optimal solution at the cost of far fewer function evaluations. The fast convergence of PSO-VRS on the Ackley function is shown in Figure 2(d).

4. Experimental Study on a Real-World Optimization Problem

Frequency-modulated (FM) sound wave synthesis plays an important role in several modern music systems, and optimizing the parameters of an FM synthesizer is a six-dimensional optimization problem in which the vector to be optimized is $X = \{a_1, \omega_1, a_2, \omega_2, a_3, \omega_3\}$ [22, 23]. This is a highly complex multimodal problem with strong epistasis and minimum value $f(X^*) = 0$. It has frequently been solved by EAs or used as a benchmark real-world optimization problem to test the performance of new EA variants [25, 26]. The problem is formulated as follows:

(23) $y(t) = a_1\sin\left(\omega_1 t\theta + a_2\sin\left(\omega_2 t\theta + a_3\sin(\omega_3 t\theta)\right)\right),$
$\quad\ \ y_0(t) = 1.0\sin\left(5.0\,t\theta - 1.5\sin\left(4.8\,t\theta + 2.0\sin(4.9\,t\theta)\right)\right).$

The objective function is

(24) $\min f(X) = \sum_{t=0}^{100}\left(y(t) - y_0(t)\right)^2.$

To use the variable reduction strategy, we set the derivative of the objective function with respect to $a_1$ equal to zero and obtain

(25) $\sum_{t=0}^{100} 2\left(a_1\phi(t) - y_0(t)\right)\phi(t) = 0,$

where $y(t) = a_1\phi(t)$ and $\phi(t) = \sin\left(\omega_1 t\theta + a_2\sin\left(\omega_2 t\theta + a_3\sin(\omega_3 t\theta)\right)\right)$.

From (25), we have

(26) $a_1 = \dfrac{\sum_{t=0}^{100} y_0(t)\,\phi(t)}{\sum_{t=0}^{100} \phi(t)^2}.$

According to (26), variable $a_1$ can be calculated from the other five variables. Therefore, $a_1$ is the reduced variable, and the collection $\{\omega_1, a_2, \omega_2, a_3, \omega_3\}$ is the corresponding optimization variable core. With VRS, the solution space is shrunk from six dimensions to five.
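Equation (26) is a least-squares amplitude fit, which the sketch below makes explicit (function names are ours; the sampling angle $\theta = 2\pi/100$ is our assumption, taken from the usual CEC 2011 definition of this benchmark, since the text does not state it):

```python
import math

THETA = 2 * math.pi / 100  # assumed sampling angle (CEC 2011 convention)

def y0(t):
    # Target waveform from (23)
    return 1.0 * math.sin(5.0 * t * THETA
                          - 1.5 * math.sin(4.8 * t * THETA
                                           + 2.0 * math.sin(4.9 * t * THETA)))

def phi(t, w1, a2, w2, a3, w3):
    # phi(t) as defined after (25): y(t) = a1 * phi(t)
    return math.sin(w1 * t * THETA
                    + a2 * math.sin(w2 * t * THETA
                                    + a3 * math.sin(w3 * t * THETA)))

def fm_reduced(w1, a2, w2, a3, w3):
    # (26): the amplitude a1 minimising the squared error for the fixed
    # core variables is the least-squares ratio below
    ph = [phi(t, w1, a2, w2, a3, w3) for t in range(101)]
    num = sum(y0(t) * ph[t] for t in range(101))
    den = sum(p * p for p in ph)
    a1 = num / den if den != 0.0 else 0.0
    return sum((a1 * ph[t] - y0(t)) ** 2 for t in range(101))
```

Evaluating the core vector $(5.0, -1.5, 4.8, 2.0, 4.9)$ recovers $a_1 = 1.0$ and a zero error, confirming that eliminating $a_1$ preserves the optimum.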

To evaluate the impact of VRS on this optimization problem, we use PSO-w, PSO-cf, UPSO, FDR, FIPS, CPSO-H, and CLPSO, each with and without the integration of VRS, to solve the problem. Each algorithm is run 30 times.

From Table 2, we can see that, for every PSO variant, the results produced with the integration of VRS are better than those produced without it. The improvement is especially significant when FIPS and CLPSO are used as solvers. These results demonstrate the potential of applying VRS to real-world optimization problems. Moreover, only one variable is reduced in this problem, which indicates that when applying VRS we do not need to find all the quantitative variable relations or reduce most of the variables; even the reduction of a small number of variables can be beneficial and improve the efficiency of EAs.

Optimization results obtained for the frequency-modulated synthesizer optimization problem; the results obtained by EAs with VRS are shown in boldface.

Algorithm    Best           Worst          Mean           Std
PSO-w        0.0            1.258023e+001  3.752319e+000  5.853543e+000
PSO-w-VRS    0.0            1.254014e+001  3.375620e+000  5.893543e+000
PSO-cf       2.673488e-006  2.180154e+001  1.200588e+001  5.144827e+000
PSO-cf-VRS   4.275136e-007  1.217917e+001  8.466443e+000  4.848450e+000
UPSO         5.839747e-028  1.473172e+001  6.608215e+000  4.636487e+000
UPSO-VRS     0.0            1.120719e+001  6.425027e+000  4.185174e+000
FDR          0.0            2.016705e+001  1.191916e+001  6.193037e+000
FDR-VRS      0.0            1.254068e+001  1.055711e+001  3.626518e+000
FIPS         3.321497e-003  8.500601e+000  2.306348e+000  2.513825e+000
FIPS-VRS     9.388098e-010  2.787766e+000  1.127794e+000  1.359315e+000
CPSO-H       1.179023e+001  2.553845e+001  1.910668e+001  4.381173e+000
CPSO-H-VRS   7.783981e-001  1.683571e+001  1.274518e+001  5.058687e+000
CLPSO        1.354023e-004  1.138270e+001  1.666438e+000  3.567321e+000
CLPSO-VRS    2.309327e-005  6.541305e+000  3.734962e-001  1.456388e+000
5. Conclusions

The utilization of domain knowledge associated with an optimization problem can reduce the complexity of the original problem and facilitate the solution search of EAs. In this study, we investigate the underlying knowledge of quantitative variable relations that must be satisfied at the optimal solutions of an unconstrained, first-order differentiable optimization function. Based on these relations, we propose a variable reduction strategy (VRS). The essence of VRS is to find an optimization variable core with the minimum number of core variables. Computational results and comparative studies carried out on several benchmark optimization functions and a real-world optimization problem demonstrate that VRS can significantly improve the efficiency of PSO variants. Currently, we cannot guarantee that VRS applies to all unconstrained optimization problems. However, it is worthwhile to check the variable relations and use VRS when applying EAs to unconstrained optimization problems. VRS is expected to have large application potential in real-world optimization problems.

Future research can proceed in four directions. First, although the variable reduction strategy is generic and effective, it can sometimes be very difficult to obtain variable relations from the partial derivatives of an optimization function because of the complexity of those derivatives. To construct a solid and comprehensive theory of variable reduction, we will investigate whether there are generic and formal theories for finding explicit variable relations from a group of equations. Second, we will consider formulating the underlying variable relations with approximate methods, such as neural networks. Third, we will further develop the variable reduction strategy so that it can be applied to constrained optimization problems. Fourth, we will test the efficiency and effectiveness of the variable reduction strategy on more real-world optimization problems.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (NSFC, 51178193 and 41001220). Guohua Wu is supported by the China Scholarship Council under Grant no. 201206110082. The authors thank Dr. Ponnuthurai Nagaratnam Suganthan for providing them with the source codes of the comparative algorithms and giving them insightful suggestion and comments.

References

1. Wu G., Pedrycz W., Ma M., Qiu D., Li H. A particle swarm optimization variant with an inner variable learning strategy. The Scientific World Journal. In press.
2. Liang J. J., Qin A. K., Suganthan P. N., Baskar S. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary Computation. 2006;10(3):281-295.
3. Liang J. J., Qu B. Y., Ma S. T., Suganthan P. N. Memetic fitness Euclidean-distance particle swarm optimization for multi-modal optimization. Bio-Inspired Computing and Applications. Lecture Notes in Computer Science, vol. 6840. 2012:378-385.
4. Zhan Z. H., Zhang J., Li Y., Shi Y. H. Orthogonal learning particle swarm optimization. IEEE Transactions on Evolutionary Computation. 2011;15(6):832-847.
5. Qu B. Y., Liang J. J., Suganthan P. N. Niching particle swarm optimization with local search for multi-modal optimization. Information Sciences. 2012;197:131-143.
6. Wolpert D. H., Macready W. G. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation. 1997;1(1):67-82.
7. Xing L. N., Chen Y. W., Wang P., Zhao Q. S., Xiong J. A knowledge-based ant colony optimization for flexible job shop scheduling problems. Applied Soft Computing. 2010;10(3):888-896.
8. Wu G., Ma M., Zhu J., Qiu D. Multi-satellite observation integrated scheduling method oriented to emergency tasks and common tasks. Journal of Systems Engineering and Electronics. 2012;23:723-733.
9. Wu G., Liu J., Ma M., Qiu D. A two-phase scheduling method with the consideration of task clustering for earth observing satellites. Computers & Operations Research. 2013;40(3):1884-1894.
10. Li H., Wu B. Adaptive geo-information processing service evolution: reuse and local modification method. ISPRS Journal of Photogrammetry and Remote Sensing. 2013;83:165-183.
11. Li H., Zhu Q., Yang X., Xu L. Geo-information processing service composition for concurrent tasks: a QoS-aware game theory approach. Computers and Geosciences. 2012;47:46-59.
12. Noel M. M. A new gradient based particle swarm optimization algorithm for accurate computation of global minimum. Applied Soft Computing. 2012;12(1):353-359.
13. Shi Y., Eberhart R. A modified particle swarm optimizer. Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98). 1998:69-73.
14. Fan S. K. S., Zahara E. A hybrid simplex search and particle swarm optimization for unconstrained optimization. European Journal of Operational Research. 2007;181(2):527-548.
15. Shi Y., Eberhart R. A modified particle swarm optimizer. Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98). 1998:69-73.
16. Clerc M., Kennedy J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation. 2002;6(1):58-73.
17. Parsopoulos K., Vrahatis M. UPSO: a unified particle swarm optimization scheme. Proceedings of the International Conference of Computational Methods in Sciences and Engineering (ICCMSE '04), Lecture Series on Computer and Computational Sciences, vol. 1. 2004:868-873.
18. Mendes R., Kennedy J., Neves J. The fully informed particle swarm: simpler, maybe better. IEEE Transactions on Evolutionary Computation. 2004;8(3):204-210.
19. Peram T., Veeramachaneni K., Mohan C. K. Fitness-distance-ratio based particle swarm optimization. Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03). 2003:174-181.
20. van den Bergh F., Engelbrecht A. P. A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation. 2004;8(3):225-239.
21. Suganthan P. N., Hansen N., Liang J. J., Deb K., Chen Y., Auger A., Tiwari S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Nanyang Technological University, Singapore; 2005.
22. Das S., Suganthan P. N. Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Jadavpur University, Kolkata, India; 2010.
23. Horner A., Beauchamp J., Haken L. Machine tongues XVI: genetic algorithms and their application to FM matching synthesis. Computer Music Journal. 1993;17(4):17-29.
24. Wang H., Rahnamayan S., Sun H., Omran M. G. Gaussian bare-bones differential evolution. IEEE Transactions on Cybernetics. 2013;43(2):634-647.
25. Ghosh S., Das S., Roy S., Minhazul Islam S. K., Suganthan P. N. A differential covariance matrix adaptation evolutionary algorithm for real parameter optimization. Information Sciences. 2012;182(1):199-219.
26. Das S., Abraham A., Chakraborty U. K., Konar A. Differential evolution using a neighborhood-based mutation operator. IEEE Transactions on Evolutionary Computation. 2009;13(3):526-553.