Discovering and utilizing problem domain knowledge is a promising direction for improving the efficiency of evolutionary algorithms (EAs) in solving optimization problems. We propose a knowledge-based variable reduction strategy (VRS) that can be integrated into EAs to solve unconstrained optimization functions with first-order derivatives more efficiently. VRS originates from the knowledge that, in such a function, the optimal solution lies at an extreme point at which the partial derivative with respect to each variable equals zero. From this collection of partial derivative equations, some quantitative relations among different variables can be obtained; these relations must be satisfied by the optimal solution. By exploiting such relations, VRS reduces the number of variables and shrinks the solution space when an EA is applied to the optimization function, thus improving both the speed and the quality of optimization. Applying VRS to an optimization problem only requires modifying the way the objective function is calculated, so in practice it can be integrated with any EA. In this study, VRS is combined with particle swarm optimization variants and tested on several benchmark optimization functions and a real-world optimization problem. Computational results and a comparative study demonstrate the effectiveness of VRS.
1. Introduction
Optimization, including continuous optimization and discrete optimization, plays an important role in scientific research, management, industry, and so forth, given the fact that many problems in the real world are essentially optimization tasks. Evolutionary algorithms (EAs), such as genetic algorithms (GAs), ant colony optimization (ACO), and particle swarm optimization (PSO), have shown competitive performance when solving complex and large-scale optimization problems. To improve the efficiency of EAs, two aspects are important and deserve investigation. The first one is the search capability of the used EA itself, including both the exploitation and exploration capabilities. The other one is how to effectively integrate domain knowledge about the optimization problem into EAs [1].
Previously, more attention was paid to the design of generic EA variants with higher search capability. Take PSO as an example: over the past decades, many enhanced PSO versions have been developed, such as comprehensive learning PSO [2], memetic fitness Euclidean-distance PSO [3], orthogonal learning PSO [4], and PSO with local search [5]. According to the no free lunch theorem [6], no algorithm is effective for all optimization problems, so it is hard to design an efficient EA suitable for all kinds of optimization problems. However, if we can make use of valuable domain knowledge implied in an optimization problem, we may improve the efficiency of EAs by reducing the complexity of the problem itself.
In the area of discrete optimization, problem domain knowledge has started to attract researchers' attention. For instance, the incorporation of knowledge-based strategies into the heuristics of swarm optimization has been demonstrated to be effective [7]. Note that the problem domain knowledge in discrete optimization (e.g., the scheduling problem [8, 9] and the spatial geoinformation services composition problem [10, 11]) depends on the concrete problems considered, and the knowledge extraction and discovery process is relatively subjective.
In comparison, in the area of continuous optimization, such as function optimization, problem domain knowledge is seldom considered. One may regard the integration of PSO with a gradient-based search technique as an instance in which problem domain knowledge is incorporated into EAs [12], since the gradient-based search technique utilizes the gradient information implied in the optimization problem to guide the search direction of the EA. It is believed that there should be some relations among different variables at the optima of an optimization problem. In particular, in [1], the problem domain knowledge of variable symmetry was formulated. Based on this knowledge, an inner variable learning (IVL) strategy was proposed and incorporated into PSO, yielding a new PSO variant named PSO-IVL. PSO-IVL demonstrates the effectiveness and potential of integrating problem domain knowledge into EAs. However, it is not a generic algorithm because it is only suitable for optimization problems with symmetric variables.
Therefore, we may wonder whether there exist more general methods for finding relations among the variables of an optimization problem; we could then utilize the knowledge of such relations to simplify the original problem and improve the efficiency of EAs. A variable relation reflects variable dependence, and exploiting such dependence shrinks the solution space of the original optimization problem.
Driven by this motivation, we investigate a method to discover underlying variable relations existing in an unconstrained and first-order derivative optimization function. We find that the discovered variable relations can be used effectively to reduce the number of variables included in the optimization function when applying PSO (other EAs should be suitable as well) to that function optimization problem. Consequently a variable reduction strategy (VRS) is developed and integrated into the PSO variants. Experimental tests on some benchmark optimization functions and a real-world optimization problem demonstrate that VRS can reduce the complexity of the optimization functions and help PSO to find high-quality solutions more efficiently.
2. Variable Reduction Strategy
Assume that f(x1,x2,…,xD) has a first-order derivative. For the corresponding unconstrained optimization problem minf(x1,x2,…,xD), the optimal solution arises from the relationships
(1) ∂f(x1, x2, …, xD)/∂x1 = 0,
∂f(x1, x2, …, xD)/∂x2 = 0,
⋮
∂f(x1, x2, …, xD)/∂xD = 0.
It may sometimes be difficult to solve the equations above to obtain exact values of the variables, for two reasons. First, the equations may be nonlinear and complex, which makes it difficult to obtain a complete analytic solution. Second, a multimodal optimization function may have many extreme points; that is, the solutions of the equations related to such a function are not unique. However, some quantitative and explicit relations among variables can still be determined from (1). We do not have to find all the relations among variables: if we can obtain just some variable relations from (1), we can reduce the number of variables and shrink the solution space, thus decreasing the complexity of the original optimization function.
For example, if from (1) we can form a relation described as
(2) xi = g({xj ∣ j = 1, …, m, j ≠ i}), 1 ≤ i ≤ D, 1 ≤ m ≤ D,
we say that xi can be expressed by {xj ∣ j = 1, …, m, j ≠ i}. This variable relation has to be satisfied in the optimal solution. Under this condition, in the course of using an EA (e.g., PSO) to solve the optimization function, the value of xi can be calculated directly from (2) and the values of the variables in {xj ∣ j = 1, …, m, j ≠ i}. As a result, in the problem-solving process, variable xi can be reduced.
To give an intuitive illustration, let us consider the optimization problem min f = x1² + x1·sin x2 as an example. The solution space of this optimization problem is illustrated in Figure 1(a). Setting the derivatives of the optimization function to zero, we obtain the following:
(3) 2x1 + sin x2 = 0,
(4) x1·cos x2 = 0.
Illustration of an example of variable reduction strategy. (a) Solution space of the original problem; (b) solution space of the problem after variable reduction.
From (3), we get the relation x1 = −0.5·sin x2. With this relation, variable x1 can be reduced and the original optimization problem becomes min f = −0.25·(sin x2)². The solution space after variable reduction changes accordingly, as displayed in Figure 1(b). As a result, the original two-variable optimization problem is transformed into a one-variable optimization problem, and the solution space shrinks from two dimensions to one. Therefore, with variable reduction, the complexity of this optimization problem is reduced noticeably.
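To make the reduction concrete, the example can be sketched in a few lines of Python (a hypothetical illustration with names of our choosing; a crude random search stands in for PSO, purely to show the shrunk, one-dimensional search space):

```python
import math
import random

def f_original(x1, x2):
    # Original two-variable objective: f = x1^2 + x1*sin(x2)
    return x1 ** 2 + x1 * math.sin(x2)

def f_reduced(x2):
    # x1 is a reduced variable: x1 = -0.5*sin(x2), from setting df/dx1 = 0,
    # so only x2 remains to be searched.
    x1 = -0.5 * math.sin(x2)
    return f_original(x1, x2)

# A crude random search over the single remaining variable x2
# (a stand-in for PSO, just to illustrate the reduced problem).
random.seed(0)
best = min(f_reduced(random.uniform(-5, 5)) for _ in range(10000))
print(best)  # approaches the optimum f* = -0.25
```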
It is clear that if more variable relations can be found, then more variables can be reduced. Let us introduce several essential definitions.
Core variable: the variable is used to represent other variables.
Reduced variable: the reduced variable can be represented by core variables.
Optimization variable core: the collection of all core variables present in an optimization function. We denote it by C={xi∣i=1,…,n}, 1≤n≤D.
Obviously, the fewer the core variables and the more the reduced variables we obtain, the more the complexity of the original optimization function is eliminated. Therefore, the task of the variable reduction strategy (VRS) is to obtain an optimization variable core with minimum cardinality. A general theory and method for finding a minimum set of core variables is still an open problem, since the equations described in (1) may be too complex. However, we can at least safely conclude that if a variable appears in a differentiable optimization function with order no higher than three, then the corresponding partial derivative equation is a polynomial of degree at most two in that variable, so this variable can be reduced analytically.
In general, the performance of EAs degrades noticeably with the dimensionality increase of an optimization problem. VRS can help alleviate the problem.
3. Experimental Study on Benchmark Optimization Problems
3.1. Experimental Setting
VRS is integrated into the basic version of PSO [13] to obtain a new PSO variant called PSO-VRS. We apply PSO-VRS to several benchmark optimization functions to test the effectiveness of VRS. The Rosenbrock function [2], variably dimensioned function [14], Wood function [14], and Ackley function [2] are selected as the benchmark optimization functions. Each function was optimized by some state-of-the-art PSO variants as well as PSO-VRS. We present the details of the variable reduction procedure for each function, provide the computational results of each algorithm, and demonstrate the evolutionary process of PSO-VRS. The PSO variants used in the comparative study are listed below:
PSO with inertia weight (PSO-w) [15];
PSO with constriction factor (PSO-cf) [16];
UPSO [17];
fully informed particle swarm (FIPS) [18];
FDR-PSO [19];
CPSO-H [20];
CLPSO [2];
PSO-VRS.
It should be noted that the parameter settings of PSO-w, PSO-cf, UPSO, FDR, FIPS, CPSO-H, and CLPSO are the same as those in [2]. Related parameters of PSO-VRS are set up as follows: the inertia weight w=0.9-(0.5·g/maxGen), the maximum function evaluations maxFEs=200000, acceleration coefficients c1=c2=2.0, and number of particles ps=20.
3.2. Variable Reduction Process of Test Optimization Functions
3.2.1. Rosenbrock Function
This function is formulated as f(x) = ∑_{i=1}^{n−1} (100(xi² − x_{i+1})² + (xi − 1)²); it is multimodal and nonseparable and exhibits a very narrow valley from the local optimum to the global optimum [21]. Let us expand the function as follows:
(5) f(x) = 100(x1² − x2)² + (x1 − 1)² + 100(x2² − x3)² + (x2 − 1)² + 100(x3² − x4)² + (x3 − 1)² + ⋯ + 100(x_{n−1}² − xn)² + (x_{n−1} − 1)².
Then setting the partial derivatives to zero, we have
(6) ∂f/∂x1 = 200(x1² − x2)·2x1 + 2(x1 − 1) = 0,
(7) ∂f/∂xi = −200(x_{i−1}² − xi) + 200(xi² − x_{i+1})·2xi + 2(xi − 1) = 0, 1 < i < n,
(8) ∂f/∂xn = −200(x_{n−1}² − xn) = 0.
From (6), we have
(9) x2 = x1² + (x1 − 1)/(200·x1).
Subsequently from (7), we have
(10) xi = x_{i−1}² + ((x_{i−1} − 1) − 100(x_{i−2}² − x_{i−1})) / (200·x_{i−1}), 2 < i < n.
Furthermore,
(11) xn = x_{n−1}².
We can observe from expressions (9)–(11) that every other variable in the Rosenbrock function can be calculated from x1, such that the objective function can be evaluated with the aid of the value of x1 and (9)–(11) alone. Therefore x1 is the one and only core variable of this optimization function. As a result, the original multivariable optimization problem is in fact transformed into a one-variable optimization problem. The Rosenbrock function with 10 variables was optimized by each PSO variant; the search range of each variable is [−3, 3].
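The reduced objective can be sketched as follows (a hypothetical Python illustration, with a function name of our choosing; x1 ≠ 0 is assumed so that the divisions in (9) and (10) are well defined):

```python
def rosenbrock_reduced(x1, n=10):
    # Rebuild all variables from the single core variable x1 using the
    # relations derived from the partial derivatives, then evaluate f.
    # Assumes x1 != 0 (the relations divide by the previous variable).
    x = [0.0] * n
    x[0] = x1
    # x2 = x1^2 + (x1 - 1)/(200*x1)
    x[1] = x[0] ** 2 + (x[0] - 1) / (200 * x[0])
    # xi = x_{i-1}^2 + ((x_{i-1} - 1) - 100*(x_{i-2}^2 - x_{i-1}))/(200*x_{i-1})
    for i in range(2, n - 1):
        x[i] = (x[i - 1] ** 2
                + ((x[i - 1] - 1) - 100 * (x[i - 2] ** 2 - x[i - 1]))
                / (200 * x[i - 1]))
    # xn = x_{n-1}^2
    x[n - 1] = x[n - 2] ** 2
    # Evaluate the original Rosenbrock function on the rebuilt vector.
    return sum(100 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1) ** 2
               for i in range(n - 1))
```

At the core-variable value x1 = 1 the rebuilt vector is (1, 1, …, 1) and the function value is 0, the known global optimum.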
3.2.2. Variably Dimensioned Function
This function is described as f(x) = ∑_{i=1}^{n} (xi − 1)² + (∑_{i=1}^{n} i·(xi − 1))² + (∑_{i=1}^{n} i·(xi − 1))⁴. With regard to this function, we obtain
(12) ∂f/∂xi = 2(xi − 1) + 2i·∑_{j=1}^{n} j(xj − 1) + 4i·(∑_{j=1}^{n} j(xj − 1))³ = 0, 1 ≤ i ≤ n.
According to (12), for any two variables xi and xk we have
(13) 2(xi − 1) + 2i·S = −4i·S³,
2(xk − 1) + 2k·S = −4k·S³,
where S = ∑_{j=1}^{n} j(xj − 1).
From (13), we get
(14) k(xi − 1) = i(xk − 1).
In the sequel, from (14), we can obtain xi = (i·xk − i + k)/k.
Let k = 1; then xi = i·x1 − i + 1. As a result, all other variables can be represented by x1, which is the core variable of this optimization function.
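The resulting one-variable objective can be sketched as follows (a hypothetical Python illustration; the function name is ours):

```python
def vardim_reduced(x1, n=10):
    # Express every variable through the core variable x1: xi = i*x1 - i + 1.
    x = [i * x1 - i + 1 for i in range(1, n + 1)]
    # Evaluate the variably dimensioned function on the rebuilt vector.
    s = sum(i * (x[i - 1] - 1) for i in range(1, n + 1))
    return sum((xi - 1) ** 2 for xi in x) + s ** 2 + s ** 4
```

At x1 = 1 every rebuilt variable equals 1 and the function value is 0, the global optimum.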
3.2.3. Wood Function
This function comes in the form
(15) f(x) = (10(x2 − x1²))² + (1 − x1)² + (√90·(x4 − x3²))² + (1 − x3)² + (√10·(x2 + x4 − 2))² + ((x2 − x4)/√10)².
We determine the related derivative as shown below:
(16) ∂f/∂x1 = 200(x2 − x1²)(−2x1) − 2(1 − x1) = 0,
∂f/∂x2 = 200(x2 − x1²) + 20.2x2 + 19.8x4 − 40 = 0,
∂f/∂x3 = 180(x4 − x3²)(−2x3) − 2(1 − x3) = 0,
∂f/∂x4 = 180(x4 − x3²) + 19.8x2 + 20.2x4 − 40 = 0.
From (16), one has
(17) x2 = x1² + (x1 − 1)/(200·x1),
(18) x4 = (1000(x2 − x1²) + 101x2 − 200)/(−99),
(19) x4 = x3² + (x3 − 1)/(180·x3),
(20) x3 = √((1001x4 + 99x2 − 200)/900).
We can see from (17)–(20) that x2 can be calculated from x1; x4 can be calculated from x2 and x1; and x3 can be calculated from x4 and x2. In fact, x2, x3, and x4 can all be computed from x1, which is therefore the only core variable of this optimization function. Note that, in the search process of PSO-VRS, the value of (1001x4 + 99x2 − 200)/900 in (20) may be smaller than zero. In this case, the fitness is penalized by setting it to 1000, which drives the solution back to the feasible area.
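The chained reduction, including the penalty for a negative radicand, can be sketched as follows (a hypothetical Python illustration; the function name and the assumption x1 ≠ 0 are ours):

```python
import math

def wood_reduced(x1):
    # Rebuild x2, x4, x3 from the core variable x1 (assumes x1 != 0).
    x2 = x1 ** 2 + (x1 - 1) / (200 * x1)
    x4 = (1000 * (x2 - x1 ** 2) + 101 * x2 - 200) / (-99)
    inner = (1001 * x4 + 99 * x2 - 200) / 900  # radicand for x3
    if inner < 0:
        return 1000.0  # penalty used to drive the search back to feasibility
    x3 = math.sqrt(inner)
    # Evaluate the Wood function on the rebuilt vector.
    return (100 * (x2 - x1 ** 2) ** 2 + (1 - x1) ** 2
            + 90 * (x4 - x3 ** 2) ** 2 + (1 - x3) ** 2
            + 10 * (x2 + x4 - 2) ** 2 + 0.1 * (x2 - x4) ** 2)
```

At x1 = 1 the rebuilt vector is (1, 1, 1, 1) and the function value is 0, the global optimum; for some other values of x1 the radicand is negative and the penalty value 1000 is returned.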
3.2.4. Ackley Function
This function reads as f(x) = −20·exp(−0.2·√((1/n)∑_{i=1}^{n} xi²)) − exp((1/n)∑_{i=1}^{n} cos(2πxi)) + 20 + e.
Regarding this function, the partial derivatives are
(21) ∂f/∂xj = 4·exp(−0.2·s)·xj/(n·s) + (2π/n)·exp(g)·sin(2πxj) = 0,
∂f/∂xk = 4·exp(−0.2·s)·xk/(n·s) + (2π/n)·exp(g)·sin(2πxk) = 0,
where s = √((1/n)∑_{i=1}^{n} xi²) and g = (1/n)∑_{i=1}^{n} cos(2πxi).
From (21), we obtain the following relationship:
(22) xj/xk = sin(2πxj)/sin(2πxk).
It follows that xj = xk; that is, all variables take the same value at an extreme point. Therefore, x1 can serve as the only core variable.
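With all variables equal at an extreme point, the reduced objective depends on x1 alone. A sketch (hypothetical Python; the function name is ours):

```python
import math

def ackley_reduced(x1, n=30):
    # Since xj = xk at any extreme point, set every variable to x1
    # and evaluate the Ackley function.
    s = math.sqrt(sum(x1 ** 2 for _ in range(n)) / n)  # equals |x1| here
    g = sum(math.cos(2 * math.pi * x1) for _ in range(n)) / n
    return -20 * math.exp(-0.2 * s) - math.exp(g) + 20 + math.e
```

At x1 = 0 the value is 0, the global optimum of the Ackley function.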
3.3. Computational Results and Comparative Study
To solve each test optimization function, each algorithm was run 30 times. The corresponding computational results are listed in Table 1, where Mean is the mean fitness value over the 30 runs, Std is the corresponding standard deviation, and FEs is the mean number of function evaluations needed to obtain the results. The evolution of the best fitness value of each function obtained by PSO-VRS is displayed in Figure 2.
Optimization results obtained for the test functions; the best results are shown in boldface.
Algorithm | Rosenbrock function (Mean / Std / FEs) | Variably dimensioned function (Mean / Std / FEs)
PSO-w | 2.182e+000 / 2.642e-002 / 30 000 | 1.576e+003 / 4.610e+002 / 200 000
PSO-cf | 3.833e-001 / 0.250e+000 / 30 000 | 5.618e+000 / 1.385e+001 / 200 000
UPSO | 1.055e+000 / 1.542e+000 / 30 000 | 1.164e+003 / 4.763e+002 / 200 000
FDR | 6.480e-001 / 6.846e-001 / 30 000 | 3.188e-001 / 5.820e-001 / 200 000
FIPS | 1.186e+000 / 1.080e-001 / 30 000 | 4.479e+003 / 1.317e+003 / 200 000
CPSO-H | 1.044e+000 / 1.265e+000 / 30 000 | 1.365e+004 / 3.277e+003 / 200 000
CLPSO | 1.262e+000 / 9.658e-001 / 30 000 | 2.889e+003 / 6.235e+002 / 200 000
PSO-VRS | 0.0 / 0.0 / 11 565 | 0.0 / 0.0 / 12 684

Algorithm | Wood function (Mean / Std / FEs) | Ackley function (Mean / Std / FEs)
PSO-w | 3.347e-006 / 3.383e-006 / 200 000 | 3.942e-014 / 1.123e+000 / 200 000
PSO-cf | 9.784e-006 / 2.225e-007 / 200 000 | 1.123e+000 / 8.655e-001 / 200 000
UPSO | 1.459e-009 / 5.144e-009 / 200 000 | 1.225e-015 / 3.162e-015 / 200 000
FDR | 1.576e-009 / 1.589e-009 / 200 000 | 2.844e-014 / 4.107e-015 / 200 000
FIPS | 5.868e-005 / 7.834e-005 / 200 000 | 4.812e-007 / 9.172e-008 / 200 000
CPSO-H | 5.861e-001 / 2.616e-001 / 200 000 | 4.931e-014 / 1.104e-014 / 200 000
CLPSO | 1.570e-002 / 3.330e-002 / 200 000 | 0.0 / 0.0 / 180 864
PSO-VRS | 0.0 / 0.0 / 13 078 | 0.0 / 0.0 / 11 254
The evolutionary process of PSO-VRS on the test functions: (a) Rosenbrock function, (b) variably dimensioned function, (c) Wood function, and (d) Ackley function.
From the results for the Rosenbrock function in Table 1, we find that it is generally difficult for the other PSO variants to find an optimal or near-optimal solution of the Rosenbrock function. With the aid of VRS, however, PSO-VRS can use the basic PSO to find the optimal solution efficiently, on average within about 12,000 fitness evaluations. Moreover, Figure 2(a) demonstrates that VRS enables PSO to converge to high-quality solutions quickly.
Results for the variably dimensioned function listed in Table 1 reveal that it is hard for most of the comparative PSO variants without VRS to generate a high-quality solution. In contrast, PSO-VRS always produces the optimal solution in a relatively small number of function evaluations (about 12,684 on average). Figure 2(b) demonstrates that PSO-VRS converges to the optimal solution quickly. The variables of the variably dimensioned function are highly interdependent, which causes considerable difficulty for typical PSO variants in forming satisfactory solutions. VRS, on the other hand, exploits the underlying variable relations and translates the original problem into a one-variable optimization task, which significantly reduces the complexity of this function.
We can see from Table 1 that, compared with the other PSO variants, PSO-VRS generates the best (indeed optimal) result for the Wood function within no more than 13,100 function evaluations on average. Though the variable relations described in (17)–(20) appear more complex, they effectively support PSO in finding the optimal solution. Figure 2(c) underlines the fast convergence of PSO-VRS.
For the Ackley function, it can also be observed from Table 1 that both CLPSO and PSO-VRS always find the best results compared to the other PSO variants. The advantage of PSO-VRS over CLPSO is that it obtains the optimal solution at the cost of far fewer function evaluations. The fast convergence of PSO-VRS when solving the Ackley function is shown in Figure 2(d).
4. Experimental Study on a Real-World Optimization Problem
Frequency-modulated (FM) sound wave synthesis plays an important role in several modern music systems, and optimizing the parameters of an FM synthesizer is a six-dimensional optimization problem in which the vector to be optimized is X = {a1, ω1, a2, ω2, a3, ω3} [22, 23]. This is a highly complex multimodal problem with strong epistasis and minimum value f(X*) = 0 [24]. It has frequently been solved by EAs or taken as a benchmark real-world optimization problem to test the performance of new EA variants [25, 26]. The optimization problem is formulated as follows [22]:
(23) y(t) = a1·sin(ω1·t·θ + a2·sin(ω2·t·θ + a3·sin(ω3·t·θ))),
y0(t) = (1.0)·sin((5.0)·t·θ − (1.5)·sin((4.8)·t·θ + (2.0)·sin((4.9)·t·θ))).
The objective function is
(24)minf(X)=∑t=0100(y(t)-y0(t))2.
To use the variable reduction strategy, we set the derivative of the objective function with respect to variable a1 to zero and obtain
(25) ∑_{t=0}^{100} 2·(a1·ϕ(t) − y0(t))·ϕ(t) = 0,
where y(t) = a1·ϕ(t) and ϕ(t) = sin(ω1·t·θ + a2·sin(ω2·t·θ + a3·sin(ω3·t·θ))).
From (25), we have
(26) a1 = (∑_{t=0}^{100} y0(t)·ϕ(t)) / (∑_{t=0}^{100} ϕ(t)²).
According to (26), variable a1 can be calculated from the other five variables. Therefore, a1 is the reduced variable and the collection {ω1, a2, ω2, a3, ω3} is the corresponding optimization variable core. With VRS, the solution space is shrunk from six dimensions to five.
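The closed-form amplitude can be embedded directly in the objective computation. A minimal sketch (hypothetical Python; the function names are ours, and θ = 2π/100 follows the standard benchmark definition, which this excerpt does not restate):

```python
import math

THETA = 2 * math.pi / 100  # sampling angle of the FM benchmark (assumed)

def y0(t):
    # Target sound wave with parameters (1.0, 5.0, -1.5, 4.8, 2.0, 4.9).
    return math.sin(5.0 * t * THETA - 1.5 * math.sin(
        4.8 * t * THETA + 2.0 * math.sin(4.9 * t * THETA)))

def fm_reduced(w1, a2, w2, a3, w3):
    # phi(t) collects everything except the amplitude a1, so y(t) = a1*phi(t).
    phi = [math.sin(w1 * t * THETA + a2 * math.sin(
        w2 * t * THETA + a3 * math.sin(w3 * t * THETA))) for t in range(101)]
    num = sum(y0(t) * p for t, p in enumerate(phi))
    den = sum(p * p for p in phi)
    a1 = num / den if den != 0 else 0.0  # closed-form least-squares amplitude
    error = sum((a1 * p - y0(t)) ** 2 for t, p in enumerate(phi))
    return error, a1
```

At the known optimum of the remaining five variables, (5.0, −1.5, 4.8, 2.0, 4.9), the recovered amplitude is a1 = 1.0 and the error is 0.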
To evaluate the impact of VRS on this optimization problem, we use PSO-w, PSO-cf, UPSO, FDR, FIPS, CPSO-H, and CLPSO, each with and without the integration of VRS, to solve the problem. Each algorithm is run 30 times.
From Table 2, we can observe that, for every PSO variant, the results produced with the integration of VRS are better than those produced without it. This improvement is particularly significant when FIPS and CLPSO are taken as the solvers. These results demonstrate the potential of applying VRS to real-world optimization problems. Moreover, in this problem only one variable is reduced, which indicates that when applying VRS we do not have to find all the quantitative variable relations or reduce most of the variables; even the reduction of a small number of variables can be beneficial and improve the efficiency of EAs.
Optimization results obtained for the frequency-modulated synthesizer optimization problem; the results obtained by EAs with VRS are shown in boldface.
Algorithm | Best | Worst | Mean | Std
PSO-w | 0.0 | 1.258023e+001 | 3.752319e+000 | 5.853543e+000
PSO-w-VRS | 0.0 | 1.254014e+001 | 3.375620e+000 | 5.893543e+000
PSO-cf | 2.673488e-006 | 2.180154e+001 | 1.200588e+001 | 5.144827e+000
PSO-cf-VRS | 4.275136e-007 | 1.217917e+001 | 8.466443e+000 | 4.848450e+000
UPSO | 5.839747e-028 | 1.473172e+001 | 6.608215e+000 | 4.636487e+000
UPSO-VRS | 0.0 | 1.120719e+001 | 6.425027e+000 | 4.185174e+000
FDR | 0.0 | 2.016705e+001 | 1.191916e+001 | 6.193037e+000
FDR-VRS | 0.0 | 1.254068e+001 | 1.055711e+001 | 3.626518e+000
FIPS | 3.321497e-003 | 8.500601e+000 | 2.306348e+000 | 2.513825e+000
FIPS-VRS | 9.388098e-010 | 2.787766e+000 | 1.127794e+000 | 1.359315e+000
CPSO-H | 1.179023e+001 | 2.553845e+001 | 1.910668e+001 | 4.381173e+000
CPSO-H-VRS | 7.783981e-001 | 1.683571e+001 | 1.274518e+001 | 5.058687e+000
CLPSO | 1.354023e-004 | 1.138270e+001 | 1.666438e+000 | 3.567321e+000
CLPSO-VRS | 2.309327e-005 | 6.541305e+000 | 3.734962e-001 | 1.456388e+000
5. Conclusions
The utilization of the domain knowledge associated with an optimization problem can reduce the complexity of the original problem and facilitate the solution search of EAs. In this study, we investigate the underlying knowledge of quantitative variable relations that must be satisfied by the optimal solutions of an unconstrained optimization function with first-order derivatives. Based on these relations, we propose a variable reduction strategy (VRS). The essence of VRS is to find an optimization variable core with the minimum number of core variables. Computational results and comparative studies on several benchmark optimization functions and a real-world optimization problem demonstrate that VRS can significantly improve the efficiency of PSO variants. Currently, we cannot guarantee that VRS applies to all unconstrained optimization problems; however, it can be beneficial to check the variable relations and use VRS when applying EAs to unconstrained optimization problems. VRS is expected to have large application potential in real-world optimization problems.
Future research can be carried out in four directions. First, although the variable reduction strategy is generic and effective, on some occasions it might be very difficult to obtain the variable relations from the partial derivatives of an optimization function because of the complexity of these derivatives. To construct a solid and comprehensive theory of variable reduction, we may investigate whether there are generic, formal theories for finding explicit variable relations from a group of equations. Second, we also consider formulating the underlying variable relations by approximate methods, such as neural networks. The third direction is to further develop the variable reduction strategy so that it applies to constrained optimization problems. The fourth is to test the efficiency and effectiveness of the variable reduction strategy on more real-world optimization problems.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (NSFC, 51178193 and 41001220). Guohua Wu is supported by the China Scholarship Council under Grant no. 201206110082. The authors thank Dr. Ponnuthurai Nagaratnam Suganthan for providing them with the source codes of the comparative algorithms and giving them insightful suggestion and comments.
References
[1] G. Wu, W. Pedrycz, M. Ma, D. Qiu, and H. Li, "A particle swarm optimization variant with an inner variable learning strategy," The Scientific World Journal, in press.
[2] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions."
[3] J. J. Liang, B. Y. Qu, S. T. Ma, and P. N. Suganthan, "Memetic fitness Euclidean-distance particle swarm optimization for multi-modal optimization."
[4] Z. H. Zhan, J. Zhang, Y. Li, and Y. H. Shi, "Orthogonal learning particle swarm optimization."
[5] B. Y. Qu, J. J. Liang, and P. N. Suganthan, "Niching particle swarm optimization with local search for multi-modal optimization."
[6] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization."
[7] L. N. Xing, Y. W. Chen, P. Wang, Q. S. Zhao, and J. Xiong, "A knowledge-based ant colony optimization for flexible job shop scheduling problems."
[8] G. Wu, M. Ma, J. Zhu, and D. Qiu, "Multi-satellite observation integrated scheduling method oriented to emergency tasks and common tasks."
[9] G. Wu, J. Liu, M. Ma, and D. Qiu, "A two-phase scheduling method with the consideration of task clustering for earth observing satellites."
[10] H. Li and B. Wu, "Adaptive geo-information processing service evolution: reuse and local modification method."
[11] H. Li, Q. Zhu, X. Yang, and L. Xu, "Geo-information processing service composition for concurrent tasks: a QoS-aware game theory approach."
[12] M. M. Noel, "A new gradient based particle swarm optimization algorithm for accurate computation of global minimum."
[13] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98), May 1998, pp. 69-73.
[14] S. K. S. Fan and E. Zahara, "A hybrid simplex search and particle swarm optimization for unconstrained optimization."
[15] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation (ICEC '98), May 1998, pp. 69-73.
[16] M. Clerc and J. Kennedy, "The particle swarm-explosion, stability, and convergence in a multidimensional complex space."
[17] K. Parsopoulos and M. Vrahatis, "UPSO: a unified particle swarm optimization scheme," in Proceedings of the International Conference of Computational Methods in Sciences and Engineering (ICCMSE '04), Lecture Series on Computer and Computational Sciences, vol. 1, 2004, pp. 868-873.
[18] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better."
[19] T. Peram, K. Veeramachaneni, and C. K. Mohan, "Fitness-distance-ratio based particle swarm optimization," in Proceedings of the 2003 IEEE Swarm Intelligence Symposium (SIS '03), April 2003, pp. 174-181.
[20] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization."
[21] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. Chen, A. Auger, and S. Tiwari, "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Nanyang Technological University, Singapore, 2005.
[22] S. Das and P. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems," Jadavpur University, Kolkata, India, 2010.
[23] A. Horner, J. Beauchamp, and L. Haken, "Machine tongues XVI: genetic algorithms and their application to FM matching synthesis."
[24] H. Wang, S. Rahnamayan, H. Sun, and M. G. Omran, "Gaussian bare-bones differential evolution."
[25] S. Ghosh, S. Das, S. Roy, S. K. Minhazul Islam, and P. N. Suganthan, "A differential covariance matrix adaptation evolutionary algorithm for real parameter optimization."
[26] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, "Differential evolution using a neighborhood-based mutation operator."