A Chaos-Enhanced Particle Swarm Optimization with Adaptive Parameters and Its Application in Maximum Power Point Tracking

1Department of Electrical Engineering, Chung Yuan Christian University, Chung Li District, Taoyuan City 320, Taiwan; 2Center for Research & Development and Department of Electronics Engineering, Adamson University, 1000 Manila, Philippines; 3School of Graduate Studies, Mapua Institute of Technology, 1002 Manila, Philippines; 4School of Electrical, Electronics, and Computer Engineering, Mapua Institute of Technology, 1002 Manila, Philippines


Introduction
Swarm intelligence is becoming one of the hottest areas of research in the field of computational intelligence, especially with regard to self-organizing and decentralized systems. Swarm intelligence simulates the behavior of human and animal populations. Several swarm intelligence optimization algorithms can be found in the literature, such as ant colony optimization, artificial bee colony optimization, the firefly algorithm, differential evolution, and others. These are biologically inspired optimization and computational techniques that are based on the social behaviors of fish, birds, and humans. Particle swarm optimization (PSO) is a nature-inspired algorithm that draws on the behavior of flocking birds, social interactions among humans, and the schooling of fish. In fish schooling, bird flocking, and human social interactions, the population is called a swarm, and the candidate solutions, corresponding to the individuals or members in the swarm, are called particles. Birds and fish generally travel in a group without collision. Accordingly, using the group information to find shelter and food, each particle adjusts its position and velocity, which together represent a candidate solution. The position of a particle is influenced by its neighbors and by the best solution found by any particle. PSO is a population-based search technique that involves stochastic evolutionary optimization. It was originally developed in 1995 by Eberhart and Kennedy [1,2] to optimize constrained and unconstrained, continuous nonlinear, and nondifferentiable multimodal functions [1,3]. PSO is a metaheuristic algorithm that was inspired by the collaborative or swarming behavior of biological populations [4]. Since then, it has been applied to solve a wide range of optimization problems, such as constrained and unconstrained problems, multiobjective problems, problems with multiple solutions, and optimization in dynamic environments [5][6][7][8]. Some of the advantages of particle swarm optimization
are the following: (a) computational efficiency [6]; (b) effective convergence and parameter selection [7]; (c) simplicity, flexibility, robustness, and ease of implementation [9]; (d) the ability to hybridize with other algorithms [10]; and many others. PSO has few parameters to adjust and a small memory requirement and uses few CPU resources, making it computationally efficient. Unlike simulated annealing, which can work only with discrete variables, PSO can work with both discrete and analog variables without ADC or DAC conversion. Also, genetic algorithm optimization requires crossover, selection, and mutation operators, whereas PSO utilizes only the exchange of information among individuals repeatedly searching the problem space [11]. In recent years, the use of particle swarm optimization has been investigated with a focus on solving a wide range of scientific and engineering problems such as fault detection [12], parameter identification [13,14], power systems [15][16][17], transportation [18], electronic circuit design [19], and plant control design [20]. Most relevant research focuses on either constrained or unconstrained optimization problems.
Particle swarm optimization was developed to optimally search for the local best and the global best; these searches are frequently known as the exploitation and exploration of the problem space, respectively. Hong et al. [21] stated that exploitation involves an intense search of particles in a local region, while exploration is a long-term search whose main objective is to find the global optimum of the fitness function. Although particle swarm optimization rapidly searches for the solution of many complex optimization problems, it suffers from premature convergence, trapping at a local minimum, the slowing down of convergence near the global optimum, and stagnation in a particular region of the problem space, especially for multimodal functions and high-dimensional problem spaces. If a particle is located at the position of the global best and the preceding velocity and inertia weight are nonzero, then the particle moves away from that particular point [16,22]. Premature convergence happens if no particle moves and the previous velocities are near zero. Stagnation thus occurs if the majority of particles are concentrated at the best position that is disclosed by the neighbors or the swarm. This fact has in recent years motivated various investigations by several researchers on variants of particle swarm optimization, in an attempt to improve the performance of exploitation and exploration and to eliminate the aforementioned problems. The various methods of particle swarm optimization have been used for several purposes, including scheduling, classification, feature selection, and optimization.
Mendes et al. [23] presented fully informed particle swarm optimization, in which, during the optimization search, particles are influenced by the best particles in their neighborhood and information is evenly distributed across the generations of the algorithm. Liang et al. [24] proposed a comprehensive learning PSO in which each particle learns from the personal bests of its neighbors at different dimensions. Accordingly, particles update their velocity based on the history of their own personal bests.
Wang et al. [25,26] developed the opposition-based PSO with Cauchy mutation. Their technique uses an opposition learning scheme in which the Cauchy mutation operator helps the particles move to the best positions. Pant et al. [27] modified the inertia weight to follow a Gaussian distribution. Xiang et al. [28] applied the time delay concept to PSO to enable the processing of information by particles to find the global best. Cui et al. [29] presented the fitness uniform selection strategy (FUSS) and the random walk strategy (RWS) to enhance the exploitation and exploration capabilities of PSO. Montes de Oca et al. [30] developed Frankenstein's PSO, which incorporates several variants of PSO from the literature, such as constriction [31], the time-varying inertia weight optimizer [32,33], the fully informed particle swarm optimizer [23], and the adaptive hierarchical PSO [34]. The adaptive PSO that was proposed by Zhan et al. [35] utilized the information obtained from the population distribution and the fitness of particles to determine the status of the swarm, along with an elitist learning strategy to search for the global optimum. Juang et al. [36] presented the use of fuzzy set theory to automatically tune the acceleration coefficients in the standard PSO. A quadratic interpolation and crossover operator is also incorporated to improve the global search ability. The literature includes hybridizations of particle swarm optimization with other stochastic or evolutionary techniques [10,[37][38][39] to realize all of their strengths.
Every modification of particle swarm optimization uses a different method to solve optimization problems. The investigations cited above therefore elucidate some improvements of the standard particle swarm optimization. However, variants of particle swarm optimization generally exhibit the following limitations. (a) The particles may be positioned in a region that has a lower quality index than previously, leading to a risk of premature convergence, trapping in local optima, and the impossibility of further improvement of the best positions of the particles, because the inertia weight, cognitive factors, and social learning factors in the algorithm are not adaptive or self-organizing. (b) The inclusion of the mutation operator may improve the speed of convergence. Nevertheless, global convergence is not guaranteed because the method is likely to become trapped in a local optimum during local searches of several functions. (c) The probability in the algorithm may improve the updated positions of particles. However, the changes in the new positions of particles, consistent with the probabilistic calculations, can move the particles into worse positions. (d) Improving the information sharing and particle learning capabilities of the algorithm can provide several benefits, but doing so often increases the CPU time for computing the global optimum. (e) Integrating particle swarm optimization with other evolutionary or stochastic algorithms may increase the number of required generations, the complexity of the algorithm, and the number of required calculations.
This paper proposes a novel particle swarm optimization framework. The primary merits of the proposed variant of particle swarm optimization are as follows. (a) A modified sine chaos inertia weight operator is introduced, overcoming the drawback of trapping in a local minimum that is commonly associated with an inertia weight operator. Chaos search improves the best positions of the particles, favors the rapid finding of solutions in the problem space, and avoids the risk of premature convergence. (b) A type 1″ constriction coefficient [40] is incorporated to increase the convergence rate and stability of the particle swarm optimization. (c) Self-organizing, adaptive cognitive, and social learning coefficients [41] are integrated to improve the exploitation and exploration searches of the particle swarm optimization algorithm. (d) The proposed optimization algorithm has a simple structure, reducing the required memory and the computational burden on the CPU. It can therefore easily be realized using a few low-cost test modules.
The remainder of this paper is organized as follows. Section 2 presents the standard particle swarm optimization algorithm. Section 3 describes the proposed variant of the particle swarm optimization algorithm. Section 4 discusses the performance of the proposed variant of particle swarm optimization and compares the results obtained when well-known optimization methods are applied to benchmark functions. The proposed variant of particle swarm optimization is further utilized in maximum power point tracking control using fuzzy logic for a standalone photovoltaic system. Finally, a brief conclusion is drawn.

Standard Particle Swarm Optimization
Particle swarm optimization is an evolutionary, population-based stochastic optimization method that originates in animal behaviors, such as the schooling of fish and the flocking of birds, as well as human behaviors. It retains a memory of the best positions found, has few adjustable parameters, and is easy to implement. The standard PSO uses neither the gradient of an objective function nor mutation [11]. Each particle moves randomly throughout the problem space, updating its position and velocity with the best values. Each particle represents a candidate solution to the problem and searches for the local or global optimum. Every particle retains a memory of the best position achieved so far, and it travels through the problem space adaptively. The personal best (pbest) is the best solution so far achieved by an individual in the swarm within the problem space, pbest = (pbest_1, pbest_2, pbest_3, ..., pbest_pop), while the global best (gbest) refers to the best solution obtained by any particle within the swarm, gbest = (g_1, g_2, g_3, ..., g_n), in the problem space of dimension n. The position and velocity of particle i in the problem space of dimension n are thus given by x_i(k) = (x_1, x_2, x_3, ..., x_n) and v_i(k) = (v_1, v_2, v_3, ..., v_n), respectively. The velocity and position of a particle i are adjusted as follows [1,2]:

v_i^(k+1) = w·v_i^k + c_1·r_1·(pbest_i − x_i^k) + c_2·r_2·(gbest − x_i^k),
x_i^(k+1) = x_i^k + v_i^(k+1),    (1)

where the superscript k is the generation index, and c_1 and c_2 are the cognitive and social parameters, frequently known as acceleration constants, which are mainly responsible for attracting the particles toward pbest and gbest. The terms r_1, r_2, and w denote uniform random numbers [r_1, r_2] ∈ [0, 1] and the inertia weight w ∈ [0, 1], respectively. These factors are mainly responsible for balancing the local and global search capabilities of the particles in the problem space. In every generation, the velocity of each individual in the swarm is computed, and the adjusted velocity is used to compute the next position of the particle. To determine whether the best solution has been achieved and to evaluate the performance of each particle, a fitness function is included. The best position of each particle is relayed to all particles in the neighborhood. The velocity and position of each particle are repeatedly adjusted until the halting criteria are satisfied or convergence is obtained.
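As a concrete illustration of the update in (1), the following sketch performs one generation of the standard PSO velocity and position update. The function name pso_step and the parameter values shown are illustrative, not taken from the paper.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=2.0):
    """One generation of the standard PSO update in (1).

    positions, velocities, pbest: lists of per-particle coordinate lists;
    gbest: coordinate list of the best position found by any particle.
    """
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()  # uniform in [0, 1]
            velocities[i][d] = (w * velocities[i][d]
                                + c1 * r1 * (pbest[i][d] - positions[i][d])
                                + c2 * r2 * (gbest[d] - positions[i][d]))
            positions[i][d] += velocities[i][d]  # x^(k+1) = x^k + v^(k+1)
    return positions, velocities
```

In a full optimizer this step alternates with fitness evaluation and with updates of pbest and gbest until the halting criterion is met.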

Chaos-Enhanced Particle Swarm Optimization with Adaptive Parameters
This section demonstrates that the proposed variant of particle swarm optimization improves upon the performance of the standard particle swarm optimization given by (1).
The novel scheme improves upon the performance of other population-based algorithms in solving high-dimensional or multimodal problems. Chaos operates in a nonlinear fashion and is associated with complex behavior, unpredictability, determinism, and high sensitivity to initial conditions. In chaos, a small perturbation in the initial conditions can produce dramatically different results [42,43]. In 1963, Lorenz [44] presented an autonomous nonlinear differential equation that generated the first chaotic system. In recent years, the scientific community has paid increasing attention to chaotic systems and their applications in various areas of science and engineering. Such systems have been investigated in fields such as parameter identification [14], optimization [45], electronic circuits [46], electric motor drives [47,48], power electronics [49], communications [50], robotics [51], and many others. Feng et al. [52] introduced two means of modifying the inertia weight of a PSO using chaos. The first type is the chaotic decreasing inertia weight, and the second type is the chaotic random inertia weight. In this paper, the latter is adopted to intensify the inertia weight parameter of the PSO. The dynamic chaos random inertia weight is used to ensure a balance between exploitation and exploration. A low inertia weight favors exploitation, while a high inertia weight favors exploration. A static inertia weight influences the convergence rate of the algorithm and often leads to premature convergence. Chaotic search optimization was used herein in all instances because of its highly dynamic property, which ensures the diversity of the particles and their escape from local optima in the process of searching for the global optimum.
The logistic map x_(k+1) = μ·x_k·(1 − x_k) [53,54], where μ = 4, is a very common chaotic map found in much of the literature on chaotic inertia weight; however, it does not guarantee chaos for initial values x_0 ∈ {0, 0.25, 0.5, 0.75, 1} that may arise during the initial generation process. In this paper, the sine chaotic map [54] given by (2) was utilized to avoid this shortcoming. Its simplicity eliminates complex calculations, reducing the CPU time:

x_(k+1) = μ·sin(π·x_k),    (2)

where μ > 0, x_k, x_(k+1) ∈ [0, 1], and k is the generation number. Figure 1 presents the bifurcation diagram of the sine chaotic map. In some generations, x_(k+1) takes relatively very small values. Hence, to improve the effectiveness of the chaos random inertia weight of particle swarm optimization, the original sine chaotic map is lightly modified as follows:

x_(k+1) = |μ·sin(π·x_k)|,    (3)

where μ = 1 and x_k, x_(k+1) ∈ [0, 1]; the absolute sign ensures that the next generation in chaos space has x_(k+1) ∈ [0, 1]. The chaotic random inertia weight w_chaos is then obtained from the chaotic sequence as given by (4). Figure 2 plots the dynamics of the modified sine chaotic map, while Figure 3 displays the bifurcation diagram.
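A minimal sketch of how the modified sine chaotic sequence could drive a chaotic random inertia weight. The map is assumed here to be x_(k+1) = |μ·sin(π·x_k)| with μ = 1 as in (3), and mapping the chaotic value into a [0.4, 0.9] inertia-weight range is an illustrative assumption, not the paper's exact form of (4).

```python
import math

def modified_sine_map(x, mu=1.0):
    # One iteration of the sine chaotic map with absolute value,
    # keeping the sequence inside [0, 1] (eq. (3), as reconstructed).
    return abs(mu * math.sin(math.pi * x))

def chaotic_inertia_weight(x, w_min=0.4, w_max=0.9):
    # Advance the chaotic sequence and scale it into an assumed
    # inertia-weight range [w_min, w_max].
    x_next = modified_sine_map(x)
    return w_min + (w_max - w_min) * x_next, x_next
```

Because the sequence is chaotic rather than random, two nearby seeds diverge quickly, which is what diversifies the inertia weight across generations.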
A type 1″ constriction coefficient is integrated into the proposed variant of PSO to prevent the divergence of the particles during the search for solutions in the problem space. The coefficient is used to fine-tune the convergence of particle swarm optimization. Consider

χ = 2 / |2 − φ − sqrt(φ² − 4φ)|,    (5)

where the parameter φ = c_1 + c_2 depends on the cognitive and social parameters, and the criterion φ > 4 guarantees the effectiveness of the constriction coefficient. Incorporating the above coefficient ensures the quality of convergence and the stability of the generation process of particle swarm optimization. Time-varying cognitive and social parameters are incorporated into PSO to improve its local and global searches by making the cognitive component large and the social component small at initialization, in the early part of the evolutionary process. A linearly decreasing cognitive component and a linearly increasing social component in the evolutionary process enhance the exploitation and exploration of the PSO, helping the swarm to converge at the global optimum. The mathematical equations are as follows:

c_1 = (c_1,f − c_1,i)·(k / MAXITR) + c_1,i,
c_2 = (c_2,f − c_2,i)·(k / MAXITR) + c_2,i,    (6)

where c_1,i, c_2,i, c_1,f, and c_2,f are the initial and final values of the cognitive and social parameters, respectively; k is the current generation, and MAXITR is the final generation. The above components improve the performance of the standard PSO. Therefore, the proposed equations for the velocity and position of the particle swarm optimization are as follows:

v_i^(k+1) = χ·[w_chaos·v_i^k + c_1·(pbest_i − x_i^k) + c_2·(gbest − x_i^k)],
x_i^(k+1) = x_i^k + v_i^(k+1).    (7)

The uniform random numbers [r_1, r_2] ∈ [0, 1] from the velocity equation of the standard PSO are not included in the proposed PSO. Figure 4 displays the flowchart of the proposed chaos-enhanced PSO.
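The constriction coefficient in (5) and the linear time variation in (6) can be sketched as follows; the sample coefficient values used in the usage note are illustrative, not the paper's settings.

```python
import math

def constriction(c1, c2):
    # Type 1'' constriction coefficient, eq. (5):
    # chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, with phi = c1 + c2 > 4.
    phi = c1 + c2
    if phi <= 4:
        raise ValueError("phi = c1 + c2 must exceed 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def time_varying(k, max_iter, c_init, c_final):
    # Linear schedule of eq. (6): moves from c_init at k = 0
    # to c_final at k = max_iter (decreasing for the cognitive
    # parameter, increasing for the social parameter).
    return c_init + (c_final - c_init) * k / max_iter
```

For example, the commonly cited choice c1 = c2 = 2.05 gives φ = 4.1 and χ ≈ 0.7298, the value widely used in constricted PSO.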
The mathematical formulations and algorithmic steps above significantly improve the performance of the standard PSO. A numerical benchmark test was carried out using unimodal and multimodal functions. The following section presents and discusses the results.

Simulation Results
In this section, four benchmark test functions are used to test the performance of the proposed algorithm. The subsections below give the names of the benchmark test functions, their search spaces, their mathematical representations, and their variable domains.
Five programming codes were developed in Matlab 7.12 to minimize the above benchmark functions.These codes correspond to the proposed PSO, the standard PSO, the firefly algorithm (FA) [55,56], ant colony optimization (ACO) [57,58], and differential evolution (DE) [59,60], which are evaluated using the benchmark test functions for comparative purposes.
The parameter settings for the above algorithms are as follows. For the PSO, the inertia weight w, cognitive learning factor c_1, and social learning factor c_2 are 0.99, 1.5, and 2.0, respectively. For the FA, the light absorption, attraction, and mutation coefficients are 1.0, 2.0, and 0.2, respectively. For ACO, the selection pressure and deviation-to-distance ratio are 0.5 and 1.0, respectively. The roulette wheel selection method is used for ACO. The mutation coefficient and crossover rate for DE are 0.8 and 0.2, respectively. The population size and the number of unknowns (dimensions) for all population-based algorithms used in the benchmark test are 20. Ten simulation tests are performed using each of the algorithms in order to evaluate their performance in minimization.
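For reproducibility, the four benchmark functions can be written as below using their standard textbook definitions; the paper's exact domains and any shifts are not reproduced here, and the function names are our own.

```python
import math

def sphere(x):
    # Unimodal sphere function; global minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def powell(x):
    # Powell singular function; requires len(x) to be a multiple of 4.
    total = 0.0
    for i in range(0, len(x), 4):
        a, b, c, d = x[i], x[i + 1], x[i + 2], x[i + 3]
        total += ((a + 10 * b) ** 2 + 5 * (c - d) ** 2
                  + (b - 2 * c) ** 4 + 10 * (a - d) ** 4)
    return total

def griewank(x):
    # Multimodal Griewank function; global minimum 0 at the origin.
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def ackley(x):
    # Multimodal Ackley function; global minimum 0 at the origin.
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return (-20.0 * math.exp(-0.2 * math.sqrt(s1))
            - math.exp(s2) + 20.0 + math.e)
```

All four attain their global minimum of 0 at the origin, which is why the convergence tolerance of 0.001 used below is a meaningful stopping test.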
To verify the optimality and robustness of the algorithms, two convergence criteria are adopted: a convergence tolerance and a fixed maximum number of generations. The desktop computer used in the benchmark test function experiments ran the Microsoft Windows 7 64-bit operating system and had an Intel(R) Core i5 (3.30 GHz) processor with 8.0 GB of RAM installed.
In the next experimental test, the convergence tolerance was set to 0.001. Table 3 presents the best, mean, and worst values obtained in minimizing f_1(x) using all techniques, based on ten simulation results. Almost all techniques provide similar solutions. As presented in Table 3, the proposed method gives smaller values of f_1(x) in the fewest generations and the shortest CPU times. Figure 7 shows the convergence performance in minimizing f_1(x).

[Figure 4 flowchart: initialization of control parameters and settings per (3) and (4); initialize velocities and population of particles with random positions; update the modified sine chaotic map and chaos random inertia weight according to (3) and (4); update the velocity and position of particles using (7); repeat until the end criterion is met.]

Table 4 illustrates the shortest and the longest CPU times based on ten simulation tests. Table 4 reveals that the proposed method yields a small fitness of f_1(x) in the shortest CPU times and in the fewest generations. Figure 8 displays the number of generations of the different algorithmic methods and their shortest CPU times. Table 5 presents the best, mean, and worst values obtained in minimizing the Powell function in 500 generations using all population-based algorithms. Figure 9 plots the performance of the algorithms in minimizing f_2(x). Table 6 and Figure 10 display the shortest CPU times of the algorithms. Table 7 presents the best, mean, and worst values of the minimum f_2(x) obtained using all techniques with the convergence tolerance set to 0.001. The proposed method yields the best value of f_2(x) in the fewest generations based on ten simulation results. Figure 11 displays the minimum f_2(x) at convergence. Table 8 provides the shortest and longest CPU times of the algorithms based on the ten simulation results and indicates that the proposed method has the shortest CPU times. Figure 12 displays the generations of the different population-based algorithms. Table 9 presents the best, mean, and worst values obtained using all of the tested algorithms in 500 generations. Figure 13 presents the performance of the different algorithms in minimizing f_3(x). Table 10 and Figure 14 provide the shortest CPU times required by the various algorithms.

Benchmark Testing: Griewank
Table 11 presents the best, mean, and worst values obtained in minimizing f_3(x) using all techniques, based on ten simulation results. The convergence tolerance was set to 0.001. As presented, the proposed method provides the best mean value of f_3(x) in the fewest generations and the shortest CPU time. Figure 15 shows the convergence performance of each method in minimizing f_3(x). Table 12 presents the shortest and the longest CPU times based on the ten simulation tests of the different algorithms. Table 12 shows that the proposed method yields the smallest value of f_3(x) in the shortest CPU times. Figure 16 plots the generations of the different algorithms.

Benchmark Testing: Ackley Function. Table 13 presents the best, mean, and worst values obtained using all of the algorithms in 500 generations. Figure 17 presents the performance of the algorithms in minimizing f_4(x). Table 14 and Figure 18 highlight the shortest CPU times of the different population-based algorithms.
Table 15 presents the best, mean, and worst values of the minimum f_4(x) obtained using all techniques when the convergence tolerance was set to 0.001. Based on the ten simulation results, the proposed method provides the best value of f_4(x) in the fewest generations. Figure 19 plots the convergence in minimizing f_4(x). Table 16 presents the shortest and longest CPU times required by the different algorithms based on the ten simulation results. Table 16 reveals that the proposed method yields the smallest fitness value of f_4(x). Figure 20 plots the generations of the different population-based algorithms.

FLC Optimized by Chaos-Enhanced PSO with Adaptive Parameters for Maximum Power Point Tracking in Standalone Photovoltaic System. Developing fuzzy logic control for the MPPT [61][62][63][64][65][66][67] involves determining the scaling factor parameters and the shape of the fuzzy membership functions. The two inputs and one output for this purpose are E(k), the tracking error; ΔE(k), the change of tracking error; and ΔD(k), the change of the duty cycle. They are selected to optimally tune the fuzzy logic controller. The mathematical descriptions are given by E(k) = (P(k) − P(k−1))/(V(k) − V(k−1)) and ΔE(k) = E(k) − E(k−1), where P(k) and V(k) are the instantaneous power and voltage of the PV, respectively. E(k) indicates whether the operating power point of the load is currently located on the left-hand side or the right-hand side, while ΔE(k) denotes the direction of motion of the operating point. The fuzzy inference system (FIS) approach used herein for maximum power point tracking was the Mamdani system with the min-max fuzzy combination operation. The defuzzification method used to obtain the actual value of the duty cycle signal as a crisp output was the center-of-gravity method, D = Σ_{i=1}^{n} μ(D_i)·D_i / Σ_{i=1}^{n} μ(D_i). The output variable is the pulse width modulation signal, which is transmitted to the DC/DC boost converter to drive the necessary load (Table 18). The chaos-enhanced particle swarm optimization with adaptive parameters was utilized to determine the parameters of the scaling factors and to optimize the width of each input and output membership function. Each of these inputs and outputs includes five fuzzy membership functions.
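A minimal sketch of the two controller inputs and the center-of-gravity defuzzification described above. The helper names are hypothetical, and the discrete centroid over output levels D_i with firing strengths μ(D_i) is one common reading of the formula; it assumes V(k) ≠ V(k−1).

```python
def tracking_error(p, p_prev, v, v_prev, e_prev):
    # E(k) = (P(k) - P(k-1)) / (V(k) - V(k-1));  dE(k) = E(k) - E(k-1).
    # Assumes the PV voltage changed between samples (v != v_prev).
    e = (p - p_prev) / (v - v_prev)
    return e, e - e_prev

def centroid_defuzzify(memberships, singletons):
    # Center-of-gravity over discrete output levels D_i:
    # D = sum(mu_i * D_i) / sum(mu_i).
    num = sum(mu * d for mu, d in zip(memberships, singletons))
    den = sum(memberships)
    return num / den
```

The sign of E(k) tells the controller which side of the maximum power point the operating point lies on, and the crisp output D becomes the duty-cycle change sent to the boost converter.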
The operation of the chaos-enhanced PSO with adaptive parameters begins by generating a solution from the randomly generated population with the best positions. The velocity equation moves particles to better positions through the application of chaos search and the self-organizing, adaptive learning coefficients. The maximum power point tracking control was carried out using the fuzzy logic and scaling factor controllers. Figure 21 presents the model that was used to tune the parameters of the fuzzy logic controller during the process of optimization. The benchmark test is conducted with variable irradiance and temperature of the PV module as operating conditions, shown in Figure 22, together with the DC/DC boost converter and the rule base of the fuzzy logic controller (Table 19). Figure 23 displays the optimal widths of the input and output membership functions obtained by the swarm. The optimal fuzzy logic solution yields symmetric triangular membership functions for E(k), ΔE(k), and ΔD(k). The chaos-enhanced PSO with adaptive parameters causes the maximum power point tracking of the PV system to converge toward the best fitness obtained by the swarm. Figure 24 shows the output power. The obtained optimal fuzzy logic controller is more robust than, and outperforms, other maximum power point tracking algorithms.

Conclusion
This paper presents a novel technique with promising new features to enhance the performance and robustness of the standard PSO for solving optimization problems. The improved technique incorporates chaos searching to avoid the risk of stagnation, premature convergence, and trapping in a local optimum; a type 1″ constriction coefficient to improve the quality of convergence; and adaptive cognitive and social learning coefficients to improve the exploitation and exploration characteristics of the algorithm. The proposed chaos-enhanced PSO with adaptive parameters was experimentally tested in a high-dimensional problem space using four benchmark functions to verify its effectiveness. The advantages of the chaos-enhanced PSO with adaptive parameters over the other population-based algorithms were verified, and the numerical results demonstrate that the proposed technique offers faster convergence with nearly exact results, better reliability, and a lower computational burden; avoids stagnation and premature convergence; can escape from local minima; and has low CPU time and speed requirements. A complete stand-alone PV model was developed in which maximum power point tracking control with fuzzy logic is utilized to evaluate the performance of the proposed algorithm in real-world engineering optimization applications.
It is envisaged that the proposed chaos-enhanced PSO with adaptive parameters can be applied to a wider class of complex scientific and engineering problems, such as electric power system optimization (e.g., minimization of nonconvex fuel cost and power losses), robust design of nonlinear plant control systems in the presence of parametric uncertainties, and forecasting of wind farm output power using artificial neural networks whose connection weights and thresholds need to be optimally adjusted.

Figure 5: Maximum number of generations in which different algorithms minimize sphere function.

Figure 6: Shortest CPU times in terms of maximum number of generations for minimizing sphere function using different algorithms.

Figure 7: Convergence performance of different algorithms to minimize sphere function.

Figure 9: Maximum number of generations in which algorithms minimize Powell function.

Figure 10: Shortest CPU times in terms of maximum number of generations in which algorithms minimize Powell function.

Figure 11: Convergence performance of algorithms used to minimize Powell function: (a) ACO and DE; (b) FA, PSO, and proposed.

Figure 14: Shortest CPU times in terms of maximum number of generations in which various algorithms minimize Griewank function.

Figure 15: Convergence performance of different algorithms used to minimize Griewank function.

Figure 16: Shortest CPU times for convergence of different algorithms to minimize Griewank function.

Figure 17: Maximum number of generations in which different algorithms minimize Ackley function.

Figure 18: Shortest CPU times in terms of maximum number of generations in which different algorithms minimize Ackley function.

Figure 20: Shortest CPU times in terms of convergence for different algorithms used to minimize Ackley function.

Figure 21: Block diagram of maximum power point tracking for standalone PV system.

Table 1: Performance of different methods in minimizing sphere function based on ten simulation results (number of generations = 500).

Table 2: Shortest CPU times of different methods in minimizing sphere function based on ten simulation results (number of generations = 500).

Table 3: Performance of methods in minimizing sphere function based on ten simulation results (convergence tolerance = 0.001).

Table 4: Shortest CPU times for minimizing sphere function based on ten simulation results (convergence tolerance = 0.001).

Table 5: Performance of methods used to minimize Powell function based on ten simulation results (number of generations = 500).

Table 6: Shortest CPU times for minimizing Powell function using various methods based on ten simulation results (number of generations = 500).

Table 7: Performance of methods used to minimize Powell function based on ten simulation results (convergence tolerance = 0.001).

Table 8: Shortest CPU times of methods used to minimize Powell function based on ten simulation results (convergence tolerance = 0.001).

Table 9: Performance of various methods used to minimize Griewank function based on ten simulation results (number of generations = 500).

Table 10: Shortest CPU times in which various methods minimize Griewank function based on ten simulation results (number of generations = 500).

Table 11: Performance of different methods used to minimize Griewank function based on ten simulation results (convergence tolerance = 0.001).

Table 12: Shortest CPU times of different methods used to minimize Griewank function based on ten simulation results (convergence tolerance = 0.001).

Table 13: Performance of different methods used in minimizing Ackley function based on ten simulation results (number of generations = 500).

Table 14: Shortest CPU times of different methods in minimizing Ackley function based on ten simulation results (number of generations = 500).

Table 15: Performance of different methods used to minimize Ackley function based on ten simulation results (convergence tolerance = 0.001).

Table 16: Shortest CPU times of different methods used to minimize Ackley function based on ten simulation results (convergence tolerance = 0.001).

Table 19: Fuzzy logic control rule base.