A Novel Adaptive Elite-Based Particle Swarm Optimization Applied to VAR Optimization in Electric Power Systems

Particle swarm optimization (PSO) has been successfully applied to solve many practical engineering problems. However, more efficient strategies are needed to coordinate global and local searches in the solution space when the studied problem is extremely nonlinear and highly dimensional. This work proposes a novel adaptive elite-based PSO approach. The adaptive elite strategies involve the following two tasks: (1) appending a mean search to the original approach and (2) pruning/cloning particles. The mean search, which leads to stable convergence, helps the iterative process coordinate between the global and local searches. The mean of the particles and the standard deviation of the distances between pairs of particles are utilized to prune distant particles. The best particle is then cloned to replace the pruned distant particles in the elite strategy. To evaluate the performance and generality of the proposed method, four benchmark functions were tested with traditional PSO, chaotic PSO, differential evolution, and a genetic algorithm. Finally, a realistic loss minimization problem in an electric power system is studied to show the robustness of the proposed method.


Introduction
Particle swarm optimization (PSO) is a population-based optimization method [1]. PSO attempts to mimic the goal-searching behavior of biological swarms. A possible solution of the optimization problem is represented by a particle. The PSO algorithm requires iterative computation, which consists of updating the individual velocities and corresponding positions of all particles. The PSO algorithm has several merits: (1) it does not need the crossover and mutation operations used in genetic algorithms; (2) it has a memory, so the elite solutions can be retained by all particles (solutions); and (3) the computer code of PSO is easy to develop. PSO has been successfully applied to solve many problems, for example, reconfiguration of a shipboard power system [2], economic dispatch [3], harmonic estimation [4], harmonic cancellation [5], and energy-saving load regulation [6].
PSO is designed to conduct a global search (exploration) and a local search (exploitation) in the solution space. The former explores different extreme points of the search space by jumping among extreme particles. In contrast, the latter searches for the extreme particle within a local region. The particles ultimately converge to the same extreme position. However, when a problem involves a large number of unknowns, PSO generally requires a large number of particles to attain a globally optimal solution (position). Consequently, coordinating the global search and the local search becomes more difficult.
To overcome the limitation described above, van den Bergh and Engelbrecht presented the cooperative PSO (COPSO) based on dimension partitioning [7]. Nickabadi et al. focused on several well-known inertia weighting strategies and gave a first insight into various velocity control approaches [8]. The adaptive PSO (APSO) presented by Zhan et al. utilized information on the population distribution to identify the search status of particles; its learning strategy tries to find an elitist and then searches for the global best position in each iterative step [9]. Caponetto et al. presented a chaos algorithm to compute the inertia weight of the preceding position of a particle and a self-tuning method to compute the learning factors [10]. Their method introduces a chaos parameter, which is determined by a logistic map; the learning factors adapt according to the fitness value of the best solution during the iterations. To enhance the scalability of PSO, José and Enrique presented two mechanisms [11]: first, a velocity modulation guides the particles to the region of interest; second, a restarting modulation redirects the particles to promising areas in the search space. Deng et al. presented a two-stage hybrid swarm intelligence optimization algorithm incorporating the genetic algorithm, PSO, and ant colony optimization to conduct rough searching and detailed searching, exploiting the advantages of parallelism and positive feedback [12]. Li et al. presented a cooperative quantum-behaved PSO, which first produces several particles using a Monte Carlo method; these particles then cooperate with one another to converge to the global optimum [13]. Recently, Arasomwan and Adewumi tried many chaotic maps, in addition to the traditional logistic map, in an attempt to enhance the performance of PSO [14]. Yin et al. incorporated the crossover and mutation operators into PSO to enhance diversity in the population; in addition, a local search strategy based on constraint domination was proposed and incorporated into their algorithm [15].
The variants of PSO generally have the following limitations.
(1) The solutions are likely to be trapped in local optima and undesirable premature situations because these variants rely heavily on the learning factors and inertia weight.
(2) Compared with the traditional PSO, variants of PSO require longer CPU times to compute the global optimum due to the extra treatment of exploration and exploitation.
(3) The iterative process is unstable or even fails to converge because of ergodic and dynamic properties, for example, those of a chaotic sequence.
Based on the above literature review, a novel PSO is proposed in this paper. This novel adaptive elite-based PSO employs the mean of all particles and the standard deviation of the distances between any two particles. In addition to the global and local searches, a new mean search is introduced as an alternative method for finding the global optimum. The standard deviation of the distances among all particles is utilized to prune distant particles, and the best particle is cloned to replace the discarded particles. This treatment ensures a stable iterative process, while the increase in computational burden is trivial.
The rest of this paper is organized as follows. Section 2 provides the general PSO algorithm. Section 3 proposes the novel elite-based PSO. In Section 4, four benchmark functions are utilized to validate the proposed method, and comparative studies among different optimization methods (traditional PSO, chaotic PSO [16], differential evolution [17], and genetic algorithm) are given. The results of its application to the real problem of loss minimization in an electric power system are also presented. Section 5 draws conclusions.

General PSO Algorithm
PSO, which is an evolutionary optimization method for minimizing an objective $f(\mathbf{x})$, reflects the behavior of flocking birds or schooling fish. PSO comprises a population of particles that iteratively update the empirical information about a search space. The population consists of many individuals that represent possible solutions and are modeled as particles moving in an $n$-dimensional search space. Let the superscript $t$ be the iteration index. The position and velocity of each particle $i$ are updated as follows [1]:

$$v_i^{t+1} = w v_i^t + c_1 r_1 \left( x_{\mathrm{pbest},i}^t - x_i^t \right) + c_2 r_2 \left( x_{\mathrm{gbest}}^t - x_i^t \right), \quad (1)$$

$$x_i^{t+1} = x_i^t + v_i^{t+1}, \quad (2)$$

where the vectors $x_i^t$ and $v_i^t$ are the $n$-dimensional position and velocity of particle $i$, $i = 1, 2, \ldots, m$ (population size), respectively. The inertia weight $w$ is designed to carry the previously updated features over to the next iteration; if $w = 1$, then the preceding $v_i^t$ has a strong impact on $v_i^{t+1}$. $x_{\mathrm{pbest},i}$ and $x_{\mathrm{gbest}}$ are the best position found by particle $i$ and the best position found by any particle in the swarm so far, respectively. The random numbers $r_1$ and $r_2$ are between 0 and 1. The learning factors $c_1$ and $c_2$ are positive constants constrained within the range 0 to 2 such that $c_1 + c_2 \le 4$. The products $c_1 r_1$ and $c_2 r_2$ thus stochastically control the overall velocity of a particle. In this paper, $v_i^{t+1}$ denotes the $i$th velocity vector at the $(t+1)$th iteration, and it can be regarded as the update $\Delta x_i^t$ that is added to $x_i^t$.
The second term in (1) is designed to conduct the global search (exploration), and the third term in (1) conducts the local search (exploitation) in the solution space, as described in Section 1. The global and local searches should be coordinated in order to attain the global optimum and avoid premature results. The inertia weight $w$ decreases linearly throughout the iterations and governs the momentum of the particle by weighting the contribution of the particle's previous velocity to its current velocity. A large $w$ favors global searches, whereas a small $w$ favors local searches. Consequently, a large $w$ is preferred in the earlier stages of the search process, while a small value is preferred in the later stages.
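The update rules (1) and (2) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the population size, search bounds, learning factors $c_1 = c_2 = 2.0$, and the 0.9 to 0.4 inertia schedule are assumed values chosen only for demonstration.

```python
import numpy as np

def pso(f, dim=20, n_particles=20, iters=500, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, bounds=(-100.0, 100.0), seed=0):
    """Minimal PSO minimizer following (1)-(2); parameters are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions x_i^t
    v = np.zeros((n_particles, dim))                  # velocities v_i^t
    pbest = x.copy()                                  # per-particle best positions
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()          # swarm-best position
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / max(iters - 1, 1)  # linear decay
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. (1): cognitive (second) and social (third) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v                                     # Eq. (2)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())
```

Because the personal bests are updated greedily, the returned objective value is monotonically nonincreasing over the iterations.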

Proposed Adaptive Elite-Based PSO
The proposed novel PSO has the following three features: (1) an extra mean search is introduced to coordinate exploration and exploitation; (2) distant particles are pruned and the best particle is cloned to ensure that the iterative process is stable; and (3) complicated computation is avoided, so CPU time is reduced. The proposed adaptive elite-based PSO thus performs two main tasks: the mean search and particle pruning/cloning. The effects of these two tasks decline as the number of iterations increases. The limitations of the traditional variants of PSO described in Section 1 can therefore be eliminated.

Mean Search.
The proposed method is based on the mean of the particles and the standard deviation of the distances between any two particles in the $t$th iteration. Equation (1) is herein modified to the following:

$$v_i^{t+1} = w^t v_i^t + c_1 r_1 \left( x_{\mathrm{pbest},i}^t - x_i^t \right) + c_2 r_2 \left( x_{\mathrm{gbest}}^t - x_i^t \right) + c_3 r_3 \left( \bar{X}^t - x_i^t \right), \quad (3)$$

where the inertia weight $w^t$ is decreased linearly from 0.5 in the first iteration to 0.3 in the final iteration. In addition to $r_1$ and $r_2$, the term $r_3$ is also a random number between 0 and 1. $\bar{X}^t$ is an $n$-dimensional vector of the mean values of all particles. The last term introduced in (3) is used to coordinate the global and local searches as well as the particle pruning/cloning task described later.
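One application of the modified update (3) can be sketched as below; the function signature and the choice to recompute the mean vector from the current positions are illustrative assumptions.

```python
import numpy as np

def mean_search_velocity(v, x, pbest, gbest, w, c1, c2, c3, rng):
    """One velocity update of Eq. (3): the two standard PSO terms plus the
    mean-search term pulling each particle toward the swarm mean."""
    n, d = x.shape
    xbar = x.mean(axis=0)                      # mean (virtual) particle, X-bar
    r1, r2, r3 = (rng.random((n, d)) for _ in range(3))
    return (w * v
            + c1 * r1 * (pbest - x)            # cognitive term
            + c2 * r2 * (gbest - x)            # social term
            + c3 * r3 * (xbar - x))            # mean-search term of Eq. (3)
```

With the other coefficients set to zero, the returned velocity always points from each particle toward the swarm mean, which is the coordinating effect the text describes.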

Particle Pruning/Cloning.
The mean vector $\bar{X}^t$ introduced in Section 3.1 is used to develop the second task of the adaptive elite strategy: pruning distant particles. The standard deviation $\sigma^t$ of all distances among particles in the $t$th iteration is evaluated. Suppose that all distances follow a Gaussian distribution; then $\bar{X}^t \pm 3\sigma^t$ covers 99.7% of all particles. Let $k^t = 3 - 2c_3^t$. Because $c_3^t$ in (4) decreases approximately from 1.48 to zero, the value of $k^t$ increases from 0.04 to 3. Hence, the second task in the adaptive elite-based PSO initially prunes distant particles and finally covers all particles. Restated, the particles outside the range $\bar{X}^t \pm k^t \sigma^t$ are pruned, and the pruned particles are substituted by clones of $x_{\mathrm{gbest}}^t$. As shown in Figure 1, $\bar{X}^t$ is a virtual particle located at the mean of all particles. Consider the range $\bar{X}^t \pm k^t \sigma^t$: only seven particles lie inside this range and three lie outside it. A particle $x_j^t$ outside the range is pruned and is hence substituted by $x_{\mathrm{gbest}}^t$, denoted by $x_j^{t\prime}$.
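The pruning/cloning task can be sketched as follows. The text does not spell out which pairwise distances enter the band, so the sketch reads "distance" as each particle's Euclidean distance to the mean particle; that interpretation, like the function's interface, is an assumption.

```python
import numpy as np

def prune_and_clone(x, gbest, c3):
    """Prune particles far from the mean particle and clone gbest into their
    slots, as in the paper's elite strategy (one plausible reading)."""
    xbar = x.mean(axis=0)
    dist = np.linalg.norm(x - xbar, axis=1)    # distances to the mean particle
    k = 3.0 - 2.0 * c3                         # grows from ~0.04 to 3 as c3 decays
    far = dist > dist.mean() + k * dist.std()  # particles outside the band
    x = x.copy()
    x[far] = gbest                             # replace pruned particles by the elite
    return x, int(far.sum())
```

Early on (large $c_3$, small $k$) the band is tight and outliers are culled aggressively; as $c_3$ decays, the band widens to cover the whole swarm and pruning fades out.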

Algorithmic Steps.
The proposed method can be implemented using the following algorithmic steps.
Step 1. Input the objective function, the unknowns, and the population size.
Step 2. Randomly initialize the particles (vectors of unknowns).
Step 3. Calculate the objective values of all particles.
Step 4. If the iterative process meets the convergence criteria, then stop.
Step 5. Find $x_{\mathrm{pbest},i}$ and $x_{\mathrm{gbest}}$ according to the objective values of all particles.
Step 6. Compute the mean vector $\bar{X}^t$ of all particles.
Step 7. Compute the standard deviation $\sigma^t$ of the distances among particles and the factor $k^t$.
Step 8. If a particle lies outside the range $\bar{X}^t \pm k^t \sigma^t$, then it is pruned and replaced by $x_{\mathrm{gbest}}^t$.
Step 9. Calculate the velocities of all particles using (3).
Step 10. Update the positions of all particles using (2) and return to Step 3.
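The steps above can be assembled into a single loop as follows. This is a sketch, not the authors' code: the decay schedule of $c_3$ (from about 1.48 to 0), the learning factors $c_1 = c_2 = 2.0$, and the search bounds are assumptions, since Eq. (4) and the exact settings are not reproduced in this text.

```python
import numpy as np

def aepso(f, dim=20, n=20, iters=500, bounds=(-100.0, 100.0), seed=1):
    """Sketch of the adaptive elite-based PSO algorithmic steps."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))                 # Step 2: random particles
    v = np.zeros((n, dim))
    fx = np.array([f(p) for p in x])                  # Step 3: objective values
    pbest, pbest_f = x.copy(), fx.copy()
    for t in range(iters):                            # Step 4: fixed budget
        frac = t / max(iters - 1, 1)
        gbest = pbest[np.argmin(pbest_f)].copy()      # Step 5: elite particle
        xbar = x.mean(axis=0)                         # Step 6: mean particle
        dist = np.linalg.norm(x - xbar, axis=1)
        c3 = 1.48 * (1.0 - frac)                      # assumed decay of c3
        k = 3.0 - 2.0 * c3                            # Step 7: k = 3 - 2*c3
        far = dist > dist.mean() + k * dist.std()
        x[far] = gbest                                # Step 8: prune/clone
        w = 0.5 - 0.2 * frac                          # inertia 0.5 -> 0.3
        r1, r2, r3 = (rng.random((n, dim)) for _ in range(3))
        v = (w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (gbest - x)
             + c3 * r3 * (xbar - x))                  # Step 9: Eq. (3)
        x = x + v                                     # Step 10: Eq. (2)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
    return pbest[np.argmin(pbest_f)], float(pbest_f.min())
```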
The stability of the proposed iterative process can be analyzed as follows. The equilibrium point must satisfy $\hat{Y}(t+1) = \hat{Y}(t)$ as $t \to \infty$, a condition that holds only when $x_{\mathrm{gbest}}^t = x_{\mathrm{pbest},i}^t = \bar{X}^t$ as $t \to \infty$; this determines the equilibrium point. To guarantee that the equilibrium point of the discrete-time linear system in (12) is asymptotically stable, necessary and sufficient conditions are obtained using the classical Jury criterion. Since the random numbers $r_1$, $r_2$, and $r_3$ lie between 0 and 1, these stability conditions are equivalent to the set of parameter selection heuristics in (17), which guarantee convergence of the proposed adaptive elite-based PSO algorithm.

Simulation Results
The performance of the proposed adaptive elite-based PSO is verified by four benchmark test functions in this section. Traditional PSO, chaotic PSO (CPSO), differential evolution (DE), and a genetic algorithm (GA) are used to investigate the benchmark functions for comparative studies [21]. The number of dimensions of each benchmark function is 20 (that is, the number of unknowns $n = 20$). The number of particles (population size) is also 20 in all methods. The mutation rate and crossover rate in DE are 0.8 and 0.5, respectively, while those in GA are 0.01 and 1.0, respectively. Ten simulations are conducted with each method to verify the optimality and robustness of the algorithms. Two convergence criteria are adopted: a convergence tolerance and a fixed maximum number of iterations. Five programs are developed in MATLAB 7.12 to minimize the benchmark functions. The CPU time for studying these four test functions is evaluated using a PC with an Intel(R) Core i7 CPU (3.4 GHz, 4 GB RAM). Finally, the problem of loss minimization in an electric power system is used to validate the proposed method.
4.1. Benchmark Testing: Sphere Function. First, the sphere function is tested as follows:

$$f_1(\mathbf{x}) = \sum_{i=1}^{n} x_i^2,$$

where $n = 20$ and $\mathbf{x} = (x_1, x_2, \ldots, x_{20})$. $f_1(\mathbf{x})$ is a unimodal test function whose optimal solution is $f_1(\mathbf{x}) = 0$ at $x_1 = x_2 = \cdots = x_{20} = 0$. First, the convergence tolerance is set to 0.001. Table 1 shows the shortest, mean, and longest CPU times obtained from 10 simulations by the five methods. As illustrated in Table 1, all methods approach similar solutions except for GA (which fails to converge). DE requires the longest CPU times. For the mean and longest CPU times, the proposed method yields smaller values of $f_1(\mathbf{x})$ than the other methods. Figure 2 shows the iteration excursions of the different methods in the runs with their shortest CPU times. Table 2 shows the best, mean, and worst $f_1(\mathbf{x})$ obtained by running 500 iterations with all methods; the proposed method always yields the smallest $f_1(\mathbf{x})$ among the five methods. Figures 3 and 4 present the parameter $c_3$ and the standard deviation $\sigma$ obtained using the proposed method in the run with the best $f_1(\mathbf{x})$, respectively. The values of $c_3$ decrease quickly in the first 30 iterations, and the standard deviation oscillates while decreasing to zero near the 200th iteration.
Generally, the CPU time required by the proposed method is longer than those taken by PSO and CPSO but shorter than those taken by DE and GA.
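For reference, the sphere benchmark above transcribes directly into code:

```python
import numpy as np

def sphere(x):
    """f1(x) = sum_i x_i^2; unimodal, minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))
```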

Benchmark Testing: Rosenbrock Function.
This subsection employs the Rosenbrock function for testing:

$$f_2(\mathbf{x}) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right],$$

where $n = 20$ and $\mathbf{x} = (x_1, x_2, \ldots, x_{20})$. Each contour of the Rosenbrock function looks roughly parabolic. Its global minimum is located in the valley of the parabola (the banana valley). Since the function changes little in the proximity of the global minimum, finding the global minimum is considerably difficult. $f_2(\mathbf{x})$ is also a unimodal test function whose optimal solution is $f_2(\mathbf{x}) = 0$ at $x_1 = x_2 = \cdots = x_{20} = 1$. First, the convergence tolerance is set to 0.001. Table 3 shows the shortest, mean, and longest CPU times obtained by performing 10 simulations with the five methods. As illustrated in Table 3, all methods yield similar solutions except for CPSO and GA (which fail to converge). The CPU times required by the proposed method are between those taken by PSO and DE. DE requires fewer iterations but longer CPU times in all studies. All solutions (optimality), from the viewpoints of the shortest, mean, and longest CPU times, are similar. Figure 5 shows the iteration excursions of the different methods (the runs with their shortest CPU times). Table 4 shows the best, mean, and worst $f_2(\mathbf{x})$ obtained by running 1000 iterations with all methods. Since the proposed AEPSO and PSO require 6500-8000 iterations to find the global optimum of $f_2(\mathbf{x})$, they only reach a near-optimal solution by the 1000th iteration. Figures 6 and 7 display the parameter $c_3$ and the standard deviation $\sigma$ of the proposed method in the run with the best $f_2(\mathbf{x})$, respectively. The values of $c_3$ decrease quickly in the first 40 iterations; the standard deviation decreases to a very small positive value near the 220th iteration and then oscillates continuously.
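The Rosenbrock benchmark in vectorized form:

```python
import numpy as np

def rosenbrock(x):
    """f2(x) = sum_{i=1}^{n-1} [100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2];
    minimum 0 at (1, ..., 1)."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (x[:-1] - 1.0) ** 2))
```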

Benchmark Testing: Griewank Function.
In this subsection, the Griewank function is tested as follows:

$$f_3(\mathbf{x}) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right),$$

where $n = 20$ and $\mathbf{x} = (x_1, x_2, \ldots, x_{20})$. If $n = 1$, then the function $f_3(x)$ has 191 minima, with the global minimum at $x = 0$ and local minima at $\pm x_k$ for $x_k \approx 6.28005, 12.5601, 18.8401, 25.1202, \ldots$. First, the convergence tolerance is set to 0.01. Table 5 shows the shortest, mean, and longest CPU times achieved using the five methods in 10 simulations. As illustrated in Table 5, CPSO, DE, and GA fail to converge. The optimality of the proposed method and PSO, as given by the shortest, mean, and longest CPU times, is similar. Figure 8 shows the iteration excursions of the methods in the runs with the shortest CPU times. Table 6 shows the best, mean, and worst $f_3(\mathbf{x})$ obtained at the 1000th iteration by all methods. The proposed method still attains the best optimality compared with the other methods.
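The Griewank benchmark in code, with the per-coordinate $\sqrt{i}$ scaling handled by an index vector:

```python
import numpy as np

def griewank(x):
    """f3(x) = 1 + sum_i x_i^2/4000 - prod_i cos(x_i / sqrt(i));
    global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)               # 1-based coordinate indices
    return float(1.0 + np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))))
```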

Benchmark Testing: Ackley Function.
The Ackley function is an $n$-dimensional, highly multimodal function that has a great number of local minima, which look like noise, but only one global minimum at $f_4(\mathbf{x}) = 0$ and $\mathbf{x} = (x_1, x_2, \ldots) = (0, 0, \ldots)$:

$$f_4(\mathbf{x}) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos\left( 2\pi x_i \right) \right) + 20 + e,$$

where $n = 20$ and $\mathbf{x} = (x_1, x_2, \ldots, x_{20})$ herein. First, the convergence tolerance is set to 0.001. Table 7 shows the shortest, mean, and longest CPU times of 10 simulations obtained by the five methods. As illustrated in Table 7, all methods are able to find the global optimum except for CPSO (which fails to converge). The CPU times required by the proposed method are between those required by PSO and DE, while GA needs very long CPU times. All solutions (optimality) are similar from the viewpoints of the shortest, mean, and longest CPU times. Figure 11 shows the iteration excursions of the various methods in the runs with the shortest CPU times; the proposed method converges the fastest.
Table 8 shows the best, mean, and worst $f_4(\mathbf{x})$ obtained at the 500th iteration by all methods. The proposed method and DE gain the global optimum, but PSO, CPSO, and GA do not. Inspecting the best, mean, and worst values, the proposed method always attains the smallest $f_4(\mathbf{x})$; it also requires the shortest CPU times to converge to solutions. Figures 12 and 13 illustrate the parameter $c_3$ and the standard deviation $\sigma$, respectively, obtained using the proposed method in the run with the best $f_4(\mathbf{x})$. The values of $c_3$ decrease to small positive values close to the 220th iteration. The standard deviation decreases to very small positive values near the 400th iteration and thereafter oscillates (see Figures 9 and 10).
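The Ackley benchmark in code, using the standard form with $1/n$ averaging inside both exponentials:

```python
import numpy as np

def ackley(x):
    """f4(x) = -20 exp(-0.2 sqrt(mean(x_i^2))) - exp(mean(cos(2 pi x_i)))
    + 20 + e; highly multimodal, global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    n = x.size
    return float(-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                 - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
                 + 20.0 + np.e)
```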

Studies on Loss Minimization in a Power System.
Active power loss in an electric power system is caused by the resistance of the transmission/distribution lines. The losses can be evaluated as $\sum_{\ell} |I_\ell|^2 R_\ell$ (the sum, over all lines, of the squared line current times the line resistance) or, in simplified form, as the total active power generation minus the total active power demand. If the active power demands are constant and the voltage magnitudes are increased, then the line currents are reduced, leading to a reduction of active power losses. Therefore, engineers who work on electric power systems tend to utilize voltage controllers to raise the voltage profile in order to reduce the active power losses [22]. The voltage controllers include generator voltages, transformer taps, and shunt capacitors. The formulation of the active power loss minimization problem is given in the appendix. The proposed method, PSO, CPSO, DE, and GA are applied to find the optimal 39 control variables in order to minimize the active power (MW) losses. Ten simulations, each with a fixed 500 iterations, are conducted using each method. Figure 15 illustrates the iteration performance of these five methods. The iterative process of the proposed method converges slowly at first but quickly after the 220th iteration. Table 9 reveals that the proposed method converges the fastest and requires the shortest CPU times for the best, mean, and worst active power losses over the 10 simulations. As shown in Table 9, GA still needs much more CPU time to solve this realistic problem, and DE yields the worst mean value over the 10 simulation runs. Table 10 gives the optimal controls obtained using the different methods (best solution). Figure 16 displays the optimal voltage profile of the electric power system obtained by the proposed method.
Compared with the traditional PSO, the proposed method adds the novel mean search and particle pruning/cloning, which achieve a stable iterative process and avoid ineffective searches, respectively. Although the space complexity increases slightly due to these two tasks, as shown in (3)-(6), the number of iterations and the computational burden are reduced effectively. Thus, the time complexity is similar to that of the traditional PSO.
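The $\sum |I|^2 R$ loss evaluation above is straightforward to express in code. The unit convention here (currents in kA and resistances in ohms, so that $I^2 R$ comes out directly in MW) is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def line_losses_mw(currents_ka, resistances_ohm):
    """Total active power loss as sum(|I|^2 * R) over all lines.
    (1 kA)^2 * 1 ohm = 10^6 W = 1 MW, hence the MW result."""
    i = np.asarray(currents_ka, dtype=float)
    r = np.asarray(resistances_ohm, dtype=float)
    return float(np.sum(np.abs(i) ** 2 * r))
```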
The performance comparisons among the proposed method, PSO, CPSO, DE, and GA are carried out on a set of four benchmark functions and one realistic loss minimization problem in an electric power system. The simulation results, shown in Tables 1-10, indicate that CPSO and DE are sometimes unable to find the optimal solution or even diverge. Furthermore, both GA and DE always require long CPU times for computation. Additionally, the computational effort required by the proposed method to obtain high-quality solutions is less than that required by GA and DE to obtain solutions of the same quality. Therefore, considering both time and space complexity, the proposed method has the most stable convergence performance in finding the optima and the best computational efficiency among the compared methods.

Conclusions
In this paper, to provide a better tradeoff between the global and local searches in the iterative process than existing methods, a novel adaptive elite-based PSO has been proposed.

Figure 2: Iteration excursions with the shortest CPU times for different methods used to minimize $f_1$.

Figure 4: Iterative performance in terms of standard deviation (sphere function).

Figure 5: Iteration excursions with the shortest CPU times for different methods used to minimize $f_2$.

Figure 7: Iterative performance in terms of standard deviation (Rosenbrock function).

Figure 8: Iteration excursions with the shortest CPU times for different methods used to minimize $f_3$.

Figure 10: Iterative performance in terms of standard deviation (Griewank function).

Figure 11: Iteration excursions with the shortest CPU times for different methods used to minimize $f_4$.

Figure 12: Iterative performance in terms of the parameter $c_3$ (Ackley function).

Figure 14: A 25-busbar power system with 3 wind farms at busbars 13, 14, and 25. The wind power generations at these three busbars are 2.4, 2.4, and 5.1 MW, respectively. The diesel generators are located at busbars 1-11, and the shunt capacitors are at busbars 21, 22, and 23.

4.6. Comparison of Time, Space Complexities, and Performance. Compared with the traditional PSO, the proposed method introduces two additional tasks, the mean search and particle pruning/cloning, as discussed above.

Figure 13: Iterative performance in terms of standard deviation (Ackley function).

Table 1: Performance of different methods used to minimize $f_1$ based on 10 simulation results (convergence tolerance = 0.001).

Table 2: Performance of different methods used to minimize $f_1$ based on 10 simulation results (number of iterations = 500).

Table 3: Performance of different methods used to minimize $f_2$ based on 10 simulation results (convergence tolerance = 0.001).

Table 4: Performance of different methods used to minimize $f_2$ based on 10 simulation results (number of iterations = 1000).

Table 5: Performance of different methods used to minimize $f_3$ based on 10 simulation results (convergence tolerance = 0.01).

Table 6: Performance of different methods used to minimize $f_3$ based on 10 simulation results (number of iterations = 1000).

Table 7: Performance of different methods used to minimize $f_4$ based on 10 simulation results (convergence tolerance = 0.001).

Table 8: Performance of different methods used to minimize $f_4$ based on 10 simulation results (number of iterations = 500).

Table 9: Optimality and CPU time required by different methods (10 runs).

Table 10: Optimal solutions obtained by different methods (best solution).