Quantum Behaved Particle Swarm Optimization with Neighborhood Search for Numerical Optimization



Introduction
Many real-world problems can be formulated as optimization problems over continuous or discrete search spaces. As economies and technologies develop, optimization problems become increasingly complex, and more efficient optimization algorithms are needed. Over the past several years, population-based stochastic optimization techniques have been widely used to solve such problems, including genetic algorithms (GA) [1], evolutionary programming (EP) [2], particle swarm optimization (PSO) [3], differential evolution (DE) [4], ant colony optimization (ACO) [5], and artificial bee colony (ABC) [6]. Due to its simple concept, easy implementation, and effectiveness, PSO has been widely applied to various optimization areas [7][8][9][10][11].
PSO was first introduced by Kennedy and Eberhart in 1995. It is an optimization technique inspired by swarm intelligence. Like GA, PSO is a population-based algorithm, but it does not contain any crossover or mutation operator. In PSO, the movement of particles is determined by their corresponding previous best particles (pbest) and the global best particle (gbest). Due to the attraction of these best particles (pbest and gbest), PSO shows a fast convergence rate. However, it easily converges to local minima when solving complex problems. The main reason is that the attraction-based search pattern of PSO greatly depends on pbest and gbest; once these best particles get stuck, all particles in the swarm quickly converge to the trapped position. To help trapped particles jump out, many improved PSO algorithms have been proposed. In [12], Shi and Eberhart introduced an inertia weight ω into the original PSO to achieve a balance between global and local search. Reported results show that a linearly decreasing ω is a good choice for the test suite. Bergh and Engelbrecht [13] proposed a cooperative approach for PSO (CPSO-H) for solving multimodal problems. Liang et al. [14] proposed a comprehensive learning PSO (CLPSO), in which each particle can learn from other particles' experiences in different dimensions. Simulation results show that CLPSO outperforms seven other PSO algorithms. Li et al. [15] presented an adaptive learning PSO for function optimization, in which the learning mechanism of each particle is separated into three components: its own historical best position, the closest neighbor, and the global best one. By using this individual-level adaptive technique, a particle can maintain a well-balanced behavior of exploration and exploitation. Zhan et al. [16] presented an adaptive PSO (APSO) employing the following two strategies. The first evaluates the population distribution and particle fitness to identify the current search status. The second utilizes an elitist learning strategy to help the global best particle jump out of likely local optima. Wang et al. [17] proposed a new PSO algorithm with generalized opposition-based learning (GOBL) and Cauchy mutation. GOBL is an enhanced opposition-based learning (OBL) [18] scheme that helps accelerate the evolution, while the Cauchy mutation focuses on improving the global search ability. In [19], Wang et al. introduced a diversity-enhanced PSO algorithm (DNSPSO), which employs a diversity-enhancing mechanism and neighborhood search strategies to achieve a tradeoff between exploration and exploitation abilities.
Like other population-based stochastic algorithms, the performance of PSO is greatly influenced by its control parameters, such as the inertia weight (ω) and the acceleration coefficients (c1 and c2). Different parameter settings may result in different performance. To minimize the effects of these parameters, some adaptive parameter mechanisms have been designed [15, 16]. Recently, Sun et al. [20] proposed a novel PSO algorithm called quantum-behaved PSO (QPSO), in which a quantum model is used to describe the state of particles. Compared to the original PSO, QPSO eliminates the velocity term and does not contain the parameters ω, c1, and c2. In QPSO, new particles are generated around the weighted positions of the previous best particles and the global best particle. This may result in overly fast attraction. To tackle this problem, some improved QPSO algorithms have been proposed [21][22][23][24]. Sun et al. [21] proposed a diversity-guided QPSO (DGQPSO), which employs a mutation operator on the global best particle. In [22], chaotic search is introduced into QPSO to increase the diversity of the swarm in the later period of the search, so as to help the algorithm escape from local minima. Zhao et al. [23] proposed a fuzzy QPSO, in which the center of a potential particle is influenced by more than two particles in the neighborhood and the influence is defined by a fuzzy variable. Wang and Zhou [24] presented a local QPSO (LQPSO) as a generalized local search operator. The LQPSO is incorporated into a main QPSO to construct a hybrid algorithm, QPSO-LQPSO. Results show that QPSO-LQPSO achieves better results than PSO and QPSO.
In this paper, we propose a new QPSO algorithm called NQPSO, in which one local and one global neighborhood search strategy are utilized to balance exploitation and exploration. In addition, the concept of opposition-based learning (OBL) [18] is employed for population initialization. To verify the performance of our approach, twelve well-known benchmark functions, including multimodal and rotated problems, are used in the experiments. Simulation results show that NQPSO achieves better results than some similar QPSO algorithms and other state-of-the-art PSO variants.
The rest of the paper is organized as follows. The original PSO and QPSO are briefly introduced in Sections 2 and 3, respectively. Our approach, NQPSO, is described in Section 4. Experimental results and discussions are presented in Section 5. Finally, the work is summarized in Section 6.

Particle Swarm Optimization
PSO is a population-based optimization technique motivated by the behaviors of fish schooling and bird flocking. In PSO, the population is called a swarm, and each member of the swarm is called a particle, which represents a potential solution to the optimization task. During the evolution, the search direction of a particle is determined by its own previous best particle and the global best particle found so far by all particles.
Let N be the swarm size. Each particle i (1 ≤ i ≤ N) has two vectors, velocity (V_i) and position (X_i). At each iteration, each particle in the swarm updates its velocity and position as follows [12]:

v_{i,j}(t + 1) = ω · v_{i,j}(t) + c1 · r1 · (pbest_{i,j} − x_{i,j}(t)) + c2 · r2 · (gbest_j − x_{i,j}(t)),
x_{i,j}(t + 1) = x_{i,j}(t) + v_{i,j}(t + 1),

where X_i and V_i are the position and velocity vectors of the ith particle, respectively, pbest_i represents the previous best particle of the ith particle, and gbest is the global best particle found so far by all particles. r1 and r2 are two independently generated random numbers in the range [0, 1]. The parameter ω is called the inertia weight, and c1 and c2 are known as acceleration coefficients.
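As an illustration of this update rule, the following is a minimal Python sketch of one velocity/position update for a single particle (the function name and default coefficients are illustrative, not the paper's code; ω = 0.7298 and c1 = c2 = 1.49618 are the commonly used values quoted later for APSO):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618):
    """One PSO velocity/position update for a single particle.

    x, v, pbest, gbest are lists of floats of equal length (one entry
    per dimension); returns the new position and velocity.
    """
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        # inertia term + cognitive attraction to pbest + social attraction to gbest
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_x, new_v
```

When the particle already sits on both pbest and gbest with zero velocity, both attraction terms vanish and the particle stays put, which is exactly the stagnation behavior the paper discusses.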

Quantum-Behaved Particle Swarm Optimization
A recent theoretical study [25] reported that each particle converges to its local attractor p_i = (p_{i,1}, p_{i,2}, ..., p_{i,D}), defined as follows:

p_{i,j} = φ · pbest_{i,j} + (1 − φ) · gbest_j,

where φ ∈ (0, 1) is a uniform random number. It can be seen that p_i is a stochastic attractor of particle i that lies in a hyperrectangle spanned by pbest_i and gbest. Based on this characteristic, Sun et al. [20] proposed the quantum-behaved PSO (QPSO) algorithm. In QPSO, each particle has only a position vector and no velocity vector. During the evolution, each particle updates its position as follows:

x_{i,j}(t + 1) = p_{i,j} ± β · |mbest_j − x_{i,j}(t)| · ln(1/u),

where the + or − sign is taken with equal probability according to a random number h, and h and u are two random numbers distributed uniformly in the range (0, 1). The parameter β is called the contraction-expansion coefficient, which can be tuned to control the convergence speed of the algorithm. mbest is the mean best position of the population, calculated as

mbest_j = (1/N) · Σ_{i=1}^{N} pbest_{i,j},

where N is the population size.
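These update rules can be sketched in Python as follows (a hedged illustration, not the paper's code; the fixed β = 0.75 is an arbitrary mid-range value, whereas the paper decreases β linearly from 1.0 to 0.5):

```python
import math
import random

def mean_best(pbests):
    """mbest: coordinate-wise mean of all previous best particles."""
    n = len(pbests)
    return [sum(p[j] for p in pbests) / n for j in range(len(pbests[0]))]

def qpso_step(x, pbest, gbest, mbest, beta=0.75):
    """One QPSO position update for a single particle (lists of floats)."""
    new_x = []
    for j in range(len(x)):
        phi = random.random()                       # weight of the local attractor
        p = phi * pbest[j] + (1.0 - phi) * gbest[j]
        u = 1.0 - random.random()                   # in (0, 1], so log(1/u) is safe
        step = beta * abs(mbest[j] - x[j]) * math.log(1.0 / u)
        # + or - with equal probability (the random number h of the text)
        new_x.append(p + step if random.random() < 0.5 else p - step)
    return new_x
```

Note that when x, pbest, gbest, and mbest coincide, the step length is zero and the particle cannot move, which mirrors the premature-convergence risk discussed below.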
The main steps of QPSO are described in Algorithm 1, where p_i is the local attractor, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Compared to the original PSO, QPSO does not have the velocity term or the parameters ω, c1, and c2, but it introduces a new parameter β, which is linearly decreased from 1.0 to 0.5 in some of the literature [20, 21].

Proposed Approach
According to the QPSO position update, new particles are generated around the local attractor p_i; that is, particles move toward their local attractors during the search process. The local attractors are weighted positions of pbest and gbest, so they lie in the neighborhood of gbest, which indirectly demonstrates that particles move toward the neighborhood of gbest. This search mechanism achieves a fast convergence speed by generating new particles in the neighborhood of gbest. However, it may result in premature convergence because of the fast attraction. Figure 1 illustrates the search behavior of QPSO.
To enhance the global search ability and avoid premature convergence, some mutation techniques have been introduced into the QPSO algorithm. In [21], Sun et al. proposed a diversity-guided QPSO, in which a mutation operation is conducted on gbest if the diversity of the swarm falls below a predefined value. In [26], Jamalipour et al. proposed another mutation operator inspired by the mutation scheme of DE, in which each particle updates its position according to either the original quantum model or the DE mutation with equal probability.
Although the above mutation techniques can improve the global search ability of QPSO, their local search remains poor. To make a tradeoff between global and local search, this paper proposes one local and one global neighborhood search strategy.
In the local neighborhood search (LNS) strategy, we focus on searching the local neighborhood of the current particle, which helps find more accurate solutions. The local neighborhood search strategy is defined by

LX_i = r1 · X_i + r2 · pbest_i + r3 · (X1 − X2),

where X1 and X2 are the position vectors of two randomly selected particles, and r1, r2, and r3 are three random numbers in the range (0, 1) with r1 + r2 + r3 = 1. Figure 2 illustrates the mechanism of the local neighborhood search strategy. The proposed LNS strategy is similar to the local search operator used in [19], but they differ: the local search operator in [19] is based on an assumed ring topology, while our approach is based on the whole population.
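A minimal Python sketch of the LNS step follows (names are illustrative; normalizing three uniform draws is one simple way, assumed here, to obtain r1 + r2 + r3 = 1):

```python
import random

def local_neighborhood_search(i, X, pbest):
    """LNS candidate for particle i: blend of its position, its pbest,
    and a difference of two other randomly selected particles.

    X and pbest are lists of particles; each particle is a list of floats.
    """
    # three random weights summing to 1 (one possible construction)
    a, b, c = random.random(), random.random(), random.random()
    s = a + b + c
    r1, r2, r3 = a / s, b / s, c / s
    # two distinct randomly selected particles other than i
    k1, k2 = random.sample([k for k in range(len(X)) if k != i], 2)
    return [r1 * X[i][j] + r2 * pbest[i][j] + r3 * (X[k1][j] - X[k2][j])
            for j in range(len(X[i]))]
```

Because the weights sum to one, the candidate stays in the convex-like region around the current particle and its pbest, perturbed by the random difference vector.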
In the global neighborhood search (GNS) strategy, we concentrate on searching the global neighborhood of the current particle, which enhances the global search and helps avoid premature convergence. The global neighborhood search strategy is defined by

GX_i = r1 · X_i + r2 · gbest + r3 · Levy(δ),

where r1, r2, and r3 are defined as above, and Levy(δ) is a random number generated by a Lévy distribution with parameter δ = 1.3 [15]. The main reason for using Lévy mutation is that the Lévy probability distribution has an infinite second moment and is therefore more likely to generate a new particle that is farther away from its parent than the commonly employed Gaussian mutation. Figure 3 illustrates the mechanism of the global neighborhood search strategy (the search area of the GNS covers the whole search area).
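The text specifies only that Levy(δ) is a Lévy-distributed random number with δ = 1.3. Mantegna's algorithm is one standard way to draw such numbers, used here as an assumption; the GNS combination below mirrors the LNS structure with gbest in place of pbest, which is our reading rather than the paper's stated formula:

```python
import math
import random

def levy(delta=1.3):
    """Draw one Levy-distributed step via Mantegna's algorithm (an assumption;
    the paper only states Levy(delta) with delta = 1.3)."""
    num = math.gamma(1 + delta) * math.sin(math.pi * delta / 2)
    den = math.gamma((1 + delta) / 2) * delta * 2 ** ((delta - 1) / 2)
    sigma_u = (num / den) ** (1 / delta)
    u = random.gauss(0.0, sigma_u)
    v = abs(random.gauss(0.0, 1.0))
    return u / (v ** (1 / delta))

def global_neighborhood_search(i, X, gbest):
    """GNS candidate for particle i: blend of its position, gbest,
    and a heavy-tailed Levy step (illustrative form)."""
    a, b, c = random.random(), random.random(), random.random()
    s = a + b + c
    r1, r2, r3 = a / s, b / s, c / s
    return [r1 * X[i][j] + r2 * gbest[j] + r3 * levy()
            for j in range(len(X[i]))]
```

The heavy tail of the Lévy draw is what occasionally throws the candidate far from its parent, giving the strategy its global character.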
When conducting the neighborhood search, two new particles are generated by the local and global neighborhood search strategies, respectively. Together with the current particle, this gives three candidates, and a greedy selection method chooses the best of the three as the new current particle.
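The greedy selection among the three candidates can be written compactly; here f is the fitness function, assumed to be minimized as all test problems in this paper are minimization problems:

```python
def greedy_select(f, current, lns_candidate, gns_candidate):
    """Keep the best (minimum-fitness) of the three candidate positions."""
    return min((current, lns_candidate, gns_candidate), key=f)
```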
Population initialization, an important step in population-based stochastic algorithms, can affect the convergence speed and the quality of solutions. In general, random initialization is used to generate the initial population when no prior information is available. Following the suggestions of [27], replacing random initialization with opposition-based learning (OBL) can obtain better initial solutions and accelerate convergence. Therefore, this paper also employs OBL for population initialization. The method is described as follows.
(1) Randomly generate N particles to initialize the population P.
(2) Calculate the boundaries [a_j, b_j] of the current population, where a_j = min_i x_{i,j} and b_j = max_i x_{i,j}.
(3) For each particle in P, an opposite particle is generated by

x*_{i,j} = a_j + b_j − x_{i,j},

where x*_i is the opposite position of x_i. After conducting the opposition, there are N opposite particles, which form an opposite population OP.
(4) Select the N fittest particles from P and OP as the initial population.
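Steps (1)-(4) can be sketched as a single Python function (a hedged illustration; the function signature and the use of `sorted` for selection are our choices, not the paper's code):

```python
import random

def obl_initialization(n, dim, lo, hi, f):
    """Opposition-based initialization: random population P, opposite
    population OP, then keep the n fittest (f is minimized)."""
    P = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    # dynamic boundaries [a_j, b_j] of the current population
    a = [min(p[j] for p in P) for j in range(dim)]
    b = [max(p[j] for p in P) for j in range(dim)]
    # each opposite particle is reflected through the population bounds
    OP = [[a[j] + b[j] - p[j] for j in range(dim)] for p in P]
    return sorted(P + OP, key=f)[:n]
```

Using the population's own bounds [a_j, b_j], rather than the fixed search bounds, is what makes the opposition dynamic.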
Our approach NQPSO embeds one local and one global neighborhood search strategy into the original QPSO. The neighborhood search strategies focus on searching the local and global neighborhoods of particles, striking a balance between local and global search. The opposition-based population initialization generates high-quality initial solutions and accelerates the convergence speed.
The main steps of NQPSO are described in Algorithm 2, where rand(0, 1) is a random number in the range [0, 1], the parameter p is the probability of conducting neighborhood search, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Compared to the original QPSO, our approach does not add extra loop operations; therefore, NQPSO and QPSO have the same computational time complexity.

Test Problems.
Twelve benchmark functions are used in the following experiments. These problems were utilized in previous studies [14, 17, 19]. According to their properties, they are divided into three groups: unimodal and simple multimodal problems (f1-f2), unrotated multimodal problems (f3-f8), and rotated multimodal problems (f9-f12). For the rotated problems, the original variable x is left-multiplied by an orthogonal matrix M to obtain the new rotated variable y = M * x. All problems used in this paper are minimization problems. Brief descriptions of these problems are presented in Table 1.

Comparison of NQPSO with Other Similar QPSO Algorithms.
Since the introduction of QPSO, several improved QPSO algorithms have been proposed. To verify the effectiveness of our approach, this section compares NQPSO with similar QPSO algorithms, including QPSO, diversity-guided QPSO (DGQPSO) [21], and QPSO with weighted mean best position (WQPSO) [28].
To ensure a fair comparison, the same settings are used for the common parameters. For all algorithms, the population size N is set to 40, and the parameter β is linearly decreased from 1.0 to 0.5. For DGQPSO, the diversity threshold d_low is set to 0.0001, and the coefficient used in the mutation is 0.00001. For NQPSO, the probability p of neighborhood search is set to 0.2 based on empirical studies. When the number of fitness evaluations (FEs) reaches the maximum value MAX_FEs, the algorithm stops running. In the experiments, MAX_FEs is set to 2.0E+05. All algorithms are run 30 times on each test problem. Throughout the experiments, the mean fitness error values and standard deviations are reported, where the mean error value is defined as f(x) − f(x_opt), with f(x) the fitness value found in the last generation and f(x_opt) the global optimum of the problem.
Table 2 presents the computational results of QPSO, DGQPSO, WQPSO, and NQPSO on the test suite, where "Mean" indicates the mean fitness error value and "Std" represents the standard deviation. For each problem, the best result (the minimal value) is shown in bold. It can be seen that DGQPSO outperforms QPSO on all problems except f5. To observe the evolutionary processes of the algorithms, Figure 4 shows the convergence characteristics of QPSO, DGQPSO, WQPSO, and NQPSO on some representative problems. NQPSO finds the global optimum at an early stage of the evolution, while the other three QPSO algorithms show slow convergence rates. Although DGQPSO and WQPSO achieve better results than the original QPSO, their convergence characteristics are similar.

Comparison of NQPSO with Other State-of-the-Art PSO Algorithms.
To further verify the performance of our approach, this section presents a comparative study of NQPSO and other state-of-the-art PSO variants. These PSO algorithms are listed as follows.
The parameter settings of CPSO-H and CLPSO are described in [14]. Following the suggestions of [16], the original parameter settings of APSO are used: the inertia weight ω is set to 0.7298, and c1 = c2 = 1.49618. For GOCLPSO, the probability of opposition is set to 0.3 and the other parameters are kept the same. To compare the performance of multiple algorithms on the test suite, we conduct a Friedman test following the suggestions of [19]. Table 4 presents the average rankings of CPSO-H, CLPSO, APSO, GOCLPSO, DNSPSO, and NQPSO. These ranking values are calculated with the SPSS software, and the best ranking (the lowest ranking value) is shown in bold. As seen, the six algorithms can be sorted into the following order: NQPSO, DNSPSO, CLPSO, GOCLPSO, CPSO-H, and APSO. The best average ranking is obtained by NQPSO, which outperforms the other five PSO algorithms. According to the literature [29], GOCLPSO is better than CLPSO, but our results show that CLPSO is better than GOCLPSO. The main reason is that the benchmark functions tested in this paper differ from those in [29]; on different test problems, an algorithm may show different performance.
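The paper computes the Friedman average rankings with SPSS. As an illustration of what those numbers mean, the average ranks can also be computed directly (a pure-Python sketch under the assumption that lower error is better; ties receive the mean of the tied rank positions):

```python
def average_rankings(errors):
    """errors[alg][prob] = mean error of algorithm alg on problem prob.
    Returns the average Friedman rank per algorithm (rank 1 = best)."""
    n_alg, n_prob = len(errors), len(errors[0])
    ranks = [0.0] * n_alg
    for p in range(n_prob):
        col = sorted(range(n_alg), key=lambda a: errors[a][p])
        r = 0
        while r < n_alg:
            # group tied algorithms and assign each the mean tied rank
            t = r
            while t + 1 < n_alg and errors[col[t + 1]][p] == errors[col[r]][p]:
                t += 1
            mean_rank = (r + t) / 2 + 1
            for k in range(r, t + 1):
                ranks[col[k]] += mean_rank
            r = t + 1
    return [s / n_prob for s in ranks]
```

The algorithm with the lowest average rank (here, NQPSO in Table 4) is the overall best across the test suite.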
Besides the Friedman test, we also conduct the Wilcoxon signed-rank test to assess the performance differences between NQPSO and the other five PSO algorithms [19, 30]. Table 5 shows the p values obtained when comparing NQPSO with the other algorithms.

Conclusions
Quantum-behaved PSO (QPSO) is a recent PSO variant that employs a quantum model to update the positions of particles. Compared to the original PSO, QPSO eliminates the velocity term and does not contain the related parameters ω, c1, and c2. Although QPSO introduces a new parameter β to control the step size, it still has fewer control parameters than PSO. Some recent studies show that QPSO performs better than the original PSO on many benchmark functions and real-world problems. However, both PSO and QPSO still easily fall into local minima when solving complex problems. The main reason is that particles tend to move toward the neighborhood of gbest under the attraction of the weighted positions of pbest and gbest. If particles are attracted too fast, premature convergence easily occurs. To tackle this problem, this paper proposes a new QPSO algorithm called NQPSO, which employs one local and one global neighborhood search strategy to balance exploitation and exploration. Moreover, the concept of opposition-based learning (OBL) is employed for population initialization. To verify the performance of our approach, twelve well-known benchmark functions, including multimodal and rotated problems, are used in the experiments. Computational results show that NQPSO outperforms some similar QPSO algorithms and five other state-of-the-art PSO variants.
Although NQPSO significantly improves the performance of QPSO on many problems, it still falls into local minima on some problems, such as f2, f7, and f8. How to enhance the performance of NQPSO on these problems is a possible research direction. In addition, a new parameter p is introduced to control the frequency of conducting neighborhood search. We have not investigated the effects of this parameter on the performance of NQPSO; how to select the best p will be another direction of our future work.

Appendix: The Orthogonal Matrix M
The orthogonal matrix M used for the rotated problems is of size 30 × 30.

Figure 4: The convergence characteristics of the four QPSO algorithms on some representative problems.

Table 1: Benchmark problems used in the experiments.
Like DGQPSO, WQPSO performs better than QPSO on all problems except f5, and the weighted mean best position method significantly improves the performance of QPSO on f10 and f11. NQPSO significantly improves the performance of QPSO on f1, f4, f6, and f10-f13. In particular, on f4, f6, and f10-f13, only NQPSO converges to the global optimum, while the other algorithms fall into local minima.

Table 3: Computational results achieved by NQPSO and the other five PSO algorithms.
As seen, NQPSO shows a faster convergence speed than the other three QPSO algorithms, especially on f4, f10, and f12.

Table 4: Average rankings of the six PSO algorithms.
The parameters k, p_r, and p_ns used in DNSPSO are set to 2, 0.9, and 0.6, respectively [19]. The above six PSO algorithms use the same population size (N = 40) and maximum number of fitness evaluations (MAX_FEs = 2.0E+05). For each test problem, each algorithm is run 30 times and the mean fitness error values are recorded. Table 3 presents the computational results achieved by CPSO-H, CLPSO, APSO, GOCLPSO, DNSPSO, and NQPSO, where "Mean" indicates the mean fitness error value. For each problem, the best result is shown in bold. From the results, it can be seen that NQPSO outperforms CPSO-H on all test problems except f2, f7, and f8. CLPSO performs better than NQPSO on f7 and f8, while NQPSO achieves better results on the remaining 10 problems. APSO outperforms NQPSO on f2 and f8, while NQPSO performs better on the remaining 10 problems. GOCLPSO, DNSPSO, and NQPSO can all find the global optimum on f4, and NQPSO performs better than GOCLPSO on 10 problems. DNSPSO and NQPSO achieve the same results on f3, f4, f6, f9, f10, and f12; DNSPSO outperforms NQPSO on f2 and f7, while NQPSO obtains better results on f1, f5, f8, and f11. Although both DNSPSO and NQPSO employ neighborhood search strategies, NQPSO shows better performance than DNSPSO on the majority of the test problems.

Table 5: Results of the Wilcoxon signed-rank test between NQPSO and the other five PSO algorithms.