An Improved Multiobjective Quantum-Behaved Particle Swarm Optimization Based on Double Search Strategy and Circular Transposon Mechanism



Introduction
Optimization problems widely exist in fields including economics [1], natural science [2], and industrial production [3]. Compared with single-objective optimization problems (SOPs), the theories and methods for multiobjective optimization problems (MOPs) are more complicated and usually incomplete. Moreover, it is difficult for most optimization algorithms to find the feasible Pareto front (PF) of a MOP because of its equality, inequality, and multivariable constraints [1]. Since the objectives of a MOP conflict with each other in many cases, there exists a set of solutions that are superior to the rest of the solutions in the search space when all objectives are considered, yet inferior to other solutions in one or more individual objectives [4]. These solutions are termed nondominated solutions. Obtaining a set of well-distributed nondominated solutions is the crucial task of multiobjective optimization.
As an efficient evolutionary algorithm, the particle swarm optimization (PSO) algorithm has been widely used for various optimization problems [5]. Due to its fast convergence speed and good convergence accuracy, multiobjective optimization based on PSO has attracted much attention. In [6], the multiobjective particle swarm optimization (MOPSO) algorithm was first proposed to handle various kinds of MOPs, in which Pareto dominance was used to generate nondominated solutions. In MOPSO, a secondary archive was first used to store the nondominated solutions found by the algorithm, to guide the particles toward the true PF. Although this mechanism greatly improves the convergence performance of the algorithm, the distribution of the PF generated by the algorithm could not be guaranteed. In [7], the mechanism of crowding distance computation was introduced into MOPSO to maintain the external archive, which enables the algorithm to obtain a set of well-distributed nondominated solutions. Based on these theories, the external archive has been widely used in different kinds of MOPSO, and many mechanisms have been applied to maintain it. In [8], a new archive called the nondominated local set was proposed to store the personal best solutions obtained by one particle during the search process, which showed superiority in capturing the shape of the true PF and obtaining nondominated solutions with satisfactory diversity characteristics. In [9], a new multiobjective particle swarm optimizer based on ε-dominance was proposed, which utilized ε-dominance to generate a new archive and the crowding distance to maintain it. In [10], a novel hybrid multiobjective particle swarm optimization was proposed, which used a teaching-learning-based optimization algorithm to promote the diversity of the algorithm and circular crowded sorting to select the gbest and teacher solutions from the archive. In [11], a niching multiobjective particle swarm optimization was proposed to solve the
MOPs of contactors in circuits, in which the entropy weight ideal point theory was proposed to ensure the diversity of the nondominated solutions. Different from the algorithms mentioned above, a multiobjective particle swarm optimization algorithm based on a decomposition approach (dMOPSO) was proposed in [12], which decomposed a continuous and constrained MOP into many SOPs. Although this algorithm does not need an external archive to store the nondominated solutions, the solutions it generates cannot cover the entire true PF for some complicated MOPs. It can be seen that an external archive is crucial for most algorithms when selecting the leading particle.
The traditional search strategy of the particles in PSO has the deficiency that the particles cannot traverse the entire search space, which may lead to an incomplete search and miss some solutions. To overcome this deficiency, a new probability-based PSO called quantum-behaved particle swarm optimization (QPSO) was proposed in [13], in which particles can appear at any position of the feasible region. Since QPSO converges faster than classical PSO, multiobjective optimization algorithms based on QPSO have been studied by many researchers. In [14], QPSO and an adaptive grid were introduced to improve the performance of MOPSO (MOQPSO-AG), which showed superior convergence performance compared with other algorithms, especially in terms of convergence speed. In [15], a novel quantum Delta potential well with two local attractors, named the double-potential well, was established for QPSO and applied to MOPs, showing better convergence and distribution performance when handling high-dimensional MOPs. In [16], QPSO was used to handle constrained multiobjective optimization problems, in which two strategies were investigated to combine a constraint processing mechanism with a multiobjective QPSO. It can be seen that QPSO has great potential to deal with MOPs.
Despite the fact that QPSO shows superior performance over PSO in terms of convergence speed when handling MOPs, the leadership of the average best position may be reduced dramatically due to the single search pattern in QPSO, which is not beneficial for obtaining more potential solutions for MOPs. Moreover, the mechanism of constructing an attractor for each particle may reduce the ability of a particle to jump out of local optima when its personal best position is equal to the global best position. Finally, most MOPSO algorithms only use the external archive to select the global and local best individuals, without fully utilizing the potentially significant information hidden in the archive, which may reduce the convergence accuracy of the algorithm.
To alleviate the situations mentioned above, an improved multiobjective quantum-behaved particle swarm optimization based on a double search strategy and a circular transposon mechanism (MOQPSO-DSCT) is proposed in this paper. On the one hand, the double search strategy is proposed to adjust the relationship between the exploration and exploitation abilities, which benefits both the diversity of the solutions set and the convergence speed. To implement the double search strategy, a search probability parameter is proposed to make the two search patterns execute alternately and maintain the diversity of the swarm. Moreover, an improved attractor construction mechanism based on opposition-based learning is used to construct a better attractor for a particle when its personal best position is equal to the global best position, which improves the ability of the particles to escape from local optima. On the other hand, the circular transposon mechanism is introduced into the external archive to help the particles find more useful information from each other, which can move the population toward the true PF with a higher probability and maintain the distribution of the PF. Therefore, the MOQPSO-DSCT algorithm improves the convergence accuracy while keeping a reasonable distribution of the solutions.
The remainder of the paper is organized as follows. In Section 2, the concepts of multiobjective optimization, the basic model of QPSO, and some related strategies are briefly described. The details of the proposed algorithm (MOQPSO-DSCT) are depicted in Section 3. The experimental results on several benchmark test functions are given in Section 4. Finally, the conclusions and our future work are included in Section 5.

2.1. Multiobjective Optimization.
In general, the model of a minimization MOP can be defined as follows:

min F(x) = (f_1(x), f_2(x), ..., f_m(x))
s.t. g_i(x) ≤ 0, i = 1, 2, ..., p
     h_j(x) = 0, j = 1, 2, ..., q

where x is a vector with n decision variables, m is the number of objectives, f_k(x) is the function value of the k-th objective, and p and q are the numbers of inequality and equality constraints, respectively.
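As a concrete illustration (not taken from the paper), the ZDT1 benchmark used later in the experiments is a minimization MOP of exactly this form, and Pareto dominance between two objective vectors can be checked directly; a minimal Python sketch:

```python
import math

def zdt1(x):
    # ZDT1 benchmark: a bi-objective minimization problem on [0, 1]^n
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return (f1, f2)

def dominates(fa, fb):
    # fa Pareto-dominates fb: no worse in every objective, strictly better in one
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))
```

On the true PF of ZDT1 (all tail variables zero), f2 = 1 − sqrt(f1).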
To better understand how a given MOP F(x) produces a large number of nondominated solutions in the feasible region Ω, some definitions related to Pareto optimality are described as follows [17].
Definition 1 (Pareto dominance). A solution x_1 is said to dominate another solution x_2 (denoted x_1 ≺ x_2), if and only if f_k(x_1) ≤ f_k(x_2) for all k = 1, 2, ..., m and f_k(x_1) < f_k(x_2) for at least one k.

Definition 2 (Pareto optimal). A solution x* ∈ Ω is said to be Pareto optimal, or a nondominated solution, if and only if there is no other solution x ∈ Ω such that x ≺ x*.

Definition 3 (Pareto optimal set). The set P* containing all the Pareto optimal solutions is the Pareto optimal set, which can be defined as P* = {x* ∈ Ω | x* is Pareto optimal}.

Definition 4 (Pareto front). The region PF generated by the objective function values of all Pareto optimal solutions is the Pareto front, which can be defined as PF = {F(x*) | x* ∈ P*}.

2.2. Quantum-Behaved Particle Swarm Optimization. In 2004, the quantum-behaved particle swarm optimization algorithm was proposed to overcome the deficiency of PSO not being able to traverse the entire search space. Particles in QPSO only have positions, without velocities, which is extremely different from the particles in PSO. In the quantum world, the positions of particles cannot be determined exactly because of the uncertainty principle, so particles can appear anywhere in the feasible search region with a certain probability, which guarantees the ability of QPSO to converge to the global optimum.

Each particle in PSO is considered as an individual with two attributes in the D-dimensional search space. The position and velocity of the i-th particle can be represented as X_i = (X_i1, X_i2, ..., X_iD) and V_i = (V_i1, V_i2, ..., V_iD), i = 1, 2, ..., M, respectively, where M is the size of the population. Then the position and velocity of each particle can be updated by the following equations [18]:

V_id(t+1) = ω V_id(t) + c_1 r_1 (P_id(t) − X_id(t)) + c_2 r_2 (G_d(t) − X_id(t))
X_id(t+1) = X_id(t) + V_id(t+1)

where ω is the inertia weight, c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are uniformly distributed random numbers in [0, 1].

Different from traditional PSO, the main idea of QPSO is to establish a potential field to bind the particles. In [19], the convergence behavior of the particles was analyzed by Clerc and Kennedy, which demonstrated that the i-th particle in PSO is attracted by an attractor a_i = (a_i1, a_i2, ..., a_iD). Obviously, the attractor directly influences the convergence behavior of the particles in QPSO. The position of a_i is described as

a_id(t) = φ_d(t) P_id(t) + (1 − φ_d(t)) G_d(t)    (8)

where P_i(t) is the personal best position of the i-th particle, G(t) is the global best position in the current iteration, and φ_d(t) is a uniformly distributed random number in [0, 1]. Then the position of the i-th particle in QPSO can be updated as shown in

X_id(t+1) = a_id(t) ± (L_id(t)/2) ln(1/u_id(t))    (9)

L_id(t) = 2α |mbest_d(t) − X_id(t)|    (10)

where mbest(t) is the average best position of the population; L_id(t) is the length of the Delta potential well, which denotes the creativity of the individual in the population (a larger L_id(t) usually means the particle has a relatively large search range and is more likely to find more solutions); u_id(t) is a uniformly distributed random number in [0, 1]; and α is a contraction-expansion coefficient, usually less than 1.782 [20]. In our experiments, α is set to be positive and linearly decreasing from 1.0 to 0.5 as proposed in [21]:

α = (α_0 − α_1) (t_max − t) / t_max + α_1    (11)

where α_0 and α_1 are set to 1.0 and 0.5, respectively; t is the current iteration; and t_max is the maximum iteration number.
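For illustration, one position update of a single QPSO particle under (8)-(10) can be sketched in Python as follows (a minimal sketch, not the authors' implementation; the guard on u is an assumption to avoid log(1/0)):

```python
import math
import random

def qpso_update(x, pbest, gbest, mbest, alpha):
    # One QPSO position update for a single particle, per Eqs. (8)-(10)
    new_x = []
    for d in range(len(x)):
        phi = random.random()                         # phi_d(t) ~ U[0, 1]
        p = phi * pbest[d] + (1.0 - phi) * gbest[d]   # attractor, Eq. (8)
        L = 2.0 * alpha * abs(mbest[d] - x[d])        # well length, Eq. (10)
        u = random.random() or 1e-12                  # guard against u == 0
        sign = 1.0 if random.random() < 0.5 else -1.0
        new_x.append(p + sign * (L / 2.0) * math.log(1.0 / u))  # Eq. (9)
    return new_x
```

When the particle already sits at mbest, the well length collapses to zero and the particle lands exactly on its attractor.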

2.3. Opposition-Based Learning Strategy.
In [22], the concept of opposition-based learning was proposed, which demonstrated that the opposite solution has a higher probability of being better than the current solution. In other words, we can compute the opposite position of a particle according to opposition-based learning; if the opposite position is better than the original position, the original position is replaced with the opposite position. Currently, this mechanism is often combined with intelligent algorithms to deal with optimization problems. In [23], opposition-based learning was introduced into a new intelligent algorithm named chicken swarm optimization to improve the diversity of the population. In [24], opposition-based learning was used to generate the transformed population in the population initialization, which enhanced the performance of constrained differential evolution algorithms. In [25], opposition-based learning was executed by the current best individual to generate opposition search populations in its dynamic search boundaries, which improved the balance and exploring ability of the algorithm. In [26], the opposite process of the opposition-based learning strategy was combined with the refraction principle of light to improve the global search ability of PSO. Therefore, in this paper, opposition-based learning will be used to construct a new attractor for a particle when its personal best position is equal to the global best position, to help the particle escape from local optima. Some definitions of opposition-based learning are as follows.
Definition 5 (opposite number). Let x be a real number defined on a certain interval x ∈ [a, b]. The opposite number x̄ is defined as x̄ = a + b − x.

Definition 6 (opposite point). Let P = (x_1, x_2, ..., x_D) be a point in a D-dimensional coordinate system with x_d ∈ [a_d, b_d], d = 1, 2, ..., D. The opposite point P̄ is completely defined by its coordinates x̄_1, x̄_2, ..., x̄_D, where

x̄_d = a_d + b_d − x_d, d = 1, 2, ..., D.    (14)
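These definitions translate directly into code; a minimal illustrative Python sketch:

```python
def opposite_number(x, a, b):
    # Definition 5: the opposite of x on the interval [a, b]
    return a + b - x

def opposite_point(x, lower, upper):
    # Definition 6: apply the opposite-number rule in every dimension
    return [a + b - xd for xd, a, b in zip(x, lower, upper)]
```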

2.4. Transposon Mechanism.
Transposons, also known as jumping genes, are a special chromosomal genetic phenomenon first discovered by Barbara McClintock in 1950 [27, 28]. In her work on maize chromosomes, she found that genes can jump from one position to another on the same chromosome in the presence of dissociation elements, and that genes can jump between different chromosomes in the presence of activator elements. She also found that there are two ways for a gene to move on a chromosome. One is the cut-and-paste transposon, in which one part of the DNA is cut and then pasted to another part. The other is the copy-and-paste transposon, in which the information at one position in the DNA is copied and pasted to RNA, and then copied from the RNA to another position in the DNA. In [29], the transposon mechanism was first adopted to replace the operations of crossover, mutation, and selection in traditional evolutionary algorithms, which enhanced the diversity of the population. In PSO, a particle in the search space can be considered as a chromosome, so this operation can also be executed on a particle. In this paper, the copy-and-paste transposon operation is performed on the individuals in the external archive, which can greatly improve the information exchange ability of the particles in the external archive and guide the population toward the true PF. This operation is easily implemented with only a probability parameter p. For example, a transposon and an insertion position are generated for two particles C_1 and C_2, respectively. Then, the transposon of C_1 is copied and pasted to the insertion position of C_2, and the transposon of C_2 is copied and pasted to the insertion position of C_1. Two new particles S_1 and S_2 are generated after this operation. The copy-and-paste transposon operation on two particles is shown in Figure 1.
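A minimal Python sketch of the copy-and-paste operation of Figure 1 (illustrative only; the fixed transposon length and the overwrite-at-insertion-position behavior are assumptions, since the operation is only described informally here):

```python
import random

def copy_paste_transposon(c1, c2, length=2):
    # Copy a random segment (the "transposon") of each parent and paste it
    # at a random insertion position of the other parent, overwriting genes.
    s1, s2 = list(c1), list(c2)
    n = len(s1)
    for src, dst in ((c1, s2), (c2, s1)):
        start = random.randrange(n - length + 1)   # where the transposon starts
        pos = random.randrange(n - length + 1)     # insertion position in the target
        dst[pos:pos + length] = list(src[start:start + length])
    return s1, s2
```

Note that the children S_1 and S_2 keep the parents' length and draw every gene from one of the two parents.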

The Proposed MOQPSO-DSCT Algorithm
The proposed MOQPSO-DSCT algorithm is aimed at dealing with two problems: how to find more potential solutions for different MOPs, and how to improve the convergence accuracy and the distribution of the nondominated solutions. In the MOQPSO-DSCT algorithm, the double search strategy is proposed to balance the exploration and exploitation abilities of the swarm and obtain a large number of solutions, in which an improved attractor construction mechanism based on opposition-based learning helps each particle escape from local optima. The double search strategy combined with the opposition-based learning strategy solves the former problem. To solve the latter problem, the circular transposon mechanism is proposed to improve the communication ability of the particles in the external archive, which can give the solutions higher accuracy and a better distribution. Therefore, the MOQPSO-DSCT algorithm can generate a set of well-distributed and accurate nondominated solutions. The double search strategy and the circular transposon mechanism are depicted in detail in the following subsections.

3.1. The Double Search Strategy Combined with Opposition-Based Learning.
To improve the diversity of the solutions set and keep a balance between the exploration and exploitation abilities of the swarm, the double search strategy is proposed in the MOQPSO-DSCT algorithm to replace the single search pattern in QPSO. In earlier iterations, the particles dominantly learn from their personal best positions, to improve their ability to discover new regions. In later iterations, the particles dominantly learn from the global best position, to improve the convergence performance. A search probability parameter s is used to control the two search patterns so that they are executed alternately.
Inspired by the social and individual cognition in PSO, two search patterns are proposed in the MOQPSO-DSCT algorithm to replace the single search pattern in QPSO. One search pattern, called the global search pattern, is described in (15) as follows:

X_id(t+1) = P_id(t) ± (L_id(t)/2) ln(1/u_id(t))    (15)

where P_i(t) is the personal best position of the i-th particle.
Obviously, this search pattern focuses on individual cognition, which can make all the particles diverge freely into the whole search space and improve the probability that the particles find more globally best solutions. So the global search pattern is suitable for earlier iterations.
The other search pattern, called the local search pattern, is described in (16) as follows:

X_id(t+1) = Gbest_d(t) ± (L_id(t)/2) ln(1/u_id(t))    (16)

where Gbest(t) is a particle selected from the external archive with a large crowding distance. Obviously, this search pattern focuses on social cognition, which can make the particles search around the elite individuals to improve their ability to converge to an optimum. Since Gbest(t) has a large crowding distance, it can directly guide a particle toward the sparse regions of the search space, which helps keep the distribution of the nondominated solutions set. So the local search pattern is suitable for later iterations.
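The crowding distance used for selecting Gbest(t) is the standard NSGA-II measure [7]; a minimal Python sketch:

```python
def crowding_distance(front):
    # front: list of objective vectors; returns one crowding distance per point
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')  # boundary points
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0.0:
            continue
        for j in range(1, n - 1):
            if dist[order[j]] != float('inf'):
                dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

Boundary solutions get infinite distance, so they are always preferred as leaders; interior solutions in sparse regions get larger values.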
To keep a balance between the two patterns, a search probability parameter s is introduced. In [30], an exponential decrease rule and randomness were integrated into the dynamic control of the inertia weight ω, instead of simply decreasing ω with the iterations. Inspired by this idea, the same method is introduced to dynamically adjust the value of s, as shown in (17), where t is the number of the current iteration and t_max is the maximum number of iterations. It can be seen that the value of s decreases with increasing iterations on the whole but maintains an oscillatory trend, because the curve is modulated by a random number in [0, 1].
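As an illustration of the described behavior of s (decreasing on the whole, with random oscillation), one plausible form can be sketched in Python; the exact expression and the decay constant are assumptions of this sketch, not Eq. (17) itself:

```python
import math
import random

def search_probability(t, t_max, k=2.0):
    # Hypothetical form of s: an exponentially decreasing envelope exp(-k*t/t_max)
    # modulated by a uniform random factor, giving an oscillatory decreasing trend
    return random.random() * math.exp(-k * t / t_max)
```

With a form like this, s > 0.5 (triggering the global pattern) is likely only in the earlier iterations.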
If s is greater than 0.5, the swarm performs the global search represented by (15); otherwise, the swarm turns to the local search represented by (16). Thus, the particles in MOQPSO-DSCT are updated as follows:

X_id(t+1) = P_id(t) ± (L_id(t)/2) ln(1/u_id(t)),     if s > 0.5
X_id(t+1) = Gbest_d(t) ± (L_id(t)/2) ln(1/u_id(t)),  otherwise    (18)

Executing the two search patterns alternately can not only avoid the situation where all the particles in QPSO converge to the same position, but also make the particles move toward the sparse regions, which is beneficial for the diversity of the nondominated solutions set. However, a particle will get trapped in a local optimum if its personal best position is equal to the global best position in (8), because its attractor cannot be updated for a long time. In MOQPSO-DSCT, opposition-based learning is adopted to search locally for a better attractor for an individual when its personal best position is equal to the global best position. According to (14), the opposite attractor a* = (a*_1, a*_2, ..., a*_D) of the attractor a = (a_1, a_2, ..., a_D) can be described as

a*_d = ub_d + lb_d − a_d    (19)

where ub_d and lb_d are the upper bound and the lower bound of the d-th dimension of the population, respectively. If a* is dominated by a, the current attractor is kept; if a is dominated by a*, the current attractor is replaced with a*; otherwise, one of the two attractors is selected randomly as the current attractor.
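The opposition-based attractor refresh of (19) plus the dominance-based selection can be sketched in Python (illustrative only; `evaluate` and `dominates` are caller-supplied stand-ins for the MOP and the Pareto-dominance test):

```python
import random

def refresh_attractor(a, lower, upper, evaluate, dominates):
    # Build the opposite attractor per Eq. (19) and keep the nondominated one;
    # on mutual nondominance, pick one of the two at random.
    a_opp = [lo + up - ad for ad, lo, up in zip(a, lower, upper)]
    fa, fo = evaluate(a), evaluate(a_opp)
    if dominates(fa, fo):
        return a
    if dominates(fo, fa):
        return a_opp
    return random.choice([a, a_opp])
```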

3.2. The Circular Transposon Mechanism.
It is evident that the external archive is crucial for most MOPSO algorithms to store the nondominated solutions and select the best position of the population. However, individuals in the external archive cannot exchange useful information with each other, which is not beneficial for the population to move toward the true PF. To improve the communication ability of the particles in the external archive, the circular transposon mechanism is introduced into the external archive in the proposed algorithm. The flowchart of the circular transposon mechanism is given in Figure 2. To apply this mechanism to the external archive, the 50% of the nondominated solutions with the largest crowding distance in the external archive A are stored in a new archive B. For each solution X_i = (x_1, x_2, ..., x_D), i = 1, 2, ..., |B|, a solution Y = (y_1, y_2, ..., y_D) is randomly selected from B, and a probability parameter p is introduced to control the operation described in Figure 1 on X_i and Y. For each dimension of X_i and Y, if a uniformly distributed random number in [0, 1] is less than p, this operation is performed on X_i and Y once. After this operation, a new set C containing the children of X_i and Y is generated. Therefore, the solutions in C are the results of the information exchange among the particles in B. Finally, the algorithm for updating the external archive is used to update the external archive A, which can be referred to Algorithm 3 in [31].
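The archive-level procedure can be sketched in Python (a sketch under assumptions: the per-dimension exchange is modeled as a one-gene swap between the two children, and the final archive update is omitted):

```python
import random

def circular_transposon(B, p=0.2):
    # For each solution in archive B, pick a random partner and, with
    # probability p per dimension, exchange genes; returns the child set C.
    C = []
    for xi in B:
        y = random.choice(B)
        cx, cy = list(xi), list(y)
        for d in range(len(xi)):
            if random.random() < p:       # p controls how much information flows
                cx[d], cy[d] = cy[d], cx[d]
        C.extend([cx, cy])
    return C
```

With p = 0 no information is exchanged and the children are copies of their parents, matching the ablation setting used in the experiments.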

3.3. The Steps of the Proposed MOQPSO-DSCT Algorithm.
The flowchart of the MOQPSO-DSCT algorithm is given in Figure 3, and the detailed steps are summarized as follows.
Step 1. Initialize the position of each particle, the corresponding parameters, and the external archive A.
Step 2. Evaluate all the particles and store the nondominated solutions in the external archive A according to Pareto dominance.
Step 3. For each particle in the population, select one particle from the external archive A as a leader according to the crowding distance.
Step 4. If the personal best position of the current particle is equal to the best position of the population, opposition-based learning described in (19) is used to construct an attractor for the current particle; otherwise, an attractor is constructed according to (8).
Step 5. Calculate the value of s according to (17) and update the position of each particle according to the double search strategy described in (18).
Step 6. Evaluate all the particles and update A; the 50% of the solutions in A with the largest crowding distance are stored in another archive B.
Step 7. For each particle in B, randomly select one solution from B and perform the operation of Figure 2 to generate the child set C.

Step 8. Use Algorithm 3 in [31] to update A.
Step 9. If the termination condition is satisfied, the optimal results are output; otherwise, repeat Steps 3-8.
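The steps above can be sketched as a compact Python loop on a toy bi-objective problem (a heavily simplified sketch: the leader is chosen uniformly instead of by crowding distance, Eq. (17) is replaced by a hypothetical decay, and the transposon and opposition steps are omitted):

```python
import math
import random

def dominates(fa, fb):
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def evaluate(x):
    # toy bi-objective problem (Schaffer-like), a stand-in for a real MOP
    return (x[0] ** 2, (x[0] - 2.0) ** 2)

def update_archive(archive, candidates, capacity=20):
    # keep mutually nondominated solutions (simplified stand-in for
    # the crowding-distance maintenance of Algorithm 3 in [31])
    pool = archive + candidates
    nd = [s for s in pool
          if not any(dominates(evaluate(q), evaluate(s)) for q in pool if q is not s)]
    return nd[:capacity]

def moqpso_dsct_skeleton(pop_size=10, dim=1, iters=30, bounds=(-4.0, 4.0)):
    low, high = bounds
    swarm = [[random.uniform(low, high) for _ in range(dim)] for _ in range(pop_size)]
    pbest = [list(x) for x in swarm]                          # Step 1
    archive = update_archive([], [list(x) for x in swarm])    # Step 2
    for t in range(iters):
        mbest = [sum(p[d] for p in pbest) / pop_size for d in range(dim)]
        s = random.random() * math.exp(-2.0 * t / iters)      # hypothetical Eq. (17)
        for i, x in enumerate(swarm):
            leader = random.choice(archive)                   # Step 3 (simplified)
            target = pbest[i] if s > 0.5 else leader          # Step 5, Eq. (18)
            for d in range(dim):
                L = 2.0 * 0.75 * abs(mbest[d] - x[d])         # Eq. (10), alpha = 0.75
                u = random.random() or 1e-12
                sign = 1.0 if random.random() < 0.5 else -1.0
                x[d] = min(high, max(low, target[d] + sign * (L / 2.0) * math.log(1.0 / u)))
            if dominates(evaluate(x), evaluate(pbest[i])):
                pbest[i] = list(x)                            # personal best update
        archive = update_archive(archive, [list(x) for x in swarm])  # Steps 6-8
    return archive                                            # Step 9
```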

3.4. Time Complexity of the MOQPSO-DSCT Algorithm.
According to Section 3.1, the first part consists of Steps 3-5, in which the double search strategy combined with the opposition-based attractor construction is performed on each particle. Therefore, the time complexity of this part is O(M), where M is the size of the population.
The second part is Step 7, which contains the circular transposon mechanism. It can be observed from Section 3.2 and Figure 3 that the time complexity of the circular transposon mechanism is determined by the sizes of the archives A and B and by the number of decision variables. Since not all the decision variables are affected by this mechanism and the number of decision variables is much smaller than the size of B, the influence of the number of decision variables can be ignored. Therefore, the time complexity of the second part is O(|B|^2).
The last part is Step 8, in which Algorithm 3 in [31] is used to update the external archive. The time complexity of this part is also O(|A|^2), which can be obtained from [31].
Therefore, the overall time complexity of the MOQPSO-DSCT algorithm is O(t_max (|A|^2 + |B|^2 + M)). Since the values of M and |B| are not larger than |A|, the time complexity of the MOQPSO-DSCT algorithm is O(|A|^2) in each iteration.

4.1. Test Functions and Parameters Setting.
In this section, to verify the performance of the proposed algorithm on the ZDT (ZDT1-4, ZDT6) [32] and DTLZ (DTLZ2, DTLZ5, and DTLZ6) [33] function sets, the MOQPSO-DSCT algorithm is compared with other multiobjective optimization algorithms. These algorithms can be divided into two categories: multiobjective PSO algorithms, including MPSO/D [34], MOQPSO-AG [14], dMOPSO [12], and SMPSO [35], and other multiobjective optimization algorithms, including NSGA-II [36] and MOEA/D [37]. MOQPSO-AG and dMOPSO are both mentioned above. MPSO/D is a multiobjective particle swarm optimization that uses objective space decomposition to maintain the diversity of the solutions. SMPSO is a multiobjective swarm optimization algorithm in which a mechanism is used to limit the velocity of the particles so that they can search the space sufficiently. NSGA-II is a multiobjective genetic algorithm without an external archive, which uses fast nondominated sorting to make the algorithm find the true PF at a high speed and crowding distance computation to obtain well-distributed solutions. The details of the test functions are given in Table 1.
In the experiments, the size of the population is 100, the maximum number of evaluations is 30,000, and the maximum number of iterations is 300 for all the algorithms. The size of the external archive is 100 for the ZDT and DTLZ function sets. The parameters of all the algorithms are shown in Table 2. 30 test runs are performed for each test function in each algorithm. The algorithms are executed in MATLAB R2015b on an Intel(R) Core(TM) i5-6500 CPU at 3.20 GHz with 4 GB RAM.
Table 2: Parameters setting in the seven algorithms.

4.2. Performance Indicator. In order to better evaluate the performance of MOQPSO-DSCT, the Inverted Generational Distance (IGD) [38] is used as an indicator in this paper. IGD measures the distance between the PF obtained by the algorithm and the true PF. Let P* be the true Pareto optimal front and P be the solutions set obtained by our proposed algorithm. The IGD of P to P* can be calculated by the following equation:

IGD(P, P*) = (Σ_{v∈P*} d(v, P)) / |P*|

where d(v, P) is the minimum Euclidean distance between the point v and the points in P.
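A minimal Python implementation of this indicator (illustrative):

```python
import math

def igd(true_front, approx_front):
    # mean Euclidean distance from each point of the true PF to its
    # nearest neighbor in the obtained set; lower is better
    return sum(min(math.dist(v, u) for u in approx_front)
               for v in true_front) / len(true_front)
```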

The mean and standard deviation (std) values of IGD for all the algorithms are summarized in Table 3, where the best values are highlighted in bold and italic font. The PFs obtained by all the algorithms on each test function are shown in Figures 4-11. The convergence curves of all the algorithms on each test function are shown in Figure 12. It can be seen from Table 3 that MOQPSO-DSCT obtains the best results on six test functions in terms of both mean and std values, namely ZDT1, ZDT2, ZDT3, ZDT4, DTLZ5, and DTLZ6, which demonstrates that MOQPSO-DSCT has better convergence and distribution performance than the other compared algorithms.
ZDT1 is a two-objective function with a convex PF, and ZDT2 is a two-objective function with a concave PF. Almost all the algorithms can converge to the PF. It can be seen from Table 3 that MOQPSO-DSCT, MPSO/D, MOQPSO-AG, and SMPSO all perform better than dMOPSO, NSGA-II, and MOEA/D. As shown in Figures 4, 5, and 12, MOQPSO-AG has a relatively faster convergence speed than the other algorithms, but it cannot generate a set of well-distributed solutions, which leads to a slightly worse IGD value. NSGA-II has the slowest convergence speed of all the algorithms and cannot find the true PF on ZDT1. Although the IGD values of MOQPSO-DSCT, SMPSO, and MPSO/D are very close, the std value of MOQPSO-DSCT is better than those of SMPSO and MPSO/D by orders of magnitude. Therefore, MOQPSO-DSCT has the best performance on ZDT1 and ZDT2.
ZDT3 is a two-objective function with a disconnected PF. It can be observed from Figure 6 that the worst performer on ZDT3 is NSGA-II, which is unable to find the true PF; MOEA/D, dMOPSO, and MPSO/D all have similar performance: they can converge to the true PF but fail to obtain a set of satisfactory solutions. It can be seen from Table 3 that the IGD values of MOQPSO-DSCT, MOQPSO-AG, and SMPSO are very close, but the std value of MOQPSO-DSCT is better than those of MOQPSO-AG and SMPSO by orders of magnitude. Therefore, MOQPSO-DSCT has the best performance on ZDT3.
ZDT4 is a two-objective function with 21^9 local PFs and one global PF. Many algorithms cannot converge to the true PF because of their low ability to escape from local optima. As shown in Figure 7, the PFs produced by NSGA-II and MOEA/D are closer to the true PF; it can be deduced that NSGA-II and MOEA/D have the potential to converge to the true PF if the maximum number of evaluations is set to a larger value. Only MOQPSO-DSCT, dMOPSO, and SMPSO can find the true PF. It can be seen from Figure 12 that the convergence speed of MOQPSO-DSCT is significantly faster than that of the other algorithms. The values in Table 3 indicate that, with the lowest IGD value, MOQPSO-DSCT finds the best-converged and best-spread solutions along the entire true PF among the compared algorithms.
ZDT6 is a two-objective function with a nonconvex and nonuniformly spaced PF. Most of the algorithms can converge easily to the true PF. It can be observed from Figure 8 that SMPSO and MOQPSO-AG cannot push all the solutions to the true PF. In Table 3, MOQPSO-DSCT, MPSO/D, dMOPSO, and MOEA/D have similar IGD values. The best performer on ZDT6 is dMOPSO, with the lowest std value of IGD. MOQPSO-DSCT performs second best on ZDT6.
DTLZ2 is a three-objective function with a spherical PF. As shown in Figure 9, the PFs obtained by dMOPSO are far away from the true PF; MOQPSO-DSCT and NSGA-II have similar performance in that they can converge to the true PF but cannot make the solutions spread well along the entire PF; MOQPSO-AG and SMPSO can find the true PF but fail to obtain a set of satisfactory solutions. It can be seen from Table 3 and Figure 9 that both MOEA/D and MPSO/D can not only converge easily to the true PF but also generate a set of well-distributed solutions. MOEA/D performs best on DTLZ2 with the lowest IGD value, and MOQPSO-DSCT performs third best.
DTLZ5 and DTLZ6 are both three-objective functions, testing the ability to converge to a degenerated curve and the search ability in a disconnected area, respectively. As shown in Figures 10 and 11, NSGA-II and SMPSO can converge to the true PF but cannot keep the distribution of the solutions; MPSO/D, MOEA/D, and dMOPSO all perform badly on DTLZ5 and DTLZ6. MOQPSO-AG performs well on DTLZ5, while it cannot push all the solutions to the true PF on DTLZ6. The values in Table 3 indicate that MOQPSO-DSCT performs significantly better than the other algorithms. To measure the diversity of the nondominated solutions set, the Spacing metric (SP) [38] is introduced as an indicator. Let Q = (q_1, q_2, ..., q_|Q|) be the nondominated solutions set obtained by the algorithm; then the indicator SP can be defined as

SP = sqrt( (1/(|Q|−1)) Σ_{i=1}^{|Q|} (d̄ − d_i)^2 )

where d_i = min_{j≠i} Σ_{k=1}^{m} |f_k(q_i) − f_k(q_j)|, i = 1, 2, ..., |Q|; |Q| is the size of Q; d̄ is the mean value of all d_i; and m is the number of objectives. A value of zero for SP means a good diversity of the nondominated solutions set. To demonstrate that the double search strategy in MOQPSO-DSCT can make the particles find more nondominated solutions and improve the diversity of the nondominated solutions set, we compare the SP values of MOQPSO-DSCT and MOQPSO-AG on each test function. To highlight the effectiveness of the double search strategy, the value of p is set to 0 to exclude the influence of the circular transposon mechanism on the diversity of the nondominated solutions set. 30 test runs are done for each algorithm. The mean and std values of SP are shown in Table 4, where the best values are highlighted in bold and italic font. It can be seen from Table 4 that MOQPSO-DSCT obtains the best results on all the test functions. Therefore, the double search strategy can be said to improve the diversity of the nondominated solutions set.

It can be seen from Figures 14 and 15 that the value of p has a great influence on ZDT4, ZDT6, and DTLZ6. MOQPSO-DSCT cannot find the true PF of ZDT4, and some solutions cannot be pushed to the true PF of ZDT6, if p is set to 0. The convergence speed of MOQPSO-DSCT on DTLZ6 is also greatly reduced without this mechanism. It can be seen from Figure 14 that the logarithm of the IGD values of MOQPSO-DSCT on all the test functions shows almost no difference as p changes from 0.2 to 1.0, but the computing resources are severely consumed if p is set to a large value. Therefore, the value of p is set to 0.2 in this paper.
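The SP indicator defined above can be computed with a short Python function (illustrative):

```python
import math

def spacing(front):
    # SP: sample standard deviation of nearest-neighbor Manhattan distances
    # in objective space; 0 means perfectly even spacing
    def nearest(i):
        return min(sum(abs(a - b) for a, b in zip(front[i], front[j]))
                   for j in range(len(front)) if j != i)
    d = [nearest(i) for i in range(len(front))]
    dbar = sum(d) / len(d)
    return math.sqrt(sum((dbar - di) ** 2 for di in d) / (len(d) - 1))
```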

Conclusions
In this paper, an improved multiobjective quantum-behaved particle swarm optimization based on a double search strategy and a circular transposon mechanism (MOQPSO-DSCT) was proposed. In MOQPSO-DSCT, the double search strategy with a search probability parameter s was used to update the positions of all the particles. To help the particles escape from local optima, opposition-based learning was introduced to construct a better attractor for each particle when its personal best position is equal to the global best position. The double search strategy enables the algorithm to generate more solutions for MOPs. Then, the circular transposon mechanism was added to the external archive to exchange useful information between particles, which greatly improved the convergence accuracy of the algorithm and made the nondominated solutions cover the true PF evenly and dispersively. The experimental results have demonstrated the best performance of MOQPSO-DSCT on most MOPs when compared with other multiobjective optimization algorithms. However, the experimental results on three-objective problems (the DTLZ function set) are worse than the results on two-objective problems (the ZDT function set), and three-objective optimization problems are less involved in this paper. Therefore, our next improvements will be aimed at three-objective problems. The future work will include two aspects. First, some information hidden in the swarm will be considered to guide the particle to select the appropriate search pattern, instead of introducing a new search probability parameter, because the new parameter increases the complexity of designing the algorithm and adjusting the parameters. Second, this algorithm will be applied to solve some real-world problems in gene selection. For example, our proposed algorithm will be combined with other classical machine learning algorithms to deal with a multiobjective optimization model concerning how to obtain predictive genes with lower redundancy and higher prediction accuracy.

4.3. Experimental Results and Discussion. The mean values and standard deviations (std) of IGD for all the algorithms on each test function are summarized in Table 3.

4.4. The Comparison of the Diversity of the Solutions. The diversity of the nondominated solutions set indicates the number of the nondominated solutions obtained by the algorithm and the distribution of the nondominated solutions set.

Table 1: The details of the test functions.

Table 3: The IGD values of all the algorithms on the eight test functions.