Multiobjective Particle Swarm Optimization Based on PAM and Uniform Design

In multiobjective particle swarm optimization (MOPSO), maintaining or increasing the diversity of the swarm helps an algorithm escape locally optimal solutions. To this end, the PAM (Partitioning Around Medoids) clustering algorithm and uniform design are introduced, respectively, to maintain the diversity of the Pareto optimal solutions and the uniformity of the selected Pareto optimal solutions. In this paper, a novel algorithm, multiobjective particle swarm optimization based on PAM and uniform design, is proposed. The difference between the proposed algorithm and others is that PAM and uniform design are introduced to MOPSO for the first time. Experimental results on several test problems illustrate that the proposed algorithm is efficient.


Introduction
Many real-world optimization problems need to simultaneously optimize multiple objectives that are incommensurable and generally conflict with each other. They can usually be written as

min_{x ∈ Ω} {f_1(x), f_2(x), ..., f_m(x)},

where x = (x_1, x_2, ..., x_n) is a variable vector in a real n-dimensional space, Ω is the feasible solution space, and m is the number of objective functions. Since the pioneering attempt of Schaffer [1] to solve multiobjective optimization problems, many kinds of multiobjective evolutionary algorithms (MOEAs), ranging from traditional evolutionary algorithms to newly developed techniques, have been proposed and widely used in different applications [2][3][4].
Multiobjective evolutionary algorithms (MOEAs) have become well-known methods for solving multiobjective optimization problems that are too complex to be solved by exact methods. The main challenge for MOEAs is to satisfy three goals at the same time: (1) the Pareto optimal solutions are as near as possible to the true Pareto front, which means the convergence of MOEAs; (2) the nondominated solutions are evenly scattered along the Pareto front, which means the diversity of MOEAs; and (3) MOEAs obtain Pareto optimal solutions within a limited number of evolution steps [5].
The particle swarm optimization (PSO) algorithm, like MOEAs, is an intelligent optimization algorithm. It was proposed by Eberhart and Kennedy in 1995 [6,7]. It originates from the sharing and exchanging of information among individual birds in the process of searching for food: each individual can benefit from the discovery and flight experience of the others. PSO seems particularly suitable for multiobjective optimization mainly because of its high speed of convergence [8,9].
PAM is one of the k-medoids clustering algorithms based on partitioning methods. It attempts to divide data objects into k partitions; namely, it can divide a swarm into k different subswarms with different features.
This paper proposes a novel multiobjective particle swarm optimization based on PAM and uniform design, abbreviated as UKMOPSO. It first uses PAM to partition the data points into several clusters, and then a crossover operator based on the uniform design is applied to the smallest cluster to generate new data points. When the size of the Pareto solution set is larger than the size of the external archive, PAM is used to determine which Pareto solutions are to be removed or appended. The results of experimental simulations on several well-known test problems indicate that the proposed algorithm is efficient.
The rest of this paper is organized as follows. Section 2 states the preliminaries of the proposed method. Section 3 presents our method in detail. Section 4 gives the numerical results of the proposed method. Section 5 concludes the work.

Preliminaries
In this section, we describe some concepts concerning particle swarm optimization, multiobjective particle swarm optimization, PAM, and uniform design.
The velocity update equation consists of the previous velocity component, a cognitive component, and a social component. These are mainly controlled by three parameters: the inertia weight and two acceleration coefficients.
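The three-component update just described can be sketched as follows; this is a minimal illustration, not the paper's modified formula, and the function name and default parameter values are ours.

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=random.Random(0)):
    """One PSO update for a single particle with position x and velocity v."""
    new_v = [w * v[j]                                   # inertia component
             + c1 * rng.random() * (pbest[j] - x[j])    # cognitive component
             + c2 * rng.random() * (gbest[j] - x[j])    # social component
             for j in range(len(x))]
    new_x = [x[j] + new_v[j] for j in range(len(x))]
    return new_x, new_v
```

With pbest = gbest ahead of the particle, both stochastic components pull the velocity toward the best positions, as the convergence analysis below describes.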
From the theoretical analysis of particle trajectories in PSO [10], the trajectory of a particle converges to a weighted mean of its individual best position pbest and the global best position gbest. Whenever the particle converges, it will "fly" to the individual best position and the global best position. According to the update equation, the individual best position of the particle will gradually move closer to the global best position; therefore, all the particles will converge onto the global best particle's position.

Multiobjective Particle Swarm Optimization
MOPSO was proposed by Coello et al.; it adopts swarm intelligence to optimize MOPs, and it uses the Pareto optimal set to guide each particle's flight [9]. Particle swarm optimization was originally proposed for solving single objective problems, and many researchers are interested in solving multiobjective problems (MOPs) with PSO. To turn a single objective PSO into a MOPSO, the guide must be redefined in order to obtain a set of nondominated solutions (the Pareto front). In MOPSO, the Pareto optimal solutions should be used to determine the guide for each particle, and how to select suitable local guides that attain both convergence and diversity of solutions becomes an important issue.
There have been several publications using PSO to solve MOPs. A dynamic neighborhood PSO was proposed [11], which optimizes only one objective at a time and uses a scheme similar to lexicographic ordering. In addition, this approach also proposes an unconstrained elite archive, named the dominated tree, to store the nondominated solutions; however, it is difficult for this approach to pick the best local guide from the set of Pareto optimal solutions for each particle of the population. A strategy for finding suitable local guides for each particle was proposed and named the Sigma method, in which the local guide is explicitly assigned to specific particles according to the Sigma value [12]. This yields the desired diversity and convergence, but the result is still not close enough to the Pareto front. On the other hand, an enhanced archiving technique was proposed to maintain the best (nondominated) solutions found during the course of a multiobjective algorithm [13]; it shows that using archives in PSO for multiobjective problems directly improves performance. A parallel vector evaluated particle swarm optimization (VEPSO) method for multiobjective problems was proposed [14], which adopts a ring migration topology and runs on 2-10 CPUs simultaneously to find nondominated solutions. In [9], the MOPSO method was proposed; it incorporates Pareto dominance and a special mutation operator to solve multiobjective problems [15].
Recently, a hybrid multiobjective algorithm combining a genetic algorithm (GA) and particle swarm optimization (PSO) was proposed [16]. A multiobjective particle swarm optimization based on self-update and a grid strategy was proposed for improving the Pareto set [17]. A new dynamic self-adaptive multiobjective particle swarm optimization (DSAMOPSO) method was proposed to solve binary-state multiobjective reliability redundancy allocation problems [18]; it uses a modified nondominated sorting genetic algorithm (NSGA-II) method and a customized time-variant multiobjective particle swarm optimization method to generate nondominated solutions.
The MOPSO method is becoming more popular due to its simplicity of implementation and its ability to quickly converge to a reasonably acceptable solution for problems in science and engineering.

PAM
PAM constructs k partitions (clusters) of a given dataset of n objects, where each partition represents a cluster. Each cluster may be represented by a centroid, or cluster representative, which is some sort of summary description of all the objects contained in the cluster. The process of PAM is by and large as follows. First, randomly select k representative objects, and assign every other object to the group of the representative object to which it has the minimum distance. Then try to replace the representative objects with nonrepresentative objects in order to minimize the squared error: all possible pairs of objects are analyzed, where one object in each pair is a representative object and the other is not, and the total cost of the clustering is calculated for each such combination; a representative object is replaced by the object that minimizes the squared error. The set of best objects in each cluster after an iteration forms the representative objects for the next iteration, and the final set of representative objects gives the respective centroids of the clusters [19][20][21][22].
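The swap-based search just described can be sketched on one-dimensional data as follows; this is a simplified illustration of PAM's build-and-swap idea (absolute distance as the cost, greedy acceptance of any improving swap), and the function names are ours.

```python
import random

def total_cost(data, medoids):
    """Sum over all points of the distance to their nearest medoid."""
    return sum(min(abs(p - m) for m in medoids) for p in data)

def pam(data, k, rng=random.Random(1)):
    """Simplified PAM on 1-D data: try every (medoid, non-medoid) swap
    and keep a swap whenever it lowers the total clustering cost."""
    medoids = rng.sample(data, k)
    improved = True
    while improved:
        improved = False
        for m in list(medoids):
            for p in data:
                if p in medoids:
                    continue
                candidate = [p if x == m else x for x in medoids]
                if total_cost(data, candidate) < total_cost(data, medoids):
                    medoids = candidate
                    improved = True
    return sorted(medoids)
```

On data with two well-separated groups, the procedure settles on one medoid per group, which is exactly the subswarm-splitting behavior exploited later in the paper.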

Uniform Design
2.4.1. Uniform Design. The main objective of uniform design is to sample a small set of points from a given set of points such that the sampled points are uniformly scattered [23].
Let there be n factors and q levels per factor. When n and q are given, the uniform design selects q combinations from the q^n possible combinations, such that these q combinations are uniformly scattered over the space of all possible combinations. The selected q combinations are expressed as a uniform array U(n, q) = [U_{i,j}]_{q×n}, where U_{i,j} is the level of the jth factor in the ith combination and can be calculated by

U_{i,j} = (i · σ^{j−1} mod q) + 1,

where σ is a parameter given in Table 1.
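The construction above can be sketched directly; the value σ = 2 used in the test is an illustrative choice standing in for the table entry.

```python
def uniform_array(n, q, sigma):
    """Uniform array U(n, q): rows i = 1..q are combinations, columns
    j = 1..n are factors, with entry (i * sigma**(j-1) mod q) + 1."""
    return [[(i * sigma ** j) % q + 1 for j in range(n)]   # j = 0 is sigma^0
            for i in range(1, q + 1)]
```

When σ is chosen well (coprime powers modulo q), every column is a permutation of the q levels, which is what makes the q selected combinations uniformly scattered.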

Improved Generation of Initial Population
An algorithm for dividing the solution space and an algorithm for generating the initial population have been designed [23]. However, Algorithm 2 in [23] considers only the division of the solution space, not the division of the n-dimensional variable space. This brings about a serious problem: if we assume that, in Step 2, q_0 = 5 and n = 10, then U(n, q_0) cannot be generated, because q_0 must be larger than n. In other words, Algorithm 2 in [23] is only suitable for low dimensional problems. In order to overcome this shortcoming, division of the n-dimensional space is introduced to improve the algorithm, so that it is suitable for both low and high dimensional problems. The improved algorithm is shown as follows.
Algorithm A (improved generation of initial population)

Step 1. Judge whether q_0 is valid or not: it must be found in the 1st column of Table 1. If it is not valid, stop and show an error message; otherwise continue.
Step 2. Execute Algorithm 1 in [23] to divide the solution space.

Step 3.1. Judge whether q_0 is more than n; if yes, turn to Step 3.2, otherwise turn to Step 3.3.
Step 3.3. Divide the n-dimensional space into ⌊n/n_0⌋ parts, where n_0 is an integer more than 1 and less than q_0, generally taken as q_0 − 1. Among the ⌊n/n_0⌋ parts, the 1st part corresponds to dimensions 1 to n_0, the 2nd part corresponds to dimensions n_0 + 1 to 2·n_0, and so forth. If the remainder r of n/n_0 is not equal to 0, then an extra part corresponds to dimensions ⌊n/n_0⌋·n_0 + 1 to n, whose length is necessarily less than n_0. Repeat Step 3.4 for each part.
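Our reading of Step 3.3's dimension splitting can be sketched as follows (0-based dimension indices; the function name is ours).

```python
def split_dimensions(n, q0):
    """When q0 <= n, split the n dimensions into chunks of length
    n0 = q0 - 1, plus a shorter remainder chunk if n0 does not divide n."""
    n0 = q0 - 1
    return [list(range(start, min(start + n0, n)))
            for start in range(0, n, n0)]
```

For the problematic case from the text (q_0 = 5, n = 10) this yields two full parts and one remainder part, so U(n_0, q_0) can be generated per part.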

Crossover Operator Based on the Uniform Design.
The crossover operator based on the uniform design acts on two parents. It quantizes the solution space defined by these parents into a finite number of points and then applies the uniform design to select a small sample of uniformly scattered points as the potential offspring.
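The quantize-and-sample idea can be sketched as follows; q1 = 5 levels and σ = 2 are illustrative values, and the function name is ours.

```python
def uniform_crossover(p1, p2, q1=5, sigma=2):
    """Quantize the box spanned by two parents into q1 levels per
    dimension, then pick q1 uniformly scattered offspring by the
    uniform-array construction U[i][j] = (i * sigma**(j-1) mod q1) + 1."""
    n = len(p1)
    lo = [min(a, b) for a, b in zip(p1, p2)]
    hi = [max(a, b) for a, b in zip(p1, p2)]
    U = [[(i * sigma ** j) % q1 + 1 for j in range(n)]
         for i in range(1, q1 + 1)]
    # level l in 1..q1 of dimension j maps to lo + (l-1)/(q1-1) * (hi-lo)
    return [[lo[j] + (row[j] - 1) * (hi[j] - lo[j]) / (q1 - 1)
             for j in range(n)] for row in U]
```

Each offspring coordinate lies inside the parents' box, and along each dimension the q1 offspring hit all q1 quantized levels exactly once.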
For any two parents x_1 and x_2, the minimal and maximal values of each dimension are used to form a new solution space [l_parent, u_parent]. Each domain of [l_parent, u_parent] is quantized into q_1 levels, where q_1 is a predefined prime number. Then the uniform design is applied to select a sample of points as the potential offspring. The details of the algorithm can be found in [23].

Elitist Selection or Elitism
Elitism means that elite individuals cannot be excluded from the mating pool of the population [4]. One strategy always includes the best individual of the current population in the next generation in order to prevent the loss of good solutions found so far; this strategy can be extended to copy the best k individuals to the next generation. In MOPs, elitism plays an important role.
Two strategies are often used to implement elitism. One maintains elitist solutions in the population; the other stores elitist solutions in an external secondary list, or external archive, and reintroduces them to the population. The former copies all nondominated solutions in the current population to the next population and then fills the rest of the next population by selecting from the remaining dominated solutions in the current population. The latter uses the external archive to store the nondominated solutions found so far; the archive is updated each generation by removing elitist solutions dominated by a new solution and adding the new solution if it is not dominated by any existing elitist solution.
This work adopts the second strategy, namely, storing elitist solutions in an external secondary list. Its advantage is that it preserves and dynamically adjusts the whole set of nondominated solutions found up to the current generation. The pseudocode for selecting elitists and updating elitists is shown in Algorithms 2 and 3, respectively.
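The archive update rule described above (drop archived solutions a new solution dominates; reject the new solution if any archived one dominates it) can be sketched as follows; function names are ours, and minimization is assumed.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, new):
    """Insert `new` unless an archived point dominates it; drop archived
    points that `new` dominates."""
    if any(dominates(old, new) for old in archive):
        return archive
    return [old for old in archive if not dominates(new, old)] + [new]
```

Repeated application keeps the archive mutually nondominated, which is the invariant Algorithms 2 and 3 rely on.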

Selection Mechanism for the Swarm Based on Uniform Design
This paper adopts the uniform design to select the best N_1 points (N_1 < N) from the N points and acquire their objective values fit. The detailed steps are as follows.

Algorithm B
Step 1. Calculate each of the m objectives for each of the N points, and normalize each objective f_i(x) as

h_i(x) = (f_i(x) − min_{y∈P} f_i(y)) / (max_{y∈P} f_i(y) − min_{y∈P} f_i(y)),

where P is the set of points in the current population and h_i is the normalized objective.
Step 2. Apply the uniform design to generate q_0 weight vectors w_1, w_2, ..., w_{q_0}, where q_0 is a design parameter and is prime; each weight vector composes one fitness function

F_k(x) = Σ_{i=1}^{m} w_{k,i} · h_i(x).

Step 3. Based on each fitness function, evaluate the quality of the N points. Let r_0 be the remainder of N_1/q_0. For the first r_0 fitness functions, select the best ⌈N_1/q_0⌉ points; for the remaining fitness functions, select the best ⌊N_1/q_0⌋ points. Overall, a total of N_1 points are selected, and the objectives of these selected points are correspondingly stored in fit.
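Algorithm B's normalize-then-select loop can be sketched as follows; function names are ours, the weight vectors are passed in explicitly, and for simplicity the sketch cycles through them one selection at a time rather than splitting ⌈N_1/q_0⌉ versus ⌊N_1/q_0⌋.

```python
def normalize(objs):
    """Min-max normalize each objective column of an N x m objective matrix."""
    m = len(objs[0])
    lo = [min(row[j] for row in objs) for j in range(m)]
    hi = [max(row[j] for row in objs) for j in range(m)]
    return [[(row[j] - lo[j]) / ((hi[j] - lo[j]) or 1.0) for j in range(m)]
            for row in objs]

def select_by_weights(objs, weights, n_select):
    """For each weight vector in turn, pick the unselected point with the
    best (smallest) weighted sum of normalized objectives."""
    h = normalize(objs)
    chosen = []
    for k in range(n_select):
        w = weights[k % len(weights)]
        scored = sorted((sum(wj * hij for wj, hij in zip(w, h[i])), i)
                        for i in range(len(objs)) if i not in chosen)
        chosen.append(scored[0][1])
    return chosen
```

Because different weight vectors reward different trade-offs, the selected subset spreads across the objective space rather than piling up at one extreme.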

Selection Mechanism for Gbest Based on PAM.
In MOPSO, gbest plays an important role in guiding the entire swarm toward the global Pareto front [24]. In contrast to single objective PSO, which has only one global best gbest, MOPSO has multiple Pareto optimal solutions that are mutually nondominated. How to select a suitable gbest for each particle from the Pareto optimal solutions is a key issue.
This paper presents the following selection mechanism for gbest based on PAM.
Algorithm C. Assume the population and the number of particles are denoted as pop and N_pop, and the Pareto optimal solutions and their number are denoted as pareto and N.
Step 1. Acquire the number of clusters K according to

K = K_min + round((K_max − K_min) · t / max_t),

where K_min and K_max, respectively, denote the minimal and maximal values of K, namely, K ∈ [K_min, K_max], and t and max_t indicate the current iteration and the maximal iteration number. This formula yields a linearly increasing number of clusters so as to accord with the progression from coarse search to elaborate search.
Step 2. If N ≤ K, then for each particle in pop, find the nearest Pareto optimal solution as its gbest. Otherwise, turn to Step 3.
Step 3. Perform PAM described in Section 2.3 to partition pareto and pop into K clusters each, the cluster centroids of which are denoted as C_1 = {c_{1,1}, ..., c_{1,K}} and C_2 = {c_{2,1}, ..., c_{2,K}}, respectively.

Step 4. For each c_{2,i} ∈ C_2, find the nearest c_{1,j} ∈ C_1. For all the particles in the cluster represented by c_{2,i}, their gbest randomly takes one of the Pareto optimal solutions in the cluster represented by c_{1,j}.
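The nearest-guide rule used in Step 2 (and, per cluster centroid, in Step 4) can be sketched as follows; the function name is ours and distance is Euclidean in objective space.

```python
def nearest_guide(particle_obj, pareto_objs):
    """Pick as gbest the archive member closest to the particle's
    objective vector (squared Euclidean distance, objective space)."""
    return min(pareto_objs,
               key=lambda p: sum((a - b) ** 2 for a, b in zip(p, particle_obj)))
```

Assigning each cluster of particles a guide from a matching cluster of archive members is what keeps different subswarms heading toward different regions of the front.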

Adjustment of Pareto Optimal Solutions Based on the Uniform Design and PAM
The number of Pareto optimal solutions in the external archive grows with evolution; namely, the size of the external archive becomes very large as iterations proceed. This increases the computational complexity and the execution time of an algorithm, so the size of the external archive must be controlled. Meanwhile, Pareto optimal solutions in the external archive may be close to each other in the objective space, and solutions that are close to each other represent nearly the same choice. It is desirable to find Pareto optimal solutions scattered uniformly over the Pareto front, so that the decision maker has a variety of choices. Therefore, how to control the size of the external archive and select representative Pareto optimal solutions scattered uniformly over the Pareto front is a key issue. This paper presents an adjustment algorithm for the Pareto optimal solutions based on the uniform design and PAM. The algorithm first implements PAM to partition the Pareto front into K clusters in the objective space; it then implements the uniform crossover operator on the minimal cluster so as to generate more Pareto optimal solutions in that cluster; finally, it keeps all the points in the smaller clusters and discards some points in the larger clusters. The detailed steps are as follows.
Algorithm D. Assume the Pareto optimal solutions, their values of the m objective functions, and the number of Pareto optimal solutions are denoted as pareto, paretofit, and N. The size of the external archive is N_lim, and the number of clusters is K.
Step 1. If the numbers of data points in the maximal and minimal clusters differ too much, or both are very small, select all the data points from the minimal cluster and the same number of points from the cluster nearest to the minimal cluster. For each pair of data points, perform the uniform crossover operator described in Section 2.4.3. Update pareto, paretofit, and N according to Algorithm 3.
Step 2. If N > N_lim, implement PAM to partition pareto into K clusters in the objective space and let N_div = N_lim / K; this indicates the average number of points to select from each of the K clusters. Three situations can be distinguished according to the number of points in each cluster and N_div.
(i) Situation 1: the numbers of points in all clusters are larger than or equal to N_div.
(ii) Situation 2: the number of data points is larger than or equal to N_div in only one cluster.
(iii) Situation 3: the number of clusters in which the number of data points is larger than N_div is larger than 1 and less than K.
Step 3. For Situation 1, sort all clusters by the number of points they contain in ascending order. Select N_div points from each of the first K − 1 clusters, and select the remaining N_lim − N_div · (K − 1) points from the last cluster. Turn to Step 7.
Step 4. For Situation 2, keep the points of all the clusters in which the number of points is less than or equal to N_div; the remaining points are selected from the single larger cluster. Turn to Step 7.
Step 5. For Situation 3, keep all the points of the clusters in which the number of points is less than or equal to N_div. Turn to Step 6 to select the remaining points.
Step 6. Recalculate N_div as N_div = N_rem / K_rem, where N_rem and K_rem indicate the numbers of remaining points and remaining clusters. According to the new N_div, turn to Step 3 or Step 4.
Step 7. Save the selected N_lim points and terminate the algorithm.

Thoughts on the Proposed Algorithm
In single objective problems there is only one objective to be optimized, so the global best (gbest) of the whole swarm is unique. In multiobjective problems there are multiple consistent or conflicting objectives, so there exist very many, or even infinitely many, solutions that do not dominate each other; these are the nondominated, or Pareto optimal, solutions. Therefore the whole swarm has more than one gbest, and how to select a suitable gbest is a key issue.
When it is not possible to find all these solutions, it may be desirable to find as many as possible in order to provide more choices to the decision maker. However, if the solutions are close to each other, they represent nearly the same choice. It is desirable to find Pareto optimal solutions scattered uniformly over the Pareto front, so that the decision maker has a variety of choices. This paper introduces the uniform design to ensure that the acquired solutions scatter uniformly over the Pareto front in the objective space.
For MOPSO, the diversity of the population is a very important factor. It has a key impact on the convergence of an algorithm and on the uniform distribution of the Pareto optimal solutions, and it can effectively prevent premature convergence. This paper introduces the PAM clustering algorithm to maintain the diversity of the population: particles in the same cluster have similar features, whereas particles in different clusters have dissimilar features, so choosing particles from different clusters increases the diversity of the population.

Steps of the Proposed Algorithm
Step 1. According to the population size N_pop, determine the number of subintervals S and the population size q_0 in each subinterval, such that q_0 · S is no less than N_pop; q_0 is a prime and must exist in Table 1. Execute Algorithm A in Section 2.4.2 to generate a temporary population containing q_0 · S ≥ N_pop points.
Step 2. Perform Algorithm B described in Section 3.2 to select the best N_pop points from the q_0 · S points as the initial population pop and acquire their objective values fit.

Step 4. According to Algorithm 2, select the elitists, that is, the Pareto optimal solutions, from pop and store them in the external secondary list pareto, whose size is denoted N.
Step 5. Choose a suitable gbest for each particle from pareto according to Algorithm C described in Section 3.3.
Step 6. Update the position pop and velocity v of each particle using formulas (2) and (3), respectively, where formula (2) is modified as v_{i,j}(t + 1) = w · v_{i,j}(t) + ...

Step 7. For the jth dimensional variable of the ith particle, if it goes beyond its lower or upper boundary, it takes the value of the corresponding boundary in the jth dimension, and the jth dimensional value of its velocity takes the opposite value.
Step 8. Calculate each of the m objectives for each particle, and update them in fit.
Step 9. Update pbest of each particle as follows: for the ith particle, if fit[i] dominates pbestfit[i], then let pbestfit[i] = fit[i] and pbest[i] = pop[i]; otherwise, pbestfit[i] and pbest[i] are kept unchanged. If neither dominates the other, randomly select one of them as pbest[i] and update the corresponding pbestfit[i].

Step 10. Update the external archive pareto storing the Pareto optimal solutions, and its size N, according to Algorithm 3.
Step 11. Implement Algorithm D described in Section 3.4 to adjust the Pareto optimal solutions such that their number is less than or equal to the size of the external archive N_lim.
Step 12. If the stop criterion is satisfied, terminate the algorithm; otherwise, turn to Step 5 and continue.
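The boundary handling of Step 7 can be sketched as follows; the function name is ours, and clamping with velocity reversal is our reading of "takes the opposite value".

```python
def clamp_reflect(x, v, lo, hi):
    """Step-7-style boundary handling: clamp each out-of-range coordinate
    to its bound and negate the corresponding velocity component."""
    x2, v2 = list(x), list(v)
    for j in range(len(x)):
        if x2[j] < lo[j]:
            x2[j], v2[j] = lo[j], -v2[j]
        elif x2[j] > hi[j]:
            x2[j], v2[j] = hi[j], -v2[j]
    return x2, v2
```

Reversing the velocity makes a particle that overshoots a bound move back into the feasible region on the next update instead of pressing against the boundary.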

Numerical Results
In order to evaluate the performances of the proposed algorithm, we compare it with two outstanding algorithms, UMOGA [23] and NSGA-II [25].
Several well-known test functions are used: FON, KUR, ZDT1, ZDT2, ZDT3, ZDT6, DTLZ1, and DTLZ2. FON is defined as

f_1(x) = 1 − exp(−Σ_{i=1}^{3} (x_i − 1/√3)²),
f_2(x) = 1 − exp(−Σ_{i=1}^{3} (x_i + 1/√3)²),

where x_i ∈ [−4, 4].
This test function has a nonconvex Pareto optimal front. KUR is defined as

f_1(x) = Σ_{i=1}^{n−1} (−10 exp(−0.2 √(x_i² + x_{i+1}²))),
f_2(x) = Σ_{i=1}^{n} (|x_i|^{0.8} + 5 sin(x_i³)),

where n = 3 and x_i ∈ [−5, 5].
This test function has a nonconvex Pareto optimal front, in which there are three discontinuous regions.
ZDT1 is defined as

f_1(x) = x_1,
f_2(x) = g(x) [1 − √(f_1/g)], with g(x) = 1 + 9 Σ_{i=2}^{n} x_i / (n − 1),

where n = 30 and x_i ∈ [0, 1]. This test function has a convex Pareto optimal front. ZDT2 is defined in the same way except that f_2(x) = g(x) [1 − (f_1/g)²], where n = 30 and x_i ∈ [0, 1]; this test function has a nonconvex Pareto optimal front. ZDT3 uses f_2(x) = g(x) [1 − √(f_1/g) − (f_1/g) sin(10π f_1)], where n = 30 and x_i ∈ [0, 1]; this test function represents the discreteness feature, as its Pareto optimal front consists of several noncontiguous convex parts, the discontinuity being caused by the sine term. ZDT6 is defined as

f_1(x) = 1 − exp(−4x_1) sin⁶(6π x_1),
f_2(x) = g(x) [1 − (f_1/g)²], with g(x) = 1 + 9 [Σ_{i=2}^{n} x_i / (n − 1)]^{0.25},

where n = 10 and x_i ∈ [0, 1].

Mathematical Problems in Engineering
This test function has a nonconvex Pareto optimal front. It includes two difficulties caused by the nonuniformity of the search space: first, the Pareto optimal solutions are nonuniformly distributed along the global Pareto optimal front (the front is biased toward solutions where f_1(x_1) is near one); second, the density of solutions is lowest near the Pareto optimal front and highest away from it.
The Pareto optimal solutions of the test function DTLZ2 must lie inside the first octant of the unit sphere in a three-objective plot. It is more difficult than DTLZ1.
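As a concrete instance of the test functions above, ZDT1 (using the standard definition) can be evaluated as follows.

```python
import math

def zdt1(x):
    """ZDT1: n = 30, x_i in [0, 1]; convex Pareto front f2 = 1 - sqrt(f1),
    attained when x_2 = ... = x_n = 0 (so g = 1)."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```

Any point with all tail variables at zero lands exactly on the true front, which is convenient for checking an optimizer's convergence.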

Parameter Values.
The parameter values of the proposed algorithm, UKMOPSO, are adopted as follows.
(i) Parameters for PSO: the linearly decreasing inertia weight [33,34] lies in the interval [0.1, 1], and the acceleration coefficients c_1 and c_2 are both taken as 2.
(ii) Parameters for PAM: the minimal and maximal numbers of clusters are K_min = 2 and K_max = 10.
(iii) Population size: the population size N_pop is 200.
(iv) Parameters for the uniform design: the number of subintervals S is 64; the number of sample points, that is, the population size of each subinterval, is q_0 = 31; and q_1 = 5.
(v) Stopping condition: the algorithm terminates when the number of iterations exceeds the given maximum of 20 generations.
All the parameter values in UMOGA [23] are set to the original values in [23]; the parameter values that differ from or are additional to those of UKMOPSO are as follows: the number of subintervals S is 16; Q = n (the number of variables); q_1 = 7; p_m = 0.02.
NSGA-II in [25] adopted a population size of 100, a crossover probability of 0.8, a mutation probability of 1/n (where n is the number of variables), and a maximum of 250 generations. To make the comparison fair, we use a population size of 100 and a maximum of 40 generations in NSGA-II, so that the total number of function evaluations is the same in NSGA-II, UKMOPSO, and UMOGA.

Performance Metric.
Based on the assumption that the true Pareto front of a test problem is known, many performance metrics have been proposed and used by researchers such as [2,9,24,25,29,[35][36][37][38][39][40]. Three of them are adopted in this paper to compare the performance of the proposed algorithm. The first is the C metric [29,38]. It is taken as a quantitative metric of solution quality and is often used to show that the outcomes of one algorithm dominate the outcomes of another. It is defined as

C(A, B) = |{b ∈ B : ∃ a ∈ A, a ⪯ b}| / |B|,

where a ⪯ b denotes that b is weakly dominated by a; A and B represent the Pareto fronts obtained by Algorithms A and B in a typical run, and |B| represents the number of elements of B.
The metric value C(A, B) = 1 means that all points in B are dominated by or equal to points in A. In contrast, C(A, B) = 0 means that none of the points in B are covered by points in A.
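The C metric as defined above can be computed directly; the function name is ours, and minimization with weak dominance is assumed.

```python
def c_metric(A, B):
    """C(A, B): fraction of points in B weakly dominated by some point in A."""
    def weakly_dominates(a, b):
        return all(x <= y for x, y in zip(a, b))
    return sum(any(weakly_dominates(a, b) for a in A) for b in B) / len(B)
```

Note that C is not symmetric, which is why the tables below report both C_12 and C_21 for each pair of algorithms.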
The second metric is the IGD metric (Inverted Generational Distance) [39][40][41]. It is the mean distance, in objective space, from a set of solutions uniformly distributed on the true Pareto front to the set of solutions obtained by an algorithm. Let P* be a set of uniformly distributed points along the true PF (Pareto front), and let A be an approximation of the PF; the average distance from P* to A is defined as

IGD(A, P*) = (Σ_{v ∈ P*} d(v, A)) / |P*|,

where d(v, A) is the minimum Euclidean distance between v and the points in A. If |P*| is large enough to represent the PF well, IGD(A, P*) measures both the diversity and the convergence of A in a sense. To achieve a low value of IGD(A, P*), the set A must be very close to the PF and cannot miss any part of it. The smaller the IGD value for the set of obtained solutions, the better the performance of the algorithm.
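The IGD definition above translates directly into code; the function name is ours.

```python
import math

def igd(P_star, A):
    """Average distance from each reference point in P* to its nearest
    neighbour in the obtained set A (objective space)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return sum(min(dist(v, a) for a in A) for v in P_star) / len(P_star)
```

Because the average runs over the reference set P*, an obtained set that covers only part of the front is penalized by the large distances from the uncovered reference points.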
The third measure is the maximum spread (MS) [2,24,35], proposed in [42,43]. It measures how well the true Pareto front (PF_true) is covered by the discovered Pareto front (PF_known) through the hyperboxes formed by the extreme objective values, where m is the number of objectives, f_i^max and f_i^min are the maximum and minimum values of the ith objective in PF_known, and F_i^max and F_i^min are the maximum and minimum values of the ith objective in PF_true. Note that if F_i^min ≥ f_i^max, then the ith term is 0. Algorithms with larger MS values are desirable, and MS = 1 means that the true Pareto front is totally covered by the obtained Pareto front.

Results.
For each test problem, we perform the proposed algorithm (called UKMOPSO) for 30 independent runs and compare its performance with UMOGA [23] and NSGA-II [25].The values of several metrics, the  metric, IGD metric, and MS metric, are shown in Tables 2, 3, 4, and 5.The Pareto fronts obtained by several algorithms implemented on several test functions are illustrated in Figures 1-6.
For brevity, C(UKMOPSO, UMOGA), C(UMOGA, UKMOPSO), C(UKMOPSO, NSGA-II), and C(NSGA-II, UKMOPSO) are, respectively, marked as C_12, C_21, C_13, and C_31. As shown in Table 2, for the test functions FON and KUR, C_12 = 0.769 and C_12 = 0.929 mean that 76.9% and 92.9% of the solutions obtained by UMOGA are dominated by those obtained by UKMOPSO. Similarly, C_21 = 0.01 means that 1% of the solutions obtained by UKMOPSO are dominated by those obtained by UMOGA. For DTLZ1 and DTLZ2, C_12 = 1 and C_21 = 0 mean that all solutions obtained by UMOGA are dominated by those obtained by UKMOPSO, while no solution from UKMOPSO is dominated by those from UMOGA. Thus, the solution quality of UKMOPSO is much better than that of UMOGA on the above test functions. For ZDT1, ZDT2, ZDT3, and ZDT6, the values of the C metric do not differ much between UKMOPSO and UMOGA, meaning the solution qualities of the two are almost identical.
From Table 3, we can see that for ZDT1, ZDT2, ZDT3, and ZDT6, C_13 = 1 and C_31 = 0 mean that all solutions obtained by NSGA-II are dominated by those obtained by UKMOPSO, while no solution from UKMOPSO is dominated by those from NSGA-II; hence the solution quality of UKMOPSO is much better than that of NSGA-II on these test functions. For the rest of the test functions, the solution qualities of the two are almost identical.
In Table 4, for all test problems, the IGD values of UKMOPSO are the smallest among UKMOPSO, UMOGA, and NSGA-II. This means that the PF found by UKMOPSO is the nearest to the true PF compared with the PFs obtained by the other two algorithms; namely, the performance of UKMOPSO is the best of the three. The IGD values of UMOGA are smaller than those of NSGA-II on all test functions, meaning that the PF found by UMOGA is closer to the true PF than the PF obtained by NSGA-II.
From Table 5, we can see that the MS values of UKMOPSO are close to 1 on every test function; that is, almost all of the true PF is covered by the PF obtained by UKMOPSO. UMOGA behaves similarly. The MS values of NSGA-II are much smaller than those of UKMOPSO and UMOGA, most notably MS = 0.0019 for ZDT2.
Figures 1-6 all demonstrate that the PF found by UKMOPSO is the nearest to the true PF and is scattered most uniformly among those obtained by the three algorithms. Most of the points in the PF found by UKMOPSO overlap the points in the true PF, meaning the solution quality obtained by UKMOPSO is very high.

Influence of the Uniform Crossover and PAM.
In order to find out the influence of the uniform crossover on the proposed algorithm, we compare the distribution and the number of Pareto optimal solutions before and after performing the uniform crossover. One of the simulation results on the test function ZDT1 is shown in Figure 7.
From Figure 7, it can be seen that, after performing the uniform crossover, the number of data points in the 1st cluster grows from 7 to 15 and in the 2nd cluster from 19 to 26; namely, many new Pareto optimal solutions not found before the uniform crossover are generated, and the difference in the number of data points between the two clusters decreases. This directly improves the uniformity of the PF and yields more uniform Pareto solutions.
PAM is used to determine which Pareto solutions are to be removed from or inserted into the external archive; this maintains the diversity of the Pareto optimal solutions. The diversity can be computed by the "distance-to-average-point" measure [44], defined as

Div(S) = (1/|S|) Σ_{i=1}^{|S|} √(Σ_{j=1}^{n} (p_{ij} − p̄_j)²),

where S is the population, |S| is the swarm size, n is the dimensionality of the problem, p_{ij} is the jth value of the ith particle, and p̄_j is the jth value of the average point p̄.
If the number of Pareto optimal solutions is less than or equal to the size of the external archive, PAM has no influence on the proposed algorithm. Otherwise, it is used to select different types of Pareto optimal solutions from the several clusters, so the diversity of the Pareto optimal solutions will certainly increase. We monitored the diversity of the Pareto optimal solutions before and after performing PAM on ZDT1 at a certain moment; the values are 87.62 and 117.52, respectively. This demonstrates that PAM can improve the diversity of the Pareto optimal solutions.
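The distance-to-average-point measure quoted above can be computed as follows; the function name is ours.

```python
import math

def diversity(swarm):
    """Mean Euclidean distance of each particle to the swarm's average point."""
    n = len(swarm[0])
    avg = [sum(p[j] for p in swarm) / len(swarm) for j in range(n)]
    return sum(math.sqrt(sum((p[j] - avg[j]) ** 2 for j in range(n)))
               for p in swarm) / len(swarm)
```

A swarm spread over distinct clusters scores higher than one collapsed around a single point, which is the effect the PAM-based selection aims for.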

Conclusion and Future Work
In this paper, a multiobjective particle swarm optimization based on PAM and uniform design is presented. It first uses PAM to partition the Pareto front into k clusters in the objective space; it then applies the uniform crossover operator to the smallest cluster so as to generate more Pareto optimal solutions there. When the size of the Pareto solution set is larger than that of the external archive, PAM is used to determine which Pareto solutions are to be removed from or inserted into the external archive. Finally, it keeps all the points in the smaller clusters and discards some points in the larger clusters. This ensures that each cluster contains approximately the same number of data points; therefore, the diversity of the Pareto solutions increases, and they scatter uniformly over the Pareto front. The results of the experimental simulations performed on several well-known test problems indicate that the proposed algorithm clearly outperforms the other two algorithms.
This algorithm can be further enhanced and improved. One direction is to use a more efficient or approximate clustering algorithm to speed up its execution; another is to extend its application scope.
For the ith particle, if fit[i] dominates pbest_fit[i], then let pbest_fit[i] = fit[i] and pbest_pos[i] = pop[i]; otherwise, pbest_fit[i] and pbest_pos[i] are kept unchanged. If neither of them is dominated by the other, then randomly select one of them as pbest_fit[i] and update its corresponding pbest_pos[i].
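The personal-best update rule above can be sketched as follows; the array names (fit, pop, pbest_fit, pbest_pos) are hypothetical, and minimization of all objectives is assumed:

```python
import random

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization):
    u is no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_pbest(i, fit, pop, pbest_fit, pbest_pos, rng=random):
    """Personal-best update: keep the dominating solution; when neither
    solution dominates the other, keep or replace at random."""
    if dominates(fit[i], pbest_fit[i]):
        pbest_fit[i], pbest_pos[i] = fit[i], pop[i]
    elif not dominates(pbest_fit[i], fit[i]):
        # mutual nondominance: choose one of the two at random
        if rng.random() < 0.5:
            pbest_fit[i], pbest_pos[i] = fit[i], pop[i]
```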
The maximum spread (MS) metric can measure how well the true Pareto front (PF_true) is covered by the discovered Pareto front (PF_known) through the hyperboxes formed by the extreme objective values of the two fronts.

Figure 1: Pareto fronts obtained by different algorithms for test problem FON.
It is defined as
\[
\mathrm{MS} = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \delta_i^2}, \qquad
\delta_i = \frac{\min\left(f_i^{\max}, F_i^{\max}\right) - \max\left(f_i^{\min}, F_i^{\min}\right)}{F_i^{\max} - F_i^{\min}},
\]
where $f_i^{\max}$ and $f_i^{\min}$ are the maximum and minimum values of the $i$th objective in PF_known, respectively, and $F_i^{\max}$ and $F_i^{\min}$ are the maximum and minimum values of the $i$th objective in PF_true, respectively. Note that if $f_i^{\min} \ge F_i^{\max}$, then $\delta_i = 0$. Algorithms with larger MS values are desirable, and MS = 1 means that the true Pareto front is totally covered by the obtained Pareto front.
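Taken together, the MS metric can be computed as below; this is one common formulation and may differ in minor details from the paper's exact definition:

```python
import math

def maximum_spread(pf_known, pf_true):
    """Maximum spread (MS): how well PF_true is covered by PF_known,
    via the per-objective overlap of their bounding hyperboxes."""
    m = len(pf_known[0])  # number of objectives
    total = 0.0
    for i in range(m):
        f_max = max(p[i] for p in pf_known)  # bounds of the found front
        f_min = min(p[i] for p in pf_known)
        F_max = max(p[i] for p in pf_true)   # bounds of the true front
        F_min = min(p[i] for p in pf_true)
        if f_min >= F_max:
            delta = 0.0  # no overlap on objective i
        else:
            delta = (min(f_max, F_max) - max(f_min, F_min)) / (F_max - F_min)
        total += delta ** 2
    return math.sqrt(total / m)
```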

Figure 2: Pareto fronts obtained by different algorithms for test problem KUR.

Figure 3: Pareto fronts obtained by different algorithms for test problem ZDT1.

Figure 4: Pareto fronts obtained by different algorithms for test problem ZDT2.

Figure 5: Pareto fronts obtained by different algorithms for test problem ZDT3.

Figure 6: Pareto fronts obtained by different algorithms for test problem ZDT6.

Figure 7: Pareto optimal solutions before (a) and after (b) performing the uniform crossover.
Two classical partitioning methods are k-means and k-medoids. In contrast to the k-means algorithm, k-medoids chooses actual data points as cluster centers (medoids), which makes the k-medoids method more robust than k-means in the presence of noise and outliers, because it is less influenced by outliers or other extreme values. PAM (Partitioning Around Medoids) is the first and most frequently used k-medoids algorithm; it is shown in Algorithm 1.
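Algorithm 1 itself is not reproduced in this excerpt, but a compact PAM sketch, assuming a greedy medoid-swapping scheme and a user-supplied distance function, looks like:

```python
import random

def pam(points, k, dist, max_iter=100, rng=None):
    """Partitioning Around Medoids (PAM), sketched: pick k medoids,
    then greedily swap a medoid with a non-medoid point whenever the
    swap lowers the total assignment cost."""
    rng = rng or random.Random()
    medoids = rng.sample(range(len(points)), k)  # indices of initial medoids

    def cost(meds):
        # total distance of every point to its closest medoid
        return sum(min(dist(points[p], points[m]) for m in meds)
                   for p in range(len(points)))

    best = cost(medoids)
    for _ in range(max_iter):
        improved = False
        for mi in range(k):
            for p in range(len(points)):
                if p in medoids:
                    continue
                trial = medoids[:mi] + [p] + medoids[mi + 1:]
                c = cost(trial)
                if c < best:  # accept any improving swap
                    medoids, best, improved = trial, c, True
        if not improved:
            break
    # final assignment: cluster label = index of the nearest medoid
    labels = [min(range(k), key=lambda j: dist(points[p], points[medoids[j]]))
              for p in range(len(points))]
    return medoids, labels
```

In the proposed algorithm, the points would be Pareto solutions in the objective space and the distance would be Euclidean; the resulting clusters then drive both the uniform crossover (applied to the smallest cluster) and the archive truncation.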

Table 1: Values of the parameter for different numbers of factors and different numbers of levels per factor.
Proposed Algorithm. In MOP, for m objectives f_1, f_2, ..., f_m and any two points u and v in the feasible solution space Ω, if each objective satisfies f_i(u) ≤ f_i(v) and at least one objective satisfies f_j(u) < f_j(v), namely, u is at least as good as v with respect to all the objectives and strictly better than v with respect to at least one objective, then we say that u dominates v. If no other solution is strictly better than u, then u is called a nondominated solution or Pareto optimal solution.

Table 2: Comparison of the C metric between UKMOPSO and UMOGA.

Table 3: Comparison of the C metric between UKMOPSO and NSGA-II.

Table 4: Comparison of the IGD metric.

Table 5: Comparison of the maximum spread (MS) metric.