Multiswarm Particle Swarm Optimization with Transfer of the Best Particle

We propose an improved algorithm for multiswarm particle swarm optimization with transfer of the best particle, called BMPSO. In the proposed algorithm, we introduce parasitism into the standard particle swarm optimization (PSO) algorithm in order to balance exploration and exploitation and to enhance the capacity for global search when solving nonlinear optimization problems. The best particle guides the other particles to prevent them from becoming trapped by local optima. We provide a detailed description of BMPSO, together with a diversity analysis based on the Sphere function. Finally, we tested the performance of the proposed algorithm on six standard test functions and an engineering problem. The results showed that the proposed BMPSO performed better than several other algorithms on both the test functions and the engineering problem. Furthermore, the proposed BMPSO can be applied to other nonlinear optimization problems.


Introduction
Many nonlinear optimization problems, often with conflicting objectives, are attracting increasing attention from researchers, and various random search methods have been developed to address them. Global optimization algorithms are widely employed to solve these problems [1]. Particle swarm optimization (PSO) is a random optimization method inspired by the flocking behavior of birds [2,3]; it was first proposed by Kennedy and Eberhart [4]. Compared with other swarm intelligence algorithms, PSO has a simple structure and a rapid convergence rate, and it is easy to implement, which makes it an effective method for solving nonlinear optimization problems [5,6].
In recent years, many researchers have tried to improve PSO to overcome its shortcomings, namely, that it exhibits premature convergence and is readily trapped by local optima [7]. In order to improve the efficiency and effectiveness of multiobjective particle swarm optimization, a competitive and cooperative coevolutionary multiobjective particle swarm optimization algorithm (CCPSO) was presented by Goh et al. in 2010 [8]. A competitive and cooperative coevolution mechanism was introduced in CCPSO, but it does not handle the ZDT4 problem well, so it cannot be applied widely. Rathi and Vijay presented a modified PSO (EPSO) in 2010 [9], where two stages are used to balance the local and global search: first, the bandwidth of a microstrip antenna (MSA) was modeled by a benchmark function; second, the first output was then employed as an input to obtain the new output in the form of five parameters. The proposed EPSO is efficient and accurate. In 2011, a multiswarm self-adaptive and cooperative particle swarm optimization (MSCPSO) was proposed by Zhang and Ding [10], which employs four subswarms: subswarms 1 and 2 are basic, subswarm 3 is influenced by subswarms 1 and 2, and subswarm 4 is affected by subswarms 1, 2, and 3. The four subswarms employ a cooperative strategy. While it achieved good performance in solving complex multimodal functions, MSCPSO was not applied to practical engineering problems. A chaos-enhanced accelerated PSO algorithm was proposed by Gandomi et al. in 2013 [11], which performed well when applied to a complex problem, but it is not easy to operate. Ding et al. developed a multiswarm cooperative chaos particle swarm optimization algorithm in 2013 [12], which combines chaos and multiswarm cooperative strategies; however, this method was proposed only to optimize the parameters of a least squares support vector machine.
In order to establish and optimize the alternative path, an efficient routing recovery protocol with endocrine cooperative particle swarm optimization was proposed by Hu et al. in 2015 [13], which employs a multiswarm evolution equation. Qin et al. proposed a novel coevolutionary particle swarm optimizer with parasitic behavior in 2015 [14], where a host swarm and parasite swarm exchange information. This method performs better in terms of the solution accuracy and convergence, but the structure is complex.
In the present study, we introduce a multiswarm particle swarm optimization with transfer of the best particle (BMPSO) in order to improve the global search capacity and to avoid becoming trapped in local optima. The proposed algorithm employs three slave swarms and a master swarm. The best particle and worst particle are selected by PSO from every slave swarm, and the best particle is then transferred to the next slave swarm to replace its worst particle. The three best particles are stored in the master swarm, and the optimal value is finally obtained by PSO from the master swarm. Compared with other optimization algorithms, this parasitism strategy is easier to understand; the number of control parameters does not increase, and the algorithm can find the optimal solution easily.
The remainder of this paper is organized as follows. In Section 2, we provide an overview of the standard PSO. Our proposed BMPSO is explained in Section 3. We present the results of our numerical experimental simulations as well as comparisons in Section 4. In Section 5, we apply our proposed BMPSO to a practical engineering problem. Finally, we give our conclusions.

Standard PSO
In 1995, the standard PSO algorithm was presented by Kennedy and Eberhart [4]. It was subsequently developed further owing to its easy implementation, high precision, and fast convergence. Similar to other evolutionary algorithms, it starts from random solutions and performs iterative searches to find the best solution, where the quality of a solution is evaluated based on its fitness.
PSO utilizes a swarm of particles without quality or volume. Each particle moves in a D-dimensional search space according to its own experience and that of its neighbors while searching for the best solution. The position of the i-th particle is expressed by the vector X_i = [x_{i1}, ..., x_{iD}], and its velocity by the vector V_i = [v_{i1}, ..., v_{iD}]. In each step, the particles move according to the following formulae:

v_{id} = v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id}),  (1)

x_{id} = x_{id} + v_{id},  (2)

where d = 1, ..., D, c_1 and c_2 are acceleration constants, r_1 and r_2 represent two independent uniformly distributed random variables between 0 and 1, p_i is the best previous position found by the particle itself (pbest), and p_g is the best global position (gbest). In 1998, an inertia weight w was introduced into formula (1) by Shi and Eberhart [15], as shown by the following formula [18]:

v_{id} = w v_{id} + c_1 r_1 (p_{id} - x_{id}) + c_2 r_2 (p_{gd} - x_{id}).  (3)

It was demonstrated that the inclusion of a suitable inertia weight allows the best solution to be searched for more accurately, because this weight balances exploration and exploitation.
The standard PSO procedure is illustrated as follows.
Step 1. Initialize the position and velocity of each particle randomly in the population.
Step 2. Evaluate the fitness of each particle. Store the best positions found in pbest and gbest.
Step 3. Update the position and velocity of each particle using formulae (2) and (3).
Step 4. Evaluate the new fitness of each particle and update pbest and gbest accordingly.
Step 5. Assess whether the termination conditions are met. If not, go back to Step 3.
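The steps above can be sketched in code. The following is a minimal illustration of the standard PSO with inertia weight, assuming common parameter choices (30 particles, w = 0.5, c_1 = c_2 = 2) and a Sphere objective; the function and variable names are ours, not from the paper.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.5, c1=2.0, c2=2.0, lo=-5.0, hi=5.0):
    # Step 1: initialize positions and velocities randomly
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: evaluate fitness; store pbest and gbest
    pbest = [p[:] for p in x]
    pbest_f = [fitness(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):                     # Step 5: loop until done
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 3: velocity update with inertia weight, then position update
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            # Step 4: re-evaluate fitness and update pbest and gbest
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

sphere = lambda p: sum(c * c for c in p)
best, best_f = pso(sphere, dim=5)
```

With these settings, the swarm typically contracts toward the origin on the Sphere function within a few hundred iterations.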

General Description of BMPSO
The standard PSO is readily trapped by local optima and is susceptible to premature convergence [19] when solving multiobjective problems with constraint conditions. Thus, we propose a new multiswarm PSO method with transfer of the best particle, called BMPSO. Our proposed algorithm can be applied to unconstrained problems as well as to problems with constraint conditions. It has the ability to escape local optima and prevent premature convergence.

Best Particle Coevolutionary Mechanism.
In this section, we provide a general description of BMPSO. The proposed method employs three slave swarms, designated slave-1, slave-2, and slave-3, and a master swarm, with a specific relationship among the four swarms. First, the best particle-1 of slave-1 is selected and stored in the master swarm. Second, the worst particle of slave-2 is replaced with the best particle-1, and the best particle-2 of slave-2 is then found and stored in the same manner as particle-1. The same strategy is applied to slave-3. When the three slave swarms have been processed, the best particle in the master swarm is taken as the optimal value. This process repeats until the termination conditions are met. The proposed BMPSO involves both cooperation and competition; in nature, this evolutionary strategy is called parasitism [20]. The structure of the BMPSO is shown in Figure 1.
Figure 1: Structure of the BMPSO, in which each slave swarm (slave-1, slave-2, and slave-3) is evolved by PSO, the best particle of each slave swarm replaces the worst particle of the next, and the best particles are stored in the master swarm.

Our improved algorithm comprises three slave swarms and a master swarm, where a parasitism strategy is used to balance exploration and exploitation. The control parameters for the proposed BMPSO are the number of particles, the inertia weight, the dimension of the particles, the acceleration coefficients, and the number of iterations. The number of particles is determined by the complexity of the problem, typically from 5 to 100. The inertia weight determines how much of the current velocity is inherited. The dimension of the particles is set by the optimization problem as the dimension of the required result. The acceleration coefficients give the particles the capacity for self-summary and for learning from others, and they usually have a value of 2. The number of iterations can be determined by the experimental requirements. The pseudocode for the BMPSO is presented as follows.

Begin
    Specify the population of each slave swarm
    Initialize the velocity and position
    Repeat
        Evaluate the fitness value
        Find the best particle and worst particle in each slave swarm
        Use the best particle to replace the worst particle in the next slave swarm
        Store the best particle in the master swarm
        Find the optimal value in the master swarm
    Until a termination condition is met
End

Diversity Analysis of BMPSO.
In order to explain the proposed algorithm in detail, we illustrate the search capacity of each particle in the four swarms. The worst particle is replaced by the best particle, and the best particle leads the other particles away from local optima. Figure 2 shows the evolutionary processes for the particles in slave-1, slave-2, slave-3, and the master swarm.
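The transfer loop in the pseudocode can be sketched as follows. This is an illustrative reading of the strategy, not the authors' implementation: three slave swarms are advanced by standard PSO steps, the best particle of each swarm replaces the worst particle of the next, and the best particles are collected in the master swarm. All helper names are ours.

```python
import random

def make_swarm(fitness, dim, n, lo=-5.0, hi=5.0):
    # a swarm: positions, velocities, personal bests, and their fitness values
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    return {"x": x, "v": [[0.0] * dim for _ in range(n)],
            "pbest": [p[:] for p in x], "pf": [fitness(p) for p in x]}

def pso_step(s, fitness, w=0.5, c1=2.0, c2=2.0):
    # one standard PSO iteration over swarm s
    g = min(range(len(s["x"])), key=lambda i: s["pf"][i])
    gbest = s["pbest"][g][:]
    for i, xi in enumerate(s["x"]):
        for d in range(len(xi)):
            r1, r2 = random.random(), random.random()
            s["v"][i][d] = (w * s["v"][i][d]
                            + c1 * r1 * (s["pbest"][i][d] - xi[d])
                            + c2 * r2 * (gbest[d] - xi[d]))
            xi[d] += s["v"][i][d]
        f = fitness(xi)
        if f < s["pf"][i]:
            s["pbest"][i], s["pf"][i] = xi[:], f

def bmpso(fitness, dim, n=10, iters=100):
    slaves = [make_swarm(fitness, dim, n) for _ in range(3)]
    master = []
    for _ in range(iters):
        master = []
        for k, s in enumerate(slaves):
            pso_step(s, fitness)
            b = min(range(n), key=lambda i: s["pf"][i])
            master.append((s["pf"][b], s["pbest"][b][:]))
            # the best particle replaces the worst particle of the next slave swarm
            nxt = slaves[(k + 1) % 3]
            w_i = max(range(n), key=lambda i: nxt["pf"][i])
            nxt["x"][w_i] = s["pbest"][b][:]
            nxt["v"][w_i] = [0.0] * dim
            nxt["pbest"][w_i] = s["pbest"][b][:]
            nxt["pf"][w_i] = s["pf"][b]
    # the optimal value is taken from the master swarm
    return min(master, key=lambda t: t[0])[1]

best = bmpso(lambda p: sum(c * c for c in p), dim=5)
```

Note that the transfer injects the best particle into both the position and personal-best memory of the receiving swarm, so it immediately influences that swarm's gbest.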
The evolutionary processes based on the Sphere function in 20 dimensions are shown in Figure 2. The graphs represent the results obtained by the proposed algorithm in a single run. Figure 2(a) shows that the particles in the four swarms performed their search behavior in a smooth manner, and the diversity was improved when the best particle replaced the worst particle; this point is illustrated clearly by the experiments with the test function. Figure 2(b) shows the status of the particles during different generations based on the distances among four particles. After the 50th generation, the proposed algorithm made a greater effort to avoid becoming trapped by a local optimum while still considering the convergence speed. Thus, it maintained the diversity of the fitness values, but not at the cost of the convergence speed.
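The swarm diversity discussed here can be quantified in several ways; a simple illustrative measure (not necessarily the one used to produce Figure 2) is the mean pairwise Euclidean distance among the particles of a swarm:

```python
import math

def mean_pairwise_distance(particles):
    # average Euclidean distance over all unordered pairs of particles
    n = len(particles)
    if n < 2:
        return 0.0
    total = sum(math.dist(particles[i], particles[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)
```

A swarm collapsed onto a single point has zero diversity, while a well-spread swarm has a large value; tracking such a quantity per generation produces curves of the kind plotted in Figure 2(b).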
The number of particles affects the optimization ability of BMPSO. In order to verify this, Figure 3 illustrates the convergence characteristics using the Sphere function as an example, with the number of particles set to 10, 20, 30, 50, and 80. Figure 3 shows that the final optimized fitness value tended to improve as the number of particles increased, although this effect is less obvious in the later stages. In addition, the communication cost grows as the number of particles increases. In the proposed BMPSO, 10-30 particles are sufficient for most problems.
In the standard PSO, the inertia weight w is very important because it affects the balance between local and global search: it describes the influence of the previous velocity on the current one, and a higher inertia weight gives better global optimization ability. In order to determine the appropriate value of w, we performed experiments with the test function, and Figure 4 shows the results obtained with different values of w. Figure 4 clearly demonstrates that the optimum fitness value is hard to reach when the inertia weight is either too small or too large. In the proposed BMPSO, w = 0.5 is the optimum value.
To a certain extent, the dimension D of the particles represents the complexity of a problem, and the search capacity decreases as D increases. We performed experiments to determine the suitable range of D using the Sphere function as an example, and Figure 5 illustrates the results obtained. Figure 5 shows that the proposed BMPSO is more effective with a smaller dimension; in the proposed algorithm, D = 5-30 is a suitable range.
In this section, we considered the influence of different parameters on the proposed algorithm. This diversity analysis demonstrated the effectiveness of BMPSO, where the best particle shares more information with others and it replaces the worst particle during the evolutionary process. This guides the other particles to prevent them from being trapped by local optima and avoids premature convergence. The proposed algorithm balances exploration and exploitation in an effective manner.

Numerical Experiments
In order to determine whether the proposed algorithm is effective for nonlinear optimization problems, we performed experiments using standard test functions [21,22]. The proposed algorithm was simulated and verified on the MATLAB platform. The results were compared with those obtained using a modified particle swarm optimizer called W-PSO [15], an improved particle swarm optimization combined with chaos called CPSO [16], an evolution strategy with completely derandomized self-adaptation called CMAES [17], and a multiswarm self-adaptive and cooperative particle swarm optimization called MSCPSO [10].

Standard Test Functions.
In order to validate the efficiency of BMPSO, six standard test functions were employed in experiments to search for the optimum value of the fitness function: Sphere, Rastrigin, Griewank, Schwefel, Elliptic, and Rosenbrock [23]. The global optima of these six standard test functions are all equal to zero, and their formulae are shown in Table 1. Sphere, Schwefel, and Elliptic are typical unimodal functions, so it is relatively easy to search for their optimum values; they represent the simple single-mode case. Rastrigin and Griewank are typical nonlinear multimodal functions with wide search spaces; their landscapes oscillate between high and low peaks, and such complex multimodal problems are usually considered difficult for optimization algorithms. The global optimum of Rosenbrock lies in a smooth and narrow parabolic valley.
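For reference, four of the six benchmarks can be written down in their commonly used forms (the paper's exact formulations are given in Table 1; Schwefel and Elliptic exist in several variants, so we omit them here):

```python
import math

def sphere(x):
    # unimodal; global optimum 0 at the origin
    return sum(c * c for c in x)

def rastrigin(x):
    # multimodal; global optimum 0 at the origin
    return sum(c * c - 10.0 * math.cos(2.0 * math.pi * c) + 10.0 for c in x)

def griewank(x):
    # multimodal; global optimum 0 at the origin
    s = sum(c * c for c in x) / 4000.0
    p = 1.0
    for i, c in enumerate(x, start=1):
        p *= math.cos(c / math.sqrt(i))
    return s - p + 1.0

def rosenbrock(x):
    # narrow parabolic valley; global optimum 0 at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```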
Thus, it is difficult to search for the optimum value because the function supplies little information for optimization algorithms. The six standard test functions comprise a class of complex nonlinear problems [24].
In Tables 2 and 3, the best results are highlighted in bold. Table 2 shows that the proposed BMPSO obtained a better standard deviation ("Std.") for Rosenbrock than W-PSO, CPSO, CMAES, and MSCPSO, and the results obtained by BMPSO were closest to the theoretical value. It should be noted that the optimal value of 0 could be found for Rosenbrock. Table 3 shows that our proposed BMPSO performed better for Sphere, Griewank, Schwefel, Elliptic, and Rosenbrock than the other algorithms, whereas W-PSO performed best for Rastrigin. According to this analysis, we conclude that the proposed BMPSO has a greater capacity for global search.

Engineering Application
We also solved an engineering problem based on the lightweight optimization design of a gearbox to confirm the efficiency of the proposed BMPSO. In the development of the auto industry, lightweight optimization design for gearboxes is attracting much attention because of energy and safety issues, and there are three standard types of lightweight design methods [26,27]. In order to solve the problem effectively, Yang et al. proposed a method that first establishes a simplified model, analyzes it using ANSYS, establishes an approximate model with the response surface methodology, and finally optimizes it with a genetic algorithm (GA) [28]. A simplified 3D model of the gearbox is shown in Figure 6; this simplification has no effect on the finite element analysis.
In the optimization process, the design variables are the bottom thickness, the axial thickness, and the lateral thickness. Detailed descriptions of the establishment of the fitness function can be found in [28]. The lightweight optimization design of the gearbox is formulated as minimizing the gearbox weight as a function of these three thicknesses, subject to the constraints given in [28]. For the proposed BMPSO, the parameters were set as follows: 30 particles, acceleration coefficients c_1 = c_2 = 2, inertia weight w = 0.5, 1000 iterations, and dimension D = 3. The parameters for the GA were set as described in [28]. This experiment was performed with 50 independent runs, and the results are expressed as the average over the 50 runs.
The optimization results presented in Table 4 show that the weight of the gearbox is 31.3458 kg according to the proposed BMPSO, which is 8.4682 kg less than the original design and 0.6368 kg less than with the GA. It can thus be concluded that the proposed BMPSO performed better than the GA and is effective in handling complex problems.

Conclusions
In this study, we proposed an improved multiswarm PSO method with transfer of the best particle called BMPSO, which utilizes three slave swarms and a master swarm that share information among themselves via the best particle. All of the particles in the slave swarms search for the global optimum using the standard PSO. The best particle replaces the worst in order to guide other particles to prevent them from becoming trapped by local optima. We performed a diversity analysis of BMPSO using the Sphere function.
We also applied the proposed algorithm to standard test functions and an engineering problem. We introduced parasitism into the standard PSO to develop a multiswarm PSO that balances exploration and exploitation. The advantages of the proposed BMPSO are that it is easy to understand, has a small number of parameters, and is simple to operate. In the proposed algorithm, the strategy of using the best particle to replace the worst enhances the global search capacity when solving nonlinear optimization problems, and the optimum solution is obtained from the master swarm. The diversity analysis also demonstrated the clear improvements obtained with the proposed algorithm. Compared with previously proposed algorithms, our BMPSO delivered better performance, and the result obtained in an engineering problem demonstrated its efficiency in handling a complex problem.
In further research, a detailed convergence analysis of the method should be performed. Our future work will also focus on extensive evaluations of the proposed BMPSO by solving more complex and discrete practical optimization problems.