Improved Barebones Particle Swarm Optimization with Neighborhood Search and Its Application on Ship Design

Barebones particle swarm optimization (BPSO) is a new PSO variant which has shown good performance on many optimization problems. However, similar to the standard PSO, BPSO also suffers from premature convergence when solving complex optimization problems. In order to improve the performance of BPSO, this paper proposes a new BPSO variant called BPSO with neighborhood search (NSBPSO) to achieve a tradeoff between exploration and exploitation during the search process. Experiments are conducted on twelve benchmark functions and a real-world problem of ship design. Simulation results demonstrate that our approach outperforms the standard PSO, BPSO, and six other improved PSO algorithms.


Introduction
Particle swarm optimization (PSO), developed by Kennedy and Eberhart [1], is a relatively new optimization technique inspired by swarm intelligence. Like other evolutionary algorithms (EAs), PSO is a population-based stochastic search algorithm, but it does not contain any crossover or mutation operator. During the search process, each particle adjusts its search behavior according to the search experience of its previous best position (pbest) and the global best position (gbest). Due to its simplicity and easy implementation, PSO has been successfully applied to various practical optimization problems [2][3][4][5].
However, like other stochastic algorithms, PSO also suffers from premature convergence when handling complex multimodal problems. The main reason is that the attraction search pattern of PSO greatly depends on pbest and gbest. Once these best particles get stuck, all particles in the swarm quickly converge to the trapped position. In order to enhance the performance of PSO, different versions of PSO have been proposed in the past decades. Shi and Eberhart [6] introduced an inertia weight ω into the original PSO to achieve a balance between global and local search; reported results show that a linearly decreasing ω is a good parameter setting. Parsopoulos and Vrahatis [7] proposed a unified PSO (UPSO), a hybrid algorithm combining the global and local versions of PSO. In [8], a fully informed PSO, called FIPS, is proposed by employing a modified velocity model. van den Bergh and Engelbrecht [9] used a cooperative mechanism to improve the performance of PSO on multimodal optimization problems. In the standard PSO, particles are attracted by their corresponding previous best particles and the global best particle. This search pattern is a greedy method which may result in premature convergence. To tackle this problem, Liang et al. [10] proposed a comprehensive learning PSO (CLPSO), in which each particle can be attracted by different previous best positions. Computational results on a set of multimodal problems demonstrate the effectiveness of CLPSO. In [11], Wang et al. proposed a new PSO algorithm (NSPSO) which searches the neighbors of particles; this provides more chances of finding better candidate solutions. Simulation studies show that NSPSO outperforms UPSO, FIPS, CPSO-H, and CLPSO. In [12], another version of NSPSO is proposed by employing neighborhood search and a diversity-enhancement mechanism.
Similar to other EAs, the performance of PSO also greatly depends on its control parameters: ω, c1, and c2. The first parameter is known as the inertia weight, and the last two are acceleration coefficients. Slight differences in these parameters may result in significantly different performance.
To tackle this problem, some PSO variants based on adaptive parameters have been proposed to minimize the dependency on these parameters [13,14]. Compared to these adaptive PSO algorithms, Kennedy developed a novel PSO called barebones PSO (BPSO) [15], which eliminates the velocity term and does not contain the parameters ω, c1, and c2. In BPSO, Gaussian sampling is used to generate new positions of particles. Empirical studies demonstrate that the performance of BPSO is competitive with the standard PSO and some improved PSO algorithms. Inspired by the idea of BPSO, some new algorithms have been proposed. In [16], Omran et al. combined BPSO with differential evolution (DE) and proposed the barebones DE (BBDE). The reported results show that BBDE outperforms the standard DE and BPSO, and it also achieves promising solutions for unsupervised image classification. Krohling and Mendel [17] introduced Gaussian and Cauchy mutations into BPSO to improve its performance. Experimental studies on a suite of well-known multimodal benchmark functions demonstrate the effectiveness of this approach. Blackwell [18] presented a theoretical analysis of BPSO. A series of experimental trials confirmed that a BPSO situated at the edge of collapse is comparable to other PSO algorithms and that performance can still be further improved with the use of an adaptive distribution. In [19], Wang embedded opposition-based learning (OBL) into BPSO to solve constrained nonlinear optimization problems; in addition, a new boundary search strategy is utilized. Simulation studies on thirteen constrained benchmark functions show that the new approach outperforms PSO, BPSO, and six other improved PSO algorithms.
In this paper, we also propose an improved barebones PSO, called NSBPSO, which employs global and local neighborhood search strategies to strike a balance between exploration and exploitation during the search process. In order to verify the performance of NSBPSO, twelve well-known benchmark functions and a real-world problem of ship design are used in the experiments. Computational results show that our approach outperforms PSO, BPSO, and several other improved PSO variants in terms of the quality of solutions.
The rest of the paper is organized as follows. The standard PSO and barebones PSO are given in Section 2. In Section 3, our approach NSBPSO is proposed. Experimental studies are presented in Section 4. Section 5 presents a real-world application on ship design. Finally, the work is summarized in Section 6.

Barebones Particle Swarm Optimization
PSO is a population-based stochastic search algorithm which simulates the behaviors of fish schooling and bird flocking. Each particle has a velocity vector and a position vector. During the search process, a particle dynamically adjusts its velocity to generate a new position as follows:

V_i(t+1) = ω·V_i(t) + c1·r1·(pbest_i − X_i(t)) + c2·r2·(gbest − X_i(t)),
X_i(t+1) = X_i(t) + V_i(t+1), (1)

where X_i and V_i are the position and velocity vectors of the ith particle, respectively, pbest_i is the previous best position of the ith particle, and gbest is the global best position. r1 and r2 are two independently generated random numbers within [0, 1]. The parameter ω is known as the inertia weight, and c1 and c2 are acceleration coefficients.
A recent study [20] proved that each particle in PSO converges to a weighted average of pbest_i and gbest:

lim_{t→∞} X_i(t) = (c1·pbest_i + c2·gbest) / (c1 + c2). (2)

Based on this convergence characteristic of PSO, Kennedy [15] proposed a new PSO variant called barebones PSO (BPSO), in which each particle has only a position vector; the velocity vector is eliminated. Therefore, BPSO does not contain the parameters ω, c1, and c2. In BPSO, a new position is generated by Gaussian sampling as follows:

X_i(t+1) = N((pbest_i + gbest)/2, |pbest_i − gbest|), (3)

where N(·) indicates a Gaussian distribution with mean (pbest_i + gbest)/2 and standard deviation |pbest_i − gbest|.
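As a concrete illustration, the barebones Gaussian sampling of equation (3) can be sketched in a few lines of NumPy. This is our own sketch rather than the authors' code, and the vectorized array shapes are an assumption:

```python
import numpy as np

def bpso_step(pbest, gbest):
    """One barebones PSO update, equation (3): each new position is drawn
    per dimension from a Gaussian with mean (pbest_i + gbest)/2 and
    standard deviation |pbest_i - gbest|.

    pbest : (n, d) array of personal best positions
    gbest : (d,) array, the global best position
    returns an (n, d) array of new positions
    """
    mean = (pbest + gbest) / 2.0      # per-dimension Gaussian mean
    std = np.abs(pbest - gbest)       # per-dimension standard deviation
    return np.random.normal(mean, std)
```

Note that a particle whose personal best coincides with gbest gets zero standard deviation and stops moving; this collapse behavior is what Blackwell [18] analyzes.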

Barebones PSO with Neighborhood Search
Due to intrinsic randomness, both PSO and other EAs suffer from premature convergence when solving complex multimodal problems. Sometimes a local optimum lies near the global optimum, and the neighborhoods of trapped particles may then contain the global optimum. In such cases, searching the neighbors of particles is beneficial for finding better solutions.
Based on this idea, some neighborhood search strategies have been successfully applied to various algorithms.
In [11], Wang et al. proposed a new PSO algorithm called PSO with neighborhood search strategies (NSPSO), which utilizes one local and two global neighborhood search strategies. NSPSO involves two operations. First, for each particle, three trial particles are generated by the above neighborhood search strategies, respectively. Second, the best one among the three trial particles and the current particle is chosen as the new current particle. Simulation studies on twelve unimodal and multimodal benchmark problems show that NSPSO achieves better results than the standard PSO and five other improved PSO algorithms.
Although NSPSO has shown good search abilities, its performance is still seriously influenced by its control parameters ω, c1, and c2. In [11], NSPSO used the empirical parameter settings c1 = c2 = 1.49618 and ω = 0.72984. In order to minimize the effects of the control parameters on the performance of NSPSO, this paper proposes an improved PSO algorithm by combining barebones PSO with the neighborhood search strategies.
There are various population topologies, such as ring, wheel, star, von Neumann, and random. A recent study shows that the complexity of the population topology affects the performance of PSO: a population topology with few connections (low complexity) may perform well on multimodal problems, while a highly interconnected population topology may perform well on unimodal problems. In this paper, a ring topology is used, following the suggestions of [11].
The ring topology assumes that particles are organized as a ring. In [21], a special ring topology is proposed by connecting the indices of particles. For example, the fourth particle X_4 is connected with the third particle X_3 and the fifth particle X_5; in other words, X_3 and X_5 are the two immediate neighbors of X_4. Figure 1 shows the employed ring topology. Based on the ring topology, a k-neighborhood radius is defined, where k is a predefined integer. For each particle X_i, its neighborhood consists of 2k + 1 particles (including itself): X_{i−k}, ..., X_{i−1}, X_i, X_{i+1}, ..., X_{i+k}. Obviously, the parameter k satisfies 0 ≤ k ≤ (N − 1)/2, where N is the swarm size. Figure 1 shows the 2-neighborhood radius of X_4, where 5 particles are covered by the neighborhood. Following the suggestions of [11], k = 2 is used in this paper.
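The index-based ring with wraparound can be expressed as a small helper function. This is our own sketch in Python, using 0-based indices, whereas the paper numbers particles from 1:

```python
def k_neighborhood(i, k, n):
    """Indices of the 2k+1 particles in the k-neighborhood of particle i
    on an index-based ring of n particles (particle i included).
    Indices wrap around, so the first and last particles are neighbors.
    """
    return [(i + offset) % n for offset in range(-k, k + 1)]
```

For example, `k_neighborhood(3, 2, 10)` returns `[1, 2, 3, 4, 5]`, which mirrors the 2-neighborhood of X_4 shown in Figure 1.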
Based on the k-neighborhood radius, a local neighborhood search strategy is proposed. For each particle X_i, a local trial particle LX_i is generated as follows [11]:

LX_i = r1·X_i + r2·pbest_i + r3·(X_c − X_d), (4)

where X_c and X_d are the position vectors of two particles randomly selected from the k-neighborhood, c, d ∈ [i − k, i + k] ∧ c ≠ d ≠ i, and r1, r2, and r3 are three random numbers within (0, 1) satisfying r1 + r2 + r3 = 1. In [11], the velocity of LX_i is kept the same as that of X_i. Although this velocity mechanism is simple, it may not be beneficial for the next flight of LX_i. Therefore, we use a similar method to generate the velocity LV_i:

LV_i = r1·V_i + r2·V_{pbest_i} + r3·(V_c − V_d), (5)

where V_{pbest_i} is the velocity vector of pbest_i, and V_c and V_d are the velocity vectors of X_c and X_d, respectively.
Besides the local neighborhood search strategy, a global neighborhood search strategy is proposed. For each particle X_i, a global trial particle GX_i is generated as follows [11]:

GX_i = r1·X_i + r2·gbest + r3·(X_e − X_f), (6)

where X_e and X_f are the position vectors of two particles randomly selected from the current swarm, e, f ∈ [1, N] ∧ e ≠ f ≠ i, and r1, r2, and r3 are three random numbers within (0, 1) with r1 + r2 + r3 = 1. In [11], the velocity of GX_i is also kept the same as that of X_i; thus the velocities of X_i, LX_i, and GX_i are identical, although LX_i (local) and GX_i (global) are two different types of trial particles. Like (5), this paper uses a new method to generate the velocity GV_i:

GV_i = r1·V_i + r2·V_gbest + r3·(V_e − V_f), (7)

where V_gbest is the velocity vector of gbest, and V_e and V_f are the velocity vectors of X_e and X_f, respectively. After generating the two trial particles LX_i and GX_i, a greedy selection mechanism is used: among X_i, LX_i, and GX_i, the best one is selected as the new X_i.
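The position parts of the local and global searches, equations (4) and (6), can be sketched as follows. This is our own Python sketch, not the authors' code; the function names and the normalization of r1, r2, r3 to sum to one are assumptions, and the velocity updates (5) and (7) would follow the same recombination pattern:

```python
import numpy as np

def local_trial(X, pbest, i, k, rng):
    """Position part of the local neighborhood search, equation (4):
    combine X_i, pbest_i, and the difference of two distinct ring
    neighbors with random weights r1 + r2 + r3 = 1."""
    n = X.shape[0]
    ring = [(i + o) % n for o in range(-k, k + 1) if o != 0]
    c, d = rng.choice(ring, size=2, replace=False)
    r = rng.random(3)
    r /= r.sum()                          # enforce r1 + r2 + r3 = 1
    return r[0] * X[i] + r[1] * pbest[i] + r[2] * (X[c] - X[d])

def global_trial(X, gbest, i, rng):
    """Position part of the global neighborhood search, equation (6):
    same recombination, but with gbest and two distinct particles
    drawn from the whole swarm."""
    n = X.shape[0]
    others = [j for j in range(n) if j != i]
    e, f = rng.choice(others, size=2, replace=False)
    r = rng.random(3)
    r /= r.sum()
    return r[0] * X[i] + r[1] * gbest + r[2] * (X[e] - X[f])
```

Because the weights sum to one, each trial particle is a random convex-like blend of an exploitation anchor (pbest_i or gbest) and an exploration term built from the difference of two other particles.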
Our approach NSBPSO embeds the local and global neighborhood search strategies into barebones PSO. The neighborhood search strategies focus on searching the neighbors of particles and provide different search behaviors, while BPSO concentrates on minimizing the dependency on control parameters (there is no ω, c1, or c2). By hybridizing BPSO with the neighborhood search strategies, NSBPSO is almost a parameter-free algorithm (except for the probability of the neighborhood search) which achieves a tradeoff between exploration and exploitation.
The main steps of NSBPSO are listed as follows.
Step 1. Randomly initialize the swarm, and evaluate the fitness values of all particles.
Step 2. Initialize pbest and gbest.
Step 3. For each particle X_i, calculate its new position vector according to (3). Evaluate the fitness value of X_i. If needed, update pbest_i and gbest.
Step 4. For each particle X_i, if rand(0, 1) < P_ns, where rand(0, 1) is a random number within [0, 1] and P_ns is the probability of conducting neighborhood search, then go to Step 5; otherwise go to Step 6.
Step 5. Generate a local trial particle LX_i according to (4) and (5), and a global trial particle GX_i according to (6) and (7). Evaluate the fitness values of LX_i and GX_i. Among X_i, LX_i, and GX_i, select the best one as the new X_i. If needed, update pbest_i and gbest.
Step 6. If the stop criterion is satisfied, stop the algorithm and output the results; otherwise go to Step 3.
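The steps above can be put together in a compact sketch. This is our own Python implementation sketch under stated assumptions, not the authors' code: boundary handling is omitted, and the function names and greedy bookkeeping are our choices:

```python
import numpy as np

def nsbpso(f, dim, n=40, max_fes=20000, p_ns=0.3, k=2,
           lb=-100.0, ub=100.0, seed=1):
    """Compact NSBPSO sketch following Steps 1-6 (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))                  # Step 1: init swarm
    fit = np.array([f(x) for x in X])
    pbest, pfit = X.copy(), fit.copy()                 # Step 2: pbest/gbest
    g = int(pfit.argmin())
    fes = n

    def trial(i, anchor, pool):
        # shared position recombination of equations (4) and (6)
        c, d = rng.choice(pool, size=2, replace=False)
        r = rng.random(3)
        r /= r.sum()                                   # r1 + r2 + r3 = 1
        return r[0] * X[i] + r[1] * anchor + r[2] * (X[c] - X[d])

    while fes < max_fes:                               # Step 6: stop test
        for i in range(n):
            # Step 3: barebones Gaussian sampling, equation (3)
            X[i] = rng.normal((pbest[i] + pbest[g]) / 2.0,
                              np.abs(pbest[i] - pbest[g]))
            fit[i] = f(X[i])
            fes += 1
            if fit[i] < pfit[i]:
                pfit[i], pbest[i] = fit[i], X[i].copy()
                if pfit[i] < pfit[g]:
                    g = i
            # Step 4: neighborhood search with probability p_ns
            if rng.random() < p_ns:
                ring = [(i + o) % n for o in range(-k, k + 1) if o != 0]
                others = [j for j in range(n) if j != i]
                # Step 5: local and global trials, greedy selection
                for cand in (trial(i, pbest[i], ring),
                             trial(i, pbest[g], others)):
                    fc = f(cand)
                    fes += 1
                    if fc < fit[i]:
                        fit[i], X[i] = fc, cand.copy()
                        if fc < pfit[i]:
                            pfit[i], pbest[i] = fc, cand.copy()
                            if pfit[i] < pfit[g]:
                                g = i
    return pbest[g], pfit[g]
```

For example, `nsbpso(lambda x: float((x * x).sum()), dim=5)` minimizes the sphere function; the only search-specific parameters are `p_ns` and `k`.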

Experimental Study
4.1. Test Problems. In order to verify the performance of our approach, twelve well-known benchmark problems are used in the following experiments [10]. According to their properties, the problems are divided into three types: unimodal problems (f1-f2), unrotated multimodal problems (f3-f8), and rotated multimodal problems (f9-f12). For the rotated problems, the original variable x is left-multiplied by an orthogonal matrix M to obtain the new rotated variable y = M*x. All test problems are to be minimized, and their global optima are zero. The specific descriptions of these problems are presented in Table 1.

4.2. Effects of the Parameter P_ns.
The main contribution of this paper is to minimize the effects of the control parameters and improve the performance of BPSO. Although NSBPSO eliminates the control parameters ω, c1, and c2, it introduces two new parameters, k and P_ns. The parameter k is the size of the neighborhood radius. The ring population topology used in this paper assumes that particles are connected by their indices; although X_2 and X_3 are two neighbors of X_1, they may not be the nearest particles to X_1 in terms of Euclidean distance. So the size of the neighborhood radius hardly affects the selection of particles in the local neighborhood search. Our empirical studies also confirm this (we do not list the results of NSBPSO with different k-neighborhood radii here). According to the suggestions of [11], k is set to 2 in this paper. The parameter P_ns controls the probability of conducting neighborhood search: a larger P_ns results in more neighborhood search operations, while a smaller P_ns results in fewer. This may affect the performance of NSBPSO. To investigate the effects of P_ns, this section presents an experimental study in which P_ns is set to 0.0, 0.1, 0.3, 0.5, 0.7, and 1.0, respectively, and the performance of NSBPSO under these settings is compared.
For the other parameters of NSBPSO, we use the following settings by the suggestions of [10]. The population size N is set to 40. When the number of fitness evaluations (FEs) reaches the maximum value MAX_FEs, the algorithm stops running. In the experiment, MAX_FEs is set to 2.0E+05. For each test problem, NSBPSO is run 30 times, and the mean fitness error values are reported.
Table 2 presents the computational results of NSBPSO under different P_ns, where "Mean" represents the mean fitness error values. The best results among the comparison are shown in boldface. As seen, the performance of NSBPSO is not sensitive to the parameter P_ns: smaller (P_ns < 0.3) and larger (P_ns > 0.5) values achieve similar results. For P_ns = 0.0, NSBPSO reduces to the original BPSO, because no neighborhood search operations are conducted; in this case, the algorithm shows poor performance and falls into local minima on most test functions. When P_ns = 0.1, NSBPSO significantly outperforms NSBPSO with P_ns = 0.0, which demonstrates that the neighborhood search strategies are very effective. Even with a small P_ns, NSBPSO can obtain promising results.
The exact value of P_ns hardly affects the performance of NSBPSO, and any P_ns > 0 is applicable for all test problems. In this paper, P_ns = 0.3 is used in the following experiments.
Figure 2 presents the convergence processes of NSBPSO with different P_ns. Although NSBPSO with different P_ns values can find the global optimum on the majority of test functions, the settings show different convergence characteristics. For problem f1, larger P_ns values converge faster than smaller ones. For problem f2, P_ns = 0.1 converges faster than the other values. For f8, P_ns = 0.3 shows the fastest convergence speed.

4.3. Comparison of NSBPSO with Other PSO Algorithms. In this section, experiments are conducted to compare nine PSO algorithms, including the proposed NSBPSO, on the 12 test problems. The involved algorithms are listed as follows.
For the sake of fair comparison, we use the same settings for the common parameters. For all algorithms, the population size N is set to 40, and the maximum number of fitness evaluations (MAX_FEs) is set to 2.0E+05. For the standard PSO, ω is linearly decreased from 0.9 to 0.4, and c1 = c2 = 1.49618. For NSBPSO, the probability of neighborhood search P_ns is set to 0.3. The parameter settings of UPSO, CPSO-H, FIPS, and CLPSO are described in [10]. For NSPSO and APSO, the parameters are the same as in the literature [11,13], respectively. For each test problem, each algorithm is run 30 times, and the mean fitness error values are reported.
Table 3 lists the comparison results of NSBPSO with other eight PSO algorithms, where "Mean" represents the mean fitness error values.The best results are shown in boldface.
From the results, it can be seen that NSBPSO outperforms PSO, BPSO, and FIPS on all test problems. UPSO and APSO perform better than NSBPSO on f2, while NSBPSO achieves better results on the remaining 11 problems. CPSO-H obtains a better solution than NSBPSO on f2, while NSBPSO outperforms CPSO-H on 10 problems; for problem f6, both NSBPSO and CPSO-H can find the global optimum. NSPSO performs worse than NSBPSO on the majority of test problems. From the comparison of BPSO and PSO, BPSO outperforms PSO on 6 problems, while PSO achieves better results than BPSO on the remaining 6 problems; the performance of BPSO is thus similar to that of PSO on these problems. Compared to PSO, however, BPSO is more competitive, because BPSO does not contain any control parameter (except for the population size), while PSO employs empirical parameter settings. By hybridization with the neighborhood search, NSBPSO (respectively NSPSO) achieves significant improvements over BPSO (respectively PSO). Compared to NSPSO, NSBPSO not only achieves better results but also has fewer control parameters.
In order to compare the performance differences between NSBPSO and the other eight PSO algorithms, we conduct the Wilcoxon signed-rank test following the suggestions of [22]. Table 4 shows the p-values achieved by the Wilcoxon test; p-values below 0.05 are shown in boldface. As shown, NSBPSO is significantly better than all other algorithms except CLPSO and NSPSO. Though NSBPSO is not significantly better than these two, it outperforms them on the majority of test problems.

Application on Ship Design
5.1. Problem Description. This section investigates the performance of our approach NSBPSO on a conceptual ship design problem. The original optimization statements are presented in [23,24]. The ship design optimization problem used in this paper has six design variables, three objectives, and 9 inequality constraints. The design variables are length (L), beam (B), depth (D), draft (T), block coefficient (C_B), and speed in knots (V_k). The ship design problem aims to minimize the transportation cost (TC) and the lightship weight (LW) and to maximize the annual cargo (AC) [24]:

min f1 = TC, min f2 = LW, max f3 = AC, (8)

where TC = annual cost / AC, AC = (DWT − W_fuel − DWT_margin) · RTPA, and LW = W_s + W_o + W_m (the steel, outfit, and machinery weights). The specific model definition is described in Table 5 [25]. The search ranges of the six variables L, B, D, T, C_B, and V_k are listed in Table 6.
There are 9 inequality constraints, listed in (9).

5.2. Constraint Handling.
In order to deal with the constraints, an adaptive penalty method is employed following the suggestions of [19]. Let f(X) be the objective function (TC, LW, or AC). The fitness function F(X) is defined by

F(X) = f(X) + Σ_{j=1}^{m} λ_j · |G_j(X)|, (10)

where m is the number of inequality constraints, G_j(X) = min{0, g_j(X)} is the violation of the jth constraint, g_j(X) is the jth constraint, f̄(X) is the mean objective function value in the current swarm, and λ_j is a penalty coefficient defined as follows:

λ_j = |f̄(X)| · Ḡ_j(X) / Σ_{l=1}^{m} [Ḡ_l(X)]², (11)

where Ḡ_j(X) is the average violation of the jth constraint over all particles in the swarm.
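A vectorized sketch of the adaptive penalty described above, assuming coefficients built from the swarm-average objective and per-constraint average violations as the text describes; the feasible-swarm fallback (no penalty when no particle violates anything) is our own assumption:

```python
import numpy as np

def penalized_fitness(objs, cons):
    """Adaptive penalty for a minimization swarm.

    objs : (n,) objective values f(X) of the current swarm
    cons : (n, m) constraint values g_j(X); g_j(X) >= 0 means satisfied
    returns the (n,) penalized fitness values F(X)
    """
    viol = np.abs(np.minimum(0.0, cons))     # |G_j(X)|, G_j = min{0, g_j}
    mean_viol = viol.mean(axis=0)            # average violation per constraint
    denom = (mean_viol ** 2).sum()
    if denom == 0.0:                         # whole swarm feasible: no penalty
        return objs.copy()
    lam = np.abs(objs.mean()) * mean_viol / denom   # coefficients lambda_j
    return objs + viol @ lam
```

Because the coefficients scale with the swarm's mean objective value and with how often each constraint is violated, no hand-tuned penalty weights are needed, which fits the paper's goal of keeping NSBPSO nearly parameter-free.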

5.3. Computational Results.
The ship design problem is a multiobjective optimization problem with three objectives. Following the suggestions of [24,25], this paper considers only single-objective optimization. Therefore, the whole problem is divided into three single-objective optimization problems: (1) minimize the transportation cost (TC), (2) minimize the lightship weight (LW), and (3) maximize the annual cargo (AC).
In this section, we conduct three series of experiments for the three single-objective optimization problems. In order to verify the performance of our approach NSBPSO, we compare it with four other algorithms: (1) Parsons and Scott's method [24], (2) the standard PSO, (3) barebones PSO (BPSO), (4) PSO with neighborhood search (NSPSO), and (5) our approach NSBPSO.
To ensure a fair comparison, the same settings are used for the common parameters. For all algorithms, the population size and the maximum number of fitness evaluations (MAX_FEs) are set to 100 and 1.0E+06, respectively. For the standard PSO, ω is linearly decreased from 0.9 to 0.4, and c1 = c2 = 1.49618. For NSBPSO and NSPSO, P_ns is set to 0.3. For each optimization problem, each algorithm is run 10 times, and the best results among these runs are presented.
Tables 7-9 show the computational results for the three problems. In Table 7, NSBPSO achieves the minimal transportation cost among the five algorithms, but it also obtains the minimal value of annual cargo. That is, NSBPSO is the best of the five algorithms for the objective TC, but it cannot obtain the best results for all three objectives at once; for the annual cargo AC, Parsons and Scott's method [24] is the best. Similar conclusions can be drawn from Tables 8 and 9. The results demonstrate that NSBPSO performs better than the other three PSO algorithms on each single-objective version of the ship design problem. When all objectives are considered together, we cannot conclude which algorithm is the best; to solve the problem fully, multiobjective optimization algorithms would be needed. Figures 3, 4, and 5 present the convergence curves of PSO, BPSO, NSPSO, and NSBPSO for the three single-objective problems. For minimizing the transportation cost, NSBPSO shows a faster convergence speed at the last stage of the evolution. For minimizing the lightship weight, NSBPSO converges faster than the other three PSO algorithms. For maximizing the annual cargo, NSBPSO and NSPSO show similar convergence characteristics.

Conclusions
Barebones PSO (BPSO) is a new variant of PSO which eliminates the velocity term. Although some reported results show that BPSO is better than PSO, it still gets stuck when solving complex multimodal problems. In order to enhance the performance of BPSO, this paper proposes an improved version called BPSO with neighborhood search (NSBPSO). The new approach embeds one local and one global neighborhood search strategy into the original BPSO to achieve a tradeoff between exploration and exploitation.
Compared to other improved PSO algorithms, NSBPSO is an almost parameter-free algorithm, except for the probability of neighborhood search (P_ns).
Experimental studies are conducted on twelve well-known benchmark problems, including unimodal, multimodal, and rotated multimodal problems. Computational results show that the exact value of the parameter P_ns hardly affects the performance of NSBPSO: whenever P_ns > 0, NSBPSO obtains good performance. A further comparison demonstrates that NSBPSO performs better than, or at least comparably to, several other state-of-the-art PSO algorithms. Compared to PSO with neighborhood search (NSPSO), our approach NSBPSO not only achieves better results but also has fewer control parameters.
For the ship design problem, NSBPSO performs better than the other three PSO algorithms when optimizing a single objective. When all three objectives are considered, we cannot determine which algorithm is the best, because each algorithm achieves better results than the others on only one or two objectives. To tackle this, multiobjective optimization algorithms can be used to optimize the three objectives simultaneously; this will be investigated in future work.

Figure 2: The convergence curves of NSBPSO with different P_ns.

Table 1: The twelve benchmark problems.

Table 2: Results achieved by NSBPSO with different P_ns.

Table 3: Results achieved by the nine PSO algorithms.

Table 4: Results of the Wilcoxon signed-rank test between NSBPSO and the other eight PSO algorithms.

Table 6: Search ranges of the six design variables.

Table 7: Computational results for minimizing transportation cost.

Table 8: Computational results for minimizing lightship weight.

Table 9: Computational results for maximizing annual cargo.