Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two widely used optimization methods that have been applied in various areas owing to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, PSO may be trapped in local optima for lack of a powerful global exploration capability, while FWA converges with difficulty in some cases because of the relatively low local exploitation efficiency of its noncore fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which modified operators of FWA are embedded into the solving process of PSO. During the iterations, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation abilities of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and to avoid premature convergence. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions are employed, and PS-FW is compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and AIWPSO algorithms. The results show that PS-FW is an efficient, robust, and fast-converging method for solving global optimization problems.
This work was supported by the National Natural Science Foundation of China (51674086, 51534004) and the Northeast Petroleum University Innovation Foundation for Postgraduate (YJSCX2015-012NEPU).
1. Introduction
Global optimization problems are common in engineering and other related fields [1–3], and they are usually difficult to solve because of numerous local optima and complex search spaces, especially in high dimensions. Many methods for solving optimization problems have been reported in the past few years. Recently, stochastic optimization algorithms have attracted increasing attention because they can obtain good solutions without requiring any analytical properties of the objective function. Accordingly, many effective metaheuristic algorithms have been presented, such as simulated annealing (SA) [4], differential evolution (DE) [5], the genetic algorithm (GA) [6], particle swarm optimization (PSO) [7], ant colony optimization (ACO) [8], the artificial bee colony (ABC) [9], and the fireworks algorithm (FWA) [10].
Among these intelligent algorithms, PSO and FWA have shown outstanding performance in solving global optimization problems over the last several years. PSO is a population-based algorithm originally proposed by Kennedy and Eberhart [7], inspired by the foraging behavior of bird flocks. The fireworks algorithm is a newer swarm intelligence algorithm motivated by observing fireworks explosions. Owing to their few decision parameters, simple implementation, and good scalability, PSO and FWA have been widely applied since they were proposed, including shunting schedule optimization of electric multiple unit depots [11], optimal operation of trunk natural gas pipelines [12], location optimization of logistics distribution centers [13], artificial neural network design [14], warehouse scheduling [15], fertilization optimization [16], power system reconfiguration [17], and multimodal function optimization [18].
Although PSO and FWA are highly successful on some classes of global optimization problems, certain issues must be addressed when they are extended to handle complex high-dimensional optimization problems. PSO is highly efficient on unimodal problems, but it is easily trapped in local optima on multimodal problems. Moreover, FWA struggles to converge on optimization problems whose optimal solutions are not at the origin. This is because neither algorithm properly balances exploration and exploitation. Because the optimal particle dominates the solving process, PSO suffers from poor swarm diversity in the later iterations and relatively weak exploration ability [19], while the fireworks and sparks in FWA are not well informed by the whole swarm [20] and the FWA framework lacks local search efficiency for noncore fireworks [21]. To improve the performance of PSO and FWA, a considerable number of modified algorithms have been proposed. For example, Nickabadi et al. presented the AIWPSO algorithm, which adopts a new adaptive inertia weight approach [22]. By embedding a reverse predictor and adding a repulsive force into the basic algorithm, the RPPSO was developed [23]. Wang and Liu used three strategies to improve the standard algorithm: best neighbor replacement, an abandonment mechanism, and chaotic searching [24]. Souravlias and Parsopoulos introduced a PSO-based variant that dynamically assigns a different computational budget to each particle based on the quality of its neighbors [25]. Based on a self-adaption principle and a bimodal Gaussian function, the advanced fireworks algorithm (AFWA) was proposed [26]. Liu et al. presented several methods for computing the explosion amplitude and the number of sparks [27]. Pei et al.
proposed using the elite point of the approximated landscape of the fireworks swarm and discussed the effectiveness of surrogate-assisted FWA [28]. Zheng et al. redesigned the explosion operator, mutation operator, selection strategy, and mapping rules of FWA, which led to the enhanced fireworks algorithm (EFWA) [29, 30] and the dynamic search fireworks algorithm (dynFWA) [31]. Zheng et al. also proposed the cooperative FWA framework (CoFFWA), which contains an independent selection method and a crowdedness-avoiding cooperative strategy [21]. Li et al. investigated the operators of FWA, introduced a novel guiding spark [32], and proposed the adaptive fireworks algorithm (AFWA) [33] and the bare bones fireworks algorithm (BBFWA) [34].
Hybrid algorithms can combine various exploration and exploitation strategies for high-dimensional multimodal optimization problems and have gradually become a new research area. For example, Valdez et al. combined the advantages of PSO and GA in a modified hybrid method [35]. In the PS-ABC algorithm introduced by Li et al., the global optimum is sought by combining the local search phase of PSO with two global search phases of ABC [19]. Pandit et al. presented SPSO-DE, in which PSO and DE share domain information with one another to overcome their respective weaknesses [36]. By changing the generation and selection strategy of explosion sparks, Gao and Diao proposed the CA-FWA [37]. Zhang et al. proposed the BBO-FW algorithm, which improves the interaction between fireworks [38]. By combining FWA with the operators of DE, another novel hybrid optimization algorithm was proposed [20].
In this paper, by combining the exploitation ability of PSO with the exploration ability of FWA, a novel hybrid optimization algorithm called PS-FW is proposed. Based on the solving process of PSO, the operators of FWA are embedded into the update operation of the particle swarm. To promote the balance of exploitation and exploration in PS-FW during the iterations, we present three major techniques. First, an abandonment and supplement strategy abandons a certain number of poor-quality particles and supplements the swarm with new individuals generated by FWA. Second, considering the information exchange between the optimal firework and its neighbor in each dimension, the method for obtaining the explosion amplitude is made adaptive, and the generation of explosion sparks is modified by incorporating a greedy strategy. Third, the conventional Gaussian mutation operator is abandoned, and a novel mutation operator based on social cognition and learning is proposed. The performance of PS-FW is compared with several existing optimization algorithms, and the experimental results show that the proposed PS-FW is more effective in solving global optimization problems.
The rest of the paper is organized as follows: Section 2 describes the standard PSO and FWA. Section 3 presents the PS-FW algorithm in detail. Section 4 reports the simulation results on 22 high-dimensional benchmark functions and the corresponding comparisons between PS-FW and other algorithms. Finally, conclusions are drawn in Section 5.
2. Related Work
2.1. PSO Algorithm
In the PSO algorithm, the particles scatter over the search space of the optimization problem and each particle denotes a feasible solution. Each particle carries three pieces of information: the current position x_i, the velocity v_i, and the previous best position pbest_i. Assume that the optimization problem is D-dimensional and M is the size of the swarm; then the position and velocity of the ith (i = 1, 2, …, M) particle are denoted x_i = (x_{i,1}, x_{i,2}, …, x_{i,D}) and v_i = (v_{i,1}, v_{i,2}, …, v_{i,D}), respectively, while the previous best position is pbest_i = (pbest_{i,1}, pbest_{i,2}, …, pbest_{i,D}). Besides, the best position encountered by the entire swarm so far is the current global best position gbest = (gbest_1, gbest_2, …, gbest_D). In each generation, v_i and x_i are updated by the following equations:

(1) v_{i,k}^{t+1} = w \cdot v_{i,k}^{t} + c_1 \cdot r_1 \cdot (pbest_{i,k}^{t} - x_{i,k}^{t}) + c_2 \cdot r_2 \cdot (gbest_k^{t} - x_{i,k}^{t}),

(2) x_{i,k}^{t+1} = x_{i,k}^{t} + v_{i,k}^{t+1},

where c_1 and c_2 are two learning factors that weight the cognitive and social components, r_1 and r_2 are random real numbers in the interval [0, 1], and w is the inertia weight, which controls the convergence speed of the algorithm.
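The update rules (1)-(2) can be sketched as follows; this is a minimal Python illustration for a single particle (the function name and default parameter values are ours, not the paper's):

```python
import random

def pso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle (Eqs. (1)-(2))."""
    new_v = [w * v[k]
             + c1 * random.random() * (pbest[k] - x[k])   # cognitive term
             + c2 * random.random() * (gbest[k] - x[k])   # social term
             for k in range(len(x))]
    new_x = [x[k] + new_v[k] for k in range(len(x))]
    return new_x, new_v
```

Note that r_1 and r_2 are redrawn for every dimension here, one common convention among PSO implementations.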
2.2. Fireworks Algorithm
In FWA, a firework or a spark denotes a potential solution of the optimization problem, while the process of producing sparks from fireworks represents a search of the feasible space. As in other optimization algorithms, the optimal solutions are obtained by successive iterations. In each iteration, sparks are produced in two ways: by explosion and by Gaussian mutation. The explosion of fireworks is governed by the explosion amplitude and the number of explosion sparks. Compared with fireworks of worse fitness, fireworks with better fitness have smaller explosion amplitudes and produce more explosion sparks. Suppose that N denotes the number of fireworks; then the ith (i = 1, 2, …, N) firework can be denoted as \bar{x}_i = (\bar{x}_{i,1}, \bar{x}_{i,2}, …, \bar{x}_{i,D}) for a D-dimensional optimization problem. The explosion amplitude is obtained by (3) and the number of sparks is calculated by (4):

(3) A_i = \hat{A} \cdot \frac{f(\bar{x}_i) - y_{min} + \varepsilon}{\sum_{i=1}^{N} (f(\bar{x}_i) - y_{min}) + \varepsilon},

(4) s_i = M_e \cdot \frac{y_{max} - f(\bar{x}_i) + \varepsilon}{\sum_{i=1}^{N} (y_{max} - f(\bar{x}_i)) + \varepsilon},

where f(\bar{x}_i) denotes the objective function value of the ith firework, A_i and s_i are the explosion amplitude and the number of explosion sparks of the ith firework, respectively, y_{max} = max(f(\bar{x}_i)), y_{min} = min(f(\bar{x}_i)), \hat{A} and M_e are two constants that scale the explosion amplitude and the number of explosion sparks, respectively, and \varepsilon is the machine epsilon.
Moreover, the bounds of s_i are defined as follows:

(5) s_i = round(a \cdot M_e) if s_i < a \cdot M_e; s_i = round(b \cdot M_e) if s_i > b \cdot M_e; s_i = round(s_i) otherwise,

where a and b are two constants that control the minimum and maximum numbers of sparks, respectively.
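Equations (3)-(5) can be condensed into a short sketch for a minimizing problem; the function name and the default constants (Â = 40, M_e = 50, a = 0.04, b = 0.8) are illustrative assumptions, not values from the paper:

```python
def explosion_params(fitness, A_hat=40.0, Me=50, a=0.04, b=0.8, eps=1e-12):
    """Explosion amplitudes (Eq. (3)) and bounded spark counts (Eqs. (4)-(5))
    for a minimising problem; `fitness` is the list f(x_i) of all fireworks."""
    y_min, y_max = min(fitness), max(fitness)
    denom_A = sum(f - y_min for f in fitness) + eps
    denom_s = sum(y_max - f for f in fitness) + eps
    amps, sparks = [], []
    for f in fitness:
        amps.append(A_hat * (f - y_min + eps) / denom_A)   # better -> smaller
        s = Me * (y_max - f + eps) / denom_s               # better -> more sparks
        s = min(max(s, a * Me), b * Me)                    # Eq. (5) bounds
        sparks.append(round(s))
    return amps, sparks
```

As the sketch makes explicit, the best firework receives the smallest amplitude and the largest spark budget.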
To generate each explosion spark of the ith firework, an offset is added to \bar{x}_i according to the following equation:

(6) \hat{x}_i^j = \bar{x}_i + \Delta h,

where \hat{x}_i^j is the jth explosion spark of the ith firework and \Delta h = A_i \cdot rand(-1, 1) \cdot \hat{B}, where \hat{B} is a D-dimensional vector with \hat{z}_i^j entries equal to 1 and D - \hat{z}_i^j entries equal to 0, \hat{z}_i^j denotes the number of randomly selected dimensions of \bar{x}_i with \hat{z}_i^j = D \cdot rand(), j = 1, 2, …, s_i, and rand(-1, 1) and rand() are random numbers in the intervals [-1, 1] and [0, 1], respectively.
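A minimal sketch of Eq. (6) follows; note that in standard FWA one single offset is drawn and then shared by every selected dimension (the function name is ours):

```python
import random

def explosion_spark(firework, A_i):
    """Standard-FWA explosion spark (Eq. (6)): one shared offset applied to a
    random subset of dimensions chosen via the 0/1 mask B-hat."""
    D = len(firework)
    z = round(D * random.random())           # number of perturbed dimensions
    dims = random.sample(range(D), z)        # the mask B-hat as an index set
    offset = A_i * random.uniform(-1, 1)     # single offset shared by all dims
    spark = list(firework)
    for k in dims:
        spark[k] += offset
    return spark
```

This shared-offset behavior is exactly what the modified operator of Section 3.3.2 later replaces with per-dimension offsets.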
Another type of spark, the Gaussian spark, is generated by the Gaussian mutation operator. In each generation, a certain number of Gaussian sparks are generated, each transformed from a randomly selected firework. For the selected firework \bar{x}_i, its Gaussian spark is generated by

(7) \tilde{x}^j = (\tilde{O} - \tilde{B}_i) \cdot \bar{x}_i + Gaussian(1, 1) \cdot \bar{x}_i \cdot \tilde{B}_i,

where \tilde{x}^j is the jth Gaussian spark, \tilde{O} is a D-dimensional vector whose entries are all 1, \tilde{B}_i is a D-dimensional vector with \tilde{z}_i entries equal to 1 and D - \tilde{z}_i entries equal to 0, \tilde{z}_i represents the number of randomly selected dimensions of \bar{x}_i with \tilde{z}_i = D \cdot rand(), and Gaussian(1, 1) is a random number drawn from the Gaussian distribution with mean 1 and standard deviation 1.
To pass information to the next generation, new firework populations are chosen to continue the iteration. All the fireworks, explosion sparks, and Gaussian sparks have the chance to be selected for the next iteration. The location with the best fitness is kept for the next generation, while the other N − 1 locations are selected by the following selection operator:

(8) R(X_i) = \sum_{j \in K} d(X_i, X_j) = \sum_{j \in K} \|X_i - X_j\|, \quad p(X_i) = \frac{R(X_i)}{\sum_{k \in K} R(X_k)},

where K denotes the set of all the original fireworks and both types of sparks, X_i, X_j, and X_k are the ith, jth, and kth locations in K, respectively, R(X_i) is the sum of distances between the ith location and all other locations, and p(X_i) denotes the probability that the ith location is selected.
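The distance-based probabilities of Eq. (8) can be sketched as follows (a small illustration under the Euclidean-distance assumption; the function name is ours):

```python
def selection_probs(locations):
    """Distance-based selection probabilities of standard FWA (Eq. (8)):
    locations in crowded regions receive lower probability."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    R = [sum(dist(x, y) for y in locations) for x in locations]  # R(X_i)
    total = sum(R)
    return [r / total for r in R]                                # p(X_i)
```

Note that the probabilities depend only on spatial crowdedness, not on fitness; PS-FW later discards this operator for exactly the cost reason discussed in Section 3.5.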
3. Hybrid Optimization Algorithm Based on PSO and FWA
The exploitation process focuses on using existing information to find better solutions, whereas the exploration process emphasizes seeking optimal solutions across the entire space. In PSO, guided by their historical best solutions and the current global best solution, the particles quickly find better solutions, demonstrating the algorithm's excellent exploitation efficiency. In FWA, the fireworks search for the global optimum over the whole space by performing explosion and mutation operations, demonstrating FWA's outstanding exploration capability. To combine the advantages of the two algorithms, a hybrid optimization method (PS-FW) based on PSO and FWA is proposed.
3.1. Feasibility Analysis
A hybrid algorithm arises mainly from the effective combination of the operators of its component algorithms. To clarify the performance gains from combining PSO with the fireworks algorithm, we draw Figures 1 and 2 to illustrate the optimization mechanism. As shown in Figure 1, in standard PSO the ith particle moves from point 1 to point 4 under the joint influence of velocity inertia, self-cognition, and social information. When the operators of FWA are added, the particle is transformed into a firework, performs explosion and mutation operations, and eventually reaches the position of a firework or spark, such as point 5 in Figure 1. By performing the operators of FWA, the particle can explore better solutions in multiple directions and jump out of a local optimum region, as depicted in Figure 1. Thus the operators of FWA improve the global search ability of PSO. Conversely, the search region in FWA is determined by the explosion amplitude, and low-quality fireworks have large amplitudes, which may lead to an incomplete search without cooperation with other fireworks. When a low-quality firework generates explosion and mutation sparks, the newly selected location may skip over the global optimum region without the attraction of the remaining fireworks and arrive at point 2. By applying the operators of PSO after the ith firework updates its location, the information of its own historical best location and the current global best location is taken into account, and a new solution is found at point 5, as shown in Figure 2. Therefore, the operators of PSO strengthen the local search efficiency of FWA. Based on the above analysis, we conclude that combining PSO and FWA is an effective way to form a superior optimization algorithm.
Optimization mechanism of adding operators of FWA to PSO algorithm.
Optimization mechanism of adding operators of PSO to FWA.
3.2. The Abandonment and Supplement Mechanism
Because of their memory, the particles converge quickly to the current optimal solution. However, this aggregation effect reduces the diversity of the population, which makes the search of the whole feasible space inefficient. In this paper, to strengthen the balance between the exploitation and exploration abilities of PS-FW, we adopt an abandonment and supplement strategy with three main steps. (i) All the particles of the swarm x_1, x_2, …, x_M are sorted in ascending order of fitness. The Pnum particles with better fitness are retained for the next iteration, and the FWnum (satisfying Pnum + FWnum = M) particles with worse fitness are abandoned. (ii) The Pnum excellent individuals, denoted x_{F1}, x_{F2}, …, x_{FPnum}, are used to execute the explosion, mutation, and selection operators. (iii) The new individuals obtained by the operators of FWA are added to the population, restoring the number of particles and forming the new swarm for the next iteration. The abandonment and supplement strategy not only retains the information of excellent individuals so that they participate in the subsequent computation, but also keeps poor individuals from wasting computing resources. A question remains: how to determine Pnum? In the early iterations, the algorithm should emphasize exploration and search for the optimum globally, so the particles executing the operators of FWA should form the majority. In the later iterations, the search should focus around the current global optimum, so the number of retained excellent individuals should be larger. Based on this discussion, FWnum is computed by (9), in which FWnum decreases over the iterations.
(9) FWnum = round[(FWmax − FWmin) \cdot ((I_{max} − t)/I_{max})^r + FWmin],

where FWmax and FWmin are the upper and lower bounds of the number of abandoned particles, respectively, I_{max} is the maximum number of iterations, t denotes the current iteration, round[·] rounds the value in brackets, and r is a positive integer.
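Equation (9) is a simple decay schedule and can be sketched directly; the default bounds (FW_max = 40, FW_min = 10, r = 2) are illustrative assumptions:

```python
def fw_num(t, I_max, FW_max=40, FW_min=10, r=2):
    """Number of abandoned/replaced particles at iteration t (Eq. (9));
    decays from FW_max towards FW_min as the search proceeds."""
    return round((FW_max - FW_min) * ((I_max - t) / I_max) ** r + FW_min)
```

With r > 1 the schedule drops quickly at first and flattens out, so FWA-driven exploration dominates early and PSO-driven exploitation dominates late.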
3.3. Modified Explosion Operator
3.3.1. Adaptive Explosion Amplitude
As analyzed above, the definition of the explosion amplitude in standard FWA limits the diversity of the explosion sparks generated by excellent fireworks, thus decreasing the local search ability of the algorithm. In the enhanced fireworks algorithm (EFWA) [29], to avoid this weakness, a minimal explosion amplitude check mechanism is proposed: any explosion amplitude below a certain threshold is reset to the threshold value, and the threshold decreases over the iterations. Let δ denote the threshold of the explosion amplitude; in EFWA it is computed as

(10) δ = \hat{A}_{init} − \frac{\hat{A}_{init} − \hat{A}_{final}}{I_{max}} \cdot \sqrt{(2 I_{max} − t) \cdot t},

where \hat{A}_{init} and \hat{A}_{final} are the upper and lower bounds of the threshold, respectively.
In this paper, building on the minimal explosion amplitude check mechanism, the basic explosion amplitude of each firework is calculated by (3) and then adjusted by the following two methods.
(1) For fireworks whose explosion amplitude is greater than the threshold δ, a control factor λ is applied to the explosion amplitude. The control factor gives the generated explosion sparks a larger search scope in the early iterations, which effectively enhances the exploration ability of the algorithm; in the later iterations, the explosion amplitude is reduced to improve the search efficiency around the current global optimum. The adjustment of the explosion amplitude is given by (11), and the control factor is computed by (12):

(11) A_i = A_i \cdot λ, \quad ∀ A_i > δ,

(12) λ = λ_{min} \cdot (λ_{max}/λ_{min})^{1/(1 + t/I_{max})},

where λ_{min} and λ_{max} are the lower and upper bounds of the control factor, respectively.
(2) When the explosion amplitude of firework \bar{x}_i is less than the threshold, the optimal firework and its neighbor information are used to determine the explosion amplitude in the hybrid algorithm. Since PS-FW is based on the framework of PSO, the positions of all individuals approach the current best position, so the fitness of the current optimal individual is close to that of its neighbors. In other words, if the explosion amplitude of a firework is too small, the firework is probably located near the current best location. Therefore, a new explosion amplitude of the firework \bar{x}_i is generated from the per-dimension deviations between the current best firework and a neighboring firework. This generation method adapts to the solving process in two ways. In the early iterations, the fireworks are scattered and the per-dimension deviation between the optimal firework and its neighbor is large, which yields a larger explosion amplitude and improves the probability of finding the global optimum. In the later iterations, the fireworks gather around the current best location and the per-dimension offsets shrink, which decreases the explosion amplitude and improves the local search ability of PS-FW. The explosion amplitude is obtained in two steps. (i) Randomly select a firework \bar{x}_j near the current optimal firework according to fitness. (ii) Update the explosion amplitude of the ith firework according to

(13) A_i = \frac{\sum_{k=1}^{D} |\bar{x}_{best,k} − \bar{x}_{j,k}|}{D},

where \bar{x}_{best,k} denotes the value of the kth dimension of the current optimal firework.
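Both amplitude adjustments can be combined into one sketch; the function name and default constants (δ = 2, λ_min = 0.5, λ_max = 8) are illustrative assumptions:

```python
def adjusted_amplitude(A_i, t, I_max, best_fw, neighbor_fw,
                       delta=2.0, lam_min=0.5, lam_max=8.0):
    """Amplitude adjustment of PS-FW: scale by the control factor of
    Eqs. (11)-(12) when A_i exceeds the threshold delta, otherwise use the
    mean per-dimension deviation of Eq. (13)."""
    if A_i > delta:
        # Eq. (12): starts at lam_max (t = 0), decays towards
        # sqrt(lam_min * lam_max) as t approaches I_max
        lam = lam_min * (lam_max / lam_min) ** (1.0 / (1.0 + t / I_max))
        return A_i * lam                                     # Eq. (11)
    D = len(best_fw)
    return sum(abs(b - n) for b, n in zip(best_fw, neighbor_fw)) / D  # Eq. (13)
```

The second branch shrinks automatically as the swarm converges, since the best firework and its neighbor move closer together in every dimension.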
3.3.2. Modified Explosion Sparks Generation
In FWA, when an explosion spark is generated, the offset Δh is calculated only once, so all the selected dimensions receive the same change and different directions are searched ineffectively. The PS-FW algorithm therefore introduces a new explosion spark generation method. First, the location offset is applied in all dimensions of a firework instead of a randomly selected subset. Furthermore, a different offset is computed for each dimension according to (14), which increases the diversity of the explosion sparks and the global search capability of the hybrid algorithm. Let \bar{x}_{temp} denote the ith firework without a location offset, \bar{x}^{+} the ith firework whose kth dimension adds the offset, and \bar{x}^{−} the ith firework whose kth dimension subtracts the offset. As shown in (15), inspired by the greedy algorithm, the hybrid algorithm decides which offset to keep based on the objective function value, which effectively improves the local search capability and accelerates convergence:

(14) Δ\hat{h}_k = A_i \cdot Gaussian(0, 1),

(15) \hat{x}_{i,k}^{j} = \bar{x}_{i,k} + Δ\hat{h}_k if f(\bar{x}^{+}) ≤ min(f(\bar{x}_{temp}), f(\bar{x}^{−})); \hat{x}_{i,k}^{j} = \bar{x}_{i,k} − Δ\hat{h}_k if f(\bar{x}^{−}) ≤ min(f(\bar{x}_{temp}), f(\bar{x}^{+})); \hat{x}_{i,k}^{j} = \bar{x}_{i,k} if f(\bar{x}_{temp}) ≤ min(f(\bar{x}^{+}), f(\bar{x}^{−})),

where \hat{x}_{i,k}^{j} and Δ\hat{h}_k are the value and offset of the kth dimension of the jth explosion spark of the ith firework, respectively, Gaussian(0, 1) is a random number following the standard normal distribution, i and j are integers in the intervals [1, Pnum] and [1, s_i], respectively, and min() returns the minimum of the values in parentheses.
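The per-dimension greedy move of Eqs. (14)-(15) can be sketched for one dimension as follows (a minimizing problem is assumed; the function name is ours):

```python
import random

def greedy_dimension_update(firework, k, A_i, f):
    """Per-dimension greedy move (Eqs. (14)-(15)): try +offset and -offset on
    dimension k and keep whichever of the three candidates evaluates best."""
    offset = A_i * random.gauss(0, 1)          # Eq. (14): fresh offset per dim
    plus, minus = list(firework), list(firework)
    plus[k] += offset
    minus[k] -= offset
    # Eq. (15): keep the candidate with the smallest objective value
    return min([plus, minus, firework], key=f)
```

Because the unmodified firework is always among the candidates, the objective value of a spark can never get worse at any dimension step, which is the greedy property the text describes.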
Assume that numE denotes the total number of explosion sparks generated by all fireworks, Smin and Smax represent the lower and upper bounds of the search scope, and Smin,k and Smax,k are the corresponding bounds of the kth dimension. Based on the explosion operator introduced in Sections 3.3.1 and 3.3.2, the detailed explosion operator is given in Algorithm 1.
<bold>Algorithm 1: </bold>Generating explosion sparks by the explosion operator of PS-FW.
(1) Input: Pnum particles sorted in ascending order according to their fitness.
(2) Initialize the location of fireworks: x¯i=xFi, i=1,2,…,Pnum.
(3) for i=1 to Pnum do
(4) Calculate the explosion amplitude Ai of ith firework by using (3).
(5) Calculate the number of explosion sparks si of ith firework by using (4).
(6) Update the number of explosion sparks of ith firework by using (5).
(7) if Ai>δ do
(8) Update the explosion amplitude of ith firework by using (11) and (12).
(9) else do
(10) Randomly select a firework x¯j around the current optimal firework.
(11) Update the explosion amplitude of ith firework by using (13).
(12) end if
(13) end for
(14) Initialize the total number of explosion sparks numE=0.
(15) for i=1 to Pnum do
(16) for j=1 to si do
(17) Initialize the location of the jth explosion spark: x^ij=x¯i.
(18) for k=1 to D do
(19) Calculate the offset by using (14).
(20) Update the value of kth dimension of jth explosion spark by using (15).
(21) if x^i,kj>Smax,k or x^i,kj<Smin,k do
(22) Update the x^i,kj by using (17).
(23) end if
(24) end for
(25) numE=numE+1
(26) end for
(27) end for
(28) Output: numE explosion sparks.
3.4. Novel Mutation Operator
As the Gaussian mutation operator effectively increases the diversity of feasible solutions, it significantly improved the performance of the traditional FWA. However, numerical experiments show that the combined application of the Gaussian operator and the mapping operator concentrates the Gaussian sparks around the zero point, which is why FWA converges fast on problems whose optimal solutions are at zero [31]. To improve the adaptability of the algorithm to nonzero optimization problems while retaining the mutation operator's contribution to population diversity, a new mutation operator is proposed in PS-FW. Compared with standard FWA, there are two main differences. (i) In PS-FW, we randomly select a certain number of explosion sparks, rather than fireworks, to generate the mutation sparks. Because the explosion sparks have better quality than the fireworks by construction of (15), the mutation sparks generated from them can effectively enrich the diversity of the population and have better global search ability. (ii) The Gaussian random number is no longer used in the mutation operator; instead, the operator is designed by borrowing the particle interaction mechanism of PSO. The mutation sparks generated by our operator not only retain the useful information of the explosion sparks, but also move moderately towards the current best location, which promotes the convergence of the hybrid algorithm. The proposed mutation operator is as follows.
(16) \tilde{x}_{i,k} = μ_1 \cdot (\hat{x}_{best,k} − \hat{x}_{j,k}) + μ_2 \cdot \hat{x}_{j,k},

where \tilde{x}_{i,k} and \hat{x}_{j,k} denote the value of the kth dimension of the ith mutation spark and the jth explosion spark, respectively, \hat{x}_{best,k} is the kth dimension of the current optimal explosion spark, μ_1 and μ_2 are random numbers in [0, 1], j is a random integer in the interval [1, numE], and i = 1, 2, …, numM, where numM is the total number of mutation sparks.
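For a single selected dimension, Eq. (16) can be sketched as follows (the function name is ours):

```python
import random

def mutate_dimension(spark_k, best_k):
    """Eq. (16): move a randomly selected dimension of an explosion spark
    partly towards the current best explosion spark."""
    mu1, mu2 = random.random(), random.random()
    return mu1 * (best_k - spark_k) + mu2 * spark_k
```

The first term pulls the spark towards the current best location while the second term partly retains its own coordinate, mirroring the social and cognitive components of PSO.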
The detailed codes of mutation operator are represented in Algorithm 2.
<bold>Algorithm 2: </bold>Generating mutation sparks by the mutation operator of PS-FW.
(1) Input: numE explosion sparks and the best explosion spark x^best.
(2) for i=1 to numM do
(3) Generate a random integer j in the interval [1, numE].
(4) Initialize the location of the ith mutation spark: x~i = x^j.
(5) Calculate the number of dimensions to perform the mutation: z~i = D⋅rand().
(6) Randomly select z~i dimensions of x~i.
(7) for each dimension x~i,k ∈ the preselected z~i dimensions of x~i do
(8) Calculate the value of x~i,k by using (16).
(9) if x~i,k>Smax,k or x~i,k<Smin,k do
(10) Update the value of x~i,k by using (17).
(11) end if
(12) end for
(13) end for
(14) Output: numM mutation sparks.
3.5. Main Process of PS-FW
PS-FW consists of two main stages: initialization and iteration. In the initialization stage, the position and velocity of the particle swarm are initialized, as well as the control parameters. In the iteration stage, the PS-FW algorithm inherits all the parameters and operators of PSO, and the particles serve as the main carriers of feasible solutions. First, in each iteration, the particles update their velocity and position according to the operators of PSO and then perform the abandonment and supplement operation. In the process of generating the supplement particles by the operators of FWA, numE explosion sparks are first generated from the excellent Pnum particles via the modified explosion operator, and the fitness of the explosion sparks is evaluated. Second, numM mutation sparks are generated from the explosion sparks via the novel mutation operator. Finally, the FWnum supplement individuals are selected by a combination of the elite strategy and the roulette strategy. When an iteration is completed, the termination condition is checked: if the stopping criterion is met, the iteration stops and the best solutions are output; otherwise, the iteration stage is repeated.
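The per-iteration flow described above can be summarized in a hypothetical skeleton; the helper callables stand in for the operators described in the preceding subsections and are not part of the paper's notation:

```python
def ps_fw_step(swarm, fitness_fn, pso_update, fwa_supplement, fw_num):
    """One PS-FW iteration (skeleton): PSO update, abandonment of the fw_num
    worst particles, and supplement with FWA-generated individuals."""
    swarm = [pso_update(p) for p in swarm]          # PSO operators
    swarm.sort(key=fitness_fn)                      # ascending fitness (minimisation)
    kept = swarm[:len(swarm) - fw_num]              # retain the Pnum best
    return kept + fwa_supplement(kept, fw_num)      # supplement via FWA operators
```

The swarm size is invariant across iterations: fw_num individuals are dropped and exactly fw_num are generated by the FWA operators.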
In the procedures above, two points should be noted. (i) During the execution of the hybrid algorithm, it is necessary to check whether the positions of individuals, comprising particles, fireworks, explosion sparks, and mutation sparks, lie within the feasible scope. As shown in (17), a position that exceeds the feasible scope is adjusted by the mapping criterion of the EFWA algorithm [29]:

(17) Y_{i,k} = S_{min,k} + e \cdot (S_{max,k} − S_{min,k}), \quad ∀ Y_{i,k} > S_{max,k} \text{ or } Y_{i,k} < S_{min,k},

where Y_{i,k} is the value of the kth dimension of the individual and e is a random number in [0, 1].
(ii) The density-based selection strategy of FWA is abandoned in PS-FW. Although selecting sparsely surrounded locations with larger probability can maintain population diversity, computing the spatial distances between individuals costs considerable time and reduces the efficiency of the algorithm. Therefore, a fitness-based selection strategy is applied in PS-FW: the elite strategy passes the best individual directly into the next iteration, and the remaining FWnum − 1 locations are selected by the roulette criterion according to fitness.
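These two implementation details can be sketched together; the function names, the roulette weighting (worst-minus-fitness for minimization), and the tie-breaking constant are illustrative assumptions:

```python
import random

def map_into_bounds(y_k, s_min, s_max):
    """Mapping rule of Eq. (17): replace an out-of-bounds coordinate with a
    uniformly random point inside [s_min, s_max]."""
    if y_k > s_max or y_k < s_min:
        return s_min + random.random() * (s_max - s_min)
    return y_k

def select_supplement(candidates, fitness, fw_num):
    """Elite-plus-roulette selection of PS-FW (minimisation): keep the best
    candidate, then draw the remaining fw_num - 1 by fitness-based roulette."""
    best = min(range(len(candidates)), key=lambda i: fitness[i])
    worst = max(fitness)
    weights = [worst - f + 1e-12 for f in fitness]  # better fitness -> larger weight
    chosen = [candidates[best]]
    chosen += random.choices(candidates, weights=weights, k=fw_num - 1)
    return chosen
```

Unlike the distance-based operator of Eq. (8), this selection needs no pairwise distance computation, which is the efficiency gain the text argues for.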
According to the description above, the main codes of the PS-FW algorithm are given in Algorithm 3.
<bold>Algorithm 3: </bold>The main codes of PS-FW algorithm.
(1) Input: Objective function f(x) and constraints.
(2) Initialization
(3) Parameters initialization: assign values to M, wmax, wmin, c1, c2, A^, Me, ε, δ, a, b, r, numM, Imax, FWmax, FWmin, λmin, λmax
(4) Population initialization: generate the random values for xi and vi of each particle in the feasible domain,
calculate the gbest of initial population.
(5) Set pbesti=xi, (i=1,2,…,M) and t=0.
(6) Iterations
(7) while t≤Imax
(8) t=t+1
(9) for i=1 to M
(10) for k=1 to D
(11) Update the velocity of particle xi by using (1).
(12) Update the position of particle xi by using (2).
(13) if xi,k>Smax,k or xi,k<Smin,k
(14) Update the value of xi,k by using (17).
(15) end if
(16) end for
(17) end for
(18) Calculate FWnum by using (9).
(19) Sort the particle population in ascending order of fitness and select the Pnum particles with the best fitness.
(20) Generate numE explosion sparks by using Algorithm 1.
(21) Calculate the fitness of the explosion sparks and store the best explosion spark x^best.
(22) Generate numM mutation sparks by using Algorithm 2.
(23) Select the FWnum individuals from the explosion sparks and mutation sparks by using the selection strategy.
(24) Combine the Pnum particles with FWnum individuals to generate the new population.
(25) Calculate gbest and pbesti of new population.
(26) end while
(27) Output: gbest=(gbest1,gbest2,…,gbestD)
4. Problems, Experiments, and Discussion

4.1. Test Problems
To evaluate the efficacy and accuracy of the proposed algorithm, the performance of PS-FW is tested on 22 high-dimensional benchmark functions. The test problems, which consist of multimodal and unimodal functions, are listed in Table 1 together with their optimal solutions and search scopes. Compared with unimodal problems, it is more difficult to find the global optimum of multimodal problems because the local optima tend to trap optimization algorithms in their neighborhoods. Therefore, an algorithm that can efficiently find the optimal solutions of multimodal functions can be regarded as an excellent optimization algorithm.
4.2. Comparison of PS-FW with PSO and FWA

In this section, we compare the performance of PS-FW with PSO and FWA on the 22 benchmark functions. To explore the global optimization capability of the three algorithms on high-dimensional problems, three experiments with different dimensions are carried out. The dimensions are set to D=30, D=60, and D=100, respectively, and each algorithm solves every benchmark function 20 times independently. For a fair comparison, the general control parameters, namely the maximum number of iterations (Imax) and the population size (M), are given the same values: Imax is set to 1000 and M to 50 for each function. The algorithms are coded in MATLAB 14.0, and the experiment platform is a personal computer with a Core i5 2.02 GHz CPU, 4 GB memory, and Windows 7. To eliminate performance differences caused by parameter settings, the main control parameters of PS-FW are kept consistent with those of PSO and FWA; the detailed control parameters are shown in Table 2.
Table 2: The parameter settings of the algorithms.

Algorithm | Parameter settings
PSO | w(t) = wmax − t(wmax − wmin)/Imax, wmax = 0.95, wmin = 0.4, c1 = c2 = 1.45
FWA | A^ = 40, Me = 50, a = 0.04, b = 0.8, numM = 30, ε = 1E-100
PS-FW | w(t) = wmax − t(wmax − wmin)/Imax, wmax = 0.95, wmin = 0.4, c1 = c2 = 1.45, A^ = 40, Me = 50, a = 0.04, b = 0.8, numM = 30, ε = 1E-100, δ = 1E-6, λmin = 1E-25, λmax = 1, FWmax = 30, FWmin = 20, r = 2
For all the benchmark functions, the mean and standard deviation of the best solutions obtained by PS-FW and the other algorithms in 20 independent runs are recorded, and the optimization results are shown in Tables 3–5 together with the ranks, which are based mainly on the mean of the best solutions. In addition, the average convergence speed of the proposed PS-FW is compared with the other algorithms for functions f12, f13, and f20; the convergence curves are shown in Figure 3.
Table 3: Comparison of the optimization results obtained by PS-FW, PSO, and FWA with D=30 for functions f1 to f22 (the best ranks are marked in bold). Each cell gives Mean / Std (Rank).

f | PSO | FWA | PS-FW
f1 | 8.8371E+01 / 4.3475E+01 (3) | 1.3360E-151 / 5.8057E-151 (2) | 5.8928E-264 / 0 (1)
f2 | 7.1542E-02 / 1.2385E-01 (2) | 0 / 0 (1) | 0 / 0 (1)
f3 | 5.5766E+02 / 7.4828E+02 (3) | 2.6882E+01 / 8.3997E-01 (2) | 0 / 0 (1)
f4 | 6.6547E+01 / 3.6430E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f5 | 6.5810E+01 / 4.0117E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f6 | 0 / 0 (1) | 0 / 0 (1) | 0 / 0 (1)
f7 | 1.4156E+04 / 1.0006E+04 (3) | 7.6585E-83 / 3.3383E-82 (2) | 4.5128E-122 / 1.8821E-121 (1)
f8 | 1.0419E-03 / 1.0584E-03 (3) | 9.6596E-304 / 0 (2) | 0 / 0 (1)
f9 | 6.3165E-01 / 6.0679E-01 (3) | 7.4698E-54 / 2.3638E-53 (2) | 3.1588E-97 / 1.2719E-96 (1)
f10 | 1.5661E+01 / 5.0924E+00 (3) | 3.2521E-78 / 1.1460E-77 (2) | 1.8666E-137 / 8.0013E-137 (1)
f11 | -7.2662E+03 / 6.7867E+02 (3) | -1.0511E+04 / 1.9893E+02 (2) | -1.2483E+04 / 1.2661E+02 (1)
f12 | 6.9734E-01 / 2.8586E-01 (3) | 6.6542E-01 / 5.0080E-01 (2) | 0 / 0 (1)
f13 | 1.7831E+01 / 8.6204E+00 (3) | 6.5460E+00 / 8.6700E-01 (2) | 1.4998E-32 / 0 (1)
f14 | 6.6576E-08 / 5.4575E-08 (3) | 4.5613E-191 / 0 (2) | 2.1563E-291 / 0 (1)
f15 | 0 / 0 (1) | 0 / 0 (1) | 0 / 0 (1)
f16 | 2.8937E+02 / 1.5937E+02 (3) | 1.5997E-45 / 3.5711E-45 (2) | 1.5471E-111 / 6.0668E-111 (1)
f17 | 0 / 0 (1) | 9.8737E+44 / 4.3038E+45 (2) | 0 / 0 (1)
f18 | 1.5069E+01 / 4.0495E+00 (2) | 0 / 0 (1) | 0 / 0 (1)
f19 | 2.8450E+07 / 1.2385E+08 (3) | 1.0123E-145 / 3.1288E-145 (2) | 1.8302E-252 / 0 (1)
f20 | 3.8005E+02 / 8.5739E+01 (3) | 4.2079E+01 / 4.6125E+00 (2) | 1 / 0 (1)
f21 | 4.5577E+01 / 2.3091E+01 (3) | 1.71130E+01 / 2.1499E+00 (2) | 0 / 0 (1)
f22 | 7.0166E-01 / 5.9846E-01 (3) | 1.1989E-149 / 5.2258E-149 (2) | 3.5102E-292 / 0 (1)
Average rank | 2.5455 | 1.7273 | 1
Overall rank | 3 | 2 | 1
Table 4: Comparison of the optimization results obtained by PS-FW, PSO, and FWA with D=60 for functions f1 to f22 (the best ranks are marked in bold). Each cell gives Mean / Std (Rank).

f | PSO | FWA | PS-FW
f1 | 4.1677E+03 / 4.4284E+03 (3) | 2.1235E-146 / 6.3705E-146 (2) | 2.4481E-248 / 0 (1)
f2 | 3.2482E+00 / 9.6094E-01 (2) | 0 / 0 (1) | 0 / 0 (1)
f3 | 7.1638E+04 / 5.5811E+04 (3) | 4.5073E+01 / 1.8390E+01 (2) | 9.2568E-30 / 1.9330E-29 (1)
f4 | 3.2219E+02 / 4.1863E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f5 | 3.7498E+02 / 5.3191E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f6 | 1.3162E+01 / 1.1773E+00 (3) | 0 / 0 (1) | 7.1054E-16 / 1.4211E-15 (2)
f7 | 3.2017E+04 / 1.4529E+04 (3) | 4.9633E-68 / 1.48899E-67 (2) | 1.2294E-93 / 4.9341E-93 (1)
f8 | 1.1343E+00 / 3.2234E+00 (3) | 1.2096E-288 / 0 (2) | 0 / 0 (1)
f9 | 2.6902E+01 / 5.4555E+00 (3) | 4.4049E-51 / 1.3214E-50 (2) | 1.5914E-92 / 4.8189E-92 (1)
f10 | 5.5140E+01 / 2.1038E+01 (3) | 1.35612E-73 / 4.06287E-73 (2) | 3.9617E-130 / 1.7268E-129 (1)
f11 | -1.1892E+04 / 1.1022E+03 (3) | -1.8005E+04 / 1.4727E+03 (2) | -2.4998E+04 / 1.7201E+02 (1)
f12 | 3.4856E+01 / 5.9316E+01 (3) | 1.9695E+00 / 7.7525E-01 (2) | 0 / 0 (1)
f13 | 6.2329E+01 / 2.0956E+01 (3) | 1.5355E+01 / 5.4415E+00 (2) | 1.4998E-32 / 0 (1)
f14 | 2.2365E-07 / 2.3968E-07 (3) | 1.6432E-187 / 0 (2) | 1.5707E-278 / 0 (1)
f15 | 0 / 0 (1) | 0 / 0 (1) | 0 / 0 (1)
f16 | 8.0994E+02 / 3.0726E+02 (3) | 1.7189E-38 / 5.15482E-38 (2) | 6.8924E-104 / 2.9641E-103 (1)
f17 | 0 / 0 (1) | 2.4945E+145 / 5.7208E+145 (2) | 0 / 0 (1)
f18 | 3.9564E+01 / 5.3138E+00 (2) | 0 / 0 (1) | 0 / 0 (1)
f19 | 5.7753E+08 / 2.7159E+08 (3) | 6.6011E-137 / 1.9631E-136 (2) | 4.5120E-251 / 0 (1)
f20 | 5.3645E+03 / 6.2256E+03 (3) | 1.4665E+02 / 2.8947E+01 (2) | 1 / 0 (1)
f21 | 1.9709E+02 / 2.8605E+01 (3) | 4.8085E+01 / 7.7355E+00 (2) | 0 / 0 (1)
f22 | 1.5314E+00 / 5.9245E-01 (3) | 1.5711E-142 / 4.7133E-142 (2) | 1.3216E-280 / 0 (1)
Average rank | 2.6364 | 1.7273 | 1.0455
Overall rank | 3 | 2 | 1
Table 5: Comparison of the optimization results obtained by PS-FW, PSO, and FWA with D=100 for functions f1 to f22 (the best ranks are marked in bold). Each cell gives Mean / Std (Rank).

f | PSO | FWA | PS-FW
f1 | 6.3501E+03 / 2.9204E+03 (3) | 1.7672E-142 / 4.3844E-142 (2) | 9.7833E-245 / 0 (1)
f2 | 1.1830E+02 / 5.1822E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f3 | 1.7018E+05 / 6.6940E+04 (3) | 8.3094E+01 / 2.2198E+01 (2) | 1.0341E-26 / 3.8500E-26 (1)
f4 | 4.7288E+02 / 1.0713E+02 (2) | 0 / 0 (1) | 0 / 0 (1)
f5 | 5.1626E+02 / 1.4819E+02 (2) | 0 / 0 (1) | 0 / 0 (1)
f6 | 1.3582E+01 / 2.3679E+00 (3) | 0 / 0 (1) | 1.0659E-15 / 1.6281E-15 (2)
f7 | 2.7218E+06 / 8.2328E+05 (3) | 2.70634E-58 / 8.11903E-58 (2) | 2.1860E-71 / 4.7535E-71 (1)
f8 | 1.4283E+01 / 3.8266E+01 (3) | 1.5868E-280 / 0 (2) | 0 / 0 (1)
f9 | 2.7189E+01 / 5.0564E+00 (3) | 4.2938E-46 / 1.1238E-45 (2) | 1.1555E-90 / 2.7315E-90 (1)
f10 | 1.2486E+02 / 2.3963E+01 (3) | 2.64613E-69 / 7.93838E-69 (2) | 2.2792E-128 / 9.7764E-128 (1)
f11 | -1.5770E+04 / 1.2531E+03 (3) | -2.4526E+04 / 1.6861E+03 (2) | -4.1743E+04 / 4.3502E+02 (1)
f12 | 1.2670E+02 / 4.8966E+01 (3) | 4.2335E+00 / 1.4083E+00 (2) | 0 / 0 (1)
f13 | 2.4848E+02 / 6.1955E+01 (3) | 3.1912E+01 / 7.6762E+00 (2) | 1.4998E-32 / 0 (1)
f14 | 4.7875E-07 / 6.7428E-07 (3) | 6.5204E-175 / 0 (2) | 6.4751E-275 / 0 (1)
f15 | 0 / 0 (1) | 0 / 0 (1) | 0 / 0 (1)
f16 | 1.4995E+03 / 5.8180E+02 (3) | 1.9628E-14 / 5.86607E-14 (2) | 2.4731E-93 / 8.4009E-93 (1)
f17 | 0 / 0 (1) | 2.0047E+232 / 6.7205E+232 (2) | 0 / 0 (1)
f18 | 6.8687E+01 / 1.3221E+01 (2) | 0 / 0 (1) | 0 / 0 (1)
f19 | 1.4528E+10 / 1.2994E+10 (3) | 3.3916E-130 / 9.8384E-130 (2) | 9.0096E-250 / 0 (1)
f20 | 9.0245E+03 / 3.8036E+03 (3) | 2.6557E+02 / 4.7674E+01 (2) | 1 / 0 (1)
f21 | 4.0256E+03 / 1.6131E+04 (3) | 9.1975E+01 / 1.7966E+01 (2) | 0 / 0 (1)
f22 | 1.6273E+00 / 4.1513E-01 (3) | 4.0925E-137 / 3.2175E-137 (2) | 4.9253E-273 / 0 (1)
Average rank | 2.6364 | 1.7273 | 1.0455
Overall rank | 3 | 2 | 1
Figure 3: Convergence curves of PSO, FWA, and PS-FW for functions f12, f13, and f20 (one panel per function for each of D=30, D=60, and D=100).
According to the ranks shown in Tables 3–5, the average values of the best solutions for the proposed PS-FW outperform those of the other algorithms, and PS-FW is also superior with respect to the standard deviation of the best solutions. For the 22 problems with D=30, PS-FW obtains the global optimum of f2, f3, f4, f5, f6, f8, f12, f15, f17, f18, f20, and f21, which shows an excellent ability to solve optimization problems. As the dimensionality increases, the hybrid algorithm maintains its outstanding performance and, compared with the results in Table 3, still obtains the optimal solutions of 10 of these functions, all except f3 and f6. When the dimension is 60 or 100, PS-FW can still reach the global optimum of f3 and f6, but not in every run. This is because f3 and f6 are multimodal problems and the number of local optima grows rapidly with the dimension, which makes it harder to avoid being trapped in local optima. Moreover, according to the ranks and values in Tables 3–5, PS-FW achieves the highest rank for all the functions, and it obtains more stable solutions than PSO and FWA for all problems as the dimensionality increases. The convergence behavior of the three algorithms can be seen in Figure 3: the descent rate of the average best solutions of PS-FW is clearly higher than that of the other two algorithms. This is because the advantages of PSO and FWA are combined in PS-FW, which enhances both its global and local search ability. Therefore, PS-FW is efficient and robust in dealing with high-dimensional benchmark functions.
The analysis above shows that the PS-FW algorithm performs well on the functions in Table 1. However, because the optima of these functions lie mostly at the origin, we further explore the performance of PS-FW on nonzero problems. An experiment on nonzero problems is therefore carried out to prove the comprehensive performance of PS-FW: the optima of the test functions derived from Table 1 are shifted, with the specific values displayed in Table 6. For a fair comparison between the experiments, the parameter settings of the three algorithms are consistent with Table 2 and the dimension is set to D=30. The optimization results are shown in Table 7, and the convergence curves of the three algorithms for functions f12, f13, and f20 are displayed in Figure 4.
Table 6: The benchmark functions with shifted optima.

Name | Original optimum | Shifted optimum
Sphere | [0,0,…,0] | [70,70,…,70]
Griewank | [0,0,…,0] | [70,70,…,70]
Rastrigin | [0,0,…,0] | [3,3,…,3]
Noncontinuous Rastrigin | [0,0,…,0] | [5,5,…,5]
Ackley | [0,0,…,0] | [20,20,…,20]
Rotated Hyper-Ellipsoid | [0,0,…,0] | [70,70,…,70]
Schwefel’s problem 2.21 | [0,0,…,0] | [70,70,…,70]
Schwefel’s problem 2.22 | [0,0,…,0] | [70,70,…,70]
Step | [-0.5,-0.5,…,-0.5] | [5,5,…,5]
Levy | [1,1,…,1] | [5,5,…,5]
Sum squares | [0,0,…,0] | [5,5,…,5]
Zakharov | [0,0,…,0] | [5,5,…,5]
Bent-Cigar | [0,0,…,0] | [70,70,…,70]
Trigonometric 2 | [0.9,0.9,…,0.9] | [70,70,…,70]
Mishra 11 | [0,0,…,0] | [5,5,…,5]
Table 7: Comparison of the optimization results obtained by PS-FW, PSO, and FWA with D=30 for the functions in Table 6 (the best ranks are marked in bold). Each cell gives Mean / Std (Rank).

f | PSO | FWA | PS-FW
f1 | 1.0851E+03 / 1.1893E+03 (3) | 2.2555E+00 / 3.8190E-01 (2) | 0 / 0 (1)
f2 | 4.7829E+00 / 1.5089E+00 (3) | 6.2867E-01 / 5.3523E-02 (2) | 0 / 0 (1)
f4 | 1.2559E+02 / 4.7596E+01 (3) | 9.8052E+00 / 1.6323E+00 (2) | 0 / 0 (1)
f5 | 1.6140E+02 / 3.7649E+01 (3) | 2.2289E+01 / 2.7981E+00 (2) | 0 / 0 (1)
f6 | 1.0739E+03 / 1.1986E+03 (3) | 7.0977E+00 / 4.3511E-01 (2) | 0 / 0 (1)
f7 | 1.5716E+04 / 8.7224E+03 (3) | 2.2295E+03 / 2.4129E+02 (2) | 4.45263E-65 / 2.87935E-65 (1)
f9 | 4.7379E+01 / 1.5948E+01 (3) | 2.1052E+01 / 1.4289E+00 (2) | 8.96847E-72 / 1.31198E-71 (1)
f10 | 1.6846E+03 / 2.6627E+02 (3) | 2.2370E+02 / 7.4690E+01 (2) | 0 / 0 (1)
f12 | 1.1359E+02 / 4.1907E+01 (3) | 2.1375E+01 / 2.9107E+00 (2) | 0 / 0 (1)
f13 | 3.2776E+02 / 8.5157E+01 (3) | 6.4154E+01 / 1.0092E+01 (2) | 1.4998E-32 / 0 (1)
f15 | 0 / 0 (1) | 2.9887E-04 / 1.3027E-03 (2) | 0 / 0 (1)
f16 | 8.0214E+00 / 8.1866E+00 (2) | 3.1159E+02 / 2.0373E+02 (3) | 1.53313E-06 / 1.06687E-06 (1)
f19 | 2.4875E+09 / 1.3163E+09 (3) | 2.2700E+08 / 2.7319E+07 (2) | 0 / 0 (1)
f20 | 2.0564E+03 / 7.9311E+02 (3) | 9.2562E+02 / 7.6748E+01 (2) | 1 / 0 (1)
f22 | 1.7217E+00 / 1.1645E+00 (3) | 1.4009E+00 / 4.6093E-01 (2) | 0 / 0 (1)
Average rank | 2.8000 | 2.0667 | 1
Overall rank | 3 | 2 | 1
Figure 4: Convergence curves of PSO, FWA, and PS-FW for functions f12, f13, and f20 with D=30.
From Table 7, we can see that the PS-FW algorithm keeps its high performance and obtains the optimal solutions of 11 functions in Table 6. Moreover, PS-FW achieves the best rank among the three algorithms for all the functions with shifted optima, which demonstrates its powerful ability to solve optimization problems with nonzero optima. Comparing Table 7 with Table 3, the fireworks algorithm is relatively weak in searching for nonzero optima, whereas PS-FW, which derives from the fireworks algorithm and incorporates the operators of PSO, shows better performance; this supports the soundness of combining the two algorithms. In addition, the result of PS-FW on function f16 is worse than in the previous experiment, because f16 is a multimodal function for which slight deviations from the optimum cause significant increases in the objective value. The convergence curves in Figure 4 show that the convergence speed of PS-FW also remains fast. To determine more clearly whether the convergence performance of PS-FW is superior to the other two algorithms, we compute the number of successful runs (success rate) and the average number of iterations in successful runs for each function in Table 6. Since the optimal solutions obtained by the different algorithms vary, we define a convergence criterion for each function: if the best solution ffind found by an algorithm satisfies (18) in a run [39], the run is considered successful, and the minimum number of iterations satisfying the criterion is counted toward the average number of iterations. (18) |ffind − fopti| < τ, where fopti is the optimum of the function and τ denotes the allowed error.
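The tallying of success counts and average iteration numbers under criterion (18) can be sketched as follows; the convergence histories and tolerance used in the example are illustrative, not experimental data from the paper.

```python
def success_stats(histories, f_opt, tau, i_max):
    """histories: per-run lists of best-so-far objective values per iteration.
    Returns (ST, AI): the number of successful runs and the average of the
    first iteration index at which |f_find - f_opt| < tau. When no run
    succeeds, AI is reported as i_max + 1 (the "U" entry in the tables)."""
    first_hits = []
    for best_per_iter in histories:
        for it, f_find in enumerate(best_per_iter, start=1):
            if abs(f_find - f_opt) < tau:   # criterion (18)
                first_hits.append(it)
                break
    st = len(first_hits)
    ai = sum(first_hits) / st if st else i_max + 1
    return st, ai

# Two illustrative runs: the first converges at iteration 2, the second never.
st, ai = success_stats([[5.0, 0.001, 0.0005], [5.0, 4.0, 3.0]],
                       f_opt=0.0, tau=0.01, i_max=3)
```

Only the first iteration satisfying (18) contributes to AI, matching the "minimum number of iterations" counting rule.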
Let ST denote the number of successful runs and AI the average number of iterations in successful runs; U denotes the case in which none of the 20 runs is successful, and its value is treated as greater than Imax. The results are shown in Table 8.
Table 8: Comparison of success rates and average numbers of iterations for PS-FW, PSO, and FWA with τ=10^-4 for function f15 and τ=10^1 for the other functions (the best ranks are marked in bold). Each cell gives the value (Rank).

f | PSO ST | FWA ST | PS-FW ST | PSO AI | FWA AI | PS-FW AI
f1 | 0 (2) | 20 (1) | 20 (1) | U (3) | 201.7 (2) | 28.4 (1)
f2 | 19 (2) | 20 (1) | 20 (1) | 9.6 (3) | 4.6 (2) | 2.8 (1)
f4 | 0 (3) | 11 (2) | 20 (1) | U (3) | 584.8 (2) | 228.8 (1)
f5 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 104.9 (1)
f6 | 0 (2) | 20 (1) | 20 (1) | U (3) | 343 (2) | 9.8 (1)
f7 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 93.8 (1)
f9 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 26.7 (1)
f10 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 41.1 (1)
f12 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 11.8 (1)
f13 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 3.5 (1)
f15 | 20 (1) | 19 (2) | 20 (1) | 505.3 (2) | 679.6 (3) | 13.1 (1)
f16 | 16 (2) | 0 (3) | 20 (1) | 224 (2) | U (3) | 208.7 (1)
f19 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 208.9 (1)
f20 | 0 (2) | 0 (2) | 20 (1) | U (2) | U (2) | 160.8 (1)
f22 | 20 (1) | 20 (1) | 20 (1) | 94.2 (2) | 123.2 (3) | 9.3 (1)
Average rank of ST | 1.9 | 1.8 | 1 | — | — | —
Average rank of AI | — | — | — | 2.3 | 2.2 | 1
According to the statistical results and ranks presented in Table 8, both the success rate and the average number of iterations of PS-FW over 20 runs are superior to those of the other algorithms. For all the benchmark functions in Table 6, the proposed PS-FW satisfies the convergence criterion in all 20 runs, whereas the other algorithms only converge to the criterion for several functions. In addition, PS-FW obtains the highest ranks for the average number of iterations in successful runs and converges to the criterion within a relatively small number of iterations. In summary, PS-FW outperforms the other algorithms in terms of stability and convergence speed and is an efficacious algorithm for optimization problems whose optima are at the origin or shifted.
4.3. Comparison of PS-FW with PSO Variants
In this section, we compare the performance of the proposed PS-FW with several existing PSO variants. The comparison is based on the 12 benchmark functions introduced in the paper of Nickabadi et al. [22], and the order of the functions is consistent with that paper. For a fair comparison, the number of runs and the maximum number of iterations of PS-FW are set to 30 and 200,000, respectively, and the other parameters are the same as those in Section 4.2. The dimension of the test problems is set to D=30, and the mean and standard deviation of the best solutions obtained by the algorithms are calculated. The results, together with the rank of each algorithm, are presented in Table 9.
Table 9: Comparison of the optimization results obtained by PS-FW and six PSO variants (the best ranks are marked in bold). Each cell gives the value (Rank).

f | Stat | PS-FW | stdPSO | CPSO | CLPSO | FIPS | Frankenstein | AIWPSO
f1 | Mean | 0 (1) | 5.198E-40 (3) | 5.146E-13 (7) | 4.894E-39 (4) | 4.588E-27 (5) | 2.409E-16 (6) | 3.370E-134 (2)
f1 | Std | 0 (1) | 1.1301E-78 (3) | 7.7588E-25 (7) | 6.7814E-78 (4) | 1.9577E-53 (5) | 2.0047E-31 (6) | 5.1722E-267 (2)
f2 | Mean | 0 (1) | 2.1625E-02 (5) | 2.1245E-02 (4) | 0 (1) | 2.4776E-04 (2) | 1.4736E-03 (3) | 2.8524E-02 (6)
f2 | Std | 0 (1) | 4.5019E-04 (4) | 6.3144E-04 (5) | 0 (1) | 1.8266E-06 (2) | 1.2846E-05 (3) | 7.6640E-04 (6)
f3 | Mean | 0 (1) | 2.5404E+01 (5) | 8.2648E-01 (2) | 1.3217E+01 (4) | 2.6714E+01 (6) | 2.8156E+01 (7) | 2.5003E+00 (3)
f3 | Std | 0 (1) | 5.9031E+02 (7) | 2.3449E+00 (2) | 2.1480E+02 (5) | 2.0025E+02 (4) | 2.3132E+02 (6) | 1.5978E+01 (3)
f4 | Mean | 0 (1) | 3.4757E+01 (4) | 3.6007E-13 (2) | 0 (1) | 5.8502E+01 (5) | 7.3836E+01 (6) | 1.6583E-01 (3)
f4 | Std | 0 (1) | 1.0636E+02 (4) | 1.5035E-24 (2) | 0 (1) | 1.9185E+02 (5) | 3.7055E+02 (6) | 2.1051E-01 (3)
f5 | Mean | 0 (1) | 2.0956E+01 (5) | 5.3717E-13 (3) | 1.3333E-01 (4) | 6.1883E+01 (6) | 7.0347E+01 (7) | 1.1842E-16 (2)
f5 | Std | 0 (1) | 1.8327E+02 (6) | 5.9437E-24 (3) | 1.1954E-01 (4) | 1.4013E+02 (5) | 2.9600E+02 (7) | 4.2073E-31 (2)
f6 | Mean | 0 (1) | 1.4921E-14 (5) | 1.6091E-07 (7) | 9.2371E-15 (3) | 1.3856E-14 (4) | 2.1792E-09 (6) | 6.9870E-15 (2)
f6 | Std | 0 (1) | 1.8628E-29 (4) | 7.8608E-14 (7) | 6.6156E-30 (3) | 2.3227E-29 (5) | 1.7187E-18 (6) | 4.2073E-31 (2)
f7 | Mean | 0 (1) | 1.4582E+00 (3) | 1.8889E+03 (7) | 1.9217E+02 (6) | 9.4634E+00 (4) | 1.7315E+02 (5) | 1.9570E-10 (2)
f7 | Std | 0 (1) | 1.1783E+00 (3) | 9.9106E+06 (7) | 3.8433E+03 (5) | 2.5976E+01 (4) | 9.1577E+03 (6) | 1.2012E-19 (2)
f8 | Mean | 0 (1) | 1.2375E-02 (7) | 1.0764E-02 (6) | 4.0642E-03 (3) | 3.3047E-03 (2) | 4.1690E-03 (4) | 5.5241E-03 (5)
f8 | Std | 0 (1) | 2.3107E-05 (6) | 2.7698E-05 (7) | 9.6184E-07 (3) | 8.6680E-07 (2) | 2.4012E-06 (4) | 1.5358E-05 (5)
f10 | Mean | 0 (1) | 3.4621E-26 (4) | 5.4282E-14 (5) | 9.9748E-39 (3) | 2.6033E+02 (6) | 5.1953E+04 (7) | 1.8317E-137 (2)
f10 | Std | 0 (1) | 4.0873E-51 (4) | 8.2868E-27 (5) | 3.7661E-84 (3) | 2.1785E+04 (6) | 1.1136E+09 (7) | 3.4534E-273 (2)
f11 | Mean | -1.2542E+04 (3) | -1.0995E+04 (7) | -1.2127E+04 (5) | -1.2546E+04 (2) | -1.1052E+04 (6) | -1.1221E+04 (4) | -1.2569E+04 (1)
f11 | Std | 1.4900E+02 (2) | 1.3753E+05 (5) | 3.3795E+04 (4) | 4.2567E+03 (3) | 9.4421E+05 (7) | 2.7708E+05 (6) | 1.1409E-25 (1)
f12 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1)
f12 | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1)
f13 | Mean | 1.4998E-32 (1) | 1.1422E-29 (2) | 2.0913E-15 (5) | 1.4998E-32 (1) | 1.0273E-28 (3) | 5.5136E-18 (4) | 1.4998E-32 (1)
f13 | Std | 0 (1) | 3.2335E-57 (3) | 1.2954E-29 (6) | 1.2398E-94 (2) | 1.0052E-56 (4) | 1.4501E-34 (5) | 1.2398E-94 (2)
According to the results in Table 9, PS-FW outperforms the other six PSO variants in both the average value and the standard deviation of the best solutions after 200,000 iterations. Among the 12 benchmark functions, PS-FW obtains the optimum of 10 functions, which demonstrates a highly powerful ability to find the global optimal solution. In addition, PS-FW acquires the highest rank on almost all the test problems except function f11, which indicates that PS-FW improves significantly on the other algorithms. Beyond the numerical comparison, we apply nonparametric statistical tests to support the superiority of PS-FW: the Friedman test and the Bonferroni-Dunn test are adopted to compare PS-FW with the other algorithms.
The Friedman test is a multiple comparison test that detects significant differences among algorithms based on sets of data [40]. The algorithms are ranked: the best performing algorithm receives the minimum rank, the worst the maximum, and so on. Here the mean and standard deviation of the best solutions in Table 9 are subjected to the Friedman test, with the results given in Table 10. All the p values are lower than the considered significance level α=0.01, which indicates that significant differences among the seven algorithms do exist. According to the ranks obtained by the Friedman test in Table 10, PS-FW performs best on both the mean and the standard deviation of the best solutions, followed by AIWPSO, CLPSO, and the other four algorithms. Therefore, the accuracy of the solutions obtained by PS-FW is better than that of the other algorithms. However, the Friedman test can only detect whether significant differences exist among all the algorithms; it cannot perform pairwise comparisons between PS-FW and each of the other algorithms. Hence the Bonferroni-Dunn test is executed to check the superiority of PS-FW.
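For reference, the Friedman chi-square statistic can be recomputed directly from the per-function ranks; the small rank matrix below is illustrative and is not the data of Table 9.

```python
def friedman_statistic(ranks):
    """ranks: one row per benchmark function, one column per algorithm
    (rank 1 = best). The Friedman chi-square statistic is
    chi2 = 12N / (k(k+1)) * (sum_j Rbar_j^2 - k(k+1)^2 / 4),
    where Rbar_j is the mean rank of algorithm j over the N functions."""
    n, k = len(ranks), len(ranks[0])
    mean_ranks = [sum(row[j] for row in ranks) / n for j in range(k)]
    chi2 = (12 * n / (k * (k + 1))
            * (sum(r * r for r in mean_ranks) - k * (k + 1) ** 2 / 4))
    return chi2, mean_ranks

# Three algorithms ranked over four illustrative functions; the first
# algorithm is always ranked best, so it gets the lowest mean rank.
chi2, mr = friedman_statistic([[1, 2, 3], [1, 3, 2], [1, 2, 3], [1, 3, 2]])
```

The statistic is then compared against a chi-square distribution with k − 1 degrees of freedom to obtain the p value.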
Table 10: Results of the Friedman test for PS-FW and the other PSO variants over the mean and standard deviation of best solutions based on Table 9 (the best ranks are marked in bold).

Results | Mean | Std
N | 12 | 12
Chi-square | 35.33 | 37.18
p value | 3.72E-06 | 1.62E-06

Friedman ranks of the algorithms:

Algorithm | Mean | Std
PS-FW | 1.58 | 1.5
stdPSO | 4.83 | 4.67
CPSO | 5.08 | 5.17
CLPSO | 3.17 | 3.25
FIPS | 4.75 | 4.67
Frankenstein | 5.58 | 5.75
AIWPSO | 3 | 3
The Bonferroni-Dunn test can very intuitively detect significant differences between two or more algorithms. In this test, a significant difference between two algorithms exists when their mean ranks differ by at least the critical difference (CD), which is calculated as follows [41]: (19) CDα = qα · sqrt(Ni(Ni + 1)/(6Nf)), where Ni and Nf are the numbers of algorithms and benchmark functions, respectively, and the critical values qα at significance level α are (20) q0.05 = 2.77, q0.1 = 2.54.
By utilizing (19) and (20), the critical differences are (21) CD0.05 = 2.44, CD0.1 = 2.24.
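The critical differences in (21) can be reproduced directly from (19) with Ni = 7 algorithms and Nf = 12 benchmark functions:

```python
import math

def critical_difference(q_alpha, n_algorithms, n_functions):
    """Bonferroni-Dunn critical difference of (19):
    CD_alpha = q_alpha * sqrt(Ni * (Ni + 1) / (6 * Nf))."""
    return q_alpha * math.sqrt(n_algorithms * (n_algorithms + 1)
                               / (6 * n_functions))

cd_05 = critical_difference(2.77, 7, 12)   # approx. 2.44
cd_10 = critical_difference(2.54, 7, 12)   # approx. 2.24
```

Two mean ranks from Table 10 that differ by more than the relevant CD indicate a significant difference at that level.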
Here we carry out the Bonferroni-Dunn test for the mean and standard deviation of the best solutions on the basis of the ranks obtained by the Friedman test. To display the results more intuitively, we illustrate the critical differences among the seven algorithms in Figure 5. For clarity, a horizontal line indicating the threshold of the best performing algorithm (the one in pink) is drawn in each graph, along with two further lines, one for each significance level considered in the paper, whose heights equal the minimum rank plus the corresponding CD. If a bar exceeds a significance-level line, the corresponding algorithm performs significantly worse than the best performing algorithm. In Figure 5(a), the bar of PS-FW is the lowest among all the algorithms, and the bars of stdPSO, CPSO, FIPS, and Frankenstein exceed the significance-level lines, which indicates that PS-FW performs significantly better than these four algorithms in solution accuracy. In addition, PS-FW acquires the best rank for the standard deviation according to Figure 5(b), again with an obvious advantage over stdPSO, CPSO, FIPS, and Frankenstein. Therefore, we can conclude that PS-FW is the best performing algorithm, followed by AIWPSO, CLPSO, and the other four algorithms, and its advantages in efficiency and solution accuracy over the other algorithms are clearly confirmed.
Figure 5: Bar charts of the Bonferroni-Dunn test for PS-FW and the other PSO variants over (a) the mean and (b) the standard deviation of best solutions, based on Table 10.
Besides the above analysis, we count the number of successful runs and the average number of iterations in successful runs for PS-FW over the 12 benchmark functions; the statistics are presented in Table 11. Here a successful run means the algorithm obtains the optimum within 200,000 iterations. As shown in Table 11, PS-FW converges to the optimal solution in every run for the vast majority of functions, which manifests its robustness in solving optimization problems. To compare convergence speed fairly, the average numbers of iterations in successful runs are compared over the six functions f1, f4, f6, f7, f10, and f11 introduced in Nickabadi et al.'s paper. According to Table 11, PS-FW converges to the optimal solution for all six functions within 12,000 iterations, whereas, based on the convergence curves in Nickabadi et al.'s paper, the other algorithms have difficulty obtaining the optimum for functions f1, f6, f7, and f10 even after 200,000 iterations, or need far more iterations to converge for functions f4 and f11. Therefore, we can argue that the robustness and convergence speed of PS-FW are superior to those of the other algorithms.
Table 11: Statistical results of PS-FW in terms of success rate (ST) and average number of iterations in successful runs (AI) for the 12 benchmark functions.

Function | ST | AI
f1 | 30 | 3828.0
f2 | 30 | 882.6
f3 | 30 | 11266.5
f4 | 30 | 1853.8
f5 | 30 | 2134.7
f6 | 30 | 755.1
f7 | 30 | 5910.4
f8 | 30 | 2281.1
f10 | 30 | 6304.7
f11 | 29 | 1100.5
f12 | 30 | 7516.0
f13 | 0 | U
4.4. Experiments to Analyze the PS-FW Control Parameters
In this section, we investigate the impact of the control parameters on the performance of PS-FW. As introduced earlier, PS-FW has several control parameters, including those adopted from PSO and FWA. Here we analyze only the three main control parameters: the control factors of the explosion amplitudes, λmin and λmax, and the number of mutation sparks numM. To test the impact of parameter changes on performance exhaustively, six additional combinations of parameters were selected and experimented on; together with the settings of Section 4.2, which are marked as Strategy-1, they form seven optimization strategies whose detailed parameter settings are shown in Table 12. Each set of parameters corresponds to 20 runs on the 22 functions introduced in Table 1, with the dimension of the problems set to 100, and the parameter settings of PS-FW other than λmin, λmax, and numM are the same as in Section 4.2. As shown in Table 12, we use a contrastive method that changes one parameter while keeping the others unchanged. The optimization results and the corresponding ranks of the different strategies, focusing on the mean and standard deviation of the best solutions, are shown in Tables 13 and 14. From these results, PS-FW with Strategy-6 and Strategy-7 performs best for almost all the benchmark functions and obtains the highest ranks on both the mean and the standard deviation of the best solutions. With Strategy-6 and Strategy-7, PS-FW reaches the optimum of 16 functions in all 20 runs, notably including functions f1, f3, f6, f14, f19, and f22, for which the other optimization strategies of PS-FW cannot find the global best solutions.
The excellent performance of PS-FW with Strategy-6 and Strategy-7 therefore supports the proposed mutation operator and indicates that increasing the number of mutation sparks can enhance the global search capability of the algorithm. However, according to the "no free lunch" theorem [42], no algorithm can perform better than all others on all problems; hence PS-FW with Strategy-6 and Strategy-7 performs poorly on function f7. The reason is that f7 has a wide search scope, so the solutions change little in the later iterations when λmin is small, which results in a relatively slow convergence speed for PS-FW despite the increased number of mutation sparks. The other strategies have their own advantages on various test functions: PS-FW with Strategy-1 performs well for functions f1, f3, f6, f9, and f19; good solutions are obtained for functions f7 and f16 under Strategy-2 and Strategy-3; and PS-FW with Strategy-4 and Strategy-5 works well on functions f10 and f22. Meanwhile, PS-FW obtains the optimum of functions f2, f4, f5, f8, f12, f15, f17, f18, f20, and f21 and keeps outstanding performance on the other functions under all seven strategies, which strongly demonstrates the robustness of the proposed algorithm. To compare the convergence speeds of the different strategies, convergence curves over several functions are shown in Figure 6: the superiority of Strategy-6 and Strategy-7 in convergence speed is evident, and PS-FW with all strategies converges to solutions very close to the optima.
We then conduct the Friedman test and the Bonferroni-Dunn test on the mean and standard deviation of the best solutions obtained by the different optimization strategies, so as to determine how strongly each control parameter affects the performance of PS-FW. The results of the Friedman test for the different strategies are shown in Table 15, and the results of the Bonferroni-Dunn test based on Table 15 are presented in Figures 7 and 8.
The detailed parameter settings of the different optimization strategies for PS-FW (square brackets denote the rounding operation).

| Strategy | λmax | λmin | numM |
| Strategy-1 | 1 | 1E-25 | 30 |
| Strategy-2 | 1 | 1E-10 | 30 |
| Strategy-3 | 1 | 0.1 | 30 |
| Strategy-4 | 0.8 | 1E-25 | 30 |
| Strategy-5 | 0.6 | 1E-25 | 30 |
| Strategy-6 | 1 | 1E-25 | [0.5⋅numE] |
| Strategy-7 | 1 | 1E-25 | [0.7⋅numE] |
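As a small illustration of the settings above (a hypothetical helper, not part of the PS-FW code itself), the mutation-spark count numM is fixed at 30 for Strategies 1 to 5 and tied to the number of explosion sparks numE by rounding for Strategies 6 and 7:

```python
# Hypothetical helper mirroring the strategy table: numM is 30 for
# Strategies 1-5 and [0.5*numE] or [0.7*numE] (rounded) for Strategies 6-7.
def mutation_spark_count(strategy: str, num_explosion_sparks: int) -> int:
    if strategy in ("Strategy-1", "Strategy-2", "Strategy-3",
                    "Strategy-4", "Strategy-5"):
        return 30
    ratio = {"Strategy-6": 0.5, "Strategy-7": 0.7}[strategy]
    return round(ratio * num_explosion_sparks)

print(mutation_spark_count("Strategy-1", 50))  # 30
print(mutation_spark_count("Strategy-6", 50))  # 25
print(mutation_spark_count("Strategy-7", 50))  # 35
```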
The mean, standard deviation, and corresponding ranks of best solutions obtained by different optimization strategies of PS-FW for functions f1 to f13 (ranks in parentheses).

| f(x) | | Strategy-1 | Strategy-2 | Strategy-3 | Strategy-4 | Strategy-5 | Strategy-6 | Strategy-7 |
| f1 | Mean | 9.7833E-245 (2) | 6.6617E-217 (6) | 8.1065E-224 (5) | 1.4930E-224 (4) | 6.8133E-231 (3) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f2 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f3 | Mean | 1.0341E-26 (2) | 7.1483E-16 (3) | 2.5737E-13 (4) | 1.3156E-09 (5) | 2.2836E-09 (6) | 0 (1) | 0 (1) |
| | Std | 3.8500E-26 (2) | 1.3157E-15 (3) | 7.1641E-13 (4) | 4.2629E-09 (5) | 4.5987E-09 (6) | 0 (1) | 0 (1) |
| f4 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f5 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f6 | Mean | 7.1054E-16 (2) | 2.3093E-15 (4) | 1.4211E-15 (3) | 2.3093E-15 (4) | 2.4869E-15 (5) | 0 (1) | 0 (1) |
| | Std | 1.4211E-15 (2) | 1.6945E-15 (4) | 1.7405E-15 (5) | 1.6945E-15 (4) | 1.6281E-15 (3) | 0 (1) | 0 (1) |
| f7 | Mean | 2.1860E-71 (5) | 7.0151E-123 (2) | 3.5034E-126 (1) | 2.7732E-62 (7) | 2.0900E-65 (6) | 5.7053E-83 (4) | 2.3724E-87 (3) |
| | Std | 4.7535E-71 (5) | 1.8052E-122 (2) | 1.2502E-125 (1) | 1.2084E-61 (7) | 9.0599E-65 (6) | 5.7716E-83 (4) | 9.9762E-87 (3) |
| f8 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f9 | Mean | 1.1555E-90 (3) | 2.5372E-78 (6) | 1.6308E-76 (7) | 2.6199E-86 (5) | 1.4655E-89 (4) | 1.3155E-117 (2) | 6.1364E-130 (1) |
| | Std | 2.7315E-90 (3) | 1.1059E-77 (6) | 4.7755E-76 (7) | 7.7290E-86 (5) | 6.2719E-89 (4) | 5.7340E-117 (2) | 2.6737E-129 (1) |
| f10 | Mean | 2.2792E-128 (5) | 5.5926E-118 (7) | 9.1955E-124 (6) | 3.0530E-130 (4) | 2.8788E-130 (3) | 6.7603E-161 (2) | 1.6779E-167 (1) |
| | Std | 9.7764E-128 (5) | 2.4326E-117 (7) | 3.4455E-123 (6) | 9.2801E-130 (3) | 1.1346E-129 (4) | 2.9329E-160 (2) | 0 (1) |
| f11 | Mean | -4.1743E+04 (3) | -4.1279E+04 (6) | -4.1366E+04 (4) | -4.1366E+04 (4) | -4.1345E+04 (5) | -4.1757E+04 (2) | -4.1790E+04 (1) |
| | Std | 4.3502E+02 (7) | 4.1356E+02 (5) | 3.5331E+02 (4) | 4.1470E+02 (6) | 3.4657E+02 (3) | 2.6837E+02 (2) | 1.4566E+02 (1) |
| f12 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f13 | Mean | 1.4998E-32 (1) | 1.4998E-32 (1) | 1.4998E-32 (1) | 1.4998E-32 (1) | 1.4998E-32 (1) | 1.4998E-32 (1) | 1.4998E-32 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
The mean, standard deviation, and corresponding ranks of best solutions obtained by different optimization strategies of PS-FW for functions f14 to f22 (ranks in parentheses).

| f(x) | | Strategy-1 | Strategy-2 | Strategy-3 | Strategy-4 | Strategy-5 | Strategy-6 | Strategy-7 |
| f14 | Mean | 6.4751E-275 (3) | 4.6790E-268 (5) | 5.0050E-272 (4) | 1.2035E-283 (2) | 9.7967E-265 (6) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f15 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f16 | Mean | 2.4731E-93 (5) | 2.5574E-102 (4) | 1.0668E-102 (3) | 9.2122E-91 (7) | 7.8026E-91 (6) | 2.5290E-114 (2) | 1.7103E-116 (1) |
| | Std | 8.4009E-93 (5) | 1.0215E-101 (4) | 3.2290E-102 (3) | 3.7019E-90 (7) | 3.0225E-90 (6) | 4.6404E-114 (2) | 6.2900E-116 (1) |
| f17 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f18 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f19 | Mean | 9.0096E-250 (2) | 2.3878E-201 (5) | 1.5857E-189 (6) | 5.9464E-249 (3) | 1.5925E-244 (4) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f20 | Mean | 1 (1) | 1 (1) | 1 (1) | 1 (1) | 1 (1) | 1 (1) | 1 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f21 | Mean | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
| f22 | Mean | 4.9253E-273 (4) | 8.5544E-231 (5) | 1.4963E-229 (6) | 3.8782E-275 (3) | 4.3846E-276 (2) | 0 (1) | 0 (1) |
| | Std | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) | 0 (1) |
The results of Friedman test for the different strategies of PS-FW over the mean and standard deviation of optimal solutions based on Tables 13 and 14 (the best ranks in the last two columns are those of Strategy-7).

| Results | Mean | Std |
| N | 22 | 22 |
| Chi-square | 40.23 | 22.38 |
| p value | 4.10E-07 | 1.03E-03 |

Friedman ranks of algorithms:

| Strategy | Mean | Std |
| Strategy-1 | 3.91 | 4.14 |
| Strategy-2 | 4.75 | 4.25 |
| Strategy-3 | 4.52 | 4.23 |
| Strategy-4 | 4.5 | 4.52 |
| Strategy-5 | 4.64 | 4.27 |
| Strategy-6 | 2.95 | 3.41 |
| Strategy-7 | 2.73 | 3.18 |
Figure 6: Convergence curves of PS-FW with different strategies for functions f1, f9, f10, and f22.
Figure 7: The bar chart of the Bonferroni-Dunn test for different strategies over the mean of best solutions based on Table 15: (a) Strategy-7 as the best rank; (b) Strategy-6 as the best rank.
Figure 8: The bar chart of the Bonferroni-Dunn test for different strategies over the standard deviation of best solutions based on Table 15: (a) Strategy-7 as the best rank; (b) Strategy-6 as the best rank.
According to the results of the Friedman test in Table 15, the p value is lower than the considered significance level α=0.05 for both the mean and the standard deviation of the best solutions, which indicates that the seven strategies of PS-FW differ significantly in performance. From the ranks obtained by the Friedman test in Table 15, PS-FW with Strategy-7 performs best, followed by Strategy-6, Strategy-1, and so on, while PS-FW with Strategy-2 performs worst among all strategies over the average values of the best solutions. In the Bonferroni-Dunn test, the values of the critical difference are the same as those in Section 4.2, and the lines of the best rank and the significance level are also drawn in Figures 7 and 8. In Figure 7(a), the bars for Strategy-1 to Strategy-5 exceed the significance-level line; hence Strategy-7 represents the best combination of control parameters among the seven strategies, and PS-FW with Strategy-7 performs significantly better than all other strategies except Strategy-6. In addition, as shown in Figure 7(b), PS-FW with Strategy-6 is significantly superior to Strategy-2 through Strategy-5 over the average values of the best solutions. Finally, as shown in Figure 8, the gaps between the strategies in standard deviation are relatively small: Strategy-7 again performs best, followed by Strategy-6, Strategy-1, and the other strategies, while Strategy-4 performs worst.
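For reference, a Bonferroni-Dunn critical difference (CD) can be computed from the standard formula CD = q_α · sqrt(k(k+1)/(6N)), where q_α is the normal quantile with a Bonferroni correction over the k-1 comparisons against the control. The sketch below (illustrative only; the paper itself reuses the CD values from Section 4.2) evaluates it for k=7 strategies and N=22 functions at α=0.05 and lists each strategy's rank gap to the best-ranked Strategy-7 from Table 15.

```python
# Sketch: Bonferroni-Dunn critical difference for k treatments over N blocks,
# using the mean-based Friedman ranks from Table 15.
import math
from statistics import NormalDist

k, n, alpha = 7, 22, 0.05                      # strategies, functions, level
q = NormalDist().inv_cdf(1 - alpha / (2 * (k - 1)))   # corrected normal quantile
cd = q * math.sqrt(k * (k + 1) / (6 * n))             # critical difference

ranks = {"Strategy-1": 3.91, "Strategy-2": 4.75, "Strategy-3": 4.52,
         "Strategy-4": 4.50, "Strategy-5": 4.64, "Strategy-6": 2.95,
         "Strategy-7": 2.73}
best = min(ranks.values())                     # Strategy-7's rank
print(f"CD at alpha={alpha}: {cd:.3f}")
for name, r in ranks.items():
    print(f"{name}: gap to best rank = {r - best:.2f}")
```

A rank gap larger than the CD marks a strategy as significantly worse than the control at the chosen level; this is the comparison visualized by the bars and the significance-level line in Figures 7 and 8.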
Therefore, based on the analysis above, the solution accuracy and convergence speed of PS-FW are determined by the control parameters λmin, λmax, and numM. Compared with λmin and λmax, the number of mutation sparks has a greater impact on the performance of PS-FW; hence the number of mutation sparks can be increased appropriately when solving difficult multimodal global optimization problems. In addition, the value of λmin can be increased for optimization problems with a large search range, such as function f7. Considering that increasing the number of mutation sparks lengthens the computing time, Strategy-1, which ranks third among the seven strategies, is used for the experiments in Sections 4.2 and 4.3 of this paper to improve computational efficiency. In general, suitable control parameters should be chosen for each problem by taking all of these aspects into consideration.
5. Conclusion
In this paper, a hybrid algorithm named PS-FW is proposed to solve global optimization problems. In PS-FW, the exploitation capability of PSO is used to find the optimal solution and make the hybrid algorithm converge quickly, whereas the exploration ability of FWA is used to search for better solutions in the entire feasible space. Moreover, the abandonment and supplement mechanism, the modified explosion operator, and the novel mutation operator are proposed to enhance both the global and local search abilities of the algorithm. The validity of PS-FW is then confirmed on 22 well-known high-dimensional benchmark functions. Comparisons with PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO on global optimization problems show that PS-FW is an efficacious, fast converging, and robust optimization algorithm.
Future work will refine PS-FW by testing it on more complex high-dimensional optimization problems. Furthermore, we will try to apply the algorithm to multiobjective optimization problems and to real-world problems such as spatial layout optimization, route optimization, and structural parameter optimization.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
Acknowledgments
This study was funded by National Natural Science Foundation of China (nos. 51674086 and 51534004) and Northeast Petroleum University Innovation Foundation for Postgraduate (no. YJSCX2015-012NEPU).
[1] Tan Y.
[2] Islam N., Rana S., Ahsan R., Ghani S., "An Optimized Design of Network Arch Bridge using Global Optimization Algorithm."
[3] Vinot E., Reinbold V., Trigui R., "Global Optimized Design of an Electric Variable Transmission for HEVs."
[4] Gabere N.
[5] Storn R., Price K., "Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces."
[6] Kaelo P., Ali M. M., "Integrated crossover rules in real coded genetic algorithms."
[7] Kennedy J., Eberhart R., "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks (ICNN '95), vol. 4, Perth, Western Australia, November-December 1995, pp. 1942-1948. doi:10.1109/ICNN.1995.488968
[8] Dorigo M., Birattari M., Stützle T., "Ant colony optimization."
[9] Karaboga D., "An idea based on honey bee swarm for numerical optimization," Erciyes University, Kayseri, Turkey, 2005.
[10] Tan Y., Zhu Y., "Fireworks algorithm for optimization."
[11] Wang J., Lin B., Jin J., "Optimizing the shunting schedule of electric multiple units depot using an enhanced particle swarm optimization algorithm."
[12] Wu X., Li C., Jia W., He Y., "Optimal operation of trunk natural gas pipelines via an inertia-adaptive particle swarm optimization algorithm."
[13] Hua X., Hu X., Yuan W., "Research optimization on logistics distribution center location based on adaptive particle swarm algorithm."
[14] Garro B. A., Vázquez R. A., "Designing artificial neural networks using particle swarm optimization algorithms."
[15] Ye S., Ma H., Xu S., Yang W., Fei M., "An effective fireworks algorithm for warehouse-scheduling problem."
[16] Zheng Y., Song Q., Chen S., "Multiobjective fireworks optimization for variable-rate fertilization in oil crop production."
[17] Mohamed Imran A., Kowsalya M., Kothari D. P., "A novel integration technique for optimal network reconfiguration and distributed generation placement in power distribution networks."
[18] Li J., Tan Y., "Loser-out tournament based fireworks algorithm for multi-modal function optimization."
[19] Li Z., Wang W., Yan Y., Li Z., "PS-ABC: A hybrid algorithm based on particle swarm and artificial bee colony for high-dimensional optimization problems."
[20] Zheng Y.-J., Xu X.-L., Ling H.-F., Chen S.-Y., "A hybrid fireworks optimization method with differential evolution operators."
[21] Zheng S., Li J., Janecek A., Tan Y., "A cooperative framework for fireworks algorithm."
[22] Nickabadi A., Ebadzadeh M. M., Safabakhsh R., "A novel particle swarm optimization algorithm with adaptive inertia weight."
[23] Li L., Liu F., Long G., Guo P., Bie X., "Modified particle swarm optimization for BMDS interceptor resource planning."
[24] Wang C.-F., Liu K., "A novel particle swarm optimization algorithm for global optimization."
[25] Souravlias D., Parsopoulos K. E., "Particle swarm optimization with neighborhood-based budget allocation."
[26] Xue J.-J., Wang Y., Li H., Meng X.-F., Xiao J.-Y., "Advanced fireworks algorithm and its application research in PID parameters tuning."
[27] Liu J., Zheng S., Tan Y., "The improvement on controlling exploration and exploitation of firework algorithm," in Proceedings of the International Conference in Swarm Intelligence, Springer, Berlin, Heidelberg, Germany, 2013, pp. 11-23. doi:10.1007/978-3-642-38703-6_2
[28] Pei Y., Zheng S., Tan Y., Takagi H., "Effectiveness of approximation strategy in surrogate-assisted fireworks algorithm."
[29] Zheng S., Janecek A., Tan Y., "Enhanced fireworks algorithm," in Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, June 2013, pp. 2069-2077. doi:10.1109/CEC.2013.6557813
[30] Zheng S., Yu C., Li J., Tan Y., "Exponentially decreased dimension number strategy based dynamic search fireworks algorithm for solving CEC2015 competition problems," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '15), Sendai, Japan, 2015, pp. 1-8. doi:10.1109/CEC.2015.7257010
[31] Zheng S., Janecek A., Li J., Tan Y., "Dynamic search in fireworks algorithm," in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), China, July 2014, pp. 3222-3229. doi:10.1109/CEC.2014.6900485
[32] Li J., Zheng S., Tan Y., "The Effect of Information Utilization: Introducing a Novel Guiding Spark in the Fireworks Algorithm."
[33] Li J., Zheng S., Tan Y., "Adaptive fireworks algorithm," in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), July 2014, pp. 3214-3221. doi:10.1109/CEC.2014.6900418
[34] Li J., Tan Y., "The bare bones fireworks algorithm: A minimalist global optimizer."
[35] Valdez F., Melin P., Castillo O., "Modular Neural Networks architecture optimization with a new nature inspired method using a fuzzy combination of Particle Swarm Optimization and Genetic Algorithms."
[36] Pandit M., Chaudhary V., Dubey H. M., Panigrahi B. K., "Multi-period wind integrated optimal dispatch using series PSO-DE with time-varying Gaussian membership function based fuzzy selection."
[37] Gao H., Diao M., "Cultural firework algorithm and its application for digital filters design."
[38] Zhang B., Zhang M.-X., Zheng Y.-J., "A hybrid biogeography-based optimization and fireworks algorithm," in Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC '14), Beijing, China, July 2014, pp. 3200-3206. doi:10.1109/CEC.2014.6900289
[39] Amoshahy M. J., Shamsi M., Sedaaghi M. H., "A novel flexible inertia weight particle swarm optimization algorithm."
[40] Friedman M., "A comparison of alternative tests of significance for the problem of m rankings."
[41] Dunn O. J., "Multiple comparisons among means."
[42] Wolpert D. H., Macready W. G., "No free lunch theorems for optimization."