A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Nonlinear Equations

Traditional methods such as quasi-Newton methods and Gauss–Newton-based BFGS are widely used to solve nonlinear equations. In this paper, we present an improved particle swarm optimization algorithm for solving nonlinear equations. The novel algorithm introduces the historical and local optimum information of particles to update a particle's velocity. Five sets of typical nonlinear equations are employed to test the quality and reliability of the search of the novel algorithm in comparison with the basic PSO algorithm. Numerical results show that the proposed method is effective for the given test problems. The new algorithm can serve as a new tool for solving nonlinear equations and continuous function optimization, as well as combinatorial optimization problems. The global convergence of the given method is established.


Introduction
Many practical problems in engineering technology, information security, and other fields can be reduced to solving nonlinear equations [1, 2]. Because of the complexity of nonlinear equations, they are difficult to solve, especially high-dimensional nonlinear equations. Newton's method and its improved forms are extensively used at present, but the Newton-Raphson method has limitations [3]. Its convergence and performance characteristics can be highly sensitive to the initial guess supplied to the method, and it is difficult to select a good initial guess for most systems of nonlinear equations. Many researchers have put forward various solution methods for different nonlinear equations. The Jacobian-free Newton-Krylov method is widely used for the nonlinear equations arising in many applications; however, an effective preconditioner is required at each iteration, and determining one may be hard or expensive. Xu and Coleman [4] proposed an efficient two-sided bicoloring method to determine the lower triangular half of the sparse Jacobian matrix via automatic differentiation; with this lower triangular matrix, an effective preconditioner is constructed to accelerate the convergence of the Newton-Krylov method. Paper [5] introduced ANTIGONE (Algorithms for coNTinuous/Integer Global Optimization of Nonlinear Equations), a general mixed-integer nonlinear global optimization framework. Yuan and Zhang [6, 7] presented a three-term Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations. Yan [8] introduced a new unified two-parameter wave model, connecting integrable local and nonlocal vector nonlinear Schrödinger equations. Fan and Lu [9] presented a modified trust region algorithm for nonlinear equations with the trust region radii converging to zero. The trust region algorithm preserves the global convergence of traditional trust region algorithms.
Moreover, it converges nearly q-cubically under the local error bound condition, which is weaker than the nonsingularity of the Jacobian at a solution. Paper [10] proposed and analyzed novel energy-preserving algorithms for solving the nonlinear Hamiltonian wave equation equipped with homogeneous Neumann boundary conditions. Paper [11] proposed a norm descent derivative-free algorithm for solving large-scale nonlinear symmetric equations that involves no information about the gradient or Jacobian matrix, using some approximate substitutions instead. Yuan and Hu [12] proposed a new three-term conjugate gradient algorithm under the Yuan-Wei-Lu line search technique to solve large-scale unconstrained optimization problems and nonlinear equations. A numerical method is used to solve systems of equations in paper [13].
Evolutionary algorithms are also used to solve systems of equations, as in [14, 15]. This paper studies how to solve nonlinear equations using the particle swarm optimization (PSO) algorithm. The PSO algorithm, introduced using a social analogy of swarm behavior in populations of natural organisms [16], has been vastly developed. Jie Zhang et al. [17] presented a hybrid clustering algorithm based on PSO with dynamic crossover. Chauhan et al. [18] developed a novel inertia weight strategy for particle swarm optimization, proposing three new nonlinear strategies for selecting the inertia weight, which plays a significant role in a particle's foraging behavior. Pan et al. [19] proposed an improved consensus protocol on the basis of the velocity and position equations of the canonical PSO algorithm and transformed the dynamical PSO system into a new linear discrete-time system involving random variables. The boundary of the consensus region is given to better select the parameters of the PSO algorithm. PSO has been successfully applied in many areas: function optimization, scheduling, fuzzy system control, and others. The core concept of PSO is changing the velocity of each particle, accelerating it toward its previous best position (pbest) and the global best position (gbest). Each particle modifies its current position and velocity according to the distance between its current position and pbest and the distance between its current position and gbest. Suppose the search space is D-dimensional and the swarm is made up of m particles. Each particle is represented as x_i = (x_i1, x_i2, ..., x_iD). The velocity of the particle is denoted as vel_i = (vel_i1, vel_i2, ..., vel_iD). The best previous position of the i-th particle is represented as p_i = (p_i1, p_i2, ..., p_iD). The global best position of all particles is denoted as p_g = (p_g1, p_g2, ..., p_gD).
The velocity and the position are updated according to the following iterative equations:

vel_id(k + 1) = w * vel_id(k) + c1 * r1 * (p_id - x_id(k)) + c2 * r2 * (p_gd - x_id(k)),   (1)
x_id(k + 1) = x_id(k) + vel_id(k + 1),   (2)

in which
(i) w is the inertia weight;
(ii) c1 and c2 are acceleration coefficients related to pbest and gbest, respectively;
(iii) r1 and r2 are random numbers with uniform distribution U(0, 1).

José García-Nieto and Enrique Alba used a parallel PSO for gene selection on high-dimensional microarray datasets [20]. Lin et al. [21] proposed a novel method for training the parameters of an adaptive network-based fuzzy inference system (ANFIS) that emphasizes the use of gradient descent (GD) methods.
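The update equations (1)-(2) can be sketched in a few lines of code. This is an illustrative sketch, not the paper's implementation; the parameter values w = 0.7 and c1 = c2 = 1.5 are common defaults assumed here.

```python
import random

def pso_step(x, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One iteration of the basic PSO update, eqs. (1)-(2):
    each particle's velocity is pulled toward its own best position
    (pbest) and the swarm's global best position (gbest)."""
    new_x, new_vel = [], []
    for xi, vi, pi in zip(x, vel, pbest):
        nv, nx = [], []
        for d in range(len(xi)):
            r1, r2 = random.random(), random.random()  # U(0, 1) draws
            v = w * vi[d] + c1 * r1 * (pi[d] - xi[d]) + c2 * r2 * (gbest[d] - xi[d])
            nv.append(v)
            nx.append(xi[d] + v)  # eq. (2): position update
        new_vel.append(nv)
        new_x.append(nx)
    return new_x, new_vel
```

With a zero initial velocity and both bests lying above the current position, a step moves the particle toward the bests, as expected from the attraction terms in (1).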
Haddou and Maheu [22] developed some new smoothing techniques to solve general nonlinear complementarity problems. Litvinov et al. [23] carried out a survey on universal algorithms for solving the matrix Bellman equations over semirings, especially tropical and idempotent semirings. Chen et al. [24] developed a fast Fourier-Galerkin method for solving the nonlinear integral equation reformulated from a class of nonlinear boundary value problems. Wang and Zhang [25] presented a new family of two-step iterative methods to solve nonlinear equations. This paper presents an improved PSO algorithm in order to optimize complex nonlinear equations.

A Velocity-Combined Local Best Particle Swarm Optimization Algorithm for Equations

2.1. The Iterative Formula of VCLBPSO Algorithms. Although the PSO algorithm can converge very quickly toward the nearest optimal solution for many optimization problems, PSO experiences difficulties in reaching the global optimal solution [26]. The reason is that the diversity of the swarm decreases as the swarm approaches the nearest optimal solution. Significant efforts have been devoted to improving the efficiency of the algorithm and enhancing the diversity of the population. Gobbi et al. [27] researched a kind of local approximation-based multiobjective optimization algorithm with applications.
In this article, we work to increase the diversity of population information. In the new algorithm, when the current run is completed, the best position found by the population is used in the position updates of the next run. Relative to the next round of optimization, the current optimal particle is a historical local best, distinct from the next round's global optimum. We call the improved PSO algorithm the velocity-combined local best particle swarm optimization algorithm (VCLBPSO). The historical local information increases the diversity of the reference information used by particles, which is expected to help overcome the problem of premature convergence.
A swarm of m particles moves through a D-dimensional search space. A particle is defined by its current position x_id(k) and velocity vel_id(k). Each particle remembers its best location information p_i = (p_i1, p_i2, ..., p_iD), at which it has found its best value of the optimization function f. Based on the different combinations of the historical local best information and the updated velocity, the VCLBPSO algorithm can be extended in four ways.

(1) The iterative formula of the basic VCLBPSO is as follows:

vel_id(k + 1) = w * vel_id(k) + c1 * r1 * (p_id - x_id(k)) + c2 * r2 * (p_gd - x_id(k)),   (3)
vel_id(k + 1) = λ * vel_id(k + 1) + (1 - λ) * (p_ld(t) - x_id(k)),   (4)
x_id(k + 1) = x_id(k) + vel_id(k + 1),   (5)
P_L(t) = P_g(t), the global best position found in the t-th run,   (6)

where equations (3) and (1) are exactly the same; w, c1, c2, r1, r2 and p_id, p_gd have the same meaning as in equation (1). p_ld(t) is the local best solution found for the optimization problem in the previous t-th run, and P_L(t) = (p_l1(t), p_l2(t), ..., p_lD(t)). λ is a constant coefficient used to balance the local best found in the previous t-th run against the optimal information found in this run when updating the velocity. Here, each particle updates its position through its position one generation ago and the velocity, which is updated with both the local best found in the previous t-th run and the optimal information found in this run.
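The λ-combination step can be isolated as a small helper. This is a minimal sketch reconstructed from the description above: the standard velocity of eq. (3) is blended with the pull toward the previous run's local best, with the exact blend form being an assumption.

```python
def vclbpso_velocity(v_std, x, p_local, lam=0.5):
    """Blend the standard PSO velocity v_std (eq. (3)) with the
    attraction toward the historical local best p_local from the
    previous run (eq. (4) as reconstructed in the text).
    lam is the constant balance coefficient; its value here is
    illustrative, not taken from the paper."""
    return [lam * v + (1.0 - lam) * (pl - xi)
            for v, xi, pl in zip(v_std, x, p_local)]
```

With lam = 1 the update reduces to the basic PSO velocity; with lam closer to 0 the previous run's best dominates, which is the source of the extra diversity the algorithm exploits.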
(2) The random combination way is called the velocity randomly combined local best particle swarm optimization algorithm (VRCLBPSO for short), and its iterative formula is as follows:

vel_id(k + 1) = w * vel_id(k) + c1 * r1 * (p_id - x_id(k)) + c2 * r2 * (p_gd - x_id(k)),   (7)
vel_id(k + 1) = r3 * vel_id(k + 1) + (1 - r3) * (p_ld(t) - x_id(k)),   (8)

where w, c1, c2, r1, r2 and p_id, p_gd, p_ld(t) have the same meaning as in equation (3), and r3 ∈ (0, 1) in equation (8) is a random number.

(3) When the velocity is entirely randomly combined with the local best, the algorithm is called EVRCLBPSO for short. Its iterative formula has the same form as equation (8), where w, c1, c2, r1, r2 and p_id, p_gd, p_ld have the same meaning as in equation (3), and r3 ∈ (0, 1) is an m × D matrix (a fresh random number for every particle and dimension). If r3 ∈ (0, 1) is a 1 × D vector (one random number per dimension, shared by all particles), the algorithm is called NVRCLBPSO for short, and its iterative formula is similar to that of EVRCLBPSO.

The Method of Converting Nonlinear Equations to Function Optimization.

To solve nonlinear equations conveniently, the transformation of nonlinear equations into a function optimization problem is expressed mathematically as follows.
Suppose a system of nonlinear equations is made up of n functions involving n unknown variables, described as follows:

f_i(X) = 0,  i = 1, 2, ..., n,   (15)

where f_i(X), i = 1, 2, ..., n, are nonlinear functions and X = (x_1, x_2, ..., x_n)^T is the unknown vector.
In accordance with the algorithm principle, a fitness function is constructed as follows:

F(X) = sum_{i=1}^{n} f_i(X)^2.   (16)

The problem of solving the system of nonlinear equations (15) is thus transformed into the optimization problem of minimizing the fitness function (16). By mathematical principle, when F(X) attains the optimal value 0, every f_i(X) is also 0, and the nonlinear equations are solved. When F(X) is optimized by the improved PSO algorithm and reaches a value near the optimum, each f_i(X) is also nearly solved.
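The conversion from a system of equations to a single fitness function can be written directly. This sketch assumes the sum-of-squares form of (16); `make_fitness` is a helper name introduced here for illustration.

```python
def make_fitness(funcs):
    """Convert a system f_i(X) = 0, i = 1..n, into the scalar fitness
    F(X) = sum_i f_i(X)^2 of eq. (16).  F(X) >= 0 everywhere, and
    F(X) = 0 exactly when X solves every equation simultaneously."""
    def F(X):
        return sum(f(X) ** 2 for f in funcs)
    return F
```

For example, the system x1 + x2 - 3 = 0, x1 - x2 - 1 = 0 has the solution (2, 1), at which the fitness is exactly zero.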

The Pseudocode of the VCLBPSO Algorithm for Nonlinear Equations

(i) FUNCTION VCLBPSOE()
(ii) FOR optimization run from 1 to t (the total number of runs)
(iii) Randomly generate the original population and velocities
(iv) Compute the fitness of all particles using (16)
(v) Record the best fitness of all particles
(vi) WHILE the current number of iterations is less than the final iteration
(vii) Update the particle velocity using (3) of the VCLBPSO algorithm
(viii) Modify the value of the velocity using the update formula of the VCLBPSO algorithm (4), the VRCLBPSO algorithm (8), or the NVRCLBPSO algorithm (12)
(ix) Update the particle position using (5)
(x) By comparison, get the optimal fitness of the two generations, and record the better one as the individual historical best
(xi) Record the best particle in the population as this run's global best
(xii) END
(xiii) Using (6), update the historical local best P_L(t); if this is the first run, the VCLBPSO algorithm is the same as the basic PSO (1)-(2)
(xiv) END
(xv) Output the position of the particle and obtain the optimal fitness value
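The pseudocode above can be sketched end to end as follows. This is a minimal illustrative implementation, not the authors' code: the λ-combination form of eq. (4), the parameter defaults, and the function name `vclbpso` are all assumptions, and no velocity clamping or boundary handling is included.

```python
import random

def vclbpso(F, dim, bounds, pop=30, runs=3, iters=200,
            w=0.7, c1=1.5, c2=1.5, lam=0.5):
    """Sketch of the VCLBPSO pseudocode: several successive PSO runs,
    each randomly re-initialized; from the second run on, the velocity
    is blended (eq. (4), coefficient lam) with the pull toward the best
    position P_L found in the previous run."""
    lo, hi = bounds
    p_local = None                       # historical local best P_L(t)
    best_x, best_f = None, float("inf")
    for run in range(runs):
        # (iii) randomly generate population and velocities
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
        vel = [[0.0] * dim for _ in range(pop)]
        pbest = [xi[:] for xi in x]
        pbest_f = [F(xi) for xi in x]    # (iv) fitness via eq. (16)
        g = min(range(pop), key=lambda i: pbest_f[i])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(iters):           # (vi) main iteration loop
            for i in range(pop):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    v = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - x[i][d])
                         + c2 * r2 * (gbest[d] - x[i][d]))       # eq. (3)
                    if p_local is not None:                      # eq. (4)
                        v = lam * v + (1 - lam) * (p_local[d] - x[i][d])
                    vel[i][d] = v
                    x[i][d] = x[i][d] + v                        # eq. (5)
                fx = F(x[i])
                if fx < pbest_f[i]:      # (x) individual historical best
                    pbest[i], pbest_f[i] = x[i][:], fx
                    if fx < gbest_f:     # (xi) this run's global best
                        gbest, gbest_f = x[i][:], fx
        p_local = gbest[:]               # eq. (6): carry best into next run
        if gbest_f < best_f:
            best_x, best_f = gbest[:], gbest_f
    return best_x, best_f                # (xv) output best position/fitness
```

Minimizing, for instance, F(X) = (x1 - 1)^2 + (x2 + 2)^2 drives the fitness toward zero at (1, -2), mirroring how the fitness (16) of a solvable system is driven toward zero at a root.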

Experimental Approach
The application of PSO to nonlinear equations has been investigated by some scholars, but the research is limited. Brits et al. [28] introduced the concept of shrinking particle neighborhoods in PSO to optimize systems of unconstrained equations. Ouyang et al. [29] proposed a hybrid particle swarm optimization algorithm to solve systems of nonlinear equations. Mo et al. [30] introduced the conjugate direction method into particle swarm optimization in order to improve PSO for solving systems of nonlinear equations; Mo and Liu used this conjugate direction particle swarm optimization to solve systems of nonlinear equations. The methods proposed above are mainly used to optimize low-dimensional equations. In this paper, the improved PSO algorithm is mainly used to optimize mid-dimensional and high-dimensional nonlinear equations, in addition to low-dimensional ones.
Five nonlinear equations are used to test the effect of the improved PSO algorithm.

Experimental Results and Discussion
To test the effect of the improved algorithm on nonlinear equations, the parameters shared by the different algorithms are set exactly the same. Each problem to be optimized is run 32 times by each method. In the following, we compare the effectiveness of the improved algorithms and the ordinary PSO on nonlinear equations of different dimensions. The experimental results are listed in Table 1, along with the relevant parameters. In Table 1, Dim denotes the problem's dimension, and Ps and Genn indicate the population size and the algorithm's terminating generation, respectively. Table 1 confirms that the improved algorithms obtain better results across different dimensions. Table 1 also shows that introducing the historical local best information increases the diversity of a particle's flight reference information. This keeps the particle search from falling into a local optimal solution too early, and the accuracy of convergence is improved. The improved PSO obtains smaller minimum, average, and standard deviation values. In general, VCLBPSO, VRCLBPSO, and NVRCLBPSO are efficient for low-dimensional problems, with the NVRCLBPSO search being the most stable, while EVRCLBPSO is the best for high-dimensional problems. The convergence curves and the distributions of the best values over the 32 runs are shown in Appendix A, in which d is the dimension of the problems.
From Table 1 and Figure 1 of Appendix A, we find that the convergence rate of the improved method on each problem is clearly faster than that of the ordinary PSO. Also, for each problem of higher dimension, the EVRCLBPSO algorithm has better convergence speed and stronger search ability. When optimizing high-dimensional nonlinear equation problems, PSO easily falls into local optima, whereas the method presented in this paper successfully jumps out of local optimal solutions. Numerical results show that the proposed method is effective for some nonlinear equation problems, and the global convergence of the given method is established.

Conclusions and Perspectives
So far, research on applying PSO to optimizing equations is very limited, especially for high-dimensional nonlinear equations. We propose an improved PSO algorithm for solving nonlinear equations in this paper. Experimental results, compared with the basic PSO algorithm, show that the improved method achieves a better effect in both convergence speed and accuracy; for high-dimensional nonlinear equations in particular, the improvement is very pronounced. The application scenarios of this improved algorithm are relatively widespread: it can be applied to unconstrained optimization problems, constrained optimization problems, equation problems, engineering practice problems, etc.
Nonlinear equations are an important problem, and their optimization is very meaningful and valuable for many practical problems. Applying the PSO algorithm and its variants to optimizing nonlinear equations, especially high-dimensional ones, will be a valuable direction.

Data Availability
The data used to support the findings of this study are included within the article.