A new algorithm for solving the nonlinear complementarity problem is presented, obtained by combining the particle swarm algorithm with the proximal point algorithm; we call it the particle swarm optimization-proximal point algorithm. The algorithm first transforms the nonlinear complementarity problem into an unconstrained optimization problem for a smooth function using the maximum entropy function, and then optimizes this problem using the proximal point algorithm as the outer algorithm and the particle swarm algorithm as the inner algorithm. Numerical results show that the algorithm has a fast convergence speed and good numerical stability, so it is an effective algorithm for solving nonlinear complementarity problems.
1. Introduction
The nonlinear complementarity problem is an important class of nonsmooth optimization problems, with wide applications in mechanics, engineering, and economics, and algorithms for it have attracted great attention from scholars at home and abroad. The main algorithms for solving such problems include the Lemke algorithm, the homotopy method, projection algorithms, Newton-type algorithms, and interior point algorithms [1–10]. Most of these are gradient-based and depend on the selection of an initial point, which is often very difficult; therefore, in recent years many researchers have turned to bionic intelligent algorithms, which require neither an initial point nor gradient information. In 1995, Kennedy and Eberhart first proposed the particle swarm optimization (PSO) algorithm [11], inspired by the preying behavior of bird flocks. Like the genetic algorithm, it is an iteration-based optimization tool; its advantages are that it is easy to understand, easy to implement, and needs few parameters to tune, and it has been widely recognized in academia, where a number of improved variants have since been proposed. At present, the algorithm has been successfully applied to function optimization, neural network training, fuzzy system control, and other areas. In 2008, Zhang and Zhou [12] proposed a hybrid algorithm for solving nonlinear minimax problems that combines the particle swarm optimization algorithm with the maximum entropy function method; Zhang later extended the approach to nonlinear L-1 norm minimization problems [13] and nonlinear complementarity problems [14], with good results. These algorithms, however, are essentially stochastic, and the entropy function serves only to transform the problem. Sun et al. [15]
proposed a social cognitive optimization (SCO) algorithm based on the entropy function for solving nonlinear complementarity problems. In 2004, Yamashita et al. [16] proposed a new technique that uses a sequence generated by the proximal point algorithm (PPA). Even when the entropy function is convex, however, none of these algorithms can guarantee convergence to the global optimal point with certainty. The proximal point algorithm, by contrast, is a globally convergent algorithm for convex optimization problems. Building on this, the present paper studies a combined particle swarm optimization and proximal point algorithm for the nonlinear complementarity problem in which every component function is convex, for which convergence to the global optimal point is guaranteed. First, the nonlinear complementarity problem is transformed into an unconstrained optimization problem for a smooth function using the maximum entropy function; then the particle swarm optimization-proximal point algorithm is constructed by combining the particle swarm optimization algorithm with the proximal point algorithm, giving full play to their respective advantages, and the resulting hybrid algorithm is used to optimize the problem. The numerical results show that the new algorithm is effective, with a fast convergence rate and good numerical stability.
2. The Smoothing of Nonlinear Complementary Problem
Let f: R^n → R^n be continuously differentiable; the nonlinear complementarity problem, NCP(f), is to find x ∈ R^n such that
(1)x≥0,f(x)≥0,xTf(x)=0,
When f(x) = Mx + q (where M is an n×n matrix and q ∈ R^n is a constant vector), the NCP reduces to the linear complementarity problem LCP(q, M).
Definition 1 (see [<xref ref-type="bibr" rid="B2">2</xref>]).
If φ(a,b):R2→R satisfies
(2)φ(a,b)=0⟺a≥0,b≥0,ab=0,
then φ(a,b) is an NCP function.
We can transform solving NCP(f) into solving a system of nonlinear equations by using the Fischer-Burmeister function, as follows:
(3)[φ(x1,f1(x))⋮φ(xn,fn(x))]=0.
Equation (3) can be transformed into the optimization problem
(4) min_x max_{1≤i≤n} {|φ(x_i, f_i(x))|},
where each component φ(x_i, f_i(x)) is a smooth function of the vector x ∈ Ω ⊂ R^n. The problem as a whole, however, is a nondifferentiable optimization problem.
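As an illustration, the Fischer-Burmeister function of Definition 1 and the minimax residual (4) can be sketched in a few lines of Python (the function names are ours, for illustration only):

```python
import math

def phi_fb(a, b):
    """Fischer-Burmeister NCP function: phi(a,b) = sqrt(a^2 + b^2) - a - b.
    By Definition 1, phi(a,b) = 0  <=>  a >= 0, b >= 0, ab = 0."""
    return math.sqrt(a * a + b * b) - a - b

def minimax_residual(x, f):
    """Nonsmooth merit function max_i |phi(x_i, f_i(x))| from (4);
    f maps a point x to the vector (f_1(x), ..., f_n(x))."""
    fx = f(x)
    return max(abs(phi_fb(xi, fi)) for xi, fi in zip(x, fx))
```

For example, for the one-dimensional problem f(x) = x − 1 (solution x = 1), the residual vanishes exactly at the solution.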
We can transform the nonlinear complementarity problem into a smooth optimization problem by using the maximum entropy function method, as follows.
Definition 2 (see [<xref ref-type="bibr" rid="B9">9</xref>, <xref ref-type="bibr" rid="B10">10</xref>]).
Let
(5) F_p(x) = (1/p) ln { ∑_{i=1}^{n} exp[p|φ(x_i, f_i(x))|] }
be the maximum entropy function of (3) on x ∈ Ω ⊂ R^n; since |a| = max{a, −a}, the absolute values can be aggregated as well, which gives
(6) F_p(x) = (1/p) ln { ∑_{i=1}^{n} ( exp[pφ(x_i, f_i(x))] + exp[−pφ(x_i, f_i(x))] ) }.
Theorem 3 (see [<xref ref-type="bibr" rid="B9">9</xref>, <xref ref-type="bibr" rid="B10">10</xref>]).
For any x ∈ Ω ⊂ R^n, the function F_p(x) decreases monotonically as the parameter p increases and converges to max_{1≤i≤n} {|φ(x_i, f_i(x))|} as p → ∞; that is,
(7) F_s(x) ≥ F_r(x) for s ≤ r,  lim_{p→∞} F_p(x) = max_{1≤i≤n} {|φ(x_i, f_i(x))|}.
The initial ideas of the entropy function method originated in the 1979 paper of Kreisselmeier and Steinhauser [17]. Engineers and technical personnel at home and abroad favor it because common software for solving many types of optimization problems is easy to write with it, and under certain convexity conditions it can deliver the required solution accuracy. Since the 1980s, the method has been widely used in structural optimization, engineering design, and other fields. In recent years, it has produced good results for constrained and unconstrained minimax problems, linear programming problems, and semi-infinite programming.
By the above theorem, for p large enough we can use the maximum entropy function in place of the objective max_{1≤i≤n} {|φ(x_i, f_i(x))|}; the nonsmooth problem then becomes an unconstrained optimization problem for a differentiable function. As is well known, for finite p we obtain only an approximate solution of the original problem, but an appropriately large p guarantees high precision. A problem with constraints can be converted into an unconstrained one with the penalty function method.
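A numerically stable evaluation of (6) can be sketched as follows; the name `entropy_merit` and the log-sum-exp shift are our own additions (without the shift, exp(p·φ) overflows long before p = 10^5):

```python
import math

def phi_fb(a, b):
    """Fischer-Burmeister NCP function."""
    return math.sqrt(a * a + b * b) - a - b

def entropy_merit(x, f, p=1e5):
    """Max-entropy function (6):
    F_p(x) = (1/p) ln sum_i [exp(p*phi_i) + exp(-p*phi_i)],
    evaluated with the standard log-sum-exp shift for stability."""
    terms = []
    for xi, fi in zip(x, f(x)):
        v = p * phi_fb(xi, fi)
        terms.extend([v, -v])
    m = max(terms)
    return (m + math.log(sum(math.exp(t - m) for t in terms))) / p
```

Consistent with Theorem 3, F_p(x) ≥ max_i |φ_i(x)| and the gap is at most ln(2n)/p, so at a solution of the NCP the merit value is of order ln(2n)/p.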
3. The Idea of PSO Algorithm
The PSO algorithm [11] is an evolutionary computation technique that originated from the preying behavior of bird flocks. Like the genetic algorithm, it is an iteration-based optimization tool: the system is initialized with a set of random solutions and searches for the optimum iteratively. There is, however, no crossover or mutation as in the genetic algorithm; instead the particles search the solution space by following the current optimal particle, and this optimization ability has been widely recognized in academia. In an n-dimensional search space, m particles form a swarm, and the ith particle at the tth iteration is expressed as a vector x_i^t = (x_{i1}^t, x_{i2}^t, …, x_{in}^t), i = 1, 2, …, m. The position of each particle is a potential solution; the corresponding flight velocity is the n-dimensional vector V_i^t = (v_{i1}^t, v_{i2}^t, …, v_{in}^t). In each iteration, let P_i = (p_{i1}, p_{i2}, …, p_{in}) be the best position found by the ith particle itself so far and P_l = (p_{l1}, p_{l2}, …, p_{ln}) the best position found by the whole swarm so far. In the (t+1)th iteration, the velocity and position of the ith particle are updated by
(8) v_{ij}^{t+1} = v_{ij}^t + c_1 r_1 (p_{ij} − x_{ij}^t) + c_2 r_2 (p_{lj} − x_{ij}^t),
(9) x_{ij}^{t+1} = x_{ij}^t + v_{ij}^{t+1},
where i = 1, …, m and j = 1, …, n, c_1 and c_2 are two learning factors, and r_1 and r_2 are pseudorandom numbers distributed uniformly on [0, 1]. Each velocity component is restricted to v_{ij}^t ∈ [−v_max, v_max], where v_max is a constant set by the user.
Shi and Eberhart presented the inertia weight PSO [18]:
(10) v_{ij}^{t+1} = ω v_{ij}^t + c_1 r_1 (p_{ij} − x_{ij}^t) + c_2 r_2 (p_{lj} − x_{ij}^t),
where ω in (10) is the inertia weight, which controls the influence of the previous velocity on the current one. When ω is larger, the previous velocity has more influence and the global search ability of the algorithm is stronger; when ω is smaller, the previous velocity has less influence and the local search ability is stronger. The algorithm can therefore jump out of local minima by adjusting the size of ω.
Equation (9) was improved in [19]:
(11) x_{ij}^{t+1} = x_{ij}^t + (rand[] + k) v_{ij}^{t+1} + 10^{−6} · rand[].
The termination condition, chosen according to the specific problem, is a predetermined minimum fitness threshold, a maximum number of iterations, or the quality of the optimal position found by the swarm.
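The inner loop described above can be sketched as a minimal inertia-weight PSO. This is our own sketch: the parameter defaults (c_1 = c_2 = 2, a swarm of 20, ω decreasing linearly from 1.0 to 0.4, 100 iterations) follow the experimental settings reported later in the paper, and the velocity clamp to [−v_max, v_max] follows (8):

```python
import random

def pso(obj, dim, n_particles=20, iters=100, lo=-2.0, hi=2.0,
        c1=2.0, c2=2.0, vmax=1.0):
    """Minimal inertia-weight PSO minimizing obj, using updates (10) and (9)."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [list(x) for x in X]                       # personal best positions
    Pf = [obj(x) for x in X]
    g = min(range(n_particles), key=lambda i: Pf[i])
    G, Gf = list(P[g]), Pf[g]                      # global best
    for t in range(iters):
        w = 1.0 - 0.6 * t / max(1, iters - 1)      # inertia weight 1.0 -> 0.4
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j] + c1 * r1 * (P[i][j] - X[i][j])
                           + c2 * r2 * (G[j] - X[i][j]))        # update (10)
                V[i][j] = max(-vmax, min(vmax, V[i][j]))        # clamp velocity
                X[i][j] += V[i][j]                              # update (9)
            fx = obj(X[i])
            if fx < Pf[i]:
                P[i], Pf[i] = list(X[i]), fx
                if fx < Gf:
                    G, Gf = list(X[i]), fx
    return G, Gf
```

On a smooth test function such as the sphere function, this sketch reliably locates the minimum to modest precision within the default budget.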
4. The Idea of PPA Algorithm
In 1970, Martinet [20] proposed the proximal point algorithm, a globally convergent algorithm for convex optimization problems; Rockafellar [21] later studied and extended it.
Consider the following optimization problem:
(12)min(x,y)f(x,y),
where f:Rn+m→(-∞,+∞] is a closed regular convex function.
The proximal point algorithm (PPA) produces the iterative sequence {(x^k, y^k)} for solving (12) as follows:
(13) (x^0, y^0) ∈ R_{++}^{m+n},  (x^{k+1}, y^{k+1}) = argmin_{(x,y) ∈ R_{++}^{m+n}} { f(x, y) + λ_k D((x, y), (x^k, y^k)) },
where D(·,·) is a distance function; early versions of the proximal point algorithm used D((x,y),(x^k,y^k)) = (1/2)∥(x,y) − (x^k,y^k)∥². In recent years, many scholars have proposed similar distance functions satisfying suitable convexity properties, such as Bregman functions and entropy-like distances.
The proximal point algorithm is a classical deterministic algorithm: if the optimization problem is convex, the iterative sequence {(x^k, y^k)} it produces converges to the global optimal point. This paper assumes that each component function of the nonlinear complementarity problem is convex; the corresponding entropy function is then convex, so using the proximal point algorithm as the outer algorithm ensures that the iterative sequence converges to the global optimum.
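As a concrete illustration of iteration (13) with the classical quadratic distance, consider our own toy example (not from the paper): for the convex function f(x) = |x|, each proximal subproblem has a closed-form solution, the soft-threshold, and the iterates reach the global minimizer x* = 0 in finitely many steps:

```python
def prox_abs(xk, lam):
    """One PPA step for f(x) = |x| with D = (1/2)(x - xk)^2:
    argmin_x |x| + (lam/2)(x - xk)^2 is the soft-threshold of xk at 1/lam."""
    t = 1.0 / lam
    if xk > t:
        return xk - t
    if xk < -t:
        return xk + t
    return 0.0

def ppa(x0, lam=2.0, iters=50):
    """Iterate (13) for f(x) = |x| from the starting point x0."""
    x = x0
    for _ in range(iters):
        x = prox_abs(x, lam)
    return x
```

Each step moves the iterate a distance 1/λ toward the minimizer, so from x^0 = 10 with λ = 2 the algorithm reaches 0 exactly after 20 steps; this finite termination is special to |x|, while for general convex f the PPA converges asymptotically.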
5. Particle Swarm Optimization-Proximal Point Algorithm for Nonlinear Complementarity Problems
Combining the particle swarm algorithm with the proximal point algorithm, the particle swarm optimization-proximal point algorithm for nonlinear complementarity problems is as follows.
5.1. The Steps of the Outer Algorithm (See Figure <xref ref-type="fig" rid="fig1">1</xref>)
Step 1. Choose an initial point x^0 and parameters p > 0, λ_k > 0; let k ≔ 1.
Step 2. Apply the PSO algorithm to solve the proximal point subproblem, obtaining x^{k+1}.
Step 3. If the required precision has been reached, stop; otherwise let k ≔ k + 1 and go to Step 2.
5.2. The Inner Particle Swarm Algorithm
(1) Initialize a swarm of N particles, setting the initial position and velocity of each particle.
(2) Calculate the fitness value of each particle.
(3) Compare each particle's fitness value with that of its personal best position P_i; if better, take the current position as the new personal best.
(4) Compare each particle's fitness value with that of the global best position P_l; if better, take the current position as the new global best.
(5) Update the velocity and position of each particle by (10) and (9), respectively.
(6) If the termination condition is satisfied, output the solution; otherwise return to (2).
6. Numerical Results
We take a nonlinear complementarity problem with four component functions from [2, 3, 5, 7] to verify the validity of the new algorithm; the comparative results with those of [14–16] are as follows.
Example [2, 3, 5, 7]. Consider
(14)
f_1(x) = 3x_1² + 2x_1x_2 + 2x_2² + x_3 + 3x_4 − 6,
f_2(x) = 2x_1² + x_1 + x_2² + 10x_3 + 2x_4 − 2,
f_3(x) = 3x_1² + x_1x_2 + 2x_2² + 2x_3 + 9x_4 − 9,
f_4(x) = x_1² + 3x_2² + 2x_3 + 3x_4 − 3.
The solutions of the problem are (√6/2, 0, 0, 1/2)^T and (1, 0, 3, 0)^T. Here p = 10^5, the two learning factors are c_1 = c_2 = 2, ω decreases from 1.0 to 0.4, the swarm size is 20, the maximum number of generations is 100, and the search range is [−2, 2].
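Transcribing (14) directly, both stated solutions can be verified numerically against the complementarity conditions (1); the residual helper below is our own:

```python
def f_example(x):
    """The four component functions of (14)."""
    x1, x2, x3, x4 = x
    return [
        3 * x1 ** 2 + 2 * x1 * x2 + 2 * x2 ** 2 + x3 + 3 * x4 - 6,
        2 * x1 ** 2 + x1 + x2 ** 2 + 10 * x3 + 2 * x4 - 2,
        3 * x1 ** 2 + x1 * x2 + 2 * x2 ** 2 + 2 * x3 + 9 * x4 - 9,
        x1 ** 2 + 3 * x2 ** 2 + 2 * x3 + 3 * x4 - 3,
    ]

def ncp_residual(x, fx):
    """Largest violation of x >= 0, f(x) >= 0, x^T f(x) = 0 from (1)."""
    comp = abs(sum(xi * fi for xi, fi in zip(x, fx)))
    return max([comp] + [-min(0.0, v) for v in list(x) + list(fx)])
```

At (1, 0, 3, 0), for instance, f_1 and f_3 vanish while x_2 = x_4 = 0, so every product x_i f_i(x) is zero and the residual is zero up to rounding.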
We implemented the algorithm in VC++ 6.0 under Windows XP on a Pentium 4 2.93 GHz CPU with 512 MB of memory; the results are given in Table 1, where the inner algorithm is run 10 times, the worst solution is taken as the initial iteration point of the next step, and the required accuracy is 0.0001.
Table 1: Results of the PSO-PPA algorithm (columns: outer loop iterations; initial point; worst solution of the inner PSO algorithm over ten runs).
Data analysis: the algorithms of [2, 3, 5, 7] are all deterministic, so Table 2 contains no comparison of the "worst solution" or "search success rate" for them. We compute the maximum error as |F_p(x) − f(x)|, taking the worst of the ten solutions as the calculation error. Table 1 shows that the algorithm needed 5 outer iterations with at most 100 iterations in the inner loop, whereas the algorithm of [14] needs 5000 generations of evolution; the computing speed of our algorithm is therefore much faster.
The pure PPA algorithm may fail to perform well for nonmonotone NCP problems, as shown in [16]; we therefore first find a near-optimal solution with PSO and then solve iteratively with the PSO-PPA algorithm for nonmonotone NCP problems. Compared with the pure PPA algorithm, the PSO-PPA algorithm does not need an initial point, and its optimization speed is faster. The results are given in Table 3.
Table 3: Results of the PSO-PPA algorithm and the PPA algorithm [16], with initial point (4, 4, 4, 4) and α = 0.8.
7. Conclusion
In this paper, we first smooth the problem with the maximum entropy function method and then, combining the particle swarm algorithm with the proximal point algorithm, propose a new efficient algorithm for solving nonlinear complementarity problems that requires neither an initial point nor derivative information. The algorithm not only provides a new method for nonlinear complementarity problems but also expands the range of application of the particle swarm algorithm. The experimental data show that the algorithm has a fast convergence rate and good numerical stability and is effective for nonlinear complementarity problems.
Acknowledgments
This work was supported by the Scientific Research Foundation of the Education Department of Shaanxi Province of China (Grants nos. 11JK0493 and 12JK0887) and the Natural Science Basic Research Plan of Shaanxi Province of China (Grants nos. S2014JC9168 and S2014JC9190).
References
[1] Xiu N. H., Gao Z. Y., "The new advances in methods for complementarity problems."
[2] Han J. Y., Xiu N. H., Qi H. D.
[3] Biao Q., Chang-yu W., Shu-xia Z., "A method for solving nonlinear complementarity problems and its convergence properties."
[4] Guo-qing C., Bing C., "A new NCP-function for box constrained variational inequalities and a related Newton-type method."
[5] Li-mei Z., "A differential equation approach to solving nonlinear complementarity problem based on aggregate function."
[6] Chen B., Xiu N., "A global linear and local quadratic noninterior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions."
[7] Chang-feng M., Guo-ping L., Mei-si C., "A positive interior-point algorithm for nonlinear complementarity problems."
[8] Kanzow C., Pieper H., "Jacobian smoothing methods for nonlinear complementarity problems."
[9] Huang Z., Shen Z., "Entropy method for one sort of nonlinear minimax problem."
[10] Xing-si L., "The effective solution of a class of non-differentiable optimization problems."
[11] Kennedy J., Eberhart R., "Particle swarm optimization," Proceedings of the IEEE International Conference on Neural Networks, IEEE Press, December 1995, pp. 1942–1948.
[12] Zhang J. K., Li L. F., Zhou C., "Improved particle swarm optimization for class of nonlinear min-max problems."
[13] Zhang J. K., "Maximum entropy particle swarm optimization for nonlinear l-1 norm minimization problem."
[14] Zhang J. K., "Particle swarm optimization for nonlinear complementarity problems."
[15] Sun J. Z., Wang S. Y., Zhang J. K., "SCO algorithm based on entropy function for NCP."
[16] Yamashita N., Dan H., Fukushima M., "On the identification of degenerate indices in the nonlinear complementarity problem with the proximal point algorithm."
[17] Kreisselmeier G., Steinhauser R., "Systematic control design by optimizing a performance index," Proceedings of the IFAC Symposium on Computer Aided Design of Control Systems, Zurich, Switzerland, 1979, pp. 113–117.
[18] Shi Y., Eberhart R., "Modified particle swarm optimizer," Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), IEEE Press, Piscataway, NJ, USA, May 1998, pp. 69–73.
[19] Zhang J. K., Liu S. Y., Zhang X. Q., "Improved particle swarm optimization."
[20] Martinet B., "Régularisation d'inéquations variationnelles par approximations successives."
[21] Rockafellar R. T., "Monotone operators and the proximal point algorithm."