A smoothing inexact Newton method is presented for solving nonlinear complementarity problems. Different from existing exact methods, the associated subproblems need not be solved exactly to obtain the search directions. Under suitable assumptions, global convergence and superlinear convergence are established for the developed inexact algorithm, extending the corresponding results for the exact case. Results of numerical experiments indicate that our algorithm is effective on benchmark test problems from the literature and that a suitable choice of the inexact parameters can improve its numerical performance.
1. Introduction
In the study of equilibrium problems arising in economics, engineering, and the management sciences, a complementarity problem (CP) often appears as the prominent mathematical model. Thus, it has been of great practical interest to develop robust and efficient algorithms for solving CPs over the past decades (see the recently published book [1] and the references therein). In this paper, we consider a nonlinear complementarity problem (denoted by NCP(F), for short): find a vector x∈Rn such that
(1)x≥0,F(x)≥0,xTF(x)=0,
where F:Rn→Rn is a continuously differentiable function. Due to its extensive applications, NCP(F) has attracted great attention from researchers (see, e.g., [2, 3] and the references therein). On the one hand, there have been many theoretical results on the existence of solutions and their structural properties. On the other hand, many attempts have been made to develop implementable algorithms for the solution of NCP(F).
A popular way to solve NCP(F) is to reformulate (1) as a nonsmooth equation via an NCP-function. A function ϕ:R2→R is said to be an NCP-function if
(2)ϕ(a,b)=0⟺a≥0,b≥0,ab=0.
Define Φ:Rn→Rn, given by
(3) Φ(x) = (ϕ(x1, F1(x)), ϕ(x2, F2(x)),…, ϕ(xn, Fn(x)))T.
Then, problem (1) is equivalent to
(4)Φ(x)=0.
Thus, any efficient algorithm for solving (4) can be directly applied to find the solution of problem (1).
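As a quick numerical illustration (Python is used here and below purely for illustration; the experiments in Section 5 are run in MATLAB), the defining property (2) can be checked for two classical NCP-functions, the Fischer-Burmeister function and the minimum function, both recalled later in this section:

```python
import math

def phi_fb(a, b):
    # Fischer-Burmeister NCP-function: phi(a, b) = a + b - sqrt(a^2 + b^2)
    return a + b - math.hypot(a, b)

def phi_min(a, b):
    # minimum NCP-function: phi(a, b) = min(a, b)
    return min(a, b)

# property (2): phi(a, b) = 0  <=>  a >= 0, b >= 0, ab = 0
complementary = [(0.0, 2.0), (3.0, 0.0), (0.0, 0.0)]
violating = [(-1.0, 2.0), (2.0, 2.0), (1.0, -1.0)]
for phi in (phi_fb, phi_min):
    assert all(abs(phi(a, b)) < 1e-12 for a, b in complementary)
    assert all(abs(phi(a, b)) > 1e-12 for a, b in violating)
```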
The smoothing method is a fundamental approach to solving the nonsmooth equation (4); in this connection, one can see, for example, [4–16]. The basic idea of this method is to construct a smooth function to approximate Φ. Let μ>0 be a given smoothing parameter. We define a continuously differentiable function Φμ:Rn→Rn such that, for any x∈Rn,
(5) ‖Φ(x) - Φμ(x)‖ ⟶ 0, as μ ⟶ 0.
Then, problem (4) is approximated by a smooth equation:
(6)Φμ(x)=0.
Let {μk} be a given positive sequence tending to 0. Then, we can obtain an approximate solution of (4) by solving (6) with μ = μk.
Recently, many different smoothing functions have been employed to smooth problem (4). Among them, the Fischer-Burmeister function and the minimum function are two popular ones, defined by
(7) ϕ(a,b) = a + b - √(a^2 + b^2), ∀(a,b)∈R2,
(8) ϕ(a,b) = min{a, b}, ∀(a,b)∈R2,
respectively. By replacing the 2-norm of (a,b) in the Fischer-Burmeister function with a more general p-norm, p∈(1,∞), Chen and Pan proposed a family of new NCP-functions in [6]. By combining the Fischer-Burmeister function and the minimum function, Liu and Wu presented a smoothing function in [11] as follows:
(9) ϕθ(a,b) = a + b - √(θ(a-b)^2 + (1-θ)(a^2+b^2)), θ∈[0,1], ∀(a,b)∈R2.
In [13], a symmetric perturbed Fischer-Burmeister function is constructed:
(10) ϕ(μ,a,b) = (1+μ)(a+b) - √((a+μb)^2 + (b+μa)^2 + μ^2), ∀(μ,a,b)∈R3.
Very recently, a more general smoothing function with the p-norm (p∈(1,∞)) was presented in [15]. It is shown that the numerical performance of the nonmonotone smoothing Newton algorithm developed in [14] is greatly improved when p = 1.1.
In this paper, we first write (8) as
(11) min{a, b} = (1/2)(a + b - |a - b|), ∀(a,b)∈R2,
then we investigate a new smoothing method for the absolute value function |·|, and by virtue of this new method we design a smoothing inexact Newton algorithm to solve the resulting smooth equations. Since an inexact parameter is allowed at each iteration when computing the inexact Newton search direction, the developed algorithm is better suited to numerical computation than similar exact methods available in the literature. By suitably choosing the sequence of inexact parameters in advance, the numerical performance of the developed algorithm can be improved. Moreover, we establish the convergence theory for our algorithm without the assumption of strict complementarity, under conditions weaker than those in existing results.
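The identity (11) is elementary but worth verifying, since the whole construction rests on it; a minimal check:

```python
import itertools

# min(a, b) = (a + b - |a - b|) / 2: if a <= b then |a - b| = b - a and the
# right-hand side equals (a + b - b + a) / 2 = a, and symmetrically for b < a.
for a, b in itertools.product((-2.0, -0.5, 0.0, 1.0, 3.0), repeat=2):
    assert min(a, b) == (a + b - abs(a - b)) / 2.0
```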
The rest of this paper is organized as follows. In Section 2, we study a smoothing method of the absolute function. Section 3 is devoted to development of a smoothing inexact Newton algorithm to solve the nonlinear complementarity problem. In Section 4, the global convergence and the superlinear convergence are established. Numerical results are reported in Section 5. Some final remarks are made in Section 6.
The following notation will be used throughout this paper. For any vector or matrix A, AT denotes the transpose of A. Rn denotes the space of n-dimensional real vectors. R+n and R++n denote the nonnegative and the positive orthants in Rn, respectively. For any vector ν∈Rn, diag{νi : i∈N} denotes the diagonal matrix whose ith diagonal element is νi, and vec{νi : i∈N} denotes the vector ν, where N represents the set {1, 2,…, n}. I represents the identity matrix of suitable dimension. ‖·‖ stands for the 2-norm. For any α, β∈R++, α = O(β) and α = o(β) mean that α/β is uniformly bounded and that α/β tends to zero as β→0, respectively.
2. Smoothing the Absolute Value Function
In this section, we study a smoothing method for the absolute value function.
We first present a function φμ:R→R, given by
(12) φμ(t) = (2/π) arctan(t/μ).
It is clear that
(13) lim_{μ→0+} φμ(t) = 1 if t > 0; 0 if t = 0; -1 if t < 0.
Note that the generalized derivative of the absolute value function |t| is given by
(14) sign(t) = 1 if t > 0; [-1, 1] if t = 0; -1 if t < 0.
We can conclude that, except at t=0, φμ(t) is a good approximation to the generalized derivative of |t| for sufficiently small μ. Actually, the following result was proved in [17].
Proposition 1.
For any given constant α>0, there is a constant Mα>0 independent of μ and t such that
(15) 0 ≤ sign(t) - φμ(t) ≤ Mαμ, ∀t: t ≥ α, ∀μ > 0; 0 ≤ φμ(t) - sign(t) ≤ Mαμ, ∀t: t ≤ -α, ∀μ > 0.
By Proposition 1, to obtain an approximation of |t|, we calculate the integral of φμ:
(16) ∫φμ(t)dt = (2/π)∫arctan(t/μ)dt = tφμ(t) - (μ/π)ln(1 + t^2/μ^2) ≜ ϕμ(t).
Then, it is natural to use ϕμ(t) to approximate |t|. Actually, we have the following result (see [17]).
Proposition 2.
(1) For any μ>0, it holds that
(17) ϕμ(t) ≤ |t|, ∀t∈R.
The above inequality holds strictly for all t ≠ 0.
(2) For any t∈R, lim_{μ→0+} ϕμ(t) = |t|.
(3) lim_{μ→0+} dist(ϕμ′(t), ∂h(t)) = 0, where h(t) = |t| and dist(v,S) denotes the distance from the point v to the set S.
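Propositions 1 and 2 can be observed numerically. The sketch below evaluates ϕμ from (16) and checks that it lies below |t| for t ≠ 0 and approaches |t| as μ→0+:

```python
import math

def phi_mu(t, mu):
    # phi_mu(t) = t * (2/pi) * arctan(t/mu) - (mu/pi) * ln(1 + t^2/mu^2), from (16)
    return (t * (2.0 / math.pi) * math.atan(t / mu)
            - (mu / math.pi) * math.log(1.0 + (t / mu) ** 2))

t = 1.3
gaps = [abs(t) - phi_mu(t, mu) for mu in (1.0, 0.1, 0.01, 0.001)]
assert all(g > 0 for g in gaps)                    # phi_mu stays below |t| for t != 0
assert all(a > b for a, b in zip(gaps, gaps[1:]))  # and approaches |t| as mu -> 0+
assert phi_mu(0.0, 0.5) == 0.0                     # equality at t = 0
```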
3. A Smoothing Inexact Newton Algorithm for NCP(F)
In this section, we will develop a smoothing inexact Newton algorithm for solving a smooth equation obtained by reformulating the NCP(F).
Since
(18) min{xi, Fi(x)} = (1/2)(xi + Fi(x) - |xi - Fi(x)|) ≜ ϕi(x), (i∈N),
we construct an approximation of ϕi(x) by Proposition 2, defined by
(19) ϕi(μ,x) = (1/2)[xi + Fi(x) - ϕμ(xi - Fi(x))] = (1/2)[xi + Fi(x) - (2/π)(xi - Fi(x))arctan((xi - Fi(x))/μ) + (μ/π)ln(1 + (xi - Fi(x))^2/μ^2)].
Define Φμ:Rn→Rn, given by
(20) Φμ(x) = (ϕ1(μ,x),…, ϕn(μ,x))T.
Then, Φ(x)=0 is approximated by a smooth equation:
(21)Φμ(x)=0.
Remark 3.
The above smoothing method has been used to deal with NCP(F) in [17], where the search direction d at the kth iteration is obtained by solving the generalized Newton equation
(22) ∇Φμ(xk)Td + Φ(xk) = 0.
Different from the standard Newton method, Φ(xk) is employed to replace Φμ(xk) in (22).
Taking into account the advantage of the standard smoothing Newton method (see, e.g., [12, 15, 16, 18]) in adjusting the value of the smoothing parameter automatically, we further transform problem (21) into a smooth optimization problem.
Denote z=(μ,x)∈R++×Rn. We define a function H:R++×Rn→R++×Rn by
(23) H(z) = (μ, Φ(z)),
with Φ(z) = (ϕ1(z), ϕ2(z),…, ϕn(z))T. Then, corresponding to any solution x* of (21), z* = (0, x*) is an optimal solution of the following minimization problem:
(24) min Ψ(z) ≜ ‖H(z)‖^2 = μ^2 + ‖Φ(z)‖^2.
Conversely, if z* = (μ*, x*) is an optimal solution of problem (24) with Ψ(z*) = 0, then x* solves the system (21).
Next, we focus on developing an efficient algorithm to solve problem (24). Before presenting such an algorithm, we further investigate the properties of problem (24). The following definitions are useful to describe the properties of H.
Definition 4.
A matrix M∈Rn×n is said to be a P0 matrix if all principal minors of M are nonnegative.
Definition 5.
A function F:Rn→Rn is said to be a P0 function if for all x,y∈Rn with x≠y, there holds that
(25) max_{i∈N: xi≠yi} (xi - yi)[Fi(x) - Fi(y)] ≥ 0.
Definition 6 (see [19, 20]).
Suppose that Ψ:Rn→Rn is a locally Lipschitz continuous function with the generalized Jacobian ∂Ψ(x) in the sense of Clarke [21]. It is said to be semismooth (resp. strongly semismooth) at a point x∈Rn if and only if, for any V∈∂Ψ(x+h), as h→0,
(26) Vh - Ψ′(x; h) = o(‖h‖) (resp. O(‖h‖^2)), Ψ(x+h) - Ψ(x) - Ψ′(x; h) = o(‖h‖) (resp. O(‖h‖^2)).
We now prove the following results.
Lemma 7.
Let z = (μ, x) and H be defined by (23). Then, the following hold:
H is continuously differentiable at any z=(μ,x)∈R++×Rn with its Jacobian matrix
(27) H′(z) = [1 0; A(z) B(z)],
where
(28) A(z) = (1/2) vec{(1/π)ln(1 + (xi - Fi(x))^2/μ^2) : i∈N}, B(z) = (1/2)[(I - D(z)) + (I + D(z))F′(x)], D(z) = (2/π) diag{arctan((xi - Fi(x))/μ) : i∈N}.
Furthermore, if F is a P0-function, then H′(z) is nonsingular for any μ > 0.
H is locally Lipschitz continuous and semismooth on Rn+1.
Proof.
(i) Since Φ is continuously differentiable at any z = (μ,x)∈R++×Rn, H is continuously differentiable. For any μ > 0, a straightforward calculation yields (27) from the definition of H.
Note that -1 < D(z)ii < 1 for all i∈N. It is clear that I - D(z) and I + D(z) are two positive diagonal matrices. Since F is a P0-function, F′(x) is a P0-matrix for all x∈Rn, and hence, by Definition 4, (I + D(z))F′(x) is also a P0-matrix. From Theorem 3.3 in [7], it follows that the matrix B(z) is nonsingular. Then, it is concluded that the matrix H′(z) is nonsingular.
(ii) It is clear that H is locally Lipschitz continuous and semismooth on Rn+1. The proof is completed.
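The blocks (28) are easy to validate against finite differences. The sketch below assumes an illustrative linear map F(x) = Mx + q (so F′(x) = M) with a positive definite M; the names Phi and jacobian_blocks are ours, not from the paper:

```python
import math

# Illustrative data: a linear map F(x) = M x + q, so F'(x) = M (any P0 matrix works).
M = [[2.0, 1.0], [1.0, 3.0]]
q = [-1.0, -2.0]
n = 2

def F(x):
    return [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]

def Phi(mu, x):
    # smoothed componentwise minimum, equation (19)
    Fx, out = F(x), []
    for i in range(n):
        t = x[i] - Fx[i]
        out.append(0.5 * (x[i] + Fx[i]
                          - (2.0 / math.pi) * t * math.atan(t / mu)
                          + (mu / math.pi) * math.log(1.0 + (t / mu) ** 2)))
    return out

def jacobian_blocks(mu, x):
    # A(z), B(z), D(z) as in (28)
    Fx = F(x)
    t = [x[i] - Fx[i] for i in range(n)]
    A = [0.5 / math.pi * math.log(1.0 + (t[i] / mu) ** 2) for i in range(n)]
    D = [(2.0 / math.pi) * math.atan(t[i] / mu) for i in range(n)]
    B = [[0.5 * ((1.0 - D[i] if i == j else 0.0) + (1.0 + D[i]) * M[i][j])
          for j in range(n)] for i in range(n)]
    return A, B

# finite-difference check of the analytic Jacobian blocks
mu, x, h = 0.3, [0.4, -0.7], 1e-6
A, B = jacobian_blocks(mu, x)
base = Phi(mu, x)
fd_A = [(Phi(mu + h, x)[i] - base[i]) / h for i in range(n)]
assert all(abs(A[i] - fd_A[i]) < 1e-4 for i in range(n))
for j in range(n):
    xp = list(x)
    xp[j] += h
    col = [(Phi(mu, xp)[i] - base[i]) / h for i in range(n)]
    assert all(abs(B[i][j] - col[i]) < 1e-4 for i in range(n))
```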
With the properties of H in Lemma 7, we first present an algorithm to solve problem 24 similar to the idea in [18, 22–25].
Algorithm 8 (a smoothing inexact Newton method).
Step 0. Choose constants δ, γ∈(0,1), σ∈(0,1/2), μ0>0 such that μ0γ<1. Given an initial point x0∈Rn, choose a sequence {θk}⊂R++ such that θk∈(0,1-μ0γ). Set z0:=(μ0,x0) and k:=0.
Step 1. If H(zk)=0, then the algorithm stops. Otherwise, compute
(29) β(zk) := γ min{1, Ψ(zk)}, h(zk) := (μ0β(zk), θkΦ(zk)).
Step 2. Compute Δzk:=(Δμk,Δxk)∈R×Rn by
(30)H(zk)+H′(zk)Δzk=h(zk).
Step 3. Set αk:=δmk, where mk is the smallest nonnegative integer m such that
(31) Ψ(zk + δ^mΔzk) ≤ [1 - 2σ(1 - μ0γ - θk)δ^m]Ψ(zk).
Step 4. Set zk+1:=zk+αkΔzk and k:=k+1. Return to Step 1.
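Steps 0–4 can be sketched compactly. The code below is a minimal sketch of Algorithm 8, not the authors' implementation: it assumes an illustrative 2-dimensional LCP F(x) = Mx + q with positive definite M (hence a P0-function), solves the reduced Newton system by Cramer's rule, and takes θk = 1/4^(k+1), one admissible choice consistent with Step 0.

```python
import math

# Illustrative data: LCP F(x) = M x + q with M positive definite (a P0-function).
M = [[2.0, 1.0], [1.0, 3.0]]
q = [-1.0, -2.0]
n = 2

def F(x):
    return [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]

def Phi(mu, x):
    # smoothed componentwise minimum (19)
    Fx, out = F(x), []
    for i in range(n):
        t = x[i] - Fx[i]
        out.append(0.5 * (x[i] + Fx[i]
                          - (2.0 / math.pi) * t * math.atan(t / mu)
                          + (mu / math.pi) * math.log(1.0 + (t / mu) ** 2)))
    return out

def jac_blocks(mu, x):
    # A(z), B(z), D(z) of Lemma 7; here F'(x) = M
    Fx = F(x)
    t = [x[i] - Fx[i] for i in range(n)]
    A = [0.5 / math.pi * math.log(1.0 + (t[i] / mu) ** 2) for i in range(n)]
    D = [(2.0 / math.pi) * math.atan(t[i] / mu) for i in range(n)]
    B = [[0.5 * ((1.0 - D[i] if i == j else 0.0) + (1.0 + D[i]) * M[i][j])
          for j in range(n)] for i in range(n)]
    return A, B

def solve2(B, r):
    # Cramer's rule for the 2x2 block system B dx = r
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(r[0] * B[1][1] - B[0][1] * r[1]) / det,
            (B[0][0] * r[1] - r[0] * B[1][0]) / det]

def algorithm8(x, mu0=0.01, delta=0.85, sigma=0.25, gamma=0.2, tol=1e-8, itmax=200):
    mu = mu0
    for k in range(itmax):
        theta = 1.0 / 4.0 ** (k + 1)                # theta_k in (0, 1 - mu0*gamma)
        Phik = Phi(mu, x)
        Psik = mu ** 2 + sum(p * p for p in Phik)   # Psi(z) = ||H(z)||^2, (24)
        if math.sqrt(Psik) < tol:                   # Step 1: ||H(z_k)|| small
            break
        beta = gamma * min(1.0, Psik)               # (29)
        # Step 2: H(z) + H'(z) dz = h(z); the first row gives dmu directly,
        # the remaining rows give B dx = (theta - 1) Phi - A dmu.
        dmu = mu0 * beta - mu
        A, B = jac_blocks(mu, x)
        dx = solve2(B, [(theta - 1.0) * Phik[i] - A[i] * dmu for i in range(n)])
        # Step 3: line search (31) with alpha = delta^m
        alpha = 1.0
        for _ in range(60):
            mun = mu + alpha * dmu
            xn = [x[i] + alpha * dx[i] for i in range(n)]
            Psin = mun ** 2 + sum(p * p for p in Phi(mun, xn))
            if Psin <= (1.0 - 2.0 * sigma * (1.0 - mu0 * gamma - theta) * alpha) * Psik:
                break
            alpha *= delta
        mu, x = mun, xn                             # Step 4
    return mu, x

mu_end, x_end = algorithm8([0.0, 0.0])
```

On this toy problem the interior solution x* = M^(-1)(1, 2) = (0.2, 0.6) satisfies F(x*) = 0, and the iteration reaches it in a handful of steps.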
Remark 9.
Similar to the idea in [26], we develop Algorithm 8 by incorporating an inexact parameter θk at each iteration to obtain an inexact Newton search direction in (30). Generally, we choose a sequence {θk} in advance such that lim_{k→∞}θk = 0. A suitable choice of {θk} can improve the numerical performance of Algorithm 8 through the inexact Newton direction Δzk generated in Step 2. The difference between Algorithm 8 and the algorithm developed in [26] lies in the distinct smoothing method: in [26], the Fischer-Burmeister function is adopted instead of the smoothing function (19).
On the other hand, without the assumption of strict complementarity, we will establish the theory of global and local superlinearly convergences for Algorithm 8 in Section 4 under weaker conditions than the existing results.
If θk≡0, then Algorithm 8 reduces to a smoothing Newton algorithm similar to that developed in [18]; however, the definition of h in this paper is different from that in [18].
Denote
(32) Ω := {z = (μ,x) ∈ Rn+1 : μ ≥ μ0β(z)}.
The following result shows that Algorithm 8 is well-defined.
Theorem 10.
Suppose that F is a continuously differentiable P0-function.
(i) The system of linear equations (30) in the unknown variable Δzk has a unique solution.
(ii) The step size αk in Step 3 of Algorithm 8 is obtained in finitely many backtracking steps and satisfies (31).
(iii) Let {zk} be the sequence generated by Algorithm 8. Then zk∈Ω for all k≥0.
Proof.
We prove the first result.
Since F is a continuously differentiable P0-function, it follows from Lemma 7 that the matrix H′(zk) is nonsingular because μk > 0. This implies that the system of linear equations (30) in the unknown variable Δzk has a unique solution. Thus, Step 2 of Algorithm 8 is well-defined.
We now prove the second result.
By 30, we have
(33)Δμk=-μk+μ0β(zk).
From the definitions of Ψ(zk) and β(zk), it follows that, for all k≥0,
(34) β(zk) ≤ γΨ(zk)^(1/2), μk ≤ Ψ(zk)^(1/2).
Thus, for any α∈(0,1),
(35) (μk + αΔμk)^2 = [(1-α)μk + αμ0β(zk)]^2 = (1-α)^2μk^2 + 2(1-α)αμ0μkβ(zk) + α^2μ0^2β(zk)^2 ≤ (1-α)^2μk^2 + 2αμ0γΨ(zk) + O(α^2).
Denote
(36)φ(α)=Φ(zk+αΔzk)-Φ(zk)-αΦ′(zk)Δzk.
Since Φ is continuously differentiable at z∈R++×Rn, we have φ(α) = o(α); then, from (30) and (36), we conclude that
(37) ‖Φ(zk+αΔzk)‖^2 = ‖Φ(zk) + αΦ′(zk)Δzk + φ(α)‖^2 = ‖(1-α+αθk)Φ(zk) + φ(α)‖^2 ≤ (1-α+αθk)^2‖Φ(zk)‖^2 + o(α).
It yields
(38) Ψ(zk+αΔzk) = (μk+αΔμk)^2 + ‖Φ(zk+αΔzk)‖^2 ≤ (1-α)^2μk^2 + 2αμ0γΨ(zk) + (1-α+αθk)^2‖Φ(zk)‖^2 + o(α) + O(α^2) ≤ (1-α+αθk)^2Ψ(zk) + 2αμ0γΨ(zk) + o(α) ≤ Ψ(zk) - 2α(1-θk)Ψ(zk) + α^2(1-θk)^2Ψ(zk) + 2αμ0γΨ(zk) + o(α) ≤ [1 - 2(1-μ0γ-θk)α]Ψ(zk) + o(α).
Since θk<1-μ0γ, there exists a constant α¯∈(0,1) such that, for any α∈(0,α¯] and σ∈(0,1), there holds that
(39)Ψ(zk+αΔzk)≤[1-2σ(1-μ0γ-θk)α]Ψ(zk).
This demonstrates that Step 3 of Algorithm 8 is well-defined at each iteration.
Finally, we prove zk∈Ω for all k≥0.
It is clear that μ0β(z0) ≤ μ0γ ≤ μ0; in other words, z0∈Ω. Suppose that zk∈Ω for some k≥0; then μk ≥ μ0β(zk). By (31), we get Ψ(zk) ≥ Ψ(zk+1), and hence β(zk) ≥ β(zk+1). By (33), we have
(40) μk+1 = (1-α)μk + αμ0β(zk) ≥ (1-α)μ0β(zk) + αμ0β(zk) = μ0β(zk) ≥ μ0β(zk+1).
The last inequality implies that the desired result holds for k+1. By mathematical induction, we conclude that zk∈Ω for all k≥0.
We have completed the proof of Theorem 10.
Remark 11.
By Theorem 10, we know that Algorithm 8 is well-defined: either it stops in finitely many steps or it generates an infinite sequence {zk = (μk, xk)} with μk∈R++ and zk∈Ω for all k≥0. In the subsequent section, we will analyze the convergence of this sequence.
4. Convergence
In this section, we will establish the global convergence and the superlinear convergence for Algorithm 8.
We first prove the following result.
Lemma 12.
Let Φμ be defined by (20). If F is a P0-function, then, for any μ>0, Φμ is coercive in x; that is,
(41) lim_{‖x‖→∞} ‖Φμ(x)‖ = +∞.
Proof.
Let {xk}⊂Rn be an arbitrary unbounded sequence. Then there is a component index i0∈N such that {xi0k} is unbounded.
Define the index set J = {i∈N : {xik} is unbounded}. Then J is a nonempty set. Without loss of generality, we assume that |xjk|→+∞ for all j∈J.
Let the sequence x^k be defined by
(42) x̂ik = 0 if i∈J, and x̂ik = xik if i∉J, for i∈N.
Then, it is clear that {x^k} is bounded. Since F is a P0-function, by Definition 5, we have
(43) 0 ≤ max_{i∈N} {(xik - x̂ik)[Fi(xk) - Fi(x̂k)]} = max_{i∈J} {xik[Fi(xk) - Fi(x̂k)]} = xjk[Fj(xk) - Fj(x̂k)],
where j is one of the indices at which the maximum is attained. Since j∈J, and j can be assumed to be independent of k (passing to a subsequence if necessary), we know |xjk|→+∞ as k→+∞.
Next, we continue the proof by distinguishing the following six cases.
Case 1 (xjk→+∞ and xjk - Fj(xk)→+∞). Since {Fj(x̂k)} is bounded by the continuity of Fj and the definition of x̂k, we know from (43) that Fj(xk)↛-∞. Thus,
(44) ln(1 + (xjk - Fj(xk))^2/μ^2) ⟶ +∞.
It yields
(45) ϕj(μ,xk) = (1/2)[xjk + Fj(xk) - (2/π)(xjk - Fj(xk))arctan((xjk - Fj(xk))/μ) + (μ/π)ln(1 + (xjk - Fj(xk))^2/μ^2)] ⟶ +∞.
Case 2 (xjk→-∞ and xjk - Fj(xk)→+∞). It is clear that
(46) Fj(xk) - xjk ⟶ -∞, ln(1 + (xjk - Fj(xk))^2/μ^2) ⟶ +∞.
In virtue of
(47) lim_{u→-∞} ln(1 + u^2)/u = 0,
we obtain
(48) ϕj(μ,xk) = (1/2)[xjk + Fj(xk) - (2/π)(xjk - Fj(xk))arctan((xjk - Fj(xk))/μ) + (μ/π)ln(1 + (xjk - Fj(xk))^2/μ^2)] = xjk + (1/2)(Fj(xk) - xjk)[1 + (2/π)arctan((xjk - Fj(xk))/μ)] + (μ/2π)ln(1 + (xjk - Fj(xk))^2/μ^2), so that |ϕj(μ,xk)| ⟶ +∞.
Case 3 (xjk→+∞ and xjk - Fj(xk)→-∞). For the same reason as in Case 1, Fj(xk)↛-∞. Thus,
(49) ln(1 + (xjk - Fj(xk))^2/μ^2) ⟶ +∞.
It yields
(50) ϕj(μ,xk) = xjk - (1/2)(xjk - Fj(xk))[1 + (2/π)arctan((xjk - Fj(xk))/μ)] + (μ/2π)ln(1 + (xjk - Fj(xk))^2/μ^2) ⟶ +∞,
since the second term remains bounded as xjk - Fj(xk) ⟶ -∞.
Case 4 (xjk→-∞ and xjk - Fj(xk)→-∞). Similar to Case 2, we can obtain
(51) ϕj(μ,xk) = xjk + (1/2)(Fj(xk) - xjk)[1 + (2/π)arctan((xjk - Fj(xk))/μ)] + (μ/2π)ln(1 + (xjk - Fj(xk))^2/μ^2), so that |ϕj(μ,xk)| ⟶ +∞.
Case 5 (xjk→+∞ and xjk - Fj(xk) is bounded). On the one hand, it is clear that
(52) w = -(2/π)(xjk - Fj(xk))arctan((xjk - Fj(xk))/μ) + (μ/π)ln(1 + (xjk - Fj(xk))^2/μ^2)
is bounded. On the other hand, since xjk→+∞ and xjk - Fj(xk) is bounded, we know Fj(xk)→+∞. Thus, xjk + Fj(xk)→+∞. It yields
(53) ϕj(μ,xk) = (1/2)(xjk + Fj(xk) + w) ⟶ +∞.
Case 6 (xjk→-∞ and xjk - Fj(xk) is bounded). Similar to Case 5, it is easy to prove that |ϕj(μ,xk)| ⟶ +∞.
In all six cases, |ϕj(μ,xk)| ⟶ +∞, and hence ‖Φμ(xk)‖ ⟶ +∞. The proof is completed.
Remark 13.
By Lemma 12, we can dispense with the assumption that the level set of the merit function is bounded. In addition, different from [13, 22, 27], the result of Lemma 12 is obtained in this paper for a nonsymmetric smoothing function.
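The coercivity in Lemma 12 can also be observed numerically. The sketch below assumes the simple P0-function F(x) = (x1^3, x2 + x2^3) (chosen only for this experiment; its Jacobian is diagonal with nonnegative entries) and evaluates ‖Φμ(x)‖ along the unbounded ray x = (s, s):

```python
import math

def phi_mu_abs(t, mu):
    # phi_mu(t) from (16), smoothing |t|
    return (t * (2.0 / math.pi) * math.atan(t / mu)
            - (mu / math.pi) * math.log(1.0 + (t / mu) ** 2))

def F(x):
    # a simple P0-function: Jacobian diag(3*x1^2, 1 + 3*x2^2) has nonnegative entries
    return [x[0] ** 3, x[1] + x[1] ** 3]

def Phi_norm(x, mu=0.1):
    # ||Phi_mu(x)|| with components (19)
    Fx = F(x)
    comps = [0.5 * (x[i] + Fx[i] - phi_mu_abs(x[i] - Fx[i], mu)) for i in range(2)]
    return math.sqrt(sum(c * c for c in comps))

norms = [Phi_norm([s, s]) for s in (1.0, 10.0, 100.0, 1000.0)]
assert all(a < b for a, b in zip(norms, norms[1:]))   # ||Phi_mu|| grows with ||x||
```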
Before statement of main results, we need the following assumption.
Assumption 14.
The solution set S of NCP(F) (1) is nonempty and bounded.
Remark 15.
Assumption 14 is a relatively weak condition ensuring the convergence of Algorithm 8. For example, in [26], it is assumed that the level sets
(54) L(z0) = {z∈Rn+1 : ‖Φ(z)‖ ≤ ‖Φ(z0)‖}
are bounded in order to prove the convergence of the algorithm. To the best of our knowledge, for the Fischer-Burmeister smoothing function, the boundedness of (54) has been proved only under the condition that F in NCP(F) (1) is a uniform P-function.
With our smoothing method, however, the boundedness of (54) follows from Lemma 12, whose proof involves only the condition that F is a P0-function. Hence, Assumption 14 is weaker than the assumption in [26].
With Lemma 12 and Assumption 14, we are now in a position to establish the convergence theory for Algorithm 8.
Theorem 16.
Let {zk=(μk,xk)} be the iteration sequence generated by Algorithm 8. Under Assumption 14, the following statements are true.
(i) {Ψ(zk)} and {μk} are two monotonically decreasing and bounded sequences, whose limits are 0.
(ii) Any accumulation point of {zk} is a solution of (24).
(iii) {zk} has at least one accumulation point z* = (μ*, x*) with H(z*) = 0 and x*∈S.
Proof.
(i) From Steps 2 and 3 of Algorithm 8, it is clear that {Ψ(zk)}, {β(zk)}, and {μk} are three monotonically decreasing and bounded sequences.
(ii) By Lemma 12, we conclude that the sequence {zk} is bounded. Then, without loss of generality, we suppose that as k→∞, there exists z* such that
(55)zk⟶z*,β(zk)⟶β*,Ψ(zk)⟶Ψ*,μk⟶μ*.
If Ψ*>0, then, by the definition of β(zk), β*>0 and μ*>0. From Lemma 7, it follows that H′(z*) is nonsingular. Thus, there exist a closed neighborhood N(z*) and a constant α¯∈(0,1], such that, for any z∈N(z*) and nonnegative integer m satisfying δm∈(0,α¯], the following inequality holds true:
(56)Ψzk+δmΔzk≤[1-2σ(1-μ0γ-θk)δm]Ψ(zk).
If k is large enough such that mk≤m and δmk≥δm, then,
(57)Ψzk+1≤[1-2σ(1-μ0γ-θk)δmk]Ψ(zk)≤[1-2σ(1-μ0γ-θk)δm]Ψ(zk).
Therefore, as k→∞, it follows from Ψ*>0 that
(58)2σ(1-μ0γ-θk)≤0.
It contradicts (1-μ0γ-θk)>0. We conclude that Ψ(zk)→0 and μk→0.
(iii) By Assumption 14, we know that Φ-1(0) is nonempty and bounded. Thus, {zk} has at least one accumulation point z*=(μ*,x*) with H(z*)=0 and x*∈S.
Theorem 17.
Suppose that Assumption 14 is satisfied and z* = (μ*, x*) is an accumulation point of the sequence {zk} generated by Algorithm 8. If all V∈∂H(z*) are nonsingular, then {zk} converges to z* superlinearly; that is, ‖zk+1 - z*‖ = o(‖zk - z*‖). Moreover, μk+1 = o(μk).
Proof.
By Theorem 16, we have H(z*) = 0 and x*∈S. Because all V∈∂H(z*) are nonsingular, it follows that, for all zk sufficiently close to z*,
(59) ‖H′(zk)^(-1)‖ = O(1).
From Lemma 7, it follows that H(·) is semismooth at z*. Hence, for all zk sufficiently close to z*, we have
(60) ‖H(zk) - H(z*) - H′(zk)(zk - z*)‖ = o(‖zk - z*‖).
On the other hand, Lemma 7 implies that H(·) is locally Lipschitz continuous near z*. Therefore, for all zk sufficiently close to z*, we have
(61) ‖H(zk)‖ = O(‖zk - z*‖).
Since lim_{k→∞}θk = 0, it is concluded that θk‖H(zk)‖ = o(‖zk - z*‖). Thus, by the definitions of h(z) and β(z), we have
(62) ‖h(zk)‖ ≤ μ0β(zk) + θk‖Φ(zk)‖ ≤ μ0γΨ(zk) + θk‖H(zk)‖ = o(‖zk - z*‖).
Then, in view of (59), (60), and (62), it is obtained that
(63) ‖zk + Δzk - z*‖ = ‖zk + H′(zk)^(-1)[-H(zk) + h(zk)] - z*‖ ≤ ‖H′(zk)^(-1)‖(‖H(zk) - H(z*) - H′(zk)(zk - z*)‖ + ‖h(zk)‖) = o(‖zk - z*‖).
On the other hand, from (61), it follows that
(64) Ψ(zk + Δzk) = ‖H(zk + Δzk)‖^2 = O(‖zk + Δzk - z*‖^2) = o(‖zk - z*‖^2) = o(‖H(zk)‖^2) = o(Ψ(zk)).
Thus, when zk is sufficiently close to z*, the full step is accepted and zk+1 = zk + Δzk. It yields
(65) μk+1 = μk + Δμk = μ0β(zk) = μ0γ‖H(zk)‖^2.
In virtue of (64) and (65), we obtain
(66) μk+1/μk = ‖H(zk)‖^2/‖H(zk-1)‖^2 = o(Ψ(zk-1))/Ψ(zk-1) ⟶ 0.
Hence, μk+1 = o(μk). The proof has been completed.
5. Numerical Experiments
In this section, we test the numerical performance of Algorithm 8 on benchmark NCP test problems from the literature.
Algorithm 8 is implemented in MATLAB 2008a on a PC with a 2.00 GHz CPU and 2.00 GB RAM running Windows 7. Throughout the experiments, the parameters in Algorithm 8 are chosen as follows:
(67) μ0 = 0.01, σ = 0.25, δ = 0.85, γ = 0.2, θk = 1/4^(k+1).
We use ‖H(zk)‖ < 10^(-8) as the termination criterion.
Numerical results are reported in Tables 1–9, where we use the following notation for conciseness:
ST: the initial point x0;
IT: the number of iterations;
xk: the final value of x;
x*: a solution of the NCP;
xTy: the final value of (xk)TF(xk) (the complementarity gap);
Hk: the final value of ‖H(zk)‖;
CPU: the CPU time (in seconds) consumed by the algorithm;
o: the zero vector of dimension n;
e: the all-ones vector of dimension n;
F: the algorithm fails to find a solution.
Numerical results of Problem 1.

ST    | IT | xk           | xTy      | Hk       | CPU
o     | 4  | (0,0.0000,0) | 4.83E-09 | 8.38E-09 | 0.085
e     | 5  | (0,0.4998,0) | 3.44E-11 | 3.97E-11 | 0.103
10e   | 6  | (0,1.0000,0) | 1.05E-11 | 9.09E-12 | 0.124
-e    | 5  | (0,0.0023,0) | 2.09E-09 | 3.61E-09 | 0.105
-10e  | 6  | (0,0.0071,0) | 2.49E-11 | 4.28E-11 | 0.124
-100e | 6  | (0,0.0399,0) | 1.50E-09 | 2.50E-09 | 0.125
Numerical results of Problem 2.

ST   | IT | xk            | xTy      | Hk       | CPU
e    | 5  | (0.37,0,3,0)  | 1.27E-08 | 3.91E-09 | 0.304
o    | 5  | (0,0,0,0)     | 6.59E-11 | 3.81E-11 | 0.292
-2e  | 6  | (0.002,0,0,0) | 1.32E-08 | 3.31E-09 | 0.356
-11e | 6  | (1.85,0,0,0)  | 1.06E-08 | 2.64E-09 | 0.416
-20e | 6  | (0.05,0,0,0)  | 1.38E-08 | 3.46E-09 | 0.366
Numerical results of Problem 3.

ST    | IT | xk | xTy      | Hk       | CPU
e     | 6  | x2 | 2.99E-08 | 1.42E-09 | 0.679
10e   | 7  | x2 | 3.54E-08 | 1.77E-09 | 0.729
10^2e | 8  | x2 | 3.02E-08 | 1.54E-09 | 0.877
10^3e | 9  | x1 | 1.07E-08 | 9.99E-09 | 0.965
Solution of Problem 4 with random initial points.

ST        | IT | xTy      | Hk       | CPU
e         | 18 | 8.52E-10 | 1.49E-10 | 5.539
o         | 22 | 2.50E-10 | 4.38E-11 | 6.538
-e        | 38 | 3.41E-09 | 5.96E-10 | 11.306
rand(5,1) | 16 | 4.20E-10 | 7.35E-11 | 4.796
Numerical results of Problem 5.

ST     | IT | xTy      | Hk       | CPU
o      | 6  | 1.38E-08 | 4.97E-09 | 0.858
e      | 7  | 5.21E-09 | 1.89E-09 | 1.109
-e     | 7  | 1.32E-08 | 4.77E-09 | 1.003
-10e   | 7  | 1.32E-08 | 4.77E-09 | 0.990
-10^2e | 8  | 1.32E-08 | 4.76E-09 | 1.142
-10^3e | 8  | 1.32E-08 | 4.76E-09 | 1.079
-10^4e | 9  | 1.40E-08 | 2.47E-09 | 1.268
Numerical results of Problem 6.

ST     | IT | xTy      | Hk       | CPU
e      | 7  | 4.01E-11 | 1.39E-11 | 0.248
10e    | 9  | 8.61E-09 | 2.97E-09 | 0.358
10^2e  | 10 | 1.13E-09 | 3.91E-10 | 0.316
10^3e  | 9  | 2.45E-09 | 8.48E-10 | 0.359
-e     | 8  | 2.46E-11 | 8.51E-12 | 0.294
-10e   | 15 | 9.46E-12 | 3.28E-12 | 0.504
-10^3e | 24 | 8.96E-10 | 3.10E-10 | 0.759
Numerical results of Problem 7.

n    | IT | xTy      | Hk       | CPU
128  | 8  | 1.30E-10 | 1.48E-09 | 0.040
256  | 8  | 1.15E-10 | 1.90E-09 | 0.155
512  | 9  | 1.55E-11 | 3.50E-10 | 1.010
800  | 10 | 1.99E-11 | 6.44E-10 | 2.701
1000 | 10 | 1.58E-11 | 4.99E-10 | 4.738
Effect of inexact parameter in Problem 3.

ST    | θk  | IT | xTy      | Hk       | CPU
e     | 0   | 6  | 7.95E-09 | 3.89E-10 | 0.70
e     | 0.2 | 6  | 1.46E-08 | 7.27E-10 | 0.66
e     | 0.4 | 7  | 1.30E-09 | 8.41E-11 | 0.73
e     | 0.6 | 9  | 4.77E-09 | 4.60E-10 | 0.91
e     | 0.8 | 13 | 7.14E-08 | 4.05E-09 | 1.33
10^2e | 0   | F  | F        | F        | F
10^2e | 0.2 | 8  | 2.97E-08 | 1.52E-09 | 0.92
10^2e | 0.4 | 10 | 4.25E-12 | 1.71E-12 | 1.12
10^2e | 0.6 | 10 | 8.98E-09 | 1.43E-09 | 1.01
10^2e | 0.8 | 15 | 3.66E-08 | 6.99E-09 | 1.57
Effect of inexact parameter in Problem 6.

ST  | θk  | IT | xTy      | Hk       | CPU
e   | 0   | 6  | 2.01E-09 | 6.78E-10 | 0.22
e   | 0.2 | 6  | 1.18E-08 | 4.06E-09 | 0.22
e   | 0.4 | 7  | 7.50E-11 | 4.66E-11 | 0.27
e   | 0.6 | 9  | 1.32E-11 | 1.99E-10 | 0.27
e   | 0.8 | 13 | 9.29E-09 | 7.16E-09 | 0.39
10e | 0   | F  | F        | F        | F
10e | 0.2 | 8  | 9.13E-09 | 3.15E-09 | 0.28
10e | 0.4 | 10 | 9.03E-11 | 3.11E-11 | 0.34
10e | 0.6 | 11 | 7.19E-12 | 4.06E-11 | 0.36
10e | 0.8 | 14 | 8.25E-09 | 9.51E-09 | 0.44
The test problems are from the literature (see, e.g., [22, 27, 28]).
Problem 1.
In (1), x∈R3 and F:R3→R3 is given by
(68) F(x) = (x2, x3, -x2 + x3 + 1)T.
This problem has infinitely many solutions (0,λ,0), where λ∈[0,1]. The test results are listed in Table 1 by using different initial points.
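Reading (68) as F(x) = (x2, x3, -x2 + x3 + 1)T, the claimed solution family can be verified directly:

```python
def F(x):
    # (68): F(x) = (x2, x3, -x2 + x3 + 1)
    x1, x2, x3 = x
    return (x2, x3, -x2 + x3 + 1.0)

def is_ncp_solution(x, Fx, tol=1e-12):
    # (1): x >= 0, F(x) >= 0, x^T F(x) = 0
    return (all(v >= -tol for v in x) and all(v >= -tol for v in Fx)
            and abs(sum(a * b for a, b in zip(x, Fx))) <= tol)

# the solution family (0, lambda, 0), lambda in [0, 1]
for lam in (0.0, 0.25, 0.5, 1.0):
    assert is_ncp_solution((0.0, lam, 0.0), F((0.0, lam, 0.0)))
# lambda > 1 violates F(x) >= 0 in the third component
assert not is_ncp_solution((0.0, 1.5, 0.0), F((0.0, 1.5, 0.0)))
```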
Problem 2 (modified Mathiesen problem).
In (1), x∈R4 and F:R4→R4 is given by
(69) F(x) = (-x2 + x3 + x4, x1 - (4.5x3 + 2.7x4)/(x2 + 1), 5 - x1 - (0.5x3 + 0.3x4)/(x3 + 1), 3 - x1)T.
This problem has infinitely many solutions (λ,0,0,0), where λ∈[0,3]. The solutions are degenerate for λ=0 or λ=3 and nondegenerate for λ∈(0,3). With different starting points, we report results in Table 2.
Problem 3 (Kojima-Shindo problem).
In (1), x∈R4 and F:R4→R4 is given by
(70) F(x) = (3x1^2 + 2x1x2 + 2x2^2 + x3 + 3x4 - 6, 2x1^2 + x1 + x2^2 + 10x3 + 2x4 - 2, 3x1^2 + x1x2 + 2x2^2 + 2x3 + 9x4 - 9, x1^2 + 3x2^2 + 2x3 + 3x4 - 3)T.
This problem has one degenerate solution x1 = (√6/2, 0, 0, 1/2)T and one nondegenerate solution x2 = (1, 0, 3, 0)T. We use different initial points, and the test results are listed in Table 3.
Problem 4.
In (1), x∈R5 and F(x) = (F1(x),…, F5(x))T, where
(71) Fi(x) = 2 exp(∑_{j=1}^{5} (xj - j + 2)^2)(xi - i + 2), i = 1,…, 5.
This problem has one solution (0, 0, 1, 2, 3). We use different starting points; the last initial point x0 is randomly generated with elements in the interval (0, 1). The test results are listed in Table 4.
Problem 5.
In (1), x∈R7 and F:R7→R7 is given by
(72) F(x) = (2x1 - x3 + x5 + 3x6 - 1, x2 + 2x5 + x6 - x7 - 3, -x1 + 2x3 + x4 + x5 + 2x6 - 4x7 + 1, x3 + x4 + x5 - x6 - 1, -x1 - 2x2 - x3 - x4 + 5, -3x1 - x2 - 2x3 + x4 + 4, x2 + 4x3 - 1.5)T.
The test results are listed in Table 5 by using different initial points.
Problem 6.
In (1), x∈R4 and F:R4→R4 is given by
(73) F(x) = (x1^3 - 8, x2 - x3 + x2^3 + 3, x2 + x3 + 2x3^3 - 3, x4 + 2x4^3)T.
In this problem, F(x) is a P0-function. It has the unique solution (2, 0, 1, 0). With different initial points, the results are listed in Table 6.
Problem 7.
In (1), x∈Rn and F(x) = Mx + q with
(74) Mii = 4(i-1) + 1, i = 1, 2,…, n; Mij = Mii + 1 for j > i; Mij = Mjj + 1 for i > j; q = (-1, -1,…, -1)T.
This problem has the unique solution x* = (1, 0,…, 0)T. From the initial point x0 = (1, 1,…, 1)T, we solve this problem with different dimensions. The test results are listed in Table 7.
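Reading (74) as Mii = 4(i-1) + 1 with off-diagonal entries Mij = Mkk + 1 for k = min(i, j), the claimed solution x* = (1, 0,…, 0)T can be verified directly:

```python
def build_problem7(n):
    # (74): M_ii = 4(i-1) + 1 (1-based i); off-diagonal M_ij = M_kk + 1, k = min(i, j)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 4.0 * i + 1.0          # 0-based i here, so 4(i-1)+1 in 1-based terms
    for i in range(n):
        for j in range(n):
            if i != j:
                k = min(i, j)
                M[i][j] = M[k][k] + 1.0
    q = [-1.0] * n
    return M, q

n = 6
M, q = build_problem7(n)
x = [1.0] + [0.0] * (n - 1)              # claimed solution x* = (1, 0, ..., 0)
Fx = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
assert all(v >= 0.0 for v in x) and all(v >= 0.0 for v in Fx)
assert sum(a * b for a, b in zip(x, Fx)) == 0.0
```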
At the end of this section, we test the effect of the inexact parameter θk on the efficiency of Algorithm 8.
In Tables 8 and 9, for Problem 3 (where F is not a P0-function) and Problem 6, we take the fixed values θk = 0, 0.2, 0.4, 0.6, 0.8 and implement Algorithm 8 to find the solutions of Problems 3 and 6, respectively.
Tables 8 and 9 reveal that, for θk = 0 (which corresponds to a smoothing exact Newton method), Algorithm 8 may fail for some initial points. On the other hand, a suitable value of the inexact parameter may greatly improve the efficiency of Algorithm 8.
From the numerical results, we conclude as follows:
In Tables 1–7, the choice of initial point has only a weak impact on the CPU time and the iteration count of Algorithm 8. This indicates that the developed algorithm is robust, even for randomly generated initial points.
From the results in Tables 8 and 9, the inexact parameter θk may play a critical role in improving the numerical performance of Algorithm 8.
6. Final Remarks
In this paper, a smoothing inexact Newton method has been proposed for solving nonlinear complementarity problems based on a new smoothing function. Then, an implementable algorithm was developed. Under a suitable assumption, the global convergence and the superlinear convergence have been established for the algorithm. Results of numerical experiments indicate that our algorithm is effective for the benchmark test problems available in the literature.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is supported by the Major Program of the National Social Science Foundation of China (14ZDB136), the National Natural Science Foundation of China (Grant nos. 71210003, 11461015), and the Natural Science Foundation of Hunan Province (13JJ3002).
References
[1] S. A. Gabriel, A. J. Conejo, J. D. Fuller, B. F. Hobbs, and C. Ruiz, Complementarity Modeling in Energy Markets, vol. 180 of International Series in Operations Research and Management Science, Springer, New York, NY, USA, 2013.
[2] P. T. Harker and J.-S. Pang, "Finite-dimensional variational inequality and nonlinear complementarity problems: a survey of theory, algorithms and applications," Mathematical Programming, vol. 48, no. 1–3, pp. 161–220, 1990.
[3] M. C. Ferris and J. S. Pang, "Engineering and economic applications of complementarity problems," SIAM Review, vol. 39, no. 4, pp. 669–713, 1997.
[4] J. Zhang and K.-C. Zhang, "A variant smoothing Newton method for P0-NCP based on a new smoothing function," Journal of Computational and Applied Mathematics, vol. 225, no. 1, pp. 1–8, 2009.
[5] X. Chen, "Superlinear convergence of smoothing quasi-Newton methods for nonsmooth equations," Journal of Computational and Applied Mathematics, vol. 80, no. 1, pp. 105–126, 1997.
[6] J. S. Chen and S. H. Pan, "A family of NCP functions and a descent method for the nonlinear complementarity problem," Computational Optimization and Applications, vol. 40, no. 3, pp. 389–404, 2008.
[7] B. Chen and P. T. Harker, "Smooth approximations to nonlinear complementarity problems," SIAM Journal on Optimization, vol. 7, no. 2, pp. 403–420, 1997.
[8] X. H. Chen and C. F. Ma, "A regularization smoothing Newton method for solving nonlinear complementarity problem," Nonlinear Analysis: Real World Applications, vol. 10, no. 3, pp. 1702–1711, 2009.
[9] X. Chen, "Smoothing methods for complementarity problems and their applications: a survey," Journal of the Operations Research Society of Japan, vol. 43, no. 1, pp. 32–47, 2000.
[10] X. Chen, L. Qi, and D. Sun, "Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities," Mathematics of Computation, vol. 67, no. 222, pp. 519–540, 1998.
[11] X.-H. Liu and W. Wu, "Coerciveness of some merit functions over symmetric cones," Journal of Industrial and Management Optimization, vol. 5, no. 3, pp. 603–613, 2009.
[12] Z. H. Huang, J. Y. Han, and Z. W. Chen, "Predictor-corrector smoothing Newton method, based on a new smoothing function, for solving the nonlinear complementarity problem with a P0 function," Journal of Optimization Theory and Applications, vol. 117, no. 1, pp. 39–68, 2003.
[13] Z. Huang, J. Han, D. Xu, and L. Zhang, "The non-interior continuation methods for solving the P0 function nonlinear complementarity problem," Science in China Series A, vol. 44, no. 9, pp. 1107–1114, 2001.
[14] J. G. Zhu, H. W. Liu, and C. H. Liu, "A family of new smoothing functions and a nonmonotone smoothing Newton method for the nonlinear complementarity problems," Journal of Applied Mathematics and Computing, vol. 37, no. 1-2, pp. 647–662, 2011.
[15] J. Zhu and B. Hao, "A new class of smoothing functions and a smoothing Newton method for complementarity problems," Optimization Letters, vol. 7, no. 3, pp. 481–497, 2013.
[16] J. Tang, L. Dong, J. Zhou, and L. Fang, "A smoothing Newton method for nonlinear complementarity problems," Computational and Applied Mathematics, vol. 32, no. 1, pp. 107–118, 2013.
[17] Q. Li and D. H. Li, "A smoothing Newton method for nonlinear complementarity problems," vol. 13, no. 2, pp. 141–152, 2011.
[18] L. P. Zhang, J. Y. Han, and Z. H. Huang, "Superlinear/quadratic one-step smoothing Newton method for P0-NCP," Acta Mathematica Sinica, vol. 21, no. 1, pp. 117–128, 2005.
[19] R. Mifflin, "Semismooth and semiconvex functions in constrained optimization," SIAM Journal on Control and Optimization, vol. 15, pp. 959–972, 1977.
[20] L. Q. Qi and J. Sun, "A nonsmooth version of Newton's method," Mathematical Programming, vol. 58, no. 3, pp. 353–367, 1993.
[21] F. H. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, NY, USA, 1983.
[22] L. Fang, "A new one-step smoothing Newton method for nonlinear complementarity problem with P0-function," Applied Mathematics and Computation, vol. 216, no. 4, pp. 1087–1095, 2010.
[23] Z.-H. Huang, L. Qi, and D. Sun, "Sub-quadratic convergence of a smoothing Newton algorithm for the P0- and monotone LCP," Mathematical Programming, vol. 99, pp. 423–441, 2004.
[24] S. Liu, J. Tang, and C. Ma, "A new modified one-step smoothing Newton method for solving the general mixed complementarity problem," Applied Mathematics and Computation, vol. 216, no. 4, pp. 1140–1149, 2010.
[25] L. Qi, "Convergence analysis of some algorithms for solving nonsmooth equations," Mathematics of Operations Research, vol. 18, pp. 227–244, 1993.
[26] S.-P. Rui and C.-X. Xu, "A smoothing inexact Newton method for nonlinear complementarity problems," Journal of Computational and Applied Mathematics, vol. 233, no. 9, pp. 2332–2338, 2010.
[27] J. Tang, S. Liu, and C. Ma, "One-step smoothing Newton method for solving the mixed complementarity problem with a P0 function," vol. 215, pp. 2326–2336, 2009.
[28] L. Qi and D. Sun, "Smoothing functions and smoothing Newton method for complementarity and variational inequality problems," Journal of Optimization Theory and Applications, vol. 113, no. 1, pp. 121–147, 2002.