A Smoothing Inexact Newton Method for Generalized Nonlinear Complementarity Problem

Based on a smoothing function of the penalized Fischer-Burmeister NCP-function, we propose a new smoothing inexact Newton algorithm with a nonmonotone line search for solving the generalized nonlinear complementarity problem. The smoothing parameter is treated as an independent variable. Under suitable conditions, we show that any accumulation point of the generated sequence is a solution of the generalized nonlinear complementarity problem. We also establish the local superlinear/quadratic convergence of the proposed algorithm under the BD-regular assumption. Preliminary numerical experiments indicate the feasibility and efficiency of the proposed algorithm.


Introduction
Consider the generalized nonlinear complementarity problem, denoted by GNCP(F, G, K), which is to find a vector x ∈ R^n such that

F(x) ∈ K, G(x) ∈ K°, F(x)^T G(x) = 0,

where F, G : R^n → R^m are continuously differentiable mappings, K is a nonempty closed convex cone in R^m, and K° denotes its polar cone. The GNCP(F, G, K) finds important applications in many fields, such as engineering and economics, and covers a wide class of problems that contains the classical nonlinear complementarity problem (abbreviated as NCP); see [1-3] and the references therein. To solve it, one usually reformulates it as a minimization problem over a simple set or as an unconstrained optimization problem; see [4] for the case that K is a general cone, and [3, 5] for the case that K = R^n_+. Conditions under which a stationary point of the reformulated optimization problem is a solution of the GNCP(F, G, K) have also been provided in the literature.
In this paper, we consider the GNCP(F, G, K) for the case that m = n and K is a polyhedral cone in R^n; that is, there exist A ∈ R^{s×n} and B ∈ R^{t×n} such that

K = {v ∈ R^n | Av ≥ 0, Bv = 0}.

It is easy to verify that its polar cone K° has the representation

K° = {A^T λ + B^T μ | λ ∈ R^s, λ ≤ 0, μ ∈ R^t}.

In the following, the GNCP(F, G, K) is specialized over a polyhedral cone, and in the subsequent analysis we abbreviate it as GNCP for simplicity. In [1], Andreani et al. reformulated the problem as a smooth optimization problem with simple constraints and presented sufficient conditions under which a stationary point of the optimization problem is a solution of the concerned problem. Wang et al. [6] reformulated the problem as a system of nonlinear and nonsmooth equations, proposed a nonsmooth Levenberg-Marquardt method to solve it, and proved that the algorithm is both globally and superlinearly convergent under mild assumptions. Zhang et al. [7] recast the GNCP over a polyhedral cone as a smoothing system of equations, developed a smoothing Newton-type method for solving it, and proved that their method has local superlinear/quadratic convergence under certain conditions. However, Step 2 of the algorithm presented in [7] requires a substantial amount of computation to decide whether a linear system is solvable. The inexact approach is one way to overcome this difficulty. Inexact Newton methods have been proposed for solving the NCP [8-10]; their main idea is to solve the linear system only approximately. It seems reasonable to ask whether this kind of method can be applied to the GNCP, and this question constitutes the main motivation of this paper. We propose a new smoothing inexact Newton algorithm with a nonmonotone line search for solving the GNCP by using a new type of smoothing function. We view the smoothing parameter as an independent variable. The forcing parameter of the inexact Newton method links the norm of the residual vector to the norm of the mapping at the current iterate. Under suitable conditions, we show that any accumulation point of the generated sequence is a solution of the GNCP, and we also establish the local superlinear/quadratic convergence of the proposed algorithm under the BD-regular assumption. Some numerical examples indicate the feasibility and efficiency of the proposed algorithm.
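The inexact Newton idea just described, solving the Newton system only approximately with the inner residual tied to the outer residual by a forcing parameter, can be sketched on a generic smooth system. Everything below (the toy system, the damped Jacobi inner solver, the forcing constant eta) is illustrative and is not the algorithm of this paper.

```python
import math

def inexact_newton(F, J, x, eta=0.1, tol=1e-10, max_outer=50):
    """Inexact Newton sketch: at each iterate, solve J(x) d = -F(x) only
    approximately, stopping the inner solver once the linear residual
    satisfies ||J d + F|| <= eta * ||F|| (the forcing condition)."""
    for _ in range(max_outer):
        Fx = F(x)
        norm_F = math.sqrt(sum(v * v for v in Fx))
        if norm_F <= tol:
            break
        Jx = J(x)
        d = [0.0] * len(x)
        while True:  # Jacobi iteration as a stand-in inner linear solver
            r = [sum(Jx[i][j] * d[j] for j in range(len(x))) + Fx[i]
                 for i in range(len(x))]
            if math.sqrt(sum(v * v for v in r)) <= eta * norm_F:
                break
            d = [d[i] - r[i] / Jx[i][i] for i in range(len(x))]
        x = [x[i] + d[i] for i in range(len(x))]
    return x

# Toy smooth system with solution (1, 2): x0^2 + x1 = 3, x0 + x1^2 = 5.
F = lambda x: [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]
J = lambda x: [[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]]
sol = inexact_newton(F, J, [1.0, 1.0])
print(max(abs(v) for v in F(sol)) < 1e-8)  # True
```

Even with the loose forcing constant eta = 0.1, the outer iteration converges rapidly; this is the computational saving the inexact approach aims at.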
The rest of this paper is organized as follows. In Section 2, we present some preliminaries. Stationary point and nonsingularity conditions of the GNCP are discussed in Section 3. In Section 4, we propose an algorithm for solving the GNCP and obtain its convergence properties. Numerical experiments are exhibited in Section 5, and conclusions are stated in the last section.
At the end of this section, we indicate some standard notation used in this paper. For a continuously differentiable function F : R^n → R^m, we denote the Jacobian of F at x ∈ R^n by F'(x) ∈ R^{m×n}, whereas the transposed Jacobian is denoted by ∇F(x). In particular, if m = 1, ∇F(x) is a column vector. We use x^T y to denote the inner product of vectors x, y ∈ R^n, and x_i or (x)_i to denote the ith component of the vector x ∈ R^n. The null space of a matrix B is denoted by N(B).

Preliminaries
In this section, we first review some definitions and basic results.

Definition 2.1. A matrix M ∈ R^{n×n} is said to be a P_0-matrix if every principal minor of M is nonnegative.
Definition 2.2. A function F : R^n → R^n is said to be a P_0-function if for all x, y ∈ R^n with x ≠ y, there exists an index i_0 ∈ {1, 2, ..., n} such that

x_{i_0} ≠ y_{i_0} and (x_{i_0} − y_{i_0})(F_{i_0}(x) − F_{i_0}(y)) ≥ 0.

For a vector a ∈ R^n, we write D_a for diag(a). For a P_0-matrix, the following conclusion holds [11].
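Definition 2.1 can be checked directly, if expensively, by enumerating all principal minors. The brute-force sketch below (exponential in n, so only for tiny matrices) is our own illustration; the helper names are not from the paper.

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row (adequate for small matrices).
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def is_p0_matrix(M, tol=1e-12):
    """True if every principal minor of M is nonnegative (Definition 2.1)."""
    n = len(M)
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = [[M[i][j] for j in idx] for i in idx]
            if det(sub) < -tol:
                return False
    return True

M1 = [[1.0, 2.0], [0.0, 1.0]]    # principal minors 1, 1, 1  -> P0
M2 = [[0.0, 1.0], [-1.0, 0.0]]   # skew-symmetric: minors 0, 0, 1 -> P0
M3 = [[-1.0, 0.0], [0.0, 1.0]]   # a principal minor is -1 -> not P0
print(is_p0_matrix(M1), is_p0_matrix(M2), is_p0_matrix(M3))
```

Note that P_0 is strictly weaker than positive semidefiniteness, as the skew-symmetric example shows.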
The concept of semismoothness was originally introduced by Mifflin for functionals [13]. Qi and Sun extended the definition of semismoothness to vector-valued functions [14]. Convex functions, smooth functions, and piecewise linear functions are examples of semismooth functions. A function is semismooth at x if and only if all of its component functions are semismooth at x. The composition of semismooth functions is again semismooth.
Lemma 2.6 (see [14]). Suppose that ϕ : R^n → R^m is a locally Lipschitz function and semismooth at x. Then

In this paper, we use a smoothing approximation function φ_α(ε, a, b) of the penalized Fischer-Burmeister NCP-function

φ_α(a, b) = √(a² + b²) − a − b − α a_+ b_+,

where α > 0, a, b ∈ R, and t_+ := max{0, t}. This NCP-function has turned out to have stronger theoretical properties than the widely used Fischer-Burmeister function and other previously suggested NCP-functions (see [16]). The penalty term penalizes violations of the complementarity condition and plays a significant role from both a theoretical and a practical point of view.
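A minimal sketch of the penalized Fischer-Burmeister function in the form written above (our reading of the formula, using the same sign convention as the Fischer function of [15] quoted later): the penalty term α a_+ b_+ only activates when both arguments are positive, so φ_α still vanishes exactly at complementary pairs.

```python
import math

def phi_alpha(a, b, alpha=0.5):
    # Penalized Fischer-Burmeister NCP-function; the sign convention is
    # assumed to match phi(a,b) = sqrt(a^2+b^2) - a - b used in the paper.
    fb = math.sqrt(a * a + b * b) - a - b
    return fb - alpha * max(0.0, a) * max(0.0, b)

# phi_alpha(a, b) = 0 exactly when a >= 0, b >= 0, and a*b = 0.
print(abs(phi_alpha(0.0, 3.0)) < 1e-12)   # complementary pair -> zero
print(abs(phi_alpha(2.0, 0.0)) < 1e-12)   # complementary pair -> zero
print(phi_alpha(1.0, 1.0) < 0.0)          # both positive -> strictly negative
print(phi_alpha(-1.0, 2.0) > 0.0)         # negative component -> nonzero
```

When a > 0 and b > 0, the Fischer-Burmeister term is already negative, so subtracting the positive penalty cannot create a spurious zero; this is why the penalty can be added for any α > 0.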
It is easy to see that the following lemma holds by [6].
Lemma 2.7. x ∈ R^n is a solution of the GNCP if and only if there exist λ ∈ R^s and μ ∈ R^t such that

where

Based on the relation between φ_α(·, ·) and φ_α(·, ·, ·), we can establish the following smoothing function for the GNCP. Denote

(2.10)
For simplicity, we let y = (x, λ, μ) and z = (ε, y), and denote

where

(2.14)
(2) H_α(z) is semismooth on R^{1+n+s+t}, and it is strongly semismooth on R^{1+n+s+t} if F'(x) and G'(x) are both Lipschitz continuous on R^n.
(3) T_α(z) is continuously differentiable on R^{1+n+s+t} with ∇T_α(z) = V^T H_α(z) for any V ∈ ∂H_α(z), and f_α(ε, y) is continuously differentiable with ∇f_α(0, y) = V^T ψ_α(0, y) for any V ∈ ∂ψ_α(0, y).

Stationary Point and Nonsingularity Conditions
Generally, for an optimization problem, existing optimization methods deliver a stationary point. We should therefore study how to guarantee that every stationary point of (2.12) is a solution of the GNCP. In the following, we discuss such conditions.

Theorem 3.1. Let z = (ε, x, λ, μ) be a stationary point of (2.12). If ε ≥ 0 and ∇F(x)^{−1}∇G(x) is positive definite on N(B), then x is a solution of the GNCP.

(3.2)
From Lemma 2.8, we have

(3.4)
From the fourth equation of (3.4), we have W ∈ N(B). Premultiplying the second equation of (3.4) by W^T ∇F(x)^{−1}, one has (3.5). Combining (3.5) with the third and fourth equations of (3.4), we obtain

From Lemma 2.8, the matrix D_2(x)D_4(x) + D_1(x)D_3(x) is positive definite. Since W ∈ N(B) by the fourth equation of (3.4), together with the positive definiteness of ∇F(x)^{−1}∇G(x) on N(B), we have U = 0 and W = 0.

(3.7)
Substituting U = 0 into the first equation of (3.4), we have

(3.8)
Combining the second equation of (3.4) with (3.8) and (3.7), we have

Since ∇F(x) is nonsingular, premultiplying (3.9) by F'(x)∇F(x)^{−1}, one has

Hence, V = 0. This completes the proof.
Theorem 3.2. Let z = (ε, x, λ, μ) be a stationary point of (2.12). If ε > 0, B has full row rank, and ∇F(x)∇G(x) is positive definite, then V is nonsingular for any V ∈ ∂H_α(z).

Proof. By Lemma 2.8, we know that any element V ∈ ∂H_α(z) can be written as

where D_1(x), D_2(x), D_3(x), and D_4(x) are defined in Lemma 2.8.
In order to complete the proof, we only need to prove that this matrix is nonsingular; that is, that the homogeneous system below admits only the zero solution.

(3.15)
According to the first equation of (3.15), we have

(3.16)
Combining (3.16) with the third equation of (3.15), one has

Premultiplying by p^T ∇F(x) and recalling the second equation of (3.15), we get

By the assumption that ∇F(x)∇G(x) is positive definite, we obtain p = 0, (3.19) which, combined with (3.16), gives q = 0.

(3.20)
By the assumption that B has full row rank and the third equation of (3.15), we obtain r = 0, (3.21) which completes the proof.

Algorithm and Convergence Property
In this section, we formally present our smoothing inexact Newton-type algorithm with a nonmonotone line search for solving H_α(z) = 0 by using the smoothing penalized Fischer-Burmeister function φ_α(ε, a, b). This nonmonotone line search was used to solve the NCP in [17]. Furthermore, we show the local superlinear/quadratic convergence properties of the algorithm.
Step 2. If H_α(z^k) = 0, then stop; otherwise, let ρ_k = ρ(z^k).

Step 3.

Mathematical Problems in Engineering
Step 4. Let ξ_k = δ^{m_k}, where m_k is the smallest nonnegative integer m such that

Step 5. Set z^{k+1} = z^k + ξ_k Δz^k, k := k + 1, and
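Since the displayed conditions (4.2)-(4.4) are not reproduced above, the following is only a generic nonmonotone Armijo backtracking sketch in the Grippo-Lampariello-Lucidi style, where acceptance is tested against the maximum of recent merit values rather than the last one; the paper follows the rule of [17], whose update of the reference value C_k may differ.

```python
def nonmonotone_armijo(f, grad_dot_d, x, d, f_hist, sigma=1e-4, delta=0.5,
                       memory=5, max_backtracks=30):
    """Backtracking: accept xi = delta^m for the smallest m with
    f(x + xi*d) <= C_k + sigma * xi * <grad f(x), d>, where C_k is the
    max of the last `memory` merit values (nonmonotone reference value)."""
    C_k = max(f_hist[-memory:])
    xi = 1.0
    for _ in range(max_backtracks):
        trial = [x[i] + xi * d[i] for i in range(len(x))]
        if f(trial) <= C_k + sigma * xi * grad_dot_d:
            return xi, trial
        xi *= delta
    return xi, trial

# Tiny demo: quadratic merit, steepest-descent direction from x = (2, 2).
f = lambda v: v[0] ** 2 + v[1] ** 2
x = [2.0, 2.0]
d = [-2.0 * x[0], -2.0 * x[1]]            # d = -grad f(x)
gdd = -sum(di * di for di in d)           # <grad f, d> = -||grad f||^2
xi, x_new = nonmonotone_armijo(f, gdd, x, d, f_hist=[f(x)])
print(f(x_new) < f(x))  # sufficient decrease achieved
```

Allowing the merit value to rise above its most recent value (but not above C_k) is what lets such algorithms accept full Newton steps more often, which matters for the fast local convergence established below.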
From Algorithm 4.1 and [17], it is easy to see that the following remark holds.

(ii) T_α(z^k) ≤ C_k for any k.
(iii) ρ(z^{k+1}) ≤ ρ(z^k) for any k.
(iv) 0 ≤ ρ(z^k) ≤ ε_k for any k.
(v) ε_k > 0 and ε_{k+1} ≤ ε_k for any k.

Proof. From Theorem 3.2, we know that H'_α(z^k) is nonsingular. Hence, Step 3 is well defined at the kth iteration.
In the following, we show that Step 4 is well defined. For ξ ∈ (0, 1], we let

and then

(4.6)
If T_α(z^k) ≤ 1, then ‖H_α(z^k)‖ ≤ 1, which, together with (4.3), implies that

As a result, for any k,

From (4.6) and (4.9), one has
Theorem 4.4. Suppose that {z^k = (ε_k, x^k, λ^k, μ^k)} is an infinite sequence generated by Algorithm 4.1. Then any accumulation point z̄ of the sequence {z^k} is a stationary point of (2.12).
Proof. (i) Suppose that z̄ = (ε̄, ȳ) is an arbitrary accumulation point of {z^k}; then there exists an infinite subsequence

According to Remark 4.2 (i), (iii), and (v), the sequences {C_k}, {ρ_k}, and {ε_k} are convergent. Without loss of generality, we assume that

(4.13)
It is easy to see that C̄ ≥ 0, ρ̄ ≥ 0, and ε̄ ≥ 0. By Remark 4.2 (i) and (ii), we know that

which means that {T_α(z^k)} is bounded.
In the following, suppose that ρ̄ ≠ 0. Without loss of generality, we assume that {T_α(z^k)} is convergent and denote

Combining (4.3) with the assumption ρ̄ ≠ 0, we have T̄ > 0. Furthermore, by using (4.14) and Remark 4.2 (iv), we obtain that C̄ > 0 and ε̄ > 0. Hence, we can deduce that

(4.17)
We now break up the proof of part (i) into two cases.
(1) Assume that α_k ≥ e > 0 for all k, where e is a positive constant. In this case, by (4.4) and (4.2), it follows that for any k,

which, together with the boundedness of

(4.19)
On the other hand, by η_max ∈ (0, 1) and the definition of Q_k given in (4.4), we have

for any k.
As a result, we obtain that lim

(2) The stepsize ᾱ_k = α_k/δ does not satisfy the line search condition (4.2) for any sufficiently large k; that is,

holds for sufficiently large k. Since T_α(z^k) ≤ C_k, the aforementioned inequality becomes

Since ε̄ > 0 and T_α(·) is continuously differentiable at z̄, from Lemma 2.8, we have

(4.23)
where the first equality holds from (4.4) (in the form of a limit), the first inequality holds from (4.22) (in the form of a limit), the second equality holds from (4.1) (in the form of a limit), and the third inequality holds from ρ̄‖H_α(z̄)‖ ≤ γT̄ by using (4.17). Hence it follows from (4.23) and T̄ > 0 that −(1 − γ_0) ≥ −σ(1 − γ_0), which contradicts the facts that σ ∈ (0, 1/2) and γ_0 < 1. This completes the proof.
Similarly to the proof of Theorem 4.3 in [7], the following theorem holds.

Numerical Experiments
In this section, we implement Algorithm 4.1 for solving some GNCPs in order to observe its behavior. The parameters used in the algorithm are chosen as follows:

The following problems were also tested in [5-7], where m_i : R^n → R, i = 1, 2, ..., n, and

and m(y) = ψ(Ay + b) with ψ : R^n → R^n twice continuously differentiable. The following choices of the function ψ define our test problems. The numerical results are shown in Tables 1, 2, and 3 with the following four kinds of initial points: (a) (0, 0, ..., 0), (b) (−0.5, −0.5, ..., −0.5), (c) (−1, −1, ..., −1), (d) (0.5, 0.5, ..., 0.5). In Tables 1, 2, and 3, ST denotes the initial point, IT the number of iterations, TV the final value of T when the algorithm terminates, and CPU the computing time, respectively. For the starting point, we choose λ^0 = (0.5, 0.5, ..., 0.5) ∈ R^n, and the termination criterion for the algorithm is T(z^k) ≤ 10^{−6}.
From Tables 1, 2, and 3, we can see that Algorithm 4.1 is efficient for solving this kind of problem.
In the following, we compare Algorithm 4.1 (denoted by "Inexact") with the exact Newton algorithm with nonmonotone line search (denoted by "Exact"). Here, our test problem is problem (1) above; that is, ψ_i(x) = −0.5 − x_i, i = 1, 2, ..., n.

(5.4)
The initial point is (a) or (b). The numerical results are shown in Table 4.
From Table 4, we can see that Algorithm 4.1 is superior to the exact smoothing Newton method when the GNCP is of relatively large scale.

Conclusion
In this paper, we combine the smoothing function of the penalized Fischer-Burmeister NCP-function with the nonmonotone line search of [17] to present a new smoothing algorithm for solving the generalized nonlinear complementarity problem. We show that the iteration sequence generated by Algorithm 4.1 converges locally superlinearly/quadratically to a solution of the generalized nonlinear complementarity problem. Preliminary numerical results show the efficiency of the algorithm.

(2.6)

In [6], Wang et al. reformulated the GNCP as a system of nonlinear equations based on the Fischer function [15]

φ(a, b) = √(a² + b²) − a − b for a, b ∈ R.

In [7], Zhang et al. established a smoothing reformulation of the GNCP based on the following smoothing approximation of the Fischer function:

φ(ε, a, b) = √(a² + b² + αε²) − a − b for a, b, ε ∈ R,

where α > 0 is a constant.
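A minimal numerical sketch of this smoothing idea, under our reading of the formula above: for ε ≠ 0 the perturbed square root is differentiable even at (a, b) = (0, 0), and the approximation error vanishes as the smoothing parameter ε tends to zero.

```python
import math

ALPHA = 1.0  # alpha > 0 is a fixed constant

def fischer(a, b):
    return math.sqrt(a * a + b * b) - a - b

def fischer_smooth(eps, a, b, alpha=ALPHA):
    # Smoothing approximation: alpha*eps^2 is added under the square root,
    # so the function is differentiable at (a, b) = (0, 0) whenever eps != 0.
    return math.sqrt(a * a + b * b + alpha * eps * eps) - a - b

# The gap |phi(eps,a,b) - phi(a,b)| shrinks as eps -> 0.
gaps = [abs(fischer_smooth(10.0 ** (-k), 0.3, -0.7) - fischer(0.3, -0.7))
        for k in range(1, 5)]
print(all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:])))  # monotone decrease
```

Treating ε as an independent variable, as this paper does, lets the algorithm drive the smoothing parameter to zero together with the residual instead of on a separate schedule.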

Remark 4.2. Let the sequences {C_k} and {z^k = (ε_k, x^k, λ^k, μ^k)} be generated by Algorithm 4.1.

(i) C_{k+1} ≤ C_k for any k.

Theorem 4.3. Suppose that {z^k = (ε_k, x^k, λ^k, μ^k)} is a sequence generated by Algorithm 4.1, B has full row rank, ∇F(x^k)∇G(x^k) is positive definite, and ε_k > 0. Then Algorithm 4.1 is well defined.

Theorem 4.5. Suppose that {z^k = (ε_k, x^k, λ^k, μ^k)} is an infinite sequence generated by Algorithm 4.1, z̄ is an arbitrary accumulation point of the sequence {z^k}, and z̄ is a BD-regular solution of H(z) = 0. Then (1) the point x̄ is a solution of the GNCP; (2) the sequence {z^k} converges to z̄ superlinearly. In particular, if F' and G' are locally Lipschitz continuous at z̄, then {z^k} converges to z̄ Q-quadratically.

Example 5.1. Consider the implicit complementarity problem of the following form: find y ∈ R^n such that

y − m(y) ≥ 0, F(y) ≥ 0, (y − m(y))^T F(y) = 0.
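For a quick sanity check of this problem format, the componentwise natural residual min(y_i − m_i(y), F_i(y)) vanishes exactly at solutions of the implicit complementarity problem. The two-dimensional instance below is hypothetical and is not one of the paper's test problems.

```python
def icp_residual(y, m, F):
    """Natural residual of the implicit complementarity problem:
    r_i = min(y_i - m_i(y), F_i(y)); y solves the problem iff r = 0."""
    my, Fy = m(y), F(y)
    return [min(y[i] - my[i], Fy[i]) for i in range(len(y))]

# Hypothetical 2-dimensional instance:
m = lambda y: [0.0, y[1] - 1.0]        # y - m(y) = (y0, 1)
F = lambda y: [y[0] + y[1], y[0]]      # F(y) = (y0 + y1, y0)
y = [0.0, 1.0]
# y - m(y) = (0, 1) >= 0, F(y) = (1, 0) >= 0, componentwise product = 0.
print(max(abs(r) for r in icp_residual(y, m, F)) == 0.0)
```

Such a residual is convenient for verifying computed solutions independently of the merit function T used inside the algorithm.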

Table 1 :
Numerical results of Example 5.1 with n = 4.

Table 2 :
Numerical results of Example 5.1 with n = 8.

Table 3 :
Numerical results of Example 5.1 with n = 12.

Table 4 :
Comparison of numerical results with n = 800.