A Smoothing Newton Method with a Mixed Line Search for Monotone Weighted Complementarity Problems

In this paper, we present a smoothing Newton method for solving the monotone weighted complementarity problem (WCP). In each iteration of our method, the iterative direction is obtained by solving a system of linear equations, and the iterative step length is obtained by a line search. A feature of the line search criterion used in this paper is that monotone and nonmonotone line searches are used in a mixed manner. The proposed method is new even when the WCP reduces to the standard complementarity problem. In particular, the proposed method is proved to possess global convergence under a weak assumption. The preliminary experimental results show the effectiveness and robustness of the proposed method for solving the concerned WCP.


Introduction
The weighted linear complementarity problem, introduced by Potra [1], is to find a vector (s, t, u) ∈ R^n × R^n × R^m such that

Ps + Qt + Ru = q,  s ∘ t = w,  s, t ≥ 0,  (1)

where P ∈ R^{(n+m)×n}, Q ∈ R^{(n+m)×n}, and R ∈ R^{(n+m)×m} are given matrices, q ∈ R^{n+m} is a given vector, w ∈ R^n is a given nonnegative weight vector, and s ∘ t denotes the componentwise (Hadamard) product. We denote this problem by WMLCP. When w = 0, WMLCP (1) reduces to a mixed linear complementarity problem, which has been studied extensively in the literature [2].
In [1], Potra showed that the problem of Fisher market equilibrium may be modeled as a WMLCP, and particularly, it could be solved more efficiently than a corresponding complementarity problem model; Anstreicher [3] proved that the problem of the weighted centering may also be reformulated as a WMLCP and proposed an interior point method to solve it. Since then, this kind of problem has been studied a lot. For example, Potra [4] studied some theories and proposed an interior point method to solve the WMLCP with the involved matrix being sufficient; Zhang [5] presented a smoothing Newton method to solve WMLCPs; Tang [6] proposed a variant nonmonotone smoothing algorithm to solve WMLCPs; Chi et al. [7] investigated the existence and uniqueness of the solution for a class of weighted horizontal linear complementarity problems.
Recall that a standard finite-dimensional complementarity problem (denoted by CP) is to find a vector pair (s, t) ∈ R^{2n} such that

s ≥ 0,  t = f(s) ≥ 0,  s⊤t = 0,  (3)

where f: R^n ⟶ R^n is a given mapping and the superscript ⊤ denotes transposition. Problem (3) has many applications in fields such as engineering and economics (see, for example, [8]), and it has received great attention [2, 9–11]. Recently, a subclass of CPs, tensor complementarity problems, has also been studied extensively (see, for example, the survey papers [12–14]). Two extensions of tensor complementarity problems and an application to the traffic equilibrium problem were given in [15].
Inspired by the above papers, we investigate an extension of the CP, called the weighted complementarity problem (denoted by WCP), which is to find a vector pair (s, t) ∈ R^{2n} such that

s ≥ 0,  t = f(s) ≥ 0,  s ∘ t = w,  (4)

where f: R^n ⟶ R^n is a given mapping and w ∈ R^n_+ is a given nonnegative weight vector. Obviously, when w = 0, WCP (4) becomes a CP. When f is a linear mapping, say f(s) = Ns + p with N an n × n real matrix and p ∈ R^n, the corresponding WCP is called a weighted linear complementarity problem (WLCP); otherwise, it is called a weighted nonlinear complementarity problem.
Recall that a mapping f is said to be monotone if and only if (s − t)⊤(f(s) − f(t)) ≥ 0 holds for all s, t ∈ R^n. If f is a monotone mapping, then the corresponding WCP (4) is called a monotone WCP. We investigate numerical methods for solving the WCP. It is well known that CP (3) may be reformulated as a system of parameterized smoothing equations in terms of some complementarity function [16–18]. Thus, in order to obtain a solution of CP (3), one may use a Newton-type method to iteratively solve the resulting system of equations while driving the smoothing parameter to zero. This is the so-called smoothing Newton method. This class of methods has been successfully applied to many optimization and related problems, including linear complementarity problems [19–24], linear programs [25], nonlinear complementarity problems [26–30], variational inequalities [27, 31], semidefinite complementarity problems [32–37], systems of inequalities [19, 34], symmetric cone complementarity problems [35, 36], mathematical programs with complementarity constraints [38, 39], and absolute value equations [40]. In order to achieve global convergence, most known smoothing Newton methods require the assumption that the solution set of the concerned problem is nonempty and compact. In [29], the author proposed a smoothing Newton method for solving CP (3) with a monotone mapping and showed that it is globally convergent when the problem has at least one solution. This assumption is weaker than the ones required for the global convergence of other smoothing Newton methods (see, for example, [41]).
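To make the smoothing idea concrete, the following sketch uses a perturbed Fischer-Burmeister-type function (our illustration, not necessarily the exact smoothing function Φ_w used later in the paper): at ν = 0, φ(ν, a, b, w_i) = 0 holds exactly when a ≥ 0, b ≥ 0, and ab = w_i, while for ν > 0 the function is smooth everywhere.

```python
import numpy as np

def phi(nu, a, b, w):
    """Perturbed Fischer-Burmeister-type smoothing function (illustrative).

    At nu = 0:  phi = 0  <=>  a >= 0, b >= 0, a*b = w.
    For nu > 0 the argument of the square root is strictly positive,
    so phi is continuously differentiable in (nu, a, b).
    """
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * w + 4.0 * nu ** 2)

# At a weighted-complementarity point a = 2, b = 3, w = 6 (a*b = w):
print(phi(0.0, 2.0, 3.0, 6.0))  # -> 0.0
```

A Newton-type method can then be applied to the resulting smooth system while ν is driven to zero, which is the strategy described above.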
Moreover, since the nonmonotone line search technique can improve the likelihood of finding a global optimal solution and the convergence speed when the involved function is highly nonconvex and has a valley in a small neighbourhood of some point, nonmonotone line search criteria have been used in some smoothing Newton methods for solving CPs (see, for example, [42–45]). More nonmonotone techniques can be found in [46–50]. In addition, some related Newton-type methods can be found in [51–55].
Inspired by the above methods, we investigate the nonmonotone smoothing Newton method for solving WCP (4). We use the following assumption.
Assumption 1. f is a continuously differentiable monotone mapping and WCP (4) has at least a solution.
Based on the algorithmic framework studied in [29] for CP (3) and some nonmonotone line search criteria used in [42, 43], by using a symmetric perturbed smoothing function, we propose a smoothing Newton method with a mixed line search criterion to solve WCP (4), where, in each iteration, a system of linear equations is solved to find the iterative direction and a line search is performed to obtain the iterative step length. In particular, we show that the proposed smoothing Newton method is globally convergent whenever Assumption 1 is satisfied.
This assumption is weaker than most of those used in the global convergence analysis of smoothing Newton methods for CP (3). When WCP (4) reduces to CP (3), our proofs of the main results in this article are simpler than those of the corresponding results in [29]. The article is organized as follows. In Section 2, we describe a reformulation of WCP (4) and propose a smoothing Newton method for solving it. In Section 3, we prove the global convergence of the proposed method. The preliminary numerical experiments and conclusions are given in Sections 4 and 5, respectively.
In this article, we use R^n_+ to denote the nonnegative orthant in R^n and R^n_{++} to denote the positive orthant in R^n. Denote Ω ≔ {0, 1, 2, . . .}. For any s, t ∈ R^n, s_i denotes the ith component of s, and (s, t) stands for (s⊤, t⊤)⊤ for simplicity. For any (ν, s, t), (ν_k, s^k, t^k) ∈ R_+ × R^{2n}, we denote r ≔ (ν, s, t) and r^k ≔ (ν_k, s^k, t^k). Moreover, the solution set of WCP (4) is denoted by SOL_w(f).
Then, we have the following two results: Proof. These two results can be easily proved by using the definition of ϕ_c(·). We omit the proof here.
Furthermore, for any given weight vector w ∈ R^n_+, we define a mapping for the WCP, G_w: R^{1+2n} ⟶ R^{1+2n}, by Then, by using (9) and Lemma 1 (a), it follows that The following results can be easily obtained.
□ Proposition 1. Suppose that f is a continuously differentiable monotone mapping. Then, (a) for each r ∈ R_{++} × R^n × R^n, the mapping G_w is continuously differentiable at r, and where , ∀i ∈ Θ, and Therefore, by (11) and the continuity of the mapping G_w, we may find a solution of WCP (4) in the following way: use some Newton-type method to iteratively solve G_w(r) = 0 and drive ‖G_w(r)‖ ↓ 0.
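As a deliberately simplified illustration of this strategy, the sketch below solves a tiny monotone WLCP by Newton steps on a system of the form G_w(r) = 0. It substitutes our own choices for the paper's ingredients: a perturbed Fischer-Burmeister-type smoothing function, a finite-difference Jacobian in place of the exact Jacobian (12), and a plain monotone Armijo backtracking rule in place of the mixed criterion; the problem data are also hypothetical.

```python
import numpy as np

def smoothing_newton(f, w, s0, t0, nu0=1.0, tol=1e-10, max_iter=100):
    """Solve s >= 0, t = f(s) >= 0, s o t = w by a basic smoothing Newton
    iteration on G_w(nu, s, t) = (nu; Phi_w(nu, s, t); f(s) - t) = 0."""
    n = len(s0)

    def G(r):
        nu, s, t = r[0], r[1:1 + n], r[1 + n:]
        Phi = s + t - np.sqrt((s - t) ** 2 + 4.0 * w + 4.0 * nu ** 2)
        return np.concatenate(([nu], Phi, f(s) - t))

    r = np.concatenate(([nu0], s0, t0))
    for _ in range(max_iter):
        g = G(r)
        if np.linalg.norm(g) < tol:
            break
        # Finite-difference Jacobian (the paper uses the exact one, eq. (12)).
        m = len(r)
        J = np.zeros((m, m))
        h = 1e-7
        for j in range(m):
            e = np.zeros(m)
            e[j] = h
            J[:, j] = (G(r + e) - g) / h
        d = np.linalg.solve(J, -g)          # Newton direction from linear system
        # Simple monotone Armijo backtracking on ||G|| (not the mixed rule).
        gamma = 1.0
        while np.linalg.norm(G(r + gamma * d)) > (1 - 1e-4 * gamma) * np.linalg.norm(g):
            gamma *= 0.5
            if gamma < 1e-12:
                break
        r = r + gamma * d
    return r[1:1 + n], r[1 + n:]
```

For example, with f(s) = s + (2, 2) and w = (3, 8), the iteration recovers the unique nonnegative solution s = (1, 2), t = (3, 4), for which s ∘ t = w.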
and a sufficiently small positive number ϵ. Set m 0 ≔ 1 and k ≔ 0.
where G_w′ is defined by (12).
In Algorithm 1, we set a selection condition: if it is satisfied for some k ∈ Ω, then η_{k+1} = 0; hence, by (16), we have C_{k+1} = ‖G_w(r^{k+1})‖. In this case, the line search in the (k + 1)th iteration is a monotone line search. Moreover, if the above selection condition is not satisfied for some k ∈ Ω, then the line search in the (k + 1)th iteration is a nonmonotone line search. Thus, the line search criterion designed in Algorithm 1 is a mixed line search that switches between monotone and nonmonotone line searches.
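One common way to realize such a switching reference value is a convex-combination update in the spirit of (16); the exact formula in the paper may differ, so the following is only an illustrative sketch:

```python
def next_reference(C_k, normG_next, eta, selection_holds):
    """Reference value C_{k+1} for a mixed line search (illustrative).

    If the selection condition holds, eta_{k+1} = 0 and C_{k+1} collapses
    to ||G_w(r^{k+1})||, so the next line search is monotone; otherwise the
    old reference C_k keeps some weight, making the search nonmonotone.
    """
    eta_next = 0.0 if selection_holds else eta
    return eta_next * C_k + (1.0 - eta_next) * normG_next

print(next_reference(10.0, 2.0, 0.8, True))   # -> 2.0 (monotone case)
print(next_reference(10.0, 2.0, 0.8, False))  # larger reference: nonmonotone
```

Because C_{k+1} ≥ ‖G_w(r^{k+1})‖ whenever η_{k+1} > 0, the nonmonotone branch tolerates a temporary increase of the merit function along the iterates.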
Algorithm 1 is new even when the weight vector involved is the zero vector (in this case, the WCP reduces to the CP). Moreover, Algorithm 1 is simpler than many smoothing Newton methods for solving the CP in the sense that it only requires solving one system of linear equations and performing one line search in each iteration. In fact, a similar algorithmic framework (without the nonmonotone line search) was proposed in [29] for solving (3).
In the following, we show that Algorithm 1 is well defined.

Lemma 2.
Suppose that f is a continuously differentiable monotone mapping and the sequence r^k = (ν_k, s^k, t^k) is generated by Algorithm 1. Then, we have the following results: where the first inequality holds due to (16) and the second due to (15); otherwise, we have η_{k+1} = 0, and hence, by (16) and (15), we obtain that By Step 0 in Algorithm 1, it is easy to see that C_0 = ‖G_w(r^0)‖ ≤ ρν_0. Suppose that C_l ≤ ρν_l for some l ∈ Ω; then where the first equality follows from the first equation in (14), the first inequality from the inductive assumption, and the last inequality from (a) above. Thus, C_k ≤ ρν_k for all k ∈ Ω.
(d) By (16), we have that for any k ∈ Ω, where the inequality follows from (a) above. Thus, ‖G_w(r^k)‖ ≤ C_k for all k ∈ Ω.
(e) On the one hand, since ν_k ≥ 0 for all k ∈ Ω, it follows from Proposition 1 that G_w′(r^k) (see (12)) is invertible, and hence, equation (14) is solvable for all k ∈ Ω. On the other hand, for any k ∈ Ω, if we let R^k_w(c) ≔ G_w(r^k + c dr^k) − G_w(r^k) − c G_w′(r^k) dr^k, then, by (14), we have that where the second inequality holds due to (d) above. Since the mapping G_w is continuously differentiable at any r ∈ R_{++} × R^n × R^n, it follows from (a) that ‖R^k_w(c)‖ = o(c) for all k ∈ Ω. This means that the line search (15) is well defined.

Global Convergence of Algorithm 1
We now show that Algorithm 1 is globally convergent under Assumption 1.

Theorem 1. Let Assumption 1 be satisfied. en, the following results hold:
(a) The sequence r^k = (ν_k, s^k, t^k) generated by Algorithm 1 is bounded. (b) Every accumulation point of (s^k, t^k) solves WCP (4). Proof. (a) We verify the boundedness of r^k in the following three parts. Part 1. We verify the boundedness of ν_k. This result holds directly from Lemma 2 (c).

Part 2.
We verify the boundedness of s^k. Suppose, on the contrary, that s^k is unbounded. For our purposes, we construct sequences x^k and y^k by For any k ∈ Ω, by Lemma 2 (b), we have C_k ≤ ρν_k, and by Lemma 2 (d), we have ‖G_w(r^k)‖ ≤ C_k. Thus, ‖G_w(r^k)‖ ≤ ρν_k for all k ∈ Ω. This, together with (23) and (24) as well as the definition of the mapping G_w in (9), implies that These mean that both sequences x^k and y^k are uniformly bounded.
Furthermore, we construct sequences s̄^k and t̄^k by Then, by using Lemma 1 (b) and the definition of y^k in (24), we have which leads to Moreover, by Assumption 1, it follows that SOL_w(f) ≠ ∅, and hence, there exists (s^*, t^*) ∈ R^n × R^n satisfying Thus, by using (28) and the inequalities given in (27) and (29), we further obtain that Substituting (26) into (30), we obtain that holds for any k ∈ Ω. In what follows, we investigate the right-hand side of (31). Since Assumption 1 holds, the mapping f is monotone, and hence, Hence, by using (23) and (24), we obtain that for any k ∈ Ω, where the first equality holds due to (23) and (29) and Denote Then, it holds that either g(s^k) is bounded or g(s^k) ⟶ +∞ as k ⟶ ∞. Moreover, we have that

(37)
Since s^k is assumed unbounded, the right-hand side of (37) tends to +∞ as k ⟶ ∞, which contradicts the fact that the left-hand side of (37) is a constant. Therefore, the assumption that s^k is unbounded does not hold, i.e., s^k is bounded.
Part 3. We verify the boundedness of t^k. On the one hand, f(s^k) is bounded since s^k is bounded and the mapping f is continuous. On the other hand, by (23), we have t^k = ν_k x^k + f(s^k) for all k ∈ Ω. Thus, by the boundedness of the sequences ν_k, x^k, and f(s^k), we obtain that t^k is bounded. Therefore, combining Part 1 with Parts 2 and 3, we conclude that r^k is bounded.
(b) We verify that every accumulation point of (s^k, t^k) solves WCP (4). From Lemma 2 (a)–(d), we have that for all k ∈ Ω, Thus, by the boundedness of the iterative sequence established in (a) and taking a subsequence if necessary, we may assume that r^* ≔ (ν^*, s^*, t^*) is the limit of the sequence r^k, and denote C^* ≔ lim_{k⟶∞} C_k and G^*_w ≔ lim_{k⟶∞} ‖G_w(r^k)‖. Since the mapping G_w is continuous, it is obvious that G^*_w = ‖G_w(r^*)‖. Moreover, it is not difficult to verify from (16) that C^* = G^*_w. If G^*_w = 0, then we can derive the desired result by a simple continuity argument.
In the following, assume G^*_w ≠ 0. Then, C^* = G^*_w > 0 and ν^* > 0. We consider the following two cases. □ Case 1. Suppose that c_k ≥ c̄ > 0 for all k ∈ Ω, where c̄ is a positive constant. Then, by (15), we have that for any k ∈ Ω, Letting k ⟶ ∞, we obtain 1 ≤ 1 − σ(1 − (1/ρ))c̄, which leads to a contradiction. Case 2. Suppose that lim_{k⟶∞} c_k = 0. Then, for any sufficiently large k ∈ Ω, the stepsize ĉ_k ≔ (c_k/δ) does not satisfy the line search criterion (15), i.e., where the second inequality follows from Lemma 2 (d). Thus, for any sufficiently large k ∈ Ω, Since ν^* > 0, it is easy to show from (14) that the sequence dr^k is convergent. Denote dr^* ≔ lim_{k⟶∞} dr^k. Letting k ⟶ ∞ in (41), we obtain that This is a contradiction. Therefore, combining Case 1 with Case 2, we conclude that every accumulation point of (s^k, t^k) solves WCP (4).
Combining (a) with (b), we complete the proof of the theorem. The basic idea in showing the boundedness of the iterative sequence (i.e., the proof of Theorem 1 (a)) is similar to the ones in [29], Lemma 3.4 and Theorem 3.1. However, the proof of Theorem 1 is simpler than those of Lemma 3.4 and Theorem 3.1 in [29].

Numerical Experiments
In this section, we implement Algorithm 1 for solving weighted linear complementarity problems. All experiments were performed on a ThinkPad notebook computer with a 2.5 GHz CPU and 8 GB of memory. The code was run in MATLAB R2016a under Windows 10. Our experiments are divided into two parts. Part 1. In this part, we implement Algorithm 1 for solving WCP (4). Specifically, we test the following problem: Problem 1. Consider the following WLCP: which is randomly generated in the following way: for any given positive integer n, take N ≔ rand(n, n) and s^* ≔ rand(n, 1). Let Λ ≔ randi(n, 0.5 * n, 1), and take p in the following way: p_i ≔ −(Ns^*)_i for all i ∈ Λ and p_i ≔ rand(1, 1) for all i ∈ {1, 2, . . . , n}\Λ. Let w ≔ S^* t^*, where S^* ≔ diag(s^*) and t^* ≔ Ns^* + p. Obviously, this problem is a monotone WLCP, which is solvable. In this problem, w_i = 0 for all i ∈ Λ. In the experiments, we take Suppose n is taken as listed in Table 1. Take the starting point s^0 ≔ −0.5 * rand(n, 1). Set t^0 ≔ Ns^0 + p and r^0 ≔ (ν_0, s^0, t^0). Take ρ ≔ (‖G_w(r^0)‖/ν_0). We take η_0 ≔ 0.8, and for any k ∈ Ω, if the condition For every choice of n, the corresponding problem is tested five times, and the experimental results are reported in Table 1, where It denotes the number of iterations, Itm denotes the number of iterations in which the monotone line search is used (i.e., the case η_k = 0), NF denotes the number of evaluations of the mapping G_w(r^k), Val denotes the final value of ‖G_w(r^k)‖, and Cpu denotes the CPU time in seconds, respectively.
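A minimal Python sketch of a generator in the spirit of Problem 1 (our own reconstruction, not the paper's MATLAB code: here N is made positive semidefinite via B Bᵀ so that f(s) = Ns + p is provably monotone, and the index set Λ forces w_i = 0 on roughly half the components):

```python
import numpy as np

def make_wlcp(n, seed=0):
    """Generate a solvable monotone WLCP instance (illustrative reconstruction).

    Returns N, p, w and a known solution pair (s*, t*) with
    s* > 0, t* = N s* + p >= 0, and w = s* o t* (w_i = 0 on Lambda).
    """
    rng = np.random.default_rng(seed)
    B = rng.random((n, n))
    N = B @ B.T                      # positive semidefinite => f monotone
    s_star = rng.random(n)           # s* > 0 componentwise
    Lam = rng.choice(n, size=n // 2, replace=False)
    p = rng.random(n)                # p_i > 0 outside Lambda => t*_i > 0 there
    p[Lam] = -(N @ s_star)[Lam]      # forces t*_i = 0 on Lambda
    t_star = N @ s_star + p
    w = s_star * t_star              # zero weights exactly on Lambda
    return N, p, w, s_star, t_star
```

By construction (s^*, t^*) solves the generated WLCP, so the residual reached by any solver can be checked against a known solution.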
From Table 1, we have the following observations: (i) From columns "It" and "Itm" in Table 1, we see that both the monotone and the nonmonotone line searches are active in Algorithm 1, so the algorithm is indeed one with a mixed line search. (ii) Algorithm 1 is effective for solving WLCPs, since each tested problem was solved successfully in a few iterations and a short CPU time. (iii) Algorithm 1 is robust, since different test problems of the same size were solved successfully with small differences in the numbers of iterations.
Part 2. In fact, Algorithm 1 can be applied to solve WMLCP (1). For this purpose, instead of the mapping G_w(·) defined by (9), we define a mapping for the WMLCP, for any r ≔ (ν, s, t, u) ∈ R × R^n × R^n × R^m, where the mapping Φ_w is the same as the one defined in Section 2, and G_w′(r) is used to denote the Jacobian matrix of G_w(·) at any r = (ν, s, t, u) ∈ R × R^n × R^n × R^m. In this part, we implement Algorithm 1 for solving this type of weighted complementarity problem and compare it with two algorithms studied in the literature. The test problem is constructed in a similar way to those in [5, 6]. Specifically, we test the following problem. Problem 2. Consider WMLCP (1), where the matrices P, Q, and R and the vector q are generated as follows:
In order to generate a strictly feasible starting point, we first take s_temp = [s_I; s_B] with s_B ≔ 0.1 * rand(n − m, 1) and s_I ≔ B * s_B, and then we set the starting point by We implement our algorithm (i.e., Algorithm 1), Algorithm 1 in [5], and Algorithm 1 in [1] for solving Problem 2. For our algorithm, all parameters are selected the same as in the experiments of Part 1; for Algorithm 1 in [5], all parameters are selected the same as in the experiments in [5]. Denote Res ≔ ‖Ps + Qt + Ru‖_∞. For the first two algorithms, the termination criterion is max(Res, Gap, normG) ≤ 10^{−6}, while for the last algorithm, the termination criterion is max(Res, Gap) ≤ 10^{−6}.
In the experiments of each algorithm, for every choice of n and m, the corresponding problem is tested five times. The experimental results are reported in Table 2, where It and Cpu denote the number of iterations and the CPU time in seconds, respectively, and Val denotes max(Res, Gap, normG) for the first two algorithms and max(Res, Gap) for the last algorithm.
From the numerical results in Table 2, it is easy to see that our algorithm is comparable to Algorithm 1 in [5], and these two algorithms are more efficient than Algorithm 1 in [1].

Conclusions
We proposed a smoothing Newton method with a mixed line search for solving monotone WCPs and showed the global convergence of the method under a weak assumption. The preliminary experimental results demonstrate the effectiveness and robustness of the method for solving the monotone WCP. We believe that many other smoothing Newton methods can also be modified to solve the WCP. Two future issues to be investigated are the theoretical properties of WCPs and solution methods for large-scale WCPs.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.