A Self-Adaptive Method for Solving a System of Nonlinear Variational Inequalities

The system of nonlinear variational inequalities (SNVI) is a useful generalization of variational inequalities. Verma (2001) suggested and analyzed an iterative method for solving the SNVI. In this paper, we present a new self-adaptive method whose computational cost is less than that of Verma's method. The convergence of the new method is proved under the same assumptions as Verma's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method.


Introduction
A large number of problems arising in various branches of pure and applied sciences can be studied in the unified framework of variational inequalities. In recent years, classical variational inequality and complementarity problems have been extended and generalized to study a wide range of problems arising in mechanics, physics, optimization, and the applied sciences; see [1-13] and the references therein. A useful and important generalization of variational inequalities is the system of nonlinear variational inequalities.
Many authors have suggested and analyzed various iterative methods for solving different types of variational inequalities in Hilbert or Banach spaces, based on the auxiliary principle and the resolvent operator technique. They usually discuss the convergence of these methods, but most do not address their efficiency, in particular through numerical results.
On the other hand, much attention has been given to developing efficient and implementable projection methods and their variant forms for solving variational inequalities and related optimization problems in R^n; see, for example, Glowinski et al. [2], Harker and Pang [3], He [4], He and Liao [5], Shi et al. [8], Y. J. Wang et al. [12, 13], and the references therein.
However, this type of numerical method can only be applied to the classical variational inequality.
Verma [11] investigated the approximation solvability of a new system of nonlinear variational inequalities involving strongly monotone mappings. In 2005, Bnouhachem [1] presented a new self-adaptive method for solving general mixed variational inequalities. In this paper, inspired and motivated by the results of Verma [11] and Bnouhachem [1], the author proposes a new self-adaptive iterative method for solving the SNVI, proves its convergence under the same assumptions as those of Verma [11], and gives numerical examples to illustrate its efficiency.

Preliminaries
Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ‖·‖. Let K be a closed convex subset of H and let T : K → H be any mapping. The author considers the following system of nonlinear variational inequalities (SNVI): determine elements x*, y* ∈ K such that

⟨ρT(y*) + x* − y*, x − x*⟩ ≥ 0 for all x ∈ K, ρ > 0,
⟨γT(x*) + y* − x*, x − y*⟩ ≥ 0 for all x ∈ K, γ > 0. (2.1)

The SNVI (2.1) was first introduced and studied by Verma [11] in 2001. For the applications, formulation, and numerical methods of the SNVI (2.1), we refer the reader to Verma [11].
For y* = x* and ρ = γ = 1, the SNVI (2.1) reduces to the following standard nonlinear variational inequality (NVI) problem: find an element x* ∈ K such that

⟨T(x*), x − x*⟩ ≥ 0 for all x ∈ K. (2.2)

Let K be a closed convex cone of H. The SNVI (2.1) is equivalent to a system of nonlinear complementarities (SNC): find elements x*, y* ∈ K such that T(x*), T(y*) ∈ K* and

⟨ρT(y*) + x* − y*, x*⟩ = 0, ⟨γT(x*) + y* − x*, y*⟩ = 0, (2.3)

where K* is the polar cone to K defined by

K* = { f ∈ H : ⟨f, x⟩ ≥ 0 for all x ∈ K }. (2.4)

For y* = x* and ρ = γ = 1, the SNC (2.3) reduces to the nonlinear complementarity problem: find an element x* ∈ K such that T(x*) ∈ K* and

⟨T(x*), x*⟩ = 0. (2.5)

The projection of a point x ∈ H onto the closed convex set K, denoted by P_K[x], is defined as the unique solution of the problem

min_{y ∈ K} ‖x − y‖. (2.6)

For any closed convex set K ⊂ H, a basic property of the projection operator P_K[·] is

⟨P_K[x] − x, y − P_K[x]⟩ ≥ 0 for all y ∈ K. (2.7)

From the above inequality and the Cauchy-Schwarz inequality, it follows that the projection operator P_K[·] is nonexpansive:

‖P_K[x] − P_K[y]‖ ≤ ‖x − y‖ for all x, y ∈ H. (2.8)

Lemma 2.1 [11]. Elements x*, y* ∈ K form a solution of the SNVI (2.1) if and only if

x* = P_K[y* − ρT(y*)],

where y* = P_K[x* − γT(x*)], for ρ, γ > 0.
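As a concrete illustration (not taken from the paper), the characterizing inequality (2.7) and the nonexpansivity (2.8) of P_K can be checked numerically when K is a box in R^n, where the projection is a componentwise clip:

```python
import numpy as np

def project_box(x, l, u):
    """Projection P_K[x] onto the box K = {y : l <= y <= u} (componentwise clip)."""
    return np.clip(x, l, u)

rng = np.random.default_rng(0)
n = 5
l, u = np.zeros(n), np.ones(n)
x = 3.0 * rng.normal(size=n)          # arbitrary point of R^n
px = project_box(x, l, u)

# Property (2.7): <P_K[x] - x, y - P_K[x]> >= 0 for every y in K.
for _ in range(1000):
    y = rng.uniform(l, u)             # random feasible point
    assert np.dot(px - x, y - px) >= -1e-12

# Nonexpansivity (2.8): ||P_K[x] - P_K[z]|| <= ||x - z||.
z = 3.0 * rng.normal(size=n)
assert np.linalg.norm(px - project_box(z, l, u)) <= np.linalg.norm(x - z) + 1e-12
print("projection properties verified")
```

The box case is convenient because P_K is available in closed form; the same checks apply to any closed convex K for which the projection can be computed.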
In [11], Verma used Lemma 2.1 to suggest and analyze the following algorithm for solving the SNVI.

Algorithm 2.2. For an arbitrarily chosen initial point x^0 ∈ K, compute the sequences {x^k} and {y^k} by the iterative procedure (for k ≥ 0)

x^{k+1} = (1 − a^k) x^k + a^k P_K[y^k − ρT(y^k)],
y^k = P_K[x^k − γT(x^k)],

where 0 ≤ a^k ≤ 1 and Σ_{k≥0} a^k = ∞.

For γ = 0 and y^k = x^k, Algorithm 2.2 reduces to the following algorithm.
Algorithm 2.3. Compute the sequence {x^k} by the following iteration for an initial point x^0 ∈ K:

x^{k+1} = (1 − a^k) x^k + a^k P_K[x^k − ρT(x^k)] for k ≥ 0,

where 0 ≤ a^k ≤ 1, Σ_{k≥0} a^k = ∞, and ρ > 0.
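For illustration, Verma's Algorithm 2.2 can be sketched as follows on a box-constrained problem with an affine, strongly monotone operator T(x) = Dx + c and the constant choice a^k = 1/2; the data D, c and all parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

def P_K(x, l, u):
    return np.clip(x, l, u)   # projection onto the box K = [l, u]

def verma_snvi(T, x0, l, u, rho, gamma, max_iter=5000, tol=1e-8):
    """Algorithm 2.2 with a^k = 1/2:
    y^k     = P_K[x^k - gamma T(x^k)],
    x^{k+1} = (1 - a^k) x^k + a^k P_K[y^k - rho T(y^k)]."""
    x = x0.copy()
    for k in range(max_iter):
        y = P_K(x - gamma * T(x), l, u)
        x_new = 0.5 * x + 0.5 * P_K(y - rho * T(y), l, u)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, y, k
        x = x_new
    return x, y, max_iter

# Hypothetical strongly monotone affine operator on the unit box.
n = 4
D = 2.0 * np.eye(n)
c = np.array([-1.0, 0.5, -3.0, 2.0])
T = lambda x: D @ x + c
l, u = np.zeros(n), np.ones(n)
x_star, y_star, iters = verma_snvi(T, x0=np.zeros(n), l=l, u=u, rho=0.3, gamma=0.3)
print(x_star)   # approximate solution of the SNVI on the box
```

For this diagonal D the limit can be checked by hand: componentwise, x*_j = clip(-c_j / 2, 0, 1), and x* = y* since the problem reduces to the NVI (2.2).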

Convergence of projection methods
Unlike Algorithms 2.2 and 2.3, we construct the following algorithm.

Algorithm 3.1.

Step 1. Choose an initial point x^0 ∈ K and parameters ε > 0, ρ > 0, γ > 0, μ ∈ (0, 1), δ ∈ (0, 1); set k = 0.

Step 2. Set ρ_k = ρ. If ‖r(x^k, ρ)‖ < ε, then stop; otherwise, find the smallest nonnegative integer m_k such that ρ_k = ρμ^{m_k} satisfies

ρ_k ‖T(x^k) − T(x̄^k)‖ ≤ δ ‖r(x^k, ρ_k)‖,

where x̄^k = P_K[x^k − ρ_k T(x^k)].

Step 3. Compute the direction

d(x^k, ρ_k) = r(x^k, ρ_k) − ρ_k (T(x^k) − T(x̄^k)),

where r(x, ρ) = x − P_K[x − ρT(x)] is the projection residue.

Step 4. Get the next iterate

x^{k+1} = P_K[x^k − γ d(x^k, ρ_k) − γ T(x^k)],

set k := k + 1, and go to Step 2.
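The self-adaptive step can be sketched as follows. This is a minimal reading of the Armijo-type rule common to methods of this kind (shrink ρ_k by the factor μ until the residue condition holds, then take a projection step along the direction built from d); the operator data and all parameter values are hypothetical, not the paper's:

```python
import numpy as np

def P_K(x, l, u):
    return np.clip(x, l, u)   # projection onto the box K = [l, u]

def self_adaptive(T, x0, l, u, rho=1.0, gamma=0.3, mu=0.5, delta=0.9,
                  eps=1e-8, max_iter=1000):
    """Sketch of a self-adaptive projection method in the spirit of Algorithm 3.1."""
    x = x0.copy()
    for k in range(max_iter):
        if np.linalg.norm(x - P_K(x - rho * T(x), l, u)) < eps:  # ||r(x^k, rho)|| < eps
            return x, k
        rho_k = rho
        while True:                                   # Armijo-type search on rho_k
            xbar = P_K(x - rho_k * T(x), l, u)
            rk = x - xbar                             # r(x^k, rho_k)
            if rho_k * np.linalg.norm(T(x) - T(xbar)) <= delta * np.linalg.norm(rk):
                break
            rho_k *= mu
        d = rk - rho_k * (T(x) - T(xbar))             # d(x^k, rho_k)
        x = P_K(x - gamma * (d + T(x)), l, u)         # step along -gamma(d + T(x^k))
    return x, max_iter

# Hypothetical strongly monotone test operator on the unit box.
n = 4
D, c = 2.0 * np.eye(n), np.array([-1.0, 0.5, -3.0, 2.0])
T = lambda x: D @ x + c
x_star, iters = self_adaptive(T, np.zeros(n), np.zeros(n), np.ones(n))
print(np.round(x_star, 4))
```

The search always terminates: since T is Lipschitz, the condition holds as soon as ρ_k times the Lipschitz constant drops below δ.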
Remark 3.2. Note that Algorithm 3.1 is a modification of the standard procedure. In Algorithm 3.1, the search direction is taken as −γd(x^k, ρ_k) − γT(x^k), which is closely related to the projection residue and differs from the standard procedure. In addition, a self-adaptive step-size strategy is used. The numerical results show that these modifications can improve computational efficiency substantially.
Applying Algorithm 3.1 and the property of the projection operator P_K, one can estimate ‖x^{k+1} − x*‖ for a solution x* of the SNVI (2.1). Since T is r-strongly monotone and s-Lipschitz continuous, that is,

⟨T(x) − T(y), x − y⟩ ≥ r ‖x − y‖² and ‖T(x) − T(y)‖ ≤ s ‖x − y‖ for all x, y ∈ K,

it follows, using the definition of d(x^k, ρ_k) and the step-size condition of Step 2, that the sequence {x^k} generated by Algorithm 3.1 converges to a solution of the SNVI (2.1).
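For intuition about the two constants (an illustration, not part of the proof): when T(x) = Dx + c is affine, ⟨T(x) − T(y), x − y⟩ = (x − y)ᵀD(x − y), so the strong monotonicity modulus r is the smallest eigenvalue of the symmetric part of D, and the Lipschitz constant s is the spectral norm of D:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n))
D = A @ A.T + n * np.eye(n)     # symmetric positive definite => T strongly monotone
c = rng.normal(size=n)
T = lambda x: D @ x + c

r = np.linalg.eigvalsh((D + D.T) / 2).min()   # strong monotonicity modulus
s = np.linalg.norm(D, 2)                      # Lipschitz constant (spectral norm)

# Empirical check of both constants on random pairs of points.
for _ in range(500):
    x, y = rng.normal(size=n), rng.normal(size=n)
    assert np.dot(T(x) - T(y), x - y) >= r * np.linalg.norm(x - y) ** 2 - 1e-8
    assert np.linalg.norm(T(x) - T(y)) <= s * np.linalg.norm(x - y) + 1e-8
print(f"r = {r:.3f}, s = {s:.3f}")
```

Note that r ≤ s always holds, so conditions of the form ρ < 2r/s² constrain the admissible step sizes.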

Preliminary computational results
In this section, we present some numerical results for the proposed method.

Example 4.1. The test problem uses the box K = [l, u], where l = (0,0,...,0)^T and u = (1,1,...,1)^T. The calculations are started with the vector x^0 = (0,0,...,0)^T and stopped whenever ‖r(x^n, γ)‖_∞ < 10^{−5}. Table 4.1 gives the numerical results of Algorithms 3.1 and 2.2; it shows that Algorithm 3.1 is very effective for the problem tested. In addition, for our method, the computational time and the iteration numbers do not appear very sensitive to the problem size.

In all tests, the calculations are started with the vector x^0 = (0,0,...,0)^T and stopped whenever ‖r(x^n, γ)‖_∞ < 10^{−4}. All codes are written in Matlab and run on a desktop computer. The iteration numbers and the computational times of Algorithms 2.2 and 3.1 for different dimensions are given in Table 4.2, which shows that Algorithm 3.1 is also very effective for the problem tested; again, the computational time and iteration numbers are not very sensitive to the problem size.

Example 4.2. Let T(x) = N(x) + Dx + c, where N(x) and Dx + c are the nonlinear and linear parts of T(x), respectively. The components of N(x) are N_j(x) = a_j · arctan(x_j), where the a_j are randomly chosen in (0, 1); D and c are the same as in Example 4.1.
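The operator of Example 4.2 can be assembled as follows; since the paper's data for Example 4.1 are not reproduced here, the matrix D and vector c below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
a = rng.uniform(0.0, 1.0, size=n)          # a_j randomly chosen in (0, 1)
D = np.diag(np.full(n, 4.0)) \
    + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)    # hypothetical stand-in for Example 4.1's D
c = rng.uniform(-1.0, 1.0, size=n)         # hypothetical stand-in for Example 4.1's c

def N(x):
    """Nonlinear part: N_j(x) = a_j * arctan(x_j)."""
    return a * np.arctan(x)

def T(x):
    """T(x) = N(x) + Dx + c, as in Example 4.2."""
    return N(x) + D @ x + c

# Each a_j * arctan is nondecreasing and D is positive definite
# (diagonally dominant), so T is strictly monotone.
x, y = rng.normal(size=n), rng.normal(size=n)
assert np.dot(T(x) - T(y), x - y) > 0
print("monotonicity spot check passed")
```

Adding the componentwise arctan term keeps the operator monotone while making it genuinely nonlinear, which is what makes this example a harder test than the affine one.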