A Preconditioned Multisplitting and Schwarz Method for Linear Complementarity Problem

The preconditioner presented by Hadjidimos et al. (2003) can improve the convergence rate of classical iterative methods for solving linear systems. In this paper, we extend this preconditioner to linear complementarity problems whose coefficient matrix is an M-matrix or an H-matrix and present a multisplitting and Schwarz method. Convergence theorems are given. Numerical experiments show that the methods are efficient.


Introduction
Many science and engineering problems lead to linear complementarity problems (LCP): find a vector z ∈ R^n such that

z ≥ 0, Az − q ≥ 0, z⊤(Az − q) = 0, (1)

where A = [a_ij] ∈ R^{n×n} is a given matrix and q ∈ R^n is a vector. It is necessary to establish efficient algorithms for solving the complementarity problem. Numerical methods for complementarity problems fall into two major kinds, direct and iterative methods. There has been a great deal of work on the solution of the linear complementarity problem ([1-4], etc.), which presented feasible and essential techniques for the LCP. Recently, some parallel multisplitting iterative methods for solving large sparse linear complementarity problems have been presented ([5-11], etc.). These methods are based on several splittings of the system matrix A and are constructed with a suitable weighted combination of the solutions of the linear complementarity subproblems.
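For a small instance, the LCP (1) can be solved by a projected Gauss-Seidel sweep, z_i ← max(0, (q_i − Σ_{j≠i} a_ij z_j)/a_ii). The sketch below is only illustrative: the function name and the test data are our own and are not the paper's Algorithm 10.

```python
import numpy as np

def projected_gauss_seidel(A, q, tol=1e-10, max_iter=1000):
    """Solve the LCP  z >= 0, A z - q >= 0, z^T (A z - q) = 0
    by projected Gauss-Seidel sweeps (A must have positive diagonal)."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            # residual with z_i's own contribution removed
            r = q[i] - A[i] @ z + A[i, i] * z[i]
            z[i] = max(0.0, r / A[i, i])
        if np.linalg.norm(z - z_old, np.inf) < tol:
            break
    return z

# Small M-matrix example (hypothetical data for illustration).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
q = np.array([1.0, -2.0, 3.0])
z = projected_gauss_seidel(A, q)
w = A @ z - q   # complementary slack variable
```

At the computed solution, z and w are componentwise nonnegative and complementary (z_i w_i = 0 for each i), which is exactly condition (1).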
For the large sparse linear complementarity problem, some accelerated modulus-based matrix splitting iteration methods and modulus-based synchronous two-stage multisplitting iteration methods have been constructed [7, 11]. Numerical results show that these methods are efficient.
Many researchers have studied preconditioners applied to linear systems so that the corresponding iterative methods, such as Jacobi or Gauss-Seidel (GS), converge faster than the classical ones. Hadjidimos et al. [12] considered the preconditioner

P(α) = I + S(α), where S(α) = [s_ij] with s_i1 = −α_i a_i1 for i = 2, . . ., n and s_ij = 0 otherwise, (3)

and α = [0, α_2, . . ., α_i, . . ., α_n]⊤ ∈ R^n with constants α_i ≥ 0, i = 2, . . ., n. In (3), let α_i = 1, i = 2, . . ., n; then P_1(α) is the preconditioner presented by Milaszewicz [13]. It eliminates the elements of the first column of A below the diagonal. Reference [12] shows that the new modifications and improvements of the original preconditioners can improve the convergence rates of the classical iterative methods (Jacobi, GS, etc.).
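The elimination effect of the Milaszewicz case can be seen directly: with α_i = 1 and a unit-diagonal matrix, left-multiplying by P(α) zeroes the first column of A below the diagonal. The sketch below uses our own hypothetical data and helper name under that unit-diagonal assumption.

```python
import numpy as np

def preconditioner(A, alpha):
    """Build P(alpha) = I + S(alpha), where S(alpha) has entries
    s_{i1} = -alpha_i * a_{i1}, i = 2..n (sketch; assumes a_11 = 1)."""
    n = A.shape[0]
    P = np.eye(n)
    for i in range(1, n):
        P[i, 0] = -alpha[i] * A[i, 0]
    return P

# Unit-diagonal Z-matrix example (hypothetical data for illustration).
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.4,  1.0, -0.1],
              [-0.2, -0.5,  1.0]])
alpha = np.array([0.0, 1.0, 1.0])   # alpha_i = 1: Milaszewicz preconditioner
A_tilde = preconditioner(A, alpha) @ A
```

After the multiplication, the subdiagonal entries of the first column of `A_tilde` vanish, which is the elimination property described above.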
In this paper, using the multisplitting technique, we extend this preconditioner to the linear complementarity problem (1) and present a new multisplitting and Schwarz method. The new method is parallel and has high computational efficiency.
In Section 2, some preliminaries for the new method are presented. A multisplitting and Schwarz method is given in Section 3. Convergence analysis is given in Section 4. Section 5 presents the numerical experiment results.
If problem (1) has a nonzero solution, then there exists at least one index i such that q_i > 0. In this paper we assume that q_1 > 0. By Lemma 1, we have the following conclusion.
Lemma 8 (see [15]). Let A be a Z-matrix. Then A is a nonsingular M-matrix if and only if all the principal minors of A are positive.
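For small matrices, Lemma 8's criterion can be checked by brute force over all principal submatrices (the function name and test matrices below are our own illustrative choices; the enumeration is exponential in n, so this is only a sanity check, not a practical test).

```python
import itertools
import numpy as np

def all_principal_minors_positive(A, tol=1e-12):
    """Check Lemma 8's criterion: every principal minor of A is positive."""
    n = A.shape[0]
    for r in range(1, n + 1):
        for idx in itertools.combinations(range(n), r):
            sub = A[np.ix_(idx, idx)]          # principal submatrix
            if np.linalg.det(sub) <= tol:
                return False
    return True

# Classic tridiagonal M-matrix: all principal minors positive.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
ok = all_principal_minors_positive(A)

# A Z-matrix that is NOT an M-matrix (its 2x2 determinant is negative).
B = np.array([[ 1.0, -2.0],
              [-2.0,  1.0]])
bad = all_principal_minors_positive(B)
```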

Synchronous Multisplitting and Schwarz Method
By Theorem 9, ρ(L̃(α)) ≤ ρ(L̃) ≤ ρ(L) < 1, where L and L̃(α) denote the Gauss-Seidel iteration matrices associated with A and with the preconditioned matrix Ã(α) = P_1(α)A, respectively. This means that the Gauss-Seidel iterative method associated with Ã(α) will be no worse than the one corresponding to A. Similar to [6], we present a synchronous multisplitting and Schwarz algorithm corresponding to Ã(α).
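This spectral-radius comparison can be illustrated numerically. The sketch below (our own helper and hypothetical unit-diagonal diagonally dominant M-matrix, not the paper's test problem) computes the Gauss-Seidel iteration matrix T = (D − L)^{-1}U for A and for P_1(α)A with α_i = 1 and compares their spectral radii.

```python
import numpy as np

def gs_iteration_matrix(A):
    """Gauss-Seidel iteration matrix T = (D - L)^{-1} U for A = D - L - U."""
    DL = np.tril(A)          # D - L (lower triangle including diagonal)
    U = -np.triu(A, 1)       # strictly upper part, negated
    return np.linalg.solve(DL, U)

# Unit-diagonal diagonally dominant M-matrix (hypothetical data).
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.4,  1.0, -0.1],
              [-0.2, -0.5,  1.0]])

P = np.eye(3)                # Milaszewicz case alpha_i = 1
P[1, 0] = -A[1, 0]
P[2, 0] = -A[2, 0]

rho = max(abs(np.linalg.eigvals(gs_iteration_matrix(A))))
rho_pre = max(abs(np.linalg.eigvals(gs_iteration_matrix(P @ A))))
```

For this example both radii are below 1 and the preconditioned one is the smaller, consistent with the inequality above.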
Then the following lemma is obviously true.

Convergence Analysis
In this section, we give the convergence analysis of the algorithm.
Proof. The conclusion follows easily from Lemma 2, Lemma 12, and Theorem 9.
Lemma 15. Let A be a diagonally dominant M-matrix. Then Ã(α) is a diagonally dominant M-matrix with positive diagonal elements.

Proof. Note that a_ii = 1, 0 ≤ −a_i1 ≤ 1, and ∑_{j=1}^{n} a_ij ≥ 0, so Ã(α) is well defined. By the definition of Ã(α), the row sums remain nonnegative, which implies that Ã(α) is a diagonally dominant matrix; then it is an M-matrix with positive diagonal elements by Lemma 14.
Lemma 16 (see [6]). Let z* be the solution of (1), and let z^{i,k} be the solution of (14). Similar to the proof of Theorem 2.1 in [8], we have the following convergence theorem.
Theorem 17. Let A be an M-matrix. Then the sequence {z^k} generated by Algorithm 10 converges to the solution of problem (1).

Numerical Experiments
In this section, we give two numerical examples to show that the new methods are efficient. In the numerical experiments, the stopping criterion is ‖z^{k+1} − z^k‖ < 10^{−8}. In the tables, MMS denotes Algorithm 10 with the preconditioner, and GSOR denotes Algorithm 10 with relaxation parameter ω = 1.
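A minimal sketch of this experimental setup (not the paper's actual test problems: the solver, matrix, and right-hand side below are our own illustrative assumptions) compares iteration counts of a projected Gauss-Seidel solver on the original and the α_i = 1 preconditioned problem under the same stopping criterion.

```python
import numpy as np

def pgs_lcp(A, q, tol=1e-8, max_iter=500):
    """Projected Gauss-Seidel for z >= 0, A z - q >= 0, z^T(A z - q) = 0,
    stopping when ||z^{k+1} - z^k||_inf < tol; returns (z, iterations)."""
    z = np.zeros(len(q))
    for k in range(1, max_iter + 1):
        z_old = z.copy()
        for i in range(len(q)):
            r = q[i] - A[i] @ z + A[i, i] * z[i]
            z[i] = max(0.0, r / A[i, i])
        if np.max(np.abs(z - z_old)) < tol:
            return z, k
    return z, max_iter

# Unit-diagonal M-matrix with q_1 > 0 (hypothetical test data).
A = np.array([[ 1.0, -0.2, -0.3],
              [-0.4,  1.0, -0.1],
              [-0.2, -0.5,  1.0]])
q = np.array([0.5, -0.5, 0.4])

P = np.eye(3)                      # Milaszewicz preconditioner, alpha_i = 1
P[1, 0] = -A[1, 0]
P[2, 0] = -A[2, 0]

z_plain, it_plain = pgs_lcp(A, q)          # unpreconditioned
z_pre, it_pre = pgs_lcp(P @ A, P @ q)      # preconditioned
```

In this toy run the preconditioned iteration reaches the same solution in no more sweeps than the unpreconditioned one, mirroring the comparisons reported in the tables.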
For a_i1 ≠ 0, let us choose α_i = 0.5; then Ã(α) is an M-matrix. In Algorithm 10, Ã(α) = M_i − N_i may be an M-compatible splitting for each splitting. The corresponding results are shown in Tables 2 and 3.

Table 1 :
Comparison of MMS (preconditioned) and GSOR (unpreconditioned).

Table 4 :
Comparison of MMS and AMAOR.