Spectral Properties of the Iteration Matrix of the HSS Method for Saddle Point Problem



Introduction
Consider the following saddle point problem:

\[
\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} f \\ -g \end{pmatrix},
\tag{1}
\]

with symmetric positive definite A ∈ R^{n×n} and B ∈ R^{n×m} with rank(B) = m ≤ n. Without loss of generality, we assume that the coefficient matrix of (1) is nonsingular, so that (1) has a unique solution. Systems of the form (1) arise in a variety of scientific and engineering applications, such as linear elasticity, fluid dynamics, electromagnetics, and constrained quadratic programming. One can see [1] for more applications and numerical solution techniques for (1). Recently, based on the Hermitian and skew-Hermitian splitting of the coefficient matrix \mathcal{A} of (1), \mathcal{A} = H + S, where

\[
H = \begin{pmatrix} A & 0 \\ 0 & 0 \end{pmatrix},
\qquad
S = \begin{pmatrix} 0 & B \\ -B^{T} & 0 \end{pmatrix},
\tag{2}
\]

the HSS method [2] has been extended by Benzi and Golub [3] to solve the saddle point problem (1); it reads as follows.
The HSS Method. Let x^{(0)} ∈ C^{n+m} be an arbitrary initial guess. For k = 0, 1, 2, . . ., until {x^{(k)}} converges, compute

\[
\begin{aligned}
(\alpha I + H)\,x^{(k+1/2)} &= (\alpha I - S)\,x^{(k)} + b,\\
(\alpha I + S)\,x^{(k+1)} &= (\alpha I - H)\,x^{(k+1/2)} + b,
\end{aligned}
\tag{3}
\]

where b = (f; −g) is the right-hand side of (1) and α is a given positive constant.
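For concreteness, the following is a minimal NumPy sketch of the iteration (3) on a small randomly generated system; the matrices A and B, the parameter alpha, and the iteration count are illustrative assumptions, not the test problems used in the numerical experiments below.

```python
import numpy as np

# Illustrative data (assumed): a small SPD block A and a full-rank B.
rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
Acal = np.block([[A, B], [-B.T, np.zeros((m, m))]])   # coefficient matrix of (1)
b = rng.standard_normal(n + m)                        # right-hand side (f; -g)

# Hermitian / skew-Hermitian splitting (2): Acal = H + S.
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = Acal - H

alpha = 1.0
I = np.eye(n + m)
x = np.zeros(n + m)
for k in range(500):                                  # HSS sweeps (3)
    x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
    x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)

print(np.linalg.norm(Acal @ x - b))                   # residual shrinks with k
```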
In addition, if we introduce the matrices

\[
M(\alpha) = \frac{1}{2\alpha}(\alpha I + H)(\alpha I + S),
\qquad
N(\alpha) = \frac{1}{2\alpha}(\alpha I - H)(\alpha I - S),
\tag{4}
\]

then \mathcal{A} = M(α) − N(α) and

\[
T_{\alpha} = M(\alpha)^{-1} N(\alpha).
\tag{5}
\]

Therefore, one can readily verify that the HSS method is also induced by the matrix splitting \mathcal{A} = M(α) − N(α), with iteration matrix T_α.
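A quick numerical check of the splitting identity and of (5) (a sketch, reusing the same assumed random data as above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, alpha = 4, 2, 1.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
Acal = np.block([[A, B], [-B.T, np.zeros((m, m))]])
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = Acal - H
I = np.eye(n + m)

M = (alpha * I + H) @ (alpha * I + S) / (2 * alpha)   # M(alpha), cf. (4)
N = (alpha * I - H) @ (alpha * I - S) / (2 * alpha)   # N(alpha), cf. (4)
T = np.linalg.solve(M, N)                             # T_alpha = M(alpha)^{-1} N(alpha)

print(np.allclose(Acal, M - N))        # splitting Acal = M(alpha) - N(alpha)
print(max(abs(np.linalg.eigvals(T))))  # spectral radius rho(T_alpha) < 1
```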
The following theorem, established in [3], describes the convergence property of the HSS method.

Theorem 1. Assume that A ∈ R^{n×n} is symmetric positive definite, B ∈ R^{n×m} has full column rank, and α > 0. Then ρ(T_α) < 1 for all α > 0; that is, the HSS iteration (3) converges unconditionally to the unique solution of (1).

In fact, one can see [4] for a comprehensive survey on the HSS method. As is known, the iteration method (3) converges to the unique solution of the linear system (1) if and only if the spectral radius ρ(T_α) of the iteration matrix T_α is less than 1. The spectral radius of the iteration matrix is decisive for convergence and stability: the smaller it is, the faster the iteration method converges once it is below 1. In this paper, we discuss the spectral properties of the iteration matrix T_α of the HSS method for saddle point problems and derive estimates for the regions containing both the nonreal and the real eigenvalues of T_α.
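Since ρ(T_α) governs the asymptotic convergence rate, it is natural to scan it over a range of parameters; the sketch below does so for an assumed small random problem (the 1/(2α) factors of (4) cancel in M(α)^{-1}N(α) and are omitted).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B], [-B.T, np.zeros((m, m))]])
I = np.eye(n + m)

def rho(alpha):
    """Spectral radius of T_alpha = M(alpha)^{-1} N(alpha)."""
    M = (alpha * I + H) @ (alpha * I + S)
    N = (alpha * I - H) @ (alpha * I - S)
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

for alpha in (0.05, 0.2, 0.5, 1.0, 2.0, 10.0):
    print(f"alpha = {alpha:5.2f}   rho(T_alpha) = {rho(alpha):.4f}")
```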
Throughout the paper, B^T denotes the transpose of a matrix B and B^* indicates its conjugate transpose. μ_n and μ_1 ≥ 0 are the smallest and largest eigenvalues of a symmetric positive semidefinite matrix, respectively. We denote by σ_1, . . ., σ_m the decreasingly ordered singular values of B. R(λ) and I(λ), respectively, denote the real part and the imaginary part of λ ∈ C.

Main Results
In fact, the iteration matrix T_α can be written as

\[
T_{\alpha} = (\alpha I + S)^{-1}(\alpha I - H)(\alpha I + H)^{-1}(\alpha I - S).
\tag{6}
\]

Therefore, we are just concerned with the spectral properties of the matrix M(α)^{-1}N(α). That is, we consider the following eigenvalue problem:

\[
M(\alpha)^{-1} N(\alpha)\, z = \lambda z,
\tag{7}
\]

where (λ, z) is any eigenpair of M(α)^{-1}N(α). From (7), we have

\[
(\alpha I - H)(\alpha I - S)\, z = \lambda\,(\alpha I + H)(\alpha I + S)\, z.
\tag{8}
\]

Note that ρ(T_α) < 1 for all α > 0, so that λ ≠ ±1. Expanding both sides of (8) and collecting terms, we have

\[
\alpha^{2}(1-\lambda)\, z - \alpha(1+\lambda)(H+S)\, z + (1-\lambda)\, HS\, z = 0.
\tag{9}
\]

Let

\[
\theta = \alpha\,\frac{1-\lambda}{1+\lambda},
\qquad\text{equivalently}\qquad
\lambda = \frac{\alpha-\theta}{\alpha+\theta}.
\tag{10}
\]

Obviously, θ ≠ 0. Therefore, (9) can be written as

\[
\theta\,(\alpha^{2} I + HS)\, z = \alpha^{2}(H+S)\, z.
\tag{11}
\]

That is,

\[
(H+S)\, z = \theta\,\Bigl(I + \frac{1}{\alpha^{2}}\, HS\Bigr) z,
\tag{12}
\]

which, since HS = \begin{pmatrix} 0 & AB \\ 0 & 0 \end{pmatrix}, is equal to

\[
\begin{pmatrix} A & B \\ -B^{T} & 0 \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix}
= \theta
\begin{pmatrix} I & \frac{1}{\alpha^{2}}\, AB \\ 0 & I \end{pmatrix}
\begin{pmatrix} u \\ v \end{pmatrix},
\qquad
z = \begin{pmatrix} u \\ v \end{pmatrix}.
\tag{13}
\]

It is easy to see that the two eigenproblems (7) and (13) have the same eigenvectors, while the eigenvalues are related by (10). Obviously, if the spectrum of (13) can be obtained, then the spectrum of (7) can also be derived. From [5, Lemma 2.1], we have the following result.
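The relation (10) between the eigenvalues of (7) and those of the generalized eigenproblem (13) is easy to verify numerically; a sketch with assumed random data:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, m, alpha = 4, 2, 1.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B], [-B.T, np.zeros((m, m))]])
I = np.eye(n + m)

# Eigenvalues of T_alpha via (7).
M = (alpha * I + H) @ (alpha * I + S)
N = (alpha * I - H) @ (alpha * I - S)
lam_T = np.linalg.eigvals(np.linalg.solve(M, N))

# Eigenvalues theta of the pencil (13), mapped through (10).
theta = eig(H + S, I + H @ S / alpha**2, right=False)
lam_13 = (alpha - theta) / (alpha + theta)

# Every mapped eigenvalue matches an eigenvalue of T_alpha.
print(all(min(abs(l - lam_T)) < 1e-8 for l in lam_13))
```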
Lemma 2. Assume that A is symmetric positive definite and let W = A + (1/α²)ABB^T. For each eigenpair (θ, z) of (13) with z = (u; v), all the eigenvalues λ of the iteration matrix T_α are λ = (α − θ)/(α + θ), where θ ≠ 0 satisfies

\[
\theta = \frac{a \pm \sqrt{a^{2} - 4|c|^{2}}}{2},
\qquad
a = \frac{u^{*} W u}{u^{*} u},
\qquad
|c|^{2} = \frac{u^{*} B B^{T} u}{u^{*} u}.
\tag{14}
\]

Indeed, eliminating v = −(1/θ)B^T u from (13) yields W u = θ u + (1/θ) B B^T u, and multiplying by θ u^*/(u^* u) gives the quadratic θ² − aθ + |c|² = 0, whose roots are (14). From (14), it is easy to verify that 0 ≤ |c|² ≤ σ_1², and if B^T u = 0, then θ is an eigenvalue of A. In the sequel, we present the main result, Theorem 3.

Theorem 3. Under the hypotheses and notation of Lemma 2, all the eigenvalues λ of the iteration matrix T_α are such that the following bounds hold.
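As a sanity check on Lemma 2 (under the reconstruction of W and (14) given above), the script below verifies, for each eigenvector of (13) in an assumed random problem, that θ solves the quadratic θ² − aθ + |c|² = 0 and that 0 ≤ |c|² ≤ σ_1².

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n, m, alpha = 4, 2, 1.0
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B], [-B.T, np.zeros((m, m))]])
I = np.eye(n + m)

W = A + A @ B @ B.T / alpha**2                 # W = A + (1/alpha^2) A B B^T
sigma1 = np.linalg.svd(B, compute_uv=False)[0] # largest singular value of B

theta, Z = eig(H + S, I + H @ S / alpha**2)    # eigenpairs of (13)
for t, z in zip(theta, Z.T):
    u = z[:n]
    a = (u.conj() @ W @ u) / (u.conj() @ u)
    c2 = (u.conj() @ B @ B.T @ u) / (u.conj() @ u)
    ok_quad = abs(t**2 - a * t + c2) < 1e-8    # theta solves (14)
    ok_c = -1e-12 <= c2.real <= sigma1**2 + 1e-12 and abs(c2.imag) < 1e-12
    print(ok_quad, ok_c)
```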
The proof of Theorem 3 follows from Lemma 2 by simple computations on the roots (14), separating the cases of real and nonreal θ.

Numerical Experiments
In this section, we consider the following two examples to illustrate the above result.
From Tables 1 and 2, it is not difficult to find that the theoretical results are in line with the results of the numerical experiments. Further, for the 8 × 8 case, the average error in the lower bounds over 10 different values of α is 0.00112, and the average error in the upper bounds is 0.00047. For the 16 × 16 case, the average error in the lower bounds over 10 different values of α is 0.0005, and the average error in the upper bounds is 0.000091. That is, Theorem 3 provides reasonably good bounds for the eigenvalue distribution of the iteration matrix T_α of the HSS method when the iteration parameter α is taken in different regions.
In this case there are nonreal eigenvalues (except for very small α). In Table 3 we list the upper bounds given in Theorem 3 when I(λ) ≠ 0. From Table 3, it is not difficult to find that the theoretical results are in line with the results of the numerical experiments. That is, Theorem 3 provides reasonably good bounds for the eigenvalue distribution of the iteration matrix T_α with I(λ) ≠ 0 when the iteration parameter α is taken in different regions.
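The test matrices behind Tables 1–3 are not reproduced here; as a hedged illustration, the sketch below separates the real and nonreal eigenvalues of T_α for an assumed random saddle point problem and reports the kind of quantities tabulated above, for several values of α.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, m))
H = np.block([[A, np.zeros((n, m))], [np.zeros((m, n)), np.zeros((m, m))]])
S = np.block([[np.zeros((n, n)), B], [-B.T, np.zeros((m, m))]])
I = np.eye(n + m)

for alpha in (0.1, 0.5, 1.0, 5.0):
    M = (alpha * I + H) @ (alpha * I + S)
    N = (alpha * I - H) @ (alpha * I - S)
    lam = np.linalg.eigvals(np.linalg.solve(M, N))
    real = lam[abs(lam.imag) < 1e-12].real
    nonreal = lam[abs(lam.imag) >= 1e-12]
    line = f"alpha = {alpha:4.1f}: "
    if real.size:
        line += f"real eigenvalues in [{real.min():.4f}, {real.max():.4f}], "
    line += f"{nonreal.size} nonreal, max |lambda| = {abs(lam).max():.4f}"
    print(line)
```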

Table 1: The region for all the eigenvalues of T_α with I(λ) = 0 (8 × 8 case).

Table 2: The region for all the eigenvalues of T_α with I(λ) = 0 (16 × 16 case).