New Block Triangular Preconditioners for Saddle Point Linear Systems with Highly Singular (1,1) Blocks

We establish two types of block triangular preconditioners for linear saddle point problems with a singular (1,1) block. These preconditioners build on the results presented in the paper of Rees and Greif (2007). We study the spectral characteristics of the preconditioners and show that all eigenvalues of the preconditioned matrices are strongly clustered. The choice of the parameter is discussed, and we give the optimal parameter for practical use. Finally, numerical experiments are reported to illustrate the efficiency of the presented preconditioners.


Introduction
Consider the following saddle point linear system:

A [x; y] ≡ [G, B^T; B, 0] [x; y] = [b; q],  (1.1)

where G ∈ R^{n×n} is a symmetric positive semidefinite matrix with nullity dim(null(G)) = p, the matrix B ∈ R^{m×n} has full row rank, x, b ∈ R^n, y, q ∈ R^m, and x, y are the unknowns. The assumption that A is nonsingular implies that null(G) ∩ null(B) = {0}, which we use in the following analysis. Under these assumptions, the system (1.1) has a unique solution. Such systems are important and arise in many applications of scientific computing, such as constrained optimization [1, 2], the finite element method for solving the Navier-Stokes equations [3-6], fluid dynamics, constrained least squares and generalized least squares problems [7-10], and the discretized time-harmonic Maxwell equations in mixed form [11].
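To make the setting concrete, the following Python sketch (with made-up small dimensions; G, B, and the sizes n, m, p here are illustrative stand-ins, not the paper's test matrices) assembles the saddle point matrix of (1.1) with a rank-deficient (1,1) block and verifies that null(G) ∩ null(B) = {0} yields a nonsingular A:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 8, 3, 2  # hypothetical sizes; p is the nullity of G

# Symmetric positive semidefinite G with nullity p (rank n - p).
F = rng.standard_normal((n - p, n))
G = F.T @ F

# B with full row rank (a generic random matrix has this property).
B = rng.standard_normal((m, n))

# Saddle point matrix A = [[G, B^T], [B, 0]] of system (1.1).
A = np.block([[G, B.T], [B, np.zeros((m, m))]])

# Generically null(G) and null(B) intersect only in {0}, so A is
# nonsingular and the system has a unique solution; recover a known one.
sol = np.linalg.solve(A, A @ np.ones(n + m))
```

Although G itself is singular (nullity p = 2), the full matrix A is invertible because no nonzero null vector of G also lies in the null space of B.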

Preconditioners and Spectrum Analysis
For linear systems, the convergence of an applicable iterative method is determined by the distribution of the eigenvalues of the coefficient matrix. In particular, it is desirable that the number of distinct eigenvalues, or at least the number of clusters, is small, because in that case convergence will be rapid. To be more precise, if there are only a few distinct eigenvalues, then Krylov subspace methods such as BiCGStab or GMRES will terminate, in exact arithmetic, after a small and precisely defined number of steps.
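This finite-termination property is easy to check on a toy example: for a diagonalizable matrix with exactly two distinct eigenvalues, the minimal polynomial has degree 2, so the solution of Ax = b lies in the two-dimensional Krylov subspace span{b, Ab}, and GMRES would terminate in two steps. A minimal sketch (the matrix is a made-up diagonal example):

```python
import numpy as np

# Diagonalizable matrix with exactly two distinct eigenvalues (2 and 5).
A = np.diag([2.0, 2.0, 5.0, 5.0, 5.0])
b = np.ones(5)

# The minimal polynomial has degree 2, so the exact solution of Ax = b
# lies in span{b, Ab}; GMRES would therefore converge in two iterations.
K = np.column_stack([b, A @ b])          # Krylov basis of dimension 2
coef, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x = K @ coef                             # best (here: exact) Krylov solution
```

The least-squares solve over the two-dimensional Krylov space already reproduces the exact solution, which is precisely the mechanism behind GMRES's rapid convergence on clustered spectra.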
Rees and Greif [12] established the following preconditioner for the symmetric saddle point system (1.1), where t is a scalar and W is an m × m symmetric positive definite weight matrix. Similarly to M, we introduce the following preconditioners for solving symmetric saddle point systems, where t ≠ 0 is a parameter in the first and t > 0 in the second.
Theorem 2.1. The matrix M_t^{-1} A has two distinct eigenvalues, λ = 1 and λ = -1/t, with algebraic multiplicity n and p, respectively. The remaining m - p eigenvalues satisfy the relation λ = (μ - 1)/t, where μ are m - p generalized eigenvalues of the generalized eigenvalue problem G x = μ (G + B^T W^{-1} B) x.

Proof. The second block row gives y = (1/(λt)) W^{-1} B x; substituting this into the first block row equation gives (2.9). By inspection, it is straightforward to see that any vector x ∈ R^n satisfies (2.9) with λ = 1; thus the latter is an eigenvalue of M_t^{-1} A, and (x^T, ((1/t) W^{-1} B x)^T)^T is a corresponding eigenvector. We conclude that the eigenvalue λ = 1 has algebraic multiplicity n. From the nullity of G it follows that there are p linearly independent null vectors of G.
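The displayed formulas for M and the new preconditioners were lost in extraction. A block triangular form consistent with the spectral statements of Theorem 2.1 (eigenvalue λ = 1 with eigenvector (x^T, ((1/t) W^{-1} B x)^T)^T, and λ = -1/t on null(G)) is M_t = [[G + B^T W^{-1} B, (1 - t) B^T], [0, t W]]; this is a reconstruction, not necessarily the paper's exact definition. The Python sketch below builds a small random test problem with W = I and checks the claimed clustering numerically under this assumed form:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 8, 4, 2  # hypothetical sizes; p is the nullity of G

F = rng.standard_normal((n - p, n))
G = F.T @ F                                  # SPSD with nullity p
B = rng.standard_normal((m, n))              # full row rank
W = np.eye(m)                                # weight matrix W = I

A = np.block([[G, B.T], [B, np.zeros((m, m))]])

t = -2.0
S = B.T @ np.linalg.solve(W, B)              # S = B^T W^{-1} B
# Assumed block triangular preconditioner, reconstructed to be
# consistent with Theorem 2.1 (an assumption, not the paper's formula):
M = np.block([[G + S, (1 - t) * B.T],
              [np.zeros((m, n)), t * W]])

eig = np.linalg.eigvals(np.linalg.solve(M, A))
ones = np.isclose(eig, 1.0, atol=1e-6)       # lambda = 1, multiplicity n
rest = eig[~ones]
# Expected: -1/t = 0.5 with multiplicity p, and the remaining m - p
# eigenvalues real inside (0, -1/t) since t < 0.
```

Under this assumed form, t = 1 makes the (1,2) block vanish, so M_1 reduces to a block diagonal augmentation preconditioner of Rees-Greif type, and t = -1 merges both clusters at λ = 1 as in Corollary 2.2.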
When the parameter t = -1, we easily obtain the following corollary from Theorem 2.1.

Corollary 2.2. Let t = -1. Then the matrix M_t^{-1} A has only one eigenvalue, λ = 1, with algebraic multiplicity n + p. The remaining m - p eigenvalues satisfy the relation λ = 1 - μ, where μ are m - p generalized eigenvalues of the following generalized eigenvalue problem:

G x = μ (G + B^T W^{-1} B) x.  (2.13)
Theorem 2.3. The matrix M_t^{-1} A has two distinct eigenvalues, λ = 1 and λ = -1/t, with algebraic multiplicity n and p, respectively. The remaining m - p eigenvalues lie in the interval (0, -1/t) if t < 0, or (-1/t, 0) if t > 0.

Proof. According to Theorem 2.1, the matrix M_t^{-1} A has two distinct eigenvalues, λ = 1 and λ = -1/t, with algebraic multiplicity n and p, respectively. From (2.9), we obtain that the remaining m - p eigenvalues satisfy (2.17), where ⟨·, ·⟩ is the standard Euclidean inner product, x ∉ null(G), and x ∉ null(B). Evidently, we have 0 < μ < 1. The expression (2.17) gives an explicit formula for the eigenvalues in terms of the generalized eigenvalues μ and can be used to identify the intervals in which they lie. It follows that the remaining m - p eigenvalues lie in the interval (0, -1/t) when t < 0, or (-1/t, 0) when t > 0.
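The display equation (2.17) and the associated generalized eigenvalue problem did not survive extraction. A reconstruction consistent with all the stated facts (0 < μ < 1, the intervals (0, -1/t) for t < 0 and (-1/t, 0) for t > 0, and Corollary 2.4's interval (0, 1) at t = -1) is the following; it should be read as a plausible reading rather than the paper's original display:

```latex
% Generalized eigenvalue problem for the remaining m - p eigenvalues:
G x = \mu \,\bigl( G + B^{T} W^{-1} B \bigr)\, x,
\qquad x \notin \operatorname{null}(G), \quad x \notin \operatorname{null}(B),
\qquad 0 < \mu < 1,
% and the corresponding eigenvalues of M_t^{-1} A:
\lambda \;=\; \frac{\mu - 1}{t}
\;\in\;
\begin{cases}
(0,\, -1/t), & t < 0,\\[2pt]
(-1/t,\, 0), & t > 0.
\end{cases}
```

Setting t = -1 gives λ = 1 - μ ∈ (0, 1), which matches Corollary 2.4 below.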
When the parameter t = -1, we easily obtain the following corollary from Theorem 2.3.
Corollary 2.4. Let t = -1. Then the matrix M_t^{-1} A has only one eigenvalue, λ = 1, with algebraic multiplicity n + p. The remaining m - p eigenvalues lie in the interval (0, 1).
Theorem 2.5. The matrix M_t^{-1} A has two distinct eigenvalues, with algebraic multiplicity n and p, respectively. The remaining m - p eigenvalues satisfy the relation, where μ are m - p generalized eigenvalues of the following generalized eigenvalue problem:

Let {z_i}, i = 1, ..., n - m, be a basis of the null space of B, and let {u_i}, i = 1, ..., p, be a basis of the null space of G. The remaining m - p eigenvalues satisfy the relation, where μ are m - p generalized eigenvalues of the following generalized eigenvalue problem:

Theorem 2.7. The matrix M_t^{-1} A has two distinct eigenvalues, with algebraic multiplicity n and p, respectively. The remaining m - p eigenvalues lie in the interval

Proof. The proof is similar to the proof of Theorem 2.3.
When the parameter t = 2, we easily obtain the following corollary from Theorem 2.7.
Corollary 2.8. Let t = 2. Then the matrix M_t^{-1} A has only one eigenvalue, λ = 1, with algebraic multiplicity n + p. The remaining m - p eigenvalues lie in the interval (0, 1).

Remark 2.9. The above theorems and corollaries illustrate the strong spectral clustering when the (1,1) block of A is singular. A well-known difficulty is that the (1,1) block becomes increasingly ill conditioned as the solution is approached. Our claim is that the preconditioners remain robust even as the problem becomes more ill conditioned; in fact, the outer iteration count decreases. On the other hand, solving with the augmented (1,1) block may be computationally more difficult and requires effective approaches such as inexact solvers. In Section 3 we indeed consider inexact solvers in the numerical experiments.
Remark 2.10. It is clear from Theorems 2.1 and 2.5 and Corollaries 2.2 and 2.6 that our preconditioners are suitable for symmetric saddle point systems, and from Theorems 2.3 and 2.7 and Corollaries 2.4 and 2.8 that our preconditioners are more effective than the preconditioner of [12].

Remark 2.11. Similarly, the above results can be extended to nonsymmetric saddle point linear systems.

Numerical Experiments
All the numerical experiments were performed with MATLAB 7.0 on a PC with an Intel(R) Core(TM) 2 CPU T7200 (2.0 GHz) and 1024 MB of RAM. The stopping criterion is ‖r_k‖_2 / ‖r_0‖_2 ≤ 10^{-6}, where r_k is the residual vector after the k-th iteration. The right-hand side vectors b and q are taken such that the exact solutions x and y both have all components equal to 1. The initial guess is the zero vector. We use preconditioned GMRES(10) to solve the saddle point linear systems. Our numerical experiments are similar to those in [16]. We consider the matrices taken from [17], with notation slightly changed.
We construct the saddle point-type matrix A_1 by modifying a matrix A of the following form, where G is the (1,1) block given there and the matrix B, with the same sparsity as [B_u, B_v], satisfies rank(B^T) = rank(B) = m. From the matrix A in (3.2) we construct the following saddle point-type matrix, where G_1 is obtained from G by setting its first m/4 rows and columns to zero. Note that G_1 is positive semidefinite and its nullity is m/4. In our numerical experiments, the matrix W in the augmentation block preconditioners is taken as W = I_m. In the implementation of our augmentation block preconditioners, we need the operation (G_1 + B^T B)^{-1} u for a given vector u or, equivalently, we need to solve the equation (G_1 + B^T B) v = u, for which we use an incomplete LU factorization G_1 + B^T B = LU + R with drop tolerance τ. In the tables, m(n) denotes the number of outer (inner) iterations, and Time(t) is the corresponding computing time in seconds when the parameter is t.
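The inexact inner solve can be sketched in Python/SciPy (a stand-in for the MATLAB setup: the dimensions and the random G_1 and B below are hypothetical, and W = I as in the experiments): the augmented block G_1 + B^T B is factored by an incomplete LU with drop tolerance τ, and the factors precondition an inner GMRES for (G_1 + B^T B) v = u.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(2)
n, m = 20, 8          # hypothetical sizes

# Stand-ins for G_1 (SPSD with first m/4 rows/columns zeroed) and B.
F = rng.standard_normal((n, n))
G1 = F.T @ F
k = m // 4
G1[:k, :] = 0.0
G1[:, :k] = 0.0
B = rng.standard_normal((m, n))

C = sp.csc_matrix(G1 + B.T @ B)     # augmented (1,1) block G_1 + B^T B

# Incomplete LU with drop tolerance tau: C = LU + R.
tau = 1e-3
ilu = spla.spilu(C, drop_tol=tau)
M_op = spla.LinearOperator((n, n), ilu.solve)

# Inner solve (G_1 + B^T B) v = u with ILU-preconditioned GMRES.
u = rng.standard_normal(n)
v, info = spla.gmres(C, u, M=M_op, restart=10)
```

Decreasing τ makes the ILU factors denser and more accurate, so the inner iteration count drops while the factorization cost grows, which is exactly the trade-off reported in observation (iv) below.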
In the following, we summarize the observations from Tables 1, 2, 3, 4, 5, 6, and 7 and Figures 1, 2, and 3.
(i) From Tables 2-4, we find that our preconditioners are more efficient than those of [12] in both the number of iterations and the iteration time, especially in the case of the optimal parameter.
(ii) The number and time of iterations with the preconditioner M_{-1} are smaller than those with the preconditioners M_1 and M_2. In fact, M_1 is a block diagonal preconditioner.
(iii) The number and time of iterations with the preconditioner M_t are smallest when t = 2.
(iv) The number of iterations decreases, but the computational cost of the incomplete LU factorization increases, as τ decreases. Therefore, we should not use the preconditioners with very small τ in practice.
(v) The eigenvalues of M_t^{-1} A_1 are strongly clustered. Furthermore, the eigenvalues of M_{-1}^{-1} A_1 are positive.

Conclusion
We have proposed two types of block triangular preconditioners for linear saddle point problems with a singular (1,1) block. The preconditioners have the attractive property of improved eigenvalue clustering as the (1,1) block becomes increasingly ill conditioned. The choice of the parameter is discussed and, according to Corollaries 2.2, 2.4, 2.6, and 2.8, we give the optimal parameter for practical use. Numerical experiments are reported to illustrate the efficiency of the presented preconditioners.
In fact, our methodology extends to the nonsymmetric case, that is, to saddle point linear systems in which the (2,1) block is not the transpose of the (1,2) block.

Figure 1: Convergence curves and total numbers of inner GMRES(10) iterations for different t when h = 1/16.

Figure 2: Convergence curves and total numbers of inner GMRES(10) iterations for different t when h = 1/24.
Next, we consider the remaining m - p eigenvalues. Suppose λ ≠ 1 and λ ≠ -1/t. From (2.9) we obtain the stated relation. Let t = 2. Then the matrix M_t^{-1} A has one eigenvalue, λ = 1, with algebraic multiplicity n + p. Proof. The proof is similar to the proof of Theorem 2.1. When the parameter t = 2, we easily obtain the following corollary from Theorem 2.5.

Table 1: Values of n and m, and the order of A_1.

Table 2: Number and time of iterations of GMRES(10) with preconditioners M_t and M for different drop tolerances τ and t when h = 1/16. Results for the preconditioner M are given in parentheses.

Table 3: Number and time of iterations of GMRES(10) with preconditioners M_t and M for different drop tolerances τ and t when h = 1/24. Results for the preconditioner M are given in parentheses.

Table 4: Number and time of iterations of GMRES(10) with preconditioners M_t and M for different drop tolerances τ and t when h = 1/32. Results for the preconditioner M are given in parentheses.

Table 5: Number and time of iterations of GMRES(10) with preconditioner M_t for different drop tolerances τ and t when h = 1/16.

Table 6: Number and time of iterations of GMRES(10) with preconditioner M_t for different drop tolerances τ and t when h = 1/24.

Table 7: Number and time of iterations of GMRES(10) with preconditioner M_t for different drop tolerances τ and t when h = 1/32.