A Banded Preconditioning Iteration Method for Time-Space Fractional Advection-Diffusion Equations

In this paper, we concentrate on efficient solvers for time-space fractional advection-diffusion equations. Firstly, an implicit finite difference scheme, with the shifted Grünwald-Letnikov approximation for the spatial fractional derivative and the unshifted Grünwald-Letnikov approximation for the time fractional derivative, is employed to discretize the time-space fractional advection-diffusion equations. The discretization results in a series of large dense linear systems. Then, a banded preconditioner is proposed and some theoretical properties of the preconditioning matrix are studied. Numerical implementations show that the banded preconditioner leads to satisfactory experimental results when an appropriate bandwidth is chosen in the preconditioner.


Introduction
Consider the following initial-boundary value problem of time-space fractional advection-diffusion equations (TS-FADE): where f(x, t) is the source and sink term, the coefficients of anomalous diffusion satisfy d_+(x) ≥ d_{+,min} > 0 and d_−(x) ≥ d_{−,min} > 0, the real constant ν denotes the drift of the process, that is, the mean advection velocity, and α ∈ (1, 2) and γ ∈ (0, 1) are the differentiation orders of the spatial and time fractional derivatives, respectively.
The TS-FADE of form (1) arises in a variety of research areas such as modeling chaotic dynamics of classical conservative systems [4], turbulent flow [5], groundwater contaminant transport [6], and applications in finance [7], image processing [8], physics [9], biological systems [10], and so on. Though many analytic approaches, such as the Fourier transform method, the Mellin transform method, and the Laplace transform method, have been used to seek closed-form solutions [11], few analytical closed-form solutions are available for fractional differential equations (FDEs). Hence, it is important to find efficient methods to solve fractional differential equations.
Traditional methods for solving FDEs incur a computational cost of O(N^3) and a storage requirement of O(N^2) as a result of full coefficient matrices, where N is the number of grid points. However, when we use the shifted Grünwald-Letnikov discretization scheme proposed by Meerschaert and Tadjeran [12] to approximate the FDE, we obtain a special Toeplitz-like discretized coefficient matrix [13, 14]. Therefore, according to [13], the storage requirement is O(N) instead of O(N^2), and a matrix-vector multiplication requires only O(N log N) operations via the fast Fourier transform (FFT). By making use of this advantage, K. Wang and H. Wang [14] showed that the conjugate gradient normal residual (CGNR) method, which has a computational cost of O(N log^2 N), is very efficient when the diffusion coefficients are very small; that is, the discretized systems are well-conditioned. However, when the diffusion coefficients are not small, the discretized diffusion coefficient matrices become ill-conditioned. Therefore, preconditioning techniques must be used to improve the efficiency of the CGNR method. Many references have proposed efficient methods. For example, Pang and Sun [15] proposed the multigrid (MG) method to solve the system. Lei and Sun [16] used the circulant preconditioned CGNR method, which is proved to be more efficient than the MG method. After that, Qu et al. [17] introduced a circulant and skew-circulant splitting method to solve the fractional diffusion equations. For more accuracy analysis of the discretization of fractional differential equations, we refer to [18-20]. Some efficient algorithms for the discretized linear systems arising from space fractional differential equations can be found in [16, 21-24] and elsewhere.
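The O(N log N) Toeplitz matrix-vector product mentioned above relies on embedding the Toeplitz matrix in a circulant matrix of twice the size, which the FFT diagonalizes. Below is a minimal, self-contained sketch in Python (NumPy); the matrix here is a random stand-in, not the discretization matrix of this paper.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an N x N Toeplitz matrix by x in O(N log N) operations.

    c is the first column, r the first row (with r[0] == c[0]).  The matrix
    is embedded in a circulant of size 2N, whose eigenvalues come from one
    FFT, so the product costs three FFTs instead of an O(N^2) dense product.
    """
    n = len(x)
    # First column of the embedding circulant: [c, 0, r[n-1], ..., r[1]].
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Sanity check against the dense product on a small random Toeplitz matrix.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
r = np.concatenate([c[:1], rng.standard_normal(n - 1)])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)
err = np.max(np.abs(toeplitz_matvec(c, r, x) - T @ x))
```

Only the first column and first row need to be stored, which is the O(N) storage claim made above.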
In this paper, we first discretize TS-FADE (1) by using the unshifted Grünwald-Letnikov approximation for the Caputo time fractional derivative and the shifted Grünwald-Letnikov approximation for the spatial fractional derivative. This discretization leads to a system with a Toeplitz-like coefficient matrix. Following the strategy proposed in [21], we propose a banded preconditioning technique for solving the discretized system. Some theoretical results about the preconditioning matrix are presented. Numerical experiments are also given to verify the effectiveness of the new preconditioning technique.
The remainder of this paper is organized as follows. In Section 2, we give a detailed description of the discretization of TS-FADE (1). In Section 3, a banded preconditioner is constructed. Then some properties of the discretized system (16) are given and some spectral properties of the preconditioning matrix are studied. Numerical experiments are presented in Section 4 to illustrate the efficiency of the banded preconditioning technique for solving the TS-FADE. Finally, in Section 5, we draw some conclusions to end this paper.
Then we can rewrite the linear system (13) in a matrix equation form with A defined in (14) being a sum of dense diagonal-times-Toeplitz matrices and B being an upper triangular matrix.
Note that A is a sum of diagonal-times-Toeplitz matrices, B is an upper triangular matrix, the right-hand side matrix in (16) is not of low rank, and B is usually of much smaller size than A. Then the first column of U can be obtained by solving the shifted system (A + B_{11} I)u^(1) = f^(1). All subsequent columns of U may be obtained by substitution, where B = (B_{ij}); that is, B_{ij} denotes the (i, j)th entry of B.
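The column-by-column substitution described above can be sketched as follows. Since B is upper triangular, column j of AU + UB = F only involves columns 1, ..., j of U, so each column costs one shifted solve with A. The matrices below are small random stand-ins, not the discretization matrices of (16).

```python
import numpy as np

def solve_sylvester_upper(A, B, F):
    """Solve AU + UB = F column by column when B is upper triangular.

    Column j of the equation reads
        (A + B[j, j] I) u_j = f_j - sum_{i<j} B[i, j] u_i,
    so each column needs one shifted solve with A plus a short substitution.
    """
    n, m = F.shape
    I = np.eye(n)
    U = np.zeros((n, m))
    for j in range(m):
        rhs = F[:, j] - U[:, :j] @ B[:j, j]
        U[:, j] = np.linalg.solve(A + B[j, j] * I, rhs)
    return U

# Small random stand-ins (NOT the matrices of the paper).
rng = np.random.default_rng(1)
n, m = 6, 4
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # diagonal shift to avoid accidental singularity
B = np.triu(rng.random((m, m)) + 0.5)               # upper triangular, positive diagonal
F = rng.standard_normal((n, m))
U = solve_sylvester_upper(A, B, F)
residual = np.max(np.abs(A @ U + U @ B - F))
```

When all diagonal entries of B coincide, as in the paper (B_{jj} ≡ 1), every shifted matrix A + B_{jj} I is the same, so a single factorization or preconditioner can be reused for all columns.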
Since B_{jj} ≡ g_0^(γ) = 1 for j = 1, 2, ..., M, we have to solve these systems with the same coefficient matrix A + I, where D_+ and D_− are positive diagonal matrices, I is the identity matrix, and T_α and T_α^T are described previously.

Banded Preconditioning Iteration Method
where σ = ν/(2h) < 0 and the matrices D_+, D_−, S, and T_α are defined in Section 2. It can easily be seen that S is a skew-symmetric matrix.
Obviously, the main work for solving the discretized system (17) concentrates on solving the systems with coefficient matrix A for j = 1, 2, ..., M. Since A is a nonsymmetric matrix, we can use Krylov subspace methods [25], such as the generalized minimal residual (GMRES) method, to solve the linear systems (17). Preconditioning techniques are often employed to improve the performance and reliability of Krylov subspace methods. Denote by T_{α,K} the truncation of T_α to its central diagonals, and define a banded preconditioner P_K accordingly. It can easily be seen that T_{α,K} is a banded Toeplitz matrix with bandwidth K + 1 containing the central diagonals of T_α. Therefore, P_K is a banded matrix with bandwidth 2K − 1.
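As an illustration, a sketch of assembling such a banded preconditioner and applying its inverse via a sparse LU factorization is given below. The symbols D_+, D_−, T_{α,K} follow the notation assumed in the text; the sign convention, the truncation rule |i − j| ≤ K, and the absorption of the h^{−α} and time-step scalings into the diagonal coefficients are simplifying assumptions of this sketch.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import toeplitz
from scipy.sparse.linalg import splu

def grunwald_coeffs(alpha, n):
    """g_k^(alpha) via the recurrence g_0 = 1, g_k = (1 - (alpha+1)/k) g_{k-1}."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def banded_preconditioner(alpha, d_plus, d_minus, K):
    """Sketch of P_K = I + D_+ T_{alpha,K} + D_- T_{alpha,K}^T.

    T_{alpha,K} keeps only the diagonals |i - j| <= K of the shifted
    Grünwald-Letnikov Toeplitz matrix; the negation below makes the diagonal
    +alpha (a sign convention assumed here, with scalings absorbed into
    d_plus and d_minus).
    """
    n = len(d_plus)
    g = grunwald_coeffs(alpha, n + 1)
    row = np.zeros(n)
    row[0], row[1] = g[1], g[0]
    T = -toeplitz(g[1:n + 1], row)          # lower Hessenberg, diagonal +alpha
    band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= K
    TK = sp.csc_matrix(np.where(band, T, 0.0))
    PK = sp.identity(n, format="csc") + sp.diags(d_plus) @ TK + sp.diags(d_minus) @ TK.T
    PK = sp.csc_matrix(PK)
    return PK, splu(PK)                     # reusable factorization of P_K

n, alpha, K = 64, 1.8, 6
rng = np.random.default_rng(7)
PK, lu = banded_preconditioner(alpha, 1 + rng.random(n), 1 + rng.random(n), K)
v = rng.standard_normal(n)
w = lu.solve(v)                             # one application of P_K^{-1}
res = np.max(np.abs(PK @ w - v))
```

Because P_K is banded, its LU factorization costs O(K^2 N) rather than O(N^3), and the factorization is computed once and reused at every Krylov iteration and every column of U.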
To study the properties of the preconditioner P_K, we first give the following lemmas.
Lemma 1 (see [12]). Let 1 < α ≤ 2 and g_k^(α) be defined in (6). Then g_0^(α) = 1, g_1^(α) = −α < 0, 1 ≥ g_2^(α) ≥ g_3^(α) ≥ ... ≥ 0, the full series of the coefficients sums to zero, and every partial sum with at least two terms is negative. Hence, the Gershgorin circles of T_{α,K} are centered at −h^{−α} g_1^(α) = αh^{−α}. Moreover, according to Lemma 1, the largest radius r_max is at most αh^{−α}. It then follows that the real part of the spectrum of T_{α,K} is contained in the open interval (0, 2αh^{−α}).
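The coefficient properties stated in Lemma 1 can be checked numerically. The recurrence g_k = (1 − (α+1)/k) g_{k−1} below is the standard way to generate the Grünwald-Letnikov coefficients; this is a verification sketch, not part of the paper's proofs.

```python
import numpy as np

def grunwald_coeffs(alpha, n):
    """g_k^(alpha) via g_0 = 1, g_k = (1 - (alpha+1)/k) g_{k-1}."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

alpha = 1.5
g = grunwald_coeffs(alpha, 10_000)
props = {
    "g0": g[0],                                            # equals 1
    "g1": g[1],                                            # equals -alpha
    "tail_nonneg": bool(np.all(g[2:] >= 0)),               # g_2 >= g_3 >= ... >= 0
    "tail_decreasing": bool(np.all(np.diff(g[2:]) <= 0)),
    "partial_sums_negative": bool(np.all(np.cumsum(g)[1:] < 0)),
    "total_sum": float(np.sum(g)),                         # tends to 0 as n grows
}
```

The negative partial sums are exactly what bounds the Gershgorin radii in the argument above: the off-diagonal row sums never exceed the magnitude of the diagonal entry.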
Next, we concentrate on the uniqueness of the solution of the matrix equation (16).

Theorem 3. Let W := h^{−α} · (T_α + (T_α)^T), where T_α is defined as in (11). Then W is a symmetric positive definite matrix.
Because W is a real symmetric matrix, its eigenvalues are all real; by Lemma 1 and Gershgorin's circle theorem, they are also all positive. So W is symmetric positive definite.

Theorem 4. Suppose that F defined in (16) satisfies F ≠ 0. Then the Sylvester equation (16) has exactly one solution.
Proof. From [26, 27], we know that when F ≠ 0, the continuous Sylvester equation (16) has a unique solution if A and −B have no common eigenvalue. It is easy to see that B is an upper triangular matrix and its eigenvalues are all positive. Note that the symmetric part of A can be expressed through the matrix W of Theorem 3. According to Theorem 3, if the real parts of the eigenvalues of T_α are positive, then the real parts of all eigenvalues of D_+ T_α + D_−(T_α)^T are also positive. Furthermore, the matrices H_1 and H_2 are both symmetric positive definite. Therefore, A + A^T is symmetric positive definite, and the real parts of the eigenvalues of A are positive. Consequently, the matrices A and −B have no common eigenvalue.

Theorem 5. For 2 ≤ K ≤ N, the preconditioner P_K defined in (20) is nonsingular.
Proof. Since D_+ is a positive diagonal matrix and, by Lemma 2, the real part of the spectrum of T_{α,K} is positive, the real part of the spectrum of D_+ T_{α,K} is also positive. Using the same strategy, we obtain that the real part of the spectrum of D_−(T_{α,K})^T is also positive. Assume that P_K is a singular matrix; then there exists a nonzero vector x such that P_K x = 0. Multiplying this equation on the left by x*/(x*x), we get (26). As S is a skew-symmetric matrix and σ < 0 is a real number, −σ(x*Sx)/(x*x) is purely imaginary. Because the real part of x*(D_+ T_{α,K} + D_−(T_{α,K})^T)x/(x*x) is positive, the real part of x*(I + D_+ T_{α,K} + D_−(T_{α,K})^T)x/(x*x) is also positive. According to (26), this yields a contradiction, because the real part of the right-hand side of (26) is zero. That is, P_K x = 0 implies x = 0; hence the matrix P_K is nonsingular.

Theorem 6. For 2 ≤ K ≤ N and 1 < α ≤ 2, the relative difference between A and P_K decreases as K increases, which implies that P_K can be an efficient preconditioner for the coefficient matrix A as K increases.
Proof. The proof is similar to that in [13].
At the end of this section, we summarize the main steps for solving the discretized system (13).
Step 1. Compute the matrices A, B, and F in (16).
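The overall procedure (assemble A, form the banded preconditioner P_K, and solve each shifted system with preconditioned GMRES) might be sketched as follows. The sizes, coefficients, sign convention, and absorbed scalings here are illustrative assumptions rather than the paper's test problems, and SciPy's GMRES stands in for whichever Krylov implementation is used.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, gmres, splu

def grunwald_coeffs(alpha, n):
    """g_k^(alpha) via g_0 = 1, g_k = (1 - (alpha+1)/k) g_{k-1}."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

def gl_toeplitz(alpha, n):
    """Shifted Grünwald-Letnikov Toeplitz matrix, T[i, j] = g_{i-j+1}."""
    g = grunwald_coeffs(alpha, n + 1)
    row = np.zeros(n)
    row[0], row[1] = g[1], g[0]
    return toeplitz(g[1:n + 1], row)

# Hypothetical sizes/coefficients; h^{-alpha} and time-step factors are
# absorbed into d_plus, d_minus for simplicity (an assumption of this sketch).
n, alpha, K = 128, 1.5, 8
rng = np.random.default_rng(3)
d_plus = 1.0 + rng.random(n)
d_minus = 1.0 + rng.random(n)

# Coefficient matrix, negated so its diagonal is +alpha (diagonally dominant).
T = -gl_toeplitz(alpha, n)
A = np.eye(n) + np.diag(d_plus) @ T + np.diag(d_minus) @ T.T

# Banded preconditioner from the central diagonals |i - j| <= K of T.
band = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= K
TK = np.where(band, T, 0.0)
PK = np.eye(n) + np.diag(d_plus) @ TK + np.diag(d_minus) @ TK.T
lu = splu(sp.csc_matrix(PK))
M = LinearOperator((n, n), matvec=lu.solve)   # action of P_K^{-1}

# One shifted system solved with preconditioned GMRES.
b = rng.standard_normal(n)
x, info = gmres(A, b, M=M)
rel_res = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

The factorization of P_K is computed once and reused for every right-hand side, which is where the method gains over refactoring or unpreconditioned iteration at each time step.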

Table 1: Numerical results of the tested methods for γ = 0.4 and N = 2^4. All runs at each time step are started from the zero initial guess and terminated when the current iterate satisfies ERR ≤ 10^{-7} or the elapsed CPU time exceeds 2000 seconds.

Table 2: Numerical results of the tested methods for γ = 0.8 and N = 2^4.

Theoretical properties of the preconditioning matrix are studied in detail. Numerical implementations show that the banded preconditioner leads to satisfactory experimental results when an appropriate bandwidth is chosen in the preconditioner.