A Numerical Solution Using an Adaptively Preconditioned Lanczos Method for a Class of Linear Systems Related with the Fractional Poisson Equation

This study considers the solution of a class of linear systems related with the fractional Poisson equation (FPE) (−∇^2)^{α/2} ϕ = g(x, y) with nonhomogeneous boundary conditions on a bounded domain. A numerical approximation to the FPE is derived using a matrix representation of the Laplacian to generate a linear system of equations with its matrix A raised to the fractional power α/2. The solution of the linear system then requires the action of the matrix function f(A) = A^{−α/2} on a vector b. For large, sparse, and symmetric positive definite matrices, the Lanczos approximation generates f(A)b ≈ β_0 V_m f(T_m) e_1. This method works well when both the analytic grade of A with respect to b and the residual for the linear system are sufficiently small. Memory constraints often require restarting the Lanczos decomposition; however, this is not straightforward in the context of matrix function approximation. In this paper, we use the ideas of thick restart and adaptive preconditioning for solving linear systems to improve the convergence of the Lanczos approximation. We give an error bound for the new method and illustrate its role in solving the FPE. Numerical results are provided to gauge the performance of the proposed method relative to exact analytic solutions.


Introduction
In recent times, the study of the fractional calculus and its applications in science and engineering has escalated [1–3]. The majority of papers dedicated to this topic discuss fractional kinetic equations of diffusion, diffusion-advection, and Fokker-Planck type to describe transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns [2, 3]. These papers provide comprehensive reviews of fractional/anomalous diffusion and an extensive collection of examples from a variety of application areas. A particular case of interest is the motion of solutes through aquifers discussed by Benson et al. [4, 5].

We choose such a simple region so that an analytic solution can be found, which can be used subsequently to verify our numerical approach. Note also that this system captures type I boundary conditions (k_i = 0, h_i = 1, i = 1, . . . , 4) and type II boundary conditions (h_i = 0, k_i = 1, i = 1, . . . , 4). The latter case has to be analysed separately with care, since 0 is then an eigenvalue that introduces singularities.
The use of our matrix transfer technique leads to the matrix representation (1.4) of the FPE (1.2), which requires that a matrix function equation be solved. Note that in (1.4), A ∈ R^{n×n} denotes the matrix representation of the Laplacian operator obtained using any of the well-documented methods: the finite difference method, the finite volume method, or variational methods such as the Galerkin method using finite elements or wavelets; and b = b_1 + A^{α/2−1} b_2, with b_1 ∈ R^n a vector containing the discrete values of the source/sink term and b_2 ∈ R^n a vector that contains all of the discrete boundary condition information. We assume further that both the discretisation process and the implementation of the boundary conditions have been carried out to ensure that A is symmetric positive definite, that is, A ∈ SPD.
The general solution of (1.4) can be written as Φ = A^{−α/2} b_1 + A^{−1} b_2, (1.5) and one notes the need to determine both the action of the matrix function f(A) = A^{−α/2} on the vector b_1 and the action of the standard inverse on b_2, where the matrix A can be large and sparse.
In the case where α = 2, numerous authors have proposed efficient methods to deal directly with (1.5) using Krylov subspace methods, and in particular the preconditioned generalised minimum residual (GMRES) iterative method; see, e.g., the texts by Golub and Van Loan [18], Saad [19], and van der Vorst [20]. In this paper, we investigate the use of Krylov subspace methods for computing an approximate solution for a range of values 0 < α < 2 and indicate how the spectral information gathered from first solving AΦ_2 = b_2 can be recycled to obtain the complete solution Φ = Φ_1 + Φ_2 in (1.5), where Φ_1 = A^{−α/2} b_1 and Φ_2 = A^{−1} b_2. In the literature, the majority of references deal with the extraction of an approximation to f(A)v for a scalar analytic function f(t): D ⊂ C → C using Krylov subspace methods, in which f(A)v ≈ β_0 V_m f(T_m) e_1, where A V_m = V_m T_m + β_m v_{m+1} e_m^T is the usual Lanczos decomposition and the columns of V_m form an orthonormal basis for the Krylov subspace K_m(A, v) = span{v, Av, . . . , A^{m−1} v}. However, as noted by Eiermann and Ernst [24], all basis vectors must be stored to form this approximation, which may prove costly for large matrices. Restarting the process is by no means as straightforward as for the case f(t) = 1/t, and the restarted Arnoldi algorithm for computing f(A)v given in [24] addresses this issue. Another issue worth pointing out is that although preconditioning linear systems is now well understood, and numerous preconditioning strategies exist to accelerate the convergence of many iterative solvers based on Krylov subspace methods [19], preconditioning in many cases cannot be applied to f(A)v. For example, if AM = B, one can only deduce f(A) from f(B) in a limited number of special cases for f(t).
In previous work by the authors [27], we proposed a spectral splitting method, where QQ^T is an orthogonal projector onto the invariant subspace associated with a set of eigenvalues on the "singular part" of the spectrum σ(A) with respect to f(t), and I − QQ^T is an orthogonal projector onto the "regular part" of the spectrum. We refer to that part of the spectral interval where the function to be evaluated changes rapidly, with large values of its derivatives, as the singular part (see [27] for more details). The splitting was chosen in such a way that p_m(t) was a low-degree polynomial (of degree at most 5). Thick restarting was used to construct the projector QQ^T on the singular part. Unfortunately, the computational overhead associated with constructing the projector QQ^T, whilst maintaining the requirement of a low-degree polynomial approximation for f(t) over the regular part, limits the application of the splitting method to a class of SPD matrices that have fairly compact spectra. The method appeared to work well for applications in statistics [27, 28]. In this paper, we build upon the splitting method idea in the manner outlined as follows to approximate f(A)v for the monotone decreasing function f(t) = t^{−q}.
(1) Determine an approximately invariant subspace (AIS), span{q_1, . . . , q_k}, for the set of eigenvectors associated with the singular part of σ(A) with respect to f(t). Form Q_k = [q_1, . . . , q_k] and set Λ_k = diag{λ_1, λ_2, . . . , λ_k}, where the λ_i are the eigenvalues associated with the eigenvectors q_i, i = 1, . . . , k. The thick-restarted Lanczos method discussed in [27, 29] or [30] can be used for the AIS generation.

To avoid components of any eigenvectors associated with the singular part reappearing in K(A, v), we show how this splitting strategy can be embedded in an adaptively constructed preconditioning of the matrix function. The paper is organised as follows. In Section 2, we use the MTT to formulate the matrix representation of the FPE to accommodate nonhomogeneous boundary conditions. We also consider the approximation of the matrix function f(A)v = A^{−q} v using the Lanczos method with thick restart and adaptive preconditioning. In Section 3, we give an upper bound on the error, cast in terms of the linear system residual. In Section 4, we derive an analytic solution to the fractional Poisson equation using the spectral representation of the Laplacian, and in Section 5, we give the results of our algorithm when applied to two different problems, which highlight the importance of using our adaptively preconditioned Lanczos method. In Section 6, we give the conclusions of our work and hint at future research directions.

Matrix function approximation and solution strategy
The general numerical solution procedure (MTT) is implemented as follows. First, apply a standard spatial discretisation process, such as the finite volume, finite element, or finite difference method, to the standard Poisson equation (i.e., α = 2 in system (1.2)) in the case of homogeneous boundary conditions to obtain the matrix form (2.1), where it is assumed that (1/h^2) A is the finite difference matrix representation of −∇^2, h is the grid spacing, Φ is the representation of ϕ, and g is the representation of g. Then, as was discussed in [15], the solution of the FPE subject to homogeneous boundary conditions is approximated by the solution of the matrix function equation ((1/h^2) A)^{α/2} Φ = g. Next, we apply the same finite difference method to the homogeneous Poisson equation (i.e., Laplace's equation) with nonhomogeneous boundary conditions. The resulting equations can be written in the matrix form AΦ = b, where b represents the discretised boundary values, and the matrix A is the same as given above. In other words, if ϕ does not satisfy homogeneous boundary conditions, then the modified representation is used, where −∇̃^2 denotes the extended definition of the Laplacian (see [8], and also refer to Section 4 for further details). Thirdly, we follow [8] to write the fractional Laplacian, and hence its matrix representation, in the corresponding forms. The matrix representation for the FPE follows and, assuming that A has an inverse, the solution of this equation is given by (2.8). Our aim is to devise an efficient algorithm to approximate the solution Φ in (2.8) using Krylov subspace methods. One notes from (2.8) that the solution comprises two distinct parts, Φ_1 and Φ_2, with 0 < q = α/2 < 1. We note further in this context that the scalar function f(t) = t^{−q} is monotone decreasing on σ(A), where A ∈ R^{n×n} is symmetric positive definite.
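To make the matrix transfer technique concrete, the following is a minimal one-dimensional sketch (our own illustrative example, not one of the paper's test problems): the three-point representation of −d^2/dx^2 on (0, 1) with homogeneous Dirichlet conditions is raised to the fractional power via a dense eigendecomposition and checked against the spectral solution for the eigenfunction source g(x) = sin(πx). All names and sizes below are illustrative.

```python
import numpy as np

# Illustrative 1D analogue of the MTT:
# solve (-d^2/dx^2)^(alpha/2) phi = g on (0,1), phi(0) = phi(1) = 0.
n = 200                          # interior grid points (illustrative)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
alpha = 1.0

# (1/h^2) A is the standard three-point matrix representation of -d^2/dx^2.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

g = np.sin(np.pi * x)            # source term: first Laplacian eigenfunction

# ((1/h^2) A)^(alpha/2) Phi = g  =>  Phi = h^alpha * A^(-alpha/2) g,
# computed here via a dense eigendecomposition (A is SPD).
lam, Q = np.linalg.eigh(A)
Phi = h**alpha * (Q @ (lam ** (-alpha / 2.0) * (Q.T @ g)))

# Spectral (analytic) solution: sin(pi x) has eigenvalue pi^2, so
# phi(x) = pi^(-alpha) sin(pi x).
Phi_exact = np.pi ** (-alpha) * np.sin(np.pi * x)
err = np.max(np.abs(Phi - Phi_exact))
```

For an eigenfunction source the discrete solution matches the analytic one to O(h^2), which is the accuracy of the underlying stencil.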
There exists a plethora of Krylov-based methods in the literature for approximately solving the linear system AΦ_2 = b using, for example, the conjugate gradient, FOM, or MINRES methods (see [19, 20]). Although preconditioning strategies are often employed to accelerate the convergence of many of these methods, we prefer not to adopt preconditioning here, so that spectral information gathered about A during this linear system solve can be recycled and used to aid the approximation of Φ_1. As we will see, this recycling is effected through the use of thick restart [30, 32] and adaptive preconditioning [33, 34]. We emphasise that even if M is a good preconditioner for A, it may not be useful for f(A), since we cannot, in general, find a relation between f(A) and f(AM^{−1}). Thus, many efficient solvers used for the ordinary Poisson equation cannot be employed for the FPE. The adaptive preconditioner, however, can.
We begin our presentation of the numerical algorithm by briefly reviewing the solution of the linear system AΦ_2 = b, where A ∈ R^{n×n} is symmetric positive definite, using the full orthogonalisation method (FOM) [19] together with thick restart [27, 30, 32].

Stage 1-Thick restarted, adaptively preconditioned, Lanczos procedure
Suppose that the Lanczos decomposition of A is given by A V_l = V_l T_l + β_l v_{l+1} e_l^T, where the columns of V_l form an orthonormal basis for K_l(A, b) and l is the analytic grade defined in [31]. The analytic grade of order t of the matrix A with respect to b is defined as the lowest integer l for which ‖u − P_l u‖/‖u‖ < 10^{−t}, where P_l is the orthogonal projector onto the lth Krylov subspace K_l and u = A^l b. The grade can be computed from the Lanczos algorithm using the matrices T_1, T_2, . . . , T_l generated during the process: if t_1 is the first column of T_1, and t_i = T_i t_{i−1} for i = 2, . . . , l, then ‖u − P_l u‖/‖u‖ = |e_{l+1}^T t_l|/‖t_l‖. In each restart, or cycle, that follows, the Lanczos decomposition is carried out up to the analytic grade, which could be different for different cycles. Consequently, for ease of exposition, the subscript l will be suppressed, so that the only subscript that appears throughout the description below refers to the cycle. Let Φ_2^(0) be some initial approximation to the solution Φ_2 and define r_0 = b − AΦ_2^(0).
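The Lanczos decomposition referred to here can be sketched as follows. This is a generic textbook implementation with full reorthogonalisation (not the authors' code; all names are illustrative), returning V, T, and the residual term so that A V = V T + β_m v_{m+1} e_m^T can be verified directly.

```python
import numpy as np

def lanczos(A, b, m):
    """Minimal Lanczos with full reorthogonalisation (illustrative sketch).
    Returns V (n x m), symmetric tridiagonal T (m x m), beta_m and v_{m+1}
    so that  A V = V T + beta_m v_{m+1} e_m^T."""
    n = b.size
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)
    V[:, 0] = b / np.linalg.norm(b)
    v_next = None
    for j in range(m):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)   # full reorthogonalisation
        beta[j] = np.linalg.norm(w)
        v_next = w / beta[j]
        if j + 1 < m:
            V[:, j + 1] = v_next
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return V, T, beta[-1], v_next

rng = np.random.default_rng(0)
n, m = 120, 25
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.linspace(0.02, 8.0, n)) @ Q.T     # SPD test matrix
b = rng.standard_normal(n)

V, T, beta_m, v_next = lanczos(A, b, m)
e_m = np.zeros(m); e_m[-1] = 1.0
decomp_err = np.linalg.norm(A @ V - V @ T - beta_m * np.outer(v_next, e_m))
orth_err = np.linalg.norm(V.T @ V - np.eye(m))
```

In practice, the analytic-grade test amounts to monitoring the quantity |e_{l+1}^T t_l|/‖t_l‖ as the columns are generated, stopping once it drops below 10^{−t}.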

Cycle 1
(i) Generate the Lanczos decomposition. (ii) Obtain the approximate solution Φ_2^(1) and residual r_1.
(iii) Select the k orthonormal (ON) eigenvectors, Y_1, of T_1 corresponding to the k smallest-in-magnitude eigenvalues of T_1 and form the Ritz vectors W_1 = V_1 Y_1 = [w_1, . . . , w_k], where the w_i are ON, and let the associated Ritz values be stored in a diagonal matrix. (iv) Set V_2 = [W_1, u_1] and generate the thick-restart Lanczos decomposition and residual.

Test if ‖r_2‖ < ε. If yes, stop; otherwise, continue to the next cycle.
(ii) Select the k orthonormal (ON) eigenvectors, Y_j, of T_j corresponding to the k smallest-in-magnitude eigenvalues of T_j and form the Ritz vectors W_j = V_j Y_j.
(iii) Set V_{j+1} = [W_j, u_j] and generate the thick-restart Lanczos decomposition, where T_{j+1} has a similar form to T_2.
(iv) Obtain the approximate solution and residual.

Construction of an adaptive preconditioner
Another important ingredient in the algorithm described above is the construction of an adaptive preconditioner [33, 34]. Let the thick-restart procedure at cycle j produce the k approximate smallest Ritz pairs (θ_i, w_i). We then check whether any of these Ritz pairs have converged to approximate eigenpairs of A by testing the magnitude of the upper bound β_m |e_m^T y_i| on the eigenpair residual ‖A w_i − θ_i w_i‖. The eigenpairs deemed to have converged are then locked and used to construct an adaptive preconditioner that can be employed during the next cycle to ensure that difficulties such as spuriousness can be avoided. Suppose we collect the p locked Ritz vectors as columns of the matrix Q_j = [q_1, q_2, . . . , q_p], set Λ_j = diag{θ_1, . . . , θ_p}, and form M_j^{−1} = I + Q_j (γ Λ_j^{−1} − I) Q_j^T, where γ = (θ_min + θ_max)/2, and θ_min, θ_max are the current estimates of the smallest and largest eigenvalues of A, respectively, obtained from the restart process. Then A_j = A M_j^{−1} has the same eigenvectors as A; however, its eigenvalues {λ_i}_{i=1}^p are shifted to γ [33, 34]. Furthermore, it should be noted that these preconditioners can be nested. If M_1, M_2, . . . , M_j is a sequence of such preconditioners, then with Q = [Q_1, Q_2, . . . , Q_j] and Λ = diag(Λ_i), i = 1, . . . , j, we have M^{−1} = M_1^{−1} M_2^{−1} · · · M_j^{−1} = I + Q (γ Λ^{−1} − I) Q^T. Thus, during later cycles (say cycle j + 1) the adaptively preconditioned, thick-restart Lanczos decomposition of A M_j^{−1} is employed.
Note. The preconditioner M −1 does not need to be explicitly formed; it can be applied in a straightforward manner from the stored locked Ritz pairs.
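A minimal numerical realisation of this construction is sketched below (our own, with illustrative sizes and values). Exact eigenpairs stand in for locked Ritz pairs, and M^{−1} = I + Q(γΛ^{−1} − I)Q^T — one concrete form consistent with the shifting property described above — is applied without ever being formed; the locked eigenvalues of A M^{−1} move to γ while the rest stay put.

```python
import numpy as np

# Illustrative realisation of the adaptive preconditioner (exact eigenpairs
# stand in for locked Ritz pairs; all sizes and values are our own).
rng = np.random.default_rng(1)
n, p = 80, 4
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
evals = np.sort(np.concatenate([np.array([1e-3, 5e-3, 1e-2, 5e-2]),
                                np.linspace(0.5, 8.0, n - 4)]))
A = U @ np.diag(evals) @ U.T              # SPD with p troublesome eigenvalues

Q = U[:, :p]                              # locked vectors q_1..q_p
theta = evals[:p]                         # locked Ritz values theta_1..theta_p
gamma = (evals.min() + evals.max()) / 2.0 # gamma = (theta_min + theta_max)/2

def apply_Minv(v):
    """Apply M^{-1} = I + Q (gamma Lambda^{-1} - I) Q^T without forming it."""
    c = Q.T @ v
    return v + Q @ ((gamma / theta - 1.0) * c)

# A M^{-1} keeps A's eigenvectors but shifts the p locked eigenvalues to gamma.
AM = np.column_stack([A @ apply_Minv(e) for e in np.eye(n)])
shifted = np.sort(np.linalg.eigvals(AM).real)
```

The matrix-free application costs only two thin matrix-vector products per iteration, which is why locking and nesting these preconditioners is cheap.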
In summary, stage 1 consists of employing the adaptively preconditioned Lanczos procedure outlined above to approximately solve the linear system AΦ_2 = b for Φ_2. At the completion of this process, the residual satisfies ‖r‖ = ‖b − AΦ_2‖ < ε, and we have the set {(θ_i, q_i)}_{i=1}^k of locked Ritz pairs. This spectral information is then passed on to accelerate the performance of stage 2 of the solution process.

Journal of Applied Mathematics and Stochastic Analysis

Stage 2-Matrix function approximation using an adaptively preconditioned Lanczos procedure
At the completion of stage 1, we have generated an approximately invariant eigenspace V = span{q_1, q_2, . . . , q_k} associated with the smallest-in-magnitude eigenvalues of A. We now show how this spectral information can be recycled to aid in the approximation of Φ_1.

Adaptive preconditioning
Recall from stage 1 that we have available the locked Ritz pairs {(θ_i, q_i)}_{i=1}^k and the associated adaptive preconditioner M^{−1}. The important observation at this point is the relationship between f(A) and f(AM^{−1}): because AM^{−1} agrees with A on the orthogonal complement of the locked invariant subspace, f(A)v can be recovered from f(AM^{−1}) together with the locked eigenpairs, and this yields the main result. The following proposition shows that, as was the case for the solution of the linear system in stage 1, these preconditioners can be nested in the case of the matrix function approximation.
Let M_1, . . . , M_j be a sequence of preconditioners as defined in Proposition 2.1. Proof.
which appears similar to the idea of spectral splitting proposed in [27].
We now turn our attention to the approximation of Φ_1 = A^{−q} g, which by using Corollary 2.3 can be expressed as Φ_1 = Q_k Λ_k^{−q} Q_k^T g + (AM^{−1})^{−q} g̃, where g̃ = (I − Q_k Q_k^T) g. First note that if A ∈ SPD, then AM^{−1} ∈ SPD. We expand the Lanczos decomposition (AM^{−1}) V_m = V_m T_m + β_m v_{m+1} e_m^T to the analytic grade of AM^{−1} with v_1 = g̃/‖g̃‖. Next, we perform the spectral decomposition T_m = Y_m Λ_m Y_m^T and set Q̃ = V_m Y_m; then we compute the Lanczos approximation (AM^{−1})^{−q} g̃ ≈ ‖g̃‖ V_m T_m^{−q} e_1. Based on the theory presented to this point, we propose the following algorithm to approximate the solution of the fractional Poisson equation. (4) Compute the linear system residual r = β_m |e_m^T Y_m Λ_m^{−1} Y_m^T V_m^T g̃| and estimate λ_min ≈ μ_min from T_m to compute the bound (3.9), μ_min^{−q} ‖r‖, derived in Section 3. (5) If the bound is small, then approximate f(AM^{−1}) g̃ ≈ V_m T_m^{−q} V_m^T g̃ and exit to step (6); otherwise, continue the Lanczos expansion until the bound is satisfied.
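A generic sketch of the Lanczos matrix-function step — f(B)g ≈ ‖g‖ V_m T_m^{−q} e_1 for an SPD matrix B — follows. Here B is a random SPD stand-in for AM^{−1} with spectrum bounded away from the origin (as it would be after adaptive preconditioning); all names and sizes are illustrative, and the result is compared against a dense spectral evaluation.

```python
import numpy as np

def lanczos(A, v, m):
    # plain Lanczos with full reorthogonalisation (illustrative sketch)
    n = v.size
    V = np.zeros((n, m)); a = np.zeros(m); b = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j] - (b[j - 1] * V[:, j - 1] if j > 0 else 0)
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        b[j] = np.linalg.norm(w)
        if j + 1 < m:
            V[:, j + 1] = w / b[j]
    return V, np.diag(a) + np.diag(b[: m - 1], 1) + np.diag(b[: m - 1], -1)

rng = np.random.default_rng(2)
n, m, q = 150, 40, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(1.0, 8.0, n)            # spectrum away from 0, post-shifting
B = U @ np.diag(lam) @ U.T
g = rng.standard_normal(n)

# Lanczos approximation  B^{-q} g  ~  ||g|| V T^{-q} e_1  =  V T^{-q} V^T g
V, T = lanczos(B, g, m)
mu, Y = np.linalg.eigh(T)
e1 = np.zeros(m); e1[0] = 1.0
approx = np.linalg.norm(g) * (V @ (Y @ (mu ** (-q) * (Y.T @ e1))))

exact = U @ (lam ** (-q) * (U.T @ g))
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

Note that T^{−q} is evaluated through the small m × m eigendecomposition of T, so the dominant cost remains the m matrix-vector products with B.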

Remarks
At stage 2, we monitor the upper bound given in Proposition 3.3 to check if the desired accuracy is achieved in the matrix function approximation. If the desired level is not attained, then it may be necessary to repeat the thick-restart procedure to determine the next k smallest eigenvalues and their corresponding ON eigenvectors. In fact, this process may need to be repeated until there are no eigenvalues remaining in the "singular" part so that the accuracy of the approximation is dictated entirely by that of the linear system residual. We leave the design of this more sophisticated and generic algorithm for future research.
It is natural at this point to ask: what is the accuracy of the approximation (2.33) for a given Krylov subspace dimension? Not knowing (AM^{−1})^{−q} g̃ at the outset makes it impossible to answer this question directly. Instead, we opt to provide an upper bound for the error ‖(AM^{−1})^{−q} g̃ − V_m T_m^{−q} V_m^T g̃‖, which is the topic of the following section.

Error bounds for the numerical solution
At first, we note that Churchill [35] uses complex integration around a branch point to derive the representation λ^{−q} = (sin qπ/π) ∫_0^∞ u^{−q}/(λ + u) du for 0 < q < 1. By the change of variable u = t^{1/(1−q)}, one can deduce the following expression for λ^{−q}, λ > 0: λ^{−q} = (sin qπ/((1−q)π)) ∫_0^∞ (λ + t^{1/(1−q)})^{−1} dt. Noting that Ā = AM^{−1} ∈ SPD, the spectral decomposition and the usual definition of the matrix function enable the following expression for computing Ā^{−q} to be obtained: Ā^{−q} = (sin qπ/((1−q)π)) ∫_0^∞ (Ā + t^{1/(1−q)} I)^{−1} dt. (3.3) Recall that the approximate solution of the linear system Āx = v from K_m(Ā, v) using the Galerkin approach (FOM or CG) is given by x_m = β_0 V_m T_m^{−1} e_1, with β_0 = ‖v‖. We note the similarity to (2.33); however, a key observation is that the error in the matrix function approximation cannot be determined in such a straightforward manner as for the linear system [24]. The following proposition enables the error in the matrix function approximation to be expressed in terms of the integral expression given above in (3.3) and the residual of what is called a shifted linear system.
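The scalar representation λ^{−q} = [sin qπ/((1−q)π)] ∫_0^∞ (λ + t^{1/(1−q)})^{−1} dt can be sanity-checked by simple quadrature; the values of q and λ below are illustrative, and the substitution t = (1 − u)/u maps the semi-infinite integral onto the finite cell (0, 1].

```python
import numpy as np

# Quadrature sanity check (our own, with illustrative q and lam) of
#   lam^{-q} = sin(q pi)/((1-q) pi) * Int_0^inf dt / (lam + t^{1/(1-q)}).
q, lam = 0.5, 3.7
u = np.linspace(1e-9, 1.0, 200001)
t = (1.0 - u) / u                                # t = (1-u)/u, dt = -du/u^2
f = 1.0 / ((lam + t ** (1.0 / (1.0 - q))) * u**2)
du = u[1] - u[0]
val = du * (f.sum() - 0.5 * (f[0] + f[-1]))      # composite trapezoidal rule
approx = np.sin(q * np.pi) / ((1.0 - q) * np.pi) * val
err = abs(approx - lam ** (-q))
```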
Proof. It is known that

It is interesting to observe that r(t) = −β_m [e_m^T (T_m + t^{1/(1−q)} I)^{−1} V_m^T v] v_{m+1} for the Lanczos approximation, so that the vectors r ≡ r(0) and r(t) are aligned; however, their magnitudes are different. Note further that r(t) = [e_m^T (T_m + t^{1/(1−q)} I)^{−1} e_1 / (e_m^T T_m^{−1} e_1)] r(0).

An even more important result is the following relationship between their norms. Proposition 3.2. For t > 0, one has ‖r(t)‖ ≤ ‖r‖. Proof. The result follows from [26], which gives the following polynomial characterisations for the residuals: r = π(A)v/π(0) and r(t) = π(A)v/π_t(0), (3.8) where π is the characteristic polynomial of T_m and π_t(0) = ∏_{i=1}^m (μ_i + t^{1/(1−q)}), so that r(t) = [π(0)/π_t(0)] r = ∏_{i=1}^m [μ_i/(μ_i + t^{1/(1−q)})] r. The result follows by taking the norm and noting that, for t > 0, each factor μ_i/(μ_i + t^{1/(1−q)}) is less than 1.
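The relation above — ‖r(t)‖ = ∏_{i=1}^m [μ_i/(μ_i + t^{1/(1−q)})] ‖r‖, with the two residuals aligned — can be verified numerically. The sketch below uses a generic Lanczos/FOM implementation (our own, with illustrative sizes) and a fixed shift τ standing in for t^{1/(1−q)}; shift invariance of the Krylov subspace means the shifted iterate is β_0 V (T + τI)^{−1} e_1.

```python
import numpy as np

def lanczos(A, v, m):
    # plain Lanczos with full reorthogonalisation (illustrative sketch)
    n = v.size
    V = np.zeros((n, m)); a = np.zeros(m); b = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j] - (b[j - 1] * V[:, j - 1] if j > 0 else 0)
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        b[j] = np.linalg.norm(w)
        if j + 1 < m:
            V[:, j + 1] = w / b[j]
    return V, np.diag(a) + np.diag(b[: m - 1], 1) + np.diag(b[: m - 1], -1)

rng = np.random.default_rng(4)
n, m = 60, 12
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.5, 6.0, n)
A = U @ np.diag(lam) @ U.T
v = rng.standard_normal(n)

V, T = lanczos(A, v, m)
beta0 = np.linalg.norm(v)
e1 = np.zeros(m); e1[0] = 1.0
tau = 2.3                      # stands in for the shift t^{1/(1-q)} > 0

# FOM residuals of the unshifted and shifted systems from the SAME subspace
r0 = v - A @ (beta0 * V @ np.linalg.solve(T, e1))
r_t = v - (A + tau * np.eye(n)) @ (beta0 * V @ np.linalg.solve(T + tau * np.eye(m), e1))

mu = np.linalg.eigvalsh(T)     # Ritz values mu_i
predicted = np.prod(mu / (mu + tau)) * np.linalg.norm(r0)
cosine = (r_t @ r0) / (np.linalg.norm(r_t) * np.linalg.norm(r0))
```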

We are now in a position to formulate an error bound essential for monitoring the accuracy of the Lanczos approximation (2.33).

Proposition 3.3.
Let λ_min be the smallest eigenvalue of A and r the linear system residual obtained by solving the linear system Ax = g using FOM on the Krylov subspace K_m(A, g); then, for 0 < q < 1, one has ‖A^{−q} g − β_0 V_m T_m^{−q} e_1‖ ≤ λ_min^{−q} ‖r‖. (3.9) Proof. Using the orthogonal diagonalisation A = QΛQ^T, we obtain from Proposition 3.1 that the error is the integral (sin qπ/((1−q)π)) ∫_0^∞ (A + t^{1/(1−q)} I)^{−1} r(t) dt. The result follows by taking norms and using Proposition 3.2 to obtain ‖error‖ ≤ ‖r‖ (sin qπ/((1−q)π)) ∫_0^∞ (λ_min + t^{1/(1−q)})^{−1} dt = λ_min^{−q} ‖r‖. The importance of this result is that it relates the error in the matrix function approximation to a scalar multiple of the linear system residual. This bound can be monitored during the Lanczos decomposition to deduce whether a specified tolerance has been reached in the matrix function approximation. Another key observation from Proposition 3.3 is that it motivates us to shift the small troublesome eigenvalues of A, via some form of preconditioning, so that λ_min ≈ 1. In this way, the error in the function approximation is dominated entirely by the residual error.
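The bound ‖A^{−q} g − β_0 V_m T_m^{−q} e_1‖ ≤ λ_min^{−q} ‖r‖ can be checked numerically. The sketch below (our own, with an illustrative random SPD matrix) computes both sides after m Lanczos steps.

```python
import numpy as np

def lanczos(A, v, m):
    # plain Lanczos with full reorthogonalisation (illustrative sketch)
    n = v.size
    V = np.zeros((n, m)); a = np.zeros(m); b = np.zeros(m)
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j] - (b[j - 1] * V[:, j - 1] if j > 0 else 0)
        a[j] = V[:, j] @ w
        w -= a[j] * V[:, j]
        w -= V[:, : j + 1] @ (V[:, : j + 1].T @ w)
        b[j] = np.linalg.norm(w)
        if j + 1 < m:
            V[:, j + 1] = w / b[j]
    return V, np.diag(a) + np.diag(b[: m - 1], 1) + np.diag(b[: m - 1], -1)

rng = np.random.default_rng(5)
n, m, q = 100, 20, 0.75
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.5, 6.0, n)
A = U @ np.diag(lam) @ U.T
g = rng.standard_normal(n)

V, T = lanczos(A, g, m)
beta0 = np.linalg.norm(g)
e1 = np.zeros(m); e1[0] = 1.0

r = np.linalg.norm(g - A @ (beta0 * V @ np.linalg.solve(T, e1)))  # FOM residual

mu, Y = np.linalg.eigh(T)
approx = beta0 * (V @ (Y @ (mu ** (-q) * (Y.T @ e1))))   # Lanczos A^{-q} g
exact = U @ (lam ** (-q) * (U.T @ g))
err = np.linalg.norm(exact - approx)
bound = lam.min() ** (-q) * r                            # lambda_min^{-q} ||r||
```

The bound is computable from quantities already available during the iteration (the residual and a Ritz estimate of λ_min), which is what makes it usable as a stopping test in stage 2.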

Analytic solution
In this section, we discuss the analytic solution of the fractional Poisson equation, which can be used to verify the numerical solution strategy outlined in Section 2. The theory depends on the definition of the operator (−∇^2)^{α/2} via its spectral representation. The one-dimensional case was discussed in Ilić et al. [8], and the salient results for two dimensions are repeated here for completeness.

Homogeneous boundary conditions
In operator theory, functions of operators are defined using the spectral decomposition. Set Ω = {(x, y) | 0 < x < a, 0 < y < b}, and let H be the real space L^2(Ω) with real inner product (u, v) = ∫∫_Ω uv dS. Consider the operator T = −∇^2 on D = {ϕ ∈ H | ϕ_x, ϕ_y absolutely continuous; ϕ_x, ϕ_y, ϕ_xx, ϕ_xy, ϕ_yy ∈ L^2(Ω); B(ϕ) = 0}, where B(ϕ) = 0 is one of the boundary conditions in the FPE problem with right-hand side equal to zero.

If ψ is a continuous function on R, then ψ(T)ϕ = Σ_{i,j} ψ(λ_ij^2)(ϕ, ϕ_ij)ϕ_ij, where (λ_ij^2, ϕ_ij) are the eigenpairs of T. Hence, if the eigenvalue problem for T can be solved for the region Ω, then the FPE problem with homogeneous boundary conditions can be easily solved to give ϕ = Σ_{i,j} λ_ij^{−α}(g, ϕ_ij)ϕ_ij. (4.4)

Nonhomogeneous boundary conditions
Before we proceed further, we need to specify the definition of (−∇^2)^{α/2}. Let F_γ be a suitable class of functions in H whose eigenfunction expansions converge appropriately. Then, for any f ∈ F_γ, (−Δ)^{α/2} f is defined by (−Δ)^{α/2} f = Σ_{i,j} λ_ij^α (f, ϕ_ij)ϕ_ij. If one of the λ_ij^2 = 0 and ϕ_0 is the eigenfunction corresponding to this eigenvalue, then one needs (f, ϕ_0) = 0.

For α > 0, Definition 4.1 may be too restrictive, since the functions we are interested in satisfy nonhomogeneous boundary conditions, and the resulting series may not converge, or may not converge uniformly. It suffices to consider 0 < α < 2.

where −Δ̃ denotes the extension of −Δ with domain D̃(Ω), which is the same as D(Ω) but without the requirement B(ϕ) = 0; this extension is well documented in books on partial differential equations [7]. The result is obtained by calculating the conjunct (concomitant, or boundary form) using Green's formula, which gives the result on substitution.
This result can be readily used to write down the analytic solution to the FPE problem. First, we obtain the spectral representation of the operator T by solving the eigenvalue problem Tϕ_ij = λ_ij^2 ϕ_ij. Knowing the eigenvalues λ_ij^2 and the corresponding orthonormal (ON) eigenfunctions ϕ_ij, we can use the finite-transform method with respect to ϕ_ij and Proposition 4.3 to obtain λ_ij^α (ϕ, ϕ_ij) = (g, ϕ_ij) + λ_ij^{α−2} b_ij, (4.14) where λ_ij^{α−2} b_ij is the second term on the right-hand side in Proposition 4.3. Hence, ϕ = Σ_{i,j} [λ_ij^{−α}(g, ϕ_ij) + λ_ij^{−2} b_ij]ϕ_ij.
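As a cross-check linking this analytic solution to the matrix transfer technique of Section 2, the sketch below (our own, on the unit square with homogeneous Dirichlet conditions and a single-mode source) compares the truncated eigenfunction series with the discrete solution Φ = h^α A^{−α/2} g; all sizes and parameter values are illustrative.

```python
import numpy as np

# Analytic series vs. matrix transfer technique on the unit square with
# homogeneous Dirichlet conditions. Eigenpairs of T = -Laplacian:
# phi_ij = 2 sin(i pi x) sin(j pi y), lam_ij^2 = (i pi)^2 + (j pi)^2.
alpha, n = 1.5, 32
h = 1.0 / n
xs = np.linspace(h, 1.0 - h, n - 1)
X, Y = np.meshgrid(xs, xs, indexing="ij")

g = np.sin(np.pi * X) * np.sin(2.0 * np.pi * Y)   # proportional to phi_12

# Series solution: only the (1,2) term survives, so phi = lam_12^(-alpha) g
# with lam_12^2 = pi^2 + (2 pi)^2.
lam2 = np.pi**2 + (2.0 * np.pi) ** 2
phi_series = lam2 ** (-alpha / 2.0) * g

# Matrix transfer solution: ((1/h^2) A)^(alpha/2) Phi = g.
I_m = np.eye(n - 1)
L1 = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
A = np.kron(L1, I_m) + np.kron(I_m, L1)           # five-point stencil
w, Q = np.linalg.eigh(A)
phi_mtt = (h**alpha * (Q @ (w ** (-alpha / 2.0) * (Q.T @ g.ravel())))).reshape(n - 1, n - 1)

err = np.max(np.abs(phi_series - phi_mtt))
```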

Results and discussion
In this section, we exhibit the results of applying Algorithm 2.4 to solve two FPE test problems. To assess the accuracy of our approximation, we compare the numerical solutions with the exact solution in each case.

For the numerical solution, a standard five-point finite-difference formula with equal grid spacing h = 1/n in the x and y directions has been used to generate the block tridiagonal matrix A ∈ R^{(n−1)^2 × (n−1)^2} given in (2.1). The parameters used to test this model are listed in Table 1.
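One way to assemble this block tridiagonal matrix is via Kronecker products (a sketch of a standard assembly route, not necessarily the authors' implementation). With n = 31 it reproduces the 900 × 900 matrix and the spectral interval [0.0205, 7.9795] reported for test problem 1 later in this section.

```python
import numpy as np

# Five-point block tridiagonal assembly via Kronecker products
# (unit square, h = 1/n, homogeneous Dirichlet boundary conditions).
n = 31
m = n - 1                                          # interior nodes per direction
I_m = np.eye(m)
L1 = 2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)  # 1D three-point part
A = np.kron(L1, I_m) + np.kron(I_m, L1)            # A in R^{(n-1)^2 x (n-1)^2}

# Closed-form eigenvalues: lam_ij = 4 - 2 cos(i pi/n) - 2 cos(j pi/n).
i = np.arange(1, n)
d = 2.0 - 2.0 * np.cos(i * np.pi / n)
lam = d[:, None] + d[None, :]
lam_min, lam_max = lam.min(), lam.max()
eig_err = np.max(np.abs(np.sort(np.linalg.eigvalsh(A)) - np.sort(lam.ravel())))
```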

where H_i = h_i/k. The analytical solution to this problem is given by the eigenfunction expansion (5.7).

The eigenvalues μ_i are determined by finding the roots of a transcendental equation, with the ν_j determined from a similar equation for ν, and the coefficients α_{i,j} follow accordingly. For the numerical solution, a standard five-point finite-difference formula with equal grid spacing h = 1/n was again employed in the x and y directions. However, in this example, additional finite-difference equations are required for the boundary nodes as a result of the type III boundary conditions. The block tridiagonal matrix required in (2.8) is then similar to that exhibited for example 1; however, it has dimension A ∈ R^{(n+1)^2 × (n+1)^2}, and the boundary blocks must be modified to account for the boundary-condition contributions.
The parameter values used for this problem are listed in Table 2.

Discussion of results for test problem 1
A comparison of the numerical and analytical solutions for test problem 1 is exhibited in Figure 1 for different values of the fractional index α = 0.5, 1.0, 1.5, and 2 (with the value 2 representing the solution of the classical Poisson equation). In all cases, good agreement is observed between theory and simulation, with the analytical (solid contour lines) and numerical (dashed contour lines) solutions almost indistinguishable. In fact, Algorithm 2.4 consistently produced a numerical solution within approximately 2% absolute error of the analytical solution. The impact of decreasing the fractional index from α = 2 to 0.5 is particularly evident in Figure 2 from the shape and magnitude of the computed three-dimensional symmetric profiles. Low values of α produce a solution exhibiting a pronounced hump-like shape, with the diffusion rate low, the magnitude of the solution high at the centre, and steep gradients evident near the boundary of the solution domain. As α increases, the magnitude of the profile diminishes and the solution becomes much more diffuse and representative of a Gaussian process. These observations motivate the following remark concerning the fractional Laplacian of a C^∞-function f with rapid decay at infinity, where g_α = π^{n/2} 2^α Γ(α/2)/Γ(n/2 − α/2) (5.11) is known to generate α-stable processes. In fact, the Green's function of the equation is the probability density function of a symmetric α-stable process. When α = 1, it is the density function of a Cauchy distribution, and when α = 2, it is the classical Gaussian density. As α → 0, the tail of the density function becomes heavier and heavier. These behaviours are reflected in the numerical results given in the above example; namely, when α → 2, the plots exhibit the bell shape of the Gaussian density, but when α → 0, the curves are flatter, indicating very heavy tails, as expected.
We now report on the performance of Algorithm 2.4 for computing the solution of the FPE. The numerical solutions shown in Figures 1 and 2 were generated using a standard five-point finite difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The x- and y-dimensions were divided equally into 31 divisions to produce the symmetric positive definite matrix A ∈ R^{900×900} having its spectrum σ(A) ⊂ [0.0205, 7.9795]. One notes for this problem that the homogeneous boundary conditions necessitate only the solution Φ = h^α A^{−α/2} g. Algorithm 2.4 was still employed in this case; however, in stage 1 we first solve Ax = g by the adaptively preconditioned thick-restart procedure. The spectral information gathered in stage 1 is then used for the efficient computation of Φ during stage 2. Figure 3 depicts the reduction in the residual of the linear system and the error in the matrix function approximation during both stages of the solution process for test problem 1 for the case α = 1. For this test using FOM(25,10) (subspace size 25, with an additional 10 approximate Ritz vectors augmented at the front of the Krylov subspace), four restarts were required to reduce the linear system residual to ≈ 1 × 10^{−15}, which represented an overall total of 110 matrix-vector products. This low tolerance was enforced to ensure that as many approximate eigenpairs of A as possible could be computed and then locked during stage 1 for use in stage 2. An eigenpair was deemed converged when the residual in the approximate eigenpair was less than θ_max × 10^{−10}, where θ_max is the current estimate of the largest eigenvalue of A. This process saw 1 eigenpair locked after 2 restarts, 5 locked after 3 restarts, and finally 9 locked after 4 restarts. From this figure, we also see that when subspace recycling is used for stage 2, only an additional 30 matrix-vector products are required to compute the solution Φ to an acceptable accuracy.
It is also worth pointing out that the error in the Lanczos approximation for this preconditioned matrix function reduces much more rapidly than for the case where preconditioning is not used (dotted line). Furthermore, the Lanczos approximation in this example lies almost entirely on the curve that represents the optimal approximation obtainable from the Krylov subspace [26]. Finally, we see that the bound (3.9) can be used with confidence as a means for halting stage 2 once the desired accuracy in the bound is reached.

Discussion of results for test problem 2
A comparison of the numerical and analytical solutions for test problem 2 is exhibited in Figure 4, again for the values of the fractional index α = 0.5, 1.0, 1.5, and 2. It can be seen that the agreement between theory and simulation is more than acceptable for this case, with Algorithm 2.4 producing a numerical solution within approximately 4% absolute error of the analytical solution. However, the impact of increasing the fractional index from α = 0.5 to 2 is less dramatic for problem 2.
The numerical solutions shown in Figure 4 were again generated using a standard five-point finite-difference stencil to construct the matrix representation of the two-dimensional Laplacian operator. The x- and y-dimensions were divided equally into 30 divisions, resulting in the symmetric positive definite matrix A ∈ R^{961×961} having its spectrum σ(A) ⊂ [0.000758, 7.9795]. One notes for this problem that the type II boundary conditions have produced a small eigenvalue that will undoubtedly hinder the performance of restarted FOM. Figure 5 depicts the reduction in the residual of the linear system for computing the solution Φ_1 and the error in the matrix function approximation for Φ_2 during both stages of the solution process for test problem 2 with α = 1. Using FOM(25,10), a total of nine restarts were required to reduce the linear system residual to ≈ 1 × 10^{−15}, which represented an overall total of 240 matrix-vector products. One notes that this is much higher than for test problem 1.

Conclusions
In this work, we have shown how the fractional Poisson equation can be approximately solved using a finite-difference discretisation of the Laplacian to produce an appropriate matrix representation of the operator. We then derived a matrix equation that involved both a linear system solution and a matrix function approximation with the matrix A raised to the same fractional index as the Laplacian. We proposed an algorithm based on Krylov subspace methods that could be used to efficiently compute the solution of this matrix equation using a two-stage process. During stage 1, we used an adaptively preconditioned thick restarted FOM method to approximately solve the linear system and then used recycled spectral information gathered during this restart process to accelerate the convergence of the matrix function approximation in stage 2. Two test problems were then presented to assess the accuracy of our algorithm, and good agreement with the analytical solution was noted in both cases. Future research will see higher dimensional fractional diffusion equations solved using a similar approach via the finite volume method.