Preconditioned Krylov subspace methods for sixth order compact approximations of the Helmholtz equation

In this paper, we consider an efficient iterative approach to the solution of the discrete Helmholtz equation with Dirichlet, Neumann and Sommerfeld-like boundary conditions based on a compact sixth order approximation scheme and preconditioned Krylov subspace methodology. A sixth order compact scheme for the 3D Helmholtz equation with different boundary conditions is developed to reduce approximation and pollution errors, thereby relaxing the points-per-wavelength constraint. The resulting systems of finite-difference equations are solved by different preconditioned Krylov subspace-based methods. In the majority of test problems, the preconditioned Generalized Minimal Residual (GMRES) method is the superior choice, but in the case of sufficiently fine grids a simple stationary two-level algorithm proposed in this paper, in combination with a lower order approximation preconditioner, presents an efficient alternative to the GMRES method. In the analysis of the lower order preconditioning developed here, we introduce the term "$k$-th order preconditioned matrix" in addition to the commonly used notion of an "optimal preconditioner". The necessity of the new criterion is justified by the fact that the condition number of the preconditioned matrix $AA^{-1}_p$ in some of our test problems improves with the decrease of the grid step size. In a simple 1D case, we are able to prove this analytically. This new parameter could serve as a guide in the construction of new preconditioners. The lower order direct preconditioner used in our algorithms is based on a combination of the separation of variables technique and Fast Fourier Transform (FFT) type methods. The resulting numerical methods allow efficient implementation on parallel computers. Numerical results confirm the high efficiency of the proposed iterative approach.


Introduction
In recent years, the problem of increasing the resolution of existing numerical solvers has become an urgent task in many areas of science and engineering. Most of the existing efficient solvers for structured matrices were developed for lower-order approximations of partial differential equations. The need for improved accuracy of underlying algorithms leads to modified discretized systems and as a result to the modification of the numerical solvers (see e.g. [24]).
The use of a lower order preconditioner for the efficient implementation of high-order finite-difference and finite-element schemes has been under consideration for a long time (see e.g. [14], [18]). In this paper, a compact sixth order finite-difference scheme is developed, and a direct solver based on a lower order approximation is considered as a preconditioner for the efficient implementation of this compact scheme for the Helmholtz equation within the Krylov subspace framework. This approach allows us to utilize existing lower order approximation solvers, which significantly simplifies the implementation of higher resolution numerical methods.
The model problem considered in the paper is the numerical solution of the Helmholtz equation with Dirichlet, Neumann, and/or Sommerfeld-like (radiation) boundary conditions
$$\Delta u + k^2 u = f \quad \text{in } \Omega, \qquad (1)$$
$$u = 0 \ \text{ on } \partial\Omega_1, \qquad \frac{\partial u}{\partial n} = 0 \ \text{ on } \partial\Omega_2, \qquad \frac{\partial u}{\partial n} - iku = 0 \ \text{ on } \partial\Omega_3, \qquad (2)$$
where $\Omega = \{0 \le x, y, z \le a\}$, $k$ is a complex valued constant, and $\partial\Omega_1$, $\partial\Omega_2$ and $\partial\Omega_3$ are different boundary sides of the rectangular computational domain $\Omega$. It is known that for a given error level in the numerical approximation to the solution of the Helmholtz equation, the quantity $(\mathrm{Re}(k))^{p+1}h^p$ needs to be constant, where $p$ is the order of the finite-difference scheme and $h$ is the grid size. This phenomenon is known as "pollution" [1,3]. One way of reducing the pollution error is to increase the order of accuracy of the scheme. In this paper a sixth order compact finite-difference scheme is considered to address this problem. In the case of the 3D Helmholtz equation with Dirichlet boundary conditions, this scheme was proposed by Sutmann in [21]. The 2D version of this scheme with Dirichlet and Neumann boundary conditions was developed in [17,22]. In this paper, the method is extended to the explicit compact sixth order approximation of Neumann and Sommerfeld-like boundary conditions in the 3D case. The extension of the known approach to the Neumann boundary conditions is straightforward, but in the case of the Sommerfeld-like boundary conditions the method is nontrivial and requires the introduction of a new auxiliary function. In the case of the variable coefficient Helmholtz equation and Sommerfeld-type boundary conditions, a fourth order compact approximation scheme was considered in [5].
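The pollution constraint above can be made concrete with a small back-of-the-envelope computation. The sketch below (in Python; the reference values k0 = 10 and n0 = 20 points per unit length are hypothetical choices, not taken from the paper) solves $k^{p+1}h^p = \text{const}$ for $h$ and compares the resolution a second order and a sixth order scheme would need at k = 100.

```python
import numpy as np

def points_needed(k, p, k0=10.0, n0=20.0):
    """Grid points per unit length keeping k^(p+1) * h^p at the value it
    has for n0 points at wavenumber k0 (hypothetical reference values)."""
    C = k0 ** (p + 1) * (1.0 / n0) ** p   # reference pollution constant
    h = (C / k ** (p + 1)) ** (1.0 / p)   # solve k^(p+1) h^p = C for h
    return 1.0 / h

n2 = points_needed(100.0, p=2)  # second order scheme
n6 = points_needed(100.0, p=6)  # sixth order scheme
print(n2, n6)  # the sixth order scheme needs far fewer points per unit length
```

Because the required resolution grows like $k^{(p+1)/p}$, the advantage of the higher order scheme widens as the wavenumber increases.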
The resulting discretization leads to a system of linear equations with block 27-diagonal structure, corresponding to the 27-point compact stencil. In general, the matrix of this system is neither positive definite nor Hermitian. Hence, most iterative methods either fail to converge or converge too slowly, which is impractical. For the solution of this system we propose using a combination of Krylov subspace-based methods and the FFT preconditioner. Concerning other approaches to the solution of the problem (1)-(2), we refer to Bayliss, Goldstein and Turkel [2] for a preconditioned conjugate-gradient algorithm, Kim [16] for a domain decomposition method, and Douglas, Hensley and Roberts [6] for an ADI algorithm. A very efficient multigrid method based on a shifted-Laplace preconditioner for the Helmholtz equation is presented in [23]. The analysis of multigrid algorithms in the case of nonsymmetric and indefinite problems can be found in [4].
On the other hand, the solution of this problem by a direct method based on Gaussian elimination requires a prohibitive amount of additional storage and computer time and thus has limited use. The most promising results in the solution of a similar problem have been obtained by preconditioned Krylov subspace methods in [9]. In this paper we generalize some approaches developed for the second-order central difference discretization of the Helmholtz equation by Elman and O'Leary [7,8] and the author and others [12] to the case of compact sixth order approximation scheme.
The key to the fast convergence of the suggested iterative method is the choice of the preconditioning matrix. In our algorithm we use a preconditioner based on the second order central difference approximation of the Helmholtz equation and corresponding boundary conditions. The inversion of the preconditioning matrix at each step of the Krylov subspace method is done by a direct solver based on the FFT technique which requires O(N 3 log N) operations, where N is the number of grid points in each direction.
Numerical experiments with test problems demonstrate the high resolution of the sixth order compact finite-difference scheme, as well as the computational efficiency of our preconditioned Krylov subspace type numerical method. In most situations, the GMRES method demonstrates the best convergence properties. However, in the case of sufficiently fine grids, we propose using a simple stationary two-level method (see e.g. [20]). This method arises naturally in the analysis of the convergence of the GMRES algorithm, so the choice of the parameters in this method is not based on the minimization of the spectral radius of the iteration matrix on each step but rather on the construction of a particular linear combination from a Krylov subspace. This is why, for the purpose of this paper, we took the liberty of calling this approach the Simplified Krylov Subspace (SKS) method. On sufficiently fine grids this method requires much less processor time than the GMRES method, though the number of iterations until convergence is larger than for the GMRES method, as should be expected. Also, since the implementation of the SKS algorithm does not require the calculation of scalar products, it has greater potential than the GMRES method for implementation on parallel computers. A distinguishing feature of these approaches is that the number of iterations required for convergence decreases as the size of the discretized system increases. To explain this fact we introduce "the order of the preconditioned matrix" as a parameter to quantify the rate of convergence of the condition number of the preconditioned matrix $AA_p^{-1}$ to 1 on a sequence of grids. In some simple situations, we were able to find this parameter analytically. We believe that this parameter is more informative than the commonly used notion of an "optimal preconditioner" (see e.g. [15], p. 196) in the case of lower order preconditioners and may be used as a guide in the further development of preconditioners of similar type. We must note, however, that even in this paper this parameter has limited applicability.
This conclusion is based on the theoretical analysis of some simple situations and confirmed in our numerical experiments. Some preliminary numerical results on the application of this approach were presented at the 10th International Conference on Mathematical and Numerical Aspects of Waves, Vancouver, Canada, 24-29 July, 2011 [11].
The rest of the paper is organized as follows. In Section 2, the main idea of the proposed sixth order approximation compact method is presented in the case of the 1D problem. To analyze the convergence of the algorithms developed here, variations of known convergence estimates for the Krylov subspace methods are considered. In this section, simplified approaches based on the Krylov subspace iterations are also presented. Section 3 focuses on the development of the sixth order compact approximation scheme in the case of Neumann and Sommerfeld-like boundary conditions. In Section 4, the effectiveness of the proposed algorithms is demonstrated on a series of test problems.

A one-dimensional model problem
Let $A$ and $A_p$ be matrices derived from sixth and second order approximations to (1) and the Dirichlet boundary condition (2) using a mesh $x_i = ih$, $i = 0, 1, \ldots, N+1$, $h = 1/(N+1)$. The standard notation for the first and second order central differences at the $i$th grid point is given by
$$\delta u_i = \frac{u_{i+1}-u_{i-1}}{2h}, \qquad \delta^2 u_i = \frac{u_{i+1}-2u_i+u_{i-1}}{h^2}, \qquad (3)$$
where $u_i = u(x_i)$. The difference operators $\delta_y$, $\delta_z$, $\delta^2_y$ and $\delta^2_z$ used in the following sections are defined similarly. The sixth order approximation to the second derivative at the $i$th grid point can be written as
$$u''_i = \delta^2 u_i - \frac{h^2}{12}u^{(4)}_i - \frac{h^4}{360}u^{(6)}_i + O(h^6). \qquad (4)$$
As usual in the case of compact schemes, we need to use the original equation to find appropriate relations to eliminate the fourth and sixth derivatives in (4). By using the second order central difference of the fourth derivative of $u$ at $x = x_i$ and the expression for the fourth derivative of $u(x)$ from (1), the relation (4) can be expressed in the form
$$\left(1+\frac{k^2h^2}{12}\right)\delta^2 u_i + \left(k^2 - \frac{k^6h^4}{240}\right)u_i = f_i + \frac{h^2}{12}\delta^2 f_i - \frac{h^4}{240}\left(f^{(4)}_i - k^2 f''_i + k^4 f_i\right) + O(h^6), \qquad (7)$$
where $f_i = f(x_i)$. After using (7), the discretized system corresponding to the compact sixth order approximation scheme for (1)-(2) can be presented as
$$\left(1+\frac{k^2h^2}{12}\right)\delta^2 U_i + \left(k^2 - \frac{k^6h^4}{240}\right)U_i = F_i, \qquad i = 1, \ldots, N, \qquad (8)$$
where $U_i$ is the sixth order discrete approximation to $u(x_i)$ and $F_i$ denotes the right hand side of (7). This system can also be rewritten in the form $AU = F$, where
$$A = \left(1+\frac{k^2h^2}{12}\right)\frac{1}{h^2}\Lambda + \left(k^2 - \frac{k^6h^4}{240}\right)I, \qquad \Lambda = \mathrm{tridiag}(1, -2, 1). \qquad (9)$$
In a similar way the preconditioning matrix based on the second order central difference approximation can be presented as
$$A_p = \frac{1}{h^2}\Lambda + k^2 I. \qquad (10)$$
Finally the right preconditioned system can be written
$$AA_p^{-1}W = F, \qquad U = A_p^{-1}W. \qquad (11)$$
The main goal of this paper is to demonstrate the computational efficiency of a preconditioning technique based on the use of a lower order approximation discrete system in the numerical implementation of higher order compact finite-difference schemes. To consider this construction in more general form, we introduce the definition of a $k$th order preconditioned system.
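To make the 1D construction concrete, the following Python sketch assembles $A$ and $A_p$ and checks the truncation error of the compact scheme against a smooth test function. The scheme coefficients follow one standard form of the sixth order compact construction for $u'' + k^2u = f$ and are an assumption where the derivation above is abridged; the test function $u = \sin(3\pi x)$ is an illustrative choice.

```python
import numpy as np

def helmholtz_1d_matrices(N, k):
    """Sixth order compact matrix A and second order matrix A_p for
    u'' + k^2 u = f on (0, 1), zero Dirichlet data, h = 1/(N+1).
    One standard form of the compact scheme (assumed coefficients)."""
    h = 1.0 / (N + 1)
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2          # delta^2 operator
    I = np.eye(N)
    A_p = T + k**2 * I
    A = (1 + (k * h)**2 / 12) * T + (k**2 - k**6 * h**4 / 240) * I
    return A, A_p, h

def rhs_sixth_order(f, d2f_exact, d4f_exact, k, N):
    """F_i = f_i + (h^2/12) delta^2 f_i - (h^4/240)(f'''' - k^2 f'' + k^4 f)_i."""
    h = 1.0 / (N + 1)
    x = np.linspace(0.0, 1.0, N + 2)
    fv = f(x)
    d2 = (fv[2:] - 2.0 * fv[1:-1] + fv[:-2]) / h**2     # delta^2 f, interior
    xi = x[1:-1]
    return (fv[1:-1] + h**2 / 12 * d2
            - h**4 / 240 * (d4f_exact(xi) - k**2 * d2f_exact(xi) + k**4 * f(xi)))

# truncation check with the illustrative choice u = sin(3 pi x), k = 20
k, mu, N = 20.0, 3 * np.pi, 63
u = lambda x: np.sin(mu * x)
f = lambda x: (k**2 - mu**2) * np.sin(mu * x)           # f = u'' + k^2 u
d2f = lambda x: -mu**2 * f(x)
d4f = lambda x: mu**4 * f(x)
A, A_p, h = helmholtz_1d_matrices(N, k)
xi = np.linspace(0.0, 1.0, N + 2)[1:-1]
res6 = A @ u(xi) - rhs_sixth_order(f, d2f, d4f, k, N)   # O(h^6) residual
res2 = A_p @ u(xi) - f(xi)                              # O(h^2) residual
print(np.max(np.abs(res6)), np.max(np.abs(res2)))
```

The sixth order residual is several orders of magnitude below the second order one on the same grid, which is exactly the gap the preconditioned iteration exploits.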
Definition 1 Given two nonsingular $N \times N$ matrices $A$ and $A_p$, a system in the form of (11) is said to be a $k$th order preconditioned system if the $N \times N$ matrix $AA_p^{-1}$ is diagonalizable and can be expressed as
$$AA_p^{-1} = V(I + D)V^{-1}, \qquad D = \mathrm{diag}(d_1, \ldots, d_N), \qquad |d_j| \le Ch^k < 1, \quad j = 1, \ldots, N,$$
where the constant $C$ does not depend on $h$. Using this definition we present the convergence analysis of the GMRES method applied to the system (11) in the following theorem.
Theorem 1 Let a system in the form of (11) be a $k$th order preconditioned system. Then the $n$th iteration $U^{(n)}$ of the GMRES method applied to this system satisfies the convergence estimate
$$\|F - AU^{(n)}\|_2 \le \kappa_2(V)\left(Ch^k\right)^n \|F\|_2. \qquad (12)$$
Proof. Since the matrix $AA_p^{-1}$ is diagonalizable, it is well known (see e.g. [10], [19]) that the residual of the $n$th iteration of the GMRES algorithm satisfies
$$\|r_n\|_2 \le \kappa_2(V) \min_{p_n \in P_n,\; p_n(0)=1} \max_j |p_n(1+d_j)| \, \|r_0\|_2.$$
Consider the polynomial of $n$th degree $p_n(x) = 1 - x\,y_{n-1}(x)$, where $y_{n-1}(x) = \sum_{k=0}^{n-1}\alpha^n_{k+1}x^k$ and the coefficients $\alpha^n_k$ satisfy the equation $p_n(1+x) = -\alpha^n_n x^n$, $n \ge 1$. It is easy to see that the coefficients $\alpha^n_k$, $k = 1, \ldots, n$, satisfy
$$\sum_{k=l-1}^{n}\binom{k}{l-1}\alpha^n_k = 0, \qquad l = 2, \ldots, n,$$
and the unique solution of this system is given by $\alpha^n_k = (-1)^{k-1}\binom{n}{k}$, $k = 1, \ldots, n$, so that $p_n(x) = (1-x)^n$. Then $|p_n(1+d_j)| = |d_j|^n \le (Ch^k)^n$, and the convergence estimate (12) for the GMRES method follows. The proof is elementary and could be significantly shortened, but we present it in this form in order to use it later in the construction of the simplified iterative method based on this proof.
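The polynomial identity at the heart of the proof is easy to check numerically: with $\alpha^n_k = (-1)^{k-1}\binom{n}{k}$, the residual polynomial collapses to $p_n(z) = (1-z)^n$, so $p_n(1+x) = (-1)^n x^n$. A small Python check:

```python
import numpy as np
from math import comb

def p_n(z, n):
    """p_n(z) = 1 - z * y_{n-1}(z) with alpha_k = (-1)^(k-1) * C(n, k)."""
    alpha = [(-1) ** (k - 1) * comb(n, k) for k in range(1, n + 1)]
    y = sum(alpha[k] * z ** k for k in range(n))   # y_{n-1}(z)
    return 1.0 - z * y

z = np.linspace(-2.0, 2.0, 41)
for n in range(1, 8):
    # p_n(1 + x) = (-1)^n x^n, i.e. p_n(z) = (1 - z)^n with p_n(0) = 1
    assert np.allclose(p_n(1.0 + z, n), (-1.0) ** n * z ** n)
print("residual-polynomial identity verified for n = 1..7")
```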
In the same way we can derive another useful estimate expressed in the following corollary.

Corollary 1 Let $AA_p^{-1} = V(I + D)V^{-1}$ with $D = \mathrm{diag}(d_1, \ldots, d_N)$ and $|d_j| \le q < 1$, $j = 1, \ldots, N$. Then the $n$th GMRES residual satisfies $\|F - AU^{(n)}\|_2 \le \kappa_2(V)\, q^n \|F\|_2$.
We can derive another well known estimate based on a standard application of Chebyshev polynomials (see e.g. [19]). We consider only the case of real valued matrices.
The following theorem provides the details of this technique.

Theorem 2 Let the preconditioned matrix $AA_p^{-1}$ be diagonalizable, $AA_p^{-1} = V(I+D)V^{-1}$, with real eigenvalues $1 + d_j$ contained in the interval $[1+m, 1+M]$, $-1 < m \le M$. Then the $n$th iteration $U^{(n)}$ of the preconditioned GMRES method applied to this system satisfies the Chebyshev-polynomial convergence estimate (18). Proof. This estimate immediately follows from the general complex valued matrix result (see e.g. [19], p. 206).
Next we apply the developed convergence estimates to the analysis of the solution of the system (11).

Theorem 3 Let the $N \times N$ matrices $A$ and $A_p$ be defined by (8) and (10), and let
$$hk < \frac{2\pi}{10}, \qquad \min_j \left|\frac{4\sin^2(j\pi h/2)}{h^2} - k^2\right| \ge \delta_0 > 0.$$
Then (11) is a second order preconditioned system and we have the convergence estimate
$$\|F - AU^{(n)}\|_2 \le \left(\frac{h^2k^4}{12\delta_0}\right)^n \|F\|_2. \qquad (19)$$
Proof. The matrices $A$ and $A_p$ have the same set $\{V_j, j = 1, \ldots, N\}$ of eigenvectors as the matrix $\Lambda$ from (9). The eigenvectors are given by (see e.g. [8])
$$(V_j)_i = \sqrt{2h}\,\sin(ij\pi h), \qquad i, j = 1, \ldots, N.$$
The matrix $V$ is unitary, so $\|V^{-1}\|_2 = \|V\|_2 = 1$. The eigenvalues of the matrices $A_p$ and $A$ are given by
$$\lambda^p_j = -\frac{4}{h^2}\sin^2\!\left(\frac{j\pi h}{2}\right) + k^2, \qquad \lambda_j = -\left(1+\frac{k^2h^2}{12}\right)\frac{4}{h^2}\sin^2\!\left(\frac{j\pi h}{2}\right) + k^2 - \frac{k^6h^4}{240}, \qquad (22)$$
respectively. We can write the eigenvalues of the preconditioned matrix $AA_p^{-1}$ in the form $\lambda_j/\lambda^p_j = 1 + d_j$. The second term on the right hand side can be estimated as $m \le |d_j| \le M$, where $m = 13h^4k^4/720$ and $M = h^2k^4/(12\delta_0)$. So $AA_p^{-1}$ is a second order preconditioned matrix, and using (12) we obtain the estimate stated in the theorem.
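The second order behaviour of the preconditioned spectrum can be observed numerically. The sketch below uses the closed-form eigenvalues of the 1D tridiagonal operators (with the same scheme coefficients assumed earlier; an illustration, not the paper's exact bound) and shows $\max_j |\lambda_j/\lambda^p_j - 1|$ dropping by roughly a factor of four per grid refinement for k = 20.

```python
import numpy as np

def max_ratio_gap(N, k):
    """max_j |lambda_j / lambda^p_j - 1| from the closed-form eigenvalues
    of the 1D tridiagonal operators (assumed scheme coefficients)."""
    h = 1.0 / (N + 1)
    j = np.arange(1, N + 1)
    t = 4.0 * np.sin(j * np.pi * h / 2.0) ** 2 / h**2   # spectrum of -delta^2
    lam_p = -t + k**2
    lam = -(1 + (k * h) ** 2 / 12) * t + k**2 - k**6 * h**4 / 240
    return np.max(np.abs(lam / lam_p - 1.0))

k = 20.0
gaps = [max_ratio_gap(N, k) for N in (63, 127, 255)]
print(gaps)  # shrinks by roughly a factor of 4 per grid refinement
```

This is the mechanism behind the accelerating convergence on finer grids: the contraction factor of the preconditioned iteration itself is $O(h^2)$.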
The first condition of this theorem, $hk < 2\pi/10$, is a common natural restriction that comes into play even when one simply wants to visualize a numerical solution for qualitative analysis: the solution is just not represented accurately if there are fewer than five points per half wavelength. Moreover, to avoid the so-called "pollution" phenomenon, Bayliss et al. [3] introduced the restriction that $k^{p+1}h^p$ should be constant, i.e. to maintain a fixed accuracy of a scheme, the number of grid points per wavelength must grow as $k^{1/p}$, where $p$ is the accuracy order of the scheme. So, if one satisfies the restriction that avoids the "pollution" phenomenon, the first condition of the theorem is satisfied almost automatically and is not much of a restriction at all.
The second condition of the theorem is a requirement that keeps $k^2$ away from the discrete spectrum of the operator. Since the resolvent set of a bounded linear operator is open, we can always achieve this by introducing a small change to $k^2$ if necessary.
Next, we present how we can choose δ 0 in some important cases.
In the simplest case of $k^2 < 9$, we can take $\delta_0 = 1/3$. Consider how to get an estimate for $\delta_0$ in the case of a given $k > \pi$ and a sequence of grids with the grid sizes $h_1 > h_2 > h_3 > \ldots$. The first restriction of the theorem gives us the condition $kh < 2\pi/10$. We can see that if the $j$ in the argument of
$$\min_j \left|\frac{4\sin^2(j\pi h/2)}{h^2} - k^2\right|$$
were to take on all real values from 1 to $N$, then the minimum of the expression under consideration would be zero. Now let us denote $\alpha(j) = j\pi h/2$ and consider for which $\alpha$ the expression is zero if $kh = 2\pi/10$; we consider this worst-case scenario to justify the approach for all $kh < 2\pi/10$. It is easy to find that $\sin\alpha_0 = kh/2 = \pi/10$, i.e. $\alpha_0 = \sin^{-1}(\pi/10) \approx 0.32$. So the minimum of the function occurs either at $j_0 = \lfloor 2\alpha_0/(\pi h)\rfloor$ or at $j_1 = j_0 + 1$, since $\sin(\alpha(j))$ is increasing on the interval $1 < j < N$. There exists a neighborhood of $\alpha_0$ which includes both $\alpha(j_0)$ and $\alpha(j_1)$ such that $\alpha < 0.64$, which allows us to use the Taylor series to represent $\sin^2(\alpha)$ in this neighborhood. We use only the first two terms of the series, i.e. $\sin^2(\alpha) = \alpha^2 - \frac{1}{3}\cos(2\xi)\alpha^4$, $0 < \xi < 0.64$. Substituting this expression into the equation shows that we just need to avoid the values of $k$ and $h_1$ for which one of the terms in the brackets is nonpositive. In some situations we would need to choose a sufficiently small grid step $h_1$ for the coarsest grid in a sequence. For example, in the case $k = 20$ and $h_1 = 1/32$,
$$\min\left(400 - 6^2\pi^2,\; 7^2\pi^2 - 400\right) - 1 > 43,$$
so we may take $\delta_0 = 43$.
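The choice of $\delta_0$ is easy to verify numerically: the distance from $k^2$ to the discrete spectrum of the second order operator can be computed directly on each grid. For k = 20 the sketch below shows that the gap stays above 43 on the whole sequence of grids starting from $h_1 = 1/32$, consistent with the Taylor-based estimate.

```python
import numpy as np

def spectral_gap(N, k):
    """min_j |4 sin^2(j pi h / 2)/h^2 - k^2|: distance from k^2 to the
    discrete spectrum of the second order operator on an N-point grid."""
    h = 1.0 / (N + 1)
    j = np.arange(1, N + 1)
    t = 4.0 * np.sin(j * np.pi * h / 2.0) ** 2 / h**2
    return np.min(np.abs(t - k**2))

k = 20.0
gaps = [spectral_gap(N, k) for N in (31, 63, 127, 255)]
print(gaps)  # stays above 43 on the whole sequence of grids
```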
This δ 0 is independent of h and can be used in our convergence analysis on a sequence of grids. It follows from (19) that the number of iterations for the Krylov subspace algorithm decreases as the grid size h decreases. This result proves that the algorithm developed here yields a very effective iterative technique for the implementation of a higher order approximation scheme. Later we will consider the extension of this method to 3D problems.
The proof of Theorem 1 suggests a simplified version of the Krylov subspace method for the solution of (11). Indeed, using the expression for the exact solution of the system (14), we have $\alpha^n_n = -\alpha^{n+1}_{n+1}$ for $n = 1, 2, \ldots$ and the following recurrence expression for $y_n$:
$$y_n(x) = 1 + (1 - x)\,y_{n-1}(x), \qquad y_0(x) = 1.$$
This allows us to derive the SKS algorithm described in the table labelled Algorithm 1.
Algorithm 1 The SKS method
1: Let $U^{(0)} = 0$ be the initial approximation, let the tolerance be tol, and let the maximum number of iterations be M
2: $r_0 = F$, $\delta_0 = \|r_0\|_2$ and $j = 1$
3: while $j < M$ and $\varepsilon \ge$ tol do
4:   $U^{(j)} = U^{(j-1)} + A_p^{-1} r_{j-1}$
5:   $r_j = F - AU^{(j)}$
6:   $\varepsilon = \|r_j\|_2 / \delta_0$
7:   $j = j + 1$
8: end while

This is a simple stationary two-level method (see e.g. [24]). However, the choice of the parameters in this algorithm is not based on the minimization of the spectral radius of the iteration matrix on each step but rather on the construction of a particular linear combination of vectors from a Krylov subspace. This is why, for the purpose of this paper, we call this approach the Simplified Krylov Subspace (SKS) method. The proposed method does not require an orthogonalization procedure, i.e. it requires no inner products, which is attractive for an implementation in a parallel computing environment. In addition, this method does not use estimates of the maximum and minimum eigenvalues of the preconditioned matrix, which is the drawback of the Chebyshev acceleration algorithm (see e.g. [19]). The Chebyshev acceleration method is naturally constructed in the proof of Theorem 2, and we will compare the numerical effectiveness of the methods in Section 4. We note that in most numerical experiments the GMRES method exhibits the best convergence properties, but in some situations the SKS algorithm proves to be more efficient. Some such examples are considered in Section 4.
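A minimal runnable sketch of the SKS iteration (one lower order solve per step, $U \leftarrow U + A_p^{-1}(F - AU)$) on the 1D model problem is given below. The matrix coefficients and the single-mode test function are illustrative assumptions consistent with the 1D construction sketched earlier, not the paper's exact test suite.

```python
import numpy as np

def matrices(N, k):
    """1D sixth order compact A and second order A_p (assumed coefficients)."""
    h = 1.0 / (N + 1)
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    I = np.eye(N)
    return (1 + (k*h)**2/12)*T + (k**2 - k**6*h**4/240)*I, T + k**2*I, h

def sks(A, A_p, F, tol=1e-10, max_it=100):
    """Stationary two-level (SKS) iteration: U <- U + A_p^{-1}(F - A U)."""
    U = np.zeros_like(F)
    r = F.copy()
    d0 = np.linalg.norm(r)
    for j in range(1, max_it + 1):
        U = U + np.linalg.solve(A_p, r)   # one lower order solve per step
        r = F - A @ U
        if np.linalg.norm(r) / d0 < tol:
            break
    return U, j

k, mu, N = 20.0, 3 * np.pi, 63
A, A_p, h = matrices(N, k)
x = np.linspace(0.0, 1.0, N + 2)[1:-1]
u = np.sin(mu * x)                        # illustrative exact solution
f = (k**2 - mu**2) * u                    # f = u'' + k^2 u
d2f = (k**2 - mu**2) * (np.sin(mu*(x + h)) - 2*np.sin(mu*x) + np.sin(mu*(x - h))) / h**2
F = f + h**2/12 * d2f - h**4/240 * (mu**4 + k**2*mu**2 + k**4) * f
U, its = sks(A, A_p, F)
print(its, np.max(np.abs(U - u)))         # a handful of iterations, tiny error
```

Note that the loop contains no inner products apart from the convergence check, which is the property that makes the method attractive for parallel implementation.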
The SKS algorithm also gives a good criterion for evaluating the quality of a preconditioner in the form of the order of the preconditioned matrix (11). In this paper we calculate it as follows. Consider two grids with grid sizes $h$ and $\gamma h$, where $\gamma > 1$. Let the $l_2$-norms of the residuals of the SKS method on the first two iterations be $\|r^h_1\|_2$ and $\|r^h_2\|_2$ on the first grid and $\|r^{\gamma h}_1\|_2$ and $\|r^{\gamma h}_2\|_2$ on the second. Then the order of the preconditioned matrix can be estimated as
$$\psi = \log_\gamma\!\left(\frac{\|r_2^{\gamma h}\|_2 \,/\, \|r_1^{\gamma h}\|_2}{\|r_2^{h}\|_2 \,/\, \|r_1^{h}\|_2}\right). \qquad (23)$$
We consider this parameter in the discussion of the results of the numerical experiments.
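The parameter $\psi$ can be estimated in a few lines: run two SKS steps on each of two grids and compare the residual reduction factors. In the sketch below (illustrative 1D setup with assumed scheme coefficients, $\gamma = 2$) the computed $\psi$ comes out close to 2, matching the second order preconditioning property.

```python
import numpy as np

def first_two_residual_ratio(N, k, mu=3 * np.pi):
    """||r_2|| / ||r_1|| after two SKS steps on the 1D model problem."""
    h = 1.0 / (N + 1)
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    I = np.eye(N)
    A = (1 + (k*h)**2/12)*T + (k**2 - k**6*h**4/240)*I   # assumed coefficients
    A_p = T + k**2*I
    x = np.linspace(0.0, 1.0, N + 2)[1:-1]
    F = np.sin(mu * x)                    # any nonzero right hand side works
    U = np.zeros(N)
    r = F.copy()
    norms = []
    for _ in range(2):                    # two SKS steps
        U = U + np.linalg.solve(A_p, r)
        r = F - A @ U
        norms.append(np.linalg.norm(r))
    return norms[1] / norms[0]

k, gamma = 20.0, 2.0
ratio_fine = first_two_residual_ratio(255, k)    # grid size h
ratio_coarse = first_two_residual_ratio(127, k)  # grid size gamma * h
psi = np.log(ratio_coarse / ratio_fine) / np.log(gamma)
print(psi)  # close to 2: a second order preconditioned matrix
```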

A sixth order approximation compact scheme
In this section we present a three dimensional compact sixth order approximation finite-difference scheme that was first introduced in [21], and we develop a sixth order compact explicit approximation of the Neumann and Sommerfeld-like boundary conditions (2). In this discussion we consider a uniform grid with step $h$ in all three directions. First, we consider a finite-difference approximation of (1) based on the central difference operators $\delta^2_x$, $\delta^2_y$ and $\delta^2_z$. Using the appropriate derivatives of (1), we can write the sixth order compact approximation of the Helmholtz equation in the 27-point form (25). For the two dimensional case this scheme was proposed in [17]. In the three dimensional case with Dirichlet boundary conditions, it was developed in [21]. We will consider the iterative implementation of this scheme based on preconditioned Krylov subspace type algorithms.

Boundary conditions
The implementation of the Dirichlet boundary conditions (2) is straightforward, but the explicit compact approximation of the Neumann and Sommerfeld-like boundary conditions requires careful consideration. The sixth order compact approximation of the Neumann boundary condition in the two dimensional case was considered in [17]. Here we extend this approach to the three dimensional problem.

Neumann boundary conditions
In this subsection the Neumann boundary conditions are considered in the form
$$\frac{\partial u}{\partial z}\Big|_{z=0} = g(x, y). \qquad (26)$$
We restrict our consideration to only one side of the computational domain $\Omega$. By using Taylor series, we can derive the sixth order approximation formula (27). To express the third and fifth derivatives in (27), we differentiate the original Helmholtz equation (1), assuming sufficient smoothness of the solution and the right hand side. After substitution of these derivatives into (27), the resulting expression is (28). Next, we approximate the third order mixed derivatives in the second bracket of (28) by a fourth order approximation formula, and those in the second line by a second order approximation scheme. Simplifying the result, presenting it in explicit form, then using (3) and replacing $u_{i,j,k}$ with $U_{i,j,k}$, we obtain a formula that allows us to eliminate the term $U_{i,j,-1}$ in (25). The explicit implementation of the Neumann boundary conditions on the other boundary pieces can be conducted in a similar way.

Sommerfeld-like boundary conditions
In this section we consider the implementation of the Sommerfeld-like boundary conditions (2), which are often used in scattering problems. The difficulty in applying methods similar to those described in the previous section can be seen in Equation (29): the last term in this equation requires the calculation of the fourth derivative of the right hand side of (26), and if it depends on the unknown variable $u$, the use of a compact sixth order approximation scheme becomes problematic.
To avoid this difficulty, we introduce a new variable $v = e^{ikz}u$. Then the Helmholtz equation becomes
$$\Delta v - 2ik\frac{\partial v}{\partial z} = \tilde{f}, \qquad (31)$$
where $\tilde{f} = e^{ikz}f$. Now (30) can be rewritten in the form
$$\frac{\partial v}{\partial z}\Big|_{z=0} = 0. \qquad (32)$$
The sixth order approximation of (32) becomes (33). Assuming sufficient smoothness of $v$ and $\tilde{f}$, and using (31) and (32), we can express the third and fifth derivatives of $v$ in the forms (34) and (35). To preserve the sixth order compact approximation in (33), we need to use a fourth order approximation for $\partial^2 v/\partial z^2$ in (34); a second order approximation is sufficient for the derivatives of $v$ in (35). First, let us consider the fourth order compact finite-difference approximation for $\partial^2 v/\partial z^2$, which can be presented in the form (36). Now we can use (32), (36) and the second order approximation of (35) to express (33) in the form (37). This equation gives an implicit compact sixth order approximation of (32). Next, the explicit implementation of the boundary condition (32), similar to (29), is developed by using equation (38), where $\mu_1$, $\mu_2$ and $\mu_3$ are parameters. We also use the fact that $\partial^5 v/(\partial z\,\partial x^4)|_{z=0} = 0$ and $\partial^5 v/(\partial z\,\partial y^4)|_{z=0} = 0$, which allows us to drop these derivatives. Now, by using the second order finite-difference approximation in (38) and adding it to (37), we derive formula (39). Next we choose the parameters $\mu_1$, $\mu_2$ and $\mu_3$ to match the coefficients in (25). Using these parameters and replacing $v_{i,j,l}$ with $U_{i,j,l}e^{ikz_l}$, we obtain an explicit compact sixth order approximation of the boundary condition (30). At the upper boundary $z = a$, the Sommerfeld-like boundary condition can be written as
$$\left(\frac{\partial u}{\partial z} - iku\right)\Big|_{z=a} = 0,$$
and the auxiliary substitution becomes $v = e^{-ikz}u$. The rest of the derivation is very similar to that for the lower boundary. Implementations of the sixth order compact approximation of the boundary conditions in the other directions are also similar to the calculations already presented.

Fast Fourier Preconditioner
The main idea of this paper is to utilize existing lower order approximation direct solvers as preconditioners in the implementation of a higher resolution scheme. One of the most popular methods for the approximate solution of the Helmholtz equation is the second order central difference scheme. We consider the cases with Dirichlet or Neumann boundary conditions on the sides of the computational domain $\Omega$, i.e. $u(0,y,z) = u(a,y,z) = u(x,0,z) = u(x,a,z) = 0$ or the corresponding homogeneous Neumann conditions. With $\Lambda^\beta_N$ denoting the tridiagonal matrix of the one dimensional second order difference operator and $D^\alpha_N = \mathrm{diag}(\alpha, 0, \ldots, 0, \alpha)$, where $\alpha$, $\beta$ and $N$ are parameters depending on the boundary conditions, the preconditioning matrix $A_p$ can be written as a sum of Kronecker products of these one dimensional matrices (43). Since each one dimensional factor is diagonalized by an FFT-type (fast sine or cosine) transform, the system with the matrix $A_p$ can be solved directly by transforming in the $x$ and $y$ directions and solving the resulting independent tridiagonal systems in the $z$ direction, which requires $O(N^3\log N)$ operations.
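A small 2D sketch may clarify the structure of this direct solver: a sine transform in $x$ (written here as a dense orthonormal matrix for clarity; in practice it is applied by an FFT-based DST in $O(N\log N)$) block-diagonalizes the second order operator, leaving one tridiagonal system per transformed line. Dirichlet conditions on all sides are assumed; the 2D reduction of the 3D solver is an illustrative simplification.

```python
import numpy as np

def solve_Ap_2d(F, k, N):
    """Direct solve of (delta_x^2 + delta_y^2 + k^2) U = F on an N x N
    interior grid with zero Dirichlet data: sine transform in x (dense
    here; an O(N log N) FFT-based DST in practice), then one tridiagonal
    solve per transformed line in y."""
    h = 1.0 / (N + 1)
    j = np.arange(1, N + 1)
    S = np.sqrt(2.0 * h) * np.sin(np.outer(j, j) * np.pi * h)  # orthonormal, S = S^{-1}
    lam_x = -4.0 * np.sin(j * np.pi * h / 2.0) ** 2 / h**2     # spectrum of delta_x^2
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2                 # delta_y^2
    G = S @ F                                  # transform along x (rows index x)
    U_hat = np.empty_like(G)
    for m in range(N):                         # N independent 1D problems in y
        U_hat[m] = np.linalg.solve(T + (k**2 + lam_x[m]) * np.eye(N), G[m])
    return S @ U_hat                           # inverse transform

# verify against a dense Kronecker-product solve on a small grid
N, k = 15, 5.0
h = 1.0 / (N + 1)
T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
A_p = np.kron(T, np.eye(N)) + np.kron(np.eye(N), T) + k**2 * np.eye(N * N)
F = np.random.default_rng(0).standard_normal((N, N))
U = solve_Ap_2d(F, k, N)
print(np.max(np.abs((A_p @ U.ravel()).reshape(N, N) - F)))  # ~ machine precision
```

In 3D the same idea applies transforms in two directions and keeps the tridiagonal solves in the third, giving the $O(N^3\log N)$ cost quoted above.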

Convergence analysis of 3D algorithm
In this section, the convergence of the proposed algorithms is considered in the case of the 3D Helmholtz equation with the Dirichlet boundary conditions (1), (2) on the rectangular computational domain $\Omega = \{0 \le x, y, z \le 1\}$ with uniform grid size $h = 1/(N+1)$. To ensure the uniqueness of the solution of the original boundary value problem, we assume that the coefficient $k^2$ is in the resolvent set of the Laplace operator defined on an appropriate function space. We also impose a common restriction on the number of grid points per wavelength, i.e. $kh < 2\pi/10$. The convergence analysis of the proposed algorithms follows the same lines as in Theorem 3. The preconditioning matrix is given by (43) with $\beta = \alpha_z = \beta_z = 0$ and $N_z = N$. To express the right hand side of the sixth order approximation scheme given in (25), we use the notation introduced in (43). In addition, we define the $N \times N$ matrix $A^1_{\beta,N} = \Lambda^\beta_N - 4I$. Then the resulting $N^3 \times N^3$ matrix $A$ can be written in terms of Kronecker products of these one dimensional matrices. Both matrices $A$ and $A_p$ have the same set of orthonormal eigenvectors $V_{i,j,l}$ (see e.g. [8]),
$$(V_{i,j,l})_{m,n,s} = (2h)^{3/2}\sin(im\pi h)\sin(jn\pi h)\sin(ls\pi h), \qquad 1 \le i, j, l, m, n, s \le N.$$
The matrix of eigenvectors is unitary, so the $l_2$-norms of $V$ and $V^{-1}$ are equal to one. To present the eigenvalues of the matrices, we introduce the notation $s_r = \sin^2(r\pi h/2)$, $r = i, j, l$. Then the eigenvalues of the matrix $A_p$ in the 3D case are given by
$$\lambda^p_{i,j,l} = -\frac{4}{h^2}(s_i + s_j + s_l) + k^2, \qquad (46)$$
and the eigenvalues $\lambda_{i,j,l}$ of $A$ are given by the corresponding expression for the sixth order scheme (25). Unfortunately, the eigenvalues of these matrices do not satisfy the hypotheses of Theorem 1, but we can still apply Corollary 1 to prove the convergence of the GMRES method and the SKS algorithm by showing that $\lambda_{i,j,l}/\lambda^p_{i,j,l} = 1 + d_{i,j,l}$, where $|d_{i,j,l}| < 1$ for all $i, j, l = 1, \ldots, N$, in some simple cases.
Let us assume that $N \ge 1$, i.e. $h \le 1/2$, and $k^2 < 25$. Then $s_r = \sin^2(\pi r h/2) \ge \sin^2(\pi h/2) \ge 2h^2$ for $r = i, j, l$. Using these inequalities, we obtain a presentation of the eigenvalues of the preconditioned matrix in the form $\lambda_{i,j,l}/\lambda^p_{i,j,l} = 1 + d_{i,j,l}$, where $|d_{i,j,l}| < 3/4$ for all $i, j, l = 1, \ldots, N$. Now we can apply Corollary 1 and obtain an estimate for the convergence rate of the GMRES and SKS algorithms:
$$\|F - AU^{(n)}\|_2 \le \left(\frac{3}{4}\right)^n \|F\|_2. \qquad (48)$$
Similarly to the 1D case, it is possible to show that this estimate holds for $k > 6$ and a sequence of grids with the grid sizes $h_1 > h_2 > \ldots$ for some $\delta_0 = \mathrm{const}$ and sufficiently small $h_1$. As in the case of the 1D problem, this follows from the fact that the resolvent set of a bounded linear operator is open. The proof is somewhat tedious but elementary, so we skip it. In the next table we present lower and upper bounds $m$ and $M$ for $\lambda_{i,j,l}/\lambda^p_{i,j,l} - 1$ for different values of $k$ and different grid step sizes. Also, the last two rows of the table present possible choices of $h_1$ and $\delta_0$ for different $k$. This result is weaker than in the 1D case: it says that the convergence is independent of the grid step, but it does not improve with the decrease of $h$. In this case, $A_p$ is said to be "an optimal preconditioner" (see e.g. [15], p.196). This situation is typical for multigrid-type algorithms, but it is not what we would expect based on the 1D example. In the following section we present the results of numerical experiments for several test problems which confirm that the actual convergence of the presented methods accelerates with the decrease of the grid size, i.e. the convergence in the numerical experiments significantly exceeds the estimate (48).

Numerical Results
In this section we present the results of numerical experiments which demonstrate the quality of the proposed numerical methods. These algorithms were implemented in Matlab 7.11.0 on an iMac with an Intel Core i7, 2.93 GHz processor. We also use the standard Matlab implementation of the GMRES method (the gmres function).

1D test problem
In the first series of numerical experiments, we consider the convergence of the compact sixth order approximation algorithm on a sequence of grids. In these tests we focus on the numerical solution of the 1D Dirichlet problem for the Helmholtz equation with zero boundary conditions on the interval $0 \le x \le 1$ and $k = 20$. The computational grid in our experiments is defined by $x_i = ih$, $i = 0, 1, \ldots, N+1$, where $N$ is the number of interior grid points. We consider a function $u(x_i)$, $i = 0, \ldots, N+1$, as the exact solution of the original boundary value problem with the corresponding right hand side $f(x_i)$. The first column of Table 2 displays the step size $h$ of the grids in our experiments. Columns 2-4 of the table present the number of iterations until convergence required by the preconditioned GMRES method, the SKS method (Algorithm 1) and the Chebyshev acceleration (CA) algorithm [19]. We use the stopping criterion $\|F - AU^{(n)}\|_2 / \|F\|_2 \le 10^{-10}$, where $U^{(n)}$ is the iterative solution on the $n$th iteration. Column 5 reports the approximation error of the numerical solution. In Column 6, the second order approximation error Err2 of the numerical solution obtained by using the preconditioner as a solver is presented for comparison. The number of grid points is chosen to satisfy the requirement $h < 2\pi/(10k)$. The values of the minimal and maximal eigenvalues of the preconditioned matrix used by the Chebyshev acceleration algorithm are calculated exactly by (22). These experiments demonstrate the sixth order convergence of the numerical solution to the exact solution of the original boundary value problem. Theorem 3 implies that the residual on the $n$th iteration of the presented iterative algorithms decreases with the increase in the number of grid points. The results of the numerical experiments confirm this conclusion. We have already shown that the order of the preconditioned matrix in this case is two. The numerical value of the constant $\psi$ calculated by using (23) is 1.99.
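The qualitative content of Table 2 can be reproduced in a few lines: solving the compact system directly shows roughly a 64-fold error reduction per grid halving, against a 4-fold reduction for the second order preconditioner used as a solver. The test function and scheme coefficients below are illustrative assumptions, not the paper's exact test case.

```python
import numpy as np

def solve_errors(N, k, mu=3 * np.pi):
    """Max-norm errors of the sixth order compact solve and of the second
    order preconditioner used as a solver (assumed scheme coefficients)."""
    h = 1.0 / (N + 1)
    T = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    I = np.eye(N)
    A = (1 + (k*h)**2/12)*T + (k**2 - k**6*h**4/240)*I
    A_p = T + k**2*I
    x = np.linspace(0.0, 1.0, N + 2)[1:-1]
    u = np.sin(mu * x)
    f = (k**2 - mu**2) * u                                 # f = u'' + k^2 u
    d2f = (k**2 - mu**2) * (np.sin(mu*(x + h)) - 2*np.sin(mu*x)
                            + np.sin(mu*(x - h))) / h**2
    F = f + h**2/12 * d2f - h**4/240 * (mu**4 + k**2*mu**2 + k**4) * f
    err6 = np.max(np.abs(np.linalg.solve(A, F) - u))
    err2 = np.max(np.abs(np.linalg.solve(A_p, f) - u))
    return err6, err2

e6a, e2a = solve_errors(31, 20.0)   # h = 1/32
e6b, e2b = solve_errors(63, 20.0)   # h = 1/64
print(e6a / e6b, e2a / e2b)         # roughly 64 versus roughly 4
```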
The 1D numerical experiments confirm the effectiveness of the iterative approaches under consideration and allows us to expect that the same properties hold for 3D problems.

3D test problems
In the next series of experiments we consider different boundary value problems for the 3D Helmholtz equation (1), (2). The computational domain is given by $\Omega = \{(x, y, z) \mid 0 \le x, y, z \le 1\}$ with a uniform grid in all three directions: $x_i = ih$, $i = 1, \ldots, N_x$, $y_j = jh$, $j = 1, \ldots, N_y$, and $z_m = mh$, $m = 1, \ldots, N_z$. The numbers of grid points $N_x$, $N_y$ and $N_z$ in the $x$, $y$ and $z$ directions are chosen to provide a uniform grid size.

Dirichlet problem
First, we consider the Helmholtz equation with the Dirichlet boundary conditions $u = 0$ and $k = 20$. This test is similar to the 1D case with $N_x = N_y = N_z = N$ and $h = 1/(N+1)$. As the solution of the boundary value problem we choose a smooth function $u(x, y, z)$, $0 \le x, y, z \le 1$, vanishing on the boundary.
In the first series of tests we consider the convergence of the compact high order approximation finite-difference scheme (25) on a sequence of grids, as well as the convergence properties of the GMRES, SKS and CA iterative algorithms. The iterative process is stopped when the initial residual is reduced by a factor of $10^{-10}$. The values of the minimal and maximal eigenvalues of the preconditioned matrix used by the Chebyshev acceleration algorithm are calculated exactly using (47). The description of the columns of Table 3 is similar to the description of Table 2. The test results confirm the sixth order approximation of the compact finite-difference scheme. Though in our theoretical analysis (47) we could not prove the desired $k$th order preconditioning property in the case of the 3D Dirichlet problem, the SKS algorithm still exhibits acceleration of the convergence with the decrease of grid size. We also mention that the total computer time required for convergence of the SKS and GMRES methods is approximately the same, while the Chebyshev acceleration algorithm requires about four times more computer time until convergence than the other two iterative algorithms. The SKS algorithm also has greater potential for efficient implementation on parallel computers than the GMRES method. One potential application of the proposed methods is to electromagnetic scattering problems. In these problems, the typical value of the coefficient $k$ is in the range between 10 and 50. For example, the propagation of an electromagnetic wave with 1 GHz frequency in a vacuum is described by the Helmholtz equation with $k \approx 20.9\ \mathrm{m}^{-1}$. In the next table we present the convergence of the algorithms presented here on a sequence of grids with different values of $k$. In every cell of the table we present the number of iterations for the three methods under consideration: GMRES, SKS, and CA. The stopping criterion is the same as in the previous series of experiments.
If any of these three methods fails to converge in 100 iterations, we indicate this by the symbol ">100", and in the case of divergence we use the symbol "div". From these numerical experiments we can see that the preconditioned GMRES and SKS methods demonstrate excellent convergence properties. We also observe that the convergence of the proposed algorithms improves with the increase in the number of grid points. In the analysis of the algorithm's convergence (48), it was shown that the number of iterations does not increase with the number of grid points. Moreover, the last row in the table indicates that the parameter $\psi$ (23) is approximately 2 for all values of $k$, which indicates that, similarly to the 1D case, the convergence of the 3D algorithm not only does not depend on the grid size but accelerates with the increase in the number of grid points. This also indicates that even when the SKS method does not converge, one can expect convergence on a finer grid. In all numerical tests, the parameter $\psi$ is calculated using the first two iterations of the SKS method on the grids with grid sizes $h = 1/128$ and $h = 1/256$. From the results presented, it is clear that slow convergence and the difficulties arising in the choice of the parameters $m$ and $M$ in the Chebyshev acceleration algorithm (18) make this method a poor choice for the implementation of the developed high-order approximation scheme. These numerical tests illustrate the fact that the Chebyshev polynomial is not optimal on the discrete spectrum, and this has a dramatic effect on the iterative methods based on this polynomial. Therefore, in the subsequent numerical experiments we focus on the convergence properties of the GMRES and SKS methods in our numerical framework.
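The stationary two-level scheme behind the SKS iteration discussed above (correct the current iterate by the lower order preconditioner applied to the residual, and stop when the initial residual is reduced by a factor of $10^{-10}$) can be sketched generically as follows. The function name and the interfaces are ours; in the paper's setting `apply_A` would be the sixth order operator and `solve_P` the FFT-based lower order direct solver.

```python
def two_level(apply_A, solve_P, b, x0, tol=1e-10, maxit=100):
    """Stationary two-level iteration x <- x + P^{-1}(b - A x),
    stopped when the initial residual is reduced by the factor tol."""
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, apply_A(x))]
    r0 = max(map(abs, r)) or 1.0          # norm of the initial residual
    for it in range(1, maxit + 1):
        x = [xi + ci for xi, ci in zip(x, solve_P(r))]   # residual correction
        r = [bi - ai for bi, ai in zip(b, apply_A(x))]
        if max(map(abs, r)) <= tol * r0:
            return x, it
    return x, maxit
```

The iteration converges whenever the spectral radius of $I - P^{-1}A$ is below one, which is exactly what the lower order preconditioning analysis in the paper is about; a toy example is a small matrix with its diagonal as $P$.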

Dirichlet-Neumann boundary conditions
In the next series of experiments we replace the Dirichlet boundary condition $u = 0$ in (2) with $u = u_b$ at the top of the computational domain ($z = 1$) and with the Neumann boundary condition $\partial u/\partial z = 0$ at $z = 0$. On the other boundaries of the rectangular computational domain $\Omega = \{0 \le x, y, z \le 1\}$ we still use the Dirichlet boundary condition $u = 0$. As a test function that satisfies the boundary conditions we consider a function $u(x, y, z)$. The number of grid points is chosen to provide the same grid step in all three directions, i.e. $N_x = N_y = N_z - 1$. The stopping criterion is the same as in the previous numerical experiments. Table 5 provides the results of the numerical tests on a sequence of grids with the coefficient $k = 20$. As already mentioned, we focus only on the two best algorithms: GMRES and SKS. The rest of Table 5 is similar to Table 3. The results of this series of numerical tests indicate the sixth order convergence of the resulting compact finite-difference scheme, including the explicit approximation of the Neumann boundary condition. As expected, the number of iterations decreases with the increase in the number of grid points in both the GMRES and SKS approaches. In the next table we present the number of iterations required for convergence for both algorithms on a sequence of grids and with different coefficients $k$ in the Helmholtz equation. The content of Table 6 is similar to the content of Table 4, except for the number of iterative approaches under consideration: we present only the number of iterations for the GMRES and SKS methods. It follows from this table that the GMRES approach is much more efficient on coarse grids, but on finer grids the processor time for the SKS method is the same as or even smaller than for the GMRES algorithm. For example, on the grid with $h = 1/512$ and $k = 50$ the corresponding processor times for the GMRES and SKS methods are 1467 sec and 1052 sec.
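To fix ideas about what an explicit approximation of the Neumann boundary condition means, the following 1D sketch shows only the classical second order ghost-point treatment, not the paper's sixth order compact scheme: the centered condition $u'(0) = (u_1 - u_{-1})/(2h) = 0$ gives the ghost value $u_{-1} = u_1$, which is folded into the first equation of the system. The function name and test setup are ours.

```python
import math

def neumann_dirichlet_1d(f, k, N):
    """Second-order ghost-point illustration (NOT the paper's sixth-order
    scheme): -u'' - k^2 u = f on [0,1] with u'(0) = 0 and u(1) = 0.
    The ghost value u[-1] = u[1] eliminates the Neumann boundary."""
    h = 1.0 / N
    # Tridiagonal system for u_0..u_{N-1}; u_N = 0 by the Dirichlet BC.
    diag = [2.0 / h**2 - k * k] * N
    lower = [-1.0 / h**2] * N      # lower[i] multiplies u_{i-1}
    upper = [-1.0 / h**2] * N      # upper[i] multiplies u_{i+1}
    upper[0] = -2.0 / h**2         # ghost point folds u_1 in twice at i = 0
    # Thomas algorithm: forward elimination, then back substitution.
    c, d = [0.0] * N, [0.0] * N
    c[0], d[0] = upper[0] / diag[0], f[0] / diag[0]
    for i in range(1, N):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / m
        d[i] = (f[i] - lower[i] * d[i - 1]) / m
    u = [0.0] * N
    u[-1] = d[-1]
    for i in range(N - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u   # approximations to u(x_i), x_i = i*h, i = 0..N-1
```

The compact sixth order boundary treatment developed in the paper plays the same structural role, i.e. it expresses the boundary unknowns through interior ones without enlarging the stencil.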
Also, as mentioned before, the SKS method has much greater potential than the GMRES algorithm for efficient implementation on parallel computers. We also expect that even if the SKS method does not converge on coarser grids, one can achieve convergence of this algorithm by reducing the grid size. We can see this in the example of the convergence of the methods under consideration in the case of $k \ge 30$. The values of the parameter $\psi$ vary significantly for different values of $k$, but $\psi > 0$ still indicates that the convergence of the iterative algorithms improves with the increase in the number of grid points.

Dirichlet-Sommerfeld-like boundary conditions
In the last series of numerical experiments, we consider the boundary value problem for the Helmholtz equation (1) with a combination of Sommerfeld-like (radiation) boundary conditions (2) at $z = 0$ and $z = 1$, and Dirichlet boundary conditions at all other boundaries of the rectangular computational domain $\Omega = \{0 \le x, y, z \le 1\}$. The source function $f$ in (1) is chosen so that the true solution is $u(x, y, z) = \varphi_1(x)\varphi_2(y)\varphi_3(z)$, where $\varphi_1(x) = x^3(1-x)^3$, $\varphi_2(y) = y(1-y)\cos(k\pi y)$, $\varphi_3(z) = e^{ik(z+1)} + e^{-ik(z-1)} - 2$, and $0 \le x, y, z \le 1$. Note that the analytic solution satisfies the radiation boundary condition (2). The numbers of grid points in the $x$-, $y$- and $z$-directions are chosen such that $N_x = N_y = N_z - 2$. First we consider the convergence of our sixth order approximation scheme on a sequence of grids with $k = 20$. The main goal of this series of numerical experiments is to investigate the convergence properties of the new explicit compact sixth order approximation of the Sommerfeld-like boundary conditions (41) proposed in this paper. As in previous test runs, the iterative process is stopped when the $l_2$-norm of the initial residual is reduced by a factor of $10^{-10}$. The results are presented in Table 7. The description of the data presented in this table is the same as in the case of Table 5. In this series of numerical experiments, both algorithms exhibit the same convergence properties as in the previous series of tests, i.e. the number of iterations decreases as the number of grid points increases. From the table, we can also see the sixth order convergence of the approximate solution to the exact solution of the boundary value problem on a sequence of grids. These results confirm the sixth order approximation of the compact explicit scheme proposed for the numerical implementation of the Sommerfeld-like boundary conditions.
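The claim that the analytic solution satisfies the radiation condition can be checked directly on the $z$-factor $\varphi_3$. A brief script: the homogeneous outgoing-wave form $u_z - iku = 0$ at $z = 1$ is our assumption about condition (2) at the top boundary (at $z = 0$ the condition carries nonzero boundary data), and the function names are ours.

```python
import cmath

def phi3(z, k):
    # phi_3(z) = e^{ik(z+1)} + e^{-ik(z-1)} - 2, the z-factor of the test solution
    return cmath.exp(1j * k * (z + 1)) + cmath.exp(-1j * k * (z - 1)) - 2.0

def dphi3(z, k):
    # exact derivative of phi_3 with respect to z
    return 1j * k * cmath.exp(1j * k * (z + 1)) - 1j * k * cmath.exp(-1j * k * (z - 1))

k = 20.0
# Outgoing-wave (Sommerfeld-like) relation at z = 1: u_z - ik u = 0.
res_top = dphi3(1.0, k) - 1j * k * phi3(1.0, k)
print(abs(res_top))   # vanishes up to rounding
```

Indeed, $\varphi_3'(1) = ik(e^{2ik} - 1) = ik\,\varphi_3(1)$, so the residual is identically zero, which is why the test function is a fair probe of the boundary approximation (41) alone.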
In the last table, we present the number of iterations required for the convergence of the GMRES and SKS methods for different coefficients $k$ in the Helmholtz equation with the same boundary conditions as in the previous series of numerical tests. The data presented in this table are similar to the data in Table 6.

Conclusions
New 3D compact sixth order explicit finite-difference schemes for the approximation of Neumann and Sommerfeld-like boundary conditions on rectangular computational domains with uniform grid size were developed and implemented. Together with the compact sixth order approximation scheme for the Helmholtz equation proposed in [21], these algorithms represent highly accurate methods for the solution of boundary value problems for the Helmholtz equation.
A new rapid iterative method based on the preconditioned Krylov subspace methodology was developed for the implementation of the proposed compact finite-difference schemes. The strategy is based on a combination of higher order approximation schemes and a lower order approximation preconditioner. The analysis of some typical test problems reveals attractive properties of the developed methods, such as the decrease in the number of iterations until convergence with the increase in the number of grid points, i.e. the size of the resulting matrix. This approach is especially attractive in situations in which a lower order approximation solver already exists and the original boundary value problem calls for a more accurate approximation.
The typical time to produce a sixth order accurate solution of the 3D Helmholtz equation with a combination of the Dirichlet and Sommerfeld boundary conditions on a $512^3$ grid was just 30 minutes on an iMac using only one 2.93 GHz Intel Core i7 processor. The method was tested for realistic parameter ranges typical of electromagnetic scattering problems. We note that the Sommerfeld-like boundary condition is only a first order approximation of the Sommerfeld condition on the unbounded domain, so the direct application of a higher order approximation of the Sommerfeld-like boundary condition to the solution of a scattering problem is not always justified. However, a straightforward extension of the approximation approach presented in this paper could be applied to the approximation of absorbing boundary conditions (see e.g. [23]) or can be used in the implementation of perfectly matched layer (PML) boundary conditions (see e.g. [13]).