Fast Hankel Transform Algorithm Based on Kernel Function Interpolation with Exponential Functions

The Pravin method for Hankel transforms is based on decomposing the kernel function into exponential functions. The drawbacks of this method are the difficulty of determining its parameters and its lack of adaptability to the kernel function, especially when monotonically decreasing functions are used to approximate convex ones. This paper proposes an improved scheme that adds a new base function to the interpolation procedure. The improved method retains the merit of the Pravin method, namely that it converts the Hankel integral into an algebraic calculation. Simulation results show that the improved method has high precision, high efficiency, and good adaptability to the kernel function. It can be applied to zero-order and first-order Hankel transforms.


Introduction
The Hankel transforms (HT) arise naturally, as a result of separation of variables, in problems posed in cylindrical coordinates, and hence involve Bessel functions [1]. The HT are frequently used as a tool for solving numerous scientific problems. For example, stratified models and cylindrical coordinates are widely used in geophysical research, where the HT of zero or first order arise in forward and inverse calculations. The general Hankel transform pair, with kernel $rJ_\nu(sr)$, is defined as
$$F_\nu(s) = \int_0^\infty f(r)\, J_\nu(sr)\, r\, dr,$$
and, the HT being self-reciprocal, its inverse is given by
$$f(r) = \int_0^\infty F_\nu(s)\, J_\nu(sr)\, s\, ds,$$
where $J_\nu$ is the $\nu$th-order Bessel function of the first kind. Analytical evaluations are rare, and numerical computation is difficult because of the oscillatory behavior of the Bessel function and the infinite length of the integration interval.
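As a quick sanity check of the transform pair (not part of the paper), the following sketch verifies a classical zero-order result, $\int_0^\infty e^{-ar} J_0(sr)\, r\, dr = a/(a^2+s^2)^{3/2}$, by direct numerical quadrature; the parameter values are illustrative.

```python
# Numerical check of a known zero-order Hankel transform pair:
# for f(r) = exp(-a r), F_0(s) = a / (a^2 + s^2)^{3/2}.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a = 1.0
for s in (0.5, 1.0, 2.0):
    # Direct evaluation of int_0^inf exp(-a r) J_0(s r) r dr
    num, _ = quad(lambda r: np.exp(-a * r) * j0(s * r) * r, 0, np.inf, limit=500)
    exact = a / (a**2 + s**2) ** 1.5
    print(f"s={s}: numeric={num:.8f}  exact={exact:.8f}")
```

The quadrature converges here because the exponential decay dominates the Bessel oscillation; for slowly decaying kernels this brute-force approach fails, which is what motivates the specialized methods surveyed below.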
To overcome these difficulties, various techniques are available in the literature. The first is the fast Hankel transform proposed by Siegman [2]. Here, via an exponential change of variables, the problem is transformed into the space of logarithmic coordinates and evaluated with the fast Fourier transform in that space. The disadvantage is that it requires sampling over an exponential grid, leading to significant errors for functions with an oscillating tail; moreover, it is sensitive to the smoothness of the functions in that space [3]. The second comprises the back-projection and projection-slice methods [4], which carry out the HT as a double integral by means of one of the standard integral representations of the Bessel functions. These methods, however, generally require efficient implementations of the Chebyshev and Abel transforms, and their computational complexity is unfortunately $O(N^2)$ [5]. In [6], the authors used the Filon quadrature philosophy to evaluate zero-order Hankel transforms, separating the integrand into the product of a slowly varying component and a rapidly oscillating one. This method works quite well for computing the transform at arguments $\geq 1$, but the error is appreciable for arguments $< 1$, and the calculation of the inverse HT is more difficult because the forward transform is no longer a smooth function but a rapidly oscillating one. In 1998, Yu et al. [7] gave another method to compute the zero-order quasi-discrete HT by approximating the input function by a Fourier-Bessel series over a finite integration interval, which leads to a symmetric transformation matrix. Later, in 2004, Guizar-Sicairos and Gutiérrez-Vega [8] obtained a powerful scheme to calculate the HT of order $n$ by extending the zero-order HT algorithm of Yu to higher orders; their algorithm is based on the orthogonality properties of the Bessel functions.
More recently, Postnikov [9] and Zykov and Postnikov [10] proposed, for the first time, a novel and powerful method for computing zero- and first-order HT by using the Haar basis and a piecewise-linear basis. After that, Singh et al. adopted linear Legendre multiwavelets [3], rationalized Haar (RH) wavelets [11], and a hybrid of block-pulse functions and Legendre polynomials [1], also treating the case of small random perturbations in the data function [12]. The above methods can be cast into a general class: expansion of the integrand in wavelets.
Similarly, Gupta et al. [13] proposed to expand the integral kernel function in orthogonal exponential functions. This expansion reduces the integral to a simple algebraic sum, because the analytical solution of the HT of an exponential function is readily available. The essence of the Pravin method is the adoption of exponential functions as the fitting or interpolating basis [14]. However, only monotonic exponential base functions are selected, and, as a result, it is difficult to approximate convex functions. In this paper, we extend this idea further and obtain an improved method, adding a new base function, for the fast calculation of HT. Numerical examples and comparisons are given to illustrate the proposed algorithm.

Kernel Function Interpolation Based on the Exponential Function.

In Pravin's innovative idea of exponential expansion, the kernel function $g(r)$, defined as the part that multiplies the Bessel function, is decomposed into a series of exponential functions. In other words, the kernel function is represented as a linear combination of exponential functions, and the coefficient of each exponential function is determined by interpolation. Because the analytical solution of the HT of an exponential function is readily available in the literature, we can finally obtain the integral through a simple algebraic sum. Since the order of the Bessel function in the majority of physical problems is an integer, zero or one, this paper focuses on zero- and first-order Hankel transforms. The calculation principle based on the exponential function expansion can be summarized as follows.
(1) Decomposition of the kernel function as a linear combination:
$$g(r) \approx \sum_{i=1}^{n} a_i\, e^{-\lambda_i r},$$
where $\lambda_i$ is the approximation parameter of the exponential function, determined by orthogonalization in the Pravin method, and $a_i$ is the coefficient, which can be determined by the collocation method. The interpolation form of the above formula can be expressed briefly in matrix notation as $\mathbf{G} = \mathbf{M}\mathbf{a}$, so the coefficients are $\mathbf{a} = \mathbf{M}^{-1}\mathbf{G}$, where the matrix element is $M_{ji} = e^{-\lambda_i r_j}$.
(2) Calculation of each component's integral: the integral of the product of each base function with the Bessel function is given in explicit form by
$$\int_0^\infty e^{-\lambda r} J_0(sr)\, dr = \frac{1}{\sqrt{\lambda^2 + s^2}}, \qquad \int_0^\infty e^{-\lambda r} J_1(sr)\, dr = \frac{\sqrt{\lambda^2 + s^2} - \lambda}{s\sqrt{\lambda^2 + s^2}}.$$
(3) HT calculation procedure: based on collocation and the explicit formulas, we obtain
$$F_\nu(s_j) \approx \sum_{i=1}^{n} a_i \int_0^\infty e^{-\lambda_i r} J_\nu(s_j r)\, dr,$$
which can be expressed as $\mathbf{F} = \mathbf{N}\mathbf{a}$, where the matrix element $N_{ji}$ is the explicit integral above evaluated at $\lambda_i$ and $s_j$. It is obvious that the Pravin method is simple and concise, and the calculation error is caused only by the interpolation. Nevertheless, the base functions are monotonically decreasing exponential functions, so it is difficult to approximate a convex function, such as a kernel function whose value is zero at the origin. Moreover, the parameter $\lambda$ is hard to determine in a way that ensures the method's accuracy and stability. Thus, this paper proposes adding a new type of base function, which has a convex form and whose value is zero at the origin.
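The three steps above can be sketched in a few lines. This is our own minimal reconstruction for the zero-order case, not the authors' code; the node and parameter choices are illustrative, and the test kernel is chosen to be exactly representable in the basis.

```python
# Minimal sketch of the exponential-interpolation HT scheme (zero order).
# Node and parameter choices are illustrative, not the paper's.
import numpy as np

def ht0_exponential(g, r_nodes, lambdas, s):
    """Approximate F_0(s) = int_0^inf g(r) J_0(s r) dr by interpolating
    g with sum_i a_i exp(-lambda_i r) and summing closed-form integrals."""
    M = np.exp(-np.outer(r_nodes, lambdas))   # M[j, i] = exp(-lambda_i r_j)
    a = np.linalg.solve(M, g(r_nodes))        # a = M^{-1} G (collocation)
    # Closed form: int_0^inf exp(-lam r) J_0(s r) dr = 1 / sqrt(lam^2 + s^2)
    return np.sum(a / np.sqrt(lambdas**2 + s**2))

# Check on g(r) = exp(-2r), whose exact transform is 1 / sqrt(4 + s^2)
r_nodes = np.linspace(0.2, 3.0, 8)
lambdas = np.linspace(0.5, 4.0, 8)            # includes lambda = 2.0
approx = ht0_exponential(lambda r: np.exp(-2 * r), r_nodes, lambdas, s=1.0)
exact = 1 / np.sqrt(4 + 1.0**2)
print(approx, exact)
```

The sketch makes the method's weakness visible as well: every column of `M` is monotonically decreasing in `r`, so a convex kernel that vanishes at the origin has no good representation in this basis.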

Improved Kernel Function Interpolation Theory.
Based on the idea of exponential expansion, the new type of base function should satisfy two conditions: it must be convex, and its HT must have an explicit form. After systematic analysis, we choose
$$b(r) = r^{\nu+1} e^{-\lambda r^2},$$
where $\nu$ is zero or one, corresponding to the order of the Bessel function. The improved interpolation formula is then
$$g(r) \approx \sum_{i=1}^{n} a_i\, e^{-\lambda_i r} + \sum_{k=1}^{m} b_k\, r^{\nu+1} e^{-\lambda_k r^2}. \quad (7)$$
In matrix form this is again a collocation system, so the coefficients $\mathbf{a}, \mathbf{b}$ can be determined by collocation. The HT of (7) is obtained term by term; the integrals of the second component are given by
$$\int_0^\infty r^{\nu+1} e^{-\lambda r^2} J_\nu(sr)\, dr = \frac{s^\nu}{(2\lambda)^{\nu+1}}\, e^{-s^2/(4\lambda)}. \quad (9)$$
For $\nu = 0$, combining the explicit exponential integrals with (9), we obtain the zero-order HT as
$$F_0(s) \approx \sum_{i=1}^{n} \frac{a_i}{\sqrt{\lambda_i^2 + s^2}} + \sum_{k=1}^{m} \frac{b_k}{2\lambda_k}\, e^{-s^2/(4\lambda_k)}, \quad (10)$$
and for $\nu = 1$ we obtain the first-order HT as
$$F_1(s) \approx \sum_{i=1}^{n} a_i\, \frac{\sqrt{\lambda_i^2 + s^2} - \lambda_i}{s\sqrt{\lambda_i^2 + s^2}} + \sum_{k=1}^{m} \frac{b_k\, s}{4\lambda_k^2}\, e^{-s^2/(4\lambda_k)}. \quad (11)$$
Thus the improved method retains the simple and concise character of the original: the HT calculation is still obtained by an algebraic summation.
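The improved scheme for $\nu = 0$ can be sketched as follows. This is our reconstruction under illustrative parameter choices; the test kernel mixes a convex term that vanishes at the origin with a monotone exponential, and is exactly representable in the combined basis.

```python
# Sketch of the improved zero-order scheme: exponentials exp(-lam r) plus
# the added convex bases r*exp(-mu r^2), both with closed-form HTs:
#   int_0^inf exp(-lam r)  J_0(s r) dr = 1 / sqrt(lam^2 + s^2)
#   int_0^inf r exp(-mu r^2) J_0(s r) dr = exp(-s^2/(4 mu)) / (2 mu)
import numpy as np

r = np.linspace(0.1, 3.0, 10)                    # collocation nodes
lams = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])  # exponential parameters
mus = np.array([0.5, 1.0, 1.5, 2.0])             # added-basis parameters

# Convex test kernel, exactly representable in this mixed basis
g = lambda x: x * np.exp(-x**2) + 0.3 * np.exp(-1.5 * x)

# Combined collocation matrix [exp(-lam_i r_j) | r_j exp(-mu_k r_j^2)]
M = np.hstack([np.exp(-np.outer(r, lams)),
               r[:, None] * np.exp(-np.outer(r**2, mus))])
coef = np.linalg.solve(M, g(r))
a, b = coef[:len(lams)], coef[len(lams):]

# Algebraic summation of the closed-form component transforms at s = 1
s = 1.0
ht = (a / np.sqrt(lams**2 + s**2)).sum() \
   + (b * np.exp(-s**2 / (4 * mus)) / (2 * mus)).sum()
exact = np.exp(-s**2 / 4) / 2 + 0.3 / np.sqrt(1.5**2 + s**2)
print(ht, exact)
```

Note that the added columns `r * exp(-mu * r**2)` are zero at the origin and convex, which is exactly the shape the pure exponential basis cannot reproduce.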

Numerical Examples
As illustrative numerical examples possessing exact solutions, consider the following functions. First, we carry out the kernel function interpolation simulation, which reveals the approximation ability of the improved method. Then, numerical examples of zero-order and first-order HT are implemented, and the method's accuracy and computational efficiency are analyzed.

Kernel Function Interpolation Simulation.
In this section, we focus on the interpolation ability of the added base functions, namely $b_1(r) = re^{-\lambda r^2}$ and $b_2(r) = r^2 e^{-\lambda r^2}$, adopted in the zero- and first-order HT, respectively. Two typical kernel functions, $G_1(r)$ and $G_2(r)$, are selected: the first is a monotonic function and the second a convex one. In the numerical calculations, the interpolation nodes are uniformly distributed with interval $h = 0.1$, and the test-node interval is $h_t = 0.01$. In Pravin's method, the approximation parameter is $\lambda_i = 0.5 \times (1.2)^{i-1}$, $i = 1, 2, \ldots, 51$: the first value is 0.5, the common ratio is 1.2, and the last (51st) value is 4550.2. The parameter thus varies over a wide range in exponential form, which may result in a poor condition number for the interpolation matrix. In this paper, we instead let the parameter vary linearly, $\lambda_i = 1.0 + (i-1)h$; the added constant avoids overflow in (10) or (11). Comparisons between the Pravin method and the improved method are listed in Tables 1 and 2, where Max Abs denotes the maximum absolute error and RMS the root mean square error.

From Tables 1 and 2 we can draw the following conclusions. First, the Pravin method obtains good approximation precision for the kernel function $G_1(r)$ under both parameter settings, because $G_1(r)$ and the interpolation bases $e^{-\lambda_i r}$ are all monotonically decreasing functions. However, for the convex function $G_2(r)$, the Pravin method is unstable when the parameter varies exponentially, while remaining stable under linear variation. Second, the improved method not only maintains stability for both types of functions but also achieves higher accuracy: its interpolation error is one to two orders of magnitude smaller than Pravin's. The numerical results also show that the condition number of the interpolation matrix decreases dramatically in the improved method: for example, it is reduced from about 1E38 to 1E17 for exponential parameter variation, and from about 1E20 to 1E18 for linear variation. The only deficiency of the improved method is that it is more time-consuming: 0.1032 s for the Pravin method versus 0.1557 s for the improved method.
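The conditioning contrast between the two parameter spacings is easy to reproduce. The sketch below is an illustrative reconstruction (the node layout is assumed, not taken verbatim from the paper, so the printed magnitudes will not match the paper's figures exactly); what it demonstrates is the qualitative gap between the geometric and linear spacings.

```python
# Condition numbers of the exponential collocation matrix under the two
# parameter spacings discussed above (illustrative node layout assumed).
import numpy as np

r = 0.1 + 0.1 * np.arange(51)            # 51 uniform nodes, h = 0.1
lam_exp = 0.5 * 1.2 ** np.arange(51)     # geometric: 0.5, 0.6, ..., ~4550.2
lam_lin = 1.0 + 0.1 * np.arange(51)      # linear: 1.0 + (i-1)h

conds = {}
for name, lam in (("exponential", lam_exp), ("linear", lam_lin)):
    M = np.exp(-np.outer(r, lam))        # M[j, i] = exp(-lambda_i r_j)
    conds[name] = np.linalg.cond(M)
    print(name, conds[name])
```

With the geometric spacing, the large-$\lambda$ columns underflow to essentially zero away from the origin, which is exactly why the interpolation matrix becomes numerically singular.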
In addition, the extrapolation ability has been investigated. We choose the region [0, 3] for interpolation and define the extrapolation test on the region [0, 5]. Numerical results show that the improved method's maximum absolute errors were 3.6249E−6 (for $G_1(r)$) and 2.6540E−6 (for $G_2(r)$), while the corresponding errors of the Pravin method were 7.9155E−5 and 0.5713. We can therefore conclude that the improved method is suitable for approximating the kernel function and hence for calculating the HT.

Hankel Transforms Calculation.
Here we have chosen four analytical integrals given by Anderson [15]; they involve zero- and first-order HT with monotonic and convex kernel types. To allow easy comparison with the interpolation simulation above, all parameters are chosen as in Section 3.1, changing only the collocation node interval to $h = 0.25$ ($\nu = 1$) or $h = 0.2$ ($\nu = 0$). Using (7) in the improved method, we obtain the coefficients and then substitute them into (10) or (11) to calculate the unknown values. Relative error comparisons of the four HT pairs among the Anderson, Pravin, and improved methods are listed in Tables 3 and 4.
From Tables 3 and 4, we conclude that the proposed method improves the HT accuracy, especially for integrals 1 and 3. Over the four integrals, the worst accuracy is of the order of 10^{-7}, so the algorithm obtains a good approximation of the theoretical value of the zero- or first-order HT. Figures 1, 2, 3, and 4 illustrate the absolute error of the improved method.
Numerical results show that the maximum absolute errors of the four integrals were 1.3639E−8, 1.7500E−7, 5.5460E−9, and 2.2042E−8, respectively, and the corresponding root mean square errors were 8.4464E−9, 1.6560E−7, 3.2052E−9, and 1.5964E−8. The time consumed by the above examples was about 0.05 s.

As a further application, consider the resistivity conversion function defined in (16). For the above parameters this conversion function is convex, so the Pravin method and the improved method were both used to approximate it and the associated kernel function. The interpolation nodes were set in the domain [0.01, 0.5] with interval $h = 0.01$, and the test-node interval was $h_t = 0.001$. Choosing the approximation parameter as $\lambda_i = 15.0 + (i-1)h$, the approximation results are shown in Figures 5 and 6. It is obvious that the resistivity conversion function and the kernel function are approximated much better by the improved method. Numerical results for the conversion-function approximation show that the maximum absolute error and root mean square error were 7.3090E−1 and 1.0405E−1 for the Pravin method, versus 4.2557E−2 and 3.2775E−3 for the improved method.
Finally, the Hankel transforms themselves were performed and compared. The improved method again shows significant superiority, as can be seen from Figure 7. The apparent

Table 3 :
Relative error comparison among Anderson, Pravin, and improved method for zero-order HT.

Table 4 :
Relative error comparison among Anderson, Pravin, and improved method for first-order HT.