The Scientific World Journal, Volume 2014, Article ID 390148, doi:10.1155/2014/390148

Research Article

VIM-Based Dynamic Sparse Grid Approach to Partial Differential Equations

Shu-Li Mei, College of Information and Electrical Engineering, China Agricultural University, P.O. Box 53, East Campus, 17 Qinghua Donglu Road, Haidian District, Beijing 100083, China

Academic Editors: L. Kong and E. Momoniat

Received 9 December 2013; Accepted 14 January 2014; Published 27 February 2014

Copyright © 2014 Shu-Li Mei. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Combining the variational iteration method (VIM) with sparse grid theory, a dynamic sparse grid approach for nonlinear PDEs is proposed in this paper. In this method, a multilevel interpolation operator is first constructed based on sparse grid theory. The operator is a linear combination of the basis functions and is independent of the particular basis chosen. Second, by means of the precise integration method (PIM), the VIM is extended to solve the nonlinear system of ODEs obtained from the discretization of the PDEs. In addition, a dynamic choice scheme for both the inner and the external grid points is proposed; unlike the traditional interval wavelet collocation method, the choice of both sets of grid points is dynamic. The numerical experiments show that our method outperforms the traditional wavelet collocation method, especially in solving PDEs with Neumann boundary conditions.

1. Introduction

The sparse representation of functions via a linear combination of a small number of basis functions has recently received a lot of attention in several mathematical fields such as approximation theory as well as signal and image processing . The advantage of the sparse grid approach is that it can be extended to nonsmooth solutions by adaptive refinement methods; that is, it can capture the steep waves appearing in the solution of the PDEs. In fact, the boundary conditions can also be viewed as nonsmooth parts of the solution, especially Neumann boundary conditions. Furthermore, the approach can be generalized from piecewise linear to high-order polynomials, and more sophisticated basis functions like interpolets, prewavelets, or wavelets can be used in a straightforward way . In practice, the standard piecewise linear multiscale basis in one dimension, that is, the Faber-Schauder basis, can be viewed as a scaling function in wavelet analysis. As an interpolation operator, the basis function acts as a Dirac delta function when operating on itself and its derivatives . Therefore, interpolating wavelets such as the Shannon wavelet, the Shannon-Gabor wavelet, the Haar wavelet, and the autocorrelation function of the Daubechies scaling function can be taken directly as basis functions for constructing the sparse grid approach.

The Faber-Schauder and Haar scaling functions do not have the smoothness property, so they cannot exactly represent the function to be approximated together with its derivatives. The autocorrelation function of the Daubechies scaling function has been widely used in various numerical methods for PDEs such as the wavelet collocation method and the sparse grid method. The Daubechies scaling functions possess almost all the desirable numerical properties, such as orthogonality, smoothness, and compact support, which help improve numerical accuracy and efficiency. However, the autocorrelation function of the Daubechies scaling function loses the orthogonality. In addition, the Daubechies scaling function has no exact analytical expression, which introduces error into the approximate solutions obtained by Daubechies wavelet numerical methods.

Cattani studied the properties of the Shannon wavelet function , which is orthogonal, continuous, and differentiable. It also has an advantage over the Hermite DAF in that it is an interpolating function, producing matrix equations that have the potential to be relatively sparse. In addition, the second-order approximation of a C2-function based on Shannon wavelet functions has been given ; the approximation is compared with the wavelet reconstruction formula and the error of approximation is explicitly computed . The advantages of the Shannon wavelet have been illustrated in solving PDEs in engineering , where it avoids shortcomings of the Daubechies wavelet such as the lack of the interpolation property. Furthermore, Cattani was the first to study fractional calculus problems with the Shannon wavelet. A perceived disadvantage of the Shannon scaling function is that it tends towards zero quite slowly, so a large number of nodal values contribute significantly when calculating the derivatives of the function to be approximated. For that reason, Hoffman et al. constructed the Shannon-Gabor wavelet  using a Gaussian window function, which in some ways improves the approximation to a Dirac delta function compared with the Shannon wavelet. However, the presence of the Gaussian window destroys the orthogonality possessed by the Shannon wavelet. In order to test the multilevel interpolation operator constructed in this paper, the Faber-Schauder and Shannon scaling functions and the autocorrelation function of the Daubechies scaling function are each taken as the basis employed in the multilevel interpolator to discretize the PDEs in the experiments.

There are many ways to solve the system of nonlinear ODEs obtained from the discretization of the nonlinear PDEs using the multilevel interpolator. Compared with the finite difference method, the retained grid points are sparse and the dimensionality of the system of ODEs is smaller. This helps improve efficiency, but a small change in the condition number or in the smoothness of the function to be approximated can destroy the accuracy of the numerical solution obtained by a traditional difference method. The variational iteration method, proposed by Inokuti et al. in 1978 , developed by He [10, 11], and widely used in various fields , gives the solution of nonlinear problems as an infinite series that usually converges rapidly to an accurate solution. By means of the precise integration method (PIM), the VIM has been extended to solve systems of nonlinear ODEs by Mei and Zhang . In fact, both PIM and VIM are analytical methods, so the impact of the system of ODEs on the choice of the sparse grid points can be neglected.

The dynamic choice of the inner grid points depends on the smoothness and the gradient of the solution at each point, while the choice of the external grid points depends on the boundary conditions. A good choice of the external grid points can effectively restrict the boundary effect. Most schemes are based on extending the solution function, such as the interval wavelet in the wavelet collocation method  and the Lagrange multiplier in sparse grid approaches . In most cases, the smoothness and the gradient near the boundary vary dynamically, as with Neumann boundary conditions. The extension method based on the Lagrange multiplier is not suited to changing the external grid points dynamically.

In our approach we want to achieve several goals so that the solution is both sparse and a good approximation. The first is to construct a multilevel interpolation operator with which the adaptive sparse grid approach can be simplified to a linear combination of interpolation operators. The operator should be independent of the basis functions, so that different basis functions can be used in the interpolation operator for different problems. The second is to construct an adaptive sparse grid approach by combining the multilevel interpolation operator and the VIM. The last is to construct a dynamic choice scheme for the external grid points, so that both the inner and the external grid points evolve with the solution, especially for PDEs with Neumann boundary conditions.

2. Multilevel Interpolator on Sparse Grids

2.1. Interpolating Multiresolution Analysis

Let us start with the interpolating multiresolution analysis  that is necessary for a detailed discussion of sparse grids for purposes of interpolation or approximation, respectively. Let $\phi(x)$ be any interpolating basis function, such as the Shannon or Faber-Schauder scaling function or the autocorrelation function of the Daubechies scaling function. This mother of all basis functions can be used to generate an arbitrary $\phi_k^j(x)$ by dilation and translation; that is,
(1) $\phi_k^j(x) = \phi(2^j x - k), \quad k = 0, 1, 2, \ldots, 2^j$.
It is easy to check this by introducing the spaces
(2) $V^j := \operatorname{span}\{\phi_k^j,\ k = 0, 1, 2, \ldots, 2^j\} \subset L^2(\mathbb{R})$.
For convenience of notation we use the superscript to denote the level of resolution and the subscript to denote the location in physical space. The sequence $\{V^j\}$ is a multiresolution analysis, that is, an increasing sequence of closed subspaces of $L^2(\mathbb{R})$. We call such a structure an interpolating multiresolution analysis because the function $\phi$ satisfies what we call the interpolation property, that is, $\phi_k^j(n 2^{-j}) = \delta_{n,k}$. We may then define an interpolation operator $I^j : C^0(0,1) \to V^j$,
(3) $I^j f = \sum_{k=0}^{2^j} f(x_k^j)\, \phi_k^j, \qquad x_k^j = k\, 2^{-j}$.
It is obvious that $\phi_k^j$ is just the nodal point basis of the finite-dimensional space $V^j$. Additionally, we introduce the hierarchical increments $W^j \subset V^{j+1}$,
(4) $W^j = \operatorname{span}\{\psi_k^j,\ k = 0, 1, 2, \ldots, 2^j - 1\}$,
where $\psi_k^j = \phi_{2k+1}^{j+1}$.
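As an illustration of the dilation-translation construction (1) and the nodal interpolation operator (3), the following sketch uses the Faber-Schauder hat function as the interpolating basis; the function names and the test function are illustrative choices, not code from the paper.

```python
# Sketch (illustrative): the Faber-Schauder hat as an interpolating
# scaling function, its dilates/translates (Eq. (1)), and the nodal
# interpolation operator I_j (Eq. (3)).

def phi(x):
    """Faber-Schauder (piecewise-linear hat) mother function."""
    return max(0.0, 1.0 - abs(x))

def phi_jk(j, k, x):
    """phi_k^j(x) = phi(2^j x - k), Eq. (1)."""
    return phi(2**j * x - k)

def interp(f, j, x):
    """I^j f = sum_k f(k 2^-j) phi_k^j(x), Eq. (3)."""
    return sum(f(k * 2.0**-j) * phi_jk(j, k, x) for k in range(2**j + 1))
```

At the dyadic nodes `phi_jk` realizes the interpolation property $\phi_k^j(n2^{-j}) = \delta_{n,k}$, and `interp` reproduces piecewise-linear functions exactly while matching any $f$ at the level-$j$ nodes.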

Letting $y_k^j = x_{2k+1}^{j+1}$, we may remark that the function $\psi_k^j$ satisfies
(5) $\psi_k^j(y_n^j) = \delta_{k,n}, \qquad \psi_k^j(y_n^{j'}) = 0, \quad j' < j$.
It is obvious that
(6) $V^{j+1} = V^j \oplus W^j$.
Such a multiresolution analysis has been extensively investigated in . According to this theory, any function $f \in C^0(0,1)$ can be represented approximately as
(7) $f \approx f^j = \sum_{k=0}^{2^{j_0}} \beta_k^{j_0} \phi_k^{j_0} + \sum_{j \ge j_0} \sum_{k=0}^{2^j-1} \alpha_k^j \psi_k^j$.
The coefficients $\beta_k^{j_0}$ and $\alpha_k^j$ are defined as
(8) $\beta_k^{j_0} = f(x_k^{j_0}), \qquad \alpha_k^j = f(y_k^j) - I^j f(y_k^j)$,
respectively. This shows that the coefficient $\alpha_k^j$ measures the lack of approximation of $f$ by $I^j f$.
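The hierarchical coefficients of (8) can be computed directly from nodal values. The sketch below (again using the Faber-Schauder hat basis as an illustrative choice) evaluates the surpluses $\alpha_k^j = f(y_k^j) - I^j f(y_k^j)$; for smooth $f$ they decay like $4^{-j}$, which is what makes thresholding them effective.

```python
# Sketch (illustrative, not the paper's code): hierarchical surpluses of
# Eq. (8) with the Faber-Schauder hat function as interpolating basis.

def hat(x):
    """Piecewise-linear hat: hat(0) = 1, support (-1, 1)."""
    return max(0.0, 1.0 - abs(x))

def I_j(f, j, x):
    """Level-j nodal interpolant, Eq. (3)."""
    return sum(f(k * 2.0**-j) * hat(2**j * x - k) for k in range(2**j + 1))

def surpluses(f, j):
    """alpha_k^j = f(y_k^j) - I^j f(y_k^j) at y_k^j = (2k+1) 2^-(j+1)."""
    ys = [(2 * k + 1) * 2.0**-(j + 1) for k in range(2**j)]
    return [f(y) - I_j(f, j, y) for y in ys]
```

For $f(x) = x^2$ the surplus magnitude at level $j$ is exactly $4^{-j}/4$, so each refinement divides the coefficients by four, while a linear $f$ has vanishing surpluses at every level.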

2.2. Multilevel Interpolation Operator on Sparse Grids

Equation (7) is an approximate representation of the function $f$, which is not unique since the set of functions involved is not linearly independent. In this section, we will try to determine the sparsest representation, that is, a representation with a maximal number of vanishing coefficients among $\{\alpha_k^j,\ k = 0, 1, \ldots, 2^j,\ j \in \mathbb{Z}\}$. The conventional scheme in signal processing, acquiring the entire signal and then compressing it, was questioned by Donoho and Elad : this technique uses tremendous resources to acquire often very large signals, just to throw away information during compression. The popular solution is the compressed sensing technique proposed by Donoho . In contrast, we try to achieve the same goals by constructing a multilevel interpolation operator that combines the interpolating multiresolution analysis described above with wavelet transform theory .

Let us start with the definition of the interpolation operator
(9) $u^J(x) = \sum_{i \in Z_\Omega^J} I_i(x)\, u_i^J, \qquad Z_\Omega^J := \{0, 1, 2, \ldots, 2^J\}$,
where $I_i(x)$ is the interpolation function. According to wavelet transform theory, the function $u(x)$ can be expressed approximately as
(10) $u^J(x) = \sum_{k_0=0}^{2^{j_0}} u(x_{k_0}^{j_0})\, \varphi_{k_0}^{j_0}(x) + \sum_{j=j_0}^{J-1} \sum_{k \in Z^j} \alpha_k^j\, \psi_k^j(x)$,
where $Z^j := \{0, 1, 2, \ldots, 2^j - 1\}$, and the interpolation wavelet transform coefficient can be written as
(11) $\alpha_k^j = u(x_{2k+1}^{j+1}) - \left[\sum_{k_0=0}^{2^{j_0}} u(x_{k_0}^{j_0})\, \varphi_{k_0}^{j_0}(x_{2k+1}^{j+1}) + \sum_{j_1=j_0}^{j-1} \sum_{k_1=0}^{2^{j_1}-1} \alpha_{k_1}^{j_1}\, \psi_{k_1}^{j_1}(x_{2k+1}^{j+1})\right] = \sum_{n=0}^{2^J} \left[R_{2k+1,n}^{j+1,J} - \sum_{k_0=0}^{2^{j_0}} R_{k_0,n}^{j_0,J}\, \varphi_{k_0}^{j_0}(x_{2k+1}^{j+1})\right] u(x_n^J) - \sum_{j_1=j_0}^{j-1} \sum_{k_1=0}^{2^{j_1}-1} \alpha_{k_1}^{j_1}\, \psi_{k_1}^{j_1}(x_{2k+1}^{j+1})$,
where $0 \le j \le J-1$, $k \in Z^j$, $n \in Z_\Omega^J$, and $R$ is the restriction operator defined as
(12) $R_{l,m}^{i,j} = \begin{cases} 1, & x_l^i = x_m^j, \\ 0, & \text{otherwise}. \end{cases}$
Suppose
(13) $\alpha_k^j = \sum_{n=0}^{2^J} C_{k,n}^{j,J}\, u(x_n^J)$.
Substituting (13) into (11), we obtain
(14) $C_{k,n}^{j,J} = R_{2k+1,n}^{j+1,J} - \sum_{k_0=0}^{2^{j_0}} R_{k_0,n}^{j_0,J}\, \varphi_{k_0}^{j_0}(x_{2k+1}^{j+1}) - \sum_{j_1=j_0}^{j-1} \sum_{k_1=0}^{2^{j_1}-1} C_{k_1,n}^{j_1,J}\, \psi_{k_1}^{j_1}(x_{2k+1}^{j+1})$.
If $j = j_0$, then
(15) $C_{k,n}^{j,J} = R_{2k+1,n}^{j+1,J} - \sum_{k_0=0}^{2^{j_0}} R_{k_0,n}^{j_0,J}\, \varphi_{k_0}^{j_0}(x_{2k+1}^{j+1})$.
Substituting the restriction operator (12) and the wavelet transform coefficient (13) into (10), the approximate expression of the solution $u(x)$ is obtained as
(16) $u^J(x) = \sum_{i \in Z_\Omega^J} \left(\sum_{k_0=0}^{2^{j_0}} R_{k_0,i}^{j_0,J}\, \varphi_{k_0}^{j_0}(x) + \sum_{j=j_0}^{J-1} \sum_{k \in Z^j} C_{k,i}^{j,J}\, \psi_k^j(x)\right) u(x_i^J)$.
According to the definition of the interpolation operator (9), it is easy to obtain the expression of the interpolation operator as follows:
(17) $I_i(x) = \sum_{k_0=0}^{2^{j_0}} R_{k_0,i}^{j_0,J}\, \varphi_{k_0}^{j_0}(x) + \sum_{j=j_0}^{J-1} \sum_{k \in Z^j} C_{k,i}^{j,J}\, \psi_k^j(x)$.
The corresponding $m$th-order derivative of the interpolation operator is
(18) $D_i^{(m)}(x) = \sum_{k_0=0}^{2^{j_0}} R_{k_0,i}^{j_0,J}\, \varphi_{k_0}^{j_0\,(m)}(x) + \sum_{j=j_0}^{J-1} \sum_{k \in Z^j} C_{k,i}^{j,J}\, \psi_k^{j\,(m)}(x)$.
Substituting (17) and (18) into the nonlinear PDEs changes them into a system of nonlinear ODEs, the approximate analytical solution of which can be obtained with the variational iteration method (VIM).
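To make the sparsification goal of this section concrete, here is a small self-contained sketch: build the hierarchical representation (10) level by level, then drop every coefficient below a threshold, as the threshold scheme of Section 4 does. The names (`build`, `sparsify`, `evalrep`), the hat basis, and the threshold value are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the sparsest-representation idea: hierarchical
# coefficients (Eq. (10)) built level by level, then thresholded.

def hat(x):
    return max(0.0, 1.0 - abs(x))

def build(f, j0, J):
    """Nodal values beta on level j0 plus surpluses alpha on levels j0..J-1."""
    beta = [f(k * 2.0**-j0) for k in range(2**j0 + 1)]
    alpha = {}
    for j in range(j0, J):
        for k in range(2**j):
            y = (2 * k + 1) * 2.0**-(j + 1)
            # coarse part ...
            s = sum(b * hat(2**j0 * y - k0) for k0, b in enumerate(beta))
            # ... plus all previous-level details (same-level psi vanish at y)
            s += sum(a * hat(2**(jj + 1) * y - (2 * kk + 1))
                     for (jj, kk), a in alpha.items() if jj < j)
            alpha[(j, k)] = f(y) - s
    return beta, alpha

def evalrep(beta, alpha, j0, x):
    """Evaluate the multilevel representation (10) at x."""
    s = sum(b * hat(2**j0 * x - k0) for k0, b in enumerate(beta))
    s += sum(a * hat(2**(j + 1) * x - (2 * k + 1))
             for (j, k), a in alpha.items())
    return s

def sparsify(alpha, eps):
    """Keep only coefficients at or above the threshold eps."""
    return {jk: a for jk, a in alpha.items() if abs(a) >= eps}
```

For $f(x) = x(1-x)$ with $j_0 = 1$, $J = 5$, a threshold of $10^{-3}$ keeps 14 of the 30 detail coefficients, while the unthresholded representation still interpolates $f$ at every level-5 node.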

3. Coupling Technique of VIM and Sparse Grid Method for Nonlinear PDEs

As mentioned above, the multilevel interpolation operator is independent of the basis functions; that is, any basis function with the interpolation property can be employed in (17) directly. But a basis function without an $m$th-order derivative cannot be employed in (18) directly. In this section, we consider parabolic PDEs with second-order derivatives of the form
(19) $-\dfrac{\partial}{\partial x}\left(p(x)\dfrac{\partial u}{\partial x}\right) + r(x)\dfrac{\partial u}{\partial x} + q(x)\,u = \dfrac{\partial u}{\partial t} + f(x), \quad x \in [a,b],\ (x,t) \in D$,
$u(a,0) = \alpha, \qquad p(b)\dfrac{\partial u(b,0)}{\partial x} + g(b)\,u(b,0) = \beta$,
where $D$ is the definition domain in the $x$-$t$ plane.

Therefore, two cases will be discussed in detail in the following. One is that the basis function employed in (7) has a second-order derivative; the other concerns the Faber-Schauder scaling function.

3.1. Basis Function with $C^2$ Continuity

Substituting (16) into (19), it is easy to obtain nonlinear matrix differential equations of the form
(20) $L(\dot V, V, t) + N(\dot V, V, t) = G(t)$,
where $L$ is a linear operator, $N$ a nonlinear operator, $G(t)$ an inhomogeneous term, and $V$ an $n$-dimensional unknown vector; the dot stands for differentiation with respect to the time variable $t$. For convenience, (20) can be rewritten as
(21) $\dot V - HV - F(\dot V, V, t) = 0$,
where $H$ is a given $n \times n$ constant matrix and $F(\dot V, V, t)$ is an $n$-dimensional nonlinear external force vector.

According to the VIM, we can write down a correction functional as follows:
(22) $V_{n+1}(t) = V_n(t) + \int_0^t \lambda \left[\dot V_n(\tau) - H V_n(\tau) - F(\dot{\tilde V}_n, \tilde V_n, \tau)\right] d\tau$,
where $\lambda$ is a general Lagrange vector multiplier  which can be identified optimally via variational theory. The subscript $n$ denotes the $n$th approximation and $\tilde V_n$ is considered as a restricted variation ; that is, $\delta \tilde V_n = 0$.

Using the VIM, the stationary conditions of (22) can be obtained as follows:
(23) $\dot\lambda(\tau) + \lambda(\tau) H = 0, \qquad \left.\bigl(I + \lambda(\tau)\bigr)\right|_{\tau=t} = 0$.
The Lagrange vector multiplier can therefore be readily identified:
(24) $\lambda(\tau) = -e^{H(t-\tau)}$.

As a result, we obtain the following iteration formula:
(25) $V_{n+1}(t) = V_n(t) - \int_0^t e^{H(t-\tau)} \left[\dot V_n(\tau) - H V_n(\tau) - F(\dot{\tilde V}_n, \tilde V_n, \tau)\right] d\tau$.

According to the VIM, we can start with an arbitrary initial approximation that satisfies the initial condition. So we take the exact analytical solution of $\dot V - HV = 0$ as the initial approximation; that is,
(26) $V_0(t) = e^{Ht} A$,
where $A$ is the given initial value vector.

Substituting (26) into (25) and simplifying, we have
(27) $V_{n+1}(t) = V_n(t) + \int_0^t e^{H(t-\tau)}\, F(\dot{\tilde V}_n, \tilde V_n, \tau)\, d\tau$.
According to the theory of matrices, an analytical expression of the external force $F(\dot{\tilde V}_n, \tilde V_n, \tau)$ is now required, but it is not always available unless $F(\dot{\tilde V}_n, \tilde V_n, \tau)$ is a constant vector $f$; that is,
(28) $F(\dot{\tilde V}_n, \tilde V_n, \tau) = f$.
The integration term of (27) is then
(29) $\int_0^t e^{H(t-\tau)} f\, d\tau = (e^{Ht} - I)\, H^{-1} f$,
where the exponential matrix $e^{Ht}$ can be calculated accurately by the PIM and $I$ is the identity matrix. Substituting (29) into (27), we obtain the variational iteration formula of the matrix differential equation as follows:
(30) $V_{n+1}(t) = V_n(t) + (e^{Ht} - I)\, H^{-1} f$.
Here $e^{Ht}$ can be computed exactly by means of the precise integration method (PIM) .
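A minimal numerical sketch of (26)-(30) for a constant forcing $f$: one VIM step from $V_0 = e^{Ht}A$ already gives $e^{Ht}A + (e^{Ht}-I)H^{-1}f$, the exact solution of $\dot V = HV + f$. The matrix exponential is built in the PIM spirit (a fine substep $\tau = t/2^N$ followed by $N$ squarings, carrying $e^{H\tau}-I$ rather than $e^{H\tau}$ so the small increment is not lost). The pure-Python 2×2 helpers and $N = 20$ are illustrative choices, not the paper's implementation.

```python
# Sketch of Eqs. (26)-(30) with a PIM-style matrix exponential (2x2 case).

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mscale(c, A):
    return [[c * a for a in row] for row in A]

def pim_expm(H, t, N=20):
    """e^{Ht}: Ta accumulates e^{H tau} - I on the fine substep, then the
    identity (I + Ta)^2 = I + (2 Ta + Ta Ta) is applied N times."""
    tau = t / 2.0**N
    Htau = mscale(tau, H)
    Ta, term = Htau, Htau
    for p in (2.0, 3.0, 4.0):          # 4-term Taylor of e^{H tau} - I
        term = mscale(1.0 / p, mmul(term, Htau))
        Ta = madd(Ta, term)
    for _ in range(N):                  # N doublings
        Ta = madd(mscale(2.0, Ta), mmul(Ta, Ta))
    I = [[1.0, 0.0], [0.0, 1.0]]
    return madd(I, Ta)

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def vim_step(H, A, f, t):
    """V1 = e^{Ht} A + (e^{Ht} - I) H^{-1} f, per Eqs. (26) and (30)."""
    E = pim_expm(H, t)
    I = [[1.0, 0.0], [0.0, 1.0]]
    EmI = madd(E, mscale(-1.0, I))
    return madd(mmul(E, A), mmul(EmI, mmul(inv2(H), f)))
```

With $H = \operatorname{diag}(-1,-2)$, $A = (1,1)^T$, $f = (1,0)^T$, the first component solves $\dot v = -v + 1$, $v(0)=1$, whose solution is identically 1, and the second decays as $e^{-2t}$.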

3.2. Basis Function with $C^1$ Continuity

The Faber-Schauder scaling function is the typical basis with $C^1$ continuity. For convenience in constructing the variational equation, the parameter $t$ should be discretized as $t_0, t_1, t_2, \ldots, t_m, t_{m+1}, \ldots$, where $t_0 = 0$ and $t_m = m\Delta t$. Then $\partial u/\partial t$ can be approximated as
(31) $\dfrac{\partial u}{\partial t} \approx \dfrac{1}{\Delta t}\left[u(x, m\Delta t) - u(x, (m-1)\Delta t)\right]$.
Substituting the above equation into (19), we obtain
(32) $-\dfrac{d}{dx}\left(p(x)\dfrac{du}{dx}\right) + r(x)\dfrac{du}{dx} + q(x)\,u = F(x), \quad x \in [a,b],\ (x,t) \in D$,
$u(a,0) = \alpha, \qquad p(b)\dfrac{du(b,0)}{dx} + g(b)\,u(b,0) = \beta$,
$F(x) = f(x) + \dfrac{1}{\Delta t}\left[u(x, m\Delta t) - u(x, (m-1)\Delta t)\right]$.
Obviously, this is an initial-boundary value elliptic PDE. Using the virtual displacement theory, the variational equation can be obtained as
(33) $a(u,v) = G(v), \quad u \in H_E^1(a,b),\ v \in H_{0E}^1(a,b)$,
where
(34) $H_E^1(a,b) := \{u \in H^1(a,b) \mid u(a) = \alpha\}, \qquad H_{0E}^1(a,b) := \{v \in H^1(a,b) \mid v(a) = 0\}$,
$a(u,v) := \int_a^b \left[p(x)\dfrac{du}{dx}\dfrac{dv}{dx} + r(x)\dfrac{du}{dx}v + q(x)\,uv\right]dx + g(b)\,u(b)\,v(b)$,
$G(v) := \int_a^b F(x)\,v(x)\,dx + \beta\, v(b)$,
and $H^1(a,b)$ is the Sobolev space.

According to the interpolation wavelet transform theory, the variables $u$ and $v$ can be approximated as
(35) $u(x,t) = \sum_{k=0}^{2^{j_0}} u(x_{j_0,k})\, w_k^{j_0}(x) + \sum_{j=j_0}^{J-1} \sum_{k=0}^{2^j-1} \alpha_{j,k}(t)\, w_{2k+1}^{j+1}(x)$,
$v(x,t) = \sum_{k=0}^{2^{j_0}} v(x_{j_0,k})\, w_k^{j_0}(x) + \sum_{j_v=j_0}^{J-1} \sum_{k_v=0}^{2^{j_v}-1} \alpha_{j_v,k_v}(t)\, w_{2k_v+1}^{j_v+1}(x)$.
The first-order derivatives are
(36) $\dfrac{d}{dx}u(x,t) = \sum_{k=0}^{2^{j_0}} u(x_{j_0,k})\, \bigl(w_k^{j_0}(x)\bigr)' + \sum_{j=j_0}^{J-1} \sum_{k=0}^{2^j-1} \alpha_{j,k}(t)\, \bigl(w_{2k+1}^{j+1}(x)\bigr)'$,
$\dfrac{d}{dx}v(x,t) = \sum_{k=0}^{2^{j_0}} v(x_{j_0,k})\, \bigl(w_k^{j_0}(x)\bigr)' + \sum_{j_v=j_0}^{J-1} \sum_{k_v=0}^{2^{j_v}-1} \alpha_{j_v,k_v}(t)\, \bigl(w_{2k_v+1}^{j_v+1}(x)\bigr)'$,
respectively. Substituting (35) and (36) into (32) yields the sparse method for the parabolic PDEs based on the Faber-Schauder scaling function. The system of ODEs can be solved exactly by means of the precise integration method (PIM).

4. Dynamic Choice Scheme on the External Grid Points

Combining the multilevel interpolation operator with the threshold scheme, it is easy to obtain the sparse inner grid points dynamically. Any adaptive method can capture the steep gradient appearing in the solution; that is, the inner grid points can concentrate adaptively around points of large gradient. The PDEs in engineering are always defined on a finite domain, so the boundary condition can usually change the smoothness of the solution near the boundary. As a result, more grid points near the boundary contribute to the solution, which increases the computational cost. A reasonable choice of the external grid points can decrease the boundary effect and improve the precision of the solution. In this section, we give a dynamic choice scheme for the external grid points, which is derived from the concept of the interval interpolation wavelet but differs from it.

4.1. Construction of the Interval Interpolation Wavelet

In general, the interpolation basis functions defined on an interval can be represented as
(37) $\omega_{jk}(x) = \begin{cases} \omega(2^j x - k) + \sum_{n=-L+1}^{-1} a_{nk}\, \omega(2^j x - n), & k = 0, \ldots, L, \\ \omega(2^j x - k), & k = L+1, \ldots, 2^j - L - 1, \\ \omega(2^j x - k) + \sum_{n=2^j+1}^{2^j+L-1} b_{nk}\, \omega(2^j x - n), & k = 2^j - L, \ldots, 2^j, \end{cases}$
where
(38) $a_{nk} = \prod_{\substack{i=0 \\ i \neq k}}^{L} \dfrac{x_{j,n} - x_{j,i}}{x_{j,k} - x_{j,i}}, \qquad b_{nk} = \prod_{\substack{i=2^j-L \\ i \neq k}}^{2^j} \dfrac{x_{j,n} - x_{j,i}}{x_{j,k} - x_{j,i}}, \qquad x_{j,k} = x_{\min} + k\,\dfrac{x_{\max} - x_{\min}}{2^j}, \quad k \in \mathbb{Z}$.
Here $L$ is the number of external collocation points, the number of discrete points in the definition domain is $2^j + 1$ ($j \in \mathbb{Z}$), and $[x_{\min}, x_{\max}]$ is the definition domain of the approximated function.

Equations (37) and (38) show that the interval wavelet is derived from domain extension. The supplementary discrete points in the extended domain are called external points. The values of the approximated function at the external points can be obtained by the Lagrange extrapolation method. Using the interval wavelet to approximate a function, the boundary effect can be confined to the supplementary domain; that is, the boundary effect is eliminated in the definition domain.

According to (37) and (38), the interval wavelet approximation of the function $f(x)$, $x \in [x_{\min}, x_{\max}]$, can be expressed as
(39) $f^j(x) = \sum_n f^j(x_n)\, \omega(2^j x - n), \qquad x_n = x_{\min} + n\,\dfrac{x_{\max} - x_{\min}}{2^j}$,
where $f^j(x_n)$ is the given value at the discrete point $x_n$. At the external points, $f^j(x_n)$ can be calculated by the extrapolation method; that is,
(40) $f^j(x_n) = \begin{cases} \sum_{k=0}^{L-1} f^j(x_k) \prod_{\substack{i=0 \\ i \neq k}}^{L-1} \dfrac{x_n - x_i}{x_k - x_i}, & n = -1, \ldots, -L, \\[2mm] \sum_{k=2^j-L+1}^{2^j} f^j(x_k) \prod_{\substack{i=2^j-L+1 \\ i \neq k}}^{2^j} \dfrac{x_n - x_i}{x_k - x_i}, & n = 2^j+1, \ldots, 2^j+L. \end{cases}$
So the interval wavelet approximation of $f(x)$ can be rewritten as
(41) $f^j(x) = \sum_{n=-L}^{-1} \left(\sum_{k=0}^{L-1} f^j(x_k) \prod_{\substack{i=0 \\ i \neq k}}^{L-1} \dfrac{x_n - x_i}{x_k - x_i}\right) \omega(2^j x - n) + \sum_{n=0}^{2^j} f^j(x_n)\, \omega(2^j x - n) + \sum_{n=2^j+1}^{2^j+L} \left(\sum_{k=2^j-L+1}^{2^j} f^j(x_k) \prod_{\substack{i=2^j-L+1 \\ i \neq k}}^{2^j} \dfrac{x_n - x_i}{x_k - x_i}\right) \omega(2^j x - n)$.
Let
(42) $\mathrm{LS}_L(x_n) = \sum_{k=0}^{L-1} f^j(x_k) \prod_{\substack{i=0 \\ i \neq k}}^{L-1} \dfrac{x_n - x_i}{x_k - x_i}, \qquad \mathrm{LE}_L(x_n) = \sum_{k=2^j-L+1}^{2^j} f^j(x_k) \prod_{\substack{i=2^j-L+1 \\ i \neq k}}^{2^j} \dfrac{x_n - x_i}{x_k - x_i}$;
then
(43) $f^j(x) = \sum_{n=-L}^{-1} \mathrm{LS}_L(x_n)\, \omega(2^j x - n) + \sum_{n=0}^{2^j} f^j(x_n)\, \omega(2^j x - n) + \sum_{n=2^j+1}^{2^j+L} \mathrm{LE}_L(x_n)\, \omega(2^j x - n)$.
$\mathrm{LS}_L(x_n)$ and $\mathrm{LE}_L(x_n)$ correspond to the left and the right external points, respectively. They are obtained by Lagrange extrapolation using the internal collocation points near the boundary, so the interval wavelet's influence on the boundary effect can be attributed to Lagrange extrapolation. It should be pointed out that we do not care about the reliability of the extrapolation itself; its only function is to enlarge the definition domain of the given function, which keeps the boundary effect from occurring inside the domain. Therefore, we can discuss the choice of $L$ by means of the Lagrange interpolation and extrapolation error polynomial:
(44) $R_L(x) = \dfrac{f^{(L+1)}(\xi)}{(L+1)!} \prod_{i=0}^{L} (x - x_i), \quad \text{for some } \xi \text{ between } x, x_0, \ldots, x_L$.
Equation (44) indicates that the approximation error is related both to the smoothness and to the gradient of the original function near the boundary. Setting different values of $L$ can satisfy different error tolerance requirements.
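The extrapolation formula (40) is easy to exercise directly. The sketch below fills the $L$ external points on each side of the domain by Lagrange extrapolation through the $L$ boundary-nearest interior nodes; the function names and the linear test data are illustrative assumptions.

```python
# Sketch of Eq. (40): values at external points by Lagrange extrapolation.

def lagrange_eval(xs, ys, x):
    """Lagrange interpolation/extrapolation through (xs, ys), evaluated at x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        w = 1.0
        for i, xi in enumerate(xs):
            if i != k:
                w *= (x - xi) / (xk - xi)
        total += yk * w
    return total

def external_values(fvals, xmin, xmax, j, L):
    """Extrapolated values at the L external points on each side of
    [xmin, xmax], using the L boundary-nearest interior nodes (Eq. (40))."""
    h = (xmax - xmin) / 2**j
    node = lambda n: xmin + n * h
    left_x = [node(k) for k in range(L)]                 # x_0 .. x_{L-1}
    right_x = [node(2**j - L + 1 + k) for k in range(L)]  # x_{2^j-L+1} .. x_{2^j}
    left = [lagrange_eval(left_x, fvals[:L], node(n)) for n in range(-L, 0)]
    right = [lagrange_eval(right_x, fvals[-L:], node(n))
             for n in range(2**j + 1, 2**j + L + 1)]
    return left, right
```

With $L = 2$ the extrapolation is exact for linear data, so the external values continue a straight line beyond both boundaries.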

4.2. Dynamic Choice Scheme of External Points in Sparse Grids Approach

This scheme consists of two steps. First, the Newton interpolation operator is employed instead of the traditional Lagrange interpolation. Second, both the error tolerance and the condition number are used as termination criteria for the dynamic choice of external grid points. We discuss this in detail in this section.

In order to construct the dynamic choice scheme for external grid points, the Newton interpolation theory should be introduced instead of the traditional Lagrange interpolation theory. It is well known that Newton interpolation is equivalent to Lagrange interpolation, but the Lagrange interpolation algorithm has no inheritance, which is the key feature of Newton interpolation. The advantage of the Newton interpolation method is that the basis functions need not be recalculated when a point is added; only one more term needs to be added, which reduces the number of compute operations, especially multiplications.

The expression of the Newton interpolation can be written as
(45) $N_n(x) = f(x_0) + (x - x_0)\, f(x_0, x_1) + (x - x_0)(x - x_1)\, f(x_0, x_1, x_2) + \cdots + (x - x_0)(x - x_1)\cdots(x - x_{n-1})\, f(x_0, x_1, \ldots, x_n)$.
Substituting the Newton interpolation for the Lagrange interpolation in (43), the latter can be rewritten as
(46) $f^j(x) = \sum_{n=-L}^{-1} \mathrm{NS}_L(x_n)\, \omega(2^j x - n) + \sum_{n=0}^{2^j} f^j(x_n)\, \omega(2^j x - n) + \sum_{n=2^j+1}^{2^j+L} \mathrm{NE}_L(x_n)\, \omega(2^j x - n)$,
where
(47) $\mathrm{NS}_L(x_n) = f(x_0) + (x_n - x_0)\, f(x_0, x_1) + (x_n - x_0)(x_n - x_1)\, f(x_0, x_1, x_2) + \cdots + (x_n - x_0)(x_n - x_1)\cdots(x_n - x_{L-1})\, f(x_0, x_1, \ldots, x_L)$,
$\mathrm{NE}_L(x_n) = f(x_{2^j}) + (x_n - x_{2^j})\, f(x_{2^j}, x_{2^j-1}) + (x_n - x_{2^j})(x_n - x_{2^j-1})\, f(x_{2^j}, x_{2^j-1}, x_{2^j-2}) + \cdots + (x_n - x_{2^j})(x_n - x_{2^j-1})\cdots(x_n - x_{2^j-L+1})\, f(x_{2^j}, x_{2^j-1}, \ldots, x_{2^j-L})$.

It is well known that the Newton interpolation is equivalent to the Lagrange interpolation. The corresponding error estimate can be expressed as
(48) $R_n(x) = (x - x_0)(x - x_1)\cdots(x - x_n)\, f(x, x_0, \ldots, x_n)$.
The simplest criterion to terminate the dynamic choice of $L$ is $|R_n(x)| \le \mathrm{Ta}$ ($\mathrm{Ta}$ is the absolute error tolerance). Obviously, it is difficult to define a $\mathrm{Ta}$ that meets the precision requirements of all approximated curves. In fact, the divided difference $f(x, x_0, \ldots, x_n)$ can be used directly as the criterion; that is,
(49) $|f(x, x_0, \ldots, x_n)| < \varepsilon$.
As mentioned above, once curves with lower-order smoothness are approximated by a higher-order polynomial, the errors become larger instead. In fact, even if $L$ were infinite, the required computational precision could not be achieved; only the computational complexity would increase. To avoid this, we design the termination procedure for the dynamic choice of $L$ as follows:

If $f(x_0, x_1) < \mathrm{Ta}$, then $L = 1$;

elseif $f(x_0, x_1, x_2) < \mathrm{Ta}$ or $f(x_0, x_1, x_2) < f(x_0, x_1)$, then $L = 2$;

elseif $f(x_0, x_1, x_2, x_3) < \mathrm{Ta}$ or $f(x_0, x_1, x_2, x_3) < f(x_0, x_1, x_2)$, then $L = 3$;

and so on.
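The chain above can be transcribed literally, under the assumption that each branch terminates the search: grow $L$ until the highest-order divided difference either drops below $\mathrm{Ta}$ or starts decreasing. The cap `Lmax` reflects the compact-support limits discussed below; all names are illustrative.

```python
# Sketch of the termination procedure for the dynamic choice of L,
# driven by Newton divided differences (Eq. (48)'s f(x_0, ..., x_n)).

def divided_difference(xs, ys):
    """Highest-order Newton divided difference f(x_0, ..., x_n)."""
    d = list(ys)
    for level in range(1, len(xs)):
        d = [(d[i + 1] - d[i]) / (xs[i + level] - xs[i])
             for i in range(len(d) - 1)]
    return d[0]

def choose_L(xs, ys, Ta, Lmax=8):
    """Smallest L whose divided difference is below Ta or has started
    to decrease; requires len(xs) >= Lmax + 1."""
    dd_prev = abs(divided_difference(xs[:2], ys[:2]))
    if dd_prev < Ta:
        return 1
    for L in range(2, Lmax + 1):
        dd = abs(divided_difference(xs[:L + 1], ys[:L + 1]))
        if dd < Ta or dd < dd_prev:
            return L
        dd_prev = dd
    return Lmax
```

For linear data the first divided difference is the slope, so a loose tolerance stops at $L = 1$ while a tight one goes to $L = 2$, where the second divided difference vanishes; quadratic data vanish at the third divided difference.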

In numerical analysis, the condition number of a function with respect to an argument measures how much the output value can change for a small change in the input argument; it measures how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. There is no doubt that the choice of $L$ changes the condition number of the system of algebraic equations discretized by the wavelet interpolation operator or by the finite difference method. Therefore, the choice of $L$ should take the condition number into account. In fact, if the condition number is $\operatorname{cond}(A) = 10^k$, then up to $k$ digits of accuracy may be lost on top of what is lost to the numerical method through arithmetic rounding. As a general rule of thumb, the choice should follow the rule
(50) $\dfrac{\operatorname{Cond}(A_{L+1})}{\operatorname{Cond}(A_L)} < 10$.

The computational complexity of the interpolation calculation is not proportional to the number of added points; it is mainly determined by the computation of $(x - x_0)(x - x_1)\cdots(x - x_n)$ and by the derivative operations. Obviously, according to (9), the computational complexity increases by $O(L^3)$ when the number of extension points $L$ increases by 1. The computational cost of adaptively increasing the collocation points, however, depends on the particular basis function. For bases with the compact support property, the value of $L$ cannot be infinite. For the Haar scaling function, which has no smoothness property, $L$ can be taken as 0 at most, since no extension is needed. For the Faber-Schauder wavelet, $L$ can be taken as 1 at most. For the Daubechies wavelet, $L$ can take different values according to the order of vanishing moments, but it must be finite. For wavelets without the compact support property, such as the Shannon wavelet, $L$ can take its value dynamically. The computational complexity of increasing points is thus mainly determined by the basis function itself.

5. Numerical Experiments

5.1. Dynamic Choice of the Sparse Grid Points

In order to test the adaptability of the sparse grid approach proposed in this paper, the Faber-Schauder and Shannon scaling functions and the autocorrelation function of the Daubechies scaling function are each taken as the basis. The Faber-Schauder scaling function has a first-order derivative and the Daubechies autocorrelation function has a second-order derivative, so both dynamic choice schemes will be tested.

Example 1.

Burgers equation with Dirichlet boundary conditions.

As a test problem for the numerical algorithm described in the previous section, we consider the Burgers equation
(51) $\dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} = \dfrac{1}{\mathrm{Re}}\dfrac{\partial^2 u}{\partial x^2}, \quad x \in [0,2]$,
with initial and boundary conditions
(52) $u(x,0) = \sin(\pi x), \qquad u(0,t) = u(2,t) = 0$,
where $t$ represents the time and $\mathrm{Re}$ denotes the Reynolds number. As the value of $\mathrm{Re}$ increases, the solution develops into a sawtooth wave near $x = 1$, where the gradient reaches its maximum value. Therefore, the performance of a numerical method is often judged by its ability to resolve the large-gradient region that develops in the solution, as shown in Figure 1.

Using the difference quotient to approximate the partial differential operator $\partial u/\partial t$, the Burgers equation becomes
(53) $-\dfrac{1}{\mathrm{Re}}\dfrac{d^2 u(x, m\Delta t)}{dx^2} + u(x, (m-1)\Delta t)\,\dfrac{du(x, m\Delta t)}{dx} + \dfrac{1}{\Delta t}\left[u(x, m\Delta t) - u(x, (m-1)\Delta t)\right] = 0$,
$u(0, m\Delta t) = u(2, m\Delta t) = 0, \quad m = 1, 2, \ldots$.
According to the virtual displacement theory, the variational form of the Burgers equation can be represented as
(54) $\int_0^2 \dfrac{1}{\mathrm{Re}}\dfrac{du(x, m\Delta t)}{dx}\cdot\dfrac{dv(x)}{dx}\,dx + \int_0^2 \left[u(x, (m-1)\Delta t)\,\dfrac{du(x, m\Delta t)}{dx} + \dfrac{1}{\Delta t}u(x, m\Delta t)\right] v(x)\,dx = \int_0^2 \dfrac{1}{\Delta t}u(x, (m-1)\Delta t)\, v\,dx$.
This can be solved by means of (35) and (36).
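For reference, a toy explicit finite-difference run of (51)-(52) on a uniform grid (central differences in space, forward Euler in time) can serve as the fully resolved comparison solution; note this is a simplification of the semi-implicit scheme (53), and the grid size and step count below are illustrative choices.

```python
import math

# Toy fully resolved reference for Eqs. (51)-(52): explicit central
# differences in space, forward Euler in time.  Illustrative only; the
# paper's reference computation uses the semi-implicit scheme (53).

def burgers_fd(Re=1000.0, n=128, dt=0.001, steps=400):
    """u_t + u u_x = (1/Re) u_xx on [0, 2], u(x,0) = sin(pi x),
    homogeneous Dirichlet boundaries; returns grid and final solution."""
    h = 2.0 / n
    x = [i * h for i in range(n + 1)]
    u = [math.sin(math.pi * xi) for xi in x]
    for _ in range(steps):
        un = u[:]
        for i in range(1, n):
            adv = un[i] * (un[i + 1] - un[i - 1]) / (2.0 * h)      # u u_x
            dif = (un[i + 1] - 2.0 * un[i] + un[i - 1]) / (h * h) / Re
            u[i] = un[i] + dt * (dif - adv)
        u[0] = u[n] = 0.0                                           # Eq. (52)
    return x, u
```

Because the initial datum is odd about $x = 1$, the discrete solution keeps $u(1,t) \approx 0$ and $u(2-x,t) \approx -u(x,t)$ while the front steepens at $x = 1$; on coarse grids this central scheme exhibits the Gibbs-type oscillations reported for Figure 2(b).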

In the experiments, the Reynolds number is $\mathrm{Re} = 1000$ and the time step is $\tau = 0.001$.

The numerical results shown in Figure 2 are obtained by the finite difference method. When the number of evenly spaced discrete points is taken as 512, the Gibbs phenomenon appears at $x = 1$, where a steep slope exists in the solution (Figure 2(b)). Increasing the number of discrete points restricts the Gibbs phenomenon (Figure 2(a)).

Figure 3 illustrates the performance of the sparse solution method on this example by comparing the sparse solution with the true solution produced by a standard fully resolved method (the finite difference method).

As $t$ increases, the gradient of the solution at the point $x = 1$ becomes larger and larger. Whether the Faber-Schauder scaling function or the autocorrelation function of the Daubechies scaling function is taken as the basis, the sparse method effectively captures the steep slope appearing in the solution; that is, more and more grid points concentrate around the point $x = 1$. When the gradient at $x = 1$ reaches its maximum at $t = 0.4$, of the 1024 coefficients used in the true solution, only 64 are retained in the sparse solution with the Faber-Schauder basis and 152 with the Daubechies basis (about 6.25% and 14.84%, respectively). Beginning at $t = 0.4$, the gradient of the solution at $x = 1$ becomes smaller and smaller as $t$ increases, and the number of sparse grid points decreases accordingly. This is illustrated in Figure 3. The adaptability of the proposed sparse method helps improve the efficiency and the calculation precision of the algorithm.

Analytical solutions of the Burgers equation at different times ( t = 0,0.4,0.6 ).

Numerical solution obtained by the finite difference method. (a) t = 0.4, 1024 grid points; (b) t = 0.4, 512 grid points.

Solution evolution of Burgers equation with Re = 1000 .

Besides, Table 1 shows that the condition number of the Burgers equation varies with j, Re, and the time step τ. In fact, the condition number is closely related to the sparse grid points. When Re and j are small, no steep gradient appears in the solution and the grid points are sparse; in this case the condition number is small and does not noticeably degrade the numerical precision. On the contrary, when Re, j, and the time step are large, the steep slope appearing in the solution and the error introduced by the large time step adaptively bring more grid points into the algorithm, so the condition number becomes larger. As mentioned in Section 4.2, a large condition number can greatly decrease the calculation precision. Table 1 shows that the condition number for L = 2 increases more rapidly than for L = 1 as j and Re increase; this is also illustrated in Figure 4.
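The growth of the condition number with grid refinement can be monitored directly at run time. A minimal sketch follows, in which a standard second-difference operator stands in for the paper's collocation matrix; the function name and parameters are illustrative.

```python
import numpy as np

def step_matrix_cond(n, dt, Re, length=2.0):
    """Spectral condition number of the implicit-step matrix
    A = I/dt - (1/Re) * D2, where D2 is the scaled second-difference
    operator on n interior points of an interval of given length."""
    h = length / (n + 1)
    D2 = (np.diag(np.full(n - 1, 1.0), -1)
          - 2.0 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    A = np.eye(n) / dt - D2 / Re
    return np.linalg.cond(A)

# conditioning worsens as the grid is refined (larger j) at fixed dt, Re
print(step_matrix_cond(64, 0.01, 1000.0))
print(step_matrix_cond(512, 0.01, 1000.0))
```

Because the largest eigenvalue of the second-difference operator scales like 1/h², refining the grid (larger j) at a fixed time step drives the condition number up, consistent with the trend in Table 1.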

Condition number of the Burgers equation.

j    Re         τ = 0.0001                            τ = 0.00001
                L = 0       L = 1      L = 2          L = 0      L = 1      L = 2
4    10         8.9742      6.3786     12.1637        1.56319    1.9723     2.3335
     100        91.8021     49.1826    186.9260       9.5434     11.9168    19.2971
     1000       1891.2      512.7634   3856.3         403.3223   148.8976   812.9812
7    10         40.3968     32.2256    82.2844        41.2311    4.3834     7.2188
     100        813.5472    199.1953   1245.7         81.7719    49.5538    173.4967
     1000       21987.0     2426.8     39004          3688.30    699.4512   6917.4
10   10         298.5375    145.7761   663.4654       20.6612    18.4523    40.1116
     100        7821.7      1217.8     14346          698.5623   212.9856   1274.6
     1000       194670      13887      523830         38421      3974.4     79248

The influence of the condition number on the error (Re = 1000, j = 7).

Dynamic external grid points L ( t = 0 - 0.01 , L = 5 ; t = 0.01 - 0.02 , L = 3 ; t = 0.02 - 0.03 , L = 5 ; t = 0.03 - 0.04 , L = 3 )

Dynamic L (t = 0.04-0.06, L = 3; t = 0.06-0.2, L = 1)

Figure 4 illustrates how the external grid points change dynamically as t develops. For t ≤ 0.04 (Figure 4(a)), the solution is smooth and the condition number is small; the approach can dynamically take more external grid points to improve the precision. For t > 0.04 (Figure 4(b)), a steep slope appears in the solution and the condition number increases; in this case, adding external grid points no longer improves the precision. This explains, to some extent, why we construct the dynamic choice scheme for the external grid points.

5.2. Comparison between the Dynamic Choice Scheme and the Wavelet Collocation Method

Example 2.

Consider the heat equation
(55) $\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + e^{x+2t}$, $x \in [0,1]$, $0 \le t \le T$,
with the initial and boundary conditions
(56) $u(x,0) = e^x$, $\frac{\partial u(0,t)}{\partial x} = e^{2t}$, $\frac{\partial u(1,t)}{\partial x} = e^{1+2t}$,
where t denotes the time parameter.

Let $I_i(x)$ and $D_i(x)$ denote the interpolation operator and the corresponding derivative. According to the classical collocation approach, the approximation $u_J(x)$ of a function $u(x)$ can be written as
(57) $u_J(x) = \sum_{i \in Z_c} I_i(x)\, u_J^i$.
Substituting (57) into (55) leads to a system of nonlinear ordinary differential equations:
(58) $\sum_{n \in Z_c} u_J(x_n, t)\, D_n''(x_k) + \exp(x_k + 2t) = \frac{\partial u_J(x_k, t)}{\partial t}$, $k \in Z_c$.
The corresponding vector expression is
(59) $\frac{\partial}{\partial t} V_J = M_0 V_J + F(t)$.
The corresponding Neumann boundary conditions can be expressed as
(60) $M_1(1,1)\, V_J(1) + \sum_{i=2}^{2^J} M_1(1,i)\, V_J(i) = e^{2t}$, $M_1(2^J, 2^J)\, V_J(2^J) + \sum_{i=2}^{2^J} M_1(2^J, i)\, V_J(i) = e^{1+2t}$,
where
(61) $V_J = (u_J(x_0,t), u_J(x_1,t), \dots, u_J(x_{2^J},t))^T$, $F(t) = (\exp(x_0+2t), \exp(x_1+2t), \dots, \exp(x_{2^J}+2t))^T$, $M_0(k,n) = m_{k,n}^0 = D_n''(x_k)$, $M_1(k,n) = m_{k,n}^1 = D_n'(x_k)$, $k, n \in Z_c$.
Equations (59)-(60) can be solved by the VIM and PIM. In the following, we take the heat equation as an example to illustrate the effectiveness of the algorithm proposed in this paper. The Shannon scaling function is employed as the basis function. The exact analytical solution of (55) is $u(x,t) = e^{x+2t}$; obviously, the solution has derivatives of all orders for every x in the definition domain.
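The semi-discretization (58) with the Neumann conditions (60) can be sanity-checked against the exact solution with an ordinary finite-difference stand-in. The sketch below uses explicit Euler in time, second differences in space, and ghost points for the Neumann conditions; these are illustrative choices, not the paper's VIM/PIM scheme with the Shannon basis.

```python
import numpy as np

# Method-of-lines sketch for (55)-(56) on [0, 1].
n = 64
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
dt = 1e-5
T = 0.01
u = np.exp(x)                         # initial condition u(x,0) = e^x

t = 0.0
while t < T - 1e-12:
    t += dt
    un = u.copy()
    # ghost values from du/dx(0,t) = e^{2t} and du/dx(1,t) = e^{1+2t}
    left  = un[1] - 2 * h * np.exp(2 * t)
    right = un[-2] + 2 * h * np.exp(1 + 2 * t)
    uxx = np.empty_like(un)
    uxx[1:-1] = (un[2:] - 2 * un[1:-1] + un[:-2]) / h**2
    uxx[0]  = (un[1] - 2 * un[0] + left) / h**2
    uxx[-1] = (right - 2 * un[-1] + un[-2]) / h**2
    u = un + dt * (uxx + np.exp(x + 2 * t))  # step u_t = u_xx + e^{x+2t}

err = np.max(np.abs(u - np.exp(x + 2 * T)))
print(err)                            # small, since u = e^{x+2t} is exact
```

Even this crude integrator reproduces the exact solution $e^{x+2t}$ closely, which makes the problem a clean benchmark for the comparisons that follow.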

(1) Comparison between the Dynamic Choice Scheme and the Static Interval Wavelet. Let T = 0.2 and τ = 0.01. The computational error curve of the dynamic sparse grid approach is shown in Figure 5. The maximum absolute error is 0.0471, which occurs near the right boundary; this shows that a larger gradient of the solution causes a larger error. The dynamic L and the number of iterations at each L value are shown in Table 2. The value of L varies among 2, 3, and 4, and far more iterations are performed at L = 4 than at L = 2 or 3. We therefore take L = 4 in the static interval wavelet PIM to solve the heat equation with the same parameters as the dynamic scheme. The numerical solution is shown in Figure 6. Obviously, the error is so large that the algorithm is invalid. Several factors can lead to this result, such as the smoothness of the solution and the nonlinear term in the PDEs. For the time step τ = 0.00001, the error curve is shown in Figure 7, and the dynamic L and the number of iterations at each L value are shown in Table 3. As the time step decreases, the influence of the nonlinear term of the PDEs becomes smaller and smaller; the largest errors of both the dynamic and the static interval wavelet PIM are 1.3388 × 10−5. This shows that constructing the dynamic grid approach is necessary for nonlinear PDEs with Neumann boundary conditions.

Dynamic L and the iteration times at the same L value ( j = 5 , T = 0.2 , τ = 0.01 ).

L                3   2   4   2   3   2   4   2   3   2   4   2   3
Iteration times  2   1   2   1   1   1   3   1   1   1   3   1   1

Dynamic L and the iteration times at the same L value ( j = 5 , T = 0.2 , τ = 0.00001 ).

L                6    5    4   5   4
Iteration times  15   14   1   1   19968

Error of the solution with the dynamic grid approach ( T = 0.2 , τ = 0.01 , j = 5 ).

Numerical solution with interval wavelet method ( L = 4 , T = 0.2 , τ = 0.01 , j = 5 ).

Error of the solution with the dynamic sparse grid approach ( j = 5 , T = 0.2 , and τ = 0.00001 ).

(2) Comparison between the VIM and PIM and the Runge-Kutta Method for Time-Domain Integration. The numerical solution and error curves obtained with the VIM and PIM and with the Runge-Kutta method are shown in Figure 8. It is obvious that the calculation precision of the VIM and PIM (Figure 5) is better than that of the Runge-Kutta method. It should be pointed out that, compared with the VIM and PIM, the Runge-Kutta method is not sensitive to the time step τ (Table 4). One of the most important reasons is that the nonlinear term of the PDEs is integrated in an explicit format in the VIM and PIM, while an implicit format is employed in the Runge-Kutta method.
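The accuracy of the PIM side of this comparison rests on evaluating the transition matrix exp(Mτ) of the linear part of (59) essentially exactly. A minimal sketch of the standard 2^N scaling-and-squaring idea underlying precise integration follows; the 4-term Taylor depth and N = 20 are conventional illustrative choices, and the rotation-generator test matrix is only for verification.

```python
import numpy as np

def pim_expm(M, tau, N=20):
    """Precise-integration-style exp(M * tau): a short Taylor expansion
    on the tiny sub-interval tau / 2^N, then N squarings that track only
    the incremental part Ta, where exp = I + Ta (limits round-off)."""
    I = np.eye(len(M))
    A = M * (tau / 2**N)
    # incremental part of exp(A) from a 4-term Taylor series
    Ta = A @ (I + A @ (I / 2 + A @ (I / 6 + A / 24)))
    for _ in range(N):
        Ta = 2 * Ta + Ta @ Ta        # (I + Ta)^2 = I + 2 Ta + Ta^2
    return I + Ta

# check on dV/dt = M V with a rotation generator: exp over a quarter
# period should be a 90-degree rotation matrix
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
E = pim_expm(M, np.pi / 2)
print(np.round(E, 6))
```

Squaring the increment Ta rather than the full matrix I + Ta is what keeps the tiny sub-interval contributions from being swallowed by the identity, which is the key numerical trick of the precise integration method.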

Error comparison between the VIM and PIM and the Runge-Kutta method.

τ             0.1      0.01     0.001    0.0001          0.00001
VIM and PIM   0.4176   0.0471   0.0045   4.1368 × 10−4   1.7610 × 10−5
Runge-Kutta   0.1366   0.2193   0.2110   0.2122          0.2124

Numerical solution and error curves with interval wavelet Runge-Kutta method ( L = 4 , T = 0.2 , τ = 0.00001 , j = 5 ).

Numerical solution and exact solution

Error curve of the numerical solution

6. Conclusions

The multilevel interpolation operator constructed in this paper is independent of the basis. Although the Faber-Schauder scaling function has no second-order derivative, it can still serve as the basis of the multiscale interpolation operator for solving the Burgers equation while retaining only the important nodes. The reduced dynamics created by the sparse projection property can dynamically capture the true phenomena exhibited by the solution; this sparse projection amounts to a shrinkage of the coefficients of the updated solution at every time step. Compared with the finite difference method, less than 10% of the coefficients are retained in the sparse solution of the Burgers equation.

The dynamic sparse grid approach, constructed by combining the multiscale interpolation operator with the variational iteration method, can dynamically choose both the internal and the external grid points based on the gradient and smoothness of the solution, the condition number of the PDEs, and the error tolerance. This property is well suited to PDEs with Neumann boundary conditions, and it eliminates the boundary effect efficiently. With regard to the accuracy and time complexity of the solution in comparison with those of other algorithms, the dynamic sparse grid approach constructed in this paper is more reasonable. The numerical experiments illustrate that constructing the dynamic sparse grid approach is necessary for nonlinear PDEs with Neumann and Dirichlet boundary conditions.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 41171337) and the National High Technology Research and Development Program of China (no. 2012BAD35B02).
