Convergence Analysis of Generalized Jacobi-Galerkin Methods for Second Kind Volterra Integral Equations with Weakly Singular Kernels

We develop a generalized Jacobi-Galerkin method for second kind Volterra integral equations with weakly singular kernels. In this method, we first introduce some known singular nonpolynomial functions into the approximation space of the conventional Jacobi-Galerkin method. Secondly, we use Gauss-Jacobi quadrature rules to approximate the integral term in the resulting equation so as to obtain high-order accuracy for the approximation. Then, we establish that the approximate equation has a unique solution and that the approximate solution attains an optimal convergence order. One numerical example is presented to demonstrate the effectiveness of the proposed method.


Introduction
In this paper we present a generalized Jacobi-Galerkin method for solving Volterra integral equations of the second kind with weakly singular kernels. Specifically, for a given kernel function $K \in C(I^2)$ with $I := [-1, 1]$ and a parameter $\mu \in (0, 1)$, we define a Volterra integral operator $\mathcal{K} : C(I) \to C(I)$ by
$$(\mathcal{K}u)(t) := \int_{-1}^{t} (t - s)^{-\mu}\, K(t, s)\, u(s)\, \mathrm{d}s, \quad t \in I, \qquad (1)$$
and then consider the Volterra integral equation of the form
$$u + \mathcal{K}u = f, \qquad (2)$$
where $f \in C(I)$ is a given function and $u \in C(I)$ is the unknown to be determined.
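As an illustrative aside (not part of the paper), the weakly singular operator in (1) can be evaluated numerically with SciPy, whose `quad` routine supports algebraic endpoint weights $(x-a)^{\alpha}(b-x)^{\beta}$ matching the factor $(t-s)^{-\mu}$. The choices $K \equiv 1$ and $u \equiv 1$ below are test assumptions, for which $(\mathcal{K}u)(t) = (1+t)^{1-\mu}/(1-\mu)$ in closed form.

```python
from scipy.integrate import quad

def volterra_op(K, u, t, mu):
    """Evaluate (Ku)(t) = int_{-1}^t (t-s)^(-mu) K(t,s) u(s) ds.

    The singular factor (t-s)^(-mu) is handled by quad's 'alg'
    weight w(s) = (s-a)^0 * (b-s)^(-mu) on [a, b] = [-1, t].
    """
    val, _ = quad(lambda s: K(t, s) * u(s), -1.0, t,
                  weight='alg', wvar=(0.0, -mu))
    return val

# Test case: K = 1, u = 1 gives (Ku)(t) = (1+t)^(1-mu) / (1-mu).
mu, t = 0.5, 0.5
approx = volterra_op(lambda t, s: 1.0, lambda s: 1.0, t, mu)
exact = (1.0 + t) ** (1.0 - mu) / (1.0 - mu)
print(abs(approx - exact))
```

Since the integrand apart from the weight is constant here, the weighted quadrature resolves the integral to near machine precision.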
In view of the singularity of the kernel function in the operator $\mathcal{K}$, the derivative of the solution $u$ of (2) exhibits a singularity at the point $-1$ even if the forcing term $f$ is a smooth function. There have been many numerical attempts based on spline approximation to overcome the difficulty caused by the singularity of the solution of (2) (see [1-8]). Recently, spectral methods using a Jacobi polynomial basis have received considerable attention for approximating the solution of integral equations due to their high accuracy and easy implementation (see [9-17]). In particular, Chen and Tang in [11] proposed a Jacobi-collocation spectral method for second kind Volterra integral equations with weakly singular kernels. Some function transformations and variable transformations are employed to change the equation into a new Volterra integral equation possessing better regularity so that the orthogonal polynomial theory can be applied accordingly. In [12], they proposed a spectral Jacobi-Galerkin approach for solving (2); a rigorous error estimate was given in both the infinity norm and the weighted square norm. To the best of our knowledge, all existing spectral methods either suppose that the original equation has a sufficiently smooth solution or convert the equation into a new one with a solution of better regularity than that of the original equation (2) so that the spectral method can be applied. It goes without saying that the function transformation makes the resulting equations and approximations more complicated, which leads us to consider the generalized spectral method studied here.
We organize this paper as follows. In Section 2, we develop a generalized Jacobi-Galerkin method for solving (2) and then show the stability and convergence of this algorithm.

Journal of Function Spaces

For the semidiscrete system proposed in the previous section, we construct an efficient numerical integration scheme so as to obtain the fully discrete linear system. In Sections 3 and 4, we give a few technical results and analyze the stability and convergence, respectively. In Section 5, one numerical example is presented to illustrate the efficiency and accuracy of the method. In addition, a conclusion is drawn.

A Generalized Jacobi-Galerkin Method
In this section, we first introduce some index sets: $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$ with $\mathbb{N} := \{1, 2, \ldots\}$ and $\mathbb{Z}_n := \mathbb{Z}_n^+ \cup \{0\}$ with $\mathbb{Z}_n^+ := \{1, 2, \ldots, n\}$ for $n \in \mathbb{N}$. We let $\omega^{\alpha,\beta}(x) := (1-x)^{\alpha}(1+x)^{\beta}$, $\alpha, \beta > -1$, be a Jacobi weight function and let $L^2_{\omega^{\alpha,\beta}}(I)$ denote the space of measurable functions whose square is Lebesgue integrable on $I$ relative to the Jacobi weight function $\omega^{\alpha,\beta}$. The inner product and norm of this space are given by
$$(u, v)_{\omega^{\alpha,\beta}} := \int_I u(x)\, v(x)\, \omega^{\alpha,\beta}(x)\, \mathrm{d}x, \qquad \|u\|_{\omega^{\alpha,\beta}} := (u, u)_{\omega^{\alpha,\beta}}^{1/2}.$$
For $n \in \mathbb{N}_0$, we let $J_n^{\alpha,\beta}$ be the Jacobi orthonormal polynomial of degree $n$ relative to the weight function $\omega^{\alpha,\beta}$.
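As an illustrative check (assuming SciPy's conventions for Jacobi polynomials and Gauss-Jacobi rules), the orthogonality under $\omega^{\alpha,\beta}$ can be verified numerically; normalizing each classical Jacobi polynomial by its weighted norm yields the orthonormal family $J_n^{\alpha,\beta}$:

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

alpha, beta, nmax = -0.5, -0.5, 4

# Gauss-Jacobi rule: integrates f(x)*(1-x)^alpha*(1+x)^beta exactly
# for polynomials f of degree <= 2*npts - 1.
npts = 20
x, w = roots_jacobi(npts, alpha, beta)

# Evaluate classical Jacobi polynomials P_n^(alpha,beta) at the nodes
# and normalize each one by its weighted L2 norm.
P = np.array([eval_jacobi(n, alpha, beta, x) for n in range(nmax + 1)])
norms = np.sqrt((P**2 * w).sum(axis=1))
J = P / norms[:, None]               # orthonormal Jacobi polynomials

# Gram matrix under the weighted inner product: should be identity.
gram = (J[:, None, :] * J[None, :, :] * w).sum(axis=2)
print(np.abs(gram - np.eye(nmax + 1)).max())
```

Because all integrands are polynomials of degree at most $2\,n_{\max} < 2\cdot 20 - 1$, the quadrature is exact and the Gram matrix equals the identity up to rounding.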
The following result regarding the regularity of the solution of (2) comes from [6].

Now we define an index set W by
and suppose that $m$ is the cardinality of the set W; we then define a nonpolynomial function set $S$ accordingly. It follows from the notation above that Theorem 1 can be rewritten as follows.
Corollary 2. Suppose that the kernel function $K \in C^m(I^2)$. If there exist constants $c_j$, $j \in \mathbb{Z}_m^+$, such that $f$ admits the stated decomposition with $h \in C^m(I)$, then there exist constants $d_j$, $j \in \mathbb{Z}_m^+$, such that the solution $u$ has a similar decomposition with $v \in C^m(I)$.

Now we introduce another finite dimensional space $X_N$ and then form the generalized approximation space from $X_N$ and $S$. The generalized spectral Galerkin method for solving (2) is to seek a vector $\mathbf{u}_N := [u_{1,1}, \ldots, u_{m,1}, u_{0,2}, \ldots, u_{N,2}]^T$ satisfying the corresponding discrete equation. If we let $\mathcal{P}_N$ denote the orthogonal projection operator from $L^2_{\omega^{\alpha,\beta}}(I)$ onto the generalized approximation space, then the equation mentioned above has an operator form; by expression (8), $\mathcal{P}_N f$ can be written accordingly. The conventional Jacobi-Galerkin method chooses $X_N$ as both the approximation space and the test space, but when the original solution has a singularity, the approximate solution suffers from lower-order accuracy. In order to overcome this difficulty, in the same way as in [7], we include the set $S$ of known nonpolynomial functions, reflecting the singularity of the original solution, in the usual Jacobi-Galerkin approximation space $X_N$. Hence, we call this method the generalized spectral Galerkin method.
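The augmented-space idea can be sketched in code. The following example is illustrative only, not the paper's implementation: the plain (unweighted) $L^2$ inner product, the test kernel $K \equiv 1$ with $\mu = 1/2$, and the small basis $\{1, x, x^2, \sqrt{1+x}\}$ are all simplifying assumptions. It applies a Galerkin method to $u + \mathcal{K}u = f$ with manufactured exact solution $u(x) = \sqrt{1+x}$, for which $(\mathcal{K}u)(t) = (\pi/2)(1+t)$; since $\sqrt{1+x}$ lies in the augmented trial space, the computed solution is nearly exact.

```python
import numpy as np
from scipy.integrate import quad

mu = 0.5
# Trial/test basis: low-degree polynomials augmented with the known
# singular function sqrt(1+x) (assumed basis for this sketch).
basis = [lambda x: 1.0 + 0.0 * x,
         lambda x: x,
         lambda x: x**2,
         lambda x: np.sqrt(1.0 + x)]

def Kop(phi, t):
    # (K phi)(t) = int_{-1}^t (t-s)^(-mu) phi(s) ds; quad's 'alg'
    # weight (s+1)^0 * (t-s)^(-mu) absorbs the singular factor.
    val, _ = quad(lambda s: phi(s), -1.0, t, weight='alg', wvar=(0.0, -mu))
    return val

# Manufactured problem: u(x) = sqrt(1+x) gives (K u)(t) = (pi/2)(1+t),
# hence f(t) = sqrt(1+t) + (pi/2)(1+t).
f = lambda t: np.sqrt(1.0 + t) + 0.5 * np.pi * (1.0 + t)

# Galerkin system (phi_i, phi_j + K phi_j) c = (phi_i, f), with the
# outer integrals approximated by a 64-point Gauss-Legendre rule.
xg, wg = np.polynomial.legendre.leggauss(64)
Phi = np.array([phi(xg) for phi in basis])
Kvals = np.array([[Kop(phi, t) for t in xg] for phi in basis])
A = (Phi * wg) @ (Phi + Kvals).T
b = (Phi * wg) @ f(xg)
c = np.linalg.solve(A, b)

xt = np.linspace(-1.0, 1.0, 201)
uh = sum(ci * phi(xt) for ci, phi in zip(c, basis))
err = np.abs(uh - np.sqrt(1.0 + xt)).max()
print(err)  # small: sqrt(1+x) lies in the augmented trial space
```

A conventional polynomial-only basis would leave an $O(N^{-1})$-type error here; the augmentation removes the singular component exactly, up to quadrature error.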
Next we are going to analyze this generalized Jacobi-Galerkin method. We first show the stability of the original operator $\mathcal{I} + \mathcal{K}$.

Proof. First, it follows from $K \in C(I^2)$ and Theorems 2.21 and 4.12 in [18] that the integral operator $\mathcal{K} : L^2_{\omega^{\alpha,\beta}}(I) \to L^2_{\omega^{\alpha,\beta}}(I)$ is compact. On the other hand, since $-1$ is not an eigenvalue of the integral operator $\mathcal{K}$, we conclude that $\mathcal{I} + \mathcal{K}$ is injective from $L^2_{\omega^{\alpha,\beta}}(I)$ into itself. Thus, using Theorem 3.4 in [18], the inverse operator $(\mathcal{I} + \mathcal{K})^{-1} : L^2_{\omega^{\alpha,\beta}}(I) \to L^2_{\omega^{\alpha,\beta}}(I)$ exists and is bounded.

If we let $\mathcal{Q}_N$ denote the orthogonal projection operator from $L^2_{\omega^{\alpha,\beta}}(I)$ onto $X_N$, it is clear that the standard projection estimate holds. Throughout the remainder of this paper, we use the symbol $c$ to denote a positive constant which may take different values on different occurrences. Moreover, it follows from [19] that the projection estimate holds, where the index appears in (16); in addition, there exists a positive constant $c$ such that the corresponding bound holds.

Proof. Since $\mathcal{K}$ is a compact operator from $L^2_{\omega^{\alpha,\beta}}(I)$ into itself and $\mathcal{P}_N g \to g$ for all $g \in L^2_{\omega^{\alpha,\beta}}(I)$ as $N$ tends to $\infty$, we conclude that there exists a positive integer $N_0$ such that, for $N \geq N_0$ and for $v$ in the approximation space, the stability bound holds; this, together with (16) and the triangle inequality applied to $\mathcal{I} + \mathcal{K}$, yields conclusion (23).
On the other hand, subtracting (2) from (14) gives the error equation. By applying the operator $\mathcal{P}_N$ to both sides of (2), we obtain the projected equation. Thus, a combination of (27) and (29) gives an identity which, together with (16), implies (31). Hence, by the solution expansion of (9) and estimate (22), we conclude that (32) holds. A combination of (31) and (32) presents the desired conclusion.
In the remainder of the section, we write the matrix form of (14). To this end, for $i, j \in \mathbb{N}_0$, by introducing the entries in (33) we define four block matrices; it is clear that the stated block identity holds. Likewise, we define four further block matrices, and using these notations we define $\mathbf{A}_N$ and $\mathbf{B}_N$. Associated with $\mathcal{P}_N$, we define the vector $\mathbf{f}_N$ as in (38). Thus, using the matrices and vectors above, the matrix form of (14) is written as (39).

In order to solve system (39), the matrix entries of integral form in (39) must be computed. Hence, the main purpose of this section is to approximate the integral operator and the inner product based on the Gauss-Jacobi quadrature rule. To this end, for $n \in \mathbb{N}$, we denote by $\{x_k\}$ and $\{w_k\}$, $k \in \mathbb{Z}_n^+$, the set of $n$ Jacobi-Gauss points and the corresponding weights relative to the weight function $\omega^{\alpha,\beta}$. We use the notation $\Pi_n$ to denote the set of all polynomials of degree not more than $n$. Moreover, the classical Gauss-Jacobi quadrature rule is given by (40).

In order to give the fully discrete form of the matrix $\mathbf{B}_N$, we first approximate the integral operator $\mathcal{K}$. For this purpose, for $t \in I$, we introduce the variable transformation
$$s(t, \theta) := \frac{(1+t)\,\theta + t - 1}{2}, \quad \theta \in I,$$
which converts the interval $[-1, t]$ into $I$. Thus, the operator $\mathcal{K}$ has the form
$$(\mathcal{K}u)(t) = \left(\frac{1+t}{2}\right)^{1-\mu} \int_{-1}^{1} (1 - \theta)^{-\mu}\, K(t, s(t, \theta))\, u(s(t, \theta))\, \mathrm{d}\theta.$$
In particular, this representation is used when $t$ equals a quadrature point. Next we define an integral operator $\mathcal{G}$ accordingly; consequently, the integral operator $\mathcal{K}$ is rewritten in terms of $\mathcal{G}$. In order to discretize the operator $\mathcal{K}$, we first discretize the operator $\mathcal{G}$. To this end, for $g \in C(I)$, we define the Lagrange interpolation polynomial $\mathcal{L}_n g$ at the Jacobi-Gauss points. Subsequently, we obtain the fully discrete form $\mathcal{K}_n$ of the operator $\mathcal{K}$. In order to approximate the inner product and to simplify the stability and convergence analysis of the fully discrete equation, we define the operator $\tilde{\mathcal{K}}$. Using the notations above, we replace $\mathcal{K}$ and $\mathcal{P}_N f$ by $\tilde{\mathcal{K}}$ and $\mathcal{L}_n f$, obtaining (56), where $\tilde{u}$ is given by (57). In order to see that (56) is a fully discrete form, we have to write its matrix form. To this end, suppose $n \geq N + 1$; replacing the operator $\mathcal{K}$ in (33) by the operator $\tilde{\mathcal{K}}$ given in (54) and then using the Gauss-Jacobi quadrature rule (40) produce the fully discrete entries. As before, we define four matrices and then set (61). Hence, we have the matrix form (62) of (56), where
$$\tilde{\mathbf{u}} := [\tilde{a}_{1,1}, \ldots, \tilde{a}_{m,1}, \tilde{a}_{0,2}, \ldots, \tilde{a}_{N,2}]^T. \qquad (63)$$
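The quadrature step above can be sketched numerically. In this illustrative example (assuming SciPy's `roots_jacobi` and the standard linear substitution; it is not the paper's exact scheme), an $n$-point Gauss-Jacobi rule with weight $(1-\theta)^{-\mu}$ absorbs the singular factor after the transformation:

```python
import numpy as np
from scipy.special import roots_jacobi

def K_discrete(K, u, t, mu, n=16):
    """Approximate (Ku)(t) = int_{-1}^t (t-s)^(-mu) K(t,s) u(s) ds.

    Substituting s = ((1+t)*theta + t - 1)/2 maps theta in [-1,1]
    to s in [-1,t] and factors out ((1+t)/2)^(1-mu)*(1-theta)^(-mu),
    so an n-point Gauss-Jacobi rule with weight (1-theta)^(-mu)
    handles the singularity.
    """
    theta, w = roots_jacobi(n, -mu, 0.0)     # weight (1-theta)^(-mu)
    s = 0.5 * ((1.0 + t) * theta + t - 1.0)
    return (0.5 * (1.0 + t)) ** (1.0 - mu) * np.sum(w * K(t, s) * u(s))

# Check against the closed form for K = 1, u = 1:
# (Ku)(t) = (1+t)^(1-mu)/(1-mu).
mu, t = 0.5, 0.7
approx = K_discrete(lambda t, s: np.ones_like(s),
                    lambda s: np.ones_like(s), t, mu)
exact = (1.0 + t) ** (1.0 - mu) / (1.0 - mu)
print(abs(approx - exact))
```

For this constant test integrand the rule is exact up to rounding; for smooth $K$ and $u$ the accuracy improves spectrally in $n$.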

Useful Results
In this section we are going to give some technical results so as to analyze the fully discrete equation (56).

Lemma 5. Suppose the kernel function
where $C$ is the generalized binomial coefficient given by the falling-factorial product $C := \sigma(\sigma - 1) \cdots (\sigma - k + 1)$. Clearly, identity (66) holds. Substituting this result into the right-hand side of (65) yields (67), and applying the Cauchy-Schwarz inequality to the right-hand side of (67) yields the desired conclusion with the constant $c$ given accordingly.

Now we estimate the difference between $\mathcal{G}$ and its discrete counterpart for polynomials. To this end, we recall the result in [8]. It is clear that (73) holds. Applying the Cauchy-Schwarz inequality to the right-hand side of (73) produces (74). It follows from the hypotheses on the parameters that (75) holds. On the other hand, an application of (68) produces (78). A combination of (73)-(78) yields the desired conclusion (69).

Journal of Function Spaces
Now we introduce an auxiliary operator, and then we estimate its difference from the fully discrete operator $\mathcal{K}_n$.
Proof. As in Lemma 6, we only need to show that (80) holds. For a polynomial of the appropriate degree, we proceed from the definitions of the operators involved. A direct estimation, in which result (68) is combined with the discrete operator, implies (83). In the following we estimate the remaining term. By the assumption on the parameters, there exists a constant $\delta$ such that $\max\{-1, \alpha + \mu, \beta\} < \delta < 1 - 2\mu$. Thus we define two auxiliary functions $g_1$ and $g_2$. It is obvious that (88) holds; applying the Cauchy-Schwarz inequality to the right-hand side of (88) yields (89). Clearly, (90) holds, and by using the second estimate in (64), we obtain (91). Substituting results (89)-(91) into (86), we obtain (92), where the remaining quantity is defined by (93). It remains to estimate this quantity for each index in $\mathbb{Z}_m$; a direct estimation yields (95), which implies (96). A consequence of (83), (92), and (96) produces the desired conclusion.
The next result concerns the difference between $\mathcal{K}_n$ and $\tilde{\mathcal{K}}$. To this end, we recall the result proposed in [9-11]: for any $g \in C(I)$, there exists a positive constant $c$, independent of $g$, bounding $\|\mathcal{L}_n g\|$; combining this with (97) shows that there exists a positive constant $c$ such that the stated bound holds. This, together with (69), yields the desired conclusion (98).
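The Lagrange interpolation operator $\mathcal{L}_n$ at the Jacobi-Gauss points, whose uniform bound is used above, can be illustrated numerically. The barycentric evaluation and the smooth test function $\cos$ below are choices of this sketch, not of the paper:

```python
import numpy as np
from scipy.special import roots_jacobi

def interp_at_jacobi_gauss(f, n, alpha, beta, xt):
    """Evaluate the Lagrange interpolant L_n f built on the n
    Jacobi-Gauss points, using the barycentric formula."""
    x, _ = roots_jacobi(n, alpha, beta)
    lam = np.array([1.0 / np.prod(x[k] - np.delete(x, k))
                    for k in range(n)])          # barycentric weights
    num = np.zeros_like(xt)
    den = np.zeros_like(xt)
    for k in range(n):
        d = xt - x[k]
        d[d == 0.0] = np.finfo(float).tiny       # safe at the nodes
        num += lam[k] * f(x[k]) / d
        den += lam[k] / d
    return num / den

xt = np.linspace(-0.99, 0.99, 500)
f = np.cos
errs = {n: np.abs(interp_at_jacobi_gauss(f, n, -0.5, -0.5, xt) - f(xt)).max()
        for n in (4, 8, 16)}
print(errs)  # errors decay rapidly as n grows
```

For a smooth function the interpolation error decays spectrally in $n$, which is the behavior the interpolation estimate quantifies.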
As a consequence of Lemma 8, for $v$ in the approximation space, by using the inverse inequality relating two norms weighted with different Jacobi weight functions in Theorem 3.31 of [19], we can easily obtain the following.

Corollary 9. Suppose the conditions in Lemma 8 hold.

Hence, we draw the desired conclusion.
Theorem 10 ensures that, for sufficiently large $N$, if we select $n$ as in (104), then the fully discrete system (62) possesses a unique solution $\tilde{\mathbf{u}}$. The next result concerns the convergence of the approximate solution $\tilde{u}$.

Theorem 11. Suppose the kernel function $K \in C^m(I^2)$ and $f$ is given by (8). The three parameters $\alpha$, $\beta$, and $\mu$ satisfy $-1 < \alpha, \beta < 1 - 2\mu$ and $\alpha + \beta < 1 - 2\mu$, and $n$ is given in (104). Then there exist a positive constant $c$ and a positive integer $N_0$ such that, for $N \geq N_0$, the error $u - \tilde{u}$ satisfies the stated bound. In a similar manner, using result (98) produces
$$\|\mathcal{K}v - \tilde{\mathcal{K}}v\|_{\omega^{\alpha,\beta}} \leq c\, n^{-m}\, \|v\|_{\omega^{\alpha,\beta}}.$$

One Numerical Example
In this section, we present a numerical example to demonstrate the approximation accuracy, the convergence order, and the stability of the proposed method. We also compare it with the conventional Jacobi-Galerkin method. Here, we compute the Gauss-Jacobi quadrature nodes and weights by Theorems 3.4 and 3.6 in [19].
In this example, for simplicity we choose $K(t, s) := 1$. For numerical comparison purposes, we choose the right-hand side function $f$ so that
$$u(x) = (1 + x)^{1/2} \cos(1 + x), \quad x \in I, \qquad (131)$$
is the exact solution of the equation. Note that the first derivative of this solution has a singularity at $x = -1$. In this numerical example, our generalized spectral method chooses the nonpolynomial function set $S$ given by
$$S := \operatorname{span}\{(1 + x)^{1/2}\}. \qquad (132)$$
The comparison of numerical results between the conventional method and our proposed method is given in Table 1.
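The effect of the augmentation on this example can be illustrated with a small experiment (a sketch under assumptions, not the paper's actual computation: plain discrete least squares at Chebyshev points stands in for the weighted Galerkin projection). Fitting $u(x) = (1+x)^{1/2}\cos(1+x)$ with Chebyshev polynomials alone versus the same basis augmented by $(1+x)^{1/2}$ shows the accuracy gain, since $u - (1+x)^{1/2}$ behaves like $(1+x)^{5/2}$ near $x = -1$ and is therefore much smoother:

```python
import numpy as np

u = lambda x: np.sqrt(1.0 + x) * np.cos(1.0 + x)

def ls_fit_error(N, augment):
    # Least-squares fit on Chebyshev points; max error on a fine grid.
    xs = np.cos(np.pi * np.arange(4 * N + 1) / (4 * N))       # fit nodes
    cols = [np.cos(k * np.arccos(xs)) for k in range(N + 1)]  # T_k(xs)
    if augment:
        cols.append(np.sqrt(1.0 + xs))     # singular basis function
    A = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(A, u(xs), rcond=None)

    xt = np.linspace(-1.0, 1.0, 2001)
    cols_t = [np.cos(k * np.arccos(xt)) for k in range(N + 1)]
    if augment:
        cols_t.append(np.sqrt(1.0 + xt))
    return np.abs(np.column_stack(cols_t) @ c - u(xt)).max()

for N in (8, 16, 32):
    print(N, ls_fit_error(N, False), ls_fit_error(N, True))
# The augmented fit is far more accurate: the remainder after
# subtracting sqrt(1+x) behaves like (1+x)^(5/2), so polynomials
# resolve it at a much higher rate.
```

This mirrors the comparison reported in Table 1: the polynomial-only space is limited by the square-root singularity, while one extra singular basis function restores fast convergence.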
From these numerical results we observe that our method is superior to the conventional spectral method, which is consistent with the theoretical results. In summary, the conventional Jacobi-Galerkin method for solving Volterra integral equations with nonsmooth solutions may be of low-order accuracy. In order to obtain high-order accuracy, in this paper we introduce the set $S$ of nonpolynomial functions into the conventional Jacobi-Galerkin approximation space $X_N$ of Jacobi polynomial basis functions and then develop a generalized Jacobi-Galerkin method. The price we pay to obtain this optimal convergence rate is the increase of the dimension of the approximation subspace. But we observe that the additional cost to achieve the optimal convergence rate is insignificant in comparison with the acceleration of convergence that we obtain.
Suppose the kernel function $K \in C^m(I^2)$. If the three parameters $\alpha$, $\beta$, and $\mu$ satisfy $-1 < \alpha, \beta < 1 - 2\mu$ and $\alpha + \beta < 1 - 2\mu$, then there exists a positive constant $c$ such that, for $g \in C^m(I)$,
$$\|\mathcal{K}g - \mathcal{K}_n g\|_{\omega^{\alpha,\beta}} \leq c\, n^{-m}\, \|g\|_{\omega^{\alpha,\beta}, m}.$$
Moreover, if $g = J_k^{\alpha,\beta}$ for $k \in \mathbb{Z}_N^+$, then there exists a positive constant $c$ such that
$$\|\mathcal{K}_n g - \tilde{\mathcal{K}} g\|_{\omega^{\alpha,\beta}} \leq c\, n^{-m}.$$
It follows from result (80) in Lemma 7 that we only need to estimate the second term on the right-hand side of (100); this follows directly from the definitions of $\mathcal{K}_n$ and $\tilde{\mathcal{K}}$. Then there exists a positive constant $c$ such that the stated bound holds for all $v$ in the approximation space.

In this section, we analyze the convergence of the approximate solution of the fully discrete generalized Jacobi-Galerkin method. First we give the stability analysis of the operator $\mathcal{I} + \mathcal{P}_N \tilde{\mathcal{K}}$ on the approximation space. For $v$ in the approximation space, there exist functions $v_j \in S$ for $j \in \mathbb{Z}_m^+$ and a polynomial part such that $v$ admits the decomposition (106). By using (99) and (104), there exists a positive constant $c$ such that, for $j \in \mathbb{Z}_m^+$,
$$\|\mathcal{P}_N \mathcal{K} v_j - \mathcal{P}_N \tilde{\mathcal{K}} v_j\|_{\omega^{\alpha,\beta}} \leq c\, n^{-m}\, \|v_j\|_{\omega^{\alpha,\beta}},$$
and hence the aggregated bound (109) holds, where $m$ denotes the cardinality of the set W as in Section 2. On the other hand, by estimate (103) in Corollary 9 and the choice of $n$ in (104), there exists a positive constant $c$ bounding $\|\mathcal{P}_N \mathcal{K} v - \mathcal{P}_N \tilde{\mathcal{K}} v\|_{\omega^{\alpha,\beta}}$, so that the operator $\mathcal{I} + \mathcal{P}_N \tilde{\mathcal{K}}$ is stable for sufficiently large $N$. Moreover, the error satisfies
$$(\mathcal{I} + \mathcal{P}_N \tilde{\mathcal{K}})(\tilde{u} - \mathcal{P}_N u) = \mathcal{P}_N \mathcal{K} u - \mathcal{P}_N \tilde{\mathcal{K}} \mathcal{P}_N u + \mathcal{P}_N h - \mathcal{L}_n h.$$
Clearly, using (68) with the smooth part $h$ produces
$$\|\mathcal{P}_N h - \mathcal{L}_n h\|_{\omega^{\alpha,\beta}} \leq c\, \|D^m h\|_{\omega^{\alpha+m,\beta+m}}\, n^{-m},$$
and the term $\|\mathcal{P}_N \mathcal{K} u - \mathcal{P}_N \tilde{\mathcal{K}} \mathcal{P}_N u\|_{\omega^{\alpha,\beta}}$ is split and bounded in the same manner, which yields the desired estimates.

Table 1: Comparison of the results of the two methods.

The purpose of these numerical experiments is to compare the numerical performance of the generalized spectral method with that of the conventional spectral method. For both methods we let $L^2_{\omega^{-1/2,-1/2}}(I)$ be the solution space of the original solution $u$. It is obvious that the solution $u$ belongs to the space $H^1_{\omega^{-1/2,-1/2}}(I)$ but $u \notin H^m_{\omega^{-1/2,-1/2}}(I)$ for $m \geq 2$. Now we choose the corresponding conventional finite dimensional subspace $X_N$ as
$$X_N := \operatorname{span}\{\cos(n \arccos x) : n \in \mathbb{Z}_N\}, \quad x \in I.$$