A Computational Method for Two-Point Boundary Value Problems of Fourth-Order Mixed Integrodifferential Equations

In this paper, the reproducing kernel Hilbert space method is applied to approximate the solution of two-point boundary value problems for fourth-order Fredholm-Volterra integrodifferential equations. The analytical solution is calculated in the form of a convergent series in the space W₂⁵[a, b] with easily computable components. In the proposed method, the n-term approximation is obtained and is proved to converge to the analytical solution. Meanwhile, the error of the approximate solution is monotone decreasing in the sense of the norm of W₂⁵[a, b]. The proposed technique is applied to several examples to illustrate the accuracy, efficiency, and applicability of the method.


Introduction
Boundary value problems (BVPs) of fourth-order integrodifferential equations (IDEs), which combine differential and integral equations, arise very frequently in many branches of applied mathematics and physics, such as fluid dynamics, biological models, chemical kinetics, biomechanics, electromagnetics, elasticity, electrodynamics, heat and mass transfer, and oscillation theory [1][2][3][4]. If the BVPs for fourth-order Fredholm-Volterra IDEs cannot be solved analytically, which is the usual case, then recourse must be made to numerical and approximate methods.
On other aspects, the numerical solvability of differential and integral equations of various types and orders can be found in [9][10][11][12][13][14][15] and the references therein. Investigation of two-point BVPs for fourth-order Fredholm-Volterra IDEs is scarce: to the best of our knowledge, no method for obtaining approximate solutions of this type of equation has been given in the literature, and none of the previous studies proposes a methodical way to solve these equations. Moreover, previous approaches require more effort to achieve their results, are less accurate, and are usually developed for special cases of (1) and (2). The new method is accurate, needs less effort to achieve its results, and is developed especially for the nonlinear case. Meanwhile, the proposed technique has the advantage that one can pick any point in the interval of integration, and the approximate solution and all its derivatives up to order four will be applicable as well.
The paper is organized as follows. In the next section, two reproducing kernel spaces are described. In Section 3, a linear operator, a complete orthonormal system, and some essential results are introduced; a method for the existence of solutions of (1) and (2) based on the reproducing kernel space is also described. In Section 4, we give an iterative method for solving (1) and (2) numerically in the space W₂⁵[a, b]. Numerical examples are presented in Section 5. Section 6 ends this paper with a brief conclusion.

Two Reproducing Kernel Spaces
In this section, the two reproducing kernels needed to solve (1) and (2) using the RKHS method are constructed. Before the construction, we recall the reproducing kernel concept. Throughout this paper, ℂ is the set of complex numbers. An abstract set is supposed to have elements, each of which has no structure, and is itself supposed to have no internal structure, except that the elements can be distinguished as equal or unequal, and to have no external structure except for the number of elements.
The second condition in Definition 1 is called "the reproducing property": the value of the function f at the point x is reproduced by the inner product of f(⋅) with K(⋅, x). A Hilbert space which possesses a reproducing kernel is called an RKHS [23].
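The reproducing property can be checked numerically. The sketch below is a minimal illustration using the classical kernel K(x, y) = min(x, y) on [0, 1] (an illustrative kernel chosen for this sketch, not one of the two kernels constructed in this paper); its RKHS consists of functions f with f(0) = 0 and square-integrable f′, under the inner product ⟨f, g⟩ = ∫₀¹ f′(t) g′(t) dt, and there ⟨f, K(⋅, x)⟩ = f(x).

```python
import numpy as np

# Numerical check of the reproducing property for K(x, y) = min(x, y) on [0, 1]:
# <f, K(., x0)> = ∫_0^1 f'(t) * d/dt min(t, x0) dt = ∫_0^{x0} f'(t) dt = f(x0).

def inner(fp, gp, t):
    # trapezoidal approximation of ∫ fp(t) gp(t) dt on the grid t
    y = fp(t) * gp(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

f = lambda t: np.sin(t)          # f(0) = 0, so f lies in this space
f_prime = lambda t: np.cos(t)

x0 = 0.7
# d/dt min(t, x0) equals 1 for t < x0 and 0 for t > x0
K_x0_prime = lambda t: (t < x0).astype(float)

t = np.linspace(0.0, 1.0, 200_001)
value = inner(f_prime, K_x0_prime, t)
print(value, np.sin(x0))         # the inner product reproduces f(0.7)
```

Up to quadrature error, the two printed numbers agree, which is exactly the reproducing property at the point x₀ = 0.7.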
Next, we first construct the space W₂⁵[a, b], in which every function satisfies the boundary conditions (2), and then utilize the space W₂¹[a, b].

Main Results and the Structure of Solution
In this section, the representation of the analytical solution of (1) and (2) and the implementation method are given in the reproducing kernel space W₂⁵[a, b]. After that, we construct an orthogonal function system of W₂⁵[a, b] based on the Gram-Schmidt orthogonalization process.
To do this, we define a differential operator L : W₂⁵[a, b] → W₂¹[a, b] such that Lu(x) = u⁽⁴⁾(x). After homogenization of the boundary conditions (2), (1) and (2) can be converted into the equivalent form

Lu(x) = F(x, u(x), Tu(x), Su(x)), a ≤ x ≤ b, u(a) = u′(a) = u(b) = u′(b) = 0,

such that F(x, u(x), Tu(x), Su(x)) = f(x) + N(x, u(x)) + Tu(x) + Su(x), Tu(x) = ∫ₐˣ h₁(x, t) G₁(u(t)) dt, and Su(x) = ∫ₐᵇ h₂(x, t) G₂(u(t)) dt. Next, we construct an orthogonal function system of W₂⁵[a, b]: put ψᵢ(x) = Lₛ Kₓ(s)|ₛ₌ₓᵢ, where Kₓ(s) is the reproducing kernel of W₂⁵[a, b], and derive the orthonormal system {ψ̄ᵢ(x)}ᵢ₌₁^∞ of W₂⁵[a, b] from the Gram-Schmidt process,

ψ̄ᵢ(x) = Σₖ₌₁ⁱ βᵢₖ ψₖ(x), i = 1, 2, . . .,

where the βᵢₖ are the orthogonalization coefficients. Throughout, the subscript s on the operator L (that is, Lₛ) indicates that the operator L applies to the function of s.
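As a small numerical aside, Gram-Schmidt orthogonalization coefficients of this kind can be computed from the Gram matrix of the underlying functions via a Cholesky factorization. The sketch below is a minimal illustration under toy data, not the paper's actual subroutine: G, psi, and beta stand in for the inner products ⟨ψⱼ, ψₖ⟩ and the coefficients βᵢₖ.

```python
import numpy as np

# If G[j, k] = <psi_j, psi_k> and G = L L^T is its Cholesky factorization,
# then beta = L^{-1} makes psi_bar_i = sum_{k<=i} beta[i, k] * psi_k
# orthonormal, since beta G beta^T = L^{-1} L L^T L^{-T} = I.

def orthonormalization_coeffs(G):
    L = np.linalg.cholesky(G)      # lower triangular factor
    return np.linalg.inv(L)        # beta is lower triangular as well

# toy Gram matrix of three linearly independent elements
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
G = A.T @ A                        # symmetric positive definite
beta = orthonormalization_coeffs(G)
print(np.round(beta @ G @ beta.T, 8))   # identity matrix: orthonormality holds
```

The triangular shape of beta matches the triangular sums ψ̄ᵢ = Σₖ₌₁ⁱ βᵢₖ ψₖ above: each new orthonormal element uses only the elements up to index i.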
Remark 12. We mention here that the approximate solution uₙ(x) in (13) can be obtained by taking finitely many terms in the series representation for u(x) of (12).

Procedure of Constructing Iterative Method
In this section, an iterative method for obtaining the solution of (9) is presented in the reproducing kernel space W₂⁵[a, b] for both the linear and nonlinear cases. Initially, we mention the following remark about the exact and approximate solutions of (1) and (2).
Remark 13. In order to apply the RKHS technique to solve (1) and (2), we have the following two cases, based on the structure of the functions N, G₁, and G₂.
Case 1. If (1) is linear, then the exact and approximate solutions can be obtained directly from (12) and (13), respectively.

Case 2. If (1) is nonlinear, then the exact and approximate solutions can be obtained by using the following iterative algorithm.

Algorithm 14. According to (12), the representation of the analytical solution of (1) can be denoted by the series

u(x) = Σᵢ₌₁^∞ Bᵢ ψ̄ᵢ(x),

where Bᵢ = Σₖ₌₁ⁱ βᵢₖ F(xₖ, u(xₖ), Tu(xₖ), Su(xₖ)). In fact, the Bᵢ, i = 1, 2, . . ., in (17) are unknown; we will approximate them using known quantities. For the numerical computations, we define the initial function u₀(x₁) = 0, put B₁ = β₁₁ F(x₁, u₀(x₁), Tu₀(x₁), Su₀(x₁)), and define the n-term approximation to u(x) by

uₙ(x) = Σᵢ₌₁ⁿ Bᵢ ψ̄ᵢ(x),

where the coefficients Bᵢ of ψ̄ᵢ(x), i = 1, 2, . . ., n, are given by

Bᵢ = Σₖ₌₁ⁱ βᵢₖ F(xₖ, uₖ₋₁(xₖ), Tuₖ₋₁(xₖ), Suₖ₋₁(xₖ)).

Here, note that in the iterative process of (18) we can guarantee that the approximation uₙ(x) satisfies the boundary conditions (2). Now, the approximate solution can be obtained by taking finitely many terms in the series representation of uₙ(x). Next, we will prove that uₙ(x) in the iterative formula (18) converges to the exact solution u(x) of (1); in fact, this result is fundamental in the RKHS theory and its applications. The next two lemmas are collected for future use in order to prove the convergence theorem.
Proof. The proof consists of the following three steps. Firstly, we prove that the sequence {uₙ}ₙ₌₁^∞ in (18) is monotone increasing in the sense of the norm of W₂⁵[a, b]. By Theorem 9, {ψ̄ᵢ}ᵢ₌₁^∞ is the complete orthonormal system in W₂⁵[a, b]. Hence, we have ‖uₙ‖² = Σᵢ₌₁ⁿ Bᵢ², so ‖uₙ‖ is monotone increasing.
It is obvious that if we let u(x) denote the exact solution of (9), uₙ(x) the approximate solution obtained by the RKHS method as given by (18), and rₙ(x) the difference between u(x) and uₙ(x), where x ∈ [a, b], then ‖rₙ(x)‖ in W₂⁵[a, b] is monotone decreasing. In fact, this is just the statement of the following theorem.
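The successive-approximation structure of Algorithm 14 can be sketched in a few lines of code. This is only a structural illustration under assumed placeholder names (x_nodes, beta, psi_bar, F stand in for xₖ, βᵢₖ, ψ̄ᵢ, and F), not an implementation of the method itself, since the actual ψ̄ᵢ and βᵢₖ come from the reproducing kernel of W₂⁵[a, b].

```python
import numpy as np

# Schematic loop: start from u_0 = 0, evaluate the right-hand side with the
# previous iterate at each node, and update the series coefficients
#     B_i = sum_{k<=i} beta[i, k] * F(x_k, u_{k-1}(x_k)).

def successive_approximation(F, psi_bar, beta, x_nodes):
    n = len(x_nodes)
    F_vals = np.zeros(n)            # F evaluated with the previous iterates
    B = np.zeros(n)                 # series coefficients B_i
    u = lambda x: 0.0 * x           # initial function u_0 = 0
    for k in range(n):
        F_vals[k] = F(x_nodes[k], u(x_nodes[k]))
        B[k] = beta[k, :k + 1] @ F_vals[:k + 1]
        c = B[:k + 1].copy()
        u = lambda x, c=c: sum(c[j] * psi_bar[j](x) for j in range(len(c)))
    return u

# Toy check: an already-orthonormal two-function "basis" and a right-hand side
# independent of u, so the loop reduces to a direct expansion with u_2(x) = x.
psi_bar = [lambda x: 1.0 + 0.0 * x, lambda x: x]
beta = np.eye(2)
u2 = successive_approximation(lambda x, u: x, psi_bar, beta, [0.0, 1.0])
print(u2(0.5))
```

When F genuinely depends on u (the nonlinear case), each coefficient Bₖ uses only iterates already computed, which is what makes the scheme explicit.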

Numerical Outcomes
In this section, we present a few numerical simulations implemented in the Mathematica 7.0 software package for solving some specific examples of (1) and (2). We apply the techniques described in the previous sections to some linear and nonlinear test examples in order to demonstrate the efficiency, accuracy, and applicability of the proposed method. Results obtained by the method are compared with the analytical solution of each example by computing the absolute and relative errors and are found to be in good agreement. Using the RKHS method, taking xᵢ = (i − 1)/(n − 1), i = 1, 2, . . ., n, with the reproducing kernel function on [0, 1], the approximate solution uₙ(x) is calculated by (13).
The numerical results at some selected grid points for n = 26 are given in Table 1.
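For concreteness, the grid xᵢ = (i − 1)/(n − 1) and the error columns of such tables can be produced as in the sketch below; u_exact and u_approx are placeholder functions assumed for this illustration, not one of the paper's examples.

```python
import numpy as np

# Error tabulation on the grid x_i = (i - 1)/(n - 1), i = 1..n, with n = 26.
n = 26
x = np.array([(i - 1) / (n - 1) for i in range(1, n + 1)])

u_exact = lambda t: np.sin(np.pi * t)                         # mock exact solution
u_approx = lambda t: np.sin(np.pi * t) + 1e-6 * t * (1 - t)   # mock approximation

abs_err = np.abs(u_exact(x) - u_approx(x))
# guard against division by (near-)zero values of the exact solution
denom = np.where(np.abs(u_exact(x)) > 1e-12, np.abs(u_exact(x)), 1.0)
rel_err = abs_err / denom

for xi, ae, re in zip(x[::5], abs_err[::5], rel_err[::5]):
    print(f"x = {xi:.2f}   abs err = {ae:.2e}   rel err = {re:.2e}")
```

The selected rows (every fifth grid point) mirror the "selected grid points" format of Tables 1-5.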

Conclusions
The main concern of this work has been to propose an efficient algorithm for the solution of two-point BVPs for fourth-order Fredholm-Volterra IDEs (1) and (2). This goal has been achieved by introducing the RKHS method to solve this class of IDEs. We can conclude that the RKHS method is a powerful and efficient technique for finding approximate solutions of linear and nonlinear problems. In the proposed algorithm, the solution u(x) and the approximate solution uₙ(x) are represented in the form of series in W₂⁵[a, b]. Moreover, the approximate solution and all its derivatives up to order four converge uniformly to the exact solution and its corresponding derivatives, respectively. There is an important point to make here: the results obtained by the RKHS method are very effective and convenient for linear and nonlinear cases, with fewer iteration steps and less computational work and time. This confirms our belief that the efficiency of our technique gives it much wider applicability in the future to general classes of linear and nonlinear problems.

Theorem 18 .
The difference function rₙ(x) = u(x) − uₙ(x) is monotone decreasing in the sense of the norm of W₂⁵[a, b].

Table 1 :
Numerical results for Example 19.

Table 2 :
Numerical results for Example 20.

Table 3 :
Numerical results for Example 21.

Table 4 :
Numerical results for Example 22.

Table 5 :
Numerical results for Example 23.