A New Direct Method for Solving Nonlinear Volterra-Fredholm-Hammerstein Integral Equations via Optimal Control Problem



Introduction
The nonlinear integral equations arise in the theory of parabolic boundary value problems, engineering, various areas of mathematical physics, and the theory of elasticity [1-3]. In recent years, several analytical and numerical methods for this kind of problem have been presented [4, 5]. Analytically, decomposition methods are used in [6, 7]. The classical method of successive approximations was introduced in [8], while appropriate projection methods such as Galerkin and collocation have been applied in [9-13]. These methods often transform an integral or integrodifferential equation into a system of linear algebraic equations, which can be solved by direct or iterative methods. In [14], the authors used Taylor series to solve a nonlinear Volterra-Fredholm integral equation, whereas the Legendre wavelets method was applied in [15] for solving a special type of nonlinear Volterra-Fredholm integral equation, in which f(x) and the kernels k_1(x, t) and k_2(x, t) are assumed to be in L^2(R) on the interval 0 ≤ x, t ≤ 1. The nonlinear Volterra-Fredholm-Hammerstein integral equation is given in [16] as follows:

y(t) = f(t) + λ_1 ∫_0^t k_1(t, s) g_1(s, y(s)) ds + λ_2 ∫_0^1 k_2(t, s) g_2(s, y(s)) ds,   0 ≤ t, s ≤ 1.   (1.3)
In this paper, we introduce a method to find the numerical solution of a nonlinear Volterra-Fredholm-Hammerstein integral equation of the form

φ(t) = f(t) + λ ∫_a^t V(t, s, φ(s)) ds + λ ∫_a^b F(t, s, φ(s)) ds,   a ≤ t ≤ b,   (1.4)

where f(t), V(t, s, φ(s)), and F(t, s, φ(s)) are assumed to be in L^2(R) and to satisfy a Lipschitz condition in φ. This paper is organized as follows. In Section 2, we rewrite (1.4) as a Fredholm-type integral equation, which can be converted into an optimal control (OC) problem. In Section 3, the existence and uniqueness of the solution are established. The computational results are shown in Section 4.
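The Lipschitz condition itself is not reproduced in the text; a standard form, consistent with the constant M used in the convergence bound of Section 3, would be:

```latex
% Assumed (standard) Lipschitz condition in the third argument;
% M is the constant appearing in the bound lambda*(b-a)*M of Section 3.
\left| V(t,s,\varphi_1(s)) - V(t,s,\varphi_2(s)) \right|
  \le M \left| \varphi_1(s) - \varphi_2(s) \right|,
\qquad
\left| F(t,s,\varphi_1(s)) - F(t,s,\varphi_2(s)) \right|
  \le M \left| \varphi_1(s) - \varphi_2(s) \right| .
```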

Problem Reformulation
Let the VFH equation given in (1.4) be written in the form (2.1), where M and K are arbitrary constants. It is easy to see that (2.1) can be rewritten accordingly. By integrating (2.9), we have

∫_a^b |G(t)| dt = 0.   (2.10)
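Equation (2.10) carries the key implication, a standard fact about continuous nonnegative integrands:

```latex
% Since |G(t)| >= 0, a vanishing integral forces G to vanish identically,
% provided G is continuous on [a, b]:
\int_a^b \lvert G(t) \rvert \, dt = 0
\quad\Longrightarrow\quad
G(t) = 0 \quad \text{for all } t \in [a, b].
```

Indeed, if G(t_0) ≠ 0 at some t_0, continuity would give |G| > 0 on a subinterval and hence a strictly positive integral.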

Journal of Applied Mathematics
On the other hand, one can define the following equality, which leads us to the inequality below, with the boundary conditions imposed. At the end, we have the following OC problem: minimize the resulting functional, where φ(a) and φ(b) are defined in (2.14) and Ω = [a, b] × [a, b]. The existence and uniqueness of the solution of (2.1) will be considered in the next section by using the successive approximation method.
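The specific functional is not reproduced here; as a generic illustration (not necessarily the paper's exact formulation), a residual-minimization reformulation of a Fredholm-type equation on [a, b] takes the shape:

```latex
% Generic least-squares (residual-minimization) reformulation;
% the functional J below is an illustrative stand-in.
\min_{\varphi} \; J[\varphi]
  = \int_a^b \Bigl( \varphi(t) - f(t)
      - \lambda \int_a^b F\bigl(t, s, \varphi(s)\bigr)\, ds \Bigr)^{2} dt,
\qquad \varphi(a),\ \varphi(b) \ \text{prescribed},
```

so that J[φ] = 0 exactly when φ solves the integral equation, and the OC machinery can be applied to drive the residual to zero.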

Existence and Uniqueness
The solution φ(t) of (2.1) can be approximated successively as follows. We obtain a sequence of functions φ_0(t), φ_1(t), ..., φ_n(t). It is convenient to introduce

ψ_n(t) = φ_n(t) − φ_{n−1}(t),   n ≥ 1,   (3.3)

with ψ_0(t) = f(t). Subtracting from (3.2) the same equation with n replaced by n − 1, we obtain a recurrence for ψ_n(t), from which the existence and uniqueness of the solution follow.
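The successive-approximation (Picard) scheme described above can be sketched numerically. The following is a minimal illustration, assuming the VFH form of (1.4) with a Volterra term over [a, t] and a Fredholm term over [a, b]; the test problem, the kernels V and F, and all function names are hypothetical choices constructed so that the exact solution is φ(t) = t, not examples from the paper:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def solve_vfh_picard(f, V, F, a, b, n=201, tol=1e-10, max_iter=200):
    """Picard (successive-approximation) iteration for
        phi(t) = f(t) + int_a^t V(t,s,phi(s)) ds + int_a^b F(t,s,phi(s)) ds,
    with trapezoidal quadrature on a uniform grid and phi_0 = f."""
    t = np.linspace(a, b, n)
    phi = f(t)  # initial approximation phi_0(t) = f(t)
    for _ in range(max_iter):
        phi_new = np.empty(n)
        for i in range(n):
            ti = t[i]
            # Volterra part: integrate V over [a, t_i]
            volterra = 0.0 if i == 0 else trapezoid(
                V(ti, t[:i + 1], phi[:i + 1]), t[:i + 1])
            # Fredholm part: integrate F over the whole interval [a, b]
            fredholm = trapezoid(F(ti, t, phi), t)
            phi_new[i] = f(ti) + volterra + fredholm
        if np.max(np.abs(phi_new - phi)) < tol:
            return t, phi_new
        phi = phi_new
    return t, phi

# Hypothetical test problem with exact solution phi(t) = t on [0, 1]:
#   V(t,s,phi) = 0.5*t*phi(s) -> int_0^t 0.5*t*s ds = t^3/4
#   F(t,s,phi) = 0.5*s*phi(s) -> int_0^1 0.5*s*s ds = 1/6
f = lambda t: t - t**3 / 4.0 - 1.0 / 6.0
V = lambda t, s, p: 0.5 * t * p
F = lambda t, s, p: 0.5 * s * p
t, phi = solve_vfh_picard(f, V, F, 0.0, 1.0)
print(np.max(np.abs(phi - t)))  # small discretization error
```

The kernels were deliberately scaled so that the combined Lipschitz bound over [0, 1] stays below 1, which is the contraction condition under which the iteration converges.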

We now show that this φ(t) satisfies (2.1). The series (3.6) is uniformly convergent, since the term ψ_i(t) is dominated by λ(b − a)M. This proves that φ(t), defined in (3.6), satisfies (2.1). Since each ψ_i(t) is clearly continuous, φ(t) is also continuous, being the limit of a uniformly convergent sequence of continuous functions.
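The domination argument can be made explicit (assuming, as is standard for such proofs, that λ(b − a)M < 1 and that |f| is bounded by some ‖f‖ on [a, b]):

```latex
% Assumed quantitative form of the domination bound
% (a standard Weierstrass M-test estimate):
\lvert \psi_i(t) \rvert \le \lVert f \rVert \,\bigl(\lambda (b-a) M\bigr)^{i}
\quad\Longrightarrow\quad
\sum_{i=0}^{\infty} \lvert \psi_i(t) \rvert
  \le \frac{\lVert f \rVert}{1 - \lambda (b-a) M} < \infty,
```

so the series (3.6) converges uniformly by the Weierstrass M-test.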
To show that φ(t) is the unique continuous solution, suppose that there exists another continuous solution φ̃(t) of (2.1), satisfying (3.10). Subtracting (3.10) from (2.1), we obtain an equation for the difference φ(t) − φ̃(t). Since φ(t) and φ̃(t) are both continuous, there exists a constant B such that |φ(t) − φ̃(t)| ≤ B. By using condition (2.3), the inequality (3.12) becomes

|φ(t) − φ̃(t)| ≤ B (λ(b − a)M)^n.   (3.13)

For n large enough, the right-hand side is arbitrarily small; hence

φ(t) = φ̃(t).   (3.14)

This completes the proof.

Computational Results
In this section, some numerical experiments are carried out in order to compare the performance of the new method with that of classical collocation methods. The method has been applied to the following three test problems [16, 17].

Example 4.1
The exact solution is given by x(s) = cos(s). The computed maximum absolute errors for different values of N are shown in Table 1. It is clear that the optimal control method is more accurate for small values of N; the errors for N = 16, in the case of the OC method, appear to be at the level of machine precision. The numerical solutions computed by the two methods are summarized in Figure 1, and our method compares very well with the collocation method.

Example 4.2. The exact solution is x(s) = cos(s). This example can be solved by using the proposed OC method. The numerical results, together with the errors at the boundaries and the CPU time per iteration, are given in Table 2.
The computational results presented here show that the computation can be rearranged so as to avoid the rounding errors of the collocation method and to reduce the CPU time per iteration. Furthermore, this rearrangement leads to a much more accurate and robust method. Figure 2 shows the results of the proposed OC method for Example 4.2 with N = 16. Here, too, the OC method appears more accurate than the collocation methods.

Conclusion
In this paper, the optimal control method is introduced to simplify the implementation of general nonlinear integral equations of the second kind. We have shown, in numerical examples, that this method is fast and yields better results than the collocation method. It is important to note that the control-state constraint is satisfied everywhere. Furthermore, the structure of the optimal control agrees with the results obtained in [16].