Polynomial Least Squares Method for the Solution of Nonlinear Volterra-Fredholm Integral Equations

This paper presents the application of the polynomial least squares method to nonlinear integral equations of the mixed Volterra-Fredholm type. For this class of equations, accurate approximate polynomial solutions are obtained in a straightforward manner, and numerical examples are given to illustrate the validity and applicability of the method. A comparison with previous results is also presented and emphasizes the accuracy of the method.

Equations of this type frequently model applications from various fields of science, such as elasticity, electricity and magnetism, fluid dynamics, population dynamics, and mathematical economics.
In general, the exact solution of these nonlinear integral equations cannot be found, and thus it is often necessary to compute approximate solutions. In this regard, many approximation techniques have been employed over the years; some of the methods employed in recent years are mentioned together with the examples in Section 3.
In the next section, we present the polynomial least squares method (PLSM), which allows us to determine analytical approximate polynomial solutions for nonlinear integral equations. In the third section, we compare approximate solutions obtained using PLSM with approximate solutions computed recently for several test problems. If the exact solution of a test problem is polynomial, PLSM finds the exact solution. If not, PLSM yields approximations whose error relative to the exact solution is smaller than the errors obtained using other methods. In most cases, the approximate solutions obtained are not only more precise but also have simpler expressions than previous ones.

The Polynomial Least Squares Method
We consider the operator D corresponding to (1), defined by relation (2). We also consider the so-called remainder associated to (1), defined as the error obtained by replacing the exact solution y with an approximate solution ỹ_app:

R(x, ỹ_app) = D(ỹ_app)(x).

Before presenting the actual steps of the method, we introduce the following types of solutions.

Definition 1. One calls an ε-approximate polynomial solution of (1) an approximate polynomial solution ỹ_app satisfying relation (3): |R(x, ỹ_app)| < ε, ∀x ∈ [a, b].

Definition 2. One calls a weak δ-approximate polynomial solution of (1) an approximate polynomial solution ỹ_app satisfying

∫ₐᵇ R²(x, ỹ_app) dx ≤ δ.

One also considers the following type of convergence.
The aim of PLSM is to find a weak δ-approximate polynomial solution of the type (5):

ỹ(x) = c₀ + c₁x + ⋅⋅⋅ + cₙxⁿ.

The values of the constants c₀, c₁, ..., cₙ are calculated using the following steps.
Step 1. By substituting the approximate solution (5) in (1), we obtain the expression (6) of the remainder R(x, ỹ).

Step 2. Next, we attach to (1) the following real functional (7):

J(c₀, c₁, ..., cₙ) = ∫ₐᵇ R²(x, ỹ) dx.

Step 3. We compute c₀⁰, c₁⁰, ..., cₙ⁰ as the values which give the minimum of the functional (7). We remark that the computation of the minimum can be performed in many ways; some examples are presented in the next section.
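The three steps above can be sketched numerically. Since equation (1) is not reproduced in this excerpt, the sketch below is only an illustration on an assumed toy Volterra equation, y(x) = x − x³/3 + ∫₀ˣ y(t)² dt (exact solution y(x) = x), with a first-degree trial polynomial; all function names are ours, not the paper's.

```python
from scipy.integrate import quad

# Assumed toy Volterra equation (equation (1) of the paper is not
# reproduced in this excerpt): y(x) = x - x**3/3 + int_0^x y(t)**2 dt,
# whose exact solution is y(x) = x.
def f(x):
    return x - x**3 / 3

def remainder(x, c):
    """Step 1: R(x, ytil) for the trial polynomial ytil(x) = c[0] + c[1]*x."""
    ytil = lambda s: c[0] + c[1] * s
    inner, _ = quad(lambda s: ytil(s) ** 2, 0.0, x)
    return ytil(x) - f(x) - inner

def J(c):
    """Step 2: the functional (7), the integral of R**2 over [0, 1]."""
    val, _ = quad(lambda x: remainder(x, c) ** 2, 0.0, 1.0)
    return val

# Step 3 would minimize J over (c[0], c[1]); here we only check that J
# vanishes at the exact coefficients and is positive away from them.
print(J([0.0, 1.0]))
print(J([0.5, 0.5]))
```

Any of the minimization strategies discussed below can then be applied to J.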
Theorem 4. The necessary condition for (1) to admit a sequence of polynomials Pₙ(x) convergent to the solution of this equation is

limₙ→∞ R(x, Pₙ(x)) = 0.

Moreover, ∀ε > 0, ∃n₀ ∈ ℕ such that, for ∀n ∈ ℕ, n > n₀, it follows that Pₙ(x) is a weak ε-approximate polynomial solution of (1).
Proof. Based on the way the coefficients of the polynomial Pₙ(x) are computed and taking into account relations (5)-(8), the following inequality holds for any convergent sequence of polynomials Qₙ(x) of the same degrees:

0 ≤ ∫ₐᵇ R²(x, Pₙ(x)) dx ≤ ∫ₐᵇ R²(x, Qₙ(x)) dx.

It follows that

limₙ→∞ ∫ₐᵇ R²(x, Pₙ(x)) dx = 0.

From this limit, we obtain that ∀ε > 0, ∃n₀ ∈ ℕ such that, for ∀n ∈ ℕ, n > n₀, it follows that Pₙ(x) is a weak ε-approximate polynomial solution of (1).
Remark 5. Taking into account the fact that any ε-approximate polynomial solution of (1) is also a weak ε²·(b − a)-approximate polynomial solution (but the converse is not always true), it follows that the set of weak approximate solutions of (1) also contains the approximate solutions of the equation. As a consequence, in order to find ε-approximate polynomial solutions of (1) by PLSM, we first compute weak approximate polynomial solutions ỹ_app. If |R(x, ỹ_app)| < ε, then ỹ_app is also an ε-approximate polynomial solution of the problem.
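The final check in the remark above can be illustrated numerically: given a computed weak approximate solution, sample the remainder on a grid and compare it with a prescribed ε. The equation below is an assumed toy problem, not one of the paper's applications.

```python
import numpy as np
from scipy.integrate import quad

# Assumed toy Volterra equation (for illustration only):
# y(x) = x - x**3/3 + int_0^x y(t)**2 dt, with exact solution y(x) = x.
def R(x, c):
    ytil = lambda s: c[0] + c[1] * s
    inner, _ = quad(lambda s: ytil(s) ** 2, 0.0, x)
    return ytil(x) - (x - x**3 / 3) - inner

eps = 1e-6
c_app = [0.0, 1.0]   # coefficients of a candidate weak approximate solution
max_R = max(abs(R(x, c_app)) for x in np.linspace(0.0, 1.0, 101))
print(max_R < eps)   # True: ytil_app is also an eps-approximate solution
```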

Applications
In this section, we compute approximate polynomial solutions for several test problems previously solved using other methods and compare the results.
Since the solution is a polynomial, we expected that, by using PLSM, we would be able to find, if not the exact solution, at least a very accurate approximation.
In the following, in order to obtain our approximation, we perform the steps described in the previous section. The computations were performed using the SAGE open-source software (v5.5, available at http://www.sagemath.org/).
We choose the polynomial (5) as ỹ(x) = c₀ + c₁x. In Step 1, we obtain the expression (6) of the remainder, and the corresponding functional (7) from Step 2 is J(c₀, c₁). In Step 3, we must compute the minimum of J with respect to c₀ and c₁. As mentioned in the previous section, the minimization can be performed in more than one way. The algorithms used in our computations include the following three possible approaches.

Minimization Based on the Exact Computation of Critical Points.
For relatively simple problems such as (13), it is possible to compute directly the critical points of J and subsequently select the value corresponding to the minimum.
The critical points corresponding to a functional J = J(c₀, c₁, ..., cₙ) are the solutions of the system (17):

∂J/∂cᵢ = 0, i = 0, 1, ..., n.

For the problem (13), the system (17) becomes a system of two polynomial equations in c₀ and c₁. Using the "solve" command in SAGE and excluding the complex solutions, we find the critical points. In order to find the minimum, we use the second partial derivative test, which is easy enough to implement in SAGE, and find that both c₀ = 1, c₁ = 1/3 and c₀ = 1, c₁ = 1 are local minima.
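A minimal SymPy sketch of this exact approach follows. Since problem (13) is not reproduced here, it uses an assumed toy linear Fredholm equation, y(x) = 2x/3 + ∫₀¹ x t y(t) dt (exact solution y(x) = x), for which the system (17) can be solved in closed form.

```python
import sympy as sp

x, t, c0, c1 = sp.symbols('x t c0 c1')

# Assumed toy linear Fredholm equation (for illustration only):
# y(x) = 2*x/3 + int_0^1 x*t*y(t) dt, exact solution y(x) = x.
ytil = c0 + c1 * x
inner = sp.integrate(x * t * ytil.subs(x, t), (t, 0, 1))
R = ytil - sp.Rational(2, 3) * x - inner            # remainder
J = sp.integrate(R**2, (x, 0, 1))                   # functional (7)

# System (17): all partial derivatives of J vanish at a critical point.
crit = sp.solve([sp.diff(J, c0), sp.diff(J, c1)], [c0, c1], dict=True)
print(crit)   # the only critical point recovers the exact coefficients
```

For a nonlinear integrand, R depends nonlinearly on the coefficients and (17) becomes a polynomial system, which is where the exact approach may fail.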
Generally speaking, if the degree of ỹ(x) is too high or if the problem studied is too complicated (e.g., with a strong nonlinearity), then the solutions of (17) cannot be found exactly. In this case, the "solve" command in SAGE fails to find the solutions and exits with an error message.
In this situation, it is still possible to find good approximations of the solutions of the problem by solving the system (17) by means of a numerical method.

Minimization Based on Approximate Computation of Critical Points.
In this subsection, we find approximate solutions of the given problem (13) by solving (17) by means of a SAGE implementation of the well-known Newton method. We use the same problem (13) and the same initial polynomial ỹ(x) = c₀ + c₁x for the sake of simplicity and clarity, even though we have already found the exact solutions.
As the following results will show, the Newton method is able to find approximate solutions of (17) which can lead to highly accurate approximate solutions of the problem (13).
In order for the sequence of approximations given by Newton's formula to converge to the solution(s) of the system (17), the starting point (c₀, c₁, ..., cₙ) of the sequence successively takes values on a given grid of the type G = G₀ × G₁ × ⋅⋅⋅ × Gₙ, where each Gᵢ is a division of a given interval.
More precisely, in this way we found approximate solutions whose absolute errors, computed as the differences in absolute value between the approximate solutions and the corresponding exact ones, are of the order of 10⁻¹⁵.
We remark that, for polynomials ỹ(x) of higher degree, in principle the grid G presented above can become quite large. However, in practice, we observed that, in all the examples tested, two or three values in each division Gᵢ were enough to arrive at the approximation sought.
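The grid-of-starting-points strategy can be sketched with SciPy's `fsolve` (a Newton-type hybrid solver) applied to the gradient system. As before, the equation is an assumed toy nonlinear Volterra problem, not the paper's problem (13).

```python
import itertools
import numpy as np
import sympy as sp
from scipy.optimize import fsolve

# Assumed toy nonlinear Volterra equation (for illustration only):
# y(x) = x - x**3/3 + int_0^x y(t)**2 dt, exact solution y(x) = x.
x, t, c0, c1 = sp.symbols('x t c0 c1')
ytil = c0 + c1 * x
inner = sp.integrate(ytil.subs(x, t) ** 2, (t, 0, x))
R = ytil - (x - x**3 / 3) - inner
J = sp.integrate(R**2, (x, 0, 1))
grad = sp.lambdify((c0, c1), [sp.diff(J, c0), sp.diff(J, c1)], 'numpy')

# The starting point sweeps a small grid G = G0 x G1; a few values per
# division are usually enough, as noted above.
roots = set()
for start in itertools.product([-1.0, 0.0, 1.0], repeat=2):
    sol, _, ok, _ = fsolve(lambda c: grad(c[0], c[1]), start, full_output=True)
    if ok == 1:
        roots.add(tuple(np.round(sol, 6)))
print(roots)  # contains (0.0, 1.0), the exact coefficients
```

Collecting the converged roots from all starting points and then selecting the minimum of J among them mirrors the procedure described in the text.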

Minimization Based on a Dedicated Solver.
A third approach to finding the minimum of the functional J from (7), and probably the most convenient one, is the use of a specialized optimization package. In SAGE, we can use the "minimize" command, which is based on the well-known open-source SciPy/NumPy libraries (http://www.scipy.org/). The "minimize" command allows us to choose the minimization algorithm used in the computation, the possible choices including, among others, the Nelder-Mead method, Powell's method, the conjugate gradient method, and the simulated annealing method.
In the case of the problem (13) and of the polynomial ỹ(x) = c₀ + c₁x, choosing Powell's method in the "minimize" command, we obtain approximations whose absolute errors are of the order of 10⁻⁸.
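Since SAGE's "minimize" delegates to SciPy, the same computation can be sketched directly with `scipy.optimize.minimize`. The equation below is an assumed toy linear Fredholm problem standing in for (13), which is not reproduced in this excerpt.

```python
from scipy.integrate import quad
from scipy.optimize import minimize

# Assumed toy linear Fredholm equation (for illustration only):
# y(x) = 2*x/3 + int_0^1 x*t*y(t) dt, exact solution y(x) = x,
# trial polynomial ytil(x) = c[0] + c[1]*x.
def J(c):
    def R(x):
        inner, _ = quad(lambda t: x * t * (c[0] + c[1] * t), 0.0, 1.0)
        return c[0] + c[1] * x - 2.0 * x / 3.0 - inner
    val, _ = quad(lambda x: R(x) ** 2, 0.0, 1.0)
    return val

res = minimize(J, x0=[0.5, 0.5], method='Powell')
print(res.x)  # should approach (0, 1), the exact coefficients
```

Changing `method` to `'Nelder-Mead'` or another supported algorithm exercises the other choices mentioned in the text.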
To conclude this first application, we remark that, in the following applications, depending on the problem and on the precision sought for the approximate solution, we use one of the three approaches presented above. If the known solution of the problem is a polynomial, we search for the exact solution. If the solution is not polynomial, of the other two approaches we present the one which gave the most accurate approximation, as in the case of Application 4.

Application 2. Our second application is a nonlinear Volterra integral equation ([3, 4]), denoted by (23), obtained from (1) by a suitable choice of the constants. The exact solution of (23) is y(x) = x² − 1. In [3, 4], approximate solutions of (23) were computed using approximation methods based on Chebyshev polynomials. The absolute errors of the approximate solutions obtained are of the order of 10⁻² in [3] and of 10⁻¹⁵ in [4].
In the following, we compute the exact solution of the problem (23) using PLSM. We choose the polynomial (5) as ỹ(x) = c₀ + c₁x + c₂x². The critical points of the corresponding functional (7) are the solutions of the system (17). Using the "solve" command in SAGE and excluding the complex solutions, we obtain the critical points. Using the second partial derivative test, we deduce that only the first two critical points are minimum points. Computing the values of J for these two minimum points, we see that the global minimum is obtained for c₀ = −1, c₁ = 0, and c₂ = 1, and thus the solution obtained using PLSM is in fact the exact solution of (23): ỹ(x) = x² − 1.

Application 3. Our third application is the nonlinear integral equation (28), whose exact solution is y(x) = x² − 2. In [2], approximate solutions of (28) were computed using a Rationalized Haar functions method; in [5], using a Triangular functions method; in [9], using a Radial basis functions method; and, in [8], using an Optimal control method. The values of the absolute errors of the approximate solutions obtained varied from 10⁻³ to 10⁻¹⁵, but none of these methods could find the exact solution.
We will compute the exact solution of the problem (28) using PLSM. We choose again the polynomial (5) as ỹ(x) = c₀ + c₁x + c₂x². The critical points of the corresponding functional (7) are the solutions of the system (17). Using the "solve" command in SAGE and again excluding the complex solutions, we obtain the critical points. Using the second partial derivative test, it follows that only the first two critical points are minimum points and, by computing the values of J for these two minimum points, we see that the global minimum is obtained for c₀ = −2, c₁ = 0, and c₂ = 1, and the solution obtained using PLSM is the exact solution of (28): ỹ(x) = x² − 2.

Application 4. Our fourth application is the equation (33), whose exact solution is y(x) = cos(x). Approximate solutions for this equation were computed in [1] using the Rationalized Haar functions method (RHM) and in [7] using a