Mathematical Problems in Engineering, Volume 2014, Article ID 147079. Research Article.

Polynomial Least Squares Method for the Solution of Nonlinear Volterra-Fredholm Integral Equations

Bogdan Căruntu and Constantin Bota (http://orcid.org/0000-0002-7145-9076), Department of Mathematics, "Politehnica" University of Timişoara, P-ta Victoriei 2, 300006 Timişoara, Romania

Academic Editor: Gradimir Milovanović

Received 24 January 2014; Accepted 26 August 2014; Published 14 October 2014. DOI: 10.1155/2014/147079

Copyright © 2014 Bogdan Căruntu and Constantin Bota. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents the application of the polynomial least squares method to nonlinear integral equations of the mixed Volterra-Fredholm type. For this type of equation, accurate approximate polynomial solutions are obtained in a straightforward manner, and numerical examples are given to illustrate the validity and applicability of the method. A comparison with previous results is also included, emphasizing the accuracy of the method.

1. Introduction

In this paper, we consider nonlinear integral Volterra-Fredholm equations of the general form

(1) $y(t) = f(t) + \lambda_1 \int_a^t k_1(t,s)\, g_1(s, y(s))\, ds + \lambda_2 \int_a^b k_2(t,s)\, g_2(s, y(s))\, ds$,

where $a$, $b$, $\lambda_1$, and $\lambda_2$ are constants and $f$, $k_1$, $k_2$, $g_1$, and $g_2$ are functions assumed to have suitable derivatives on the interval $[a, b]$.

Equations of this type are frequently used to model applications from various fields of science, such as elasticity, electricity and magnetism, fluid dynamics, population dynamics, and mathematical economics.

In general, the exact solution of these nonlinear integral equations cannot be found, and thus it is often necessary to compute approximate solutions. Many approximation techniques have been employed for this purpose; methods applied in recent years include the following (see the examples in Section 3).

Rationalized Haar functions method ([1, 2])

Chebyshev polynomials method ([3, 4])

Triangular functions (TF) method ([5])

Sinc approximation method ([6])

Collocation methods ([7])

Optimal control method ([8])

Radial basis functions method ([9])

Bernoulli matrix method ([10])

Homotopy analysis method ([11]).

In the next section, we will present the polynomial least squares method (PLSM), which allows us to determine analytical approximate polynomial solutions for nonlinear integral equations. In the third section, we will compare approximate solutions obtained using PLSM with approximate solutions computed recently for several test problems. If the exact solution of the test problem is polynomial, PLSM is able to find the exact solution. If not, PLSM allows us to obtain approximations with an error relative to the exact solution smaller than the errors obtained using other methods. In most cases, the approximate solutions obtained not only are more precise but also present a simpler expression in comparison to previous ones.

2. The Polynomial Least Squares Method

We consider the following operator, corresponding to (1):

(2) $D(y) = y(t) - f(t) - \lambda_1 \int_a^t k_1(t,s)\, g_1(s, y(s))\, ds - \lambda_2 \int_a^b k_2(t,s)\, g_2(s, y(s))\, ds$.

We also consider the so-called remainder associated with (1), defined as the error obtained by replacing the exact solution $y$ with an approximate solution $y_{app}$:

(3) $R(t, y_{app}) = D(y_{app}(t))$, $t \in [a, b]$.

Before we present the actual steps of the method, we introduce the following types of solutions.

Definition 1.

One calls an $\epsilon$-approximate polynomial solution of (1) an approximate polynomial solution $y_{app}$ satisfying the relation $|R(t, y_{app})| < \epsilon$ for all $t \in [a, b]$, where $R$ is the remainder (3).

Definition 2.

One calls a weak $\delta$-approximate polynomial solution of (1) an approximate polynomial solution $y_{app}$ satisfying the relation:

(4) $\int_a^b R^2(t, y_{app})\, dt \le \delta$.

One also considers the following type of convergence.

Definition 3.

One considers the sequence of polynomials $P_m(t) = a_0 + a_1 t + \cdots + a_m t^m$, $a_i \in \mathbb{R}$, $i = 0, 1, \ldots, m$. One calls the sequence of polynomials $P_m(t)$ convergent to the solution of (1) if $\lim_{m \to \infty} D(P_m(t)) = 0$.

The aim of PLSM is to find a weak $\epsilon$-approximate polynomial solution of the type:

(5) $\tilde{y}(t) = \sum_{k=0}^{m} c_k t^k$.

The values of the constants c 0 , c 1 , , c m are calculated using the following steps.

Step 1.

By substituting the approximate solution (5) in (1), we obtain the following expression:

(6) $R(t, c_0, c_1, \ldots, c_m) = R(t, \tilde{y}) = \tilde{y}(t) - f(t) - \lambda_1 \int_a^t k_1(t,s)\, g_1(s, \tilde{y}(s))\, ds - \lambda_2 \int_a^b k_2(t,s)\, g_2(s, \tilde{y}(s))\, ds$.

Step 2.

Next, we attach to (1) the following real functional:

(7) $J(c_0, c_1, \ldots, c_m) = \int_a^b R^2(t, c_0, c_1, \ldots, c_m)\, dt$.

Step 3.

We compute c 0 0 , c 1 0 , , c m 0 as the values which give the minimum of the functional (7). We remark that the computation of the minimum can be performed in many ways and some examples are presented in the next section.

Step 4.

Using the constants $c_0^0, c_1^0, \ldots, c_m^0$ determined in the previous step, we consider the polynomial:

(8) $T_m(t) = \sum_{k=0}^{m} c_k^0 t^k$.

The following convergence theorem holds.

Theorem 4.

The necessary condition for (1) to admit a sequence of polynomials $P_m(t)$ convergent to the solution of this equation is

(9) $\lim_{m \to \infty} \int_a^b R^2(t, T_m)\, dt = 0$.

Moreover, for every $\epsilon > 0$ there exists $m_0 \in \mathbb{N}$ such that, for every $m \in \mathbb{N}$ with $m > m_0$, $T_m(t)$ is a weak $\epsilon$-approximate polynomial solution of (1).

Proof.

Based on the way the coefficients of the polynomial $T_m(t)$ are computed and taking into account the relations (5)–(8), the following inequality holds:

(10) $0 \le \int_a^b R^2(t, T_m(t))\, dt \le \int_a^b R^2(t, P_m(t))\, dt$ for all $m \in \mathbb{N}$.

It follows that

(11) $0 \le \lim_{m \to \infty} \int_a^b R^2(t, T_m(t))\, dt \le \lim_{m \to \infty} \int_a^b R^2(t, P_m(t))\, dt = 0$.

We obtain

(12) $\lim_{m \to \infty} \int_a^b R^2(t, T_m(t))\, dt = 0$.

From this limit, it follows that for every $\epsilon > 0$ there exists $m_0 \in \mathbb{N}$ such that, for every $m \in \mathbb{N}$ with $m > m_0$, $T_m(t)$ is a weak $\epsilon$-approximate polynomial solution of (1).

Step 5.

Taking into account the fact that any $\epsilon$-approximate polynomial solution of (1) is also a weak $\epsilon^2 (b-a)$-approximate polynomial solution (while the converse is not always true), it follows that the set of weak approximate solutions of (1) also contains the $\epsilon$-approximate solutions of the equation. As a consequence, in order to find $\epsilon$-approximate polynomial solutions of (1) by PLSM, we first compute weak approximate polynomial solutions $\tilde{y}_{app}$. If $|R(t, \tilde{y}_{app})| < \epsilon$, then $\tilde{y}_{app}$ is also an $\epsilon$-approximate polynomial solution of the problem.
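The steps above can be sketched numerically. The following Python fragment is a minimal illustration (not the authors' SAGE code), assuming SciPy is available; for concreteness it uses the simple Fredholm test equation $y(t) = 1 - (5/12)t + \int_0^1 ts\, y^2(s)\, ds$, whose exact solution is $y(t) = 1 + t/3$.

```python
# Minimal numerical sketch of PLSM Steps 1-4, assuming SciPy is available.
from scipy.integrate import quad
from scipy.optimize import minimize

def y_tilde(t, c):
    """Polynomial ansatz (5): sum_k c_k t^k."""
    return sum(ck * t**k for k, ck in enumerate(c))

def remainder(t, c):
    """Remainder (3): substitute the ansatz into the equation and keep the error."""
    f = 1.0 - 5.0 / 12.0 * t
    integral, _ = quad(lambda s: t * s * y_tilde(s, c) ** 2, 0.0, 1.0)
    return y_tilde(t, c) - f - integral

def J(c):
    """Functional (7): the squared L2 norm of the remainder over [a, b] = [0, 1]."""
    value, _ = quad(lambda t: remainder(t, c) ** 2, 0.0, 1.0)
    return value

# Step 3: minimize J over the coefficients (here with Nelder-Mead; starting
# near (1, 0.3), the iteration approaches the exact coefficients (1, 1/3)).
result = minimize(J, x0=[1.0, 0.3], method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-14})
c_opt = result.x
```

The choice of minimizer and starting point here is illustrative; Section 3 discusses several alternatives.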

3. Applications

In this section, we compute approximate polynomial solutions for several test problems previously solved using other methods and compare the results.

3.1. Application 1

Our first application is a simple nonlinear Fredholm integral equation ([6, 10]):

(13) $y(t) = 1 - \frac{5}{12} t + \int_0^1 t s\, y^2(s)\, ds$.

This equation is obtained from (1) by choosing the constants $a = 0$, $b = 1$, $\lambda_1 = 0$, and $\lambda_2 = 1$ and the functions $f(t) = 1 - (5/12) t$, $k_2(t,s) = ts$, and $g_2(s, y(s)) = y^2(s)$.

The exact solution of (13) presented in [6, 10] is $y_e(t) = 1 + t/3$. In [6], approximate solutions of (13) were computed using two Sinc-collocation-type methods, and in [10] the exact solution $y_e(t)$ was determined using a Bernoulli matrix method.

Since the solution is a polynomial, we expected that, by using PLSM, we would be able to find, if not the exact solution, at least a very accurate approximation.

In the following, in order to obtain our approximation, we will perform the steps described in the previous section. The computations were performed using the SAGE open source software (v5.5, available at http://www.sagemath.org/).

We choose the polynomial (5) as

(14) $\tilde{y}(t) = c_0 + c_1 t$.

In Step 1, the expression (6) is

(15) $R(t, c_0, c_1) = -\frac{1}{2} c_0^2 t - \frac{2}{3} c_0 c_1 t - \frac{1}{4} c_1^2 t + c_1 t + c_0 + \frac{5}{12} t - 1$.

The corresponding functional (7) from Step 2 is

(16) $J(c_0, c_1) = \frac{1}{18}(2 c_0 - 3) c_1^3 + \frac{1}{12} c_0^4 + \frac{1}{48} c_1^4 + \frac{1}{216}(50 c_0^2 - 150 c_0 + 111) c_1^2 - \frac{1}{2} c_0^3 + \frac{1}{54}(12 c_0^3 - 54 c_0^2 + 80 c_0 - 39) c_1 + \frac{49}{36} c_0^2 - \frac{19}{12} c_0 + \frac{277}{432}$.

In Step 3, we must compute the minimum of J with respect to c 0 ,   c 1 . As mentioned in the previous section, the minimization can be performed in more than one way. The algorithms used in our computations include the following three possible approaches.

3.1.1. Minimization Based on the Exact Computation of Critical Points

For relatively simple problems such as (13), it is possible to compute directly the critical points of J and subsequently select the value corresponding to the minimum.

The critical points corresponding to a functional $J = J(c_0, c_1, \ldots, c_m)$ are the solutions of the system:

(17) $\frac{\partial J}{\partial c_0} = 0, \quad \frac{\partial J}{\partial c_1} = 0, \quad \ldots, \quad \frac{\partial J}{\partial c_m} = 0$.

For the problem (13), the system (17) becomes

(18) $\frac{25}{108}(2 c_0 - 3) c_1^2 + \frac{1}{3} c_0^3 - \frac{3}{2} c_0^2 + \frac{1}{9} c_1^3 + \frac{2}{27}(9 c_0^2 - 27 c_0 + 20) c_1 + \frac{49}{18} c_0 - \frac{19}{12} = 0$,

$\frac{1}{6}(2 c_0 - 3) c_1^2 + \frac{2}{9} c_0^3 + \frac{1}{12} c_1^3 + \frac{1}{108}(50 c_0^2 - 150 c_0 + 111) c_1 - c_0^2 + \frac{40}{27} c_0 - \frac{13}{18} = 0$.

Using the "solve" command in SAGE and excluding the complex solutions, we find the critical points:

(19) $c_0 = 1$, $c_1 = \frac{1}{3}$; $\quad c_0 = 1$, $c_1 = 1$; $\quad c_0 = 0.997942386831$, $c_1 = 0.669410080769$.

In order to find the minimum, we use the second partial derivative test, which is easy enough to implement in SAGE, and find that both $c_0 = 1$, $c_1 = 1/3$ and $c_0 = 1$, $c_1 = 1$ are local minima.

Moreover, we find that $J(1, 1/3) = J(1, 1) = 0$, which means that, by using PLSM, we found not one but two exact solutions of (13):

(20) $y_{PLSM1} = 1 + \frac{t}{3}$, $\quad y_{PLSM2} = 1 + t$.

We remark that the exact solutions can be found this way even if the initial polynomial y ~ ( t ) has a degree greater than one. For example, for the given problem (13) using a third degree polynomial leads to the local minima c 0 = 1 ,   c 1 = 1 / 3 ,   c 2 = 0 ,   c 3 = 0 and c 0 = 1 ,   c 1 = 1 ,   c 2 = 0 ,   c 3 = 0 .
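The exact computation above can be reproduced symbolically. The following SymPy sketch (an illustration; the paper uses SAGE's "solve") rebuilds (15)–(16) from the ansatz and verifies, via the gradient system (17) and the second partial derivative test, that both reported points are minima with $J = 0$.

```python
# SymPy verification of the exact approach of Section 3.1.1 (illustrative).
import sympy as sp

t, s, c0, c1 = sp.symbols("t s c0 c1")
y = c0 + c1 * t
# Remainder (15): substitute the ansatz (14) into (13); the integral is exact
R = y - (1 - sp.Rational(5, 12) * t) - sp.integrate(t * s * y.subs(t, s) ** 2, (s, 0, 1))
# Functional (16), its gradient (the system (18)), and its Hessian
J = sp.expand(sp.integrate(R**2, (t, 0, 1)))
grad = sp.Matrix([sp.diff(J, c0), sp.diff(J, c1)])
H = sp.hessian(J, (c0, c1))

minima = [{c0: 1, c1: sp.Rational(1, 3)}, {c0: 1, c1: 1}]
for p in minima:
    assert grad.subs(p) == sp.zeros(2, 1)    # critical point of (17)
    Hp = H.subs(p)
    assert Hp[0, 0] > 0 and Hp.det() > 0     # second partial derivative test
    assert J.subs(p) == 0                    # J = 0, so both are exact solutions
```

Since $J = \int_0^1 R^2\, dt \ge 0$, the value $J = 0$ at both points shows they are global minima and exact solutions of (13).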

Generally speaking, if the degree of $\tilde{y}(t)$ is too high or if the problem studied is too complicated (e.g., with a strong nonlinearity), then the system (17) cannot be solved exactly. In this case, SAGE's "solve" command fails to find the solutions and exits with an error message.

In this situation, it is still possible to find good approximations of the solutions of the problem by solving the system (17) by means of a numerical method.

3.1.2. Minimization Based on Approximate Computation of Critical Points

In this subsection, we will find approximate solutions for the given problem (13) by solving (17) by means of a SAGE implementation of the well-known Newton method. For the sake of simplicity and clarity, we use the same problem (13) and the same initial polynomial $\tilde{y}(t) = c_0 + c_1 t$, even though we have already found the exact solutions.

As the following results will show, the Newton method is able to find approximate solutions of (17) which can lead to highly accurate approximate solutions of the problem (13).

In order for the sequence of approximations given by Newton's method to converge to the solution(s) of the system (17), the starting point $(s_0, s_1, \ldots, s_m)$ of the sequence successively takes values on a grid of the type $G = I_0 \times I_1 \times \cdots \times I_m$, where $I_i$ is a division of an interval $[-\alpha, \alpha]$.

In the case of the problem (13) and of the polynomial $\tilde{y}(t) = c_0 + c_1 t$, the grid $G = \{-1, 0, 1\} \times \{-1, 0, 1\}$ is large enough in the sense that, as the starting point $(s_0, s_1)$ scans $G$, Newton's method yields approximations of both solutions $y_{PLSM1} = 1 + t/3$ and $y_{PLSM2} = 1 + t$.

More precisely, we found the following approximate solutions:

(21) $y_{PLSM1}^{a} = 0.333333333333336\, t + 1.00000000000000$, $\quad y_{PLSM2}^{a} = 0.999999999999991\, t + 1.00000000000000$.

The absolute errors for these approximations, computed as the differences in absolute value between the approximate solutions and the corresponding exact solutions, are of the order of $10^{-15}$.

We remark that, for polynomials y ~ ( t ) of higher degree, in principle the grid G presented above can become quite large. However, in practice, we observed that, in all the examples tested, two or three values in each division I i were enough to arrive at the approximation sought.
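A possible SciPy version of this multi-start Newton scan is sketched below. This is illustrative rather than the paper's SAGE code: `fsolve` implements a Newton-type hybrid method, the gradient system is the explicit form (18), and the starting values are chosen near the unit square for brevity rather than on the full grid $G$.

```python
# Sketch of Section 3.1.2: solve the critical-point system (17) by a
# Newton-type iteration from several starting points (scipy's fsolve).
import numpy as np
from scipy.optimize import fsolve

def gradJ(c):
    """System (18): the partial derivatives of the functional (16)."""
    c0, c1 = c
    g0 = (25.0 / 108.0) * (2 * c0 - 3) * c1**2 + c0**3 / 3.0 - 1.5 * c0**2 \
         + c1**3 / 9.0 + (2.0 / 27.0) * (9 * c0**2 - 27 * c0 + 20) * c1 \
         + 49.0 / 18.0 * c0 - 19.0 / 12.0
    g1 = (2 * c0 - 3) * c1**2 / 6.0 + 2.0 / 9.0 * c0**3 + c1**3 / 12.0 \
         + (50 * c0**2 - 150 * c0 + 111) * c1 / 108.0 - c0**2 \
         + 40.0 / 27.0 * c0 - 13.0 / 18.0
    return [g0, g1]

roots = []
for s0 in (-1.0, 0.0, 1.0):
    for s1 in (0.2, 0.9):                  # starts chosen near the unit square
        r = fsolve(gradJ, [s0, s1])
        # Keep only genuine, previously unseen critical points
        if np.allclose(gradJ(r), 0.0, atol=1e-10) and \
           not any(np.allclose(r, q, atol=1e-8) for q in roots):
            roots.append(r)
```

Starting near $(1, 0.2)$ and $(1, 0.9)$, the iteration recovers both minima $(1, 1/3)$ and $(1, 1)$ to roughly machine precision, matching (21).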

3.1.3. Minimization Based on a Dedicated Solver

A third approach to finding the minimum of the functional J from (7), and probably the most convenient one, is the use of a specialized optimization package. In SAGE, we can use the "minimize" command, which is based on the well-known open-source SciPy/NumPy libraries (http://www.scipy.org/). The "minimize" command allows us to choose the minimization algorithm used in the computation, the possible choices including, among others, the Nelder-Mead method, Powell's method, the conjugate gradient method, and simulated annealing.

In the case of the problem (13) and of the polynomial $\tilde{y}(t) = c_0 + c_1 t$, choosing Powell's method in the "minimize" command, we obtain the following approximations:

(22) $y_{PLSM1}^{a} = 0.333333264594\, t + 1.00000000881$, $\quad y_{PLSM2}^{a} = 0.99999992986\, t + 0.999999992207$.

The absolute errors corresponding to these approximations are of the order of $10^{-8}$.
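In plain SciPy, the same direct minimization reads as follows (an illustrative sketch using the closed-form functional (16); `scipy.optimize.minimize` with `method="Powell"` corresponds to the Powell option of SAGE's "minimize" command, though the starting point here is chosen arbitrarily).

```python
# Sketch of Section 3.1.3: minimize the functional J with a dedicated solver.
import numpy as np
from scipy.optimize import minimize

def J(c):
    """Functional (16) for equation (13), evaluated in floating point."""
    c0, c1 = c
    return ((2 * c0 - 3) * c1**3 / 18.0 + c0**4 / 12.0 + c1**4 / 48.0
            + (50 * c0**2 - 150 * c0 + 111) * c1**2 / 216.0 - c0**3 / 2.0
            + (12 * c0**3 - 54 * c0**2 + 80 * c0 - 39) * c1 / 54.0
            + 49.0 / 36.0 * c0**2 - 19.0 / 12.0 * c0 + 277.0 / 432.0)

# Powell's method drives J toward one of its two zeros, i.e., toward one of
# the two exact solutions (1, 1/3) or (1, 1) of equation (13)
res = minimize(J, x0=[0.5, 0.5], method="Powell")
```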

To conclude this first application, we remark that, in the following applications, depending on the problem and on the precision sought, we employ one of the three approaches presented above. If the known solution of the problem is a polynomial, we search for the exact solution. If the solution is not polynomial, we present, of the other two approaches, the one which gave the most accurate approximation, as in the case of Application 4.

3.2. Application 2

Our second application is a nonlinear Volterra integral equation ([3, 4]):

(23) $y(t) = \frac{1}{15}(2 t^6 - 5 t^4 + 15 t^2 - 8 t - 20) + \int_{-1}^{t} (t - 2s)\, y^2(s)\, ds$.

This equation is obtained from (1) by choosing the constants $a = -1$, $\lambda_1 = 1$, $\lambda_2 = 0$ and the functions $f(t) = \frac{1}{15}(2 t^6 - 5 t^4 + 15 t^2 - 8 t - 20)$, $k_1(t,s) = t - 2s$, $g_1(s, y(s)) = y^2(s)$.

The exact solution of (23) is $y_e(t) = t^2 - 1$. In [3] and [4], approximate solutions of (23) were computed using approximation methods based on Chebyshev polynomials. The absolute errors of the approximate solutions obtained are of the order of $10^{-2}$ in [3] and of $10^{-15}$ in [4].

In the following, we will compute the exact solution of the problem (23) using PLSM. We choose the polynomial (5) as

(24) $\tilde{y}(t) = c_0 + c_1 t + c_2 t^2$.

The critical points of the corresponding functional (7) are the solutions of the system

(25) $\frac{\partial J}{\partial c_0} = 0, \quad \frac{\partial J}{\partial c_1} = 0, \quad \frac{\partial J}{\partial c_2} = 0$,

whose left-hand sides are lengthy cubic polynomial expressions in $c_0$, $c_1$, $c_2$ obtained by differentiating (7); their explicit form is omitted here.

Using the "solve" command in SAGE and excluding the complex solutions, we obtain the following critical points:

(26) $c_0 = -1$, $c_1 = 0$, $c_2 = 1$;
$c_0 = 5.63142580019$, $c_1 = 5.9171539961$, $c_2 = -3.41555711282$;
$c_0 = 3.62669864109$, $c_1 = 2.43201607502$, $c_2 = -2.53283458022$;
$c_0 = 4.30719794344$, $c_1 = 4.67369589345$, $c_2 = -3.24057971014$;
$c_0 = 5.97333772219$, $c_1 = 5.19967444384$, $c_2 = -2.21509009009$.

Using the second partial derivative test, we deduce that only the first two critical points are minimum points. Computing the values of $J$ at these two points, we see that the global minimum is obtained for $c_0 = -1$, $c_1 = 0$, and $c_2 = 1$, and thus the solution obtained using PLSM is in fact the exact solution of (23):

(27) $y_{PLSM} = t^2 - 1$.
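As a quick independent check (not part of the paper's procedure), one can verify symbolically that $t^2 - 1$ indeed satisfies (23); the SymPy sketch below substitutes the solution into the right-hand side and simplifies.

```python
# Symbolic verification that y(t) = t^2 - 1 satisfies equation (23).
import sympy as sp

t, s = sp.symbols("t s")
y = lambda u: u**2 - 1
f = sp.Rational(1, 15) * (2 * t**6 - 5 * t**4 + 15 * t**2 - 8 * t - 20)
# Right-hand side of (23) with y(s) = s^2 - 1 substituted
rhs = f + sp.integrate((t - 2 * s) * y(s) ** 2, (s, -1, t))
check = sp.simplify(rhs - y(t))   # should reduce to 0
```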

3.3. Application 3

The third application is a nonlinear mixed Volterra-Fredholm integral equation ([2, 5, 9]):

(28) $y(t) = -\frac{1}{30} t^6 + \frac{1}{3} t^4 - t^2 + \frac{5}{3} t - \frac{5}{4} + \int_0^t (t - s)\, y^2(s)\, ds + \int_0^1 (t + s)\, y(s)\, ds$.

The exact solution of (28) is $y_e = t^2 - 2$. Approximate solutions of (28) were computed in [2] using a rationalized Haar functions method, in [5] using a triangular functions method, in [9] using a radial basis functions method, and in [8] using an optimal control method. The absolute errors of the approximate solutions obtained ranged from $10^{-3}$ to $10^{-15}$, but none of these methods could find the exact solution.

We will compute the exact solution of the problem (28) using PLSM. We choose again the polynomial (5) as

(29) $\tilde{y}(t) = c_0 + c_1 t + c_2 t^2$.

The critical points of the corresponding functional (7) are the solutions of the system

(30) $\frac{\partial J}{\partial c_0} = 0, \quad \frac{\partial J}{\partial c_1} = 0, \quad \frac{\partial J}{\partial c_2} = 0$,

whose left-hand sides are again lengthy cubic polynomial expressions in $c_0$, $c_1$, $c_2$; their explicit form is omitted here.

Using the "solve" command in SAGE and again excluding the complex solutions, we obtain the following critical points:

(31) $c_0 = -2$, $c_1 = 0$, $c_2 = 1$;
$c_0 = 0.397332877415$, $c_1 = 2.25632083697$, $c_2 = 2.111946533$;
$c_0 = -1.25269343781$, $c_1 = 4.26258389262$, $c_2 = -2.02633451957$.

Using the second partial derivative test, it follows that only the first two critical points are minimum points and, by computing the values of $J$ at these two points, we see that the global minimum is obtained for $c_0 = -2$, $c_1 = 0$, and $c_2 = 1$; the solution obtained using PLSM is the exact solution of (28):

(32) $y_{PLSM} = t^2 - 2$.
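As with the previous application, the result admits a quick symbolic check (again, an added verification rather than part of the paper's procedure): substituting $t^2 - 2$ into the right-hand side of (28) and simplifying returns the solution itself.

```python
# Symbolic verification that y(t) = t^2 - 2 satisfies equation (28).
import sympy as sp

t, s = sp.symbols("t s")
y = lambda u: u**2 - 2
f = (-t**6 / sp.Integer(30) + t**4 / sp.Integer(3) - t**2
     + sp.Rational(5, 3) * t - sp.Rational(5, 4))
# Right-hand side of (28) with y(s) = s^2 - 2 substituted
rhs = (f + sp.integrate((t - s) * y(s) ** 2, (s, 0, t))
       + sp.integrate((t + s) * y(s), (s, 0, 1)))
check = sp.simplify(rhs - y(t))   # should reduce to 0
```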

3.4. Application 4

The next application is the nonlinear Volterra-Fredholm integral equation ([1, 7]):

(33) $y(t) = 2\cos(t) - 2 + 3 \int_0^t \sin(t - s)\, y^2(s)\, ds + \frac{6}{7 - 6\cos(1)} \int_0^1 (1 - s) \cos^2(s)\, (s + y(s))\, ds$.

The exact solution of (33) is $y_e = \cos(t)$. Approximate solutions for this equation were computed in [1] using the rationalized Haar functions method (RHM) and in [7] using a composite collocation method (CCM) consisting of a hybrid of block-pulse functions and Lagrange polynomials. The solution in [1] contained sixteen terms, while the one in [7] is a piecewise polynomial solution consisting of two polynomials of fourth degree.

Using PLSM, we computed a seventh-degree polynomial approximate solution of (33). We used the second approach described in Application 1, solving the corresponding system (17) by means of Newton's method. We obtained the approximate solution:

(34) $y_{PLSM} = 0.000103064070567775\, t^7 - 0.00157260234848809\, t^6 + 0.000179224005722049\, t^5 + 0.0415635195906221\, t^4 + 0.0000354244251299783\, t^3 - 0.500006936162663\, t^2 + (7.11478876757492 \times 10^{-7})\, t + 1.00000000094979$.

Table 1 presents the comparison of the absolute errors corresponding to the three approximate solutions $y_{RHM}$ ([1]), $y_{CCM}$ ([7]), and $y_{PLSM}$. The approximate solution given by PLSM is much closer to the exact solution and has a simpler form.

Table 1: Comparison of the absolute errors of the approximate solutions for problem (33).

  t      RHM ([1])     CCM ([7])        PLSM
  0.0    5 · 10^-5     3.1021 · 10^-5   9.4979 · 10^-10
  0.2    5 · 10^-6     3.2341 · 10^-6   3.1009 · 10^-8
  0.4    1 · 10^-4     1.9092 · 10^-5   3.7752 · 10^-8
  0.6    1 · 10^-4     1.5029 · 10^-5   4.9982 · 10^-8
  0.8    5 · 10^-5     3.6499 · 10^-6   7.0526 · 10^-8
  1.0    1 · 10^-4     2.4290 · 10^-5   1.0015 · 10^-7
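The PLSM column of Table 1 can be checked directly by evaluating (34) against the exact solution $\cos(t)$ (a quick numerical verification added here, not part of the original paper).

```python
# Evaluate the seventh-degree PLSM approximation (34) against cos(t).
import numpy as np

# Coefficients of (34), constant term first
coeffs = [1.00000000094979, 7.11478876757492e-7, -0.500006936162663,
          0.0000354244251299783, 0.0415635195906221, 0.000179224005722049,
          -0.00157260234848809, 0.000103064070567775]
y_plsm = np.polynomial.Polynomial(coeffs)
ts = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
errors = np.abs(y_plsm(ts) - np.cos(ts))
```

The resulting errors agree with the PLSM column of Table 1: about $9.5 \times 10^{-10}$ at $t = 0$ and about $1.0 \times 10^{-7}$ at $t = 1$.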
4. Conclusions

The paper presents the computation of approximate polynomial solutions for nonlinear integral equations of mixed Volterra-Fredholm type by using the polynomial least squares method, which is presented as a straightforward and efficient method.

The test problems solved clearly illustrate the accuracy of the method, since, in all of the cases, we were able to compute better approximations than the ones computed in previous papers, and, in most cases, the exact solutions were found. Moreover, the expressions of the approximations computed by PLSM are also simpler than the expressions of the approximations computed by using other methods.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

References

[1] Y. Ordokhani, "Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via rationalized Haar functions," Applied Mathematics and Computation, vol. 180, no. 2, pp. 436–443, 2006. DOI: 10.1016/j.amc.2005.12.034
[2] Y. Ordokhani and M. Razzaghi, "Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via a collocation method and rationalized Haar functions," Applied Mathematics Letters, vol. 21, no. 1, pp. 4–9, 2008. DOI: 10.1016/j.aml.2007.02.007
[3] K. Maleknejad, S. Sohrabi, and Y. Rostami, "Numerical solution of nonlinear Volterra integral equations of the second kind by using Chebyshev polynomials," Applied Mathematics and Computation, vol. 188, no. 1, pp. 123–128, 2007. DOI: 10.1016/j.amc.2006.09.099
[4] C. Yang, "Chebyshev polynomial solution of nonlinear integral equations," Journal of the Franklin Institute, vol. 349, no. 3, pp. 947–956, 2012. DOI: 10.1016/j.jfranklin.2011.10.023
[5] K. Maleknejad, H. Almasieh, and M. Roodaki, "Triangular functions (TF) method for the solution of nonlinear Volterra-Fredholm integral equations," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 11, pp. 3293–3298, 2010. DOI: 10.1016/j.cnsns.2009.12.015
[6] K. Maleknejad and K. Nedaiasl, "Application of sinc-collocation method for solving a class of nonlinear Fredholm integral equations," Computers & Mathematics with Applications, vol. 62, no. 8, pp. 3292–3303, 2011. DOI: 10.1016/j.camwa.2011.08.045
[7] H. R. Marzban, H. R. Tabrizidooz, and M. Razzaghi, "A composite collocation method for the nonlinear mixed Volterra-Fredholm-Hammerstein integral equations," Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 3, pp. 1186–1194, 2011. DOI: 10.1016/j.cnsns.2010.06.013
[8] M. A. El-Ameen and M. El-Kady, "A new direct method for solving nonlinear Volterra-Fredholm-Hammerstein integral equations via optimal control problem," Journal of Applied Mathematics, vol. 2012, Article ID 714973, 10 pages, 2012. DOI: 10.1155/2012/714973
[9] K. Parand and J. A. Rad, "Numerical solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via collocation method based on radial basis functions," Applied Mathematics and Computation, vol. 218, no. 9, pp. 5292–5309, 2012. DOI: 10.1016/j.amc.2011.11.013
[10] A. H. Bhrawy, E. Tohidi, and F. Soleymani, "A new Bernoulli matrix method for solving high-order linear and nonlinear Fredholm integro-differential equations with piecewise intervals," Applied Mathematics and Computation, vol. 219, no. 2, pp. 482–497, 2012. DOI: 10.1016/j.amc.2012.06.020
[11] B. Ghanbari, "The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations," The Scientific World Journal, vol. 2014, Article ID 465951, 7 pages, 2014. DOI: 10.1155/2014/465951