The present paper presents the application of the polynomial least squares method to nonlinear integral equations of the mixed Volterra-Fredholm type. For this type of equation, accurate approximate polynomial solutions are obtained in a straightforward manner, and numerical examples are given to illustrate the validity and applicability of the method. A comparison with previous results is also presented, emphasizing the accuracy of the method.
1. Introduction
In this paper, we consider nonlinear integral Volterra-Fredholm equations of the general form:
(1) y(t) = f(t) + λ1 ∫_a^t k1(t,s) g1(s, y(s)) ds + λ2 ∫_a^b k2(t,s) g2(s, y(s)) ds,
where a, b, λ1, and λ2 are constants and f, k1, k2, g1, and g2 are functions assumed to have suitable derivatives on the interval [a, b].
Equations of this type are frequently used to model applications from various fields of science such as elasticity, electricity and magnetism, fluid dynamics, population dynamics, and mathematical economics.
In general, the exact solution of these nonlinear integral equations cannot be found and thus it is often necessary to find approximate solutions for such equations. In this regard, many approximation techniques have been employed over the years. Some of the approximation methods employed in recent years include the following (see the examples in Section 3):
Rationalized Haar functions method ([1, 2])
Chebyshev polynomials method ([3, 4])
Triangular functions (TF) method ([5])
Sinc approximation method ([6])
Collocation methods ([7])
Optimal control method ([8])
Radial basis functions method ([9])
Bernoulli matrix method ([10])
Homotopy analysis method ([11]).
In the next section, we will present the polynomial least squares method (PLSM), which allows us to determine analytical approximate polynomial solutions for nonlinear integral equations. In the third section, we will compare approximate solutions obtained using PLSM with approximate solutions computed recently for several test problems. If the exact solution of the test problem is polynomial, PLSM is able to find the exact solution. If not, PLSM allows us to obtain approximations with an error relative to the exact solution smaller than the errors obtained using other methods. In most cases, the approximate solutions obtained not only are more precise but also present a simpler expression in comparison to previous ones.
2. The Polynomial Least Squares Method
We consider the following operator, corresponding to (1):
(2) D(y) = y(t) − f(t) − λ1 ∫_a^t k1(t,s) g1(s, y(s)) ds − λ2 ∫_a^b k2(t,s) g2(s, y(s)) ds.
We also consider the so-called remainder associated with (1), defined as the error obtained by replacing the exact solution y with an approximate solution yapp:
(3) R(t, yapp) = D(yapp(t)), t ∈ [a, b].
Before we present the actual steps of the method, we introduce the following types of solutions.
Definition 1.
One calls an ϵ-approximate polynomial solution of (1) an approximate polynomial solution yapp whose remainder (3) satisfies |R(t, yapp)| < ϵ, ∀t ∈ [a, b].
Definition 2.
One calls a weak δ-approximate polynomial solution of (1) an approximate polynomial solution yapp satisfying the relation:
(4) ∫_a^b R^2(t, yapp) dt ≤ δ.
One also considers the following type of convergence.
Definition 3.
One considers the sequence of polynomials Pm(t) = a0 + a1 t + ⋯ + am t^m, ai ∈ R, i = 0, 1, …, m. One calls the sequence of polynomials Pm(t) convergent to the solution of (1) if lim_{m→∞} D(Pm(t)) = 0.
The aim of PLSM is to find a weak ϵ-approximate polynomial solution of the type:
(5) y~(t) = ∑_{k=0}^m ck t^k.
The values of the constants c0,c1,…,cm are calculated using the following steps.
Step 1.
By substituting the approximate solution (5) in (1), we obtain the following expression:
(6) R(t, c0, c1, …, cm) = R(t, y~) = y~(t) − f(t) − λ1 ∫_a^t k1(t,s) g1(s, y~(s)) ds − λ2 ∫_a^b k2(t,s) g2(s, y~(s)) ds.
Step 2.
Next, we attach to (1) the following real functional:
(7) J(c0, c1, …, cm) = ∫_a^b R^2(t, c0, c1, …, cm) dt.
Step 3.
We compute c0^0, c1^0, …, cm^0 as the values which minimize the functional (7). We remark that the minimization can be performed in many ways; some examples are presented in the next section.
Step 4.
Using the constants c0^0, c1^0, …, cm^0 determined in the previous step, we consider the polynomial:
(8) Tm(t) = ∑_{k=0}^m ck^0 t^k.
The following convergence theorem holds.
Theorem 4.
A necessary condition for (1) to admit a sequence of polynomials Pm(t) convergent to the solution of this equation is
(9) lim_{m→∞} ∫_a^b R^2(t, Tm) dt = 0.
Moreover, ∀ϵ > 0, ∃m0 ∈ N such that, for every m ∈ N with m > m0, Tm(t) is a weak ϵ-approximate polynomial solution of (1).
Proof.
Based on the way the coefficients of polynomial Tm(t) are computed and taking into account the relations (5)–(8), the following inequality holds:
(10) 0 ≤ ∫_a^b R^2(t, Tm(t)) dt ≤ ∫_a^b R^2(t, Pm(t)) dt, ∀m ∈ N.
It follows that
(11) 0 ≤ lim_{m→∞} ∫_a^b R^2(t, Tm(t)) dt ≤ lim_{m→∞} ∫_a^b R^2(t, Pm(t)) dt = 0.
We obtain
(12) lim_{m→∞} ∫_a^b R^2(t, Tm(t)) dt = 0.
From this limit we obtain that, ∀ϵ > 0, ∃m0 ∈ N such that, for every m ∈ N with m > m0, Tm(t) is a weak ϵ-approximate polynomial solution of (1).
Step 5.
Taking into account the fact that any ϵ-approximate polynomial solution of (1) is also a weak ϵ^2(b−a)-approximate polynomial solution (while the converse is not always true), it follows that the set of weak approximate solutions of (1) contains the ϵ-approximate solutions of the equation. As a consequence, in order to find ϵ-approximate polynomial solutions of (1) by PLSM, we first compute weak approximate polynomial solutions y~app. If |R(t, y~app)| < ϵ, then y~app is also an ϵ-approximate polynomial solution of the problem.
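The ϵ^2(b−a) factor in Step 5 follows from a one-line estimate, sketched here for completeness:

```latex
% If |R(t, y_{app})| < \epsilon for all t \in [a, b], then
\int_a^b R^2(t, y_{app})\,dt \;\le\; \int_a^b \epsilon^2\,dt \;=\; \epsilon^2 (b - a),
% i.e., y_{app} is a weak \epsilon^2(b-a)-approximate polynomial
% solution in the sense of Definition 2.
```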
3. Applications
In this section, we compute approximate polynomial solutions for several test problems previously solved using other methods and compare the results.
3.1. Application 1
Our first application is a simple nonlinear Fredholm integral equation ([6, 10]):
(13) y(t) = 1 − (5/12) t + ∫_0^1 t s y^2(s) ds.
This equation is obtained from (1) by choosing the constants a = 0, b = 1, λ1 = 0, and λ2 = 1 and the functions f(t) = 1 − (5/12) t, k2(t, s) = t s, and g2(s, y(s)) = y^2(s).
The exact solution of (13) presented in [6, 10] is ye(t) = 1 + t/3. In [6], approximate solutions of (13) were computed using two Sinc-collocation-type methods, while in [10] the exact solution ye(t) was determined using a Bernoulli matrix method.
Since the solution is a polynomial, we expected that, by using PLSM, we would be able to find, if not the exact solution, at least a very accurate approximation.
In the following, in order to obtain our approximation, we will perform the steps described in the previous section. The computations were performed using the SAGE open source software (v5.5, available at http://www.sagemath.org/).
We choose the polynomial (5) as
(14)y~(t)=c0+c1t.
In Step 1, the expression (6) is
(15) R(t, c0, c1) = −(1/2) c0^2 t − (2/3) c0 c1 t − (1/4) c1^2 t + c1 t + c0 + (5/12) t − 1.
The corresponding functional (7) from Step 2 is
(16) J(c0, c1) = (1/18)(2c0 − 3) c1^3 + (1/12) c0^4 + (1/48) c1^4 + (1/216)(50c0^2 − 150c0 + 111) c1^2 − (1/2) c0^3 + (1/54)(12c0^3 − 54c0^2 + 80c0 − 39) c1 + (49/36) c0^2 − (19/12) c0 + 277/432.
In Step 3, we must compute the minimum of J with respect to c0, c1. As mentioned in the previous section, the minimization can be performed in more than one way. The algorithms used in our computations include the following three possible approaches.
3.1.1. Minimization Based on the Exact Computation of Critical Points
For relatively simple problems such as (13), it is possible to compute directly the critical points of J and subsequently select the value corresponding to the minimum.
The critical points corresponding to a functional J = J(c0, c1, …, cm) are the solutions of the system:
(17) ∂J/∂c0 = 0, ∂J/∂c1 = 0, …, ∂J/∂cm = 0.
For the problem (13), the system (17) becomes
(18) (1/3) c0^3 − (3/2) c0^2 + (25/108)(2c0 − 3) c1^2 + (1/9) c1^3 + (2/27)(9c0^2 − 27c0 + 20) c1 + (49/18) c0 − 19/12 = 0,
(1/6)(2c0 − 3) c1^2 + (2/9) c0^3 + (1/12) c1^3 + (1/108)(50c0^2 − 150c0 + 111) c1 − c0^2 + (40/27) c0 − 13/18 = 0.
Using the “solve” command in SAGE and excluding the complex solutions, we find the critical points:
(19) c0 = 1, c1 = 1/3;
c0 = 1, c1 = 1;
c0 = 0.997942386831, c1 = 0.669410080769.
In order to find the minimum, we use the second partial derivative test, which is easy enough to be implemented in SAGE, and find that both c0=1, c1=1/3 and c0=1, c1=1 are local minima.
Moreover, we find that J(1,1/3)=J(1,1)=0, which means that, by using PLSM, we found not one but two exact solutions of (13):
(20) yPLSM1 = 1 + t/3, yPLSM2 = 1 + t.
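Both solutions (20) can be checked by direct substitution into (13). The following sympy sketch is our illustration of such a check, not part of the original SAGE computation:

```python
import sympy as sp

t, s = sp.symbols("t s")

def residual(y):
    # R(t) = y(t) - f(t) - integral_0^1 t*s*y(s)^2 ds for equation (13)
    f = 1 - sp.Rational(5, 12) * t
    integral = sp.integrate(t * s * y.subs(t, s) ** 2, (s, 0, 1))
    return sp.simplify(y - f - integral)

print(residual(1 + t / 3))  # 0 : the first exact solution
print(residual(1 + t))      # 0 : the second exact solution
```

Since both remainders vanish identically, both polynomials are exact solutions of (13).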
We remark that the exact solutions can be found this way even if the initial polynomial y~(t) has a degree greater than one. For example, for the given problem (13) using a third degree polynomial leads to the local minima c0=1, c1=1/3, c2=0, c3=0 and c0=1, c1=1, c2=0, c3=0.
Generally speaking, if the degree of y~(t) is too high or if the problem studied is too complicated (e.g., with a strong nonlinearity), then the system (17) cannot be solved exactly. In the case of SAGE, the command “solve” fails to find the solutions and exits with an error message.
In this situation, it is still possible to find good approximations of the solutions of the problem by solving the system (17) numerically.
3.1.2. Minimization Based on Approximate Computation of Critical Points
In this subsection, we find approximate solutions for the given problem (13) by solving (17) with a SAGE implementation of the well-known Newton method. For the sake of simplicity and clarity, we use the same problem (13) and the same initial polynomial y~(t) = c0 + c1 t, even though we already found the exact solutions.
As the following results will show, the Newton method is able to find approximate solutions of (17) which can lead to highly accurate approximate solutions of the problem (13).
In order for the sequence of approximations given by Newton's formula to converge to the solution(s) of the system (17), the starting point (s0, s1, …, sm) of the sequence successively takes values on a given grid of the type G = I0 × I1 × ⋯ × Im, where Ii is a division of an interval [−α, α].
In the case of the problem (13) and of the polynomial y~(t) = c0 + c1 t, the grid G = {−1, 0, 1} × {−1, 0, 1} is large enough in the sense that, if the starting point (s0, s1) scans G, Newton's formula yields approximations for both solutions yPLSM1 = 1 + t/3 and yPLSM2 = 1 + t.
More precisely, we found the following approximate solutions:
(21)yPLSM1a=0.333333333333336t+1.00000000000000,yPLSM2a=0.999999999999991t+1.00000000000000.
The absolute errors for these approximations, computed as the differences in absolute value between the approximate solutions and the corresponding exact solutions, are of the order of 10-15.
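The grid scan can be sketched with SciPy's `fsolve` applied to the gradient system (18); this is our substitution for the paper's SAGE Newton implementation (fsolve uses a hybrid Powell method rather than pure Newton iteration):

```python
import itertools
import numpy as np
from scipy.optimize import fsolve

def grad_J(c):
    """Gradient of the functional J(c0, c1) from (16), i.e. the system (18)."""
    c0, c1 = c
    d0 = (c0**3 / 3 - (3 / 2) * c0**2 + (25 / 108) * (2 * c0 - 3) * c1**2
          + c1**3 / 9 + (2 / 27) * (9 * c0**2 - 27 * c0 + 20) * c1
          + (49 / 18) * c0 - 19 / 12)
    d1 = ((1 / 6) * (2 * c0 - 3) * c1**2 + (2 / 9) * c0**3 + c1**3 / 12
          + (1 / 108) * (50 * c0**2 - 150 * c0 + 111) * c1
          - c0**2 + (40 / 27) * c0 - 13 / 18)
    return [d0, d1]

# Scan the grid G = {-1, 0, 1} x {-1, 0, 1} of starting points.
roots = []
for start in itertools.product([-1.0, 0.0, 1.0], repeat=2):
    sol, info, ier, _ = fsolve(grad_J, start, full_output=True)
    if ier == 1 and not any(np.allclose(sol, r, atol=1e-6) for r in roots):
        roots.append(sol)

for r in roots:
    print(r)  # critical points of J reachable from the grid
```

In particular, the grid point (1, 1) is itself a critical point, so the scan recovers the second exact solution immediately.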
We remark that, for polynomials y~(t) of higher degree, in principle the grid G presented above can become quite large. However, in practice, we observed that, in all the examples tested, two or three values in each division Ii were enough to arrive at the approximation sought.
3.1.3. Minimization Based on a Dedicated Solver
A third approach in finding the minimum of the functional J from (7), and probably the most convenient one, is the use of a specialized optimization package. In SAGE, we can use the “minimize” command which is based on the well-known open-source SciPy/NumPy libraries (http://www.scipy.org/). The “minimize” command allows us to choose the minimization algorithm used in the computation, the possible choices including, among others, the Nelder-Mead method, Powell's method, the conjugate gradient method, and the simulated annealing method.
In the case of the problem (13) and of the polynomial y~(t) = c0 + c1 t, choosing Powell's method in the “minimize” command, we obtain the following approximations:
(22)yPLSM1a=0.333333264594t+1.00000000881,yPLSM2a=0.99999992986t+0.999999992207.
The absolute errors corresponding to these approximations are of the order of 10-8.
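A minimal sketch of this third approach, calling SciPy's `minimize` directly on the functional (16); the starting point and the choice of Powell's method here are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

def J(c):
    """The functional J(c0, c1) from (16)."""
    c0, c1 = c
    return ((1 / 18) * (2 * c0 - 3) * c1**3 + c0**4 / 12 + c1**4 / 48
            + (1 / 216) * (50 * c0**2 - 150 * c0 + 111) * c1**2
            - c0**3 / 2
            + (1 / 54) * (12 * c0**3 - 54 * c0**2 + 80 * c0 - 39) * c1
            + (49 / 36) * c0**2 - (19 / 12) * c0 + 277 / 432)

res = minimize(J, x0=[0.0, 0.0], method="Powell")
print(res.x, res.fun)  # a local minimum of J, with J(res.x) close to 0
```

Since J is the integral of R^2, any minimizer with J ≈ 0 corresponds to one of the exact solutions (20).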
To conclude this first application, we remark that in the following applications we use one of the three approaches described above, depending on the problem and on the precision sought for the approximate solution. If the known solution of the problem is a polynomial, we search for the exact solution. If the solution is not polynomial, we present whichever of the other two approaches gave the more accurate approximation, as in the case of Application 4.
3.2. Application 2
Our second application is a nonlinear Volterra integral equation ([3, 4]):
(23) y(t) = (1/15)(2t^6 − 5t^4 + 15t^2 − 8t − 20) + ∫_{−1}^t (t − 2s) y^2(s) ds.
This equation is obtained from (1) by choosing the constants a = −1, λ1 = 1, and λ2 = 0 and the functions f(t) = (1/15)(2t^6 − 5t^4 + 15t^2 − 8t − 20), k1(t, s) = t − 2s, and g1(s, y(s)) = y^2(s).
The exact solution of (23) is ye(t) = t^2 − 1. In [3] and [4], approximate solutions of (23) were computed using approximation methods based on Chebyshev polynomials. The absolute errors of the approximate solutions obtained are of the order of 10^-2 in [3] and of 10^-15 in [4].
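The exact solution can be verified by direct substitution into (23); the following sympy check is ours, added for illustration:

```python
import sympy as sp

t, s = sp.symbols("t s")
y = t**2 - 1  # claimed exact solution of (23)

# Right-hand side of (23) with y substituted into the Volterra term.
f = sp.Rational(1, 15) * (2 * t**6 - 5 * t**4 + 15 * t**2 - 8 * t - 20)
rhs = f + sp.integrate((t - 2 * s) * (s**2 - 1) ** 2, (s, -1, t))

print(sp.simplify(rhs - y))  # 0, confirming y(t) = t^2 - 1 solves (23)
```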
In the following, we will compute the exact solution of the problem (23) using PLSM. We choose the polynomial (5) as
(24)y~(t)=c0+c1t+c2t2.
The critical points of the corresponding functional (7) are the solutions of the system:
(25) ∂J/∂c0 = 0, ∂J/∂c1 = 0, ∂J/∂c2 = 0,
where the three partial derivatives are lengthy polynomial expressions in c0, c1, and c2 (not reproduced here).
Using the “solve” command in SAGE and excluding the complex solutions, we obtain the following critical points:
(26) c0 = −1, c1 = 0, c2 = 1;
c0 = 5.63142580019, c1 = 5.9171539961, c2 = −3.41555711282;
c0 = 3.62669864109, c1 = 2.43201607502, c2 = −2.53283458022;
c0 = 4.30719794344, c1 = 4.67369589345, c2 = −3.24057971014;
c0 = 5.97333772219, c1 = 5.19967444384, c2 = −2.21509009009.
Using the second partial derivative test, we deduce that only the first two critical points are minimum points. Computing the values of J at these two points, we see that the global minimum is attained for c0 = −1, c1 = 0, and c2 = 1, and thus the solution obtained using PLSM is in fact the exact solution of (23):
(27)yPLSM=t2-1.
3.3. Application 3
The third application is a nonlinear mixed Volterra-Fredholm integral equation ([2, 5, 9]):
(28) y(t) = −(1/30)t^6 + (1/3)t^4 − t^2 + (5/3)t − 5/4 + ∫_0^t (t − s) y^2(s) ds + ∫_0^1 (t + s) y(s) ds.
The exact solution of (28) is ye(t) = t^2 − 2. Approximate solutions of (28) were computed in [2] using a rationalized Haar functions method, in [5] using a triangular functions method, in [9] using a radial basis functions method, and in [8] using an optimal control method. The absolute errors of the approximate solutions obtained varied from 10^-3 to 10^-15, but none of these methods could find the exact solution.
We will compute the exact solution of the problem (28) using PLSM. We choose again the polynomial (5) as
(29)y~(t)=c0+c1t+c2t2.
The critical points of the corresponding functional (7) are the solutions of the system:
(30) ∂J/∂c0 = 0, ∂J/∂c1 = 0, ∂J/∂c2 = 0,
where the partial derivatives are lengthy polynomial expressions in c0, c1, and c2 (not reproduced here).
Using the “solve” command in SAGE and again excluding the complex solutions, we obtain the following critical points:
(31) c0 = −2, c1 = 0, c2 = 1;
c0 = 0.397332877415, c1 = 2.25632083697, c2 = 2.111946533;
c0 = −1.25269343781, c1 = 4.26258389262, c2 = −2.02633451957.
Using the second partial derivative test, it follows that only the first two critical points are minimum points and, by computing the values of J for these two minimum points, we see that the global minimum is obtained for c0=-2, c1=0, and c2=1 and the solution obtained using PLSM is the exact solution of (28):
(32)yPLSM=t2-2.
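The minimum reported above can also be reproduced purely numerically, bypassing the symbolic system (30): the sketch below assembles J for (28) by quadrature and minimizes it. The Nelder-Mead method and the starting point near the reported minimum are our own choices:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def residual(t, c):
    """R(t, c0, c1, c2) for equation (28) with y~ = c0 + c1 t + c2 t^2."""
    c0, c1, c2 = c
    y = lambda x: c0 + c1 * x + c2 * x**2
    f = -t**6 / 30 + t**4 / 3 - t**2 + 5 * t / 3 - 5 / 4
    volterra = quad(lambda s: (t - s) * y(s) ** 2, 0, t)[0]
    fredholm = quad(lambda s: (t + s) * y(s), 0, 1)[0]
    return y(t) - f - volterra - fredholm

def J(c):
    # The functional (7): integral of R^2 over [0, 1].
    return quad(lambda t: residual(t, c) ** 2, 0, 1)[0]

res = minimize(J, x0=[-1.9, 0.1, 0.9], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-12})
print(res.x)  # close to the global minimum (-2, 0, 1)
```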
3.4. Application 4
The next application is the nonlinear Volterra-Fredholm integral equation ([1, 7]):
(33) y(t) = 2 cos(t) − 2 + 3 ∫_0^t sin(t − s) y^2(s) ds + (6/(7 − 6 cos(1))) ∫_0^1 (1 − s) cos^2(s) (s + y(s)) ds.
The exact solution of (33) is ye(t) = cos(t). Approximate solutions for this equation were computed in [1] using the rationalized Haar functions method (RHM) and in [7] using a composite collocation method (CCM) consisting of a hybrid of block-pulse functions and Lagrange polynomials. The solution in [1] contained sixteen terms, while the one in [7] is a piecewise polynomial solution consisting of two polynomials of fourth degree.
Using PLSM, we computed a seventh-degree polynomial approximate solution of (33). We used the second approach described in Application 1, solving the corresponding system (17) by means of Newton's method. We obtained the approximate solution:
(34) yPLSM = 0.000103064070567775 t^7 − 0.00157260234848809 t^6 + 0.000179224005722049 t^5 + 0.0415635195906221 t^4 + 0.0000354244251299783 t^3 − 0.500006936162663 t^2 + (7.11478876757492 × 10^-7) t + 1.00000000094979.
Table 1 presents the comparison of the absolute errors corresponding to the three approximate solutions yRHM ([1]), yCCM ([7]), and yPLSM. The approximate solution given by PLSM is much closer to the exact solution and has a simpler form.
Comparison of the absolute errors of the approximate solutions for Problem (33).
t      RHM       CCM           PLSM
0      5·10^-5   3.1021·10^-5  9.4979·10^-10
0.2    5·10^-6   3.2341·10^-6  3.1009·10^-8
0.4    1·10^-4   1.9092·10^-5  3.7752·10^-8
0.6    1·10^-4   1.5029·10^-5  4.9982·10^-8
0.8    5·10^-5   3.6499·10^-6  7.0526·10^-8
1      1·10^-4   2.4290·10^-5  1.0015·10^-7
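The PLSM column above can be rechecked by evaluating the polynomial (34) against cos(t); a short numpy sketch of ours over the printed coefficients:

```python
import numpy as np

# Coefficients of (34), highest degree first, as printed in the text.
coeffs = [0.000103064070567775, -0.00157260234848809, 0.000179224005722049,
          0.0415635195906221, 0.0000354244251299783, -0.500006936162663,
          7.11478876757492e-7, 1.00000000094979]
# numpy's Polynomial expects lowest degree first, hence the reversal.
y_plsm = np.polynomial.Polynomial(coeffs[::-1])

for t in [0, 0.2, 0.4, 0.6, 0.8, 1]:
    print(t, abs(y_plsm(t) - np.cos(t)))  # compare with the PLSM column
```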
4. Conclusions
The paper presents the computation of approximate polynomial solutions for nonlinear integral equations of mixed Volterra-Fredholm type by using the polynomial least squares method, a straightforward and efficient method.
The test problems solved clearly illustrate the accuracy of the method: in all of the cases we were able to compute better approximations than the ones computed in previous papers, and in most cases the exact solutions were found. Moreover, the expressions of the approximations computed by PLSM are simpler than the expressions of the approximations computed by other methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] Y. Ordokhani, “Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via rationalized Haar functions.”
[2] Y. Ordokhani and M. Razzaghi, “Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via a collocation method and rationalized Haar functions.”
[3] K. Maleknejad, S. Sohrabi, and Y. Rostami, “Numerical solution of nonlinear Volterra integral equations of the second kind by using Chebyshev polynomials.”
[4] C. Yang, “Chebyshev polynomial solution of nonlinear integral equations.”
[5] K. Maleknejad, H. Almasieh, and M. Roodaki, “Triangular functions (TF) method for the solution of nonlinear Volterra-Fredholm integral equations.”
[6] K. Maleknejad and K. Nedaiasl, “Application of sinc-collocation method for solving a class of nonlinear Fredholm integral equations.”
[7] H. R. Marzban, H. R. Tabrizidooz, and M. Razzaghi, “A composite collocation method for the nonlinear mixed Volterra-Fredholm-Hammerstein integral equations.”
[8] M. A. El-Ameen and M. El-Kady, “A new direct method for solving nonlinear Volterra-Fredholm-Hammerstein integral equations via optimal control problem.”
[9] K. Parand and J. A. Rad, “Numerical solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via collocation method based on radial basis functions.”
[10] A. H. Bhrawy, E. Tohidi, and F. Soleymani, “A new Bernoulli matrix method for solving high-order linear and nonlinear Fredholm integro-differential equations with piecewise intervals.”
[11] B. Ghanbari, “The convergence study of the homotopy analysis method for solving nonlinear Volterra-Fredholm integrodifferential equations.”