We introduce a new combination of Bernstein polynomials (BPs) and Block-Pulse functions (BPFs) on the interval [0, 1]. These functions are well suited to finding approximate solutions of integral equations of the second kind. We call this method the Hybrid Bernstein Block-Pulse Functions Method (HBBPFM). The method is very simple: the integral equation is reduced to a system of linear equations. We also present a convergence analysis for the method. The method is computationally simple and attractive, and numerical examples illustrate its efficiency and accuracy.
1. Introduction
In recent years, many different basic functions have been used for solving integral equations, such as Block-Pulse functions [1, 2], Triangular functions [3], Haar functions [4], Hybrid Legendre and Block-Pulse functions [5], Hybrid Chebyshev and Block-Pulse functions [6, 7], Hybrid Taylor and Block-Pulse functions [8], and Hybrid Fourier and Block-Pulse functions [9].
Block-Pulse functions were introduced in electrical engineering by Harmuth. After that study, several researchers have discussed applications of Block-Pulse functions [10, 11].
Bernstein polynomials have been applied in various fields of mathematics. For example, some researchers applied the Bernstein polynomials for solving high order differential equations [12], some classes of integral equations [13], partial differential equations, and optimal control problems [14]. Also, we introduced new operational matrices of fractional derivative and integral operators by Bernstein polynomials and then used them for solving fractional differential equations [15–17], system of fractional differential equations [18], and fractional optimal control problems [19, 20].
In this work, we combine the Bernstein polynomials (BPs) and Block-Pulse functions (BPFs) on the interval [0,1]. Then, we use these bases for finding an approximate solution of the second kind integral equation. We call this method Hybrid Bernstein Block-Pulse Functions Method (HBBPFM). In this method the integral equation is reduced to a system of linear equations. Also, we discuss the convergence analysis for this method. Furthermore, we compare the accuracy of obtained results of BPFs, BPs, and HBBPFM by some examples.
The rest of this paper is organized as follows. In Section 2, HBBPFs are introduced. In Section 3, we approximate functions by using HBBPFs and discuss best approximation and convergence analysis. In Section 4, we apply the HBBPF method to find approximate solutions of second kind integral equations and analyze the error of the proposed method. In Section 5, we apply the proposed method to some examples and observe that it is more accurate and efficient than the related methods. Finally, Section 6 concludes the paper.
2. Hybrid of Bernstein and Block-Pulse Functions
In this section, we recall some definitions and properties of Bernstein polynomials and Block-Pulse functions.
Lemma 1 (see [19]).
The Bernstein polynomials (BPs) of mth-degree are defined on the interval [0,1] as follows:
(1) $B_{i,m}(x)=\binom{m}{i}x^{i}(1-x)^{m-i},\quad i=0,1,\dots,m,$
where
(2) $\binom{m}{i}=\frac{m!}{i!\,(m-i)!}.$
Then $\{B_{0,m},B_{1,m},\dots,B_{m,m}\}$ is a complete basis of the Hilbert space $L^2[0,1]$. Therefore, any polynomial of degree $m$ can be expanded as a linear combination of the $B_{i,m}(x)$ $(i=0,1,\dots,m)$.
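As a concrete illustration of Lemma 1, the Bernstein basis can be evaluated directly. The following is a minimal Python sketch (the function name `bernstein` is ours, not from the paper); it also checks the partition-of-unity property $\sum_{i=0}^{m}B_{i,m}(x)=1$:

```python
from math import comb

def bernstein(i, m, x):
    # B_{i,m}(x) = C(m, i) * x^i * (1 - x)^(m - i) on [0, 1]
    return comb(m, i) * x**i * (1 - x)**(m - i)

# The Bernstein basis sums to 1 at every x (partition of unity)
total = sum(bernstein(i, 3, 0.4) for i in range(4))
```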
Lemma 2.
Let the Block-Pulse functions (BPFs) $b_i(t)$, $i=1,2,\dots,N$, be defined on the interval $[0,1)$ by
(3) $b_i(t)=\begin{cases}1, & \frac{i-1}{N}\le t<\frac{i}{N},\\ 0, & \text{otherwise}.\end{cases}$
Then these functions satisfy the following properties:
disjointness,
orthogonality,
completeness.
Proof.
The disjointness property can be clearly obtained from the definition of Block-Pulse functions as follows:
(4) $b_i(t)\,b_j(t)=\begin{cases}b_i(t), & i=j,\\ 0, & i\neq j,\end{cases}$
where i,j=1,2,…,N.
The other property is orthogonality. It is clear that
(5) $\int_0^1 b_i(t)\,b_j(t)\,dt=\frac{1}{N}\,\delta_{ij},$
where $i,j=1,2,\dots,N$ and $\delta_{ij}$ is the Kronecker delta.
The third property is completeness. For every $f\in L^2([0,1))$, Parseval's identity holds as the number of Block-Pulse functions approaches infinity:
(6) $\int_0^1 f^2(t)\,dt=\sum_{i=1}^{\infty} f_i^2\,\|b_i(t)\|^2,$
where $f_i=N\int_0^1 f(t)\,b_i(t)\,dt$.
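The disjointness and orthogonality relations (4) and (5) are easy to check numerically. A small sketch, assuming a midpoint Riemann sum is accurate enough for the verification:

```python
def bpf(i, N, t):
    # b_i(t) = 1 on [(i-1)/N, i/N) and 0 elsewhere, i = 1, ..., N
    return 1.0 if (i - 1) / N <= t < i / N else 0.0

N, steps = 4, 100_000
h = 1.0 / steps
nodes = [(k + 0.5) * h for k in range(steps)]
# by (5), <b_2, b_2> should equal 1/N and <b_2, b_3> should vanish
same = sum(bpf(2, N, t) * bpf(2, N, t) for t in nodes) * h
cross = sum(bpf(2, N, t) * bpf(3, N, t) for t in nodes) * h
```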
Definition 3.
The hybrid Bernstein Block-Pulse functions $H_{n,m}(t)$, $n=1,2,\dots,N$, $m=0,1,\dots,M$, have three arguments: $n$ and $m$ are the orders of the BPFs and BPs, respectively, and $t$ is the normalized time. HBBPFs are defined on the interval $[0,1)$ as follows:
(7) $H_{n,m}(t)=\begin{cases}B_{m,M}(Nt-n+1), & \frac{n-1}{N}\le t\le\frac{n}{N},\\ 0, & \text{otherwise}.\end{cases}$
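The definition (7) translates directly into code. A minimal sketch (the helper names `bernstein` and `hbbpf` are ours):

```python
from math import comb

def bernstein(m, M, x):
    # B_{m,M}(x) = C(M, m) * x^m * (1 - x)^(M - m)
    return comb(M, m) * x**m * (1 - x)**(M - m)

def hbbpf(n, m, N, M, t):
    # H_{n,m}(t) = B_{m,M}(N t - n + 1) on the n-th subinterval, else 0
    if (n - 1) / N <= t < n / N:
        return bernstein(m, M, N * t - n + 1)
    return 0.0
```

For example, with $N=4$ and $M=3$, `hbbpf(2, m, 4, 3, t)` is supported only on $[0.25, 0.5)$.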
In the next section, we deal with the problem of approximation of these functions.
3. Approximation of Functions by Using HBBPFs and Convergence Analysis
Theorem 4.
Suppose that the function $f:[0,1]\to\mathbb{R}$ is $m+1$ times continuously differentiable and let $S=\mathrm{Span}\{B_{0,m},B_{1,m},\dots,B_{m,m}\}$. Then $c^TB=s_0=\sum_{i=0}^{m}c_iB_{i,m}\in S$ is the best approximation of $f$ out of $S\subseteq L^2[0,1]$ with respect to the inner product
(8) $\langle f,B\rangle=\int_0^1 f(x)\,B(x)^T\,dx=[\langle f,B_{0,m}\rangle,\langle f,B_{1,m}\rangle,\dots,\langle f,B_{m,m}\rangle],$
where $B^T=[B_{0,m},B_{1,m},\dots,B_{m,m}]$ and $c^T=[c_0,c_1,\dots,c_m]$. Moreover, the following inequality holds:
(9) $\|f-c^TB\|_{L^2[0,1]}\le\frac{\hat{K}}{(m+1)!\sqrt{2m+3}},$
where $\hat{K}=\max_{x\in[0,1]}|f^{(m+1)}(x)|$.
Proof.
We prove that $c^TB$ is the best approximation of $f$ out of $S$. It can be shown that $S$ is a convex subset of the real inner product space $L^2[0,1]$ (see [21]). Therefore, for any $x\in L^2[0,1]$, $\hat{x}\in S$ is its best approximation in $S$ if and only if
(10) $\langle x-\hat{x},\,z-\hat{x}\rangle\le 0\quad\forall z\in S,$
where the inner product is defined by $\langle f,g\rangle=\int_0^1 f(t)\,g(t)\,dt$. Hence the best approximation of any $x\in L^2[0,1]$ is unique. Moreover, $S\subset L^2[0,1]$ is a convex, closed, finite-dimensional subset of the inner product space $L^2[0,1]$, so for any $x\in L^2[0,1]$ there is a unique element $\hat{x}\in S$ such that $\|x-\hat{x}\|=\inf_{z\in S}\|x-z\|$. Therefore, there exist unique coefficients $c_i$, $i=0,1,\dots,m$, such that
(11) $f\cong s_0=\sum_{i=0}^{m}c_iB_{i,m}=c^TB.$
On the other hand, $\{1,x,\dots,x^m\}$ is a basis for the space of polynomials of degree at most $m$. Therefore we define $y_1(x)=f(0)+xf'(0)+\frac{x^2}{2!}f''(0)+\cdots+\frac{x^m}{m!}f^{(m)}(0)$. Hence, from the Taylor expansion we have
(12) $|f(x)-y_1(x)|=\left|f^{(m+1)}(\xi_x)\,\frac{x^{m+1}}{(m+1)!}\right|,$
where $\xi_x\in(0,1)$. Since $c^TB$ is the best approximation of $f$ out of $S$ and $y_1\in S$, we have
(13) $\|f-c^TB\|^2_{L^2[0,1]}\le\|f-y_1\|^2_{L^2[0,1]}=\int_0^1|f(x)-y_1(x)|^2\,dx=\int_0^1|f^{(m+1)}(\xi_x)|^2\left(\frac{x^{m+1}}{(m+1)!}\right)^2dx\le\frac{\hat{K}^2}{(m+1)!^2}\int_0^1x^{2m+2}\,dx=\frac{\hat{K}^2}{(m+1)!^2\,(2m+3)}.$
Then by taking square roots, the proof is complete.
The previous theorem shows that the error vanishes as m→∞.
Corollary 5.
One can write $c^T\langle B,B\rangle\cong\langle f,B\rangle$. Defining $Q=\langle B,B\rangle$, which is an $(m+1)\times(m+1)$ matrix called the dual matrix of $B$, one obtains
(14) $Q_{i+1,j+1}=\int_0^1 B_{i,m}(x)\,B_{j,m}(x)\,dx=\frac{\binom{m}{i}\binom{m}{j}}{(2m+1)\binom{2m}{i+j}},\quad i,j=0,1,\dots,m.$
Proof.
We know that
(15) $f\cong s_0=\sum_{i=0}^{m}c_iB_{i,m}=c^TB.$
Taking the inner product of both sides with $B$ gives $c^T\langle B,B\rangle\cong\langle f,B\rangle$, and evaluating the integrals $\int_0^1 B_{i,m}(x)B_{j,m}(x)\,dx$ directly yields (14); therefore, the proof is complete.
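The closed form (14) can be cross-checked against direct quadrature. A sketch, under the assumption that a midpoint rule with many nodes is accurate enough (function names are ours):

```python
from math import comb

def q_entry(i, j, m):
    # closed form (14): C(m,i) C(m,j) / ((2m + 1) C(2m, i + j))
    return comb(m, i) * comb(m, j) / ((2 * m + 1) * comb(2 * m, i + j))

def bernstein(i, m, x):
    return comb(m, i) * x**i * (1 - x)**(m - i)

# compare the closed form with a midpoint-rule integral for m=3, i=1, j=2
m, i, j = 3, 1, 2
steps = 200_000
h = 1.0 / steps
numeric = sum(bernstein(i, m, x) * bernstein(j, m, x)
              for k in range(steps) for x in [(k + 0.5) * h]) * h
```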
Corollary 6.
A function f(t)∈L2([0,1]) may be expanded as follows:
(16) $f(t)=\sum_{n=1}^{\infty}\sum_{m=0}^{\infty}p_{n,m}H_{n,m}(t).$
If the infinite series in (16) is truncated, then we have
(17) $f(t)\approx\sum_{n=1}^{N}\sum_{m=0}^{M}p_{n,m}H_{n,m}(t)=P^TH(t),$
where
(18) $H(t)=[H_{1,0}(t),H_{1,1}(t),\dots,H_{1,M}(t),H_{2,0}(t),H_{2,1}(t),\dots,H_{N,M}(t)]^T,$
(19) $P=[p_{1,0},p_{1,1},\dots,p_{1,M},p_{2,0},p_{2,1},\dots,p_{N,M}]^T.$
Therefore we can get
(20) $P^T\langle H(t),H(t)\rangle=\langle f(t),H(t)\rangle.$
Then
(21) $P=D^{-1}\langle f(t),H(t)\rangle,$
where
(22) $D=\langle H(t),H(t)\rangle=\int_0^1H(t)\,H^T(t)\,dt=\begin{bmatrix}D_1&0&\cdots&0\\0&D_2&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&D_N\end{bmatrix},$
where by using (7), Dn(n=1,2,…,N) is defined as follows:
(23) $(D_n)_{i+1,j+1}=\int_{(n-1)/N}^{n/N}B_{i,M}(Nt-n+1)\,B_{j,M}(Nt-n+1)\,dt=\frac{1}{N}\int_0^1B_{i,M}(t)\,B_{j,M}(t)\,dt=\frac{\binom{M}{i}\binom{M}{j}}{N(2M+1)\binom{2M}{i+j}},\quad i,j=0,1,\dots,M.$
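Because all diagonal blocks $D_n$ in (22)–(23) are identical, $D$ can be assembled with a Kronecker product. A minimal NumPy sketch (the function name `build_D` is ours):

```python
import numpy as np
from math import comb

def build_D(N, M):
    # one diagonal block, (D_n)_{i+1,j+1} = C(M,i) C(M,j) / (N (2M+1) C(2M, i+j))
    Dn = np.array([[comb(M, i) * comb(M, j)
                    / (N * (2 * M + 1) * comb(2 * M, i + j))
                    for j in range(M + 1)] for i in range(M + 1)])
    # D = diag(D_1, ..., D_N) with all N blocks equal
    return np.kron(np.eye(N), Dn)

D = build_D(4, 3)   # a 16 x 16 block-diagonal matrix for N=4, M=3
```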
We can also approximate the function k(t,s)∈L2([0,1]×[0,1]) as follows:
(24) $k(t,s)\approx H^T(t)\,K\,H(s),$
where $K$ is an $N(M+1)\times N(M+1)$ matrix given by
(25) $K=D^{-1}\,\langle H(t),\langle k(t,s),H(s)\rangle\rangle\,D^{-1}.$
Theorem 7.
Let the function f:[0,1]→R be M+1 times continuously differentiable; then we have
(26) $\|f-P^TH\|_{L^2[0,1]}\le\frac{\tilde{K}}{N^{M+1}(M+1)!\sqrt{2M+3}},$
where K~=maxt∈[0,1]|f(M+1)(t)|.
Proof.
By using Theorem 4 we get
(27) $\|f-P^TH\|^2_{L^2[0,1]}=\int_0^1|f(t)-P^TH(t)|^2\,dt=\sum_{n=1}^{N}\int_{(n-1)/N}^{n/N}\left|f(t)-\sum_{m=0}^{M}p_{n,m}B_{m,M}(Nt-n+1)\right|^2dt=\frac{1}{N}\sum_{n=1}^{N}\int_0^1\left|f\!\left(\frac{t+n-1}{N}\right)-\sum_{m=0}^{M}p_{n,m}B_{m,M}(t)\right|^2dt\le\frac{1}{N^{2M+3}}\sum_{n=1}^{N}\int_0^1|f^{(M+1)}(\xi_n)|^2\,\frac{t^{2M+2}}{(M+1)!^2}\,dt\le\frac{1}{N^{2M+3}}\sum_{n=1}^{N}\frac{\hat{K}_n^2}{(M+1)!^2(2M+3)}\le\frac{\tilde{K}^2}{N^{2M+2}(M+1)!^2(2M+3)},$
where ξn∈((n-1)/N,n/N) and K^n=maxt∈[(n-1)/N,n/N]|f(M+1)(t)|. Therefore by taking square roots, the proof is complete.
The above theorem shows that the approximation error vanishes as M,N→∞.
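The bound of Theorem 7 can be observed numerically by fitting $f(t)=e^t$ in the hybrid basis by least squares and comparing the discrete $L^2$ error with the right-hand side of (26). A sketch, assuming NumPy and using our own node grid:

```python
import numpy as np
from math import comb, factorial, sqrt, e

def bernstein(m, M, x):
    return comb(M, m) * x**m * (1 - x)**(M - m)

N, M = 4, 3
t = (np.arange(4000) + 0.5) / 4000          # midpoint nodes on [0, 1]
A = np.zeros((t.size, N * (M + 1)))         # hybrid basis at the nodes
for row, tt in enumerate(t):
    n = min(int(tt * N), N - 1)             # active subinterval (0-based)
    for m in range(M + 1):
        A[row, n * (M + 1) + m] = bernstein(m, M, N * tt - n)
p, *_ = np.linalg.lstsq(A, np.exp(t), rcond=None)
err = sqrt(np.mean((A @ p - np.exp(t)) ** 2))   # discrete L2 error
# Theorem 7 with K~ = max |f^{(M+1)}| = e on [0, 1]
bound = e / (N ** (M + 1) * factorial(M + 1) * sqrt(2 * M + 3))
```

The computed `err` falls well below `bound`, as Theorem 7 predicts.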
4. HBBPFs for the Second Kind Integral Equations and Error Analysis
In this section, we are dealing with the following Fredholm equations of the second kind:
(28) $u(t)=\int_0^1 k(t,s)\,u(s)\,ds+f(t),$
where $f\in L^2([0,1])$ and $k\in L^2([0,1]\times[0,1])$ are known functions and $u(t)\in L^2([0,1])$ is the unknown function.
Let us approximate u, f, and k by (18) and (25) as follows:
(29) $u(t)\approx U^TH(t),\quad f(t)\approx F^TH(t),\quad k(t,s)\approx H^T(t)\,K\,H(s).$
By substituting (29) in (28) we obtain
(30) $H^T(t)U=\int_0^1H^T(t)\,K\,H(s)\,H^T(s)\,U\,ds+H^T(t)F=H^T(t)K\left(\int_0^1H(s)H^T(s)\,ds\right)U+H^T(t)F=H^T(t)KDU+H^T(t)F=H^T(t)(KDU+F).$
Therefore we have the following linear system:
(31) $(I-KD)\,U=F,$
and by solving this linear system we obtain the vector $U$.
Theorem 8.
Suppose that $u(t)$ is the exact solution of (28), $u_{N,M}(t)$ is the approximate solution of $u(t)$ obtained by HBBPFs, and $E_{N,M}(t)$ is the perturbation function that depends only on $u_{N,M}(t)$, that is, $u_{N,M}(t)=\int_0^1k(t,s)\,u_{N,M}(s)\,ds+f(t)+E_{N,M}(t)$. Let $R=\max_{0\le t,s\le1}|k(t,s)|<\infty$. Then $E_{N,M}(t)\to0$ as $M,N\to\infty$.
Proof.
Suppose eN,M(t)=u(t)-uN,M(t) is the error function of approximate solution uN,M(t) to the exact solution u(t). Therefore we get
(32) $e_{N,M}(t)=\int_0^1k(t,s)\,e_{N,M}(s)\,ds-E_{N,M}(t).$
By taking absolute values and using the Hölder inequality we get
(33) $|E_{N,M}(t)|\le\int_0^1|k(t,s)|\,|e_{N,M}(s)|\,ds+|e_{N,M}(t)|\le\left(\int_0^1|k(t,s)|^2\,ds\right)^{1/2}\left(\int_0^1|e_{N,M}(s)|^2\,ds\right)^{1/2}+|e_{N,M}(t)|\le R\,\|e_{N,M}\|_{L^2[0,1]}+|e_{N,M}(t)|.$
Now, by taking norm L2([0,1]) we obtain
(34) $\|E_{N,M}\|_{L^2[0,1]}\le(R+1)\,\|e_{N,M}\|_{L^2[0,1]}.$
Finally, from Theorem 7 we can write
(35) $\|E_{N,M}\|_{L^2[0,1]}\le\frac{(R+1)\,\bar{K}}{N^{M+1}(M+1)!\sqrt{2M+3}},$
where $\bar{K}=\max_{t\in[0,1]}|u^{(M+1)}(t)|$.
Therefore, $E_{N,M}(t)\to0$ as $M,N\to\infty$.
5. Numerical Examples
In this section, we discuss the implementation of the new method and investigate its accuracy by applying it to different examples. In the following examples, $u_N(t)$, $u_M(t)$, and $u_{N,M}(t)$ denote the approximate solutions obtained by BPFs, BPs, and HBBPFM, respectively, for the exact solution $u(t)$.
Example 1.
Consider the following integral equation:
(36) $u(t)=\int_0^1(t+s)\,u(s)\,ds+\sin(t)-t+(t+1)\cos(1)-\sin(1).$
We know that the exact solution is $u(t)=\sin(t)$. The results obtained by BPFs, BPs, and HBBPFM are reported in Tables 1 and 2 and plotted in Figures 1 and 2. Comparing them, we observe that HBBPFM is very effective and that its approximate solutions are more accurate than those of the BPF and BP methods.
Table 1: Absolute errors by using BPFs for N=4, BPs for M=3, and HBBPFM for N=4, M=3 in Example 1.

t      BPFs (N=4)    BPs (M=3)       HBBPFM (N=4, M=3)
0      0.159448      0.000252739     2.57612×10^-7
0.1    0.0596148     0.0000539886    5.73616×10^-8
0.2    0.0392211     0.000110834     1.23088×10^-7
0.3    0.118936      0.0000398714    3.42659×10^-7
0.4    0.0250381     0.0000566614    2.06685×10^-7
0.5    0.167325      0.000106028     1.3331×10^-6
0.6    0.0821085     0.0000743689    3.07359×10^-7
0.7    0.00253325    0.0000243121    5.58694×10^-7
0.8    0.125405      0.000119641     7.24512×10^-7
0.9    0.0594347     0.000076931     4.20127×10^-7
Table 2: Absolute errors by using BPFs for N=5, BPs for M=4, and HBBPFM for N=5, M=4 in Example 1.

t      BPFs (N=5)    BPs (M=4)       HBBPFM (N=5, M=4)
0      0.120718      0.0000294061    9.38506×10^-9
0.1    0.0208845     0.0000117512    4.43327×10^-10
0.2    0.124426      4.28074×10^-6   7.87196×10^-9
0.3    0.0275755     7.98903×10^-6   1.13157×10^-9
0.4    0.124293      8.7962×10^-6    6.2511×10^-9
0.5    0.034286      2.23581×10^-7   1.14373×10^-9
0.6    0.120604      8.9113×10^-6    1.22382×10^-8
0.7    0.0410285     7.57027×10^-6   1.62177×10^-9
0.8    0.230475      7.94076×10^-6   4.74483×10^-10
0.9    0.00358729    0.0000229948    9.2012×10^-9
Figure 1: Plot of error functions by using BPFs for N=4 (a), BPs for M=3 (b), and HBBPFM for N=4, M=3 (c) in Example 1.
Figure 2: Plot of error functions by using BPFs for N=5 (a), BPs for M=4 (b), and HBBPFM for N=5, M=4 (c) in Example 1.
Example 2.
Consider the following integral equation:
(37) $u(t)=\int_0^1 ts\,u(s)\,ds+e^t-t,$
with exact solution $u(t)=e^t$. We obtain the computational results by BPFs, BPs, and HBBPFM with $N=4$, $M=3$ and with $N=5$, $M=4$, and compare them. The results are reported in Tables 3 and 4 and plotted in Figures 3 and 4. As in the previous example, we see that HBBPFM is very effective and that its solutions are more accurate than those of the BPF and BP methods.
Table 3: Absolute errors by using BPFs for N=4, BPs for M=3, and HBBPFM for N=4, M=3 in Example 2.

t      BPFs (N=4)    BPs (M=3)       HBBPFM (N=4, M=3)
0      0.134438      0.000939946     2.60043×10^-6
0.1    0.0292675     0.000210236     6.00397×10^-7
0.2    0.0869644     0.000396173     1.08124×10^-6
0.3    0.103935      0.000126329     1.37399×10^-6
0.4    0.0380311     0.000213179     7.99831×10^-7
0.5    0.216077      0.00037144      4.28735×10^-6
0.6    0.0426798     0.000246979     9.89894×10^-7
0.7    0.148954      0.0000965353    1.78268×10^-6
0.8    0.167943      0.000412916     2.26538×10^-6
0.9    0.0661188     0.000254268     1.31873×10^-6
Table 4: Absolute errors by using BPFs for N=5, BPs for M=4, and HBBPFM for N=5, M=4 in Example 2.

t      BPFs (N=5)    BPs (M=4)       HBBPFM (N=5, M=4)
0      0.106159      0.0000526416    1.44355×10^-8
0.1    0.000988576   0.0000210365    1.12472×10^-9
0.2    0.128144      8.80224×10^-6   1.65937×10^-8
0.3    0.000312002   0.0000141662    1.06195×10^-9
0.4    0.155374      0.0000171016    9.87722×10^-9
0.5    0.00152224    7.80052×10^-7   2.99127×10^-9
0.6    0.189012      0.0000166986    4.29645×10^-8
0.7    0.00262215    0.0000156271    7.92177×10^-9
0.8    0.230475      7.94076×10^-6   4.74483×10^-10
0.9    0.00358729    0.0000229948    9.2012×10^-9
Figure 3: Plot of error functions by using BPFs for N=4 (a), BPs for M=3 (b), and HBBPFM for N=4, M=3 (c) in Example 2.
Figure 4: Plot of error functions by using BPFs for N=5 (a), BPs for M=4 (b), and HBBPFM for N=5, M=4 (c) in Example 2.
6. Conclusion
In this paper, HBBPFs are used to solve second kind integral equations; we call this method HBBPFM. The method converts second kind integral equations into systems of linear equations whose solutions are the coefficients of the HBBPF expansion of the solution of the integral equation. Also, using several lemmas and theorems, we have discussed the convergence analysis of the proposed method. Numerical examples show the efficiency and accuracy of the method. Moreover, the solutions obtained by HBBPFM are more accurate than those of the BPF and BP methods.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
[1] E. Babolian and Z. Masouri, "Direct method to solve Volterra integral equation of the first kind using operational matrix with block-pulse functions," 2008, doi:10.1016/j.cam.2007.07.029.
[2] K. Maleknejad and B. Rahimi, "Modification of block pulse functions and their application to solve numerically Volterra integral equation of the first kind," 2011, doi:10.1016/j.cnsns.2010.09.032.
[3] F. Mirzaee and S. Piroozfar, "Numerical solution of the linear two-dimensional Fredholm integral equations of the second kind via two-dimensional triangular orthogonal functions," 2010, doi:10.1016/j.jksus.2010.04.007.
[4] Y. Ordokhani, "Solution of nonlinear Volterra-Fredholm-Hammerstein integral equations via rationalized Haar functions," 2006, doi:10.1016/j.amc.2005.12.034.
[5] H. R. Marzban, H. R. Tabrizidooz, and M. Razzaghi, "A composite collocation method for the nonlinear mixed Volterra-Fredholm-Hammerstein integral equations," 2011, doi:10.1016/j.cnsns.2010.06.013.
[6] M. Tavassoli Kajani and A. Hadi Vencheh, "Solving second kind integral equations with hybrid Chebyshev and block-pulse functions," 2005, doi:10.1016/j.amc.2003.11.044.
[7] X. T. Wang and Y. M. Li, "Numerical solutions of integrodifferential systems by hybrid of general block-pulse functions and the second Chebyshev polynomials," 2009, doi:10.1016/j.amc.2008.12.044.
[8] K. Maleknejad and Y. Mahmoudi, "Numerical solution of linear Fredholm integral equation by using hybrid Taylor and block-pulse functions," 2004, doi:10.1016/S0096-3003(03)00180-2.
[9] B. Asady, M. Tavassoli Kajani, A. Hadi Vencheh, and A. Heydari, "Solving second kind integral equations with hybrid Fourier and block-pulse functions," 2005, doi:10.1016/j.amc.2003.11.038.
[10] G. Prasada Rao, Springer, Berlin, Germany, 1983.
[11] B. M. Mohan and K. B. Datta, 1995.
[12] E. H. Doha, A. H. Bhrawy, and M. A. Saker, "Integrals of Bernstein polynomials: an application for the solution of high even-order differential equations," 2011, doi:10.1016/j.aml.2010.11.013.
[13] B. N. Mandal and S. Bhattacharya, "Numerical solution of some classes of integral equations using Bernstein polynomials," 2007, doi:10.1016/j.amc.2007.02.058.
[14] S. A. Yousefi and M. Behroozifar, "Operational matrices of Bernstein polynomials and their applications," 2010, doi:10.1080/00207720903154783.
[15] M. Alipour and D. Rostamy, "Solving nonlinear fractional differential equations by Bernstein polynomials operational matrices," 2012.
[16] D. Rostamy, M. Alipour, H. Jafari, and D. Baleanu, "Solving multi-term orders fractional differential equations by operational matrices of BPs with convergence analysis," 2013.
[17] D. Baleanu, M. Alipour, and H. Jafari, "The Bernstein operational matrices for solving the fractional quadratic Riccati differential equations with the Riemann-Liouville derivative," 2013, doi:10.1155/2013/461970.
[18] M. Alipour and D. Baleanu, "Approximate analytical solution for nonlinear system of fractional differential equations by BPs operational matrices," 2013, doi:10.1155/2013/954015.
[19] M. Alipour, D. Rostamy, and D. Baleanu, "Solving multi-dimensional fractional optimal control problems with inequality constraint by Bernstein polynomials operational matrices," 2013, doi:10.1177/1077546312458308.
[20] M. Alipour and D. Rostamy, "BPs operational matrices for solving time varying fractional optimal control problems," 2013.
[21] K. Atkinson and W. Han, 2nd edition, Springer, 2000.