This paper is devoted to the stability and convergence analysis of additive Runge-Kutta methods with Lagrangian interpolation (ARKLMs) for the numerical solution of multidelay-integro-differential equations (MDIDEs). GDN-stability and D-convergence are introduced and proved. It is shown that strong algebraic stability gives GDN-stability, and that DA-, DAS-, and ASI-stability give D-convergence. A numerical example is given to illustrate the theoretical results.
1. Introduction
Delay differential equations arise in a variety of fields such as biology, economics, control theory, and electrodynamics (see, e.g., [1–5]). When considering the applicability of numerical methods for the solution of DDEs, it is necessary to analyze the stability of the numerical methods. In the last three decades, many works have dealt with these problems (see, e.g., [6]). For the case of nonlinear delay differential equations, this kind of methodology was first introduced by Torelli [7] and then developed in [8–12].
In this paper, we consider the following nonlinear multidelay-integro-differential equations (MDIDEs) with m delays:
\[
\begin{aligned}
y'(t)&=\sum_{v=1}^{m} f^{[v]}\!\Bigl(t,\;y(t),\;y(t-\tau_v),\;\int_{t-\tau_v}^{t} g^{[v]}(t,s,y(s))\,ds\Bigr),\quad t\in[t_0,T],\\
y(t)&=\varphi(t),\quad t\in[t_0-\tau,\,t_0],
\end{aligned}
\tag{1.1}
\]
where \(\tau_1\le\tau_2\le\cdots\le\tau_m=\tau\), \(f^{[v]}:[t_0,T]\times C^N\times C^N\times C^N\to C^N\), \(g^{[v]}:D\times C^N\to C^N\) with \(D=\{(t,s)\mid t\in[t_0,T],\ t-\tau\le s\le t\}\), \(v=1,2,\dots,m\), and \(\varphi:[t_0-\tau,t_0]\to C^N\) are continuous functions such that (1.1) has a unique solution. Moreover, we assume that there exist an inner product \(\langle\cdot,\cdot\rangle\) and the induced norm \(\|\cdot\|\) such that
\[
\operatorname{Re}\bigl\langle f^{[v]}(t,y_1,u_1,w_1)-f^{[v]}(t,y_2,u_2,w_2),\;y_1-y_2\bigr\rangle\le\alpha_v\|y_1-y_2\|^2+\beta_v\|u_1-u_2\|^2+\sigma_v\|w_1-w_2\|^2,\quad v=1,\dots,m,\ t\ge t_0,\tag{1.2}
\]
\[
\bigl\|f^{[v]}(t,y,u_1,w)-f^{[v]}(t,y,u_2,w)\bigr\|\le r_v\|u_1-u_2\|,\tag{1.3}
\]
\[
\bigl\|g^{[v]}(t,s,w_1)-g^{[v]}(t,s,w_2)\bigr\|\le \tilde r_v\|w_1-w_2\|,\quad (t,s)\in D,\tag{1.4}
\]
for all \(t\in[t_0,T]\) and all \(y,y_1,y_2,u,u_1,u_2,w,w_1,w_2\in C^N\), where \(-\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\) are all nonnegative constants. Throughout this paper, we assume that problem (1.1) has a unique exact solution y(t). Spatial discretization of some time-dependent delay partial differential equations gives rise to delay differential equations of this type, containing additive terms with different stiffness properties. In such situations, additive Runge-Kutta (ARK) methods are used; for recent work on ARK methods, see [13, 14]. For the additive MDIDEs (1.1), similarly to the proof of Theorem 2.1 in [7], it is straightforward to prove that under conditions (1.2)–(1.4) the analytic solutions satisfy
\[
\|y(t)-z(t)\|\le\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|,
\]
where z(t) is the solution of the perturbed problem corresponding to (1.1), with initial function ψ(t).
To demand that the discrete numerical solutions preserve this property of the analytic solutions, Torelli [7] introduced the concepts of RN- and GRN-stability for numerical methods applied to dissipative nonlinear systems of DDEs such as (1.1) with g^{[v]}(t,s,y(s))=0, v=1,2,…,m; these are the straightforward generalization of the well-known concept of BN-stability of numerical methods with respect to dissipative systems of ODEs (see also [9]). More recently, there has been growing interest in the analysis of delay integro-differential equations (DIDEs). This type of equation has been investigated in various fields, such as mathematical biology and control theory (see [15–17]). The theory of computational methods for DIDEs has been studied by many authors, and a great deal of interesting results have been obtained (see [18–22]). Koto [23] dealt with the linear stability of Runge-Kutta (RK) methods for systems of DIDEs; Huang and Vandewalle [24] gave sufficient and necessary stability conditions for exact and discrete solutions of linear scalar DIDEs. However, little attention has been paid to nonlinear multidelay-integro-differential equations (MDIDEs).
Thus, the aim of this paper is to study the stability and convergence properties of ARK methods applied to nonlinear multidelay-integro-differential equations (MDIDEs) with m delays.
2. The GDN-Stability of the Additive Runge-Kutta Methods
An additive Runge-Kutta method with Lagrangian interpolation (ARKLM) of s stages and m levels can be organized in the Butcher tableau
\[
\begin{array}{c|c|c|c|c}
C & A^{[1]} & A^{[2]} & \cdots & A^{[m]}\\\hline
  & b^{[1]T} & b^{[2]T} & \cdots & b^{[m]T}
\end{array}
\;=\;
\begin{array}{c|ccc|c|ccc}
c_1 & a_{11}^{[1]} & \cdots & a_{1s}^{[1]} & \cdots & a_{11}^{[m]} & \cdots & a_{1s}^{[m]}\\
c_2 & a_{21}^{[1]} & \cdots & a_{2s}^{[1]} & \cdots & a_{21}^{[m]} & \cdots & a_{2s}^{[m]}\\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \ddots & \vdots\\
c_s & a_{s1}^{[1]} & \cdots & a_{ss}^{[1]} & \cdots & a_{s1}^{[m]} & \cdots & a_{ss}^{[m]}\\\hline
 & b_1^{[1]} & \cdots & b_s^{[1]} & \cdots & b_1^{[m]} & \cdots & b_s^{[m]}
\end{array}
\tag{2.1}
\]
where \(C=[c_1,c_2,\dots,c_s]^T\), \(b^{[v]}=[b_1^{[v]},b_2^{[v]},\dots,b_s^{[v]}]^T\), and \(A^{[v]}=(a_{ij}^{[v]})_{i,j=1}^{s}\).
Applying method (2.1) to problem (1.1) leads to
\[
y_{n+1}=y_n+h\sum_{v=1}^{m}\sum_{j=1}^{s} b_j^{[v]}\, f^{[v]}\bigl(t_j^{(n)},\,y_j^{(n)},\,\tilde y_j^{[v](n)},\,w_j^{[v](n)}\bigr),\tag{2.2}
\]
\[
y_i^{(n)}=y_n+h\sum_{v=1}^{m}\sum_{j=1}^{s} a_{ij}^{[v]}\, f^{[v]}\bigl(t_j^{(n)},\,y_j^{(n)},\,\tilde y_j^{[v](n)},\,w_j^{[v](n)}\bigr),\tag{2.3}
\]
where \(t_n=t_0+nh\), \(t_j^{(n)}=t_n+c_jh\); \(y_n\), \(y_j^{(n)}\), and \(\tilde y_j^{[v](n)}\) are approximations to the analytic solution values \(y(t_n)\), \(y(t_n+c_jh)\), and \(y(t_n+c_jh-\tau_v)\) of (1.1), respectively. The argument \(\tilde y_j^{[v](n)}\) is determined by
\[
\tilde y_j^{[v](n)}=\begin{cases}\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\[2pt]
\displaystyle\sum_{P_v=-d}^{r} L_{P_v}(\delta_v)\, y_j^{(n-m_v+P_v)}, & t_n+c_jh-\tau_v> t_0,\end{cases}\tag{2.5}
\]
with \(\tau_v=(m_v-\delta_v)h\), \(\delta_v\in[0,1)\), integer \(m_v\ge r+1\), \(r,d\ge0\), and
\[
L_{P_v}(\delta_v)=\prod_{\substack{k=-d\\ k\neq P_v}}^{r}\frac{\delta_v-k}{P_v-k},\qquad P_v=-d,-d+1,\dots,r.
\]
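As a concrete illustration (not taken from the paper), the following sketch evaluates the Lagrange coefficients \(L_{P_v}(\delta_v)\) numerically and checks the standard partition-of-unity property of a Lagrange basis, namely that the coefficients sum to 1; the values d=1, r=2, δ=0.3 are arbitrary illustrative choices:

```python
from functools import reduce

def lagrange_coeff(P: int, delta: float, d: int, r: int) -> float:
    """L_P(delta) = prod over k = -d..r, k != P, of (delta - k) / (P - k)."""
    ks = [k for k in range(-d, r + 1) if k != P]
    return reduce(lambda acc, k: acc * (delta - k) / (P - k), ks, 1.0)

d, r, delta = 1, 2, 0.3  # arbitrary illustrative parameters
coeffs = [lagrange_coeff(P, delta, d, r) for P in range(-d, r + 1)]

# A Lagrange basis reproduces constants exactly, so the coefficients sum to 1,
# and it reproduces the identity, so sum(P * L_P(delta)) = delta.
assert abs(sum(coeffs) - 1.0) < 1e-12
assert abs(sum(P * c for P, c in zip(range(-d, r + 1), coeffs)) - delta) < 1e-12
```

The interpolant in (2.5) is exactly this basis applied to past stage values on the uniform grid shifted by the delay.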
The assumption \(m_v\ge r+1\) guarantees that no (unknown) values \(y_j^{(i)}\) with \(i\ge n\) are used in the interpolation procedure. The quantity \(w_j^{[v](n)}\) is an approximation to
\[
w\bigl(t_j^{(n)}\bigr):=\int_{t_j^{(n-m_v)}}^{t_j^{(n)}} g^{[v]}\bigl(t_j^{(n)},s,y(s)\bigr)\,ds,
\]
which can be computed by an appropriate compound quadrature rule:
\[
w_j^{[v](n)}=h\sum_{q=0}^{m_v} d_q\, g^{[v]}\bigl(t_j^{(n)},\,t_j^{(n-q)},\,y_j^{(n-q)}\bigr),\quad v=1,\dots,m,\ j=1,\dots,s.\tag{2.6}
\]
As the quadrature rule (2.6), we usually adopt the compound trapezoidal rule, the compound Simpson's rule, the compound Newton-Cotes rule, and so forth, according to the required convergence order of the method (see [19]). We denote \(M=\max_{1\le v\le m} m_v\) and \(\eta=\max_{1\le v\le m}\eta_v\), where \(\eta_v\) satisfies \(\sum_{q=0}^{m_v}|d_q|<\eta_v\), \(v=1,2,\dots,m\).
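For instance, with the compound trapezoidal rule the weights are \(d_0=d_{m_v}=1/2\) and \(d_q=1\) otherwise. A minimal sketch (with an illustrative integrand, not the problem of this paper) approximating \(\int_{t-\tau}^{t} g(t,s)\,ds\) on a uniform grid:

```python
import math

def compound_trapezoid(g, t: float, tau: float, mv: int) -> float:
    """h * sum_q d_q * g(t, t - q*h) with d_0 = d_mv = 1/2, d_q = 1 otherwise,
    approximating the integral of g(t, s) over [t - tau, t]."""
    h = tau / mv
    weights = [0.5 if q in (0, mv) else 1.0 for q in range(mv + 1)]
    return h * sum(w * g(t, t - q * h) for q, w in zip(range(mv + 1), weights))

# Illustrative check: for g(t, s) = exp(-s), the integral over [t-1, t]
# is exp(-(t-1)) - exp(-t).
t, tau = 2.0, 1.0
approx = compound_trapezoid(lambda t_, s: math.exp(-s), t, tau, 200)
exact = math.exp(-(t - tau)) - math.exp(-t)
assert abs(approx - exact) < 1e-5
```

In the scheme itself the integrand is sampled at past stage approximations \(y_j^{(n-q)}\) rather than at exact solution values.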
In addition, we always put \(y_j^{(n)}=\varphi(t_n+c_jh)\) and \(y_n=\varphi(t_n)\) whenever \(n\le0\).
In order to write (2.2), (2.3), (2.5), and (2.6) in a more compact way, we introduce some notation. The N×N identity matrix is denoted by \(I_N\), \(e=(1,1,\dots,1)^T\in R^{s}\), and \(\tilde G=G\otimes I_N\) is the Kronecker product of a matrix G with \(I_N\). For \(u=(u_1^T,\dots,u_s^T)^T\), \(v=(v_1^T,\dots,v_s^T)^T\in C^{Ns}\), we define the inner product and the induced norm on \(C^{Ns}\) by
\[
\langle u,v\rangle=\sum_{i=1}^{s}\langle u_i,v_i\rangle,\qquad \|u\|=\Bigl(\sum_{i=1}^{s}\|u_i\|^2\Bigr)^{1/2}.
\]
Moreover, we also write
\[
y^{(n)}=\begin{bmatrix}y_1^{(n)}\\ \vdots\\ y_s^{(n)}\end{bmatrix},\quad
\tilde y^{[v](n)}=\begin{bmatrix}\tilde y_1^{[v](n)}\\ \vdots\\ \tilde y_s^{[v](n)}\end{bmatrix},\quad
w^{[v](n)}=\begin{bmatrix}w_1^{[v](n)}\\ \vdots\\ w_s^{[v](n)}\end{bmatrix},\quad
T^{(n)}=\begin{bmatrix}t_1^{(n)}\\ \vdots\\ t_s^{(n)}\end{bmatrix},
\]
\[
f^{[v]}\bigl(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr)=\begin{bmatrix}
f^{[v]}\bigl(t_1^{(n)},y_1^{(n)},\tilde y_1^{[v](n)},w_1^{[v](n)}\bigr)\\ \vdots\\
f^{[v]}\bigl(t_s^{(n)},y_s^{(n)},\tilde y_s^{[v](n)},w_s^{[v](n)}\bigr)\end{bmatrix}.
\]
With the above notation, the method (2.2), (2.3), (2.5), and (2.6) can be written as
\[
\begin{aligned}
y_{n+1}&=y_n+h\sum_{v=1}^{m}\tilde b^{[v]T} f^{[v]}\bigl(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr),\\
y^{(n)}&=\tilde e\,y_n+h\sum_{v=1}^{m}\tilde A^{[v]} f^{[v]}\bigl(T^{(n)},y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr),\\
\tilde y_j^{[v](n)}&=\begin{cases}\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\
\displaystyle\sum_{P_v=-d}^{r} L_{P_v}(\delta_v)\, y_j^{(n-m_v+P_v)}, & t_n+c_jh-\tau_v> t_0,\end{cases}\\
w_j^{[v](n)}&=h\sum_{q=0}^{m_v} d_q\, g^{[v]}\bigl(t_n+c_jh,\; t_{n-q}+c_jh,\; y_j^{(n-q)}\bigr).
\end{aligned}
\tag{2.9}
\]
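The Kronecker-product notation \(\tilde G=G\otimes I_N\) simply applies the scalar tableau coefficients blockwise to the stacked vector of N-dimensional stages. A small numpy sketch of this lifting (shapes chosen arbitrarily for illustration):

```python
import numpy as np

s, N = 3, 2  # illustrative stage count and system dimension
G = np.arange(1.0, s * s + 1).reshape(s, s)  # any s x s coefficient matrix
u = np.arange(1.0, s * N + 1)                # stacked stage vector in R^{sN}

# G_tilde acts on the stacked vector exactly as G acts on each N-block:
# the i-th block of (G kron I_N) u equals sum_j G[i, j] * u_j.
G_tilde = np.kron(G, np.eye(N))
lifted = G_tilde @ u
blockwise = (G @ u.reshape(s, N)).reshape(-1)
assert np.allclose(lifted, blockwise)
```

This is why stability properties of the scalar tableau carry over verbatim to systems of any dimension N.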
In 1997, Zhang and Zhou [25] introduced the extension of RN-stability to GDN-stability as follows.
Definition 2.1.
An ARKLM (2.1) for MDIDEs is called GDN-stable if the numerical approximations \(y_n\) and \(z_n\) to the solutions of (1.1) and its perturbed problem, respectively, satisfy
\[
\|y_n-z_n\|\le C\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|,\quad n\ge0,
\]
where the constant C>0 depends only on the method, the parameters \(\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\), and the interval length \(T-t_0\), and \(\psi(t)\) is the initial function of the perturbed problem of (1.1).
Definition 2.2.
An ARKLM (2.1) is called strongly algebraically stable if the matrices \(M_{\gamma\mu}\) are nonnegative definite for \(\mu,\gamma=1,2,\dots,m\), where
\[
M_{\gamma\mu}=B^{[\gamma]}A^{[\mu]}+A^{[\gamma]T}B^{[\mu]}-b^{[\gamma]}b^{[\mu]T},\qquad
B^{[\gamma]}=\operatorname{diag}\bigl(b_1^{[\gamma]},b_2^{[\gamma]},\dots,b_s^{[\gamma]}\bigr).
\]
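For a single-level method (m=1) this reduces to the classical algebraic stability matrix \(M=BA+A^TB-bb^T\). A small sketch checking nonnegative definiteness for the implicit midpoint rule, used here purely as an illustration (it is not the additive method of the paper):

```python
import numpy as np

# Implicit midpoint rule: one stage, A = [1/2], b = [1].
A = np.array([[0.5]])
b = np.array([1.0])

B = np.diag(b)
M = B @ A + A.T @ B - np.outer(b, b)  # algebraic stability matrix

# M is nonnegative definite iff all eigenvalues are >= 0; here M = [[0]].
eigvals = np.linalg.eigvalsh(M)
assert np.all(eigvals >= -1e-12)
```

For an additive method the same eigenvalue test would be applied to every cross matrix \(M_{\gamma\mu}\).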
Let
\[
\bigl\{y_n,\;y_j^{(n)},\;\tilde y_j^{[1](n)},\dots,\tilde y_j^{[m](n)},\;w_j^{[1](n)},\dots,w_j^{[m](n)}\bigr\}_{j=1}^{s},\qquad
\bigl\{z_n,\;z_j^{(n)},\;\tilde z_j^{[1](n)},\dots,\tilde z_j^{[m](n)},\;\hat w_j^{[1](n)},\dots,\hat w_j^{[m](n)}\bigr\}_{j=1}^{s}
\]
be two sequences of approximations to problem (1.1) and its perturbed problem, respectively, obtained from method (2.1) with the same step size h, and write
\[
\begin{aligned}
U_i^{(n)}&=y_i^{(n)}-z_i^{(n)},\qquad \tilde U_i^{[v](n)}=\tilde y_i^{[v](n)}-\tilde z_i^{[v](n)},\qquad U_0^{(n)}=y_n-z_n,\\
Q_i^{[v](n)}&=h\bigl[f^{[v]}\bigl(t_i^{(n)},y_i^{(n)},\tilde y_i^{[v](n)},w_i^{[v](n)}\bigr)-f^{[v]}\bigl(t_i^{(n)},z_i^{(n)},\tilde z_i^{[v](n)},\hat w_i^{[v](n)}\bigr)\bigr],
\end{aligned}
\]
for \(i=1,\dots,s\), \(v=1,\dots,m\).
Then (2.2) and (2.3) read
\[
U_0^{(n+1)}=U_0^{(n)}+\sum_{v=1}^{m}\sum_{j=1}^{s} b_j^{[v]}Q_j^{[v](n)},\tag{2.14}
\]
\[
U_i^{(n)}=U_0^{(n)}+\sum_{v=1}^{m}\sum_{j=1}^{s} a_{ij}^{[v]}Q_j^{[v](n)}.\tag{2.15}
\]
Our main results about GDN-stability are contained in the following theorem.
Theorem 2.3.
Assume the ARK method (2.1) is strongly algebraically stable. Then the corresponding ARKLM is GDN-stable and satisfies
\[
\|y_n-z_n\|^2\le C\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\quad n\ge0,
\]
where
\[
C=\exp\Bigl[6(T-t_0)ms\sum_{v=1}^{m}\beta_vL_v^2(m_v+d+1)\Bigr],\qquad
L_v=\max_{-d\le P_v\le r}\sup_{\delta_v\in[0,1)}\bigl|L_{P_v}(\delta_v)\bigr|.
\]
Proof.
From (2.14) and (2.15) we get
\[
\begin{aligned}
\|U_0^{(n+1)}\|^2&=\Bigl\langle U_0^{(n)}+\sum_{v=1}^m\sum_{i=1}^s b_i^{[v]}Q_i^{[v](n)},\;U_0^{(n)}+\sum_{v=1}^m\sum_{i=1}^s b_i^{[v]}Q_i^{[v](n)}\Bigr\rangle\\
&=\|U_0^{(n)}\|^2+2\sum_{v=1}^m\sum_{i=1}^s b_i^{[v]}\operatorname{Re}\bigl\langle Q_i^{[v](n)},U_0^{(n)}\bigr\rangle+\sum_{u,v=1}^m\sum_{i,j=1}^s b_i^{[u]}b_j^{[v]}\bigl\langle Q_i^{[u](n)},Q_j^{[v](n)}\bigr\rangle\\
&=\|U_0^{(n)}\|^2+2\sum_{v=1}^m\sum_{i=1}^s b_i^{[v]}\operatorname{Re}\Bigl\langle Q_i^{[v](n)},\;U_i^{(n)}-\sum_{u=1}^m\sum_{j=1}^s a_{ij}^{[u]}Q_j^{[u](n)}\Bigr\rangle+\sum_{u,v=1}^m\sum_{i,j=1}^s b_i^{[u]}b_j^{[v]}\bigl\langle Q_i^{[u](n)},Q_j^{[v](n)}\bigr\rangle\\
&=\|U_0^{(n)}\|^2+2\sum_{v=1}^m\sum_{i=1}^s b_i^{[v]}\operatorname{Re}\bigl\langle Q_i^{[v](n)},U_i^{(n)}\bigr\rangle-\sum_{u,v=1}^m\sum_{i,j=1}^s\bigl(b_i^{[v]}a_{ij}^{[u]}+b_j^{[u]}a_{ji}^{[v]}-b_i^{[v]}b_j^{[u]}\bigr)\bigl\langle Q_i^{[v](n)},Q_j^{[u](n)}\bigr\rangle.
\end{aligned}
\]
If the matrices \(M_{\gamma\mu}\) are nonnegative definite, then
\[
\|U_0^{(n+1)}\|^2\le\|U_0^{(n)}\|^2+2\sum_{v=1}^m\sum_{j=1}^s b_j^{[v]}\operatorname{Re}\bigl\langle Q_j^{[v](n)},U_j^{(n)}\bigr\rangle.\tag{2.19}
\]
Furthermore, by conditions (1.2)–(1.4) and the Cauchy-Schwarz inequality, we have
\[
\begin{aligned}
\operatorname{Re}\bigl\langle Q_j^{[v](n)},U_j^{(n)}\bigr\rangle
&=h\operatorname{Re}\bigl\langle f^{[v]}\bigl(t_j^{(n)},y_j^{(n)},\tilde y_j^{[v](n)},w_j^{[v](n)}\bigr)-f^{[v]}\bigl(t_j^{(n)},z_j^{(n)},\tilde z_j^{[v](n)},\hat w_j^{[v](n)}\bigr),\,U_j^{(n)}\bigr\rangle\\
&\le h\alpha_v\|U_j^{(n)}\|^2+h\beta_v\|\tilde U_j^{[v](n)}\|^2+h\sigma_v\|w_j^{[v](n)}-\hat w_j^{[v](n)}\|^2\\
&\le h\alpha_v\|U_j^{(n)}\|^2+h\beta_v\|\tilde U_j^{[v](n)}\|^2+2h^3\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2\\
&\le\begin{cases}
h\alpha_v\|U_j^{(n)}\|^2+2h^3\tilde r_v^2\eta_v^2\sigma_v\displaystyle\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2, & t_n+c_jh-\tau_v\le t_0,\hfill(2.22)\\[8pt]
h\alpha_v\|U_j^{(n)}\|^2+2h\beta_vL_v^2\Bigl\|\displaystyle\sum_{P_v=-d}^{r}U_j^{(n-m_v+P_v)}\Bigr\|^2+2h^3\tilde r_v^2\eta_v^2\sigma_v\displaystyle\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2, & t_n+c_jh-\tau_v> t_0.\hfill(2.23)
\end{cases}
\end{aligned}
\]
For (2.23), we have
\[
(2.23)\le 2h\beta_vL_v^2\Bigl\|\sum_{P_v=-d}^{r}U_j^{(n-m_v+P_v)}\Bigr\|^2+2h^3\tilde r_v^2\eta_v^2\sigma_v\sum_{q=0}^{m_v}\|U_j^{(n-q)}\|^2
\le 3h\beta_vL_v^2\Bigl\|\sum_{P_v=-d}^{m_v}U_j^{(n-m_v+P_v)}\Bigr\|^2.\tag{2.24}
\]
In the same way, we also get
\[
(2.22)\le 3h\beta_vL_v^2\Bigl\|\sum_{P_v=-d}^{m_v}U_j^{(n-m_v+P_v)}\Bigr\|^2.\tag{2.25}
\]
Substituting (2.24) and (2.25) into (2.19) yields
\[
\begin{aligned}
\|U_0^{(n+1)}\|^2&\le\|U_0^{(n)}\|^2+2h\sum_{v=1}^m\sum_{j=1}^s 3\beta_vL_v^2 b_j^{[v]}\Bigl\|\sum_{P_v=-d}^{m_v}U_j^{(n-m_v+P_v)}\Bigr\|^2\\
&\le\|U_0^{(n)}\|^2+6h\sum_{v=1}^m\sum_{j=1}^s \beta_vL_v^2 b_j^{[v]}(m_v+d+1)\max_{-d\le P_v\le m_v}\|U_j^{(n-m_v+P_v)}\|^2\\
&\le\|U_0^{(n)}\|^2+6hms\sum_{v=1}^m \beta_vL_v^2(m_v+d+1)\max_{(j,P_v)\in E_v}\|U_j^{(n-m_v+P_v)}\|^2\\
&\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]\max\Bigl\{\|U_0^{(n)}\|^2,\ \max_{(j,P_v)\in E_v}\|U_j^{(n-m_v+P_v)}\|^2\Bigr\},
\end{aligned}
\tag{2.27}
\]
where \(E_v=\{(j,P_v)\mid 1\le j\le s,\ -d\le P_v\le r\}\).
Similarly to (2.27), the inequalities
\[
\|U_i^{(n)}\|^2\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]\max\Bigl\{\|U_0^{(n)}\|^2,\ \max_{(j,P_v)\in E_v}\|U_j^{(n-m_v+P_v)}\|^2\Bigr\}\tag{2.28}
\]
hold for i=1,2,…,s.
In the following, with the help of inequalities (2.27) and (2.28) and induction, we prove that
\[
\|U_i^{(n)}\|^2\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2\tag{2.29}
\]
for n≥0 and i=1,2,…,s.
In fact, it is clear from (2.27), (2.28), and \(m_v\ge r+1\) that
\[
\|U_i^{(0)}\|^2\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\quad i=0,1,\dots,s.
\]
Suppose that, for n≤k (k≥0),
\[
\|U_i^{(n)}\|^2\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\quad i=0,1,\dots,s.
\]
Then from (2.27), (2.28), \(m_v\ge r+1\), and \(1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)>1\), we conclude that
\[
\|U_i^{(k+1)}\|^2\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]^{k+2}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2,\quad i=0,1,\dots,s.
\]
This completes the proof of inequalities (2.29). In view of (2.29), for n≥0 we get
\[
\begin{aligned}
\|U_0^{(n)}\|^2&\le\Bigl[1+6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]^{n+1}\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2\\
&\le\exp\Bigl[(n+1)\,6hms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2\\
&\le\exp\Bigl[6(T-t_0)ms\sum_{v=1}^m\beta_vL_v^2(m_v+d+1)\Bigr]\max_{t_0-\tau\le t\le t_0}\|\varphi(t)-\psi(t)\|^2.
\end{aligned}
\]
As a result, method (2.1) is GDN-stable.
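The last two steps rest on the elementary bound \((1+x)^{n+1}\le e^{(n+1)x}\) for \(x\ge0\), applied with \(x=6hms\sum_v\beta_vL_v^2(m_v+d+1)\) and \((n+1)h\le T-t_0\). A quick numerical sanity check of this inequality (sample values only, not taken from the paper):

```python
import math

def power_bound(x: float, n: int) -> float:
    """Left-hand side (1 + x)^(n+1) of the discrete Gronwall step."""
    return (1.0 + x) ** (n + 1)

def exp_bound(x: float, n: int) -> float:
    """Right-hand side exp((n+1) * x)."""
    return math.exp((n + 1) * x)

# x plays the role of the per-step amplification increment; the inequality
# holds for every x >= 0 and every n >= 0.
for x in [0.0, 1e-3, 0.05, 0.5]:
    for n in [0, 10, 1000]:
        assert power_bound(x, n) <= exp_bound(x, n) + 1e-12
```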
3. D-Convergence
In order to study the convergence of numerical methods for MDIDEs, we first recall the concept of convergence for stiff ODEs.
In 1981, Frank et al. [26] introduced the important concept of B-convergence for numerical methods applied to nonlinear stiff initial value problems of ordinary differential equations. Since then, there have been rapid developments in the study of B-convergence, and a significant number of important results have been obtained for Runge-Kutta methods. In fact, a B-convergence result is nothing but a realistic global error estimate based on a one-sided Lipschitz constant [27]. In this section, we discuss the convergence of the ARKLM (2.1) for MDIDEs (1.1) under conditions (1.2)–(1.4). The approach to the derivation of these estimates is similar to that used in [25]. We assume the analytic solution y(t) of (1.1) is smooth enough, and that its derivatives used later are bounded by
\[
\|D^{(i)}y(t)\|\le \tilde M_i,\quad t\in[t_0-\tau,T],\tag{3.2}
\]
where
\[
D^{(i)}y(t)=\begin{cases}y^{(i)}(t), & t\in\bigl(t_0+(j-1)h,\,t_0+jh\bigr),\\ y^{(i)}(t_0+jh-0), & t=t_0+jh.\end{cases}
\]
We introduce the notation
\[
Y^{(n)}=\begin{bmatrix}y(t_n+c_1h)\\ \vdots\\ y(t_n+c_sh)\end{bmatrix},\quad
\tilde Y^{[v](n)}=\begin{bmatrix}y(t_n+c_1h-\tau_v)\\ \vdots\\ y(t_n+c_sh-\tau_v)\end{bmatrix},\quad
\tilde w^{[v](n)}=\begin{bmatrix}w(t_n+c_1h-\tau_v)\\ \vdots\\ w(t_n+c_sh-\tau_v)\end{bmatrix}.
\]
With the above notations, the local errors in (2.9) can be defined asy(tn+1)=y(tn)+h∑v=1mb̃[v]Tf[v](T(n),Y(n),Ỹ[v](n),w̃[v](n))+Qn,Y(n)=ẽy(tn)+h∑v=1mÃ[v]f[v](T(n),Y(n),Ỹ[v](n),w̃[v](n))+rn,Ỹ[v](n)=(Ỹ1[v](n),Ỹ2[v](n),…,Ỹs[v](n))T,
withỸj[v](n)={φ(tn+cjh-τv),tn+cjh-τv≤t0,∑Pv=-drLPv(δv)yj(n-mv+Pv)+ρj[v](n),tn+cjh-τv>t0,wj[v](n)=h∑q=0mvdqg[v](tn+cjh,tn-q+cjh,yj(n-q))+Rj[v](n).
Taking \(\breve y_n=y(t_n)\), \(\breve y^{(n)}=Y^{(n)}\), \(\breve{\tilde y}^{[v](n)}=\tilde Y^{[v](n)}\), and \(\breve{\tilde w}^{[v](n)}=\tilde w^{[v](n)}\), we obtain the perturbed scheme of (2.9):
\[
\begin{aligned}
\breve y_{n+1}&=\breve y_n+h\sum_{v=1}^m\tilde b^{[v]T}f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve{\tilde w}^{[v](n)}\bigr)+Q_n,\\
\breve y^{(n)}&=\tilde e\,\breve y_n+h\sum_{v=1}^m\tilde A^{[v]}f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},\breve{\tilde w}^{[v](n)}\bigr)+r_n,\\
\breve{\tilde y}_j^{[v](n)}&=\begin{cases}\varphi(t_n+c_jh-\tau_v), & t_n+c_jh-\tau_v\le t_0,\\
\displaystyle\sum_{P_v=-d}^{r}L_{P_v}(\delta_v)\,\breve y_j^{(n-m_v+P_v)}+\rho_j^{[v](n)}, & t_n+c_jh-\tau_v>t_0,\end{cases}\\
\breve{\tilde w}_j^{[v](n)}&=h\sum_{q=0}^{m_v}d_q\,g^{[v]}\bigl(t_n+c_jh,\;t_{n-q}+c_jh,\;\breve y_j^{(n-q)}\bigr)+R_j^{[v](n)}.
\end{aligned}
\]
Here the perturbations satisfy \(Q_n\in C^N\), \(r_n=(r_1^{(n)T},\dots,r_s^{(n)T})^T\), \(R^{[v](n)}=(R_1^{[v](n)},\dots,R_s^{[v](n)})^T\), and \(\rho^{(n)}=(\rho_1^{(n)T},\dots,\rho_s^{(n)T})^T\in C^{Ns}\). According to the Taylor formula and the formulas in [28, pages 205–212], \(Q_n\), \(r_n\), and \(\rho_n\) can be determined, respectively, as follows:
\[
\begin{aligned}
Q_n&=\sum_{l=1}^{p}\frac{h^l}{(l-1)!}\Bigl(\frac1l-\sum_{v=1}^m\sum_{j=1}^s b_j^{[v]}c_j^{l-1}\Bigr)D^{(l)}y(t_n)+R_0^{(n)},\tag{3.9}\\
r_i^{(n)}&=\sum_{l=1}^{p}\frac{h^l}{(l-1)!}\Bigl(\frac{c_i^l}{l}-\sum_{v=1}^m\sum_{j=1}^s a_{ij}^{[v]}c_j^{l-1}\Bigr)D^{(l)}y(t_n)+R_i^{(n)},\tag{3.10}\\
\rho_i^{(n)}&=\frac{h^{q+1}}{(q+1)!}\sum_{v=1}^m\prod_{P_v=-d}^{r}(\delta_v-P_v)\,D^{(q+1)}y(\xi_i^{(n)}),\quad \xi_i^{(n)}\in\bigl(t_{n-m_v-d}+c_ih,\;t_{n-m_v+r}+c_ih\bigr),\tag{3.11}
\end{aligned}
\]
where \(q=d+r\), and \(R_i^{(n)}\), \(\xi_i^{(n)}\) satisfy \(\|R_i^{(n)}\|\le\hat M_ih^{i+1}\), \(i=0,1,\dots,s\), \(h\in(0,h_0]\); here \(h_0\) depends only on the method, and \(\hat M_i\) (\(i=0,1,\dots,s\)) depends only on the method and some \(\tilde M_i\) in (3.2).
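The bracketed factors in \(Q_n\) and \(r_i^{(n)}\) vanish up to the stage order exactly when the quadrature-type order conditions \(\sum_v\sum_j b_j^{[v]}c_j^{l-1}=1/l\) hold. As an illustration (a classical RK tableau, not the additive method of this paper), a sketch checking these conditions for the two-stage Heun method with b=(1/2, 1/2), c=(0, 1):

```python
from fractions import Fraction

# Heun's method (explicit trapezoidal rule) -- illustrative only.
b = [Fraction(1, 2), Fraction(1, 2)]
c = [Fraction(0), Fraction(1)]

def order_condition_residual(l: int) -> Fraction:
    """Residual 1/l - sum_j b_j * c_j^(l-1); zero iff the l-th
    quadrature order condition is satisfied."""
    return Fraction(1, l) - sum(bj * cj ** (l - 1) for bj, cj in zip(b, c))

residuals = [order_condition_residual(l) for l in (1, 2, 3)]
# Conditions hold for l = 1, 2 but fail for l = 3: the method has order 2.
assert residuals[0] == 0 and residuals[1] == 0 and residuals[2] != 0
```

Exact rational arithmetic via `Fraction` avoids spurious floating-point residuals in such checks.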
Combining (2.2), (2.3), (2.5), and (2.6) with (3.9)–(3.12) yields the following recursion scheme for \(\varepsilon_0^{(n+1)}=\breve y_{n+1}-y_{n+1}\):
\[
\begin{aligned}
\varepsilon_0^{(n+1)}&=\varepsilon_0^{(n)}+h\sum_{v=1}^m\tilde b^{[v]T}\Bigl\{f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},w^{[v](n)}\bigr)-f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr)\\
&\qquad\qquad+g_n^{[v]}\varepsilon_n+H^{[v](n)}\bigl(\breve{\tilde w}^{[v](n)}-w^{[v](n)}\bigr)\Bigr\}+Q_n,
\end{aligned}\tag{3.16}
\]
\[
\begin{aligned}
\varepsilon_n&=\tilde e\,\varepsilon_0^{(n)}+h\sum_{v=1}^m\tilde A^{[v]}\Bigl\{f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},w^{[v](n)}\bigr)-f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr)\\
&\qquad\qquad+g_n^{[v]}\varepsilon_n+H^{[v](n)}\bigl(\breve{\tilde w}^{[v](n)}-w^{[v](n)}\bigr)\Bigr\}+r_n,
\end{aligned}\tag{3.17}
\]
where \(\varepsilon_0^{(n+1)}=\breve y_{n+1}-y_{n+1}\), \(\varepsilon_n=(\varepsilon_1^{(n)T},\dots,\varepsilon_s^{(n)T})^T=\breve y^{(n)}-y^{(n)}\), and
\[
\begin{aligned}
g_i^{[v](n)}&=\int_0^1 f_2^{[v]}\bigl(t_n+c_ih,\;y_i^{(n)}+\theta(\breve y_i^{(n)}-y_i^{(n)}),\;\breve{\tilde y}_i^{[v](n)},\;\breve{\tilde w}_i^{[v](n)}\bigr)\,d\theta,\quad i=1,\dots,s,\\
H_i^{[v](n)}&=\int_0^1 f_4^{[v]}\bigl(t_n+c_ih,\;y_i^{(n)},\;\breve{\tilde y}_i^{[v](n)},\;w_i^{[v](n)}+\theta(\breve{\tilde w}_i^{[v](n)}-w_i^{[v](n)})\bigr)\,d\theta;
\end{aligned}
\]
here \(f_i(x_1,x_2,x_3,x_4)\) denotes the Jacobian matrix \(\partial f(x_1,x_2,x_3,x_4)/\partial x_i\), \(i=1,2,3,4\). Moreover,
\[
\begin{aligned}
H^{[v](n)}\bigl(\breve{\tilde w}^{[v](n)}-w^{[v](n)}\bigr)&=hH^{[v](n)}\sum_{q=1}^{m_v}d_q\bigl[g^{[v]}(t_n,t_{n-q},\breve y^{(n-q)})-g^{[v]}(t_n,t_{n-q},y^{(n-q)})\bigr]+hH^{[v](n)}R^{[v](n)}\\
&\quad+hH^{[v](n)}d_0\int_0^1 g_3^{[v]}\bigl(t_n,t_n,y^{(n)}+\theta(\breve y^{(n)}-y^{(n)})\bigr)\,d\theta\cdot\varepsilon_n.
\end{aligned}\tag{3.18}
\]
Assume that the matrix \(\tilde I_s-h\sum_{v=1}^m\tilde A^{[v]}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\) is regular. From (3.16)–(3.18) we can get
\[
\begin{aligned}
\varepsilon_0^{(n+1)}&=\Bigl\{I_N+h\sum_{v=1}^m\tilde b^{[v]T}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\Bigl[\tilde I_s-h\sum_{v=1}^m\tilde A^{[v]}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\Bigr]^{-1}\tilde e\Bigr\}\varepsilon_0^{(n)}\\
&\quad+h\sum_{v=1}^m\tilde b^{[v]T}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\Bigl[\tilde I_s-h\sum_{v=1}^m\tilde A^{[v]}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\Bigr]^{-1}\Bigl(r_n+h^2\sum_{v=1}^m\tilde A^{[v]}H^{[v](n)}R^{[v](n)}\Bigr)\\
&\quad+h\sum_{v=1}^m\tilde b^{[v]T}\Bigl[\tilde I_s-h\sum_{v=1}^m\tilde A^{[v]}\bigl(g^{[v](n)}+hH^{[v](n)}d_0g_3^{[v](n)}\bigr)\Bigr]^{-1}\\
&\qquad\times\Bigl\{f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\breve{\tilde y}^{[v](n)},w^{[v](n)}\bigr)-f^{[v]}\bigl(T^{(n)},\breve y^{(n)},\tilde y^{[v](n)},w^{[v](n)}\bigr)\\
&\qquad\qquad+hH^{[v](n)}\sum_{q=1}^{m_v}d_q\bigl[g^{[v]}(t_n,t_{n-q},\breve y^{(n-q)})-g^{[v]}(t_n,t_{n-q},y^{(n-q)})\bigr]\Bigr\}\\
&\quad+Q_n+h^2\sum_{v=1}^m\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}.
\end{aligned}\tag{3.19}
\]
Now, we introduce the concept of D-convergence from [25].
Definition 3.1.
An ARKLM (2.1) with \(y_n=y(t_n)\) (n≤0), \(y_i^{(n)}=y(t_n+c_ih)\) (n<0), and \(\tilde y_i^{[v](n)}=y(t_n+c_ih-\tau_v)\) (n<0) is called D-convergent of order p if, when applied to any problem (1.1) subject to (1.2)–(1.4), it produces an approximation sequence \(y_n\) whose global error satisfies a bound of the form
\[
\|y(t_n)-y_n\|\le C(t_n)h^{p},\quad h\in(0,h_0],
\]
where the maximum stepsize \(h_0\) depends on the characteristic parameters \(\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\) and the method, and the function C(t) depends only on some \(\tilde M_i\) in (3.2), the delays \(\tau_v\), the characteristic parameters \(\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\), v=1,2,…,m, and the method.
Definition 3.2.
The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be DA-stable if the matrix \(I_s-\sum_{v=1}^m A^{[v]}\xi\) is regular for \(\xi\in C^-:=\{\xi\in C\mid \operatorname{Re}\xi\le0\}\), and \(|R_i(\xi)|\le1\) for all \(\xi\in C^-\), i=0,1,…,s, where
\[
R_i(\xi)=1+\sum_{v=1}^m A_i^{[v]T}\xi\Bigl(I_s-\sum_{v=1}^m A^{[v]}\xi\Bigr)^{-1}e,\qquad
A_0^{[v]}=b^{[v]},\quad A_i^{[v]}=\bigl(a_{i1}^{[v]},a_{i2}^{[v]},\dots,a_{is}^{[v]}\bigr)^T,\quad i=1,\dots,s.
\]
Definition 3.3.
The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be ASI-stable if the matrix \(I_s-\sum_{v=1}^m A^{[v]}\xi\) is regular for \(\xi\in C^-\), and \(\bigl(I_s-\sum_{v=1}^m A^{[v]}\xi\bigr)^{-1}\) is uniformly bounded for \(\xi\in C^-\).
Definition 3.4.
The ARKLM (2.2), (2.3), (2.5), and (2.6) is said to be DAS-stable if the matrix \(I_s-\sum_{v=1}^m A^{[v]}\xi\) is regular for \(\xi\in C^-\), and \(\sum_{v=1}^m A_i^{[v]T}\xi\bigl(I_s-\sum_{v=1}^m A^{[v]}\xi\bigr)^{-1}\) (i=0,1,…,s) is uniformly bounded for \(\xi\in C^-\).
Lemma 3.5.
Suppose the ARKLM (2.2), (2.3), (2.5), and (2.6) is DA-, DAS-, and ASI-stable. Then there exist positive constants \(h_0,\gamma_1,\gamma_2,\gamma_3\), which depend only on the method and the parameters \(\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\), such that
\[
\begin{aligned}
&\Bigl\|\Bigl(\tilde I_s-\sum_{v=1}^m\tilde A^{[v]}\xi\Bigr)^{-1}\Bigr\|\le\gamma_1,\qquad
\Bigl\|I_N+\sum_{v=1}^m\tilde A_i^{[v]T}\xi\Bigl(\tilde I_s-\sum_{v=1}^m\tilde A^{[v]}\xi\Bigr)^{-1}\tilde e\Bigr\|\le 1+\gamma_2h,\\
&\Bigl\|\sum_{v=1}^m\tilde A_i^{[v]T}\xi\Bigl(\tilde I_s-\sum_{v=1}^m\tilde A^{[v]}\xi\Bigr)^{-1}v\Bigr\|\le\gamma_3\|v\|,\qquad v\in C^{Ns},\ h\in(0,h_0],\ i=0,1,\dots,s.
\end{aligned}
\]
Proof.
This lemma can be proved in a way similar to [29, Lemmas 3.5–3.7].
Theorem 3.6.
Suppose the ARKLM (2.2), (2.3), (2.5), and (2.6) is DA-, DAS-, and ASI-stable. Then there exist positive constants \(h_0,\gamma_3,\gamma_4,\gamma_5\), which depend only on the method and the parameters \(\alpha_v,\beta_v,\sigma_v,r_v,\tilde r_v\), such that for \(h\in(0,h_0]\),
\[
\|\varepsilon_i^{(n)}\|\le
\begin{cases}
(1+h\gamma_4)\max\Bigl\{\|\varepsilon_0^{(n-1)}\|,\ \displaystyle\max_{(i,P_v)\in E}\|\varepsilon_i^{(n-1-m_v+P_v)}\|,\ \max_{(i,q)\in E_q}\|\varepsilon_i^{(n-q)}\|\Bigr\}\\
\qquad+h\gamma_5\displaystyle\max_{1\le i\le s}\|\rho_i^{(n-1)}\|+\|\tilde Q_{n-1}\|+\gamma_3\|\tilde r_{n-1}\|, & i=0,\\[10pt]
(1+h\gamma_4)\max\Bigl\{\|\varepsilon_0^{(n)}\|,\ \displaystyle\max_{(i,P_v)\in E}\|\varepsilon_i^{(n-m_v+P_v)}\|,\ \max_{(i,q)\in E_q}\|\varepsilon_i^{(n-q)}\|\Bigr\}\\
\qquad+h\gamma_5\displaystyle\max_{1\le i\le s}\|\rho_i^{(n)}\|+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|, & i=1,2,\dots,s,
\end{cases}
\]
where \(\varepsilon_0^{(n)}=\breve y_n-y_n\), \(\varepsilon_i^{(n)}=\breve y_i^{(n)}-y_i^{(n)}\), \(E=\{(i,P_v)\mid 1\le i\le s,\ -d\le P_v\le r\}\), \(E_q=\{(i,q)\mid 1\le i\le s,\ 1\le q\le m_v\}\), \(\tilde Q_n=Q_n+h^2\sum_{v=1}^m\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}\), and \(\tilde r_n=r_n+h^2\sum_{v=1}^m\tilde A^{[v]}H^{[v](n)}R^{[v](n)}\).
Proof.
To lighten the notation, write
\[
\Delta f_j^{[v](n)}=f^{[v]}\bigl(t_j^{(n)},y_j^{(n)},\breve{\tilde y}_j^{[v](n)},w_j^{[v](n)}\bigr)-f^{[v]}\bigl(t_j^{(n)},y_j^{(n)},\tilde y_j^{[v](n)},w_j^{[v](n)}\bigr),\qquad
\Delta g_j^{[v](n-q)}=g^{[v]}\bigl(t_n,t_{n-q},\breve y_j^{(n-q)}\bigr)-g^{[v]}\bigl(t_n,t_{n-q},y_j^{(n-q)}\bigr).
\]
Using (3.19) and Lemma 3.5, for \(h\in(0,h_0]\) we obtain
\[
\begin{aligned}
\|\varepsilon_0^{(n+1)}\|&\le(1+\gamma_2h)\|\varepsilon_0^{(n)}\|+\gamma_3\|\tilde r_n\|+\|\tilde Q_n\|\\
&\quad+h\gamma_3\sum_{v=1}^m\Biggl\{\sum_{i=1}^s\Bigl\|\sum_{j=1}^s a_{ij}^{[v]}\Bigl[\Delta f_j^{[v](n)}+hH_j^{[v](n)}\sum_{q=1}^{m_v}d_q\,\Delta g_j^{[v](n-q)}\Bigr]\Bigr\|^2\Biggr\}^{1/2}\\
&\quad+h\sum_{v=1}^m\Bigl\|\sum_{j=1}^s b_j^{[v]}\Bigl[\Delta f_j^{[v](n)}+hH_j^{[v](n)}\sum_{q=1}^{m_v}d_q\,\Delta g_j^{[v](n-q)}\Bigr]\Bigr\|.
\end{aligned}\tag{3.25a}
\]
Denote the second and third terms on the right-hand side by (3.25b) and (3.25c), respectively. Then, by the Cauchy-Schwarz inequality and conditions (1.3) and (1.4),
\[
(3.25b)\le 2h\gamma_3\sum_{v=1}^m\sum_{i,j=1}^s\bigl|a_{ij}^{[v]}\bigr|\,r_v\bigl\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\bigr\|
+2h^2\gamma_3\sum_{q=1}^{m_v}d_q\sum_{v=1}^m\sum_{i,j=1}^s\bigl|a_{ij}^{[v]}\bigr|\bigl|H_j^{[v](n)}\bigr|\,\tilde r_v\bigl\|\breve y_j^{(n-q)}-y_j^{(n-q)}\bigr\|,
\]
and, in the same way,
\[
(3.25c)\le h\sum_{v=1}^m\sum_{j=1}^s\bigl|b_j^{[v]}\bigr|\,r_v\bigl\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\bigr\|
+h^2\sum_{v=1}^m\sum_{j=1}^s\bigl|b_j^{[v]}\bigr|\bigl|H_j^{[v](n)}\bigr|\sum_{q=1}^{m_v}d_q\,\tilde r_v\bigl\|\breve y_j^{(n-q)}-y_j^{(n-q)}\bigr\|.
\]
Combining (3.25a) with the bounds on (3.25b) and (3.25c), we have
\[
\begin{aligned}
\|\varepsilon_0^{(n+1)}\|&\le(1+\gamma_2h)\|\varepsilon_0^{(n)}\|+\gamma_3\|\tilde r_n\|+\|\tilde Q_n\|+h\sum_{v=1}^m r_v\Bigl(2\gamma_3\sum_{i,j=1}^s\bigl|a_{ij}^{[v]}\bigr|+\sum_{j=1}^s\bigl|b_j^{[v]}\bigr|\Bigr)\bigl\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\bigr\|\\
&\quad+h^2\sum_{v=1}^m\tilde r_v\sum_{q=1}^{m_v}d_q\sum_{j=1}^s\Bigl(2\gamma_3\sum_{i=1}^s\bigl|a_{ij}^{[v]}\bigr|\bigl|H_j^{[v](n)}\bigr|+\bigl|b_j^{[v]}\bigr|\bigl|H_j^{[v](n)}\bigr|\Bigr)\bigl\|\varepsilon_j^{(n-q)}\bigr\|.
\end{aligned}\tag{3.30}
\]
Moreover, it follows from (2.5) and (3.11) that
\[
\bigl\|\breve{\tilde y}_j^{[v](n)}-\tilde y_j^{[v](n)}\bigr\|\le\sup_{\delta_v\in[0,1)}\sum_{P_v=-d}^{r}\bigl|L_{P_v}(\delta_v)\bigr|\max_{-d\le P_v\le r}\bigl\|\varepsilon_j^{(n-m_v+P_v)}\bigr\|+\bigl\|\rho_j^{[v](n)}\bigr\|.\tag{3.31}
\]
Substituting (3.31) into (3.30), we get
\[
\|\varepsilon_0^{(n+1)}\|\le\bigl(1+\gamma_4^{(0)}h+\gamma_6^{(0)}h^2\bigr)\max\Bigl\{\|\varepsilon_0^{(n)}\|,\ \max_{(j,P_v)\in E}\|\varepsilon_j^{(n-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Bigr\}+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|+h\gamma_5^{(0)}\max_{(j,v)\in E_m}\|\rho_j^{[v](n)}\|,\quad h\in(0,h_0],\tag{3.32}
\]
where
\[
\begin{aligned}
\gamma_4^{(0)}&=\gamma_2+\gamma_5^{(0)}\sup_{\delta_v\in[0,1)}\sum_{P_v=-d}^{r}\bigl|L_{P_v}(\delta_v)\bigr|,\qquad
\gamma_5^{(0)}=\sum_{v=1}^m r_v\Bigl(2\gamma_3\sum_{i,j=1}^s\bigl|a_{ij}^{[v]}\bigr|+\sum_{j=1}^s\bigl|b_j^{[v]}\bigr|\Bigr),\\
\gamma_6^{(0)}&=\sum_{v=1}^m\tilde r_v\sum_{q=1}^{m_v}\sum_{j=1}^s d_q\Bigl(2\gamma_3\sum_{i=1}^s\bigl|a_{ij}^{[v]}H_j^{[v](n)}\bigr|+\bigl|b_j^{[v]}H_j^{[v](n)}\bigr|\Bigr),\\
E&=\{(j,P_v)\mid 1\le j\le s,\ -d\le P_v\le r\},\quad E_q=\{(j,q)\mid 1\le j\le s,\ 1\le q\le m_v\},\quad E_m=\{(j,v)\mid 1\le j\le s,\ 1\le v\le m\}.
\end{aligned}
\]
By Lemma 3.5, similarly to (3.32), we can obtain the inequalities
\[
\|\varepsilon_i^{(n)}\|\le\bigl(1+h\gamma_4^{(i)}+h^2\gamma_6^{(i)}\bigr)\max\Bigl\{\|\varepsilon_0^{(n)}\|,\ \max_{(j,P_v)\in E}\|\varepsilon_j^{(n-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Bigr\}+h\gamma_5^{(i)}\max_{(j,v)\in E_m}\|\rho_j^{[v](n)}\|+\|\tilde Q_n\|+\gamma_3\|\tilde r_n\|,\quad i=1,\dots,s,\ h\in(0,h_0],\tag{3.34}
\]
where
\[
\begin{aligned}
\gamma_4^{(i)}&=\gamma_2+\gamma_5^{(i)}\sup_{\delta_v\in[0,1)}\sum_{P_v=-d}^{r}\bigl|L_{P_v}(\delta_v)\bigr|,\qquad
\gamma_5^{(i)}=\sum_{v=1}^m r_v\Bigl(2\gamma_3\sum_{i,j=1}^s\bigl|a_{ij}^{[v]}\bigr|+\sum_{j=1}^s\bigl|a_{ij}^{[v]}\bigr|\Bigr),\\
\gamma_6^{(i)}&=\sum_{v=1}^m\tilde r_v\sum_{q=1}^{m_v}\sum_{j=1}^s d_q\Bigl(2\gamma_3\sum_{i=1}^s\bigl|a_{ij}^{[v]}H_j^{[v](n)}\bigr|+\bigl|a_{ij}^{[v]}H_j^{[v](n)}\bigr|\Bigr).
\end{aligned}
\]
Setting \(\gamma_4=\max_{0\le i\le s}\gamma_4^{(i)}\), \(\gamma_5=\max_{0\le i\le s}\gamma_5^{(i)}\), and \(\gamma_6=\max_{0\le i\le s}\gamma_6^{(i)}\), and combining (3.32) with (3.34), we immediately obtain the conclusion of the theorem.
Now we turn to the convergence of ARKLM (2.1) for (1.1). It is always assumed that the analytic solution y(t) of (1.1) is sufficiently smooth on each interval of the form \((t_0+(j-1)h,\,t_0+jh)\) (j a positive integer), as in (3.2).
Theorem 3.7.
Assume the ARKLM (2.1) with stage order p is DA-, DAS-, and ASI-stable. Then the ARKLM (2.1) is D-convergent of order min{p, q+1, s+1}, where q=d+r.
Proof.
By Theorem 3.6, for \(h\in(0,h_0]\) we have
\[
\|\varepsilon_i^{(n)}\|\le\begin{cases}
(1+h\gamma_4+h^2\gamma_6)\max\Bigl\{\|\varepsilon_0^{(n-1)}\|,\ \displaystyle\max_{(i,P_v)\in E}\|\varepsilon_i^{(n-1-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Bigr\}+T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}, & i=0,\\[8pt]
(1+h\gamma_4+h^2\gamma_6)\max\Bigl\{\|\varepsilon_0^{(n)}\|,\ \displaystyle\max_{(i,P_v)\in E}\|\varepsilon_i^{(n-m_v+P_v)}\|,\ \max_{(j,q)\in E_q}\|\varepsilon_j^{(n-q)}\|\Bigr\}+T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}, & i=1,\dots,s,
\end{cases}\tag{3.36}
\]
where
\[
T_1=\hat M_0+\gamma_3\Bigl(\sum_{i=1}^s\hat M_i^2\Bigr)^{1/2},\qquad
T_2=\frac{\gamma_3}{(q+1)!}\sup_{\delta_v\in[0,1)}\prod_{P_v=-d}^{r}\bigl|\delta_v-P_v\bigr|\,\tilde M_{q+1},\qquad
T_3=\Bigl\|\sum_{v=1}^m\tilde b^{[v]T}H^{[v](n)}R^{[v](n)}\Bigr\|+\Bigl\|\sum_{v=1}^m\tilde A^{[v]}H^{[v](n)}R^{[v](n)}\Bigr\|.
\]
It follows from an induction on n in (3.36) that
\[
\|\varepsilon_i^{(n)}\|\le\begin{cases}\displaystyle\sum_{j=0}^{n}(1+2h\gamma_4)^j\bigl(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\bigr), & i=0,\\[8pt]
\displaystyle\sum_{j=0}^{n+1}(1+2h\gamma_4)^j\bigl(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\bigr), & i=1,\dots,s.\end{cases}
\]
Hence, for \(h\in(0,h_0]\), we arrive at
\[
\begin{aligned}
\|y(t_n)-y_n\|=\|\varepsilon_0^{(n)}\|&\le\sum_{j=0}^{n}(1+2h\gamma_4)^j\bigl(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\bigr)\\
&=\frac{(1+2h\gamma_4)^{n+1}-1}{2h\gamma_4}\bigl(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\bigr)\\
&\le\frac{\exp[2(n+1)h\gamma_4]-1}{2h\gamma_4}\bigl(T_1h^{p+1}+T_2h^{q+2}+T_3h^{s+2}\bigr)\\
&\le\exp[2(T-t_0)\gamma_4]\,\frac{\exp(2h_0\gamma_4)-1}{2\gamma_4}\bigl(T_1h^{p}+T_2h^{q+1}+T_3h^{s+1}\bigr).
\end{aligned}
\]
Therefore, the ARKLM (2.1) is D-convergent of order min{p, q+1, s+1}, q=r+d.
4. Some Examples
Consider the following initial value problem of multidelay-integro-differential equations:
\[
\begin{aligned}
y'(t)&=(-10^4+99i)\frac{[1+y(t)]^2}{1+[1+y(t)]^2}+500\bigl[y(t-1)-y(t-2)\bigr]-500\int_{t-1}^{t}y(s)\,ds+500\int_{t-2}^{t}y(s)\,ds\\
&\quad-\exp(-t)+(10^4-99i)\frac{[1+\exp(-t)]^2}{1+[1+\exp(-t)]^2},\quad 0\le t\le3,\\
y(t)&=\exp(-t),\quad -2\le t\le0,
\end{aligned}\tag{4.1}
\]
and its perturbed problem:
\[
\begin{aligned}
z'(t)&=(-10^4+99i)\frac{[1+z(t)]^2}{1+[1+z(t)]^2}+500\bigl[z(t-1)-z(t-2)\bigr]-500\int_{t-1}^{t}z(s)\,ds+500\int_{t-2}^{t}z(s)\,ds\\
&\quad-\exp(-t)+(10^4-99i)\frac{[1+\exp(-t)]^2}{1+[1+\exp(-t)]^2},\quad 0\le t\le3,\\
z(t)&=\exp(-t)+0.01,\quad -2\le t\le0.
\end{aligned}\tag{4.2}
\]
It can be easily verified that \(\alpha_1=\alpha_2=-9.5\times10^3\), \(\beta_1=\beta_2=250\), \(\sigma_1=\sigma_2=250\), and \(\tilde r_1=\tilde r_2=1\), with analytic solution \(y(t)=\exp(-t)\), where the inner product is the standard inner product. We apply the two-stage, second-order additive Runge-Kutta method

110101121201121201

to problem (4.1) and its perturbed problem (4.2). Since the order of the method is 2, we adopt the compound trapezoidal rule for computing the integral part. According to Theorems 2.3 and 3.7, the corresponding method for MDIDEs is GDN-stable and D-convergent. We denote the numerical solutions of problems (4.1) and (4.2) by \(y_n\) and \(z_n\), where \(y_n\) and \(z_n\) are approximations to \(y(t_n)\) and \(z(t_n)\), respectively. The values \(y_n\) and \(z_n\) with h=0.1 are plotted in Figure 1 (where the abscissa and ordinate denote n and \(y_n\), resp.) and Figure 2 (where the abscissa and ordinate denote n and \(z_n\), resp.). Table 1 shows that the numerical solutions tend to the exact solution as h→0.
Table 1: Comparison between the numerical solutions for different h and the exact solution.

  t     h=0.2          h=0.1          h=0.05         Exact solution y(t)
  0.2   2.87569414     1.212021722    0.898712327    0.818730753
  0.4   2.751727517    1.165089633    0.801604459    0.670320046
  0.6   2.448183694    1.0957815      0.626189996    0.548811636
Figure 1: Values yn with h=0.1.
Figure 2: Values zn with h=0.1.
It is obvious that the corresponding method for MDIDEs is GDN-stable and D-convergent.
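The qualitative behavior in Table 1 (the error shrinking as h decreases) can be reproduced on a much simpler scalar delay differential equation with an elementary scheme. The following sketch applies the implicit Euler method to y'(t) = -y(t) + 0.5 y(t-1) with y(t) = 1 for t ≤ 0, dropping the integral term for brevity; this is an illustrative stand-in, not the stiff problem (4.1) or the ARK method of the paper:

```python
import math

def solve_dde(h: float, T: float = 2.0) -> float:
    """Implicit Euler for y'(t) = -y(t) + 0.5*y(t-1), y(t) = 1 for t <= 0.
    Returns the approximation of y(T). Requires 1/h to be an integer, so the
    delayed argument always falls on a grid point (no interpolation needed)."""
    m = round(1.0 / h)                 # steps per delay interval
    n_steps = round(T / h)
    ys = [1.0] * (m + 1)               # history values y(t) = 1 on [-1, 0]
    for n in range(n_steps):
        y_delay = ys[len(ys) - m]      # approximates y(t_{n+1} - 1)
        ys.append((ys[-1] + h * 0.5 * y_delay) / (1.0 + h))
    return ys[-1]

# Exact value at T = 2, obtained by the method of steps:
# y(2) = 1/4 + (1/2) e^{-1} + (1/2) e^{-2}.
exact = 0.25 + 0.5 * math.exp(-1) + 0.5 * math.exp(-2)
errors = [abs(solve_dde(h) - exact) for h in (0.1, 0.05, 0.025)]
# First-order method: halving h roughly halves the error.
assert errors[0] > errors[1] > errors[2]
```

The same halving pattern, at second order, is what Table 1 exhibits for the ARKLM on the full problem (4.1).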
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11101109) and the Natural Science Foundation of Hei-long-jiang Province of China (A201107).
References

[1] T. E. Wheldon, J. Kirk, and H. M. Finlay, "Cyclical granulopoiesis in chronic granulocytic leukemia: a simulation study," 1974.
[2] A. Ruehli, U. Miekkala, A. Bellen, and H. Heeb, "Stable time domain solutions for EMC problems using PEEC circuit models," in Proceedings of the IEEE International Symposium on Electromagnetic Compatibility, pp. 371–376, August 1994.
[3] R. K. Brayton, "Small signal stability criterion for networks containing lossless transmission lines," 1968.
[4] N. Guglielmi, "Inexact Newton methods for the steady state analysis of nonlinear circuits," 1996.
[5] Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, vol. 191 of Mathematics in Science and Engineering, Academic Press, Boston, Mass, USA, 1993.
[6] A. Bellen and M. Zennaro, Numerical Methods for Delay Differential Equations, Numerical Mathematics and Scientific Computation, The Clarendon Press, Oxford University Press, New York, NY, USA, 2003.
[7] L. Torelli, "Stability of numerical methods for delay differential equations," 1989.
[8] C. J. Zhang and G. Sun, "Nonlinear stability of Runge-Kutta methods applied to infinite-delay-differential equations," 2004.
[9] A. Bellen and M. Zennaro, "Strong contractivity properties of numerical methods for ordinary and delay differential equations," 1992.
[10] C.-J. Zhang and S.-F. Li, "Dissipativity and exponentially asymptotic stability of the solutions for nonlinear neutral functional-differential equations," 2001.
[11] M. Zennaro, "Contractivity of Runge-Kutta methods with respect to forcing terms," 1993.
[12] M. Zennaro, "Asymptotic stability analysis of Runge-Kutta methods for nonlinear systems of delay differential equations," 1997.
[13] A. Araújo, "A note on B-stability of splitting methods," 2004.
[14] B. García-Celayeta, I. Higueras, and T. Roldán, "Contractivity/monotonicity for additive Runge-Kutta methods: inner product norms," 2006.
[15] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations, vol. 99 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1993.
[16] V. Kolmanovskiĭ and A. Myshkis, vol. 85 of Mathematics and its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1992.
[17] Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, vol. 191 of Mathematics in Science and Engineering, Academic Press, Boston, Mass, USA, 1993.
[18] C. Zhang and S. Vandewalle, "General linear methods for Volterra integro-differential equations with memory," 2006.
[19] C. Zhang and S. Vandewalle, "Stability analysis of Runge-Kutta methods for nonlinear Volterra delay-integro-differential equations," 2004.
[20] C. J. Zhang and S. Vandewalle, "Stability criteria for exact and discrete solutions of neutral multidelay-integro-differential equations," 2008.
[21] T. Koto, "Stability of θ-methods for delay integro-differential equations," 2003.
[22] C. J. Zhang and Y. Y. He, "The extended one-leg methods for nonlinear neutral delay-integro-differential equations," 2009.
[23] T. Koto, "Stability of Runge-Kutta methods for delay integro-differential equations," 2002.
[24] C. Huang and S. Vandewalle, "An analysis of delay-dependent stability for ordinary and partial differential equations with fixed and distributed delays," 2004.
[25] C. J. Zhang and S. Z. Zhou, "Nonlinear stability and D-convergence of Runge-Kutta methods for delay differential equations," 1997.
[26] R. Frank, J. Schneid, and C. W. Ueberhuber, "The concept of B-convergence," 1981.
[27] K. Dekker and J. G. Verwer, vol. 2 of CWI Monographs, North-Holland Publishing, Amsterdam, The Netherlands, 1984.
[28] J. D. Lambert, John Wiley & Sons, Chichester, UK, 1991.
[29] W. H. Hundsdorfer, "Stability and B-convergence of linearly implicit Runge-Kutta methods," 1986.