Journal of Applied Mathematics and Stochastic Analysis, Volume 2008, Article ID 473156. Hindawi Publishing Corporation. doi:10.1155/2008/473156. ISSN 1687-2177 (online), 1048-9533 (print).

Research Article

Characterisation of Exponential Convergence to Nonequilibrium Limits for Stochastic Volterra Equations

John A. D. Appleby,1 Siobhán Devin,2 and David W. Reynolds1
1 School of Mathematical Sciences, Dublin City University, Dublin 9, Ireland
2 School of Mathematical Sciences, University College Cork, Cork, Ireland

Received 25 October 2007; Accepted 11 May 2008; Published 24 June 2008

Recommended by Jiongmin Yong

Copyright © 2008. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper considers necessary and sufficient conditions for the solution of a stochastically and deterministically perturbed Volterra equation to converge exponentially to a nonequilibrium and nontrivial limit. Convergence in an almost sure and pth mean sense is obtained.

1. Introduction

In this paper, we study the exponential convergence of the solution of
\[
dX(t)=\Bigl(AX(t)+\int_0^t K(t-s)X(s)\,ds+f(t)\Bigr)dt+\Sigma(t)\,dB(t),\quad t>0,\tag{1.1a}
\]
\[
X(0)=X_0,\tag{1.1b}
\]
to a nontrivial random variable. Here the solution $X$ is an $n$-dimensional vector-valued function on $[0,\infty)$, $A$ is a real $n\times n$ matrix, $K$ is a continuous and integrable $n\times n$ matrix-valued function on $[0,\infty)$, $f$ is a continuous $n$-dimensional vector-valued function on $[0,\infty)$, $\Sigma$ is a continuous $n\times d$ matrix-valued function on $[0,\infty)$, and $B(t)=(B_1(t),B_2(t),\dots,B_d(t))$, where the components of the Brownian motion are independent. The initial condition $X_0$ is a deterministic constant vector.
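The dynamics of (1.1a)-(1.1b) can be explored numerically. The following is a minimal Euler-Maruyama sketch for the illustrative scalar case $n=d=1$; the choices of $A$, $K$, $f$, and $\Sigma$ passed in are hypothetical and not taken from the paper, and the convolution term is approximated by a left-endpoint Riemann sum.

```python
import numpy as np

def simulate_volterra(A, K, f, Sigma, X0, T, N, rng):
    """Euler-Maruyama scheme for the scalar Ito-Volterra equation
    dX(t) = (A X(t) + int_0^t K(t-s) X(s) ds + f(t)) dt + Sigma(t) dB(t),
    X(0) = X0, on [0, T] with N steps."""
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    X = np.empty(N + 1)
    X[0] = X0
    for i in range(N):
        # left-endpoint Riemann sum for int_0^{t_i} K(t_i - s) X(s) ds
        conv = np.sum(K(t[i] - t[:i]) * X[:i]) * dt if i > 0 else 0.0
        drift = A * X[i] + conv + f(t[i])
        X[i + 1] = X[i] + drift * dt + Sigma(t[i]) * np.sqrt(dt) * rng.standard_normal()
    return t, X
```

With exponentially decaying (hence exponentially integrable) choices such as $K(u)=e^{-2u}$, $f(s)=\Sigma(s)=e^{-s}$, sample paths settle down rather than wander, consistent with convergence to a limiting random variable.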

The solution of (1.1a)-(1.1b) can be written in terms of the solution of the resolvent equation
\[
R'(t)=AR(t)+\int_0^t K(t-s)R(s)\,ds,\quad t>0,\tag{1.2a}
\]
\[
R(0)=I,\tag{1.2b}
\]
where the matrix-valued function $R$ is known as the resolvent or fundamental solution. In earlier work, the authors studied the asymptotic convergence of the solution $R$ of (1.2a)-(1.2b) to a nontrivial limit $R_\infty$. It was found that the integrability of $R-R_\infty$ and the exponential integrability of the kernel are necessary and sufficient for exponential convergence. This built upon a result of Murakami, who considered the exponential convergence of the solution to a trivial limit, and a result of Krisztin and Terjéki, who obtained necessary and sufficient conditions for the integrability of $R-R_\infty$. A deterministically perturbed version of (1.2a)-(1.2b),
\[
x'(t)=Ax(t)+\int_0^t K(t-s)x(s)\,ds+f(t),\quad t>0,\tag{1.3a}
\]
\[
x(0)=x_0,\tag{1.3b}
\]
was also studied. It was shown that the exponential decay of the tail of the perturbation $f$, combined with the integrability of $R-R_\infty$ and the exponential integrability of the kernel, is necessary and sufficient for convergence to a nontrivial limit.
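A nontrivial limit $R_\infty$ for the resolvent equation (1.2a)-(1.2b) can be observed numerically. The sketch below uses illustrative scalar choices (not from the paper): $A=-1$ and $K(u)=2e^{-2u}$, for which $A+\int_0^\infty K(s)\,ds=0$ and one can check directly that $R(t)=\tfrac{2}{3}+\tfrac{1}{3}e^{-3t}$, so $R(t)\to R_\infty=\tfrac{2}{3}$ exponentially fast.

```python
import numpy as np

def resolvent(A, K, T, N):
    """Forward-Euler integration of the scalar resolvent equation
    R'(t) = A R(t) + int_0^t K(t-s) R(s) ds,  R(0) = 1,
    with the convolution approximated by a left-endpoint Riemann sum."""
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    R = np.empty(N + 1)
    R[0] = 1.0
    for i in range(N):
        conv = np.sum(K(t[i] - t[:i]) * R[:i]) * dt if i > 0 else 0.0
        R[i + 1] = R[i] + (A * R[i] + conv) * dt
    return t, R
```

Running it with the choices above, the computed $R(T)$ for moderate $T$ sits close to $2/3$ rather than decaying to zero, illustrating a nonequilibrium limit.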

The case where (1.2a)-(1.2b) is stochastically perturbed,
\[
dX(t)=\Bigl(AX(t)+\int_0^t K(t-s)X(s)\,ds\Bigr)dt+\Sigma(t)\,dB(t),\quad t>0,\tag{1.4a}
\]
\[
X(0)=X_0,\tag{1.4b}
\]
has also been considered. Various authors, including Appleby and Freeman, Appleby and Riedle, Mao, and Mao and Riedle, have studied convergence to equilibrium. In particular, the paper by Appleby and Freeman considered the speed of convergence of solutions of (1.4a)-(1.4b) to equilibrium. It was shown that, under the condition that the kernel does not change sign on $[0,\infty)$, the following are equivalent: (i) the almost sure exponential convergence of the solution to zero, (ii) the $p$th mean exponential convergence of the solution to zero, and (iii) the exponential integrability of the kernel together with the exponential square integrability of the noise.

Two papers by Appleby et al. [8, 9] considered the convergence of solutions of (1.4a)-(1.4b) to a nonequilibrium limit in the mean square and almost sure senses, respectively. Conditions on the resolvent, kernel, and noise for the convergence of solutions to an explicit limiting random variable were found. A natural progression from this work is the analysis of the speed of convergence.

This paper examines (1.1a)-(1.1b) and builds on the results in [1, 8, 9]. The analysis of (1.1a)-(1.1b) is complicated, particularly in the almost sure case, due to the presence of both a deterministic and a stochastic perturbation. Nonetheless, a set of conditions which characterises the exponential convergence of the solution of (1.1a)-(1.1b) to a nontrivial random variable is found. It is shown that the integrability of $R-R_\infty$, the exponential integrability of the kernel, and the exponential square integrability of the noise, combined with the exponential decay of the tail of the deterministic perturbation, $t\mapsto\int_t^\infty f(s)\,ds$, are necessary and sufficient conditions for exponential convergence of the solution to a nontrivial random limit.

2. Mathematical Preliminaries

In this section, we introduce some standard notation as well as giving a precise definition of (1.1a)-(1.1b) and its solution.

Let $\mathbb{R}$ denote the set of real numbers and let $\mathbb{R}^n$ denote the set of $n$-dimensional vectors with entries in $\mathbb{R}$. Denote by $e_i$ the $i$th standard basis vector in $\mathbb{R}^n$. Denote by $\|A\|$ the standard Euclidean norm of a vector $A=(a_1,\dots,a_n)$, given by
\[
\|A\|^2=\sum_{i=1}^n a_i^2=\operatorname{tr}(AA^T),
\]
where $\operatorname{tr}$ denotes the trace of a square matrix.

Let $\mathbb{R}^{n\times n}$ be the space of $n\times n$ matrices with real entries, where $I$ is the identity matrix. Let $\operatorname{diag}(a_1,a_2,\dots,a_n)$ denote the $n\times n$ matrix with the scalar entries $a_1,a_2,\dots,a_n$ on the diagonal and $0$ elsewhere. For $A=(a_{ij})\in\mathbb{R}^{n\times d}$ the norm, also denoted by $\|\cdot\|$, is defined by
\[
\|A\|^2=\sum_{i=1}^n\sum_{j=1}^d|a_{ij}|^2.
\]
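The identity $\|A\|^2=\sum_i a_i^2=\operatorname{tr}(AA^T)$ and its matrix analogue (the Frobenius norm) can be checked numerically; the matrix below is an arbitrary illustrative example.

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])

# sum of squared entries, i.e. ||A||^2 in the notation of this section
frob_sq = np.sum(np.abs(A) ** 2)

# trace formulation tr(A A^T)
trace_sq = np.trace(A @ A.T)

assert np.isclose(frob_sq, trace_sq)
assert np.isclose(np.linalg.norm(A, 'fro') ** 2, frob_sq)
```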

The set of complex numbers is denoted by $\mathbb{C}$; the real part of $z\in\mathbb{C}$ is denoted by $\operatorname{Re}z$. The Laplace transform of a function $A\colon[0,\infty)\to\mathbb{R}^{n\times d}$ is defined as
\[
\hat{A}(z)=\int_0^\infty A(t)e^{-zt}\,dt.
\]
If $\epsilon\in\mathbb{R}$ and $\int_0^\infty\|A(t)\|e^{\epsilon t}\,dt<\infty$, then $\hat{A}(z)$ exists for $\operatorname{Re}z\ge-\epsilon$ and $z\mapsto\hat{A}(z)$ is analytic for $\operatorname{Re}z>-\epsilon$.

If $J$ is an interval in $\mathbb{R}$ and $V$ a finite-dimensional normed space with norm $\|\cdot\|$, then $C(J,V)$ denotes the family of continuous functions $\phi\colon J\to V$. The space of Lebesgue integrable functions $\phi\colon[0,\infty)\to V$ with $\int_0^\infty\|\phi(t)\|\,dt<\infty$ is denoted by $L^1([0,\infty),V)$. The space of Lebesgue square-integrable functions $\phi\colon[0,\infty)\to V$ with $\int_0^\infty\|\phi(t)\|^2\,dt<\infty$ is denoted by $L^2([0,\infty),V)$. When $V$ is clear from the context, it is omitted from the notation.

We now make our problem precise. We assume that the function $K\colon[0,\infty)\to\mathbb{R}^{n\times n}$ satisfies
\[
K\in C([0,\infty),\mathbb{R}^{n\times n})\cap L^1([0,\infty),\mathbb{R}^{n\times n}),\tag{2.4}
\]
the function $f\colon[0,\infty)\to\mathbb{R}^n$ satisfies
\[
f\in C([0,\infty),\mathbb{R}^n)\cap L^1([0,\infty),\mathbb{R}^n),\tag{2.5}
\]
and the function $\Sigma\colon[0,\infty)\to\mathbb{R}^{n\times d}$ satisfies
\[
\Sigma\in C([0,\infty),\mathbb{R}^{n\times d}).\tag{2.6}
\]
Due to (2.4) we may define $K_1\in C([0,\infty),\mathbb{R}^{n\times n})$ by
\[
K_1(t)=\int_t^\infty K(s)\,ds,\quad t\ge0,\tag{2.7}
\]
which is the tail of the kernel $K$. Similarly, due to (2.5), we may define $f_1\in C([0,\infty),\mathbb{R}^n)$ by
\[
f_1(t)=\int_t^\infty f(s)\,ds,\quad t\ge0.\tag{2.8}
\]
We let $\{B(t)\}_{t\ge0}$ denote $d$-dimensional Brownian motion on a complete probability space $(\Omega,\mathcal{F},\{\mathcal{F}^B(t)\}_{t\ge0},\mathbb{P})$, where the filtration is the natural one, $\mathcal{F}^B(t)=\sigma\{B(s):0\le s\le t\}$.

Under the hypothesis (2.4), it is well known that (1.2a)-(1.2b) has a unique continuous solution $R$, which is continuously differentiable. We define the function $t\mapsto X(t;X_0,f,\Sigma)$ to be the unique solution of the initial value problem (1.1a)-(1.1b). If $\Sigma$ and $f$ are continuous, then for any deterministic initial condition $X_0$ there exists an almost surely unique continuous and $\mathcal{F}^B$-adapted solution to (1.1a)-(1.1b) given by
\[
X(t;X_0,\Sigma,f)=R(t)X_0+\int_0^t R(t-s)f(s)\,ds+\int_0^t R(t-s)\Sigma(s)\,dB(s),\quad t\ge0.\tag{2.9}
\]
When $X_0$, $f$, and $\Sigma$ are clear from the context, we omit them from the notation.

The notions of convergence and integrability in the $p$th mean and almost sure senses are now defined. The $\mathbb{R}^n$-valued stochastic process $\{X(t)\}_{t\ge0}$ converges in $p$th mean to $X_\infty$ if $\lim_{t\to\infty}\mathbb{E}\|X(t)-X_\infty\|^p=0$; the process is $p$th mean exponentially convergent to $X_\infty$ if there exists a deterministic $\beta_p>0$ such that
\[
\limsup_{t\to\infty}\frac{1}{t}\log\bigl(\mathbb{E}\|X(t)-X_\infty\|^p\bigr)\le-\beta_p;
\]
we say that the difference between the stochastic process $\{X(t)\}_{t\ge0}$ and $X_\infty$ is integrable in the $p$th mean sense if
\[
\int_0^\infty\mathbb{E}\|X(t)-X_\infty\|^p\,dt<\infty.
\]
If there exists a $\mathbb{P}$-null set $\Omega_0$ such that for every $\omega\notin\Omega_0$ we have $\lim_{t\to\infty}X(t,\omega)=X_\infty(\omega)$, then $X$ converges almost surely to $X_\infty$; we say $X$ is almost surely exponentially convergent to $X_\infty$ if there exists a deterministic $\beta_0>0$ such that
\[
\limsup_{t\to\infty}\frac{1}{t}\log\|X(t,\omega)-X_\infty(\omega)\|\le-\beta_0\quad\text{a.s.}
\]
Finally, the difference between the stochastic process $\{X(t)\}_{t\ge0}$ and $X_\infty$ is square integrable in the almost sure sense if
\[
\int_0^\infty\|X(t,\omega)-X_\infty(\omega)\|^2\,dt<\infty\quad\text{a.s.}
\]
Henceforth, $\mathbb{E}[\|X\|^p]$ will be denoted by $\mathbb{E}\|X\|^p$ except in cases where the meaning may be ambiguous. A number of inequalities are used repeatedly in the sequel; they are stated here for clarity. If, for $p,q\in(0,\infty)$, the finite-dimensional random variables $X$ and $Y$ satisfy $\mathbb{E}\|X\|^p<\infty$ and $\mathbb{E}\|Y\|^q<\infty$, respectively, then the Lyapunov inequality is useful when considering the $p$th mean behaviour of random variables, as any exponent $p>0$ may be considered:
\[
\mathbb{E}[\|X\|^p]^{1/p}\le\mathbb{E}[\|X\|^q]^{1/q},\quad 0<p\le q.
\]
The following proves useful in manipulating norms:
\[
\Bigl(\sum_{i=1}^n|x_i|\Bigr)^k\le n^{k-1}\sum_{i=1}^n|x_i|^k,\quad n,k\in\mathbb{N}.
\]
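The two inequalities just stated can be sanity-checked numerically; the sample and vector below are arbitrary illustrative choices. (Lyapunov's inequality holds for any distribution, so in particular for an empirical one.)

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(10_000)           # sample of a random variable X

def moment(sample, p):
    """Empirical value of E[|X|^p]^(1/p)."""
    return np.mean(np.abs(sample) ** p) ** (1.0 / p)

# Lyapunov inequality: p -> E[|X|^p]^{1/p} is nondecreasing in p
assert moment(x, 1) <= moment(x, 2) <= moment(x, 4)

# Power-sum inequality: (sum |x_i|)^k <= n^{k-1} sum |x_i|^k
v = np.array([0.3, -1.2, 2.5])
n, k = v.size, 3
assert np.sum(np.abs(v)) ** k <= n ** (k - 1) * np.sum(np.abs(v) ** k)
```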

3. Discussion of Results

We begin by stating the main result of this paper. That is, we state the necessary and sufficient conditions required on the resolvent, kernel, deterministic perturbation, and noise terms for the solution of (1.1a)-(1.1b) to converge exponentially to a limiting random variable. In this paper, we are particularly interested in the case when the limiting random variable is nontrivial, although the result is still true for the case when the limiting value is zero.

Theorem 3.1.

Let $K$ satisfy (2.4) and
\[
\int_0^\infty t^2\|K(t)\|\,dt<\infty,\tag{3.1}
\]
let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If $K$ satisfies
\[
\text{each entry of }K\text{ does not change sign on }[0,\infty),\tag{3.2}
\]
then the following are equivalent.

(i) There exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies
\[
R-R_\infty\in L^2([0,\infty),\mathbb{R}^{n\times n}),\tag{3.3}
\]
and there exist constants $\alpha>0$, $\gamma>0$, $\rho>0$, and $c_1>0$ such that $K$ satisfies
\[
\int_0^\infty\|K(s)\|e^{\alpha s}\,ds<\infty,\tag{3.4}
\]
$\Sigma$ satisfies
\[
\int_0^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds<\infty,\tag{3.5}
\]
and $f_1$, the tail of $f$ defined by (2.8), satisfies
\[
\|f_1(t)\|\le c_1e^{-\rho t},\quad t\ge0.\tag{3.6}
\]

(ii) For all initial conditions $X_0$ and constants $p>0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma,f)$ with $\mathbb{E}\|X_\infty\|^p<\infty$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma,f)$ which obeys (1.1a)-(1.1b) satisfies
\[
\mathbb{E}[\|X(t)-X_\infty\|^p]\le m_p^*e^{-\beta_p^*t},\quad t\ge0,\tag{3.7}
\]
where $\beta_p^*$ and $m_p^*=m_p^*(X_0)$ are positive constants.

(iii) For all initial conditions $X_0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma,f)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma,f)$ which obeys (1.1a)-(1.1b) satisfies
\[
\limsup_{t\to\infty}\frac{1}{t}\log\|X(t)-X_\infty\|\le-\beta_0^*\quad\text{a.s.},\tag{3.8}
\]
where $\beta_0^*$ is a positive constant.

The proof of Theorem 3.1 is complicated by the presence of two perturbations, so as an initial step the case $f\equiv0$ is considered. That is, we consider the conditions required for exponential convergence of the solution of (1.4a)-(1.4b) to a limiting random variable.

Theorem 3.2.

Let K satisfy (2.4) and (3.1) and let Σ satisfy (2.6). If K satisfies (3.2) then the following are equivalent.

(i) There exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3), and there exist constants $\alpha>0$ and $\gamma>0$ such that $K$ and $\Sigma$ satisfy (3.4) and (3.5), respectively.

(ii) For all initial conditions $X_0$ and constants $p>0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ with $\mathbb{E}\|X_\infty\|^p<\infty$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies
\[
\mathbb{E}[\|X(t)-X_\infty\|^p]\le m_pe^{-\beta_pt},\quad t\ge0,\tag{3.9}
\]
where $\beta_p$ and $m_p=m_p(X_0)$ are positive constants.

(iii) For all initial conditions $X_0$ there exists an a.s. finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies
\[
\limsup_{t\to\infty}\frac{1}{t}\log\|X(t)-X_\infty\|\le-\beta_0\quad\text{a.s.},\tag{3.10}
\]
where $\beta_0$ is a positive constant.

This result is interesting in its own right, as it generalises an earlier result in which necessary and sufficient conditions for exponential convergence to zero were found; Theorem 3.2 collapses to that case if $R_\infty=0$.

It is interesting to note the relationship between the behaviour of the solutions of (1.1a)-(1.1b), (1.2a)-(1.2b), (1.3a)-(1.3b), and (1.4a)-(1.4b) and the behaviour of the inputs $K$, $f$, and $\Sigma$. For the resolvent equation, the exponential integrability of $K$ is the crucial condition for exponential convergence. Each perturbed equation then builds on this resolvent case: for the deterministically perturbed equation we require the exponential integrability of $K$ and the exponential decay of the tail of the perturbation $f$; for the stochastically perturbed case we require the exponential integrability of $K$ and the exponential square integrability of $\Sigma$. In the stochastically and deterministically perturbed case, the perturbations do not interact in a way that exacerbates or diminishes their influence on the system: we can isolate the behaviour of each perturbation and show that the same conditions on the perturbations remain necessary and sufficient.

Theorem 3.1 has applications in the analysis of initial history problems. In particular, this theoretical result could be used to interpret the equation as an epidemiological model. Conditions under which a disease becomes endemic (the interpretation given when solutions settle down to a nontrivial limit) have been studied previously. The theoretical results obtained in this paper could be exploited to quantify the speed at which this can occur within a population.

The remainder of this paper deals with the proofs of Theorems 3.1 and 3.2. In Section 4 we prove the sufficiency of conditions on R,K, and Σ for the exponential convergence of the solution of (1.4a)-(1.4b) while in Section 5 we prove the necessity of these conditions. In Section 6 we prove the sufficiency of conditions on R,K,Σ, and f for the exponential convergence of the solution of (1.1a)-(1.1b), while Section 7 deals with the necessity of the condition on Σ. In Section 8 we combine our results to prove the main theorems, namely, Theorems 3.1 and 3.2.

4. Sufficient Conditions for Exponential Convergence of Solutions of (1.4a)-(1.4b)

In this section, sufficient conditions for exponential convergence of solutions of (1.4a)-(1.4b) to a nontrivial limit are obtained. Proposition 4.1 concerns convergence in the pth mean sense while Proposition 4.2 deals with the almost sure case.

Proposition 4.1.

Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exist constants $\beta_p>0$, independent of $X_0$, and $m_p=m_p(X_0)>0$, such that statement (ii) of Theorem 3.2 holds.

Proposition 4.2.

Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exists a constant $\beta_0>0$, independent of $X_0$, such that statement (iii) of Theorem 3.2 holds.

In earlier work, the conditions which give mean square convergence to a nontrivial limit were considered, so a natural progression in this paper is the examination of the speed of convergence in the mean square case. Lemma 4.3 examines the case $p=2$ in order to highlight this important case. This lemma may then be used when generalising the result to all $p>0$.

Lemma 4.3.

Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$ and $\gamma>0$ such that (3.4) and (3.5) hold, then there exist constants $\lambda>0$, independent of $X_0$, and $m=m(X_0)>0$, such that
\[
\mathbb{E}\|X(t)-X_\infty\|^2\le m(X_0)e^{-2\lambda t},\quad t\ge0.\tag{4.1}
\]

From [8, 9] it is evident that $R-R_\infty\in L^2([0,\infty),\mathbb{R}^{n\times n})$ is a more natural condition on the resolvent than $R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n})$ when studying convergence of solutions of (1.4a)-(1.4b). However, the deterministic results obtained in earlier work are based on the assumption that $R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n})$. Lemma 4.4 is required in order to make use of those results in this paper; it isolates conditions which ensure the integrability of $R-R_\infty$ once $R-R_\infty$ is square integrable.

Lemma 4.4.

Let $K$ satisfy (2.4) and (3.1), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $R$ of (1.2a)-(1.2b) satisfies
\[
R-R_\infty\in L^1([0,\infty),\mathbb{R}^{n\times n}).\tag{4.2}
\]

We now state some supporting results. It is well known that the behaviour of the resolvent equation influences the behaviour of the perturbed equation. It is unsurprising, therefore, that an earlier result concerning exponential convergence of the resolvent $R$ to a limit $R_\infty$ is needed in the proofs of Theorems 3.1 and 3.2.

Theorem 4.5.

Let $K$ satisfy (2.4) and (3.1). Suppose there exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (4.2). If there exists a constant $\alpha>0$ such that $K$ satisfies (3.4), then there exist constants $\beta>0$ and $c>0$ such that
\[
\|R(t)-R_\infty\|\le ce^{-\beta t},\quad t\ge0.\tag{4.3}
\]

In the proofs of Propositions 4.1 and 4.2, an explicit representation of $X_\infty$ is required. In [8, 9] the asymptotic convergence of the solution of (1.4a)-(1.4b) was considered; sufficient conditions for convergence were obtained and an explicit representation of $X_\infty$ was found.

Theorem 4.6.

Let $K$ satisfy (2.4) and
\[
\int_0^\infty t\|K(t)\|\,dt<\infty,\tag{4.4}
\]
and let $\Sigma$ satisfy (2.6) and
\[
\int_0^\infty\|\Sigma(t)\|^2\,dt<\infty.\tag{4.5}
\]
Suppose that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $X$ of (1.4a)-(1.4b) satisfies $\lim_{t\to\infty}X(t)=X_\infty$ almost surely, where $X_\infty$ is an almost surely finite and $\mathcal{F}^B(\infty)$-measurable random variable given by
\[
X_\infty=R_\infty\Bigl(X_0+\int_0^\infty\Sigma(t)\,dB(t)\Bigr)\quad\text{a.s.}\tag{4.6}
\]
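A consequence of the representation (4.6) is that $\mathbb{E}X_\infty=R_\infty X_0$, since the Itô integral has zero mean. This can be illustrated by Monte Carlo simulation in a scalar example with hypothetical data (not from the paper): $A=-1$, $K(u)=2e^{-2u}$, $\Sigma(s)=e^{-s}$, $X_0=1$, for which a direct calculation gives $R_\infty=2/3$, so the sample mean of $X(T)$ for large $T$ should be close to $2/3$.

```python
import numpy as np

def mc_limit_mean(A, K, Sigma, X0, T, N, paths, rng):
    """Vectorised Euler-Maruyama for dX = (A X + int_0^t K(t-s)X(s)ds) dt
    + Sigma(t) dB(t) across many sample paths; returns the Monte Carlo
    mean of X(T) as an estimate of E[X_infinity]."""
    dt = T / N
    t = np.linspace(0.0, T, N + 1)
    X = np.full((paths, N + 1), X0, dtype=float)
    for i in range(N):
        # Riemann-sum convolution, evaluated simultaneously for all paths
        conv = X[:, :i] @ K(t[i] - t[:i]) * dt if i > 0 else 0.0
        drift = A * X[:, i] + conv
        dB = np.sqrt(dt) * rng.standard_normal(paths)
        X[:, i + 1] = X[:, i] + drift * dt + Sigma(t[i]) * dB
    return X[:, -1].mean()
```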

Lemma 4.7 concerns the structure of $X_\infty$ in the almost sure case; it was proved in earlier work.

Lemma 4.7.

Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies
\[
\lim_{t\to\infty}X(t;X_0,\Sigma)=X_\infty(X_0,\Sigma)\quad\text{a.s.},\tag{4.7}
\]
\[
X(\cdot;X_0,\Sigma)-X_\infty(X_0,\Sigma)\in L^2([0,\infty),\mathbb{R}^n)\quad\text{a.s.}\tag{4.8}
\]
Then
\[
\Bigl(A+\int_0^\infty K(s)\,ds\Bigr)X_\infty=0\quad\text{a.s.}\tag{4.9}
\]

It is possible to apply this lemma under our a priori assumptions due to Theorem 4.8, which was proved in earlier work.

Theorem 4.8.

Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If $\Sigma$ satisfies (4.5) and there exists a constant matrix $R_\infty$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3), then for all initial conditions $X_0$ there is an almost surely finite $\mathcal{F}^B(\infty)$-measurable random variable $X_\infty(X_0,\Sigma)$ such that the unique continuous adapted process $X(\cdot;X_0,\Sigma)$ which obeys (1.4a)-(1.4b) satisfies (4.7).

Moreover, if the function $\Sigma$ also satisfies
\[
\int_0^\infty t\|\Sigma(t)\|^2\,dt<\infty,\tag{4.10}
\]
then (4.8) holds.

Lemma 4.9 below is required in the proof of Lemma 4.4; it was proved in earlier work. Before citing this result some notation is introduced. Let $M=A+\int_0^\infty K(t)\,dt$ and let $T$ be an invertible matrix such that $J=T^{-1}MT$ has Jordan canonical form. Let $e_i=1$ if all the elements of the $i$th row of $J$ are zero, and $e_i=0$ otherwise. Let $D_p=\operatorname{diag}(e_1,e_2,\dots,e_n)$ and put $P=TD_pT^{-1}$ and $Q=I-P$.

Lemma 4.9.

Let $K$ satisfy (2.4) and (4.4). If there exists a constant matrix $R_\infty$ such that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3), then
\[
\det[I+\hat{F}(z)]\ne0,\quad\operatorname{Re}z\ge0,\tag{4.11}
\]
where $F$ is defined by
\[
F(t)=e^{-t}(Q+QA)+(\varepsilon*QK)(t)+P\int_t^\infty K(u)\,du,\quad t\ge0,\tag{4.12}
\]
with $\varepsilon(t)=e^{-t}$.

Lemma 4.10 concerns the moments of a normally distributed random variable. It can be extracted from [4, Theorem 3.3] and it is used in Proposition 4.1.

Lemma 4.10.

Suppose that $\sigma\in C([0,\infty)\times[0,\infty),\mathbb{R}^{p\times r})$. Then
\[
\mathbb{E}\Bigl\|\int_a^b\sigma(s,t)\,dB(s)\Bigr\|^{2m}\le d_m(p,r)\Bigl(\int_a^b\|\sigma(s,t)\|^2\,ds\Bigr)^m,\tag{4.13}
\]
where $d_m(p,r)=p^{m+1}r^{2m+1}(2m)!\,(m!\,2^m)^{-1}c_2(p,r)^m$.

The following lemma is used in the proof of Proposition 4.2; a similar result is proved in earlier work.

Lemma 4.11.

Suppose that $K\in C([0,\infty),\mathbb{R}^{n\times n})\cap L^1([0,\infty),\mathbb{R}^{n\times n})$ and
\[
\int_0^\infty\|K(s)\|e^{\alpha s}\,ds<\infty.\tag{4.14}
\]
If $\lambda>0$ and $\eta=2\lambda\wedge\alpha$, then
\[
\int_0^te^{-2\lambda(t-s)}e^{-\alpha s}\|K(s)\|\,ds\le ce^{-\eta t},\tag{4.15}
\]
where $c$ is a positive constant.
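The decay rate $\eta=2\lambda\wedge\alpha$ claimed in the lemma can be checked numerically: if the bound holds, then $e^{\eta t}$ times the integral stays bounded in $t$. The kernel $K(s)=e^{-2s}$ and the parameters $\lambda=\alpha=1$ below are illustrative choices satisfying the hypotheses (here $\eta=1$).

```python
import numpy as np

def lemma_bound(K, lam, alpha, t, N=20_000):
    """Riemann-sum approximation of
    int_0^t e^{-2 lam (t-s)} e^{-alpha s} ||K(s)|| ds."""
    s = np.linspace(0.0, t, N)
    ds = s[1] - s[0]
    integrand = np.exp(-2 * lam * (t - s)) * np.exp(-alpha * s) * np.abs(K(s))
    return np.sum(integrand) * ds

lam, alpha = 1.0, 1.0
eta = min(2 * lam, alpha)
K = lambda s: np.exp(-2.0 * s)
# e^{eta t} * integral should remain bounded as t grows
ratios = [lemma_bound(K, lam, alpha, t) * np.exp(eta * t) for t in (1.0, 3.0, 6.0, 10.0)]
```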

The proofs of Propositions 4.1 and 4.2 and Lemmas 4.3 and 4.4 are now given.

Proof of Lemma 4.3.

From Theorem 4.6 we see that $X(t)\to X_\infty$ almost surely, where $X_\infty$ is given by (4.6), so
\[
\mathbb{E}\|X_\infty\|^2=\mathbb{E}[\operatorname{tr}(X_\infty X_\infty^T)]=\|R_\infty X_0\|^2+\int_0^\infty\|R_\infty\Sigma(s)\|^2\,ds<\infty.\tag{4.16}
\]
Since
\[
\mathbb{E}[\|X(t)-X_\infty\|^2]=\mathbb{E}\bigl[\operatorname{tr}\bigl((X(t)-X_\infty)(X(t)-X_\infty)^T\bigr)\bigr],\tag{4.17}
\]
we use (2.9) and (4.6) to expand the right-hand side of (4.17) to obtain
\[
\mathbb{E}[\|X(t)-X_\infty\|^2]=\|(R(t)-R_\infty)X_0\|^2+\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds+\int_t^\infty\|R_\infty\Sigma(s)\|^2\,ds.\tag{4.18}
\]
In order to obtain an exponential upper bound on (4.18), each term is considered individually. We begin with the first term on the right-hand side of (4.18). Using (3.1) and (3.3) we can apply Lemma 4.4 to obtain (4.2). Then using (3.1), (4.2), and (3.4) we see from Theorem 4.5 that
\[
\|(R(t)-R_\infty)X_0\|^2\le c_1\|X_0\|^2e^{-2\beta t}.\tag{4.19}
\]
We now show that the second term decays exponentially. Using (3.5) and the fact that $R$ decays exponentially quickly to $R_\infty$, we can choose $0<\lambda<\min(\beta,\gamma)$ such that $e_\lambda\|\Sigma\|\in L^2[0,\infty)$ and $e_\lambda\|R-R_\infty\|$ is bounded, where the function $e_\lambda$ is defined by $e_\lambda(t)=e^{\lambda t}$. Since the convolution of a bounded function with an integrable function is bounded, we get
\[
e^{2\lambda t}\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds\le\int_0^te^{2\lambda(t-s)}\|R(t-s)-R_\infty\|^2e^{2\lambda s}\|\Sigma(s)\|^2\,ds\le c_2,\tag{4.20}
\]
and so the second term of (4.18) decays exponentially quickly.

We can show that the third term on the right-hand side of (4.18) decays exponentially using (3.5) and the following argument:
\[
\bar{\Sigma}:=\int_0^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds\ge\int_t^\infty\|\Sigma(s)\|^2e^{2\gamma s}\,ds\ge e^{2\gamma t}\int_t^\infty\|\Sigma(s)\|^2\,ds.\tag{4.21}
\]

Combining these facts, we see that
\[
\mathbb{E}[\|X(t)-X_\infty\|^2]\le m(X_0)e^{-2\lambda t},\tag{4.22}
\]
where $m(X_0)=c_1\|X_0\|^2+c_2+\bar{\Sigma}\|R_\infty\|^2$ and $\lambda<\min(\beta,\gamma)$.

Proof of Proposition 4.1.

We consider the cases $0<p\le2$ and $p>2$ separately. We begin with the case $0<p\le2$. The argument given by (4.16) shows that $\mathbb{E}[\|X_\infty\|^2]<\infty$. Applying Lyapunov's inequality, we see that
\[
\mathbb{E}\|X_\infty\|^p\le\mathbb{E}[\|X_\infty\|^2]^{p/2}<\infty.\tag{4.23}
\]
We now show that (3.9) holds for $0<p\le2$. Lyapunov's inequality and Lemma 4.3 can be applied as follows:
\[
\mathbb{E}[\|X(t)-X_\infty\|^p]\le\mathbb{E}[\|X(t)-X_\infty\|^2]^{p/2}\le m_p(X_0)e^{-\beta_pt},\quad t\ge0,\tag{4.24}
\]
where $m_p(X_0)=m(X_0)^{p/2}$ and $\beta_p=\lambda p$.

Now consider the case $p>2$. In this case, there exists an integer $m$ such that $2(m-1)<p\le2m$. We seek upper bounds on $\mathbb{E}\|X_\infty\|^{2m}$ and $\mathbb{E}[\|X(t)-X_\infty\|^{2m}]$, which in turn give upper bounds on $\mathbb{E}\|X_\infty\|^p$ and $\mathbb{E}[\|X(t)-X_\infty\|^p]$ by Lyapunov's inequality. By applying Lemma 4.10 we see that
\[
\mathbb{E}\|X_\infty\|^{2m}\le c\|R_\infty X_0\|^{2m}+c\Bigl(\int_0^\infty\|R_\infty\Sigma(s)\|^2\,ds\Bigr)^m<\infty,\tag{4.25}
\]
where $c$ is a positive constant, so $\mathbb{E}\|X_\infty\|^p\le\mathbb{E}[\|X_\infty\|^{2m}]^{p/2m}<\infty$.

Now consider $\mathbb{E}[\|X(t)-X_\infty\|^{2m}]$. Using the variation of parameters representation of the solution and the expression obtained for $X_\infty$, taking norms, raising both sides to the $2m$th power, and then taking expectations, we arrive at
\[
\mathbb{E}[\|X(t)-X_\infty\|^{2m}]\le3^{2m-1}\Bigl(\|(R(t)-R_\infty)X_0\|^{2m}+\mathbb{E}\Bigl[\Bigl\|\int_0^t(R(t-s)-R_\infty)\Sigma(s)\,dB(s)\Bigr\|^{2m}\Bigr]+\mathbb{E}\Bigl[\Bigl\|\int_t^\infty R_\infty\Sigma(s)\,dB(s)\Bigr\|^{2m}\Bigr]\Bigr).\tag{4.26}
\]
We consider each term on the right-hand side of (4.26). By Theorem 4.5 we have
\[
\|(R(t)-R_\infty)X_0\|^{2m}\le c_1\|X_0\|^{2m}e^{-2m\beta t}.\tag{4.27}
\]
Now consider the second term on the right-hand side of (4.26). By (4.20) we see that $\int_0^t\|(R(t-s)-R_\infty)\Sigma(s)\|^2\,ds\le c_2e^{-2\lambda t}$, where $\lambda<\min(\beta,\gamma)$. Using this and Lemma 4.10, we see that
\[
\mathbb{E}\Bigl[\Bigl\|\int_0^t(R(t-s)-R_\infty)\Sigma(s)\,dB(s)\Bigr\|^{2m}\Bigr]\le d_m(n,d)\Bigl(\int_0^t\|R(t-s)-R_\infty\|^2\|\Sigma(s)\|^2\,ds\Bigr)^m\le d_m(n,d)c_2^me^{-2m\lambda t}.\tag{4.28}
\]

Using (4.21) combined with Lemma 4.10 and Fatou's lemma, we show that the third term decays exponentially quickly:
\[
\mathbb{E}\Bigl[\Bigl\|\int_t^\infty\Sigma(s)\,dB(s)\Bigr\|^{2m}\Bigr]\le d_m(n,d)\Bigl(\int_t^\infty\|\Sigma(s)\|^2\,ds\Bigr)^m\le d_m(n,d)\bar{\Sigma}^me^{-2m\gamma t}.\tag{4.29}
\]
Combining (4.27), (4.28), and (4.29), the inequality (4.26) becomes
\[
\mathbb{E}[\|X(t)-X_\infty\|^{2m}]\le3^{2m-1}\bigl(c_1\|X_0\|^{2m}e^{-2m\beta t}+d_m(n,d)c_2^me^{-2m\lambda t}+d_m(n,d)\|R_\infty\|^{2m}\bar{\Sigma}^me^{-2m\gamma t}\bigr).\tag{4.30}
\]
Using Lyapunov's inequality, the inequality (4.30) implies
\[
\mathbb{E}[\|X(t)-X_\infty\|^p]\le m_p(X_0)e^{-\beta_pt},\tag{4.31}
\]
where $m_p(X_0)=3^{p(2m-1)/2m}\bigl(c_1\|X_0\|^{2m}+d_m(n,d)c_2^m+d_m(n,d)\|R_\infty\|^{2m}\bar{\Sigma}^m\bigr)^{p/2m}$ and $\beta_p=\lambda p$.

Proof of Proposition 4.2.

In order to prove this proposition we show that
\[
\mathbb{E}\Bigl[\sup_{n-1\le t\le n}\|X(t)-X_\infty\|^2\Bigr]\le\tilde{m}(X_0)e^{-2\eta(n-1)},\quad\eta>0.\tag{4.32}
\]
For each $t>0$ there exists $n\in\mathbb{N}$ such that $n-1\le t<n$. Define $\Delta(t)=X(t)-X_\infty$. Integrating (1.4a)-(1.4b) over $[n-1,t]$, then adding and subtracting $X_\infty$ on both sides, we get
\[
X(t)-X_\infty=(X(n-1)-X_\infty)+\int_{n-1}^tA(X(s)-X_\infty)\,ds+\int_{n-1}^t\int_0^sK(s-u)(X(u)-X_\infty)\,du\,ds+\int_{n-1}^t\Sigma(s)\,dB(s)+\int_{n-1}^t\Bigl(A+\int_0^\infty K(u)\,du\Bigr)X_\infty\,ds-\int_{n-1}^t\int_s^\infty K(v)X_\infty\,dv\,ds.\tag{4.33}
\]
By applying Theorem 4.8, we see that (4.7) and (4.8) hold, so Lemma 4.7 may be applied to obtain
\[
\Delta(t)=\Delta(n-1)+\int_{n-1}^t\bigl(A\Delta(s)+(K*\Delta)(s)\bigr)\,ds+\int_{n-1}^t\Sigma(s)\,dB(s)-\int_{n-1}^tK_1(s)\,ds\,X_\infty.\tag{4.34}
\]
Taking norms on both sides of (4.34), squaring, taking suprema, and finally taking expectations yields
\[
\mathbb{E}\Bigl[\sup_{n-1\le t\le n}\|\Delta(t)\|^2\Bigr]\le4\Bigl\{\mathbb{E}\|\Delta(n-1)\|^2+\mathbb{E}\Bigl[\Bigl(\int_{n-1}^n\|A\Delta(s)+(K*\Delta)(s)\|\,ds\Bigr)^2\Bigr]+\mathbb{E}\Bigl[\sup_{n-1\le t\le n}\Bigl\|\int_{n-1}^t\Sigma(s)\,dB(s)\Bigr\|^2\Bigr]+\Bigl(\int_{n-1}^n\|K_1(s)\|\,ds\Bigr)^2\mathbb{E}\|X_\infty\|^2\Bigr\}.\tag{4.35}
\]
We now consider each term on the right-hand side of (4.35). From Lemma 4.3 we see that the first term satisfies
\[
\mathbb{E}[\|\Delta(n-1)\|^2]\le m(X_0)e^{-2\lambda(n-1)}.\tag{4.36}
\]
In order to obtain an exponential bound on the second term on the right-hand side of (4.35), we make use of the Cauchy-Schwarz inequality as follows:
\[
\Bigl(\int_{n-1}^n\|A\Delta(s)+(K*\Delta)(s)\|\,ds\Bigr)^2\le2\int_{n-1}^n\bigl(\|A\|^2\|\Delta(s)\|^2+\|(K*\Delta)(s)\|^2\bigr)\,ds\le2\int_{n-1}^n\Bigl[\|A\|^2\|\Delta(s)\|^2+\Bigl(\int_0^se^{\alpha(s-u)/2}\|K(s-u)\|^{1/2}\,e^{-\alpha(s-u)/2}\|K(s-u)\|^{1/2}\|\Delta(u)\|\,du\Bigr)^2\Bigr]\,ds\le2\int_{n-1}^n\Bigl[\|A\|^2\|\Delta(s)\|^2+\bar{K}_\alpha\int_0^se^{-\alpha(s-u)}\|K(s-u)\|\|\Delta(u)\|^2\,du\Bigr]\,ds,\tag{4.37}
\]
where $\bar{K}_\alpha=\int_0^\infty e^{\alpha t}\|K(t)\|\,dt$. Take expectations and examine the two terms within the integral. Using Lemma 4.3 we obtain
\[
\mathbb{E}\Bigl[\int_{n-1}^n\|A\|^2\|\Delta(s)\|^2\,ds\Bigr]\le\|A\|^2m(X_0)\int_{n-1}^ne^{-2\lambda s}\,ds\le c_1(X_0)e^{-2\lambda(n-1)}.\tag{4.38}
\]
In order to obtain an exponential upper bound for the second term within the integral, we apply Lemma 4.11:
\[
\mathbb{E}\Bigl[\int_{n-1}^n\bar{K}_\alpha\int_0^se^{-\alpha(s-u)}\|K(s-u)\|\|\Delta(u)\|^2\,du\,ds\Bigr]\le m(X_0)\bar{K}_\alpha\int_{n-1}^n\int_0^se^{-\alpha u}\|K(u)\|e^{-2\lambda(s-u)}\,du\,ds\le c_2(X_0)e^{-(2\lambda\wedge\alpha)(n-1)}.\tag{4.39}
\]
Next, we obtain an exponential upper bound on the third term. Using (4.21) and the Burkholder-Davis-Gundy inequality, there exists a constant $c_3>0$ such that
\[
\mathbb{E}\Bigl[\sup_{n-1\le t\le n}\Bigl\|\int_{n-1}^t\Sigma(s)\,dB(s)\Bigr\|^2\Bigr]\le c_3\bar{\Sigma}e^{-2\gamma(n-1)}.\tag{4.40}
\]
Now consider the last term on the right-hand side of (4.35).
Using (3.4) we see that
\[
\bar{K}_\alpha:=\int_0^\infty\|K(s)\|e^{\alpha s}\,ds\ge e^{\alpha t}\int_t^\infty\|K(s)\|\,ds\ge e^{\alpha t}\|K_1(t)\|.\tag{4.41}
\]
Using this and the fact that $\mathbb{E}\|X_\infty\|^2<\infty$ (see (4.16)), we obtain
\[
\Bigl(\int_{n-1}^n\|K_1(s)\|\,ds\Bigr)^2\mathbb{E}\|X_\infty\|^2\le\mathbb{E}\|X_\infty\|^2\Bigl(\int_{n-1}^n\bar{K}_\alpha e^{-\alpha s}\,ds\Bigr)^2\le c_4e^{-2\alpha(n-1)}.\tag{4.42}
\]
Combining (4.36), (4.38), (4.39), (4.40), and (4.42), we obtain
\[
\mathbb{E}\Bigl[\sup_{n-1\le t\le n}\|X(t)-X_\infty\|^2\Bigr]\le\tilde{m}(X_0)e^{-2\eta(n-1)},\tag{4.43}
\]
where $\tilde{m}(X_0)=4\bigl(m(X_0)+c_1(X_0)+c_2(X_0)+c_3\bar{\Sigma}+c_4\bigr)$ and $2\eta<\min(2\lambda,2\gamma,\alpha)$.

We can now apply the line of reasoning used in [10, Theorem 4.4.2] to obtain (3.10).
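The tail estimate (4.41), $\bar{K}_\alpha\ge e^{\alpha t}\|K_1(t)\|$, can be verified numerically for a kernel satisfying (3.4). Here $K(s)=e^{-2s}$ and $\alpha=1$ are illustrative choices (not from the paper), for which $\bar{K}_\alpha=\int_0^\infty e^{s}e^{-2s}\,ds=1$ and $K_1(t)=\int_t^\infty e^{-2s}\,ds=\tfrac{1}{2}e^{-2t}$.

```python
import numpy as np

alpha = 1.0
K_bar = 1.0                              # int_0^inf e^{alpha s} e^{-2s} ds for alpha = 1
K1 = lambda t: 0.5 * np.exp(-2.0 * t)    # exact tail of the kernel e^{-2s}

# (4.41): K_bar >= e^{alpha t} ||K_1(t)|| for every t >= 0
checks = [np.exp(alpha * t) * K1(t) <= K_bar for t in np.linspace(0.0, 10.0, 21)]
```

The inequality is tight in spirit: the left-hand side equals $\tfrac{1}{2}e^{-t}$, so the tail of an exponentially integrable kernel itself decays exponentially, which is exactly what the proof uses.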

Proof of Lemma 4.4.

We use a reformulation of (1.2a)-(1.2b) in the proof of this result. It is obtained as follows: multiply both sides of $R'(s)=AR(s)+(K*R)(s)$ by the function $\Phi(t-s)$, where $\Phi(t)=P+e^{-t}Q$, integrate over $[0,t]$, use integration by parts, and add and subtract $R_\infty$ from both sides to obtain
\[
Y(t)+(F*Y)(t)=G(t),\quad t\ge0,\tag{4.44}
\]
where $Y=R-R_\infty$, $F$ is defined by (4.12), and $G$ is defined by
\[
G(t)=e^{-t}Q-e^{-t}(QR_\infty+QAR_\infty)+\int_t^\infty\!\!\int_s^\infty PK(u)R_\infty\,du\,ds-\int_t^\infty QK(u)R_\infty\,du-(\varepsilon*QKR_\infty)(t),\quad t\ge0,\tag{4.45}
\]
with $\varepsilon(t)=e^{-t}$.

Consider the reformulation of (1.2a)-(1.2b) given by (4.44). It is well known that $Y$ can be expressed as
\[
Y(t)=G(t)-\int_0^tr(t-s)G(s)\,ds,\tag{4.46}
\]
where the function $r$ satisfies $r+F*r=F$ and $r+r*F=F$; we refer the reader to the literature on Volterra equations for details. Consider the first term on the right-hand side of (4.46). As (3.1) holds, it is clear that the function $G$ is integrable. Now consider the second term. Since (3.3) and (4.4) hold, we may apply Lemma 4.9 to obtain (4.11). We may then apply a result of Paley and Wiener to see that $r$ is integrable. The convolution of two integrable functions is itself integrable. Combining the arguments for the first and second terms, we see that (4.2) must hold.

5. On the Necessity of (3.5) for Exponential Convergence of Solutions of (1.4a)-(1.4b)

In this section, the necessity of condition (3.5) for exponential convergence in the almost sure and pth mean senses is shown. Proposition 5.1 concerns the necessity of the condition in the almost sure case while Proposition 5.2 deals with the pth mean case.

Proposition 5.1.

Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies statement (iii) of Theorem 3.2, then there exists a constant $\gamma>0$, independent of $X_0$, such that (3.5) holds.

Proposition 5.2.

Let $K$ satisfy (2.4) and (4.4) and let $\Sigma$ satisfy (2.6). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all $X_0$ there is an almost surely finite random variable $X_\infty(X_0,\Sigma)$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies statement (ii) of Theorem 3.2, then there exists a constant $\gamma>0$, independent of $X_0$, such that (3.5) holds.

In order to prove these propositions the integral version of (1.4a)-(1.4b) is considered. By reformulating this version of the equation an expression for a term related to the exponential integrability of the perturbation is found. Using various arguments, including the Martingale Convergence Theorem in the almost sure case, this term is used to show that (3.5) holds.

Some supporting results are now stated. Lemma 5.3 is the analogue of Lemma 4.7 in the mean square case; it was proved in earlier work.

Lemma 5.3.

Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma)$ with $\mathbb{E}\|X_\infty\|^2<\infty$ such that the solution $t\mapsto X(t;X_0,\Sigma)$ of (1.4a)-(1.4b) satisfies
\[
\lim_{t\to\infty}\mathbb{E}\|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^2=0,\tag{5.1}
\]
\[
\mathbb{E}\|X(\cdot;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^2\in L^1([0,\infty),\mathbb{R}).\tag{5.2}
\]
Then $X_\infty$ obeys
\[
\Bigl(A+\int_0^\infty K(s)\,ds\Bigr)X_\infty=0\quad\text{a.s.}\tag{5.3}
\]

Lemma 5.4 may be extracted from an earlier result; it is required in the proof of Proposition 5.2.

Lemma 5.4.

Let $N=(N_1,\dots,N_n)$, where $N_i\sim\mathcal{N}(0,v_i^2)$ for $i=1,\dots,n$. Then there exists a constant $d_1>0$, independent of $\{v_i\}_{i=1}^n$, such that
\[
\mathbb{E}[\|N\|^2]\le d_1\mathbb{E}[\|N\|]^2.\tag{5.4}
\]

Proof of Proposition 5.1.

In order to prove this result we follow the argument used in [4, Theorem 4.1]. Let $0<\gamma<\alpha\wedge\beta_0$. By defining the process $Z(t)=e^{\gamma t}X(t)$ and the matrix $\kappa(t)=e^{\gamma t}K(t)$ we can rewrite (1.4a)-(1.4b) as
\[
dZ(t)=\Bigl((\gamma I+A)Z(t)+\int_0^t\kappa(t-s)Z(s)\,ds\Bigr)dt+e^{\gamma t}\Sigma(t)\,dB(t),\tag{5.5}
\]
the integral form of which is
\[
Z(t)-Z(0)=(\gamma I+A)\int_0^tZ(s)\,ds+\int_0^t\int_0^s\kappa(s-u)Z(u)\,du\,ds+\int_0^te^{\gamma s}\Sigma(s)\,dB(s).\tag{5.6}
\]
Using $Z(t)=e^{\gamma t}X(t)$ and rearranging, this becomes
\[
\int_0^te^{\gamma s}\Sigma(s)\,dB(s)=e^{\gamma t}X(t)-X_0-(\gamma I+A)\int_0^te^{\gamma s}X(s)\,ds-\int_0^te^{\gamma s}\int_0^sK(s-u)X(u)\,du\,ds.
\]
Adding and subtracting $X_\infty$ on the right-hand side and applying Lemma 4.7, we obtain
\[
\int_0^te^{\gamma s}\Sigma(s)\,dB(s)=e^{\gamma t}(X(t)-X_\infty)-(X_0-X_\infty)-(\gamma I+A)\int_0^te^{\gamma s}(X(s)-X_\infty)\,ds-\int_0^te^{\gamma s}\int_0^sK(s-u)(X(u)-X_\infty)\,du\,ds+\int_0^te^{\gamma s}K_1(s)\,ds\,X_\infty.\tag{5.7}
\]
Consider each term on the right-hand side of (5.7). We see that the first term tends to zero, as (3.10) holds and $\gamma<\beta_0$. The second term is finite by hypothesis. Again using the fact that $\gamma<\beta_0$ and that (3.10) holds, we see that $e_\gamma\|X-X_\infty\|\in L^1[0,\infty)$, so the third term tends to a limit as $t\to\infty$. Now consider the fourth term. Since $0<\gamma<\alpha\wedge\beta_0$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\alpha\wedge\beta_0$. Then the functions $t\mapsto e^{\gamma_1t}\|K(t)\|$ and $t\mapsto e^{\gamma_1t}\|X(t)-X_\infty\|$ are both integrable, so
\[
\Bigl\|\int_0^sK(s-u)(X(u)-X_\infty)\,du\Bigr\|\le ce^{-\gamma_1s}.\tag{5.8}
\]
Thus it is clear that the fourth term has a finite limit as $t\to\infty$. Finally, the fifth term on the right-hand side of (5.7) has a finite limit at infinity, using (4.41).

Each term on the right-hand side of (5.7) has a finite limit as $t\to\infty$, and therefore
\[
\lim_{t\to\infty}\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\quad\text{exists and is almost surely finite.}\tag{5.9}
\]
The Martingale Convergence Theorem [12, Proposition 5.1.8] may now be applied component by component to obtain (3.5).

Proof of Proposition 5.2.

By Lemma 5.3, (5.7) still holds. Define $0<\gamma<\alpha\wedge\beta_1$, and take norms and expectations across (5.7) to obtain
\[
\mathbb{E}\Bigl[\Bigl\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Bigr\|\Bigr]\le\mathbb{E}[e^{\gamma t}\|X(t)-X_\infty\|]+\mathbb{E}[\|X_0-X_\infty\|]+\|\gamma I+A\|\int_0^t\mathbb{E}[e^{\gamma s}\|X(s)-X_\infty\|]\,ds+\int_0^te^{\gamma s}\int_0^s\|K(s-u)\|\,\mathbb{E}\|X(u)-X_\infty\|\,du\,ds+\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|.\tag{5.10}
\]
There exists $m_1$ such that
\[
\mathbb{E}[e^{\gamma t}\|X(t)-X_\infty\|]\le m_1e^{-(\beta_1-\gamma)t};\tag{5.11}
\]
thus the first, second, and third terms on the right-hand side of (5.10) are uniformly bounded on $[0,\infty)$. Now consider the fourth term. Since $0<\gamma<\alpha\wedge\beta_1$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\alpha\wedge\beta_1$, so that the functions $t\mapsto e^{\gamma_1t}\|K(t)\|$ and $t\mapsto e^{\gamma_1t}\mathbb{E}\|X(t)-X_\infty\|$ are both integrable. The convolution of two integrable functions is itself integrable, so
\[
\int_0^s\|K(s-u)\|\,\mathbb{E}\|X(u)-X_\infty\|\,du\le ce^{-\gamma_1s},\tag{5.12}
\]
and it is clear that the fourth term is uniformly bounded on $[0,\infty)$. Finally, we consider the last term on the right-hand side of (5.10). Using (4.41) we obtain
\[
\int_0^te^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|\le\bar{K}_\alpha\mathbb{E}\|X_\infty\|\int_0^te^{-(\alpha-\gamma)s}\,ds<\infty,\tag{5.13}
\]
since $\gamma<\alpha$. Thus there is a constant $c>0$ such that
\[
\mathbb{E}\Bigl[\Bigl\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Bigr\|\Bigr]\le c.\tag{5.14}
\]
The proof now follows the line of reasoning found in [4, Theorem 4.3]: observe that
\[
\Bigl\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Bigr\|^2=\sum_{i=1}^nN_i(t)^2,\quad\text{where }N_i(t)=\sum_{j=1}^d\int_0^te^{\gamma s}\Sigma_{ij}(s)\,dB_j(s).\tag{5.15}
\]
It is clear that $N_i(t)$ is normally distributed with zero mean and variance
\[
v_i(t)^2=\sum_{j=1}^d\int_0^te^{2\gamma s}\Sigma_{ij}(s)^2\,ds.\tag{5.16}
\]
Lemma 5.4 and (5.14) may now be applied to obtain
\[
\int_0^te^{2\gamma s}\|\Sigma(s)\|^2\,ds=\sum_{i=1}^n\sum_{j=1}^d\int_0^te^{2\gamma s}|\Sigma_{ij}(s)|^2\,ds=\sum_{i=1}^nv_i(t)^2=\mathbb{E}\Bigl[\Bigl\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Bigr\|^2\Bigr]\le d_1\mathbb{E}\Bigl[\Bigl\|\int_0^te^{\gamma s}\Sigma(s)\,dB(s)\Bigr\|\Bigr]^2\le d_1c^2.\tag{5.17}
\]
Letting $t\to\infty$ on both sides of this inequality yields the desired result.

6. Sufficient Conditions for Exponential Convergence of Solutions of (1.1a)-(1.1b)

In this section, sufficient conditions for exponential convergence of solutions of (1.1a)-(1.1b) to a nontrivial limit are found. Proposition 6.1 concerns the pth mean sense while Proposition 6.2 deals with the almost sure case.

Proposition 6.1.

Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), let $f$ satisfy (2.5), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$, $\rho>0$, and $\gamma>0$ such that (3.4), (3.6), and (3.5) hold, then there exist constants $\beta_p^*>0$, independent of $X_0$, and $m_p^*=m_p^*(X_0)>0$, such that statement (ii) of Theorem 3.1 holds.

Proposition 6.2.

Let $K$ satisfy (2.4) and (3.1), let $\Sigma$ satisfy (2.6), let $f$ satisfy (2.5), and let $R_\infty$ be a constant matrix such that the solution $R$ of (1.2a)-(1.2b) satisfies (3.3). If there exist constants $\alpha>0$, $\rho>0$, and $\gamma>0$ such that (3.4), (3.6), and (3.5) hold, then there exists a constant $\beta_0^*>0$, independent of $X_0$, such that statement (iii) of Theorem 3.1 holds.

As in the case where $f\equiv 0$, we require an explicit formula for $X_\infty$. The proof of this result follows the line of reasoning used in the proof of Theorem 4.6 and is therefore omitted.

Theorem 6.3.

Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6) and (4.5), and let $f$ satisfy (2.5). Suppose that the resolvent $R$ of (1.2a)-(1.2b) satisfies (3.3). Then the solution $X$ of (1.1a)-(1.1b) satisfies $X(t)\to X_\infty(X_0,\Sigma,f)$ as $t\to\infty$ almost surely, where
\[
X_\infty(X_0,\Sigma,f) = X_\infty(X_0,\Sigma) + R_\infty\int_0^\infty f(t)\,dt \quad\text{a.s.}, \tag{6.1}
\]
and $X_\infty(X_0,\Sigma,f)$ is almost surely finite.
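Theorem 6.3 can be illustrated numerically with an Euler–Maruyama sketch of a scalar instance of (1.1a)-(1.1b). All concrete choices below are illustrative assumptions, not taken from the paper: $A=-1$ and $K(t)=e^{-t}$, so that $A+\int_0^\infty K(s)\,ds=0$ and a nontrivial limit is possible; $f(t)=e^{-t}$ and $\Sigma(t)=0.5e^{-t}$ are exponentially decaying perturbations; and for this kernel Laplace transforms give $R_\infty=1/2$, so the limit in (6.1) has mean $R_\infty x_0 + R_\infty\int_0^\infty f = 1$ when $x_0=1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch of a scalar instance of (1.1a)-(1.1b) with the
# illustrative choices A = -1, K(t) = e^{-t}, f(t) = e^{-t},
# Sigma(t) = 0.5 e^{-t}.  Since the kernel is exponential, the convolution can
# be carried as an extra state y(t) = int_0^t e^{-(t-s)} X(s) ds, which obeys
# y' = -y + X, turning the Volterra equation into a 2-d SDE.
def simulate_paths(n_paths=4000, T=10.0, dt=0.01, x0=1.0):
    n = int(round(T / dt))
    X = np.full(n_paths, x0)
    y = np.zeros(n_paths)
    for k in range(n):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt), n_paths)
        X, y = (X + (-X + y + np.exp(-t)) * dt + 0.5 * np.exp(-t) * dB,
                y + (-y + X) * dt)
    return X

XT = simulate_paths()
# For these choices R_infty = 1/2, so the limit's mean is
# R_infty * x0 + R_infty * int_0^infty f = 0.5 * 1 + 0.5 * 1 = 1, while its
# spread stays positive: the limit is a random variable, not an equilibrium.
print(XT.mean())    # approx 1
print(XT.std())     # roughly 0.18: each path settles at its own limit
```

Each sample path flattens out at a different level, which is exactly the "nonequilibrium limit" behaviour: the limit $X_\infty(X_0,\Sigma,f)$ is random but almost surely finite.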

Proof of Proposition <xref ref-type="statement" rid="prop6.1">6.1</xref>.

We begin by showing that $\mathbb{E}\|X_\infty(X_0,\Sigma,f)\|^p$ is finite. Clearly,
\[
\mathbb{E}\|X_\infty(X_0,\Sigma,f)\|^p \le 2^{p-1}\Big(\mathbb{E}\|X_\infty(X_0,\Sigma)\|^p + \Big\|\int_0^\infty R_\infty f(s)\,ds\Big\|^p\Big) < \infty.
\]
Now, consider the difference between the solution $X(\cdot;X_0,\Sigma,f)$ of (1.1a)-(1.1b) and its limit $X_\infty(X_0,\Sigma,f)$ given by (6.1):
\[
X(t;X_0,\Sigma,f) - X_\infty(X_0,\Sigma,f)
= \big(X(t;X_0,\Sigma) - X_\infty(X_0,\Sigma)\big)
+ \int_0^t \big(R(t-s)-R_\infty\big)f(s)\,ds - \int_t^\infty R_\infty f(s)\,ds.
\]
Using integration by parts, this expression becomes
\[
X(t;X_0,\Sigma,f) - X_\infty(X_0,\Sigma,f)
= \big(X(t;X_0,\Sigma) - X_\infty(X_0,\Sigma)\big) - f_1(t) + \big(R(t)-R_\infty\big)f_1(0) - \int_0^t R'(t-s)f_1(s)\,ds. \tag{6.4}
\]
Taking norms on both sides of (6.4), raising both sides to the power $p$, and taking expectations across, we obtain
\[
\mathbb{E}\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^p
\le 4^{p-1}\Big(\mathbb{E}\|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\|^p + \|f_1(t)\|^p + \|R(t)-R_\infty\|^p\,\|f_1(0)\|^p + \Big(\int_0^t \|R'(t-s)\|\,\|f_1(s)\|\,ds\Big)^p\Big). \tag{6.5}
\]
Now consider the right-hand side of (6.5). The first term decays exponentially quickly due to Theorem 3.2. The second term decays exponentially quickly due to assumption (3.6). By applying Lemma 4.4, we see that (4.2) holds, so Theorem 4.5 shows that the third term must decay exponentially. In the sequel, an argument is provided to show that $R'$ decays exponentially; thus the final term must decay exponentially. Combining these arguments, we see that (3.7) holds, where $\beta_p^* < \min(\beta_p,\beta,\rho)$.

It is now shown that $R'$ decays exponentially. It is clear from the resolvent equation (1.2a)-(1.2b) that
\[
R'(t) = A\big(R(t)-R_\infty\big) + \int_0^t K(t-s)\big(R(s)-R_\infty\big)\,ds - K_1(t)R_\infty + \Big(A+\int_0^\infty K(s)\,ds\Big)R_\infty. \tag{6.6}
\]
Consider each term on the right-hand side of (6.6). By Theorem 4.5, $R$ decays exponentially quickly to $R_\infty$, so the first term decays exponentially. To show that the second term decays exponentially, we proceed as follows: since $R-R_\infty$ decays exponentially and (3.4) holds, it is possible to choose $\mu>0$ such that the functions $t\mapsto e^{\mu t}K(t)$ and $t\mapsto e^{\mu t}(R(t)-R_\infty)$ are both in $L^1([0,\infty);\mathbb{R}^{n\times n})$. Since the convolution of two integrable functions is itself integrable,
\[
e^{\mu t}\Big\|\int_0^t K(t-s)\big(R(s)-R_\infty\big)\,ds\Big\|
= \Big\|\int_0^t e^{\mu(t-s)}K(t-s)\,e^{\mu s}\big(R(s)-R_\infty\big)\,ds\Big\| \le c.
\]
To see that the third term decays exponentially, we use (4.41). Finally, we consider the fourth term. By Lemma 4.4 and (3.3), we have that (4.2) holds. In [1, Theorem 6.1] it was shown that $(A+\int_0^\infty K(s)\,ds)R_\infty = 0$ under this hypothesis and (3.1). Combining the above, we see that $R'$ decays exponentially quickly to $0$.
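The exponential decay of $R-R_\infty$ argued above can be checked numerically in a scalar example. The choices here are illustrative assumptions, not from the paper: $A=-1$ and $K(t)=e^{-t}$, so $A+\int_0^\infty K(s)\,ds=0$ and the limit need not be zero; Laplace transforms give the closed form $R(t)=\tfrac12+\tfrac12 e^{-2t}$, so $R-R_\infty$ (and hence $R'$) decays at rate $2$.

```python
import numpy as np

# Scalar resolvent equation R'(t) = A R(t) + (K*R)(t), R(0) = 1, with the
# illustrative choices A = -1, K(t) = e^{-t}.  The exponential kernel lets the
# convolution be carried as y(t) = int_0^t e^{-(t-s)} R(s) ds, with
# y' = -y + R, so a plain explicit Euler scheme suffices.  The exact solution
# is R(t) = 1/2 + (1/2) e^{-2t}, so R_infty = 1/2.
def resolvent(T=5.0, dt=1e-4):
    n = int(round(T / dt))
    r, y = 1.0, 0.0
    for _ in range(n):
        r, y = r + dt * (-r + y), y + dt * (-y + r)   # explicit Euler step
    return r

print(resolvent(T=5.0))    # approx 0.5 + 0.5*exp(-10), i.e. about 0.50002
```

Comparing `resolvent(T)` against $\tfrac12+\tfrac12 e^{-2T}$ at a few horizons confirms the exponential rate numerically.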

Proof of Proposition <xref ref-type="statement" rid="prop6.2">6.2</xref>.

Take norms across (6.4) to obtain
\[
\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|
\le \|X(t;X_0,\Sigma)-X_\infty(X_0,\Sigma)\| + \|f_1(t)\| + \|R(t)-R_\infty\|\,\|f_1(0)\| + \int_0^t \|R'(t-s)\|\,\|f_1(s)\|\,ds. \tag{6.8}
\]
Using Theorem 3.2, we see that the first term on the right-hand side of (6.8) decays exponentially. The second term on the right-hand side decays exponentially as (3.6) holds. We can apply Theorem 4.5 to show that the third term must decay exponentially. An argument was provided in the proof of Proposition 6.1 to show that $R'$ decays exponentially; combining this with (3.6) enables us to show that the fourth term decays exponentially. Using the above arguments, we obtain (3.8), where $\beta^* \le \min(\beta_0,\beta,\rho)$.

7. On the Necessity of (<xref ref-type="disp-formula" rid="eq3.6">3.6</xref>) and (<xref ref-type="disp-formula" rid="eq3.5">3.5</xref>) for Exponential Convergence of Solutions of (<xref ref-type="disp-formula" rid="eq1.1a">1.1a</xref>)-(<xref ref-type="disp-formula" rid="eq1.1b">1.1b</xref>)

In this section, the necessity of (3.6) and (3.5) for exponential convergence of solutions of (1.1a)-(1.1b) in the almost sure and $p$th mean senses is shown. Proposition 7.1 concerns the necessity of the conditions in the $p$th mean case, while Proposition 7.2 deals with the almost sure case.

Proposition 7.1.

Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all $X_0$ there is a constant vector $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies statement (ii) of Theorem 3.1, then there exist constants $\rho>0$ and $\gamma>0$, independent of $X_0$, such that (3.6) and (3.5) hold.

Proposition 7.2.

Let $K$ satisfy (2.4) and (4.4), let $\Sigma$ satisfy (2.6), and let $f$ satisfy (2.5). If there exists a constant $\alpha>0$ such that (3.4) holds, and if for all $X_0$ there is a constant vector $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies statement (iii) of Theorem 3.1, then there exist constants $\rho>0$ and $\gamma>0$, independent of $X_0$, such that (3.6) and (3.5) hold.

The following lemma is used in the proof of Proposition 7.2. This lemma allows us to separate the behavior of the deterministic perturbation from the stochastic perturbation in the almost sure case. It is interesting to note that we can prove this lemma without any reference to the integro-differential equation.

Lemma 7.3.

Suppose $c>0$ is an almost surely finite random variable and
\[
\|f_1(t)\| + \|\mu_1(t,\omega)\| \le c(\omega)\,e^{-\lambda t},
\]
where $\lambda>0$, $\omega\in\Omega^*$, $\mathbb{P}[\Omega^*]=1$, and the functions $f_1$ and $\mu_1$ are defined by (2.8) and
\[
\mu_1(t) = \int_t^\infty \Sigma(s)\,dB(s), \quad t\ge 0,
\]
respectively. Then (3.5) and (3.6) hold.

In order to prove Lemma 7.3 we require Lemmas 7.4 and 7.5 below. Lemma 7.5 was proved in . The proof of Lemma 7.4 makes use of Kolmogorov's Zero-One Law. It follows the proof of Theorem 2 in [14, Chapter IV, Section 1] and so is omitted.

Lemma 7.4.

Let $\{\xi_i\}_{i=1}^\infty$ be a sequence of independent Gaussian random variables with $\mathbb{E}[\xi_i]=0$ and $\mathbb{E}[\xi_i^2]=v_i^2\ge 1$. Then
\[
\limsup_{m\to\infty}\sum_{i=1}^m \xi_i = \infty, \qquad \liminf_{m\to\infty}\sum_{i=1}^m \xi_i = -\infty, \quad\text{a.s.}
\]

Lemma 7.5.

If there is a $\gamma>0$ such that $\sigma\in C([0,\infty);\mathbb{R})$ and
\[
\int_0^\infty \sigma(s)^2 e^{2\gamma s}\,ds < \infty,
\]
then
\[
\limsup_{t\to\infty}\frac{1}{t}\log\Big|\int_t^\infty \sigma(s)\,dB(s)\Big| \le -\gamma \quad\text{a.s.},
\]
where $\{B(t)\}_{t\ge 0}$ is a one-dimensional standard Brownian motion.
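The mechanism behind Lemma 7.5 is that the tail integral's variance shrinks exponentially: by the Itô isometry, $\mathrm{Var}[\int_t^\infty \sigma(s)\,dB(s)] = \int_t^\infty \sigma(s)^2\,ds$. The following Monte Carlo sketch checks this for the illustrative choice $\sigma(s)=e^{-2s}$ (step size, horizon, and path count are also assumptions), for which the variance is $e^{-4t}/4$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of the variance decay behind Lemma 7.5, with the
# illustrative choice sigma(s) = e^{-2s}.  By the Ito isometry,
#   Var[int_t^infty sigma(s) dB(s)] = int_t^infty e^{-4s} ds = e^{-4t}/4,
# so the tail stochastic integral collapses exponentially fast in t.
def tail_variances(ts, T=6.0, dt=0.01, n_paths=20000):
    n = int(round(T / dt))
    ks = {round(t / dt) for t in ts}          # checkpoint indices (ts ascending)
    partial = np.zeros(n_paths)               # running int_0^{k dt} sigma dB
    snapshots = []
    for k in range(n):
        if k in ks:
            snapshots.append(partial.copy())
        partial += np.exp(-2.0 * k * dt) * rng.normal(0.0, np.sqrt(dt), n_paths)
    # tail integral over [t, T] = full integral minus the integral over [0, t)
    return [(partial - snap).var() for snap in snapshots]

checkpoints = [0.0, 0.5, 1.0]
for t, v in zip(checkpoints, tail_variances(checkpoints)):
    print(t, v, np.exp(-4.0 * t) / 4.0)       # sample variance vs e^{-4t}/4
```

The horizon $T=6$ stands in for $\infty$; the truncation error $\int_6^\infty e^{-4s}\,ds$ is negligible at the tolerances used here.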

Lemmas 7.6 and 7.7 are used in the proofs of Propositions 7.1 and 7.2, respectively, and are the analogues of Lemmas 5.3 and 4.7. Their proofs are identical in all important respects and so are omitted.

Lemma 7.6.

Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ with $\mathbb{E}\|X_\infty\|^2<\infty$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies
\[
\lim_{t\to\infty}\mathbb{E}\|X(t;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^2 = 0,
\qquad
\mathbb{E}\|X(\cdot;X_0,\Sigma,f)-X_\infty(X_0,\Sigma,f)\|^2 \in L^1([0,\infty);\mathbb{R}).
\]
Then $X_\infty$ obeys
\[
\Big(A+\int_0^\infty K(s)\,ds\Big)X_\infty = 0 \quad\text{a.s.} \tag{7.7}
\]

Lemma 7.7.

Let $K$ satisfy (2.4) and (4.4). Suppose that for all initial conditions $X_0$ there is an $\mathcal{F}^B(\infty)$-measurable and almost surely finite random variable $X_\infty(X_0,\Sigma,f)$ such that the solution $t\mapsto X(t;X_0,\Sigma,f)$ of (1.1a)-(1.1b) satisfies
\[
\lim_{t\to\infty} X(t;X_0,\Sigma,f) = X_\infty(X_0,\Sigma,f) \quad\text{a.s.},
\qquad
X(\cdot;X_0,\Sigma,f) - X_\infty(X_0,\Sigma,f) \in L^2([0,\infty);\mathbb{R}^n) \quad\text{a.s.}
\]
Then $X_\infty$ obeys (7.7).

Proof of Proposition <xref ref-type="statement" rid="prop7.1">7.1</xref>.

Since (3.7) holds for every initial condition, we can choose $X_0=0$; this simplifies the calculations. Moreover, using (3.7) in Lemma 7.6, it is clear that assumption (7.7) holds. Consider the integral form of (1.1a)-(1.1b). Adding and subtracting $X_\infty$ on both sides and applying Lemma 7.6, we obtain
\[
\Delta(t) = -X_\infty + \int_0^t \delta(s)\,ds + \int_0^t f(s)\,ds + \mu(t) - \int_0^t K_1(s)\,ds\,X_\infty, \tag{7.9}
\]
where $\Delta(t)=X(t)-X_\infty$, the function $\delta$ is defined by
\[
\delta(t) = A\Delta(t) + (K*\Delta)(t), \tag{7.10}
\]
and $\mu(t)=\int_0^t \Sigma(s)\,dB(s)$. Taking expectations across (7.9) and allowing $t\to\infty$, we obtain
\[
\mathbb{E}[X_\infty] = \int_0^\infty \mathbb{E}[\delta(s)]\,ds + \int_0^\infty f(s)\,ds - \int_0^\infty K_1(s)\,ds\,\mathbb{E}[X_\infty], \tag{7.11}
\]
where $\mathbb{E}[\delta(t)] = A\,\mathbb{E}[\Delta(t)] + (K*\mathbb{E}[\Delta])(t)$. Using this expression for $\mathbb{E}[X_\infty]$, we obtain
\[
f_1(t) = -\mathbb{E}[\Delta(t)] - \int_t^\infty \mathbb{E}[\delta(s)]\,ds + \int_t^\infty K_1(s)\,ds\,\mathbb{E}[X_\infty]. \tag{7.12}
\]
The first term on the right-hand side of (7.12) decays exponentially due to (3.7). Assumptions (3.4) and (3.7) imply that $\mathbb{E}[\delta(\cdot)]$ decays exponentially, so the second term decays exponentially. The third term on the right-hand side of (7.12) decays exponentially due to the argument given by (4.41). Hence, $f_1$ decays exponentially.

Proving that (3.5) holds breaks into two steps. We begin by showing that
\[
\int_0^\infty e^{\gamma t}\|f(t)\|\,dt < \infty, \tag{7.13}
\]
where $\gamma>0$. By choosing $\gamma<\min(\alpha,\beta_1)$, we can obtain the following reformulation of (1.1a)-(1.1b) using methods applied in [15, Proposition 5.1]:
\[
e^{\gamma t}\Delta(t) = \Delta(0) + (\gamma I + A)\int_0^t e^{\gamma s}\Delta(s)\,ds + \int_0^t e^{\gamma s}\int_0^s K(s-u)\Delta(u)\,du\,ds - \int_0^t e^{\gamma s}K_1(s)\,ds\,X_\infty + \int_0^t e^{\gamma s}f(s)\,ds + \int_0^t e^{\gamma s}\Sigma(s)\,dB(s). \tag{7.14}
\]
Rearranging (7.14), taking expectations and then norms on both sides, we can obtain
\[
\int_0^t e^{\gamma s}\|f(s)\|\,ds \le \mathbb{E}\|e^{\gamma t}\Delta(t)\| + \mathbb{E}\|\Delta(0)\| + \|\gamma I + A\|\int_0^t e^{\gamma s}\,\mathbb{E}\|\Delta(s)\|\,ds + \int_0^t e^{\gamma s}\int_0^s \|K(s-u)\|\,\mathbb{E}\|\Delta(u)\|\,du\,ds + \int_0^t e^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\|. \tag{7.15}
\]
Since (3.7) holds, both the first and third terms on the right-hand side of (7.15) are bounded. The second term is bounded due to our assumptions. Since $0<\gamma<\min(\alpha,\beta_1)$, we can choose $\gamma_1>0$ such that $\gamma<\gamma_1<\min(\alpha,\beta_1)$; it can then easily be shown that
\[
\int_0^s \|K(s-u)\|\,\mathbb{E}[\|\Delta(u)\|]\,du \le c\,e^{-\gamma_1 s}.
\]
Finally, we see that the fifth term is bounded using (4.41). So (7.13) holds.

We now return to (7.14). Again rearranging the equation and taking norms and then expectations across both sides, we obtain
\[
\mathbb{E}\Big\|\int_0^t e^{\gamma s}\Sigma(s)\,dB(s)\Big\| \le \mathbb{E}\|e^{\gamma t}\Delta(t)\| + \mathbb{E}\|\Delta(0)\| + \|\gamma I + A\|\int_0^t e^{\gamma s}\,\mathbb{E}\|\Delta(s)\|\,ds + \int_0^t e^{\gamma s}\int_0^s \|K(s-u)\|\,\mathbb{E}\|\Delta(u)\|\,du\,ds + \int_0^t e^{\gamma s}\|K_1(s)\|\,ds\,\mathbb{E}\|X_\infty\| + \int_0^t e^{\gamma s}\|f(s)\|\,ds.
\]
The argument above shows that the first five terms on the right-hand side of this expression are bounded; moreover, (7.13) shows that the sixth term is bounded. Thus
\[
\mathbb{E}\Big\|\int_0^t e^{\gamma s}\Sigma(s)\,dB(s)\Big\| \le C.
\]
The proof is now identical to that of Proposition 5.2.

Proof of Proposition <xref ref-type="statement" rid="prop7.2">7.2</xref>.

Since Lemma 7.7 holds, we can obtain (7.9). Thus, allowing $t\to\infty$, we obtain
\[
X_\infty = \int_0^\infty \delta(s)\,ds + \int_0^\infty f(s)\,ds + \mu(\infty) - \int_0^\infty K_1(s)\,ds\,X_\infty,
\]
where $\delta$ is defined by (7.10). Using this expression for $X_\infty$, (7.9) becomes
\[
\Delta(t) = -\int_t^\infty \delta(s)\,ds - f_1(t) - \mu_1(t) + \int_t^\infty K_1(s)\,ds\,X_\infty,
\]
where $\mu_1(t)=\int_t^\infty \Sigma(s)\,dB(s)$. Rearranging the equation and taking norms yields
\[
\|f_1(t)+\mu_1(t)\| \le \|\Delta(t)\| + \int_t^\infty \|\delta(s)\|\,ds + \int_t^\infty \|K_1(s)\|\,ds\,\|X_\infty\|. \tag{7.21}
\]
The first term on the right-hand side of (7.21) decays exponentially due to (3.8). Using the argument given in (4.41), we see that the third term on the right-hand side of (7.21) decays exponentially. Finally, we consider the second term. Clearly, $\int_t^\infty \|A\Delta(s)\|\,ds$ decays exponentially due to (3.8). In order to show that $\int_t^\infty \|(K*\Delta)(s)\|\,ds$ decays exponentially, we use an argument similar to that applied in the proof of Proposition 7.1. So there is an almost surely finite random variable $c>0$ such that
\[
\|f_1(t)+\mu_1(t)\| \le c\,e^{-\lambda t}, \quad t\ge 0, \ \text{a.s.},
\]
where $\lambda<\min(\beta_0^*,\alpha)$. We can now apply Lemma 7.3 to obtain (3.6) and (3.5).

Proof of Lemma <xref ref-type="statement" rid="lem7.1">7.3</xref>.

We first suppose that there exists a constant $\gamma$ such that (3.5) holds. Using the equivalence of norms, we see that for all $1\le i\le n$ and $1\le j\le d$, assumption (3.5) implies that
\[
\int_0^\infty \Sigma_{ij}(s)^2 e^{2\gamma s}\,ds < \infty.
\]
Applying Lemma 7.5, we obtain
\[
\limsup_{t\to\infty}\frac{1}{t}\log\Big|\int_t^\infty \Sigma_{ij}(s)\,dB_j(s)\Big| \le -\gamma, \quad \omega\in\Omega_{ij},\ \mathbb{P}[\Omega_{ij}]=1.
\]
Choose any $\epsilon\in(0,\gamma)$. For each $\omega\in\Omega_{ij}$ we can choose a constant $c_{ij}(\omega,\epsilon)\ge 1$ such that
\[
\Big|\int_t^\infty \Sigma_{ij}(s)\,dB_j(s)\Big| \le c_{ij}(\omega,\epsilon)\,e^{-(\gamma-\epsilon)t}.
\]
Now, summing over $j$, we see that
\[
|\mu_{1i}(t)| \le c_i(\omega,\epsilon)\,e^{-(\gamma-\epsilon)t},
\]
where $\omega\in\Omega_i=\bigcap_{j=1}^d \Omega_{ij}$, $c_i=\sum_{j=1}^d c_{ij}$, and $\mu_{1i}(t)=\sum_{j=1}^d \int_t^\infty \Sigma_{ij}(s)\,dB_j(s)$.

Now, since
\[
|f_{1i}(t)+\mu_{1i}(t)|^2 \le \sum_{i=1}^n |f_{1i}(t)+\mu_{1i}(t)|^2 = \|f_1(t)+\mu_1(t)\|^2,
\]
we see that
\[
|f_{1i}(t)+\mu_{1i}(t)| \le c(\omega)\,e^{-\lambda t}, \quad \omega\in\Omega^*.
\]
So, for $\omega\in\Omega_i\cap\Omega^*$, we see that $|f_{1i}(t)| \le |f_{1i}(t)+\mu_{1i}(t)| + |\mu_{1i}(t)| \le c(\omega)e^{-\lambda t} + c_i(\omega,\epsilon)e^{-(\gamma-\epsilon)t}$. This gives
\[
|f_{1i}(t)| \le \bar c_i(\omega)\,e^{-\rho t}, \quad \omega\in\Omega^*\cap\Omega_i,
\]
where $\bar c_i>0$ is finite and $\rho = \min(\lambda,\gamma-\epsilon)$. Now, summing over $i$, we obtain (3.6) by picking any $\omega\in\bigcap_{i}(\Omega^*\cap\Omega_i)$. This concludes the case when (3.5) holds.

Now, consider the case where assumption (3.5) fails to hold. We choose a constant $\gamma$ such that $0<\gamma<\lambda$ and define the function $d$ by
\[
d(t) = \int_0^t \frac{1}{\gamma}\big(e^{\gamma s}-1\big)f(s)\,ds,
\]
and the vector martingale $M$ by
\[
M(t) = \int_0^t \frac{1}{\gamma}\big(e^{\gamma s}-1\big)\Sigma(s)\,dB(s).
\]
We let $M_i$ denote the $i$th component of $M$ and $\langle M_i\rangle$ denote the quadratic variation of $M_i$, given by
\[
\langle M_i\rangle(t) = \sum_{j=1}^d \int_0^t \frac{1}{\gamma^2}\big(e^{\gamma s}-1\big)^2\,\Sigma_{ij}(s)^2\,ds.
\]
We show at the end of this proof that
\[
\|d(t)+M(t)\| < c^*(\omega), \quad \omega\in\Omega^*,\ \mathbb{P}[\Omega^*]=1, \tag{7.33}
\]
and therefore assume it for the time being.

Since (3.5) fails to hold, there exists an entry $i$, $1\le i\le n$, of the martingale $M$ such that
\[
\lim_{t\to\infty}\langle M_i\rangle(t) = \infty. \tag{7.34}
\]
It follows that $\liminf_{t\to\infty} M_i(t) = -\infty$ and $\limsup_{t\to\infty} M_i(t) = \infty$ a.s. Consider the corresponding $i$th entry of $d$, denoted $d_i$; it is either bounded or unbounded. If $d_i$ is bounded, then $M_i$ is bounded and so, by the Martingale Convergence Theorem, $\langle M_i\rangle(t)$ is bounded: this contradicts (7.34). So we suppose the latter, that $d_i$ is unbounded, and proceed to show that this is also contradictory. Since $|d_i(t)+M_i(t)| < c^*(\omega)$ for $\omega\in\Omega^*$, it is clear that $-c^*-M_i(t) < d_i(t)$. Taking the limit superior on both sides of the inequality yields
\[
\infty = -c^* - \liminf_{t\to\infty} M_i(t) \le \limsup_{t\to\infty} d_i(t).
\]
As $d$ is deterministic, there exists an increasing sequence of deterministic times $\{t_m\}_{m=0}^\infty$ with $t_0=0$ such that $d_i(t_m)\to\infty$ as $m\to\infty$. Consequently, $M_i(t_m)\to-\infty$ as $m\to\infty$. We choose a subsequence of these times $\{\tau_m\}_{m=0}^\infty$ with $\tau_0=t_0$ such that
\[
v_l^2 := \sum_{j=1}^d \int_{\tau_{l-1}}^{\tau_l} \frac{1}{\gamma^2}\big(e^{\gamma s}-1\big)^2\,\Sigma_{ij}^2(s)\,ds \ge 1.
\]
Define $S_i(m)=M_i(\tau_m)$. Obviously,
\[
S_i(m) = \sum_{l=1}^m \xi_l^{(i)}, \qquad\text{where } \xi_l^{(i)} = \sum_{j=1}^d \int_{\tau_{l-1}}^{\tau_l} \frac{1}{\gamma}\big(e^{\gamma s}-1\big)\Sigma_{ij}(s)\,dB_j(s).
\]
It is clear that $\{\xi_l^{(i)}\}_{l=1}^\infty$ is an independent, normally distributed sequence with the variance of each $\xi_l^{(i)}$ given by $v_l^2\ge 1$, so we may apply Lemma 7.4. This yields $\limsup_{m\to\infty}S_i(m)=\infty$ a.s., contradicting $M_i(\tau_m)\to-\infty$.

We now show that assumption (7.33) holds. By changing the order of integration, we can show that
\[
d(t) = \int_0^t e^{\gamma s}\big(f_1(s)-f_1(t)\big)\,ds, \qquad
M(t) = \int_0^t e^{\gamma s}\big(\mu_1(s)-\mu_1(t)\big)\,ds.
\]
Thus, as $0<\gamma<\lambda$,
\[
\|d(t)+M(t)\| \le \int_0^t e^{\gamma s}\big(\|f_1(t)+\mu_1(t)\| + \|f_1(s)+\mu_1(s)\|\big)\,ds
\le c(\omega)\int_0^t e^{\gamma s}\big(e^{-\lambda t}+e^{-\lambda s}\big)\,ds < c_1(\omega), \quad \omega\in\Omega^*.
\]
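The change-of-order identity used above can be sanity-checked numerically in the scalar case. The choices below are illustrative assumptions: $f(s)=e^{-s}$ (so $f_1(t)=\int_t^\infty f(u)\,du=e^{-t}$) and $\gamma=0.5$; the two expressions for $d(t)$ should agree for every $t$.

```python
import numpy as np

# Numerical check of the identity behind (7.33), scalar case with the
# illustrative choices f(s) = e^{-s} (so f_1(t) = e^{-t}) and gamma = 0.5:
#   d(t) = int_0^t (e^{gamma s} - 1)/gamma * f(s) ds
#        = int_0^t e^{gamma s} * (f_1(s) - f_1(t)) ds.
def trapezoid(y, x):
    # simple trapezoidal quadrature on a fixed grid
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

gamma, t = 0.5, 2.0
s = np.linspace(0.0, t, 200001)
f = np.exp(-s)
f1 = np.exp(-s)                       # f_1(s) = int_s^infty e^{-u} du = e^{-s}
lhs = trapezoid((np.exp(gamma * s) - 1.0) / gamma * f, s)
rhs = trapezoid(np.exp(gamma * s) * (f1 - np.exp(-t)), s)
print(lhs, rhs)                        # both approx 2 - 4/e + 2/e^2 = 0.7991...
```

Both quadratures agree to machine-level quadrature accuracy, confirming the Fubini-type rearrangement that makes $d$ and $M$ bounded when $\gamma<\lambda$.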

8. On the Necessary and Sufficient Conditions for Exponential Convergence of Solutions of (<xref ref-type="disp-formula" rid="eq1.1a">1.1a</xref>)-(<xref ref-type="disp-formula" rid="eq1.1b">1.1b</xref>) and (<xref ref-type="disp-formula" rid="eq1.4a">1.4a</xref>)-(<xref ref-type="disp-formula" rid="eq1.4b">1.4b</xref>)

We now combine the results from Sections 4 and 5 to prove Theorem 3.2, and the results from Sections 6 and 7 to prove Theorem 3.1.

We showed the necessity of (3.5) for the exponential convergence of the solution of (1.4a)-(1.4b) in Section 5. In order to prove the necessity of the exponential integrability of the kernel we require the following result which was extracted from .

Theorem 8.1.

Let $K$ satisfy (2.4) and (3.1). Suppose that there exist a constant matrix $R_\infty$ and constants $\beta>0$ and $c>0$ such that the solution $R$ of (1.2a)-(1.2b) satisfies (4.3). If the kernel $K$ satisfies (3.2), then there exists a constant $\alpha>0$ such that $K$ satisfies (3.4).

Proof of Theorem <xref ref-type="statement" rid="thm3.2">3.2</xref>.

We begin by proving the equivalence between (i) and (ii). The implication (i) implies (ii) is the subject of Proposition 4.1. We can demonstrate that (ii) implies (i) as follows, beginning by proving that (3.9) implies (3.4). We consider the following $n$ solutions of (1.4a)-(1.4b): $\{X^j(t)\}_{j=1,\dots,n}$, where $X^j(0)=e_j$. Since (3.9) holds, we obtain
\[
m_1(e_j)\,e^{-\beta_1 t} \ge \mathbb{E}\|X^j(t)-X^j(\infty)\| \ge \big\|\mathbb{E}[X^j(t)-X^j(\infty)]\big\| = \big\|\big(R(t)-R(\infty)\big)e_j\big\| \tag{8.1}
\]
for each $j=1,\dots,n$. Thus, the resolvent $R$ of (1.2a)-(1.2b) decays exponentially to $R_\infty$. We can apply Theorem 8.1 to obtain (3.4), after which Proposition 5.2 can be applied to obtain (3.5). As (8.1) holds, it is clear that (3.3) holds.

We now prove the equivalence between (i) and (iii). The implication (i) implies (iii) is the subject of Proposition 4.2. We now demonstrate that (iii) implies (i), beginning by proving that (3.10) implies (3.4). As (3.10) holds for all $X_0$, we can consider the following $n+1$ solutions of (1.4a)-(1.4b): $\{X^j(t)\}_{j=1,\dots,n+1}$, where
\[
X^j(0)=e_j \ \text{ for } j=1,\dots,n, \qquad X^{n+1}(0)=0.
\]
We know that $X^j(t)$ approaches $X^j(\infty)$ exponentially quickly in the almost sure sense. Introduce
\[
S^j(t) = X^j(t) - X^{n+1}(t),
\]
and notice that $S^j(0)=e_j$. Let $S=[S^1,\dots,S^n]\in\mathbb{R}^{n\times n}$. Then
\[
S'(t) = AS(t) + (K*S)(t), \quad t>0, \qquad S(0)=I.
\]
If we define $S^j(\infty)=X^j(\infty)-X^{n+1}(\infty)$, then $S(t)\to S_\infty$ exponentially quickly, so we can apply Theorem 8.1 to obtain (3.4). As (3.4) and (3.10) hold, we can apply Proposition 5.1 to obtain (3.5). It is also evident from this argument that (3.3) holds. This proves that (iii) implies (i).

Proof of Theorem <xref ref-type="statement" rid="thm3.1">3.1</xref>.

We begin by proving the equivalence between (i) and (ii). The implication that (i) implies (ii) is the subject of Proposition 6.1. Now consider the implication (ii) implies (i). Using (3.7), we see that
\[
\big\|\mathbb{E}[X(t)-X_\infty]\big\| \le \mathbb{E}\|X(t)-X_\infty\| \le m^* e^{-\beta_1^* t}.
\]
Consider the $n+1$ solutions $X^j$ of (1.1a)-(1.1b) with initial conditions $X^j(0)=e_j$ for $j=1,\dots,n$ and $X^{n+1}(0)=0$. Since $R(t)e_j = X^j(t)-X^{n+1}(t)$, we see that
\[
\mathbb{E}\|X^j(t)-X^j(\infty)\| + \mathbb{E}\|X^{n+1}(t)-X^{n+1}(\infty)\| \ge \big\|R(t)e_j - \mathbb{E}[c_j]\big\|,
\]
where $c_j = X^j(\infty)-X^{n+1}(\infty)$ is an almost surely finite random variable. As both terms on the left-hand side of this expression decay exponentially to zero, $t\mapsto R(t)e_j$ must decay exponentially to $\mathbb{E}[c_j]$ as $t\to\infty$. Thus $R$ must satisfy (4.3). Now, apply Theorem 8.1 to obtain (3.4), and Proposition 7.1 to obtain (3.6) and (3.5).

We now prove the equivalence between (i) and (iii). The implication (i) implies (iii) is the subject of Proposition 6.2. Once again, consider the $n+1$ solutions $X^j(t)$ with initial conditions $X^j(0)=e_j$ for $j=1,\dots,n$ and $X^{n+1}(0)=0$. Since $R(t)e_j = X^j(t)-X^{n+1}(t)$ for $j=1,\dots,n$, we can write
\[
\big(X^j(t)-X^j(\infty)\big) - \big(X^{n+1}(t)-X^{n+1}(\infty)\big) = R(t)e_j - c_j,
\]
where $c_j = X^j(\infty)-X^{n+1}(\infty)$ is an almost surely finite random variable. From (3.8) we know that $X^j$ decays exponentially quickly to $X^j(\infty)$; similarly, $X^{n+1}$ decays exponentially quickly to $X^{n+1}(\infty)$. Thus, $R$ decays exponentially to a limit. As a result, (4.3) must hold. Now apply Theorem 8.1 to obtain (3.4), and Proposition 7.2 to obtain (3.6) and (3.5).

Acknowledgments

The authors are pleased to acknowledge the referees for their careful scrutiny of and suggested corrections to the manuscript. The first author was partially funded by an Albert College Fellowship, awarded by Dublin City University’s Research Advisory Panel. The second author was funded by The Embark Initiative operated by the Irish Research Council for Science, Engineering and Technology (IRCSET).

References

[1] J. A. D. Appleby, S. Devin, and D. W. Reynolds, "On the exponential convergence to a limit of solutions of perturbed linear Volterra equations," Electronic Journal of Qualitative Theory of Differential Equations, no. 9, pp. 1–16, 2005.
[2] S. Murakami, "Exponential asymptotic stability for scalar linear Volterra equations," Differential and Integral Equations, vol. 4, no. 3, pp. 519–525, 1991.
[3] T. Krisztin and J. Terjéki, "On the rate of convergence of solutions of linear Volterra equations," Bollettino della Unione Matematica Italiana, Serie VII-B, vol. 2, pp. 427–444, 1988.
[4] J. A. D. Appleby and A. Freeman, "Exponential asymptotic stability of linear Itô-Volterra equations with damped stochastic perturbations," Electronic Journal of Probability, vol. 8, no. 22, pp. 223–234, 2003.
[5] J. A. D. Appleby and M. Riedle, "Almost sure asymptotic stability of stochastic Volterra integro-differential equations with fading perturbations," Stochastic Analysis and Applications, vol. 24, no. 4, pp. 813–826, 2006.
[6] X. Mao, "Stability of stochastic integro-differential equations," Stochastic Analysis and Applications, vol. 18, no. 6, pp. 1005–1017, 2000.
[7] X. Mao and M. Riedle, "Mean square stability of stochastic Volterra integro-differential equations," Systems & Control Letters, vol. 55, no. 6, pp. 459–465, 2006.
[8] J. A. D. Appleby, S. Devin, and D. W. Reynolds, "Mean square convergence of solutions of linear stochastic Volterra equations to non-equilibrium limits," Dynamics of Continuous, Discrete and Impulsive Systems, Series A, vol. 13B, supplement, pp. 515–534, 2006.
[9] J. A. D. Appleby, S. Devin, and D. W. Reynolds, "Almost sure convergence of solutions of linear stochastic Volterra equations to nonequilibrium limits," Journal of Integral Equations and Applications, vol. 19, no. 4, pp. 405–437, 2007.
[10] X. Mao, Stochastic Differential Equations and Their Applications, Horwood Publishing Series in Mathematics and Applications, Horwood, Chichester, UK, 1997.
[11] G. Gripenberg, S.-O. Londen, and O. Staffans, Volterra Integral and Functional Equations, vol. 34 of Encyclopedia of Mathematics and Its Applications, Cambridge University Press, Cambridge, UK, 1990.
[12] D. Revuz and M. Yor, Continuous Martingales and Brownian Motion, vol. 293 of Fundamental Principles of Mathematical Sciences, 3rd edition, Springer, Berlin, Germany, 1999.
[13] J. A. D. Appleby, "Exponential asymptotic stability of nonlinear Itô-Volterra equations with damped stochastic perturbations," Functional Differential Equations, vol. 12, no. 1-2, pp. 7–34, 2005.
[14] A. N. Shiryaev, Probability, vol. 95 of Graduate Texts in Mathematics, 2nd edition, Springer, New York, NY, USA, 1996.
[15] J. A. D. Appleby, S. Devin, and D. W. Reynolds, "On the exponential convergence to a non-equilibrium limit of solutions of linear stochastic Volterra equations," preprint, 2007.