This paper is dedicated to the study of a nonlinear SPDE on a bounded domain in ℝd, with zero initial conditions and Dirichlet boundary conditions, driven by an α-stable Lévy noise Z with α∈(0,2), α≠1, and possibly nonsymmetric tails. To give a meaning to the concept of solution, we develop a theory of stochastic integration with respect to this noise. The idea is to first solve the equation with “truncated” noise (obtained by removing from Z the jumps which exceed a fixed value K), yielding a solution uK, and then show that the solutions uL, L>K coincide on the event {t≤τK}, for some stopping times τK converging to infinity. A similar idea was used in the setting of Hilbert-space-valued processes. A major step is to show that the stochastic integral with respect to ZK satisfies a pth moment inequality. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.
1. Introduction
Modeling phenomena which evolve in time or space-time and are subject to random perturbations is a fundamental problem in stochastic analysis. When these perturbations are known to exhibit extreme behavior, as seen frequently in finance or environmental studies, a model relying on the Gaussian distribution is not appropriate. A suitable alternative could be a model based on a heavy-tailed distribution, like the stable distribution. In such a model, these perturbations are allowed to have extreme values with a probability which is significantly higher than in a Gaussian-based model.
In the present paper, we introduce precisely such a model, given rigorously by a stochastic partial differential equation (SPDE) driven by a noise term which has a stable distribution over any space-time region and has independent values over disjoint space-time regions (i.e., it is a Lévy noise). More precisely, we consider the SPDE:
(1)Lu(t,x)=σ(u(t,x))Z˙(t,x),t>0,x∈𝒪
with zero initial conditions and Dirichlet boundary conditions, where σ is a Lipschitz function, L is a second-order pseudo-differential operator on a bounded domain 𝒪⊂ℝd, and Z˙(t,x)=∂d+1Z/∂t∂x1⋯∂xd is the formal derivative of an α-stable Lévy noise with α∈(0,2), α≠1. The goal is to find sufficient conditions on the fundamental solution G(t,x,y) of the equation Lu=0 on ℝ+×𝒪, which will ensure the existence of a mild solution of (1). We say that a predictable process u={u(t,x);t≥0,x∈𝒪} is a mild solution of (1) if for any t>0, x∈𝒪,
(2)u(t,x)=∫0t∫𝒪G(t-s,x,y)σ(u(s,y))Z(ds,dy)a.s.
We assume that G(t,x,y) is a function in t, which excludes from our analysis the case of the wave equation with d≥3.
To explain the connections with other works, we describe briefly the construction of the noise (the details are given in Section 2). This construction is similar to that of a classical α-stable Lévy process and is based on a Poisson random measure (PRM) N on ℝ+×ℝd×(ℝ∖{0}) of intensity dtdxνα(dz), where
(3)να(dz)=[pαz-α-11(0,∞)(z)+qα(-z)-α-11(-∞,0)(z)]dz
for some p, q≥0 with p+q=1. More precisely, for any set B∈ℬb(ℝ+×ℝd),
(4)Z(B)=∫B×{|z|≤1}zN^(ds,dx,dz)+∫B×{|z|>1}zN(ds,dx,dz)-μ|B|,
where N^(B×·)=N(B×·)-|B|να(·) is the compensated process and μ is a constant (specified by Lemma 3). Here, ℬb(ℝ+×ℝd) is the class of bounded Borel sets in ℝ+×ℝd and |B| is the Lebesgue measure of B.
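As a quick numerical sanity check on (3) (a sketch added here, not part of the original argument; the parameter values are illustrative): the tail mass of να above a level t>0 is exactly t-α, whatever the weights p and q; in particular να({|z|>1})=1, which is the finite intensity that makes the large jumps a compound Poisson part.

```python
import math

ALPHA, P, Q = 1.5, 0.7, 0.3  # illustrative parameters with p + q = 1

def tail_mass(t, alpha=ALPHA, p=P, q=Q, zmax=1e6, n=200_000):
    """Numerically integrate nu_alpha over {|z| > t} using the density in (3):
    p*alpha*z^(-alpha-1) on (0, inf) and q*alpha*(-z)^(-alpha-1) on (-inf, 0).
    Both halves depend only on |z|, so we integrate the radial part once and
    weight it by p + q = 1 (midpoint rule on a log-spaced grid)."""
    total = 0.0
    ratio = zmax / t
    for i in range(n):
        a = t * ratio ** (i / n)
        b = t * ratio ** ((i + 1) / n)
        z = 0.5 * (a + b)
        total += (p + q) * alpha * z ** (-alpha - 1) * (b - a)
    return total

# The closed form predicts nu_alpha({|z| > t}) = t^(-alpha).
print(tail_mass(1.0))   # close to 1 = nu_alpha({|z| > 1})
```

The same computation with any other p, q summing to 1 gives the same tail, since only p+q enters.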
As the term on the right-hand side of (2) is a stochastic integral with respect to Z, such an integral should be constructed first. Our construction of the integral is an extension to random fields of the construction provided by Giné and Marcus in [1] in the case of an α-stable Lévy process {Z(t)}t∈[0,1]. Unlike these authors, we do not assume that the measure να is symmetric.
Since any Lévy noise is related to a PRM, in a broad sense, one could say that this problem originates in Itô’s papers [2, 3] regarding the stochastic integral with respect to a Poisson noise. SPDEs driven by a compensated PRM were considered for the first time in [4], using the approach based on Hilbert-space-valued solutions. This study was motivated by an application to neurophysiology leading to the cable equation. In the case of the heat equation, a similar problem was considered in [5–7] using the approach based on random-field solutions. One of the results of [6] shows that the heat equation:
(5)∂u∂t(t,x)=12Δu(t,x)+∫Uf(t,x,u(t,x);z)N^(t,x,dz)+g(t,x,u(t,x))
has a unique solution in the space of predictable processes u satisfying sup(t,x)∈[0,T]×ℝdE|u(t,x)|p<∞, for any p∈(1+2/d,2]. In this equation, N^ is the compensated process corresponding to a PRM N on ℝ+×ℝd×U of intensity dtdxν(dz), for an arbitrary σ-finite measure space (U,ℬ(U),ν) with measure ν satisfying ∫U|z|pν(dz)<∞. Because of this latter condition, this result cannot be used in our case with U=ℝ∖{0} and ν=να. For similar reasons, the results of [7] also do not cover the case of an α-stable noise. However, in the case α>1, we will be able to exploit successfully some ideas of [6] for treating the equation with “truncated” noise ZK, obtained by removing from Z the jumps exceeding a value K (see Section 5.2).
The heat equation with the same type of noise as in the present paper was examined in [8, 9] in the cases α<1 and α>1, respectively, assuming that the noise has only positive jumps (i.e., q=0). The methods used by these authors are different from those presented here, since they investigate the more difficult case of a non-Lipschitz function σ(u)=uδ with δ>0. In [8], Mueller removes the atoms of Z of mass smaller than 2-n and solves the equation driven by the noise obtained in this way; here we remove the atoms of Z of mass larger than K and solve the resulting equation. In [9], Mytnik uses a martingale problem approach and gives the existence of a pair (u,Z) which satisfies the equation (the so-called “weak solution”), whereas in the present paper we obtain the existence of a solution u for a given noise Z (the so-called “strong solution”). In particular, when α>1 and δ=1/α, the existence of a “weak solution” of the heat equation with α-stable Lévy noise is obtained in [9] under the condition
(6)α<1+2d
which we encounter here as well. It is interesting to note that (6) is the necessary and sufficient condition for the existence of the density of the super-Brownian motion with “α-1”-stable branching (see [10]). Reference [11] examines the heat equation with multiplicative noise (i.e., σ(u)=u), driven by an α-stable Lévy noise Z which does not depend on time.
To conclude the literature review, we should point out that there are many references related to stochastic differential equations with α-stable Lévy noise, using the approach based on Hilbert-space valued solutions. We refer the reader to Section 12.5 of the monograph [12] and to [13–16] for a sample of relevant references. See also the survey article [17] for an approach based on the white noise theory for Lévy processes.
This paper is organized as follows.
In Section 2, we review the construction of the α-stable Lévy noise Z, and we show that this can be viewed as an independently scattered random measure with jointly α-stable distributions.
In Section 3, we consider the linear equation (1) (with σ(u)=1) and we identify the necessary and sufficient condition for the existence of the solution. This condition is verified in the case of some examples.
Section 4 contains the construction of the stochastic integral with respect to the α-stable noise Z, for α∈(0,2). The main effort is dedicated to proving a maximal inequality for the tail of the integral process, when the integrand is a simple process. This extends the construction of [1] to the case of random fields and a nonsymmetric measure να.
In Section 5, we introduce the process ZK obtained by removing from Z the jumps exceeding a fixed value K, and we develop a theory of integration with respect to this process. For this, we need to treat separately the cases α<1 and α>1. In both cases, we obtain a pth moment inequality for the integral process for p∈(α,1) if α<1 and p∈(α,2) if α>1. This inequality plays the same role as the Burkholder-Davis-Gundy inequality in the theory of integration with respect to continuous martingales.
In Section 6 we prove the main result about the existence of the mild solution of (1). For this, we first solve the equation with “truncated” noise ZK using a Picard iteration scheme, yielding a solution uK. We then introduce a sequence (τK)K≥1 of stopping times with τK↑∞ a.s. and we show that the solutions uL, L>K coincide on the event t≤τK. For the definition of the stopping times τK, we need again to consider separately the cases α<1 and α>1.
Appendix A contains some results about the tail of a nonsymmetric stable random variable and the tail of an infinite sum of random variables. Appendix B gives an estimate for the Green function associated with the fractional power of the Laplacian. Appendix C gives a local property of the stochastic integral with respect to Z (or ZK).
2. Definition of the Noise
In this section we review the construction of the α-stable Lévy noise on ℝ+×ℝd and investigate some of its properties.
Let N=∑i≥1δ(Ti,Xi,Zi) be a Poisson random measure on ℝ+×ℝd×(ℝ∖{0}), defined on a probability space (Ω,ℱ,P), with intensity measure dtdxνα(dz), where να is given by (3). Let (εj)j≥0 be a sequence of positive real numbers such that εj→0 as j→∞ and 1=ε0>ε1>ε2>⋯. Let
(7)Γj={z∈ℝ;εj<|z|≤εj-1},j≥1,Γ0={z∈ℝ;|z|>1}.
For any set B∈ℬb(ℝ+×ℝd), we define
(8)Lj(B)=∫B×ΓjzN(dt,dx,dz)=∑(Ti,Xi)∈BZi1{Zi∈Γj},j≥0.
Remark 1.
The variable L0(B) is finite since the sum above contains finitely many terms. To see this, we note that E[N(B×Γ0)]=|B|να(Γ0)<∞, and hence N(B×Γ0)=card{i≥1;(Ti,Xi,Zi)∈B×Γ0}<∞.
For any j≥0, the variable Lj(B) has a compound Poisson distribution with jump intensity measure |B|·να|Γj; that is,
(9)E[eiuLj(B)]=exp{|B|∫Γj(eiuz-1)να(dz)},u∈ℝ.
It follows that E(Lj(B))=|B|∫Γjzνα(dz) and Var(Lj(B))=|B|∫Γjz2να(dz) for any j≥0. Hence, Var(Lj(B))<∞ for any j≥1 and Var(L0(B))=∞. If α>1, then E(L0(B)) is finite. Define
(10)Y(B)=∑j≥1[Lj(B)-E(Lj(B))]+L0(B).
This sum converges a.s. by Kolmogorov’s criterion since {Lj(B)-E(Lj(B))}j≥1 are independent zero-mean random variables with ∑j≥1Var(Lj(B))<∞.
From (9) and (10), it follows that Y(B) is an infinitely divisible random variable with characteristic function:
(11)E(eiuY(B))=exp{|B|∫ℝ(eiuz-1-iuz1{|z|≤1})να(dz)},u∈ℝ.
Hence, if α>1, then E(Y(B))=|B|∫ℝz1{|z|>1}να(dz); note that Var(Y(B))=|B|∫ℝz2να(dz)=∞, since ∫{|z|>1}z2να(dz)=∞ for α<2.
Lemma 2.
The family {Y(B);B∈ℬb(ℝ+×ℝd)} defined by (10) is an independently scattered random measure; that is,
(a) for any disjoint sets B1,…,Bn in ℬb(ℝ+×ℝd), Y(B1),…,Y(Bn) are independent;
(b) for any sequence (Bn)n≥1 of disjoint sets in ℬb(ℝ+×ℝd) such that ⋃n≥1Bn is bounded, Y(⋃n≥1Bn)=∑n≥1Y(Bn) a.s.
Proof.
(a) Note that for any function φ∈L2(ℝ+×ℝd) with compact support K, we can define the random variable Y(φ)=∑j≥1[Lj(φ)-E(Lj(φ))]+L0(φ) where Lj(φ)=∫K×Γjφ(t,x)zN(dt,dx,dz). For any u∈ℝ, we have
(12)E(eiuY(φ))=exp{∫ℝ+×ℝd×ℝ(eiuzφ(t,x)-1-iuzφ(t,x)1{|z|≤1})dtdxνα(dz)}.
For any disjoint sets B1,…,Bn and for any u1,…,un∈ℝ, we have
(13)E[exp(i∑k=1nukY(Bk))]=E[exp(iY(∑k=1nuk1Bk))]=exp{∫ℝ+×ℝd×ℝ(eiz∑k=1nuk1Bk(t,x)-1-iz1{|z|≤1}∑k=1nuk1Bk(t,x))dtdxνα(dz)}=exp{∑k=1n|Bk|∫ℝ(eiukz-1-iukz1{|z|≤1})να(dz)}=∏k=1nE[exp(iukY(Bk))],
using (12) with φ=∑k=1nuk1Bk for the second equality and (11) for the last equality. This proves that Y(B1),…,Y(Bn) are independent.
(b) Let Sn=∑k=1nY(Bk) and S=Y(B), where B=⋃n≥1Bn. By Lévy’s equivalence theorem, (Sn)n≥1 converges a.s. if and only if it converges in distribution. By (13), with uk=u for all k=1,…,n, we have
(14)E(eiuSn)=exp{|⋃k=1nBk|∫ℝ(eiuz-1-iuz1{|z|≤1})να(dz)}.
This clearly converges to E(eiuS)=exp{|B|∫ℝ(eiuz-1-iuz1{|z|≤1})να(dz)}, and hence (Sn)n≥1 converges in distribution to S.
Recall that a random variable X has an α-stable distribution with parameters α∈(0,2), σ∈[0,∞), β∈[-1,1], and μ∈ℝ if, for any u∈ℝ,
(15)E(eiuX)=exp{-|u|ασα(1-isgn(u)βtan(πα/2))+iuμ},ifα≠1,
or
(16)E(eiuX)=exp{-|u|σ(1+isgn(u)β(2/π)ln|u|)+iuμ},ifα=1
(see Definition 1.1.6 of [18]). We denote this distribution by Sα(σ,β,μ).
Lemma 3.
Y(B) has a Sα(σ|B|1/α,β,μ|B|) distribution with β=p-q,
(17)σα=∫0∞(sinx)x-αdx={(Γ(2-α)/(1-α))cos(πα/2),ifα≠1,π/2,ifα=1,μ={βα/(α-1),ifα≠1,βc0,ifα=1,
and c0=∫0∞(sinz-z1{z≤1})z-2dz. If α>1, then E(Y(B))=μ|B|.
Proof.
We first express the characteristic function (11) of Y(B) in Feller’s canonical form (see Section XVII.2 of [19]):
(18)E(eiuY(B))=exp{iub|B|+|B|∫ℝ((eiuz-1-iusinz)/z2)Mα(dz)}
with Mα(dz)=z2να(dz) and b=∫ℝ(sinz-z1{|z|≤1})να(dz). Then the result follows from the calculations done in Example XVII.3.(g) of [19].
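The constants in Lemma 3 can be cross-checked numerically (a sketch with illustrative parameter values, not part of the proof): at α=1/2 the closed form for σα in (17) agrees with the classical value ∫0∞(sinx)x-1/2dx=√(π/2), and for α>1 the formula μ=βα/(α-1) agrees with the mean E(Y(B))=|B|∫{|z|>1}zνα(dz) noted after (11).

```python
import math

def sigma_alpha(alpha):
    # closed form from (17) for alpha != 1
    return math.gamma(2 - alpha) / (1 - alpha) * math.cos(math.pi * alpha / 2)

def mean_large_jumps(alpha, beta, zmax=1e8, n=100_000):
    """Numeric integral int_{|z|>1} z nu_alpha(dz) for alpha > 1.
    The positive and negative tails combine to a factor beta = p - q,
    leaving beta * alpha * z^(-alpha) on (1, inf) (log-spaced midpoint rule)."""
    total = 0.0
    for i in range(n):
        a = zmax ** (i / n)
        b = zmax ** ((i + 1) / n)
        z = 0.5 * (a + b)
        total += beta * alpha * z ** (-alpha) * (b - a)
    return total

print(sigma_alpha(0.5), math.sqrt(math.pi / 2))       # should agree
print(mean_large_jumps(1.5, 0.4), 0.4 * 1.5 / 0.5)    # should agree
```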
From Lemmas 2 and 3, it follows that
(19)Z={Z(B)=Y(B)-μ|B|;B∈ℬb(ℝ+×ℝd)}
is an α-stable random measure, in the sense of Definition 3.3.1 of [18], with control measure m(B)=σα|B| and constant skewness intensity β. In particular, Z(B) has a Sα(σ|B|1/α,β,0) distribution.
We say that Z is an α-stable Lévy noise. Coming back to the original construction (10) of Y(B) and noticing that
(20)μ|B|=-|B|∫ℝz1{|z|≤1}να(dz)=-∑j≥1E(Lj(B)),ifα<1,
μ|B|=|B|∫ℝz1{|z|>1}να(dz)=E(L0(B)),ifα>1,
it follows that Z(B) can be represented as
(21)Z(B)=∑j≥0Lj(B)=:∫B×(ℝ∖{0})zN(dt,dx,dz),ifα<1,
(22)Z(B)=∑j≥0[Lj(B)-E(Lj(B))]=:∫B×(ℝ∖{0})zN^(dt,dx,dz),ifα>1.
Here N^ is the compensated Poisson measure associated with N; that is, N^(A)=N(A)-E(N(A)) for any relatively compact set A in ℝ+×ℝd×(ℝ¯∖{0}).
In the case α=1, we will assume that p=q so that να is symmetric around 0, E(Lj(B))=0 for all j≥1, and Z(B) admits the same representation as in the case α<1.
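To make representation (21) concrete, here is a small simulation sketch (illustrative only; the function `sample_Z`, the truncation level `eps`, and the parameter values are our own choices, not from the paper). For α<1 the jumps are absolutely summable, so dropping the jumps of modulus below a small eps gives an approximate sample of Z(B):

```python
import math, random

def sample_Z(B_measure, alpha, p, eps=1e-3, rng=None):
    """Approximate sample of Z(B) for alpha < 1 via (21), keeping only the
    jumps with |z| > eps. The discarded jumps contribute at most
    |B| * alpha/(1-alpha) * eps^(1-alpha) in expected absolute value,
    which vanishes as eps -> 0."""
    rng = rng or random.Random(2024)
    mean = B_measure * eps ** (-alpha)      # |B| * nu_alpha({|z| > eps})
    # Poisson(mean) count of retained atoms, by Knuth's method
    n, prod, limit = 0, 1.0, math.exp(-mean)
    while True:
        prod *= rng.random()
        if prod < limit:
            break
        n += 1
    total = 0.0
    for _ in range(n):
        # given |z| > eps, |z| has the Pareto tail P(|z| > t) = (t/eps)^(-alpha)
        size = eps * rng.random() ** (-1.0 / alpha)
        sign = 1.0 if rng.random() < p else -1.0   # P(z > 0) = p, P(z < 0) = q
        total += sign * size
    return total

z = sample_Z(B_measure=1.0, alpha=0.5, p=0.7)
```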
3. The Linear Equation
As a preliminary investigation, we consider first equation (1) with σ=1:
(23)Lu(t,x)=Z˙(t,x),t>0,x∈𝒪
with zero initial conditions and Dirichlet boundary conditions. In this section 𝒪 is a bounded domain in ℝd or 𝒪=ℝd.
By definition, the process {u(t,x);t≥0,x∈𝒪} given by
(24)u(t,x)=∫0t∫𝒪G(t-s,x,y)Z(ds,dy)
is a mild solution of (23), provided that the stochastic integral on the right-hand side of (24) is well defined.
We define now the stochastic integral of a deterministic function φ:
(25)Z(φ)=∫0∞∫ℝdφ(t,x)Z(dt,dx).
If φ∈Lα(ℝ+×ℝd), this can be defined by approximation with simple functions, as explained in Section 3.4 of [18]. The process {Z(φ);φ∈Lα(ℝ+×ℝd)} has jointly α-stable finite dimensional distributions. In particular, each Z(φ) has a Sα(σφ,β,0)-distribution with scale parameter:
(26)σφ=σ(∫0∞∫ℝd|φ(t,x)|αdxdt)1/α.
More generally, a measurable function φ:ℝ+×ℝd→ℝ is integrable with respect to Z if there exists a sequence (φn)n≥1 of simple functions such that φn→φ a.e., and, for any B∈ℬb(ℝ+×ℝd), the sequence {Z(φn1B)}n converges in probability (see [20]).
The next result shows that the condition φ∈Lα(ℝ+×ℝd) is also necessary for the integrability of φ with respect to Z. Due to Lemma 2, this follows immediately from the general theory of stochastic integration with respect to independently scattered random measures developed in [20].
Lemma 4.
A deterministic function φ is integrable with respect to Z if and only if φ∈Lα(ℝ+×ℝd).
Proof.
We write the characteristic function of Z(B) in the form used in [20]:
(27)E(eiuZ(B))=exp{∫B[iua+∫ℝ(eiuz-1-iuτ(z))να(dz)]dtdx}
with a=β-μ, τ(z)=z if |z|≤1 and τ(z)=sgn(z) if |z|>1. By Theorem 2.7 of [20], φ is integrable with respect to Z if and only if
(28)∫ℝ+×ℝd|U(φ(t,x))|dtdx<∞,∫ℝ+×ℝdV(φ(t,x))dtdx<∞,
where U(y)=ay+∫ℝ(τ(yz)-yτ(z))να(dz) and V(y)=∫ℝ(1∧|yz|2)να(dz). Direct calculations show that, in our case, U(y)=-(β/(α-1))yα if α≠1, U(y)=0 if α=1, and V(y)=(2/(2-α))yα.
The following result follows immediately from (24) and Lemma 4.
Proposition 5.
Equation (23) has a mild solution if and only if, for any t>0, x∈𝒪,
(29)Iα(t)=∫0t∫𝒪G(s,x,y)αdyds<∞.
In this case, {u(t,x);t≥0,x∈𝒪} has jointly α-stable finite-dimensional distributions. In particular, u(t,x) has a Sα(σIα(t)1/α,β,0) distribution.
Condition (29) can be easily verified in the case of several examples.
Example 6 (heat equation).
Let L=∂/∂t-(1/2)Δ. Assume first that 𝒪=ℝd. Then G(t,x,y)=G¯(t,x-y), where
(30)G¯(t,x)=(2πt)-d/2exp(-|x|2/(2t)),
and condition (29) is equivalent to (6). In this case, Iα(t)=cα,dtd(1-α)/2+1. If 𝒪 is a bounded domain in ℝd, then G(t,x,y)≤G¯(t,x-y) (see page 74 of [11]) and condition (29) is implied by (6).
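The computation behind Example 6 can be checked numerically for d=1 (a sketch with illustrative values α=1.5, s=0.7, not part of the paper): raising the Gaussian kernel (30) to the power α and integrating in x gives α-1/2(2πs)(1-α)/2, so Iα(t) behaves like ∫0t s^(d(1-α)/2) ds, which is finite exactly when α<1+2/d.

```python
import math

alpha, s = 1.5, 0.7          # illustrative values, d = 1
n, xmax = 200_000, 12.0      # midpoint rule on [-xmax, xmax]
dx = 2 * xmax / n
num = 0.0
for i in range(n):
    x = -xmax + (i + 0.5) * dx
    g = math.exp(-x * x / (2 * s)) / math.sqrt(2 * math.pi * s)  # kernel (30)
    num += g ** alpha * dx

# closed form: int_R Gbar(s,x)^alpha dx = alpha^(-1/2) * (2*pi*s)^((1-alpha)/2)
exact = alpha ** -0.5 * (2 * math.pi * s) ** ((1 - alpha) / 2)
```

Integrating the s-exponent d(1-α)/2 over (0,t] then recovers Iα(t)=cα,d t^(d(1-α)/2+1) under (6).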
Example 7 (parabolic equation).
Let L=∂/∂t-ℒ where
(31)ℒf(x)=∑i,j=1daij(x)∂2f∂xi∂xj(x)+∑i=1dbi(x)∂f∂xi(x)
is the generator of a Markov process with values in ℝd, without jumps (a diffusion). Assume that 𝒪 is a bounded domain in ℝd or 𝒪=ℝd. By Aronson's estimate (see, e.g., Theorem 2.6 of [12]), under some assumptions on the coefficients aij, bi, there exist some constants c1,c2>0 such that
(32)G(t,x,y)≤c1t-d/2exp(-|x-y|2/(c2t))
for all t>0 and x, y∈𝒪. In this case, condition (29) is implied by (6).
Example 8 (heat equation with fractional power of the Laplacian).
Let L=∂/∂t+(-Δ)γ for some γ>0. Assume that 𝒪 is a bounded domain in ℝd or 𝒪=ℝd. Then (see, e.g., Appendix B.5 of [12])
(33)G(t,x,y)=∫0∞𝒢(s,x,y)gt,γ(s)ds=∫0∞𝒢(t1/γs,x,y)g1,γ(s)ds,
where 𝒢(t,x,y) is the fundamental solution of ∂u/∂t-Δu=0 on 𝒪 and gt,γ is the density of the measure μt,γ, (μt,γ)t≥0 being a convolution semigroup of measures on [0,∞) whose Laplace transform is given by
(34)∫0∞e-usgt,γ(s)ds=exp(-tuγ),∀u>0.
Note that if γ<1, gt,γ is the density of St, where (St)t≥0 is a γ-stable subordinator with Lévy measure ργ(dx)=(γ/Γ(1-γ))x-γ-11(0,∞)(x)dx.
Assume first that 𝒪=ℝd. Then G(t,x,y)=G¯(t,x-y), where
(35)G¯(t,x)=∫ℝdeiξ·xe-t|ξ|2γdξ.
If γ<1, then G¯(t,·) is the density of Xt, with (Xt)t≥0 being a symmetric (2γ)-stable Lévy process with values in ℝd defined by Xt=WSt, with (Wt)t≥0 a Brownian motion in ℝd with variance 2. By Lemma B.1 (Appendix B), if α>1, then (29) holds if and only if
(36)α<1+2γd.
If 𝒪 is a bounded domain in ℝd, then G(t,x,y)≤G¯(t,x-y) (by Lemma 2.1 of [8]). In this case, if α>1, then (29) is implied by (36).
Example 9 (cable equation in ℝ).
Let Lu=∂u/∂t-∂2u/∂x2+u and 𝒪=ℝ. Then G(t,x,y)=G¯(t,x-y), where
(37)G¯(t,x)=(4πt)-1/2exp(-|x|2/(4t)-t),
and condition (29) holds for any α∈(0,2).
Example 10 (wave equation in ℝd with d=1,2).
Let L=∂2/∂t2-Δ and 𝒪=ℝd with d=1 or d=2. Then G(t,x,y)=G¯(t,x-y), where
(38)G¯(t,x)=(1/2)1{|x|<t},ifd=1,G¯(t,x)=(1/(2π))(t2-|x|2)-1/21{|x|<t},ifd=2.
Condition (29) holds for any α∈(0,2). In this case, Iα(t)=2-αt2 if d=1 and Iα(t)=((2π)1-α/((2-α)(3-α)))t3-α if d=2.
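The d=1 wave computation in Example 10 is simple enough to verify directly (a numerical sketch with illustrative values, not part of the paper): since ∫ℝG¯(s,x)αdx=(1/2)α·2s, integrating in s gives Iα(t)=2-αt2 for every α∈(0,2).

```python
alpha, t = 1.3, 2.0          # illustrative values, d = 1
n = 200_000
ds = t / n
approx = 0.0
for i in range(n):
    s = (i + 0.5) * ds
    # int_R Gbar(s,x)^alpha dx = (1/2)^alpha * (length of {|x| < s}) = 2s / 2^alpha
    approx += (0.5 ** alpha) * (2 * s) * ds

exact = 2 ** (-alpha) * t ** 2   # I_alpha(t) from Example 10
```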
4. Stochastic Integration
In this section we construct a stochastic integral with respect to Z by generalizing the ideas of [1] to the case of random fields. Unlike these authors, we do not assume that Z(B) has a symmetric distribution, unless α=1.
Let ℱt=ℱtN∨𝒩 where 𝒩 is the σ-field of negligible sets in (Ω,ℱ,P) and ℱtN is the σ-field generated by N([0,s]×A×Γ) for all s∈[0,t], A∈ℬb(ℝd) and for all Borel sets Γ⊂ℝ∖{0} bounded away from 0. Note that ℱtZ⊂ℱtN where ℱtZ is the σ-field generated by Z([0,s]×A), s∈[0,t], and A∈ℬb(ℝd).
A process X={X(t,x)}t≥0,x∈ℝd is called elementary if it is of the form
(39)X(t,x)=1(a,b](t)1A(x)Y,
where 0≤a<b, A∈ℬb(ℝd), and Y is ℱa-measurable and bounded. A simple process is a linear combination of elementary processes. Note that any simple process X can be written as
(40)X(t,x)=1{0}(t)Y0(x)+∑i=0N-11(ti,ti+1](t)Yi(x)
with 0=t0<t1<⋯<tN<∞ and Yi(x)=∑j=1mi1Aij(x)Yij, where (Yij)j=1,…,mi are ℱti-measurable and (Aij)j=1,…,mi are disjoint sets in ℬb(ℝd). Without loss of generality, we assume that Y0=0.
We denote by 𝒫 the predictable σ-field on Ω×ℝ+×ℝd, that is, the σ-field generated by all simple processes. We say that a process X={X(t,x)}t≥0,x∈ℝd is predictable if the map (ω,t,x)↦X(ω,t,x) is 𝒫-measurable.
Remark 11.
One can show that the predictable σ-field 𝒫 is the σ-field generated by the class 𝒞 of processes X such that t↦X(ω,t,x) is left continuous for any ω∈Ω, x∈ℝd and (ω,x)↦X(ω,t,x) is ℱt×ℬ(ℝd)-measurable for any t>0.
Let ℒα be the class of all predictable processes X such that
(41)∥X∥α,T,Bα:=E∫0T∫B|X(t,x)|αdxdt<∞,
for all T>0 and B∈ℬb(ℝd). Note that ℒα is a linear space.
Let (Ek)k≥1 be an increasing sequence of sets in ℬb(ℝd) such that ⋃kEk=ℝd. We define
(42)∥X∥α=∑k≥12-k(1∧∥X∥α,k,Ek),ifα>1,∥X∥αα=∑k≥12-k(1∧∥X∥α,k,Ekα),ifα≤1.
We identify two processes X and Y for which ∥X-Y∥α=0; that is, X=Y ν-a.e., where ν=P×dt×dx. In particular, we identify two processes X and Y if X is a modification of Y; that is, X(t,x)=Y(t,x) a.s. for all (t,x)∈ℝ+×ℝd.
The space ℒα becomes a metric space endowed with the metric dα:
(43)dα(X,Y)=∥X-Y∥α,ifα>1,dα(X,Y)=∥X-Y∥αα,ifα≤1.
This follows using Minkowski’s inequality if α>1 and the inequality |a+b|α≤|a|α+|b|α if α≤1.
The following result can be proved similarly to Proposition 2.3 of [21].
Proposition 12.
For any X∈ℒα there exists a sequence (Xn)n≥1 of bounded simple processes such that ∥Xn-X∥α→0 as n→∞.
By Proposition 5.7 of [22], the α-stable Lévy process {Z(t,B)=Z([0,t]×B);t≥0} has a càdlàg modification, for any B∈ℬb(ℝd). We work with these modifications. If X is a simple process given by (40), we define
(44)I(X)(t,B)=∑i=0N-1∑j=1miYijZ((ti∧t,ti+1∧t]×(Aij∩B)).
Note that, for any B∈ℬb(ℝd), I(X)(t,B) is ℱt-measurable for any t≥0, and {I(X)(t,B)}t≥0 is càdlàg. We write
(45)I(X)(t,B)=∫0t∫BX(s,x)Z(ds,dx).
The following result will be used for the construction of the integral. This result generalizes Lemma 3.3 of [1] to the case of random fields and nonsymmetric measures να.
Theorem 13.
If X is a bounded simple process then
(46)supλ>0λαP(supt∈[0,T]|I(X)(t,B)|>λ)≤cαE∫0T∫B|X(t,x)|αdxdt,
for any T>0 and B∈ℬb(ℝd), where cα is a constant depending only on α.
Proof.
Suppose that X is of the form (40). Since {I(X)(t,B)}t∈[0,T] is càdlàg, it is separable. Without loss of generality, we assume that its separating set D can be written as D=∪nFn where (Fn)n is an increasing sequence of finite sets containing the points (tk)k=0,…,N. Hence,
(47)P(supt∈[0,T]|I(X)(t,B)|>λ)=limn→∞P(maxt∈Fn|I(X)(t,B)|>λ).
Fix n≥1. Denote by 0=s0<s1<⋯<sm=T the points of the set Fn. Say tk=sik for some 0=i0<i1<⋯<iN. Then each interval (tk,tk+1] can be written as the union of some intervals of the form (si,si+1]:
(48)(tk,tk+1]=⋃i∈Ik(si,si+1],
where Ik={i;ik≤i<ik+1}. By (44), for any k=0,…,N-1 and i∈Ik,
(49)I(X)(si+1,B)-I(X)(si,B)=∑j=1mkYkjZ((si,si+1]×(Akj∩B)).
For any i∈Ik, let Ni=mk, and, for any j=1,…,Ni, define βij=Ykj, Hij=Akj, and Zij=Z((si,si+1]×(Hij∩B)). With this notation, we have
(50)I(X)(si+1,B)-I(X)(si,B)=∑j=1NiβijZij,∀i=0,…,m-1.
Consequently, for any l=1,…,m,
(51)I(X)(sl,B)=∑i=0l-1(I(X)(si+1,B)-I(X)(si,B))=∑i=0l-1∑j=1NiβijZij.
Using (47) and (51), it is enough to prove that for any λ>0,
(52)P(maxl=0,…,m-1|∑i=0l∑j=1NiβijZij|>λ)≤cαλ-αE∫0T∫B|X(s,x)|αdxds.
First, note that
(53)E∫0T∫B|X(s,x)|αdxds=∑i=0m-1(si+1-si)∑j=1NiE|βij|α|Hij∩B|.
This follows from the definition (40) of X and (48), since X(t,x)=∑k=0N-1∑i∈Ik1(si,si+1](t)∑j=1Niβij1Hij(x).
We now prove (52). Let Wi=∑j=1NiβijZij. For the event on the left-hand side, we consider its intersection with the event {max0≤i≤m-1|Wi|>λ} and its complement. Hence, the probability of this event can be bounded by
(54)∑i=0m-1P(|Wi|>λ)+P(max0≤l≤m-1|∑i=0lWi1{|Wi|≤λ}|>λ)=:I+II.
We treat separately the two terms.
For the first term, we note that β¯i=(βij)1≤j≤Ni is ℱsi-measurable and Z¯i=(Zij)1≤j≤Ni is independent of ℱsi. By Fubini’s theorem
(55)I=∑i=0m-1∫ℝNiP(|∑j=1NixjZij|>λ)Pβ¯i(dx¯),
where x¯=(xj)1≤j≤Ni and Pβ¯i is the law of β¯i.
We examine the tail of Ui=∑j=1NixjZij for a fixed x¯∈ℝNi. By Lemma 3, Zij has a Sα(σ(si+1-si)1/α|Hij∩B|1/α,β,0) distribution. Since the sets (Hij)1≤j≤Ni are disjoint, the variables (Zij)1≤j≤Ni are independent. Using elementary properties of the stable distribution (Properties 1.2.1 and 1.2.3 of [18]), it follows that Ui has a Sα(σi,βi*,0) distribution with parameters:
(56)σiα=σα(si+1-si)∑j=1Ni|xj|α|Hij∩B|,βi*=β(∑j=1Nisgn(xj)|xj|α|Hij∩B|)/(∑j=1Ni|xj|α|Hij∩B|).
By Lemma A.1 (Appendix A), there exists a constant cα*>0 such that
(57)P(|Ui|>λ)≤cα*λ-ασα(si+1-si)∑j=1Ni|xj|α|Hij∩B|
for any λ>0. Hence,
(58)I≤cα*λ-ασα∑i=0m-1(si+1-si)∑j=1NiE|βij|α|Hij∩B|=cα*λ-ασαE∫0T∫B|X(s,x)|αdxds.
We now treat II. We consider three cases. For the first two cases we deviate from the original argument of [1] since we do not require that β=0.
Case 1 (α<1). Note that
(59)II≤P(max0≤l≤m-1Ml>λ),
where {Ml=∑i=0l|Wi|1{|Wi|≤λ},ℱsl+1;0≤l≤m-1} is a submartingale. By the submartingale maximal inequality (Theorem 35.3 of [23]),
(60)P(max0≤l≤m-1Ml>λ)≤1λE(Mm-1)=1λ∑i=0m-1E(|Wi|1|Wi|≤λ).
Using the independence between β¯i and Z¯i it follows that
(61)E[|Wi|1|Wi|≤λ]=∫ℝNiE[|∑j=1NixjZij|1{|∑j=1NixjZij|≤λ}]Pβ¯i(dx¯).
Let Ui=∑j=1NixjZij. Using (57) and Remark A.2 (Appendix A), we get
(62)E[|Ui|1{|Ui|≤λ}]≤cα*σα(1/(1-α))λ1-α(si+1-si)∑j=1Ni|xj|α|Hij∩B|.
Hence,
(63)E[|Wi|1{|Wi|≤λ}]≤cα*σα(1/(1-α))λ1-α(si+1-si)∑j=1NiE|βij|α|Hij∩B|.
From (59), (60), and (63), it follows that
(64)II≤cα*σα(1/(1-α))λ-αE∫0T∫B|X(s,x)|αdxds.
Case 2 (α>1). We have
(65)II≤P(max0≤l≤m-1|∑i=0lXi|>λ/2)+P(max0≤l≤m-1∑i=0lYi>λ/2)=:II′+II′′,
where Xi=Wi1{|Wi|≤λ}-E[Wi1{|Wi|≤λ}∣ℱsi] and Yi=|E[Wi1{|Wi|≤λ}∣ℱsi]|.
We first treat the term II′. Note that {Ml=∑i=0lXi,ℱsl+1;0≤l≤m-1} is a zero-mean square integrable martingale, and
(66)II′=P(max0≤l≤m-1|Ml|>λ/2)≤(4/λ2)∑i=0m-1E(Xi2)≤(4/λ2)∑i=0m-1E[Wi21{|Wi|≤λ}].
Let Ui=∑j=1NixjZij. Using (57) and Remark A.2 (Appendix A), we get
(67)E[Ui21{|Ui|≤λ}]≤2cα*σα(1/(2-α))λ2-α(si+1-si)∑j=1Ni|xj|α|Hij∩B|.
As in Case 1, we obtain that
(68)E[Wi21{|Wi|≤λ}]≤cα*σα(2/(2-α))λ2-α(si+1-si)∑j=1NiE|βij|α|Hij∩B|,
and hence
(69)II′≤8cα*σα(1/(2-α))λ-αE∫0T∫B|X(s,x)|αdxds.
We now treat II′′. Note that {Nl=∑i=0lYi,ℱsl+1;0≤l≤m-1} is a submartingale and hence, by the submartingale inequality,
(70)II′′≤(2/λ)E(Nm-1)=(2/λ)∑i=0m-1E(Yi).
To evaluate E(Yi), we note that, for almost all ω∈Ω,
(71)E[Wi1{|Wi|≤λ}∣ℱsi](ω)=E[∑j=1Niβij(ω)Zij1{|∑j=1Niβij(ω)Zij|≤λ}],
due to the independence between β¯i and Z¯i. We let Ui=∑j=1NixjZij with xj=βij(ω). Since α>1, E(Ui)=0. Using (57) and Remark A.2, we obtain
(72)|E[Ui1{|Ui|≤λ}]|=|E[Ui1{|Ui|>λ}]|≤E[|Ui|1{|Ui|>λ}]≤cα*σα(α/(α-1))λ1-α(si+1-si)∑j=1Ni|xj|α|Hij∩B|.
Hence, E(Yi)≤cα*σα(α/(α-1))λ1-α(si+1-si)∑j=1NiE|βij|α|Hij∩B| and
(73)II′′≤cα*σα(2α/(α-1))λ-αE∫0T∫B|X(t,x)|αdxdt.
Case 3 (α=1). In this case we assume that β=0. Hence, Ui=∑j=1NixjZij has a symmetric distribution for any x¯∈ℝNi. Using (71), it follows that E[Wi1{|Wi|≤λ}∣ℱsi]=0 a.s. for all i=0,…,m-1. Hence, {Ml=∑i=0lWi1{|Wi|≤λ},ℱsl+1;0≤l≤m-1} is a zero-mean square integrable martingale. By the martingale maximal inequality,
(74)II≤1λ2E[Mm-12]=1λ2∑i=0m-1E[Wi21{|Wi|≤λ}].
The result follows using (68).
We now proceed to the construction of the stochastic integral. If Y={Y(t)}t≥0 is a jointly measurable random process, we define
(75)∥Y∥α,Tα=supλ>0λαP(supt∈[0,T]|Y(t)|>λ).
Let X∈ℒα be arbitrary. By Proposition 12, there exists a sequence (Xn)n≥1 of simple functions such that ∥Xn-X∥α→0 as n→∞. Let T>0 and B∈ℬb(ℝd) be fixed. By linearity of the integral and Theorem 13,
(76)∥I(Xn)(·,B)-I(Xm)(·,B)∥α,Tα≤cα∥Xn-Xm∥α,T,Bα⟶0,
as n,m→∞. In particular, the sequence {I(Xn)(·,B)}n is Cauchy in probability in the space D[0,T] equipped with the sup-norm. Therefore, there exists a random element Y(·,B) in D[0,T] such that, for any λ>0,
(77)P(supt∈[0,T]|I(Xn)(t,B)-Y(t,B)|>λ)⟶0.
Moreover, there exists a subsequence (nk)k such that
(78)supt∈[0,T]|I(Xnk)(t,B)-Y(t,B)|⟶0a.s.
as k→∞. Hence, Y(t,B) is ℱt-measurable for any t∈[0,T]. The process Y(·,B) does not depend on the sequence (Xn)n and can be extended to a càdlàg process on [0,∞), which is unique up to indistinguishability. We denote this extension by I(X)(·,B) and we write
(79)I(X)(t,B)=∫0t∫BX(s,x)Z(ds,dx).
If A and B are disjoint sets in ℬb(ℝd), then
(80)I(X)(t,A∪B)=I(X)(t,A)+I(X)(t,B)a.s.
Lemma 14.
Inequality (46) holds for any X∈ℒα.
Proof.
Let (Xn)n be a sequence of simple functions such that ∥Xn-X∥α→0. For fixed B, we denote I(X)=I(X)(·,B). We let ∥·∥∞ be the sup-norm on D[0,T]. For any ɛ>0, we have
(81)P(∥I(X)∥∞>λ)≤P(∥I(X)-I(Xn)∥∞>λɛ)+P(∥I(Xn)∥∞>λ(1-ɛ)).
Multiplying by λα and using Theorem 13, we obtain
(82)supλ>0λαP(∥I(X)∥∞>λ)≤ε-αsupλ>0λαP(∥I(X)-I(Xn)∥∞>λ)+(1-ɛ)-αcα∥Xn∥α,T,Bα.
Let n→∞. Using (76) one can prove that supλ>0λαP(∥I(Xn)-I(X)∥∞>λ)→0. We obtain that supλ>0λαP(∥I(X)∥∞>λ)≤(1-ɛ)-αcα∥X∥α,T,Bα. The conclusion follows letting ɛ→0.
For an arbitrary Borel set 𝒪⊂ℝd (possibly 𝒪=ℝd), we assume, in addition, that X∈ℒα satisfies the condition:
(83)E∫0T∫𝒪|X(t,x)|αdxdt<∞,∀T>0.
Then we can define I(X)(·,𝒪) as follows. Let 𝒪k=𝒪∩Ek where (Ek)k is an increasing sequence of sets in ℬb(ℝd) such that ⋃kEk=ℝd. By (80), Lemma 14, and (83),
(84)supλ>0λαP(supt≤T|I(X)(t,𝒪k)-I(X)(t,𝒪l)|>λ)≤cαE∫0T∫𝒪k∖𝒪l|X(t,x)|αdxdt⟶0,
as k,l→∞. This shows that {I(X)(·,𝒪k)}k is a Cauchy sequence in probability in the space D[0,T] equipped with the sup-norm. We denote by I(X)(·,𝒪) its limit. As above, this process can be extended to [0,∞) and I(X)(t,𝒪) is ℱt-measurable for any t>0. We denote
(85)I(X)(t,𝒪)=∫0t∫𝒪X(s,x)Z(ds,dx).
Similarly, to Lemma 14, one can prove that, for any X∈ℒα satisfying (83),
(86)supλ>0λαP(supt≤T|I(X)(t,𝒪)|>λ)≤cαE∫0T∫𝒪|X(t,x)|αdxdt.
5. The Truncated Noise
For the study of nonlinear equations, we need to develop a theory of stochastic integration with respect to another process ZK which is defined by removing from Z the jumps whose modulus exceeds a fixed value K>0. More precisely, for any B∈ℬb(ℝ+×ℝd), we define
(87)ZK(B)=∫B×{0<|z|≤K}zN(ds,dx,dz),ifα≤1,
(88)ZK(B)=∫B×{0<|z|≤K}zN^(ds,dx,dz),ifα>1.
We treat separately the cases α≤1 and α>1.
5.1. The Case α≤1
Note that {ZK(B);B∈ℬb(ℝ+×ℝd)} is an independently scattered random measure on ℝ+×ℝd with characteristic function given by
(89)E(eiuZK(B))=exp{|B|∫|z|≤K(eiuz-1)να(dz)},∀u∈ℝ.
We first examine the tail of ZK(B).
Lemma 15.
For any set B∈ℬb(ℝ+×ℝd),
(90)supλ>0λαP(|ZK(B)|>λ)≤rα|B|,
where rα>0 is a constant depending only on α (given by Lemma A.3).
Proof.
This follows from Example 3.7 of [1]. We denote by να,K the restriction of να to {z∈ℝ;0<|z|≤K}. Note that
(91)να,K({z∈ℝ;|z|>t})={t-α-K-α,if0<t≤K,0,ift>K,
and hence supt>0tανα,K({z∈ℝ;|z|>t})=1. Next we observe that we do not need to assume that the measure να,K is symmetric since we use a modified version of Lemma 2.1 of [24] given by Lemma A.3 (Appendix A).
In fact, since the tail of να,K vanishes if t>K, we can obtain another estimate for the tail of ZK(B) which, together with (90), will allow us to control its pth moment for p∈(α,1). This new estimate is given below.
Lemma 16.
If α<1, then
(92)P(|ZK(B)|>u)≤(α/(1-α))K1-α|B|u-1,∀u>K.
If α=1, then P(|ZK(B)|>u)≤K|B|u-2 for all u>K.
Proof.
We use the same idea as in Example 3.7 of [1]. For each k≥1, let Zk,K(B) be a random variable with characteristic function:
(93)E(eiuZk,K(B))=exp{|B|∫{k-1<|z|≤K}(eiuz-1)να(dz)}.
Since {Zk,K(B)}k converges in distribution to ZK(B), it suffices to prove the lemma for Zk,K(B). Let μk be the restriction of να to {z;k-1<|z|≤K}. Since μk is finite, Zk,K(B) has a compound Poisson distribution with
(94)P(|Zk,K(B)|>u)=e-|B|μk(ℝ)∑n≥0|B|nn!μk*n({z;|z|>u}),
where μk*n denotes the n-fold convolution. Note that
(95)μk*n({z;|z|>u})=[μk(ℝ)]nP(|∑i=1nηi|>u),
where (ηi)i≥1 are i.i.d. random variables with law μk/μk(ℝ).
Assume first that α<1. To compute P(|∑i=1nηi|>u) we consider the intersection with the event {max1≤i≤n|ηi|>u} and its complement. Note that P(|ηi|>u)=0 for any u>K. Using this fact and Markov’s inequality, we obtain that, for any u>K,
(96)P(|∑i=1nηi|>u)≤P(|∑i=1nηi1{|ηi|≤u}|>u)≤1u∑i=1nE(|ηi|1{|ηi|≤u}).
Note that P(|ηi|>s)≤(s-α-K-α)/μk(ℝ) if s≤K. Hence, for any u>K,
(97)E(|ηi|1{|ηi|≤u})≤∫0uP(|ηi|>s)ds=∫0KP(|ηi|>s)ds≤1μk(ℝ)α1-αK1-α.
Combining all these facts, we get that, for any u>K,
(98)μk*n({z;|z|>u})≤[μk(ℝ)]n-1α1-αK1-αnu-1,
and the conclusion follows from (94).
Assume now that α=1. In this case, E(ηi1{|ηi|≤u})=0 since ηi has a symmetric distribution. Using Chebyshev’s inequality this time, we obtain
(99)P(|∑i=1nηi|>u)≤P(|∑i=1nηi1{|ηi|≤u}|>u)≤1u2∑i=1nE(ηi21{|ηi|≤u}).
The result follows as above using the fact that, for any u>K,
(100)E(ηi21{|ηi|≤u})≤2∫0usP(|ηi|>s)ds=2∫0KsP(|ηi|>s)ds≤1μk(ℝ)K.
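The bound of Lemma 16 for α<1 has an explicit constant, so it can be tested by simulation. The sketch below uses the same symmetric truncated compound-Poisson approximation of ZK(B) as in the proof (jumps in (ε,K]); the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, K, eps, areaB = 0.5, 1.0, 0.01, 1.0   # illustrative parameters
mass = eps**(-alpha) - K**(-alpha)

# compound-Poisson approximation of Z_K(B), jumps in (eps, K], symmetric signs
counts = rng.poisson(areaB * mass, size=200_000)
r = rng.random(counts.sum())
s = (K**(-alpha) + r * mass) ** (-1.0 / alpha)
vals = np.where(rng.random(s.size) < 0.5, -s, s)
Z = np.bincount(np.repeat(np.arange(counts.size), counts), weights=vals,
                minlength=counts.size)

# Lemma 16 (alpha < 1): P(|Z_K(B)| > u) <= alpha/(1-alpha) * K^{1-alpha} * |B| / u for u > K
for level in (1.5, 2.0, 3.0):
    p_hat = np.mean(np.abs(Z) > level)
    bound = alpha / (1 - alpha) * K**(1 - alpha) * areaB / level
    assert p_hat <= bound
```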
Lemma 17.
If α<1 then
(101)E|ZK(B)|p≤Cα,pKp-α|B|foranyp∈(α,1),
where Cα,p is a constant depending on α and p. If α=1, then
(102)E|ZK(B)|p≤CpKp-1|B|foranyp∈(1,2),
where Cp is a constant depending on p.
Proof.
Note that
(103)E|ZK(B)|p=∫0∞P(|ZK(B)|p>t)dt=p∫0∞P(|ZK(B)|>u)up-1du.
We consider separately the integrals for u≤K and u>K. For the first integral we use (90):
(104)∫0KP(|ZK(B)|>u)up-1du≤rα|B|∫0Ku-α+p-1du=rα|B|1p-αKp-α.
For the second one we use Lemma 16: if α<1 then
(105)∫K∞P(|ZK(B)|>u)up-1du≤α1-αK1-α|B|∫K∞up-2du=α(1-α)(1-p)|B|Kp-α,
and if α=1, then
(106)∫K∞P(|ZK(B)|>u)up-1du≤K|B|∫K∞up-3du=|B|12-pKp-1.
We now proceed to the construction of the stochastic integral with respect to ZK. For this, we use the same method as for Z. Note that ℱtZK⊂ℱt, where ℱtZK is the σ-field generated by ZK([0,s]×A) for all s∈[0,t] and A∈ℬb(ℝd). For any B∈ℬb(ℝd), we will work with a càdlàg modification of the Lévy process {ZK(t,B)=ZK([0,t]×B);t≥0}.
If X is a simple process given by (40), we define
(107)IK(X)(t,B)=∫0t∫BX(s,x)ZK(ds,dx)
by the same formula (44) with Z replaced by ZK. The following result shows that IK(X)(t,B) has the same tail behavior as I(X)(t,B).
Proposition 18.
If X is a bounded simple process then
(108)supλ>0λαP(supt∈[0,T]|IK(X)(t,B)|>λ)≤dαE∫0T∫B|X(t,x)|αdxdt,
for any T>0 and B∈ℬb(ℝd), where dα is a constant depending only on α.
Proof.
As in the proof of Theorem 13, it is enough to prove that
(109)P(maxl=0,…,m-1|∑i=0l∑j=1NiβijZij*|>λ)≤dαλ-α∑i=0m-1(si+1-si)∑j=1NiE|βij|α|Hij∩B|,
where Zij*=ZK((si,si+1]×(Hij∩B)). This reduces to showing that Ui*=∑j=1NixjZij* satisfies an inequality similar to (57) for any x¯∈ℝNi; that is,
(110)P(|Ui*|>λ)≤dα*λ-α(si+1-si)∑j=1Ni|xj|α|Hij∩B|,
for any λ>0, for some dα*>0. We first examine the tail of Zij*. By (90),
(111)P(|Zij*|>λ)≤rα(si+1-si)Kijλ-α,
where Kij=|Hij∩B|. Letting ηij=Kij-1/αZij*, we obtain that, for any u>0,
(112)P(|ηij|>u)≤rα(si+1-si)u-α,∀j=1,…,Ni.
By Lemma A.3 (Appendix A), it follows that, for any λ>0,
(113)P(|∑j=1Nibjηij|>λ)≤rα2(si+1-si)∑j=1Ni|bj|αλ-α,
for any sequence (bj)j=1,…,Ni of real numbers. Inequality (110) (with dα*=rα2) follows by applying this to bj=xjKij1/α.
In view of the previous result and Proposition 12, for any process X∈ℒα, we can construct the integral
(114)IK(X)(t,B)=∫0t∫BX(s,x)ZK(ds,dx)
in the same manner as I(X)(t,B), and this integral satisfies (108). If in addition the process X∈ℒα satisfies (83), then we can define the integral IK(X)(t,𝒪) for an arbitrary Borel set 𝒪⊂ℝd (possibly 𝒪=ℝd). This integral will satisfy an inequality similar to (108) with B replaced by 𝒪.
The appealing feature of IK(X)(t,B) is that we can control its moments, as shown by the next result.
Theorem 19.
If α<1, then for any p∈(α,1) and for any X∈ℒp,
(115)E|IK(X)(t,B)|p≤Cα,pKp-αE∫0t∫B|X(s,x)|pdxds,
for any t>0 and B∈ℬb(ℝd), where Cα,p is a constant depending on α, p. If 𝒪⊂ℝd is an arbitrary Borel set and we assume, in addition, that the process X∈ℒp satisfies
(116)E∫0T∫𝒪|X(s,x)|pdxds<∞,∀T>0,
then inequality (115) holds with B replaced by 𝒪.
Proof.
Consider the following steps.
Step 1. Suppose that X is an elementary process of the form (39). Then IK(X)(t,B)=YZK(H) where H=(t∧a,t∧b]×(A∩B). Note that ZK(H) is independent of ℱa. Hence, ZK(H) is independent of Y. Let PY denote the law of Y. By Fubini’s theorem,
(117)E|YZK(H)|p=p∫0∞P(|YZK(H)|>u)up-1du=p∫ℝ(∫0∞P(|yZK(H)|>u)up-1du)PY(dy).
We evaluate the inner integral. We split this integral into two parts, for u≤K|y| and u>K|y|, respectively. For the first integral, we use (90). For the second one, we use Lemma 16. Therefore, the inner integral is bounded by
(118)rα|y|α|H|∫0K|y|u-α+p-1du+α1-α|y|K1-α|H|∫K|y|∞up-2du=Cα,p′Kp-α|y|p|H|.
Hence,
E|YZK(H)|p≤pCα,p′Kp-α|H|E|Y|p=Cα,pKp-αE∫0t∫B|X(s,x)|pdxds.
Step 2. Suppose now that X is a simple process of the form (40). Then X(t,x)=∑i=0N-1∑j=1miXij(t,x), where Xij(t,x)=1(ti,ti+1](t)1Aij(x)Yij.
Using the linearity of the integral, the inequality |a+b|p≤|a|p+|b|p, and the result obtained in Step 1 for the elementary processes Xij, we get
(119)E|IK(X)(t,B)|p≤E∑i=0N-1∑j=1mi|IK(Xij)(t,B)|p≤Cα,pKp-αE∑i=0N-1∑j=1mi∫0t∫B|Xij(s,x)|pdxds=Cα,pKp-αE∫0t∫B|X(s,x)|pdxds.
Step 3. Let X∈ℒp be arbitrary. By Proposition 12, there exists a sequence (Xn)n of bounded simple processes such that ∥Xn-X∥p→0. Since α<p, it follows that ∥Xn-X∥α→0. By the definition of IK(X)(t,B), there exists a subsequence {nk}k such that {IK(Xnk)(t,B)}k converges to IK(X)(t,B) a.s. Using Fatou’s lemma and the result obtained in Step 2 (for the simple processes Xnk), we get
(120)E|IK(X)(t,B)|p≤liminfk→∞E|IK(Xnk)(t,B)|p≤Cα,pKp-αliminfk→∞E∫0t∫B|Xnk(s,x)|pdxds=Cα,pKp-αE∫0t∫B|X(s,x)|pdxds.
Step 4. Suppose that X∈ℒp satisfies (116). Let 𝒪k=𝒪∩Ek, where (Ek)k is an increasing sequence of sets in ℬb(ℝd) such that ⋃k≥1Ek=ℝd. By the definition of IK(X)(t,𝒪), there exists a subsequence (ki)i such that {IK(X)(t,𝒪ki)}i converges to IK(X)(t,𝒪) a.s. Using Fatou’s lemma, the result obtained in Step 3 (for B=𝒪ki), and the monotone convergence theorem, we get
(121)E|IK(X)(t,𝒪)|p≤liminfi→∞E|IK(X)(t,𝒪ki)|p≤Cα,pKp-αliminfi→∞E∫0t∫𝒪ki|X(s,x)|pdxds=Cα,pKp-αE∫0t∫𝒪|X(s,x)|pdxds.
Remark 20.
Finding a similar moment inequality for the cases α=1 and p∈(1,2) remains an open problem. The argument used in Step 2 above relies on the fact that p<1. Unfortunately, we could not find another argument to cover the case p>1.
5.2. The Case α>1
In this case, the construction of the integral with respect to ZK relies on an integral with respect to N^ which exists in the literature. We recall briefly the definition of this integral. For more details, see Section 1.2.2 of [6], Section 24.2 of [25], or Section 8.7 of [12].
Let 𝔼=ℝd×(ℝ∖{0}) endowed with the measure μ(dx,dz)=dxνα(dz) and let ℬb(𝔼) be the class of bounded Borel sets in 𝔼. For a simple process Y={Y(t,x,z);t≥0,(x,z)∈𝔼}, the integral IN^(Y)(t,B) is defined in the usual way, for any t>0, B∈ℬb(𝔼). The process IN^(Y)(·,B) is a (càdlàg) zero-mean square-integrable martingale with quadratic variation
(122)[IN^(Y)(·,B)]t=∫0t∫B|Y(s,x,z)|2N(ds,dx,dz)
and predictable quadratic variation
(123)〈IN^(Y)(·,B)〉t=∫0t∫B|Y(s,x,z)|2να(dz)dxds.
By approximation, this integral can be extended to the class of all 𝒫×ℬ(ℝ∖{0})-measurable processes Y such that, for any T>0 and B∈ℬb(𝔼),
(124)∥Y∥2,T,B2≔E∫0T∫B|Y(s,x,z)|2να(dz)dxds<∞.
The integral is a martingale with the same quadratic variations as above and has the isometry property: E|IN^(Y)(t,B)|2=∥Y∥2,t,B2. If, in addition, ∥Y∥2,T,𝔼<∞, then the integral can be extended to 𝔼. By the Burkholder-Davis-Gundy inequality for discontinuous martingales, for any p≥1,
(125)Esupt≤T|IN^(Y)(t,𝔼)|p≤CpE[IN^(Y)(·,𝔼)]Tp/2.
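The isometry property above can be illustrated numerically in the special case Y(s,x,z)=z1{|z|≤K}, which is exactly ZK(B) for α>1. The sketch assumes a symmetric να with tail t^-α (so the compensator term vanishes and ZK(B) is just the jump sum) and discards jumps below a cutoff ε, comparing against the variance of the ε-truncated measure; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, K, eps, areaB = 1.5, 1.0, 0.01, 1.0   # now alpha > 1; illustrative values
mass = eps**(-alpha) - K**(-alpha)

n = 4000
counts = rng.poisson(areaB * mass, size=n)    # ~1000 jumps per sample
r = rng.random(counts.sum())
s = (K**(-alpha) + r * mass) ** (-1.0 / alpha)
vals = np.where(rng.random(s.size) < 0.5, -s, s)   # symmetric: compensator is zero
Z = np.bincount(np.repeat(np.arange(n), counts), weights=vals, minlength=n)

# isometry with Y(s,x,z) = z 1_{eps<|z|<=K}:
# E|Z_K(B)|^2 = |B| * int z^2 nu_alpha(dz) = |B| * alpha*(K^{2-alpha}-eps^{2-alpha})/(2-alpha)
var_theo = areaB * alpha * (K**(2 - alpha) - eps**(2 - alpha)) / (2 - alpha)
assert abs(Z.mean()) < 0.1                        # compensated integral has zero mean
assert abs(Z.var() - var_theo) / var_theo < 0.1   # second-moment isometry
```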
The previous inequality is not suitable for our purposes. A more convenient inequality can be obtained for another stochastic integral, constructed for p∈[1,2] fixed, as suggested on page 293 of [6]. More precisely, one can show that, for any bounded simple process Y,
(126)Esupt≤T|IN^(Y)(t,𝔼)|p≤CpE∫0T∫ℝd∫ℝ∖{0}|Y(t,x,z)|pνα(dz)dxdt=:|Y|p,T,𝔼p,
where Cp is the constant appearing in (125) (see Lemma 8.22 of [12]).
By the usual procedure, the integral can be extended to the class of all 𝒫×ℬ(ℝ∖{0})-measurable processes Y such that |Y|p,T,𝔼<∞. The integral is defined as an element of the space Lp(Ω;D[0,T]) and will be denoted by
(127)IN^,p(Y)(t,𝔼)=∫0t∫ℝd∫ℝ∖{0}Y(s,x,z)N^(ds,dx,dz).
Its appealing feature is that it satisfies inequality (126).
From now on, we fix p∈[1,2]. Based on (88), for any B∈ℬb(ℝd), we let
(128)IK(X)(t,B)=∫0t∫BX(s,x)ZK(ds,dx)=∫0t∫B∫{|z|≤K}X(s,x)zN^(ds,dx,dz),
for any predictable process X={X(t,x);t≥0,x∈ℝd} for which the rightmost integral is well defined. Letting Y(t,x,z)=X(t,x)z1{0<|z|≤K}, we see that this is equivalent to saying that p>α and X∈ℒp. By (126),
(129)Esupt≤T|IK(X)(t,B)|p≤Cα,pKp-αE∫0T∫B|X(s,x)|pdxds,
where Cα,p=Cpα/(p-α). If, in addition, the process X∈ℒp satisfies (116) then (129) holds with B replaced by 𝒪, for an arbitrary Borel set 𝒪⊂ℝd.
Note that (129) is the counterpart of (115) for the case α>1. Together, these two inequalities will play a crucial role in Section 6.
Table 1 summarizes all the conditions.
Conditions for IK(X)(t,B) to be well defined:

                      α<1                            α>1
B bounded             X∈ℒα                           X∈ℒp for some p∈(α,2]
B=𝒪 unbounded        X∈ℒα and X satisfies (83)      X∈ℒp for some p∈(α,2] and X satisfies (116)
6. The Main Result
In this section, we state and prove the main result regarding the existence of a mild solution of (1). For this result, 𝒪 is a bounded domain in ℝd. For any t>0, we denote
(130)Jp(t)=supx∈𝒪∫𝒪G(t,x,y)pdy.
Theorem 21.
Let α∈(0,2), α≠1. Assume that, for any T>0,
(131)limh→0∫0T∫𝒪|G(t,x,y)-G(t+h,x,y)|pdydt=0,∀x∈𝒪,
(132)lim|h|→0∫0T∫𝒪|G(t,x,y)-G(t,x+h,y)|pdydt=0,∀x∈𝒪,
(133)∫0TJp(t)dt<∞,
for some p∈(α,1) if α<1, or for some p∈(α,2] if α>1. Then (1) has a mild solution. Moreover, there exists a sequence (τK)K≥1 of stopping times with τK↑∞ a.s. such that, for any T>0 and K≥1,
(134)sup(t,x)∈[0,T]×𝒪E(|u(t,x)|p1{t≤τK})<∞.
Example 22 (heat equation).
Let L=∂/∂t-(1/2)Δ. Then G(t,x,y)≤G¯(t,x-y), where G¯(t,x) is the fundamental solution of Lu=0 on ℝd. Condition (133) holds if p<1+2/d. If α<1, this condition holds for any p∈(α,1). If α>1, it holds for any p∈(α,1+2/d], as long as α satisfies (6). Conditions (131) and (132) follow from the continuity of G in t and x, by applying the dominated convergence theorem. To justify the application of this theorem, we use the trivial bound (2πt)-dp/2 for both G(t+h,x,y)p and G(t,x+h,y)p, which introduces the extra condition dp<2. Unfortunately, we could not find another argument for proving these two conditions. (In the case of the heat equation on ℝd, Lemmas A.2 and A.3 of [6] estimate the integrals appearing in (132) and (131), with p=1 in (131); these arguments rely on the structure of G¯ and cannot be used when 𝒪 is a bounded domain.)
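The threshold p<1+2/d in Example 22 can be checked directly for d=1: with G¯(t,x)=(2πt)^(-d/2)exp(-|x|²/(2t)), one computes ∫G¯(t,x)^p dx=(2πt)^(-d(p-1)/2)p^(-d/2), so Jp(t) is bounded by a constant times t^(-d(p-1)/2), which is integrable on [0,T] exactly when d(p-1)/2<1. A short numerical confirmation of this closed form (parameter values illustrative):

```python
import numpy as np

# heat kernel on R (d=1) for L = d/dt - (1/2)*Laplacian
d, p, t = 1, 1.5, 0.3
x = np.linspace(-20.0, 20.0, 200_001)
G = (2 * np.pi * t) ** (-0.5) * np.exp(-x**2 / (2 * t))

# numerical integral of G^p vs the closed form (2*pi*t)^{-(p-1)/2} * p^{-1/2}
num = float(np.sum((G[1:]**p + G[:-1]**p) * np.diff(x)) / 2.0)   # trapezoid rule
ana = (2 * np.pi * t) ** (-(p - 1) / 2) * p ** (-0.5)
assert abs(num - ana) / ana < 1e-5

# time-integrability of J_p requires the exponent d*(p-1)/2 to be < 1, i.e. p < 1 + 2/d
assert d * (p - 1) / 2 < 1
```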
Example 23 (parabolic equations).
Let L=∂/∂t-ℒ, where ℒ is given by (31). Assuming (32), we see that (133) holds if p<1+2/d. The same comments as for the heat equation apply here as well. (Although in a different framework, a condition similar to (131) was probably used in the proof of Theorem 12.11 of [12] (page 217) for the claim lims→tE|J3(X)(s)-J3(X)(t)|Lp(𝒪)p=0. We could not see how to justify this claim unless dp<2.)
Example 24 (heat equation with fractional power of the Laplacian).
Let L=∂/∂t+(-Δ)γ for some γ>0. By Lemma B.23 of [12], if α>1, then condition (133) holds for any p∈(α,1+2γ/d), provided that α satisfies (36). (This condition is the same as in Theorem 12.19 of [12], which examines the same equation using the approach based on Hilbert-space-valued solutions.)
To verify conditions (131) and (132), we use the continuity of G in t and x and apply the dominated convergence theorem. To justify the application of this theorem, we use the trivial bound Cd,γt-dp/(2γ) for both G(t+h,x,y)p and G(t,x+h,y)p, which introduces the extra condition dp<2γ. This bound can be seen from (33), using the fact that 𝒢(t,x,y)≤𝒢¯(t,x-y) where 𝒢 and 𝒢¯ are the fundamental solutions of ∂u/∂t-Δu=0 on 𝒪 and ℝd, respectively. (In the case of the same equation on ℝd, elementary estimates for the time and space increments of G¯ can be obtained directly from (35), as on page 196 of [26]. These arguments cannot be used when 𝒪 is a bounded domain.)
The remaining part of this section is dedicated to the proof of Theorem 21. The idea is to solve first the equation with the truncated noise ZK (yielding a mild solution uK) and then identify a sequence (τK)K≥1 of stopping times with τK↑∞ a.s. such that, for any t>0, x∈𝒪, and L>K, uK(t,x)=uL(t,x) a.s. on the event {t≤τK}. The final step is to show that the process u defined by u(t,x)=uK(t,x) on {t≤τK} is a mild solution of (1). A similar method can be found in Section 9.7 of [12], which uses an approach based on stochastic integration of operator-valued processes with respect to Hilbert-space-valued processes, different from our approach.
Since σ is a Lipschitz function, there exists a constant Cσ>0 such that
(135)|σ(u)-σ(v)|≤Cσ|u-v|,∀u,v∈ℝ.
In particular, letting Dσ=Cσ∨|σ(0)|, we have
(136)|σ(u)|≤Dσ(1+|u|),∀u∈ℝ.
For the proof of Theorem 21, we need a specific construction of the Poisson random measure N, taken from [13]. We review briefly this construction.
Let (𝒪k)k≥1 be a partition of ℝd with sets in ℬb(ℝd) and let (Uj)j≥1 be a partition of ℝ∖{0} such that να(Uj)<∞ for all j≥1. We may take Uj=Γj-1 for all j≥1. Let (Eij,k,Xij,k,Zij,k)i,j,k≥1 be independent random variables defined on a probability space (Ω,ℱ,P), such that
(137)P(Eij,k>t)=e-λj,kt,P(Xij,k∈B)=|B∩𝒪k||𝒪k|,P(Zij,k∈Γ)=|Γ∩Uj||Uj|,
where λj,k=|𝒪k|να(Uj). Let Tij,k=∑l=1iElj,k for all i≥1. Then
(138)N=∑i,j,k≥1δ(Tij,k,Xij,k,Zij,k)
is a Poisson random measure on ℝ+×ℝd×(ℝ∖{0}) with intensity dtdxνα(dz).
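A minimal sketch of the construction (137)-(138), restricted to a single cell 𝒪k×Uj with an illustrative rate λj,k: the arrival times Tij,k are partial sums of i.i.d. exponentials, so the number of points falling in [0,t]×𝒪k×Uj should be Poisson with mean tλj,k. The values below are hypothetical, chosen only for the check.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, t, reps = 3.0, 2.0, 20_000   # lam stands in for |O_k| * nu_alpha(U_j)

# exponential gaps E_i -> arrival times T_i = E_1 + ... + E_i; count arrivals <= t
gaps = rng.exponential(1.0 / lam, size=(reps, 60))   # 60 >> t*lam, enough arrivals
T = np.cumsum(gaps, axis=1)
counts = (T <= t).sum(axis=1)

# N([0,t] x O_k x U_j) ~ Poisson(t*lam): mean and variance both equal t*lam = 6
assert abs(counts.mean() - t * lam) < 0.1
assert abs(counts.var() - t * lam) < 0.3
```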
This section is organized as follows. In Section 6.1 we prove the existence of the solution of the equation with truncated noise ZK. Sections 6.2 and 6.3 contain the proof of Theorem 21 when α<1 and α>1, respectively.
6.1. The Equation with Truncated Noise
In this section, we fix K>0 and we consider the equation:
(139)Lu(t,x)=σ(u(t,x))Z˙K(t,x),t>0,x∈𝒪
with zero initial conditions and Dirichlet boundary conditions. A mild solution of (139) is a predictable process u which satisfies (2) with Z replaced by ZK. For the next result, 𝒪 can be a bounded domain in ℝd or 𝒪=ℝd (with no boundary conditions).
Theorem 25.
Under the assumptions of Theorem 21, (139) has a unique mild solution u={u(t,x);t≥0,x∈𝒪}. For any T>0,
(140)sup(t,x)∈[0,T]×𝒪E|u(t,x)|p<∞,
and the map (t,x)↦u(t,x) is continuous from [0,T]×𝒪 into Lp(Ω).
Proof.
We use the same argument as in the proof of Theorem 13 of [27], based on a Picard iteration scheme. We define u0(t,x)=0 and
(141)un+1(t,x)=∫0t∫𝒪G(t-s,x,y)σ(un(s,y))ZK(ds,dy)
for any n≥0. We prove by induction on n≥0 that (i) un(t,x) is well defined; (ii) Kn(t):=sup(s,x)∈[0,t]×𝒪E|un(s,x)|p<∞ for any t>0; (iii) un(t,x) is ℱt-measurable for any t>0 and x∈𝒪; (iv) the map (t,x)↦un(t,x) is continuous from [0,T]×𝒪 into Lp(Ω) for any T>0.
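The structure of the Picard scheme (141) can be illustrated on a toy, purely temporal analogue, with a hypothetical exponential kernel g in place of the paper's Green function G and a σ with a small Lipschitz constant so that the iteration visibly contracts; the truncated noise is again approximated by its compound-Poisson part with jumps in (ε,K]. All names and parameter values here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, K, eps, T, M = 0.5, 1.0, 0.01, 1.0, 400   # all values illustrative
dt = T / M
t = np.arange(1, M + 1) * dt

# truncated-noise increments dZ_K per time cell (compound-Poisson part, jumps in (eps, K])
mass = eps**(-alpha) - K**(-alpha)
dZ = np.zeros(M)
for m in range(M):
    c = rng.poisson(dt * mass)
    r = rng.random(c)
    jumps = (K**(-alpha) + r * mass) ** (-1.0 / alpha)
    dZ[m] = np.sum(np.where(rng.random(c) < 0.5, -jumps, jumps))

g = lambda r: np.exp(-r)            # hypothetical Green kernel (NOT the paper's G)
sigma = lambda v: 0.1 * np.cos(v)   # Lipschitz sigma, small constant -> contraction

# Picard iteration: u_{n+1}(t) = int_0^t g(t-s) sigma(u_n(s)) dZ_K(s)
u_n = np.zeros(M)
diffs = []
for _ in range(12):
    u_next = np.array([np.sum(g(t[m] - t[:m + 1]) * sigma(u_n[:m + 1]) * dZ[:m + 1])
                       for m in range(M)])
    diffs.append(np.max(np.abs(u_next - u_n)))
    u_n = u_next

# successive iterates converge: the increments decay (geometrically, for this noise)
assert diffs[-1] < 1e-3 and diffs[-1] <= diffs[0]
```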
The statement is trivial for n=0. For the induction step, assume that the statement is true for n. By an extension to random fields of Theorem 30, Chapter IV of [28], un has a jointly measurable modification. Since this modification is (ℱt)t-adapted (in the sense of (iii)), it has a predictable modification (using an extension of Proposition 3.21 of [12] to random fields). We work with this modification, that we call also un.
We prove that (i)–(iv) hold for un+1. To show (i), it suffices to prove that Xn∈ℒp, where Xn(s,y)=1[0,t](s)G(t-s,x,y)σ(un(s,y)). By (136) and (133),
(142)E∫0t∫𝒪|Xn(s,y)|pdyds≤Dσp2p-1(1+Kn(t))∫0tJp