Numerical Solutions of Stochastic Differential Delay Equations with Poisson Random Measure under the Generalized Khasminskii-Type Conditions

Abstract and Applied Analysis


Introduction
To take into consideration stochastic effects such as corporate defaults, operational failures, market crashes, or central bank announcements in financial markets, research on stochastic differential equations (SDEs) with Poisson random measure (see [1, 2]) is important, since Merton initiated the model of such equations in 1976 (see [3]). Because the rate of change of a financial dynamical system may depend on its past history, the stochastic differential delay equation (SDDE) with Poisson random measure (see [4, 5]), the case we propose and consider in this work, is meaningful.
Since there is no explicit solution for an SDDE with Poisson random measure, one needs, in general, numerical methods, which can be classified into strong approximations and weak approximations (see [6-8]). We here give an overview of the results on the strong approximations of differential equations driven by a Wiener process and a Poisson random measure. Platen [9] presented a convergence theorem with order γ ∈ {0.5, 1, 1.5, ...} and originally introduced the jump-adapted methods, which are based on all the jump times. Moreover, Bruti-Liberati and Platen further studied both jump-adapted and regular strong discretization schemes for such equations.

Problem's Setting

In this paper, unless otherwise specified, we use the following notation. Let |·| be the Euclidean norm in R^d, d ∈ N. Let u1 ∨ u2 = max{u1, u2} and u1 ∧ u2 = min{u1, u2}. If A is a vector or matrix, its transpose is denoted by A^T. If A is a matrix, its trace norm is denoted by |A| = √(trace(A^T A)). Let τ > 0 and R₊ = [0, ∞). Let C([−τ, 0]; R^d) denote the family of continuous functions ϕ from [−τ, 0] to R^d with the norm ‖ϕ‖ = sup_{−τ≤θ≤0} |ϕ(θ)|. Denote by C(R^d; R₊) the family of continuous functions from R^d to R₊ and by C²(R^d; R₊) the family of twice continuously differentiable R₊-valued functions on R^d. ⌊z⌋ denotes the largest integer less than or equal to z ∈ R, and I_A denotes the indicator function of a set A.

The following d-dimensional SDDE with Poisson random measure is considered in our paper:

dx(t) = a(x(t−), x(t − τ−)) dt + b(x(t−), x(t − τ−)) dW(t) + ∫_ε c(x(t−), x(t − τ−), v) p̃_φ(dv × dt), (2.1)

for t > 0, where p̃_φ(dv × dt) := p_φ(dv × dt) − φ(dv) dt and x(t−) denotes lim_{s↑t} x(s). The initial data of (2.1) is given by

{x(t) : −τ ≤ t ≤ 0} = ξ(t) ∈ C([−τ, 0]; R^d), (2.2)

where x(−τ−) = x(−τ). The drift coefficient a : R^d × R^d → R^d, the diffusion coefficient b : R^d × R^d → R^{d×m₀}, and the jump coefficient c : R^d × R^d × ε → R^d are assumed to be Borel measurable functions, and the coefficients are sufficiently smooth.

The randomness in (2.1) is generated by the following (see [8]). An m₀-dimensional Wiener process W = {W(t) = (W¹(t), ..., W^{m₀}(t))^T} with independent scalar components is defined on a filtered probability space (Ω^W, F^W, (F^W_t)_{t≥0}, P^W). A Poisson random measure p_φ(ω, dv × dt) is defined on Ω^J × ε × [0, ∞), where ε ⊆ R^r \ {0} with r ∈ N, and its deterministic compensated measure is φ(dv) dt = λ f(v) dv dt; here f(v) is a probability density, and we require finite intensity λ = φ(ε) < ∞. The Poisson random measure is defined on a filtered probability space (Ω^J, F^J, (F^J_t)_{t≥0}, P^J). The process x(t) is thus defined on a product space (Ω, F, (F_t)_{t≥0}, P), where Ω = Ω^W × Ω^J, F = F^W × F^J, F_t = F^W_t × F^J_t, and P = P^W × P^J, and F₀ contains all P-null sets. The Wiener process and the Poisson random measure are mutually independent.

To state the generalized Khasminskii-type conditions, we define the operator LV : R^d × R^d → R by

LV(x, y) = V_x(x) a(x, y) + (1/2) trace(b^T(x, y) V_{xx}(x) b(x, y)) + ∫_ε [V(x + c(x, y, v)) − V(x) − V_x(x) c(x, y, v)] φ(dv), (2.3)

where V ∈ C²(R^d; R₊), V_x(x) = (∂V(x)/∂x₁, ..., ∂V(x)/∂x_d), and V_{xx}(x) = (∂²V(x)/∂x_i ∂x_j)_{d×d}.

Now the generalized Khasminskii-type conditions are given by the following assumptions.
Assumption 2.1. For each integer k ≥ 1, there exists a positive constant C_k, dependent on k, such that

|a(x, y) − a(x̄, ȳ)|² ∨ |b(x, y) − b(x̄, ȳ)|² ∨ ∫_ε |c(x, y, v) − c(x̄, ȳ, v)|² φ(dv) ≤ C_k (|x − x̄|² + |y − ȳ|²)

for all x, y, x̄, ȳ ∈ R^d with |x| ∨ |y| ∨ |x̄| ∨ |ȳ| ≤ k.

Assumption 2.2. There exist a function V ∈ C²(R^d; R₊) and two positive constants μ1 and μ2 such that

LV(x, y) ≤ μ1(1 + V(x)) + μ2(1 + V(y)) for all (x, y) ∈ R^d × R^d, (2.6)

and

lim_{|x|→∞} V(x) = ∞. (2.7)

Assumption 2.3. There exist a positive constant C and an integer q ≥ 1 such that

|a(x, y)|² ∨ |b(x, y)|² ∨ ∫_ε |c(x, y, v)|² φ(dv) ≤ C(1 + |x|^{2q} + |y|^{2q}) for all (x, y) ∈ R^d × R^d.

Assumption 2.4. There exists a positive constant L such that the initial data (2.2) satisfies

|ξ(t) − ξ(s)| ≤ L|t − s|^{1/2}, −τ ≤ s < t ≤ 0.
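For instance, for the quadratic function V(x) = |x|² (the choice used again in the numerical example of Section 5), the operator (2.3) can be evaluated in closed form. The following display is a routine check of the definition, using the trace norm |b|² = trace(bᵀb) fixed in the notation above:

```latex
% With V(x) = |x|^2 one has V_x(x) = 2x^T and V_{xx}(x) = 2 I_d, and the
% elementary identity |x + c|^2 - |x|^2 - 2 x^T c = |c|^2 collapses the
% jump integrand, so that the operator (2.3) becomes
\begin{aligned}
  LV(x, y)
    &= 2 x^{T} a(x, y)
       + \operatorname{trace}\!\bigl(b^{T}(x, y)\, b(x, y)\bigr)
       + \int_{\varepsilon} \lvert c(x, y, v) \rvert^{2}\, \varphi(dv) \\
    &= 2 x^{T} a(x, y) + \lvert b(x, y) \rvert^{2}
       + \int_{\varepsilon} \lvert c(x, y, v) \rvert^{2}\, \varphi(dv).
\end{aligned}
```

In this case, verifying Assumption 2.2 for a concrete equation reduces to a one-sided polynomial estimate on the coefficients.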

The Existence of Global Solutions
In this section, we analyze the existence and properties of the global solution to (2.1) under Assumptions 2.1, 2.2, and 2.4.
In order to demonstrate the existence of the global solution to (2.1), we recall the following concepts, mainly according to [17, 18].
Definition 2.5. Let {x(t)}_{t≥−τ} be an R^d-valued stochastic process. The process is said to be càdlàg if it is right continuous and, for almost all ω ∈ Ω, the left limit lim_{s↑t} x(s) exists and is finite for all t > −τ.

Definition 2.6. Let σ_∞ be a stopping time such that 0 ≤ σ_∞ ≤ T a.s. An R^d-valued, F_t-adapted, càdlàg process {x(t) : −τ ≤ t < σ_∞} is called a local solution of (2.1) if x(t) = ξ(t) on t ∈ [−τ, 0] and, moreover, there is a nondecreasing sequence {σ_k}_{k≥1} of stopping times such that 0 ≤ σ_k ↑ σ_∞ a.s. and the truncated equation (2.16) holds on t ∈ [0, T] with initial data x_k(t) = ξ(t) on t ∈ [−τ, 0].

Obviously, the truncated equation (2.16) satisfies the global Lipschitz conditions and the linear growth conditions. Therefore, according to [4], there is a unique global solution x_k(t) to (2.16), and this solution is a càdlàg process (see [17]). We define the stopping time σ_k = inf{t ∈ [0, T] : |x_k(t)| ≥ k}, where we set inf ∅ = ∞ as usual (∅ denotes the empty set throughout our paper). We can easily see that {σ_k}_{k≥1} is a nondecreasing sequence, and we then let lim_{k→∞} σ_k = σ_∞ a.s. Now, we define {x(t) : −τ ≤ t < σ_∞} with x(t) = ξ(t) on t ∈ [−τ, 0] and x(t) = x_k(t) on t ∈ [σ_{k−1}, σ_k) for k ≥ 1, where σ_0 = 0. From (2.16) and (2.19), we can also obtain x(t ∧ σ_k) = x_k(t ∧ σ_k). To show the uniqueness of the solution to (2.1), let {x̄(t) : −τ ≤ t < σ̄_∞} be another maximal local solution. By the same proof as that of Theorem 2.8 in [17], we infer that, for each k ≥ 1, x(t ∧ σ_k) = x̄(t ∧ σ_k) a.s. on t ∈ [0, T]. (2.23)

Hence, letting k → ∞, we get x(t) = x̄(t) a.s. for −τ ≤ t < σ_∞ ∧ σ̄_∞, which gives the uniqueness.

Theorem 2.8. Under Assumptions 2.1, 2.2, and 2.4, there exists a unique global solution x(t), t ∈ [−τ, ∞), to (2.1).

Proof. Our proof is divided into the following steps.

Step 1. For any integer k ≥ L√τ + |ξ(0)| + 1 and 0 ≤ t ≤ τ, by integrating, taking expectations, and using Assumption 2.2 in (2.25), we get (2.29); by the Gronwall inequality (see [18]), this leads to (2.30). Therefore, from (2.30), we have the assertion of this step, where (2.34) and (2.35) are used.

Moreover, letting k → ∞ in (2.41), we then get (2.45). Therefore, from (2.38), (2.44), and (2.45), we have (2.46).

Step 3. For any i ∈ N, we repeat the analysis above and obtain (2.48).

So we can get P(σ_∞ = ∞) = 1, and the required result follows.
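The construction above rests on truncating the coefficients so that the global Lipschitz and linear growth conditions hold inside each ball of radius k. A minimal sketch of this device follows; the ball-projection and the cubic drift below are illustrative choices, not transcribed from the paper's equation (2.16):

```python
import numpy as np

def project(z, k):
    """Project z onto the closed ball of radius k (identity inside the ball)."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    r = np.linalg.norm(z)
    return z if r <= k else z * (k / r)

def a(x, y):
    """Illustrative locally Lipschitz drift with superlinear growth."""
    return -y - 4.0 * x**3

def a_k(x, y, k):
    """Truncated drift: globally Lipschitz, agrees with a inside the ball."""
    return a(project(x, k), project(y, k))

# outside the ball of radius k the coefficient is frozen at its boundary value
print(a_k(10.0, 0.0, 2.0), a_k(2.0, 0.0, 2.0))
```

Because a_k agrees with a on the ball of radius k, the solution of the truncated equation coincides with the solution of the original equation up to the first exit time σ_k, which is exactly how the local solutions are patched together above.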
In the following lemma, we show that the solution of (2.1) remains in a compact set with a large probability.

Lemma 2.9. Under Assumptions 2.1, 2.2, and 2.4, for any pair of ϵ ∈ (0, 1) and T > 0, there exists a sufficiently large integer k*, dependent on ϵ and T, such that P(σ_{k*} ≤ T) ≤ ϵ, where σ_k is defined in Lemma 2.7.
Proof. According to Theorem 2.8, we can get the required moment bound for i large enough that iτ ≥ T and k ≥ L√τ + |ξ(0)| + 1. Therefore, under (2.7) in Assumption 2.2, there exists a sufficiently large integer k* such that (2.53) holds. So we complete the proof.
The Euler Method

Given a stepsize Δt ∈ (0, 1) such that τ = mΔt for some integer m, we set the grid points t_n = nΔt, n ≥ −m. The discrete Euler method produces approximations X_n ≈ x(t_n) by (3.2), and Z(t) = X_n for t ∈ [t_n, t_{n+1}) denotes the corresponding step process. The continuous-time Euler method X(t) on t ∈ [−τ, ∞) is then defined by (3.4) for t ≥ 0. Actually, we can see in [11] that p_φ = {p_φ(t) := p_φ(ε × [0, t])} is a process that counts the number of jumps until some given time. The Poisson random measure p_φ(dv × dt) generates a sequence of pairs {(ι_i, ξ_i), i ∈ {1, 2, ..., p_φ(T)}} for a given finite positive constant T if λ < ∞.
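The description above can be sketched numerically: conditional on the jump count p_φ(T), the jump times of a Poisson process are uniform order statistics on [0, T], and each mark ξ_i is drawn from f. A minimal sketch follows; the intensity λ = 5 and the lognormal mark density echo the example of Section 5, while the lognormal parameters (0, 1) and the horizon T = 1 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_random_measure_path(lam, T, mark_sampler):
    """Simulate the pairs (iota_i, xi_i) generated by a Poisson random
    measure with intensity lam and mark density f on [0, T]."""
    n_jumps = rng.poisson(lam * T)                       # p_phi(T): jumps on [0, T]
    jump_times = np.sort(rng.uniform(0.0, T, n_jumps))   # iota_1 <= ... <= iota_n
    marks = mark_sampler(n_jumps)                        # xi_i distributed as f
    return jump_times, marks

# lam = 5 with lognormal marks, mirroring the numerical example of Section 5
times, marks = poisson_random_measure_path(
    5.0, 1.0, lambda n: rng.lognormal(0.0, 1.0, n))
print(len(times), len(marks))
```

Jump-adapted schemes insert exactly these times ι_i into the time grid, whereas the regular scheme below keeps a fixed grid and only counts the jumps falling in each step.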

In order to analyze the Euler method, we will give two lemmas. The first lemma shows the close relation between the continuous-time Euler solution (3.4) and its step function Z(t).

Lemma 3.1. Suppose that Assumptions 2.1 and 2.3 hold. Then, for any T > 0, there exists a positive constant K₁(k), dependent on the integer k and independent of Δt, such that for all Δt ∈ (0, 1) the continuous-time Euler method (3.4) satisfies (3.7).

Therefore, by taking expectations, applying the Cauchy-Schwarz inequality, and using the martingale properties of dW(t) and p̃_φ(dv × dt), we get (3.9), where the inequality (3.10) is used.

Hence, by substituting (3.10) into (3.9), we get (3.11). So, from Assumption 2.3, we can get the result (3.7) by choosing K₁(k) appropriately.

In the following lemma, we demonstrate that the solution of the continuous-time Euler method (3.4) remains in a compact set with a large probability.

Lemma 3.2. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for any pair of ϵ ∈ (0, 1) and T > 0, there exist a sufficiently large integer k* and a sufficiently small Δt₁* such that P(ρ_{k*} ≤ T) ≤ ϵ, where ρ_{k*} is defined in Lemma 3.1.
Proof. Our proof is completed by the following steps.
Step 1. Applying Itô's formula (see [1]) to V(X(t)), for t ≥ 0, we have

where Assumptions 2.1 and 2.2 are used and L_k is a positive constant, dependent on the integer k and the intensity λ and independent of Δt. Therefore, from (3.16), Assumption 2.4, and (3.7) in Lemma 3.1, we obtain (3.17). Hence, by taking expectations and integrating in (3.14), applying the martingale properties of dW(t) and p̃_φ(dv × dt), and then using (3.17) and Assumption 2.2, we obtain (3.20).

By the Gronwall inequality (see [18]), this gives (3.21). Moreover, from (3.19) and (3.21), we have (3.22).

Step 3. For 0 ≤ t ≤ 2τ and k ≥ L√τ + |ξ(0)| + 1, it follows from (3.18) that

In the same way as in Step 2, we can obtain

Step 4. By repeating the arguments of Steps 2 and 3, we get the corresponding bound, where β₁ and β₂ are two constants dependent on μ1, μ2, τ, and T and independent of k and Δt. Therefore, we have (3.30), where

Now, for any ϵ ∈ (0, 1), we can choose a sufficiently large integer k* such that (3.32) holds and a sufficiently small Δt₁* such that (3.33) holds. So, from (3.30), we can obtain (3.34), which completes the proof.
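Lemma 3.1 asserts that the second moment of the gap X(t) − Z(t) over one step is of order Δt. The dominant contribution comes from the Brownian increment, and the rate can be checked empirically. The following Monte Carlo sketch estimates the one-step moment for two stepsizes; the diffusion coefficient is frozen at 1, and the sample sizes and stepsizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def gap_moment(dt, n_paths=2000, sub=16):
    """Monte Carlo estimate of E sup_{t in [0, dt)} |W(t) - W(0)|^2,
    the dominant term of X(t) - Z(t) on one Euler step (coefficient 1)."""
    # simulate W on a fine subgrid of one step and take the running maximum
    dW = rng.normal(0.0, np.sqrt(dt / sub), size=(n_paths, sub))
    W = np.cumsum(dW, axis=1)
    return np.mean(np.max(W**2, axis=1))

e1 = gap_moment(0.01)
e2 = gap_moment(0.0025)
print(e1, e2)
```

The second estimate comes out roughly a quarter of the first, consistent with the O(Δt) bound of Lemma 3.1.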

Convergence in Probability
In this section, we show the convergence in probability of the Euler method for (2.1) over a finite time interval [0, T]; the argument is based on the following lemma.
Lemma 4.1. Under Assumptions 2.1, 2.3, and 2.4, for any T > 0, there exists a positive constant K₂(k), dependent on k and independent of Δt, such that for all Δt ∈ (0, 1) the solution of (2.1) and the continuous-time Euler method (3.4) satisfy (4.1).

Proof. From (2.1) and (3.4), for any 0 ≤ t ≤ T and k ≥ L√τ + |ξ(0)| + 1, we have (4.2), where the inequality (4.3) is used.

Moreover, by using the martingale properties of dW(t) and p̃_φ(dv × dt), Assumptions 2.1 and 2.4, Fubini's theorem, and Lemma 3.1, we have (4.4).

Therefore, by substituting (4.3) and (4.4) into (4.2), we get (4.5).

So, by using the Gronwall inequality (see [18]), we have the result (4.1) by choosing K₂(k) appropriately.

Now, we state our main theorem, which shows the convergence in probability of the continuous-time Euler method (3.4).

Theorem 4.2. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for sufficiently small ϵ, ς ∈ (0, 1), there is a Δt* such that, for all Δt < Δt*,

P(|x(t) − X(t)|² ≥ ς, 0 ≤ t ≤ T) ≤ ϵ, (4.8)

for any T > 0.

Proof. For sufficiently small ϵ, ς ∈ (0, 1), we define Ω̄ = {ω : |x(t) − X(t)|² ≥ ς for some 0 ≤ t ≤ T}. By Lemmas 2.9 and 3.2, there exists a pair of k and Δt₁ such that

P(σ_k ≤ T) ≤ ϵ/3, P(ρ_k ≤ T) ≤ ϵ/3. (4.9)

We then have P(Ω̄) ≤ P(Ω̄ ∩ {σ_k ∧ ρ_k > T}) + P(σ_k ≤ T) + P(ρ_k ≤ T), and the first term can be made smaller than ϵ/3 for all sufficiently small Δt by the Chebyshev inequality and Lemma 4.1, which gives the assertion (4.8).

We remark that the continuous-time Euler solution X(t) cannot be computed, since it requires knowledge of the entire Brownian motion and Poisson random measure paths. So the next theorem demonstrates the convergence in probability of the discrete Euler solution (3.2).

Theorem 4.3. Under Assumptions 2.1, 2.2, 2.3, and 2.4, for sufficiently small ϵ, ς ∈ (0, 1), there is a Δt* such that, for all Δt < Δt*,

P(|x(t) − Z(t)|² ≥ ς, 0 ≤ t ≤ T) ≤ ϵ, (4.14)

for any T > 0.

Proof. For any 0 ≤ t ≤ T, there is an integer n such that t ∈ [t_n, t_{n+1}). Thus it follows from (3.2) that (4.15) holds. By the same analysis as in the proof of Theorem 4.2, we have (4.16). Recalling that (4.17) holds for all sufficiently small Δt and using the Cauchy-Schwarz inequality, Assumptions 2.1 and 2.4, Fubini's theorem, and Lemmas 3.1 and 4.1, we obtain

P(Ω̄ ∩ {σ_k ∧ ρ_k > T}) ≤ ϵ/3, (4.18)

for sufficiently small Δt, where now Ω̄ = {ω : |x(t) − Z(t)|² ≥ ς for some 0 ≤ t ≤ T}. So the inequalities above demonstrate

P(Ω̄) ≤ ϵ, (4.19)

and hence we complete the result (4.14).

Numerical Example

In this section, a numerical example is analyzed under Assumptions 2.1, 2.2, 2.3, and 2.4, which cover many highly nonlinear SDDEs driven by Poisson random measure. Consider the scalar equation

dx(t) = [−x(t − 0.05−) − 4x³(t−)] dt + 3x²(t − 0.05−) dW(t) + ∫_ε v x²(t − 0.05−) p̃_φ(dv × dt), t > 0,
x(t) = t², t ∈ [−0.05, 0], (5.1)

where d = m₀ = r = 1. The compensated measure of the Poisson random measure p_φ(dv × dt) is given by φ(dv) dt = λ f(v) dv dt, where λ = 5 and f(v) is the density function of a lognormal random variable. Clearly, the equation cannot satisfy the global Lipschitz conditions, the linear growth conditions, or the classical Khasminskii-type conditions; however, the local Lipschitz conditions are satisfied. On the other hand, for V(x) = |x|², the generalized Khasminskii-type condition holds with μ1 = 60 and μ2 = 8. In other words, the equation satisfies Assumptions 2.1, 2.2, 2.3, and 2.4. So, according to Theorem 2.8, (5.1) has a unique global solution x(t) on t ∈ [−0.05, ∞). Given the stepsize Δt = 0.0025, we apply the Euler method to (5.1) with initial values X_n = (nΔt)² for n = −20, −19, ..., −1, 0, where ΔW_n = W(t_{n+1}) − W(t_n). In the Matlab experiment, we actually obtain the discrete Euler solution, where each mark ξ_i is distributed according to f(v). Subsequently, we can get the result in Theorems 4.2 and 4.3.
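As a concrete illustration, the discrete Euler scheme for an equation of the form (5.1) can be sketched as follows. The drift, diffusion, and jump coefficients are transcribed from the example as reconstructed above; the lognormal mark parameters (0, 1), the horizon T = 1, and the random seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Model (5.1): a(x, y) = -y - 4x^3, b(x, y) = 3y^2, c(x, y, v) = v*y^2,
# delay tau = 0.05, intensity lam = 5, lognormal(0, 1) marks (assumed).
tau, lam, T, dt = 0.05, 5.0, 1.0, 0.0025
m = int(round(tau / dt))          # 20 grid points per delay interval
N = int(round(T / dt))
mark_mean = np.exp(0.5)           # E[v] for a standard lognormal mark

X = np.empty(N + m + 1)
for n in range(-m, 1):            # initial data X_n = (n*dt)^2 on [-tau, 0]
    X[n + m] = (n * dt) ** 2

for n in range(N):
    x, y = X[n + m], X[n]         # X(t_n) and the delayed value X(t_n - tau)
    dW = rng.normal(0.0, np.sqrt(dt))
    n_jumps = rng.poisson(lam * dt)
    marks = rng.lognormal(0.0, 1.0, n_jumps)
    # compensated jump increment: sum of c over the jumps minus its compensator
    jump_part = marks.sum() * y**2 - lam * dt * mark_mean * y**2
    X[n + m + 1] = x + (-y - 4.0 * x**3) * dt + 3.0 * y**2 * dW + jump_part

print(X[m], X[-1])                # X(0) and the terminal value X(T)
```

With the step Δt = 0.0025 the delay window contains exactly m = 20 grid points, matching the initial segment X_{−20}, ..., X_0, and the compensator term realizes the compensated measure p̃_φ in the scheme.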