Maximum Principle for Stochastic Recursive Optimal Control Problems Involving Impulse Controls

Abstract and Applied Analysis


Introduction
Nonlinear backward stochastic differential equations (BSDEs for short) were first introduced by Pardoux and Peng [1]. Independently, Duffie and Epstein [2] introduced BSDEs in an economic context. In [2], they presented a stochastic recursive utility, an extension of the standard additive utility in which the instantaneous utility depends not only on the instantaneous consumption rate but also on the future utility. In fact, it corresponds to the solution of a particular BSDE whose generator does not depend on the variable z. Later, El Karoui et al. [3] gave the formulation of recursive utilities from the BSDE point of view. The problem in which the cost functional of the control system is described by the solution of a BSDE is called the stochastic recursive optimal control problem. In this case, the control systems become forward-backward stochastic differential equations (FBSDEs).
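To fix ideas, the recursive utility mentioned above can be sketched (in standard notation, not quoted from [2] or [3]) as the solution of a BSDE whose generator is independent of z:

```latex
% A sketch of a Duffie-Epstein recursive utility: the generator f depends
% on the consumption rate c_t and the future utility Y_t, but not on Z_t.
-\,dY_t = f(c_t, Y_t)\,dt - Z_t\,dB_t, \qquad Y_T = \xi,
% equivalently, in conditional-expectation form,
Y_t = \mathbb{E}\Big[\,\xi + \int_t^T f(c_s, Y_s)\,ds \;\Big|\; \mathcal{F}_t\Big].
```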
One fundamental research direction for optimal control problems is to establish necessary optimality conditions, that is, the Pontryagin maximum principle. The stochastic maximum principle for forward, backward, and forward-backward systems has been studied by many authors, including Peng [4, 5], Tang and Li [6], Wang and Yu [7], Wu [8], and Xu [9] for the full information case and Huang et al. [10], Wang and Wu [11], Wang and Yu [12], and Wu [13] for the partial information case. In these papers, however, only regular controls appear in the control systems; impulse controls are not included.
Stochastic impulse control problems have received considerable research attention in recent years due to their wide applicability in a number of different areas, especially in mathematical finance; see, for example, [14–17]. In most cases, the optimal impulse control problem was studied through the dynamic programming principle. It was shown in particular that the value function is a solution of some quasi-variational inequalities.
The first result on the stochastic maximum principle for singular control problems was obtained by Cadenillas and Haussmann [18], under linear dynamics, a convex cost criterion, and a convex state constraint. Bahlali and Chala [19] generalized [18] to the nonlinear dynamics case with a convex state constraint. Bahlali and Mezerdi [20] considered a stochastic singular control problem in which the control system is governed by a stochastic differential equation where the regular control enters the diffusion coefficient and the control domain is not necessarily convex; the stochastic maximum principle was obtained with the approach developed by Peng [4]. Dufour and Miller [21] studied a stochastic singular control problem in which the admissible control is of bounded variation. It is worth pointing out that the control systems in these works are stochastic differential equations with singular control, and few examples are given to illustrate the theoretical results. Wu and Zhang [22] were the first to study stochastic optimal control problems of forward-backward systems involving impulse controls; they obtained both the maximum principle and sufficient optimality conditions for the optimal control problem.
In this paper, we continue the study of stochastic optimal control problems involving impulse controls, in which the control system is described by a forward-backward stochastic differential equation and the control variable consists of a regular control and an impulse control. Different from [22], it is assumed here that the domain of the regular controls is not necessarily convex and that the control variable does not enter the diffusion coefficient. Thus the result of this paper and that of [22] do not contain each other. We obtain the stochastic maximum principle by using a spike variation on the regular part of the control and a convex perturbation on the impulse part. Sufficient optimality conditions are also obtained, which can help to find the optimal control in applications.
The rest of this paper is organized as follows. In Section 2 we give some preliminary results and the formulation of our stochastic optimal control problem. In Section 3 we obtain the maximum principle for this problem. Sufficient optimality conditions are established in Section 4, where an example of a linear-quadratic optimization problem is also given to illustrate the applications of our theoretical results.

Formulation of the Stochastic Optimal Control Problem
Firstly we introduce some notation. Let (Ω, F, P) be a probability space and E the expectation with respect to P. Let T be a finite time horizon and (F_t) the natural filtration of a d-dimensional standard Brownian motion {B_t, 0 ≤ t ≤ T}, augmented by the P-null sets of F. For n ∈ N and p > 1, denote by S^p(R^n) the set of n-dimensional adapted processes {ϕ_t, 0 ≤ t ≤ T} such that E[sup_{0≤t≤T} |ϕ_t|^p] < ∞, and denote by H^p(R^n) the set of n-dimensional adapted processes {ψ_t, 0 ≤ t ≤ T} such that E[(∫_0^T |ψ_t|^2 dt)^{p/2}] < ∞.

Let U be a nonempty subset of R^k and K a nonempty convex subset of R^n. Let {τ_i} be a given sequence of increasing F_t-stopping times such that τ_i ↑ ∞ as i → ∞. We denote by I the class of right-continuous processes η(·) = Σ_{i≥1} η_i 1_{[τ_i, T]}(·) such that each η_i is an F_{τ_i}-measurable random variable. It is worth noting that the assumption τ_i ↑ ∞ implies that at most finitely many impulses may occur on [0, T]. Denote by U the class of adapted processes v : [0, T] × Ω → U such that E[sup_{0≤t≤T} |v_t|^3] < ∞, and denote by K the class of K-valued impulse processes η(·) ∈ I such that E(Σ_{i≥1} |η_i|)^3 < ∞. We call A := U × K the admissible control set. In what follows, for a continuous function l(·), the integral ∫_0^T l(t) dη_t is understood as

\int_0^T l(t)\,d\eta_t = \sum_{0\le\tau_i\le T} l(\tau_i)\,\eta_i.  (2.1)

Given η(·) ∈ I and x ∈ R^n, we consider the following SDE with impulses:

dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t + C_t\,d\eta_t, \qquad X_0 = x,  (2.2)

where b : [0, T] × Ω × R^n → R^n, σ : [0, T] × Ω × R^n → R^{n×d}, and C : [0, T] → R^{n×n} are measurable mappings. Similar to [22, Proposition 2.1], we have the following.

Proposition 2.1. Let C be continuous and b, σ uniformly Lipschitz in x. Assume that b(·, 0) ∈ H^p(R^n), σ(·, 0) ∈ H^p(R^{n×d}), and E(Σ_{i≥1} |η_i|)^p < ∞ for some p ≥ 2. Then SDE (2.2) admits a unique solution X(·) ∈ S^p(R^n).

For η(·) ∈ I, let us consider the following BSDE with impulses:

dY_t = -f(t, Y_t, Z_t)\,dt + Z_t\,dB_t - D_t\,d\eta_t, \qquad Y_T = \zeta,  (2.3)

where ζ is F_T-measurable, and f : [0, T] × Ω × R^m × R^{m×d} → R^m and D : [0, T] → R^{m×n} are measurable mappings. Similar to [22, Proposition 2.2], we have the following.

Proposition 2.2. Let D be continuous and f Lipschitz in (y, z). Assume that E|ζ|^p < ∞, E(Σ_{i≥1} |η_i|)^p < ∞, and f(·, 0, 0) ∈ H^p(R^m) for some p ≥ 2. Then BSDE (2.3) admits a unique solution (Y(·), Z(·)) ∈ S^p(R^m) × H^p(R^{m×d}).

The control system of our stochastic optimal control problem is subject to the following FBSDE:

dx^{v,\eta}_t = b(t, x^{v,\eta}_t, v_t)\,dt + \sigma(t, x^{v,\eta}_t)\,dB_t + C_t\,d\eta_t,
dy^{v,\eta}_t = -f(t, x^{v,\eta}_t, y^{v,\eta}_t, z^{v,\eta}_t, v_t)\,dt + z^{v,\eta}_t\,dB_t - D_t\,d\eta_t,
x^{v,\eta}_0 = a \in R^n, \qquad y^{v,\eta}_T = g(x^{v,\eta}_T),  (2.4)

where b : [0, T] × R^n × U → R^n, σ : [0, T] × R^n → R^{n×d}, f : [0, T] × R^n × R^m × R^{m×d} × U → R^m, g : R^n → R^m are measurable mappings, and C : [0, T] → R^{n×n}, D : [0, T] → R^{m×n} are continuous functions. The objective is to minimize the following cost functional over the class A:

J(v(\cdot), \eta(\cdot)) = E\Big[\varphi(x^{v,\eta}_T) + \gamma(y^{v,\eta}_0) + \int_0^T h(t, x^{v,\eta}_t, y^{v,\eta}_t, v_t)\,dt + \sum_{i\ge 1} l(\tau_i, \eta_i)\Big],  (2.5)

where φ : R^n → R, γ : R^m → R, h : [0, T] × R^n × R^m × U → R, and l : [0, T] × R^n → R are measurable mappings. In what follows we assume the following.

(H1) b, σ, f, g are continuous and continuously differentiable in (x, y, z), with derivatives continuous and uniformly bounded. Moreover, b and f have linear growth in (x, y, z, v).

(H2) φ, γ, h, l are continuous and continuously differentiable in (x, y, η), with derivatives continuous and bounded by c(1 + |x|), c(1 + |y|), c(1 + |x| + |y| + |v|), and c(1 + |η|), respectively. Moreover, we assume |h(t, 0, 0, v)| ≤ c(1 + |v|) for any (t, v).

From Propositions 2.1 and 2.2, it follows that FBSDE (2.4) admits a unique solution (x(·), y(·), z(·)) ∈ S^3(R^n) × S^3(R^m) × H^3(R^{m×d}) for any (v(·), η(·)) ∈ A, and the functional J is well defined.
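As a purely illustrative sketch (not part of the paper), the impulsive SDE (2.2) can be discretized by an Euler–Maruyama scheme in which each impulse η_i adds a jump C(τ_i)η_i at time τ_i; here the stopping times are taken deterministic and the coefficients scalar for simplicity:

```python
import numpy as np

def euler_impulsive_sde(b, sigma, C, impulse_times, impulses,
                        x0, T=1.0, n_steps=1000, rng=None):
    """Euler-Maruyama sketch for the impulsive SDE (2.2),
    dX_t = b(t, X_t) dt + sigma(t, X_t) dB_t + C_t d(eta_t),
    in one dimension, with deterministic impulse times for simplicity:
    each impulse eta_i adds a jump C(tau_i) * eta_i at time tau_i <= T."""
    rng = np.random.default_rng(rng)
    dt = T / n_steps
    x = float(x0)
    t = 0.0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        x += b(t, x) * dt + sigma(t, x) * dB
        # add the jumps of the impulse process falling in (t, t + dt]
        for tau, eta in zip(impulse_times, impulses):
            if t < tau <= t + dt:
                x += C(tau) * eta
        t += dt
    return x

# Example: mean-reverting drift, constant diffusion, two impulses
x_T = euler_impulsive_sde(
    b=lambda t, x: -0.5 * x,
    sigma=lambda t, x: 0.2,
    C=lambda t: 1.0,
    impulse_times=[0.3, 0.7],
    impulses=[0.5, -0.2],
    x0=1.0,
    rng=42,
)
```

With b ≡ σ ≡ 0 the scheme simply accumulates the impulses, which matches the definition of the integral in (2.1).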

Stochastic Maximum Principle for the Optimal Control Problem
Let (u(·), ξ(·) = Σ_{i≥1} ξ_i 1_{[τ_i, T]}(·)) ∈ A be an optimal control and (x^{u,ξ}(·), y^{u,ξ}(·), z^{u,ξ}(·)) the corresponding trajectory. We introduce the spike variation with respect to u(·) as follows:

u^\varepsilon_t = \begin{cases} v, & \text{if } \tau \le t \le \tau + \varepsilon,\\ u_t, & \text{otherwise,} \end{cases}  (3.1)

where τ ∈ [0, T] is an arbitrarily fixed time, ε > 0 is a sufficiently small constant, and v is an arbitrary U-valued F_τ-measurable random variable such that E|v|^3 < ∞. Let η(·) ∈ I be such that ξ(·) + η(·) ∈ K. Then it is easy to check that ξ^ε(·) := ξ(·) + εη(·), 0 ≤ ε ≤ 1, is also an element of K. Let us denote by (x^ε(·), y^ε(·), z^ε(·)) the trajectory associated with (u^ε(·), ξ^ε(·)). For convenience, denote ϕ(t) = ϕ(t, x^{u,ξ}_t, y^{u,ξ}_t, z^{u,ξ}_t, u_t) and ϕ^{u^ε}(t) = ϕ(t, x^{u,ξ}_t, y^{u,ξ}_t, z^{u,ξ}_t, u^ε_t) for ϕ = b, σ, f, h, b_x, σ_x, f_x, f_y, f_z, h_x, h_y. In what follows, we use c to denote a positive constant which can be different from line to line. Let us introduce the following FBSDE, called the variational equation:

dx^1_t = [b_x(t)\,x^1_t + b^{u^\varepsilon}(t) - b(t)]\,dt + \sigma_x(t)\,x^1_t\,dB_t + \varepsilon C_t\,d\eta_t,
dy^1_t = -[f_x(t)\,x^1_t + f_y(t)\,y^1_t + f_z(t)\,z^1_t + f^{u^\varepsilon}(t) - f(t)]\,dt + z^1_t\,dB_t - \varepsilon D_t\,d\eta_t,
x^1_0 = 0, \qquad y^1_T = g_x(x^{u,\xi}_T)\,x^1_T.  (3.2)

By Propositions 2.1 and 2.2, FBSDE (3.2) admits a unique solution (x^1(·), y^1(·), z^1(·)) ∈ S^3(R^n) × S^3(R^m) × H^3(R^{m×d}). Similar to [9, Lemma 1], we can easily obtain the following.
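The purpose of the variational equation (a standard feature of spike-variation arguments; the following display is a heuristic summary, not a statement quoted from the paper) is that (x^1, y^1, z^1) captures the first-order effect of the perturbation:

```latex
% Heuristic first-order expansions behind (3.2): the perturbed trajectory
% differs from the optimal one by the variational processes up to o(epsilon),
x^{\varepsilon}_t \approx x^{u,\xi}_t + x^1_t, \qquad
y^{\varepsilon}_t \approx y^{u,\xi}_t + y^1_t, \qquad
z^{\varepsilon}_t \approx z^{u,\xi}_t + z^1_t,
% with the approximation errors quantified by the estimates of Lemma 3.2.
```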
We proceed to give the following lemma.
Proof. It is easy to check that

3.10
From Hölder's inequality, Lemma 3.1, and the dominated convergence theorem, it follows that

3.11
Since b_x is uniformly bounded, by Lemma 3.1 we get

3.12
Thus we obtain sup_{0≤t≤T} E|∫_0^t A^ε_s ds|^2 ≤ C_ε ε^2. In the same way we can get the analogous estimate for the remaining term. Hence, the estimate (3.4) is proved.
Now we prove (3.5) and (3.6). Set

3.14
It is easy to obtain

3.16
We have

3.17
Since g_x is uniformly bounded, it follows from (3.4) that E|I|^2 ≤ c E|X^ε_T|^2 ≤ C_ε ε^2. Since g_x is continuous and uniformly bounded, from Lemma 3.1 and the dominated convergence theorem it follows that

Consequently, from Lemma 3.1 and the dominated convergence theorem, it follows that (3.20) holds. Since f_x, f_y, and f_z are uniformly bounded, similar to the proof of Lemma 1 in [9] for the BSDE part, we can obtain (3.5) and (3.6) by the iterative method.
We are now ready to state the variational inequality.

3.23
From Lemmas 3.1 and 3.2, it follows that

3.25
Similarly we get

3.27
Since h_x and h_y have linear growth, it follows from Lemma 3.2 and Hölder's inequality that

3.28
By Lemma 3.1 and the dominated convergence theorem, we have the corresponding limit. Consequently,

3.32
The variational inequality follows from (3.25)–(3.32). Now we introduce the following FBSDE, called the adjoint equation:

dp_t = [f_y^*(t)\,p_t - h_y^*(t)]\,dt + f_z^*(t)\,p_t\,dB_t,
dq_t = [f_x^*(t)\,p_t - b_x^*(t)\,q_t - \sigma_x^*(t)\,k_t - h_x^*(t)]\,dt + k_t\,dB_t,
p_0 = -\gamma_y^*(y^{u,\xi}_0), \qquad q_T = -g_x^*(x^{u,\xi}_T)\,p_T + \varphi_x^*(x^{u,\xi}_T).  (3.33)
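The adjoint processes enter the proof through the usual duality computation; the following display is only a sketch of that standard step, not quoted from the paper:

```latex
% Standard duality step: apply Ito's formula to the pairing of the adjoint
% and variational processes,
\langle p_t,\, y^1_t \rangle + \langle q_t,\, x^1_t \rangle,
% integrate over [0, T], take expectations, and substitute the resulting
% identity into the variational inequality to isolate the spike-variation
% term and the impulse term.
```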
It is easy to check that the adjoint equation admits a unique solution (p(·), q(·), k(·)) ∈ S^3(R^m) × S^3(R^n) × H^3(R^{m×d}).
We are now in a position to state the stochastic maximum principle.