In this paper, we consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by a backward stochastic differential equation driven by Teugels martingales and an independent Brownian motion. A sufficient condition and a necessary condition for the existence of a saddle point of the game are proved. As an application, a linear quadratic stochastic differential game problem is discussed.
1. Introduction
Consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by the following nonlinear backward stochastic differential equation (BSDE): for any $t\in[0,T]$,
$$y(t)=\xi+\int_t^T f\big(s,y(s),q(s),z(s),u_1(s),u_2(s)\big)\,ds-\sum_{i=1}^{d}\int_t^T q_i(s)\,dW_i(s)-\sum_{i=1}^{\infty}\int_t^T z_i(s)\,dH_i(s), \tag{1}$$
with the cost functional
$$J(u(\cdot))=E\Big[\phi(y(0))+\int_0^T l\big(t,y(t),q(t),z(t),u_1(t),u_2(t)\big)\,dt\Big], \tag{2}$$
where $(W(t),0\le t\le T)$ is a standard $d$-dimensional Brownian motion and $H(t)=(H_i(t))_{i=1}^{\infty}$, $0\le t\le T$, are the Teugels martingales associated with a Lévy process $(L(t),0\le t\le T)$ (see Section 2 for more details). The filtration generated by the underlying Brownian motion $W$ and the Lévy process $L$ is denoted by $(\mathcal{F}_t)_{0\le t\le T}$. The meanings of the variables are given in Assumptions 1 and 2.
In the above, the processes $u_1(\cdot)$ and $u_2(\cdot)$ are open-loop control processes, which represent the controls of the two players. Let $U_1\subset\mathbb{R}^{k_1}$ and $U_2\subset\mathbb{R}^{k_2}$ be two given nonempty convex sets. In many situations, the full information $\mathcal{F}_t$ is inaccessible to the players, who can only observe partial information. Accordingly, an admissible control process $u_i(\cdot)$ for player $i$ ($i=1,2$) is defined as a $\mathcal{G}_t$-predictable process with values in $U_i$ such that $E\int_0^T|u_i(t)|^2\,dt<+\infty$. Here, $\mathcal{G}_t\subseteq\mathcal{F}_t$ for all $t\in[0,T]$ is a given subfiltration representing the information available to the controller at time $t$. For example, we could choose $\mathcal{G}_t=\mathcal{F}_{(t-\delta)^{+}}$, $t\in[0,T]$, where $\delta>0$ is a fixed delay of information.
The set of all admissible open-loop controls $u_i(\cdot)$ for player $i$ is denoted by $\mathcal{A}_i$, $i=1,2$, and $\mathcal{A}_1\times\mathcal{A}_2$ is called the set of open-loop admissible controls for the players. We denote the strong solution of (1) by $(y^{u_1,u_2}(\cdot),q^{u_1,u_2}(\cdot),z^{u_1,u_2}(\cdot))$, or simply by $(y(\cdot),q(\cdot),z(\cdot))$ when its dependence on the admissible control $(u_1(\cdot),u_2(\cdot))$ is clear from the context. We call $(y(\cdot),q(\cdot),z(\cdot))$ the state process corresponding to the control process $(u_1(\cdot),u_2(\cdot))$ and call $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ an admissible quintuplet.
Roughly speaking, in the zero-sum differential game, Player I seeks a control $\bar{u}_1(\cdot)$ to minimize (2), while Player II seeks a control $\bar{u}_2(\cdot)$ to maximize (2). Let $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ be an optimal open-loop control, satisfying
$$J(\bar{u}_1(\cdot),u_2(\cdot))\le J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))\le J(u_1(\cdot),\bar{u}_2(\cdot)) \tag{3}$$
for all admissible open-loop controls $(u_1(\cdot),u_2(\cdot))\in\mathcal{A}_1\times\mathcal{A}_2$. We denote this partial information stochastic differential game by Problem (P) and refer to $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ as an open-loop saddle point of Problem (P). The corresponding strong solution $(\bar{y}(\cdot),\bar{q}(\cdot),\bar{z}(\cdot))$ of (1) is called the saddle state process, and $(\bar{u}_1(\cdot),\bar{u}_2(\cdot);\bar{y}(\cdot),\bar{q}(\cdot),\bar{z}(\cdot))$ is called a saddle quintuplet.
Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. For partial information two-person zero-sum stochastic differential games, the objective is to find a saddle point for which the controllers have less information than the complete information filtration $(\mathcal{F}_t)_{t\ge 0}$. Recently, An and Øksendal [1] established a maximum principle for stochastic differential games of forward systems with Poisson jumps under the type of partial information considered in this paper. We refer to [2, 3] and the references therein for more related results on partial information stochastic differential games.
In 2000, Nualart and Schoutens [4] obtained a martingale representation theorem for a class of Lévy processes in terms of Teugels martingales, a family of pairwise strongly orthonormal martingales associated with the Lévy process. Later, Nualart and Schoutens [5] proved an existence and uniqueness theory for BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multidimensional Brownian motion by Bahlali et al. [6].
Since the theory of BSDEs driven by Teugels martingales and an independent Brownian motion is established, it is natural to apply it to stochastic optimal control problems. The full information stochastic optimal control problem related to Teugels martingales has been treated in many works. For example, the stochastic linear quadratic problem with Lévy processes was studied by Mitsui and Tabata [7]. Motivated by [7], Meng and Tang [8] studied the general full information stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multidimensional Brownian motion and proved the corresponding stochastic maximum principle. Furthermore, Tang and Zhang [9] extended [8] to backward stochastic systems and obtained the corresponding stochastic maximum principle. For the partial information case, Bahlali et al. [10] studied in 2012 the stochastic control problem for forward systems and obtained the corresponding stochastic maximum principle. In the meantime, Meng et al. [11] extended [9] to the partial information stochastic optimal control problem of backward stochastic systems and obtained the corresponding optimality conditions. For recent results on stochastic control problems and differential games, the reader is referred to [12–17] and the references therein.
However, to the best of our knowledge, there is little discussion of partial information stochastic differential games for systems driven by Teugels martingales and an independent Brownian motion, which motivates this paper. The main purpose of this paper is to establish partial information necessary and sufficient optimality conditions for Problem (P) by using the results in [9]. The results obtained here can be regarded as a generalization of the stochastic optimal control problem to the two-person zero-sum case. As an application, a two-person zero-sum stochastic differential game of linear backward stochastic differential equations with a quadratic cost criterion under partial information is discussed, and the optimal control is characterized explicitly in terms of the adjoint processes.
The rest of this paper is organized as follows. In Section 2, we introduce useful notation and state the needed assumptions. Section 3 is devoted to the sufficient condition for the existence of a saddle point. In Section 4, we establish the necessary condition for optimality. In Section 5, a linear quadratic stochastic differential game problem is solved by applying the theoretical results.
2. Preliminaries and Assumptions
Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{0\le t\le T},P)$ be a complete probability space. The filtration $(\mathcal{F}_t)_{0\le t\le T}$ is right-continuous and generated by a $d$-dimensional standard Brownian motion $(W(t),0\le t\le T)$ and a one-dimensional Lévy process $(L(t),0\le t\le T)$. It is known that $L(t)$ has a characteristic function of the form
$$E\big[e^{i\theta L(t)}\big]=\exp\Big[ia\theta t-\tfrac{1}{2}\sigma^2\theta^2 t+t\int_{\mathbb{R}}\big(e^{i\theta x}-1-i\theta x\,\mathbf{1}_{\{|x|<1\}}\big)\,\nu(dx)\Big],$$
where $a\in\mathbb{R}$, $\sigma>0$, and $\nu$ is a measure on $\mathbb{R}$ satisfying the following: (i) $\int_{\mathbb{R}}(1\wedge x^2)\,\nu(dx)<\infty$ and (ii) there exist $\varepsilon>0$ and $\lambda>0$ such that $\int_{\mathbb{R}\setminus(-\varepsilon,\varepsilon)}e^{\lambda|x|}\,\nu(dx)<\infty$. These conditions imply that the random variables $L(t)$ have moments of all orders. We denote by $\{(H_i(t),0\le t\le T)\}_{i=1}^{\infty}$ the Teugels martingales associated with the Lévy process $(L(t),0\le t\le T)$. Here, $H_i(t)$ is given by
$$H_i(t)=c_{i,i}Y_i(t)+c_{i,i-1}Y_{i-1}(t)+\cdots+c_{i,1}Y_1(t), \tag{4}$$
where $Y_i(t)=L_i(t)-E[L_i(t)]$ for all $i\ge 1$, the $L_i(t)$ are the so-called power-jump processes with $L_1(t)=L(t)$ and $L_i(t)=\sum_{0<s\le t}(\Delta L(s))^{i}$ for $i\ge 2$, and the coefficients $c_{i,j}$ correspond to the orthonormalization of the polynomials $1,x,x^2,\dots$ with respect to the measure $\mu(dx)=x^2\nu(dx)+\sigma^2\delta_0(dx)$. The Teugels martingales $(H_i(t))_{i=1}^{\infty}$ are pairwise strongly orthogonal, and their predictable quadratic variation processes are given by
$$\langle H_i(t),H_j(t)\rangle=\delta_{ij}t. \tag{5}$$
For more details on Teugels martingales, we invite the reader to consult Nualart and Schoutens [4, 5]. Denote by $\mathcal{P}$ the predictable sub-$\sigma$-field of $\mathcal{B}([0,T])\times\mathcal{F}$; then, we introduce the following notation used throughout this paper.
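The coefficients $c_{i,j}$ in (4) come from orthonormalizing the monomials $1,x,x^2,\dots$ against the moments of $\mu$. The following Python sketch carries this out by Gram-Schmidt for a toy choice of Lévy measure (a unit-rate Gaussian jump density, an assumption made purely for illustration, not the paper's setting):

```python
import numpy as np

def polynomial_moments(n_max, lam=1.0, sigma=1.0):
    # Moments m_k = \int x^k mu(dx) of mu(dx) = x^2 nu(dx) + sigma^2 delta_0(dx),
    # with the toy choice nu(dx) = lam * (standard normal density) dx.
    def gauss_moment(p):
        # E[Z^p] for Z ~ N(0, 1): (p - 1)!! for even p, 0 for odd p
        if p % 2 == 1:
            return 0.0
        r = 1.0
        for j in range(1, p, 2):
            r *= j
        return r
    m = np.array([lam * gauss_moment(k + 2) for k in range(n_max)])
    m[0] += sigma ** 2  # the atom sigma^2 * delta_0 contributes only to m_0
    return m

def teugels_coefficients(n, m):
    # Gram-Schmidt on 1, x, x^2, ... w.r.t. <x^a, x^b>_mu = m[a + b];
    # row i holds the coefficients (c_{i+1,1}, ..., c_{i+1,i+1}, 0, ...) of the
    # (i+1)-th orthonormal polynomial in the monomial basis.
    def ip(u, v):
        return sum(u[a] * v[b] * m[a + b] for a in range(len(u)) for b in range(len(v)))
    C = np.zeros((n, n))
    for i in range(n):
        v = np.zeros(n)
        v[i] = 1.0  # start from the monomial x^i
        for j in range(i):
            v = v - ip(v, C[j]) * C[j]  # remove the component along polynomial j
        C[i] = v / np.sqrt(ip(v, v))
    return C
```

Running `teugels_coefficients(4, polynomial_moments(8))` produces rows whose Gram matrix under $\langle\cdot,\cdot\rangle_\mu$ is the identity, matching the orthonormality behind (5).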
In the following, we introduce some basic spaces:
$H$: a Hilbert space with norm $\|\cdot\|_H$.
$\langle\alpha,\beta\rangle$: the inner product in $\mathbb{R}^n$, $\forall\alpha,\beta\in\mathbb{R}^n$.
$|\alpha|=\sqrt{\langle\alpha,\alpha\rangle}$: the norm of $\mathbb{R}^n$, $\forall\alpha\in\mathbb{R}^n$.
$\langle A,B\rangle=\mathrm{tr}(AB^{\top})$: the inner product in $\mathbb{R}^{n\times m}$, $\forall A,B\in\mathbb{R}^{n\times m}$.
$|A|=\sqrt{\mathrm{tr}(AA^{\top})}$: the norm of $\mathbb{R}^{n\times m}$, $\forall A\in\mathbb{R}^{n\times m}$.
$l^2$: the space of all real-valued sequences $x=(x_i)_{i\ge1}$ satisfying
$$\|x\|_{l^2}\coloneqq\Big(\sum_{i=1}^{\infty}x_i^2\Big)^{1/2}<+\infty. \tag{6}$$
$l^2(H)$: the space of all $H$-valued sequences $f=(f_i)_{i\ge1}$ satisfying
$$\|f\|_{l^2(H)}\coloneqq\Big(\sum_{i=1}^{\infty}\|f_i\|_H^2\Big)^{1/2}<+\infty. \tag{7}$$
$l^2_{\mathcal{F}}(0,T;H)$: the space of all $l^2(H)$-valued and $\mathcal{F}_t$-predictable processes $f=\{f_i(t,\omega),(t,\omega)\in[0,T]\times\Omega\}_{i\ge1}$ satisfying
$$\|f\|_{l^2_{\mathcal{F}}(0,T;H)}\coloneqq\Big(E\int_0^T\sum_{i=1}^{\infty}\|f_i(t)\|_H^2\,dt\Big)^{1/2}<\infty. \tag{8}$$
$M^2_{\mathcal{F}}(0,T;H)$: the space of all $H$-valued and $\mathcal{F}_t$-adapted processes $f=\{f(t,\omega),(t,\omega)\in[0,T]\times\Omega\}$ satisfying
$$\|f\|_{M^2_{\mathcal{F}}(0,T;H)}\coloneqq\Big(E\int_0^T\|f(t)\|_H^2\,dt\Big)^{1/2}<\infty. \tag{9}$$
$S^2_{\mathcal{F}}(0,T;H)$: the space of all $H$-valued and $\mathcal{F}_t$-adapted càdlàg processes $f=\{f(t,\omega),(t,\omega)\in[0,T]\times\Omega\}$ satisfying
$$\|f\|_{S^2_{\mathcal{F}}(0,T;H)}\coloneqq\Big(E\sup_{0\le t\le T}\|f(t)\|_H^2\Big)^{1/2}<+\infty. \tag{10}$$
$L^2(\Omega,\mathcal{F},P;H)$: the space of all $H$-valued random variables $\xi$ on $(\Omega,\mathcal{F},P)$ satisfying
$$\|\xi\|_{L^2(\Omega,\mathcal{F},P;H)}\coloneqq\big(E\|\xi\|_H^2\big)^{1/2}<\infty. \tag{11}$$
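As a quick numerical sanity check of the finite-dimensional conventions above, the matrix inner product and norm coincide with the Frobenius pairing, and the $l^2$ norm is the square-summable sequence norm. A small NumPy illustration (the particular matrices and sequence are arbitrary examples):

```python
import numpy as np

# <A, B> = tr(A B^T) and |A| = sqrt(tr(A A^T)): the Frobenius pairing and norm.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

inner_AB = np.trace(A @ B.T)
norm_A = np.sqrt(np.trace(A @ A.T))

# An l^2 sequence (geometric, truncated): ||x||_{l^2}^2 = sum_i x_i^2.
x = np.array([2.0 ** (-i) for i in range(1, 30)])
norm_x = np.sqrt(np.sum(x ** 2))
```

For the geometric sequence $x_i=2^{-i}$, $\|x\|_{l^2}^2=\sum_{i\ge1}4^{-i}=1/3$ up to the truncation error.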
The coefficients of the state equation (1) and the cost functional (2) are given by the maps
$$\xi:\Omega\to\mathbb{R}^n,\quad f:[0,T]\times\Omega\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times l^2(\mathbb{R}^n)\times U_1\times U_2\to\mathbb{R}^n,\quad l:[0,T]\times\Omega\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times l^2(\mathbb{R}^n)\times U_1\times U_2\to\mathbb{R}, \tag{12}$$
$$\phi:\Omega\times\mathbb{R}^n\to\mathbb{R}. \tag{13}$$
Throughout this paper, we impose the following basic assumptions on the coefficients $\xi$, $f$, $l$, and $\phi$.
Assumption 1.
$\xi\in L^2(\Omega,\mathcal{F}_T,P;\mathbb{R}^n)$, and the random mapping $f$ is predictable with respect to $t$, Borel measurable with respect to the other variables, and satisfies $f(\cdot,0,0,0,0,0)\in M^2_{\mathcal{F}}(0,T;\mathbb{R}^n)$. For almost all $(t,\omega)\in[0,T]\times\Omega$, $f(t,\omega,y,q,z,u_1,u_2)$ is Fréchet differentiable with respect to $(y,q,z,u_1,u_2)$, and the corresponding Fréchet derivatives $f_y,f_q,f_z,f_{u_1},f_{u_2}$ are continuous and uniformly bounded.
Assumption 2.
The random mapping $l$ is predictable with respect to $t$ and Borel measurable with respect to the other variables, and for almost all $(t,\omega)\in[0,T]\times\Omega$, $l$ is Fréchet differentiable with respect to $(y,q,z,u_1,u_2)$ with continuous Fréchet derivatives $l_y,l_q,l_z,l_{u_1},l_{u_2}$. The random mapping $\phi$ is measurable and, for almost all $\omega\in\Omega$, Fréchet differentiable with respect to $y$ with continuous Fréchet derivative $\phi_y$. Moreover, there exists a constant $C$ such that, for almost all $(t,\omega)\in[0,T]\times\Omega$ and all $(y,q,z,u_1,u_2)\in\mathbb{R}^n\times\mathbb{R}^{n\times d}\times l^2(\mathbb{R}^n)\times U_1\times U_2$:
$$|l|\le C\big(1+|y|^2+|q|^2+\|z\|^2+|u_1|^2+|u_2|^2\big),\qquad |\phi|\le C\big(1+|y|^2\big),$$
$$|l_y|+|l_q|+|l_z|+|l_{u_1}|+|l_{u_2}|\le C\big(1+|y|+|q|+\|z\|+|u_1|+|u_2|\big), \tag{14}$$
$$|\phi_y|\le C\big(1+|y|\big). \tag{15}$$
Under Assumption 1, it follows from Lemma 2.3 in [9] that, for each $(u_1(\cdot),u_2(\cdot))\in\mathcal{A}_1\times\mathcal{A}_2$, system (1) admits a unique strong solution. Furthermore, by Assumption 2 and an a priori estimate for BSDEs driven by Teugels martingales (see Lemma 3.2 in [9]), it is easy to check that
$$J(u_1(\cdot),u_2(\cdot))<\infty. \tag{16}$$
Hence, Problem (P) is well defined.
3. A Partial Information Sufficient Maximum Principle
In this section, we study the sufficient maximum principle for Problem (P).
In our setting, the Hamiltonian function $H:[0,T]\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times l^2(\mathbb{R}^n)\times U_1\times U_2\times\mathbb{R}^n\to\mathbb{R}$ takes the form
$$H(t,y,q,z,u_1,u_2,k)=\big\langle k,-f(t,y,q,z,u_1,u_2)\big\rangle+l(t,y,q,z,u_1,u_2). \tag{17}$$
The adjoint equation associated with systems (1) and (2) and a given admissible quintuplet $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ is the following forward stochastic differential equation driven by the multidimensional Brownian motion $W$ and the Teugels martingales $(H_i)_{i=1}^{\infty}$:
$$\begin{cases}dk(t)=-H_y\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dt-\displaystyle\sum_{i=1}^{d}H_{q_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dW_i(t)\\ \qquad\qquad-\displaystyle\sum_{i=1}^{\infty}H_{z_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dH_i(t),\\ k(0)=-\phi_y(y(0)).\end{cases} \tag{18}$$
Under Assumptions 1 and 2, the forward stochastic differential equation (18) has a unique solution $k(\cdot)\in G^2_{\mathcal{F}}(0,T;\mathbb{R}^n)$ by Lemma 2.1 in [9].
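Since (18) is a forward SDE, it can be discretized by an Euler scheme once the infinite Teugels sum is truncated. The sketch below is a toy scalar version with hypothetical linear coefficients (echoing the linear quadratic case of Section 5); the single retained Teugels term is approximated by a normalized compensated Poisson martingale, an illustrative assumption rather than the paper's construction:

```python
import numpy as np

def euler_adjoint(k0, A, E, B, F, C, G, T=1.0, n=2000, rate=1.0, seed=0):
    # Euler scheme for a scalar toy version of the adjoint equation (18),
    #   dk = (A k - E) dt + (B k - F) dW + (C k - G) dH,
    # with the infinite Teugels sum truncated to one term and H approximated
    # by a normalized compensated Poisson martingale (hypothetical setup).
    rng = np.random.default_rng(seed)
    dt = T / n
    k = k0
    for _ in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        dH = (rng.poisson(rate * dt) - rate * dt) / np.sqrt(rate)
        k += (A * k - E) * dt + (B * k - F) * dW + (C * k - G) * dH
    return k
```

With the noise coefficients switched off ($B=F=C=G=0$), the scheme reduces to the ODE $k'=Ak-E$, which it should track; for $A=1$, $E=0$, $k(0)=1$, the value at $T=1$ approaches $e$.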
We now come to a verification theorem for Problem (P).
Theorem 1 (partial information sufficient maximum principle).
Let Assumptions 1 and 2 hold. Let $(\bar{u}_1(\cdot),\bar{u}_2(\cdot);\bar{y}(\cdot),\bar{q}(\cdot),\bar{z}(\cdot))$ be an admissible quintuplet and $\bar{k}(\cdot)$ the unique strong solution of the corresponding adjoint equation (18). Suppose that the Hamiltonian function $H$ satisfies the following conditional maximum principle:
$$\begin{aligned}\inf_{u_1\in U_1}E\big[H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),u_1,\bar{u}_2(t),\bar{k}(t))\mid\mathcal{G}_t\big]&=E\big[H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),\bar{k}(t))\mid\mathcal{G}_t\big]\\&=\sup_{u_2\in U_2}E\big[H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),u_2,\bar{k}(t))\mid\mathcal{G}_t\big].\end{aligned} \tag{19}$$
Suppose, in addition, that for all $t\in[0,T]$ one of the following cases applies:
(i) $\phi(y)$ is convex in $y$ and $H(t,y,q,z,u_1,\bar{u}_2(t),\bar{k}(t))$ is convex in $(y,q,z,u_1)$; then $J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),\bar{u}_2(\cdot))$.
(ii) $\phi(y)$ is concave in $y$ and $H(t,y,q,z,\bar{u}_1(t),u_2,\bar{k}(t))$ is concave in $(y,q,z,u_2)$; then $J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\sup_{u_2(\cdot)\in\mathcal{A}_2}J(\bar{u}_1(\cdot),u_2(\cdot))$.
(iii) If both (i) and (ii) hold (which implies, in particular, that $\phi$ is an affine function), then $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ is an open-loop saddle point of Problem (P).
Proof. (i) We consider a stochastic optimal control problem for Player I, where the system is
$$y(t)=\xi+\int_t^T f\big(s,y(s),q(s),z(s),u_1(s),\bar{u}_2(s)\big)\,ds-\sum_{i=1}^{d}\int_t^T q_i(s)\,dW_i(s)-\sum_{i=1}^{\infty}\int_t^T z_i(s)\,dH_i(s), \tag{27}$$
with the cost functional
$$J(u_1(\cdot),\bar{u}_2(\cdot))=E\Big[\phi(y(0))+\int_0^T l\big(t,y(t),q(t),z(t),u_1(t),\bar{u}_2(t)\big)\,dt\Big]. \tag{28}$$
Our optimal control problem is to minimize $J(u_1(\cdot),\bar{u}_2(\cdot))$ over $u_1(\cdot)\in\mathcal{A}_1$, i.e., to find $\bar{u}_1(\cdot)\in\mathcal{A}_1$ such that
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),\bar{u}_2(\cdot)). \tag{29}$$
In this case, it is easy to check that the Hamiltonian is $H(t,y,q,z,u_1,\bar{u}_2(t),k)$, and for the admissible control $\bar{u}_1(\cdot)\in\mathcal{A}_1$, the corresponding state process and adjoint process are still $(\bar{y}(t),\bar{q}(t),\bar{z}(t))$ and $\bar{k}(t)$, respectively. The optimality condition is
$$\inf_{u_1\in U_1}E\big[H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),u_1,\bar{u}_2(t),\bar{k}(t))\mid\mathcal{G}_t\big]=E\big[H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),\bar{k}(t))\mid\mathcal{G}_t\big]. \tag{30}$$
Thus, by the partial information sufficient maximum principle for optimal control (see Theorem 1 in [9]), we conclude that $\bar{u}_1(\cdot)$ is optimal for this control problem, i.e.,
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),\bar{u}_2(\cdot)). \tag{31}$$
The proof of (i) is complete.
(ii) This statement can be proved in the same way as (i).
(iii) If both (i) and (ii) hold, then
$$J(\bar{u}_1(\cdot),u_2(\cdot))\le J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))\le J(u_1(\cdot),\bar{u}_2(\cdot)) \tag{32}$$
for any $(u_1(\cdot),u_2(\cdot))\in\mathcal{A}_1\times\mathcal{A}_2$; hence,
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))\le\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),\bar{u}_2(\cdot))\le\sup_{u_2(\cdot)\in\mathcal{A}_2}\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),u_2(\cdot)). \tag{33}$$
On the other hand,
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))\ge\sup_{u_2(\cdot)\in\mathcal{A}_2}J(\bar{u}_1(\cdot),u_2(\cdot))\ge\inf_{u_1(\cdot)\in\mathcal{A}_1}\sup_{u_2(\cdot)\in\mathcal{A}_2}J(u_1(\cdot),u_2(\cdot)). \tag{34}$$
Combining these with the inequality
$$\inf_{u_1(\cdot)\in\mathcal{A}_1}\sup_{u_2(\cdot)\in\mathcal{A}_2}J(u_1(\cdot),u_2(\cdot))\ge\sup_{u_2(\cdot)\in\mathcal{A}_2}\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),u_2(\cdot)), \tag{35}$$
we obtain
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\inf_{u_1(\cdot)\in\mathcal{A}_1}\sup_{u_2(\cdot)\in\mathcal{A}_2}J(u_1(\cdot),u_2(\cdot))=\sup_{u_2(\cdot)\in\mathcal{A}_2}\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),u_2(\cdot)). \tag{36}$$
The proof of the theorem is completed.
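The minimax argument in step (iii) can be illustrated on a finite-dimensional convex-concave cost, where the saddle value closes the chain of inequalities (33)–(35). In the sketch below, the particular quadratic is a hypothetical stand-in for $J$, with Player I minimizing in $u_1$ and Player II maximizing in $u_2$:

```python
import numpy as np

def saddle_point(a, b, c):
    # Saddle point of f(u1, u2) = u1^2 - u2^2 + c*u1*u2 + a*u1 + b*u2,
    # obtained from the first-order conditions
    #   2*u1 + c*u2 + a = 0  and  c*u1 - 2*u2 + b = 0.
    M = np.array([[2.0, c], [c, -2.0]])  # det = -4 - c^2 != 0, always solvable
    return np.linalg.solve(M, np.array([-a, -b]))

def f(u1, u2, a, b, c):
    # Strictly convex in u1, strictly concave in u2: a convex-concave cost.
    return u1 ** 2 - u2 ** 2 + c * u1 * u2 + a * u1 + b * u2
```

At the computed saddle, $f(\bar{u}_1,u_2)\le f(\bar{u}_1,\bar{u}_2)\le f(u_1,\bar{u}_2)$ for all $u_1,u_2$, which forces $\inf\sup=\sup\inf$ exactly as in (36).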
If the control process $(u_1(\cdot),u_2(\cdot))$ is admissible and adapted to the filtration $\mathcal{F}_t$, we have the following full information sufficient maximum principle.
Corollary 1.
Suppose that $\mathcal{G}_t=\mathcal{F}_t$ and that, for all $t\in[0,T]$, the following maximum principle holds:
$$\begin{aligned}\inf_{u_1\in U_1}H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),u_1,\bar{u}_2(t),\bar{k}(t))&=H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),\bar{k}(t))\\&=\sup_{u_2\in U_2}H(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),u_2,\bar{k}(t)).\end{aligned} \tag{37}$$
Suppose, in addition, that the convexity and concavity conditions of Theorem 1 hold for all $t\in[0,T]$. If both cases (i) and (ii) of Theorem 1 hold (which implies, in particular, that $\phi$ is an affine function), then $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ is an open-loop saddle point based on the information flow $\mathcal{F}_t$ and
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\inf_{u_1(\cdot)\in\mathcal{A}_1}\sup_{u_2(\cdot)\in\mathcal{A}_2}J(u_1(\cdot),u_2(\cdot))=\sup_{u_2(\cdot)\in\mathcal{A}_2}\inf_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),u_2(\cdot)).$$
4. Partial Information Necessary Maximum Principle
In this section, we give a necessary maximum principle for Problem (P).
Theorem 2 (a partial information necessary maximum principle).
Under Assumptions 1 and 2, let $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ be an optimal control of Problem (P), let $(\bar{y}(\cdot),\bar{q}(\cdot),\bar{z}(\cdot))$ be the state process of system (1) corresponding to $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$, and let $\bar{k}(\cdot)$ be the unique solution of the adjoint equation (18) corresponding to $(\bar{u}_1(\cdot),\bar{u}_2(\cdot);\bar{y}(\cdot),\bar{q}(\cdot),\bar{z}(\cdot))$. Then, for $i=1,2$ and all $u_i\in U_i$, we have
$$\big\langle E\big[H_{u_i}(t)\mid\mathcal{G}_t\big],\,u_i-\bar{u}_i(t)\big\rangle\ge 0,\quad\text{a.s., a.e.}, \tag{45}$$
where
$$H_{u_i}(t)\coloneqq H_{u_i}\big(t,\bar{y}(t),\bar{q}(t),\bar{z}(t),\bar{u}_1(t),\bar{u}_2(t),\bar{k}(t)\big). \tag{46}$$
Proof. Since $(\bar{u}_1(\cdot),\bar{u}_2(\cdot))$ is an open-loop saddle point, we have
$$J(\bar{u}_1(\cdot),u_2(\cdot))\le J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))\le J(u_1(\cdot),\bar{u}_2(\cdot)). \tag{47}$$
So, we have
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\min_{u_1(\cdot)\in\mathcal{A}_1}J(u_1(\cdot),\bar{u}_2(\cdot)), \tag{48}$$
$$J(\bar{u}_1(\cdot),\bar{u}_2(\cdot))=\max_{u_2(\cdot)\in\mathcal{A}_2}J(\bar{u}_1(\cdot),u_2(\cdot)). \tag{49}$$
By (48), $\bar{u}_1(\cdot)$ can be regarded as an optimal control of the control problem whose controlled system is (27) and whose cost functional is (28). In this case, it is easy to check that the Hamiltonian is $H(t,y,q,z,u_1,\bar{u}_2(t),k)$, and for the optimal control $\bar{u}_1(\cdot)\in\mathcal{A}_1$, the corresponding optimal state process and adjoint process are still $(\bar{y}(t),\bar{q}(t),\bar{z}(t))$ and $\bar{k}(t)$, respectively. Thus, applying the partial information necessary stochastic maximum principle for optimal control problems (see Theorem 2 in [9]), we obtain (45) for $i=1$. Similarly, from (49), we obtain (45) for $i=2$. The proof is complete.
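Condition (45) is the standard first-order (variational inequality) characterization of a constrained optimum: at a minimizer $\bar{u}$ of a differentiable convex function over a convex set, the derivative pairs nonnegatively with every admissible direction, even when it does not vanish at the boundary. A one-dimensional sketch (the function $h$ and the interval are illustrative assumptions):

```python
import numpy as np

def satisfies_vi(dh_at_ubar, u_bar, lo, hi, n=201):
    # Checks the finite-dimensional analogue of condition (45):
    # <h'(u_bar), u - u_bar> >= 0 for all u in U = [lo, hi].
    us = np.linspace(lo, hi, n)
    return bool(np.all(dh_at_ubar * (us - u_bar) >= -1e-12))

# Example: h(u) = (u - 2)^2 on U = [-1, 1]. The constrained minimizer is the
# boundary point u_bar = 1, where h'(1) = -2 < 0 but u - 1 <= 0 on U, so the
# variational inequality holds; it fails at the non-optimal interior point 0.
dh = lambda u: 2.0 * (u - 2.0)
```

The check succeeds at the true constrained minimizer and fails at a non-optimal point, mirroring how (45) singles out $\bar{u}_i(t)$.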
5. Example: Linear Quadratic Problem
In this section, we apply the stochastic maximum principles to a linear quadratic problem under partial information; that is, we consider the game with the quadratic cost functional, over controls $(u_1,u_2)$ valued in $\mathbb{R}^{m_1}\times\mathbb{R}^{m_2}$,
$$J(u_1(\cdot),u_2(\cdot))\coloneqq E\langle M,y(0)\rangle+E\int_0^T\Big[\langle E(s),y(s)\rangle+\sum_{i=1}^{d}\langle F_i(s),q_i(s)\rangle+\sum_{i=1}^{\infty}\langle G_i(s),z_i(s)\rangle+\langle N_1(s)u_1(s),u_1(s)\rangle-\langle N_2(s)u_2(s),u_2(s)\rangle\Big]ds, \tag{50}$$
where the state process $(y(\cdot),q(\cdot),z(\cdot))$ is the solution of the controlled linear backward stochastic system
$$\begin{cases}dy(t)=-\Big[A(t)y(t)+\displaystyle\sum_{i=1}^{d}B_i(t)q_i(t)+\displaystyle\sum_{i=1}^{\infty}C_i(t)z_i(t)+D_1(t)u_1(t)+D_2(t)u_2(t)\Big]dt+\displaystyle\sum_{i=1}^{d}q_i(t)\,dW_i(t)+\displaystyle\sum_{i=1}^{\infty}z_i(t)\,dH_i(t),\\ y(T)=\xi.\end{cases} \tag{51}$$
This problem is denoted by Problem (LQ). To study it, we need the following assumptions on the coefficients.
Assumption 3.
The matrix-valued functions $A:[0,T]\to\mathbb{R}^{n\times n}$; $B_i:[0,T]\to\mathbb{R}^{n\times n}$, $i=1,2,\dots,d$; $C_i:[0,T]\to\mathbb{R}^{n\times n}$, $i=1,2,\dots$; $D_i:[0,T]\to\mathbb{R}^{n\times m_i}$, $i=1,2$; $N_i:[0,T]\to\mathbb{R}^{m_i\times m_i}$, $i=1,2$; the vector-valued functions $E:[0,T]\to\mathbb{R}^{n}$; $F_i:[0,T]\to\mathbb{R}^{n}$, $i=1,2,\dots,d$; $G_i:[0,T]\to\mathbb{R}^{n}$, $i=1,2,\dots$; and the vector $M\in\mathbb{R}^{n}$ are all uniformly bounded. Moreover, $N_i$ is uniformly positive, i.e., $N_i(t)\ge\delta I$, $i=1,2$, for some constant $\delta>0$.
Assumption 4.
There is no further constraint imposed on the control processes; the set of all admissible control processes is
$$\mathcal{A}_1\times\mathcal{A}_2=\Big\{(u_1(\cdot),u_2(\cdot)):(u_1(\cdot),u_2(\cdot))\text{ is a }\mathcal{G}_t\text{-predictable process with values in }\mathbb{R}^{m_1}\times\mathbb{R}^{m_2}\text{ and }E\int_0^T\big(|u_1(t)|^2+|u_2(t)|^2\big)\,dt<\infty\Big\}. \tag{52}$$
In what follows, we use the stochastic maximum principles to obtain a dual representation of the saddle point of Problem (LQ).
We first define the Hamiltonian function $H:\Omega\times[0,T]\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\times l^2(\mathbb{R}^n)\times\mathbb{R}^{m_1}\times\mathbb{R}^{m_2}\times\mathbb{R}^n\to\mathbb{R}$ by
$$H(t,y,q,z,u_1,u_2,k)=-\Big\langle k,\,A(t)y+\sum_{i=1}^{d}B_i(t)q_i+\sum_{i=1}^{\infty}C_i(t)z_i+D_1(t)u_1+D_2(t)u_2\Big\rangle+\langle E(t),y\rangle+\sum_{i=1}^{d}\langle F_i(t),q_i\rangle+\sum_{i=1}^{\infty}\langle G_i(t),z_i\rangle+\langle N_1(t)u_1,u_1\rangle-\langle N_2(t)u_2,u_2\rangle. \tag{53}$$
Then, in view of (18) and (53), the adjoint equation corresponding to an admissible quintuplet $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ reads
$$\begin{cases}dk(t)=\big[A^{\top}(t)k(t)-E(t)\big]dt+\displaystyle\sum_{i=1}^{d}\big[B_i^{\top}(t)k(t)-F_i(t)\big]dW_i(t)+\displaystyle\sum_{i=1}^{\infty}\big[C_i^{\top}(t)k(t)-G_i(t)\big]dH_i(t),\\ k(0)=-M,\end{cases} \tag{54}$$
since $-H_y=A^{\top}(t)k-E(t)$, $-H_{q_i}=B_i^{\top}(t)k-F_i(t)$, and $-H_{z_i}=C_i^{\top}(t)k-G_i(t)$.
Under Assumption 3, for any admissible quintuplet u1⋅,u2⋅;y⋅,q⋅,z⋅, the adjoint equation (54) has a unique solution k⋅ in view of Lemma 2.1 in [9].
We now give the dual characterization of the saddle point.
Theorem 3.
Let Assumptions 3 and 4 be satisfied. Then, a necessary and sufficient condition for an admissible quintuplet $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ to be a saddle quintuplet of Problem (LQ) is that the control $(u_1(\cdot),u_2(\cdot))$ has the representation
$$u_1(t)=\frac{1}{2}N_1^{-1}(t)D_1^{*}(t)E\big[k(t)\mid\mathcal{G}_t\big],\qquad u_2(t)=-\frac{1}{2}N_2^{-1}(t)D_2^{*}(t)E\big[k(t)\mid\mathcal{G}_t\big], \tag{55}$$
where $k(\cdot)$ is the unique solution of the adjoint equation (54) corresponding to the admissible quintuplet $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$.
Proof. For the necessity, let $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ be a saddle quintuplet. Since $U_i=\mathbb{R}^{m_i}$, $i=1,2$, and the controls are $\mathcal{G}_t$-predictable, the necessary optimality condition (45) yields, a.e., a.s.,
$$E\big[H_{u_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\mid\mathcal{G}_t\big]=0. \tag{56}$$
By the definition of $H$ in (53), this gives
$$2N_1(t)u_1(t)-D_1^{*}(t)E\big[k(t)\mid\mathcal{G}_t\big]=0,\quad\text{a.e., a.s.}, \tag{57}$$
$$2N_2(t)u_2(t)+D_2^{*}(t)E\big[k(t)\mid\mathcal{G}_t\big]=0,\quad\text{a.e., a.s.} \tag{58}$$
So, the saddle point $(u_1(\cdot),u_2(\cdot))$ has the dual representation (55).
For the sufficiency, let $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ be an admissible quintuplet satisfying (55). By the classical technique of completing squares, (55) implies that $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ satisfies the optimality condition (19) of Theorem 1. Moreover, under Assumptions 3 and 4, it is easy to check that all other conditions of Theorem 1 are satisfied. Hence, $(u_1(\cdot),u_2(\cdot);y(\cdot),q(\cdot),z(\cdot))$ is a saddle quintuplet by Theorem 1.
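The dual representation can be cross-checked pointwise: freezing $t$ and $(y,q,z)$ and replacing $E[k(t)\mid\mathcal{G}_t]$ by a fixed vector $k$ (a full-information simplification made for illustration), the control-dependent part of the Hamiltonian (53) is strictly convex in $u_1$ and strictly concave in $u_2$, so its unique pointwise saddle follows from the linear first-order systems below. The matrices in the test are small hypothetical examples:

```python
import numpy as np

def lq_saddle_controls(k, D1, D2, N1, N2):
    # Pointwise saddle in (u1, u2) of the control-dependent part of (53):
    #   h(u1, u2) = -<k, D1 u1 + D2 u2> + <N1 u1, u1> - <N2 u2, u2>.
    # First-order conditions: 2 N1 u1 = D1^T k and 2 N2 u2 = -D2^T k.
    u1 = 0.5 * np.linalg.solve(N1, D1.T @ k)
    u2 = -0.5 * np.linalg.solve(N2, D2.T @ k)
    return u1, u2
```

With $N_1,N_2$ uniformly positive (Assumption 3), $u_1$ is a minimizer and $u_2$ a maximizer; compare with the dual representation (55) with $E[k(t)\mid\mathcal{G}_t]$ in place of $k$.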
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 11871121 and 11701369) and Natural Science Foundation of Zhejiang Province for Distinguished Young Scholar (no. LR15A010001).
References
[1] T. T. K. An and B. Øksendal, "Maximum principle for stochastic differential games with partial information," Journal of Optimization Theory and Applications, vol. 139, no. 3, pp. 463–483, 2008.
[2] B. Øksendal and A. Sulem, "Forward-backward stochastic differential games and stochastic control under model uncertainty," Journal of Optimization Theory and Applications, vol. 161, no. 1, pp. 22–55, 2014.
[3] G. Wang and Z. Yu, "A partial information non-zero sum differential game of backward stochastic differential equations with applications," Automatica, vol. 48, no. 2, pp. 342–352, 2012.
[4] D. Nualart and W. Schoutens, "Chaotic and predictable representations for Lévy processes," Stochastic Processes and their Applications, vol. 90, no. 1, pp. 109–122, 2000.
[5] D. Nualart and W. Schoutens, "Backward stochastic differential equations and Feynman-Kac formula for Lévy processes, with applications in finance," Bernoulli, vol. 7, no. 5, pp. 761–776, 2001.
[6] K. Bahlali, M. Eddahbi, and E. Essaky, "BSDE associated with Lévy processes and application to PDIE," Journal of Applied Mathematics and Stochastic Analysis, vol. 16, no. 1, pp. 1–17, 2003.
[7] K.-I. Mitsui and Y. Tabata, "A stochastic linear-quadratic problem with Lévy processes and its application to finance," Stochastic Processes and their Applications, vol. 118, no. 1, pp. 120–152, 2008.
[8] Q. X. Meng and M. N. Tang, "Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes," Science in China Series F: Information Sciences, vol. 52, no. 11, pp. 1982–1992, 2009.
[9] M. Tang and Q. Zhang, "Optimal variational principle for backward stochastic control systems associated with Lévy processes," Science China Mathematics, vol. 55, no. 4, pp. 745–761, 2012.
[10] K. Bahlali, N. Khelfallah, and B. Mezerdi, "Optimality conditions for partial information stochastic control problems driven by Lévy processes," Systems & Control Letters, vol. 61, no. 11, pp. 1079–1084, 2012.
[11] Q. Meng, F. Zhang, and M. Tang, "Maximum principle for backward stochastic systems associated with Lévy processes under partial information," in Proceedings of the 31st Chinese Control Conference, Hefei, China, July 2012, pp. 1–6.
[12] K. Du and Z. Wu, "Linear-quadratic Stackelberg game for mean-field backward stochastic differential system and application," Mathematical Problems in Engineering, vol. 2019, Article ID 1798585, 17 pages, 2019.
[13] J. Shi, G. Wang, and J. Xiong, "Leader-follower stochastic differential game with asymmetric information and applications," Automatica, vol. 63, pp. 60–73, 2016.
[14] G. Wang, H. Xiao, and J. Xiong, "A kind of LQ non-zero sum differential game of backward stochastic differential equation with asymmetric information," Automatica, vol. 97, pp. 346–352, 2018.
[15] J. Wu and Z. Liu, "Maximum principle for mean-field zero-sum stochastic differential game with partial information and its applications to finance," European Journal of Control, vol. 37, pp. 8–15, 2017.
[16] J. Wu and Z. Liu, "Optimal control of mean-field backward doubly stochastic systems driven by Itô-Lévy processes," International Journal of Control, vol. 93, no. 4, pp. 953–970, 2020.
[17] W. Yu, F. Wang, Y. Huang, and H. Liu, "Social optimal mean field control problem for population growth model," Asian Journal of Control, pp. 1–8, 2019.