Mathematical Problems in Engineering
Volume 2020, Article ID 8563790, https://doi.org/10.1155/2020/8563790

Research Article

Partial Information Stochastic Differential Games for Backward Stochastic Systems Driven by Lévy Processes

Fu Zhang (1), QingXin Meng (2), and MaoNing Tang (2)

(1) College of Science, University of Shanghai for Science and Technology, Shanghai 200433, China
(2) Department of Mathematics, Huzhou University, Zhejiang 313000, China

Academic Editor: Wenguang Yu

Received 20 May 2020; Accepted 15 June 2020; Published 18 September 2020

Copyright © 2020 Fu Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by a backward stochastic differential equation driven by Teugels martingales and an independent Brownian motion. A sufficient condition and a necessary one for the existence of the saddle point for the game are proved. As an application, a linear quadratic stochastic differential game problem is discussed.

1. Introduction

Consider a partial information two-person zero-sum stochastic differential game, where the system is governed by the following nonlinear backward stochastic differential equation (BSDE): for any $t\in[0,T]$,

$$y(t)=\xi+\int_t^T f\big(s,y(s),q(s),z(s),u_1(s),u_2(s)\big)\,ds-\sum_{i=1}^{d}\int_t^T q_i(s)\,dW_i(s)-\sum_{i=1}^{\infty}\int_t^T z_i(s)\,dH_i(s),\tag{1}$$

with the cost functional

$$J(u_1,u_2)=\mathbb E\Big[\phi\big(y(0)\big)+\int_0^T l\big(t,y(t),q(t),z(t),u_1(t),u_2(t)\big)\,dt\Big],\tag{2}$$

where $\{W(t),0\le t\le T\}$ is a standard $d$-dimensional Brownian motion and $H(t)=\{H_i(t)\}_{i\ge1}$, $0\le t\le T$, are the Teugels martingales associated with a Lévy process $\{L(t),0\le t\le T\}$ (see Section 2 for more details). The filtration generated by the underlying Brownian motion $W$ and the Lévy process $L$ is denoted by $\{\mathcal F_t\}_{0\le t\le T}$. The meanings of the coefficients $\xi,f,l,\phi$ are specified in Assumptions 1 and 2.

In the above, the processes $u_1$ and $u_2$ are open-loop control processes, which represent the controls of the two players. Let $U_1\subset\mathbb R^{k_1}$ and $U_2\subset\mathbb R^{k_2}$ be two given nonempty convex sets. In many situations, the full information $\mathcal F_t$ is inaccessible to the players, who can observe only partial information. Accordingly, an admissible control process $u_i$ of player $i$ ($i=1,2$) is defined as a $\mathcal G_t$-predictable process with values in $U_i$ such that $\mathbb E\int_0^T|u_i(t)|^2\,dt<+\infty$. Here, $\mathcal G_t\subseteq\mathcal F_t$ for all $t\in[0,T]$ is a given subfiltration representing the information available to the controller at time $t$. For example, we could choose $\mathcal G_t=\mathcal F_{(t-\delta)^+}$, $t\in[0,T]$, where $\delta>0$ is a fixed delay of information.
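To make the delayed-information example concrete, here is a small discrete-time sketch (the step counts, the delay, and the `tanh` feedback rule are all illustrative assumptions, not taken from the paper): a control that is measurable with respect to $\mathcal G_t=\mathcal F_{(t-\delta)^+}$ cannot react to noise arriving in the last $\delta$ units of time.

```python
import numpy as np

# Discrete-time sketch of delayed partial information G_t = F_{(t - delta)^+}:
# a control evaluated at step j may depend only on the noise observed
# up to step (j - d)^+. All names here are illustrative assumptions.

rng = np.random.default_rng(0)
n_steps, d = 20, 3                      # horizon and delay (in steps)
dW = rng.normal(size=n_steps)           # Brownian increments
W = np.cumsum(dW)

def admissible_control(W_path, j, d):
    """A G_t-adapted control: uses only the path up to step (j - d)^+."""
    visible = W_path[: max(j - d, 0)]   # information available at step j
    return np.tanh(visible.sum()) if visible.size else 0.0

u = np.array([admissible_control(W, j, d) for j in range(n_steps)])

# Perturbing the noise at or after step j - d must not change u[j]:
W_pert = W.copy()
W_pert[10 - d:] += 5.0                  # change the "unseen" part of the path
print(admissible_control(W, 10, d) == admissible_control(W_pert, 10, d))  # True
```

The final check is exactly the $\mathcal G_t$-measurability requirement: the control at step $10$ only reads the path up to step $7$, so modifying the path from step $7$ onward leaves it unchanged.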

The set of all admissible open-loop controls of player $i$ is denoted by $\mathcal A_i$, $i=1,2$; $\mathcal A_1\times\mathcal A_2$ is called the set of open-loop admissible controls for the players. We denote the strong solution of (1) by $(y^{u_1,u_2},q^{u_1,u_2},z^{u_1,u_2})$, or simply $(y,q,z)$ when its dependence on the admissible control $(u_1,u_2)$ is clear from the context. We then call $(y,q,z)$ the state process corresponding to the control process $(u_1,u_2)$ and call $(u_1,u_2;y,q,z)$ an admissible quintuplet.

Roughly speaking, in the zero-sum differential game, Player I seeks a control $u_1$ to minimize (2), while Player II seeks a control $u_2$ to maximize (2). Let $(\bar u_1,\bar u_2)$ be an optimal open-loop control satisfying

$$J(\bar u_1,u_2)\le J(\bar u_1,\bar u_2)\le J(u_1,\bar u_2),\tag{3}$$

for all admissible open-loop controls $(u_1,u_2)\in\mathcal A_1\times\mathcal A_2$. We denote this partial information stochastic differential game by Problem (P) and refer to $(\bar u_1,\bar u_2)$ as an open-loop saddle point of Problem (P). The corresponding strong solution $(\bar y,\bar q,\bar z)$ of (1) is called the saddle state process. Then, $(\bar u_1,\bar u_2;\bar y,\bar q,\bar z)$ is called a saddle quintuplet.

Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. For partial information two-person zero-sum stochastic differential games, the objective is to find a saddle point for which the controllers have less information than the complete information filtration $\{\mathcal F_t\}_{t\ge0}$. Recently, An and Øksendal [1] established a maximum principle for stochastic differential games of forward systems with Poisson jumps under the type of partial information considered in this paper. Moreover, we refer to [2, 3] and the references therein for more related results on partial information stochastic differential games.

In 2000, Nualart and Schoutens [4] obtained a martingale representation theorem for a class of Lévy processes in terms of Teugels martingales, a family of pairwise strongly orthonormal martingales associated with the Lévy process. Later, Nualart and Schoutens [5] proved the existence and uniqueness of solutions of BSDEs driven by Teugels martingales. These results were further extended to one-dimensional BSDEs driven by Teugels martingales and an independent multidimensional Brownian motion by Bahlali et al. [6].

Once the theory of BSDEs driven by Teugels martingales and an independent Brownian motion was established, it was natural to apply it to stochastic optimal control problems. The full information stochastic optimal control problem related to Teugels martingales has by now been treated in many papers. For example, the stochastic linear quadratic problem with Lévy processes was studied by Mitsui and Tabata [7]. Motivated by [7], Meng and Tang [8] studied the general full information stochastic optimal control problem for forward stochastic systems driven by Teugels martingales and an independent multidimensional Brownian motion and proved the corresponding stochastic maximum principle. Furthermore, Tang and Zhang [9] extended [8] to backward stochastic systems and obtained the corresponding stochastic maximum principle. For the partial information case, Bahlali et al. [10] studied in 2012 the stochastic control problem for forward systems and obtained the corresponding stochastic maximum principle. In the meantime, Meng et al. [11] extended [10] to the partial information stochastic optimal control problem of backward stochastic systems and obtained the corresponding optimality conditions. For recent results on stochastic control problems and stochastic differential games, the reader is referred to [12–17] and the references therein.

However, to the best of our knowledge, there is little discussion of partial information stochastic differential games for systems driven by Teugels martingales and an independent Brownian motion, which motivates this paper. The main purpose of this paper is to establish partial information necessary and sufficient conditions for optimality for Problem (P) by using the results in [11]. The results obtained here can be considered a generalization of the stochastic optimal control problem to the two-person zero-sum case. As an application, a two-person zero-sum stochastic differential game of linear backward stochastic differential equations with a quadratic cost criterion under partial information is discussed, and the optimal control is characterized explicitly via the adjoint processes.

The rest of this paper is organized as follows. We introduce useful notations and state the needed assumptions in Section 2. Section 3 is devoted to the sufficient condition for the existence of a saddle point. In Section 4, we establish the necessary condition of optimality. In Section 5, a linear quadratic stochastic differential game problem is solved by applying the theoretical results.

2. Preliminaries and Assumptions

Let $(\Omega,\mathcal F,\{\mathcal F_t\}_{0\le t\le T},P)$ be a complete probability space. The filtration $\{\mathcal F_t\}_{0\le t\le T}$ is right-continuous and generated by a $d$-dimensional standard Brownian motion $\{W(t),0\le t\le T\}$ and a one-dimensional Lévy process $\{L(t),0\le t\le T\}$. It is known that $L(t)$ has a characteristic function of the form

$$\mathbb E\big[e^{i\theta L(t)}\big]=\exp\Big[ia\theta t-\tfrac12\sigma^2\theta^2t+t\int_{\mathbb R}\big(e^{i\theta x}-1-i\theta x\mathbf 1_{\{|x|<1\}}\big)\,\nu(dx)\Big],$$

where $a\in\mathbb R$, $\sigma>0$, and $\nu$ is a measure on $\mathbb R$ satisfying the following: (i) $\int_{\mathbb R}(1\wedge x^2)\,\nu(dx)<\infty$ and (ii) there exist $\varepsilon>0$ and $\lambda>0$ such that $\int_{\mathbb R\setminus(-\varepsilon,\varepsilon)}e^{\lambda|x|}\,\nu(dx)<\infty$. These settings imply that the random variables $L(t)$ have moments of all orders. We denote by $\{H_i(t),0\le t\le T\}_{i\ge1}$ the Teugels martingales associated with the Lévy process $\{L(t),0\le t\le T\}$. Here, $H_i(t)$ is given by

$$H_i(t)=c_{i,i}Y_i(t)+c_{i,i-1}Y_{i-1}(t)+\cdots+c_{i,1}Y_1(t),\tag{4}$$

where $Y_i(t)=L^{(i)}(t)-\mathbb E[L^{(i)}(t)]$ for all $i\ge1$, the $L^{(i)}(t)$ are the so-called power-jump processes with $L^{(1)}(t)=L(t)$ and $L^{(i)}(t)=\sum_{0<s\le t}(\Delta L(s))^i$ for $i\ge2$, and the coefficients $c_{i,j}$ correspond to the orthonormalization of the polynomials $1,x,x^2,\dots$ with respect to the measure $\mu(dx)=x^2\nu(dx)+\sigma^2\delta_0(dx)$. The Teugels martingales $\{H_i(t)\}_{i\ge1}$ are pairwise strongly orthogonal, and their predictable quadratic variation processes are given by

$$\langle H_i(t),H_j(t)\rangle=\delta_{ij}t.\tag{5}$$

For more details on Teugels martingales, we invite the reader to consult Nualart and Schoutens [4, 5]. Denote by $\mathcal P$ the predictable sub-$\sigma$-field of $[0,T]\times\Omega$; then, we introduce the following notation used throughout this paper.
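The coefficients $c_{i,j}$ above come from orthonormalizing $1,x,x^2,\dots$ in $L^2(\mu)$. The following sketch illustrates this construction numerically for a hypothetical Lévy measure $\nu$ with finitely many atoms (the jump sizes, rates, and $\sigma$ below are purely illustrative assumptions, not from the paper); with finitely many atoms, the inner products reduce to finite sums.

```python
import numpy as np

# Toy illustration of the orthonormalization behind the Teugels martingales.
# We take a discrete Lévy measure nu on a few jump sizes (an assumption made
# purely for illustration) plus the Gaussian part sigma^2 * delta_0, and
# orthonormalize 1, x, x^2 with respect to
#     mu(dx) = x^2 nu(dx) + sigma^2 delta_0(dx).

jump_sizes = np.array([-1.0, 0.5, 2.0])   # support of nu (hypothetical)
jump_rates = np.array([0.3, 1.0, 0.2])    # nu({x}) for each jump size
sigma = 0.7

# atoms of mu: weight x^2 * nu({x}) at each jump size, and sigma^2 at 0
support = np.concatenate([jump_sizes, [0.0]])
weights = np.concatenate([jump_sizes**2 * jump_rates, [sigma**2]])

def mu_inner(p, q):
    """Inner product <p, q>_mu = sum_x p(x) q(x) mu({x})."""
    return np.sum(np.polyval(p, support) * np.polyval(q, support) * weights)

def orthonormalize(n):
    """Gram-Schmidt on 1, x, ..., x^(n-1) w.r.t. mu; rows give the c_{i,j}."""
    basis = []                                        # orthonormal polynomials
    for i in range(n):
        p = np.zeros(n)
        p[n - 1 - i] = 1.0                            # monomial x^i
        for e in basis:                               # remove projections
            p = p - mu_inner(p, e) * e
        p = p / np.sqrt(mu_inner(p, p))               # normalize in L^2(mu)
        basis.append(p)
    return basis

basis = orthonormalize(3)
gram = np.array([[mu_inner(p, q) for q in basis] for p in basis])
print(np.round(gram, 10))   # identity matrix: pairwise strong orthonormality
```

The printed Gram matrix being the identity is the discrete analogue of the orthonormality that gives $\langle H_i,H_j\rangle_t=\delta_{ij}t$ in (5).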

In the following, we introduce some basic spaces:

$H$: a Hilbert space with norm $\|\cdot\|_H$.

$\langle\alpha,\beta\rangle$: the inner product in $\mathbb R^n$, $\alpha,\beta\in\mathbb R^n$.

$|\alpha|=\sqrt{\langle\alpha,\alpha\rangle}$: the norm of $\mathbb R^n$, $\alpha\in\mathbb R^n$.

$\langle A,B\rangle=\mathrm{tr}(AB^{\top})$: the inner product in $\mathbb R^{n\times m}$, $A,B\in\mathbb R^{n\times m}$.

$|A|=\sqrt{\mathrm{tr}(AA^{\top})}$: the norm of $\mathbb R^{n\times m}$, $A\in\mathbb R^{n\times m}$.

$l^2$: the space of all real-valued sequences $x=(x_i)_{i\ge1}$ satisfying

$$\|x\|_{l^2}\equiv\sqrt{\sum_{i=1}^{\infty}x_i^2}<+\infty.\tag{6}$$

$l^2(H)$: the space of all $H$-valued sequences $f=(f_i)_{i\ge1}$ satisfying

$$\|f\|_{l^2(H)}\equiv\sqrt{\sum_{i=1}^{\infty}\|f_i\|_H^2}<+\infty.\tag{7}$$

$l^2_{\mathcal F}(0,T;H)$: the space of all $l^2(H)$-valued and $\mathcal F_t$-predictable processes $f=\{f_i(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}_{i\ge1}$ satisfying

$$\|f\|_{l^2_{\mathcal F}(0,T;H)}\equiv\sqrt{\mathbb E\int_0^T\sum_{i=1}^{\infty}\|f_i(t)\|_H^2\,dt}<\infty.\tag{8}$$

$M^2(0,T;H)$: the space of all $H$-valued and $\mathcal F_t$-adapted processes $f=\{f(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}$ satisfying

$$\|f\|_{M^2(0,T;H)}\equiv\sqrt{\mathbb E\int_0^T\|f(t)\|_H^2\,dt}<\infty.\tag{9}$$

$S^2(0,T;H)$: the space of all $H$-valued and $\mathcal F_t$-adapted càdlàg processes $f=\{f(t,\omega),\ (t,\omega)\in[0,T]\times\Omega\}$ satisfying

$$\|f\|_{S^2(0,T;H)}\equiv\sqrt{\mathbb E\Big[\sup_{0\le t\le T}\|f(t)\|_H^2\Big]}<+\infty.\tag{10}$$

$L^2(\Omega,\mathcal F,P;H)$: the space of all $H$-valued random variables $\xi$ on $(\Omega,\mathcal F,P)$ satisfying

$$\|\xi\|_{L^2(\Omega,\mathcal F,P;H)}\equiv\sqrt{\mathbb E\|\xi\|_H^2}<\infty.\tag{11}$$

The coefficients of the state equation (1) and the cost functional (2) are defined as follows:

$$\xi:\Omega\to\mathbb R^n,\qquad f:[0,T]\times\Omega\times\mathbb R^n\times\mathbb R^{n\times d}\times l^2(\mathbb R^n)\times U_1\times U_2\to\mathbb R^n,\qquad l:[0,T]\times\Omega\times\mathbb R^n\times\mathbb R^{n\times d}\times l^2(\mathbb R^n)\times U_1\times U_2\to\mathbb R,\tag{12}$$

$$\phi:\Omega\times\mathbb R^n\to\mathbb R.\tag{13}$$

Throughout this paper, we introduce the following basic assumptions on coefficients ξ,f,l,ϕ.

Assumption 1.

$\xi\in L^2(\Omega,\mathcal F_T,P;\mathbb R^n)$, and the random mapping $f$ is predictable w.r.t. $\mathcal F_t$, Borel measurable w.r.t. the other variables, and $f(\cdot,0,0,0,0,0)\in M^2(0,T;\mathbb R^n)$. For almost all $(t,\omega)\in[0,T]\times\Omega$, $f(t,\omega,y,q,z,u_1,u_2)$ is Fréchet differentiable w.r.t. $(y,q,z,u_1,u_2)$, and the corresponding Fréchet derivatives $f_y,f_q,f_z,f_{u_1}$, and $f_{u_2}$ are continuous and uniformly bounded.

Assumption 2.

The random mapping $l$ is predictable w.r.t. $\mathcal F_t$, Borel measurable w.r.t. the other variables, and for almost all $(t,\omega)\in[0,T]\times\Omega$, $l$ is Fréchet differentiable w.r.t. $(y,q,z,u_1,u_2)$ with continuous Fréchet derivatives $l_y,l_q,l_z,l_{u_1}$, and $l_{u_2}$. The random mapping $\phi$ is measurable, and for almost all $\omega\in\Omega$, $\phi$ is Fréchet differentiable w.r.t. $y$ with continuous Fréchet derivative $\phi_y$. Moreover, there exists a constant $C$ such that, for almost all $(t,\omega)\in[0,T]\times\Omega$ and all $(y,q,z,u_1,u_2)\in\mathbb R^n\times\mathbb R^{n\times d}\times l^2(\mathbb R^n)\times U_1\times U_2$:

$$|l|\le C\big(1+|y|^2+|q|^2+\|z\|^2+|u_1|^2+|u_2|^2\big),\qquad |\phi|\le C\big(1+|y|^2\big),\qquad |l_y|+|l_q|+|l_z|+|l_{u_1}|+|l_{u_2}|\le C\big(1+|y|+|q|+\|z\|+|u_1|+|u_2|\big),\tag{14}$$

$$|\phi_y|\le C\big(1+|y|\big).\tag{15}$$

Under Assumption 1, for each $(u_1,u_2)\in\mathcal A_1\times\mathcal A_2$, system (1) admits a unique strong solution by the existence and uniqueness theory for BSDEs driven by Teugels martingales and an independent Brownian motion (see [6, 11]). Furthermore, by Assumption 2 and a priori estimates for such BSDEs, it is easy to check that

$$|J(u_1,u_2)|<\infty.\tag{16}$$

So, Problem (P) is well defined.

3. A Partial Information Sufficient Maximum Principle

In this section, we study a sufficient maximum principle for Problem (P).

In our setting, the Hamiltonian function $H:[0,T]\times\mathbb R^n\times\mathbb R^{n\times d}\times l^2(\mathbb R^n)\times U_1\times U_2\times\mathbb R^n\to\mathbb R$ is of the following form:

$$H(t,y,q,z,u_1,u_2,k)=\big\langle k,f(t,y,q,z,u_1,u_2)\big\rangle+l(t,y,q,z,u_1,u_2).\tag{17}$$

The adjoint equation, which fits into system (1) and cost (2) corresponding to a given admissible quintuplet $(u_1,u_2;y,q,z)$, is the following forward stochastic differential equation driven by the multidimensional Brownian motion $W$ and the Teugels martingales $\{H_i\}_{i\ge1}$:

$$\begin{cases}dk(t)=H_y\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dt+\displaystyle\sum_{i=1}^{d}H_{q_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dW_i(t)+\displaystyle\sum_{i=1}^{\infty}H_{z_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,dH_i(t),\\ k(0)=\phi_y\big(y(0)\big).\end{cases}\tag{18}$$

Under Assumptions 1 and 2, the forward stochastic differential equation (18) has a unique solution $k\in S^2(0,T;\mathbb R^n)$ (cf. [8, 11]).
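The form of the adjoint equation (18) can be motivated by a duality computation, sketched below under the standing assumptions (the rigorous argument is the variational analysis in the cited maximum principle literature): one pairs $k$ with the first-order variation of the state along a perturbation $v$ of $u_1$ and uses $\langle H_i,H_j\rangle_t=\delta_{ij}t$ from (5).

```latex
% First-order variation of the state along a perturbation v of u_1:
%   dy^1 = -\big(f_y y^1 + \sum_i f_{q_i} q_i^1 + \sum_i f_{z_i} z_i^1
%               + f_{u_1} v\big)\,dt + \sum_i q_i^1\,dW_i + \sum_i z_i^1\,dH_i,
%   \qquad y^1(T) = 0.
% Applying the product rule to \langle k, y^1\rangle with
% dk = H_y\,dt + \sum_i H_{q_i}\,dW_i + \sum_i H_{z_i}\,dH_i and
% k(0) = \phi_y(y(0)) gives, since y^1(T) = 0,
\begin{aligned}
-\,\mathbb{E}\big\langle \phi_y(y(0)),\, y^1(0)\big\rangle
  &= \mathbb{E}\int_0^T\Big[\big\langle H_y - f_y^{\top}k,\; y^1\big\rangle
     + \sum_{i=1}^{d}\big\langle H_{q_i} - f_{q_i}^{\top}k,\; q_i^1\big\rangle \\
  &\qquad\qquad
     + \sum_{i=1}^{\infty}\big\langle H_{z_i} - f_{z_i}^{\top}k,\; z_i^1\big\rangle
     - \big\langle f_{u_1}^{\top}k,\; v\big\rangle\Big]\,dt.
\end{aligned}
% Since H = \langle k, f\rangle + l, we have H_y - f_y^{\top}k = l_y
% (and similarly for the q_i and z_i derivatives), so adding the first-order
% expansion of the cost functional yields
%   \tfrac{d}{d\varepsilon} J(u_1 + \varepsilon v, u_2)\big|_{\varepsilon = 0}
%     = \mathbb{E}\int_0^T \langle H_{u_1},\, v\rangle\,dt,
% which is the variational identity behind (19) and (45).
```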

We now come to a verification theorem for Problem (P).

Theorem 1 (partial information sufficient maximum principle).

Let Assumptions 1 and 2 hold. Let $(\bar u_1,\bar u_2;\bar y,\bar q,\bar z)$ be an admissible quintuplet and $\bar k$ the unique strong solution of the corresponding adjoint equation (18). Suppose that the Hamiltonian function $H$ satisfies the following conditional maximum principle:

$$\begin{aligned}\inf_{u_1\in U_1}\mathbb E\big[H\big(t,\bar y(t),\bar q(t),\bar z(t),u_1,\bar u_2(t),\bar k(t)\big)\,\big|\,\mathcal G_t\big]&=\mathbb E\big[H\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),\bar u_2(t),\bar k(t)\big)\,\big|\,\mathcal G_t\big]\\&=\sup_{u_2\in U_2}\mathbb E\big[H\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),u_2,\bar k(t)\big)\,\big|\,\mathcal G_t\big].\end{aligned}\tag{19}$$

(i) Suppose that, for all $t\in[0,T]$, $\phi(y)$ is convex in $y$ and the mapping

$$(y,q,z,u_1)\mapsto H\big(t,y,q,z,u_1,\bar u_2(t),\bar k(t)\big)\tag{20}$$

is convex. Then, for all $u_1\in\mathcal A_1$,

$$J(\bar u_1,\bar u_2)\le J(u_1,\bar u_2),\tag{21}$$

$$J(\bar u_1,\bar u_2)=\inf_{u_1\in\mathcal A_1}J(u_1,\bar u_2).\tag{22}$$

(ii) Suppose that, for all $t\in[0,T]$, $\phi(y)$ is concave in $y$ and the mapping

$$(y,q,z,u_2)\mapsto H\big(t,y,q,z,\bar u_1(t),u_2,\bar k(t)\big)\tag{23}$$

is concave. Then, for all $u_2\in\mathcal A_2$,

$$J(\bar u_1,\bar u_2)\ge J(\bar u_1,u_2),\tag{24}$$

$$J(\bar u_1,\bar u_2)=\sup_{u_2\in\mathcal A_2}J(\bar u_1,u_2).\tag{25}$$

(iii) If both cases (i) and (ii) hold (which implies, in particular, that $\phi$ is an affine function), then $(\bar u_1,\bar u_2)$ is an open-loop saddle point and

$$J(\bar u_1,\bar u_2)=\sup_{u_2\in\mathcal A_2}\inf_{u_1\in\mathcal A_1}J(u_1,u_2)=\inf_{u_1\in\mathcal A_1}\sup_{u_2\in\mathcal A_2}J(u_1,u_2).\tag{26}$$

Proof. (i) We consider an auxiliary stochastic optimal control problem over $\mathcal A_1$, where the system is

$$y(t)=\xi+\int_t^T f\big(s,y(s),q(s),z(s),u_1(s),\bar u_2(s)\big)\,ds-\sum_{i=1}^{d}\int_t^T q_i(s)\,dW_i(s)-\sum_{i=1}^{\infty}\int_t^T z_i(s)\,dH_i(s),\tag{27}$$

with the cost functional

$$J(u_1,\bar u_2)=\mathbb E\Big[\phi\big(y(0)\big)+\int_0^T l\big(t,y(t),q(t),z(t),u_1(t),\bar u_2(t)\big)\,dt\Big].\tag{28}$$

Our optimal control problem is to minimize $J(u_1,\bar u_2)$ over $u_1\in\mathcal A_1$, i.e., to find $\bar u_1\in\mathcal A_1$ such that

$$J(\bar u_1,\bar u_2)=\inf_{u_1\in\mathcal A_1}J(u_1,\bar u_2).\tag{29}$$

For this problem, it is easy to check that the Hamiltonian is $H(t,y,q,z,u_1,\bar u_2(t),k)$ and that, for the admissible control $\bar u_1\in\mathcal A_1$, the corresponding state process and adjoint process are still $(\bar y,\bar q,\bar z)$ and $\bar k$, respectively. The optimality condition is

$$\inf_{u_1\in U_1}\mathbb E\big[H\big(t,\bar y(t),\bar q(t),\bar z(t),u_1,\bar u_2(t),\bar k(t)\big)\,\big|\,\mathcal G_t\big]=\mathbb E\big[H\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),\bar u_2(t),\bar k(t)\big)\,\big|\,\mathcal G_t\big].\tag{30}$$

Thus, from the partial information sufficient maximum principle for optimal control (see Theorem 1 in [11]), we conclude that $\bar u_1$ is an optimal control of this problem, i.e.,

$$J(\bar u_1,\bar u_2)=\inf_{u_1\in\mathcal A_1}J(u_1,\bar u_2).\tag{31}$$

The proof of (i) is complete.

(ii) This statement can be proved in a similar way as (i).

(iii) If both (i) and (ii) hold, then

$$J(\bar u_1,u_2)\le J(\bar u_1,\bar u_2)\le J(u_1,\bar u_2),\tag{32}$$

for any $(u_1,u_2)\in\mathcal A_1\times\mathcal A_2$, i.e.,

$$J(\bar u_1,\bar u_2)\le\inf_{u_1\in\mathcal A_1}J(u_1,\bar u_2)\le\sup_{u_2\in\mathcal A_2}\inf_{u_1\in\mathcal A_1}J(u_1,u_2).\tag{33}$$

On the other hand,

$$J(\bar u_1,\bar u_2)\ge\sup_{u_2\in\mathcal A_2}J(\bar u_1,u_2)\ge\inf_{u_1\in\mathcal A_1}\sup_{u_2\in\mathcal A_2}J(u_1,u_2).\tag{34}$$

Now, combining these with the well-known minimax inequality

$$\inf_{u_1\in\mathcal A_1}\sup_{u_2\in\mathcal A_2}J(u_1,u_2)\ge\sup_{u_2\in\mathcal A_2}\inf_{u_1\in\mathcal A_1}J(u_1,u_2),\tag{35}$$

we have

$$J(\bar u_1,\bar u_2)=\inf_{u_1\in\mathcal A_1}\sup_{u_2\in\mathcal A_2}J(u_1,u_2)=\sup_{u_2\in\mathcal A_2}\inf_{u_1\in\mathcal A_1}J(u_1,u_2).\tag{36}$$

The proof of the theorem is completed.

If the control processes $(u_1,u_2)$ are admissible and adapted to the filtration $\mathcal F_t$, we have the following full information sufficient maximum principle.

Corollary 1.

Suppose that $\mathcal G_t=\mathcal F_t$. Moreover, suppose that, for all $t\in[0,T]$, the following maximum principle holds:

$$\begin{aligned}\inf_{u_1\in U_1}H\big(t,\bar y(t),\bar q(t),\bar z(t),u_1,\bar u_2(t),\bar k(t)\big)&=H\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),\bar u_2(t),\bar k(t)\big)\\&=\sup_{u_2\in U_2}H\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),u_2,\bar k(t)\big).\end{aligned}\tag{37}$$

(i) Suppose that, for all $t\in[0,T]$, $\phi(y)$ is convex in $y$ and the mapping

$$(y,q,z,u_1)\mapsto H\big(t,y,q,z,u_1,\bar u_2(t),\bar k(t)\big)\tag{38}$$

is convex. Then, for all $u_1\in\mathcal A_1$,

$$J(\bar u_1,\bar u_2)\le J(u_1,\bar u_2),\tag{39}$$

$$J(\bar u_1,\bar u_2)=\inf_{u_1\in\mathcal A_1}J(u_1,\bar u_2).\tag{40}$$

(ii) Suppose that, for all $t\in[0,T]$, $\phi(y)$ is concave in $y$ and the mapping

$$(y,q,z,u_2)\mapsto H\big(t,y,q,z,\bar u_1(t),u_2,\bar k(t)\big)\tag{41}$$

is concave. Then, for all $u_2\in\mathcal A_2$,

$$J(\bar u_1,\bar u_2)\ge J(\bar u_1,u_2),\tag{42}$$

$$J(\bar u_1,\bar u_2)=\sup_{u_2\in\mathcal A_2}J(\bar u_1,u_2).\tag{43}$$

(iii) If both cases (i) and (ii) hold (which implies, in particular, that $\phi$ is an affine function), then $(\bar u_1,\bar u_2)$ is an open-loop saddle point based on the information flow $\{\mathcal F_t\}_{0\le t\le T}$ and

$$J(\bar u_1,\bar u_2)=\sup_{u_2\in\mathcal A_2}\inf_{u_1\in\mathcal A_1}J(u_1,u_2)=\inf_{u_1\in\mathcal A_1}\sup_{u_2\in\mathcal A_2}J(u_1,u_2).\tag{44}$$

4. Partial Information Necessary Maximum Principle

In this section, we give a necessary maximum principle for Problem (P).

Theorem 2 (a partial information necessary maximum principle).

Under Assumptions 1 and 2, let $(\bar u_1,\bar u_2)$ be an optimal control of Problem (P). Suppose that $(\bar y,\bar q,\bar z)$ is the state process of system (1) corresponding to the admissible control $(\bar u_1,\bar u_2)$, and let $\bar k$ be the unique solution of the adjoint equation (18) corresponding to $(\bar u_1,\bar u_2;\bar y,\bar q,\bar z)$. Then, for all $u_1\in U_1$ and all $u_2\in U_2$, we have

$$\big\langle\mathbb E[H_{u_1}(t)\,|\,\mathcal G_t],\,u_1-\bar u_1(t)\big\rangle\ge0,\qquad \big\langle\mathbb E[H_{u_2}(t)\,|\,\mathcal G_t],\,u_2-\bar u_2(t)\big\rangle\le0,\quad\text{a.s., a.e.},\tag{45}$$

where, for $i=1,2$,

$$H_{u_i}(t)\equiv H_{u_i}\big(t,\bar y(t),\bar q(t),\bar z(t),\bar u_1(t),\bar u_2(t),\bar k(t)\big).\tag{46}$$

Proof. Since $(\bar u_1,\bar u_2)$ is an optimal open-loop control of Problem (P), it is an open-loop saddle point, i.e.,

$$J(\bar u_1,u_2)\le J(\bar u_1,\bar u_2)\le J(u_1,\bar u_2),\tag{47}$$

for all $(u_1,u_2)\in\mathcal A_1\times\mathcal A_2$.

So, we have

$$J(\bar u_1,\bar u_2)=\min_{u_1\in\mathcal A_1}J(u_1,\bar u_2),\tag{48}$$

$$J(\bar u_1,\bar u_2)=\max_{u_2\in\mathcal A_2}J(\bar u_1,u_2).\tag{49}$$

By (48), $\bar u_1$ can be regarded as an optimal control of the optimal control problem whose controlled system is (27) and whose cost functional is (28). For this case, it is easy to check that the Hamiltonian is $H(t,y,q,z,u_1,\bar u_2(t),k)$ and that, for the optimal control $\bar u_1\in\mathcal A_1$, the corresponding optimal state process and adjoint process are still $(\bar y,\bar q,\bar z)$ and $\bar k$, respectively. Thus, applying the partial information necessary stochastic maximum principle for optimal control problems (see Theorem 2 in [11]), we obtain the inequality in (45) for $i=1$. Similarly, from (49), we obtain the inequality in (45) for $i=2$. The proof is complete.

5. Application: A Linear Quadratic Stochastic Differential Game Problem

In this section, we apply the above stochastic maximum principles to a linear quadratic problem under partial information; namely, we consider the game problem with the following quadratic cost functional over controls $(u_1,u_2)$ valued in $\mathbb R^{m_1}\times\mathbb R^{m_2}$:

$$J(u_1,u_2)\equiv\mathbb E\big\langle M,y(0)\big\rangle+\mathbb E\int_0^T\Big[\big\langle E(s),y(s)\big\rangle+\sum_{i=1}^{d}\big\langle F_i(s),q_i(s)\big\rangle+\sum_{i=1}^{\infty}\big\langle G_i(s),z_i(s)\big\rangle+\big\langle N_1(s)u_1(s),u_1(s)\big\rangle-\big\langle N_2(s)u_2(s),u_2(s)\big\rangle\Big]\,ds,\tag{50}$$

where the state process $(y,q,z)$ is the solution to the controlled linear backward stochastic system below:

$$\begin{cases}dy(t)=-\Big(A(t)y(t)+\displaystyle\sum_{i=1}^{d}B_i(t)q_i(t)+\displaystyle\sum_{i=1}^{\infty}C_i(t)z_i(t)+D_1(t)u_1(t)+D_2(t)u_2(t)\Big)\,dt+\displaystyle\sum_{i=1}^{d}q_i(t)\,dW_i(t)+\displaystyle\sum_{i=1}^{\infty}z_i(t)\,dH_i(t),\\ y(T)=\xi.\end{cases}\tag{51}$$

This problem is denoted by Problem (LQ). To study it, we need the following assumptions on the coefficients.

Assumption 3.

The matrix-valued functions $A:[0,T]\to\mathbb R^{n\times n}$; $B_i:[0,T]\to\mathbb R^{n\times n}$, $i=1,2,\dots,d$; $C_i:[0,T]\to\mathbb R^{n\times n}$, $i=1,2,\dots$; $D_i:[0,T]\to\mathbb R^{n\times m_i}$, $i=1,2$; $N_i:[0,T]\to\mathbb R^{m_i\times m_i}$, $i=1,2$; the vector-valued functions $E:[0,T]\to\mathbb R^{n}$; $F_i:[0,T]\to\mathbb R^{n}$, $i=1,2,\dots,d$; $G_i:[0,T]\to\mathbb R^{n}$, $i=1,2,\dots$; and the vector $M\in\mathbb R^n$ are uniformly bounded. Moreover, $N_i$ is uniformly positive, i.e., $N_i(t)\ge\delta I$, $i=1,2$, for some positive constant $\delta$.

Assumption 4.

There is no further constraint imposed on the control processes; the set of all admissible control processes is

$$\mathcal A_1\times\mathcal A_2=\Big\{(u_1,u_2)\,:\,u_i\ \text{is a}\ \mathcal G_t\text{-predictable process with values in}\ \mathbb R^{m_i}\ \text{and}\ \mathbb E\int_0^T|u_i(t)|^2\,dt<\infty,\ i=1,2\Big\}.\tag{52}$$

In what follows, we will utilize the stochastic maximum principle to study the dual representation of the game Problem (LQ).

We first define the Hamiltonian function $H:\Omega\times[0,T]\times\mathbb R^n\times\mathbb R^{n\times d}\times l^2(\mathbb R^n)\times\mathbb R^{m_1}\times\mathbb R^{m_2}\times\mathbb R^n\to\mathbb R$ by

$$H(t,y,q,z,u_1,u_2,k)=\Big\langle k,\,A(t)y+\sum_{i=1}^{d}B_i(t)q_i+\sum_{i=1}^{\infty}C_i(t)z_i+D_1(t)u_1+D_2(t)u_2\Big\rangle+\big\langle E(t),y\big\rangle+\sum_{i=1}^{d}\big\langle F_i(t),q_i\big\rangle+\sum_{i=1}^{\infty}\big\langle G_i(t),z_i\big\rangle+\big\langle N_1(t)u_1,u_1\big\rangle-\big\langle N_2(t)u_2,u_2\big\rangle.\tag{53}$$

Then, the adjoint equation corresponding to an admissible quintuplet $(u_1,u_2;y,q,z)$ can be rewritten as

$$\begin{cases}dk(t)=\big(A^{\top}(t)k(t)+E(t)\big)\,dt+\displaystyle\sum_{i=1}^{d}\big(B_i^{\top}(t)k(t)+F_i(t)\big)\,dW_i(t)+\displaystyle\sum_{i=1}^{\infty}\big(C_i^{\top}(t)k(t)+G_i(t)\big)\,dH_i(t),\\ k(0)=M.\end{cases}\tag{54}$$

Under Assumption 3, for any admissible quintuplet $(u_1,u_2;y,q,z)$, the adjoint equation (54) has a unique solution $k$ (cf. [8, 11]).

We now give the dual characterization of the saddle point.

Theorem 3.

Let Assumptions 3 and 4 be satisfied. Then, a necessary and sufficient condition for an admissible quintuplet $(u_1,u_2;y,q,z)$ to be a saddle quintuplet of Problem (LQ) is that the control $(u_1,u_2)$ has the representation

$$u_1(t)=-\frac12 N_1^{-1}(t)D_1^{\top}(t)\,\mathbb E\big[k(t)\,\big|\,\mathcal G_t\big],\qquad u_2(t)=\frac12 N_2^{-1}(t)D_2^{\top}(t)\,\mathbb E\big[k(t)\,\big|\,\mathcal G_t\big],\tag{55}$$

where $k$ is the unique solution of the adjoint equation (54).

Proof. For the necessity, let $(u_1,u_2;y,q,z)$ be a saddle quintuplet; then, by the necessary optimality condition (45) and the fact that $U_i=\mathbb R^{m_i}$, $i=1,2$, we have, a.e., a.s.,

$$\mathbb E\big[H_{u_i}\big(t,y(t),q(t),z(t),u_1(t),u_2(t),k(t)\big)\,\big|\,\mathcal G_t\big]=0.\tag{56}$$

Noticing the definition of $H$ in (53), we obtain

$$2N_1(t)u_1(t)+D_1^{\top}(t)\,\mathbb E\big[k(t)\,\big|\,\mathcal G_t\big]=0,\quad\text{a.e., a.s.},\tag{57}$$

$$-2N_2(t)u_2(t)+D_2^{\top}(t)\,\mathbb E\big[k(t)\,\big|\,\mathcal G_t\big]=0,\quad\text{a.e., a.s.}\tag{58}$$

So, the saddle point $(u_1,u_2)$ has the dual representation (55).

For the sufficiency, let $(u_1,u_2;y,q,z)$ be an admissible quintuplet satisfying (55). By the classical technique of completing squares, it follows from (55) that $(u_1,u_2;y,q,z)$ satisfies the optimality condition (19) in Theorem 1. Moreover, from Assumptions 3 and 4, it is easy to check that all other conditions in Theorem 1 are satisfied. Hence, $(u_1,u_2;y,q,z)$ is a saddle quintuplet by Theorem 1.
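As a sanity check on the representation (55), the following toy computation (the dimensions and random data are illustrative assumptions; we take $\mathcal G_t=\mathcal F_t$ at a fixed time point, so the conditional expectation reduces to $k$ itself) verifies numerically that the candidate controls annihilate the Hamiltonian gradients in (57) and (58), and that $u_1$ minimizes while $u_2$ maximizes the control-dependent part of $H$ in (53).

```python
import numpy as np

# Static full-information toy check of the first-order conditions behind (55).
# Dimensions and random data below are illustrative assumptions.

rng = np.random.default_rng(1)
n, m1, m2 = 4, 2, 3
D1 = rng.normal(size=(n, m1))
D2 = rng.normal(size=(n, m2))
k = rng.normal(size=n)          # stand-in for E[k(t) | G_t]

def spd(m):
    """A uniformly positive matrix N_i, as required by Assumption 3."""
    a = rng.normal(size=(m, m))
    return a @ a.T + m * np.eye(m)

N1, N2 = spd(m1), spd(m2)

# Candidate saddle controls from (55):
u1 = -0.5 * np.linalg.solve(N1, D1.T @ k)
u2 = 0.5 * np.linalg.solve(N2, D2.T @ k)

# Gradients of the control-dependent part of the Hamiltonian
#   H = <k, D1 u1 + D2 u2> + <N1 u1, u1> - <N2 u2, u2>  (other terms fixed)
grad_u1 = D1.T @ k + 2 * N1 @ u1     # should vanish at the minimizer u1, cf. (57)
grad_u2 = D2.T @ k - 2 * N2 @ u2     # should vanish at the maximizer u2, cf. (58)
print(np.allclose(grad_u1, 0), np.allclose(grad_u2, 0))  # True True

# u1 minimizes H over random perturbations (convexity in u1, since N1 > 0):
def H_u1(v): return k @ (D1 @ v) + v @ (N1 @ v)
def H_u2(v): return k @ (D2 @ v) - v @ (N2 @ v)
trials = [rng.normal(size=m1) for _ in range(100)]
print(all(H_u1(u1) <= H_u1(v) + 1e-9 for v in trials))   # True
```

Because $N_1>0$ and $N_2>0$, the quadratic maps above are strictly convex (respectively concave), so the vanishing gradients certify a genuine pointwise min/max, mirroring the completing-squares argument in the sufficiency part of the proof.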

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (nos. 11871121 and 11701369) and Natural Science Foundation of Zhejiang Province for Distinguished Young Scholar (no. LR15A010001).

References

[1] T. T. K. An and B. Øksendal, "Maximum principle for stochastic differential games with partial information," Journal of Optimization Theory and Applications, vol. 139, no. 3, pp. 463–483, 2008.
[2] B. Øksendal and A. Sulem, "Forward-backward stochastic differential games and stochastic control under model uncertainty," Journal of Optimization Theory and Applications, vol. 161, no. 1, pp. 22–55, 2014.
[3] G. Wang and Z. Yu, "A partial information non-zero sum differential game of backward stochastic differential equations with applications," Automatica, vol. 48, no. 2, pp. 342–352, 2012.
[4] D. Nualart and W. Schoutens, "Chaotic and predictable representations for Lévy processes," Stochastic Processes and their Applications, vol. 90, no. 1, pp. 109–122, 2000.
[5] D. Nualart and W. Schoutens, "Backward stochastic differential equations and Feynman-Kac formula for Lévy processes, with applications in finance," Bernoulli, vol. 7, no. 5, pp. 761–776, 2001.
[6] K. Bahlali, M. Eddahbi, and E. Essaky, "BSDE associated with Lévy processes and application to PDIE," Journal of Applied Mathematics and Stochastic Analysis, vol. 16, no. 1, pp. 1–17, 2003.
[7] K.-I. Mitsui and Y. Tabata, "A stochastic linear-quadratic problem with Lévy processes and its application to finance," Stochastic Processes and their Applications, vol. 118, no. 1, pp. 120–152, 2008.
[8] Q. X. Meng and M. N. Tang, "Necessary and sufficient conditions for optimal control of stochastic systems associated with Lévy processes," Science China Information Sciences, vol. 52, no. 11, pp. 1982–1992, 2009.
[9] M. Tang and Q. Zhang, "Optimal variational principle for backward stochastic control systems associated with Lévy processes," Science China Mathematics, vol. 55, no. 4, pp. 745–761, 2012.
[10] K. Bahlali, N. Khelfallah, and B. Mezerdi, "Optimality conditions for partial information stochastic control problems driven by Lévy processes," Systems & Control Letters, vol. 61, no. 11, pp. 1079–1084, 2012.
[11] Q. Meng, F. Zhang, and M. Tang, "Maximum principle for backward stochastic systems associated with Lévy processes under partial information," in Proceedings of the 31st Chinese Control Conference, pp. 1–6, Hefei, China, July 2012.
[12] K. Du and Z. Wu, "Linear-quadratic Stackelberg game for mean-field backward stochastic differential system and application," Mathematical Problems in Engineering, vol. 2019, Article ID 1798585, 2019.
[13] J. Shi, G. Wang, and J. Xiong, "Leader-follower stochastic differential game with asymmetric information and applications," Automatica, vol. 63, pp. 60–73, 2016.
[14] G. Wang, H. Xiao, and J. Xiong, "A kind of LQ non-zero sum differential game of backward stochastic differential equation with asymmetric information," Automatica, vol. 97, pp. 346–352, 2018.
[15] J. Wu and Z. Liu, "Maximum principle for mean-field zero-sum stochastic differential game with partial information and its applications to finance," European Journal of Control, vol. 37, pp. 8–15, 2017.
[16] J. Wu and Z. Liu, "Optimal control of mean-field backward doubly stochastic systems driven by Itô-Lévy processes," International Journal of Control, vol. 93, no. 4, pp. 953–970, 2020.
[17] W. Yu, F. Wang, Y. Huang, and H. Liu, "Social optimal mean field control problem for population growth model," Asian Journal of Control, 2019.