
In this paper, we consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by a backward stochastic differential equation driven by Teugels martingales and an independent Brownian motion. Sufficient and necessary conditions for the existence of a saddle point of the game are proved. As an application, a linear quadratic stochastic differential game problem is discussed.

Consider a partial information two-person zero-sum stochastic differential game problem, where the system is governed by the following nonlinear backward stochastic differential equation (BSDE), for any
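Although the displayed equation is omitted above, a controlled BSDE of this type typically takes the following form — a generic sketch under standard notation, not necessarily the authors' exact formulation — where $W$ is a Brownian motion, $\{H^{(i)}\}_{i\ge 1}$ are the Teugels martingales associated with a Lévy process, $(y,z,k)$ is the solution triple, and $v_1$, $v_2$ are the controls of Players I and II:

```latex
\begin{equation*}
\begin{cases}
-\mathrm{d}y_t = f\bigl(t, y_t, z_t, k_t, v_1(t), v_2(t)\bigr)\,\mathrm{d}t
               - z_t\,\mathrm{d}W_t
               - \displaystyle\sum_{i=1}^{\infty} k_t^{(i)}\,\mathrm{d}H_t^{(i)},
               & t \in [0, T],\\[1ex]
\;\; y_T = \xi .
\end{cases}
\end{equation*}
```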

In the above, the processes

The set of all admissible open-loop controls

Roughly speaking, for the zero-sum differential game, Player I seeks control
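In the zero-sum setting, a pair of admissible controls $(u_1, u_2)$ is called a saddle point when (standard definition, with $J$ the cost functional that Player I minimizes and Player II maximizes):

```latex
J(u_1, v_2) \;\le\; J(u_1, u_2) \;\le\; J(v_1, u_2)
\qquad \text{for all admissible } v_1,\, v_2,
```

in which case $J(u_1, u_2) = \inf_{v_1}\sup_{v_2} J(v_1, v_2) = \sup_{v_2}\inf_{v_1} J(v_1, v_2)$ is the value of the game.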

Game theory has been an active area of research and a useful tool in many applications, particularly in biology and economics. For partial information two-person zero-sum stochastic differential games, the objective is to find a saddle point, for which the controller has less information than the complete information filtration

In 2000, Nualart and Schoutens [

Since the theory of BSDEs driven by Teugels martingales and an independent Brownian motion has been established, it is natural to apply it to stochastic optimal control problems. The full information stochastic optimal control problem related to Teugels martingales has already been studied in many works. For example, the stochastic linear quadratic problem with Lévy processes was studied by Mitsui and Tabata [

However, to the best of our knowledge, there is little discussion of partial information stochastic differential games for systems driven by Teugels martingales and an independent Brownian motion, which motivates us to write this paper. The main purpose of this paper is to establish partial information necessary and sufficient conditions for optimality for Problem (

The rest of this paper is organized as follows. We introduce useful notations and give needed assumptions in Section

Let

In the following, we introduce some basic spaces:

The coefficients of the state equation (

Throughout this paper, we introduce the following basic assumptions on coefficients

The random mapping

Under Assumption

So, Problem (

In this section, we want to study the sufficient maximum principle for Problem (

In our setting, the Hamiltonian function
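For a backward controlled system of this kind, the Hamiltonian is usually built from the driver $f$ of the state equation and the running cost $l$; the following is a generic sketch (the symbols $f$, $l$, and the adjoint variable $p$ are standard placeholders, not necessarily the authors' notation):

```latex
H(t, y, z, k, v_1, v_2, p)
  \;=\; \bigl\langle p,\; f(t, y, z, k, v_1, v_2) \bigr\rangle
  \;+\; l(t, y, z, k, v_1, v_2).
```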

The adjoint equation, which fits into system (

Under Assumptions

We now come to a verification theorem for Problem (

Let Assumptions

Suppose that, for all

is convex. Then, for all

Suppose that, for all

is concave. Then, for all

If both cases (i) and (ii) hold (which implies, in particular, that

Proof. (i) In the following, we consider a stochastic optimal control problem over

Our optimal control problem is to minimize

Then, for this case, it is easy to check that the Hamiltonian is

Thus, from the partial information sufficient maximum principle for optimal control (see Theorem

The proof of (i) is complete.

(ii) This statement can be proved in a similar way as (i).

(iii) If both (i) and (ii) hold, then

Conversely,

Now, due to the inequality,

The proof of the theorem is complete.

If the control process

Suppose that

Suppose that, for all

is convex. Then, for all

Suppose that, for all

is concave. Then, for all

If both cases (i) and (ii) hold (which implies, in particular, that

In this section, we give a necessary maximum principle for Problem (

Under Assumptions
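Although the precise statement is omitted above, partial information necessary conditions for a saddle point are typically expressed as conditional variational inequalities on the Hamiltonian; the following is a generic sketch, where $(u_1, u_2)$ is the candidate saddle point and $\mathcal{G}_t$ is a placeholder for the sub-filtration of available information (not necessarily the authors' notation):

```latex
\mathbb{E}\bigl[\langle \nabla_{v_1} H(t),\; v_1 - u_1(t)\rangle \,\big|\, \mathcal{G}_t\bigr] \;\ge\; 0,
\qquad
\mathbb{E}\bigl[\langle \nabla_{v_2} H(t),\; v_2 - u_2(t)\rangle \,\big|\, \mathcal{G}_t\bigr] \;\le\; 0,
```

for all admissible values $v_1$, $v_2$, a.e., a.s., with the directions of the inequalities reflecting that Player I minimizes and Player II maximizes.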

Proof. Since

So, we have

By (

In this section, we will apply our stochastic maximum principles to a linear quadratic problem under partial information, i.e., consider the game problem to the following quadratic cost functional over
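In linear quadratic problems of this type, the cost functional is quadratic in the state and the controls; a generic sketch under standard notation (the weighting processes $Q$, $N_1$, $N_2$ and the initial-state weight $G$ are placeholders, not necessarily the authors' coefficients):

```latex
J(v_1, v_2)
  = \tfrac{1}{2}\,\mathbb{E}\Bigl[\langle G y_0,\, y_0\rangle
  + \int_0^T \bigl(\langle Q_t y_t,\, y_t\rangle
  + \langle N_1(t) v_1(t),\, v_1(t)\rangle
  + \langle N_2(t) v_2(t),\, v_2(t)\rangle\bigr)\,\mathrm{d}t\Bigr].
```

Note that for a backward system the terminal value $y_T$ is prescribed, so the quadratic state penalty is naturally placed at the initial value $y_0$ rather than at the terminal time.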

This problem is denoted by Problem (LQ). To study this problem, we need the assumptions on the coefficients as follows.

The matrix-valued functions

There is no further constraint imposed on the control processes; the set of all admissible control processes is

In what follows, we will utilize the stochastic maximum principle to study the dual representation of the game Problem (LQ).

We first define the Hamiltonian function

Then, the adjoint equation corresponding to an admissible quintuplet

Under Assumption

We are now ready to give the dual characterization of the optimal control.

Let Assumptions

Proof. For the necessary part, let

Noticing the definition of

So, the saddle point

For the sufficient part, let

No data were used to support this study.

The authors declare that they have no conflicts of interest.

This work was supported by the National Natural Science Foundation of China (nos. 11871121 and 11701369) and Natural Science Foundation of Zhejiang Province for Distinguished Young Scholar (no. LR15A010001).