Mathematical Problems in Engineering, Hindawi Publishing Corporation, Article ID 718714, doi:10.1155/2012/718714

Research Article

Stochastic Recursive Zero-Sum Differential Game and Mixed Zero-Sum Differential Game Problem

Lifeng Wei (1) and Zhen Wu (2)

(1) School of Mathematical Sciences, Ocean University of China, Qingdao 266003, China
(2) School of Mathematics, Shandong University, Jinan 250100, China

Received 4 October 2012; Accepted 10 December 2012; Published 25 December 2012

Academic Editor: Guangchen Wang

Copyright © 2012 Lifeng Wei and Zhen Wu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Under the notable Isaacs condition on the Hamiltonian, we obtain the existence of a saddle point for the stochastic recursive zero-sum differential game and for the mixed differential game problem, in which the agents can also decide the optimal stopping time. The main tools are backward stochastic differential equations (BSDEs) and BSDEs reflected at two barriers. As motivation and application background, when the loan interest rate is higher than the deposit rate, the American game option pricing problem can be formulated as a stochastic recursive mixed zero-sum differential game problem. An example with an explicit saddle point is also given to illustrate the theoretical results.

1. Introduction

Nonlinear backward stochastic differential equations (BSDEs for short) were introduced by Pardoux and Peng, who proved the existence and uniqueness of adapted solutions under suitable assumptions. Independently, Duffie and Epstein introduced BSDEs from an economic background: they presented a stochastic differential recursive utility, an extension of the standard additive utility in which the instantaneous utility depends not only on the instantaneous consumption rate but also on the future utility. Actually, it corresponds to the solution of a particular BSDE whose generator does not depend on the variable Z. From the mathematical point of view, the result of Pardoux and Peng is more general. Then El Karoui et al. and Cvitanic and Karatzas generalized these results, respectively, to BSDEs with reflection at one barrier and at two barriers (upper and lower).

BSDEs play an important role in the theory of stochastic differential games. Under the notable Isaacs condition, Hamadène and Lepeltier obtained the existence of a saddle point for the zero-sum stochastic differential game with payoff
(1.1) J(u,v) = E^{u,v}[∫_t^T f(s, x_s, u_s, v_s) ds + g(x_T)].
Using a maximum principle approach, Wang and Yu [6, 7] proved the existence and uniqueness of an equilibrium point. We note that the cost functional of Hamadène and Lepeltier is not recursive, and the game system in [6, 7] is a BSDE. El Karoui et al. gave the formulation of recursive utilities and their properties from the BSDE point of view. When the cost functional (payoff) of the game system is described by the solution of a BSDE, the problem becomes a recursive differential game problem. In Section 2, we prove the existence of a saddle point for the stochastic recursive zero-sum differential game problem and obtain the optimal payoff function as the solution of one specific BSDE. Here, the generator of the BSDE contains the main solution variable y_t, and we extend the result of Hamadène and Lepeltier to the recursive case, which has much more significance in economic theory.

Then, in Section 3, we study the stochastic recursive mixed zero-sum differential game problem, in which the two agents have two kinds of actions: controls and stopping strategies, used to maximize or minimize their payoffs. This kind of game problem without the recursive variable, as well as the American game option as an instance of such a mixed game, can be seen in Hamadène. Using the theory of reflected BSDEs with two barriers, we obtain the saddle point and optimal stopping strategy for the recursive mixed game problem, which is more general than the existing results.

In fact, the recursive (mixed) zero-sum game problem has a wide application background in practice. When the loan interest rate is higher than the deposit rate, the American game option pricing problem can be formulated as the stochastic recursive mixed game problem of Section 3. To show the application of this kind of problem and our motivation for studying the recursive (mixed) game problem, we analyze the American game option pricing problem as an example in Section 4. We notice that [5, 9] did not give an explicit saddle point of the game, which is very difficult in the general case. In Section 4, we also give another example of the recursive mixed zero-sum game problem, for which we obtain the explicit saddle point and optimal payoff function to illustrate the theoretical results.

2. Stochastic Recursive Zero-Sum Differential Game

In this section, we study the existence of a saddle point for the stochastic recursive zero-sum differential game problem using results on BSDEs.

Let {B_t, 0 ≤ t ≤ T} be an m-dimensional standard Brownian motion defined on a probability space (Ω, ℱ, P), and let (ℱ_t)_{t≥0} be the completed natural filtration of B. Moreover,

𝒞 is the space of continuous functions from [0,T] to R^m;

𝒫 is the σ-algebra of ℱ_t-progressively measurable sets on [0,T]×Ω;

for any stopping time ν, 𝒯_ν is the set of ℱ_t-stopping times τ such that ν ≤ τ ≤ T, P-a.s.; 𝒯_0 is simply denoted by 𝒯;

ℋ^{2,k} is the set of 𝒫-measurable, R^k-valued processes ω = (ω_t)_{t≤T} that are square integrable with respect to dt⊗dP;

𝒮^2 is the set of 𝒫-measurable continuous processes ω = (ω_t)_{t≤T} such that E[sup_{t≤T} |ω_t|^2] < ∞.

The m×m matrix σ = (σ_{ij}) satisfies the following:

for any 1 ≤ i, j ≤ m, σ_{ij} is progressively measurable;

for any (t,x) ∈ [0,T]×𝒞, the matrix σ(t,x) is invertible;

there exists a constant K such that |σ(t,x) - σ(t,x′)| ≤ K|x - x′|_t and |σ(t,x)| ≤ K(1 + |x|_t).

Then, the equation
(2.1) x_t = x_0 + ∫_0^t σ(s, x_s) dB_s, t ≤ T,
has a unique solution (x_t).
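For a concrete feel for the forward dynamics, the driftless SDE (2.1) can be simulated by a standard Euler-Maruyama scheme. The sketch below is purely illustrative: the coefficient is an assumed choice satisfying the Lipschitz and linear-growth conditions above, not one taken from the paper.

```python
import numpy as np

def euler_maruyama(sigma, x0, T=1.0, n_steps=200, n_paths=1000, seed=0):
    """Simulate the driftless SDE x_t = x_0 + int_0^t sigma(s, x_s) dB_s
    (equation (2.1)) by the Euler-Maruyama scheme; returns x_T for each path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x = x + sigma(i * dt, x) * dB                    # Euler step, no drift
    return x

# Illustrative Lipschitz, linear-growth coefficient (an assumption for the demo):
x_T = euler_maruyama(lambda t, x: 0.2 * (1.0 + np.abs(x)), x0=1.0)
```

Since the dynamics are driftless, the sample mean of x_T stays close to x_0, which gives a quick sanity check of the scheme.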

Now, we consider a compact metric space A (resp. B), and 𝒰 (resp. 𝒱) is the space of 𝒫-measurable processes u := (u_t)_{t≤T} (resp. v := (v_t)_{t≤T}) with values in A (resp. B). Let Φ: [0,T]×𝒞×A×B → R^m be such that

for any (t,x) ∈ [0,T]×𝒞, the mapping (u,v) ↦ Φ(t,x,u,v) is continuous;

for any (u,v) ∈ A×B, the process Φ(·, x(·), u, v) is 𝒫-measurable;

there exists a constant K such that |Φ(t,x,u,v)| ≤ K(1 + |x|_t) for any t, x, u, and v;

there exists a constant M such that |σ^{-1}(t,x)Φ(t,x,u,v)| ≤ M for any t, x, u, and v.

For (u,v) ∈ 𝒰×𝒱, we define the measure P^{u,v} by
(2.2) dP^{u,v}/dP = exp{∫_0^T σ^{-1}(s, x_s)Φ(s, x_s, u_s, v_s) dB_s - (1/2)∫_0^T |σ^{-1}(s, x_s)Φ(s, x_s, u_s, v_s)|^2 ds}.

Thanks to Girsanov's theorem, under the probability P^{u,v} the process
(2.3) B_t^{u,v} = B_t - ∫_0^t σ^{-1}(s, x_s)Φ(s, x_s, u_s, v_s) ds, t ≤ T,
is a Brownian motion, and (x_t)_{t≤T} is a weak solution of the stochastic differential equation
(2.4) x_t = x_0 + ∫_0^t Φ(s, x_s, u_s, v_s) ds + ∫_0^t σ(s, x_s) dB_s^{u,v}, t ≤ T.

Suppose that we have a system whose evolution is described by the process (x_t)_{t≤T}, on which two agents c_1 and c_2 intervene. A control action for c_1 (resp. c_2) is a process u = (u_t)_{t≤T} (resp. v = (v_t)_{t≤T}) belonging to 𝒰 (resp. 𝒱); thereby 𝒰 (resp. 𝒱) is called the set of admissible controls for c_1 (resp. c_2). When c_1 and c_2 act with u and v, respectively, the law of the dynamics of the system is that of x under P^{u,v}. The two agents have no direct influence on the system; they act to protect their advantages by means of u ∈ 𝒰 and v ∈ 𝒱 via the probability P^{u,v}.

In order to define the payoff, we introduce two functions C(t,x,y,u,v) and g(x) satisfying the following assumption: there exists L > 0 such that for all x, x′ ∈ ℋ^{2,m} and Y, Y′ ∈ 𝒮^2,
(2.5) |C(t, x_t, Y_t, u, v) - C(t, x′_t, Y_t, u, v)| ≤ L|x_t - x′_t|,
(Y_t - Y′_t)(C(t, x_t, Y_t, u, v) - C(t, x_t, Y′_t, u, v)) ≤ L(Y_t - Y′_t)^2,
and g is a measurable, Lipschitz continuous function of x. The payoff J(x_0, u, v) is given by J(x_0, u, v) = Y_0, where Y satisfies the following BSDE:
(2.6) -dY_s = C(s, x_s, Y_s, u_s, v_s) ds - Z_s dB_s^{u,v}, Y_T = g(x_T).
By known results for BSDEs with monotone generators, there exists a unique solution (Y, Z) for each (u,v). The agent c_1 wishes to minimize this payoff, and the agent c_2 wishes to maximize it. We investigate the existence of a saddle point for the game, more precisely a pair (u*, v*) of strategies such that J(x_0, u*, v) ≤ J(x_0, u*, v*) ≤ J(x_0, u, v*) for each (u,v) ∈ 𝒰×𝒱.
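Numerically, the payoff Y_0 of a BSDE like (2.6) can be approximated by a backward Euler scheme in which the conditional expectation at each time step is estimated by least-squares regression on the forward state. This is a generic Longstaff-Schwartz-style sketch, not a method from the paper; the linear driver in the demo is an assumption, chosen because it has the closed-form value Y_0 = e^{-T} x_0.

```python
import numpy as np

def bsde_value(x_paths, dt, C, g, deg=3):
    """Backward Euler / least-squares Monte Carlo for the BSDE
    -dY = C(t, x, Y) dt - Z dB, Y_T = g(x_T); returns an estimate of Y_0."""
    n_paths, n1 = x_paths.shape
    Y = g(x_paths[:, -1])                       # terminal condition
    for i in range(n1 - 2, -1, -1):
        x = x_paths[:, i]
        if np.ptp(x) < 1e-12:                   # x at t_0 is deterministic
            cond = np.full(n_paths, Y.mean())
        else:                                   # E[Y_{t_{i+1}} | x_{t_i}] via regression
            cond = np.polyval(np.polyfit(x, Y, deg), x)
        Y = cond + C(i * dt, x, cond) * dt      # explicit backward Euler step
    return float(Y.mean())                      # Y_0 is deterministic

# Demo: Brownian forward paths, driver C = -y, terminal g(x) = x,
# for which the exact value is Y_0 = exp(-T) * x_0 (an assumed toy case).
rng = np.random.default_rng(1)
n_paths, n_steps, T, x0 = 4000, 50, 1.0, 1.0
dt = T / n_steps
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
x_paths = x0 + np.hstack([np.zeros((n_paths, 1)), dB.cumsum(axis=1)])
y0 = bsde_value(x_paths, dt, C=lambda t, x, y: -y, g=lambda x: x)
```

The estimate y0 should be close to e^{-1} ≈ 0.368 for these parameters, up to time-discretization and regression error.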

For (t, x, Y, Z) ∈ [0,T]×𝒞×R×R^m and (u,v) ∈ A×B, we introduce the Hamiltonian
(2.7) H(t, x, Y, Z, u, v) = Zσ^{-1}(t,x)Φ(t,x,u,v) + C(t,x,Y,u,v),
and we say that the Isaacs condition holds if for all (t, x, Y, Z) ∈ [0,T]×𝒞×R×R^m,
(2.8) max_{v∈B} min_{u∈A} H(t,x,Y,Z,u,v) = min_{u∈A} max_{v∈B} H(t,x,Y,Z,u,v).

We suppose now that the Isaacs condition is satisfied. By a selection theorem (see Benes), there exist measurable functions u*: [0,T]×𝒞×R×R^m → A and v*: [0,T]×𝒞×R×R^m → B such that
(2.9) H(t,x,Y,Z,u*,v) ≤ H(t,x,Y,Z,u*,v*) ≤ H(t,x,Y,Z,u,v*) for all (u,v) ∈ A×B.
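At each fixed point (t, x, Y, Z), the pair (u*, v*) solves a finite-dimensional min-max problem over A×B, and the Isaacs condition says the lower and upper values coincide. This can be checked numerically by a grid search; the sample Hamiltonian below, H = c(u+v) with c = Z + Y > 0 on [0,1]^2, mirrors the one of Example 4.2 and is used purely for illustration.

```python
import numpy as np

def saddle_on_grid(H, U_grid, V_grid):
    """Grid search for the lower value max_v min_u H and the upper value
    min_u max_v H over compact control sets, plus a candidate pair (u*, v*).
    When the two values coincide, the Isaacs condition holds at this point."""
    Hmat = np.array([[H(u, v) for v in V_grid] for u in U_grid])
    lower = Hmat.min(axis=0).max()       # max_v min_u H
    upper = Hmat.max(axis=1).min()       # min_u max_v H
    i = Hmat.max(axis=1).argmin()        # minimizing u index
    j = Hmat[i].argmax()                 # maximizing v index against u*
    return lower, upper, U_grid[i], V_grid[j]

# Illustration with H(u, v) = (Z + Y)(u + v) and Z + Y = 1.5 > 0,
# whose saddle point is (u*, v*) = (0, 1):
grid = np.linspace(0.0, 1.0, 101)
lower, upper, u_star, v_star = saddle_on_grid(lambda u, v: 1.5 * (u + v), grid, grid)
```

For this separable Hamiltonian the two values are equal, and the selected pair matches the indicator-function saddle point used later in Section 4.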

Thanks to the assumptions on σ, Φ, and C, the function H(t, x, Y, Z, u*(t,x,Y,Z), v*(t,x,Y,Z)) is Lipschitz in Z and monotone in Y, like the function C.

Now we give the main result of this section.

Theorem 2.1.

Let (Y*, Z*) be the solution of the following BSDE:
(2.10) -dY_s* = H(s, x_s, Y_s*, Z_s*, u*(s, x_s, Y_s*, Z_s*), v*(s, x_s, Y_s*, Z_s*)) ds - Z_s* dB_s, Y_T* = g(x_T).
Then Y_0* is the optimal payoff J(x_0, u*, v*), and the pair (u*, v*) is a saddle point for this recursive game.

Proof.

We consider the following BSDE:
(2.11) Y_t* = g(x_T) + ∫_t^T H(s, x_s, Y_s*, Z_s*, u*(s, x_s, Y_s*, Z_s*), v*(s, x_s, Y_s*, Z_s*)) ds - ∫_t^T Z_s* dB_s.
By the existence and uniqueness theorem for BSDEs with monotone generators, this equation has a unique solution (Y*, Z*). Since Y_0* is deterministic,
(2.12) Y_0* = E^{u*,v*}[Y_0*] = E^{u*,v*}[g(x_T) + ∫_0^T H(s, x_s, Y_s*, Z_s*, u_s*, v_s*) ds - ∫_0^T Z_s* dB_s] = E^{u*,v*}[g(x_T) + ∫_0^T C(s, x_s, Y_s*, u_s*, v_s*) ds - ∫_0^T Z_s* dB_s^{u*,v*}],
where u_s* := u*(s, x_s, Y_s*, Z_s*) and v_s* := v*(s, x_s, Y_s*, Z_s*). Hence Y_0* = J(x_0, u*, v*).

For any u ∈ 𝒰 and v ∈ 𝒱, let
(2.13) Y_t^{u*,v} = g(x_T) + ∫_t^T C(s, x_s, Y_s^{u*,v}, u_s*, v_s) ds - ∫_t^T Z_s^{u*,v} dB_s^{u*,v} = g(x_T) + ∫_t^T H(s, x_s, Y_s^{u*,v}, Z_s^{u*,v}, u_s*, v_s) ds - ∫_t^T Z_s^{u*,v} dB_s,
Y_t^{u,v*} = g(x_T) + ∫_t^T C(s, x_s, Y_s^{u,v*}, u_s, v_s*) ds - ∫_t^T Z_s^{u,v*} dB_s^{u,v*} = g(x_T) + ∫_t^T H(s, x_s, Y_s^{u,v*}, Z_s^{u,v*}, u_s, v_s*) ds - ∫_t^T Z_s^{u,v*} dB_s.
By the comparison theorem for BSDEs and the inequality (2.9), comparing the solutions of (2.11) and (2.13) gives Y_t^{u*,v} ≤ Y_t* ≤ Y_t^{u,v*}, 0 ≤ t ≤ T, so Y_0^{u*,v} = J(x_0, u*, v) ≤ J(x_0, u*, v*) ≤ J(x_0, u, v*) = Y_0^{u,v*}, and (u*, v*) is a saddle point.

3. Stochastic Recursive Mixed Zero-Sum Differential Game

Now, we study the stochastic recursive mixed zero-sum differential game problem. First, let us briefly describe the problem.

Suppose now that we have a system whose evolution is also described by (x_t)_{0≤t≤T} and which affects the wealth of two controllers C_1 and C_2. The controllers have no direct influence on the system, and they act to protect their antagonistic advantages by means of u ∈ 𝒰 for C_1 and v ∈ 𝒱 for C_2 via the probability P^{u,v} of (2.2). The couple (u,v) ∈ 𝒰×𝒱 is called an admissible control for the game. Both controllers also have the possibility to stop controlling, at τ for C_1 and at θ for C_2, where τ and θ are elements of 𝒯, the class of all ℱ_t-stopping times; in such a case, the game stops. The controlling action is not free, and it corresponds to the actions of C_1 and C_2. The payoff is described by the following BSDE:
(3.1) Y_t^{u,τ;v,θ} = U_τ 1_{[τ<θ]} + L_θ 1_{[θ<τ<T]} + Q_τ 1_{[τ=θ<T]} + g(x_T) 1_{[τ=θ=T]} + ∫_t^{τ∧θ} C(s, x_s, Y_s^{u,τ;v,θ}, u_s, v_s) ds - ∫_t^{τ∧θ} Z_s dB_s^{u,v},
and the payoff is given by
(3.2) J(x_0; u, τ; v, θ) = Y_0^{u,τ;v,θ} = E^{u,v}[∫_0^{τ∧θ} C(s, x_s, Y_s^{u,τ;v,θ}, u_s, v_s) ds + U_τ 1_{[τ<θ]} + L_θ 1_{[θ<τ<T]} + Q_τ 1_{[τ=θ<T]} + g(x_T) 1_{[τ=θ=T]}],
where (U_t)_{t≤T}, (L_t)_{t≤T}, and (Q_t)_{t≤T} are processes of 𝒮^2 such that L_t ≤ Q_t ≤ U_t. The action of C_1 is to minimize the payoff, and the action of C_2 is to maximize it. The terms can be understood as follows:

C(s, x, Y, u, v) is the instantaneous reward for C_1 and cost for C_2;

U_τ is the cost for C_1 and the reward for C_2 if C_1 decides to stop the game first;

L_θ is the reward for C_2 and the cost for C_1 if C_2 decides to stop the game first.

The problem is to find a saddle point strategy (one could say a fair strategy) for the controllers, that is, a strategy (u*, τ*; v*, θ*) such that
(3.3) J(x_0; u*, τ*; v, θ) ≤ J(x_0; u*, τ*; v*, θ*) ≤ J(x_0; u, τ; v*, θ*),
for any (u, τ; v, θ) ∈ 𝒰×𝒯×𝒱×𝒯.

As in Section 2, we define the Hamiltonian associated with this mixed stochastic game problem by H(t,x,Y,Z,u,v), and thanks to Benes's selection theorem, there exist u*(t,x,Y,Z) and v*(t,x,Y,Z) satisfying
(3.4) H(t,x,Y,Z,u*(t,x,Y,Z),v*(t,x,Y,Z)) = max_{v∈B} min_{u∈A} [Zσ^{-1}(t,x)Φ(t,x,u,v) + C(t,x,Y,u,v)] = min_{u∈A} max_{v∈B} [Zσ^{-1}(t,x)Φ(t,x,u,v) + C(t,x,Y,u,v)].
It is easy to see that H(t,x,Y,Z,u*,v*) is Lipschitz in Z and monotone in Y.

From the result of Hamadène, the stochastic mixed zero-sum differential game problem is naturally connected with BSDEs with two reflecting barriers. Now, we give the main result of this section.

Theorem 3.1.

Let (Y*, Z*, K*^+, K*^-) be the solution of the following reflected BSDE:
(3.5) Y_t* = g(x_T) + ∫_t^T H(s, x_s, Y_s*, Z_s*, u_s*, v_s*) ds + (K_T*^+ - K_t*^+) - (K_T*^- - K_t*^-) - ∫_t^T Z_s* dB_s,
satisfying L_t ≤ Y_t* ≤ U_t for all 0 ≤ t ≤ T and ∫_0^T (Y_s* - L_s) dK_s*^+ = ∫_0^T (Y_s* - U_s) dK_s*^- = 0.

One defines τ* = inf{s ∈ [0,T] : Y_s* = U_s} and θ* = inf{s ∈ [0,T] : Y_s* = L_s}.

Then Y_0* = J(x_0; u*, τ*; v*, θ*), and (u*, τ*; v*, θ*) is the saddle point strategy.

Proof.

It is easy to see that the reflected BSDE (3.5) has a unique solution (Y*, Z*, K*^+, K*^-); then we have
(3.6) Y_0* = g(x_T) + ∫_0^T H(s, x_s, Y_s*, Z_s*, u_s*, v_s*) ds + K_T*^+ - K_T*^- - ∫_0^T Z_s* dB_s = Y_{τ*∧θ*}* + ∫_0^{τ*∧θ*} C(s, x_s, Y_s*, u_s*, v_s*) ds + K_{τ*∧θ*}*^+ - K_{τ*∧θ*}*^- - ∫_0^{τ*∧θ*} Z_s* dB_s^{u*,v*}.
Since K*^+ and K*^- increase only when Y* reaches L and U, respectively, we have K_{τ*∧θ*}*^+ = K_{τ*∧θ*}*^- = 0. As (∫_0^t Z_r* dB_r^{u*,v*})_{t≤T} is an (ℱ_t, P^{u*,v*})-martingale, we get
(3.7) Y_0* = E^{u*,v*}[Y_{τ*∧θ*}* + ∫_0^{τ*∧θ*} C(s, x_s, Y_s*, u_s*, v_s*) ds + K_{τ*∧θ*}*^+ - K_{τ*∧θ*}*^- - ∫_0^{τ*∧θ*} Z_s* dB_s^{u*,v*}] = E^{u*,v*}[Y_{τ*∧θ*}* + ∫_0^{τ*∧θ*} C(s, x_s, Y_s*, u_s*, v_s*) ds].

We know that Y_{τ*∧θ*}* = Y_{τ*}* 1_{[τ*<θ*]} + Y_{θ*}* 1_{[θ*<τ*]} + Y_{θ*}* 1_{[θ*=τ*<T]} + g(x_T) 1_{[θ*=τ*=T]}, with Y_{τ*}* 1_{[τ*<θ*]} = U_{τ*} 1_{[τ*<θ*]}, Y_{θ*}* 1_{[θ*<τ*]} = L_{θ*} 1_{[θ*<τ*]}, and Y_{θ*}* 1_{[θ*=τ*<T]} = Q_{θ*} 1_{[θ*=τ*<T]}. So
(3.8) Y_0* = E^{u*,v*}[U_{τ*} 1_{[τ*<θ*]} + L_{θ*} 1_{[θ*<τ*]} + Q_{θ*} 1_{[θ*=τ*<T]} + g(x_T) 1_{[θ*=τ*=T]} + ∫_0^{τ*∧θ*} C(s, x_s, Y_s*, u_s*, v_s*) ds] = J(x_0; u*, τ*; v*, θ*).

Next, let v be an admissible control and let θ ∈ 𝒯. We wish to show that Y_0* ≥ J(x_0; u*, τ*; v, θ). We have
(3.9) Y_0* = Y_{τ*∧θ}* + ∫_0^{τ*∧θ} H(s, x_s, Y_s*, Z_s*, u_s*, v_s*) ds + K_{τ*∧θ}*^+ - ∫_0^{τ*∧θ} Z_s* dB_s = U_{τ*} 1_{[τ*<θ]} + Y_θ* 1_{[θ<τ*]} + Q_θ 1_{[θ=τ*<T]} + g(x_T) 1_{[θ=τ*=T]} + ∫_0^{τ*∧θ} H(s, x_s, Y_s*, Z_s*, u_s*, v_s*) ds + K_{τ*∧θ}*^+ - ∫_0^{τ*∧θ} Z_s* dB_s.

The payoff J(x_0; u*, τ*; v, θ) can be described by the solution of the following BSDE:
(3.10) Y_0 = U_{τ*} 1_{[τ*<θ]} + L_θ 1_{[θ<τ*<T]} + Q_θ 1_{[τ*=θ<T]} + g(x_T) 1_{[τ*=θ=T]} + ∫_0^{τ*∧θ} C(s, x_s, Y_s, u_s*, v_s) ds - ∫_0^{τ*∧θ} Z_s dB_s^{u*,v} = U_{τ*} 1_{[τ*<θ]} + L_θ 1_{[θ<τ*<T]} + Q_θ 1_{[τ*=θ<T]} + g(x_T) 1_{[τ*=θ=T]} + ∫_0^{τ*∧θ} H(s, x_s, Y_s, Z_s, u_s*, v_s) ds - ∫_0^{τ*∧θ} Z_s dB_s;
then
(3.11) Y_0 = E^{u*,v}[U_{τ*} 1_{[τ*<θ]} + L_θ 1_{[θ<τ*<T]} + Q_θ 1_{[τ*=θ<T]} + g(x_T) 1_{[τ*=θ=T]} + ∫_0^{τ*∧θ} C(s, x_s, Y_s, u_s*, v_s) ds],
and J(x_0; u*, τ*; v, θ) = Y_0. Since H(s, x_s, Y_s, Z_s, u_s*, v_s*) ≥ H(s, x_s, Y_s, Z_s, u_s*, v_s), Y_θ* 1_{[θ<τ*]} ≥ L_θ 1_{[θ<τ*<T]}, and K_{τ*∧θ}*^+ ≥ 0, the comparison theorem for BSDEs applied to (3.9) and (3.10) gives Y_0* ≥ Y_0 = J(x_0; u*, τ*; v, θ).

In the same way, we can show that Y_0* = J(x_0; u*, τ*; v*, θ*) ≤ J(x_0; u, τ; v*, θ*) for any τ ∈ 𝒯 and any admissible control u. It follows that (u*, τ*; v*, θ*) is a saddle point for the recursive game.

Finally, let us show that the value of the game is Y_0*. We have proved that
(3.12) J(x_0; u*, τ*; v, θ) ≤ Y_0* = J(x_0; u*, τ*; v*, θ*) ≤ J(x_0; u, τ; v*, θ*),
for any (u,v) ∈ 𝒰×𝒱 and τ, θ ∈ 𝒯. Thereby,
(3.13) Y_0* ≤ inf_{u∈𝒰, τ∈𝒯} J(x_0; u, τ; v*, θ*) ≤ sup_{v∈𝒱, θ∈𝒯} inf_{u∈𝒰, τ∈𝒯} J(x_0; u, τ; v, θ).
On the other hand,
(3.14) Y_0* ≥ sup_{v∈𝒱, θ∈𝒯} J(x_0; u*, τ*; v, θ) ≥ inf_{u∈𝒰, τ∈𝒯} sup_{v∈𝒱, θ∈𝒯} J(x_0; u, τ; v, θ).
Now, due to the inequality
(3.15) inf_{u∈𝒰, τ∈𝒯} sup_{v∈𝒱, θ∈𝒯} J(x_0; u, τ; v, θ) ≥ sup_{v∈𝒱, θ∈𝒯} inf_{u∈𝒰, τ∈𝒯} J(x_0; u, τ; v, θ),
we have
(3.16) Y_0* = inf_{u∈𝒰, τ∈𝒯} sup_{v∈𝒱, θ∈𝒯} J(x_0; u, τ; v, θ) = sup_{v∈𝒱, θ∈𝒯} inf_{u∈𝒰, τ∈𝒯} J(x_0; u, τ; v, θ).
The proof is now completed.
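The doubly reflected BSDE (3.5) also suggests a simple numerical picture: in a time discretization, each backward Euler step is followed by a projection of Y onto the band [L, U], and the amounts clipped off are the local increments of K^+ and K^-; the first grid times at which Y touches U or L then approximate τ* and θ*. A minimal sketch of one such step (a standard discretization idea, not the paper's construction):

```python
import numpy as np

def reflected_step(y_cond, f_val, dt, L, U):
    """One backward step of a doubly reflected BSDE: an unconstrained
    backward Euler step, then projection of Y onto [L, U].  The clipped
    amounts are the increments dK+ (push up at L) and dK- (push down at U)."""
    y = y_cond + f_val * dt             # unconstrained backward Euler step
    dK_plus = np.maximum(L - y, 0.0)    # active only when y falls below L
    dK_minus = np.maximum(y - U, 0.0)   # active only when y exceeds U
    return np.clip(y, L, U), dK_plus, dK_minus

# Example: a step that overshoots the upper barrier U = 2 is pushed back,
# so only dK- is nonzero, mirroring the minimality condition of Theorem 3.1.
y, dKp, dKm = reflected_step(np.array([2.4]), f_val=1.0, dt=0.1, L=0.0, U=2.0)
```

Exactly one of the two increments is nonzero at any step, which is the discrete analogue of the conditions ∫(Y* - L) dK*^+ = ∫(Y* - U) dK*^- = 0.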

4. Application

In this section, we present two examples to show the applications of Section 3.

The first example is the American game option pricing problem, which we formulate as a stochastic recursive mixed game problem. This can be regarded as the application background of our stochastic game problem.

Example 4.1.

An American game option when the loan interest rate is higher than the deposit interest rate.

El Karoui et al. proved that the price of an American option corresponds to the solution of a reflected BSDE, and Hamadène proved that the price of an American game option corresponds to the solution of a reflected BSDE with two barriers. Now, we show that under some constraints in the financial market, such as when the loan interest rate is higher than the deposit interest rate, the price of an American game option corresponds to the value function of a stochastic recursive mixed zero-sum differential game problem.

We suppose that the investor is allowed to borrow money at time t at an interest rate R_t > r_t, where r_t is the bond rate. Then the wealth of the investor satisfies
(4.1) -dX_t = b(t, X_t, Z_t) dt - dC_t - Z_t dW_t, 0 ≤ t ≤ T, with b(t, X_t, Z_t) := -[r_t X_t + θ_t Z_t - (R_t - r_t)(X_t - Z_tσ_t^{-1})^-],
where Z_t := σ_t π_t and θ_t := σ_t^{-1}(b_t - r_t). Here b_t represents the instantaneous expected return rate of the stock, σ_t, which is invertible, represents the instantaneous volatility of the stock, and C_t is interpreted as a cumulative consumption process. b_t, r_t, R_t, and σ_t are all deterministic bounded functions, and σ_t^{-1} is also bounded.

An American game option is a contract between a broker c_1 and a trader c_2, who are, respectively, the seller and the buyer of the option. The trader pays an initial amount (the price of the option), which guarantees him a payment process (L_t)_{t≤T}. The trader can exercise whenever he decides before the maturity T of the option: if he decides to exercise at θ, he gets the amount L_θ. On the other hand, the broker is allowed to cancel the contract: if he chooses τ as the cancellation time, he pays the amount U_τ, with U_τ ≥ L_τ. The difference U_τ - L_τ is the premium that the broker pays for his decision to cancel the contract. If c_1 and c_2 decide together to stop the contract at time τ, then c_2 gets a reward equal to Q_τ 1_{[τ<T]} + ξ 1_{[τ=T]}. Naturally, U_τ ≥ Q_τ ≥ L_τ. U_t, L_t, and Q_t are stochastic processes related to the stock price in the market.

We consider the problem of pricing an American game contingent claim at each time t, which consists of the selection of stopping times τ ∈ 𝒯_t (or θ ∈ 𝒯_t) and a payoff U_τ (or L_θ) on exercise if τ < θ < T (or θ < τ < T), and ξ if τ = T. Set
(4.2) S̃_{τ∧θ} = ξ 1_{[τ=θ=T]} + Q_τ 1_{[τ=θ<T]} + L_θ 1_{[θ<τ<T]} + U_τ 1_{[τ<θ<T]}, 0 ≤ τ∧θ ≤ T;
then the price of the American game contingent claim (S̃_{τ∧θ}, 0 ≤ τ∧θ ≤ T) at time t is given by
(4.3) X_t = ess inf_{τ∈𝒯_t} ess sup_{θ∈𝒯_t} X_t(τ∧θ, S̃_{τ∧θ}),
where X_t(τ∧θ, S̃_{τ∧θ}), denoted by X_t^{τ∧θ}, satisfies the BSDE
(4.4) -dX_s^{τ∧θ} = b(s, X_s^{τ∧θ}, Z_s^{τ∧θ}) ds - dC_s - Z_s^{τ∧θ} dW_s, X_{τ∧θ}^{τ∧θ} = S̃_{τ∧θ}.
For each (ω, t), b(t, x, z) is a convex function of (x, z). It then follows that X_t^{τ∧θ} = ess sup_{r_t≤β_t≤R_t} ess inf_{C_t} X_t^{β,C,τ∧θ}. Here, X^{β,C,τ∧θ} satisfies
(4.5) -dX_s^{β,C,τ∧θ} = b^β(s, X_s^{β,C,τ∧θ}, Z_s^{β,C,τ∧θ}) ds - dC_s - Z_s^{β,C,τ∧θ} dW_s, X_{τ∧θ}^{β,C,τ∧θ} = S̃_{τ∧θ}, with b^β(t, X_t, Z_t) := -β_t X_t - [θ_t + (r_t - β_t)σ_t^{-1}]Z_t,
where β_t is a bounded R-valued adapted process which can be regarded as an interest rate process in finance. So
(4.6) X_t := ess inf_{τ∈𝒯_t} ess sup_{θ∈𝒯_t} X_t(τ∧θ, S̃_{τ∧θ}) = ess inf_{τ∈𝒯_t, C} ess sup_{θ∈𝒯_t, r_t≤β_t≤R_t} X_t^{β,C,τ∧θ} = ess sup_{θ∈𝒯_t, r_t≤β_t≤R_t} ess inf_{τ∈𝒯_t, C} X_t^{β,C,τ∧θ}.
Here, X_t^{β,C} := ess sup_{θ∈𝒯_t} ess inf_{τ∈𝒯_t} X_t^{β,C,τ∧θ}. Then there exist Z^{β,C} ∈ ℋ^2 and increasing adapted continuous processes K^{β,C,+} and K^{β,C,-} with K_0^{β,C,+} = K_0^{β,C,-} = 0, such that (X^{β,C}, Z^{β,C}, K^{β,C,+}, K^{β,C,-}) satisfies the following reflected BSDE:
(4.7) -dX_s^{β,C} = b^β(s, X_s^{β,C}, Z_s^{β,C}) ds - dC_s + dK_s^{β,C,+} - dK_s^{β,C,-} - Z_s^{β,C} dW_s, X_T^{β,C} = ξ, 0 ≤ s ≤ T,
with L_t ≤ X_t^{β,C} ≤ U_t, 0 ≤ t ≤ T, and ∫_0^T (X_t^{β,C} - L_t) dK_t^{β,C,+} = 0, ∫_0^T (U_t - X_t^{β,C}) dK_t^{β,C,-} = 0. The corresponding stopping times are τ = inf{s ∈ [t,T] : X_s^{β,C} = U_s} and θ = inf{s ∈ [t,T] : X_s^{β,C} = L_s}.
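The convex-duality step above, b = max over β ∈ [r, R] of b^β, can be verified numerically on sample points. The sketch below (parameter values are purely illustrative assumptions) compares the piecewise-linear wealth driver of (4.1) with the supremum of the linearized drivers of (4.5) over a β-grid:

```python
import numpy as np

def b_driver(x, z, r, R, theta, sigma):
    """Wealth driver b of (4.1): b = -[r x + theta z - (R - r)(x - z/sigma)^-],
    where (y)^- = max(-y, 0)."""
    return -(r * x + theta * z - (R - r) * max(z / sigma - x, 0.0))

def b_beta(x, z, beta, r, theta, sigma):
    """Linearized driver b^beta of (4.5): -beta x - [theta + (r - beta)/sigma] z."""
    return -beta * x - (theta + (r - beta) / sigma) * z

# Compare b with max over beta in [r, R] of b^beta on a few (x, z) points
# (illustrative parameters; both branches of the driver are exercised):
r, R, theta, sigma = 0.03, 0.08, 0.4, 0.25
betas = np.linspace(r, R, 501)
gaps = []
for x, z in [(1.0, 0.1), (0.2, 0.3), (-0.5, 0.2), (1.0, -0.4)]:
    dual = max(b_beta(x, z, bt, r, theta, sigma) for bt in betas)
    gaps.append(abs(b_driver(x, z, r, R, theta, sigma) - dual))
```

The maximum is attained at one of the endpoints β = r or β = R, so the grid (which contains both endpoints) recovers b exactly up to rounding.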

We have thus formulated the pricing problem for American game options as the stochastic recursive mixed zero-sum differential game problem studied in Section 3, so the previous example provides the practical background for our problem. This is also one of our motivations for studying the recursive mixed game problem in this paper.

In the following, we give another example, for which we obtain the explicit saddle point strategy and the optimal value of the stochastic recursive game. The purpose of this example is to illustrate the application of our theoretical results.

Example 4.2.

We let the dynamics of the system (x_t)_{t≤T} satisfy
(4.8) dx_t = x_t dB_t, t ≤ T, with initial value x_0.
The control action for c_1 (resp. c_2) is u (resp. v), which belongs to 𝒰 (resp. 𝒱); the control sets are A = [0,1] and B = [0,1], and the function Φ = x_t(u_t + v_t). Then, by Girsanov's theorem, we can define the probability P^{u,v} by
(4.9) dP^{u,v}/dP = exp{∫_0^T (u_s + v_s) dB_s - (1/2)∫_0^T (u_s + v_s)^2 ds}.
Under the probability P^{u,v}, the process B_t^{u,v} = B_t - ∫_0^t (u_s + v_s) ds is a Brownian motion.

First, we consider the following stochastic recursive zero-sum differential game:
(4.10) J(x_0, u, v) = Y_0 = E^{u,v}[x_T + ∫_0^T (min{|x_t|, 2} + Y_t(u_t + v_t)) dt],
where (Y_t)_{0≤t≤T} satisfies the BSDE
(4.11) -dY_s = (min{|x_s|, 2} + Y_s(u_s + v_s)) ds - Z_s dB_s^{u,v}, Y_T = x_T.
Therefore,
(4.12) H(t, x, Y, Z, u, v) = Z(u + v) + min{|x_t|, 2} + Y(u + v),
and obviously the Isaacs condition is satisfied with u* = 1_{[Z+Y≤0]}, v* = 1_{[Z+Y≥0]}. It follows that
(4.13) min_u max_v H(t,x,Y,Z,u,v) = max_v min_u H(t,x,Y,Z,u,v) = Z + Y + min{|x_t|, 2}, and J(x_0, u*, v*) = Y_0 with
Y_0 = x_T + ∫_0^T (Z_t + Y_t + min{|x_t|, 2}) dt - ∫_0^T Z_t dB_t = E[x_0 exp(2B_T) + ∫_0^T exp(B_t + (1/2)t) min{|x_0 exp(B_t - (1/2)t)|, 2} dt].
From this representation, we can conclude that the optimal game value Y_0* = J(x_0, u*, v*) is an increasing function of the initial value x_0 of the dynamic system. Now, we give a numerical simulation and draw Figure 1 to show this point. Let T = 2. When x_0 = 1, the optimal game value is Y_0 = 147.8, Z_0 = 147.8, and the saddle point strategy is (u_0*, v_0*) = (0, 1); when x_0 = 2, Y_0 = 295.6, Z_0 = 295.6, and (u_0*, v_0*) = (0, 1); when x_0 = 3, Y_0 = 443.4, Z_0 = 443.4, and (u_0*, v_0*) = (0, 1). Y_0 is an increasing function of x_0, which coincides with our conclusion.
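The closed-form representation of Y_0 above can be evaluated by plain Monte Carlo: discretize the Brownian path, approximate the time integral by a Riemann sum, and average over paths. The sketch below is a generic check of the monotonicity claim (sample sizes and the seed are arbitrary choices; note that the exp(2B_T) term makes the estimator high-variance, so individual figures stabilize slowly).

```python
import numpy as np

def game_value_mc(x0, T=2.0, n_steps=200, n_paths=5000, seed=0):
    """Monte Carlo estimate of the first game's optimal payoff,
    Y_0 = E[x0 exp(2 B_T) + int_0^T exp(B_t + t/2) min(|x0 exp(B_t - t/2)|, 2) dt]."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
    B = np.hstack([np.zeros((n_paths, 1)), dB.cumsum(axis=1)])  # Brownian paths
    integrand = np.exp(B + 0.5 * t) * np.minimum(np.abs(x0) * np.exp(B - 0.5 * t), 2.0)
    integral = integrand[:, :-1].sum(axis=1) * dt               # left Riemann sum
    return float(np.mean(x0 * np.exp(2.0 * B[:, -1]) + integral))

# With a common seed, the estimates are monotone in x0 path by path,
# matching the conclusion that Y_0 increases with the initial value:
values = [game_value_mc(x0) for x0 in (1.0, 2.0, 3.0)]
```

Monotonicity holds pathwise under common random numbers, so the ordering of the three estimates is reliable even though each individual value carries substantial Monte Carlo error.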

Second, we consider the following stochastic recursive mixed zero-sum differential game:
(4.14) J(x_0; u, τ; v, θ) = Y_0^{u,τ;v,θ} = E^{u,v}[∫_0^{τ∧θ} (min{|x_t|, 2} + Y_t(u_t + v_t)) dt + (x_τ + 1) 1_{[τ<θ]} + (x_θ - 1) 1_{[θ<τ<T]} + x_T 1_{[θ=τ]}],
where (Y_t)_{0≤t≤τ∧θ} satisfies the following BSDE:
(4.15) Y_t = (x_τ + 1) 1_{[τ<θ]} + (x_θ - 1) 1_{[θ<τ<T]} + x_T 1_{[θ=τ]} + ∫_t^{τ∧θ} (min{|x_s|, 2} + Y_s(u_s + v_s)) ds - ∫_t^{τ∧θ} Z_s dB_s^{u,v}.
Therefore, H(t, x, Y, Z, u, v) = Z(u + v) + min{|x_t|, 2} + Y(u + v), and obviously the Isaacs condition is satisfied with u* = 1_{[Z+Y≤0]}, v* = 1_{[Z+Y≥0]}. It follows that
(4.16) min_u max_v H(t,x,Y,Z,u,v) = max_v min_u H(t,x,Y,Z,u,v) = Z + Y + min{|x_t|, 2}, and
J(x_0; u*, τ; v*, θ) = Y_0^{u*,τ;v*,θ} = Y_{τ∧θ} + ∫_0^{τ∧θ} (Z_t + Y_t + min{|x_t|, 2}) dt - ∫_0^{τ∧θ} Z_t dB_t = Y_{τ∧θ} exp((1/2)(τ∧θ) + B_{τ∧θ}) + ∫_0^{τ∧θ} min{|x_t|, 2} exp((1/2)t + B_t) dt - ∫_0^{τ∧θ} exp((1/2)t + B_t)(Z_t + Y_t) dB_t,
where τ* = inf{t ∈ [0,T] : Y_t ≥ x_t + 1} and θ* = inf{t ∈ [0,T] : Y_t ≤ x_t - 1}, and (u*, τ*; v*, θ*) is the saddle point. So, the optimal value is
(4.17) J(x_0; u*, τ*; v*, θ*) = Y_0^{u*,τ*;v*,θ*} = E[x_0 exp(2B_{τ*∧θ*}) + 1_{[τ*<θ*]} exp(B_{τ*} + (1/2)τ*) - 1_{[θ*<τ*]} exp(B_{θ*} + (1/2)θ*) + ∫_0^{τ*∧θ*} min{|x_0 exp(B_t - (1/2)t)|, 2} exp((1/2)t + B_t) dt].

From this representation, we can also conclude that the optimal game value Y_0* = J(x_0; u*, τ*; v*, θ*) is an increasing function of the initial value x_0 of the dynamic system.

Figure 1: Y_0 stands for the optimal game value, and x_0 stands for the initial value of the dynamic system.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (no. 10921101, 61174092), the National Science Fund for Distinguished Young Scholars of China (no. 11125102), and the Special Research Foundation for Young teachers of Ocean University of China (no. 201313006).

References

1. E. Pardoux and S. G. Peng, "Adapted solution of a backward stochastic differential equation," Systems & Control Letters, vol. 14, no. 1, pp. 55-61, 1990.
2. D. Duffie and L. G. Epstein, "Stochastic differential utility," Econometrica, vol. 60, no. 2, pp. 353-394, 1992.
3. N. El Karoui, C. Kapoudjian, E. Pardoux, S. Peng, and M. C. Quenez, "Reflected solutions of backward SDE's, and related obstacle problems for PDE's," The Annals of Probability, vol. 25, no. 2, pp. 702-737, 1997.
4. J. Cvitanic and I. Karatzas, "Backward SDE's with reflection and Dynkin games," The Annals of Probability, vol. 24, no. 4, pp. 2024-2056, 1996.
5. S. Hamadène and J.-P. Lepeltier, "Zero-sum stochastic differential games and backward equations," Systems & Control Letters, vol. 24, no. 4, pp. 259-263, 1995.
6. G. Wang and Z. Yu, "A Pontryagin's maximum principle for non-zero sum differential games of BSDEs with applications," IEEE Transactions on Automatic Control, vol. 55, no. 7, pp. 1742-1747, 2010.
7. G. Wang and Z. Yu, "A partial information non-zero sum differential game of backward stochastic differential equations with applications," Automatica, vol. 48, no. 2, pp. 342-352, 2012.
8. N. El Karoui, S. Peng, and M. C. Quenez, "Backward stochastic differential equations in finance," Mathematical Finance, vol. 7, no. 1, pp. 1-71, 1997.
9. S. Hamadène, "Mixed zero-sum stochastic differential game and American game options," SIAM Journal on Control and Optimization, vol. 45, no. 2, pp. 496-518, 2006.
10. E. Pardoux, "BSDE's, weak convergence and homogenization of semilinear PDE's," in Nonlinear Analysis, Differential Equations and Control (F. H. Clarke and R. J. Stern, eds.), pp. 503-549, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
11. V. E. Benes, "Existence of optimal strategies based on specified information, for a class of stochastic decision problems," SIAM Journal on Control and Optimization, vol. 8, pp. 179-188, 1970.
12. J. P. Lepeltier, A. Matoussi, and M. Xu, "Reflected BSDEs under monotonicity and general increasing growth conditions," Advances in Applied Probability, vol. 37, pp. 134-159, 2005.
13. N. El Karoui, E. Pardoux, and M. C. Quenez, "Reflected backward SDEs and American options," in Numerical Methods in Finance (L. C. G. Rogers and D. Talay, eds.), pp. 215-231, Cambridge University Press, Cambridge, UK, 1997.
14. S. Hamadène and I. Hdhiri, "Backward stochastic differential equations with two distinct reflecting barriers and quadratic growth generator," Journal of Applied Mathematics and Stochastic Analysis, vol. 2006, Article ID 95818, 28 pages, 2006.