Mean-Variance Hedging and Forward-Backward Stochastic Differential Filtering Equations



1. Introduction and Problem Formulation
We begin with a finite time horizon [0, T] for T > 0 and a complete filtered probability space (Ω, F, (F_t), P) on which an R^m-valued standard Brownian motion W(·) is defined. Moreover, we let F_t be the natural filtration F_t = σ{W(s); 0 ≤ s ≤ t}, 0 ≤ t ≤ T, and F = F_T.
Suppose there is a financial market in which m + 1 securities can be continuously traded. One of them is a bond whose price B(t) satisfies

dB(t) = r(t)B(t)dt, B(0) > 0, (1.1)

where r(t) is the interest rate of the bond, and the others are m stocks whose prices S_i(t) are modeled by

dS_i(t) = S_i(t)[μ_i(t)dt + σ_i(t)dW_i(t)], S_i(0) > 0, i = 1, ..., m, (1.2)

where μ_i(t) and σ_i(t) are called the appreciation rate of return and the volatility coefficient of the ith stock.

Abstract and Applied Analysis
Suppose there is an agent who invests in the bond and the stocks and whose decisions cannot influence the prices in the financial market. We assume that the trading of the agent is self-financed, that is, there is no infusion or withdrawal of funds over [0, T]. We denote by π_i(t) the amount that the agent invests in the ith stock and by x^π(t) the wealth of the agent with an initial endowment x_0 > 0. Then the agent keeps x^π(t) − Σ_{i=1}^m π_i(t) in the bank. Under the foregoing notation and interpretation, the wealth x^π(·) is modeled by

dx^π(t) = [r(t)x^π(t) + Σ_{i=1}^m π_i(t)(μ_i(t) − r(t))]dt + Σ_{i=1}^m π_i(t)σ_i(t)dW_i(t),
x^π(0) = x_0.
(1.3)
Generally speaking, it is impossible for the agent to know all the events occurring in the financial market. For instance, if the agent does not have enough time or energy to observe all the prices of the m + 1 assets, then the agent will only observe part of the price data. Without loss of generality, we denote by Z_t the information available to the agent at time t, which is a subfiltration of F_t. A process adapted to Z_t is called observable. Therefore, the agent has to choose a portfolio strategy adapted to the observable filtration Z_t. A portfolio strategy π(·) = (π_1(·), ..., π_m(·)) is called admissible if each π_i(t) is a Z_t-adapted, square-integrable process with values in R. The set of admissible portfolio strategies is denoted by U_ad.
We give the following hypothesis.
(H1) The coefficients r(·), μ_i(·), σ_i(·), and σ_i(·)^{-1} are uniformly bounded and deterministic functions with values in R.
For any π(·) ∈ U_ad, (1.3) admits a unique solution under Hypothesis (H1). If we define v_i(·) = σ_i(·)π_i(·), then (1.3) can be rewritten as

dx^v(t) = [r(t)x^v(t) + Σ_{i=1}^m v_i(t)(μ_i(t) − r(t))/σ_i(t)]dt + Σ_{i=1}^m v_i(t)dW_i(t),
x^v(0) = x_0.
(1.4)
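As a quick numerical illustration of the wealth dynamics, the following sketch simulates (1.4) by the Euler-Maruyama scheme. All concrete numbers (r = 6%, μ = 10%, σ = 20%, a constant strategy) and the function name are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_wealth(x0, r, mu, sigma, pi, T=1.0, n_steps=2000, n_paths=20_000, seed=0):
    """Euler-Maruyama sketch of the self-financed wealth equation:
    dx(t) = [r x(t) + sum_i pi_i (mu_i - r)] dt + sum_i pi_i sigma_i dW_i(t),
    with constant coefficients and a constant strategy pi (illustrative only)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    mu, sigma, pi = map(np.asarray, (mu, sigma, pi))
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(pi)))
        x += (r * x + pi @ (mu - r)) * dt + dW @ (pi * sigma)
    return x

# One stock: r = 6%, mu = 10%, sigma = 20%, pi = 0.5 held in the stock.
xT = simulate_wealth(x0=1.0, r=0.06, mu=[0.10], sigma=[0.20], pi=[0.5])
```

Because the SDE is linear for a constant strategy, the sample mean of x(T) can be compared with the closed-form expectation x_0 e^{rT} + (π(μ − r)/r)(e^{rT} − 1).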
Let ξ > 0 be a given contingent claim, which is a Z_T-measurable, square-integrable random variable. Furthermore, we suppose that ξ is larger than or equal to x_0 e^{∫_0^T r(t)dt}, where the value x_0 e^{∫_0^T r(t)dt} coincides with the amount that the agent would earn if the initial wealth x_0 were invested in the bond at the interest rate r(·) for the entire investment period.
Define the cost functional

J(v(·); x_0) = E[ξ − x^v(T)]². (1.5)

Note that the above ξ can contain E[x^v(T) | Z_T] as a special case. For a priori given initial wealth x_0, (1.5) measures the risk that the contingent claim ξ cannot be reached. The agent's objective is

(PIMV)  min J(v(·); x_0) subject to v(·) ∈ U_ad and x(·; v(·)) satisfying (1.3) or (1.4).
The above problem formulates a mean-variance hedging problem with partial information. For simplicity, hereinafter we denote it by "Problem (PIMV)", short for the "partial information mean-variance hedging problem". In particular, if we let Z_t = F_t, 0 ≤ t ≤ T, then Problem (PIMV) reduces to the case with complete information; see, for example, Kohlmann and Zhou [1] for more details.
Because the contingent claim ξ in (1.5) is random and the initial endowment x_0 in (1.3) may be a decision, our Problem (PIMV) is distinguished from the existing literature; see, for example, Pham [2], Xiong and Zhou [3], and Hu and Øksendal [4]. Motivated by Problem (PIMV), we study a general linear-quadratic (LQ) optimal control problem with partial information in Section 2. By a combination of the martingale representation theorem, the technique of completing the square, and conditional expectation, we derive the corresponding optimal control, which is represented via a related optimal state equation, a Riccati differential equation, and a backward stochastic differential equation (BSDE). To demonstrate the applications of our results, we work out some partial information LQ examples and obtain some explicitly observable optimal controls by filtering for BSDEs. We also establish some backward and forward-backward stochastic differential filtering equations, which are different from the classical ones.
In Section 3, we use the result established in Section 2 to derive an optimal portfolio strategy for Problem (PIMV), which is represented as the sum of a replicating portfolio strategy for the contingent claim ξ and a Merton portfolio strategy. To illustrate Problem (PIMV) explicitly, we provide a special but nontrivial example in this section. In terms of filtering theory, we derive the corresponding risk measure. Furthermore, we use numerical simulations and three figures to illustrate the risk measure and the optimal portfolio strategy.
In Section 4, we compare our results with the existing ones. Finally, for the convenience of the reader, we state in an appendix a classical filtering equation for SDEs, which is used in Section 3 of this paper.

2. An LQ Optimal Control Problem with Partial Information
In this section, we study a partial information LQ optimal control problem, which is a generalization of Problem PIMV .
Let us now begin to formulate the LQ problem. Consider the stochastic control system

dx(t) = [A(t)x(t) + Σ_{i=1}^m B_i(t)v_i(t) + g(t)]dt + Σ_{i=1}^m v_i(t)dW_i(t),
x(0) = x_0.
(2.1)
Here x(t), x_0, v_i(t), g(t) ∈ R^n; A(t) and B_i(t) ∈ R^{n×n}; v(·) = (v_1(·), ..., v_m(·)) is a control process with values in R^{n×m}. We suppose v(t) is Z_t-adapted, where Z_t is a given subfiltration of F_t representing the information available to a policymaker at time t. We say that the control v(·) is admissible and write v(·) ∈ U_ad if v(·) ∈ L²_Z(0, T; R^{n×m}), that is, v(t) is a Z_t-adapted process with values in R^{n×m} satisfying E∫_0^T |v(t)|²dt < ∞. The following basic hypothesis will be in force throughout this section.
(H2) A(·) and B_i(·) are uniformly bounded and deterministic functions, x_0 is F_0-measurable, and E∫_0^T |g(t)|²dt < ∞.

For any v(·) ∈ U_ad, control system (2.1) admits a unique solution under Hypothesis (H2). The associated cost functional is

J(v(·)) = E|x(T) − ξ|², (2.3)

where ξ is a given F_T-measurable, square-integrable random variable. The LQ optimal control problem with partial information, hereinafter denoted by "Problem (PILQ)", is to minimize (2.3) over v(·) ∈ U_ad subject to (2.1). The solution x(·) corresponding to an optimal control u(·) and the cost functional (2.3) evaluated at u(·) are called the optimal state and the value function, respectively. Problem (PILQ) is related to the recent work by Hu and Øksendal [4], where an LQ control for jump diffusions with partial information is investigated. Due to its particular setup, our Problem (PILQ) is not covered by [4]; see Section 4 of this paper for detailed comments. Since the nonhomogeneous term in the drift of (2.1) is random and the observable filtration Z_t is very general, it is not easy to solve Problem (PILQ). To overcome the resulting difficulty, we shall adopt a method combining the martingale representation theorem, the technique of completing the square, and conditional expectation. This method is inspired by Kohlmann and Zhou [1], where an LQ control problem with complete information is studied.
To simplify the cost functional (2.3), we define

y^v(t) = x(t) − E[ξ | F_t], 0 ≤ t ≤ T. (2.5)
Since E[ξ | F_t] is an F_t-martingale, by the martingale representation theorem (see, e.g., Liptser and Shiryayev [5]), there is a unique F_t-adapted, square-integrable process h(·) = (h_1(·), ..., h_m(·)) such that

E[ξ | F_t] = Eξ + Σ_{i=1}^m ∫_0^t h_i(s)dW_i(s). (2.6)

Applying Itô's formula to (2.1) and (2.5)-(2.6), we have

dy^v(t) = [A(t)x(t) + Σ_{i=1}^m B_i(t)v_i(t) + g(t)]dt + Σ_{i=1}^m (v_i(t) − h_i(t))dW_i(t),
y^v(0) = x_0 − Eξ,
(2.7)

and the cost functional (2.3) reduces to

J(v(·)) = E|y^v(T)|². (2.9)
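The martingale representation (2.6) can be checked numerically in a concrete case. Take the hypothetical claim ξ = W(T)² (chosen purely for illustration); Itô's formula gives E[ξ | F_t] = W(t)² + (T − t) = Eξ + ∫_0^t 2W(s)dW(s), so the representation integrand is h(s) = 2W(s). The sketch below verifies this identity pathwise on a discretized Brownian path.

```python
import numpy as np

# For xi = W(T)^2 we have E xi = T and, by Ito's formula,
#   E[xi | F_t] = W(t)^2 + (T - t) = T + int_0^t 2 W(s) dW(s),
# i.e. the martingale-representation integrand is h(s) = 2 W(s).
rng = np.random.default_rng(1)
T, n = 1.0, 200_000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

martingale = W**2 + (T - t)                                      # E[xi | F_t]
ito_sum = np.concatenate(([0.0], np.cumsum(2.0 * W[:-1] * dW)))  # int_0^t 2W dW
representation = T + ito_sum                                     # E xi + int h dW

err = float(np.max(np.abs(martingale - representation)))
# err is the discretization error of the Ito sum and shrinks as n grows
```

The two paths agree up to the discretization error of the Itô sum, which reflects the identity W(t)² − t = ∫_0^t 2W dW.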
Then Problem (PILQ) is equivalent to minimizing (2.9) subject to (2.6)-(2.7) and v(·) ∈ U_ad. To solve the resulting problem, we first introduce a Riccati differential equation on R^{n×n}:

2.10
Note that (2.7) contains a nonhomogeneous term h(·). To handle it, we also introduce a BSDE on R^n:

2.11
Assume that the following hypothesis holds.

(H3) For any 0 ≤ t ≤ T,

2.12
Under Hypotheses (H2) and (H3), according to [1, Theorem 4.2], it is easy to see that (2.10) admits a unique solution, and then (2.11) admits a unique solution as well. For any admissible pair (v(·), y^v(·)), applying Itô's formula to (1/2)⟨P(t)y^v(t), y^v(t)⟩, integrating from 0 to T, taking expectations, and completing the square, we obtain (2.13), where J_F(y_0) is defined in (2.14).

2.15
Since J_F(y_0) is independent of v_i(·), the integrand in (2.13) is quadratic with respect to v_i(·), and P(·) > 0, it follows from the properties of conditional expectation that the minimum of (2.13) over all Z_t-adapted v_i(t) is attained at

2.19
We are now in a position to derive an optimal feedback control in terms of the original optimal state variable x(·). Substituting (2.5) into (2.17), we get the optimal state equation below, with x(0) = x_0:

2.21
Furthermore, for any 0 ≤ t ≤ T, we define

2.22
Applying Itô's formula to (2.6) and (2.10)-(2.11), we can check that (p(·), q_1(·), ..., q_m(·)) is the unique solution of the BSDE

2.23
Finally, substituting (2.17) into (2.13), we get the value function (2.24), where J_F(y_0) and L_i(·) are defined by (2.14) and (2.18), respectively.
Theorem 2.1. Let Hypotheses (H2) and (H3) hold. Then Problem (PILQ) admits an optimal control in feedback form, where x(·) and (p(·), q_1(·), ..., q_m(·)) are the solutions of (2.21) and (2.23), respectively; the corresponding value function is given by (2.24).
Remark 2.2. Note that the dynamics of BSDE (2.23) is similar to control system (2.1) except for the state constraint, which shows a close relationship between stochastic control and BSDEs. To the best of our knowledge, this interesting phenomenon was first found in [1]. Also, [1] shows that the solution (p(·), q_1(·), ..., q_m(·)) of (2.23) can be regarded as the optimal state-control pair (x(·), u(·)) of an LQ control problem with complete information, in which the initial state x_0 is an additional decision. However, this conclusion is not true in our partial information case. For clarity, we shall illustrate this by the following example.
Example 2.3. Without loss of generality, we let Hypothesis (H2) hold and n = 1 in Problem (PILQ).
Since P(·) defined by (2.10) is a scalar, it is natural that (2.10) admits a unique solution. Consequently, (2.11) also admits a unique solution. Note that Hypothesis (H3) is not used in this setup. Define

2.27
Hereinafter, we set

Ŷ(t) = E[Y(t) | Z_t], (2.28)

where the signal Y(t) is an F_t-adapted, square-integrable stochastic process, while the observation is a component of the m-dimensional Brownian motion W(·). Without loss of generality, we let the observable filtration Z_t be

Z_t = σ{(W_1(s), ..., W_l(s)); 0 ≤ s ≤ t}. (2.29)
In this setting, we call (2.28) the optimal filtering of the signal Y(t) with respect to the observable filtration Z_t in the sense of square error; see, e.g., [5, 6] for more details. Note that (W_1(·), ..., W_l(·)) is independent of (W_{l+1}(·), ..., W_m(·)), and that x_0 and p_0 are deterministic. Taking conditional expectations on both sides of (2.27), we get the optimal filtering equation of Δ(t) with respect to Z_t:

2.30
Note that Δ̂(·) satisfies a homogeneous linear SDE and hence must be identically zero if x_0 = p_0. Therefore, if the decision x_0 takes the value p_0 in Example 2.3, then the next corollary follows from Theorem 2.1.

2.31
In particular, if Z_t = F_t, 0 ≤ t ≤ T, then it reduces to the case of [1]. From Theorem 2.1 and Corollary 2.4, we notice that the optimal control strongly depends on the conditional expectation of (p(t), q_1(t), ..., q_m(t)) with respect to Z_t, 0 ≤ t ≤ T, where (p(·), q_1(·), ..., q_m(·)) is the solution of BSDE (2.23). Since Z_t is very general, the conditional expectation is, in general, infinite dimensional, and hence it is very hard to find an explicitly observable optimal control by the usual methods. However, it is well known that such an optimal control plays an important role in theory and practice. We therefore seek a new technique to study the problem further in the rest of this section. Recently, Wang and Wu [7] investigated the filtering of BSDEs and used a backward separation technique to explicitly solve an LQ optimal control problem with partial information; please refer to Wang and Wu [8] and Huang et al. [9] for more details about BSDEs with partial information. Inspired by [7, 9], we shall apply the filtering of BSDEs to study the conditional expectation mentioned above. Note that there is no general filtering result for BSDEs in the published literature. In the rest of this section, we present two examples of such filtering problems. Combining Theorem 2.1 with a property of conditional expectation, we get some explicitly observable optimal controls. As a byproduct, we establish two new kinds of filtering equations, which we call backward and forward-backward stochastic differential filtering equations. This result enriches and develops the classical filtering-control theory (see, e.g., Liptser and Shiryayev [5], Bensoussan [10], and Xiong [6]).
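As a toy illustration of filtering with respect to a subfiltration of the Brownian filtration (in the spirit of (2.28)-(2.29) with m = 2, l = 1), take the hypothetical signal Y(t) = W_1(t) + W_2(t), chosen only for illustration. Since W_2 is independent of Z_t and centered, E[Y(t) | Z_t] = W_1(t); the sketch below confirms this by Monte Carlo, averaging over realizations of W_2 along one fixed observed path of W_1.

```python
import numpy as np

# Signal Y(t) = W1(t) + W2(t); observable filtration generated by W1 alone.
# Then E[Y(t) | Z_t] = W1(t), since W2 is independent of Z_t with mean zero.
rng = np.random.default_rng(3)
T, n_steps, n_paths = 1.0, 200, 20_000
dt = T / n_steps
W1 = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))))
W2 = np.concatenate(
    (np.zeros((n_paths, 1)),
     np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)),
    axis=1)
Y = W1[None, :] + W2          # realizations of Y given the observed W1 path
Y_hat_mc = Y.mean(axis=0)     # Monte Carlo estimate of E[Y(t) | Z_t]
err = float(np.max(np.abs(Y_hat_mc - W1)))   # vanishes as n_paths grows
```

Averaging over the unobserved component is exactly the conditional expectation here, because W_2 enters additively and independently of the observation.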
Example 2.5. Let Hypothesis (H2) hold, n = 1, and m = 2 in Problem (PILQ). Suppose the observable filtration is

2.32
From Theorem 2.1, the optimal control is as follows, where the optimal state x(·) satisfies x(0) = x_0:

2.35
We proceed to calculate the optimal filtering of (p(·), q_1(·), q_2(·)). Recalling BSDE (2.34) and noting that the observable filtration is Z_t, it follows that

2.36
As (2.36) is a non-Markovian BSDE, we call it a backward stochastic differential filtering equation, which is different from the classical filtering equation for SDEs. Since q̂_2(·) is absent from the diffusion term in (2.36), we are not certain whether (2.36) admits a unique solution (p̂(·), q̂_1(·), q̂_2(·)). However, this is true in some special cases; see the following example, in which we establish a forward-backward stochastic differential filtering equation and obtain a unique solution of this equation.
Example 2.6. Let all the assumptions of Example 2.5 hold and g(·) ≡ 0. For simplicity, we set the random variable ξ = X(T), where X(·) is the solution of

2.37
Assume that K(·), M_1(·), and M_2(·) are bounded and deterministic functions with values in R and that X(0) is a constant.
Similar to Example 2.5, the optimal control is

2.41
It is remarkable that (2.35) together with (2.40)-(2.41) forms a forward-backward stochastic differential filtering equation. To the best of our knowledge, this is also a new kind of filtering equation. We now derive a more explicitly observable representation of u_i(·). Due to the terminal condition of (2.40), we get, by Itô's formula and the method of undetermined coefficients,

2.42
Here X(·) is the solution of (2.41), and

2.43
Thus, the optimal control is given by (2.44), where X(·) satisfies (2.41) and x(·) is the solution of the following equation with x(0) = x_0:

2.45
Since X(·) is the solution of (2.41), it is easy to see that the above equation admits a unique solution x(·). Now u_i(t), 0 ≤ t ≤ T, defined by (2.44), is an explicitly observable optimal control.
Remark 2.7. BSDE theory plays an important role in many different fields, and one often has to treat backward stochastic systems with partial information. For instance, to get an explicitly observable optimal control in Theorem 2.1, it is necessary to estimate (p(t), q_1(t), ..., q_m(t)) given the observable filtration Z_t. However, effective methods for such estimates are lacking. In this situation, although the filtering of BSDEs is very restricted, it can be regarded as an alternative technique, just as we saw in Examples 2.5 and 2.6. The study of Problem (PILQ) also motivates us to establish a general filtering theory of BSDEs in future work; to the best of our knowledge, this is a new and unexplored research field.

3. Solution to Problem (PIMV)
We now regard Problem (PIMV) as a special case of Problem (PILQ); consequently, we can apply the results of Section 2 to solve it. From Theorem 2.1, we get the optimal portfolio strategy

3.4
So we have the following theorem.
Theorem 3.1. If Hypothesis (H1) holds, then the optimal portfolio strategy of Problem (PIMV) is given by (3.1).
We now give a straightforward economic interpretation of (3.1). Introduce the adjoint equation (3.3). Note that Z_t is a subfiltration of F_t, 0 ≤ t ≤ T; then we obtain the partial information option price for the contingent claim ξ. According to Corollary 2.4, π*_{i2}(·) is the partial information replicating portfolio strategy for the contingent claim ξ when the initial endowment x_0 equals the initial option price p_0. Then π*_{i1}(·) defined by (3.2) is exactly the partial information Merton portfolio strategy for the terminal utility function U(x) = x² (see, e.g., Merton [11]). That is, the optimal portfolio strategy (3.1) is the sum of the partial information replicating portfolio strategy for the contingent claim ξ and the partial information Merton portfolio strategy. Consequently, if the initial endowment x_0 differs from the initial option price p_0 necessary to hedge the contingent claim ξ, then x_0 − p_0 should be invested according to Merton's portfolio strategy.
In particular, suppose the contingent claim ξ is a constant. In this case, it is easy to see that the solution (p(·), q_1(·), ..., q_m(·)) of (3.3) is

p(t) = e^{−∫_t^T r(s)ds} ξ,  q_i(t) = 0,  0 ≤ t ≤ T. (3.8)
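A short verification of (3.8), assuming that the adjoint equation (3.3) is a linear BSDE driven by W(·) whose generator contains the interest rate r(·) (this form is an assumption, since the display of (3.3) is not reproduced here): guessing q_i ≡ 0 reduces the BSDE to a deterministic ODE,

```latex
q_i(t) \equiv 0 \;\Longrightarrow\; dp(t) = r(t)\,p(t)\,dt,\quad p(T) = \xi
\;\Longrightarrow\; p(t) = \xi\, e^{-\int_t^T r(s)\,ds},
```

and uniqueness for linear BSDEs with bounded coefficients then shows that this is the only solution, which is exactly (3.8).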
So we have the following corollary.
Corollary 3.2. Let Hypothesis (H1) hold and let ξ be a constant. Then the optimal portfolio strategy of Problem (PIMV) is given by (3.9).

Remark 3.3. The solution (p(·), q_1(·), ..., q_m(·)) defined by (3.8) has a straightforward financial interpretation: to achieve a deterministic wealth level ξ at the terminal time T, the agent should invest only in the risk-free asset (a bond) and not in any risky assets (stocks). Therefore, the optimal portfolio strategy obtained in Corollary 3.2 is just the partial information Merton portfolio strategy.
The remainder of this section focuses on a special mean-variance hedging problem with partial information. By virtue of filtering theory, we get an explicitly observable optimal portfolio strategy as well as a risk measure. We also plot three figures and give numerical simulations to illustrate the theoretical results.
Example 3.4. Let m = 2 and let all the conditions in Corollary 3.2 hold. Suppose the observable filtration of the agent is given by (3.10), which implies that the agent can observe all the past prices of S_1(·).

3.17
Solving the above equation and applying a property of conditional expectation together with integration by parts, we get

3.20
So we have the following proposition.

Proposition 3.5. The optimal portfolio strategy and the risk measure are given by (3.13) and (3.19), respectively.
In Figure 2, we let the investment goal of the agent be ξ = $1.2 million. The straight line shows that RM is a decreasing function of the initial endowment x_0. In particular, when x_0 = $1.13 million, we have RM = $0. This means that to achieve the investment goal of $1.2 million at the end of one year, the agent only needs to invest $1.13 million in the bond at the interest rate 6%; moreover, there is no risk in this investment strategy.
In Figure 3, we let x_0 = $1 million. The straight line implies that RM is an increasing function of the investment goal ξ. Consider now an agent who has an initial endowment x_0 = $1 million and wishes to obtain an expected return rate of 20% in one year. Taking x_0 = $1 million and ξ = $1.2 million, we get RM = $0.1112 million, meaning that the risk of the investment goal is as high as 11.12%.
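The break-even endowment reported for Figure 2 can be recovered by discounting the goal at the riskless rate. A minimal check, assuming continuous compounding at r = 6% over T = 1 year as in the example:

```python
import numpy as np

# With RM = 0, the goal must be reachable by the bond alone:
# x0 * exp(r*T) = xi, hence x0 = xi * exp(-r*T).
xi, r, T = 1.2, 0.06, 1.0
x0_breakeven = xi * np.exp(-r * T)   # in millions of dollars
```

This gives x0 ≈ 1.1301, consistent with the $1.13 million break-even endowment read off Figure 2.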

3.22
In particular, at the initial time t = 0, π*_1(0) = $0.5422 million and π*_2(0) = $0.2711 million, which implies that the agent needs to invest $0.5422 million and $0.2711 million in the stocks S_1(·) and S_2(·), respectively, and invest in the bond an amount of 1 − 0.5422 − 0.2711 = $0.1867 million.
Appendix: A Classical Filtering Equation for SDEs

Consider the following 1-dimensional state and observation equations:

Figure 3: The relationship between RM and ξ.
(One of the m + 1 securities is a bond whose price B(·) satisfies (1.1).) The agent can observe S_1(·) but, due to some limiting factors (e.g., bad behavior of the stock S_2(·), or the time and energy of the investor), cannot or does not want to observe S_2(·). Since μ_1(·) and σ_1(·) are deterministic functions (see Hypothesis (H1)), the above filtration Z_t is equivalently rewritten as Z_t = σ{S_1(s): 0 ≤ s ≤ t} = σ{W_1(s): 0 ≤ s ≤ t}.
Here W(·) is a 1-dimensional standard Brownian motion defined on the complete filtered probability space (Ω, F, (F_t), P) equipped with the natural filtration F_t = σ{W(s); 0 ≤ s ≤ t}, F = F_T, 0 ≤ t ≤ T; x(t) is an F_t-martingale; h(t) is an F_t-adapted process with ∫_0^T |h(s)|ds < ∞; the functional b(t, y), y ∈ C_T, 0 ≤ t ≤ T, is B_t-measurable. We need the following hypothesis.

(H) For any y, ỹ ∈ C_T, 0 ≤ t ≤ T, the functional b(t, ·) satisfies

|b(t, y) − b(t, ỹ)|² ≤ L_1 ∫_0^t |y(s) − ỹ(s)|² dK(s) + L_2 |y(t) − ỹ(t)|²,
b(t, y)² ≤ L_1 ∫_0^t (1 + |y(s)|²) dK(s) + L_2 (1 + |y(t)|²),
(A.2)

where L_1 and L_2 are two nonnegative constants and 0 ≤ K(·) ≤ 1 is a nondecreasing right-continuous function. Moreover,

sup_{0≤t≤T} E θ(t)² < ∞, ∫_0^T E h(t)² dt < ∞, ∫_0^T E a(t, ω)² dt < ∞, b(t, y)² ≥ C > 0.

The following result is due to [5, Theorem 8.1].

Lemma A.1. Define θ̂(t) = E[θ(t) | F_t^ξ], where θ(t) can take θ(t), h(t), D(t), a(t, ω), and θ(t)a(t, ω), and F_t^ξ = σ{ξ(s); 0 ≤ s ≤ t}, 0 ≤ t ≤ T. If Hypothesis (H) holds, then the optimal nonlinear filtering equation is
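In the linear-Gaussian special case, the optimal nonlinear filtering equation of Lemma A.1 reduces to the Kalman-Bucy filter. The sketch below, with illustrative coefficients a, b, c that are not taken from the paper, simulates a signal-observation pair together with the filter; the conditional variance γ(t) solves the associated Riccati ODE.

```python
import numpy as np

# Linear-Gaussian special case of the filtering lemma: signal/observation pair
#   d theta = a*theta dt + b dV,      d xi = c*theta dt + dW,
# with independent Brownian motions V, W. The Kalman-Bucy filter is
#   d theta_hat = a*theta_hat dt + gamma*c*(d xi - c*theta_hat dt),
#   d gamma/dt  = 2*a*gamma + b**2 - (c*gamma)**2   (Riccati ODE).
rng = np.random.default_rng(2)
a, b, c = -1.0, 0.5, 1.0
T, n = 10.0, 20_000
dt = T / n
theta, theta_hat, gamma = 1.0, 0.0, 1.0
sq_errs = []
for _ in range(n):
    dV = rng.normal(0.0, np.sqrt(dt))
    dWn = rng.normal(0.0, np.sqrt(dt))
    d_xi = c * theta * dt + dWn                    # observation increment
    theta_hat += a * theta_hat * dt + gamma * c * (d_xi - c * theta_hat * dt)
    gamma += (2.0 * a * gamma + b**2 - (c * gamma) ** 2) * dt
    theta += a * theta * dt + b * dV               # propagate the signal
    sq_errs.append((theta - theta_hat) ** 2)
mse = float(np.mean(sq_errs[n // 2:]))    # time-averaged error after burn-in
gamma_star = (-2.0 + np.sqrt(5.0)) / 2.0  # positive root of 2a*g + b^2 - c^2*g^2 = 0
```

After the burn-in, the time-averaged squared estimation error is close to the steady-state conditional variance gamma_star, the stable fixed point of the Riccati ODE.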