On a Class of Forward-Backward Stochastic Differential Systems in Infinite Dimensions

We prove that a class of fully coupled forward-backward systems in infinite dimensions admits a unique local solution. After studying the regularity of the solution, we prove that for a particular class of systems arising in the theory of stochastic optimal control, the solution exists on an arbitrarily large time interval. Finally, we investigate the connection between the solution of the systems and a stochastic optimal control problem.


Introduction
The object of our study is the following system: for all τ ∈ [t,T] ⊂ [0,T], (1.1). In the above equation, X takes values in a real separable Hilbert space H, Y takes values in a real separable Hilbert space K, and T is a nonnegative real number. The operators A and B are generators of strongly continuous semigroups {e^{tA}} and {e^{tB}} in H and K, respectively. The functions F, G, Φ, and Ψ take values in H, L(Ξ,H), K, and K, respectively, and satisfy appropriate Lipschitz conditions.
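In the notation just introduced, the fully coupled system (1.1) can be sketched as follows; the precise arguments of the coefficients and the sign conventions are our assumptions, consistent with the spaces listed above:

```latex
\begin{cases}
dX_\tau = AX_\tau\,d\tau + F(\tau,X_\tau,Y_\tau,Z_\tau)\,d\tau
        + G(\tau,X_\tau,Y_\tau,Z_\tau)\,dW_\tau, & \tau\in[t,T],\\
dY_\tau = BY_\tau\,d\tau + \Psi(\tau,X_\tau,Y_\tau,Z_\tau)\,d\tau
        + Z_\tau\,dW_\tau, & \tau\in[t,T],\\
X_t = x, \qquad Y_T = \Phi(X_T). &
\end{cases}
```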
It is well known that, already in the finite-dimensional case, the solvability of fully coupled systems, that is, systems of the form (1.1), is particularly delicate. Indeed, see [1] and again [2], there are examples, when G is not invertible, in which there is no hope of obtaining a solution on an arbitrarily large time interval. Nevertheless, this class of systems has been widely studied (in the finite-dimensional case) in [3-5]; see also [2] for a systematic review of the subject and of its applications to mathematical finance and stochastic control.
Coming back to our infinite-dimensional framework, we first recall that forward equations have been widely studied; see the books [6,7] and the bibliography therein.
More recently, backward stochastic equations in infinite dimensions of the form (1.2), where B is a linear unbounded operator, have also been studied by several authors; see [8-12]. In particular, we will exploit some techniques described in [10], where existence and uniqueness results for (1.2) are obtained. Equations of this kind arise in the theory of nonlinear filtering and stochastic control (see [13]) and in mathematical finance (see [14,15]).
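A backward equation of the type discussed here can be sketched as follows; this is our reconstruction of the generic form (1.2), with terminal datum η:

```latex
\begin{cases}
dY_\tau = BY_\tau\,d\tau + \Psi(\tau,Y_\tau,Z_\tau)\,d\tau + Z_\tau\,dW_\tau, & \tau\in[0,T],\\
Y_T = \eta. &
\end{cases}
```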
The first part of the present paper is devoted to proving existence and uniqueness, on a small enough time interval [T − δ,T], of a mild solution {(X_τ,Y_τ,Z_τ) : τ ∈ [t,T]} of the fully coupled system (1.1) in every [t,T] ⊂ [T − δ,T]. In proving such a result, we have separated the case in which B is dissipative from the case in which B is an arbitrary generator of a C_0-semigroup, since different regularity results for the solution can be proved. When B is dissipative, the regular dependence on the initial state is also studied. The main tool is a fixed-point technique performed in a suitable space of stochastic processes. To the author's knowledge, this is a first attempt to solve a fully coupled system in the infinite-dimensional framework, allowing the unbounded operators to appear in both equations of the system. In Section 4.4, we provide an example in which our theory applies.
Although our main motivation is the novelty of the mathematical problem, we also give an example of an application to optimal control theory in which the forward equation takes values in a Hilbert space H while the backward equation is one-dimensional. Note that even this simpler case was not covered by the existing literature.
We consider an infinite-dimensional stochastic control state equation of the form (1.3), where r : [0,T] × H × U → Ξ, with U a real separable Hilbert space. The cost functional to be minimized is (1.4), where l : [0,T] × H × U → R; the minimization is over all admissible controls, that is, processes {u_τ, τ ∈ [0,T]} taking values in U. In [16], the authors solve the optimal control problem in its weak formulation, that is, when the probability space and the noise process are allowed to change.
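A sketch of the controlled state equation and of the cost functional, consistent with the feedback form of the forward equation in (1.6) below; the exact placement of r inside the drift is our assumption:

```latex
\begin{cases}
dX^u_\tau = AX^u_\tau\,d\tau + F(\tau,X^u_\tau)\,d\tau
          + G(\tau,X^u_\tau)\bigl[r(\tau,X^u_\tau,u_\tau)\,d\tau + dW_\tau\bigr],\\
X^u_t = x,
\end{cases}
\qquad
J(t,x,u) = \mathbb{E}\Bigl[\int_t^T l(\tau,X^u_\tau,u_\tau)\,d\tau + \Phi(X^u_T)\Bigr].
```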
Giuseppina Guatteri

In this paper, under suitable assumptions on the Hamiltonian function Ψ, we will solve the same optimal control problem in its strong formulation, that is, when the probability space and the noise are prescribed.
We stress the fact that in the existing literature, optimal control in the strong formulation was obtained only for constant and nondegenerate G (sometimes with a more general Hamiltonian); see [6,7,16-24]. Since we assume the coefficients to be merely Lipschitz continuous, following [21], we replace the approach based on the Hamilton-Jacobi-Bellman equation with the analysis of a fully coupled forward-backward system.
We define the Hamiltonian function (1.5) and we assume that the infimum in (1.5) is attained at γ(t,x,z); we then introduce the coupled system (1.6), whose forward component reads
dX_τ = AX_τ dτ + F(τ,X_τ) dτ + G(τ,X_τ) r(τ,X_τ,γ(τ,X_τ,Z_τ)) dτ + G(τ,X_τ) dW_τ, τ ∈ [t,T].
We show that this system has a unique, global, predictable solution {(X_τ,Y_τ,Z_τ) : τ ∈ [t,T]} that takes values in H, R, and Ξ*, respectively. Then we can conclude that u_τ = γ(τ,X_τ,Z_τ) is an optimal control (in the strong sense), that the corresponding trajectory X^u coincides with the solution X of (1.6), and that the optimal cost V(t,x) is given by Y_t. We stress again that in system (1.6) the forward equation is infinite-dimensional, so it cannot be treated with the finite-dimensional theory developed in [20]; and since the system is strongly coupled, this case cannot be covered by the theory studied in [16].
The paper is organized as follows. In Section 2, we set notation and assumptions. In Section 3, we provide some preliminary results. In Section 4, we prove the local existence theorems and some regularity properties of the solution, and we provide an example. Finally, in Section 5, we apply the previous results to the above-mentioned control problem.
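The backward component of system (1.6), with Y real-valued, plausibly takes the standard form used in the strong-formulation literature; this is a reconstruction, not the paper's verbatim display:

```latex
dY_\tau = -\,\Psi\bigl(\tau, X_\tau, Z_\tau\bigr)\,d\tau + Z_\tau\,dW_\tau,
\qquad \tau\in[t,T],\qquad Y_T = \Phi\bigl(X_T\bigr).
```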

Notation and assumptions
We are given three real separable Hilbert spaces H, K, and Ξ, endowed with inner products denoted by (•,•)_H, (•,•)_K, and (•,•)_Ξ, respectively. As usual, L(Ξ,H), L(H) = L(H,H), and L(K) = L(K,K) are the Banach spaces of linear bounded operators from Ξ to H, from H to H, and from K to K, endowed with the usual operator norms.
L_2(Ξ,H) denotes the Hilbert space of Hilbert-Schmidt operators from Ξ to H, endowed with the Hilbert-Schmidt norm |T|²_{L_2(Ξ,H)} = Σ_k |T e_k|²_H, where {e_k} is any orthonormal basis of Ξ. The space L_2(Ξ,K) is defined in the same way.
The cylindrical Wiener process. We fix a probability basis (Ω,Ᏺ,P). A cylindrical Wiener process with values in Ξ is a family W(t), t ≥ 0, of linear mappings Ξ → L²(Ω) such that (i) for every h ∈ Ξ, {W(t)h, t ≥ 0} is a real (continuous) Wiener process; (ii) for every h,k ∈ Ξ and t ≥ 0, E(W(t)h · W(t)k) = t(h,k)_Ξ. We denote by Ᏺ_t its natural filtration, augmented with the set ᏺ of P-null sets of Ᏺ. As is well known, the filtration Ᏺ_t satisfies the usual conditions. By E^{Ᏺ_t}, or by E(· | Ᏺ_t), we denote the conditional expectation with respect to Ᏺ_t. Finally, by ᏼ we denote the predictable σ-field on Ω × [0,T].
Some classes of stochastic processes. Let S be any separable Hilbert space with scalar product (•,•)_S, and let Ꮾ(S) be its Borel σ-field. The following classes of processes will be used in this work.
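The defining properties (i)-(ii) can be illustrated numerically. The following is a purely illustrative finite-dimensional truncation of a cylindrical Wiener process on Ξ = R^n (the truncation level n and the time grid are arbitrary choices of ours, not part of the paper): W(t)h = Σ_k β_k(t)(e_k,h) with independent real Brownian motions β_k, so that each W(·)h is a real Wiener process and W(t) acts linearly on h, pathwise and exactly.

```python
import numpy as np

# Finite truncation of a cylindrical Wiener process on Xi = R^n:
# W(t)h = sum_k beta_k(t) * (e_k, h), with independent Brownian motions beta_k.
rng = np.random.default_rng(0)
n, steps, T = 8, 1000, 1.0
dt = T / steps
# beta[k, j]: value of the k-th Brownian motion at grid time (j + 1) * dt
beta = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n, steps)), axis=1)

def W(j, h):
    """Action of the truncated cylindrical Wiener process at grid index j on h."""
    return float(beta[:, j] @ h)

h = rng.normal(size=n)
k = rng.normal(size=n)

# Linearity of W(t) in its argument holds sample path by sample path:
lhs = W(500, 2.0 * h + 3.0 * k)
rhs = 2.0 * W(500, h) + 3.0 * W(500, k)
print(abs(lhs - rhs))  # numerically zero
```

Property (ii), the covariance identity E(W(t)h · W(t)k) = t(h,k)_Ξ, holds for this truncation as well, in the limit over many sample paths.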
(1) L^p_ᏼ(Ω × (t,T);S), t ∈ [0,T], denotes the subset of L^p(Ω × (t,T);S) given by all equivalence classes admitting a predictable version, endowed with the natural L^p norm. Elements of this space are defined up to indistinguishability.
Statement of the problem and general assumptions on the coefficients.We set the main assumptions on the coefficients of system (1.1).

Preliminary results
In this section, we provide some auxiliary results on backward stochastic equations that we will need in the proof of Theorem 2.6 and in the proof of the regular dependence with respect to the initial datum. Given η ∈ L²(Ω,Ᏺ_T,P;K) and f ∈ L²_ᏼ(Ω × (0,T);K), let us consider equation (3.1), for all t ∈ [0,T]. We introduce a weaker notion of solution, in analogy with the case of forward equations; see [6, Chapter 6].
Since we are dealing only with the backward equation, we can ask the mild solution to be more regular; see also [25, Remark 4.7]. Moreover, the following estimates hold, where C is a constant that depends on p, M_B, and T.
Proof. We separate the proof into two parts and consider only the case t = 0, the procedure being identical for all t ∈ [0,T].
Step 1. Estimate for Y. Here C_p depends only on p. Notice that in this step we do not need Hypothesis 2.5, which instead plays a fundamental role in estimating the process Z.
Step 2. Estimate for Z. We introduce the bounded operators J_n = n(nI − B)^{-1} for every n ∈ N*. It is well known, see for instance [6, Appendix A], that (i) |J_n|_{L(K)} ≤ 1 for every n, since B is dissipative; (ii) J_n x → x as n → +∞, for every x ∈ K; (iii) for every n and every s ∈ [0,T], e^{sB} J_n x = J_n e^{sB} x, for all x ∈ K.
Let us multiply each term in (3.1) by J_n and set Y_n = J_n Y, f_n = J_n f, η_n = J_n η, and Z_n = J_n Z. One has that the following equation is verified by (Y_n,Z_n) for every τ ∈ [0,T]: that is, (Y_n,Z_n) is the unique mild solution to (3.8). Moreover, by the previous lemma we know that (Y_n,Z_n) is the unique weak solution of problem (3.8), and since Y_n ∈ D(B), we have that for every ξ ∈ D(B*), (3.9) holds. Therefore, extending (3.9) by density to all ξ ∈ K, we obtain that (Y_n,Z_n) is a strong solution.
From Step 1, we also know the analogous estimate for (Y_n,Z_n). We introduce the following sequence of stopping times: for fixed m and n, we have the localized identity. Therefore, since B is dissipative, one has, for some constant c_p depending only on p, the corresponding bound. As a consequence of the BDG inequality, for some constant κ_p depending on p, one obtains a further bound. Therefore, one gets that there exists a constant C_p, depending on p, such that (3.15) holds. Thus Fatou's lemma and property (ii) of J_n imply the analogous bound, letting m tend to infinity. Since lim_{n→+∞} Z_n(s)(ω) = Z_s(ω) P-a.s. and for a.e. s ∈ [0,T], again by Fatou's lemma we obtain (3.17), letting n tend to infinity. Combining this last inequality with (3.6), we conclude the proof of the proposition.
Similar estimates in the finite-dimensional case, holding also for p ∈ (1,2) and for more general f, can be found in [26, Lemma 3.1]. Now let us consider the following generalization of (3.3): (3.18). The definition of a mild solution of (3.18) is the obvious extension of Definition 3.2. We can prove the following.
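The three properties of the approximants J_n = n(nI − B)^{-1} invoked above can be checked concretely. The following is a purely illustrative finite-dimensional analogue (the diagonal matrix B, its eigenvalues, and the test vector x are our choices, not the paper's), with B dissipative:

```python
import numpy as np

# Finite-dimensional analogue of J_n = n(nI - B)^{-1} for a dissipative B
# (a diagonal matrix with nonpositive eigenvalues). We check:
# (i) ||J_n|| <= 1, (iii) J_n commutes with e^{sB}, (ii) J_n x -> x as n -> oo.
lam = np.array([0.0, -1.0, -10.0, -100.0])  # spectrum of B, in (-inf, 0]
B = np.diag(lam)
I = np.eye(4)
x = np.array([1.0, -2.0, 0.5, 3.0])

def J(n):
    return n * np.linalg.inv(n * I - B)

# (i) contraction property for dissipative B
assert np.linalg.norm(J(5), 2) <= 1.0 + 1e-12

# (iii) commutation with the semigroup e^{sB} (exact here, B being diagonal)
E = np.diag(np.exp(0.3 * lam))
assert np.allclose(J(10) @ E, E @ J(10))

# (ii) strong convergence J_n x -> x: the error decreases monotonically in n
errs = [float(np.linalg.norm(J(n) @ x - x)) for n in (10, 100, 1000, 100000)]
print(errs)
```

Each component of J(n)x − x equals λ_k/(n − λ_k) x_k, which explains the monotone decay of the errors as n grows.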
Proposition 3.5. Besides Hypotheses 2.1(ii) and 2.5, assume that there exists a constant L > 0 such that for every s ∈ [0,T], y, y₁ ∈ K, and z, z₁ ∈ L_2(Ξ,K), the Lipschitz bounds below hold. Then there exists a unique mild solution, and estimate (3.21) holds, where C is a constant that depends on p, M_B, and T.
Proof. Take t = 0, the procedure being identical for any t. We set the space ᐅ as below, where δ > 0 will be chosen later in the proof. Let Γ : ᐅ → ᐅ be the map that associates to any (V,W) the solution that exists by Proposition 3.4. We will prove that Γ is a contraction on ᐅ for sufficiently small δ. Let us denote (V₁,W₁) = Γ(V,W). By Proposition 3.4 and the hypothesis on f, we obtain the estimate below, where the constant C depends on M_B and p. Thus we can choose δ small enough that Γ is a contraction, and we find a unique fixed point (Y,Z), a solution of our equation on [0,δ]. Since δ depends only on M_B, L, and p, we can repeat the same procedure on [δ,2δ], and so on, in order to cover the whole interval [0,T]. Thus we obtain a solution (Y,Z) on [0,T] by patching together the solutions obtained on each interval. Uniqueness follows from the local uniqueness on each interval of length δ.
It remains to show the estimate. Since Γ is a contraction on [0,δ], we have (3.25). On the second interval [δ,2δ], the solution (Y,Z) is again the fixed point of a contraction map, and we obtain the corresponding bound, where C is a constant depending on known parameters. Iterating this procedure over a finite number of intervals until we cover the whole interval [0,T], we get estimate (3.21) and conclude the proof.
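Schematically, the contraction step rests on an estimate of the following type, where the power of δ is our assumption based on the standard convolution bound, and C depends only on M_B, L, and p:

```latex
\|\Gamma(V,W)-\Gamma(V_1,W_1)\|_{\mathcal{Y}}
\;\le\; C\,\delta^{1/2}\,\|(V,W)-(V_1,W_1)\|_{\mathcal{Y}},
\qquad C\,\delta^{1/2}<1 \ \text{for } \delta \ \text{small enough}.
```

Because C does not depend on the terminal datum, the same δ works on every subinterval, which is what allows the patching over [0,δ], [δ,2δ], and so on.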

Proofs of theorems
We will prove first Theorem 2.6 and then Theorem 2.4.

4.1. Proof of Theorem 2.6.

We set, for an arbitrary p > 2/(1 − 2γ), where γ was introduced in Hypothesis 2.1(iv), the space defined below. For simplicity, we take t = 0, the procedure being identical for all t ∈ [0,T]. We define the map Γ as follows.
(1) For fixed ξ ∈ L^p(Ω,Ᏺ_t,P;H), we define the process {X_τ : τ ∈ [0,T]}, which depends on Y and Z, as the solution to the forward equation; in [16, Proposition 3.2] it is shown that this equation has a unique solution.
Journal of Applied Mathematics and Stochastic Analysis
(2) Given X, which depends on (Y,Z), we define (Y,Z) as the solution to the backward equation. The existence and uniqueness of a solution (Y,Z) in Ᏼ^T_0, for every fixed X ∈ L^p_ᏼ(Ω;C([0,T];H)), are proven in [16, Proposition 4.3].
We will prove that there exists a T₁, depending only on L, M_A, and M_B, such that for any T ≤ T₁ the map Γ : Ᏼ^T_0 → Ᏼ^T_0 is a contraction. We start from the forward equation. By standard estimates, and then by the factorization method, see [6, Chapter 5] or [16, Proposition 3.2], we obtain the bounds below. We can assume that T ≤ 1; therefore, combining these estimates, there exists a constant γ = γ(p,M_A,L) such that (4.7) holds.
Now we consider the backward equation. We recall that, see [10] for instance, relation (4.8) holds. Thus we deduce that there exists a constant γ₁ = γ₁(p,M_B,L) such that (4.9) holds for all T ≤ T₁. It remains to estimate Z − W. First of all, we notice that for every fixed X, U ∈ L^p_ᏼ(Ω;C([0,T],H)), by Proposition 3.5 there exists a unique mild solution of the equations below. Estimates (4.7), (4.9), and (4.11) imply that there exists a T₁ > 0, depending on p, M_A, M_B, and L, such that the map Γ is a contraction in Ᏼ^T_0 for all T ≤ T₁. Thus for every p > 2/(1 − 2γ), there exists a unique solution to (1.1). This solution clearly belongs to L^p_ᏼ(Ω;C([0,T],K)) × L^p_ᏼ(Ω;L²((0,T);L_2(Ξ,K))) for all p ≥ 2.
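In outline (our notation), the map Γ is the composition of the two steps above:

```latex
\Gamma(Y,Z) := (Y',Z'),\quad\text{where}\quad
\begin{cases}
X = X(Y,Z) \ \text{solves the forward equation with } (Y,Z) \text{ frozen},\\
(Y',Z') \ \text{solves the backward equation along } X,
\end{cases}
```

and the estimates (4.7), (4.9), and (4.11) combine into a bound of the form ‖Γ(Y,Z) − Γ(U,V)‖ ≤ c(T)‖(Y − U, Z − V)‖ with c(T) < 1 for T ≤ T₁.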
The uniqueness in this larger class of processes is a consequence of the next theorem; therefore, we will choose T₁ ≤ T₀, where T₀ is given in Theorem 2.4. This concludes the proof.

Proof of Theorem 2.4.
The proof of this theorem follows the same procedure as the previous one. We set the space T_t as below. One has to prove that the map Γ, considered now as a map from T_t into itself, is a contraction for sufficiently small T, which again depends only on L, M_A, and M_B. For simplicity, we start from t = 0.
We obtain by standard estimates the bound (4.13). Now, thanks again to (4.8), we obtain (4.14), where V, V̄ ∈ L²_ᏼ(Ω × (0,T);K) and K, K̄ ∈ L²_ᏼ(Ω × (0,T) × (0,T);K) are related to Φ and Ψ, respectively, as follows. One can then deduce the estimates leading to (4.18). By the three inequalities (4.13), (4.14), and (4.18), there exists a positive number T₀, depending on L, M_A, and M_B, such that the map Γ is a contraction on T_t for every T ≤ T₀, and this concludes the proof of the theorem.

Regular dependence on the parameters.
In this subsection, we study the differentiability of the solution of the forward-backward system with respect to the initial state. More precisely, we will prove that under appropriate assumptions the solution is Gâteaux differentiable with respect to x. We introduce the following class of functions.
Definition 4.1. We say that a mapping F : X → V belongs to the class Ᏻ¹(X;V) if it is continuous, Gâteaux differentiable on X, and ∇F : X → L(X,V) is strongly continuous.
We need to generalize this definition to functions depending on several variables.
Definition 4.2. We say that a mapping F : X × Y → V belongs to the class Ᏻ^{1,0}(X × Y;V) if it is continuous, Gâteaux differentiable with respect to x on X × Y, and ∇_x F : X × Y → L(X,V) is strongly continuous.
We make further assumptions on the coefficients.
Hypothesis 4.3. We assume that for every t ∈ [0,T], the coefficients belong to the classes introduced in Definitions 4.1 and 4.2.
As in [16, paragraph 3.2], we set the notation below and consider system (4.20). Under Hypotheses 2.1 and 2.5, for every p ≥ 2 system (4.20) has a unique solution for a sufficiently small T. Moreover, its restriction to the time interval [t,T] is the unique solution to (1.1).
(1) First of all, we note that Proposition 3.5 applies to (4.26), and from the hypothesis on Ψ we have that, for fixed η and X, the bounds below hold, where the positive constants C₁ and C₂ depend on M_B, T, C, and p. We introduce the following equation, for any N ∈ L^p_ᏼ(Ω;C([0,T];H)), ζ ∈ L^p(Ω,Ᏺ_T,P;K), and fixed (Y,Z) ∈ Ᏼ^T_0. From Proposition 3.5, it follows that this equation has a unique solution (Ỹ,Z̃) satisfying the estimate below, where κ is a constant that depends on p, M_B, L, and T. From the hypothesis on the coefficients, one can deduce the required regularity, using the same arguments exploited in [16]. It remains to show that the directional derivatives of (Y(X,η),Z(X,η)) in the direction (N,ζ) coincide with the processes (Ỹ(X,N,Y(X,η),Z(X,η),ζ), Z̃(X,N,Y(X,η),Z(X,η),ζ)). For every ε > 0, the corresponding difference quotients solve the following equation.
(2) Since Λ₁(Y₁,Z₁,x,t) is the fixed point of Γ₁(•,Y₁,Z₁,x,t), we can proceed by applying the parameter-depending contraction principle. The proof of a very similar result can be found in [16, Proposition 3.3], so it will be omitted here. Thus Λ₁(Y₁,Z₁,x,t) is, for every (Y₁,Z₁) ∈ Ᏼ^T_0, Gâteaux differentiable, and ∇_x Λ₁ solves the following equation (we omit the dependence on the variables Y₁ and Z₁ for the sake of clarity). Moreover, one has that for every h ∈ H, (4.37) holds.
(3) Let us consider the fixed point X(s,t,x) = Λ₁(s;Y,Z,t,x) corresponding to the couple (Y,Z) that is the fixed point of the map Γ. We can choose N_σ = ∇_x X(σ,t,x)h and ζ = ∇_x Φ(X(T,t,x))∇_x X(T,t,x)h for every h ∈ H; then (∇_x X(σ,t,x)h, ∇_x Y(σ,t,x)h, ∇_x Z(σ,t,x)h) is the unique solution to system (4.22)-(4.23), and combining estimates (4.30) and (4.37), we get estimate (4.24).
This concludes the proof.

Example.
Let Z be the one-dimensional lattice of integers. We introduce an infinite collection of forward-backward systems, (4.38). Let l²(Z) be the set of square-summable sequences of real numbers; to fit Hypothesis 2.1, we assume that the following hold.
(2) {a_n}_{n∈N} is a sequence of real numbers such that the corresponding domain is dense in l²(Z).
(3) {b_n}_{n∈N} is a sequence of real numbers satisfying the analogous condition. We can thus formulate problem (4.38) as a forward-backward system, (4.41). We set the process W below, which is a cylindrical Wiener process with values in l²(Z). We define A and B componentwise, with the domains given respectively below; these two operators are sectorial with dense domain, and B is negative, thus they obviously fulfill Hypothesis 2.1. The coefficients are defined componentwise, and it is immediate to check that they are Lipschitz continuous. In this case, Theorem 2.6 applies, and there exists a T₁ such that for any T ≤ T₁, any p ≥ 2, and any x ∈ l²(Z), there exists a unique solution.

Application of forward-backward systems to stochastic optimal control
5.1. Setting of the problem, assumptions, and auxiliary results. Let U be a real separable Hilbert space and let ᐁ be a subset of U.
Let Hypothesis 2.1 be in force, and consider a controlled process X^u in H on a time interval [t,T] ⊂ [0,T], governed by the Itô stochastic differential equation (5.1). An admissible control is a predictable process that takes values in ᐁ and is square integrable with respect to dP × dt. We recall that the cost functional to be minimized is the functional J introduced above, where l : [0,T] × H × U → R and Φ is defined in Hypothesis 2.1(v). The value function is then obtained by minimizing the cost, where as usual the infimum is taken over all admissible controls. We introduce the Hamiltonian function Ψ. We stress the fact that the probability basis (Ω,Ᏺ,P), on which the Wiener process W_t appearing in (5.1) is defined, is prescribed, so we are considering the control problem in its strong formulation.
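The Hamiltonian Ψ and the minimizing function γ of Hypothesis 5.1(2) below are related in the standard way; this is a sketch, and the pairing z r(t,x,u) between Ξ* and Ξ is our notational assumption:

```latex
\Psi(t,x,z) \;=\; \inf_{u\in\mathcal{U}}\bigl\{\, l(t,x,u) + z\,r(t,x,u) \,\bigr\},
\qquad
\Psi(t,x,z) \;=\; l\bigl(t,x,\gamma(t,x,z)\bigr) + z\,r\bigl(t,x,\gamma(t,x,z)\bigr).
```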
Giuseppina Guatteri 23 Besides Hypothesis 2.1, we assume also that the following hold.
Hypothesis 5.1. (1) The maps r : [0,T] × H × U → Ξ and l : [0,T] × H × U → R are Borel measurable and there exists a constant C > 0 such that the bounds below hold. (2) There exists a Borel measurable function γ such that, for some constant C > 0, the bounds below hold. Note that Hypothesis 5.1(1) implies that for every admissible control u, the functional J(t,x,u) takes values in (−∞,+∞] and is not identically +∞. See also [20, Section 2.1] for more details. In the rest of the section, Hypotheses 2.1 and 5.1 will always be in force.
For the proof, see [20, Lemma 2.3]. Now let us consider a standard Wiener space W, defined on a complete probability space (Ω̄,Ᏺ̄,P̄). For 0 ≤ t ≤ τ ≤ T, we denote by Ᏺ̄_t (resp., Ᏺ̄_{[t,τ]}) the σ-algebra generated by W_s, s ∈ [0,t] (resp., s ∈ [t,τ]), completed with the null sets of Ᏺ̄. For fixed t ∈ [0,T] and x ∈ H, we consider equation (5.9); by [16, Proposition 3.2], there exists a unique solution X̄_τ with the stated integrability. This property and Lemma 5.2 imply the bound below. Thus, following again [16, Proposition 4.3], the system comprising (5.9) has a unique mild solution {(X̄_τ(t,x), Ȳ_τ(t,x), Z̄_τ(t,x)), τ ∈ [t,T]}; we will use this notation when we want to stress the dependence on the data. We set J*(t,x) = Ȳ_t(t,x); note that J*(t,x) is a deterministic value and depends only on the law of Ȳ, that is, only on t, x, F, G, Ψ, and Φ. Using an infinite-dimensional version of Girsanov's theorem, see [6, Theorem 10.14], one has the following.

Global unique solvability for a class of forward-backward systems.
Let us fix T > 0 and, for every t ∈ [0,T] and x ∈ H, consider system (5.16). Exploiting the result proved in Theorem 2.6 and the interpretation of Y_t(t,x) given in Proposition 5.4, one has the following.
Theorem 5.6. For every t ∈ [0,T], every x ∈ H, and every p ≥ 2, there exists a unique mild solution of the forward-backward system (5.16).
Proof. First of all, we note that, changing the constant if necessary, one can find a positive L > 0 such that (5.17) holds for every x, x′ ∈ H and all t ∈ [0,T].
In order to prove the second inequality, we recall that the function V(t,x) coincides with Ȳ_t(t,x), where {(X̄_τ(t,x), Ȳ_τ(t,x), Z̄_τ(t,x)); τ ∈ [t,T]} is the solution of system (5.9)-(5.12) starting at x at time t. First, we have that for every τ ∈ [0,T] and x, x′ ∈ H, (5.18) holds. Indeed, for any admissible u, (5.19) holds, and thus (5.20) follows. Hence, by Gronwall's lemma, there is a constant c, depending on T, L, and M_A, such that (5.21) holds for every τ ∈ [0,T]. Now, applying the Itô formula to |Ȳ_τ(t,x) − Ȳ_τ(t,x′)|², one gets (5.22). Thus, again by the Gronwall lemma and (5.18), there exists a positive constant L such that the required bound holds; for τ = t in particular, one obtains the Lipschitz estimate for every x, x′ ∈ H. Now we can conclude the proof in three steps.
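Both Gronwall steps above use the lemma in its integral form, recorded here for convenience:

```latex
\varphi(\tau)\;\le\; a + b\int_t^\tau \varphi(\sigma)\,d\sigma \quad (\tau\in[t,T])
\quad\Longrightarrow\quad
\varphi(\tau)\;\le\; a\,e^{b(\tau-t)}.
```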
Local existence. From Theorem 2.6, we know that for every p ≥ 2 there exists a δ > 0 such that for every t ≥ T − δ, system (5.16) has a unique solution; in that case there is nothing to prove. If t < T − 2δ, then we can proceed by repeating the construction below, and after a finite number of steps we obtain the required solution in [t,T] for arbitrary t ∈ [0,T]. We proceed in some steps.
(1) For every τ ∈ [T − δ,T], consider the system below. By Theorem 2.6, there exists a unique solution.
(2) Consider, for τ ∈ [t,T − δ], the following forward-backward system with initial condition x and final condition V.
(3) Finally, we conclude by solving again the system of step (1), with initial condition X_{T−δ}(t,x) and final condition Φ, that is, (5.27). Note that X_{T−δ}(t,x) ∈ L^p(Ω,Ᏺ_{T−δ},P;H) and therefore satisfies the hypotheses of Theorem 2.6, so we find a solution {(X_τ,Y_τ,Z_τ), τ ∈ [T − δ,T]} to the system. Thus the triplet of processes (5.28) is a solution to system (5.16) in [t,T] with boundary conditions x and Φ, for any p ≥ 2.
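Schematically, with our notation for the local solutions of steps (1)-(3), the triplet in (5.28) is the concatenation

```latex
(X_\tau,Y_\tau,Z_\tau) =
\begin{cases}
(\bar X_\tau,\bar Y_\tau,\bar Z_\tau), & \tau\in[t,T-\delta],
 \ \text{with terminal datum } V(T-\delta,\cdot),\\
(\hat X_\tau,\hat Y_\tau,\hat Z_\tau), & \tau\in[T-\delta,T],
 \ \text{with initial datum } \bar X_{T-\delta} \text{ and terminal datum } \Phi.
\end{cases}
```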

Global uniqueness. This is a consequence of the local uniqueness. Let us assume that t ∈ [T − 2δ, T − δ) and that there are two different solutions in [t,T] with initial datum x and final datum Φ. We denote the two solutions by (X¹(t,x),Y¹(t,x),Z¹(t,x)) and (X²(t,x),Y²(t,x),Z²(t,x)). Clearly, their restrictions are solutions of the system in [T − δ,T] with initial datum X¹_{T−δ}(t,x) [resp., X²_{T−δ}(t,x)] and final datum Φ. The uniqueness proved in Theorem 2.6 implies that Y^i_{T−δ}(t,x) = V(T − δ, X^i_{T−δ}(t,x)) for i = 1,2. Thus (X¹(t,x),Y¹(t,x),Z¹(t,x)) and (X²(t,x),Y²(t,x),Z²(t,x)) are both solutions in [t,T − δ] with respect to x and V; therefore, again by Theorem 2.6 and the Lipschitz continuity of V, the two solutions coincide in [t,T − δ]. This implies in particular that X¹_{T−δ}(t,x) = X²_{T−δ}(t,x), and thus the two solutions have to coincide also in [T − δ,T], having the same initial condition and the same final datum, again by Theorem 2.6.
We are now in the position to solve the control problem.
Proposition 5.7. For every t ∈ [0,T] and x ∈ H, let {(X_τ(t,x),Y_τ(t,x),Z_τ(t,x)), τ ∈ [t,T]} be the solution to system (5.16) in [t,T] with boundary conditions x and Φ. Set (5.29); then u is an optimal control for the control problem starting from x at time t. If X^u is the corresponding trajectory, then P-a.s., X^u_τ = X_τ(t,x) for τ ∈ [t,T]. Finally, the optimal cost V(t,x) = J(t,x,u) is equal to Y_t(t,x).
Moreover, it is easy to verify that Hypotheses 2.1 and 5.1 hold. Thus Proposition 5.7 can be applied to obtain the existence of the optimal control in the strong formulation, together with its feedback form.

It remains to treat the term E∫₀^T |Z_s − W_s|²_{L_2(Ξ,K)} ds. Since we have to deal with the convolution term, we follow the technique introduced by Hu and Peng in [10], which is based on the martingale representation theorem. We have the following representations for the processes Z and W:
Z_s = e^{(T−s)B} V(s) − ∫_s^T e^{(σ−s)B} K(s,σ) dσ, P-a.s. and for a.e. s ∈ [0,T],
W_s = e^{(T−s)B} V̄(s) − ∫_s^T e^{(σ−s)B} K̄(s,σ) dσ, P-a.s. and for a.e. s ∈ [0,T]. (4.15)