Stochastic Maximum Principle for Partial Information Optimal Control Problem of Forward-Backward Systems Involving Classical and Impulse Controls

We denote by I the class of processes ξ(⋅) = ∑_{i≥1} ξ_i χ_{[τ_i,T]}(⋅) such that each ξ_i is an R-valued G_{τ_i}-measurable random variable, and by K_G the class of impulse processes ξ ∈ I such that E(∑_{i≥1} |ξ_i|)^2 < ∞. We call A_G = U_G × K_G the admissible control set.
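The class I above consists of piecewise-constant processes that accumulate the impulses delivered up to the current time. A minimal numerical sketch, with purely illustrative intervention times τ_i and impulse sizes ξ_i:

```python
# Sketch: evaluate an impulse process xi(t) = sum_{i>=1} xi_i * chi_[tau_i, T](t).
# The intervention times taus and impulse sizes xis below are illustrative only.

def impulse_process(t, taus, xis):
    """Value of xi(t): the sum of all impulses whose time tau_i <= t."""
    return sum((x for tau, x in zip(taus, xis) if tau <= t), 0.0)

taus = [0.2, 0.5, 0.9]   # increasing intervention times tau_1 < tau_2 < tau_3
xis = [1.0, -0.5, 2.0]   # impulse sizes xi_i

# Before the first intervention the process is zero; afterwards it
# accumulates the impulses delivered so far.
values = [impulse_process(t, taus, xis) for t in (0.0, 0.3, 0.7, 1.0)]
print(values)  # [0.0, 1.0, 0.5, 2.5]
```

Each sample path is right-continuous with left limits and changes value only at the τ_i, which is exactly the structure exploited later when the impulse jumps are separated from the jumps of the driving Lévy process.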


Introduction
Classical and impulse control problems have received considerable attention in recent years due to their wide applicability in different areas, such as optimal control of the exchange rate between different currencies (see, e.g., [1][2][3]), the optimal financing and dividend control problem of an insurance company facing fixed and proportional transaction costs (see, e.g., [4,5]), stochastic differential games (see, e.g., [6]), and the dynamic output feedback controller design problem (see, e.g., [7] and the references therein).
In the existing literature, the dynamic programming principle and the maximum principle are the two main approaches to solving these problems.
Under the dynamic programming principle, classical and impulse control problems can be solved by a verification theorem, and the value function is a solution to some quasi-variational inequalities. However, the dynamic programming approach relies on the assumption that the controlled system is Markovian; see, for example, [8][9][10].
There have been some pioneering works on deriving maximum principles for classical and impulse control problems. For example, Wu and Zhang [11] established a maximum principle for stochastic recursive optimal control problems involving impulse controls, and Wu and Zhang [12] gave a maximum principle for classical and impulse controls of forward-backward systems. In their control problems, the information available to the controller is full information.
In many practical systems, the controller only has access to partial information, instead of full information, such as delayed information (see, e.g., [13][14][15][16]). The partial information stochastic control problem is not of a Markovian type and hence cannot be solved by dynamic programming. As a result, maximum principles have been established to solve partial information stochastic control problems, and there is already a rich literature with corresponding versions of the maximum principle. For example, Baghery and Øksendal [17] derived a maximum principle for the partial information stochastic control problem, where the stochastic system is described by stochastic differential equations (SDEs hereafter). An and Øksendal [18] gave a maximum principle for the stochastic differential game under partial information. Øksendal and Sulem [19] established maximum principles for stochastic control of forward-backward systems driven by Lévy processes. In their control problems, the control variable is just the classical stochastic control process u(⋅). To the best of our knowledge, there is no literature studying the maximum principle for partial information classical and impulse control problems, which motivates our work.
In this paper, we study classical and impulse control problems of forward-backward systems, where the stochastic systems are represented by forward-backward SDEs driven by Lévy processes, the control variable consists of two components, the stochastic control u(⋅) and the impulse control ξ(⋅), and the information available to the controller is possibly partial information, rather than full information. Because of the non-Markovian nature of the partial information, we cannot use the dynamic programming principle to solve these problems. Instead, we derive a maximum principle which allows us to handle the partial information case.
A similar maximum principle is also studied by Wu and Zhang [11] in the complete information case and in the Brownian motion setting. There are three main differences between our paper and [11]. Firstly, we study more general cases: the forward-backward system is driven by Lévy processes and the information available to the controller is partial information. Secondly, their proof differs from ours: they used a convex perturbation technique to establish the maximum principle. Thirdly, they assumed concavity conditions on the Hamiltonian and the utility functional so that the necessary optimality conditions turn out to be sufficient. However, the concavity conditions may not hold in many applications. Consequently, in our maximum principle formulation, we give sufficient and necessary optimality conditions for local critical points, instead of global optima, without the assumption of concavity.
The paper is organized as follows: in the next section we formulate the partial information classical and impulse controls problem of the forward-backward system driven by Lévy processes. In Section 3 we derive the stochastic maximum principle for the considered classical and impulse controls problem. In Section 4 we apply the general results obtained in Section 3 to solve an example explicitly. Finally, we conclude this paper in Section 5.
Problem Formulation

Suppose that we are given a subfiltration G_t ⊆ F_t representing the information available to the controller at time t, t ∈ [0, T]. It is remarked that the partial information classical and impulse controls problem is different from classical and impulse control of delay systems, where the state is described by the solution of a stochastic differential delay equation (see, e.g., [20]).
Let {τ_i, i ≥ 1} be a given sequence of increasing G_t-stopping times such that τ_i ↑ +∞. At τ_i we are free to intervene and give the system an impulse ξ_i ∈ R, where ξ_i is a G_{τ_i}-measurable random variable. We define the impulse process ξ(⋅) by

ξ(t) = ∑_{i≥1} ξ_i χ_{[τ_i,T]}(t),  t ∈ [0, T].

It is worth noting that the assumption τ_i ↑ +∞ implies that at most finitely many impulses may occur on [0, T]. Now we consider the forward-backward systems involving classical and impulse controls. Let x ∈ R be given, let C and D : [0, T] → R be measurable mappings, and let U be a nonempty convex subset of R. Then the forward-backward systems are described by forward-backward SDEs in the unknown processes x(t), y(t), q(t), and r(t, ⋅) as follows:

dx(t) = b(t, x(t), u(t)) dt + σ(t, x(t), u(t)) dB(t) + ∫_R θ(t, x(t−), u(t), z) Ñ(dt, dz) + C(t) dξ(t),  x(0) = x,
dy(t) = −g(t, x(t), y(t), q(t), r(t, ⋅), u(t)) dt + q(t) dB(t) + ∫_R r(t, z) Ñ(dt, dz) − D(t) dξ(t),  y(T) = h(x(T)).   (2)

The result of giving the impulse ξ_i is that the state jumps from (x(τ_i−), y(τ_i−)) to (x(τ_i), y(τ_i)) = (x(τ_i−) + C(τ_i)ξ_i, y(τ_i−) − D(τ_i)ξ_i). We call (u, ξ) classical and impulse controls.
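The effect of the impulse on the forward state, x(τ_i) = x(τ_i−) + C(τ_i)ξ_i, can be sketched with a simple Euler scheme. The drift, diffusion coefficient, C, and the impulse data below are hypothetical stand-ins, not the coefficients of the paper's system:

```python
import random

# Sketch: Euler scheme for a forward state under classical and impulse controls.
# Between interventions x follows illustrative drift/diffusion dynamics; at each
# tau_i the state jumps by C(tau_i) * xi_i, i.e. x(tau_i) = x(tau_i-) + C(tau_i)*xi_i.
# All coefficients here are hypothetical.

def simulate(x0, T, n, taus, xis, C=lambda t: 1.0, sigma=0.2, seed=0):
    rng = random.Random(seed)
    dt = T / n
    x = x0
    path = [x0]
    k = 0  # index of the next intervention time
    for i in range(n):
        # continuous part: dx = b(x) dt + sigma dB, with b(x) = -x here
        x += -x * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t = (i + 1) * dt
        # apply every impulse whose intervention time has been reached
        while k < len(taus) and taus[k] <= t:
            x += C(taus[k]) * xis[k]
            k += 1
        path.append(x)
    return path

path = simulate(x0=1.0, T=1.0, n=100, taus=[0.5], xis=[2.0])
```

Since τ_i ↑ +∞, only finitely many interventions fall in [0, T], so the inner loop terminates after finitely many impulses on any horizon.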
There are two different jumps in the system (2). One is the jump of (x(t), y(t)) stemming from the random measure Ñ, denoted by (Δ_N x(t), Δ_N y(t)). The other is the jump caused by the impulse ξ, given by

Δ_ξ x(τ_i) = C(τ_i)ξ_i,  Δ_ξ y(τ_i) = −D(τ_i)ξ_i.

Let U_G denote a given family of controls, contained in the set of G_t-predictable controls u(⋅) such that the system (2) has a unique strong solution. We denote by I the class of impulse processes ξ(⋅) = ∑_{i≥1} ξ_i χ_{[τ_i,T]}(⋅) introduced above, by K_G ⊆ I the admissible impulse processes, and by A_G = U_G × K_G the admissible control set. Suppose we are given a performance functional of the form

J(u, ξ) = E[∫_0^T f(t, x(t), y(t), u(t)) dt + h(x(T)) + ∑_{i≥1} l(τ_i) ξ_i],

where E denotes expectation with respect to P and f, h, l satisfy suitable integrability conditions. Then the classical and impulse controls problem is to find the value function Φ_G(x) ∈ R and optimal classical and impulse controls (u*, ξ*) ∈ A_G such that

Φ_G(x) = sup_{(u,ξ)∈A_G} J(u, ξ) = J(u*, ξ*).   (7)
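A performance functional of this general shape can be estimated by Monte Carlo, averaging the pathwise cost over simulated trajectories. The sketch below uses hypothetical stand-ins throughout: f(t, x, u) = x² + u², h(x) = x², a constant impulse cost weight, and illustrative dynamics with a single impulse; none of these are taken from the paper's equations.

```python
import random

# Sketch: Monte Carlo estimate of a performance functional of the generic form
#   J(u, xi) = E[ int_0^T f(t, x(t), u(t)) dt + h(x(T)) + impulse cost ],
# for an illustrative controlled diffusion with one impulse (tau, xi).
# f, h, the impulse cost, and the dynamics are hypothetical stand-ins.

def sample_cost(u, tau, xi, T=1.0, n=100, rng=None):
    rng = rng or random.Random()
    dt = T / n
    x, running = 0.0, 0.0
    for i in range(n):
        running += (x ** 2 + u ** 2) * dt        # f(t, x, u) = x^2 + u^2
        x += u * dt + 0.3 * rng.gauss(0.0, dt ** 0.5)
        if abs((i + 1) * dt - tau) < dt / 2:     # apply the impulse at tau
            x += xi
    return running + x ** 2 + 0.1 * abs(xi)      # h(x(T)) = x^2, cost 0.1|xi|

rng = random.Random(1)
J_hat = sum(sample_cost(0.5, 0.5, -0.2, rng=rng) for _ in range(2000)) / 2000
```

Maximizing (or minimizing) such an estimate over (u, ξ) is what the maximum principle of the next section characterizes analytically, without requiring the Markovian structure that dynamic programming would need.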

Maximum Principle for Partial Information Classical and Impulse Controls Problems
In this section, we derive a maximum principle for the optimal control problem (7). We will give necessary and sufficient conditions for the local critical points (u*, ξ*). Firstly, we make the following assumptions.
(1) For all s ∈ [0, T) and all bounded G_s-measurable random variables α, the control β(⋅) defined by

β(t) = α χ_{(s,T]}(t),  t ∈ [0, T],

belongs to U_G.
(2) For all (u, ξ), (β, η) ∈ A_G such that (β, η) is bounded, there exists δ > 0 such that the control

(u + εβ, ξ + εη) belongs to A_G for all ε ∈ (−δ, δ).

Next we give the definition of the Hamiltonian process.
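For orientation, a Hamiltonian for controlled forward-backward systems driven by Lévy processes typically takes the following form; the coefficient names b, σ, θ, g and the adjoint processes λ, p, q̄, r̄ are standard stand-ins here, not necessarily the notation of the original display.

```latex
% Typical Hamiltonian for a controlled forward-backward Levy-driven system;
% lambda is the adjoint of the backward equation, (p, \bar q, \bar r) the
% adjoints of the forward equation.  Names are illustrative stand-ins.
H(t, x, y, q, r, u, \lambda, p, \bar q, \bar r)
  = f(t, x, y, u)
  + g(t, x, y, q, r, u)\,\lambda
  + b(t, x, u)\,p
  + \sigma(t, x, u)\,\bar q
  + \int_{\mathbb{R}} \theta(t, x, u, z)\,\bar r(z)\,\nu(dz)
```

The optimality conditions (17) and (18) below are then expressed through conditional expectations of derivatives of H given the subfiltration G_t, which is where the partial information enters.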
(1) (u, ξ) is a critical point for J(u, ξ), in the sense that

(d/dε) J(u + εβ, ξ + εη)|_{ε=0} = 0

for all bounded (β, η) ∈ A_G. By substituting (22), (25) simplifies to an expression valid for all bounded (β, η) ∈ A_G. It is clear that η(t) is independent of β(s), 0 ≤ s ≤ T, so we obtain from (27) that (28) holds for all bounded β ∈ U_G and η ∈ I_G. Now we prove that (17) holds for all β(⋅) ∈ U_G. We know that (28) holds for all bounded β ∈ U_G; in particular, (28) holds for all bounded β ∈ U_G of the form

β(t) = α χ_{(s,T]}(t)

for a fixed s ∈ [0, T), where α is a bounded G_s-measurable random variable. The resulting relation holds for all bounded G_s-measurable random variables α, and as a result we conclude that (17) holds. Moreover, since (29) holds for all bounded G_{τ_i}-measurable random variables ξ_i, we conclude that (18) holds. Therefore, we conclude that (1) ⇒ (2).
(2) ⇒ (1) Each bounded β ∈ U_G can be approximated by linear combinations of controls of the above form. We then prove that (2) ⇒ (1) by reversing the above argument.
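The spike-type perturbations used in the proof above, β(t) = α χ_{(s,T]}(t), switch from 0 to a bounded G_s-measurable value α after time s. A tiny sketch with illustrative constants α and s:

```python
# Sketch: the perturbed control beta(t) = alpha * chi_(s, T](t) from the proof,
# which is 0 up to and including time s and equals alpha afterwards.
# The values of s and alpha below are illustrative.

def bump_control(t, s, alpha):
    """beta(t) = alpha if t in (s, T], else 0."""
    return alpha if s < t else 0.0

s, alpha = 0.4, 2.0
print([bump_control(t, s, alpha) for t in (0.2, 0.4, 0.6)])  # [0.0, 0.0, 2.0]
```

Linear combinations of such bumps are dense among the bounded admissible controls, which is exactly the approximation step invoked in the (2) ⇒ (1) direction.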

Application
Example 7 (portfolio optimization problem). In a financial market, we are given a subfiltration G_t representing the information available to the trader at time t.
On the other hand, we apply the sufficient and necessary optimality condition (17), where the two processes involved are given by (43) and (41). Consequently, we summarize the above results in the following theorem.
where u*, given by (48), is the local critical point of the classical and impulse controls problem (38).

Conclusion
We consider the partial information classical and impulse controls problem of forward-backward systems driven by Lévy processes. The control variable consists of two components: the classical stochastic control and the impulse control. Because of the non-Markovian nature of the partial information, the dynamic programming principle cannot be used to solve partial information control problems. As a result, we derive a maximum principle for this partial information problem. Because the concavity conditions on the utility functions and the Hamiltonian process may not hold in many applications, we give sufficient and necessary optimality conditions for the local critical points of the control problem. To illustrate the theoretical results, we use the maximum principle to solve a portfolio optimization problem with piecewise consumption processes and give its explicit solutions.
In this paper, we assume that the two different jumps in our system do not occur at the same time (Assumption 1). This assumption makes the problem easier to analyze. However, it may fail in many applications. Without this assumption, more care is required to distinguish between the two different jumps. This will be explored in our subsequent work.