A Necessary Condition for Optimal Control of Forward-Backward Stochastic Control System with Lévy Process in Nonconvex Control Domain Case

This paper analyzes a class of optimal control problems described by forward-backward stochastic differential equations with Lévy process (FBSDEL). We derive a necessary condition for the existence of the optimal control by means of the spike variation technique, while the control domain is not necessarily convex. Simultaneously, we also obtain the maximum principle for this control system when there are initial and terminal state constraints. Finally, a financial example is discussed to illustrate the application of our result.


Introduction
Stochastic optimal control has long been an important topic in modern control theory. As is well known, Pontryagin's maximum principle [1] is one of the main ways to solve stochastic optimal control problems. By introducing the Hamiltonian function, he gave a necessary condition for the optimal control, which is called the maximum condition. Since then, a great deal of work has been done on this subject. Peng [2] was the first to prove the general maximum principle for the stochastic control system whose diffusion coefficient contains the control variable, by the technique of second-order Taylor expansion and second-order duality. He [3] was also the first to establish the maximum principle for forward-backward stochastic control systems from the viewpoint of backward stochastic differential equations (BSDEs). In Peng's paper [3], the control domain was convex (local form); Xu [4] extended this conclusion to the case of a nonconvex control domain (global form), but the control variable was not allowed to enter the diffusion coefficient. These results were extended to the fully coupled case, in local and global form, by Shi and Wu [5, 6] in 1998 and 2006, respectively. On the basis of these works, Situ [7] was the first to obtain, in 1991, the maximum principle in global form for forward stochastic control systems with random jumps. Shi and Wu [8] and Shi [9] obtained the maximum principle for a class of forward-backward stochastic control systems with Poisson jumps, in local and global form, respectively. Building on Shi and Wu [8], Liu et al. [10] extended the result to the fully coupled forward-backward stochastic control system and, at the same time, obtained the maximum principle when the control system is subject to initial and terminal state constraints.
Considering that, in real life, decision makers can usually obtain only partial rather than complete information, many scholars have paid attention to the partially observable stochastic optimal control problem and have achieved many results (see, for example, [11][12][13]). Traditionally, when a stochastic partial differential equation called the Zakai equation is used to transform a full-information optimal control problem into a partially observable one, a difficult problem arises: an infinite-dimensional optimal control problem. Wang and Wu [14] proposed a backward separation approach and replaced the original state and observation equations with the Zakai equation, thereby avoiding a great deal of complicated stochastic calculus in infinite-dimensional spaces. Based on this approach, Xiao [15] studied the partially observed optimal control of forward-backward stochastic systems with random jumps and obtained the maximum principle and sufficient conditions for an optimal control under certain convexity assumptions. Wang et al. [16] proved maximum principles for forward-backward stochastic control systems with correlated state and observation noises. More recent conclusions on the partially observed stochastic control problem can be found in the studies by Wang et al. [17], Zhang et al. [18], and Xiong et al. [19].
In recent years, through the study of mathematical economics and mathematical finance, many scholars have turned their attention to stochastic control systems driven by Lévy processes. In 2000, Nualart and Schoutens [20] constructed a family of pairwise strongly orthonormal martingales called the Teugels martingales. Meanwhile, under certain exponential moment conditions, they also obtained a martingale representation theorem in that paper. Based on these two important conclusions, they proved in the following year the existence and uniqueness theorem for solutions of BSDEs driven by the Teugels martingales [21]. Since then, a number of important results have been obtained: Meng and Tang [22] established the maximum principle for forward stochastic control systems driven by Lévy processes. A necessary and sufficient condition for the existence of the optimal control of backward stochastic control systems driven by Lévy processes was deduced by Tang and Zhang [23] through convex variation methods and duality techniques. For forward-backward stochastic control systems driven by Lévy processes, there are also many achievements: based on the existence and uniqueness theorem for FBSDEL [24], Zhang et al. [25] obtained a necessary condition for the optimal control and a verification theorem, but in their control system the backward state variables y_t and z_t did not enter the forward part. Wang and Huang [26] extended this result to the fully coupled control system and obtained a continuous-dependence-on-parameters result for FBSDEL and the local-form maximum principle. Subsequently, Huang et al. [27] studied this control system with terminal state constraints and obtained the corresponding necessary maximum principle using Ekeland's variational principle. For more recent conclusions on stochastic control problems driven by Lévy processes, please refer to [28][29][30].
In this paper, we study the optimal control problem for forward-backward stochastic control systems driven by Lévy processes, which can be viewed as extending the result of [25] to the nonconvex control domain case.
Using the spike variation technique and Ekeland's variational principle, we obtain the maximum principle for this type of control system, as well as for the control system with initial and terminal state constraints. The structure of this paper is as follows. Section 2 describes some preliminaries used in this paper. The maximum principle, and its version with initial and terminal state constraints, are the main results of this paper and are presented in Sections 3 and 4. As an application of the maximum principle, Section 5 discusses an optimal consumption problem in the financial market. Section 6 concludes the article.

Preliminary Statement
Let (Ω, F_t, P) be a complete filtered probability space satisfying the usual conditions, where the information structure F_t is generated by two processes: a standard Brownian motion B_t, 0 ≤ t ≤ T, valued in R^d, and an independent 1-dimensional Lévy process L_t, 0 ≤ t ≤ T, of the form L_t = bt + l_t, where l_t is a pure jump process. We assume that the Lévy measure ν satisfies the following two conditions, so that the Lévy process L_t, 0 ≤ t ≤ T, has moments of all orders.
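For the reader's convenience, conditions of the following standard form (cf. Nualart and Schoutens [20]) are what is typically imposed on ν in this setting; the precise formulation used in the present paper may differ slightly in its constants:

```latex
% Standard integrability/moment conditions on the Lévy measure \nu
% (cf. Nualart and Schoutens [20]); under (ii), L_t has moments of all orders.
\begin{aligned}
&\text{(i)}\;\; \int_{\mathbb{R}} \bigl(1 \wedge x^{2}\bigr)\,\nu(\mathrm{d}x) < \infty,\\
&\text{(ii)}\;\; \int_{(-\varepsilon,\varepsilon)^{c}} e^{\lambda |x|}\,\nu(\mathrm{d}x) < \infty
\quad\text{for some } \varepsilon > 0 \text{ and } \lambda > 0.
\end{aligned}
```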
For i ≥ 2, let L_t^{(i)} = Σ_{0<s≤t} (ΔL_s)^i, with L_t^{(1)} = L_t, denote the power jump processes, and let Y_t^{(i)} = L_t^{(i)} − E[L_t^{(i)}] be the compensated power jump process of order i. The Teugels martingales are then defined by H_t^{(i)} = Σ_{j=1}^{i} c_{i,j} Y_t^{(j)}, where the coefficients c_{i,j} come from orthonormalizing the polynomials 1, x, x², … with respect to the measure x²ν(dx). The processes H^{(i)}, i ≥ 1, are pairwise strongly orthogonal martingales, and their predictable quadratic variation processes are ⟨H^{(i)}, H^{(j)}⟩_t = δ_{ij} t; for more details on the Teugels martingales, see Nualart and Schoutens [20].
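As a purely illustrative aside (not part of the paper's formal development), the sketch below simulates a compound Poisson path and evaluates the power jump processes L_T^{(i)} = Σ_{0<s≤T} (ΔL_s)^i together with their compensated versions; the intensity, jump law, and horizon are arbitrary choices for the demonstration.

```python
import numpy as np

def simulate_jumps(T=1.0, lam=5.0, seed=0):
    """Jump times and sizes of a compound Poisson process on [0, T]."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(lam * T)                      # number of jumps
    times = np.sort(rng.uniform(0.0, T, size=n))  # jump times
    sizes = rng.normal(0.0, 0.5, size=n)          # jump law N(0, 0.25), arbitrary
    return times, sizes

def power_jump(sizes, i):
    """L_T^{(i)} = sum over jumps of (Delta L)^i, for i >= 2."""
    return float(np.sum(sizes ** i))

def compensated_power_jump(sizes, i, T, lam, moment_i):
    """Y_T^{(i)} = L_T^{(i)} - E[L_T^{(i)}]; for compound Poisson,
    E[L_T^{(i)}] = lam * T * E[(jump size)^i]."""
    return power_jump(sizes, i) - lam * T * moment_i

times, sizes = simulate_jumps()
# The order-2 power jump process is the quadratic variation of the jump part:
print(power_jump(sizes, 2))
```

The Teugels martingales themselves are finite linear combinations of such compensated processes, with coefficients determined by the orthonormalization described above.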
In the remainder of this section, we fix some notation. For a Hilbert space H, let M²(0, T; H) denote the space of F_t-predictable H-valued processes φ such that E ∫_0^T ‖φ_t‖²_H dt < ∞. Consider the following FBSDEL:

\[
\begin{cases}
dx_t = b(t, x_t, y_t, z_t, r_t)\,dt + \sigma(t, x_t, y_t, z_t, r_t)\,dB_t + \sum_{i=1}^{\infty} g^{(i)}(t, x_{t-}, y_{t-}, z_t, r_t)\,dH_t^{(i)},\\
-dy_t = f(t, x_t, y_t, z_t, r_t)\,dt - z_t\,dB_t - \sum_{i=1}^{\infty} r_t^{(i)}\,dH_t^{(i)},\\
x_0 = a, \qquad y_T = \Phi(x_T),
\end{cases}
\]

where the mappings b, σ, g, and f take values in R^n, R^{n×d}, l²(R^n), and R^m, respectively. For convenience of writing, set the column vector α = (x, y, z)^T and define A(t, α) = (−M^T f, M b, M σ)^T(t, α), where M is an m × n full-rank matrix.
Then, the following conclusion on the existence and uniqueness of the solution holds.
The detailed proof of this conclusion can be found in [24].

Stochastic Maximum Principle
In this section, for any given admissible control u(·), we consider the following stochastic control system:

\[
\begin{cases}
dx_t = b(t, x_t, u_t)\,dt + \sigma(t, x_t)\,dB_t + \sum_{i=1}^{\infty} g^{(i)}(t, x_{t-})\,dH_t^{(i)},\\
-dy_t = f(t, x_t, y_t, z_t, r_t, u_t)\,dt - z_t\,dB_t - \sum_{i=1}^{\infty} r_t^{(i)}\,dH_t^{(i)},\\
x_0 = a, \qquad y_T = \Phi(x_T),
\end{cases}
\tag{2}
\]

where a ∈ R^n is given. An admissible control u(·) ∈ M²(0, T; R^p) is an F_t-predictable process taking values in a nonempty subset U of R^p; U_ad denotes the set of all admissible controls. The performance criterion is

\[
J(u(\cdot)) = \mathbb{E}\,[c(y_0)],
\]

where c : R^m → R is a given Fréchet differentiable function.
Our optimal control problem amounts to determining an admissible control u^* ∈ U_ad such that

\[
J(u^{*}(\cdot)) = \inf_{u(\cdot) \in U_{ad}} J(u(\cdot)).
\]

In order to obtain the necessary conditions for the optimal control, we assume that u_t^* is the optimal control, denote the corresponding solution of (2) by (x_t^*, y_t^*, z_t^*, r_t^*), and introduce the "spike variational control"

\[
u_t^{\varepsilon} =
\begin{cases}
v_t, & \tau \le t \le \tau + \varepsilon,\\
u_t^{*}, & \text{otherwise},
\end{cases}
\]

and let (x_t^ε, y_t^ε, z_t^ε, r_t^ε) be the state trajectories corresponding to u_t^ε; here, v_t is an arbitrary admissible control and ε > 0 is a sufficiently small constant.
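To make the spike variation concrete, here is a small numerical sketch on a discretized time grid: u^ε agrees with the candidate optimal control u^* outside [τ, τ + ε] and with v inside it. The grid, τ, ε, and the two control paths are arbitrary placeholders chosen for illustration.

```python
import numpy as np

def spike_variation(u_star, v, t_grid, tau, eps):
    """Return the spike perturbation u^eps: equal to v on [tau, tau + eps],
    and to u* everywhere else on the grid."""
    u_eps = u_star.copy()
    window = (t_grid >= tau) & (t_grid <= tau + eps)
    u_eps[window] = v[window]
    return u_eps

t_grid = np.linspace(0.0, 1.0, 1001)   # uniform grid on [0, T], T = 1
u_star = np.zeros_like(t_grid)         # placeholder candidate optimal control
v = np.ones_like(t_grid)               # an arbitrary admissible control
u_eps = spike_variation(u_star, v, t_grid, tau=0.3, eps=0.01)
```

The point of the spike variation is that u^ε differs from u^* only on a set of measure ε, so no convexity of U is needed to compare J(u^ε) with J(u^*).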
We also need the following assumption and the variational equation (6).

Assumption 2
(i) b, f, g, σ, Φ, and c are continuously differentiable with respect to (x, y, z, r, u), and their derivatives are all bounded. (ii) There exists a constant C > 0 such that |c_y| ≤ C(1 + |y|).
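The estimates in the next lemma repeatedly invoke Gronwall's inequality; for reference, the integral form used in such arguments is:

```latex
% Gronwall's inequality (integral form): if \phi \ge 0 satisfies
% \phi(t) \le C_1 + C_2 \int_0^t \phi(s)\,\mathrm{d}s for all t \in [0, T],
% then
\phi(t) \;\le\; C_1\, e^{C_2 t}, \qquad t \in [0, T].
```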
Mathematical Problems in Engineering

Lemma 2. Suppose Assumptions 1 and 2 hold; then, for the first-order variations X, Y, Z, R, the estimates (7)–(14) hold.

Proof. We first prove inequalities (7) and (8). Applying Gronwall's inequality to the forward part of the first-order variational equation yields (7); similarly, (8) holds. We next estimate Y_t, Z_t, and R_t. The backward part of the first-order variational equation can be rewritten in the integral form (17). Squaring both sides of (17) and using an elementary inequality, we obtain, for t ∈ [T − δ, T] with δ = 1/(12C²), an estimate to which Gronwall's inequality applies. Next, consider the BSDE of the first-order variational equation on the interval [t, T − δ]; thus, for t ∈ [T − 2δ, T] with the same δ = 1/(12C²), the same argument yields the corresponding estimate. After a finite number of such iterations, (9), (11), and (13) are obtained; (10), (12), and (14) can be proved by a similar method together with the analogous auxiliary inequalities. □

Lemma 3. Under Assumptions 1 and 2, the four estimates (25)–(28) hold.

Proof. To prove (25), we expand the difference x_t^ε − x_t^* − X_t; it follows easily from Lemma 2 and the boundedness of the derivatives that the required bound holds. Next, we prove (26), (27), and (28): it can be easily checked that the differences y_t^ε − y_t^* − Y_t, z_t^ε − z_t^* − Z_t, and r_t^ε − r_t^* − R_t satisfy analogous relations. From Lemma 2 and estimate (25), we can then get (26), (27), and (28) by applying the iterative method to the above relations. □

Lemma 4 (variational inequality). Under Assumptions 1 and 2, the following variational inequality holds:

\[
\mathbb{E}\,\langle c_y(y_0^{*}), Y_0 \rangle \ge o(\varepsilon).
\]

Proof. From the four estimates in Lemma 3, we have

\[
J(u^{\varepsilon}(\cdot)) - J(u^{*}(\cdot)) = \mathbb{E}\,\langle c_y(y_0^{*}), Y_0 \rangle + o(\varepsilon).
\]

Therefore, since u^* is optimal, J(u^ε(·)) − J(u^*(·)) ≥ 0, and the variational inequality follows. □

We introduce the Hamiltonian function H : [0, T] × R^n × R^m × R^{m×d} × l²(R^m) × U × R^n × R^m × R^{n×d} × l²(R^n) → R as

\[
H(t, x, y, z, r, u, p, q, w, k) = \langle p, b(t, x, u)\rangle + \langle w, \sigma(t, x)\rangle + \langle k, g(t, x)\rangle - \langle q, f(t, x, y, z, r, u)\rangle,
\]

and the adjoint equation

\[
\begin{cases}
dq_t = -H_y\,dt - H_z\,dB_t - \sum_{i=1}^{\infty} H_{r^{(i)}}\,dH_t^{(i)},\\
dp_t = -H_x\,dt + w_t\,dB_t + \sum_{i=1}^{\infty} k_t^{(i)}\,dH_t^{(i)},\\
q_0 = c_y(y_0^{*}), \qquad p_T = -\Phi_x(x_T^{*})^{\mathrm T} q_T, \qquad t \in [0, T],
\end{cases}
\]

where H_x denotes H_x(t, x_t^*, y_t^*, z_t^*, r_t^*, u_t^*, p_t, q_t, w_t, k_t), and similarly for H_y, H_z, and H_{r^{(i)}}.
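With the Hamiltonian function and adjoint processes above, the variational inequality leads, by the standard duality argument, to a maximum condition. Under the sign conventions adopted here (an assumption on our part; the final normalization of signs may differ), it takes the form:

```latex
% Necessary (maximum) condition satisfied by the optimal control u^*:
H\bigl(t, x_t^{*}, y_t^{*}, z_t^{*}, r_t^{*}, v, p_t, q_t, w_t, k_t\bigr)
\;\le\;
H\bigl(t, x_t^{*}, y_t^{*}, z_t^{*}, r_t^{*}, u_t^{*}, p_t, q_t, w_t, k_t\bigr),
\quad \forall\, v \in U,\ \text{a.e. } t \in [0, T],\ \mathbb{P}\text{-a.s.}
```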