Maximum Principle for Near-Optimality of Mean-Field FBSDEs

*The present paper concerns a near-optimal control problem for systems governed by mean-field forward-backward stochastic differential equations (FBSDEs) with mixed initial-terminal conditions. Utilizing Ekeland's variational principle as well as the reduction method, necessary and sufficient near-optimality conditions of Pontryagin's type are established. The results are obtained under the restriction that the control domain is convex. As an application, a linear-quadratic stochastic control problem is solved explicitly.*


Introduction
Near-optimal control problems have attracted much attention in recent years due to their distinct advantages, such as existence under minimal assumptions, availability in most practical cases, and convenience for implementation both analytically and numerically. The study of this theory can be traced back to Ekeland [1] and was later greatly developed by Zhou [2][3][4] for the deterministic and stochastic cases. Since then, many works have been devoted to the near-optimality of various stochastic control systems. Without being exhaustive, let us refer to [5][6][7][8][9][10][11][12][13] and the references therein.
In 2015, Zhang et al. [14] investigated necessary near-optimality conditions for classical linear FBSDEs, where the control domain was nonconvex. Via a convergence technique as well as the reduction method, they established a near-optimal maximum principle. Soon afterwards, under the same assumptions, Zhang [15] presented sufficient near-optimality conditions for such classical linear FBSDEs. In particular, in 2018, by defining a viscosity solution with a perturbation factor to dispense with the illusory differentiability condition on the value function, Zhang and Zhou [16] established necessary near-optimality conditions for stochastic recursive systems by virtue of the dynamic programming principle. It is also noteworthy that, in recent years, some authors have started to study near-optimal control problems for delay systems. For example, Zhang [17] first studied near-optimal control problems for linear stochastic delay systems. By the method of anticipated backward stochastic differential equations as well as the maximum principle, a necessary condition and a sufficient verification theorem were provided. Then, also under the convexity restriction on the control domain, Wang and Wu [18] investigated a near-optimal control problem for nonlinear stochastic delay systems. By Ekeland's variational principle and corresponding moment estimates, they presented sufficient as well as necessary near-optimality conditions. For more details, refer to [19,20] and the references therein.
However, to the best of our knowledge, few papers can be found in the literature on the near-optimality of mean-field backward stochastic differential equations (BSDEs).
This new kind of mean-field BSDE was first introduced by Buckdahn et al. [21], where it was derived as the limit of a high-dimensional system of FBSDEs corresponding to a large number of particles. It was shown in Buckdahn et al. [22] that such a mean-field BSDE describes the viscosity solution of an associated nonlocal partial differential equation. Henceforth, many authors have taken this system of McKean-Vlasov type (Lasry and Lions [23]) into account in different frameworks; for example, Xu and Wu [24] presented a maximum principle for optimal control problems governed by backward stochastic partial differential equations of mean-field type. For other related works, refer to [25][26][27][28].
As we can see, all the above literature concerns mean-field problems involving expectations as mean-field terms. In fact, there is another line of work dealing with mean-field problems, which involves large populations as mean-field terms to describe the impact of the population's collective behavior on all agents (Huang et al. [29]). For instance, the works of Huang [30], Xu and Shi [31], and Xu and Zhang [32] all concerned general mean-field linear-quadratic-Gaussian (LQG) games of stochastic large-population systems; through the consistency condition, they derived decentralized strategies and further verified the asymptotic near-optimality property (namely, the ε-Nash equilibrium property) of the decentralized strategies for the LQ games. On the other hand, a relevant work of Hafayed and Abbas [8] on near-optimal control problems established necessary and sufficient conditions for mean-field singular stochastic systems in the case of a controlled diffusion coefficient. In particular, in its concluding section, it is pointed out that the establishment of necessary and sufficient near-optimality conditions for mean-field FBSDEs remains an open problem. Motivated by this fact, together with the application background of mean-field theory in economics and finance described above, this paper discusses near-optimal control problems for mean-field FBSDEs, where the controlled state systems have mixed initial-terminal conditions. One main contribution of this paper lies in the introduction of three first-order adjoint equations to eliminate the corresponding variational processes during the dual analysis; another is rooted in the use of the reduction method to guarantee the well-posedness of the first-order adjoint equations with mixed initial-terminal conditions. Via the classical convex variational technique and Ekeland's variational principle, a necessary condition of Pontryagin's type is derived.
Then, under some additional assumptions, we prove that the near-maximum condition on the Hamiltonian function is a sufficient condition for near-optimality. It is remarkable that our results essentially extend those of [5] to the framework of mean-field theory. The rest of this paper is organized as follows. In Section 2, we state some preliminaries and basic definitions. In Sections 3 and 4, we establish the main theorems and provide their detailed proofs. In Section 5, an example of a linear-quadratic control problem is worked out to illustrate the theoretical applications. Finally, some concluding remarks are given in Section 6.

Preliminaries
Let (Ω, F, {F_t}_{t≥0}, P) be a filtered probability space satisfying the usual conditions, on which a one-dimensional standard Brownian motion (W_t)_{t≥0} is defined. Let F_t = σ{W_s : 0 ≤ s ≤ t} ∨ N_P be the natural filtration generated by (W_t)_{t≥0} and augmented by all P-null sets, where N_P is the collection of all P-null subsets. We now introduce some spaces of random variables and stochastic processes.
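The displays defining these spaces did not survive extraction; the choices customary in this literature, consistent with the admissible-control space L²_F(0, T; U) used below, can be sketched as follows (a reconstruction, not the authors' exact display):

```latex
% Hedged sketch of the standard spaces of random variables and processes.
\begin{aligned}
L^2(\Omega,\mathcal F_T;\mathbb R) &= \big\{\xi \,\big|\, \xi \text{ is } \mathcal F_T\text{-measurable},\ \mathbb E|\xi|^2 < \infty \big\},\\
L^2_{\mathcal F}(0,T;\mathbb R) &= \Big\{(\varphi_t)_{0\le t\le T} \,\Big|\, \varphi \text{ is } \mathcal F_t\text{-progressively measurable},\ \mathbb E\!\int_0^T |\varphi_t|^2\,dt < \infty \Big\}.
\end{aligned}
```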
We study the near-optimal control problem of the following controlled mean-field FBSDE with mixed initial-terminal conditions, where U is a given closed convex subset of R.
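The displayed state system is missing above; a representative controlled mean-field FBSDE of the kind described, written with the coefficients b, σ, f of assumption (A1) (a sketch of the general shape, not the authors' exact display), is:

```latex
\begin{cases}
dx_t = b(t, x_t, y_t, z_t, \mathbb E x_t, \mathbb E y_t, \mathbb E z_t, u_t)\,dt
     + \sigma(t, x_t, y_t, z_t, \mathbb E x_t, \mathbb E y_t, \mathbb E z_t, u_t)\,dW_t,\\[2pt]
-dy_t = f(t, x_t, y_t, z_t, \mathbb E x_t, \mathbb E y_t, \mathbb E z_t, u_t)\,dt - z_t\,dW_t,
\qquad t \in [0, T],
\end{cases}
```

supplemented by mixed initial-terminal conditions coupling (x_0, y_0) and (x_T, y_T) through the functions h and c of assumption (A2).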
The cost functional to be minimized over the space U = L²_F(0, T; U) of admissible controls takes the following form.

Definition 1 (see [4]). Both a family of admissible pairs (x^ε, y^ε, z^ε, u^ε) parameterized by ε > 0 and any element u^ε in the family are called near-optimal if |J(u^ε) − inf_{u∈U} J(u)| ≤ r(ε) holds for sufficiently small ε, where r is a function of ε satisfying r(ε) → 0 as ε → 0. The estimate r(ε) is called an error bound. If r(ε) = Cε^δ for some δ > 0 and a constant C independent of ε, then u^ε is called near-optimal with order ε^δ. In particular, when r(ε) = ε, u^ε is called ε-optimal. The near-optimal control problem under consideration in this paper is as follows.
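The display of the cost functional is likewise missing; one form consistent with the running cost l of (A1) and the function φ of (A2) (an assumption about the general shape, not the authors' exact functional) is:

```latex
J(u) = \mathbb E\left[\int_0^T l(t, x_t, y_t, z_t, \mathbb E x_t, \mathbb E y_t, \mathbb E z_t, u_t)\,dt
      + \varphi(x_T, y_0)\right].
```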
Some notations and assumptions are presented before giving the well-posedness of system (3). We denote by |·| the norm of a Euclidean space.
(A1) The functions b, σ, f, and l are F-progressively measurable in u, continuously differentiable in x, y, z, x̄, ȳ, and z̄, and the derivatives of b, σ, f, and l with respect to x, y, z, x̄, ȳ, and z̄ are bounded. Moreover, for some constant C > 0, (A2) h, c, and φ are continuously differentiable in x and y, and the derivatives of h, c, and φ with respect to x and y are bounded. Moreover, for some constant C > 0. Furthermore, where i = x, y, z, x̄, ȳ, z̄ and ρ = h, c, φ.
In fact, due to the mixed initial-terminal conditions in the state equation, even if we obtain the well-posedness of the state equation via the Lyapunov operator introduced in [34], the well-posedness of the first-order adjoint equation does not seem to be guaranteed. To overcome this difficulty, we introduce a reduction method inspired by the study of the optimality variational principle for controlled FBSDEs with mixed initial-terminal conditions [35]. First, we pose the following problem.
where (x_0, y_0, u) is subject to the forward control system

-dy_t = f(t, x_t, y_t, z_t, Ex_t, Ey_t, Ez_t, u_t) dt - z_t dW_t,

with the mixed initial-terminal state constraints. It is remarkable that, for Problem A, the mean-field system (3) has a unique solution (x, y, z) under (A1)-(A3), which implies that y(0) is unique and completely determined. In contrast, for Problem B, y(0) is arbitrary and viewed as a control variable; it only needs to satisfy the near-optimal state constraints at time T. Thus, Problem A is embedded into Problem B. Hence, if the triple (x_0^ε, y_0^ε, u^ε) is a near-optimal control of Problem B, then u^ε is near-optimal for Problem A. In the following section, we will adopt the classical convex variational technique to solve Problem B.
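The reduction just described turns the backward component into a forward equation once y_0 is freed as a control. A minimal numerical sketch of this viewpoint, with toy linear coefficients, the mean-field terms E x_t, E y_t approximated by empirical particle averages, and z taken as a given process (here z ≡ 0) — none of these choices come from the paper:

```python
import numpy as np

# Hedged sketch of Problem B's forward view: with y_0 treated as a free
# control variable, both x and y are integrated forward in time by an
# Euler scheme. Mean-field terms E[x_t], E[y_t] are approximated by
# empirical averages over n_particles paths; the coefficients b, f are
# toy choices, not the paper's, and z is taken as the zero process.

def simulate_problem_b(x0, y0, u, b, f, T=1.0, n_steps=100, n_particles=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    y = np.full(n_particles, y0, dtype=float)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_particles)
        Ex, Ey = x.mean(), y.mean()            # empirical mean-field terms
        x = x + b(x, Ex, u) * dt + 0.2 * dW    # forward SDE, constant diffusion 0.2
        y = y - f(x, y, Ey, u) * dt            # -dy = f dt - z dW with z == 0
    return x, y

# Toy linear coefficients (illustrative only).
b = lambda x, Ex, u: -x + 0.5 * Ex + u
f = lambda x, y, Ey, u: 0.3 * y - 0.1 * Ey + x

x_T, y_T = simulate_problem_b(x0=1.0, y0=0.5, u=0.0, b=b, f=f)
print(float(x_T.mean()), float(y_T.mean()))
```

Scanning over candidate values of y_0 and penalizing the violation of the terminal constraint then becomes an unconstrained search, which is the embedding of Problem A into Problem B described above.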

Necessary Condition of Near-Optimality
This section is devoted to the study of the main theorem. For simplicity, we denote the following. For any u ∈ U and the corresponding state processes (x, y, z), we define the first-order adjoint equation below. The well-posedness of the corresponding adjoint system will be provided in the course of the proof of Theorem 1.
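The adjoint display itself is missing here. As a sketch only (signs, the number of adjoint processes, and the mean-field derivative terms vary across this literature, and the paper's three adjoint equations were lost in extraction), a common pattern built from a Hamiltonian H = pb + qσ + kf + l is:

```latex
% Sketch of a typical mean-field adjoint pair: a forward equation for the
% adjoint k of the backward state y, and a backward pair (p, q) for the
% forward state x; H_i denotes the partial derivative of H in i.
\begin{cases}
dk_t = \big(H_y + \mathbb E[H_{\bar y}]\big)\,dt + \big(H_z + \mathbb E[H_{\bar z}]\big)\,dW_t,\\[2pt]
-dp_t = \big(H_x + \mathbb E[H_{\bar x}]\big)\,dt - q_t\,dW_t,
\end{cases}
```

with initial and terminal values determined by the derivatives of h, c, and φ through the mixed initial-terminal conditions.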
Define a metric d on U as follows. Since U is closed, it can be shown that (U, d) is a complete metric space. Next, we present some continuity properties of the state processes and adjoint processes with respect to the metric d.
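The metric display did not survive extraction; the distance customarily paired with Ekeland's variational principle is d(u, v) = (Leb ⊗ P){(t, ω) ∈ [0, T] × Ω : u_t(ω) ≠ v_t(ω)} (an assumption here, though it is the standard choice in this literature). A minimal numerical sketch on a time-sample grid, with a toy discretization not taken from the paper:

```python
import numpy as np

# Hedged sketch: approximate d(u, v) = (Leb x P){(t, w) : u_t(w) != v_t(w)}
# on a grid of n_steps time points and n_omega sample paths, each time cell
# carrying Lebesgue mass T/n_steps and each path probability 1/n_omega.

def ekeland_distance(u, v, T=1.0):
    disagree = (u != v)            # indicator of the disagreement set {u != v}
    return disagree.mean() * T     # its (Leb x P)-measure on [0, T] x Omega

u = np.zeros((4, 3))               # 4 time cells, 3 sample paths
v = np.zeros((4, 3))
v[0, :] = 1.0                      # controls differ on the first time cell only
print(ekeland_distance(u, v))      # 0.25 = (1/4) * T with T = 1
```

Under this metric, perturbing a control on a time set of small Lebesgue measure moves it only slightly, which is what makes the Ekeland perturbation arguments below quantitative.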

Lemma 1.
For any 0 < α < 1 and 0 < p ≤ 2, there is a constant C = C(α, p) > 0 such that, for any u, ū ∈ U along with the corresponding trajectories (x, y, z) and (x̄, ȳ, z̄), the following estimates hold.

Proof. Applying the classical method of Lemma 4 in [5] to mean-field FBSDEs, together with the Burkholder-Davis-Gundy inequality and Gronwall's inequality, we obtain the estimates.
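The estimates themselves were lost in extraction; in this literature (cf. [2, 5]) they typically take the following flavor, recorded here as a sketch rather than the authors' exact statement:

```latex
\mathbb E\Big[\sup_{0\le t\le T}|x_t-\bar x_t|^{p}\Big]
 + \mathbb E\Big[\sup_{0\le t\le T}|y_t-\bar y_t|^{p}\Big]
 + \mathbb E\Big[\int_0^T |z_t-\bar z_t|^{p}\,dt\Big]
 \le C\, d(u,\bar u)^{\alpha p/2}.
```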
Proof. Under the assumption (A 2 ), it is easy to check that J(x 0 , y 0 , u) is lower semicontinuous on R ≔ R × R × U, which is a complete metric space under the following metric:
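The metric on R = R × R × U was dropped in extraction; a natural product metric, consistent with the metric d on U (a plausible reconstruction, not the authors' display), is:

```latex
\widetilde d\big((x_0, y_0, u), (\bar x_0, \bar y_0, \bar u)\big)
 = |x_0 - \bar x_0| + |y_0 - \bar y_0| + d(u, \bar u).
```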

Mathematical Problems in Engineering
By Ekeland's variational principle [1], there exists an It means that (x ε 0 , y ε 0 , u ε ) is optimal for system (12) with the new cost functional J ε (x 0 , y 0 , u). On the contrary, due to the mixed initial-terminal endpoint constraints in problem B, we need to introduce the penalty functional to transform the original problem with endpoint constraints to the penalized optimal control problem with no endpoint constraints.