Stochastic Linear Quadratic Control Problem on Time Scales

This paper addresses a version of the stochastic linear quadratic control problem on time scales (SΔLQ), which includes the discrete-time and continuous-time problems as special cases. Riccati equations on time scales are given, and the optimal control can be expressed as a linear state feedback. Furthermore, we present the existence and uniqueness of the solution to the Riccati equation on time scales. Finally, we give an example to illustrate the theoretical results.


Introduction
The linear quadratic control problem is one of the most important problems in optimal control. It has received much attention [1][2][3] and has a wide range of applications in engineering and finance. By now, the linear quadratic control problem is well understood from both the continuous-time and discrete-time points of view. In this paper, the stochastic linear quadratic control problem is studied in the time-scale setting.
Time scales were first introduced by Hilger [4] in 1988 in order to unify and extend continuous and discrete analysis in a general framework. Time scale theory has been extensively studied in many works [5][6][7][8][9][10]. It is well known that the optimal control problem on time scales is an important field for both theory and applications. Since the calculus of variations on time scales was studied by Bohner [11], results on related topics and their applications have steadily accumulated [12,13]. The existence of optimal controls for dynamic systems on time scales was considered in [14,15]. Subsequently, the maximum principle and dynamic programming in the time-scale setting were studied [16,17] for deterministic dynamic systems. In [18,19], some results were obtained for deterministic linear quadratic control problems on time scales. Recently, in Poulsen's Ph.D. thesis [20,21], the authors developed an interesting theory of dynamic equations on time scales, in which the time scale is a discrete and stochastic one depending on a sequence of i.i.d. positive random variables, but the associated dynamic equation does not include stochastic terms; they also studied the associated control and stability problems.
In this paper, we are interested in the stochastic linear quadratic control problem on time scales (SΔLQ for short). Much as in the continuous and discrete cases, we can obtain the associated Riccati equations on time scales (see [22,23] for the continuous and discrete cases). Meanwhile, we discuss the existence and uniqueness of the solution to the Riccati equations on time scales (RΔE for short). The difference from [24] is that our control system does not contain the expectation term but does include an inhomogeneous term. The organization of this paper is as follows. In Section 2, we introduce some preliminaries about time scale theory and the SΔLQ problem. We study the stochastic linear quadratic control problem on time scales and the associated Riccati equations on time scales in Section 3. Finally, an example is given.

Preliminaries
A time scale T is a nonempty closed subset of the real numbers R, and we denote [0, T]_T = [0, T] ∩ T. In this paper, we fix T ∈ T. The forward jump operator σ is defined by σ(t) = inf{s ∈ T: s > t}, supplemented by inf ∅ ≔ sup T, where ∅ denotes the empty set. If σ(t) = t and t < sup T, the point t is called right-dense, while if σ(t) > t, the point t is called right-scattered. For a function f, we write f^σ(t) = f(σ(t)) for the composition of the functions f and σ. The backward jump operator ρ is defined by ρ(t) = sup{s ∈ T: s < t}, supplemented by sup ∅ ≔ inf T. If ρ(t) = t and t > inf T, the point t is called left-dense, while if ρ(t) < t, the point t is called left-scattered. Moreover, a point is called isolated if it is both left-scattered and right-scattered. The graininess function μ is defined by μ(t) = σ(t) − t. We now present some basic concepts and properties of time scales (see [6,7]); below, limits are taken along sequences t_n ∈ T such that t_n ⟶ t_0 as n ⟶ ∞.
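To make the jump operators concrete, here is a small illustrative sketch (our own, not part of the paper) that evaluates σ, ρ, and μ on a finite grid approximating the time scale T = [0, 1/4] ∪ {1/2} ∪ [3/4, 1] used in the example section; the grid, step size, and function names are our own choices.

```python
# Finite-grid sketch of the jump operators. The grid approximates
# T = [0, 1/4] ∪ {1/2} ∪ [3/4, 1] from the example section.

def make_time_scale(step=1/16):
    left = [i * step for i in range(int(1 / 4 / step) + 1)]           # ≈ [0, 1/4]
    right = [3 / 4 + i * step for i in range(int(1 / 4 / step) + 1)]  # ≈ [3/4, 1]
    return sorted(set(left + [1 / 2] + right))

def sigma(ts, t):
    """Forward jump sigma(t) = inf{s in T : s > t}, with inf(empty) = sup T."""
    later = [s for s in ts if s > t]
    return min(later) if later else max(ts)

def rho(ts, t):
    """Backward jump rho(t) = sup{s in T : s < t}, with sup(empty) = inf T."""
    earlier = [s for s in ts if s < t]
    return max(earlier) if earlier else min(ts)

def mu(ts, t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(ts, t) - t

ts = make_time_scale()
print(sigma(ts, 0.25), mu(ts, 0.25))  # 0.5 0.25 -> t = 1/4 is right-scattered
print(rho(ts, 0.5), mu(ts, 0.5))      # 0.25 0.25 -> t = 1/2 is isolated
```

Note that on the true (uncountable) time scale the points inside [0, 1/4] are right-dense with μ = 0; the finite grid can only approximate this.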
Define the set T^κ as follows: if T has a left-scattered maximum m, then T^κ = T∖{m}; otherwise, T^κ = T. Definition 3. Let f: T ⟶ R be a function and t ∈ T^κ. The Δ-derivative of f at t is the number f^Δ(t) (provided it exists) such that, for all ε > 0, there exists a neighborhood U of t with |f^σ(t) − f(s) − f^Δ(t)(σ(t) − s)| ≤ ε|σ(t) − s| for all s ∈ U.
If the functions f and g are differentiable at t, then the product fg is also differentiable at t, and the product rule is given by (fg)^Δ(t) = f^Δ(t)g(t) + f^σ(t)g^Δ(t) = f(t)g^Δ(t) + f^Δ(t)g^σ(t). Proposition 1. If a function f is right-dense continuous, then f has an antiderivative F.

Definition 4.
A function p is said to be regressive if 1 + μ(t)p(t) ≠ 0 for all t ∈ T. The set of all regressive and right-dense continuous functions p: T ⟶ R is denoted by R.
For an n × n matrix-valued function A, if I + μ(t)A(t) is invertible for all t ∈ T, we say that A is regressive. As in the scalar case, a regressive and right-dense continuous matrix-valued function A is denoted by A ∈ R. Notation: the following notation will be used. We now introduce two known results, which will be used in what follows.
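Before turning to these results, here is a quick numerical illustration of matrix regressivity (our own sketch, with made-up data): checking that I + μ(t)A(t) is invertible at each point of a finite grid.

```python
import numpy as np

def is_regressive(A_of_t, mu_of_t, grid, tol=1e-12):
    """Check det(I + mu(t) A(t)) != 0 at every grid point."""
    n = A_of_t(grid[0]).shape[0]
    return all(abs(np.linalg.det(np.eye(n) + mu_of_t(t) * A_of_t(t))) > tol
               for t in grid)

# On T = Z (graininess mu ≡ 1), A = -I is the classic non-regressive example,
# since I + 1 * (-I) = 0; scaling A by 1/2 restores regressivity.
mu = lambda t: 1.0
print(is_regressive(lambda t: -np.eye(2), mu, [0, 1, 2]))        # False
print(is_regressive(lambda t: -0.5 * np.eye(2), mu, [0, 1, 2]))  # True
```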
Lemma 2 (Schur's lemma; see [25]). Let U and V be symmetric matrices and Y be given, with appropriate dimensions. The following conditions are equivalent:

In this paper, we adopt the stochastic integral defined by Bohner et al. [10]. Let (Ω, F, {F_t}_{t∈[0,T]_T}, P) be a complete probability space with an increasing and continuous filtration. A Brownian motion indexed by a time scale T was defined by Grow and Sanyal [9]. Although Brownian motion on time scales is very similar to that in continuous time, there are also some differences between them. For example, the quadratic variation of a Brownian motion on time scales (see [26]) is still an increasing process, but it is no longer deterministic.
Discrete Dynamics in Nature and Society

Similar to Definition 4, we give the definition of stochastic regressivity.
Definition 5 (see [27]). A function q is said to be stochastically regressive if 1 + μ(t)q(t) ≠ 0 almost surely for all t ∈ T. Now, we give the definition of the stochastic Δ-integral and its properties.
Definition 6 (see [10]). The random process X(t) is stochastically Δ-integrable if it can be written as in (9), where the Brownian motion on the right-hand side of (9) is indexed by continuous time.
We also have the following properties.
where the integral of X with respect to the quadratic variation 〈W〉_t of the Brownian motion is defined as a Stieltjes integral. In addition, we have the following result about stochastic Δ-differential equations, which is easy to prove.
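The non-deterministic quadratic variation mentioned above can be illustrated with a small simulation sketch (our own construction, assuming the Brownian motion on T is the restriction of a continuous Brownian motion, following Grow and Sanyal's construction cited above): on a continuous piece of T the quadratic variation accumulates length deterministically, as in classical Itô calculus, while each right-scattered gap contributes the random square of the Brownian increment.

```python
import random

def quadratic_variation(points, dense, seed=0):
    """Sketch of <W>_T over consecutive points of a time scale. dense[i] is
    True if the step (points[i], points[i+1]) lies in a continuous piece of T
    (contributing its length), and False if it is a gap at a right-scattered
    point (contributing the random square of the Brownian increment)."""
    rng = random.Random(seed)
    qv = 0.0
    for (t0, t1), d in zip(zip(points[:-1], points[1:]), dense):
        if d:
            qv += t1 - t0                        # deterministic part
        else:
            dW = rng.gauss(0.0, (t1 - t0) ** 0.5)
            qv += dW * dW                        # random part
    return qv

# T = [0, 1/4] ∪ {1/2} ∪ [3/4, 1]: two continuous pieces and two gaps
pts = [0.0, 0.25, 0.5, 0.75, 1.0]
dense = [True, False, False, True]
print(quadratic_variation(pts, dense, seed=1))
print(quadratic_variation(pts, dense, seed=2))  # differs: <W> is not deterministic
```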
Lemma 3. Let A ∈ R, so that I + μA is invertible.

Proof. For the continuous closed interval [t_1, t_2], we introduce the following equation; then ΔΨΦ = 0.

Finally, we introduce our SΔLQ problem. Consider the following stochastic linear quadratic control problem, where the control u ∈ L²_F([0, T]_T; R^m) and X satisfies the linear Δ-differential equation given below. Here, W = (W_1, . . ., W_d) is a d-dimensional standard Brownian motion on time scales, and the associated coefficients are defined accordingly.

Main Results and the Riccati Equations
By some simple calculations, it is not hard to obtain the following product rule for stochastic processes on time scales, which is very similar to that of Du and Dieu [8].
Lemma 4. For any two n-dimensional stochastic processes X_1 and X_2 with the dynamics below, we have the product formula, where the cross terms are as indicated. Remark 1. We can also obtain another form of the above product rule, where ΔtΔt = μ(t)Δt, ΔtΔW = ΔWΔt = μ(t)ΔW, and ΔWΔW = Δ〈W〉_t. When T = R, it is consistent with Itô's formula.
Remark 2. As mentioned before, since the quadratic variation of a process depends not only on the process itself but also on the structure of the time scale, the quadratic variation of a process becomes a little more complicated than the classical one. For example, the quadratic variation of a deterministic continuous process is no longer zero. Therefore, we can have different forms of the product rule on time scales; for example, product rule (6) has an equivalent form. Now, we introduce the following RΔE for our problem, along with an equation on time scales, where 0 is the zero vector. Clearly, equation (22) has a unique solution since it is a linear equation (see Bohner and Peterson [7]). RΔE (21) is very similar to the classical Riccati equation in continuous and discrete time (see [22,28]). The only difference is that RΔE involves the graininess function μ. When T = R_+ or T = Z_+, (21) degenerates into the classical equations.
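To illustrate the degenerate case T = Z_+ (graininess μ ≡ 1), the following sketch runs the classical discrete-time Riccati backward recursion for a deterministic LQ problem; the data A, B, Q, R, H are illustrative and not taken from the paper, and the stochastic terms of (21) are omitted.

```python
import numpy as np

def discrete_riccati(A, B, Q, R, H, N):
    """Backward recursion P_N = H,
    P_k = Q + A'P A - A'P B (R + B'P B)^{-1} B'P A, with P = P_{k+1}."""
    P = H
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return P

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); H = np.eye(2)
P = discrete_riccati(A, B, Q, R, H, 50)
print(P.shape)  # (2, 2); P is symmetric positive semidefinite
```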
A matrix-valued function P ∈ C¹([0, T]_T; S^n) is called a solution to RΔE if it satisfies (21). A function g ∈ C¹([0, T]_T; R^n) is a solution to (22) if it satisfies the Δ-differential equation. Here, we use the square completion technique to show that the solutions of equations (21) and (22) yield a state-feedback optimal control. Theorem 1. Suppose that P ∈ C¹([0, T]_T; S^n) and g ∈ C¹([0, T]_T; R^n) are solutions of (21) and (22), respectively; then, the SΔLQ problem (14)-(15) admits an optimal control u:

t)[(I + μ(t)A(t)) + L(t)]X(t)
and the optimal cost functional J is given by the formula that follows. Proof. Applying the product rule to X′(t)P(t)X(t) and X′(t)g(t), we can obtain that
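The square completion used in the proof above can be sketched in the simplest scalar one-step case (an illustration of ours, with data a, b, q, r, h satisfying r + hb² > 0, not the paper's notation): minimizing J(x, u) = qx² + ru² + h(ax + bu)² over u.

```latex
% Scalar completion of squares (illustrative)
\begin{aligned}
J(x,u) &= q x^{2} + r u^{2} + h (a x + b u)^{2} \\
       &= \bigl(r + h b^{2}\bigr)\Bigl(u + \frac{h a b}{r + h b^{2}}\, x\Bigr)^{2}
          + \Bigl(q + h a^{2} - \frac{(h a b)^{2}}{r + h b^{2}}\Bigr) x^{2}.
\end{aligned}
```

The minimizing control is thus the linear feedback u* = −(hab/(r + hb²))x, and the coefficient of x² is exactly one backward Riccati step; the proof above performs the analogous completion under the Δ-calculus product rule.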

Remark 3.
Obviously, if f ≡ 0, we only need to consider RΔE (21). Moreover, we can see from the above proof that the uniqueness of the solutions to (21) and (22) is not necessary for the establishment of (25) and (26).
It is worth pointing out that the state-feedback matrix relies not only on the solution of RΔE but also on the graininess function μ. Apart from this, (25) and (26) are very similar to the classical cases. Now, we discuss the existence and uniqueness of the solution to RΔE, applying a technique given in [29]. Theorem 2. Let P ∈ C¹([0, T]_T; S^n) be a solution of RΔE (21); then, P is unique.
Proof. Suppose that P̃ ∈ C¹([0, T]_T; S^n) is another solution of RΔE (21). We consider a new SΔLQ problem as follows: s.t.

ΔX(t) = (A(t)X(t) + B(t)u(t))Δt
According to the proof of Theorem 1, y′P(s)y and y′P̃(s)y are both minimum values of the cost functional J(s, y; u(·)). Therefore, we obtain that P(s) = P̃(s). Due to the arbitrariness of (s, y), it follows that P = P̃.
For the existence of the solution to RΔE (21), we first consider a special case, assumption (H1). In this case, we study a linear equation required in what follows.
Proof. Let P_0(t) ≡ H and define, for k ≥ 0,

P_{k+1}(t) = H + ∫_t^T [A′(s)P_k(s) + P_k(s)A(s) + μ(s)A′(s)P_k(s)A(s) + Σ_{j=1}^d C_j′(s)P_k(s)C_j(s) + Q(s)] Δs.
Because of the boundedness of A, C_j, H, and Q, there exists a constant M > 0 such that the iterates can be estimated accordingly. Consequently, where h_k denotes the generalized monomials defined in [30], the series Σ_k |P_{k+1}(t) − P_k(t)| is uniformly convergent. Moreover, P_k converges uniformly to a limit P, P is a solution of (32), and we obtain the existence of a solution to (32). If P̄ is another solution of (32), then, setting P̃ = P̄ − P and using some properties of matrix norms, there is a constant α such that the corresponding estimate holds. By Lemma 1, it is easy to obtain P̃ = 0. This implies the uniqueness of a solution to (32).
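The uniform convergence of the successive approximations can be mimicked numerically in a toy scalar model (our own sketch; the scalar equation, data, and Riemann-sum discretization are illustrative and are not the paper's display (32)): the sup-norm of the successive differences decays with the geometric-factorial rate that drives the Picard argument.

```python
import numpy as np

def picard_iterates(a=0.3, q=1.0, h=1.0, T=1.0, n=200, iters=30):
    """Successive approximations for the scalar backward linear equation
    P(t) = h + int_t^T (2*a*P(s) + q) ds, discretized by a Riemann sum."""
    dt = T / n
    P = np.full(n + 1, h)                     # P_0 ≡ h
    diffs = []                                # sup-norm of successive differences
    for _ in range(iters):
        integrand = 2 * a * P + q
        # backward cumulative Riemann sum: tail[i] ≈ int_{t_i}^{T} integrand ds
        tail = np.concatenate(([0.0], np.cumsum(integrand[::-1][:-1] * dt)))[::-1]
        P_new = h + tail
        diffs.append(float(np.max(np.abs(P_new - P))))
        P = P_new
    return P, diffs

P, diffs = picard_iterates()
print(diffs[0], diffs[5], diffs[10])  # rapid decay, as in the proof
```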
□

For linear equation (32), we also have the following property.

Proposition 3. Let Q and H be positive semidefinite symmetric matrices; then, the solution P of (32) is also a positive semidefinite symmetric matrix.

Lemma 5. Assume that (H1) holds; then, RΔE (21) is equivalent to equation (44).
Proof. By (45), we have the desired identity, and the proof is completed. Consequently, the existence of the solution to (44) is equivalent to the existence of the solution to (21) under assumption (H1).

Theorem 3. Assume that (H1) holds; then, RΔE (21) admits a solution.

Proof. Let P_{i+1} be the solution of the linear equation below, where P_0(t) = H. By Proposition 3, we can see that P_i ≥ 0.
We claim that P_i ≥ P_{i+1}. Set M_i = P_i − P_{i+1}; then, M_i satisfies the equation below, and an equivalent form of N(t) can also be written as follows. Noting that M_i(T) = 0, by Proposition 3 once again, we get M_i ≥ 0. Thus, {P_i}_{i≥0} is a decreasing sequence and has a limit P. Clearly, P is the solution to (44). Hence, P is also the solution to RΔE (21) by Lemma 5. Now, we prove the following sufficient and necessary condition for the existence of the solution to RΔE (21). □
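The monotone scheme in the proof, where each P_{i+1} solves a *linear* equation built from P_i and the iterates decrease to the Riccati solution, has a well-known scalar analogue: the classical Kleinman-Newton iteration for a continuous-time LQ problem. The sketch below uses illustrative data a, b, q, r (not the paper's) and an initial stabilizing gain.

```python
# Scalar Kleinman-Newton iteration for dx = (a x + b u) dt with running cost
# q x^2 + r u^2; the scalar ARE is 2 a p - b^2 p^2 / r + q = 0.

def kleinman(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0, iters=20):
    ps = []
    k = k0                          # initial stabilizing gain: a - b*k0 < 0
    for _ in range(iters):
        # linear Lyapunov step: 2 (a - b k) p + q + r k^2 = 0
        p = (q + r * k * k) / (2.0 * (b * k - a))
        ps.append(p)
        k = b * p / r               # updated feedback gain
    return ps

ps = kleinman()
print(ps[0], ps[1], ps[-1])  # decreasing towards 1 + sqrt(2) ≈ 2.41421
```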

Theorem 4. RΔE (21) admits a solution if and only if there exists P which satisfies the following conditions:

Example
In this section, in order to illustrate our results, we give an example in which (1 + μ(t))P^σ(t) > 0.
We have P(t) = e_{1/(1+μ)}(1, t), where e_p(·, ·) denotes the time-scale exponential function. The optimal control is u(t) = −(1/(1 + μ(t)))x(t) (see Figure 1). We can see that the optimal strategy depends on the structure of the time scale T = [0, 1/4] ∪ {1/2} ∪ [3/4, 1]. Furthermore, the example implies that we should apply an impulsive control at t = 1/4 in the time-scale setting. This interesting effect is hidden in the classical continuous and discrete formulations. It reveals that we should use an impulsive control at the right-scattered points. In addition, the impulsive control relies on the time gap.
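The dependence of the feedback gain on the time gap can be sketched directly (our own illustration of the formula above): on the continuous pieces of T the graininess vanishes and the gain is −1, while at the right-scattered points the gain jumps, which is the impulsive-control effect just noted.

```python
# T = [0, 1/4] ∪ {1/2} ∪ [3/4, 1] from the example

def mu(t):
    """Graininess: the gaps (1/4, 1/2) and (1/2, 3/4) each have length 1/4."""
    return 0.25 if t in (0.25, 0.5) else 0.0

def gain(t):
    """State-feedback gain of u(t) = -(1/(1 + mu(t))) x(t)."""
    return -1.0 / (1.0 + mu(t))

for t in (0.1, 0.25, 0.5, 0.8):
    print(t, gain(t))  # -1.0 on the continuous pieces, -0.8 at scattered points
```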

Conclusion
The linear quadratic optimal control problems for stochastic differential equations on time scales have been studied. They unify and extend the optimal control problems in the continuous-time and discrete-time formulations. Through the Riccati equation on time scales, we obtain the corresponding optimal control in state-feedback form.
The optimal control problems established in this paper offer a practical scheme for directly tackling problems that mix continuous and discrete time.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.