An Optimal Control Problem Governed by Nonlinear First Order Dynamic Equation on Time Scales

In this paper, we are concerned with a class of optimal control problems governed by nonlinear first order dynamic equations on time scales. By imposing some suitable conditions on the related functions, for any given control policy, we first obtain the existence of a unique solution for the nonlinear controlled system. Then, we study the existence of an optimal solution for the optimal control problem.


Introduction
The theory of time scales was introduced by Hilger in [1] in order to unify discrete and continuous analysis. Some foundational definitions and results from the calculus on time scales will be recalled in Section 2. For more details, one can see [2][3][4].
In recent years, the calculus of variations and optimal control problems on time scales have attracted the attention of some researchers. For example, [5][6][7][8] discussed the calculus of variations on time scales and [9][10][11][12] studied some maximum principles on time scales, while [13][14][15][16] investigated the existence of optimal solutions or the necessary conditions of optimality for some optimal control problems on time scales.
In 2017, Guo [17] studied the projective synchronization problem of a class of chaotic systems in arbitrary dimensions. Firstly, a necessary and sufficient condition for the existence of the projective synchronization problem was presented. Secondly, an algorithm was proposed to obtain all the solutions of the projective synchronization problem. Thirdly, a simple and physically implementable controller was designed to ensure the realization of the projective synchronization. Finally, some numerical examples were provided to verify the effectiveness and the validity of the proposed results.
In 2020, Xu and Zhang [18] investigated general mean-field linear-quadratic (LQ) games of stochastic large-population systems, where the individual diffusion coefficient could depend on both the state and the control of the agent, and the control weight in the cost functional could be indefinite. The asymptotic suboptimality property of the decentralized strategies for the LQ games was derived through the consistency condition. A pricing problem was also studied, for which the decentralized suboptimal price was obtained.
Throughout this paper, we always assume that T is a time scale, T > 0 is fixed, 0, T ∈ T, and σ²(T) = σ(T). For each interval I of R, we denote I_T = I ∩ T.
Suppose that there is a flock of sheep in a pasture. We consider the changes in the number of sheep during a time interval [0, σ(T)]_T. It is well known that the supply of herbage, which influences the growth rate and reproductive ability of sheep, is one of the main ways to control the number of sheep. Now, we define some related functions as follows:

x(t) is the number of sheep at time t;
r(t) is the number of births per unit of time at time t;
p(t) is the number of sales per sheep per unit of time at time t;
u(t) is the amount of herbage supplied at time t;
q(t) is the number of sheep converted per unit of herbage supplied per unit of time at time t.

Let U_ad be the admissible control set. Then, for any given control policy u ∈ U_ad, it is easy to see that the changes in the number of sheep can be described by a linear dynamic equation (1). At the same time, in order to keep steady development, we may assume that the number of sheep at the beginning is equal to that at the end, that is, x(0) = x(σ(T)) (2). Suppose that x_u is the solution of the controlled systems (1) and (2) corresponding to the control policy u and x_d is the desired value. Recently, the authors of [19] considered the optimal control problem (P_0): find a u_0 ∈ U_ad such that J(u_0) = inf_{u∈U_ad} J(u), where J is the quadratic cost functional.

Motivated greatly by the abovementioned works, in this paper, we suppose that the controlled system is governed by a more general nonlinear periodic boundary value problem (5). First, by imposing some suitable conditions on p, f, and g, for any given control policy u ∈ U_ad, we obtain the existence of a unique solution x_u of the nonlinear controlled system (5). Then, we study the optimal control problem (P): find a u_0 ∈ U_ad such that J(u_0) = inf_{u∈U_ad} J(u), where J is the cost functional, x_d is the desired value, and h: R → [0, ∞) is continuous.
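Although the display equations above were lost in extraction, the verbal description of the sheep model can be sketched numerically. The following Python sketch is an illustration only, not the paper's equation (1): the sign conventions (births and herbage conversion add sheep, sales remove them) and the uniform time scale T = {0, h, 2h, ...}, on which the delta derivative reduces to the forward difference, are our assumptions.

```python
# Hypothetical discretization of the sheep-population model on the
# uniform time scale T = {0, h, 2h, ...}: the delta derivative becomes
# the forward difference (x(t+h) - x(t)) / h, so an equation of the
# assumed form x^Delta(t) = r(t) - p(t) x(t) + q(t) u(t) turns into an
# explicit recursion.

def simulate_flock(x0, r, p, q, u, h, steps):
    """Advance the flock size from x(0) = x0 over `steps` grid points."""
    x = x0
    history = [x]
    for k in range(steps):
        t = k * h
        x = x + h * (r(t) - p(t) * x + q(t) * u(t))
        history.append(x)
    return history

# Constant birth rate, sale rate, conversion rate, and herbage supply;
# the flock relaxes toward the equilibrium r + q*u = p*x, i.e. x = 55.
traj = simulate_flock(x0=100.0, r=lambda t: 5.0, p=lambda t: 0.1,
                      q=lambda t: 0.5, u=lambda t: 1.0, h=1.0, steps=50)
```

Varying `u` over time is exactly the control lever of problem (P_0): the herbage supply shifts the equilibrium flock size toward the desired value x_d.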

Preliminaries
In this section, we will provide some foundational definitions and results from the calculus on time scales.

Definition 1.
We define the forward jump operator σ: T → T by σ(t) = inf{s ∈ T: s > t}, while the backward jump operator ρ: T → T is defined by ρ(t) = sup{s ∈ T: s < t}. In this definition, we put inf ∅ = sup T and sup ∅ = inf T, where ∅ denotes the empty set. The graininess function μ: T → [0, ∞) is defined by μ(t) = σ(t) − t. If σ(t) > t, we say that t is right-scattered, while if ρ(t) < t, we say that t is left-scattered. Also, if t < sup T and σ(t) = t, then t is called right-dense, and if t > inf T and ρ(t) = t, then t is called left-dense. If T has a left-scattered maximum m, then we define T^k = T − {m}; otherwise, T^k = T.

Definition 2. Assume f: T → R is a function and let t ∈ T^k. Then, we define f^Δ(t) to be the number (provided it exists) with the property that, given any ε > 0, there is a neighborhood U of t such that |[f(σ(t)) − f(s)] − f^Δ(t)[σ(t) − s]| ≤ ε|σ(t) − s| for all s ∈ U. We call f^Δ(t) the delta derivative of f at t.
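On a purely discrete time scale, the objects of Definitions 1 and 2 are directly computable: at a right-scattered point the delta derivative is just a divided difference. A minimal sketch (the function names and the finite time scale below are ours, for illustration):

```python
# Forward jump operator, graininess, and delta derivative on a finite
# time scale represented as a sorted list of points.

def sigma(ts, t):
    """Forward jump: inf of points strictly above t (sup T if none)."""
    above = [s for s in ts if s > t]
    return min(above) if above else max(ts)

def mu(ts, t):
    """Graininess function mu(t) = sigma(t) - t."""
    return sigma(ts, t) - t

def delta_derivative(ts, f, t):
    """Delta derivative at a right-scattered point t: a divided difference."""
    m = mu(ts, t)
    if m == 0:
        raise ValueError("t is right-dense; a limit would be needed")
    return (f(sigma(ts, t)) - f(t)) / m

# An isolated time scale with varying gaps.
T = [0.0, 0.5, 1.0, 2.0, 4.0]
assert sigma(T, 1.0) == 2.0
assert delta_derivative(T, lambda t: t * t, 1.0) == 3.0  # (4 - 1) / 1
```

Note how the delta derivative of t² at t = 1 is 3, not the classical 2: the graininess μ(1) = 1 enters the difference quotient, which is precisely what the time-scale calculus unifies.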
Definition 3. A function f: T ⟶ R is called rd-continuous provided it is continuous at right-dense points in T and its left-sided limits exist (finite) at left-dense points in T.
Definition 4. We say that a function p: T → R is regressive provided 1 + μ(t)p(t) ≠ 0, for all t ∈ T^k, holds. The set of all regressive and rd-continuous functions will be denoted by R. We define the set of positively regressive functions R^+ as the set consisting of those p ∈ R satisfying 1 + μ(t)p(t) > 0, for all t ∈ T.
Lemma 1. Let p ∈ R, t_0 ∈ T, and let e_p(·, t_0) be the exponential function on T.

In the remainder of this paper, we always assume that the underlying Banach space is C([0, σ(T)]_T, R). Then, it is easy to see that M > 0.
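On a purely isolated time scale, the exponential function e_p(·, t_0) of Lemma 1 reduces to a finite product of the factors 1 + μ(s)p(s), each of which is nonzero by regressivity. The following sketch (the helper name `exp_ts` is ours) computes it:

```python
# Time-scale exponential e_p(t, t0) on a finite, purely isolated time
# scale: the product of (1 + mu(s) * p(s)) over grid points s in [t0, t).

def exp_ts(ts, p, t0, t):
    """Evaluate e_p(t, t0) on the sorted isolated time scale ts."""
    value = 1.0
    for i in range(len(ts) - 1):
        s, s_next = ts[i], ts[i + 1]
        if t0 <= s < t:
            value *= 1.0 + (s_next - s) * p(s)  # graininess mu(s) = s_next - s
    return value

# Sanity check: on T = Z with constant p, e_p(t, 0) = (1 + p)^t,
# recovering the classical discrete exponential.
T = list(range(0, 6))
assert exp_ts(T, lambda s: 0.5, 0, 4) == 1.5 ** 4
```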
Lemma 4 (see [20]). For any y ∈ C([0, T]_T, R), the following first order linear periodic boundary value problem has a unique solution.
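The display stating the boundary value problem of Lemma 4 was lost, so the following sketch assumes a common form of such problems, x^Δ(t) + p(t)x(t) = y(t) with x(0) = x(σ(T)); the form, the discrete time scale T = {0, 1, ..., N}, and the function name are our illustrative assumptions. On that time scale, the equation is a one-step recursion, and periodicity determines the initial value uniquely whenever the propagated multiplier differs from 1, mirroring the uniqueness claim of the lemma.

```python
# Solve an (assumed) linear periodic BVP on T = {0, 1, ..., N}:
#   x(t+1) = (1 - p(t)) x(t) + y(t),   x(0) = x(N).
# Propagating the recursion gives x(N) = A x(0) + b; periodicity then
# yields x(0) = b / (1 - A), provided A != 1.

def solve_periodic_bvp(p, y, N):
    """Return the grid values x(0), ..., x(N) of the periodic solution."""
    A, b = 1.0, 0.0                      # invariant: x(t) = A * x(0) + b
    for t in range(N):
        A *= 1.0 - p[t]
        b = (1.0 - p[t]) * b + y[t]
    if A == 1.0:
        raise ValueError("no unique periodic solution (A = 1)")
    x = [b / (1.0 - A)]
    for t in range(N):
        x.append((1.0 - p[t]) * x[-1] + y[t])
    return x

p = [0.5, 0.5, 0.5]
y = [1.0, 2.0, 3.0]
x = solve_periodic_bvp(p, y, 3)   # periodic: x[0] == x[3]
```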

Main Results
First, we list the following two conditions which we shall use in the sequel.
From now on, we always suppose that the control space is C([0, T]_T, R) and the admissible control set U_ad is a compact subset of C([0, T]_T, R).

Lemma 5.
Assume that conditions (A_1) and (A_2) are satisfied. Then, for any given control policy u ∈ U_ad, the nonlinear controlled system (5) has a unique solution x_u.

Proof. For any fixed u ∈ U_ad, we define an operator

Let x, y ∈ C([0, σ(T)]_T, R). Then, in view of Lemmas 1 and 2 and (A_1), we have
which, together with 0 < L < M/(2(1 + M)²σ(T)), shows that Φ_u is a contraction mapping. Therefore, it follows from the Banach contraction principle that Φ_u has a unique fixed point x_u ∈ C([0, σ(T)]_T, R). This indicates that the nonlinear controlled system (5) has a unique solution x_u. □

Theorem 1. Assume that conditions (A_1) and (A_2) are satisfied. Then the optimal control problem (P) has an optimal solution u_0 ∈ U_ad.

Proof. First, it follows from Lemma 5 that, for any given control policy u ∈ U_ad, the nonlinear controlled system (5) has a unique solution x_u. Since J(u) ≥ 0 for all u ∈ U_ad, it is obvious that inf_{u∈U_ad} J(u) exists. Thus, by the definition of infimum, we know that there exists a sequence {u_n}_{n=1}^∞ ⊂ U_ad such that

lim_{n→∞} J(u_n) = inf_{u∈U_ad} J(u). (30)

On the one hand, since U_ad is a compact subset of C([0, T]_T, R) and {u_n}_{n=1}^∞ ⊂ U_ad, {u_n}_{n=1}^∞ has a convergent subsequence in U_ad. Without loss of generality, we may assume that {u_n}_{n=1}^∞ itself converges in U_ad, that is, there exists u_0 ∈ U_ad such that

lim_{n→∞} u_n = u_0. (31)

On the other hand, in view of Lemmas 1 and 2, (A_1), and (A_2), for any n = 1, 2, . . ., we have a uniform estimate on x_{u_n}; so, for any n = 1, 2, . . ., we obtain a bound which together with (31) implies that

lim_{n→∞} x_{u_n} = x_{u_0}. (34)

Thus, in view of Lemma 3, (31), and (34), we obtain lim_{n→∞} J(u_n) = J(u_0), which together with (30) indicates that J(u_0) = inf_{u∈U_ad} J(u). Therefore, J(u_0) ≤ J(u) for all u ∈ U_ad. This shows that u_0 is an optimal solution of the optimal control problem (P). □

Finally, we give an example to illustrate the main results. We suppose that the controlled system is governed by the following nonlinear periodic boundary value problem (38). It is easy to verify that condition (A_2) is satisfied. Moreover, if we choose L = 9πD/2, then 0 < L < M/(2(1 + M)²σ(T)), and it follows from the Lagrange mean value theorem that condition (A_1) is fulfilled. For any given constant, we define the admissible control set U_ad. By Lemma 5, we know that, for any given control policy u ∈ U_ad, the nonlinear controlled system (37) has a unique solution x_u. Now, we consider the optimal control problem (P*): find a u_0 ∈ U_ad such that J(u_0) = inf_{u∈U_ad} J(u), where J is the corresponding cost functional and x_d is the desired value.
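The successive-approximation argument behind Lemma 5 can be sketched numerically. The sketch below iterates a generic contraction on R² rather than the paper's operator Φ_u on C([0, σ(T)]_T, R); the helper name and the example map are ours, chosen only to illustrate the Banach contraction principle.

```python
# Banach fixed-point iteration: for a contraction phi (Lipschitz
# constant q < 1), the iterates x_{k+1} = phi(x_k) converge to the
# unique fixed point from any starting guess.

def banach_iterate(phi, x0, tol=1e-12, max_iter=1000):
    """Iterate x <- phi(x) until successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if max(abs(a - b) for a, b in zip(x_next, x)) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# A contraction on R^2 with Lipschitz constant 1/2; its unique fixed
# point solves x = 0.5 x + 1 and y = 0.5 y + 2, i.e. (2, 4).
phi = lambda x: (0.5 * x[0] + 1.0, 0.5 * x[1] + 2.0)
fixed = banach_iterate(phi, (0.0, 0.0))
```

In the proof of Lemma 5, the contraction constant is controlled by the Lipschitz bound L through the condition 0 < L < M/(2(1 + M)²σ(T)), which plays the role of q < 1 here.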

Conclusions
In this paper, we consider a class of optimal control problems governed by nonlinear first order dynamic equations on time scales. First, by imposing some suitable conditions on the related functions and applying the Banach contraction principle, for any given control policy, we obtain the existence of a unique solution for the nonlinear controlled system. Next, we prove that the optimal control problem has an optimal solution in the admissible control set. Finally, an example is given to illustrate the main result of this paper.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.