Finite Time Inverse Optimal Stabilization for Stochastic Nonlinear Systems



Introduction
In many engineering fields, it is desirable that the trajectories of a dynamical system converge to an equilibrium in finite time. One notices immediately that a differential equation exhibiting finite time convergence cannot have a Lipschitz right-hand side at the origin: since all solutions reach zero in finite time, solutions through zero are nonunique in backwards time. Haimo [1] pointed out that this violates the uniqueness condition for solutions of Lipschitz differential equations. Finite time stability was studied in [1-3]. Recently, finite time stability has been further extended to switched systems by Orlov [4], time-delay systems by Moulay et al. [5], and impulsive dynamical systems by Nersesov et al. [6]. The problem of finite time stabilization has been studied by Bhat and Bernstein [7] and Hong et al. [8]. Finite time stabilization techniques have been applied to tracking control of multiagent systems by Li et al. [9] and to attitude tracking control of spacecraft by Du et al. [10, 11]. Moulay and Perruquetti [12] studied finite time stabilization of a class of continuous systems using control Lyapunov functions (CLFs). The CLF was introduced by Artstein [13] and Sontag [14] and has made a tremendous impact on stabilization theory. In particular, Sontag's universal formula [15] has played an important role in control theory. Florchinger [16] proved that the feedback control law defined in [15] can globally asymptotically stabilize stochastic nonlinear systems. This result was extended to the case where the drift as well as the controlled part is corrupted by noise in the work of Chabour and Oumoun [17].
After the success of finite time stability and stabilization theory for deterministic systems, extending it to stochastic systems naturally became an important research area. Chen and Jiao [18-20] presented a new concept of finite time stability for stochastic nonlinear systems and proved a theorem concerning finite time stability. However, to the authors' knowledge, no work on finite time inverse optimal stabilization for stochastic systems has been done to date.
In this paper, for general stochastic systems affine in the control and noise inputs, a concept of the stochastic finite time control Lyapunov function (SFT-CLF) is given. Next, a sufficient condition is developed for finite time stabilization in probability, and a control law is designed. After establishing finite time stabilization of stochastic systems, an important problem is how to further design a stabilizing controller that is also optimal with respect to a meaningful cost functional, that is, the inverse optimal control problem. In this paper, we consider the design of a finite time inverse optimal controller, extending the inverse optimality result of Freeman and Kokotovic [21] to finite time inverse optimal controller design for stochastic nonlinear systems. Finally, the effectiveness of the proposed design technique is illustrated by an example.
For any given twice continuously differentiable function V(x), the infinitesimal generator L associated with stochastic system (1) is defined in the usual way. In this paper, 𝒦 denotes the set of all continuous, strictly increasing functions from R₊ to R₊ that vanish at zero.
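For reference, if system (1) is affine in the control and noise inputs and is written as dx = (f(x) + g₁(x)u) dt + g₂(x) dw (an assumed form, since the display for system (1) does not appear above), then for a twice continuously differentiable V the generator takes the standard Itô form:

```latex
\mathcal{L}V(x) \;=\; \frac{\partial V}{\partial x}\bigl(f(x) + g_1(x)u\bigr)
\;+\; \frac{1}{2}\,\operatorname{Tr}\!\left\{ g_2^{T}(x)\,
\frac{\partial^{2} V}{\partial x^{2}}\, g_2(x) \right\}.
```

The trace term is the Itô correction; it is what distinguishes LV from the deterministic Lyapunov derivative and is why V is required to be twice differentiable.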
Definition 2. For stochastic system (3), the equilibrium x = 0 is said to be globally finite time stable in probability if the following conditions hold: (1) the equilibrium x = 0 is globally stable in probability; that is, for every ε > 0 there exists a class 𝒦 function γ(·) such that P{‖x(t; x₀)‖ ≤ γ(‖x₀‖), t ≥ 0} ≥ 1 − ε for any initial value x₀; (2) for any initial value x₀, the trajectory reaches the equilibrium in finite time almost surely; the corresponding settling time function is denoted T₀(x₀, ω).
The stochastic finite time control Lyapunov function is defined as follows.
Definition 5. A positive definite, twice continuously differentiable, and radially unbounded function V : Rⁿ → R₊ is a stochastic finite time control Lyapunov function (SFT-CLF) of system (1) if there exist real numbers c > 0 and 0 < α < 1 such that, for all x ≠ 0, the infimum of LV(x) over all controls u satisfies inf_u LV(x) ≤ −c(V(x))^α. V(x) is said to satisfy the small control property with respect to system (1) if for each ε > 0 there is a δ > 0 such that, for every x with 0 < ‖x‖ < δ, this inequality can be achieved by some control u with ‖u‖ < ε.
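The role of the exponent α in Definition 5 can be seen from the deterministic comparison equation v̇ = −c v^α; this is a standard computation in the finite time literature rather than a display from this paper:

```latex
\dot v = -c\,v^{\alpha},\quad v(0)=v_0>0
\;\Longrightarrow\;
v(t) = \bigl(v_0^{\,1-\alpha} - c(1-\alpha)\,t\bigr)^{\frac{1}{1-\alpha}},
\qquad 0 \le t \le \frac{v_0^{\,1-\alpha}}{c(1-\alpha)},
```

and v(t) = 0 afterwards. Because 0 < α < 1, the hitting time v₀^{1−α}/(c(1−α)) is finite; settling-time estimates of this shape are presumably what Lemma 3 and the theorems below provide for the stochastic settling time T₀(x₀, ω).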

Main Results
Consider system (1). An explicit feedback control law is designed such that the equilibrium x = 0 of the closed-loop system is globally finite time stable in probability. A control law achieving finite time inverse optimal stabilization is then derived.
We will prove that the control law u = u(x) given by (8) is differentiable away from the origin and continuous at the origin. To this end, consider an open subset Ω of R² as follows: by the implicit function theorem, the function defined on Ω is differentiable there. By (9), for any x ∈ Rⁿ \ {0}, we have (a(x), ‖B(x)‖²) ∈ Ω, so the control law (8) is differentiable away from the origin. If (6) holds, the control law (8) satisfies the small control property by [23] and is therefore continuous at the origin by [15].
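The structure of a Sontag-type universal formula, which (8) apparently follows, can be sketched numerically. The helper below is a hypothetical scalar illustration, not the paper's exact formula (8): the name sontag_ft and the grouping of terms into a(x) and b(x) are assumptions, with a(x) collecting the drift term, the Itô correction, and the finite-time term c·V^α, and b(x) = (∂V/∂x)·g₁(x).

```python
import math

def sontag_ft(a: float, b: float) -> float:
    """Sontag-type universal formula for a single-input system.

    a : assumed grouping (dV/dx)*f + (1/2)Tr{g2' Vxx g2} + c*V**alpha
    b : (dV/dx)*g1
    Returns u such that a + b*u = -sqrt(a**2 + b**4) < 0 whenever b != 0.
    """
    if b == 0.0:
        return 0.0  # the SFT-CLF condition requires a < 0 here, so no control is needed
    return -(a + math.sqrt(a * a + b ** 4)) / b  # scalar form of -((a + sqrt)/b**2)*b

# Example: V = x^2/2, f(x) = x, g1 = 1, c = 1, alpha = 1/2, no noise, at x = 1
x, c, alpha = 1.0, 1.0, 0.5
V = 0.5 * x * x
a = x * x + c * V ** alpha   # (dV/dx)*f + c*V^alpha
b = x * 1.0                  # (dV/dx)*g1
u = sontag_ft(a, b)
```

Under this grouping, a + b·u = −√(a² + b⁴) < 0 whenever b ≠ 0, i.e., the SFT-CLF decrease condition LV ≤ −cV^α is met with margin.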
In addition, under the control law (8), the closed-loop system with equilibrium x = 0 has a unique global solution in forward time. By Lemma 3, the closed-loop system (1), (8) is globally finite time stable in probability, and the settling time function T₀(x₀, ω) satisfies the estimate of Lemma 3.

Finite Time Inverse Optimal Stabilization in Probability.
In this subsection, we consider finite time inverse optimal stabilization in probability. That is, a feedback control law u(x) for system (1) is constructed such that the following properties hold: (i) the closed-loop system is finite time stable in probability at the equilibrium x = 0; (ii) u(x) minimizes the cost functional (14), where l(x) ≥ 0, R(x) > 0 for all x, T₀(x₀, ω) is the settling-time function, and x₀ ∈ Rⁿ is an initial value. In the inverse approach, a finite time stabilizing feedback law u(x) is designed first, and the task is then to find l(x) ≥ 0 and R(x) > 0 such that u(x) minimizes (14). The problem is inverse because the functions l(x) and R(x) are determined a posteriori by the stabilizing feedback law, rather than chosen a priori by the designer.
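Since the display for the cost functional (14) does not appear above, the following reconstruction, modeled on the deterministic inverse-optimal literature, is an assumption rather than the paper's exact expression:

```latex
J(u) \;=\; \mathbb{E}\left[\,\int_{0}^{T_0(x_0,\omega)}
\Bigl( l\bigl(x(t)\bigr) + u^{T}(t)\,R\bigl(x(t)\bigr)\,u(t) \Bigr)\,dt \right],
\qquad l(x)\ge 0,\; R(x)>0,
```

where the expectation is taken over the sample paths and the integral runs up to the random settling time.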
where c > 0 and a(x), B(x) are defined as in (7). In addition, assume that the control law (15) is such that the closed-loop system has a unique global solution in forward time for all initial conditions. Then the control law (15) solves the problem of finite time inverse optimal stabilization in probability for system (1) by minimizing the cost functional (14) with l(x) and R(x) as specified, and the settling time function T₀(x₀, ω) satisfies the stated estimate. Proof. It is easy to prove that R(x) > 0. Next, we prove that l(x) ≥ 0.

Corollary 8. Suppose that the conditions of Theorem 6 hold. Then the control law (8) achieves finite time inverse optimal stabilization in probability for system (1) by minimizing the cost functional (14), with c > 0 and the functions given as in (7), and the settling time function T₀(x₀, ω) satisfies the corresponding estimate. Proof. Using the same arguments as in the proof of Theorem 7, the claim follows.

Simulation Example
Some designs of finite time stabilizing control laws employ cancellation and do not have satisfactory stability margins, let alone optimality properties. The inverse optimal approach is a constructive alternative to such designs that achieves desired stability margins. We illustrate this issue with an example in this section.
Example 1. Consider the following first-order stochastic nonlinear system: One possible finite time stabilization design is to let u cancel the nonlinearity x^{2/3} and add a finite time stabilizing term. This is accomplished with the control law u1 = −x^{2/3} − x^{1/3}, which results in what appears to be a desirable closed-loop system: It is easy to verify that the equilibrium x = 0 of system (33) is globally finite time stable in probability. However, because of the cancellation, this feedback control law does not have any stability margin: with a slightly perturbed feedback control law (1 + ε)u1(x), the closed-loop system has solutions which escape to infinity in finite time for any ε ≠ 0.
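The cancellation design can be exercised numerically. Because the display for system (32) is not reproduced here, the sketch below assumes the illustrative scalar form dx = (|x|^{2/3} + u) dt + σx dw, where the |x|^{2/3} drift matches the term cancelled by u1 and the noise channel σx is a guess; the names spow and simulate are likewise hypothetical. The fragility under a perturbed gain (1 + ε)u1 is not asserted numerically, since it depends on the exact model.

```python
import math
import random

def spow(x: float, p: float) -> float:
    """Signed power sign(x)*|x|^p, standard in finite time designs."""
    return math.copysign(abs(x) ** p, x)

def simulate(x0: float, sigma: float = 0.1, T: float = 5.0,
             dt: float = 1e-3, seed: int = 0) -> float:
    """Euler-Maruyama run of the assumed model dx = (|x|^(2/3) + u) dt + sigma*x dw
    under the cancellation law u1 = -|x|^(2/3) - spow(x, 1/3)."""
    rng = random.Random(seed)
    x = x0
    sqdt = math.sqrt(dt)
    for _ in range(int(round(T / dt))):
        u1 = -abs(x) ** (2.0 / 3.0) - spow(x, 1.0 / 3.0)
        drift = abs(x) ** (2.0 / 3.0) + u1  # closed loop reduces to -spow(x, 1/3)
        x += drift * dt + sigma * x * sqdt * rng.gauss(0.0, 1.0)
    return x

x_final = simulate(1.0)
```

With the nominal gain, the closed-loop drift reduces to −sign(x)|x|^{1/3}, and the state settles near zero well before T = 5, consistent with finite time convergence of the unperturbed design.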
One can deduce that V(x) = (1/2)x² is an SFT-CLF for system (32). By Theorem 7, we obtain the control law (36). It can be verified that the closed-loop system (32), (36) is globally finite time stable in probability. Moreover, the control law (36) has two desirable properties.
(i) For x < 0, it recognizes the beneficial effect of the nonlinearity x^{2/3}, which enhances the negativity of LV.
(ii) Instead of cancelling the destabilizing term x^{2/3} for x > 0, the inverse optimal control (36) dominates it and, by doing so, achieves a stability margin. Figures 1, 2, 3, and 4 show, for the initial states x(0) = 1, 2, 3, 4, respectively, the state x(t) under the state feedback control law (36), the control input, and the response curve of the Wiener process w(t) in the closed-loop system. One can observe that stabilization is achieved.

Conclusion
In this paper, we have studied finite time inverse optimal stabilization for stochastic nonlinear systems. First, the concept of an SFT-CLF was presented. Second, a control law achieving finite time stabilization of the closed-loop system was obtained. Furthermore, a sufficient condition was developed for finite time inverse optimal stabilization in probability, and a control law was designed that renders the equilibrium of the closed-loop system finite time stable in probability while minimizing a meaningful cost functional. Finally, a simulation example showed the effectiveness of the method.