STABILITY OF DIFFERENCE ANALOGUE OF LINEAR MATHEMATICAL INVERTED PENDULUM

For the numerical investigation of functional differential equations it is very important to know whether difference analogues of the considered differential equations reliably preserve some general properties of these equations, in particular, the property of stability. This problem is considered here through the investigation of a difference analogue of the linear mathematical inverted pendulum.


Statement of the problem
For the numerical investigation of functional differential equations it is very important to know whether difference analogues of the considered differential equations reliably preserve general properties of these equations, in particular, the property of stability. This problem is considered here through the investigation of a difference analogue of the linear mathematical inverted pendulum.
The problem of stabilization of the mathematical inverted pendulum is very popular among researchers (see, for instance, [1, 2, 3, 5, 13, 14]). The linearized mathematical model of the controlled inverted pendulum can be described by the following linear differential equation of the second order

ẍ(t) − ax(t) = u(t),  a > 0, t ≥ 0.  (1.1)

The classical way of stabilization of system (1.1) uses the control u(t) = −b₁x(t) − b₂ẋ(t), where b₁ > a, b₂ > 0. But this type of control, which represents an instantaneous feedback, is quite difficult to realize, because some finite time is usually needed to measure the coordinates and velocities, to treat the results of the measurements, and to implement them in the control action. Unlike the classical way of stabilization, in which the stabilizing control is a linear combination of the state and the velocity of the pendulum, another way of stabilization was proposed in [4]. There it was supposed that only the trajectory of the pendulum can be observed, and the stabilizing control depends on the whole trajectory of the pendulum, that is,

u(t) = ∫₀^∞ x(t − τ) dK(τ),  (1.2)

where the kernel K(τ) is a function of bounded variation on [0, ∞) and the integral is understood in the Stieltjes sense. It means, in particular, that both distributed and discrete delays can be used, depending on the concrete choice of the kernel K(τ). The initial condition for the system (1.1), (1.2) has the form

x(s) = ϕ(s),  s ≤ 0,  (1.3)

where ϕ(s) is a given continuously differentiable function. Put

k₁ = ∫₀^∞ dK(τ),  k₂ = ∫₀^∞ τ dK(τ).  (1.4)

Theorem 1.2 (see [4]). Let inequalities (1.5) and (1.6) hold. Then the trivial solution of system (1.1)–(1.3) is asymptotically stable.
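The effect of the classical instantaneous feedback is easy to check numerically: with u(t) = −b₁x(t) − b₂ẋ(t), the closed loop becomes ẍ + b₂ẋ + (b₁ − a)x = 0, which is asymptotically stable whenever b₁ > a and b₂ > 0. The sketch below is only an illustration; the concrete values a = 1, b₁ = 3, b₂ = 1 are assumptions chosen to satisfy these inequalities, not values taken from the paper.

```python
def simulate_classical(a=1.0, b1=3.0, b2=1.0, x0=1.0, v0=0.0,
                       tau=0.001, t_end=30.0):
    """Explicit Euler scheme for the closed loop
        x''(t) = a*x(t) + u(t),  u(t) = -b1*x(t) - b2*x'(t),
    i.e. x'' + b2*x' + (b1 - a)*x = 0, which is asymptotically
    stable whenever b1 > a and b2 > 0."""
    x, v = x0, v0
    for _ in range(int(t_end / tau)):
        # simultaneous update: both right-hand sides use the old (x, v)
        x, v = x + tau * v, v + tau * (a * x - b1 * x - b2 * v)
    return x, v
```

With these assumed gains the state decays to zero; with b₁ ≤ a the upright position would remain unstable.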
It is also shown in [4] that inequalities (1.5) are necessary conditions for the asymptotic stability of the trivial solution of system (1.1)–(1.3), while inequality (1.6) is only a sufficient one.
Below, the mathematical model of the controlled inverted pendulum (1.1)–(1.3) is considered in a simple form (1.7), in which the kernel K(τ) is a step function, so that the control contains only two discrete delays h₁ and h₂ with coefficients b₁ and b₂. Here a > 0, b₁, b₂, h₁ > 0, h₂ > 0 are given arbitrary numbers. From (1.4) it follows that for equation (1.7) the parameters k₁ and k₂ take the form (1.8). The main conclusion of our investigation can be formulated in the following way: if conditions (1.5), (1.6) hold, then the trivial solution of equation (1.7) is asymptotically stable, and there exists a sufficiently small step of discretization of this equation such that the trivial solution of the corresponding difference equation is asymptotically stable too.
Note that the conditions for asymptotic stability are obtained here by virtue of Kolmanovskii and Shaikhet's general method of Lyapunov functionals construction [6, 7, 8, 9, 10, 11, 12, 15], which is applicable both to differential and difference equations, for deterministic as well as stochastic systems with delay.

Construction of difference analogue
Transform equation (1.7) into a system (2.1) of two first-order equations for the coordinate and the velocity of the pendulum. To construct a difference analogue of system (2.1), introduce a discretization step τ > 0 and the corresponding grid values (2.2). A difference analogue of system (2.1) can then be considered in the form (2.3). From the first equation of system (2.3) we obtain (2.4); from here and (1.8) it follows that (2.5) holds. Substituting (2.5) into the second equation of system (2.3) and using (1.4), we obtain (2.6). Introducing the notation (2.7), (2.8), from here and (2.6) it follows that (2.9) holds. So, system (2.3) can be written in the matrix form (2.10), where the coefficients are given by (2.11).
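A natural Euler-type difference analogue replaces the delays h₁, h₂ by integer multiples m_j = h_j/τ of the step. The sketch below is only a plausible form of such a scheme, not the paper's exact construction (2.3), and all parameter values are illustrative assumptions. They are chosen using the first-order approximation x(t − h) ≈ x(t) − hẋ(t), under which a delayed-position control of the form −b₁x(t − h₁) + b₂x(t − h₂) acts like −(b₁ − b₂)x(t) + (b₁h₁ − b₂h₂)ẋ(t), i.e., it is stabilizing when b₁ − b₂ > a and b₂h₂ > b₁h₁.

```python
def simulate_delayed(a=1.0, b1=4.0, b2=2.0, h1=0.1, h2=0.5,
                     tau=0.01, t_end=60.0, x_init=1.0):
    """Euler-type difference scheme for the delayed pendulum
        x''(t) - a*x(t) = -b1*x(t - h1) + b2*x(t - h2)
    with constant initial history x(s) = x_init, s <= 0, and x'(0) = 0.
    The delays are replaced by m1 = h1/tau and m2 = h2/tau grid steps."""
    m1, m2 = int(round(h1 / tau)), int(round(h2 / tau))
    x = [x_init] * (m2 + 1)  # stores x_{i-m2}, ..., x_i; x[-1] is current
    y = 0.0                  # y_i approximates the velocity x'(t_i)
    for _ in range(int(t_end / tau)):
        u = -b1 * x[-1 - m1] + b2 * x[-1 - m2]   # delayed-position control
        x_new = x[-1] + tau * y
        y = y + tau * (a * x[-1] + u)
        x.append(x_new)
    return x
```

For the assumed values the effective closed loop is approximately ẍ + 0.6ẋ + x = 0, so the computed trajectory decays to zero.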

Stability conditions of the auxiliary equation
Following the general method of Lyapunov functionals construction [7], first consider the auxiliary equation (3.1), which can be written in a scalar form (3.2) with coefficients expressed through the parameters of (2.10). It is well known [15] that necessary and sufficient conditions for the asymptotic stability of the trivial solution of equation (3.2) have the form (3.3), (3.4). For A₁, from (3.3), (3.4) it follows that 0 < τ(k₁ − τa₁) < 2. It means that (3.5) holds. As a result we obtain necessary and sufficient conditions for the asymptotic stability of the trivial solution of the auxiliary equation (3.2). Let the matrix C be a diagonal matrix with positive elements c₁ and c₂. Then the elements d_ij of the matrix D satisfy the system of equations (3.9) with the solution (3.10).
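For a two-dimensional linear difference equation z(i + 1) = Az(i), the necessary and sufficient conditions cited from [15] are of Schur–Cohn (Jury) type: in terms of the characteristic polynomial λ² − (tr A)λ + det A, the trivial solution is asymptotically stable iff |det A| < 1 and |tr A| < 1 + det A. The numpy sketch below (the test matrix is an illustrative assumption) checks these inequalities and solves the matrix equation AᵀDA − D = −C of (3.8) by vectorization.

```python
import numpy as np

def schur_stable_2x2(A):
    """Jury/Schur-Cohn test: both roots of lambda^2 - tr(A)*lambda + det(A)
    lie strictly inside the unit disk iff |det A| < 1 and |tr A| < 1 + det A."""
    t, d = np.trace(A), np.linalg.det(A)
    return abs(d) < 1 and abs(t) < 1 + d

def solve_dlyap(A, C):
    """Solve the matrix equation A^T D A - D = -C of (3.8) by vectorization,
    using vec(A^T D A) = kron(A^T, A^T) vec(D) with column-major vec."""
    n = A.shape[0]
    M = np.kron(A.T, A.T) - np.eye(n * n)
    D = np.linalg.solve(M, -C.flatten(order="F")).reshape((n, n), order="F")
    return 0.5 * (D + D.T)  # symmetrize against round-off
```

For a Schur-stable A and a positive definite C the solution D is positive definite, so v(i) = zᵀ(i)Dz(i) decreases along trajectories of the auxiliary equation.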

Stability conditions of the difference analogue
Let us now obtain a sufficient condition for the asymptotic stability of the trivial solution of (2.10). Transform this equation to the form (4.1). Following the general method of Lyapunov functionals construction [7], we will construct a Lyapunov functional Vᵢ for equation (2.10) in the form Vᵢ = V₁ᵢ + V₂ᵢ, where V₁ᵢ is given by (4.2) and the matrix D is a positive definite solution of matrix equation (3.8).
Calculating ∆V₁ᵢ via (4.2), (4.1), (3.8), we obtain a representation of ∆V₁ᵢ. Using (2.7) and λ₁ > 0, the mixed terms in this representation can be estimated from above, and analogously for λ₂ > 0. As a result we obtain an estimate for ∆V₁ᵢ that contains a positive component. To neutralize the positive component in the estimate for ∆V₁ᵢ, choose V₂ᵢ appropriately and calculate ∆V₂ᵢ. Thus, for the functional Vᵢ = V₁ᵢ + V₂ᵢ, using (4.14) we obtain the stability conditions in the form (4.18). To minimize the left-hand part of the second condition in (4.18), put λ₂ = q₃/q₂; then (4.18) takes a simpler form. Choosing λ₁ > 0 from the corresponding condition and using the dependence of α and β on c, we arrive at the stability condition (4.30), in which the function δ(τ) is continuous at the point τ = 0.

Corollary 4.3. Let conditions (1.5), (1.6) hold. Then condition (4.30) holds for all sufficiently small τ > 0.

Proof. For τ = 0, condition (4.30) takes the form (4.31). It is easy to see that if condition (1.6) holds, then condition (4.31) (that is, condition (4.30) for τ = 0) holds too. Since the function δ(τ) is continuous at the point τ = 0, if condition (4.30) holds for τ = 0, then it holds for sufficiently small τ > 0 as well. The proof is completed.

Numerical analysis
Here we consider some numerical examples which illustrate the theoretical results obtained above. For illustration of Corollary 4.3, consider an example in which conditions (1.5), (1.6) hold, so that the trivial solution of equation (1.7) is asymptotically stable. Besides, there exists a sufficiently small τ > 0 such that condition (4.30) holds: using τ = 0.01 we obtain δ(0.01) = 0.869 < 1, that is, condition (4.30) holds. Therefore, the trivial solution of difference system (2.3) is asymptotically stable. Figure 5.1 shows that the trajectory of the solution of system (2.3) with the initial condition x_j = 33, j ≤ 0, y₀ = 0 goes to zero.
If conditions (1.5) hold but condition (1.6) does not hold, then the trivial solution of equation (1.7) can be either asymptotically stable or unstable. If, in this case, condition (4.30) also does not hold for some τ > 0, then the trivial solution of difference system (2.3) can likewise be either asymptotically stable or unstable. In the following two examples one can see both situations.
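Both situations can be observed directly on a linear difference system z(i + 1) = Az(i): whether the iterates decay or grow is governed by the spectral radius of A. A toy numpy illustration (the two companion matrices are assumptions chosen for illustration, not the matrices of the examples above):

```python
import numpy as np

def final_norm(A, z0, n=200):
    """Iterate z(i+1) = A z(i) n times and return ||z(n)||."""
    z = np.array(z0, dtype=float)
    for _ in range(n):
        z = A @ z
    return float(np.linalg.norm(z))

# companion matrices of the scalar recursion z_{i+1} = p*z_i + q*z_{i-1}
A_stable = np.array([[0.9, -0.5], [1.0, 0.0]])   # roots of l^2 - 0.9l + 0.5, |l| ~ 0.71
A_unstable = np.array([[1.1, 0.2], [1.0, 0.0]])  # largest root ~ 1.26 > 1
```

In the first case the iterates tend to zero; in the second they diverge, mirroring the stable and unstable situations described above.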
Recall the standard fact used above: if for an arbitrary positive definite matrix C the matrix equation

AᵀDA − D = −C  (3.8)

has a positive definite solution D, then the function v(i) = zᵀ(i)Dz(i) is a Lyapunov function for equation (3.1), that is, ∆v(i) = −zᵀ(i)Cz(i).

Remark 3.1. Note that without loss of generality in (3.10) we can put c₁ = 1, c₂ = c. Indeed, if this is not the case, we can divide matrix equation (3.8) by c₁. As a result we obtain a new diagonal matrix C with the elements 1 and c = c₂/c₁, and a correspondingly rescaled matrix D.
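The scaling in the remark can be verified directly: since equation (3.8) is linear in the pair (D, C), if D solves AᵀDA − D = −C, then D/c₁ solves the same equation with C replaced by C/c₁, so C = diag(c₁, c₂) can always be normalized to diag(1, c), c = c₂/c₁. A quick numpy check with illustrative values:

```python
import numpy as np

def solve_matrix_eq(A, C):
    """Solve A^T D A - D = -C (equation (3.8)) by vectorization."""
    n = A.shape[0]
    M = np.kron(A.T, A.T) - np.eye(n * n)
    return np.linalg.solve(M, -C.flatten(order="F")).reshape((n, n), order="F")

A = np.array([[0.5, 0.2], [-0.1, 0.3]])  # an arbitrary Schur-stable matrix
C = np.diag([2.0, 6.0])                  # c1 = 2, c2 = 6
D = solve_matrix_eq(A, C)
c1 = C[0, 0]
# dividing (3.8) by c1 rescales C to diag(1, c2/c1) and D to D/c1
D_scaled = solve_matrix_eq(A, C / c1)
```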