EFFICIENT CRITERIA FOR THE STABILIZATION OF PLANAR LINEAR SYSTEMS BY HYBRID FEEDBACK CONTROLS

We suggest some criteria for the stabilization of planar linear systems via linear hybrid feedback controls. The results are formulated in terms of the input matrices. This enables us, in particular, to work out an algorithm which is directly suitable for computer realization. At the same time, this algorithm makes it easy to check whether a given linear 2 × 2 system can be stabilized (a) by a linear ordinary feedback control or (b) by a linear hybrid feedback control.


Introduction
Consider a linear control 2 × 2 system

ẋ = Ax + Bu, y = Cx, (1.1)

on [0, ∞), where x ∈ R^2 is the state variable of the system, y ∈ R^m is the output variable, u ∈ R^ℓ is the control variable, and B and C are given real matrices of the sizes 2 × ℓ and m × 2, respectively. If the pair (A,B) is controllable, or more generally, stabilizable, and rank C = 2 (which describes the case of complete observability of the solutions), then it is always possible (see, e.g., [5, 6]) to achieve exponential stability of the zero solution to the control system (1.1) with an arbitrary matrix A. In such a case, there exists a linear ordinary feedback control of the form u = Gy, with an ℓ × m matrix G, which yields exponential stability.
Similarly, if rank B = 2 and the pair (A,C) is observable, or at least detectable, then again a suitable linear feedback control of the form u = Gy solves the stabilization problem for system (1.1).
However, it is known that, in practice, the condition rank B = 2 as well as the complete observability of the solutions (i.e., rank C = 2) may be unavailable. The most interesting situation for applications is, therefore, the case where rank B = rank C = 1.
A simple example of such a system is the harmonic oscillator with the external force as the control, where

A = ( 0 1 ; −1 0 ), B = (0, 1)^T, C = (1, 0). (1.2)

Here, the displacement variable x_1 is available for measurements, while the controller can only change the velocity variable x_2 (we assume that x = (x_1, x_2)^T ∈ R^2). This control system is both controllable and observable, but it cannot be stabilized by ordinary (even nonlinear and discontinuous) output feedback controls of the form u = f(y) (see, e.g., [1]). However, as was shown by Artstein [1], there exists a hybrid feedback control which provides asymptotic stability of the zero solution to (1.1) with the matrices from (1.2).
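To see concretely why ordinary static output feedback fails here, one can compute the closed-loop eigenvalues of A + gBC for the matrices from (1.2). The following sketch is our own illustration (the helper names are not from the paper): for every sampled gain g, at least one eigenvalue has nonnegative real part, so u = gy never yields asymptotic stability.

```python
import cmath

def eig2(M):
    # eigenvalues of a 2x2 matrix from its characteristic polynomial
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

def oscillator_closed_loop(g):
    # A + g*B*C with A = ((0,1),(-1,0)), B = (0,1)^T, C = (1,0)
    return [[0.0, 1.0], [g - 1.0, 0.0]]

# No sampled gain g places both eigenvalues in the open left half-plane:
for g in [-5.0, -1.0, 0.0, 0.5, 1.0, 2.0, 10.0]:
    l1, l2 = eig2(oscillator_closed_loop(g))
    assert max(l1.real, l2.real) >= 0.0
```

Of course, sampling over finitely many gains is only illustrative; the impossibility for all feedbacks u = f(y) is proved in [1].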
A hybrid feedback control essentially includes two features (see Section 3 for the formal definitions): a discrete time controller (an automaton) attached to the given dynamical system (i.e., to (1.1) in our case) via the matrices B and C, and a switching algorithm describing when and how the control u should be changed. Artstein's example shows that such a hybrid feedback control may help even when ordinary feedback fails to stabilize the system.
In [2, 3], the following result is obtained for B and C being nonzero matrices of rank 1: system (1.1) is stabilizable by a linear hybrid feedback control (LHFC) if and only if, for at least one α ∈ R, the matrix A + αBC does not have nonnegative real eigenvalues. This result gives a necessary and sufficient stabilization condition, and it shows immediately that making use of hybrid feedback controls provides a better stabilization criterion than any one we can obtain exploiting ordinary feedback controls.
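A direct, if naive, way to look for a suitable α in this criterion is to sample: for each candidate α, compute the eigenvalues of A + αBC and discard α if a nonnegative real eigenvalue appears. The sketch below is our own illustration (rank-one B and C are stored as a column and a row); note that sampling can exhibit a suitable α but cannot, by itself, refute its existence.

```python
import cmath

def eig2(M):
    # eigenvalues of a 2x2 matrix via trace and determinant
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

def has_nonneg_real_eigenvalue(M, tol=1e-9):
    return any(abs(l.imag) < tol and l.real > -tol for l in eig2(M))

def certificate_alphas(A, b, c, alphas):
    # b: the column of the rank-one B; c: the row of the rank-one C.
    # Keep the sampled alphas for which A + alpha*b*c has no
    # nonnegative real eigenvalue (the criterion of [2, 3]).
    good = []
    for a in alphas:
        M = [[A[i][j] + a * b[i] * c[j] for j in range(2)] for i in range(2)]
        if not has_nonneg_real_eigenvalue(M):
            good.append(a)
    return good

# Harmonic oscillator: alpha = 0 already works (sigma(A) = {i, -i}),
# while alpha = 1 fails (a double eigenvalue at 0).
A = [[0.0, 1.0], [-1.0, 0.0]]
print(certificate_alphas(A, [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]))  # prints [0.0]
```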
However, the shortcoming of this criterion is that it does not give any explicit description of how its assumptions can be verified in practice. In other words, it does not suggest any efficient, finite-step algorithm in terms of the given matrices (A,B,C) which would answer the question of when system (1.1) admits a stabilizing feedback control.
In contrast to [2, 3], the present paper aims at (1) finding verifiable criteria for LHFC stabilization of system (1.1), and (2) constructing efficient (and "computer-friendly") algorithms which can easily test a specific system (1.1), in terms of the input matrices (A,B,C), to find out whether the zero solution to (1.1) can be stabilized by an ordinary linear feedback control or by an LHFC.

Notations and relevant facts of control theory
We denote by N, R, and C the sets of all natural, real, and complex numbers, respectively. The set R will in the sequel be naturally identified with {z | Im z = 0} ⊂ C. By ⟨·,·⟩ and |·| we mean the scalar product and the Euclidean norm in R^2, respectively. We write Span{b} for the one-dimensional vector space spanned by a given nonzero vector b ∈ R^2. We also put C^- = {z ∈ C | Re z < 0} and C^+ = {z ∈ C | Re z ≥ 0}. Let M(ℓ,m) denote the set of all real ℓ × m matrices. Matrices will often be addressed as linear operators in the appropriate vector spaces. In the sequel, I and Θ will stand for the identity 2 × 2 matrix and the zero 2 × 2 matrix, respectively. Given a matrix D ∈ M(2,2), we will denote its spectrum by σ(D).
The characteristic and the minimal polynomials of the matrix A will be denoted by π_A(λ) and p_A(λ), respectively. Clearly, π_A(λ) = λ^2 − trA · λ + detA. The decomposition C = C^- ∪ C^+ also implies a special factorization of the minimal polynomial, p_A = p_A^- p_A^+, where the zeros of p_A^-(λ) and p_A^+(λ) belong to C^- and C^+, respectively. The notation ⟨A|B⟩ is used for the controllability space of the pair (A,B), that is, ⟨A|B⟩ := B(R^ℓ) + AB(R^ℓ).
We recall some well-known facts (see, e.g., [5, 6]) from the theory of linear control systems, which are summarized in Definitions 2.1, 2.3, and 2.6 and Lemmas 2.2, 2.4, 2.5, 2.7, 2.8, and 2.9. Although some of these results are quite general, we formulate them for the case of 2 × 2 systems, as this is the case of interest in this paper.

Lemma 2.2. The following conditions are equivalent: (I) the pair (A,B) is controllable; (II) ⟨A|B⟩ = R^2.

Lemma 2.4. The following conditions are equivalent: (I) the pair (A,B) is controllable; (II) det(B AB) ≠ 0 (here rank B = 1, so that B can be identified with a column vector and (B AB) ∈ M(2,2)).

Definition 2.6. The pair (A,B) is called stabilizable if there exists F ∈ M(ℓ,2) such that the matrix A + BF is stable.
The pair (A,C) is called detectable if there exists F ∈ M(2,m) such that the matrix A + FC is stable.

Lemma 2.7. The pair (A,B) is stabilizable if and only if ker p_A^+(A) ⊂ ⟨A|B⟩.
Remark 2.10. We point out that the converse to Lemma 2.9 is not true in general. Indeed, for the matrices A = ( 1 0 ; 0 −1 ) and B = (1, 0)^T, the pair (A,B) is stabilizable but not controllable, and the pair (A, B^T) is detectable but not observable.
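The claims of Remark 2.10 are easy to verify numerically; the following minimal sketch is ours (in particular, the stabilizing feedback F = (−2, 0) is our own choice, not taken from the paper).

```python
# A = diag(1, -1), b = (1, 0)^T from Remark 2.10.
A = [[1.0, 0.0], [0.0, -1.0]]
b = [1.0, 0.0]

# The Kalman matrix (b  Ab) is singular, so (A, b) is not controllable.
Ab = [A[0][0] * b[0] + A[0][1] * b[1], A[1][0] * b[0] + A[1][1] * b[1]]
assert b[0] * Ab[1] - b[1] * Ab[0] == 0.0

# Yet F = (-2, 0) gives A + bF = diag(-1, -1), which is stable,
# so (A, b) is stabilizable.
F = [-2.0, 0.0]
AF = [[A[i][j] + b[i] * F[j] for j in range(2)] for i in range(2)]
assert AF == [[-1.0, 0.0], [0.0, -1.0]]
```

The unstable eigenvector of A is b itself, so the unstable mode lies in the controllability space Span{b}, which is exactly the condition of Lemma 2.7.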

Definitions of linear hybrid feedback controls and hybrid feedback stabilization
Definition 3.1. By a discrete automaton we mean in the sequel a 6-tuple ∆ = (Q, I, ᏹ, T, j, q_0), where
(i) Q is a finite set of all possible automaton states (locations);
(ii) the finite set I contains the input alphabet;
(iii) the transition map ᏹ : Q × I → Q indicates the location after a transition time, based on the previous location q and the input i ∈ I at the time of transition;
(iv) T : Q → (0,∞) is a mapping which sets the period T(q) between transition times;
(v) j : R^m → I is a function with the property j(λy) = j(y), y ∈ R^m, λ > 0;
(vi) q_0 = q(0) is the state of the automaton at the initial time.
In [1, 4], a similar definition (without condition (v)) is considered. We add (v) to the standard requirements in view of the way LHFCs are used in this particular paper (see Definition 3.2).
Intuitively, the automaton follows the output y and uses this information to determine switching times and the values of the new continuous piece of the control function.
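Definition 3.1 translates almost verbatim into code. The following Python sketch is our own illustration of the 6-tuple (all names are ours); the homogeneity condition (v) is what lets the automaton react only to the "direction" of the output, here modeled by j(y) = sign(y) for a scalar output.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Hashable, Tuple

@dataclass
class DiscreteAutomaton:
    # Delta = (Q, I, M, T, j, q0) of Definition 3.1 (names are ours)
    states: Tuple[Hashable, ...]                            # Q: locations
    alphabet: Tuple[Hashable, ...]                          # I: input alphabet
    transition: Dict[Tuple[Hashable, Hashable], Hashable]   # M: Q x I -> Q
    period: Dict[Hashable, float]                           # T: Q -> (0, inf)
    encode: Callable[[float], Hashable]                     # j, with j(l*y) = j(y), l > 0
    q0: Hashable                                            # initial location

    def next_state(self, q, y):
        # location after a transition, given the output sample y
        return self.transition[(q, self.encode(y))]

# A two-state automaton switching on the sign of a scalar output.
auto = DiscreteAutomaton(
    states=("p", "q"),
    alphabet=("+", "-"),
    transition={("p", "+"): "q", ("p", "-"): "p",
                ("q", "+"): "q", ("q", "-"): "p"},
    period={"p": 1.0, "q": 0.5},
    encode=lambda y: "+" if y >= 0 else "-",
    q0="p",
)
```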
For any automaton ∆ satisfying (i)-(vi), we can iteratively define a special feedback operator F_∆. Given y : [0,∞) → R^m, the function F_∆y : [0,∞) → Q is defined by (F_∆y)(t) = q(t_k) for t ∈ [t_k, t_{k+1}), where t_0 = 0, q(t_0) = q_0, and t_{k+1} = t_k + T(q(t_k)), q(t_{k+1}) = ᏹ(q(t_k), j(y(t_{k+1}))), k = 0, 1, .... The sequence {t_k}, k ≥ 0 (t_0 = 0), constructed in the definition of F_∆y, determines when the automaton should switch between locations. Note that the sequence {t_k} is allowed to depend on the output function y(·).

Definition 3.2. The pair (∆, {G_q}), where ∆ is a discrete automaton and {G_q | q ∈ Q} ⊂ M(ℓ,m), will be addressed as an LHFC; the dependence between the control function u(·) and the output function y(·) is defined by u(t) = G_{q(t_k)} y(t), t ∈ [t_k, t_{k+1}), k = 0, 1, ..., where {t_k}, k ≥ 0, is the corresponding sequence of switching times.
The set of all LHFCs will in the sequel be denoted by ᏸᏴ, while u = (∆, {G_q}) ∈ ᏸᏴ will stand for a specific control. According to Definition 3.2, system (1.1), governed by a control u = (∆, {G_q}) ∈ ᏸᏴ (in short, the u-governed system (1.1)), is equivalent to the nonlinear functional differential equation ẋ(t) = (A + B G_{(F_∆y)(t)} C) x(t), y(t) = Cx(t). The dynamics of system (1.1), governed by an LHFC u, is a triple H(t) = (x(t), q(t), τ(t)), where x(·) is a solution to (1.1), q(t) is the automaton's location at the instant t, and τ(t) is the time remaining till the next transition instant (see [1]). The function H(·) is also called a hybrid trajectory of system (1.1). Typical switching procedures (with examples) for systems with LHFC are described in detail in [1, 4]. In [4], some general properties of hybrid trajectories for linear and nonlinear finite-dimensional systems are discussed. In the same paper, one can find a review of the authors' results on some properties of the hybrid dynamics.
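A hybrid trajectory can be simulated directly from these definitions: integrate the closed-loop dynamics between switching times and update the location at each t_k. The sketch below is our own illustration of the switching mechanics for scalar input and output (rank B = rank C = 1), using a crude explicit Euler step; it is not a stabilizing construction from the paper.

```python
def simulate_hybrid(A, b, c, gains, transition, period, encode, q0, x0,
                    dt=1e-3, t_end=5.0):
    # x' = (A + b*G_q*c) x between switchings; b is a column, c a row.
    x, q = list(x0), q0
    t, next_switch = 0.0, period[q0]
    while t < t_end:
        if t >= next_switch:                       # a transition instant t_k
            y = c[0] * x[0] + c[1] * x[1]          # output y = Cx
            q = transition[(q, encode(y))]         # q(t_k) = M(q, j(y))
            next_switch = t + period[q]            # t_{k+1} = t_k + T(q)
        y = c[0] * x[0] + c[1] * x[1]
        u = gains[q] * y                           # u = G_q y on [t_k, t_{k+1})
        dx = [A[0][0] * x[0] + A[0][1] * x[1] + b[0] * u,
              A[1][0] * x[0] + A[1][1] * x[1] + b[1] * u]
        x = [x[0] + dt * dx[0], x[1] + dt * dx[1]]
        t += dt
    return x

# Sanity check with a one-location automaton (i.e., ordinary feedback):
# with a stable A and zero gain, the trajectory decays.
A = [[-1.0, 0.0], [0.0, -1.0]]
x = simulate_hybrid(A, [0.0, 1.0], [1.0, 0.0], {"q": 0.0},
                    {("q", "+"): "q", ("q", "-"): "q"}, {"q": 1.0},
                    lambda y: "+" if y >= 0 else "-", "q", [1.0, 1.0])
assert x[0] ** 2 + x[1] ** 2 < 1.0
```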
In the sequel, we denote by ᏸᏴ_1 ⊂ ᏸᏴ the class of those LHFCs for which Q contains only one point. Clearly, the class ᏸᏴ_1 can naturally be identified with the class of ordinary linear feedback controls of the form u = Gy with G being an appropriate matrix.

Definition 3.4 [1]. System (1.1) is said to be stabilizable by a control u ∈ ᏸᏴ (u-stab.) if the trivial solution to the u-governed system (1.1) is uniformly asymptotically stable. In other words, (a) for any ε > 0, there is δ > 0 such that every solution x(·) with the property |x(0)| < δ satisfies the estimate |x(t)| < ε for t ≥ 0; (b) for every solution x(·), |x(t)| → 0 as t → ∞, the convergence being uniform with respect to the initial points x(0).


Some elementary and well-known facts about hybrid stabilization

Lemma 4.1. Put rank B = ℓ_1 and rank C = m_1, and let B_1 ∈ M(2, ℓ_1) be a matrix consisting of ℓ_1 linearly independent columns of the matrix B, and C_1 ∈ M(m_1, 2) a matrix consisting of m_1 linearly independent rows of the matrix C. Then the following statements are valid:
(1) for every G ∈ M(ℓ,m), there exists a unique matrix G_1 ∈ M(ℓ_1, m_1) such that BGC = B_1 G_1 C_1; (4.1)
(2) conversely, for every G_1 ∈ M(ℓ_1, m_1), there exists G ∈ M(ℓ,m) such that (4.1) is valid.

The first statement of the lemma is a simple exercise in matrix algebra, while the second statement is a straightforward corollary of the first if one takes into account the definition of the classes ᏸᏴ and ᏸᏴ_1 in Section 3.

Corollary 4.3. System (1.1) is ᏸᏴ_1-stabilizable if at least one of the following conditions holds:
(1) the pair (A,B) is stabilizable and rank C = 2;
(2) the pair (A,C) is detectable and rank B = 2.
Proof. Suppose that the first condition is valid. By Lemma 4.1, one can then assume that C ∈ M(2,2) with det C ≠ 0. By Definition 2.6, there exists F ∈ M(ℓ,2) such that the matrix A + BF is stable. Then A + BGC is stable, where G = FC^{-1}. According to Lemma 4.2, (A,B,C) is ᏸᏴ_1-stab. Case (2) can be treated similarly.
In [2, 3], the following result is proved.

Theorem 4.6 [2, 3]. Let B and C be nonzero matrices with rank B = rank C = 1. System (1.1) is ᏸᏴ-stabilizable if and only if there exists α ∈ R such that the matrix A + αBC does not have nonnegative real eigenvalues.

The results of this section show that if we wish to construct an algorithm which would test whether a given triple (A,B,C) provides ᏸᏴ_1-stabilizability or ᏸᏴ-stabilizability of system (1.1), then we need to do the following: (1) study the cases when the pair (A,B) is not stabilizable or the pair (A,C) is not detectable; (2) find efficient algorithms for verifying the assumptions of Corollary 4.3 and Theorem 4.6.
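Step (1) of this program can itself be made computer-friendly. One standard route (the Hautus test from general control theory; the sketch below is ours, not the algorithm developed in this paper) is to check, for every eigenvalue λ of A with Re λ ≥ 0, that the matrix (A − λI | B) has full row rank; detectability of (A,C) then reduces to stabilizability of (A^T, C^T).

```python
import cmath

def eig2(A):
    # eigenvalues of a 2x2 matrix via trace and determinant
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    d = cmath.sqrt(tr * tr - 4 * det)
    return (tr + d) / 2, (tr - d) / 2

def full_row_rank(M, tol=1e-9):
    # is the 2 x k complex matrix M of rank 2?
    k = len(M[0])
    return any(abs(M[0][i] * M[1][j] - M[0][j] * M[1][i]) > tol
               for i in range(k) for j in range(i + 1, k))

def stabilizable(A, Bcols):
    # Hautus test: rank (A - lam*I | B) = 2 for each lam with Re lam >= 0
    for lam in eig2(A):
        if lam.real >= -1e-9:
            M = [[A[0][0] - lam, A[0][1]] + [col[0] for col in Bcols],
                 [A[1][0], A[1][1] - lam] + [col[1] for col in Bcols]]
            if not full_row_rank(M):
                return False
    return True

def detectable(A, Crows):
    # (A, C) is detectable iff (A^T, C^T) is stabilizable
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    return stabilizable(At, [list(r) for r in Crows])

# The pair from Remark 2.10: stabilizable although not controllable.
assert stabilizable([[1.0, 0.0], [0.0, -1.0]], [[1.0, 0.0]]) is True
```

Floating-point tolerances make this a numerical sketch; an exact finite-step test in terms of (A,B,C) is precisely what the algorithms of this paper are designed to provide.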

The pair (A,B) is controllable if and only if, for all λ ∈ σ(A) ∩ R, one has B ∉ ker(A − λI).
Proof. By Lemma 2.4, the pair (A,B) is not controllable if and only if det(B AB) = 0, that is, if and only if there exists λ ∈ σ(A) ∩ R such that B ∈ ker(A − λI).
(b) If CB = 0, then (see Lemma 5.7), for all G ∈ M(1,m), one has B ∈ ker(A_G − λI), where A_G := A + BGC. Hence, the solution to ẋ = A_G x, x(0) = B, is given by x(t) = e^{A_G t} B = e^{λt} B, t ≥ 0. The latter expression does not depend on G, so that it constitutes a solution to the u-governed system (1.1) satisfying x(0) = B for any u ∈ ᏸᏴ. Since λ ≥ 0, system (1.1) is not ᏸᏴ-stab.
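The invariance of Span{B} used in this step can be checked in one line: since CB = 0 and AB = λB, every closed-loop matrix A_G = A + BGC satisfies

```latex
A_G B = (A + BGC)B = AB + BG(CB) = AB = \lambda B,
```

so B ∈ ker(A_G − λI) independently of the gain G, and no choice of feedback can move the trajectory starting at B off the ray e^{λt}B.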
The theorem can be proved in a similar manner if (A,C) is not detectable.
