Transferring Instantly the State of Higher-Order Linear Descriptor (Regular) Differential Systems Using Impulsive Inputs

In many applications, and generally speaking in many dynamical differential systems, the problem of transferring the initial state of the system to a desired state in (almost) zero time is desirable but difficult to achieve. Theoretically, this can be achieved by using a linear combination of the Dirac δ-function and its derivatives. Obviously, such an input is physically unrealizable. However, we can think of it approximately as a combination of small pulses of very high magnitude and infinitely small duration. In this paper, the approximation process of the distributional behaviour of higher-order linear descriptor (regular) differential systems is presented. Thus, new analytical formulae based on linear algebra methods and generalized inverse theory are provided. Our approach is quite general, and some significant conditions are derived. Finally, a numerical example is presented and discussed.


Introduction
From the point of view of several important applications in several fields of research (see for instance [1,2]), transferring a given state of a linear system to a desired state in minimum time is very desirable, though it remains a challenging problem in control and systems theory.
Significant attention has been given to this problem in the case of linear systems; see [1–3]. Recently, Kalogeropoulos et al. [4] have further enriched these first approaches by relaxing some of the rather restrictive assumptions considered in [1,2]. Afterwards, the method has also been applied to the more general class of linear descriptor (regular) systems; see [5].
To summarize, in this paper we investigate how to transfer the initial state of an open-loop, linear higher-order descriptor (regular) differential system in (practically speaking, almost) zero time, that is, the system

Fx^(r)(t) = Gx(t) + bu(t)   (1)

with known initial conditions, where F, G ∈ M(n × n; F) and b ∈ M(n × 1; F) (here M(n × m; F) denotes the algebra of n × m matrices with elements in the field F), with b ≠ 0 (0 being the zero element of M(n × 1; F)), x(t) ∈ C^∞(F, M(n × 1; F)), and u(t) ∈ C^∞(F, M(1; F)). For the sake of simplicity, we set in the sequel M_n = M(n × n; F) and M_nm = M(n × m; F).
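As a small numerical aside, regularity of the pencil sF − G (defined formally in Section 2) can be probed by sampling det(sF − G): a not-identically-zero polynomial is nonzero at a random point with probability one. The sketch below uses made-up matrices F, G; they are not taken from the paper's example.

```python
import numpy as np

# Numerical regularity probe for the pencil sF - G (a sketch; the matrices
# F, G below are illustrative, not from the paper's numerical example).
def is_regular_pencil(F, G, trials=8, tol=1e-10, seed=0):
    """det(sF - G) is a polynomial in s; it is not identically zero
    iff it is nonzero at some sample point (a.s. for random s)."""
    rng = np.random.default_rng(seed)
    for s in rng.standard_normal(trials):
        if abs(np.linalg.det(s * F - G)) > tol:
            return True
    return False

# A descriptor example: F is singular (det F = 0), yet the pencil is regular,
# since det(sF - G) = -1 for all s.
F = np.array([[1.0, 0.0], [0.0, 0.0]])
G = np.array([[0.0, 1.0], [1.0, 0.0]])
print(is_regular_pencil(F, G))   # True
```

For a regular descriptor system the matrix F itself may well be singular; regularity concerns the pencil, not F alone.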
In order to solve this problem, the appropriate input has to be made up as a linear combination of the Dirac δ-function and its derivatives; see [1,2], and for more details consult [7]. Obviously, such an input is physically unrealizable. However, we can think of it approximately as a combination of small pulses of very high magnitude and infinitely small duration.
Linear descriptor (singular or regular) differential systems have been extensively used in control theory; see for instance [8–10], and for more details [11].
A brief outline of the paper is as follows. Section 2 provides the incentives and the typical modelling features of the problem. Moreover, a classical approximate expression for the controller, that is, a linear combination of the Dirac δ-function and its derivatives based on the normal (Gaussian) probability density function, is used. Then, the need to determine the unknown coefficients is derived. Section 3 is divided into four extensive subsections. In Section 3.1, the reduction of the higher-order system to first order is discussed. The first-order descriptor (regular) system is divided into a fast and a slow subsystem, using the Weierstrass canonical form. Section 3.2 investigates and presents some analytical formulas based on the slow subsystem. In Section 3.3, the theory of {1}-generalized inverses is used. Finally, some significant conditions for the solution of the slow subsystem are presented in Section 3.4. In Section 4, a necessary condition based on the fast subsystem is discussed and obtained. Section 5 provides an interesting numerical application from physics, and Section 6 concludes the paper. Two appendices with the analytical calculations of two important integrals are also provided.

Preliminary Results-Matrix Pencil Framework
In this section, some preliminary results of matrix pencil and system theory are briefly presented. First, we assume that for the nth-order linear differential system (1), the input can be a linear combination of the Dirac δ-function and its first (n − 1) derivatives, as follows:

u(t) = Σ_{i=0}^{n−1} a_i δ^(i)(t),   (2)

where δ^(k)(t), or d^k δ(t)/dt^k, is the kth derivative of the Dirac δ-function, and a_i for i = 0, 1, ..., n − 1 are the magnitudes of the delta function and its derivatives. Furthermore, we assume that the state of the system at time 0^− is x(0^−), and at time 0^+ it achieves the desired state x(0^+). With the following definitions, a brief presentation of the most important elements of matrix pencil theory is given.

Definition 1. Given F, G ∈ M_nm and an indeterminate s ∈ F, the matrix pencil sF − G is called regular when m = n and det(sF − G) ≢ 0. In any other case, the pencil is called singular.
Definition 2. The pencil sF − G is said to be strictly equivalent to the pencil sF̃ − G̃ if and only if there exist nonsingular P ∈ M_m and Q ∈ M_n such that

P(sF − G)Q = sF̃ − G̃.

In this paper, we consider the case where the pencil is regular. Thus, the strict equivalence relation can be defined rigorously on the set of regular pencils as follows.
This is the set of elementary divisors (e.d.) obtained by factorizing the invariant polynomials f_i(s, ŝ) into powers of homogeneous polynomials irreducible over the field F. In the case where sF − G is regular, we have e.d. of the following types: (i) e.d. of the type s^p, called zero finite elementary divisors (z.f.e.d.); (ii) e.d. of the type (s − a)^π, a ≠ 0, called nonzero finite elementary divisors (nz.f.e.d.); (iii) e.d. of the type ŝ^q, called infinite elementary divisors (i.e.d.).
Let B_1, B_2, ..., B_n be elements of M_n. Their direct sum, denoted by B_1 ⊕ B_2 ⊕ ... ⊕ B_n, is the block diag{B_1, B_2, ..., B_n}.
Then, the complex Weierstrass form sF_w − G_w of the regular pencil sF − G is defined by

sF_w − G_w = (sI_p − J_p) ⊕ (sH_q − I_q).   (7)

Now, the Jordan-type element, that is, J_p, is uniquely defined by the set of f.e.d.

The q blocks of the second summand, that is, sH_q − I_q (see (7)), are uniquely defined by the set of i.e.d. of sF − G. Furthermore, H_q = H_{q_1} ⊕ H_{q_2} ⊕ ... ⊕ H_{q_σ} is a nilpotent element of M_q with index q* = max{q_j : j = 1, 2, ..., σ}, where I_{p_j}, J_{p_j}(a_j), and H_{q_j} denote the identity, Jordan, and nilpotent (shift) blocks of the corresponding dimensions. Moreover, for the matrices F and G, we have the parameterization F = P^{−1}F_w Q^{−1} and G = P^{−1}G_w Q^{−1}. Since the state x(0^+) which we wish to reach is specified, we need to determine the n unknown coefficients a_i, for i = 0, 1, ..., n − 1.
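The slow/fast block structure of (7) can be illustrated numerically. The sketch below builds F_w = I_p ⊕ H_q and G_w = J_p ⊕ I_q with made-up values of p, q, J_p (none of them from the paper) and checks that det(sF_w − G_w) has degree p, so the nilpotent fast block contributes no finite eigenvalues.

```python
import numpy as np

# Illustrative (made-up) instance of the Weierstrass block structure (7):
# F_w = I_p (+) H_q and G_w = J_p (+) I_q, with H_q a nilpotent shift block.
p, q = 2, 3
J_p = np.array([[2.0, 1.0], [0.0, 2.0]])       # slow part: one Jordan block
H_q = np.eye(q, k=1)                           # fast part: nilpotent, index q

F_w = np.zeros((p + q, p + q)); F_w[:p, :p] = np.eye(p); F_w[p:, p:] = H_q
G_w = np.zeros((p + q, p + q)); G_w[:p, :p] = J_p;       G_w[p:, p:] = np.eye(q)

# det(sF_w - G_w) = det(sI_p - J_p) * det(sH_q - I_q) = (s - 2)^2 * (-1)^q:
# a degree-p polynomial, i.e., the fast subsystem adds no finite dynamics.
s_pts = np.arange(p + q + 1.0)
dets = [np.linalg.det(s * F_w - G_w) for s in s_pts]
coeffs = np.polyfit(s_pts, dets, p)            # exact fit of a quadratic
print(np.round(coeffs, 6))                     # [-1.  4. -4.], i.e., -(s - 2)^2
```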
In practice, we cannot create an exact impulse function, nor its derivatives. However, if we use one of the approximations of the Dirac δ-function, we will be able to change the state in some minimum practical time, depending mainly upon how well we generate the approximations. Let the Dirac δ-function be viewed as the limit of a sequence of functions,

δ(t) = lim_{σ→0^+} δ_σ(t),   (18)

where δ_σ(t) is called a nascent delta function. This limit is meant in the distributional sense, that is, lim_{σ→0^+} ∫ δ_σ(t) f(t) dt = f(0) for every smooth test function f. Some well-known nascent delta functions that are very useful in applications are the normal and Cauchy distributions, the rectangular function, the derivative of the sigmoid (or Fermi-Dirac) function, the Airy function, and so forth; see for instance [2,5,12–17]. The results given below are based on the normal function. Thus, by taking into consideration expression (18) and the normal (Gaussian) probability density function, we obtain

δ_σ(t) = (1/(σ√(2π))) e^{−t²/(2σ²)}.

So, the approximate expression for the controller (2) is obtained by replacing δ^(i)(t) with δ_σ^(i)(t). Then, we take the limit σ → 0^+. In the next section, the main results for system (1) are presented. The section begins with the following important lemma.
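A minimal numerical sketch of the nascent-delta idea above: the Gaussian density δ_σ concentrates at 0 as σ → 0^+, so integrating it against a smooth test function f recovers f(0). The grid and the test function below are made up for illustration.

```python
import numpy as np

# Gaussian nascent delta: integrating delta_sigma against a smooth f
# approaches f(0) as sigma -> 0+ (distributional convergence).
def delta_sigma(t, sigma):
    """Normal (Gaussian) density with mean 0 and standard deviation sigma."""
    return np.exp(-t**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]
f = np.cos(t)                                  # test function with f(0) = 1

vals = [float(np.sum(delta_sigma(t, s) * f) * dt) for s in (0.1, 0.01, 0.001)]
print(vals)                                    # approaches f(0) = 1
```

The same device, applied to derivatives of δ_σ, picks up (−1)^k f^(k)(0), which is exactly what the impulsive controller exploits.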

Lemma 3. System (1) is divided into the following two subsystems: the slow subsystem

y_p^(r)(t) = J_p y_p(t) + b_p u(t),   (23)

and the fast subsystem

H_q y_q^(r)(t) = y_q(t) + b_q u(t).   (24)

Proof. Consider the transformation x(t) = Q y(t). Substituting the previous expression into (1) and considering also (3), we obtain FQ y^(r)(t) = GQ y(t) + bu(t). Multiplying by P, we arrive at PFQ y^(r)(t) = PGQ y(t) + Pbu(t). Now, we denote

y(t) = [y_p^t(t)  y_q^t(t)]^t,   Pb = [b_p^t  b_q^t]^t,

where p + q = n. Taking into account that PFQ = F_w = I_p ⊕ H_q and PGQ = G_w = J_p ⊕ I_q, we arrive easily at (23) and (24).
System (23) is in the standard form of nonhomogeneous higher-order linear differential equations of Apostol-Kolodner type, which may be treated by more classical methods; see for instance [18] and the references therein.
Thus, it is convenient to define the new variables z_1(t) = y_p(t), z_2(t) = y_p^(1)(t), ..., z_r(t) = y_p^(r−1)(t). Then, we have a system of first-order ordinary differential equations, see (31), which can be expressed in vector-matrix form as

z'(t) = R z(t) + L u(t),   (32)

where z(t) = [z_1^t(t)  z_2^t(t)  ...  z_r^t(t)]^t ((·)^t denotes the transpose), the coefficient matrices R and L are given by (33), and z(t) has the corresponding dimension rp × 1. Now, considering (25), at time 0^− we have z(0^−) = 0, and at time 0^+ we obtain the desired value z(0^+). Moreover, χ_ij ∈ F for i = 0, 1, 2, ..., r − 1 and j = 1, 2, ..., n. Furthermore, see (37).
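The stacking step above can be sketched in code: for y^(r) = J y + b u, the companion matrix R carries identity blocks above the diagonal and J in the last block row. The values of J, b, and r below are made up for the sketch.

```python
import numpy as np

# Reduction of the slow subsystem y^(r) = J y + b u to first order,
# z' = R z + L u, with z = [y; y'; ...; y^(r-1)].
def companion(J, b, r):
    p = J.shape[0]
    R = np.zeros((r * p, r * p))
    for i in range(r - 1):                     # identity blocks above diagonal
        R[i * p:(i + 1) * p, (i + 1) * p:(i + 2) * p] = np.eye(p)
    R[(r - 1) * p:, :p] = J                    # last block row carries J
    L = np.zeros((r * p, 1))
    L[(r - 1) * p:, :] = b
    return R, L

J = np.array([[-2.0]])                         # scalar toy case: y'' = -2 y + u
b = np.array([[1.0]])
R, L = companion(J, b, r=2)
print(R)                                       # [[0, 1], [-2, 0]]
print(np.linalg.eigvals(R))                    # purely imaginary, ± i·sqrt(2)
```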

The Solution of Subsystem (32)
In order to solve subsystem (32), the following definitions should be provided.
Definition 4. The characteristic polynomial of the matrix R is given by

det(λI_rp − R) = Π_{i=1}^{κ} (λ − λ_i)^{τ_i},

with λ_i ≠ λ_j for i ≠ j and Σ_{i=1}^{κ} τ_i = rp. Without loss of generality, we assume that the eigenvalues are ordered so that the first l of them are nondefective, where d_i and τ_i, i = 1, 2, ..., κ, are the geometric and algebraic multiplicities of the eigenvalue λ_i, respectively.
Generally speaking, the matrix R is not diagonalizable. However, we can generate rp linearly independent vectors v_1, v_2, ..., v_rp and the rp × rp similarity transformation C = [v_1  v_2  ...  v_rp] that takes R into its Jordan canonical form, as the following definition clarifies.

Definition 5. There exists an invertible matrix C ∈ M_rp such that J = C^{−1}RC, where J ∈ M_rp is the Jordan canonical form of the matrix R. Analytically, J = J_o ⊕ J_{l+1} ⊕ ... ⊕ J_κ, where: (i) J_o is a diagonal matrix with diagonal elements the eigenvalues λ_i, for i = 1, ..., l; consequently, the dimension of J_o is s × s, with s = Σ_{i=1}^{l} τ_i.
(ii) Also, the block matrix J_j = J_{j,1} ⊕ J_{j,2} ⊕ ... ⊕ J_{j,d_j} ∈ M_{τ_j}, where each J_{j,z_j} is a Jordan block associated with the eigenvalue λ_j. According to the classical theory of ordinary differential equations, the solution of system (32) is given by the following lemma.

Lemma 6. The solution of subsystem (32) is given by

z(t) = ∫_{0^−}^{t} e^{R(t−s)} L u(s) ds.   (43)

Proof. Consider the transformation z(t) = C ζ(t), where C ∈ M_rp is nonsingular and ζ(0^−) = C^{−1} z(0^−) = 0_rp. Substituting (44) into (32), we obtain C ζ'(t) = RC ζ(t) + L u(t). (45) Furthermore, we define b̃ = C^{−1}L, so that the last equation is transformed into ζ'(t) = J ζ(t) + b̃ u(t). (46) Now, according to the relevant theory of first-order differential systems of the form (46), see for instance [16], and using also (44), the solution is expressed by (43).
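A numerical sketch of the solution formula of Lemma 6: with z(0^−) = 0, z' = Rz + Lu gives z(t) = ∫_0^t e^{R(t−s)} L u(s) ds. Here e^{Rt} is formed through the similarity C of Definition 5 (the toy R below is diagonalizable, so J is diagonal); R, L, and u are made up for the check.

```python
import numpy as np

# Variation-of-constants solution of z' = R z + L u with z(0) = 0.
R = np.array([[-1.0, 0.0], [0.0, -3.0]])
L = np.array([[1.0], [1.0]])
u = lambda s: 1.0                              # constant input for the check

lam, C = np.linalg.eig(R)
Cinv = np.linalg.inv(C)
expRt = lambda t: (C * np.exp(lam * t)) @ Cinv  # e^{Rt} = C e^{Jt} C^{-1}

t, n = 1.0, 5000                               # left Riemann sum of the integral
h = t / n
z = sum(expRt(t - k * h) @ L * u(k * h) for k in range(n)) * h

# closed form for constant u and invertible R: z(t) = R^{-1} (e^{Rt} - I) L
z_exact = np.linalg.inv(R) @ (expRt(t) - np.eye(2)) @ L
print(np.round(z.ravel(), 4), np.round(z_exact.ravel(), 4))
```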
Proposition 10. System (60) is solvable if condition (62) holds for every nonzero element of the vectors b_{τ_1}, b_{τ_2}, ..., b_{τ_l}, for every i = 1, 2, ..., l. Moreover, if one of the elements of the vectors b_{τ_1}, b_{τ_2}, ..., b_{τ_l} is zero, then the corresponding element of the ith row of the vector z_{τ_i}(0^+) should also be zero.
Proof. System (60) contains l subsystems of the type (61). If one of the elements of b_{τ_i} is zero, say the element in position j, j = 1, ..., τ_i, then the corresponding element of the ith row of the vector z_{τ_i}(0^+) should also be zero (in order to obtain a solution).
Consequently, system (60) is solvable if (62) is satisfied for every i = 1, 2, ..., l. Now, we work with the subsystems (61), which can be written as in (63). The coefficient matrix V_{jz_j} can be transformed to the equivalent matrix (67). Note that V_{jz_j} is a μ_{z_j} × n matrix; moreover, among the μ_{z_j} elements b_{c_{jz_j}}, only the nonzero ones are of interest. Thus, we obtain (67). In order to understand (67) better, the following example is helpful.
Example 1. We take the coefficient matrix. We assume that b_6 = 0 and b_5 ≠ 0. Since b_5 ≠ 0, we are not interested in b_3 and b_4.
Thus, under our assumption, we obtain the reduced matrix. Afterwards, we multiply the 3rd row by −b_4/b_5 and add it to the 2nd row. Moreover, we multiply the 3rd row by −b_3/b_5 and add it to the 1st row. Then, the following equivalent matrix is derived. Now, we multiply the 2nd row by −b_4/b_5 and add it to the 1st row. Finally, we multiply the matrix by 1/b_5 and conclude.

Remark 2. Considering the results already presented, it is clear that there exists a nonsingular μ_{z_j} × μ_{z_j} matrix L_{jz_j} such that the stated factorization holds. Thus, system (61) can be transformed into (74), where L_{jz_j} z_{jz_j}(0^+) is denoted accordingly.

Proposition 11. Subsystems (74) are solvable when the elements of the vectors z*_{jz_j}(0^+), for every z_j = 1, 2, ..., d_j, j = l + 1, l + 2, ..., κ, (76) are included in the vector with the greatest dimension, that is, of dimension ρ_j × 1, where ρ_j is defined below. Equivalently, if one assumes the stated ordering, then each of the vectors z*_{j2}(0^+), z*_{j3}(0^+), ..., z*_{jd_j}(0^+) should be a subvector of z*_{j1}(0^+).

Proof. We take the d_j subsystems (81). Without loss of generality, we assume the stated ordering of the μ_{z_j}. Looking carefully at the type of the matrices, we can easily verify that their first μ_1 rows are identical. Thus, it is necessary that the corresponding first μ_1 rows of z*_{j1}(0^+), z*_{j2}(0^+), ..., z*_{jd_j}(0^+) also be identical. Analogously, the first μ_2 rows should be identical to the corresponding first μ_2 rows of z*_{j1}(0^+), z*_{j2}(0^+), ..., z*_{jd_j}(0^+), and so on, until the row μ_{d_j} = ρ_j.
Consequently, we can now use the results of Propositions 10 and 11. Thus, system (81) is solvable if we have the following.
(i) The first elements of the coefficient vectors b_{τ_1}, b_{τ_2}, ..., b_{τ_l} are nonzero (practically speaking, this is assumed without loss of generality).
Consequently, system (81) is transformed to the solvable system (83), or equivalently (84), where the relevant quantities are defined accordingly.

Remark 3. The matrices given by (67) can, with some row transformations, be transformed to the form (84); the resulting matrix is denoted by V**_{jz_j}. Thus, there is a nonsingular matrix Λ_{jz_j} such that the stated relation holds. Then, subsystems (74) are transformed accordingly, where Λ_{jz_j} z*_{jz_j}(0^+) = z**_{jz_j}(0^+) for every z_j = 1, 2, ..., d_j and j = l + 1, l + 2, ..., κ. Consequently, system (83) is transformed to (87).

Comment 1. Without loss of generality, we can assume that the matrices V**_{jd_j} do not contain zero rows. On the contrary, if there exist some zero rows, the solvability of system (87) is not affected; only the number of nonzero rows included in (87) changes.
Remark 4. System (87) can be further transformed to a more convenient system. Analytically, we multiply the 1st row of the Vandermonde matrix V_l by (−1) and add it to the 1st row of each of the matrices in (88). (Note that we have assumed that the matrices V**_{jz_j}, for j = l + 1, l + 2, ..., κ, do not contain zero rows; see also Comment 1.) We can easily see that the 1st row of matrix (88) can be rewritten as follows.
Since the element λ_j − λ_1 ≠ 0, we can multiply (88) from the left by a properly chosen transformation matrix, so as to obtain (91). Finally, this subsection is concluded by considering system (91), where the matrices A_{jz_j} for j = l + 1, l + 2, ..., κ are derived by taking into account a properly chosen left transformation matrix Z, for j = l + 1, l + 2, ..., κ.
In order to solve system (91), we should use the relevant theory of generalized inverses.Some basic elements are briefly presented in the next subsection.

Solving Subsystem (91)
This subsection begins with a brief presentation of the most basic results on {1}-generalized inverses.

Definition 12. For a matrix A ∈ M_pq, a {1}-inverse of A is any matrix A^(1) ∈ M_qp satisfying A A^(1) A = A.
The following proposition provides the means for finding {1}-inverses; see [19,20] for proofs and details.

Proposition 13. Let A ∈ M_pq with rank A = r < min{p, q}, and let P ∈ M_p, Q ∈ M_q be invertible such that

PAQ = [I_r  O; O  O].

Then every X, {1}-inverse of the matrix A, is written as

X = Q [I_r  O; O  L] P,

for arbitrary L ∈ M_{(q−r)(p−r)}.
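A quick numerical check of the defining {1}-property A X A = A. The Moore-Penrose pseudoinverse satisfies all four Penrose conditions, so it is in particular one admissible {1}-inverse; for a rank-deficient A it is far from the only one. The matrix A below is made up.

```python
import numpy as np

# pinv(A) as one concrete {1}-inverse of a rank-deficient matrix A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # rank 2, 4x5

X = np.linalg.pinv(A)
print(np.allclose(A @ X @ A, A))     # True: the defining {1}-property
print(np.linalg.matrix_rank(A))      # 2 < min(4, 5), so X is not unique
```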
The following result is due to [6].
Theorem 15. The matrix equation AXB = D is consistent if and only if, for some A^(1), B^(1),

A A^(1) D B^(1) B = D,

and in that case the general solution is given by

X = A^(1) D B^(1) + Y − A^(1) A Y B B^(1),   (100)

for arbitrary Y of appropriate dimensions. The following characterization of the set of A^(1) is due to [20].
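The consistency test and the general solution quoted above can be verified numerically; the pseudoinverse is used as the {1}-inverse, and A, B, D are made up (D is built to be consistent by construction).

```python
import numpy as np

# AXB = D: consistency iff A A^(1) D B^(1) B = D, and then
# X = A^(1) D B^(1) + Y - A^(1) A Y B B^(1) solves it for arbitrary Y.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((2, 5))
D = A @ rng.standard_normal((3, 2)) @ B       # consistent by construction

A1, B1 = np.linalg.pinv(A), np.linalg.pinv(B)
print(np.allclose(A @ A1 @ D @ B1 @ B, D))    # consistency test passes

Y = rng.standard_normal((3, 2))               # arbitrary Y -> another solution
X = A1 @ D @ B1 + Y - A1 @ A @ Y @ B @ B1
print(np.allclose(A @ X @ B, D))              # True
```

The identity A A^(1) A = A makes the Y-dependent terms cancel inside AXB, which is why every choice of Y yields a valid solution.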
A special case of {1}-inverses is the left/right inverse of a full column/row rank matrix, respectively. The proofs and further details are presented extensively in [19,20].

Definition 17. Consider the following matrices.
(a) Let P_i(a) be an m × m matrix that equals the identity except that the nonzero element a is in the ith row and the ith column. Thus, whenever a matrix A is multiplied from the left by P_i(a), its ith row is multiplied by the nonzero number a.
(b) Let P_i(j, a) be an m × m matrix that equals the identity except that the nonzero element a is in the ith row and the jth column. Thus, whenever a matrix A is multiplied from the left by P_i(j, a), its jth row is multiplied by the nonzero number a and added to the ith row of A.
(c) Let Q_i(a) be an m × m matrix whose nonzero element a is in the ith row and the ith column. Thus, whenever a matrix A is multiplied from the right by Q_i(a), its ith column is multiplied by the nonzero number a.
(d) Let Q_i(j, a) be an m × m matrix whose nonzero element a is in the ith row and the jth column. Thus, whenever a matrix A is multiplied from the right by Q_i(j, a), its ith column is multiplied by the nonzero number a and added to the jth column of A. Now, in order to calculate the generalized inverses of Vandermonde and related matrices, we use the recent results of [6]. In this context, the following propositions are relevant.
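The elementary matrices of Definition 17 can be sketched directly (0-based indices here, unlike the 1-based convention of the text); right multiplication by the analogous Q matrices acts on columns.

```python
import numpy as np

# Elementary matrices: P_scale(m, i, a) scales row i from the left;
# P_add(m, i, j, a) adds a times row j to row i from the left.
def P_scale(m, i, a):
    P = np.eye(m)
    P[i, i] = a
    return P

def P_add(m, i, j, a):
    P = np.eye(m)
    P[i, j] = a
    return P

A = np.arange(9.0).reshape(3, 3)
B = P_scale(3, 1, 5.0) @ A           # row 1 multiplied by 5
D = P_add(3, 0, 2, -2.0) @ A         # -2 times row 2 added to row 0
print(B[1], D[0])
```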
Proposition 18. The {1}-inverse of the Vandermonde matrix V_l is given by V_l^(1) = Q [I_l; O_{n−l,l}] P, where P and Q are products of elementary matrices (the multiplication counts backwards, starting from the last factor), and the general solution of the system V_l a = z*_l(0^+) is given by

a = V_l^(1) z*_l(0^+) + (I_n − V_l^(1) V_l) ψ,   (113)

for an arbitrarily chosen vector ψ ∈ M_n1.
Proof. The proof is a straightforward application of Theorem 15. In order for the matrix Q [I_l; O_{n−l,l}] P to be the {1}-inverse of V_l, we should prove the equality

V_l (Q [I_l; O_{n−l,l}] P) V_l = V_l.

Hence, the solution of the system V_l a = z*_l(0^+) is given by (113).
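A numerical sketch of the use of Proposition 18: for an l × n Vandermonde matrix built from distinct nodes (full row rank, l < n), every vector of the form a = V^(1) z + (I_n − V^(1) V)ψ solves V a = z. The pseudoinverse serves as one admissible {1}-inverse; the nodes and z below are made up.

```python
import numpy as np

# General solution of the underdetermined Vandermonde system V a = z.
lam = np.array([1.0, 2.0, 3.0])                # distinct nodes (made up)
l, n = lam.size, 5
V = np.vander(lam, N=n, increasing=True)       # rows [1, λ, λ^2, λ^3, λ^4]
z = np.array([1.0, -2.0, 0.5])                 # target values (made up)
V1 = np.linalg.pinv(V)                         # one {1}-inverse of V

rng = np.random.default_rng(3)
results = []
for _ in range(3):
    psi = rng.standard_normal(n)
    a = V1 @ z + (np.eye(n) - V1 @ V) @ psi    # general solution, arbitrary ψ
    results.append(bool(np.allclose(V @ a, z)))
print(results)                                 # [True, True, True]
```

Full row rank gives V V^(1) = I_l, so the ψ-dependent term lies in the null space of V and V a = z holds for every ψ.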
Proof. The proof is also a straightforward application of Theorem 15. In order for the proposed matrix to be the {1}-inverse of A_j, we should prove the equality A_j A_j^(1) A_j = A_j. Hence, the solution of the system A_j a = d_j(0^+) is given by (115).

The Solution of System (91)
This subsection concludes the whole discussion. The following theorem holds.
There are infinitely many solutions, since the vector ψ ∈ M_n1 is arbitrary. Moreover, each of the above subsystems also contains the solution of the system Aa = b. Thus, if we assume that the desired solution is given by a, then it should also satisfy the first system, that is, V_l a = z*_l(0^+). Thus, substituting the general solution into the remaining subsystems, we require

A_j [V_l^(1) z*_l(0^+) + (I_n − V_l^(1) V_l) ψ] = d_j(0^+), for j = l + 1, l + 2, ..., κ.
With this important theorem, the discussion of system (32) is complete. In the next section, we present the solution of system (24).

The Solution of Subsystem (24)
The differential system (24) is given by

H_q y_q^(r)(t) = y_q(t) + b_q u(t).

We start this subsection by observing that, as is well known (see Section 2), there exists a q* ∈ N such that H_q^{q*} = O; that is, q* is the annihilation index of H_q. Multiplying by H_q^{q*−1}, we obtain

O = H_q^{q*} y_q^(r)(t) = H_q^{q*−1} y_q(t) + H_q^{q*−1} b_q u(t),

or equivalently H_q^{q*−1} y_q(t) + H_q^{q*−1} b_q u(t) = 0_q. Afterwards, we present a condition in order to accept the already-known solution a (see Section 3).
We know that the nilpotent block matrix H_q is given by H_q = H_{q_1} ⊕ H_{q_2} ⊕ ... ⊕ H_{q_μ}. Now, taking into consideration that H_q^{q*} = O, it follows that H_{q_j}^{q*} = O for each j, with

q* = max{q_j : j = 1, 2, ..., μ}.   (131)

Lemma 21 (see [5]). All the elements of the block diagonal matrix H_q^{q*−1} are zero, that is, (H_q^{q*−1})_{ij} = 0, apart from the ξ elements at positions (1, q*), (q* + 1, 2q*), ..., ((ξ − 1)q* + 1, ξq*), which are one. Note that ξ is the number of maximal block matrices with the same dimension, that is, q* × q*, where q* = max{q_j : j = 1, 2, ..., r}.

Theorem 22. The condition which makes system (24) solvable and provides an acceptable choice of the vector a is given by (132).

Proof. First, we consider system (129), that is, H_q^{q*−1} y_q(t) + H_q^{q*−1} b_q u_o(t) = 0_q. Then, for the solution of system (129), we take into account the limiting behaviour of the approximate input. Following Remark 1, we can assume that t = Kσ with K big enough, so that Kσ → 0^+ as t → 0^+. Then, we take the limit (135). However, evaluating this limit, (132) is derived.
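The annihilation index of (131) is easy to illustrate numerically: for a direct sum of nilpotent shift blocks, q* equals the largest block size. The block sizes (2 and 3) below are made up for the sketch.

```python
import numpy as np

# Smallest power q* with H^{q*} = O, for a block-diagonal nilpotent H.
def annihilation_index(H, qmax=50):
    Hk = np.eye(H.shape[0])
    for k in range(1, qmax + 1):
        Hk = Hk @ H
        if not Hk.any():
            return k
    raise ValueError("not nilpotent up to qmax")

shift = lambda q: np.eye(q, k=1)               # one q x q shift block H_q
H = np.block([[shift(2), np.zeros((2, 3))],
              [np.zeros((3, 2)), shift(3)]])   # H = H_2 (+) H_3

qstar = annihilation_index(H)
print(qstar)                                   # 3 = max{2, 3}
print(np.linalg.matrix_power(H, qstar - 1).any())   # True: H^(q*-1) ≠ O
```

The surviving ones in H^{q*−1} sit exactly in the largest blocks, in agreement with Lemma 21.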
A proof equivalent to that in [5] is provided for Theorem 23.
Theorem 23. A necessary condition for obtaining the solution of system (24) is given by (137), where the quantities involved are elements of the matrix Q, G^(1) is a {1}-inverse of the matrix G, and X ∈ M_n1 is arbitrarily chosen.
System (141) can be expressed as in (142), where the relevant matrices are defined accordingly. The proof is concluded by using the results of Theorem 22.

Conclusions
In this paper, an analytical method and several sufficient conditions were proposed and discussed for changing the state of a higher-order linear descriptor (regular) differential system in (almost) zero time by using the impulse function and its derivatives. The input has to be made up as a linear combination of the Dirac δ-function and its derivatives, approximated using the normal (Gaussian) probability density function. Using the tools of linear algebra and generalized inverses, exact calculations of the relevant matrices are obtained.
To date, no other approximating function has been used in the analysis of the problem studied in this paper. It would be of interest to use some other approximations and obtain comparative results. In [14,18,21], there are only some hints about the minimum time. However, there is no known formula which can calculate the minimum time in terms of other significant problem parameters, such as the standard deviation σ of the Gaussian approximation, the constant K, and so forth.
Since we have applied an approximation of the impulse input, we transfer the initial state only ε-close to the desired state. Consequently, a natural question is how close we can get, how we can calculate that distance, and whether it is possible to minimize it.
This appendix finishes with (A.29), which is a consequence of (A.26) and (A.27), with z_j = 1, 2, ..., d_j and the matrix b_{jz_j} ∈ M_{(jz_j)1}.
Journal of Control Science and Engineering

However, before we prove Lemma 27, the following lemmas should be considered first.
In this part, we also use Remark 2.1, with t = Kσ, σ → 0, and K sufficiently large. Moreover, we take the limit (using (B.9) and (B.23)), which equals 0. Thus, in order to conclude, we should prove the limit (B.33). The proof is concluded by induction, considering (B.33).
We assume that (B.33) is true. Now, we have to prove the corresponding expression for ρ + 1, that is, the stated limit, where μ_{z_j} are the Weyr characteristics (via Ferrers diagrams), for j = l + 1, l + 2, ..., κ and z_j = 1, 2, ..., d_j. Note that ρ_j = max_{z_j = 1, 2, ..., d_j} μ_{z_j} is the index of annihilation for the eigenvalue λ_j. Note also that the matrix (B.42) has d_j blocks, each of which can be written as in (B.44).