Toeplitz Matrices in the Problem of Semiscalar Equivalence of Second-Order Polynomial Matrices

We consider the problem of determining whether two polynomial matrices can be transformed into one another by multiplying on the left by a nonsingular numerical matrix and on the right by an invertible polynomial matrix. This gives rise to an equivalence relation, known as semiscalar equivalence. Considerable difficulties arise in this problem already for 2-by-2 matrices. In this paper the semiscalar equivalence of second-order polynomial matrices is investigated. In particular, necessary and sufficient conditions are found for two second-order matrices to be semiscalarly equivalent. The main result is stated in terms of determinants of Toeplitz matrices.


Introduction
Let C be the field of complex numbers and C[x] the ring of polynomials in an indeterminate x over C. Let M(n, C) and M(n, C[x]) denote the algebras of n × n matrices over C and C[x], respectively, and GL(n, C), GL(n, C[x]) their corresponding groups of units. Given two matrices A(x), B(x) ∈ M(n, C[x]), the question of determining whether there exist matrices S ∈ GL(n, C) and Q(x) ∈ GL(n, C[x]) such that

B(x) = S A(x) Q(x)    (1)

has attracted much attention for many years. This proved to be a harder problem than originally anticipated; considerable difficulties arise already for elements of M(2, C[x]). The matrices A(x), B(x) ∈ M(n, C[x]) are called semiscalarly equivalent if equality (1) is satisfied for some nonsingular matrix S ∈ M(n, C) and some invertible matrix Q(x) ∈ GL(n, C[x]) [1] (see also [2]). For this reason the problem of finding conditions under which two matrices are semiscalarly equivalent is of current interest. In this paper this problem is solved for matrices of second order. Toeplitz matrices play an important role in the conditions under which two second-order matrices can be transformed into one another by a semiscalar equivalence transformation. Although Toeplitz matrices form a special matrix class, many classical problems related to Laurent series, the moment problem, orthogonal polynomials, and others reduce to them. The deeper interest in Toeplitz matrices is to a large extent explained by the fact that every matrix is connected with them: any matrix can be represented as a sum of products of Toeplitz matrices. Many applied problems of electrodynamics, geophysics, acoustics, and automatic control require the investigation of Toeplitz matrices. Moreover, there is a correspondence between complex functions and Fourier series, and the latter are closely related to sequences of Toeplitz matrices.
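As an illustration of relation (1), the following sketch (with hypothetical matrices A, S, Q chosen purely for the example, and assuming sympy is available) builds B(x) = S A(x) Q(x) and checks a consequence of the definition: det B(x) is a nonzero constant multiple of det A(x).

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical 2x2 polynomial matrix A(x) and transforming matrices,
# chosen only to illustrate relation (1).
A = sp.Matrix([[1, 0], [x, x**2]])   # A(x) in M(2, C[x])
S = sp.Matrix([[2, 1], [0, 3]])      # nonsingular numerical matrix, det S = 6
Q = sp.Matrix([[1, x], [0, 1]])      # invertible over C[x], det Q = 1

B = S * A * Q                        # B(x) = S A(x) Q(x)

# det B(x) = c * det A(x) for a nonzero constant c = det S * det Q.
c = sp.simplify(B.det() / A.det())
print(c)  # 6
```

The constant c here equals det S · det Q, which is exactly the necessary condition on determinants recalled in the Preliminaries below.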
The monographs [3-5] present a wealth of material on Toeplitz and Hankel matrices. The articles [6-8] address modern problems concerning these matrices. The results of this paper may be applied to the solution of matrix equations, which arise in many engineering problems.

Preliminaries
Let A(x), B(x) ∈ M(2, C[x]). If the matrices A(x) and B(x) are semiscalarly equivalent, then it is necessary for them to satisfy the condition det B(x) = c det A(x), c ∈ C, c ≠ 0. If the matrices A(x) and B(x) are of full rank, then, according to [1] (see also [2]), they are semiscalarly equivalent to lower triangular matrices with the invariant multipliers on the main diagonal. Similar results are published in [9, 10]. We may assume, without loss of generality, that the first invariant multipliers of the matrices A(x) and B(x) are identities. Therefore, these matrices can be considered in the triangular form (2), with a(x) and b(x) denoting the respective lower off-diagonal entries. Denote by a^(k)(λ) and b^(k)(λ) the values at x = λ of the k-th derivatives of a(x) and b(x), respectively. The determinant |A(x)| = d(x) is called the characteristic polynomial, and its roots are called the characteristic roots, of the matrix A(x) (similarly for the matrix B(x)). Let us denote by M the set of characteristic roots of the matrix A(x) of the form (2). Now consider the partition (3) of the set M into subsets M_v such that λ, μ ∈ M_v if a(λ) = a(μ). Evidently, any two distinct subsets M_v of the partition (3) are disjoint.
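The partition (3) can be computed mechanically. The sketch below (hypothetical characteristic polynomial d and off-diagonal entry a, chosen for illustration; the grouping criterion follows the reconstruction above) groups the characteristic roots by the value the entry a(x) takes on them.

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical data for a matrix in the triangular form (2):
# characteristic polynomial d(x) and off-diagonal entry a(x).
d = (x - 1)**2 * (x + 2) * (x - 3)   # characteristic polynomial
a = x**2 - 1                          # off-diagonal entry a(x)

M = sp.roots(d)                       # {characteristic root: multiplicity}

# Partition M into classes M_v: two roots fall into the same class
# exactly when a(x) takes the same value on them.
partition = {}
for r in M:
    partition.setdefault(a.subs(x, r), []).append(r)

print(partition)
```

For this data a(1) = 0, a(-2) = 3, a(3) = 8 are pairwise distinct, so every class M_v is a singleton; roots sharing a common value of a would be merged into one class.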
The following two assertions are valid. We deduce from (4) the relations (5) and (6). Setting x = λ and x = μ, λ ≠ μ, λ, μ ∈ M, we obtain the corresponding relations at these points. If a(λ) = a(μ), then the left-hand sides of the resulting relations are equal. Therefore, from the equality of the right-hand sides, taking into account that the factor involved is the same and nonzero (see (5), (6)), we have b(λ) = b(μ). Semiscalar equivalence is a symmetric relation; hence, by a similar argument, b(λ) = b(μ) yields a(λ) = a(μ). This completes the proof.

Proposition 2. Let λ be a characteristic root of multiplicity m_λ of the matrix A(x) of the form (2). Let also k_λ be the lowest order of a nonzero derivative a^(k_λ)(λ) ≠ 0 of the entry a(x) of this matrix at x = λ. Then the number k_λ (as well as m_λ) is an invariant of the class {S A(x) Q(x)} of semiscalarly equivalent matrices, provided k_λ < m_λ.
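The invariant k_λ of Proposition 2 is straightforward to compute. The following sketch (hypothetical entry a(x) and root, assuming sympy) finds the smallest order k ≥ 1 of a nonvanishing derivative of a polynomial at a given point.

```python
import sympy as sp

x = sp.symbols('x')

def lowest_nonzero_derivative_order(f, x0):
    """Smallest k >= 1 with f^(k)(x0) != 0 -- the invariant k of Proposition 2."""
    g = sp.diff(f, x)
    k = 1
    while g.subs(x, x0) == 0:
        g = sp.diff(g, x)
        k += 1
        if g == 0:   # polynomial exhausted: no nonzero derivative remains
            return None
    return k

# Hypothetical entry a(x) with a zero of order 3 at the root x0 = 2,
# so the first two derivatives vanish there and the third does not.
a = (x - 2)**3 * (x + 5)
print(lowest_nonzero_derivative_order(a, 2))  # 3
```

For a root of multiplicity m_λ, Proposition 2 applies when this computed k_λ is strictly smaller than m_λ.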
The Case 2. There is a root λ ∈ M such that k_λ < m_λ, and 2k_λ ≥ m_λ for every root λ ∈ M.
The Case 3. There is a root λ ∈ M such that 2k_λ < m_λ.
Let us now consider each of them separately.

The Case 1
Based on the notation and assumptions introduced above, we formulate the following theorem.
Theorem 3. Let the partition of the set M of characteristic roots of the matrix A(x) of the form (2) be of the form (3), where c = s_11/s_22.
Conversely, the equality (17), after some transformations, can be written in the form (18). Introducing the notation based on the equalities (18), we obtain the system (16). This means that the matrices A(x) and B(x) are semiscalarly equivalent. The Theorem is proved.

The Case 2
In what follows we retain the notation introduced earlier (see, in particular, (13)).
Theorem 4. Let the partition of the set M of characteristic roots of the matrices A(x), B(x) of the form (2) be of the form (3), and let condition (1) hold for every root.

International Journal of Analysis
Proof.
From the congruence (15) we can write the system of equations (22) for every root λ ∈ M such that λ ∉ M_1 and k_λ < m_λ. It is understood that the corresponding coefficients and s_11, s_22 are nonzero. From the first equation of system (22) we find s_12. Substituting it into the second and every succeeding equality, we obtain the required relations. Condition (2) of the Theorem is proved.
As in the proof of Theorem 3, from the congruence (15), by virtue of the substitution x = λ_v, v = 1, 2, . . ., we can easily obtain the equalities (16). By eliminating s_12, we arrive at condition (3) of the Theorem. The necessity of conditions (1)-(3) of the Theorem is proved.
Sufficiency. Suppose that the conditions of the Theorem are satisfied. If we introduce the notation s_11 = c, s_22 = 1, then condition (1) means that the equalities (21) are satisfied for every root λ ∈ M_1 such that k_λ < m_λ. From this the congruence (25) follows immediately, where s_12 is an arbitrary number, for every root λ ∈ M_1 (but not necessarily with k_λ < m_λ).
The equalities (18) follow from condition (3) of the Theorem. If we introduce the notation (19), then from (18) we can obtain the system (16). This means that the congruence (26) holds. Using the notation (19), we can pass from condition (2) of the Theorem to the system of equations (22). The latter system is equivalent to the congruence (27) for every root λ ∉ M_1. Taking the congruence (26) into account, from (27) we actually obtain (28) for every root λ ∉ M_1 (but not necessarily with k_λ < m_λ). Combining (25) with (28), we obtain the congruence (15). We complete the proof of the Theorem in a way analogous to the end of the proof of Theorem 3.

Auxiliary Statements
In the following studies we need Lemmas 5 and 6, which we prove in this section.
If we add the left-hand sides of the equalities (38), and separately their right-hand sides, we obtain (39). Gathering similar terms on both sides of the obtained equality, we obtain (40). It follows from (32) that (41) holds. From this relation it is easy to see that the equality (42) holds. From (31) and the induction hypothesis, we can write (43). Comparing (40), (42), and (43), we obtain the equality (44), that is, Δ_{v+1}(μ) = c^{v+1} Δ_{v+1}(λ), where c = b_0/a_0. The necessity of the conditions of the Lemma is proved.
Assume by induction that (45) satisfies the first n − v + 1 equations of system (32), that is, (47). While proceeding we may assume v > 2; in the opposite case the proof is completely analogous. Taking into account the conditions (30) and the induction assumption, we can write the equalities (42), (43), and (44). From these equalities we obtain the relation (40). This relation implies the equality (39). It is evident that from the second and all following equalities of (47) it follows that the first n − v equalities of (38) are valid. The first n − v equalities of (38), together with the relation (39), yield the last equality of (38). Dividing this equality by the common nonzero factor and performing some simplifications, we can write it in the form (48). This means that (45) is a solution of the (n − v + 2)-th equation of system (32).

We have inductively proved the existence of a nonzero solution (45) of systems (31) and (32).
After calculating the determinant on the left-hand side, we obtain an equation; dividing both sides by a_0 b_0, we obtain the equality (50). Let us denote by Δ_v(λ) and Δ_v(μ) the determinants on the left- and right-hand sides of the equality (50), respectively.
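The determinants Δ_v arise from lower triangular Toeplitz matrices built from the Taylor coefficients of the entries at a characteristic root. The following sketch (hypothetical entry and root; the helper names are our own, and sympy is assumed) shows the shape of such a computation.

```python
import sympy as sp

x = sp.symbols('x')

def taylor_coeffs(f, x0, n):
    """First n Taylor coefficients of f about x = x0."""
    return [sp.diff(f, x, k).subs(x, x0) / sp.factorial(k) for k in range(n)]

def lower_toeplitz_det(c):
    """Determinant of the lower triangular Toeplitz matrix whose first
    column is the coefficient list c -- the shape of the determinants
    Delta_v considered above."""
    n = len(c)
    T = sp.Matrix(n, n, lambda i, j: c[i - j] if i >= j else 0)
    return T.det()

# Hypothetical entry and root, for illustration only.
a = x**3 + 2*x + 1
coeffs = taylor_coeffs(a, 1, 3)   # [a(1), a'(1), a''(1)/2] = [4, 5, 3]
print(lower_toeplitz_det(coeffs)) # 64, i.e. a(1)**3
```

Since the matrix is lower triangular, this particular determinant is just the cube of the constant coefficient; the determinant conditions of the paper compare such Toeplitz determinants built from the two matrices A(x) and B(x).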
This argument inductively proves that the matrix of equation (49) has rank 2. From this it follows that (49) has a nonzero solution. The rest of the Lemma will be proved by contradiction. Let (u_10, u_20, u_30)^T be a nonzero solution of equation (49) in which u_10 = 0 or u_20 = 0. Hence we obtain the corresponding equality.

The Case 3
The notation is the same as in Cases 1 and 2 (in particular, see (13)).
Taking into account the roots λ ∈ M_1 such that 2k_λ < m_λ, and comparing the coefficients of equal powers of the binomial x − λ on both sides of the congruence (15), we obtain a system of equalities, which in matrix form can be written as (73). Eliminating s_12 from these equalities, and considering that the corresponding coefficients of zero order coincide, we obtain the equality (65). This proves condition (2) of the Theorem.
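The step of comparing coefficients of powers of x − λ in a congruence can be sketched as follows (hypothetical entries a, b and a single unknown coefficient s, standing in for an entry of the transforming matrix; sympy assumed). Each Taylor coefficient at the root contributes one linear equation.

```python
import sympy as sp

x, s = sp.symbols('x s')

# Hypothetical congruence s * a(x) = b(x) (mod (x - x0)**m): comparing the
# coefficients of equal powers of (x - x0) on both sides gives a linear
# system in the unknown s.
x0, m = 2, 3
a = 2 + 5*(x - x0) + (x - x0)**2
b = 3*a                           # so the system must force s = 3

eqs = [sp.Eq(s * sp.diff(a, x, k).subs(x, x0),
             sp.diff(b, x, k).subs(x, x0)) for k in range(m)]
sol = sp.linsolve(eqs, [s])
print(sol)
```

In the paper the analogous systems (73), (74) involve several unknowns at once, but the mechanism of extracting one equation per power of x − λ is the same.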
From the congruence (15), for the coefficients of the decompositions of types (69) and (70) of the entries a(x), b(x) in powers of the binomial x − λ, for every root λ ∈ M such that λ ∉ M_1 and k_λ < m_λ, we can write the system (74), where n_λ = min(2k_λ, m_λ). Eliminating s_12 from this system, we obtain (75), or, equivalently, (76), for j = 0, 1, . . . , m_λ − k_λ − 1. Here for j = 0, 1, . . . , k_λ − 1 the equalities (76) hold. Now we multiply the left-hand side of the equality (80) by the left-hand side of the equality (76) at j = 0, and carry out the analogous operation with the right-hand sides. As a result we obtain the equality (66). This proves condition (3) of the Theorem entirely.