The Matrix Linear Unilateral and Bilateral Equations with Two Variables over Commutative Rings

Correspondence should be addressed to V. M. Petrychkovych, vas petrych@yahoo.com. Received 16 January 2012; Accepted 20 February 2012. Academic Editors: I. Cangul, H. Chen, and P. Damianou. Copyright © 2012 N. S. Dzhaliuk and V. M. Petrychkovych. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A method of solving the matrix linear equations AX + BY = C and AX + YB = C over commutative Bezout domains by means of the standard form of a pair of matrices with respect to generalized equivalence is proposed. Formulas for the general solutions of such equations are deduced, and criteria for the uniqueness of their particular solutions are established.


Introduction
Matrix linear equations play a fundamental role in many tasks of control and dynamical systems theory [1–4]. Among such equations are the matrix linear bilateral equations with one and two variables

AX + XB = C, (1.1)
AX + YB = C, (1.2)

and the matrix linear unilateral equation

AX + BY = C. (1.3)

Roth's theorem was extended by many authors to the matrix equations (1.1), (1.2) in the cases where their coefficients are matrices over principal ideal rings [6–8], over arbitrary commutative rings [9], and over other rings [10–14].
The matrix linear unilateral equation (1.3) has a solution if and only if one of the following conditions holds: (a) a greatest common left divisor of the matrices A and B is a left divisor of the matrix C; (b) the block matrices [A B C] and [A B 0] are right equivalent.
In the case where A, B, and C in (1.3) are matrices over a polynomial ring F[λ], where F is a field, these conditions were formulated in [1, 4]. It is not difficult to show that these solvability conditions also hold for the matrix linear unilateral equation (1.3) over a commutative Bezout domain.
The matrix equations (1.1), (1.2), and (1.3), where the coefficients A, B, and C are matrices over a field F, reduce by means of the Kronecker product to equivalent systems of linear equations [15]. Hence (1.1) over an algebraically closed field has a unique solution if and only if the matrices A and −B have no common characteristic roots.
One of the methods of solving a matrix polynomial equation, where A(λ), B(λ), and C(λ) are matrices over a polynomial ring F[λ], is based on its reduction to an equivalent equation with matrix coefficients over the field F, in which A and B are companion matrices of the matrix polynomials A(λ) = Σ_{i=0}^{r} A_i λ^{r−i} and B(λ) = Σ_{j=0}^{s} B_j λ^{s−j}, respectively, D is a matrix over the field F, and Z is an unknown matrix [16, 17]. Equation (1.5) has a unique solution X_0(λ), Y_0(λ) of bounded degree. Feinstein and Bar-Ness [18] established that for (1.5), in which at least one of the matrix coefficients A(λ) or B(λ) is regular, there exists a unique minimal solution. A similar result was established in [19] for the case where at least one of the matrix coefficients A(λ) or B(λ) is regularizable.
In this paper, we propose a method of solving the matrix linear equations (1.2) and (1.3) over a commutative Bezout domain. The method is based on the standard form of a pair of matrices with respect to generalized equivalence, introduced in [20, 21], and on linear congruences. We introduce the notion of a particular solution of such matrix equations, establish criteria for the uniqueness of particular solutions, and write down formulas for the general solutions of these equations.

The Linear Congruences and Diophantine Equations
Let R be a commutative Bezout domain. A commutative domain R is called a Bezout domain if any two elements a, b ∈ R have a greatest common divisor (a, b) = d, d ∈ R, and d = pa + qb for some p, q ∈ R [22, 23]. Note that a commutative domain R is a Bezout domain if and only if every finitely generated ideal of R is principal.
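Over R = Z, the simplest Bezout domain, the coefficients p, q in d = pa + qb can be computed by the extended Euclidean algorithm. A minimal sketch (our added illustration, not part of the paper):

```python
def extended_gcd(a, b):
    """Return (d, p, q) with d = gcd(a, b) and d = p*a + q*b (Bezout identity)."""
    old_r, r = a, b
    old_p, p = 1, 0
    old_q, q = 0, 1
    while r != 0:
        t = old_r // r
        old_r, r = r, old_r - t * r
        old_p, p = p, old_p - t * p
        old_q, q = q, old_q - t * q
    return old_r, old_p, old_q

d, p, q = extended_gcd(9, 6)
print(d, p, q)  # d = 3 with 3 = p*9 + q*6
```

The same invariant-maintaining loop works verbatim in any Euclidean ring; for a general Bezout domain the coefficients exist but need not come from a division algorithm.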
Further, U(R) denotes the group of units of R, and R(m) denotes a complete set of residues modulo the ideal (m) generated by an element m ∈ R, that is, a complete set of residues modulo m. An element a of R is said to be an associate of an element b of R if a = bu, where u ∈ U(R). A set of elements of R, one from each associate class, is called a complete set of nonassociates, which we denote by R̄ [24]. For example, if R = Z is the ring of integers, then Z̄ can be chosen as the set of nonnegative integers, that is, Z̄ = {0, 1, 2, ...}, and Z(m) can be chosen as the set of smallest nonnegative residues, that is, Z(m) = {0, 1, 2, ..., m − 1}.
Many properties of divisibility in principal ideal rings [24–27] can easily be generalized to commutative Bezout domains. We recall those that will be used later.
In what follows, R will always denote a commutative Bezout domain. In formula (1.7) of Lemma 1.2, the union is taken over all residues r of an arbitrary complete set of residues R(d) modulo d, where d ≠ 0.
In the case where R is a Euclidean ring, this lemma was proved in [27]. In the same way, the lemma can be proved in the case where R is a commutative Bezout domain.
The class of elements x ≡ x_0 (mod m) satisfying the congruence ax ≡ b (mod m) is called a solution of this congruence.
Suppose that d | b, and write a = a_1 d, b = b_1 d, m = m_1 d, so that (a_1, m_1) = 1. Let x ≡ x_0 (mod m_1) be a solution of the congruence

a_1 x ≡ b_1 (mod m_1). (1.9)

Then the general solution of congruence (1.8) has the form x ≡ x_0 + m_1 r (mod m), where r is any element of R(d).
Proof. Necessity is obvious.
Sufficiency. Let (a, m) = d and d | b. Dividing both sides of congruence (1.8) and the modulus m by d, we get congruence (1.9), where (a_1, m_1) = 1. There exist elements u, v of R such that a_1 u + m_1 v = 1; thus a_1 u ≡ 1 (mod m_1). Multiplying both sides of this congruence by b_1 ≠ 0, we get a_1 u b_1 ≡ b_1 (mod m_1). Therefore, x ≡ u b_1 (mod m_1) is a solution of congruence (1.9). Set u b_1 = x_0. Then by Lemma 1.2 we get the general solution of congruence (1.8): x ≡ x_0 + m_1 r (mod m), where r is an arbitrary element of R(d). This proves the lemma.

Let (x_0, y_0) be a solution of (1.12), that is, x_0 is a solution of the congruence a_1 x ≡ c_1 (mod b_1) and y_0 = (c_1 − a_1 x_0)/b_1. The solution (x_0, y_0) of (1.12) is obviously a solution of (1.11).

Example 1.7. Let

9x + 6y = 6 (1.14)

be a linear Diophantine equation over the ring Z. Then (9, 6) = 3 = d, d | 6, and (1.14) is solvable. The particular solutions of (1.14) are

(x_0, y_0) ∈ {(0, 1), (2, −2), (4, −5)}.

The general solution of (1.14) can then be written using (1.13):

x = 2r + 6k, y = 1 − 3r − 9k,

where r is an arbitrary element of Z(3) = {0, 1, 2} and k is any element of Z.
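For R = Z the notions above can be checked directly. The following sketch (our added illustration, not from the paper) enumerates the particular solutions of ax + by = c in the sense of Definition 1.5, with x_0 running over Z(b) = {0, ..., b − 1}, and spot-checks the general-solution formula (1.13) on Example 1.7:

```python
from math import gcd

def particular_solutions(a, b, c):
    """All particular solutions (x0, y0) of a*x + b*y = c with x0 in {0, ..., b-1}."""
    return [(x0, (c - a * x0) // b)
            for x0 in range(b)
            if (c - a * x0) % b == 0]

a, b, c = 9, 6, 6                      # Example 1.7: (9, 6) = 3 divides 6
d = gcd(a, b)
sols = particular_solutions(a, b, c)
print(sols)                            # [(0, 1), (2, -2), (4, -5)] -- exactly d = 3 of them

# general solution (1.13): x = x0 + (b//d)*r + b*k, y = y0 - (a//d)*r - a*k
x0, y0 = sols[0]
for r in range(d):
    for k in range(-2, 3):
        x = x0 + (b // d) * r + b * k
        y = y0 - (a // d) * r - a * k
        assert a * x + b * y == c
```

With r ranging over Z(3) and k over a few integers, the loop confirms that every pair produced by (1.13) satisfies the equation.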

Standard Form of a Pair of Matrices
Let R be a commutative Bezout domain with diagonal reduction of matrices [28], that is, for every matrix A of the ring of matrices M(n, R), there exist invertible matrices U, V ∈ GL(n, R) such that UAV = D_A = diag(ϕ_1, ..., ϕ_n) is a diagonal matrix.
In [20, 21], standard forms of a pair of matrices with respect to generalized equivalence were established.

Theorem 1.9. Let R be an adequate ring, let A, B ∈ M(n, R) be nonsingular matrices, and let D_A = diag(ϕ_1, ..., ϕ_n) and D_B = diag(ψ_1, ..., ψ_n) be their canonical diagonal forms.
Then the pair of matrices (A, B) is generalized equivalent to the pair (D_A, T_B), where T_B is a lower triangular matrix of the form (1.20). The pair (D_A, T_B) defined in Theorem 1.9 is called the standard form of the pair of matrices (A, B), or the standard pair of the matrices A, B.
Definition 1.10. The pair (A, B) is called diagonalizable if it is generalized equivalent to a pair of diagonal matrices (D_A, D_B), that is, if its standard form is a pair of diagonal matrices. Taking Corollary 1.11 into account, it is clear that if (det A, det B) = 1, then the standard form of the pair (A, B) is the pair of diagonal matrices (D_A, D_B).
Let us formulate a criterion for the diagonalizability of a pair of matrices.
Theorem 1.12. Let A, B ∈ M(n, R), and let A be a nonsingular matrix. Then the pair of matrices (A, B) is generalized equivalent to the pair of diagonal matrices (D_A, D_B) if and only if the matrices (adj A)B and (adj D_A)D_B are equivalent, where adj A denotes the adjugate matrix of A.
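Over R = Z, the canonical diagonal form D_A can be computed by elementary row and column operations. The following self-contained sketch (our added illustration; the paper works over general rings with diagonal reduction) returns the Smith normal form of a small integer matrix, and could, for instance, be used to test the condition of Theorem 1.12 by comparing the forms of (adj A)B and (adj D_A)D_B:

```python
def smith_normal_form(M):
    """Smith normal form of an integer matrix: diagonal with d1 | d2 | ...,
    obtained by elementary row/column operations (transforms not tracked)."""
    A = [row[:] for row in M]
    n, m = len(A), len(A[0])
    for t in range(min(n, m)):
        pivot = next(((i, j) for i in range(t, n) for j in range(t, m) if A[i][j]), None)
        if pivot is None:
            break                       # trailing submatrix is zero
        A[t], A[pivot[0]] = A[pivot[0]], A[t]
        for row in A:
            row[t], row[pivot[1]] = row[pivot[1]], row[t]
        while True:
            for i in range(t + 1, n):   # clear column t by Euclidean row steps
                while A[i][t]:
                    q = A[t][t] // A[i][t]
                    A[t] = [x - q * y for x, y in zip(A[t], A[i])]
                    A[t], A[i] = A[i], A[t]
            for j in range(t + 1, m):   # clear row t by Euclidean column steps
                while A[t][j]:
                    q = A[t][t] // A[t][j]
                    for row in A:
                        row[t] -= q * row[j]
                    for row in A:
                        row[t], row[j] = row[j], row[t]
            if any(A[i][t] for i in range(t + 1, n)):
                continue                # column ops disturbed the cleared column
            bad = next(((i, j) for i in range(t + 1, n) for j in range(t + 1, m)
                        if A[i][j] % A[t][t]), None)
            if bad is None:
                break                   # pivot divides the whole trailing submatrix
            A[t] = [x + y for x, y in zip(A[t], A[bad[0]])]
        if A[t][t] < 0:
            A[t] = [-x for x in A[t]]
    return A

print(smith_normal_form([[2, 0], [0, 3]]))   # [[1, 0], [0, 6]]
```

The row-addition step enforces the divisibility chain ϕ_1 | ϕ_2 | ... required for uniqueness of the canonical diagonal form.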

The Construction of the Solutions of the Matrix Linear Unilateral Equations with Two Variables
Suppose that the matrix linear unilateral equation (1.3) is solvable, and let (D_A, T_B) be the standard form of the pair of matrices (A, B) from (1.3) with respect to generalized equivalence, that is, D_A = U A V_A is the canonical diagonal form of A and T_B = U B V_B is a lower triangular matrix of the form (1.20) with principal diagonal (ψ_1, ..., ψ_n), where U, V_A, V_B ∈ GL(n, R). Then (1.3) is equivalent to the equation

D_A X′ + T_B Y′ = C′, (2.3)

where X′ = V_A^{−1} X, Y′ = V_B^{−1} Y, and C′ = UC. A pair of matrices (X′_0, Y′_0) satisfying (2.3) is called a solution of this equation; then X_0 = V_A X′_0, Y_0 = V_B Y′_0 is a solution of (1.3). The matrix equation (2.3) is equivalent to the system of linear equations (2.6). Using the solutions of system (2.6), we construct the solutions X′, Y′ of matrix equation (2.3); then X = V_A X′ and Y = V_B Y′ are the solutions of matrix equation (1.3).

The General Solution of the Matrix Equation AX + BY = C with a Diagonalizable Pair of Matrices (A, B)
Suppose that the pair of matrices (A, B) is diagonalizable, that is, its standard pair is the pair of diagonal matrices (Φ, Ψ) of the form (2.8). Then (1.3) is equivalent to the equation

Φ X′ + Ψ Y′ = C′, (2.9)

where X′ = V_A^{−1} X, Y′ = V_B^{−1} Y, and C′ = UC. From matrix equation (2.9), we get the system of linear Diophantine equations

ϕ_i x_ij + ψ_i y_ij = c_ij, i, j = 1, ..., n. (2.10)

Let (x^0_ij, y^0_ij), i, j = 1, ..., n, be a particular solution of the corresponding equation of system (2.10), that is, ϕ_i x^0_ij + ψ_i y^0_ij = c_ij. By formula (1.13), the general solution of the corresponding equation of system (2.10) has the form

x_ij = x^0_ij + (ψ_i / d_ii) r_ij + ψ_i k_ij, y_ij = y^0_ij − (ϕ_i / d_ii) r_ij − ϕ_i k_ij,

where d_ii = (ϕ_i, ψ_i), the r_ij are arbitrary elements of R(d_ii), and the k_ij are any elements of R, i, j = 1, ..., n. The particular solution of matrix equation (2.9) is X′_0 = [x^0_ij], Y′_0 = [y^0_ij], where (x^0_ij, y^0_ij) is a particular solution of the corresponding equation of system (2.10). Then X_0 = V_A X′_0, Y_0 = V_B Y′_0 is a particular solution of matrix equation (1.3). Thus we get the following theorem.
Theorem 2.1. Let the pair of matrices (A, B) of matrix equation (1.3) be diagonalizable, and let its standard pair be the pair of matrices (Φ, Ψ) of the form (2.8). Let (X_0, Y_0) be a particular solution of matrix equation (2.9). Then the general solution of matrix equation (2.9) is obtained by applying formula (1.13) to each equation of system (2.10).

The general solution of matrix equation (1.3) has the form X = V_A X′, Y = V_B Y′, where (X′, Y′) is the general solution of matrix equation (2.9).

From matrix equation (2.22), we obtain the system of linear Diophantine equations (2.24). The particular solution of each linear equation of system (2.24) has the form (2.25), and the particular solution of matrix equation (2.22) is (2.29).
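For a diagonalizable pair over R = Z, the construction above reduces to solving ϕ_i x_ij + ψ_i y_ij = c_ij entry by entry. A sketch of that reduction (our added illustration; the matrices phi, psi, C below are made-up data, not the paper's example):

```python
from math import gcd

def bezout(a, b):
    """Return (u, v) with u*a + v*b = gcd(a, b)."""
    if b == 0:
        return (1 if a >= 0 else -1), 0
    u, v = bezout(b, a % b)
    return v, u - (a // b) * v

def solve_diag_unilateral(phi, psi, C):
    """Particular solution (X, Y) of diag(phi)*X + diag(psi)*Y = C over Z.

    Row i of the matrix equation yields the scalar Diophantine equations
    phi[i]*x_ij + psi[i]*y_ij = c_ij, solved independently of one another."""
    n = len(phi)
    X = [[0] * n for _ in range(n)]
    Y = [[0] * n for _ in range(n)]
    for i in range(n):
        u, v = bezout(phi[i], psi[i])
        d = gcd(phi[i], psi[i])
        for j in range(n):
            if C[i][j] % d:
                raise ValueError("no solution: gcd(phi_i, psi_i) must divide c_ij")
            X[i][j] = u * (C[i][j] // d)
            Y[i][j] = v * (C[i][j] // d)
    return X, Y

phi, psi = [2, 6], [3, 4]
C = [[7, 1], [2, 10]]
X, Y = solve_diag_unilateral(phi, psi, C)
assert all(phi[i] * X[i][j] + psi[i] * Y[i][j] == C[i][j]
           for i in range(2) for j in range(2))
```

The general solution is then obtained from this particular one by formula (1.13) applied entrywise.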

The Uniqueness of Particular Solutions of the Matrix Linear Unilateral Equation
Conditions for the uniqueness of solutions of bounded degree (minimal solutions) of the matrix linear polynomial equations (1.5) were found in [16–19]. We present conditions for the uniqueness of particular solutions of matrix linear equations over a commutative Bezout domain R.
Theorem 2.3. The matrix equation (2.3) has a unique particular solution of the form (2.30) if and only if (det A, det B) = 1.

Proof. From matrix equation (2.3), we get the system of linear equations (2.6). The solving of this system reduces to the successive solving of linear Diophantine equations of the form (2.7). The matrix equation (2.3) has a unique particular solution X_0 = [x^0_ij], Y_0 = [y^0_ij] precisely when each of these linear Diophantine equations has a unique particular solution.
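Over R = Z, the uniqueness criterion can be observed numerically: each scalar equation ϕ_i x + ψ_i y = c contributes (ϕ_i, ψ_i) particular solutions, so the particular solution of the matrix equation is unique exactly when all these gcds equal 1, which for Smith-form diagonals amounts to (det A, det B) = 1. A small counting sketch (our added illustration with made-up data):

```python
from math import gcd

def count_particular_solutions(phi, psi, C):
    """Number of particular solutions of diag(phi)*X + diag(psi)*Y = C over Z,
    each x_ij restricted to the complete residue set {0, ..., psi[i]-1}."""
    total = 1
    n = len(phi)
    for i in range(n):
        d = gcd(phi[i], psi[i])
        for j in range(n):
            if C[i][j] % d:
                return 0          # the (i, j) scalar equation is unsolvable
            total *= d            # d admissible residues x_ij per equation
    return total

# brute-force check of the scalar count: 9x + 6y = 6 has gcd(9, 6) = 3
# particular solutions with x0 in {0, ..., 5}
assert sum((6 - 9 * x0) % 6 == 0 for x0 in range(6)) == 3

print(count_particular_solutions([2, 6], [3, 4], [[7, 1], [2, 10]]))  # 1*1*2*2 = 4
```

The count is 1 precisely when every diagonal pair is coprime, matching Corollary 1.6 entrywise.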

The Matrix Linear Bilateral Equation AX + YB = C
Consider the matrix linear bilateral equation (1.2), where A, B, and C are matrices over a commutative Bezout domain R, and let D_A = U_A A V_A = diag(ϕ_1, ..., ϕ_n) and D_B = U_B B V_B = diag(ψ_1, ..., ψ_n) be the canonical diagonal forms of the matrices A and B, respectively. Then (1.2) is equivalent to the equation

D_A X′ + Y′ D_B = C′, (3.2)

where X′ = V_A^{−1} X V_B, Y′ = U_A Y U_B^{−1}, and C′ = U_A C V_B. Such an approach to solving (1.2), where A, B, and C are matrices over a polynomial ring F[λ], F a field, was applied in [3].
Equation (3.2) is equivalent to the system of linear Diophantine equations

ϕ_i x_ij + ψ_j y_ij = c_ij, i, j = 1, ..., n. (3.3)

The general solution of matrix equation (3.2) is

x_ij = x^0_ij + (ψ_j / d_ij) r_ij + ψ_j k_ij, y_ij = y^0_ij − (ϕ_i / d_ij) r_ij − ϕ_i k_ij, (3.9)

where d_ij = (ϕ_i, ψ_j), the r_ij are arbitrary elements of R(d_ij), and the k_ij are any elements of R, i, j = 1, ..., n.
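Over R = Z, system (3.3) can likewise be solved entry by entry; unlike the unilateral case, the relevant gcd now varies with both indices, d_ij = (ϕ_i, ψ_j). A sketch with made-up diagonal forms (our added illustration):

```python
from math import gcd

def bezout(a, b):
    """Return (u, v) with u*a + v*b = gcd(a, b)."""
    if b == 0:
        return (1 if a >= 0 else -1), 0
    u, v = bezout(b, a % b)
    return v, u - (a // b) * v

def solve_diag_bilateral(phi, psi, C):
    """Particular solution (X, Y) of diag(phi)*X + Y*diag(psi) = C over Z.

    Entry (i, j) yields phi[i]*x_ij + psi[j]*y_ij = c_ij, as in system (3.3)."""
    n = len(phi)
    X = [[0] * n for _ in range(n)]
    Y = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d = gcd(phi[i], psi[j])
            if C[i][j] % d:
                raise ValueError("no solution: (phi_i, psi_j) must divide c_ij")
            u, v = bezout(phi[i], psi[j])
            X[i][j] = u * (C[i][j] // d)
            Y[i][j] = v * (C[i][j] // d)
    return X, Y

phi, psi = [1, 4], [2, 3]
C = [[5, 7], [6, 9]]
X, Y = solve_diag_bilateral(phi, psi, C)
assert all(phi[i] * X[i][j] + psi[j] * Y[i][j] == C[i][j]
           for i in range(2) for j in range(2))
```

The solvability check mirrors the scalar criterion of Lemma 1.3: the equation is solvable exactly when d_ij | c_ij for all i, j.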

Lemma 1.2. Each residue class a (mod m) over R can be represented as the union

a (mod m) = ∪_{r ∈ R(d)} (a + mr) (mod md), (1.7)

where the union is taken over all residues r of an arbitrary complete set of residues R(d) modulo d, d ≠ 0.

Lemma 1.3. Let

ax ≡ b (mod m), (1.8)

and let (a, m) = d, where a, b, m, d ∈ R. Congruence (1.8) has a solution if and only if d | b.

Corollary 1.4. The congruence (1.8) has a unique solution x ≡ x_0 (mod m) with x_0 ∈ R(m) if and only if (a, m) = 1.

Let

ax + by = c (1.11)

be a linear Diophantine equation over R, and let (a, b) = d. Equation (1.11) has a solution if and only if d | c. Suppose that a = a_1 d, b = b_1 d, and c = c_1 d, where (a_1, b_1) = 1. Then (1.11) implies

a_1 x + b_1 y = c_1. (1.12)

Definition 1.5. A solution (x_0, y_0) of (1.11) such that x_0 ∈ R(b) is called a particular solution of this equation. By Lemmas 1.2 and 1.3, the general solution of (1.11) has the form

x = x_0 + b_1 r + bk, y = y_0 − a_1 r − ak, (1.13)

where r is an arbitrary element of R(d) and k is any element of R.

Corollary 1.6. A particular solution (x_0, y_0) of (1.11) with x_0 ∈ R(b) is unique if and only if (a, b) = 1.
Here (X_0, Y_0) = ([x^0_ij], [y^0_ij]) is a particular solution of matrix equation (3.2), that is, (x^0_ij, y^0_ij), i, j = 1, ..., n, are particular solutions of the linear Diophantine equations of system (3.3).
Roth [5] established criteria for the solvability of the matrix equations (1.1), (1.2) whose coefficients A, B, and C are matrices over a field F. In (1.1), (1.2), and (1.3), A, B, and C are matrices of appropriate sizes over a field F or over a ring R, and X, Y are unknown matrices. Equations (1.1) and (1.2) are called Sylvester equations. The equation AX + XA^T = C, where the matrix A^T is the transpose of A, is called the Lyapunov equation; it is a special case of the Sylvester equation. Equation (1.3) is called the matrix linear Diophantine equation [3, 4]. The matrix equation (1.1), where A, B, and C are matrices with elements in a field F, has a solution X with elements in F if and only if the block matrices

M = [ A C ; 0 −B ], N = [ A 0 ; 0 −B ]

are similar. The matrix equation (1.2), where A, B, and C are matrices with elements in F, has a solution X, Y with elements in F if and only if the matrices M and N are equivalent.
If ϕ_i | ϕ_{i+1}, i = 1, ..., n − 1, then the matrix D_A is unique and is called the canonical diagonal form (Smith normal form) of the matrix A. Such rings include the so-called adequate rings. A ring R is called adequate if R is a commutative domain in which every finitely generated ideal is principal and, for every a, b ∈ R with a ≠ 0, the element a can be represented as a = cd, where (c, b) = 1 and (d_i, b) ≠ 1 for every nonunit factor d_i of d [29]. The pairs (A_1, A_2) and (B_1, B_2) of matrices are called generalized equivalent if A_i = U B_i V_i, i = 1, 2, for some invertible matrices U and V_i over R.
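In Z the adequate decomposition a = cd relative to b is easy to compute: repeatedly strip from a its common part with b. A sketch (our added illustration; over Z this always succeeds, which is one way to see that Z is adequate):

```python
from math import gcd

def adequate_split(a, b):
    """Write a = c*d with gcd(c, b) = 1 and every nonunit factor of d
    sharing a common divisor with b (adequate-ring decomposition over Z).
    Assumes a != 0."""
    c = abs(a)
    while (g := gcd(c, b)) != 1:
        c //= g                 # remove primes shared with b from c
    return c, a // c            # d = a / c carries exactly those primes

c, d = adequate_split(12, 2)
print(c, d)  # 3 4 -- c = 3 is prime to 2; every nonunit factor of d = 4 is even
```

Each prime of d divides b by construction, so no nonunit factor of d is coprime to b, which is precisely the defining property quoted above.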
In this case the pair of matrices (A, B) is diagonalizable, and (1.3) yields an equation of the form (2.9). Thus, by Theorem 2.1, we get formula (2.31) for the general solution of (2.3) and formula (2.32) for the computation of the general solution of (1.3) in the case where (2.3) has a unique particular solution of the form (2.30). The theorem is proved.
Proof. The particular solution of the form (2.30) of (2.3) is unique if and only if (det D_A, det T_B) = 1, that is, (det A, det B) = 1. Then by Corollary 1.11 the pair of matrices (A, B) is diagonalizable.

In formula (3.9), the r_ij and k_ij are arbitrary elements, i, j = 1, ..., n. Similarly as for (2.3), we prove that the particular solution of (3.2) is unique if and only if (det Φ, det Ψ) = 1. Then, in the same way as for (1.3), we write down the general solution of matrix equation (1.2). Here X_0 = [x^0_ij] with x^0_ij ∈ R(ψ_j), i, j = 1, ..., n, is the unique particular solution of matrix equation (3.2), and

D_A = Φ = diag(ϕ_1, ..., ϕ_n), D_B = Ψ = diag(ψ_1, ..., ψ_n) (3.7)

are the canonical diagonal forms of the matrices A and B from matrix equation (1.2), respectively. The general solution of matrix equation (3.2) is then given by (3.9), and the general solution of matrix equation (1.2) is X = V_A X′ V_B^{−1}, Y = U_A^{−1} Y′ U_B, where (X′, Y′) is the general solution of (3.2).