On the Necessary and Sufficient Condition for a Set of Matrices to Commute and Some Further Linked Results

This paper investigates the necessary and sufficient condition for a set of real or complex matrices to commute. It is proved that the commutator [A, B] = 0 for two matrices A and B if and only if a vector v(B), defined uniquely from the matrix B, is in the null space of a well-structured matrix defined as the Kronecker sum A ⊕ (−A^*), which is always rank defective. This result extends directly to any countable set of commuting matrices. Complementary results are derived concerning the commutators of certain matrices with functions of matrices f(A), which extend the well-known sufficiency-type commuting result [A, f(A)] = 0.


Introduction
The problem of commuting operators, and matrices in particular, is relevant to a significant number of problems in several branches of science, which are very often mutually linked and are cited hereinafter.
(1) In several fields of interest in Applied Mathematics or Linear Algebra [1-22], including Fourier transform theory; graph theory, where, for instance, the commutativity of the adjacency matrices is relevant [1, 17-19, 21-35]; and Lyapunov stability theory, with conditional and unconditional stability of switched dynamic systems involving discrete systems, delayed systems, and hybrid systems, where a wide class of topics is covered, including the corresponding adaptive versions with estimation schemes (see, e.g., [23-41]). Generally speaking, linear operators, and matrices in particular, which commute share some common eigenspaces. On the other hand, a known mathematical result is that two graphs with the same vertex set commute if their adjacency matrices commute [16]. Graphs are abstract representations of sets of objects (vertices) where some pairs of them are connected by links (arcs/edges). Graphs are often used to describe the behavior of multiconfiguration switched systems, whose associated research is inspired by Quantum Mechanics principles. There are also other relevant basic scientific applications of commuting operators. For instance, the symmetry operators in the point group of a molecule always commute with its Hamiltonian operator [20]. The problem of commuting matrices is also relevant to the analysis of normal modes in dynamic systems and to the discussion of commuting matrices dependent on a parameter (see, e.g., [2, 3]).
It is well known that commuting matrices have at least a common eigenvector and also a common generalized eigenspace [4, 5]. A less restrictive problem of interest in the above context is that of almost commuting matrices, where, roughly speaking, the norm of the commutator is sufficiently small [5, 6]. A very relevant related result is that the sum of matrices which commute is an infinitesimal generator of a C_0-semigroup. This leads to a well-known result in Systems Theory establishing that the matrix function e^{A_1 t_1 + A_2 t_2} = e^{A_1 t_1} e^{A_2 t_2} is a fundamental (or state transition) matrix for the cascade of the time-invariant differential systems ẋ_1(t) = A_1 x_1(t), operating over a time t_1, and ẋ_2(t) = A_2 x_2(t), operating over a time t_2, provided that A_1 and A_2 commute (see, e.g., [7-11]). Most of the abundant existing research concerning sets of commuting operators in general, and matrices in particular, is based on the assumption that such sets exist, which implies that each pair of mutual commutators is zero. There is a gap in giving complete conditions guaranteeing that such commutators within the target set are zero. This paper formulates the necessary and sufficient condition for any countable set of (real or complex) matrices to commute. The sequence of obtained results is as follows. Firstly, the commutation of two real matrices is investigated in Section 2. The necessary and sufficient condition for two matrices to commute is that a vector defined uniquely from the entries of either of the two given matrices belongs to the null space of the Kronecker sum of the other matrix and its minus transpose. The above result allows a simple algebraic characterization and computation of the set of matrices commuting with a given one. It also exhibits counterparts for the necessary and sufficient condition for two matrices not to commute. The results are then extended to the necessary and sufficient condition for commutation of any set of real matrices in Section 3.
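The semigroup factorization recalled above can be illustrated numerically. The following sketch (using NumPy and SciPy, not part of the paper's formal development) builds a pair of commuting matrices by taking A_2 as a polynomial in A_1 and checks the factorization of the state transition matrix:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

rng = np.random.default_rng(5)
A1 = rng.standard_normal((3, 3))
A2 = A1 @ A1 - 2.0 * A1          # a polynomial in A1, hence [A1, A2] = 0
t1, t2 = 0.3, 0.7

# For commuting A1, A2 the transition matrix factorizes:
# e^{A1 t1 + A2 t2} = e^{A1 t1} e^{A2 t2}
lhs = expm(A1 * t1 + A2 * t2)
rhs = expm(A1 * t1) @ expm(A2 * t2)
print(np.allclose(lhs, rhs))  # True
```

For noncommuting A_1, A_2 the same check generically fails, which is the Systems Theory motivation given above.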
In Section 4, the previous results are extended directly to the case of complex matrices in two very simple ways, namely, either by decomposing the associated algebraic system of complex matrices into two real ones or by manipulating it directly as a complex algebraic system of equations. Basically, the results for the real case extend directly by replacing transposes with conjugate transposes. Finally, further results concerning the commutators of matrices with matrix functions are also discussed in Section 4. The proofs of the main results of Sections 2, 3, and 4 are given in the corresponding Appendices A, B, and C. It may be pointed out that the main result carries the following implicit duality: since a necessary and sufficient condition for a set of matrices to commute is formulated and proven, the necessary and sufficient condition for a set of matrices not to commute is simply the failure of the former condition to hold.

Notation
[A, B] is the commutator of the square matrices A and B.
A ⊗ B := [a_ij B] is the Kronecker or direct product of A := [a_ij] and B. A ⊕ B := A ⊗ I_n + I_n ⊗ B is the Kronecker sum of the square matrices A := [a_ij] and B, both of order n, where I_n is the nth identity matrix.
A^T is the transpose of the matrix A, and A^* is the conjugate transpose of the complex matrix A. For any matrix A, Im A and Ker A are its associated range (or image) subspace and null space, respectively. Also, rank A is the rank of A, which is the dimension of Im A, and det A is the determinant of the square matrix A.
v(A) := (a_1^T, a_2^T, ..., a_n^T)^T ∈ C^{n^2}, where a_i^T := (a_{i1}, a_{i2}, ..., a_{in}) is the ith row of the square matrix A. σ(A) is the spectrum of A; n := {1, 2, ..., n}. If λ_i ∈ σ(A), then there exist positive integers μ_i and ν_i ≤ μ_i which are, respectively, its algebraic and geometric multiplicities, that is, the number of times it is repeated in the characteristic polynomial of A and the number of its associated Jordan blocks, respectively. The integer μ ≤ n is the number of distinct eigenvalues, and the integer m_i, subject to 1 ≤ m_i ≤ μ_i, is the index of λ_i ∈ σ(A), ∀i ∈ μ, that is, its multiplicity in the minimal polynomial of A.
A ∼ B denotes a similarity transformation from A to B = T^{-1}AT for given A, B ∈ R^{n×n} and some nonsingular T ∈ R^{n×n}. A ≈ B = EAF means that there is an equivalence transformation for given A, B ∈ R^{n×n} and some nonsingular E, F ∈ R^{n×n}.
A linear transformation from R n to R n , represented by the matrix T ∈ R n×n , is denoted identically to such a matrix in order to simplify the notation.
The symbols "∧" and "∨" stand for logical conjunction and disjunction, respectively. The abbreviation "iff" stands for "if and only if." The notation card U stands for the cardinality of the set U. C(A) (resp., C̄(A)) is the set of matrices which commute (resp., do not commute) with a matrix A. C(A) (resp., C̄(A)) is the set of matrices which commute (resp., do not commute) with all square matrices A_i belonging to a given set A.
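The Kronecker product, Kronecker sum, and vectorization notation above can be checked numerically. The following NumPy sketch is only an illustration; `kron_sum` and `v` are helper names, not notation from the paper:

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B := A ⊗ I_n + I_n ⊗ B for n-square A and B."""
    n = A.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)

def v(A):
    """Row-stacking vectorization v(A) = (a_1^T, ..., a_n^T)^T."""
    return A.flatten()  # NumPy's default row-major order matches v(·)

A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])

# The spectrum of A ⊕ B consists of all pairwise sums λ_i(A) + λ_j(B):
eigs = sorted(np.linalg.eigvals(kron_sum(A, B)).real)
sums = sorted(la + lb for la in np.linalg.eigvals(A).real
                      for lb in np.linalg.eigvals(B).real)
print(np.allclose(eigs, sums))  # True
```

This spectral property of the Kronecker sum is the one exploited later for the matrix A ⊕ (−A^T).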

Results Concerning the Sets of Commuting and Noncommuting Matrices with a Given One
Consider the sets C(A) := {X ∈ R^{n×n} : [A, X] = 0} ≠ ∅ of matrices which commute with A, and C̄(A) := {X ∈ R^{n×n} : [A, X] ≠ 0} of matrices which do not commute with A, ∀A ∈ R^{n×n}. Note that 0 ∈ R^{n×n} ∩ C(A); that is, the zero n-matrix commutes with any n-matrix, so that, equivalently, 0 ∉ R^{n×n} ∩ C̄(A), and then C(A) ∩ C̄(A) = ∅, ∀A ∈ R^{n×n}. The two basic results which follow are concerned with the commutation and noncommutation of two real matrices A and X. The tool used relies on the calculation of the null space and the range space of the Kronecker sum of one of the matrices, A, with its minus transpose. A vector built from all the entries of the other matrix X has to belong to one of the above spaces for A and X to commute, and to the other one in order for A and X not to commute.
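As an illustrative numerical check of this criterion (a NumPy sketch; `kron_sum` and `v` are helper names, not the paper's notation), [A, X] = 0 holds exactly when v(X) lies in the null space of A ⊕ (−A^T):

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B := A ⊗ I_n + I_n ⊗ B."""
    n = A.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(n), B)

def v(X):
    """Row-stacking vectorization of X."""
    return X.flatten()

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = A @ A + 2.0 * A + np.eye(3)   # a polynomial in A, so [A, X] = 0

K = kron_sum(A, -A.T)             # the matrix A ⊕ (−A^T)

# [A, X] = 0 exactly when v(X) ∈ Ker(A ⊕ (−A^T)):
print(np.allclose(A @ X - X @ A, 0))  # True
print(np.allclose(K @ v(X), 0))       # True

Y = rng.standard_normal((3, 3))       # a generic Y does not commute with A
print(np.allclose(K @ v(Y), 0))       # False
```

With the row-stacking v(·), one has v(AX − XA) = (A ⊕ (−A^T)) v(X), which is why the same matrix K tests commutation for every candidate X.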
Note that, according to Proposition 2.1, the set C(A) of matrices which commute with the square matrix A and its complement C̄(A) (i.e., the set of matrices which do not commute with A) can be redefined equivalently by using their given expanded vector forms.

Proposition 2.2. One has
Then, Proposition 2.2 has been proved.
The subsequent mathematical result is stronger than Proposition 2.2 and is based on a characterization of the spectrum and eigenspaces of A ⊕ (−A^T).

Theorem 2.3. The following properties hold.
(i) The spectrum of A ⊕ (−A^T) is σ(A ⊕ (−A^T)) = {λ_i − λ_j : ∀i, j ∈ n}, and A ⊕ (−A^T) possesses ν Jordan blocks in its Jordan canonical form, subject to the constraints n^2 ≥ ν ≥ dim S ≥ ∑_{i=1}^{μ} ν_i^2 ≥ ν_0; moreover, 0 ∈ σ(A ⊕ (−A^T)) with an algebraic multiplicity μ_0 and a geometric multiplicity ν_0 subject to the above constraints, where
(a) S := span{z_i ⊗ x_j : ∀i, j ∈ n}, and μ_i = μ(λ_i) and ν_i = ν(λ_i) are, respectively, the algebraic and the geometric multiplicities of λ_i ∈ σ(A);
(b) x_j and z_i are, respectively, the right eigenvectors of A and A^T with respective associated eigenvalues λ_j and λ_i, ∀i, j ∈ n.

respectively, the algebraic and the geometric multiplicity of
(ii) One has

Mathematical Problems in Engineering
Expressions which calculate the sets of matrices which commute and which do not commute with a given one are obtained in the subsequent result.
(i) One has, where E, F ∈ R^{n^2×n^2} are permutation matrices, and X̄ ∈ R^{n×n} and v(X̄) ∈ R^{n^2} are defined as follows.
(a) One has
(ii) X ∈ C̄(A), for any given A ≠ 0 ∈ R^{n×n}, if and only if
Also,

(2.9)
Also, with the same definitions of E and F, and where v(X̄_2) is any solution of the compatible algebraic system for some M ≠ 0 ∈ R^{n×n}, where X̄, M̄ ∈ R^{n×n} are defined according to v(X) = F v(X̄) and

Results Concerning Sets of Pairwise Commuting Matrices
Consider the following sets.
(1) A set A_C := {A_i ∈ R^{n×n} : i ∈ p} of p ≥ 2 nonzero distinct pairwise commuting matrices.
(2) The set of matrices MC(A_C) := {X ∈ R^{n×n} : [X, A_i] = 0, ∀A_i ∈ A_C} which commute with the set A_C of pairwise commuting matrices.
(3) A set of matrices C(A) := {X ∈ R^{n×n} : [X, A_i] = 0, ∀A_i ∈ A} which commute with a given set of p nonzero matrices A := {A_i ∈ R^{n×n} : ∀i ∈ p} which are not necessarily pairwise commuting.

The complementary sets of MC(A_C) and C(A) are defined accordingly for a set of pairwise commuting matrices A_C, so that the notation MC(A_C) refers directly to the set of matrices which commute with all those in a set of pairwise commuting matrices. The following two basic results are concerned with the commutation and noncommutation properties of two matrices.

(3.2)
Then
(iii) One has

(3.3)
where
(iv) One has

(3.5)
where
(vi) One has
The following result is related to the rank defectiveness of the matrix N(A_C) and of any of its submatrices, since A_C is a set of pairwise commuting matrices.
Proposition 3.2. The following properties hold:
and, equivalently,

(3.8)
Results related to sufficient conditions for a set of matrices to commute pairwise are abundant in the literature. For instance, diagonal matrices are always pairwise commuting. Any set of matrices obtained via multiplication of any given arbitrary matrix by real scalars is a set of pairwise commuting matrices. Any set of matrices obtained by linear combinations of one of the above sets also consists of pairwise commuting matrices. Any matrix commutes with any of its matrix functions, and so forth. In the following, a simple, although restrictive, sufficient condition for rank defectiveness of N(A) for some set A of p square real n-matrices is discussed. Such a condition may be useful as a practical test to elucidate the existence of a nonzero n-square matrix which commutes with all matrices in this set. Another useful test obtained from the following result relies on a necessary condition to elucidate whether the given set consists of pairwise commuting matrices.
Theorem 3.3. Consider any arbitrary set of nonzero n-square real matrices A := {A_1, A_2, ..., A_p} for any integer p ≥ 1 and define the matrices:

(3.9)
Then, the following properties hold:

(iii) If A = A_C is a set of pairwise commuting matrices, then

(3.11)
(iv) One has
with the above set inclusion being proper.
Note that Theorem 3.3(ii) extends Proposition 3.1(v), since it is proved that this holds for any R(λ) ≠ 0 and any set of matrices A. Note that Theorem 3.3(iii) establishes that v(A_i) ∈ ∩_{j∈p\{i}} Ker(A_j ⊕ (−A_j^T)), ∀i ∈ p, is a necessary and sufficient condition for A to be a set of commuting matrices, which is simpler to test, by taking advantage of the symmetry property of the commutators, than the equivalent condition v(A_i) ∈ ∩_{j∈p} Ker(A_j ⊕ (−A_j^T)), ∀i ∈ p. Further results about pairwise commuting matrices, or about the existence of nonzero matrices commuting with a given set, are obtained in the subsequent result based on the Kronecker sum of relevant Jordan canonical forms.
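The sufficient conditions recalled above (diagonal sets, and matrix polynomials of one and the same matrix) can be spot-checked numerically; the following NumPy sketch is only an illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
I = np.eye(3)

# Matrix polynomials of one and the same matrix A commute pairwise:
family = [I + A, A @ A - 3.0 * A, 0.5 * A @ A @ A + I]
ok = all(np.allclose(P @ Q, Q @ P) for P in family for Q in family)
print(ok)  # True

# Diagonal matrices likewise form a pairwise commuting set:
D1, D2 = np.diag([1.0, 2.0, 3.0]), np.diag([4.0, 5.0, 6.0])
print(np.allclose(D1 @ D2, D2 @ D1))  # True
```

Such families therefore satisfy the necessary condition of Theorem 3.3 automatically, whereas an arbitrary set of matrices generally does not.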
Theorem 3.4. The following properties hold for any given set of n-square real matrices A = {A_1, A_2, ..., A_p}.
i The set C A of matrices X ∈ R n×n which commute with all matrices in A is defined by:

(3.13)
where P_i ∈ R^{n×n} is a nonsingular transformation matrix such that A_i ∼ J_{A_i} = P_i^{-1} A_i P_i, with J_{A_i} being the Jordan canonical form of A_i.
(ii) One has

(3.14)
where ν_{i0} and ν_{ij} are, respectively, the geometric multiplicities of 0 ∈ σ(A_i ⊕ (−A_i^T)) and of λ_{ij} ∈ σ(A_i), and μ_{i0} and μ_{ij} are, respectively, the corresponding algebraic multiplicities.
(iii) The set A consists of pairwise commuting matrices, namely

Equivalent conditions follow from the second and third equivalent definitions of C(A) in Property (i).
Theorems 3.3 and 3.4 are concerned with MC(A) ≠ {0} ⊂ R^{n×n} for an arbitrary set of real square matrices A and for a pairwise commuting set, respectively.

Further Results and Extensions
The extensions of the results to the commutation of complex matrices are direct in several ways. It is first possible to decompose the commutator into its real and imaginary parts and then apply the results of Sections 2 and 3 for real matrices to both parts, as follows. Let A = A_re + iA_im and B = B_re + iB_im be complex matrices in C^{n×n}, with A_re and B_re being their respective real parts, A_im and B_im, all in R^{n×n}, their respective imaginary parts, and i = √−1 the imaginary unit. Direct computations with the commutator of A and B yield the following three results, which are direct and allow the problem of commutation of a pair of complex matrices to be reduced to the discussion of four real commutators.
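The reduction to four real commutators rests on the identity [A, B] = ([A_re, B_re] − [A_im, B_im]) + i([A_re, B_im] + [A_im, B_re]), which can be verified directly; the following NumPy sketch is only an illustration:

```python
import numpy as np

def comm(X, Y):
    """Commutator [X, Y] = XY − YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(2)
Ar, Ai, Br, Bi = (rng.standard_normal((3, 3)) for _ in range(4))
A, B = Ar + 1j * Ai, Br + 1j * Bi

# Real and imaginary parts of the complex commutator in terms of
# the four real commutators:
re = comm(Ar, Br) - comm(Ai, Bi)
im = comm(Ar, Bi) + comm(Ai, Br)
print(np.allclose(comm(A, B), re + 1j * im))  # True
```

Thus [A, B] = 0 holds iff both real combinations above vanish, which is exactly the condition handled by the real results of Sections 2 and 3.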

Proposition 4.1. One has B ∈ C(A)
Proposition 4.1 leads to the subsequent result.
Theorem 4.4. The following properties hold. (i) Assume that the matrices A and B_re are given. Then, B ∈ C(A) if and only if B_im satisfies the following linear algebraic equation:
The various results of Section 3, namely those for a set of distinct complex matrices to commute pairwise and those characterizing the set of complex matrices which commute with all matrices in a given set, may be discussed through more general algebraic systems like the above one, with four block matrices for each j ∈ p in the whole algebraic system. Theorem 4.5 extends directly to sets of complex matrices commuting with a given one, and to complex matrices commuting with a set of commuting complex matrices, as follows.
(i) Consider the sets of nonzero distinct complex matrices A := {A_i ∈ C^{n×n} : i ∈ p}; a nonzero solution X ∈ C(A) exists since the rank of the coefficient matrix of (4.11) is less than 2n^2.
(ii) Consider the sets of nonzero distinct commuting complex matrices A_C := {A_i ∈ C^{n×n} : i ∈ p} and MC(A) := {X ∈ C^{n×n} : [X, A_i] = 0, A_i ∈ A, ∀i ∈ p} for p ≥ 2. Thus, MC(A) ∋ X = X_re + iX_im if and only if v(X_re) and v(X_im) are solutions to (4.11).
(iii) Properties (i) and (ii) are equivalently formulated from the following algebraic set of complex equations:
The following corollaries are direct consequences of Theorem 4.8 and of the subsequent facts:

(4.13)
where f(A) = p(A), from the definition of f being a function of the matrix A, with p(λ) being a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A.

Corollary 4.12. Consider any countable set of matrix functions C_F := {f
Note that matrices which commute and are simultaneously triangularizable through the same similarity transformation maintain a zero commutator after such a transformation is performed.
A direct consequence of Theorem 4.13 is that, if a set of matrices is simultaneously triangularizable to their real canonical forms by a common transformation matrix, then the pairwise commuting properties are identical to those of their respective Jordan forms.
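The underlying fact that a similarity transformation carries commutators to commutators, T^{-1}[A, B]T = [T^{-1}AT, T^{-1}BT], can be checked directly (an illustrative NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
T = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # well-conditioned T
Ti = np.linalg.inv(T)

# T^{-1} [A, B] T = [T^{-1} A T, T^{-1} B T]
lhs = Ti @ (A @ B - B @ A) @ T
At, Bt = Ti @ A @ T, Ti @ B @ T
rhs = At @ Bt - Bt @ At
print(np.allclose(lhs, rhs))  # True
```

In particular, [A, B] = 0 if and only if the transformed (e.g., triangularized or Jordan) matrices commute, which is the content of the remark above.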
The result follows by direct verification using the properties of the Kronecker product, with T = P ⊗ P^T for a nonsingular P ∈ R^{n×n} such that A ∼ J_A = P^{-1}AP, as follows:
and the result has been proved. Thus, rank(A ⊕ (−A^T)) = rank(J_A ⊕ (−J_A^T)). It turns out that P is, furthermore, unique except for multiplication by any nonzero real constant. Otherwise, if T ≠ P ⊗ P^T, then there would exist a nonsingular
Thus, note that
Those results follow directly from the properties of the Kronecker sum A ⊕ B of the n-square real matrices A and B = −A^T, since direct inspection leads to the following.
, the algebraic multiplicity of 0 ∈ σ(A ⊕ (−A^T)) is at least ∑_{i=1}^{μ} μ_i^2, since ν_i ≥ 1, ∀i ∈ n. Also, a simple computation of the number of eigenvalues of
(2) The number of linearly independent vectors in S is (A.8).
(4) There are at least ν_0 linearly independent vectors in S := span{z_i ⊗ x_j : ∀i, j ∈ n}. Also, the total number of Jordan blocks in the Jordan canonical form of
Property (i) has been proved. Property (ii) follows directly from the orthogonality in R^{n^2} of its range and null subspaces.
is a solution to the compatible linear algebraic system:
From Theorem 2.3, the nullity and the rank of A ⊕ (−A^T) are, respectively, dim Ker(A ⊕ (−A^T)) = ν_0 and rank(A ⊕ (−A^T)) = n^2 − ν_0. Therefore, there exist permutation matrices E, F ∈ R^{n^2×n^2} defining an equivalence transformation:
such that Ā_11 is square, nonsingular, and of order n^2 − ν_0. Define M̄ = EMF ≈ M ≠ 0 ∈ R^{n×n}. Then, the linear algebraic systems (A ⊕ (−A^T)) v(X) = v(M) and
are identical if X̄ and M̄ are defined according to v(X) = F v(X̄) and v(M̄) = E v(M). As a result, Properties (i) and (ii) follow directly from (A.12) for M = 0 and for any M satisfying rank(A ⊕ (−A^T)) = rank[(A ⊕ (−A^T)), v(M)] = n^2 − ν_0, respectively.

B. Proofs of the Results of Section 3
Proof of Proposition 3.1. (i) The first part of Property (i) follows directly from Proposition 2.1, since all the matrices of A_C pairwise commute and any arbitrary matrix commutes with itself (thus j = i may be removed from the intersections of kernels of the first double-sense implication). The last part of Property (i) follows from the antisymmetry property of the commutator.
(ii) It follows from its equivalence with Property (i), since Ker
(iii) Property (iii) is similar to Property (i) for the whole set MC(A_C) of matrices which commute with the set A_C, so that it contains A_C and, furthermore, Ker
(v) and (vi) are similar to (ii)-(iv), except that the members of A do not necessarily commute.
Proof of Proposition 3.2. It is a direct consequence of Proposition 3.1(i)-(ii), since the existence of nonzero pairwise commuting matrices (all the members of A_C) implies that the above matrices N(A_C), N_i(A_C), and A_j ⊕ (−A_j^T) are all rank defective and have at least as many rows as columns. Therefore, the square matrices
This assumption implies directly that
Thus, it follows by complete induction that, ∀i ∈ p, and Property (iii) has been proved. (iv) The definition of MA_C follows from Property (iii) in order to guarantee that [X, A_i] = 0, ∀A_i ∈ A. The fact that such a set properly contains
Proof of Theorem 3.4. If A_i ∼ J_{A_i} = P_i^{-1} A_i P_i, with J_{A_i} being the Jordan canonical form of A_i, then T_i = P_i ⊗ P_i^T ∈ R^{n^2×n^2} (see the proof of Theorem 2.3) is nonsingular, ∀i ∈ p. Thus, A_i ⊕ (−A_i^T) = T_i (J_{A_i} ⊕ (−J_{A_i}^T)) T_i^{-1}, so that:
since T is nonsingular. Thus, ∀X ∈ Dom(A) ⊂ R^{n^2}: (B.9)
(ii) It is close to that of (i), but the rank condition for compatibility of the algebraic system is not needed, since the coefficient matrix of (4.11) is rank defective because A_j ∈ A_C ⇔ (v^T(A_{j,re}), v^T(A_{j,im}))^T is in the null space of the coefficient matrix, ∀j ∈ p.
(iii) Its proof is close to that of Theorem 4.5(ii) and is therefore omitted.

Remark 4.7. Note that all the proved results of Sections 2 and 3 are directly extendable to complex commuting matrices, by simply replacing transposes with conjugate transposes, without requiring a separate decomposition into real and imaginary parts, as discussed in Theorems 4.5(ii) and 4.6(iii). Let f : C → C be an analytic function in an open set D ⊃ σ(A) for some matrix A ∈ C^{n×n}, and let p(λ) be a polynomial fulfilling p^{(i)}(λ_k) = f^{(i)}(λ_k), ∀λ_k ∈ σ(A), ∀i ∈ {0, 1, ..., m_k − 1}, ∀k ∈ μ (the number of distinct elements of σ(A)), where m_k is the index of λ_k, that is, its multiplicity in the minimal polynomial of A. Then, f(A) is a function of the matrix A if f(A) = p(A) [8]. Some results follow concerning the commutators of functions of matrices.
Theorem 4.8. Consider a nonzero matrix B ∈ C(A) ∩ C^{n×n} for any given nonzero A ∈ C^{n×n}. Then, f(B) ∈ C(A) ∩ C^{n×n}, and equivalently v(f(B)) ∈ Ker(A ⊕ (−A^*)), for any function f : C^{n×n} → C^{n×n} of the matrix B.
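The conclusion of this theorem can be illustrated numerically with a polynomial matrix function (a NumPy sketch over the reals, chosen for simplicity; the complex case is analogous):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
I = np.eye(4)

B = A @ A - 2.0 * A + I             # B is a polynomial in A, so B ∈ C(A)
fB = B @ B @ B + 0.5 * B + I        # a (polynomial) function f(B)

print(np.allclose(A @ B - B @ A, 0))    # True: B ∈ C(A)
print(np.allclose(A @ fB - fB @ A, 0))  # True: f(B) ∈ C(A) as well
```

Since any matrix function f(B) equals a polynomial in B, commutation of B with A propagates to f(B), exactly as the theorem asserts.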

Corollary 4.9. Theorem 4.8 is extendable to any countable set {f_i(B)} of matrix functions of B. Consider a nonzero matrix B ∈ C(A) ∩ C^{n×n} for any given nonzero A ∈ C^{n×n}. Then, g(B) ∈ C(f(A)) ∩ C^{n×n} for any function f : C^{n×n} → C^{n×n} of the matrix A and any function g : C^{n×n} → C^{n×n} of the matrix B.
Corollary 4.10. f(A) ∈ C(A) ∩ C^{n×n}, and equivalently v(f(A)) ∈ Ker(A ⊕ (−A^*)), for any function f : C^{n×n} → C^{n×n} of the matrix A.
Corollary 4.11. If B ∈ C(A) ∩ C^{n×n}, then any countable set of matrix functions {f_i(B)} is in C(A) and in MC(A).

where Y_i ∈ Ker(J_{A_i} ⊕ (−J_{A_i}^T)), ∀i ∈ p, and Y ∈ ∩_{i=1}^{p} Ker(J_{A_i} ⊕ (−J_{A_i}^T)). Property (i) has been proved. The first inequality of Property (ii) follows directly from Property (i). The equalities and inequalities in the second line of Property (ii) follow from the first inequality by taking into account Theorem 2.3. Property (iii) follows from the proved equivalent definitions of C(A) in Property (i), by taking into account that [A_j, A_j] = 0, ∀j ∈ p, so that

Proof of Theorem 4.8. For any B ∈ C(A) ∩ C^{n×n}:
[A, B] = 0 ⇒ (λI_n − B)A = A(λI_n − B), ∀λ ∈ C ⇒ (λI_n − B)^{-1} A = A (λI_n − B)^{-1}, ∀λ ∈ C \ σ(B) ⇒
A f(B) = A ((1/(2πi)) ∮_C f(λ)(λI_n − B)^{-1} dλ) = (1/(2πi)) ∮_C f(λ)(λI_n − B)^{-1} A dλ = ((1/(2πi)) ∮_C f(λ)(λI_n − B)^{-1} dλ) A = f(B) A ⇒ [f(B), A] = 0, (C.4)
where C is the boundary of D and consists of a set of closed rectifiable Jordan curves containing no point of σ(B), since λ ∈ C ⇒ λ ∉ σ(B), so that the identity (λI_n − B)^{-1} A = A (λI_n − B)^{-1} holds. Then, f(B) ∈ C(A) ∩ C^{n×n} has been proved. From Theorem 4.5, this is equivalent to v(f(B)) ∈ Ker(A ⊕ (−A^*)).
Proof of Theorem 4.13. B ∈ C(A) ⇔ F[A, B]G = 0, ∀F, G ∈ C^{n×n} nonsingular. By choosing F^{-1} = G = T, it follows that T^{-1}[A, B]T = T^{-1}A T T^{-1}B T − T^{-1}B T T^{-1}A T = [Λ_A, Λ_B] = 0. (C.5)
The total number of Jordan blocks in the Jordan canonical form of A is
for any given set A. Property (i) has been proved. (ii) The first part follows by contradiction. Assume ∩_{i∈p} Ker(A