ON MIXED-TYPE REVERSE-ORDER LAWS FOR THE MOORE-PENROSE INVERSE OF A MATRIX PRODUCT


If A and B are a pair of invertible matrices of the same size, then the product AB is nonsingular too, and the inverse of the product AB satisfies the reverse-order law (AB)−1 = B−1A−1. This law can be used to find the properties of (AB)−1, as well as to simplify various matrix expressions that involve the inverse of a matrix product. However, this formula cannot trivially be extended to the Moore-Penrose inverse of matrix products. For a general m × n complex matrix A, the Moore-Penrose inverse A† of A is the unique n × m matrix X that satisfies the following four Penrose equations: (i) AXA = A, (ii) XAX = X, (iii) (AX)* = AX, (iv) (XA)* = XA, where (·)* denotes the conjugate transpose of a complex matrix. A matrix X is called a {1}-inverse (inner inverse) of A if it satisfies (i), and such an inverse is denoted by A−. General properties of the Moore-Penrose inverse can be found in [2, 4, 16].
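As a quick numerical illustration of the four Penrose equations (a sketch in Python; NumPy's `pinv` computes the Moore-Penrose inverse, and the matrix here is an arbitrary random example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # a general rectangular matrix
X = np.linalg.pinv(A)             # the Moore-Penrose inverse A†

# The four Penrose equations:
assert np.allclose(A @ X @ A, A)             # (i)   AXA = A
assert np.allclose(X @ A @ X, X)             # (ii)  XAX = X
assert np.allclose((A @ X).conj().T, A @ X)  # (iii) (AX)* = AX
assert np.allclose((X @ A).conj().T, X @ A)  # (iv)  (XA)* = XA
```

Uniqueness of the solution of (i)-(iv) is what makes A† well defined for every rectangular matrix.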
Let A and B be a pair of matrices such that AB exists. In many situations, one needs to find the Moore-Penrose inverse of the product AB and its properties. Because A†A, BB†, and BB†A†A are not necessarily identity matrices, the relationship between (AB)† and B†A† is quite complicated, and the reverse-order law (AB)† = B†A† does not necessarily hold. Therefore, it is not easy to simplify matrix expressions that involve the Moore-Penrose inverse of matrix products. Theoretically speaking, for any matrix product AB, the Moore-Penrose inverse (AB)† can be written as

(AB)† = B†A† or (AB)† = B†A† + X, (1)

where X is a residue matrix. For these two situations, one can consider the following two problems: (I) find necessary and sufficient conditions for (AB)† = B†A† to hold; (II) if (AB)† ≠ B†A†, find possible expressions of X in (AB)† = B†A† + X, and then determine necessary and sufficient conditions for (AB)† = B†A† + X to hold. The investigation of the Moore-Penrose inverse of the product AB started in the 1960s. For the standard situation (AB)† = B†A†, a well-known result due to Greville [9] asserts that (AB)† = B†A† holds if and only if

R(A*AB) ⊆ R(B) and R(BB*A*) ⊆ R(A*), (2)

where R(·) denotes the range (column space) of a matrix. Many other equivalent conditions for (AB)† = B†A† to hold can be found in [2, 4, 16, 26]. Generally speaking, the two range inclusions in (2) are strict conditions for a pair of matrices A and B to satisfy. Therefore, it is necessary to seek various weaker reverse-order laws for (AB)† to satisfy. In addition to (2), (AB)† may satisfy some other mixed-type reverse-order laws; such laws were studied in [6, 8, 11, 26]. Although these matrix equalities are more complicated than the law in (2), the conditions for them to hold are weaker than those for (2). In fact, mixed-type reverse-order laws also stem from various reasonable operations on the Moore-Penrose inverse of matrix products (see Remark 10).
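To see concretely that the reverse-order law may fail in general but holds in special cases, consider the following sketch. The choice B = A* satisfies Greville's range inclusions automatically, since (AA*)† = (A*)†A† is a standard identity; the matrices themselves are arbitrary random examples:

```python
import numpy as np

rng = np.random.default_rng(1)
pinv = np.linalg.pinv

# Generic rectangular factors: (AB)† = B†A† usually fails.
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
law_holds = np.allclose(pinv(A @ B), pinv(B) @ pinv(A))
print("generic A, B:", law_holds)   # typically False

# For B = A*, the law always holds: (AA*)† = (A*)† A†.
B = A.conj().T
print("B = A*:", np.allclose(pinv(A @ B), pinv(B) @ pinv(A)))
```

For random factors the failure is generic: the inclusion R(A*AB) ⊆ R(B) would force A*A to map R(B) into itself, which almost never happens by chance.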
Although (AB)† can be written as (AB)† = B†A† + X in general, it is not easy to give an explicit expression for the residue matrix X for given matrices A and B. Some discussion of the expression of X and its properties was given in [8].
In the investigation of (AB)†, we observe a possible expression for (AB)†, given in (4) below. A direct motivation for finding the residue matrix in (4) arises from two different decompositions of the following block matrix and its generalized inverses. In fact, it is easy to verify that M can be decomposed in the following two forms, where T = (I_n − BB†)(I_n − A†A). From these two decompositions, one can find two {1}-inverses of M. These two {1}-inverses of M are not necessarily equal. Therefore, it is natural to ask under what conditions the two {1}-inverses of M in (7) are equal, or some blocks of them are equal. The mixed-type reverse-order law (4) is obtained by comparing the lower right blocks in (7).
Because the right-hand side of (4) involves complicated matrix operations, it is not easy to establish necessary and sufficient conditions for (4) to hold directly from the definitions or from various matrix decompositions associated with A, B, and AB. In the investigation of various problems on generalized inverses of matrices, the present author has noticed that the rank of a matrix is a simple and powerful tool for dealing with the relationship between any two matrix expressions involving generalized inverses. In fact, any two matrices A and B of the same size are equal if and only if r(A − B) = 0, where r(·) denotes the rank of a matrix. If one can find nontrivial formulas for the rank of A − B, then necessary and sufficient conditions for A = B to hold can be derived from these rank formulas. Several simple rank formulas for differences of matrices found by the present author are given below, where [A, B] denotes a row block matrix; see Tian [20, 23, 25, 26]. The significance of these simple rank formulas is that they connect different matrix expressions through the ranks of these matrices. From these rank equalities, one can derive some basic properties of the matrices on the left-hand sides. For instance, setting the right-hand sides of the above rank equalities to zero and simplifying by elementary methods immediately yields necessary and sufficient conditions for the matrices on the left-hand sides to be zero.
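The criterion A = B ⟺ r(A − B) = 0 can be exercised numerically. In this sketch the two expressions compared are (AA*)† and (A*)†A†, which are equal by a standard identity; the matrix is a random example:

```python
import numpy as np

rng = np.random.default_rng(2)
pinv, rank = np.linalg.pinv, np.linalg.matrix_rank

A = rng.standard_normal((3, 5))

# Two expressions for the same matrix: (AA*)† and (A*)† A†.
L = pinv(A @ A.conj().T)
R = pinv(A.conj().T) @ pinv(A)

# Equality of matrix expressions via the rank of the difference:
# L = R if and only if r(L - R) = 0.
print(rank(L - R, tol=1e-10))  # 0
```

In exact arithmetic r(L − R) is 0 or positive, with no middle ground, which is why rank formulas for differences translate directly into necessary and sufficient conditions.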
In this paper, we establish a rank formula associated with (4) and then derive from the rank formula a necessary and sufficient condition for (4) to hold.
The following rank formulas are well known; see Marsaglia and Styan [12]. Recall (see [28]) that R(A) = R(AA*A) and R(A*) = R(A*AA*). By appealing to (11), the rank of the Schur complement can be evaluated. This rank equality is quite useful in dealing with various matrix expressions involving the Moore-Penrose inverse. Further rank formulas for the Schur complement can be found in [23].
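One of the classical Marsaglia-Styan rank formulas, r[A, B] = r(A) + r((I − AA†)B), can be checked numerically. Whether this is exactly the formula carrying a particular equation number here is an assumption, but the formula itself is standard:

```python
import numpy as np

rng = np.random.default_rng(3)
pinv, rank = np.linalg.pinv, np.linalg.matrix_rank

m = 5
A = rng.standard_normal((m, 2))
B = rng.standard_normal((m, 3))
I = np.eye(m)

# Marsaglia-Styan formula for a row block matrix:
# r[A, B] = r(A) + r((I - A A†) B).
lhs = rank(np.hstack([A, B]))
rhs = rank(A) + rank((I - A @ pinv(A)) @ B)
print(lhs, rhs)
```

Here I − AA† is the orthogonal projector onto the complement of R(A), so the second term counts the columns of B that leave R(A).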
The main result of this paper is given below.
Hence, the reverse-order law (4) holds if and only if A and B satisfy the following two range equalities:

Proof. From (12), we first obtain an initial rank expansion. Applying (10) to T yields the next equality, and the accompanying identity is easy to verify. Recall that elementary matrix operations and block elementary matrix operations do not change the rank of a matrix; thus, by (10) and block elementary matrix operations, the block matrices involved can be reduced. Applying (9) then gives the remaining equality. Substituting (20) into (19), (19) into (17), and then (16) and (17) into (15) gives (13). Letting the right-hand side of (13) be zero and simplifying, the equivalence of (4) and (14) follows.
The establishment of (13) is not easy, because the matrix expression on the left-hand side of (13) involves three terms consisting of Moore-Penrose inverses, while the right-hand side of (13) involves ranks of block matrices with the products AB, AA*AB, and ABB*B. However, the two block matrices on the right-hand side of (13) and the two range equalities in (14) are easy to simplify when A and B satisfy some conditions. For example, if both A and B are partial isometries, that is, A† = A* and B† = B*, then (14) is satisfied. In this case, (4) takes a simpler form.

The most valuable consequence of (4) concerns the Moore-Penrose inverse of the product of two orthogonal projectors.

Corollary 2. Let P and Q be a pair of orthogonal projectors of order n. Then, the product PQ satisfies the two identities (23) and (24), where P⊥ = I_n − P and Q⊥ = I_n − Q.

Proof. Note that P² = P = P* = P† and Q² = Q = Q* = Q† for any pair of orthogonal projectors P and Q. Thus, P and Q satisfy (14), and (4) reduces to (23). Premultiplying and postmultiplying both sides of (23) by PQ yields (24).
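The projector identities used in the proof of Corollary 2, namely P² = P = P* = P†, can be verified numerically; building the projector from an orthonormal basis via a QR factorization is one standard construction (a sketch):

```python
import numpy as np

rng = np.random.default_rng(4)

# Orthogonal projector onto a random 2-dimensional subspace of R^5.
n, k = 5, 2
Q0, _ = np.linalg.qr(rng.standard_normal((n, k)))
P = Q0 @ Q0.conj().T

assert np.allclose(P @ P, P)              # P² = P (idempotent)
assert np.allclose(P.conj().T, P)         # P* = P (Hermitian)
assert np.allclose(np.linalg.pinv(P), P)  # P† = P
```

In particular, an orthogonal projector is a partial isometry, which is how Corollary 2 falls under the partial-isometry case of (14).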
Recall the well-known equivalences characterizing PQ = QP for any pair of orthogonal projectors P and Q; see, for example, [1]. Thus, (25) can be regarded as two new equivalent statements for the commutativity of two orthogonal projectors. There are many results in the literature on products of orthogonal projectors and related topics; see, for example, [1, 13]. The two identities (23) and (24) are fundamental results for the product of two orthogonal projectors. They can be used for dealing with various matrix expressions that involve products of two orthogonal projectors. For example, the product PQP satisfies an analogous identity. Moreover, it is easy to verify a companion equality involving (PQ)#, the group inverse of PQ. Hence, one can also derive from (23) two identities for [(PQ)²]† and (PQ)#. From (23), one can also derive some valuable expressions for (P ± Q)† and (PQ ± QP)†; see [5].

Let ‖A‖ denote the spectral norm of a matrix A, that is, the maximal singular value of A. For a nonnull orthogonal projector P, ‖P‖ = 1. For any pair of orthogonal projectors P and Q with PQ ≠ 0, the following norm equality can be derived from (23). It was shown in [3, 15] that if P is idempotent with P ≠ 0 and P ≠ I, then ‖I − P‖ = ‖P‖. If P and Q are two orthogonal projectors, then (PQ)† is idempotent; see [14]. Note that I_n − P and I_n − Q are orthogonal projectors. Hence, [(I_n − Q)(I_n − P)]† is idempotent. Thus, if (I_n − Q)(I_n − P) ≠ 0, then

‖I_n − [(I_n − Q)(I_n − P)]†‖ = ‖[(I_n − Q)(I_n − P)]†‖. (30)

Applying this equality to (29), we see that if PQ ≠ 0 and Q⊥P⊥ ≠ 0, then a corresponding norm identity (31) holds. Replacing P with I_n − Q and Q with I_n − P in (31) gives its dual. Hence, we have the following result.
Theorem 3. Let P and Q be a pair of orthogonal projectors with both PQ ≠ 0 and Q⊥P⊥ ≠ 0. Then,

Identity (23) can be used to establish some identities for the Moore-Penrose inverse of ABC, where A, B, and C are three orthogonal projectors. Let A, B, and C be a triple of orthogonal projectors of order n. Then,
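The norm facts behind Theorem 3, namely ‖P‖ = 1 for a nonnull orthogonal projector, idempotency of (PQ)†, and ‖I − E‖ = ‖E‖ for an idempotent E with E ≠ 0 and E ≠ I, can be checked numerically (a sketch; the helper `proj` is an ad hoc name, not from the source):

```python
import numpy as np

rng = np.random.default_rng(5)

def proj(n, k, rng):
    """Orthogonal projector onto a random k-dimensional subspace of R^n."""
    Q0, _ = np.linalg.qr(rng.standard_normal((n, k)))
    return Q0 @ Q0.T

n = 6
P, Q = proj(n, 3, rng), proj(n, 2, rng)

# The spectral norm of a nonnull orthogonal projector is 1.
assert np.isclose(np.linalg.norm(P, 2), 1.0)

# (PQ)† is idempotent (the fact cited from [14]).
X = np.linalg.pinv(P @ Q)
assert np.allclose(X @ X, X)

# For idempotent E with E != 0 and E != I: ||I - E|| = ||E||.
E = X  # idempotent but in general not Hermitian
assert np.isclose(np.linalg.norm(np.eye(n) - E, 2),
                  np.linalg.norm(E, 2))
```

Note that ‖(PQ)†‖ can exceed 1 even though ‖PQ‖ ≤ 1, since small nonzero singular values of PQ are inverted.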
Although reverse-order laws for the Moore-Penrose inverse of matrix products have many different expressions, some of these reverse-order laws may be equivalent; a simple example is given in [21]. When investigating (4), we also found that (4) is equivalent to the following two mixed-type reverse-order laws, (39) and (40). The reverse-order law (39) was first studied by Galperin and Waksman [8], and then by Izumino [11] for a product of two linear operators. They showed that (39) holds if and only if condition (41) is satisfied. This result is also true for a complex matrix product. Because the Moore-Penrose inverses of A and B appear in condition (41), it cannot be regarded as a satisfactory necessary and sufficient condition for (39) to hold. In fact, (41) is equivalent to (14). Without much effort, one can also show the following; see [26]. Equality (43) is derived from the result below.
Lemma 5 [22]. Let X1 and X2 be a pair of outer inverses of a matrix A, that is, X1AX1 = X1 and X2AX2 = X2. Then, the rank of X1 − X2 admits the expansion (44).

Obviously, the matrix (AB)† is an outer inverse of AB by the definition of the Moore-Penrose inverse. It is also easy to verify that B†(A†ABB†)†A† is an outer inverse of AB. Thus, it follows by (44) that the rank of the difference of these two outer inverses can be expressed in closed form. Hence, one can derive from (44) that (39) holds if and only if this rank is zero. A further identity is also easy to verify, from which (43) follows. Another rank equality related to (39) is shown in [26]. Two rank formulas associated with (40) are given below.
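The claim that B†(A†ABB†)†A† is an outer inverse of AB is easy to confirm both algebraically (with C = A†ABB†, one has X2(AB)X2 = B†C†(A†A)(BB†)C†A† = B†C†CC†A† = B†C†A† = X2) and numerically, as in this sketch with random matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
pinv = np.linalg.pinv

A = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 3))
AB = A @ B

# Two outer inverses of AB:
X1 = pinv(AB)                                          # (AB)†
X2 = pinv(B) @ pinv(pinv(A) @ A @ B @ pinv(B)) @ pinv(A)

# Outer-inverse property: X (AB) X = X.
assert np.allclose(X1 @ AB @ X1, X1)
assert np.allclose(X2 @ AB @ X2, X2)
```

With both X1 and X2 confirmed as outer inverses, Lemma 5 applies and yields the rank expansion for their difference.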
Theorem 6. Let A ∈ C^{m×n} and B ∈ C^{n×p}. Then, the following rank equalities hold, and hence the following statements are equivalent. Equality (50) follows from (44), and (51) is a simplification of (50).
Theorem 9. Let A ∈ C^{m×n} and B ∈ C^{n×p}, and let M ∈ C^{m×m} and N ∈ C^{p×p} be a pair of Hermitian positive definite matrices. Then, the following statements are equivalent: (a) (AB)

Some reasonable extensions of (39) and (40) to (ABC)† are given accordingly. Various rank formulas associated with these reverse-order laws can be established. For more details, see [18].
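Theorem 9 concerns the weighted Moore-Penrose inverse with Hermitian positive definite weights M and N. As background, here is a sketch of its computation via the classical reduction A†_{M,N} = N^{-1/2}(M^{1/2} A N^{-1/2})† M^{1/2}; the helper names `wpinv` and `sqrtm_hpd` are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(7)

def sqrtm_hpd(M):
    """Square root of a Hermitian positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.conj().T

def wpinv(A, M, N):
    """Weighted Moore-Penrose inverse via the classical reduction
    A†_{M,N} = N^{-1/2} (M^{1/2} A N^{-1/2})† M^{1/2}."""
    Mh, Nhi = sqrtm_hpd(M), np.linalg.inv(sqrtm_hpd(N))
    return Nhi @ np.linalg.pinv(Mh @ A @ Nhi) @ Mh

m, n = 4, 3
A = rng.standard_normal((m, n))
# Random Hermitian positive definite weights.
Rm = rng.standard_normal((m, m)); M = Rm @ Rm.T + m * np.eye(m)
Rn = rng.standard_normal((n, n)); N = Rn @ Rn.T + n * np.eye(n)

X = wpinv(A, M, N)
# Defining equations of the weighted Moore-Penrose inverse:
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((M @ A @ X).conj().T, M @ A @ X)  # MAX Hermitian
assert np.allclose((N @ X @ A).conj().T, N @ X @ A)  # NXA Hermitian
```

Setting M and N to identity matrices recovers the ordinary Moore-Penrose inverse, which is why weighted reverse-order laws generalize the unweighted ones.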
