Execute Elementary Row and Column Operations on the Partitioned Matrix to Compute the M-P Inverse A†

Abstract and Applied Analysis


Introduction
Throughout this paper we use the following notation. Let ℂⁿ and ℂ_r^{m×n} denote the n-dimensional complex space and the set of m×n complex matrices with rank r, respectively. For a matrix A ∈ ℂ^{m×n}, R(A) and N(A) are the range and null space of A; r(A) and A* denote the rank and the conjugate transpose of A, while A† and ‖A‖ denote the M-P inverse and the Frobenius norm of A, respectively.
In 1920, Moore [1] defined a new inverse of a matrix by means of projection matrices. Moore's definition of the generalized inverse of an m×n matrix A is equivalent to the existence of an n×m matrix X such that AX = P_{R(A)} and XA = P_{R(X)}, where P_{R(A)} is the orthogonal projector onto R(A). Unaware of Moore's work, Penrose [2] showed in 1955 that there exists a unique matrix X satisfying the four conditions

(1) AXA = A, (2) XAX = X, (3) (AX)* = AX, (4) (XA)* = XA,

where * denotes the conjugate transpose. These conditions are equivalent to Moore's conditions. The unique matrix X satisfying these conditions is known as the Moore-Penrose inverse (abbreviated M-P inverse) of A and is denoted by A†. For a subset {i, j, ..., k} of the set {1, 2, 3, 4}, the set of n×m matrices satisfying equations (i), (j), ..., (k) from among the four conditions above is denoted by A{i, j, ..., k}. These concepts can be found in Ben-Israel and Greville's famous book [3] or Campbell and Meyer's book [4]. In these books [3, 4], the following statement holds for a rectangular matrix.
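As a quick sanity check (not part of the paper), the four Penrose conditions can be verified numerically; the sketch below applies numpy's built-in pseudoinverse to a randomly generated rank-deficient matrix:

```python
import numpy as np

# A hedged numerical check: numpy's pinv satisfies the four Penrose
# conditions, tested here on a random 4 x 5 matrix of rank at most 3.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))
X = np.linalg.pinv(A)

print(np.allclose(A @ X @ A, A))              # (1) AXA = A
print(np.allclose(X @ A @ X, X))              # (2) XAX = X
print(np.allclose((A @ X).conj().T, A @ X))   # (3) (AX)* = AX
print(np.allclose((X @ A).conj().T, X @ A))   # (4) (XA)* = XA
```

All four checks print True, mirroring the uniqueness statement above.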
One handy method of computing the inverse of a nonsingular matrix A is the Gauss-Jordan elimination procedure: executing elementary row operations on the pair (A, I) transforms it into (I, A⁻¹). Moreover, Gauss-Jordan elimination can be used to determine whether a matrix is nonsingular or not. However, one cannot directly use this method to compute a generalized inverse of a rectangular matrix or a square singular matrix A.
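The (A, I) → (I, A⁻¹) procedure can be sketched as follows; the function name and the partial-pivoting detail are illustrative choices, not taken from the paper:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a nonsingular matrix by row-reducing the pair (A, I) to (I, A^{-1}).

    A minimal sketch of the classical procedure; it raises when A is
    singular, illustrating the nonsingularity test mentioned in the text."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])             # the pair (A, I)
    for k in range(n):
        p = k + np.argmax(np.abs(M[k:, k]))   # partial pivoting (added detail)
        if np.isclose(M[p, k], 0.0):
            raise np.linalg.LinAlgError("matrix is singular")
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]                       # scale the pivot row
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]        # clear column k in other rows
    return M[:, n:]                           # the pair is now (I, A^{-1})
```

Usage: `gauss_jordan_inverse([[2.0, 1.0], [1.0, 3.0]])` agrees with `np.linalg.inv`.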
Recently, the author [19] proposed a Gauss-Jordan elimination algorithm to compute A†, which required 3n³ multiplications and divisions. More recently, Ji [20] improved the algorithm of [19] and pointed out that only 2n³ multiplications and divisions are required. Following these lines, Stanimirović and Petković [21] extended the method of [20] to the generalized inverse A^{(2)}_{T,S}. However, these three algorithms also need block switching. Guo and Huang [22] executed elementary row and column transformations to compute the M-P inverse A† by applying rank equalities of the matrix A. They did not analyze the complexity of their algorithm. In this paper, we first study the total number of arithmetic operations of the GH-algorithm, then improve it, and present an alternative explicit formula for the M-P inverse of a matrix; the improvements reduce the total number of arithmetic operations. We must point out that neither the GH-algorithm nor our algorithm needs to switch blocks of a certain matrix in the process of computation.
The paper is organized as follows. The computational complexity of the GH-algorithm (Algorithm 3) for computing the M-P inverse A† is surveyed in the next section. In Section 3, we derive a novel explicit expression for A†, propose a new Gauss-Jordan elimination procedure for A† based on this formula, and study the computational complexity of the new approach (Algorithm 7). In Section 4, an illustrative example is presented to explain the corresponding improvements of the algorithm.

The Computational Complexity of the GH-Algorithm
In [22], Guo and Huang gave a method of elementary transformations for computing the M-P inverse A† by applying rank equalities of the matrix A.
In [22], the authors also considered an algorithm based on Lemma 2, which is stated as follows.
Algorithm 3. The M-P inverse GH-algorithm is as follows.
(3) Make the block matrices M2(1, 2) and M2(2, 1) zero by applying the corresponding elementary operation matrix, which yields M3. Then A† can be read off from M3. Nevertheless, Guo and Huang [22] did not analyze the complexity of this numerical algorithm. In the following theorem, we study the total number of arithmetic operations. For simplicity, assume that S1 = (I_r  S12). Following the same line, this requires r(n − r) multiplications and divisions on the r column pivoting steps.
Then resume elementary row and column operations on the matrix M2 to transform it into M3; the cost of this step is the number of multiplications needed to form the product of the two reduced blocks.

Main Results
The Gauss-Jordan row and column elimination procedure of Guo and Huang for the M-P inverse A† of a matrix A is based on a partitioned matrix M1 whose blocks are built from A*A, A*, and a zero block. In this section, we first propose a modified Gauss-Jordan elimination process to compute A† and then summarize this method as an algorithm. Finally, the complexity of the algorithm is analyzed.
where ( B1 ; 0 ) and ( C1  0 ) are the row and column reduced echelon forms of A*, respectively.
Further, there exists an elementary row operation matrix P2 such that the reduction below holds.

Proof. Since r(A) = r(A*) = r, there exist two elementary row and column operation matrices, P1 (partitioned as (P11  P12)) and Q1, which reduce A* to the echelon forms above. It is easy to check that the resulting leading blocks have full rank r, so their r×r product is nonsingular, which implies that there exists another elementary row operation matrix P2 such that the displayed reduction holds. The above formula also shows that P2 is the inverse of this nonsingular product.
By deduction, we obtain that the resulting matrix X is a {2}-inverse of A with R(X) = R(A*) and N(X) = N(A*). From Lemma 1, we know that X = A†.

Remark 6. The representation of A† in Theorem 5 is consistent with the one in [3], although we obtain it by a Gauss-Jordan elimination procedure.
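One classical explicit representation in [3] is the full-rank-factorization formula A† = G*(GG*)⁻¹(F*F)⁻¹F*; whether or not this is the exact form Remark 6 has in mind, it is easy to check numerically (the random factors below are purely illustrative):

```python
import numpy as np

# If A = F G is a full-rank factorization (F: m x r, G: r x n, both of
# rank r), the classical formula gives A† = G*(G G*)^{-1}(F*F)^{-1}F*.
rng = np.random.default_rng(1)
F = rng.standard_normal((5, 2))
G = rng.standard_normal((2, 4))
A = F @ G                                        # a rank-2 matrix

A_dagger = G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T
print(np.allclose(A_dagger, np.linalg.pinv(A)))  # True
```

The real-valued case is used here, so the conjugate transpose reduces to `.T`.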
According to the representation introduced in Theorem 5, we summarize the following algorithm for computing the M-P inverse A†.

Algorithm 7. The M-P inverse Sheng algorithm is as follows.
(2) Execute elementary row operations on the first n rows of the partitioned matrix so that the indicated block becomes a reduced row-echelon matrix.
(6) Make the block matrices M5(1, 3) and M5(3, 1) zero by applying elementary row and column transformations, respectively, through the matrix P2, which yields M6. Then A† = C1B1. According to Algorithm 7, the next theorem analyzes its computational complexity.
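For readers who want something executable, the sketch below follows the same Gauss-Jordan spirit, though it is not the paper's partitioned-matrix scheme: row reduction of A yields a full-rank factorization A = FG, and the classical formula from [3] then gives A†. The function name and tolerance are illustrative assumptions:

```python
import numpy as np

def pinv_gauss_jordan(A, tol=1e-10):
    """M-P inverse via a Gauss-Jordan route (a sketch, not the paper's
    exact algorithm): reduce A to RREF, read off a full-rank factorization
    A = F G, then apply A† = G*(GG*)^{-1}(F*F)^{-1}F*."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    R = A.copy()
    pivots, row = [], 0
    for col in range(n):
        if row == m:
            break
        p = row + np.argmax(np.abs(R[row:, col]))
        if abs(R[p, col]) < tol:
            continue                      # no pivot in this column
        R[[row, p]] = R[[p, row]]         # partial pivoting
        R[row] /= R[row, col]
        for i in range(m):
            if i != row:
                R[i] -= R[i, col] * R[row]
        pivots.append(col)
        row += 1
    G = R[:row]            # nonzero rows of the reduced row-echelon form
    F = A[:, pivots]       # the corresponding pivot columns of A
    return G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T
```

On a rank-deficient test matrix this agrees with `np.linalg.pinv` to machine precision.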
Proof.For a matrix  with rank ,  pivoting steps are needed to make the partitioned matrix  1 into  2 .First pivoting step involves  nonzero columns in  * .Thus, it needs  − 1 divisions and ( − 1)( − 1) multiplications with a total number of ( − 1) multiplications and divisions.On the second pivoting step, there is one less column in the first part of the pair.There are  − 1 nonzero columns to deal with.This pivoting step requires (−2) operations.Following the same idea, the th (1 ≤  ≤ ) pivoting step requires ( − ) operations.So these  pivoting steps require multiplications and divisions to reach the matrix  2 .
For simplicity, assume that B1 = ( I_r  B11 ) and C1 = ( I_r ; C11 ), which follows from the fact that B1 and C1 are row-echelon and column-echelon reduced matrices, respectively.
Then resume elementary row and column operations on the matrix M5 to transform it into M6. The complexity of this process is the number of multiplications needed to compute the product C1B1.
Therefore, when r = n, the operation count attains its maximum value. Furthermore, we give two remarks: one explains the computation speed, and the other shows how to improve the accuracy of Algorithm 7.
Remark 9. The algorithm in this paper does not need to switch blocks of a certain matrix in the process of computation, unlike the existing algorithms in [19–21]. Its highest computational complexity is about 3n³ multiplications and divisions, which is less than that of the GH-algorithm [22], which requires 4.5n³ multiplications and divisions when applied to the case m ≈ n = r for A†.
Remark 10. In order to improve the accuracy of the algorithm, we must select nonzero entries as pivots in the pivot row and column at each step of the Gauss-Jordan elimination. This improvement is based on the fact that the Gauss-Jordan elimination is applied to matrices containing a nonnegligible number of zero elements.
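A toy example (not the paper's algorithm) of why pivot selection matters: eliminating with a tiny pivot destroys the computed inverse of a perfectly well-conditioned 2×2 matrix, while a pivoted library solver recovers it:

```python
import numpy as np

eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 1.0]])              # well conditioned: cond(A) ~ 2.6

M = np.hstack([A, np.eye(2)])
M[0] /= M[0, 0]                          # eliminate with the tiny pivot eps
M[1] -= M[1, 0] * M[0]
M[1] /= M[1, 1]
M[0] -= M[0, 1] * M[1]
bad = M[:, 2:]                           # Gauss-Jordan result, no pivot choice

exact = np.linalg.inv(A)                 # library routine pivots internally
print(np.abs(bad - exact).max())         # about 1.0: accuracy is destroyed
```

Swapping the rows of A before eliminating (i.e., choosing the pivot 1.0) removes the error entirely.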

Numerical Examples
In this section, we use a numerical example to demonstrate our results. A handy method is used to compute A† for a low-order matrix.
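For reference, a hypothetical low-order computation (the paper's own example matrix is not reproduced here) shows how A† can also be obtained from the SVD and checked against a library routine:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])         # a hypothetical 2 x 3 matrix, rank 2

U, s, Vh = np.linalg.svd(A, full_matrices=False)
s_inv = np.where(s > 1e-12, 1.0 / s, 0.0)    # invert only nonzero singular values
A_dagger = Vh.T @ np.diag(s_inv) @ U.T       # A† = V Σ⁺ U* (real case)

print(np.allclose(A_dagger, np.linalg.pinv(A)))  # True
```

Any of the Gauss-Jordan-based methods discussed above should reproduce the same A† on such a matrix.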