On Convergent Infinite Products and Some Generalized Inverses of Matrix Sequences



Introduction and Preliminaries
A scalar infinite product $p = \prod_{m=1}^{\infty} b_m$ of complex numbers is said to converge if $b_m$ is nonzero for $m$ sufficiently large, say $m \ge N$, and $q = \lim_{M \to \infty} \prod_{m=N}^{M} b_m$ exists and is nonzero. If this is so, then $p$ is defined by $p = q \prod_{m=1}^{N-1} b_m$. With this definition, a convergent infinite product vanishes if and only if one of its factors vanishes.
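As a quick numerical sketch of this definition, consider the product $\prod_{m \ge 1}(1 + 2^{-m})$ (an illustrative example of our own choosing, not taken from the paper): every factor is nonzero and the partial products approach a finite nonzero limit, so the product converges. A short Python check:

```python
# Partial products of prod_{m>=1} (1 + 2^{-m}): every factor is nonzero and
# the partial products increase to a finite nonzero limit, so the product
# converges in the sense just defined.
p = 1.0
for m in range(1, 60):
    p *= 1.0 + 2.0 ** (-m)
# p has stabilized numerically; the limit is about 2.3842
```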
Let $\{B_m\}$ be a sequence of $k \times k$ matrices, and consider the partial products $\prod_{m=1}^{M} B_m$ (1.1). In [1], Daubechies and Lagarias defined the convergence of an infinite product of matrices, without the adverb "invertibly," as follows.
(i) An infinite product $\prod_{m=1}^{\infty} B_m$ converges if $\lim_{M \to \infty} \prod_{m=1}^{M} B_m$ exists.

The idea of invertible convergence of a sequence of matrices was introduced by Trench [2, 3] as follows: an infinite product $\prod_{m=1}^{\infty} B_m$ of $k \times k$ matrices is said to converge invertibly if there is an integer $N$ such that $B_m$ is invertible for $m \ge N$, and $\lim_{M \to \infty} \prod_{m=N}^{M} B_m$ exists and is invertible. In this case, the product is defined analogously to the scalar case above.

Let us recall some concepts that will be used below. Throughout, we consider matrices over the field of complex numbers $\mathbb{C}$ or real numbers $\mathbb{R}$. The set of $m$-by-$n$ complex matrices is denoted by $M_{m,n}(\mathbb{C}) = \mathbb{C}^{m \times n}$. For simplicity, we write $M_{m,n}$ instead of $M_{m,n}(\mathbb{C})$, and when $m = n$ we write $M_n$ instead of $M_{n,n}$. The notations $A^T$, $A^*$, $A^\dagger$, $A^{(2)}_{T,S}$, $\operatorname{rank}(A)$, $\operatorname{rang}(A)$, $\operatorname{null}(A)$, $\rho(A)$, $\|A\|_s$, $\|A\|_p$, and $\sigma(A)$ stand, respectively, for the transpose, conjugate transpose, Moore-Penrose inverse, outer inverse, rank, range, null space, spectral radius, spectral norm, $p$-norm, and the set of all eigenvalues of a matrix $A$.
The Moore-Penrose and outer inverses of an arbitrary matrix, including singular and rectangular ones, are very useful in various applications: control system analysis, statistics, singular differential and difference equations, Markov chains, iterative methods, least-squares problems, perturbation theory, neural network problems, and many other subjects found in the literature (see, e.g., [4-14]). It is well known that the Moore-Penrose inverse (MPI) of a matrix $A \in M_{m,n}$ is defined to be the unique solution $X$ of the following four matrix equations (see, e.g., [4, 11, 14-20]):
$$AXA = A, \qquad XAX = X, \qquad (AX)^* = AX, \qquad (XA)^* = XA,$$
and is often denoted by $X = A^\dagger \in M_{n,m}$. In particular, when $A$ is a square nonsingular matrix, $A^\dagger$ reduces to $A^{-1}$. For $x = A^\dagger b$ and arbitrary $x' \in \mathbb{C}^n \setminus \{x\}$ with $\|b - Ax\|_2 = \|b - Ax'\|_2$, it holds that $\|x\|_2^2 = x^* x < \|x'\|_2^2$ (see, e.g., [14, 18]). Thus, $x = A^\dagger b$ is the unique minimum-norm least-squares solution of the linear least-squares problem (see, e.g., [14, 21, 22])
$$\|b - Ax\|_2 = \min_{z \in \mathbb{C}^n} \|b - Az\|_2. \tag{1.9}$$
It is also well known that the singular value decomposition of any rectangular matrix $A \in M_{m,n}$ with $\operatorname{rank}(A) = r \ne 0$ is given by
$$A = U \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix} V^*, \qquad U^* U = I_m, \quad V^* V = I_n, \tag{1.10}$$
where $D = \operatorname{diag}(\mu_1, \mu_2, \ldots, \mu_r) \in M_r$ is a diagonal matrix and $\mu_1 \ge \mu_2 \ge \cdots \ge \mu_r > 0$ are the singular values of $A$; that is, $\mu_i^2$ ($i = 1, 2, \ldots, r$) are the nonzero eigenvalues of $A^* A$. This decomposition is extremely useful for representing the MPI of $A \in M_{m,n}$ by [20, 23]
$$A^\dagger = V \begin{bmatrix} D^{-1} & 0 \\ 0 & 0 \end{bmatrix} U^*, \tag{1.11}$$
where $D^{-1} = \operatorname{diag}(\mu_1^{-1}, \mu_2^{-1}, \ldots, \mu_r^{-1}) \in M_r$. Furthermore, the spectral norm of $A$ satisfies
$$\|A\|_s = \max_{1 \le i \le r} \mu_i = \mu_1, \qquad \|A^\dagger\|_s = \frac{1}{\mu_r}, \tag{1.12}$$
where $\mu_1$ and $\mu_r$ are, respectively, the largest and smallest singular values of $A$.
Generally speaking, the outer inverse $A^{(2)}_{T,S}$ of a matrix $A \in M_{m,n}$ is the unique matrix $X \in M_{n,m}$ satisfying the following equations (see, e.g., [20, 24-27]):
$$XAX = X, \qquad \operatorname{rang}(X) = T, \qquad \operatorname{null}(X) = S, \tag{1.13}$$
where $T$ is a subspace of $\mathbb{C}^n$ of dimension $s \le r$, and $S$ is a subspace of $\mathbb{C}^m$ of dimension $m - s$.
As seen in [13, 20, 24-29], it is a well-known fact that several important generalized inverses, such as the Moore-Penrose inverse $A^\dagger$, the weighted Moore-Penrose inverse $A^\dagger_{M,N}$, the Drazin inverse $A^D$, and so forth, are all outer generalized inverses $A^{(2)}_{T,S}$ with prescribed range $T$ and null space $S$. In this case, the Moore-Penrose inverse $A^\dagger$ can be represented in outer-inverse form as follows [27]:
$$A^\dagger = A^{(2)}_{\operatorname{rang}(A^*),\, \operatorname{null}(A^*)}. \tag{1.14}$$
Also, the representation and characterization of the outer generalized inverse $A^{(2)}_{T,S}$ have been considered by many authors (see, e.g., [15, 16, 20, 27, 30, 31]). Finally, given two matrices $A = (a_{ij}) \in M_{m,n}$ and $B = (b_{kl}) \in M_{p,q}$, the Kronecker product of $A$ and $B$ is defined by (see, e.g., [5, 7, 32-35])
$$A \otimes B = (a_{ij} B) \in M_{mp,nq}. \tag{1.15}$$
Furthermore, the Kronecker product enjoys the following well-known and important properties:

(i) The Kronecker product is associative and distributive with respect to matrix addition.
(ii) If $A \in M_{m,n}$, $B \in M_{p,q}$, $C \in M_{n,r}$, and $D \in M_{q,s}$, then
$$(A \otimes B)(C \otimes D) = AC \otimes BD. \tag{1.16}$$

(iii) If $A \in M_m$ and $B \in M_p$ are positive definite matrices, then for any real number $r$,
$$(A \otimes B)^r = A^r \otimes B^r. \tag{1.17}$$

(iv) If $A \in M_{m,n}$ and $B \in M_{p,q}$, then
$$(A \otimes B)^\dagger = A^\dagger \otimes B^\dagger. \tag{1.18}$$
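The mixed-product property (ii) is easy to verify numerically. The following is a small self-contained Python sketch (the particular $2 \times 2$ matrices are arbitrary illustrative choices of our own):

```python
# Check of property (ii): (A kron B)(C kron D) = (AC) kron (BD).
def kron(A, B):
    m, n, p, q = len(A), len(A[0]), len(B), len(B[0])
    # (A kron B)[i][j] = A[i // p][j // q] * B[i % p][j % q]
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(n * q)] for i in range(m * p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, 3]]
D = [[1, 2], [0, 1]]

lhs = matmul(kron(A, B), kron(C, D))   # (A kron B)(C kron D), a 4x4 matrix
rhs = kron(matmul(A, C), matmul(B, D)) # (AC) kron (BD)
# lhs and rhs agree entrywise
```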

Convergent Moore-Penrose Inverse of Matrices
First, we consider the computation of $A^\dagger$ by sequence methods. The key to the results below is the following two lemmas, due to Wei [23] and Wei and Wu [17], respectively.
Lemma 2.1. Let $A \in M_{m,n}$ be a matrix. Then
$$A^\dagger = \hat{A}^{-1} A^*, \tag{2.1}$$
where $\hat{A} = A^* A \big|_{\operatorname{rang}(A^*)}$ is the restriction of $A^* A$ to $\operatorname{rang}(A^*)$.

Lemma 2.2. Let $A \in M_{m,n}$ with $\operatorname{rank}(A) = r$ and $\hat{A} = A^* A \big|_{\operatorname{rang}(A^*)}$. Suppose $\Omega$ is an open set such that $\sigma(\hat{A}) \subset \Omega \subset (0, \infty)$. Let $\{S_n(x)\}$ be a family of continuous real-valued functions on $\Omega$ with $\lim_{n \to \infty} S_n(x) = 1/x$ uniformly on $\sigma(\hat{A})$. Then
$$A^\dagger = \lim_{n \to \infty} S_n(\hat{A}) A^*. \tag{2.2}$$
Furthermore,
$$\big\| S_n(\hat{A}) A^* - A^\dagger \big\|_2 \le \sup_{x \in \sigma(\hat{A})} |S_n(x) x - 1| \, \|A^\dagger\|_2. \tag{2.3}$$
Moreover, for each $\lambda \in \sigma(\hat{A})$, we have
$$\mu_r^2 \le \lambda \le \mu_1^2. \tag{2.4}$$

It is well known that the inverse of an invertible operator can be calculated by interpolating the function $1/x$; in a similar manner, we approximate the Moore-Penrose inverse by interpolating the function $1/x$ and using Lemmas 2.1 and 2.2.
One way to produce a family of functions $\{S_n(x)\}$ suitable for use in Lemma 2.2 is to employ the well-known Euler-Knopp method. A series $\sum_{n=0}^{\infty} a_n$ is said to be Euler-Knopp summable with parameter $\alpha > 0$ to the value $a$ if the sequence defined by
$$t_n = \sum_{k=0}^{n} \alpha \sum_{j=0}^{k} \binom{k}{j} \alpha^{j} (1 - \alpha)^{k-j} a_j$$
converges to $a$. In fact, applying this transform to the expansion of $1/x$ leads to the functions
$$S_n(x) = \alpha \sum_{k=0}^{n} (1 - \alpha x)^k,$$
which satisfy $S_{n+1}(x) = (1 - \alpha x) S_n(x) + \alpha$ and $x S_n(x) - 1 = -(1 - \alpha x)^{n+1}$. Iterating on this equality, it follows that if $x$ is confined to a compact subset of $E_\alpha = \{x : 0 < x < 2/\alpha\}$ (2.7), then there is a constant $\beta$ on this compact set with $0 < \beta < 1$ and $|1 - \alpha x| \le \beta$.

According to the variational definition, $A^\dagger b$ is the vector $x \in \mathbb{C}^n$ which minimizes the functional $\|Ax - b\|_2$ and also has the smallest 2-norm among all such minimizing vectors. The idea of Tikhonov's regularization [36, 37] of order zero is to approximately minimize both the functional $\|Ax - b\|_2$ and the norm $\|x\|_2$ by minimizing the functional
$$g_t(x) = \|Ax - b\|_2^2 + t \|x\|_2^2,$$
where $t > 0$. The minimum of this functional occurs at the unique stationary point $u$ of $g_t$, that is, the vector $u$ which satisfies $\nabla g_t(u) = 0$. The gradient of $g_t$ is given by
$$\nabla g_t(x) = 2\big( (A^* A + tI) x - A^* b \big),$$
and hence the unique minimizer $u(t)$ satisfies
$$(A^* A + tI)\, u(t) = A^* b.$$
On intuitive grounds, it seems reasonable to expect that
$$\lim_{t \to 0^+} u(t) = A^\dagger b.$$
Therefore, if we define a sequence of functions $\{S_n(x)\}$ by using the Euler-Knopp method, the Newton-Raphson method, and the idea of Tikhonov's regularization mentioned above, then we obtain the following theorem.
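Before stating the theorem, the Tikhonov limit just described can be checked numerically. The following Python sketch (with an arbitrary full-column-rank $A$ and right-hand side $b$ of our own choosing) computes $u(t) = (A^* A + tI)^{-1} A^* b$ for decreasing $t$ and compares it with the least-squares solution $A^\dagger b$:

```python
# Tikhonov regularization of order zero: u(t) = (A^T A + t I)^{-1} A^T b
# approaches the least-squares solution A† b as t -> 0+.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):                      # inverse of a 2x2 matrix
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3x2, full column rank
b = [[1.0], [2.0], [0.0]]

At = transpose(A)
AtA = matmul(At, A)
Atb = matmul(At, b)
x_ls = matmul(inv2(AtA), Atb)     # A† b, since A has full column rank

u = None
for t in (1.0, 1e-3, 1e-6):
    M = [[AtA[0][0] + t, AtA[0][1]], [AtA[1][0], AtA[1][1] + t]]
    u = matmul(inv2(M), Atb)      # regularized minimizer u(t)
# as t shrinks, u(t) approaches x_ls
```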

Theorem 2.3. Let $A \in M_{m,n}$ with $\operatorname{rank}(A) = r$ and $0 < \alpha < 2\mu_1^{-2}$. Then:

(i) the sequence $\{A_n\}$ defined by
$$A_{n+1} = (I - \alpha A^* A) A_n + \alpha A^*, \qquad A_0 = \alpha A^*, \tag{2.16}$$
converges to $A^\dagger$. Furthermore, the error estimate is given by
$$\|A_n - A^\dagger\|_2 \le \beta^{n+1} \|A^\dagger\|_2, \tag{2.17}$$
where $\beta = \max\{|1 - \alpha \mu_1^2|,\, |1 - \alpha \mu_r^2|\} < 1$;

(ii) the sequence $\{A_n\}$ defined by
$$A_{n+1} = A_n (2I - A A_n), \qquad A_0 = \alpha A^*, \tag{2.18}$$
converges to $A^\dagger$. Furthermore, the error estimate is given by
$$\|A_n - A^\dagger\|_2 \le \beta^{2^n} \|A^\dagger\|_2; \tag{2.19}$$

(iii) for $t > 0$, the matrix $(A^* A + tI)^{-1} A^*$ converges to $A^\dagger$ as $t \to 0^+$. Thus, the error estimate is given by
$$\big\| (A^* A + tI)^{-1} A^* - A^\dagger \big\|_2 \le \frac{t}{\mu_r^2 + t}\, \|A^\dagger\|_2.$$

Proof. (i) We apply Lemma 2.2 if we choose the parameter $\alpha$ in such a way that $(0, \mu_1^2] \subseteq E_\alpha$, where $E_\alpha$ is defined by (2.7). We may choose $\alpha$ such that $0 < \alpha < 2\mu_1^{-2}$. If we use the sequence defined by
$$S_n(x) = \alpha \sum_{k=0}^{n} (1 - \alpha x)^k, \tag{2.22}$$
then it is easy to see from (2.22) that $S_n(\hat{A}) A^* = A_n$, where $A_n$ is given by (2.16). This is surely the case if $0 < \alpha < 2\mu_1^{-2}$; for such $\alpha$, we have the representation
$$A^\dagger = \alpha \sum_{k=0}^{\infty} (I - \alpha A^* A)^k A^*.$$
Note that if we set
$$A_n = \alpha \sum_{k=0}^{n} (I - \alpha A^* A)^k A^*,$$
then we get (2.16).
To derive an error estimate for the Euler-Knopp method, suppose that $0 < \alpha < 2\mu_1^{-2}$. If the sequence $S_n(x)$ is defined as in (2.22), then
$$|x S_n(x) - 1| = |1 - \alpha x|^{n+1} \le \beta^{n+1}, \tag{2.27}$$
where $\beta$ is given by (2.30). Therefore, since $S_0 = \alpha$, the error estimate (2.17) follows from Lemma 2.2.

Huang and Zhang [28] presented a sequence (2.34) convergent to $A^\dagger$, in which $\alpha_{n+1} \in (1, 2)$ is called an acceleration parameter and is chosen so as to minimize the bound on the maximum distance of any nonzero singular value of $A_{n+1} A$ from 1.
They chose $A_0$ according to the first term of the sequence (2.34) with $\alpha_0 = 2/(\mu_1 \mu_r)$, and let $p_0 = \alpha_0 \mu_1$; then the acceleration parameters $\alpha_{n+1}$ and $p_{n+1}$ satisfy the recurrences given in (2.35).
We point out that the iteration (2.18) is a special case of the acceleration iteration (2.34). Further, we note that the above methods, together with the first-order iterative methods used by Ben-Israel and Greville [4] for computing $A^\dagger$, are sets of instructions for generating a sequence $\{A_n : n = 1, 2, 3, \ldots\}$ converging to $A^\dagger$. Similarly, Liu et al. [25] introduced some necessary and sufficient conditions for iterative convergence to the generalized inverse $A^{(2)}_{T,S}$, established its existence, and estimated the error bounds of the iterative methods for approximating $A^{(2)}_{T,S}$ by defining the sequence $\{A_n\}$ as in (2.36). What is the best value $\beta_{\mathrm{opt}}$ such that $\rho(I - \beta A X)$ is minimized, in order to achieve good convergence? Unfortunately, this may be very difficult and still requires further study. If $\sigma(AX)$ is a subset of $\mathbb{R}$ and $\lambda_{\min} = \min\{\lambda : \lambda \in \sigma(AX)\} > 0$, then, analogous to [38, Example 4.1], a suitable $\beta$ can be determined as in (2.38).
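As an illustration of the Newton-Raphson iteration (2.18), the following Python sketch uses an arbitrary $3 \times 2$ matrix of our own choosing, for which $\mu_1^2 = 3$, so $\alpha = 0.5$ satisfies $0 < \alpha < 2\mu_1^{-2}$; the iterates converge quadratically to $A^\dagger$:

```python
# Iteration (2.18): X_0 = alpha A^T, X_{n+1} = X_n (2I - A X_n).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # singular values sqrt(3) and 1
alpha = 0.5                                  # admissible: 0.5 < 2 / mu_1^2
X = [[alpha * x for x in row] for row in transpose(A)]   # X_0 = alpha A^T
I3 = [[float(i == j) for j in range(3)] for i in range(3)]

for _ in range(10):
    AX = matmul(A, X)
    S = [[2.0 * I3[i][j] - AX[i][j] for j in range(3)] for i in range(3)]
    X = matmul(X, S)                         # X_{n+1} = X_n (2I - A X_n)

# X now approximates A† = (1/3) [[2, -1, 1], [-1, 2, 1]]
```

Here $A^\dagger = (A^*A)^{-1}A^*$ since this $A$ has full column rank, which is how the expected limit in the final comment was obtained.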

Convergent Infinite Products of Matrices
Trench [3, Definition 1] defined invertible convergence of an infinite product of matrices $\prod_{m=1}^{\infty} B_m$ in terms of the invertibility of the matrices $B_m$ for all $m > N$, where $N$ is an integer. Here, we give the following less restrictive definitions of convergence of an infinite product.

Definition 3.1. An infinite product $\prod_{m=1}^{\infty} B_m$ of $k \times k$ matrices converges if there is an integer $N$ such that $B_m \ne 0$ for $m \ge N$, and $R = \lim_{M \to \infty} \prod_{m=N}^{M} B_m$ exists and is nonzero. In this case, we define
$$\prod_{m=1}^{\infty} B_m = R \prod_{m=1}^{N-1} B_m.$$
Similarly, an infinite product $\prod_{m=1}^{\infty} \otimes B_m$ converges if there is an integer $N$ such that $B_m \ne 0$ for $m \ge N$, and $\lim_{M \to \infty} \bigotimes_{m=N}^{M} B_m$ exists and is nonzero; in this case, the product is defined analogously. In Definition 3.1, the matrix $R$ may be singular even if $B_m$ is nonsingular for all $m \ge 1$, and $R$ may be singular if $B_m$ is singular for some $m \ge 1$. However, this definition does not require that $B_m$ be invertible for large $m$.
Similarly, it is easy to prove (ii).

The main reason for interest in the products above is to generate matrix sequences for solving matrix problems such as singular linear systems and singular coupled matrix equations. For example, Cao [21] and Shi et al. [22] constructed general stationary and nonstationary iterative processes generated by $\{B_m\}_{m=1}^{\infty}$ for solving the singular linear system $Ax = b$, and Leizarowitz [39] established conditions for weak ergodicity of products, existence of optimal strategies for controlled Markov chains, and growth properties of certain linear nonautonomous differential equations, based on an infinite product $\prod_{m=1}^{\infty} B_m$ of a sequence of stochastic matrices $\{B_m\}_{m=1}^{\infty}$. Also, as discussed in [2], the motivation for Definition 3.1 stems from a question about linear systems of difference equations and coupled matrix equations: under what conditions on $\{B_m\}_{m=1}^{\infty}$ do the solutions of, for instance, the system $x_m = B_m x_{m-1}$, $m = 1, 2, \ldots$, approach a finite nonzero limit whenever $x_0 \ne 0$? A system has linear asymptotic equilibrium if and only if $B_m$ is invertible for every $m \ge 1$ and $\prod_{m=1}^{\infty} B_m$ converges invertibly, while a system has the so-called least-squares linear asymptotic equilibrium if $B_m \ne 0$ for every $m \ge 1$ and $\prod_{m=1}^{\infty} B_m$ converges. Because of Theorem 3.4, we consider only infinite products of the form $\prod_{m=1}^{\infty} (I + A_m)$, where $\lim_{m \to \infty} A_m = 0$. For the partial products we will write
$$P_n = \prod_{m=N}^{n} (I + A_m). \tag{3.14}$$
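The role of the decay condition $\lim_{m \to \infty} A_m = 0$ can be seen numerically. In the Python sketch below (with an arbitrary illustrative matrix $M$ of our own choosing, and $A_m = M/2^m$ so that the norms $\|A_m\|$ are summable), the partial products $\prod_{m=1}^{n}(I + A_m)$ stabilize:

```python
# Partial products P_n = (I + A_1)(I + A_2)...(I + A_n) with A_m = M / 2^m.
# Since the norms ||A_m|| are summable, the partial products converge.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

M = [[0.0, 1.0], [0.5, 0.0]]      # arbitrary illustrative 2x2 matrix
P = [[1.0, 0.0], [0.0, 1.0]]      # start from the identity
snapshots = {}
for m in range(1, 61):
    factor = [[(1.0 if i == j else 0.0) + M[i][j] / 2.0 ** m
               for j in range(2)] for i in range(2)]
    P = matmul(P, factor)
    if m in (30, 60):
        snapshots[m] = [row[:] for row in P]

# the partial products at n = 30 and n = 60 agree to high precision
diff = max(abs(snapshots[30][i][j] - snapshots[60][i][j])
           for i in range(2) for j in range(2))
```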
The following theorem provides the convergence and invertible convergence of the infinite product $\prod_{m=1}^{\infty} (I + A_m)$; the proof is omitted here.

The next theorem relates convergence of an infinite product to the asymptotic behavior of least-squares solutions of a related system of difference equations.
Theorem 3.7. The infinite product $\prod_{m=1}^{\infty} (I + A_m)$ converges if and only if, for some integer $N \ge 1$, the matrix difference equation (3.15) has least-squares solutions with the asymptotic behavior described in (3.16).
Proof. Suppose that $\prod_{m=1}^{\infty} (I + A_m)$ converges. Choose $N$ so that $I + A_m \ne 0$ for $m \ge N$, and define $X_n$ as in (3.17). Then $X_n$ satisfies (3.18), where $X_N \ne 0$; therefore the limit of the $X_n$ exists and is nonzero, which implies that $\prod_{m=1}^{\infty} (I + A_m)$ converges. From (3.14) and (3.18), we obtain the first expression in (3.16), and from (3.14) and (3.20), we obtain the second expression in (3.16).
Theorem 3.7 indicates the connection between convergence of an infinite product of $k \times k$ matrices $\prod_{m=1}^{\infty} (I + A_m)$ and the asymptotic properties of least-squares solutions of the matrix difference equation defined in (3.15). We say that (3.15) has a least-squares linear asymptotic equilibrium if every least-squares solution of (3.15) for which $X_N \ne 0$ approaches $I$ as $n \to \infty$.
For example, Ding and Chen [5, 8] give the iteration (3.25), where $G = [A^T \; D^T]^T$ and $H = [B \; E]$ are full column-rank and full row-rank matrices, respectively; the iteration converges to $X$ and $Y$ for any finite initial values $X_0$ and $Y_0$.
The convergence factor $\mu$ in (3.26) may not be the best and may be conservative. In fact, there exists a best $\mu$ for which a fast convergence rate of $X_k$ to $X$ and $Y_k$ to $Y$ can be obtained, as in the numerical examples given by Cao [21] and Kılıçman and Al-Zhour [8]. How to find the connections between convergence of infinite products of $k \times k$ matrices and least-squares solutions of the coupled Sylvester matrix equations in (3.24) requires further research.
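To make the iterative solution of (3.24) concrete, here is a plain gradient-descent sketch in Python, under our own choices of coefficient matrices and step size $\mu$; it merely illustrates minimizing $\|AX + YB - C\|_F^2 + \|DX + YE - F\|_F^2$ and is not necessarily the exact iteration of Ding and Chen:

```python
# Gradient descent for the coupled Sylvester equations
#   A X + Y B = C,   D X + Y E = F.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def add(A, B, s=1.0):              # entrywise A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

I2 = [[1.0, 0.0], [0.0, 1.0]]
A, B, D = I2, I2, I2
E = [[2.0, 0.0], [0.0, 2.0]]
C = [[1.0, 2.0], [3.0, 4.0]]
F = [[0.0, 1.0], [1.0, 0.0]]
# with these coefficients the exact solution is X = 2C - F, Y = F - C

X = [[0.0] * 2 for _ in range(2)]
Y = [[0.0] * 2 for _ in range(2)]
mu = 0.1                           # fixed step, small enough to converge here
for _ in range(2000):
    R1 = add(add(matmul(A, X), matmul(Y, B)), C, -1.0)   # A X + Y B - C
    R2 = add(add(matmul(D, X), matmul(Y, E)), F, -1.0)   # D X + Y E - F
    X = add(X, add(matmul(transpose(A), R1), matmul(transpose(D), R2)), -mu)
    Y = add(Y, add(matmul(R1, transpose(B)), matmul(R2, transpose(E))), -mu)
# X approaches 2C - F and Y approaches F - C
```

With these particular coefficients the system decouples entrywise into $X + Y = C$, $X + 2Y = F$, so the limit can be verified by hand.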

Numerical Examples
Here, we give some numerical examples for computing the outer inverse $A^{(2)}_{T,S}$ and the Moore-Penrose inverse $A^\dagger$ by applying the sequence methods studied and derived in Section 2. The results in this section are obtained by choosing the Frobenius norm $\|\cdot\|_2$ and using MATLAB software.
Example 4.1. From the iteration (2.5) in [24, Theorem 2.2]: let $A \in \mathbb{C}^{m \times n}$, and let $T$ and $S$ be given subspaces of $\mathbb{C}^n$ and $\mathbb{C}^m$, respectively, such that $A^{(2)}_{T,S}$ exists. Then the sequence $\{A_n\}$ in $\mathbb{C}^{n \times m}$ defined by
$$R_n = P_{AT,S} - P_{AT,S} A A_n, \qquad A_{n+1} = A_0 R_n + A_n, \quad n = 0, 1, 2, \ldots,$$
converges to $A^{(2)}_{T,S}$ if and only if $\operatorname{rang}(A_0) = T$, together with the remaining conditions of [24, Theorem 2.2].
Thus we obtain Tables 1 and 2, respectively. Table 1 illustrates that $\beta = 1.25$ is the best value, in the sense that $\|A^{(2)}_{T,S} - A_n\|$ reaches $2.020636405220133 \times 10^{-16}$ in the least number of steps; the reason is that such a $\beta$ is calculated by using (2.38). Thus, for an appropriate $\beta$, the iteration (2.36) is better than the iteration (4.4) (cf. Tables 1 and 2), and with respect to the error bound the iterations are better in almost all cases. Let us take the error bound smaller than $10^{-16}$; for instance, the number of steps of the iterations in Table 1 is smaller than that of the iterations in Table 2. In practice, however, we also consider the quantity $\|A_n - A_{n-1}\|$ as a stopping criterion, since there exist such cases as $\beta = 1.25$: for example, with the criterion $\|A_n - A_{n-1}\| < \mu \|A_n\|$, where $\mu$ is the machine precision, the iteration for $\beta = 1.25$ needs only 3 steps. Therefore, in general, the iteration (2.36) is better than the iteration (4.4) for an appropriate $\beta$. Note that the iterations in both Tables 1 and 2 indicate faster convergence for the quantity $\|A^{(2)}_{T,S} - A_n\|$ than for the quantity $R(\beta, n)$ in Table 1 and the quantity $R_n$ in Table 2.

Example 4.2. Then, by computing, we obtain the results in Tables 3 and 4.

Example 4.3. We generate a random matrix $A \in \mathbb{C}^{100 \times 80}$ by using MATLAB, and we obtain the results in Tables 5 and 6.
Note from Tables 3, 4, 5, and 6 that the quantities $\|XAX - X\|$, $\|AXA - A\|$, $\|(AX)^* - AX\|$, and $\|(XA)^* - XA\|$ become smaller and smaller and go to zero as $n$ increases in both iterations (2.34) and (2.18). We can also conclude that both iterations have almost the same speed of convergence when the dimension of an arbitrary matrix $A$ is not so large, but the acceleration iteration (2.34) is better than the iteration (2.18) when the dimension of $A$ is large, with an appropriate acceleration parameter $\alpha_{n+1} \in (1, 2)$.

Concluding Remarks
In this paper, we have studied some matrix sequences converging to the Moore-Penrose inverse $A^\dagger$ and the outer inverse $A^{(2)}_{T,S}$ of an arbitrary matrix $A \in M_{m,n}$. The key to deriving matrix sequences convergent to the weighted Drazin and weighted Moore-Penrose inverses is Lemma 2.2. Some sufficient conditions for the infinite products $\prod_{m=1}^{\infty} B_m$ and $\prod_{m=1}^{\infty} \otimes B_m$ of $k \times k$ matrices are also derived. In our opinion, it is worth establishing connections between the convergence of infinite products of $k \times k$ matrices and least-squares solutions of linear singular systems as well as singular coupled matrix equations.

Acknowledgments
The authors express their sincere thanks to the referee(s) for a careful reading of the manuscript and several helpful suggestions. The authors also gratefully acknowledge that this research was partially supported by the Ministry of Science, Technology and Innovation (MOSTI), Malaysia, under the e-Science Grant 06-01-04-SF1050.

Definition 3.2. Let $\{B_m\}$ be a sequence of $k \times k$ matrices. An infinite product $\prod_{m=1}^{\infty} \otimes B_m$ is said to be invertibly convergent if there is an integer $N$ such that $B_m$ is invertible for $m \ge N$, and $\lim_{M \to \infty} \bigotimes_{m=N}^{M} B_m$ exists and is invertible.

Definitions 3.1 and 3.2 have the following obvious consequence.

Theorem 3.3. Let $\{B_m\}$ be a sequence of $k \times k$ matrices such that the infinite products $\prod_{m=1}^{\infty} B_m$ and $\prod_{m=1}^{\infty} \otimes B_m$ are invertibly convergent. Then both infinite products are convergent; the converse, in general, is not true.

If the infinite products $\prod_{m=1}^{\infty} B_m$ and $\prod_{m=1}^{\infty} \otimes B_m$ are invertibly convergent as in Theorem 3.4, then we get Corollary 3.5.

Ding and Chen [5], and later Kılıçman and Al-Zhour [8], studied the convergence of least-squares solutions to the coupled Sylvester matrix equations
$$AX + YB = C, \qquad DX + YE = F, \tag{3.24}$$
where $A, D \in M_m$, $B, E \in M_n$, and $C, F \in M_{m,n}$ are given constant matrices, and $X, Y \in M_{m,n}$ are the unknown matrices to be solved. If the coupled Sylvester matrix equations determined by (3.24) have a unique solution $X$ and $Y$, then the iterative solutions $X_{n+1}$ and $Y_{n+1}$ converge to $X$ and $Y$.

Proof of Theorem 2.3 (continued): since $0 < \beta < 1$, from Lemma 2.2 we establish (2.17). (ii) Using the Newton-Raphson iterations (2.8)-(2.11) in conjunction with Lemma 2.2, we see that the sequence $\{S_n(\hat{A})\}$ defined by
$$S_{n+1}(\hat{A}) = S_n(\hat{A}) \big( 2I - A^* A\, S_n(\hat{A}) \big) \tag{2.32}$$
has the property that $\lim_{n \to \infty} S_n(\hat{A})\, A^* A = I$ uniformly on $\operatorname{rang}(A^*)$. If we set $A_n = S_n(\hat{A}) A^*$, then we get (2.18). If $x \in \sigma(\hat{A})$ and $0 < \alpha < 2\mu_1^{-2}$, then $|1 - \alpha x| \le \beta$, where $\beta$ is given by (2.30). It follows as in (2.11), and hence from Lemma 2.2 we get the error bound as in (2.19).

Table 1: Results for Example 4.1 using the iterations (2.36) and (2.37) for $\beta = 1.25$.

Table 2: Results for Example 4.1 using the iterations (4.4) and (4.5).
Each of $R(\beta, n)$ and $R_n$ is an upper bound for the quantity $\|A^{(2)}_{T,S} - A_n\|$.

Table 3: Results for Example 4.2 using the acceleration iteration (2.34).

Table 4: Results for Example 4.2 using the iteration (2.18).

Table 5: Results for a random matrix $A \in \mathbb{C}^{100 \times 80}$ using the acceleration iteration (2.34).

Table 6: Results for a random matrix $A \in \mathbb{C}^{100 \times 80}$ using the iteration (2.18).