Journal of Applied Mathematics (Hindawi Publishing Corporation), Volume 2013, Article ID 680975, doi:10.1155/2013/680975

Research Article

On Nonnegative Moore-Penrose Inverses of Perturbed Matrices

Shani Jose and K. C. Sivakumar
Department of Mathematics, Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036, India

Received 22 April 2013; Accepted 7 May 2013

Academic Editor: Yang Zhang

Copyright © 2013 Shani Jose and K. C. Sivakumar. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Nonnegativity of the Moore-Penrose inverse of a perturbation of the form $A - XGY^T$ is considered when $A^\dagger \geq 0$. Using a generalized version of the Sherman-Morrison-Woodbury formula, conditions for $(A - XGY^T)^\dagger$ to be nonnegative are derived. Applications of the results are presented briefly. Iterative versions of the results are also studied.

1. Introduction

We consider the problem of characterizing nonnegativity of the Moore-Penrose inverse for matrix perturbations of the type $A - XGY^T$, when the Moore-Penrose inverse of $A$ is nonnegative. Here, we say that a matrix $B = (b_{ij})$ is nonnegative, and denote it by $B \geq 0$, if $b_{ij} \geq 0$ for all $i, j$. This problem was motivated by the results in [1], where the authors consider an $M$-matrix $A$ and find sufficient conditions for the perturbed matrix $A - XY^T$ to be an $M$-matrix. Let us recall that a matrix $B = (b_{ij})$ is said to be a $Z$-matrix if $b_{ij} \leq 0$ for all $i \neq j$. An $M$-matrix is a nonsingular $Z$-matrix with nonnegative inverse. The authors in [1] use the well-known Sherman-Morrison-Woodbury (SMW) formula as one of the important tools to prove their main result. The SMW formula gives an expression for the inverse of $A - XY^T$ in terms of the inverse of $A$, when it exists. When $A$ is nonsingular, $A - XY^T$ is nonsingular if and only if $I - Y^TA^{-1}X$ is nonsingular. In that case,
$$(A - XY^T)^{-1} = A^{-1} + A^{-1}X(I - Y^TA^{-1}X)^{-1}Y^TA^{-1}. \tag{1}$$
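As a quick numerical sanity check of (1) (not part of the original argument), the identity can be verified on a small example; the matrices below are arbitrary choices made for this sketch.

```python
import numpy as np

# Sanity check of the SMW formula (1) on an arbitrarily chosen
# nonsingular A and a rank-2 perturbation X Y^T (illustrative data only).
rng = np.random.default_rng(0)
A = 20 * np.eye(4) + rng.random((4, 4))   # strictly diagonally dominant, hence nonsingular
X = rng.random((4, 2))
Y = rng.random((4, 2))

A_inv = np.linalg.inv(A)
S = np.eye(2) - Y.T @ A_inv @ X           # I - Y^T A^{-1} X
smw = A_inv + A_inv @ X @ np.linalg.inv(S) @ Y.T @ A_inv
direct = np.linalg.inv(A - X @ Y.T)
err = np.max(np.abs(smw - direct))
```

The dominant diagonal keeps both $A$ and $A - XY^T$ safely nonsingular, so the two inverses agree to machine precision.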

The main objective of the present work is to study certain structured perturbations $A - XY^T$ of matrices $A$ such that the Moore-Penrose inverse of the perturbation is nonnegative whenever the Moore-Penrose inverse of $A$ is nonnegative. Clearly, this class of matrices includes the class of matrices that have nonnegative inverses, in particular $M$-matrices. In our approach, extensions of the SMW formula for singular matrices play a crucial role. Let us mention that this problem has been studied in the literature (see, for instance, [2] for matrices and [3] for operators over Hilbert spaces). We refer the reader to the references in the latter for other recent extensions.

In this paper, we first present alternative proofs of generalizations of the SMW formula for the cases of the Moore-Penrose inverse (Theorem 5) and the group inverse (Theorem 6) in Section 3. In Section 4, we characterize the nonnegativity of $(A - XGY^T)^\dagger$. This is done in Theorem 9, which is one of the main results of the present work. As a consequence, we present a result for $M$-matrices which seems new. We present a couple of applications of the main result in Theorems 13 and 15. In the concluding section, we study iterative versions of the results of Section 4. We prove two characterizations for $(A - XY^T)^\dagger$ to be nonnegative in Theorems 18 and 21.

Before concluding this introductory section, let us give a motivation for the work that we have undertaken here. It is a well-documented fact that $M$-matrices arise quite often in solving sparse systems of linear equations. An extensive theory of $M$-matrices has been developed relative to their role in numerical analysis involving the notion of splitting in iterative methods and discretization of differential equations, in the mathematical modeling of an economy, in optimization, and in Markov chains [4, 5]. Specifically, the inspiration for the present study comes from the work of [1], where the authors consider a system of linear inequalities arising out of a problem in third generation wireless communication systems. The matrix defining the inequalities there is an $M$-matrix. In the likelihood that the matrix of this problem is singular (due to truncation or round-off errors), the earlier method becomes inapplicable. Our endeavour is to extend the applicability of those results to more general matrices, for instance, matrices with nonnegative Moore-Penrose inverses. Finally, as mentioned earlier, since matrices with nonnegative generalized inverses include $M$-matrices in particular, our results are expected to enlarge the applicability of the methods presently available for $M$-matrices, even in a very general framework, including the specific problem mentioned above.

2. Preliminaries

Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{m \times n}$ denote the set of all real numbers, the $n$-dimensional real Euclidean space, and the set of all $m \times n$ matrices over $\mathbb{R}$, respectively. For $A \in \mathbb{R}^{m \times n}$, let $R(A)$, $N(A)$, $R(A)^{\perp}$, and $A^T$ denote the range space, the null space, the orthogonal complement of the range space, and the transpose of the matrix $A$, respectively. For $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, we say that $x$ is nonnegative, that is, $x \geq 0$, if and only if $x_i \geq 0$ for all $i = 1, 2, \ldots, n$. As mentioned earlier, for a matrix $B$ we use $B \geq 0$ to denote that all the entries of $B$ are nonnegative. Also, we write $A \leq B$ if $B - A \geq 0$.

Let $\rho(A)$ denote the spectral radius of the matrix $A$. If $\rho(A) < 1$ for $A \in \mathbb{R}^{n \times n}$, then $I - A$ is invertible. The next result gives a necessary and sufficient condition for the nonnegativity of $(I - A)^{-1}$. This will be one of the results that will be used in proving the first main result.

Lemma 1 (see [5, Lemma 2.1, Chapter 6]).

Let $A \in \mathbb{R}^{n \times n}$ be nonnegative. Then, $\rho(A) < 1$ if and only if $(I - A)^{-1}$ exists and
$$(I - A)^{-1} = \sum_{k=0}^{\infty} A^k \geq 0. \tag{2}$$
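Lemma 1 is easy to observe numerically; the nonnegative matrix below is an arbitrary choice for this illustration.

```python
import numpy as np

# Lemma 1 in action: A >= 0 with rho(A) < 1, so (I - A)^{-1} equals the
# entrywise-nonnegative Neumann series sum_{k>=0} A^k.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(A)))      # spectral radius, here 0.5
inv = np.linalg.inv(np.eye(2) - A)

S = np.zeros_like(A)                      # partial sums of the Neumann series
term = np.eye(2)
for _ in range(200):
    S += term
    term = term @ A
```

After 200 terms the truncation error is of order $0.5^{200}$, so the partial sum and the inverse agree to machine precision.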

More generally, matrices having nonnegative inverses are characterized using a property called monotonicity. The notion of monotonicity was introduced by Collatz [6]. A real $n \times n$ matrix $A$ is called monotone if $Ax \geq 0 \Rightarrow x \geq 0$. It was proved by Collatz [6] that $A$ is monotone if and only if $A^{-1}$ exists and $A^{-1} \geq 0$.

One of the frequently used tools in studying monotone matrices is the notion of a regular splitting. We only refer the reader to the book [5] for more details on the relationship between these concepts.

The notion of monotonicity has been extended in a variety of ways to singular matrices using generalized inverses. First, let us briefly review the notions of two important generalized inverses.

For $A \in \mathbb{R}^{m \times n}$, the Moore-Penrose inverse is the unique matrix $Z \in \mathbb{R}^{n \times m}$ satisfying the Penrose equations: $AZA = A$, $ZAZ = Z$, $(AZ)^T = AZ$, and $(ZA)^T = ZA$. The Moore-Penrose inverse of $A$ is denoted by $A^\dagger$, and it coincides with $A^{-1}$ when $A$ is invertible.
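The four Penrose equations can be checked numerically; the rank-one rectangular matrix below is an arbitrary choice for this check.

```python
import numpy as np

# Check the four Penrose equations for Z = pinv(A), with A singular (rank 1).
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
Z = np.linalg.pinv(A)

eq1 = np.allclose(A @ Z @ A, A)       # AZA = A
eq2 = np.allclose(Z @ A @ Z, Z)       # ZAZ = Z
eq3 = np.allclose((A @ Z).T, A @ Z)   # (AZ)^T = AZ
eq4 = np.allclose((Z @ A).T, Z @ A)   # (ZA)^T = ZA
```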

The following theorem by Desoer and Whalen, which is used in the sequel, gives an equivalent definition for the Moore-Penrose inverse. Let us mention that this result was proved for operators between Hilbert spaces.

Theorem 2 (see [7]).

Let $A \in \mathbb{R}^{m \times n}$. Then $A^\dagger$ is the unique matrix $Z \in \mathbb{R}^{n \times m}$ satisfying:

(i) $ZAx = x$, for all $x \in R(A^T)$,

(ii) $Zy = 0$, for all $y \in N(A^T)$.

Now, for $A \in \mathbb{R}^{n \times n}$, any $Z \in \mathbb{R}^{n \times n}$ satisfying the equations $AZA = A$, $ZAZ = Z$, and $AZ = ZA$ is called the group inverse of $A$; it is denoted by $A^{\#}$. The group inverse does not exist for every matrix, but whenever it exists, it is unique. A necessary and sufficient condition for the existence of the group inverse of $A$ is that the index of $A$ is $1$, where the index of a matrix is the smallest positive integer $k$ such that $\operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k)$.

Some of the well-known properties of the Moore-Penrose inverse and the group inverse are given as follows: $R(A^T) = R(A^\dagger)$, $N(A^T) = N(A^\dagger)$, $A^\dagger A = P_{R(A^T)}$, and $AA^\dagger = P_{R(A)}$. In particular, $x \in R(A^T)$ if and only if $x = A^\dagger Ax$. Also, $R(A) = R(A^{\#})$, $N(A) = N(A^{\#})$, and $A^{\#}A = AA^{\#} = P_{R(A), N(A)}$. Here, for complementary subspaces $L$ and $M$ of $\mathbb{R}^k$, $P_{L,M}$ denotes the projection of $\mathbb{R}^k$ onto $L$ along $M$; $P_L$ denotes $P_{L,M}$ if $M = L^{\perp}$. For details, we refer the reader to the book [8].

In matrix analysis, a decomposition (splitting) of a matrix is considered in order to study the convergence of iterative schemes that are used in the solution of linear systems of algebraic equations. As mentioned earlier, regular splittings are useful in characterizing matrices with nonnegative inverses, whereas proper splittings are used for studying singular systems of linear equations. Let us next recall this notion. For a matrix $A \in \mathbb{R}^{m \times n}$, a decomposition $A = U - V$ is called a proper splitting [9] if $R(A) = R(U)$ and $N(A) = N(U)$. It is rather well known that a proper splitting exists for every matrix and that it can be obtained using a full-rank factorization of the matrix. For details, we refer to [9]. Certain properties of a proper splitting are collected in the next result.

Theorem 3 (see [9, Theorem 1]).

Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$. Then,

(i) $A = U(I - U^\dagger V)$,

(ii) $I - U^\dagger V$ is nonsingular, and

(iii) $A^\dagger = (I - U^\dagger V)^{-1}U^\dagger$.
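A minimal numerical sketch of Theorem 3: for any $A$, the trivial choice $U = 2A$, $V = A$ is a proper splitting ($R(U) = R(A)$ and $N(U) = N(A)$ hold automatically), so the three assertions can be checked directly; the singular matrix below is an arbitrary choice for this sketch.

```python
import numpy as np

# Theorem 3 on the trivial proper splitting U = 2A, V = A of a singular A.
A = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [1., 0., 1.]])            # rank 2, hence not invertible
U, V = 2 * A, A
U_pinv = np.linalg.pinv(U)

T = np.eye(3) - U_pinv @ V              # I - U^dagger V
ok_i   = np.allclose(A, U @ T)          # (i)   A = U(I - U^dagger V)
ok_ii  = abs(np.linalg.det(T)) > 1e-12  # (ii)  I - U^dagger V nonsingular
ok_iii = np.allclose(np.linalg.pinv(A),
                     np.linalg.inv(T) @ U_pinv)   # (iii)
```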

The following result by Berman and Plemmons [9] gives a characterization for $A^\dagger$ to be nonnegative when $A$ has a proper splitting. This result will be used in proving our first main result.

Theorem 4 (see [9, Corollary 4]).

Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$, where $U^\dagger \geq 0$ and $U^\dagger V \geq 0$. Then $A^\dagger \geq 0$ if and only if $\rho(U^\dagger V) < 1$.

3. Extensions of the SMW Formula for Generalized Inverses

The primary objects of consideration in this paper are generalized inverses of perturbations of certain types of a matrix $A$. Naturally, extensions of the SMW formula for generalized inverses are relevant in the proofs. In what follows, we present two generalizations of the SMW formula for matrices. We would like to emphasize that our proofs also carry over to infinite dimensional spaces (the proof of the first result carries over verbatim, and the proof of the second with slight modifications, applicable to range spaces instead of ranks of the operators concerned). However, we confine our attention to the case of matrices. Let us also add that these results have been proved in [3] for operators over infinite dimensional spaces. We have chosen to include them here as our proofs are different from the ones in [3] and our intention is to provide a self-contained treatment.

Theorem 5 (see [3, Theorem 2.1]).

Let $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{m \times k}$, and $Y \in \mathbb{R}^{n \times k}$ be such that
$$R(X) \subseteq R(A), \qquad R(Y) \subseteq R(A^T). \tag{3}$$
Let $G \in \mathbb{R}^{k \times k}$, and let $\Omega = A - XGY^T$. If
$$R(X^T) \subseteq R\big((G^\dagger - Y^TA^\dagger X)^T\big), \quad R(Y^T) \subseteq R(G^\dagger - Y^TA^\dagger X), \quad R(X^T) \subseteq R(G), \quad R(Y^T) \subseteq R(G^T), \tag{4}$$
then
$$\Omega^\dagger = A^\dagger + A^\dagger X(G^\dagger - Y^TA^\dagger X)^\dagger Y^TA^\dagger. \tag{5}$$

Proof.

Set $Z = A^\dagger + A^\dagger XH^\dagger Y^TA^\dagger$, where $H = G^\dagger - Y^TA^\dagger X$. From the conditions (3) and (4), it follows that $AA^\dagger X = X$, $A^\dagger AY = Y$, $H^\dagger HX^T = X^T$, $HH^\dagger Y^T = Y^T$, $GG^\dagger X^T = X^T$, and $G^\dagger GY^T = Y^T$.

Now,
$$A^\dagger XH^\dagger Y^T - A^\dagger XH^\dagger Y^TA^\dagger XGY^T = A^\dagger XH^\dagger\big(G^\dagger GY^T - Y^TA^\dagger XGY^T\big) = A^\dagger XH^\dagger HGY^T = A^\dagger XGY^T. \tag{6}$$
Thus, $Z\Omega = A^\dagger A$. Since $R(\Omega^T) \subseteq R(A^T)$, it follows that $Z\Omega x = x$ for all $x \in R(\Omega^T)$.

Let $y \in N(\Omega^T)$. Then, $A^Ty - YG^TX^Ty = 0$, so that $X^T(A^\dagger)^T(A^T - YG^TX^T)y = 0$. Substituting $X^T = GG^\dagger X^T$ and simplifying, we get $H^TG^TX^Ty = 0$. Also, $A^Ty = YG^TX^Ty = Y(H^\dagger)^TH^TG^TX^Ty = 0$, and so $A^\dagger y = 0$. Thus, $Zy = 0$ for $y \in N(\Omega^T)$. Hence, by Theorem 2, $Z = \Omega^\dagger$.

The result for the group inverse follows.

Theorem 6.

Let $A \in \mathbb{R}^{n \times n}$ be such that $A^{\#}$ exists. Let $X, Y \in \mathbb{R}^{n \times k}$, and let $G \in \mathbb{R}^{k \times k}$ be nonsingular. Assume that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} - Y^TA^{\#}X$ is nonsingular. Suppose that $\operatorname{rank}(A - XGY^T) = \operatorname{rank}(A)$. Then, $(A - XGY^T)^{\#}$ exists and the following formula holds:
$$(A - XGY^T)^{\#} = A^{\#} + A^{\#}X(G^{-1} - Y^TA^{\#}X)^{-1}Y^TA^{\#}. \tag{7}$$
Conversely, if $(A - XGY^T)^{\#}$ exists, then the formula above holds, and we have $\operatorname{rank}(A - XGY^T) = \operatorname{rank}(A)$.

Proof.

Since $A^{\#}$ exists, $R(A)$ and $N(A)$ are complementary subspaces of $\mathbb{R}^n$.

Suppose that $\operatorname{rank}(A - XGY^T) = \operatorname{rank}(A)$. As $R(X) \subseteq R(A)$, it follows that $R(A - XGY^T) \subseteq R(A)$, and so, by the equality of ranks, $R(A - XGY^T) = R(A)$. By the rank-nullity theorem, the nullities of $A - XGY^T$ and $A$ are the same. Again, since $R(Y) \subseteq R(A^T)$, it follows that $N(A - XGY^T) = N(A)$. Thus, $R(A - XGY^T)$ and $N(A - XGY^T)$ are complementary subspaces. This guarantees the existence of the group inverse of $A - XGY^T$.

Conversely, suppose that $(A - XGY^T)^{\#}$ exists. It can be verified by direct computation that $Z = A^{\#} + A^{\#}X(G^{-1} - Y^TA^{\#}X)^{-1}Y^TA^{\#}$ is the group inverse of $A - XGY^T$. Also, we have $(A - XGY^T)(A - XGY^T)^{\#} = AA^{\#}$, so that $R(A - XGY^T) = R(A)$, and hence $\operatorname{rank}(A - XGY^T) = \operatorname{rank}(A)$.

We conclude this section with a fairly old result [2] as a consequence of Theorem 5.

Theorem 7 (see [2, Theorem 15]).

Let $A \in \mathbb{R}^{m \times n}$ be of rank $r$, $X \in \mathbb{R}^{m \times r}$, and $Y \in \mathbb{R}^{n \times r}$. Let $G$ be an $r \times r$ nonsingular matrix. Assume that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} - Y^TA^\dagger X$ is nonsingular. Let $\Omega = A - XGY^T$. Then
$$\Omega^\dagger = (A - XGY^T)^\dagger = A^\dagger + A^\dagger X(G^{-1} - Y^TA^\dagger X)^{-1}Y^TA^\dagger. \tag{8}$$
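Formula (8) can be checked numerically on a singular matrix; as an arbitrary choice for this sketch, we use the rank-2 matrix that also appears in Example 17 below, with $k = 1$ and $G = (1)$.

```python
import numpy as np

# Check (8): Omega^dagger = A^dagger + A^dagger X (G^{-1} - Y^T A^dagger X)^{-1} Y^T A^dagger,
# with R(X) in R(A) and R(Y) in R(A^T) holding for this data.
A = np.array([[ 1., -1.,  1.],
              [-1.,  4., -1.],
              [ 1., -1.,  1.]]) / 3.0   # rank 2
X = np.array([[1/3], [1/6], [1/3]])     # lies in R(A)
Y = X.copy()                            # A is symmetric, so R(A^T) = R(A)
G = np.array([[1.0]])

A_pinv = np.linalg.pinv(A)
S = np.linalg.inv(G) - Y.T @ A_pinv @ X # G^{-1} - Y^T A^dagger X  (= 5/12 here)
formula = A_pinv + A_pinv @ X @ np.linalg.inv(S) @ Y.T @ A_pinv
direct = np.linalg.pinv(A - X @ G @ Y.T)
err = np.max(np.abs(formula - direct))
```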

4. Nonnegativity of $(A - XGY^T)^\dagger$

In this section, we consider perturbations of the form $A - XGY^T$ and derive characterizations for $(A - XGY^T)^\dagger$ to be nonnegative when $A^\dagger \geq 0$, $X \geq 0$, $Y \geq 0$, and $G \geq 0$. In order to motivate the first main result of this paper, let us recall the following well-known characterization of $M$-matrices [5].

Theorem 8.

Let $A$ be a $Z$-matrix with the representation $A = sI - B$, where $B \geq 0$ and $s > 0$. Then the following statements are equivalent:

(a) $A^{-1}$ exists and $A^{-1} \geq 0$.

(b) There exists $x > 0$ such that $Ax > 0$.

(c) $\rho(B) < s$.

Let us prove the first result of this article. This extends Theorem 8 to singular matrices. We will be interested in extensions of conditions (a) and (c) only.

Theorem 9.

Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$ with $U^\dagger \geq 0$, $U^\dagger V \geq 0$, and $\rho(U^\dagger V) < 1$. Let $G \in \mathbb{R}^{k \times k}$ be nonsingular and nonnegative, and let $X \in \mathbb{R}^{m \times k}$ and $Y \in \mathbb{R}^{n \times k}$ be nonnegative such that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} - Y^TA^\dagger X$ is nonsingular. Let $\Omega = A - XGY^T$. Then, the following are equivalent:

(a) $\Omega^\dagger \geq 0$.

(b) $(G^{-1} - Y^TA^\dagger X)^{-1} \geq 0$.

(c) $\rho(U^\dagger W) < 1$, where $W = V + XGY^T$.

Proof.

First, we observe that since $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $G^{-1} - Y^TA^\dagger X$ is nonsingular, by Theorem 7 we have
$$\Omega^\dagger = A^\dagger + A^\dagger X(G^{-1} - Y^TA^\dagger X)^{-1}Y^TA^\dagger. \tag{9}$$
We thus have $\Omega^\dagger\Omega = A^\dagger A$ and $\Omega\Omega^\dagger = AA^\dagger$. Therefore,
$$R(\Omega) = R(A), \qquad N(\Omega) = N(A). \tag{10}$$
Note that the hypotheses on the proper splitting also imply that $A^\dagger \geq 0$, by Theorem 4.

$(a) \Rightarrow (b)$: By taking $E = A$ and $F = XGY^T$, we get $\Omega = E - F$ as a proper splitting for $\Omega$ such that $E^\dagger = A^\dagger \geq 0$ and $E^\dagger F = A^\dagger XGY^T \geq 0$ (since $XGY^T \geq 0$). Since $\Omega^\dagger \geq 0$, by Theorem 4, we have $\rho(A^\dagger XGY^T) = \rho(E^\dagger F) < 1$. This implies that $\rho(Y^TA^\dagger XG) < 1$. We also have $Y^TA^\dagger XG \geq 0$. Thus, by Lemma 1, $(I - Y^TA^\dagger XG)^{-1}$ exists and is nonnegative. But, we have $I - Y^TA^\dagger XG = (G^{-1} - Y^TA^\dagger X)G$. Now, $0 \leq (I - Y^TA^\dagger XG)^{-1} = G^{-1}(G^{-1} - Y^TA^\dagger X)^{-1}$. This implies that $(G^{-1} - Y^TA^\dagger X)^{-1} \geq 0$, since $G \geq 0$. This proves (b).

$(b) \Rightarrow (c)$: We have $U - W = U - V - XGY^T = A - XGY^T = \Omega$. Also, $R(\Omega) = R(A) = R(U)$ and $N(\Omega) = N(A) = N(U)$, by (10). So, $\Omega = U - W$ is a proper splitting. Also, $U^\dagger \geq 0$ and $U^\dagger W = U^\dagger(V + XGY^T) \geq U^\dagger V \geq 0$. Since $(G^{-1} - Y^TA^\dagger X)^{-1} \geq 0$, it follows from (9) that $\Omega^\dagger \geq 0$. (c) now follows from Theorem 4.

$(c) \Rightarrow (a)$: Since $A = U - V$, we have $\Omega = U - V - XGY^T = U - W$. Also, we have $U^\dagger \geq 0$ and $U^\dagger V \geq 0$; thus $U^\dagger W \geq 0$, since $XGY^T \geq 0$. Now, by Theorem 4, we are done if the splitting $\Omega = U - W$ is a proper splitting. Since $A = U - V$ is a proper splitting, we have $R(U) = R(A)$ and $N(U) = N(A)$. Now, from the conditions in (10), we get $R(U) = R(\Omega)$ and $N(U) = N(\Omega)$. Hence $\Omega = U - W$ is a proper splitting, and this completes the proof.

The following result is a special case of Theorem 9, obtained by taking $G = I$.

Theorem 10.

Let $A = U - V$ be a proper splitting of $A \in \mathbb{R}^{m \times n}$ with $U^\dagger \geq 0$, $U^\dagger V \geq 0$, and $\rho(U^\dagger V) < 1$. Let $X \in \mathbb{R}^{m \times k}$ and $Y \in \mathbb{R}^{n \times k}$ be nonnegative such that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $I - Y^TA^\dagger X$ is nonsingular. Let $\Omega = A - XY^T$. Then the following are equivalent:

(a) $\Omega^\dagger \geq 0$.

(b) $(I - Y^TA^\dagger X)^{-1} \geq 0$.

(c) $\rho(U^\dagger W) < 1$, where $W = V + XY^T$.
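The three equivalent statements can be observed together on a small example. As an arbitrary choice for this sketch, we use the rank-one data of Example 17 below, with the trivial proper splitting $U = A$, $V = 0$ (so that $U^\dagger W = A^\dagger xy^T$).

```python
import numpy as np

# Theorem 10 with U = A, V = 0, X = x, Y = y: statements (a), (b), (c)
# all hold simultaneously for this data.
A = np.array([[ 1., -1.,  1.],
              [-1.,  4., -1.],
              [ 1., -1.,  1.]]) / 3.0
x = np.array([1/3, 1/6, 1/3])
y = x.copy()

A_pinv = np.linalg.pinv(A)
stmt_b = 1.0 / (1.0 - y @ A_pinv @ x)          # (I - y^T A^dagger x)^{-1}, a scalar here
omega_pinv = np.linalg.pinv(A - np.outer(x, y))
W = np.outer(x, y)                              # V + X Y^T with V = 0
rho = max(abs(np.linalg.eigvals(A_pinv @ W)))   # rho(U^dagger W), equals y^T A^dagger x

a_holds = (omega_pinv >= -1e-9).all()           # (a), up to roundoff
b_holds = stmt_b > 0                            # (b)
c_holds = rho < 1                               # (c)
```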

The following consequence of Theorem 10 appears to be new. This gives two characterizations for a perturbed M-matrix to be an M-matrix.

Corollary 11.

Let $A = sI - B$, where $B \geq 0$ and $\rho(B) < s$ (i.e., $A$ is an $M$-matrix). Let $X, Y \in \mathbb{R}^{n \times k}$ be nonnegative such that $I - Y^TA^{-1}X$ is nonsingular. Let $\Omega = A - XY^T$. Then the following are equivalent:

(a) $\Omega^{-1} \geq 0$.

(b) $(I - Y^TA^{-1}X)^{-1} \geq 0$.

(c) $\rho(B + XY^T) < s$.

Proof.

From the proof of Theorem 10, since $A$ is nonsingular, it follows that $\Omega^\dagger\Omega = I$ and $\Omega\Omega^\dagger = I$. This shows that $\Omega$ is invertible. The rest of the proof is omitted, as it is an easy consequence of the previous result applied to the proper splitting $A = sI - B$ (that is, $U = sI$ and $V = B$).

In the rest of this section, we discuss two applications of Theorem 10. First, we characterize the least element in a polyhedral set defined by a perturbed matrix. Next, we consider the following question. Suppose that the "endpoints" of an interval matrix satisfy a certain positivity property; then all matrices in a particular subset of the interval also satisfy that positivity condition. If we are now given a specific structured perturbation of these endpoints, what conditions guarantee that the positivity property remains valid for the corresponding subset of the perturbed interval?

The first result is motivated by Theorem 12 below. Let us recall that, with respect to the usual order, an element $x^* \in \mathcal{X} \subseteq \mathbb{R}^n$ is called a least element of $\mathcal{X}$ if it satisfies $x^* \leq x$ for all $x \in \mathcal{X}$. Note that a nonempty set may not have a least element; if it exists, then it is unique. In this connection, the following result is known.

Theorem 12 (see [11, Theorem 3.2]).

For $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, let
$$\mathcal{X}_b = \{x \in \mathbb{R}^n : Ax + y \geq b,\; P_{N(A)}x = 0,\; P_{R(A)}y = 0,\ \text{for some } y \in \mathbb{R}^m\}. \tag{11}$$
Then, a vector $x^*$ is the least element of $\mathcal{X}_b$ if and only if $x^* = A^\dagger b \in \mathcal{X}_b$ with $A^\dagger \geq 0$.

Now, we obtain the nonnegative least element of a polyhedral set defined by a perturbed matrix. This is an immediate application of Theorem 10.

Theorem 13.

Let $A \in \mathbb{R}^{m \times n}$ be such that $A^\dagger \geq 0$. Let $X \in \mathbb{R}^{m \times k}$ and $Y \in \mathbb{R}^{n \times k}$ be nonnegative, such that $R(X) \subseteq R(A)$, $R(Y) \subseteq R(A^T)$, and $I - Y^TA^\dagger X$ is nonsingular. Suppose that $(I - Y^TA^\dagger X)^{-1} \geq 0$. For $b \in \mathbb{R}^m$, $b \geq 0$, let
$$\mathcal{S}_b = \{x \in \mathbb{R}^n : \Omega x + y \geq b,\; P_{N(\Omega)}x = 0,\; P_{R(\Omega)}y = 0,\ \text{for some } y \in \mathbb{R}^m\}, \tag{12}$$
where $\Omega = A - XY^T$. Then, $x^* = (A - XY^T)^\dagger b$ is the least element of $\mathcal{S}_b$.

Proof.

From the assumptions, using Theorem 10, it follows that $\Omega^\dagger \geq 0$. The conclusion now follows from Theorem 12.

To state and prove the result for interval matrices, let us first recall the notion of an interval matrix. For $A, B \in \mathbb{R}^{m \times n}$, an interval (matrix) $J = [A, B]$ is defined as $J = [A, B] = \{C : A \leq C \leq B\}$. The interval $J = [A, B]$ is said to be range-kernel regular if $R(A) = R(B)$ and $N(A) = N(B)$. The following result [12] provides necessary and sufficient conditions for $C^\dagger \geq 0$ for all $C \in K$, where $K = \{C \in J : R(C) = R(A) = R(B),\; N(C) = N(A) = N(B)\}$.

Theorem 14.

Let J=[A,B] be range-kernel regular. Then, the following are equivalent:

(a) $C^\dagger \geq 0$ whenever $C \in K$.

(b) $A^\dagger \geq 0$ and $B^\dagger \geq 0$.

In such a case, we have $C^\dagger = \sum_{j=0}^{\infty}\big(B^\dagger(B - C)\big)^jB^\dagger$.

Now, we present a corresponding result for perturbations of the endpoints of such an interval.

Theorem 15.

Let $J = [A, B]$ be range-kernel regular with $A^\dagger \geq 0$ and $B^\dagger \geq 0$. Let $X_1$, $X_2$, $Y_1$, and $Y_2$ be nonnegative matrices such that $R(X_1) \subseteq R(A)$, $R(Y_1) \subseteq R(A^T)$, $R(X_2) \subseteq R(B)$, and $R(Y_2) \subseteq R(B^T)$. Suppose that $I - Y_1^TA^\dagger X_1$ and $I - Y_2^TB^\dagger X_2$ are nonsingular with nonnegative inverses. Suppose further that $X_2Y_2^T \leq X_1Y_1^T$. Let $\tilde{J} = [E, F]$, where $E = A - X_1Y_1^T$ and $F = B - X_2Y_2^T$. Finally, let $\tilde{K} = \{Z \in \tilde{J} : R(Z) = R(E) = R(F)\ \text{and}\ N(Z) = N(E) = N(F)\}$. Then:

(a) $E^\dagger \geq 0$ and $F^\dagger \geq 0$.

(b) $Z^\dagger \geq 0$ whenever $Z \in \tilde{K}$.

In that case, $Z^\dagger = \sum_{j=0}^{\infty}\big(B^\dagger B - F^\dagger Z\big)^jF^\dagger$.

Proof.

It follows from Theorem 7 that
$$E^\dagger = A^\dagger + A^\dagger X_1(I - Y_1^TA^\dagger X_1)^{-1}Y_1^TA^\dagger, \qquad F^\dagger = B^\dagger + B^\dagger X_2(I - Y_2^TB^\dagger X_2)^{-1}Y_2^TB^\dagger. \tag{13}$$
Also, we have $E^\dagger E = A^\dagger A$, $EE^\dagger = AA^\dagger$, $F^\dagger F = B^\dagger B$, and $FF^\dagger = BB^\dagger$. Hence, $R(E) = R(A)$, $N(E) = N(A)$, $R(F) = R(B)$, and $N(F) = N(B)$. This implies that the interval $\tilde{J}$ is range-kernel regular. Now, since $E$ and $F$ satisfy the conditions of Theorem 10, we have $E^\dagger \geq 0$ and $F^\dagger \geq 0$, proving (a). Hence, by Theorem 14, $Z^\dagger \geq 0$ whenever $Z \in \tilde{K}$. Again, by Theorem 14, we have $Z^\dagger = \sum_{j=0}^{\infty}\big(F^\dagger(F - Z)\big)^jF^\dagger = \sum_{j=0}^{\infty}\big(B^\dagger B - F^\dagger Z\big)^jF^\dagger$.

5. Iterations That Preserve Nonnegativity of the Moore-Penrose Inverse

In this section, we present results that typically provide conditions for iteratively defined matrices to have nonnegative Moore-Penrose inverses given that the matrices that we start with have this property. We start with the following result about the rank-one perturbation case, which is a direct consequence of Theorem 10.

Theorem 16.

Let $A \in \mathbb{R}^{m \times n}$ be such that $A^\dagger \geq 0$. Let $x \in \mathbb{R}^m$ and $y \in \mathbb{R}^n$ be nonnegative vectors such that $x \in R(A)$, $y \in R(A^T)$, and $1 - y^TA^\dagger x \neq 0$. Then $(A - xy^T)^\dagger \geq 0$ if and only if $y^TA^\dagger x < 1$.

Example 17.

Let us consider $A = \frac{1}{3}\begin{pmatrix} 1 & -1 & 1 \\ -1 & 4 & -1 \\ 1 & -1 & 1 \end{pmatrix}$. It can be verified that $A$ can be written in the form $A = \begin{pmatrix} I_2 \\ e_1^T \end{pmatrix}(I_2 + e_1e_1^T)^{-1}C^{-1}(I_2 + e_1e_1^T)^{-1}\begin{pmatrix} I_2 & e_1 \end{pmatrix}$, where $e_1 = (1, 0)^T$ and $C$ is the $2 \times 2$ circulant matrix generated by the row $(1, 1/2)$. We have $A^\dagger = \frac{1}{2}\begin{pmatrix} 2 & 1 & 2 \\ 1 & 2 & 1 \\ 2 & 1 & 2 \end{pmatrix} \geq 0$. Also, the decomposition $A = U - V$, where $U = \frac{1}{3}\begin{pmatrix} 1 & -1 & 2 \\ -1 & 4 & -2 \\ 1 & -1 & 2 \end{pmatrix}$ and $V = \frac{1}{3}\begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & -1 \\ 0 & 0 & 1 \end{pmatrix}$, is a proper splitting of $A$ with $U^\dagger \geq 0$, $U^\dagger V \geq 0$, and $\rho(U^\dagger V) < 1$.

Let $x = y = (1/3, 1/6, 1/3)^T$. Then $x$ and $y$ are nonnegative, $x \in R(A)$, and $y \in R(A^T)$. We have $1 - y^TA^\dagger x = 5/12 > 0$ and $\rho(U^\dagger W) < 1$ for $W = V + xy^T$. Also, it can be seen that $(A - xy^T)^\dagger = \frac{1}{20}\begin{pmatrix} 47 & 28 & 47 \\ 28 & 32 & 28 \\ 47 & 28 & 47 \end{pmatrix} \geq 0$. This illustrates Theorem 10.
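The quantities stated in Example 17 can be reproduced directly (a verification sketch):

```python
import numpy as np

# Reproducing the values stated in Example 17.
A = np.array([[ 1., -1.,  1.],
              [-1.,  4., -1.],
              [ 1., -1.,  1.]]) / 3.0
x = np.array([1/3, 1/6, 1/3])
y = x.copy()

A_pinv = np.linalg.pinv(A)
A_pinv_expected = np.array([[2., 1., 2.],
                            [1., 2., 1.],
                            [2., 1., 2.]]) / 2.0
beta = 1 - y @ A_pinv @ x                       # stated value: 5/12
omega_pinv = np.linalg.pinv(A - np.outer(x, y))
omega_expected = np.array([[47., 28., 47.],
                           [28., 32., 28.],
                           [47., 28., 47.]]) / 20.0
```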

Let $x_1, x_2, \ldots, x_k \in \mathbb{R}^m$ and $y_1, y_2, \ldots, y_k \in \mathbb{R}^n$ be nonnegative. Denote $X_i = (x_1, x_2, \ldots, x_i)$ and $Y_i = (y_1, y_2, \ldots, y_i)$ for $i = 1, 2, \ldots, k$. Then $A - X_kY_k^T = A - \sum_{i=1}^{k}x_iy_i^T$. The following theorem is obtained by a recurring application of the rank-one result, Theorem 16.

Theorem 18.

Let $A \in \mathbb{R}^{m \times n}$ and let $X_i, Y_i$ be as above. Further, suppose that $x_i \in R(A - X_{i-1}Y_{i-1}^T)$, $y_i \in R\big((A - X_{i-1}Y_{i-1}^T)^T\big)$, and $1 - y_i^T(A - X_{i-1}Y_{i-1}^T)^\dagger x_i$ is nonzero, and let $(A - X_{i-1}Y_{i-1}^T)^\dagger \geq 0$, for all $i = 1, 2, \ldots, k$, where $A - X_0Y_0^T$ is taken as $A$. Then $(A - X_kY_k^T)^\dagger \geq 0$ if and only if $y_i^T(A - X_{i-1}Y_{i-1}^T)^\dagger x_i < 1$ for $i = 1, 2, \ldots, k$.

Proof.

Set $B_i = A - X_iY_i^T$ for $i = 0, 1, \ldots, k$, where $B_0$ is identified with $A$. The conditions in the theorem can be written as
$$x_i \in R(B_{i-1}), \quad y_i \in R(B_{i-1}^T), \quad 1 - y_i^TB_{i-1}^\dagger x_i \neq 0, \quad i = 1, 2, \ldots, k. \tag{14}$$
Also, we have $B_i^\dagger \geq 0$ for all $i = 0, 1, \ldots, k-1$.

Now, assume that $B_k^\dagger \geq 0$. Then, by Theorem 16 and the conditions in (14) for $i = k$, we have $y_k^TB_{k-1}^\dagger x_k < 1$. Also, since by assumption $B_i^\dagger \geq 0$ for all $i = 0, 1, \ldots, k-1$, we have $y_i^TB_{i-1}^\dagger x_i < 1$ for all $i = 1, 2, \ldots, k$.

The converse part can be proved iteratively. The condition $y_1^TB_0^\dagger x_1 < 1$ and the conditions in (14) for $i = 1$ imply that $B_1^\dagger \geq 0$. Repeating the argument for $i = 2$ to $k$ proves the result.
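The iterative criterion above can be transcribed directly into code; the data below (two rank-one steps built from the Example 17 vector, the second scaled by $1/2$) are an arbitrary choice for this sketch.

```python
import numpy as np

# Theorem 18 as an iteration: B_i = B_{i-1} - x_i y_i^T keeps a nonnegative
# Moore-Penrose inverse as long as y_i^T B_{i-1}^dagger x_i < 1 at each step.
A = np.array([[ 1., -1.,  1.],
              [-1.,  4., -1.],
              [ 1., -1.,  1.]]) / 3.0
u = np.array([1/3, 1/6, 1/3])               # u lies in R(A) = R(A^T)
steps = [u, u / 2]                          # here x_i = y_i

B = A.copy()
ratios = []
for v in steps:
    B_pinv = np.linalg.pinv(B)
    ratios.append(v @ B_pinv @ v)           # y_i^T B_{i-1}^dagger x_i
    B = B - np.outer(v, v)

final_nonneg = (np.linalg.pinv(B) >= -1e-9).all()
```

For this data the two ratios are $7/12$ and $7/20$, both below $1$, so the final perturbed matrix retains a nonnegative Moore-Penrose inverse, as Theorem 18 predicts.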

The following result is an extension of [1, Lemma 2.3], which is in turn obtained below as a corollary (Corollary 20). This corollary will be used in proving another characterization for $(A - X_kY_k^T)^\dagger \geq 0$.

Theorem 19.

Let $A \in \mathbb{R}^{n \times n}$, and let $b, c \in \mathbb{R}^n$ be such that $-b \geq 0$, $-c \geq 0$, and $\alpha\,(\in \mathbb{R}) > 0$. Further, suppose that $b \in R(A)$ and $c \in R(A^T)$. Let $\hat{A} = \begin{pmatrix} A & b \\ c^T & \alpha \end{pmatrix} \in \mathbb{R}^{(n+1) \times (n+1)}$. Then, $\hat{A}^\dagger \geq 0$ if and only if $A^\dagger \geq 0$ and $c^TA^\dagger b < \alpha$.

Proof.

Let us first observe that $AA^\dagger b = b$ and $c^TA^\dagger A = c^T$. Set
$$X = \begin{pmatrix} A^\dagger + \dfrac{A^\dagger bc^TA^\dagger}{\alpha - c^TA^\dagger b} & -\dfrac{A^\dagger b}{\alpha - c^TA^\dagger b} \\[2mm] -\dfrac{c^TA^\dagger}{\alpha - c^TA^\dagger b} & \dfrac{1}{\alpha - c^TA^\dagger b} \end{pmatrix}. \tag{15}$$
It then follows that $\hat{A}X = \begin{pmatrix} AA^\dagger & 0 \\ 0^T & 1 \end{pmatrix}$ and $X\hat{A} = \begin{pmatrix} A^\dagger A & 0 \\ 0^T & 1 \end{pmatrix}$. Using these two equations, it can easily be shown that $X = \hat{A}^\dagger$.

Suppose that $A^\dagger \geq 0$ and $\beta = \alpha - c^TA^\dagger b > 0$. Then, $\hat{A}^\dagger \geq 0$. Conversely, suppose that $\hat{A}^\dagger \geq 0$. Then, we must have $\beta > 0$. Let $\{e_1, e_2, \ldots, e_{n+1}\}$ denote the standard basis of $\mathbb{R}^{n+1}$. Then, for $i = 1, 2, \ldots, n$, we have $0 \leq \hat{A}^\dagger(e_i) = \big(A^\dagger e_i + (A^\dagger bc^TA^\dagger e_i)/\beta,\; -(c^TA^\dagger e_i)/\beta\big)^T$. Since $c \leq 0$, it follows that $A^\dagger e_i \geq 0$ for $i = 1, 2, \ldots, n$. Thus, $A^\dagger \geq 0$.
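A numerical check of Theorem 19 and of formula (15): as an arbitrary choice for this sketch, we reuse the matrix of Example 17 with $b = c = -(1/3, 1/6, 1/3)^T$ and $\alpha = 1$, so that $c^TA^\dagger b = 7/12 < \alpha$.

```python
import numpy as np

# Bordered matrix of Theorem 19: A_hat = [[A, b], [c^T, alpha]] with
# b, c <= 0, b in R(A), c in R(A^T); X below is the matrix in (15).
A = np.array([[ 1., -1.,  1.],
              [-1.,  4., -1.],
              [ 1., -1.,  1.]]) / 3.0
b = -np.array([1/3, 1/6, 1/3])
c = b.copy()
alpha = 1.0

A_pinv = np.linalg.pinv(A)
beta = alpha - c @ A_pinv @ b               # alpha - c^T A^dagger b = 5/12
top = np.hstack([A_pinv + np.outer(A_pinv @ b, c @ A_pinv) / beta,
                 (-A_pinv @ b / beta)[:, None]])
bottom = np.concatenate([-c @ A_pinv / beta, [1 / beta]])[None, :]
X = np.vstack([top, bottom])                # formula (15)

A_hat = np.vstack([np.hstack([A, b[:, None]]),
                   np.concatenate([c, [alpha]])[None, :]])
X_direct = np.linalg.pinv(A_hat)
```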

Corollary 20 (see [1, Lemma 2.3]).

Let $A \in \mathbb{R}^{n \times n}$ be nonsingular. Let $b, c \in \mathbb{R}^n$ be such that $-b \geq 0$, $-c \geq 0$, and $\alpha\,(\in \mathbb{R}) > 0$. Let $\hat{A} = \begin{pmatrix} A & b \\ c^T & \alpha \end{pmatrix} \in \mathbb{R}^{(n+1) \times (n+1)}$. Then, $\hat{A}$ is nonsingular with $\hat{A}^{-1} \geq 0$ if and only if $A^{-1} \geq 0$ and $c^TA^{-1}b < \alpha$.

Now, we obtain another necessary and sufficient condition for $(A - X_kY_k^T)^\dagger$ to be nonnegative.

Theorem 21.

Let $A \in \mathbb{R}^{m \times n}$ be such that $A^\dagger \geq 0$. Let $x_i$ and $y_i$ be nonnegative vectors in $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively, for every $i = 1, \ldots, k$, such that
$$x_i \in R(A - X_{i-1}Y_{i-1}^T), \qquad y_i \in R\big((A - X_{i-1}Y_{i-1}^T)^T\big), \tag{16}$$
where $X_i = (x_1, \ldots, x_i)$, $Y_i = (y_1, \ldots, y_i)$, and $A - X_0Y_0^T$ is taken as $A$. If $H_i = I_i - Y_i^TA^\dagger X_i$ is nonsingular and $1 - y_i^TA^\dagger x_i$ is positive for all $i = 1, 2, \ldots, k$, then $(A - X_kY_k^T)^\dagger \geq 0$ if and only if
$$y_i^TA^\dagger x_i < 1 - y_i^TA^\dagger X_{i-1}H_{i-1}^{-1}Y_{i-1}^TA^\dagger x_i, \qquad i = 1, \ldots, k. \tag{17}$$

Proof.

The range conditions in (16) imply that $R(X_k) \subseteq R(A)$ and $R(Y_k) \subseteq R(A^T)$. Also, from the assumptions, it follows that $H_k$ is nonsingular. By Theorem 10, $(A - X_kY_k^T)^\dagger \geq 0$ if and only if $H_k^{-1} \geq 0$. Now,
$$H_k = \begin{pmatrix} H_{k-1} & -Y_{k-1}^TA^\dagger x_k \\ -y_k^TA^\dagger X_{k-1} & 1 - y_k^TA^\dagger x_k \end{pmatrix}. \tag{18}$$
Since $1 - y_k^TA^\dagger x_k > 0$, using Corollary 20, it follows that $H_k^{-1} \geq 0$ if and only if $y_k^TA^\dagger X_{k-1}H_{k-1}^{-1}Y_{k-1}^TA^\dagger x_k < 1 - y_k^TA^\dagger x_k$ and $H_{k-1}^{-1} \geq 0$.

Now, applying the above argument to the matrix $H_{k-1}$, we have that $H_{k-1}^{-1} \geq 0$ holds if and only if $H_{k-2}^{-1} \geq 0$ holds and $y_{k-1}^TA^\dagger X_{k-2}(I_{k-2} - Y_{k-2}^TA^\dagger X_{k-2})^{-1}Y_{k-2}^TA^\dagger x_{k-1} < 1 - y_{k-1}^TA^\dagger x_{k-1}$. Continuing the above argument, we get that $(A - X_kY_k^T)^\dagger \geq 0$ if and only if $y_i^TA^\dagger X_{i-1}H_{i-1}^{-1}Y_{i-1}^TA^\dagger x_i < 1 - y_i^TA^\dagger x_i$ for $i = 1, \ldots, k$. This is condition (17).

We conclude the paper by considering an extension of Example 17.

Example 22.

For a fixed $n$, let $C$ be the circulant matrix generated by the row vector $\big(1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n-1}\big)$. Consider
$$A = \begin{pmatrix} I \\ e_1^T \end{pmatrix}(I + e_1e_1^T)^{-1}C^{-1}(I + e_1e_1^T)^{-1}\begin{pmatrix} I & e_1 \end{pmatrix}, \tag{19}$$
where $I$ is the identity matrix of order $n-1$ and $e_1$ is the $(n-1) \times 1$ vector with $1$ as the first entry and $0$ elsewhere. Then $A \in \mathbb{R}^{n \times n}$, and $A^\dagger$ is the $n \times n$ nonnegative Toeplitz matrix
$$A^\dagger = \begin{pmatrix} 1 & \frac{1}{n-1} & \frac{1}{n-2} & \cdots & \frac{1}{2} & 1 \\ \frac{1}{2} & 1 & \frac{1}{n-1} & \cdots & \frac{1}{3} & \frac{1}{2} \\ \vdots & & & \ddots & & \vdots \\ \frac{1}{n-1} & \frac{1}{n-2} & \frac{1}{n-3} & \cdots & 1 & \frac{1}{n-1} \\ 1 & \frac{1}{n-1} & \frac{1}{n-2} & \cdots & \frac{1}{2} & 1 \end{pmatrix}. \tag{20}$$

Let $x_i$ be the first row of $B_{i-1}^\dagger$, written as a column vector and multiplied by $1/n^i$, and let $y_i$ be the first column of $B_{i-1}^\dagger$ multiplied by $1/n^i$, where $B_i = A - X_iY_i^T$, with $B_0$ being identified with $A$, for $i = 1, 2, \ldots, k$. We then have $x_i \geq 0$, $y_i \geq 0$, $x_1 \in R(A)$, and $y_1 \in R(A^T)$. Now, if $B_{i-1}^\dagger \geq 0$ at each iteration, then the vectors $x_i$ and $y_i$ will be nonnegative. Hence, it is enough to check that $1 - y_i^TB_{i-1}^\dagger x_i > 0$ in order to get $B_i^\dagger \geq 0$. Experimentally, it has been observed that for $n > 3$, the condition $1 - y_i^TB_{i-1}^\dagger x_i > 0$ holds true for any iteration. However, for $n = 3$, it is observed that $B_1^\dagger \geq 0$, $B_2^\dagger \geq 0$, $1 - y_1^TA^\dagger x_1 > 0$, and $1 - y_2^TB_1^\dagger x_2 > 0$. But $1 - y_3^TB_2^\dagger x_3 < 0$, and in that case, we observe that $B_3^\dagger$ is not nonnegative.

References

1. J. Ding, W. Pye, and L. Zhao, "Some results on structured M-matrices with an application to wireless communications," Linear Algebra and its Applications, vol. 416, no. 2-3, pp. 608-614, 2006.
2. S. K. Mitra and P. Bhimasankaram, "Generalized inverses of partitioned matrices and recalculation of least squares estimates for data or model changes," Sankhyā A, vol. 33, pp. 395-410, 1971.
3. C. Y. Deng, "A generalization of the Sherman-Morrison-Woodbury formula," Applied Mathematics Letters, vol. 24, no. 9, pp. 1561-1564, 2011.
4. A. Berman, M. Neumann, and R. J. Stern, Nonnegative Matrices in Dynamical Systems, John Wiley & Sons, New York, NY, USA, 1989.
5. A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, Society for Industrial and Applied Mathematics (SIAM), 1994.
6. L. Collatz, Functional Analysis and Numerical Mathematics, Academic Press, New York, NY, USA, 1966.
7. C. A. Desoer and B. H. Whalen, "A note on pseudoinverses," Journal of the Society for Industrial and Applied Mathematics, vol. 11, pp. 442-447, 1963.
8. A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, NY, USA, 2003.
9. A. Berman and R. J. Plemmons, "Cones and iterative methods for best least squares solutions of linear systems," SIAM Journal on Numerical Analysis, vol. 11, pp. 145-154, 1974.
10. D. Mishra and K. C. Sivakumar, "On splittings of matrices and nonnegative generalized inverses," Operators and Matrices, vol. 6, no. 1, pp. 85-95, 2012.
11. D. Mishra and K. C. Sivakumar, "Nonnegative generalized inverses and least elements of polyhedral sets," Linear Algebra and its Applications, vol. 434, no. 12, pp. 2448-2455, 2011.
12. M. R. Kannan and K. C. Sivakumar, "Moore-Penrose inverse positivity of interval matrices," Linear Algebra and its Applications, vol. 436, no. 3, pp. 571-578, 2012.