On Nonnegative Moore-Penrose Inverses of Perturbed Matrices

We consider the problem of characterizing nonnegativity of the Moore-Penrose inverse for matrix perturbations of the type A − XGY^T, when the Moore-Penrose inverse of A is nonnegative. Here, we say that a matrix B = (b_ij) is nonnegative, and denote it by B ≥ 0, if b_ij ≥ 0 for all i, j. This problem was motivated by the results in [1], where the authors consider an M-matrix A and find sufficient conditions for the perturbed matrix (A − XY^T) to be an M-matrix. Let us recall that a matrix B = (b_ij) is said to be a Z-matrix if b_ij ≤ 0 for all i, j with i ≠ j.


Introduction
We consider the problem of characterizing nonnegativity of the Moore-Penrose inverse for matrix perturbations of the type A − XGY^T, when the Moore-Penrose inverse of A is nonnegative. Here, we say that a matrix B = (b_ij) is nonnegative, and denote it by B ≥ 0, if b_ij ≥ 0 for all i, j. This problem was motivated by the results in [1], where the authors consider an M-matrix A and find sufficient conditions for the perturbed matrix (A − XY^T) to be an M-matrix. Let us recall that a matrix B = (b_ij) is said to be a Z-matrix if b_ij ≤ 0 for all i, j with i ≠ j. An M-matrix is a nonsingular Z-matrix with nonnegative inverse. The authors in [1] use the well-known Sherman-Morrison-Woodbury (SMW) formula as one of the important tools to prove their main result. The SMW formula gives an expression for the inverse of (A − XY^T) in terms of the inverse of A, when it exists. When A is nonsingular, A − XY^T is nonsingular if and only if I − Y^T A^{-1} X is nonsingular. In that case,

(A − XY^T)^{-1} = A^{-1} + A^{-1} X (I − Y^T A^{-1} X)^{-1} Y^T A^{-1}.

The main objective of the present work is to study certain structured perturbations A − XGY^T of matrices A such that the Moore-Penrose inverse of the perturbation is nonnegative whenever the Moore-Penrose inverse of A is nonnegative. Clearly, this class of matrices includes the class of matrices that have nonnegative inverses, especially M-matrices. In our approach, extensions of the SMW formula for singular matrices play a crucial role. Let us mention that this problem has been studied in the literature (see, for instance, [2] for matrices and [3] for operators over Hilbert spaces). We refer the reader to the references in the latter for other recent extensions.
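The classical SMW identity above is easy to check numerically. The following is a minimal sketch (the paper itself contains no code; NumPy and the randomly chosen sizes are our own illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Verify (A - X Y^T)^{-1} = A^{-1} + A^{-1} X (I - Y^T A^{-1} X)^{-1} Y^T A^{-1}
n, k = 5, 2
A = np.eye(n) + 0.1 * rng.random((n, n))   # comfortably nonsingular
X = 0.1 * rng.random((n, k))               # small perturbation factors
Y = 0.1 * rng.random((n, k))

A_inv = np.linalg.inv(A)
S = np.eye(k) - Y.T @ A_inv @ X            # "capacitance" matrix I - Y^T A^{-1} X

lhs = np.linalg.inv(A - X @ Y.T)
rhs = A_inv + A_inv @ X @ np.linalg.inv(S) @ Y.T @ A_inv

assert np.allclose(lhs, rhs)
```

Note that only the small k × k matrix S needs to be inverted beyond A itself, which is the computational appeal of the formula.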
In this paper, we first present alternative proofs of generalizations of the SMW formula for the cases of the Moore-Penrose inverse (Theorem 5) and the group inverse (Theorem 6) in Section 3. In Section 4, we characterize the nonnegativity of (A − XGY^T)†. This is done in Theorem 9 and is one of the main results of the present work. As a consequence, we present a result for M-matrices which seems new. We present a couple of applications of the main result in Theorems 13 and 15. In the concluding section, we study iterative versions of these results and prove two characterizations for (A − XGY^T)† to be nonnegative in Theorems 18 and 21.
Before concluding this introductory section, let us give a motivation for the work undertaken here. It is a well-documented fact that M-matrices arise quite often in solving sparse systems of linear equations. An extensive theory of M-matrices has been developed relative to their role in numerical analysis involving the notion of splitting in iterative methods and discretization of differential equations, in the mathematical modeling of an economy, optimization, and Markov chains [4, 5]. Specifically, the inspiration for the present study comes from [1], where the authors consider a system of linear inequalities arising out of a problem in third-generation wireless communication systems. The matrix defining the inequalities there is an M-matrix. In the likelihood that the matrix of this problem is singular (due to truncation or round-off errors), the earlier method becomes inapplicable. Our endeavour is to extend the applicability of these results to more general matrices, for instance, matrices with nonnegative Moore-Penrose inverses. Finally, as mentioned earlier, since matrices with nonnegative generalized inverses include M-matrices in particular, our results are expected to enlarge the applicability of the methods presently available for M-matrices, even in a very general framework, including the specific problem mentioned above.

Preliminaries
Let R, R^n, and R^{m×n} denote the set of all real numbers, the n-dimensional real Euclidean space, and the set of all m × n matrices over R, respectively. For A ∈ R^{m×n}, let R(A), N(A), R(A)^⊥, and A^T denote the range space, the null space, the orthogonal complement of the range space, and the transpose of the matrix A, respectively. For x = (x_1, x_2, ..., x_n)^T ∈ R^n, we say that x is nonnegative, that is, x ≥ 0, if and only if x_i ≥ 0 for all i = 1, 2, ..., n. As mentioned earlier, for a matrix B we use B ≥ 0 to denote that all the entries of B are nonnegative. Also, we write B ≤ C if C − B ≥ 0.
Let ρ(B) denote the spectral radius of the matrix B. If ρ(B) < 1 for B ∈ R^{n×n}, then I − B is invertible. The next result gives a necessary and sufficient condition for the nonnegativity of (I − B)^{-1}; it will be one of the results used in proving the first main result.

Theorem 1 (see [5]). Let B ∈ R^{n×n} with B ≥ 0. Then ρ(B) < 1 if and only if (I − B)^{-1} exists, in which case (I − B)^{-1} = Σ_{k=0}^{∞} B^k ≥ 0.
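The classical fact just recalled, that a nonnegative B with ρ(B) < 1 yields a nonnegative (I − B)^{-1} via the Neumann series, can be illustrated numerically (a sketch with a hand-picked 2 × 2 matrix; NumPy is our own assumption):

```python
import numpy as np

# A nonnegative matrix with spectral radius 1/2 < 1
B = np.array([[0.2, 0.3],
              [0.1, 0.4]])
rho = max(abs(np.linalg.eigvals(B)))
assert rho < 1

# (I - B)^{-1} exists and is entrywise nonnegative
inv = np.linalg.inv(np.eye(2) - B)
assert (inv >= 0).all()

# Partial sums of the Neumann series sum_k B^k converge to the inverse
partial = sum(np.linalg.matrix_power(B, k) for k in range(100))
assert np.allclose(partial, inv)
```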
One of the frequently used tools in studying monotone matrices is the notion of a regular splitting.We only refer the reader to the book [5] for more details on the relationship between these concepts.
The notion of monotonicity has been extended in a variety of ways to singular matrices using generalized inverses. First, let us briefly review two important generalized inverses. For A ∈ R^{m×n}, the Moore-Penrose inverse of A is the unique matrix X ∈ R^{n×m} satisfying A X A = A, X A X = X, (AX)^T = AX, and (XA)^T = XA; it is denoted by A†.
The following theorem by Desoer and Whalen, which is used in the sequel, gives an equivalent definition for the Moore-Penrose inverse. Let us mention that this result was proved for operators between Hilbert spaces.
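The four Penrose equations characterizing A† are straightforward to verify numerically. The sketch below (NumPy and the example matrix are our own assumptions) checks them on a rank-deficient matrix:

```python
import numpy as np

# A 3x2 matrix of rank 1, so it has no ordinary inverse
A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])
X = np.linalg.pinv(A)   # Moore-Penrose inverse A^†

assert np.allclose(A @ X @ A, A)         # A X A = A
assert np.allclose(X @ A @ X, X)         # X A X = X
assert np.allclose((A @ X).T, A @ X)     # A X is symmetric
assert np.allclose((X @ A).T, X @ A)     # X A is symmetric
```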
Now, for A ∈ R^{n×n}, any matrix X ∈ R^{n×n} satisfying the equations A X A = A, X A X = X, and A X = X A is called the group inverse of A. The group inverse does not exist for every matrix, but whenever it exists, it is unique. A necessary and sufficient condition for the existence of the group inverse of A is that the index of A is 1, where the index of a matrix A is the smallest positive integer k such that rank(A^{k+1}) = rank(A^k).
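When A = FG is a full-rank factorization and GF is nonsingular (which is equivalent to index(A) = 1), the group inverse admits the classical closed form A# = F(GF)^{-2}G. A minimal numerical sketch (the tiny example matrix is our own choice):

```python
import numpy as np

# Full-rank factorization A = F G of a rank-1, index-1 matrix
F = np.array([[1.], [1.]])         # full column rank
G = np.array([[1., 1.]])           # full row rank
A = F @ G                          # A = [[1, 1], [1, 1]]

M = G @ F                          # 1x1 and nonsingular, so index(A) = 1
A_sharp = F @ np.linalg.solve(M @ M, G)   # A# = F (G F)^{-2} G

# Verify the three defining equations of the group inverse
assert np.allclose(A @ A_sharp @ A, A)
assert np.allclose(A_sharp @ A @ A_sharp, A_sharp)
assert np.allclose(A @ A_sharp, A_sharp @ A)
```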
In matrix analysis, a decomposition (splitting) of a matrix is considered in order to study the convergence of iterative schemes used in the solution of linear systems of algebraic equations. As mentioned earlier, regular splittings are useful in characterizing matrices with nonnegative inverses, whereas proper splittings are used for studying singular systems of linear equations. Let us next recall this notion. For a matrix A ∈ R^{m×n}, a decomposition A = U − V is called a proper splitting [9] if R(U) = R(A) and N(U) = N(A). It is rather well known that a proper splitting exists for every matrix and that it can be obtained using a full-rank factorization of the matrix; for details, we refer to [10]. Certain properties of a proper splitting are collected in the next result.
Theorem 3 (see [9, Theorem 1]). Let A = U − V be a proper splitting of A ∈ R^{m×n}. Then
(a) A = U(I − U†V);
(b) I − U†V is nonsingular;
(c) A† = (I − U†V)^{-1} U†.

The following result by Berman and Plemmons [9] gives a characterization for A† to be nonnegative when A has a proper splitting. This result will be used in proving our first main result.

Theorem 4 (see [9]). Let A = U − V be a proper splitting of A ∈ R^{m×n} such that U† ≥ 0 and U†V ≥ 0. Then A† ≥ 0 if and only if ρ(U†V) < 1.
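The identity in Theorem 3 can be checked numerically. The sketch below uses the (trivial but proper) splitting U = 2A, V = A, for which R(U) = R(A) and N(U) = N(A) automatically; the singular example matrix and NumPy are our own assumptions:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])           # singular, rank 1
U = 2 * A                          # same range and null space as A
V = U - A                          # V = A, so A = U - V is a proper splitting

UdV = np.linalg.pinv(U) @ V        # U^† V
I = np.eye(2)

# I - U^†V is nonsingular, and A^† = (I - U^†V)^{-1} U^†
lhs = np.linalg.pinv(A)
rhs = np.linalg.inv(I - UdV) @ np.linalg.pinv(U)
assert np.allclose(lhs, rhs)
```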

Extensions of the SMW Formula for Generalized Inverses
The primary objects of consideration in this paper are generalized inverses of perturbations of certain types of a matrix A. Naturally, extensions of the SMW formula for generalized inverses are relevant in the proofs. In what follows, we present two generalizations of the SMW formula for matrices. We would like to emphasize that our proofs also carry over to infinite dimensional spaces (the proof of the first result verbatim, and the proof of the second with slight modifications, applicable to range spaces instead of ranks of the operators concerned). However, we confine our attention to the case of matrices. Let us also add that these results have been proved in [3] for operators over infinite dimensional spaces. We have chosen to include them here because our proofs are different from the ones in [3] and because our intention is to provide a self-contained treatment.
We conclude this section with a fairly old result [2] as a consequence of Theorem 5.

Nonnegativity of (A − XGY^T)†
In this section, we consider perturbations of the form A − XGY^T and derive characterizations for (A − XGY^T)† to be nonnegative when A† ≥ 0, X ≥ 0, G ≥ 0, and Y ≥ 0. In order to motivate the first main result of this paper, let us recall the following well-known characterization of M-matrices [5].
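The classical M-matrix characterization recalled above is easy to illustrate: writing A = sI − B with B ≥ 0 and ρ(B) < s forces A^{-1} ≥ 0. A minimal numerical sketch (the particular B and s are our own choices):

```python
import numpy as np

# A nonnegative B with spectral radius sqrt(2)
B = np.array([[0., 1.],
              [2., 0.]])
rho = max(abs(np.linalg.eigvals(B)))
s = 3.0
assert rho < s                     # so A = sI - B is an M-matrix

A = s * np.eye(2) - B              # a Z-matrix with rho(B) < s
assert (np.linalg.inv(A) >= 0).all()   # nonnegative inverse
```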
The following result is a special case of Theorem 9.
Corollary 11. Let A = sI − B, where B ≥ 0 and ρ(B) < s (i.e., A is an M-matrix). Let X ∈ R^{n×k} and Y ∈ R^{n×k} be nonnegative such that I − Y^T A† X is nonsingular. Let Ω = A − XY^T. Then the following are equivalent:

Proof. From the proof of Theorem 10, since A is nonsingular, it follows that ΩΩ† = I and Ω†Ω = I. This shows that Ω is invertible. The rest of the proof is omitted, as it is an easy consequence of the previous result.
In the rest of this section, we discuss two applications of Theorem 10. First, we characterize the least element in a polyhedral set defined by a perturbed matrix. Next, we consider the following question concerning interval matrices: suppose that the "endpoints" of an interval matrix satisfy a certain positivity property; then all matrices in a particular subset of the interval also satisfy that positivity condition. The problem now is: if we are given a specific structured perturbation of these endpoints, what conditions guarantee that the positivity property remains valid for the corresponding subset?
The first result is motivated by Theorem 12 below. Let us recall that, with respect to the usual order, an element x* ∈ X ⊆ R^n is called a least element of X if it satisfies x* ≤ x for all x ∈ X. Note that a nonempty set may fail to have a least element, but if one exists, it is unique. In this connection, the following result is known.
Theorem 12 (see [11, Theorem 3.2]). For A ∈ R^{m×n} and b ∈ R^m, let X_b = {x ∈ R^n : Ax ≥ b, x ≥ 0}. Then, a vector x* is the least element of X_b if and only if x* = A†b ∈ X_b and A† ≥ 0.

Now, we obtain the nonnegative least element of a polyhedral set defined by a perturbed matrix. This is an immediate application of Theorem 10.
Proof. From the assumptions, using Theorem 10, it follows that Ω† ≥ 0. The conclusion now follows from Theorem 12.

Iterations That Preserve Nonnegativity of the Moore-Penrose Inverse
In this section, we present results that typically provide conditions for iteratively defined matrices to have nonnegative Moore-Penrose inverses given that the matrices that we start with have this property.We start with the following result about the rank-one perturbation case, which is a direct consequence of Theorem 10.
The converse part can be proved iteratively. The condition y_1^T A_0† x_1 < 1 and the conditions in (14) for i = 1 imply that A_1† ≥ 0. Repeating the argument for i = 2, ..., k proves the result.
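The rank-one mechanism used in each step of such an iteration can be sketched numerically: if A^{-1} ≥ 0, x, y ≥ 0, and y^T A^{-1} x < 1, the Sherman-Morrison formula shows (A − xy^T)^{-1} = A^{-1} + (A^{-1}xy^T A^{-1})/(1 − y^T A^{-1}x) ≥ 0. The example matrices below are our own illustrative choices:

```python
import numpy as np

A = np.array([[3., -1.],
              [-2., 3.]])          # an M-matrix, so A^{-1} >= 0
x = np.array([[0.5], [0.5]])       # nonnegative rank-one factors
y = np.array([[0.5], [0.5]])

A_inv = np.linalg.inv(A)
t = float(y.T @ A_inv @ x)         # the scalar y^T A^{-1} x
assert 0 <= t < 1                  # the key smallness condition

Omega_inv = np.linalg.inv(A - x @ y.T)
assert (Omega_inv >= 0).all()      # nonnegativity is preserved
assert np.allclose(Omega_inv,
                   A_inv + (A_inv @ x @ y.T @ A_inv) / (1 - t))
```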
The following result is an extension of Lemma 2.3 in [1], which is in turn obtained as a corollary. This corollary will be used in proving another characterization for (A − XGY^T)† ≥ 0.