A higher-order convergent iterative method is provided for calculating the generalized inverse AT,S(2) over Banach spaces. We also use this iterative method to compute the generalized Drazin inverse ad in Banach algebras. Moreover, we estimate the error bounds of the iterative methods for approximating AT,S(2) or ad.

1. Introduction

It is well known that the outer generalized inverse has been widely used in various fields, for instance, in statistics, control theory, power systems, nonlinear equations, optimization, and numerical analysis (see [1–15]). Recently, in [16], the authors discussed an iterative method for computing AT,S(2) of a given matrix.

Throughout this paper, let X and Y be arbitrary Banach spaces. Then, the symbol ℬ(𝒳,𝒴) denotes the set of all bounded linear operators from 𝒳 to 𝒴, in particular, ℬ(𝒳):=ℬ(𝒳,𝒳). For any A∈ℬ(𝒳,𝒴), we denote its range, null space, and norm by ℛ(A), 𝒩(A), and ∥A∥, respectively. Further, we say that A is regular if there exists an X∈ℬ(𝒴,𝒳) such that AXA=A and that A has a {2} (or outer) inverse if there exists an X∈ℬ(𝒴,𝒳) such that XAX=X. If A∈ℬ(𝒳), then we denote its spectrum and spectral radius by σ(A) and ρ(A), respectively. Let the symbol L⊂𝒳 denote that L is a subspace of 𝒳. If A∈ℬ(𝒳,𝒴) and L⊂𝒳, then the restriction A|L of A on L is defined by x↦Ax, x∈L. Let L,M⊂𝒳 with L⊕M=𝒳. Then, the symbol PL,M stands for an operator that is called a projection from 𝒳 onto L if it is a bounded linear map from 𝒳 onto L and PL,M2=PL,M. It is well known that a closed subspace L of a Banach space 𝒳 is complemented in 𝒳 if and only if there exists a projection from 𝒳 onto L.

Let A∈ℬ(𝒳,𝒴) have closed range. Then there exists a unique operator X∈ℬ(𝒴,𝒳) such that
(1)AXA=A,XAX=X,(AX)*=AX,(XA)*=XA.
Then, X is called the Moore-Penrose inverse of A, denoted by X=A†. It is well known that A is regular ⇔R(A) is closed ⇔A† exists.

Throughout this paper, let 𝒜 be a complex Banach algebra with the unit 1. The symbols annl(a) and annr(a), respectively, stand for the left and right annihilators of a in 𝒜. Let p∈𝒜 be idempotent. Then, p𝒜p={pap:a∈𝒜} is a subalgebra of 𝒜 with unit p. Thus, for a∈𝒜, if there exists an element b∈p𝒜p such that ab=ba=p, then we say that a is invertible in p𝒜p, and b is denoted by a|p𝒜p-1. Recall that an element b∈𝒜 is the generalized Drazin inverse of a (or Koliha-Drazin inverse of a), if the following hold:
(2)bab=b,ba=ab,a(1-ab) is quasinilpotent.
If the generalized Drazin inverse of a exists, then it is denoted by ad (see [15] for more details). In particular, if b=ad and a(1-ab)=0, then b is called the group inverse of a and is denoted by ag.

In [17], W. G. Li and Z. Li defined the iterative formula
(3)Xk+1=Xk[Ct1I-Ct2AXk+⋯+(-1)t-1Ctt(AXk)t-1],t=2,3,…,
where Cti denotes the binomial coefficient.
In [18], Chen and Wang extended the iterative method (3) of W. G. Li and Z. Li to compute the Moore-Penrose inverse of a matrix. In [19], Liu et al. provided the higher-order convergent iterative method (3) in order to calculate the generalized inverse AT,S(2) of a given matrix. In this paper, we extend the iterative method proposed by W. G. Li and Z. Li in [17] to compute the {2}-inverse and the generalized inverse AT,S(2) over Banach spaces, and we also consider the iterative scheme for computing the generalized Drazin inverse ad in Banach algebras.
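For a concrete finite-dimensional illustration, iteration (3) can be run on a matrix to approximate the Moore-Penrose inverse. The following Python/NumPy sketch is our own; in particular, the starting value X0=αA* with α=1/∥A∥2 is one standard choice that makes the scheme converge and is not prescribed by (3) itself.

```python
import numpy as np
from math import comb

def hyperpower(A, t=3, iters=50):
    """Iteration (3): X_{k+1} = X_k[C(t,1)I - C(t,2)AX_k + ... + (-1)^(t-1)C(t,t)(AX_k)^(t-1)]."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # spectral norm; gives rho(alpha*A^*A - P) < 1
    X = alpha * A.conj().T                    # X_0 = alpha * A^*
    I = np.eye(A.shape[0])
    for _ in range(iters):
        AX = A @ X
        S = comb(t, 1) * I                    # accumulate the bracketed polynomial in AX
        M = I
        for i in range(2, t + 1):
            M = M @ AX                        # M = (AX)^(i-1)
            S = S + (-1) ** (i - 1) * comb(t, i) * M
        X = X @ S
    return X
```

For t=2 this reduces to the classical Newton-Schulz iteration Xk+1=Xk(2I-AXk); larger t trades more multiplications per step for higher order of convergence.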

The paper is organized as follows. Some lemmas are presented in the remainder of this section. In Section 2, we apply the iterative scheme of [19] to compute the generalized inverse AT,S(2) over Banach spaces. In Section 3, we present iterative formulas for computing the generalized Drazin inverse ad of a Banach algebra element a.

The following lemmas are needed in what follows.

Lemma 1 (see [14, Chapter 1]).

Let a∈𝒜. Then

(i) σ(a) is a nonempty closed subset of ℂ;

(ii) (spectral mapping theorem for polynomials) if f is a polynomial, then
(4)σ(f(a))=f(σ(a));

(iii) limn→∞an=0 if and only if ρ(a)<1.

Lemma 2 (see [15, Section 4]).

Let X and Y be Banach spaces, and let A∈ℬ(𝒳,𝒴), T and S, respectively, be closed subspaces of X and Y. Then, the following statements are equivalent.

(i) A has a {2}-inverse B∈ℬ(𝒴,𝒳) such that ℛ(B)=T and 𝒩(B)=S.

(ii) T is a complemented subspace of 𝒳, A(T) is closed, A|T:T→A(T) is invertible, and A(T)⊕S=𝒴.

In the case when (i) or (ii) holds, B is unique and is denoted by AT,S(2).

Lemma 3.

Suppose that the conditions of Lemma 2 are satisfied. Then, AAT,S(2)=PA(T),S and AT,S(2)A=PT,T1 where T1=𝒩(AT,S(2)A). Moreover, for any G∈ℬ(𝒴,𝒳), PT,T1G=G⇔ℛ(G)⊂T; GPA(T),S=G⇔𝒩(G)⊃S.

2. Higher-Order Convergent Iterative Method for Computing the Generalized Inverse over Banach Spaces

In this section, we consider a higher-order convergent iterative method for computing the generalized inverse AT,S(2) over Banach spaces. First, we deduce convergence conditions and error bounds for our iterative methods.

Theorem 4.

Let A∈ℬ(𝒳,𝒴), Y∈ℬ(𝒴,𝒳), and let T⊂𝒳 and S⊂𝒴 both be complemented subspaces, respectively, with ℛ(Y)=T, 𝒩(Y)=S. Define the sequence {Xk} in ℬ(𝒴,𝒳) in the following way:
(5)X0=αY,Xk=[Ct1I-Ct2Xk-1A+⋯+(-1)t-1Ctt(Xk-1A)t-1]Xk-1;
then {Xk} converges to some X∞∈A{2} with ℛ(X∞)=T if and only if ρ(αYA-P)<1 for some scalar α∈ℂ∖{0}, where t≥2 is an arbitrary positive integer, X∞=limXk, and P is a projection from 𝒳 onto T. Moreover,

(i) if 𝒩(X∞)=S, then AT,S(2) exists and X∞=AT,S(2);

(ii) if AT,S(2) exists, then 𝒩(X∞)=S and X∞=AT,S(2).

In particular, if AT,S(2) exists, limXk=AT,S(2), and q=∥αYA-P∥<1, then
(6)∥AT,S(2)-Xk∥/∥AT,S(2)∥≤qtk,k≥0.

Proof.

From (5), we obtain
(7)[Ct1I-Ct2Xk-1A+⋯+(-1)t-1Ctt(Xk-1A)t-1]Xk-1=Xk-1[Ct1I-Ct2AXk-1+⋯+(-1)t-1Ctt(AXk-1)t-1].
Note that ℛ(Xk)⊂ℛ(Xk-1) for k≥1 from (7). Similarly, it is easy to prove that 𝒩(Xk)⊃𝒩(Xk-1) for k≥1.

Since ℛ(X0)=ℛ(αY)=T and 𝒩(X0)=𝒩(αY)=S, then
(8)ℛ(Xk)⊂T,𝒩(Xk)⊃S,
for k≥0.

From (5), we have
(9)XkA-I=(-1)t+1(Xk-1A-I)t=(-1)t+1(X0A-I)tk.
By (8), we get PXk=Xk. Premultiplying (9) by P yields
(10)XkA-P=(-1)t+1(X0A-P)tk.

Next, we investigate the necessary and sufficient condition for the convergence of the iterative scheme (5). Assume that X∞=limXk exists with X∞∈A{2} and ℛ(X∞)=T. Then ℛ(X∞)=ℛ(X∞AX∞)⊂ℛ(X∞A)⊂ℛ(X∞), so ℛ(X∞A)=T. Since 𝒳=T⊕𝒩(X∞A), we obtain that X∞A=PT,𝒩(X∞A) is a projection from 𝒳 onto T, and X∞AXk=Xk by (8).

Since PT,𝒩(X∞A)X0=X0, we have XkA-PT,𝒩(X∞A)=(-1)t+1(X0A-PT,𝒩(X∞A))tk by (10). Thus,
(11)0=lim(XkA-X∞A)=lim(XkA-PT,𝒩(X∞A))=lim(-1)t+1(X0A-PT,𝒩(X∞A))tk,
and then ρ(αYA-PT,𝒩(X∞A))<1.

Conversely, suppose that ρ(αYA-P)<1 for some scalar α∈ℂ∖{0}, where P denotes a projection from 𝒳 onto T. Then limXkA=P by (10); therefore limXk exists and T=ℛ(P)⊂ℛ(limXk).

By (8), ℛ(limXk)⊂T because T is closed, and then ℛ(limXk)=T. Hence, we obtain (limXk)A(limXk)=limXk, so limXk∈A{2}. It is easy to see that if 𝒩(limXk)=S, then limXk=AT,S(2); thus AT,S(2) exists.

Assume that AT,S(2) exists. By (8), 𝒩(limXk)⊃S because S is closed and complemented. If y∈𝒩(limXk)∩A(T), then y=Az for some z∈T. Thus, 0=limXky=limXkAz=Pz=z, and hence y=0. Therefore, 𝒩(limXk)∩A(T)={0}, and then 𝒩(limXk)=S by Lemma 2. Consequently, limXk=AT,S(2).

Since 𝒩(Xk)⊃S, we have XkAAT,S(2)=Xk by Lemma 3. Thus, postmultiplying (10) by AT,S(2) yields
(12)Xk-AT,S(2)=(-1)t+1(αYA-P)tkAT,S(2).
Since AT,S(2)=PAT,S(2), we have
(13)∥AT,S(2)-Xk∥=∥(αYA-P)tkAT,S(2)∥≤∥αYA-P∥tk∥AT,S(2)∥=qtk∥AT,S(2)∥.
Hence, we get (6).
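As a finite-dimensional sanity check of Theorem 4, one can take Y=AT for a real full-column-rank matrix A, so that T=ℛ(AT) is the whole domain, P=I, and AT,S(2)=A†. The following Python sketch is our own; the choice α=1/∥YA∥ is one admissible value making ρ(αYA-P)<1, and it verifies the error bound (6) for the second-order scheme t=2:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))            # full column rank with probability 1
Y = A.T                                    # R(Y) = R^3 = T, N(Y) = N(A^T) = S
P = np.eye(3)                              # projection onto T (all of R^3 here)
alpha = 1.0 / np.linalg.norm(Y @ A, 2)     # makes rho(alpha*YA - P) < 1
q = np.linalg.norm(alpha * (Y @ A) - P, 2)

Apinv = np.linalg.pinv(A)                  # A_{T,S}^{(2)} = A^+ for this choice of Y
errors, bounds = [], []
X = alpha * Y                              # X_0 = alpha * Y
for k in range(1, 6):
    X = (2 * np.eye(3) - X @ A) @ X        # iteration (5) with t = 2
    errors.append(np.linalg.norm(Apinv - X, 2) / np.linalg.norm(Apinv, 2))
    bounds.append(q ** (2 ** k))           # right-hand side of (6)
```

Each relative error stays below q^(t^k), whose exponent doubles every step; this is the sense in which the scheme is higher-order.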

Similarly, we have the dual result as below.

Theorem 5.

Let A∈ℬ(𝒳,𝒴), Y∈ℬ(𝒴,𝒳), and let T⊂𝒳 and S⊂𝒴 both be closed complemented subspaces, respectively, with ℛ(Y)=T, 𝒩(Y)=S. Define the sequence {Xk} in ℬ(𝒴,𝒳) such that
(14)X0=αY,Xk=Xk-1[Ct1I-Ct2AXk-1+⋯+(-1)t-1Ctt(AXk-1)t-1];
then {Xk} converges to some X∞∈A{2} with 𝒩(X∞)=S if and only if ρ(αAY-Q)<1 for some scalar α∈ℂ∖{0}, where t≥2 is an arbitrary positive integer, X∞=limXk, and Q is a projection from 𝒴 onto S. Moreover,

(i) if ℛ(X∞)=T, then AT,S(2) exists and X∞=AT,S(2);

(ii) if AT,S(2) exists, then ℛ(X∞)=T and X∞=AT,S(2).

In particular, if AT,S(2) exists, X∞=AT,S(2), and q=∥αAY-Q∥<1, then ∥AT,S(2)-Xk∥/∥AT,S(2)∥≤qtk, k≥0.

Remark 6.

Now, we consider how to choose a suitable scalar α∈ℂ∖{0} for the iterative scheme (5) so that it converges faster to AT,S(2).

Since ℛ(YA)⊂T, for any α∈ℂ∖{0} we have ρ(P-αYA)=ρ(P-α(YA)|T)=maxμ∈σ((YA)|T)|1-αμ|. Therefore, ρ(P-αYA)<1 if and only if 0∉σ((YA)|T) and maxμ∈σ(YA)∖{0}|1-αμ|<1. Thus, there exists λ0∈σ(YA)∖{0} with |1-αλ0|=ρ(P-αYA).

Let λ0=|λ0|(cosθ+isinθ) and α=|α|(cosφ+isinφ), where θ=arg(λ0), φ=arg(α). Then, ρ(P-αYA)=[|αλ0|2+1-2|αλ0|cos(θ+φ)]1/2. Thus, ρ(P-αYA)<1 if and only if 0<|αλ0|<2cos(θ+φ) and 0∉σ((YA)|T).

Hence, if 0∉σ((YA)|T) and α satisfies 0<|α|<2cos(θ+φ)/ρ(YA), then ρ(P-αYA)<1. In practice, once such a λ0 is determined, α is taken to satisfy arg(α)=-arg(λ0) and 0<|α|<2/ρ(YA). If σ(YA) is a subset of ℝ, then we take α satisfying 0<|α|<2/ρ(YA) and sgnα=sgnλ0, where λ0∈σ(YA), so as to ensure that ρ(P-αYA)<1.

Assume that 0∉σ((YA)|T) holds. In the following, we seek the value αopt that minimizes ρ(P-αYA) so as to achieve good convergence; unfortunately, this may be rather difficult in general. If σ(YA) is a subset of ℝ and λmin=min{λ:λ∈σ((YA)|T)}>0, then, analogous to [8, Example 4.1], we have
(15)αopt=2/(λmin+ρ(YA)).
In practice, because ρ(YA) is not easily obtained, we often use ∥YA∥ instead in the above inequalities and in (15) to choose α, which is justified by ρ(YA)≤∥YA∥.
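When Y=AT for a real matrix A, σ((YA)|T) is the set of nonzero eigenvalues of ATA, i.e., the nonzero squared singular values of A, so λmin and ρ(YA) in (15) are directly computable. The short Python sketch below (variable names and the comparison value α=1/ρ(YA) are our own) compares the resulting spectral radii:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))               # full column rank with probability 1
s2 = np.linalg.svd(A, compute_uv=False) ** 2  # sigma((YA)|_T) for Y = A^T
lam_min, rho_YA = s2.min(), s2.max()

alpha_opt = 2.0 / (lam_min + rho_YA)          # formula (15)
alpha_naive = 1.0 / rho_YA                    # simple admissible choice for comparison

def spec_radius(alpha):
    # rho(P - alpha*YA) = max over sigma((YA)|_T) of |1 - alpha*lambda|
    return np.abs(1.0 - alpha * s2).max()
```

For α=αopt the radius equals (ρ(YA)-λmin)/(ρ(YA)+λmin), which never exceeds the value 1-λmin/ρ(YA) produced by the naive choice.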

3. Higher-Order Convergent Iterative Method for Computing the Generalized Inverse over Banach Algebra

In this section, we investigate a higher-order convergent iterative method for computing the generalized Drazin inverse ad over a Banach algebra.

Theorem 7.

Let a∈𝒜, let p∈𝒜 be an idempotent with ap=pa, and let y∈𝒜 with (1-p)y=y(1-p)=y. Define the sequence {xk} in 𝒜 such that
(16)x0=αy,xk=[Ct1-Ct2xk-1a+⋯+(-1)t-1Ctt(xk-1a)t-1]xk-1,
where α∈ℂ∖{0} and t≥2. Then the iteration (16) converges to some x∞ with px∞=0 if and only if ρ(1-p-αya)<1. In this case, assume that annr(y)∩(1-p)𝒜(1-p)={0}. Then

(i) ad exists and the iteration (16) converges to ad if and only if ap is quasinilpotent in 𝒜;

(ii) if q=∥1-p-αya∥<1, then ∥ad-xk∥≤qtk|α|·∥y∥·∥(p+αay)-1∥.

Proof.

(i) Since (1-p)y=y(1-p)=y and x0=αy, we have (1-p)x0=x0. By induction on k, we have
(17)(1-p)xk=(1-p)[Ct1-Ct2xk-1a+⋯+(-1)t-1Ctt(xk-1a)t-1]xk-1=xk.
By (16), we obtain
(18)xka-1=(-1)t-1(xk-1a-1)t=(-1)t-1(x0a-1)tk.
From (17) and (18), we get
(19)(1-p)(xka-1)=xka-(1-p)=(-1)t-1(x0a-(1-p))tk.

If the iteration converges, the right-hand side of the last equality of (19) implies that
(20)0=limk→∞(-1)t-1(x0a-(1-p))tk.
By (20), we easily have ρ(x0a-(1-p))=ρ(1-p-αya)<1.

Conversely, assume that ρ(1-p-αya)<1. Since pa=ap and (1-p)y=y(1-p)=y, we have 1-p-αya∈(1-p)𝒜(1-p), and then ya is invertible in (1-p)𝒜(1-p). We will show that ay is also invertible in (1-p)𝒜(1-p). Clearly, ay∈(1-p)𝒜(1-p). If ayc=0 for some c∈(1-p)𝒜(1-p), then yc=[(ya)](1-p)𝒜(1-p)-1(ya)yc=[(ya)](1-p)𝒜(1-p)-1y(ayc)=0. Hence, c∈annr(y)∩(1-p)𝒜(1-p)={0}, so c=0. Therefore, 0∉σ((ay)|(1-p)𝒜(1-p)), and then ay is invertible in (1-p)𝒜(1-p).

In the following, we consider result (i). Similarly to the deduction of (10), we can write (16) as
(21)xka=(1-p)+(-1)t-1(x0a-(1-p))tk.

Thus, postmultiplying (21) by y yields
(22)xkay=(1-p)y+(-1)t-1(x0a-(1-p))tky.
By Lemma 1 and (22), xk converges to y[(ay)](1-p)𝒜(1-p)-1, which we denote by x∞=y[(ay)](1-p)𝒜(1-p)-1. Moreover, y[(ay)](1-p)𝒜(1-p)-1=[(ya)](1-p)𝒜(1-p)-1y; then
(23)x∞a=y[(ay)](1-p)𝒜(1-p)-1a=[(ya)](1-p)𝒜(1-p)-1ya=1-p=ay[(ay)](1-p)𝒜(1-p)-1=ax∞.
Thus, we obtain a-a2x∞=a(1-ax∞)=ap. Since x∞ax∞=x∞(1-p)=x∞, we have that x∞=ad if and only if ap is quasinilpotent in 𝒜.

Since p is idempotent, ap=pa, and
(24)(1-p)y=y(1-p)=y,p(p+αay)=(p+αay)p=p,

we get
(25)α(p+αay)-1ay=1-(p+αay)-1p=1-p=1-p(p+αay)-1=αay(p+αay)-1.
Therefore, we obtain [(ay)](1-p)𝒜(1-p)-1=α(p+αay)-1 in (1-p)𝒜(1-p). By (21), we have
(26)xkay=[(1-p)+(-1)t+1[αya-(1-p)]tk]y.
Hence, by the argument in (i) and (26), we have
(27)ad-xk=x∞-xk=y[(ay)](1-p)𝒜(1-p)-1-[(1-p)+(-1)t+1[αya-(1-p)]tk]y[(ay)](1-p)𝒜(1-p)-1=(-1)t+2[αya-(1-p)]tky[(ay)](1-p)𝒜(1-p)-1.
Taking norms in (27) then yields (ii).
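In finite dimensions, Theorem 7 can be illustrated on a block matrix a=diag(B,N) with B invertible and N nilpotent: here ad=diag(B-1,0), p is the projection onto the nilpotent block (so ap is nilpotent, hence quasinilpotent), and one may take y=diag(BT,0). The Python sketch below is our own construction; the choice α=1/∥ya∥ is one value making ρ(1-p-αya)<1 in this setting.

```python
import numpy as np
from math import comb

def drazin_iter(a, p, y, t=3, iters=40):
    """Iteration (16): x_k = [C(t,1) - C(t,2)x_{k-1}a + ...] x_{k-1}, with x_0 = alpha*y."""
    n = a.shape[0]
    I = np.eye(n)
    alpha = 1.0 / np.linalg.norm(y @ a, 2)   # makes rho(1 - p - alpha*y*a) < 1 here
    x = alpha * y
    for _ in range(iters):
        xa = x @ a
        S = comb(t, 1) * I                    # accumulate the bracketed polynomial in xa
        M = I
        for i in range(2, t + 1):
            M = M @ xa                        # M = (xa)^(i-1)
            S = S + (-1) ** (i - 1) * comb(t, i) * M
        x = S @ x
    return x

Z = np.zeros((2, 2))
B = np.array([[2.0, 1.0], [0.0, 3.0]])        # invertible block
N = np.array([[0.0, 1.0], [0.0, 0.0]])        # nilpotent block
a = np.block([[B, Z], [Z, N]])
p = np.diag([0.0, 0.0, 1.0, 1.0])             # idempotent with ap = pa
y = np.block([[B.T, Z], [Z, Z]])              # (1-p)y = y(1-p) = y
ad = drazin_iter(a, p, y)
```

The iterates converge to x∞=y[(ay)]-1 computed inside the corner algebra, which here equals BT(BBT)-1=B-1 on the invertible block and 0 on the nilpotent block.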

Similarly, we have the following.

Theorem 8.

Let a∈𝒜, let p∈𝒜 be an idempotent with ap=pa, and let y∈𝒜 with (1-p)y=y(1-p)=y. Define the sequence {xk} in 𝒜 such that
(28)x0=αy,xk=xk-1[Ct1-Ct2axk-1+⋯+(-1)t-1Ctt(axk-1)t-1],
where α∈ℂ∖{0} and t≥2. Then the iteration (28) converges to some x∞ with px∞=0 if and only if ρ(1-p-αay)<1. In this case, assume that annl(y)∩(1-p)𝒜(1-p)={0}. Then

(i) ad exists and the iteration (28) converges to ad if and only if ap is quasinilpotent in 𝒜;

(ii) if q=∥1-p-αay∥<1, then ∥ad-xk∥≤qtk|α|·∥y∥·∥(p+αya)-1∥.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (11061005) and by Grant HCIC201103 of the Guangxi Key Laboratory of Hybrid Computational and IC Design Analysis Open Fund.

References

[1] Y. Chen and X. Chen, Representation and approximation of the outer inverse AT,S(2) of a matrix A.
[2] D. S. Djordjević and P. S. Stanimirović, The representation and approximations of outer generalized inverses.
[3] X. J. Liu, C. M. Hu, and Y. M. Yu, Further results on iterative methods for computing generalized inverses.
[4] X. J. Liu, Y. M. Yu, and C. M. Hu, The iterative methods for computing the generalized inverse AT,S(2) of the bounded linear operator between Banach spaces.
[5] R. Penrose, A generalized inverse for matrices.
[6] Y. Saad.
[7] X. Sheng and G. Chen, Several representations of generalized inverse AT,S(2) and their application.
[8] X. Sheng, G. Chen, and Y. Gong, The representation and computation of generalized inverse AT,S(2).
[9] G. Wang, Y. Wei, and S. Qiao.
[10] Y. Wei, A characterization and representation of the generalized inverse AT,S(2) and its applications.
[11] Y. Wei and H. Wu, The representation and approximation for the generalized inverse AT,S(2).
[12] Y. Yu and Y. Wei, The representation and computational procedures for the generalized inverse AT,S(2) of an operator A in Hilbert spaces.
[13] J. J. Koliha, A generalized Drazin inverse.
[14] V. Müller.
[15] D. S. Djordjević and P. S. Stanimirović, On the generalized Drazin inverse and generalized resolvent.
[16] A. J. Getson and F. C. Hsuan.
[17] W. G. Li and Z. Li, A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix.
[18] H. Chen and Y. Wang, A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse.
[19] X. Liu, H. Jin, and Y. Yu, Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices.