The Scientific World Journal (ISSN 1537-744X), Hindawi Publishing Corporation, Article ID 708647, doi: 10.1155/2013/708647. Research Article.

A Higher Order Iterative Method for Computing the Drazin Inverse

F. Soleymani (Department of Mathematics, Zahedan Branch, Islamic Azad University, Zahedan, Iran) and Predrag S. Stanimirović (Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia)

Academic Editors: S. A. Mohiuddine and Q. Xie

Received 06 August 2013; Accepted 27 August 2013; Published 30 September 2013

Copyright © 2013 F. Soleymani and Predrag S. Stanimirović. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A method with a high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method can be used for finding the Drazin inverse. The application of the scheme to large sparse test matrices, alongside its use in preconditioning linear systems of equations, is presented to clarify the contribution of the paper.

1. Introduction

Computing the inverse of large nonsingular matrices is a difficult and time-consuming task, so the application of high-order algorithms to this problem is very desirable. Generally speaking, in a wide variety of topics, one must compute the inverse, or more generally the generalized inverses, to comprehend and realize significant features of the involved problems [1]. An example arises in phased-array radar, where target tracking is a recursive prediction-correction process in which Kalman filtering is extensively used; see [2, 3]. Target equations are modeled explicitly such that the position, velocity, and potentially higher derivatives of each measurement are approximated by the track filter as a state vector. The error associated with the state vector is modeled by a covariance matrix, which is then used in subsequent computations. To be more precise, this matrix gets updated in each iteration of the track filter, and finding the inverse in the next iteration can make use of the inverse from the present one. In such circumstances, fast and efficient iterative algorithms are required.

There are several techniques to tackle this problem, which fall into two main classes: direct solvers, such as Gaussian Elimination with Partial Pivoting (GEPP), which require a massive amount of computation and memory for large-scale problems, and iterative methods of the Schulz type, in which an approximation of the matrix inverse (using a threshold) can be refined per step up to the desired accuracy.

Almost all direct methods for matrix inversion require high accuracy in the computations to attain proper solutions, as they are not tolerant of errors in the computed matrices. In contrast, an iterative method compensates for individual and accumulated round-off errors, since it is a process of successive refinement.

In this paper, we focus on presenting and demonstrating a new iterative method that finds approximate inverse matrices as fast as possible, with close attention to reducing the computational time. Toward this goal, a theoretical discussion is also given to show the behavior of the proposed scheme. An interesting point of the contribution is that it can easily be applied to complex matrices as well as to finding the Drazin inverse.

To clarify the procedure, we now recall some well-known methods. Perhaps the most common technique to compute the inverse of a nonsingular complex matrix A is the Schulz method [4]:
(1) V_{n+1} = V_n(2I − AV_n), n = 0, 1, 2, …,
where I is the identity matrix of the same dimension as A. The scheme (1) became popular in the 1980s due to the emergence of parallel machines.
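As a concrete illustration, the Schulz iteration (1) can be sketched in a few lines of Python/NumPy (our choice for runnable examples here; the paper itself works in Mathematica). The test matrix, tolerance, and the Form 4-style starting guess are illustrative assumptions, not part of the original text.

```python
import numpy as np

def schulz_inverse(A, V0, tol=1e-12, max_iter=100):
    """Schulz iteration (1): V_{n+1} = V_n (2I - A V_n)."""
    I = np.eye(A.shape[0])
    V = V0
    for _ in range(max_iter):
        V_next = V @ (2 * I - A @ V)
        if np.linalg.norm(V_next - V, 1) <= tol:  # stop on small update
            return V_next
        V = V_next
    return V

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
# Form 4-style starter, safe for any nonsingular A: A^T / (||A||_1 ||A||_inf).
V0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
V = schulz_inverse(A, V0)
```

For this starting guess the eigenvalues of I − AV_0 lie in (0, 1), so the quadratic convergence of (1) is guaranteed.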

Such iterative methods are sensitive to the initial guess/value (V_0) used to start the process and converge to A^{-1}. In practice, the Schulz-type methods are efficient (especially for structured matrices), but a difficulty lies in the initial approximation of A^{-1}. This need was fulfilled by some appropriate initial approximations provided in the literature. For example, Rajagopalan [5] gave initial approximations for the inverse matrix by considering different norms, such as
(2) V_0 = A/‖A‖^2.
In fact, by choosing (2), we attain
(3) ‖V_0‖/‖A^{-1}‖ = ‖A‖/(‖A‖^2 ‖A^{-1}‖).
Based on the fact that κ(A) = ‖A‖‖A^{-1}‖ ≥ ‖I‖ ≥ 1, we obtain
(4) ‖V_0‖/‖A^{-1}‖ = 1/κ(A) ≤ 1,
which suggests that by choosing (2), the iterative scheme (1) will almost always be convergent for regular matrices.

Some other ways to choose the initial approximation V0 have been listed in Table 1, where AT and A* are the transpose and the conjugate transpose of the complex matrix A, respectively, and N stands for the size of the square matrix.

Some of the general ways to choose V0.

Forms | Form 1 | Form 2 | Form 3 | Form 4 | Form 5
Initial formulation | A/‖A‖_1^2 | A/‖A‖_∞^2 | A/‖A‖_F^2 | A^T/(N‖A‖_1‖A‖_∞) | A^*/‖A‖_2^2

A vast discussion on choosing the initial approximation V_0 is given in the literature. For instance, Pan and his coworkers discussed possible ways to reduce the computational load of Schulz-type iteration methods for structured matrices such as Toeplitz or Hankel matrices; see, for example, [6, 7]. To illustrate further, for a symmetric positive definite (SPD) matrix A, one can choose V_0 as follows:
(5) V_0 = I/‖A‖_F.

Another interesting choice for diagonally dominant matrices, fruitful when dealing with large sparse systems arising from the discretization of differential equations, is
(6) V_0 = diag(1/a_{11}, 1/a_{22}, …, 1/a_{NN}),
with a_{ii} being the diagonal entries of A.
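The initial-guess recipes above are easy to check numerically. The sketch below, in Python/NumPy with an illustrative diagonally dominant matrix of our own choosing, verifies that both the diagonal choice (6) and a Form 5-style choice (with the transpose, for a real matrix) satisfy the convergence condition ‖I − AV_0‖ < 1 used throughout the paper.

```python
import numpy as np

# A small diagonally dominant test matrix (illustrative only).
A = np.array([[10.0, 1.0, 2.0],
              [1.0, 12.0, 3.0],
              [0.0, 2.0, 9.0]])
I = np.eye(3)

# Choice (6): reciprocals of the diagonal, suited to diagonally dominant A.
V0_diag = np.diag(1.0 / np.diag(A))

# Form 5-style choice for a real matrix: V0 = A^T / ||A||_2^2.
V0_spec = A.T / np.linalg.norm(A, 2) ** 2

# Residual norms of I - A V0; both are below 1 for this matrix.
r_diag = np.linalg.norm(I - A @ V0_diag, np.inf)
r_spec = np.linalg.norm(I - A @ V0_spec, 2)
```

For the Form 5-style choice the residual I − AA^T/‖A‖_2^2 has eigenvalues 1 − σ_i^2/σ_1^2 ∈ [0, 1), so the bound holds for any nonsingular real A; the diagonal choice needs diagonal dominance.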

Let us now review some high-order iterative methods for matrix inversion. The motivation for higher order methods is the fact that (1) is too slow at the beginning of the process, before arriving at the convergence phase for general matrices, and this increases the computational load of the whole algorithm used for matrix inversion.

Li et al. [10] investigated the third-order iteration
(7) V_{n+1} = V_n(3I − AV_n(3I − AV_n)), n = 0, 1, 2, …,
and also proposed another iterative method of the same order for finding A^{-1}:
(8) V_{n+1} = V_n[I + (1/2)(I − AV_n)(I + (2I − AV_n)^2)], n = 0, 1, 2, ….

Note that a general procedure for constructing such methods was given in [11, Chapter 5]. Krishnamurthy and Sen provided the following fourth-order method:
(9) V_{n+1} = V_n(I + Y_n(I + Y_n(I + Y_n))), n = 0, 1, 2, …,
in which Y_n = I − AV_n. As another example, a ninth-order method of this family can be written as
(10) V_{n+1} = V_n(I + Y_n(I + Y_n(I + Y_n(I + Y_n(I + Y_n(I + Y_n(I + Y_n(I + Y_n)))))))), n = 0, 1, 2, ….
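The nesting in (9) and (10) is a Horner-like evaluation of I + Y + Y^2 + … + Y^{p-1}, one matrix multiplication per level. A generic step of this family can be sketched in Python/NumPy as follows (the test matrix, starting guess, and iteration count are illustrative assumptions):

```python
import numpy as np

def kms_step(A, V, order):
    """One hyper-power step V <- V (I + Y (I + Y (... (I + Y) ...)))
    with Y = I - A V and order-1 nested factors; order=2 recovers Schulz (1),
    order=4 gives (9), and order=9 gives (10)."""
    I = np.eye(A.shape[0])
    Y = I - A @ V
    S = I + Y                 # innermost factor
    for _ in range(order - 2):
        S = I + Y @ S         # Horner-like nesting, one multiplication per level
    return V @ S

A = np.array([[5.0, 1.0],
              [2.0, 4.0]])
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(4):
    V = kms_step(A, V, order=9)   # the ninth-order scheme (10)
```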

In the sequel, we present a new iteration for matrix inversion. The following section covers our main contribution, a new efficient ninth-order inverse-finding iterative method, and proves the main theorem. In Section 3, we discuss the complexity of the iterative methods in order to find, theoretically, the most efficient one. In Section 4, we analytically discuss the application of the new algorithm to the computation of the Drazin inverse, which is of interest in numerical analysis. Section 5 applies the suggested iteration to finding robust approximate inverses of large sparse matrices with real or complex entries in detail, and the application of the new scheme to preconditioning practical problems is also given; a clear reduction in the computational time needed to attain the desired accuracy will be observed there. Finally, our concluding remarks are furnished in Section 6.

2. A New Method for Matrix Inversion

A new scheme can be designed by applying an efficient nonlinear (scalar) equation solver to the matrix equation F(V) = V^{-1} − A and then obtaining an iterative process by a proper factorization. In this way, applying the iterative scheme
(11) Y_n = V_n − F(V_n)/F′(V_n),
Z_n = V_n − (F(V_n)/2)(1/F′(V_n) + 1/F′(Y_n)),
V_{n+1} = Z_n − (F(Z_n)/(2F′(Z_n)))(2 + F″(Z_n)F(Z_n)/F′(Z_n)^2), n = 0, 1, 2, …,
to the matrix equation V^{-1} − A = 0, we are able to find the fixed-point iteration
(12) V_{n+1} = −(1/8)V_n(−7I + 9AV_n − 5(AV_n)^2 + (AV_n)^3) × (12I − 42AV_n + 103(AV_n)^2 − 156(AV_n)^3 + 157(AV_n)^4 − 104(AV_n)^5 + 43(AV_n)^6 − 10(AV_n)^7 + (AV_n)^8).
Now, by simplification, we suggest the following matrix iteration:
(13) χ_n = −7I + ψ_n(9I + ψ_n(−5I + ψ_n)),
ϑ_n = ψ_nχ_n,
V_{n+1} = −(1/8)V_nχ_n(12I + ϑ_n(6I + ϑ_n)), n = 0, 1, 2, …,
wherein ψ_n = AV_n, I is the identity matrix, and the sequence of iterates {V_n}_{n=0}^{∞} converges to A^{-1} under a condition given below.
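The factored form (13) maps directly onto code: one multiplication for ψ_n, two for χ_n, one for ϑ_n, and three more for the final product, seven in total. A minimal Python/NumPy sketch (the test matrix and starting guess are illustrative assumptions):

```python
import numpy as np

def step13(A, V):
    """One step of iteration (13): psi = A V, then chi, theta as in the text.
    Costs seven matrix-by-matrix multiplications per step."""
    I = np.eye(A.shape[0])
    psi = A @ V
    chi = -7 * I + psi @ (9 * I + psi @ (-5 * I + psi))
    theta = psi @ chi
    return -0.125 * V @ chi @ (12 * I + theta @ (6 * I + theta))

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
# Starting guess satisfying ||I - A V0|| < 1 for any nonsingular A.
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(3):
    V = step13(A, V)
```

With the ninth-order contraction, three steps already drive the residual to machine precision for this small example.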

In numerical mathematics, it is very useful and essential to know the behavior of an approximate method; therefore, we now prove the order of convergence of (13) in Theorem 1.

Theorem 1.

Let A = [a_{ij}]_{N×N} be a nonsingular complex matrix. If the initial approximation V_0 satisfies
(14) ‖I − AV_0‖ < 1,
then the iterative method (13) converges to A^{-1} with ninth order.

Proof.

We use the notation E_0 = I − AV_0 and, subsequently, E_n = I − AV_n. Then
(15) E_{n+1} = I − AV_{n+1}
= I − A(−(1/8)V_n(−7I + AV_n(9I + AV_n(−5I + AV_n))) × (12I + AV_n(−7I + AV_n(9I + AV_n(−5I + AV_n)))(6I + AV_n(−7I + AV_n(9I + AV_n(−5I + AV_n))))))
= I − A(−(1/8)V_n(−7I + 9AV_n − 5(AV_n)^2 + (AV_n)^3) × (12I − 42AV_n + 103(AV_n)^2 − 156(AV_n)^3 + 157(AV_n)^4 − 104(AV_n)^5 + 43(AV_n)^6 − 10(AV_n)^7 + (AV_n)^8))
= (1/8)(−2I + AV_n)^3(−I + AV_n)^9
= (1/8)(I + (I − AV_n))^3(I − AV_n)^9
= (1/8)(I + E_n)^3 E_n^9
= (1/8)(E_n^9 + 3E_n^{10} + 3E_n^{11} + E_n^{12}).
Hence, by taking an arbitrary matrix norm on both sides of (15), we attain
(16) ‖E_{n+1}‖ ≤ (1/8)(‖E_n‖^9 + 3‖E_n‖^{10} + 3‖E_n‖^{11} + ‖E_n‖^{12}).
In addition, since ‖E_0‖ < 1, relation (16) gives
(17) ‖E_1‖ ≤ (1/8)(‖E_0‖^9 + 3‖E_0‖^{10} + 3‖E_0‖^{11} + ‖E_0‖^{12}) ≤ ‖E_0‖^9 < 1.
Now, if ‖E_n‖ < 1, then
(18) ‖E_{n+1}‖ ≤ (1/8)(‖E_n‖^9 + 3‖E_n‖^{10} + 3‖E_n‖^{11} + ‖E_n‖^{12}) ≤ ‖E_n‖^9.
Using mathematical induction, we obtain
(19) ‖E_{n+1}‖ ≤ ‖E_n‖^9, n ≥ 0.
Furthermore, we get that
(20) ‖E_{n+1}‖ ≤ ‖E_n‖^9 ≤ ⋯ ≤ ‖E_0‖^{9^{n+1}}.
That is, I − AV_n → 0 as n → ∞, and
(21) V_n → A^{-1} as n → ∞.
Thus, the new method (13) converges to the inverse of the matrix A when ρ(I − AV_0) < 1, where ρ is the spectral radius. Now, we prove that the order of convergence of the sequence {V_n}_{n=0}^{∞} is at least nine. Let ε_n denote the error matrix ε_n = A^{-1} − V_n; then
(22) Aε_n = I − AV_n = E_n.
The identity (22), in conjunction with (15), implies that
(23) Aε_{n+1} = (1/8)((Aε_n)^9 + 3(Aε_n)^{10} + 3(Aε_n)^{11} + (Aε_n)^{12}).
Therefore, using the invertibility of A, it follows immediately that
(24) ε_{n+1} = (1/8)(ε_n(Aε_n)^8 + 3ε_n(Aε_n)^9 + 3ε_n(Aε_n)^{10} + ε_n(Aε_n)^{11}).
By taking any subordinate norm in (24), we obtain
(25) ‖ε_{n+1}‖ ≤ (1/8)(‖A‖^8 + 3‖A‖^9‖ε_n‖ + 3‖A‖^{10}‖ε_n‖^2 + ‖A‖^{11}‖ε_n‖^3)‖ε_n‖^9.
Consequently, it is proved that the iterative formula (13) converges to A^{-1}, and the order of this method is at least nine.
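The key step of the proof, the residual recurrence E_{n+1} = (1/8)(I + E_n)^3 E_n^9, is an exact polynomial identity in AV_n and can be checked numerically. The sketch below (Python/NumPy, with an arbitrary well-conditioned test matrix and a deliberately rough approximate inverse, both our own assumptions) compares the residual after one step of (13) with the predicted expression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
I = np.eye(n)
A = I + 0.1 * rng.standard_normal((n, n))                 # well-conditioned test matrix
V = np.linalg.inv(A) + 0.2 * rng.standard_normal((n, n))  # rough approximate inverse

E = I - A @ V                       # residual before the step
psi = A @ V                         # one step of iteration (13)
chi = -7 * I + psi @ (9 * I + psi @ (-5 * I + psi))
theta = psi @ chi
V1 = -0.125 * V @ chi @ (12 * I + theta @ (6 * I + theta))

E1 = I - A @ V1                     # residual after the step
# Predicted residual from (15): (1/8) (I + E)^3 E^9.
E1_pred = 0.125 * np.linalg.matrix_power(I + E, 3) @ np.linalg.matrix_power(E, 9)
```

The two matrices agree to round-off, independently of whether ‖E‖ < 1; the norm bound (16) then needs ‖E_0‖ < 1 only to make the right-hand side contract.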

The Schulz-type iterations are strongly numerically stable; that is, they have a self-correcting characteristic, and they are essentially based on matrix multiplication per iterative step. Matrix multiplication is effectively parallelizable, also for structured matrices represented in compressed form.

The iterative scheme (13) can efficiently be combined with sparse techniques in order to reduce the computational load of the matrix-by-matrix multiplications per step. We should also point out that even if the matrix A is singular, the Schulz-type methods, including the scheme (13), converge to the Moore-Penrose inverse when a proper initial matrix is used. A full discussion of this feature of such iterative methods is given in [9].

Note that {V_n}_{n=0}^{∞} produced by (13) may, under a certain condition (when AV_0 = V_0A), be applied not only to the left preconditioned linear system V_nAx = V_nb but also to the right preconditioned linear system AV_ny = b, where x = V_ny. In fact, an important application of the new method (13) is in preconditioning linear systems of equations. The experimental results in Section 5 will show that the preconditioner obtained from (13) may lead to nicely clustered eigenvalue distributions of the preconditioned matrices and, hence, to fast convergence of preconditioned Krylov subspace iteration methods, such as GMRES and BiCGSTAB, for solving some classes of large sparse systems of linear equations.
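The clustering effect is easy to observe on a toy problem. The sketch below (Python/NumPy; the tridiagonal diagonally dominant matrix and the two-step budget are illustrative assumptions) builds a preconditioner with two steps of (13) from the diagonal guess (6) and inspects the spectrum of the preconditioned matrix V A.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# A tridiagonal, strictly diagonally dominant test matrix (illustrative).
A = (np.diag(4.0 + rng.random(n))
     + 0.5 * np.diag(rng.random(n - 1), 1)
     + 0.5 * np.diag(rng.random(n - 1), -1))

I = np.eye(n)
V = np.diag(1.0 / np.diag(A))       # initial guess (6)
for _ in range(2):                  # two steps of iteration (13)
    psi = A @ V
    chi = -7 * I + psi @ (9 * I + psi @ (-5 * I + psi))
    theta = psi @ chi
    V = -0.125 * V @ chi @ (12 * I + theta @ (6 * I + theta))

# Spectrum of the (left-)preconditioned matrix: tightly clustered around 1.
spread = np.max(np.abs(np.linalg.eigvals(V @ A) - 1.0))
```

A spectrum clustered at 1 is precisely what makes Krylov solvers such as GMRES and BiCGSTAB converge in very few steps.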

3. Complexity of the Methods

Let us consider the computational complexity of the iterative processes (1), (7), (8), (9), (10), and (13), since they all converge to A^{-1} under the same condition. From a theoretical analysis, assuming a uniform cost for the arithmetic operations, typical of floating point computations, we consider the inverse-finder informational efficiency index. It uses two parameters, ρ and κ, which stand for the rate of convergence and the number of matrix-by-matrix multiplications per step, respectively. The comparative index can then be expressed as
(26) IIEI = ρ/κ.

Hence, from a theoretical point of view, a favorable method must attain a high order ρ with as few matrix multiplications κ as possible.

In Table 2, we furnish a comparison of the rate of convergence, the number of matrix multiplications, and the index (26) for the different methods. The results show that the newly established method of Section 2 is better than the others. In fact, by comparing these results, one can see that the iterative process (13) reduces the computational complexity by using fewer basic operations and leads to a better equilibrium between high speed and operational cost.

Comparison of the computational complexity for different methods.

Methods | (1) | (7) | (8) | (9) | (10) | (13)
Rate of convergence | 2 | 3 | 3 | 4 | 9 | 9
Number of matrix multiplications | 2 | 3 | 4 | 4 | 9 | 7
IIEI | 2/2 = 1 | 3/3 = 1 | 3/4 = 0.75 | 4/4 = 1 | 9/9 = 1 | 9/7 ≈ 1.286
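The comparison in Table 2 amounts to a one-line computation; the short Python sketch below (method labels are ours, the ρ and κ values are those of the table) recomputes the index (26) and confirms that the proposed scheme maximizes it.

```python
# Rates rho and per-step multiplication counts kappa, taken from Table 2.
methods = {
    "Schulz (1)": (2, 2),
    "Li et al. I (7)": (3, 3),
    "Li et al. II (8)": (3, 4),
    "KMS4 (9)": (4, 4),
    "KMS9 (10)": (9, 9),
    "Proposed (13)": (9, 7),
}
# IIEI = rho / kappa, definition (26).
iiei = {name: rho / kappa for name, (rho, kappa) in methods.items()}
best = max(iiei, key=iiei.get)
print(best, round(iiei[best], 3))  # -> Proposed (13) 1.286
```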
4. Application in Finding the Drazin Inverse

In 1958, Drazin [13] introduced a different kind of generalized inverse in associative rings and semigroups, one that does not have the reflexivity property but commutes with the element. The importance of this kind of inverse and of its computation was later set out fully by Wilkinson [14]. This motivated many authors to develop direct or iterative methods for this important problem; see, for example, [15, 16].

Definition 2.

The smallest nonnegative integer k such that rank(A^{k+1}) = rank(A^k) is called the index of A and is denoted by ind(A).

Definition 3.

Let A be an N×N complex matrix. The Drazin inverse of A, denoted by A^D, is the unique matrix V satisfying
(27) (1^k) A^kVA = A^k, (2) VAV = V, (5) AV = VA,
where k = ind(A) is the index of A.

Note that if ind(A) = 1, the matrix V is called the group inverse of A. Also, if A is nonsingular, then it is easily seen that ind(A) = 0 and A^D = A^{-1}. Note that the idempotent matrix AA^D is the projector on ℛ(A^k) along 𝒩(A^k), where ℛ(A^k) denotes the range of A^k and 𝒩(A^k) is the null space of A^k.

Wei [15] proved that the general solution of the square singular linear system Ax = b can be obtained using the Drazin inverse as x = A^Db + (I − AA^D)z, where z ∈ ℛ(A^{k−1}) + 𝒩(A).

In 2004, Li and Wei [16] proved that the matrix method of Schulz (1) can be used for finding the Drazin inverse of square matrices possessing either real or complex spectra. They proposed the initial matrix
(28) V_0 = W_0 = αA^l, l ≥ ind(A) = k,
where the parameter α must be chosen so that the condition ‖I − AV_0‖ < 1 is satisfied. Using the initial matrix of the form (28) yields a quadratically convergent matrix method for finding the famous Drazin inverse.

As a consequence, we can present the iterative method of the form (13), with ninth order of convergence, for finding the Drazin inverse, where the initial approximation is chosen as
(29) V_0 = W_0 = (2/tr(A^{k+1}))A^k,
wherein tr(·) stands for the trace of an arbitrary square matrix.
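Combining the initial guess (29) with iteration (13) gives a complete Drazin-inverse solver. The Python/NumPy sketch below applies it to a small singular matrix of our own choosing (eigenvalues 1 and 2 plus a 2×2 nilpotent block, so ind(A) = 2) and is checked against the defining conditions (27).

```python
import numpy as np

def step13(A, V):
    """One step of iteration (13)."""
    I = np.eye(A.shape[0])
    psi = A @ V
    chi = -7 * I + psi @ (9 * I + psi @ (-5 * I + psi))
    theta = psi @ chi
    return -0.125 * V @ chi @ (12 * I + theta @ (6 * I + theta))

# Illustrative singular matrix with ind(A) = 2 (not from the paper's tests).
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
k = 2
Ak = np.linalg.matrix_power(A, k)
# Initial guess (29): W0 = (2 / tr(A^{k+1})) A^k.
W = (2.0 / np.trace(np.linalg.matrix_power(A, k + 1))) * Ak
for _ in range(5):
    W = step13(A, W)
```

Since every iterate is of the form A^k times a polynomial in A, the nilpotent part stays annihilated and W converges to A^D = diag(1, 1/2, 0, 0) on this example.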

In what follows, we use the following auxiliary results.

Proposition 4 (see [17]).

Let M ∈ ℂ^{N×N} and ε > 0 be given. There is at least one matrix norm ‖·‖ such that
(30) ρ(M) ≤ ‖M‖ ≤ ρ(M) + ε,
where ρ(M) denotes the spectral radius of M.

Proposition 5 (see [18]).

If P_{L,M} denotes the projector on a space L along a space M, then

(i) P_{L,M}Q = Q if and only if ℛ(Q) ⊆ L;

(ii) QP_{L,M} = Q if and only if 𝒩(Q) ⊇ M.

Theorem 6.

Let A ∈ ℂ^{N×N} be a singular square matrix, and suppose that the initial approximation W_0 is chosen by means of (28). Then the sequence {W_n}_{n=0}^{∞} defined by the iterative method (13) satisfies the following error estimate when finding the Drazin inverse:
(31) ‖A^D − W_n‖ ≤ ‖A^D‖ ‖I − AW_0‖^{9^n}.

Proof.

Consider the notation F_0 = I − AW_0 and, subsequently, the residual matrix F_n = I − AW_n. Then, similarly as in (15), we get
(32) F_{n+1} = I − AW_{n+1} = (1/8)(I + (I − AW_n))^3(I − AW_n)^9 = (1/8)(F_n^9 + 3F_n^{10} + 3F_n^{11} + F_n^{12}).
By taking an arbitrary matrix norm on both sides of (32), we attain
(33) ‖F_{n+1}‖ ≤ (1/8)(‖F_n‖^9 + 3‖F_n‖^{10} + 3‖F_n‖^{11} + ‖F_n‖^{12}).
In addition, since ‖F_0‖ < 1, relation (33) yields ‖F_1‖ ≤ (1/8)(‖F_0‖^9 + 3‖F_0‖^{10} + 3‖F_0‖^{11} + ‖F_0‖^{12}) ≤ ‖F_0‖^9. Similarly, ‖F_{n+1}‖ ≤ (1/8)(‖F_n‖^9 + 3‖F_n‖^{10} + 3‖F_n‖^{11} + ‖F_n‖^{12}) ≤ ‖F_n‖^9. Using mathematical induction, we obtain ‖F_{n+1}‖ ≤ ‖F_n‖^9 for each n ≥ 0. This implies that
(34) ‖F_n‖ ≤ ‖F_0‖^{9^n}, n ≥ 0.
Finally, every iterate is of the form W_n = A^l q_n(A) for some polynomial q_n, whence A^DAW_n = W_n (because A^DA^{l+1} = A^l for l ≥ k); therefore A^D − W_n = A^D(I − AW_n), and
‖A^D − W_n‖ ≤ ‖A^D‖‖F_n‖ ≤ ‖A^D‖‖F_0‖^{9^n},
which is the estimate (31).

The following result is a consequence of Theorem 6.

Corollary 7.

If the conditions of Theorem 6 are satisfied and the initial iteration W_0 is chosen such that
(42) ‖F_0‖ = ‖I − AW_0‖ < 1,
then the iterative method (13) converges to A^D.

Therefore, our goal is to find initial approximations W_0 satisfying (42). In accordance with Proposition 4, W_0 must satisfy the following inequality to ensure convergence in the Drazin inverse case:
(43) max_{1≤i≤t} |1 − λ_i(AW_0)| < 1,
where rank(AW_0) = t and λ_i(AW_0), i = 1, …, t, are the eigenvalues of AW_0.

Theorem 8.

Let A ∈ ℂ^{N×N} be a singular square matrix with ind(A) = k, and let the sequence F_n be defined as in Theorem 6. The sequence {W_n}_{n=0}^{∞} defined by the iterative method (13) converges to A^D with ninth order if the initial approximation W_0 is in accordance with (42).

Proof.

Since (42) is satisfied, the error estimate (31) yields ‖A^D − W_n‖ → 0. Furthermore, the bound ‖F_{n+1}‖ ≤ ‖F_n‖^9 established in Theorem 6 immediately leads to the conclusion that W_n → A^D as n → ∞ with ninth order of convergence.

5. Numerical Aspects

Using the programming package Mathematica 8 [19], we now apply our iterative method to some practical numerical tests and compare it with the existing methods in order to manifest its applicability and the consistency of the numerical results with the theoretical aspects illustrated in Sections 2-4.

For the numerical comparisons in this section, the methods (1), (7), (8), (9), (10), and (13) are denoted by "Schulz", "Li et al. I", "Li et al. II", "KMS4", "KMS9", and the "Proposed method", respectively. We have carried out the numerical tests with machine precision on a Pentium 4 computer (Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU, 3.20 GHz, 4 GB of RAM). In all computations, the running time in seconds was measured using AbsoluteTiming[].

In sparse-matrix algebra, iteration methods such as (1), (7), (8), (9), (10), and (13) should be coded in sparse form, using well-known commands such as SparseArray[], to reduce the computational burden and preserve the sparsity of the approximate inverse per computing step. This is done herein, along with a threshold, by applying Chop[] to each approximate inverse V_i.

Now, we apply the above inverse finders for finding the approximate inverses of the following large sparse matrices.

Test Problem 1.

In this test, 10 large sparse random complex matrices of dimension 5000 are considered, as generated in Algorithm 1.

Algorithm 1

n = 5000; number = 10; SeedRandom[123];

Table[A[j] = SparseArray[{Band[{-100, 1100}] -> RandomReal[],

   Band[{1, 1}] -> 2., Band[{1000, -50}, {n - 20, n - 25}] -> {2.8, RandomReal[] + I},

   Band[{600, 150}, {n - 100, n - 400}] -> {-RandomReal[], 3. + 3 I}}, {n, n},

   0.], {j, number}];

Note that I = √(−1). In this test, the stopping criterion is ‖V_{n+1} − V_n‖_1 ≤ 10^{-6}, and the maximum number of iterations allowed is set to 75. Note that in this test the initial choice has been constructed using the different available forms discussed in Table 1.

The results of the comparisons for this test are presented in Figures 1, 2, 3, and 4. In Figure 1, we show the number of iterations required by the different methods to attain the desired accuracy when V_0 = A/‖A‖_1^2. Since it is obvious that higher order methods require a lower number of iterations to converge, we then focus on the computational time needed to satisfy the desired tolerance using three different initial approximations: V_0 = A/‖A‖_1^2, V_0 = A/‖A‖_∞^2, and V_0 = A/‖A‖_F^2. As can be observed for all three forms of the initial guess and in all 10 test problems (given in Figures 2-4), our iterative method (13) beats the other existing schemes, which is in line with the theoretical results of Table 2.

Comparison of the number of evaluations for solving Test Problem 1 using V_0 = A/‖A‖_1^2.

Comparison of the elapsed time for solving Test Problem 1 using V_0 = A/‖A‖_1^2.

Comparison of the elapsed time for solving Test Problem 1 using V_0 = A/‖A‖_∞^2.

Comparison of the elapsed time for solving Test Problem 1 using V_0 = A/‖A‖_F^2.

The attained results reverify the robustness of the proposed iterative method (13), with a clear reduction in the number of iterations and the elapsed time.

The practical application of the new scheme (13) falls within many problems, as discussed in Section 1. For instance, in solving second-kind integral equations by a wavelet-like approach, the problem reduces to inverting a large sparse matrix that possesses a sparse approximate inverse as well [20]. In the rest of this section, we apply our new iterative method as a robust technique to produce accurate preconditioners that accelerate modern iterative solvers, such as GMRES or BiCGSTAB, for solving large-scale sparse linear systems; see, for example, [21].

Test Problem 2.

Consider solving the following boundary value problem (BVP) using discretization; in such problems, in order to capture all the shocks and the behavior of the solution, a very fine grid of points is required in the finite difference (FD) approach:
(51) y″ = 3y − 2y′, t ∈ [a, b], y(a) = e^3, y(b) = e^{-3},
where the domain of the solution is given by a = 0 and b = 2. To solve (51) using 3-point FD discretization, we assume that at the grid points t_i the exact solution is denoted by y(t_i) and the approximate solution is defined by w_i, for i = 0, 1, …, n, n + 1.

Thus, (51) can be written in discretized form in the Mathematica language (see Algorithm 2), wherein n is the number of interior grid points, which results in an n×n sparse linear system of equations. In this test, we consider the tolerance 10^{-8} for the final solution of the linear systems. The results of solving this system "without preconditioning" and "with left preconditioning" are given in Tables 3 and 4. Note that in Tables 3 and 4, for example, PBiCGSTAB-(8)-II stands for the left preconditioned linear system (VAx = Vb) with the preconditioner (approximate inverse) obtained by the scheme (8) after 2 iterations, solved by the iterative solver BiCGSTAB.

Comparison of the computational time in solving the linear system resulting from the discretization of (51) when n = 1500.

Methods | GMRES | PGMRES-(1)-VI | PGMRES-(7)-IV | PGMRES-(8)-III | PGMRES-(9)-III | PGMRES-(13)-II
Total time | Fail | 3.50 | 1.78 | 1.53 | 2.62 | 1.31

Comparison of the computational time in solving the linear system resulting from the discretization of (51) when n = 2000.

Methods | BiCGSTAB | PBiCGSTAB-(1)-III | PBiCGSTAB-(7)-III | PBiCGSTAB-(8)-II | PBiCGSTAB-(9)-II | PBiCGSTAB-(13)-I
Total time | 1.98 | 0.53 | 0.60 | 0.53 | 0.54 | 0.46

Algorithm 2

h = (2. - 0)/(n + 1); Subscript[t, 0] = 0; Subscript[w, 0] = E^3; Subscript[w, n + 1] = Exp[-3];

Table[N[(Subscript[w, i - 1] - 2 Subscript[w, i] + Subscript[w, i + 1])/h^2 -

   3 Subscript[w, i] +

   2 (Subscript[w, i + 1] - Subscript[w, i - 1])/(2 h) == 0], {i, 1, n}];
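The same tridiagonal system can be assembled and checked in Python/NumPy (our choice for a runnable illustration; the paper uses Mathematica). One can verify directly that y(t) = e^{3−3t} solves (51), so the discrete solution below can be compared against it; the grid size n = 200 is an illustrative assumption.

```python
import numpy as np

n = 200
a, b = 0.0, 2.0
h = (b - a) / (n + 1)
t = a + h * np.arange(1, n + 1)                # interior grid points

# 3-point FD coefficients for y'' + 2y' - 3y = 0 at each interior node:
# (w_{i-1} - 2 w_i + w_{i+1})/h^2 + 2 (w_{i+1} - w_{i-1})/(2h) - 3 w_i = 0.
main = np.full(n, -2.0 / h**2 - 3.0)
lower = np.full(n - 1, 1.0 / h**2 - 1.0 / h)   # multiplies w_{i-1}
upper = np.full(n - 1, 1.0 / h**2 + 1.0 / h)   # multiplies w_{i+1}
M = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)

rhs = np.zeros(n)
rhs[0] -= (1.0 / h**2 - 1.0 / h) * np.exp(3.0)     # boundary value y(a) = e^3
rhs[-1] -= (1.0 / h**2 + 1.0 / h) * np.exp(-3.0)   # boundary value y(b) = e^-3

w = np.linalg.solve(M, rhs)
exact = np.exp(3.0 - 3.0 * t)                  # exact solution of (51)
err = np.max(np.abs(w - exact))
```

The observed error is of order h^2, as expected for the 3-point central-difference stencil.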

The numerical results clearly support the efficiency of the method (13); a clear reduction in the elapsed time is observable. Even for the GMRES solver, which had failed (not convergent after 1500 iterations to the considered tolerance), a simple preconditioner obtained by the new method (13) fixed the problem. Note that in this test we have constructed V_0 for the compared methods using (6).

Test Problem 3.

The aim of this example is to apply the discussion of Section 4 to finding the Drazin inverse of the following square matrix (taken from [16]): (52)A=[20.40000000000-20.40000000000-1-11-10000-1000-1-1-1100000000000011-1-100-10000011-1-10000000-1-20.4000000000020.40000000-10000001-1-1-100000000-11-1-100000000000.4-200000000000.42], with k = ind(A) = 3. To simplify the process, we write a general code in the programming package Mathematica for the iterative process (13), which finds the (pseudo-)inverse or the Drazin inverse of an arbitrary matrix (see Algorithm 3).

Algorithm 3

PM[X_] :=

  With[{Id = SparseArray[{{i_, i_} -> 1.}, {n, n}]},

   X1 = A.X; X2 = -7 Id + X1.(9 Id + X1.(-5 Id + X1)); X3 = X1.X2;

   (-1/8) X.X2.(12 Id + X3.(6 Id + X3))

   ];

InitialMatrix[A_] := 1/SingularValueList[A, 1][[1]]^2 ConjugateTranspose[A];

InitialDrazin[A_] := 2/Tr[MatrixPower[A, k + 1]] MatrixPower[A, k];

DrazinInverse[A_, tolerance_] := If[k == 0,

  Module[{X0 = InitialMatrix[A]}, FixedPoint[(PM[#] &), X0,

    SameTest -> (Norm[#1 - #2, Infinity] <= tolerance &)]],

  Module[{X0 = InitialDrazin[A]}, FixedPoint[(PM[#] &), X0,

    SameTest -> (Norm[#1 - #2, Infinity] <= tolerance &)]]

  ];

The two-argument function DrazinInverse[A_,tolerance_] takes the arbitrary matrix A and the tolerance from the user to obtain its Drazin inverse by knowing the index k. In this case, by choosing tolerance=10-8, we obtain(53)AD=[0.25-0.250.0.0.0.0.0.0.0.0.0.1.251.250.0.0.0.0.0.0.0.0.0.-1.66406-0.9921870.25-0.250.0.0.0.-0.0625-0.06250.0.15625-1.19531-0.679687-0.250.250.0.0.0.-0.06250.18750.68751.34375-2.76367-1.04492-1.875-1.25-1.251.251.251.251.484382.578133.320316.64063-2.76367-1.04492-1.875-1.25-1.251.251.251.251.484382.578134.570318.5156314.10946.300786.6253.3755.-3.-5.-5.-4.1875-8.5-10.5078-22.4609-19.3242-8.50781-9.75-5.25-7.54.57.57.56.37512.562515.976633.7891-0.625-0.31250.0.0.0.0.0.0.25-0.25-0.875-1.625-1.25-0.93750.0.0.0.0.0.-0.250.25-0.875-1.6250.0.0.0.0.0.0.0.0.0.1.251.250.0.0.0.0.0.0.0.0.0.-0.250.25].

6. Conclusion

In many engineering applications, extracting the diagonal or the whole entries of the inverse of a given matrix (basically large and sparse) is an important part of the computation, for example, in electronic structure calculation and especially for models based on effective one-electron Hamiltonians such as the tight-binding models, or in modern statistics, and thus, developing high-order efficient Schulz-type methods is of practical interest.

In this paper, we have studied the high-order iterative method (13) for matrix inversion. A convergence analysis of the iterative algorithm has been given, together with a discussion of the choice of the initial value needed to start the process and preserve the convergence order. We have also discussed under what conditions the new method can be applied for finding the Drazin inverse of square matrices having real or complex spectra.

As a result, the total computational time of the suggested iteration (13) is remarkably low in contrast with the existing methods of this type, both in constructing approximate inverses and in preconditioning.

Working on the extension of the proposed method (13) to interval matrix inversion [22] can be considered as future work in this field of study.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The second author gratefully acknowledges support from the Research Project 174013 of the Serbian Ministry of Science.

References

[1] I. Kyrchei, "Explicit formulas for determinantal representations of the Drazin inverse solutions of some matrix and differential matrix equations," Applied Mathematics and Computation, vol. 219, pp. 7632-7644, 2013.
[2] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, John Wiley & Sons, New York, NY, USA, 2nd edition, 1992.
[3] R. C. Dorf, The Electrical Engineering Handbook, CRC Press, Ann Arbor, Mich, USA, 1993.
[4] G. Schulz, "Iterative Berechnung der reziproken Matrix," Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, pp. 57-59, 1933.
[5] J. Rajagopalan, An iterative algorithm for inversion of matrices [M.S. thesis], Concordia University, 1996.
[6] V. Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Birkhäuser/Springer, New York, NY, USA, 2001.
[7] V. Y. Pan, M. Kunin, R. E. Rosholt, and H. Kodai, "Homotopic residual correction processes," Mathematics of Computation, vol. 74, no. 253, pp. 345-368, 2006.
[8] L. Grosz, "Preconditioning by incomplete block elimination," Numerical Linear Algebra with Applications, vol. 7, no. 7-8, pp. 527-541, 2000.
[9] T. Söderström and G. W. Stewart, "On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse," SIAM Journal on Numerical Analysis, vol. 11, no. 1, pp. 61-74, 1974.
[10] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," Applied Mathematics and Computation, vol. 218, no. 2, pp. 260-270, 2011.
[11] E. V. Krishnamurthy and S. K. Sen, Numerical Algorithms: Computations in Science and Engineering, Affiliated East-West Press, New Delhi, India, 1986.
[12] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, Springer, New York, NY, USA, 2nd edition, 2003.
[13] M. P. Drazin, "Pseudoinverses in associative rings and semigroups," American Mathematical Monthly, vol. 65, pp. 506-514, 1958.
[14] J. H. Wilkinson, "Note on the practical significance of the Drazin inverse," in S. L. Campbell (Ed.), Recent Applications of Generalized Inverses, Research Notes in Mathematics, pp. 82-99, Pitman Advanced Publishing Program, Boston, Mass, USA, 1982.
[15] Y. Wei, "Index splitting for the Drazin inverse and the singular linear system," Applied Mathematics and Computation, vol. 95, no. 2-3, pp. 115-124, 1998.
[16] X. Li and Y. Wei, "Iterative methods for the Drazin inverse of a matrix with a complex spectrum," Applied Mathematics and Computation, vol. 147, no. 3, pp. 855-862, 2004.
[17] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, New York, NY, USA, 1986.
[18] G. Wang, Y. Wei, and S. Qiao, Generalized Inverses: Theory and Computations, Science Press, New York, NY, USA, 2004.
[19] S. Wagon, Mathematica in Action, Springer, Berlin, Germany, 3rd edition, 2010.
[20] B. Alpert, G. Beylkin, R. Coifman, and V. Rokhlin, "Wavelet-like bases for the fast solution of second-kind integral equations," SIAM Journal on Scientific Computing, vol. 14, pp. 159-184, 1993.
[21] Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, 2nd edition, 2003.
[22] J. Rohn and R. Farhadsefat, "Inverse interval matrix: a survey," Electronic Journal of Linear Algebra, vol. 22, pp. 704-719, 2011.