This paper presents a computational iterative method for finding approximate inverses of matrices. Analysis of convergence reveals that the method reaches ninth-order convergence. An extension of the proposed iterative method for computing the Moore-Penrose inverse is furnished. Numerical results, including comparisons with existing methods of the same type from the literature, are also presented to demonstrate the superiority of the new algorithm in finding approximate inverses.
1. Introduction
The main purpose of this paper is to present a method that is efficient in terms of speed of convergence, whose convergence is easily attained, and which is economical for large sparse matrices possessing sparse inverses. We also discuss the extension of the new scheme to finding the Moore-Penrose inverse of singular and rectangular matrices.
Finding a matrix inverse is important in practical applications, such as computing a rational approximation to the Fermi-Dirac function in density functional theory [1], because the inverse conveys significant features of the problem being dealt with.
We further mention that in certain circumstances the computation of a matrix inverse is necessary. For example, there are many ways to encrypt a message, and the use of coding has become particularly significant in recent years. One way to encrypt or code a message is to use matrices and their inverses. Indeed, consider a fixed invertible matrix A. Convert the message into a matrix B such that the product AB is defined, and send the message generated by AB. At the other end, the receiver needs to know A-1 in order to decrypt or decode the message sent.
Direct methods such as Gaussian elimination with partial pivoting (GEPP) or LU decomposition require considerable time to compute the inverse when the matrices are large. Moreover, GEPP cannot be highly parallelized, which restricts its applicability in some cases. In contrast, Schulz-type methods, which can be applied to large sparse matrices (possessing sparse inverses [2]) while preserving sparsity and which can be parallelized, are the focus in such cases.
Several methods have been proposed for approximating a matrix inverse. The oldest, the Schulz method (see [3]), is defined as
(1)Vn+1=Vn(2I-AVn),n=0,1,2,…,
wherein I is the identity matrix. Note that [4] mentions that Newton-Schulz iterations can also be combined with wavelets or hierarchical matrices to compute the diagonal elements of A-1 independently.
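For readers who want to experiment, a minimal dense sketch of the Schulz iteration (1) in Python/NumPy follows; the test matrix, initial guess, and iteration count are our own illustrative choices, not from the paper.

```python
import numpy as np

# Schulz iteration (1): V_{n+1} = V_n (2I - A V_n), quadratically convergent
# whenever ||I - A V_0|| < 1. The 3x3 test matrix below is hypothetical.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 1.0],
              [0.0, 1.0, 6.0]])
I = np.eye(3)

# Initial guess as in (5): V0 = A^T / (||A||_1 ||A||_inf)
V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

for _ in range(20):            # each step roughly doubles the correct digits
    V = V @ (2 * I - A @ V)

print(np.linalg.norm(I - A @ V, 1))   # residual near machine precision
```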
Schulz-type iterations rely on matrix-matrix multiplications, which tend to destroy sparsity, and they are therefore less efficient for sparse inputs possessing dense inverses. However, a numerical dropping strategy is usually applied to Vn+1 to keep the approximate inverse sparse. Such a strategy is useful for preconditioning.
To provide more iterations from this class of methods, we recall that W. Li and Z. Li in [5] proposed
(2) Vn+1 = Vn(3I - 3AVn + (AVn)^2),   n = 0, 1, 2, ….
In 2011, Li et al. [6] rewrote (2) in the following form:
(3)Vn+1=Vn(3I-AVn(3I-AVn)),n=0,1,2,…
and also proposed another iterative method for finding A-1, as follows:
(4) Vn+1 = [I + (1/4)(I - VnA)(3I - VnA)^2]Vn,   n = 0, 1, 2, ….
Much of the application of these solvers (especially the Schulz method) to structured matrices was investigated in [7], while the authors in [8] showed that the Schulz matrix iteration is numerically stable. Further discussion of such iterative schemes can be found in [9–12].
Schulz-type matrix iterations depend strongly on the initial matrix V0. In general, we construct the initial guess for square matrices as follows. For a general m×m matrix A with no particular structure, we choose, as in [13],
(5) V0 = AT/(∥A∥1∥A∥∞),
or V0 = αI, where I is the identity matrix and α∈ℝ should be determined adaptively such that ∥I - αA∥ < 1. For diagonally dominant (DD) matrices, we choose, as in [14], V0 = diag(1/a11, 1/a22, …, 1/amm), wherein aii is the ith diagonal entry of A. For a symmetric positive definite matrix, following [15], we choose V0 = (1/∥A∥F)I, where ∥·∥F is the Frobenius norm.
For rectangular or singular matrices, one may choose V0 = A*/(∥A∥1∥A∥∞) or V0 = AT/(∥A∥1∥A∥∞), based on [16]. One could also choose V0 := αAT, where 0 < α < 2/ρ(ATA), or
(6)V0=αA*,
with 0<α<2/ρ(A*A) [17]. Note that these choices could also be considered for square matrices. However, the most efficient way for producing V0 (for square nonsingular matrices) is the hybrid approach presented in Algorithm 1 of [18].
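The initial-guess recipes above can be sketched as follows. The matrices A and B are hypothetical test data, and we take α = 1/ρ(B*B), one admissible value for (6).

```python
import numpy as np

A = np.array([[5.0, 1.0],
              [2.0, 7.0]])                       # diagonally dominant test matrix

# General square matrix, as in (5):
V0_general = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))

# Diagonally dominant matrix: V0 = diag(1/a11, ..., 1/amm)
V0_dd = np.diag(1.0 / np.diag(A))

# Symmetric positive definite matrix: V0 = (1/||A||_F) I
V0_spd = np.eye(2) / np.linalg.norm(A, 'fro')

# Rectangular matrix, as in (6): V0 = alpha A^*, with 0 < alpha < 2/rho(A^* A)
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
alpha = 1.0 / np.linalg.norm(B, 2) ** 2          # rho(B^* B) = ||B||_2^2
V0_rect = alpha * B.conj().T

# The DD choice places this A inside the convergence region:
print(np.linalg.norm(np.eye(2) - A @ V0_dd, 2))  # < 1
```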
The rest of this paper is organized as follows. The main contributions of this article are given in Sections 2 and 3. In Section 2, we analyze a new scheme for matrix inversion while a discussion about the applicability of the scheme for Moore-Penrose inverse will be given in Section 3. Section 4 will discuss the performance and the efficiency of the new proposed method numerically in comparison with the other schemes. Finally, concluding remarks are presented in Section 5.
2. Main Result
The derivation of different Schulz-type methods for matrix inversion relies on iterative (one- or multipoint) methods for the solution of nonlinear equations [19, 20]. For instance, imposing Newton's iteration on the matrix equation AV = I results in (1), as fully discussed in [21]. Here, we apply the following new nonlinear solver to the matrix equation f(V) = V^-1 - A = 0:
(7) Yn = Vn - f′(Vn)^-1 f(Vn) - (1/2)(f′(Vn)^-1 f′′(Vn))(f′(Vn)^-1 f(Vn))^2,
Zn = Yn - (1/2) f′(Yn)^-1 f(Yn),
Vn+1 = Yn - f′(Zn)^-1 f(Yn),   n = 0, 1, 2, ….
Note that (7) is novel and has been constructed so as to produce a new matrix iterative method for finding generalized inverses efficiently. It yields the following iteration for matrix inversion:
(8) Vn+1 = (1/4)Vn(3I + AVn(-3I + AVn))(4I - (-I + AVn)^3 (-3I + AVn(3I + AVn(-3I + AVn)))^2),   n = 0, 1, 2, ….
The obtained scheme involves explicit matrix powers, which are computationally expensive. To remedy this, we rewrite the iteration so as to minimize the number of matrix-matrix multiplications:
(9) ψn = AVn,
ζn = 3I + ψn(-3I + ψn),
υn = ψnζn,
Vn+1 = -(1/4)Vnζn(-13I + υn(15I + υn(-7I + υn))),   n = 0, 1, 2, …,
where I is the identity matrix and the sequence of matrix iterates {Vn} converges to A-1 for a suitable initial guess.
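As an illustration, here is a minimal dense NumPy sketch of one possible implementation of (9); the function name, test matrix, and tolerance are our own choices, not from the paper (whose implementation uses Mathematica with sparse arrays). Exactly 7 matrix-matrix products are performed per step.

```python
import numpy as np

# A sketch of the proposed iteration (9), assuming a dense well-conditioned
# test matrix; the products per step are annotated.
def inverse9(A, V, tol=1e-12, maxit=50):
    I = np.eye(A.shape[0])
    for _ in range(maxit):
        psi = A @ V                                    # product 1
        zeta = 3 * I + psi @ (-3 * I + psi)            # product 2
        ups = psi @ zeta                               # product 3
        inner = -13 * I + ups @ (15 * I + ups @ (-7 * I + ups))  # products 4, 5
        V = -0.25 * V @ zeta @ inner                   # products 6, 7
        if np.linalg.norm(I - A @ V, 1) <= tol:
            break
    return V

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])
V0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # as in (5)
V = inverse9(A, V0)
print(np.linalg.norm(np.eye(2) - A @ V, 1))
```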
It should be remarked that iterations of any convergence order for nonsingular square matrices can be constructed as in Section 6 of Chapter 2 of [22], while the general construction for rectangular matrices was discussed in Chapter 5 of [23] and in the recent paper [24]. In those constructions, a convergence order ρ is always attained using ρ matrix-matrix products; for example, (1) reaches order 2 using two matrix-matrix multiplications.
Two points should be made here to explain why a higher-order (efficient) method such as (9), which uses 7 matrix-matrix products to reach at least ninth-order convergence, is practical. First, following the comparative index of informational efficiency for inverse finders [25], defined by E = ρ/θ, where ρ and θ stand for the convergence order and the number of matrix-matrix products, the index of (9), namely 9/7 ≈ 1.29, beats those of its competitors: 2/2 = 1 for (1), 3/3 = 1 for (2)-(3), and 3/4 = 0.75 for (4). Second, the significance of the new scheme shows in its implementation. Such iterations depend strongly on the initial matrix. Although there are reliable ways of finding V0, in general such initial approximations require a high number of iterations (see, e.g., Figure 3, the blue curve) to arrive at the convergence phase. Moreover, each cycle of a Schulz-type method includes one stopping criterion based on a matrix norm (usually ∥·∥2 for dense complex matrices and ∥·∥F for large sparse matrices), whose computation takes time; the larger iteration counts of low-order methods therefore cost more than the smaller iteration counts of high-order methods such as (9). Hence, the higher-order (efficient) iterations are usually the better solvers in terms of the computational time needed to achieve the desired accuracy.
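The informational efficiency index E = ρ/θ is simple arithmetic; a quick sketch (the method labels are ours):

```python
# Informational efficiency E = rho/theta for each scheme:
# (order of convergence) / (matrix-matrix products per iteration).
methods = {"(1)": (2, 2), "(2)-(3)": (3, 3), "(4)": (3, 4), "(9)": (9, 7)}
for name, (rho, theta) in methods.items():
    print(f"{name}: E = {rho / theta:.2f}")
```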
After this complete discussion of the need and efficacy of the new scheme (9), we are about to prove the convergence behavior of (9) theoretically in what follows.
Theorem 1.
Let A=[aij]m×m be a nonsingular complex or real matrix. If the initial approximation V0 satisfies
(10)∥I-AV0∥<1,
then the iterative method (9) converges with at least ninth order to A-1.
Proof.
Let (10) hold, and for the sake of simplicity assume that E0=I-AV0 and En=I-AVn. It is straightforward to have
(11) En+1 = I - AVn+1 = (1/4)(3En^9 + En^10).
Hence, by taking an arbitrary norm from both sides of (11), we obtain
(12) ∥En+1∥ ≤ (1/4)(3∥En∥^9 + ∥En∥^10).
The rest of the proof is similar to that of Theorem 2.1 of [26] and is hence omitted: the sequence {∥En∥} is strictly monotonically decreasing, so I - AVn → 0 and thus Vn → A-1 as n → ∞. Moreover, if we denote by ϵn = Vn - A-1 the error matrix of the iterative procedure (9), then we can easily obtain the following error inequality:
(13) ∥ϵn+1∥ ≤ ((1/4)(3∥A∥^8 + ∥A∥^9∥ϵn∥))∥ϵn∥^9.
The error inequality (13) reveals that the iteration (9) converges with at least ninth order to A-1. The proof is now complete.
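The algebraic identity behind (11) can also be checked numerically: one step of (9) applied to any Vn reproduces En+1 = (1/4)(3En^9 + En^10) up to roundoff. A small sketch, with a hypothetical test matrix and a deliberately perturbed Vn:

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 5.0, 1.0],
              [0.0, 0.0, 1.0, 6.0]])
I = np.eye(4)
V = 0.9 * np.linalg.inv(A)        # perturbed V_n, so that E_n = 0.1 I

# One step of (9):
psi = A @ V
zeta = 3 * I + psi @ (-3 * I + psi)
ups = psi @ zeta
V1 = -0.25 * V @ zeta @ (-13 * I + ups @ (15 * I + ups @ (-7 * I + ups)))

# Compare the actual new residual with the prediction of (11):
E = I - A @ V
predicted = (3 * np.linalg.matrix_power(E, 9) + np.linalg.matrix_power(E, 10)) / 4
print(np.linalg.norm((I - A @ V1) - predicted))   # agreement up to roundoff
```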
Some features of the new scheme (9) are as follows.
The new method can be used for finding approximate inverses (preconditioners) of complex matrices. Using a dropping strategy, we can keep the iterates sparse so that they may serve as preconditioners. This strategy is applied throughout this paper for sparse matrices using the command Chop[exp, tolerance] in Mathematica, to preserve the sparsity of the approximate inverses Vi for every i.
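A stand-in for Mathematica's Chop in Python (the helper name and tolerance are our own); it zeroes out entries below a threshold so that the iterates stay sparse:

```python
import numpy as np

def chop(M, tol=1e-10):
    """Zero out entries smaller than tol in absolute value (cf. Chop[exp, tol])."""
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

M = np.array([[1.0, 3e-12],
              [-2e-11, 0.5]])
print(chop(M))    # tiny entries are dropped, significant ones survive
```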
Unlike traditional direct solvers such as GEPP, the new scheme can be parallelized.
In order to further reduce the computational burden of the matrix-by-matrix multiplications per step when dealing with sparse matrices, we use the SparseArray[] command in Mathematica to preserve the sparsity of the output approximate inverse in a reasonable computational time. Additionally, for some special matrices, such as Toeplitz or Vandermonde matrices, further computational savings are possible, as discussed by Pan in [27]; see [28] for more information.
If the initial guess commutes with the original matrix, then all matrices in the sequence {Vn} commute with A as well. Notice that only some initial matrices guarantee this commutation in all cases.
In the next section, we present some analytical discussion about the fact that the new method (9) could also be used for computing the Moore-Penrose generalized inverse.
3. An Iterative Method for Moore-Penrose Inverse
It is well known in the literature that the Moore-Penrose inverse of a complex matrix A∈ℂm×k (also called pseudo-inverse), denoted by A†∈ℂk×m, is a matrix V∈ℂk×m satisfying the following conditions:
(14)AVA=A,VAV=V,(AV)*=AV,(VA)*=VA,
wherein A* is the conjugate transpose of A. This inverse exists and is unique.
Various numerical methods have been developed for approximating the Moore-Penrose inverse (see, e.g., [29]). The most comprehensive review of such iterations can be found in [17], where it is noted that Schulz-type iterative methods can be used for finding the pseudoinverse. For example, it is known that (1) converges to the pseudoinverse in the general case if V0 := αA*, where 0 < α < 2/ρ(A*A) and ρ(·) denotes the spectral radius.
Accordingly, the new scheme (9) also converges to the Moore-Penrose inverse. In order to validate this fact analytically, we provide the following theoretical results.
Lemma 2.
For the sequence {Vn} generated by the iterative Schulz-type method (9) with the initial matrix (6), it holds for any n ≥ 0 that
(15)(AVn)*=AVn,(VnA)*=VnA,VnAA†=Vn,A†AVn=Vn.
Proof.
The proof of this lemma is based on mathematical induction. Such a procedure is similar to Lemma 3.1 of [30], and it is hence omitted.
Theorem 3.
For the rectangular complex matrix A∈ℂm×k and the sequence {Vn} generated by (9) with the initial approximation (6), the sequence converges to the pseudoinverse A† with ninth order of convergence.
Proof.
We consider 𝔼n = Vn - A† as the error matrix for finding the Moore-Penrose inverse. The proof is similar to that of Theorem 4 in [18] and is hence omitted; we finally have
(16) ∥𝔼n+1∥ ≤ ∥A†∥∥A∥^9∥𝔼n∥^9.
Thus, ∥Vn-A†∥→0; that is, the obtained sequence of (9) converges to the Moore-Penrose inverse as n→+∞. This ends the proof.
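A small sketch of Theorem 3 in practice: starting (9) from V0 = αA* as in (6) on a hypothetical 2×3 matrix drives the iterates to A†, compared here against NumPy's SVD-based pinv.

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])                 # hypothetical 2x3 test matrix
alpha = 1.0 / np.linalg.norm(A, 2) ** 2         # satisfies 0 < alpha < 2/rho(A^* A)
V = alpha * A.conj().T                          # V0 as in (6), of size 3x2
I = np.eye(2)

for _ in range(10):                             # iteration (9), as before
    psi = A @ V
    zeta = 3 * I + psi @ (-3 * I + psi)
    ups = psi @ zeta
    V = -0.25 * V @ zeta @ (-13 * I + ups @ (15 * I + ups @ (-7 * I + ups)))

print(np.linalg.norm(V - np.linalg.pinv(A)))    # close to machine precision
```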
Theorem 4.
Under the same assumptions as in Theorem 3, the iterative method (9) is asymptotically stable for finding the Moore-Penrose generalized inverse.
Proof.
The steps of proving the asymptotical stability of (9) are similar to those which have recently been taken for a general family of methods in [31]. Hence, the proof is omitted.
Remark 5.
It should be remarked that the generalization of our proposed scheme to generalized outer inverses, that is, AT,S(2), is straightforward according to the recent work [32].
Remark 6.
The new iteration (9) is free of matrix powers in its implementation, which allows one to apply it easily for finding generalized inverses.
4. Computational Experiments
Using the computer algebra system Mathematica 8 [33], we now compare our iterative algorithm with other Schulz-type methods.
For numerical comparisons in this section, we have used the methods (1), (3), (4), and (9). We implement the above algorithms on the following examples using the built-in precision in Mathematica 8.
Example 7.
Let us consider a large complex sparse matrix of size 30000×30000 with 79512 nonzero elements possessing a sparse inverse, generated as in Algorithm 1 (I = √-1); the matrix plot of A showing its structure is drawn in Figure 1(a).
The matrix plot (a) and the inverse (b) of the large complex sparse 30000×30000 matrix A in Example 7.
Methods like (9) are powerful in finding an approximate inverse, or robust approximate inverse preconditioners, in a low number of steps, with the output approximate inverse also in sparse form.
In this test problem, the stopping criterion is ∥I - VnA∥1 ≤ 10^-7. Table 1 reports the number of iterations for the different methods. We measured the elapsed CPU time (in seconds) for this experiment using the command AbsoluteTiming[]. Note that the initial guess in Example 7 has been constructed using V0 = diag(1/a11, 1/a22, …, 1/amm). The plot of the approximate inverse obtained by applying (9) is portrayed in Figure 1(b). The reason (4) is the worst method here is that it computes matrix powers, which is time-consuming for sparse matrices. In the tables of this paper, "No. of nonzero ele." stands for the number of nonzero elements.
Results of comparisons for Example 7.
Methods               (1)            (3)            (4)            (9)
Convergence rate      2              3              3              9
Iteration number      3              2              2              1
Residual norm         8.32717×10^-7  1.21303×10^-7  5.10014×10^-8  9.7105×10^-8
No. of nonzero ele.   591107         720849         800689         762847
Time (in seconds)     4.22           4.05           9.32           4.18
In Algorithm 2, we provide the Mathematica 8 code of the new scheme (9) for this example, to clearly reveal the simplicity of the new scheme in finding approximate inverses using a threshold Chop[exp, 10^-10].
Algorithm 2:
Id = SparseArray[{{i_, i_} -> 1.}, {n, n}];
V = DiagonalMatrix@SparseArray[1/Normal[Diagonal[A]]];
Do[V1 = SparseArray[V];
  V2 = Chop[A.V1]; V3 = 3 Id + V2.(-3 Id + V2);
  V4 = SparseArray[V2.V3];
  V = Chop[-(1/4) V1.V3.SparseArray[-13 Id
        + V4.(15 Id + V4.(-7 Id + V4))]];
  Print[V]; L[i] = N[Norm[Id - V.A, 1]];
  Print["The residual norm is: ",
    Column[{i}, Frame -> All, FrameStyle -> Directive[Blue]],
    Column[{L[i]}, Frame -> All, FrameStyle -> Directive[Blue]]];
  , {i, 1}]; // AbsoluteTiming
Example 8.
Let us consider a large sparse 10000×10000 matrix (with 18601 nonzero elements) as shown in Algorithm 3.
Table 2 reports the number of iterations for the different methods, revealing the efficiency of the proposed iteration with special attention to the elapsed time. The matrix plot of A, showing its structure, and that of its approximate inverse are drawn in Figure 2. Note that in Example 8, we have used an initial value constructed by V0 = A*/(∥A∥1∥A∥∞). The stopping criterion is the same as in the previous example.
Results of comparisons for Example 8.
Methods               (1)             (3)            (4)             (9)
Convergence rate      2               3              3               9
Iteration number      10              7              6               3
Residual norm         1.29011×10^-10  1.63299×10^-9  1.18619×10^-11  7.68192×10^-10
No. of nonzero ele.   41635           42340          41635           41635
Time (in seconds)     2.23            2.15           2.26            1.95
The matrix plot (a) and the inverse (b) of the large sparse 10000×10000 matrix A in Example 8.
The results of comparisons for Example 10 in terms of the number of iterations.
The new method (9) is the best in terms of both the number of iterations and the computational time. In this paper, the computer specifications are: Microsoft Windows XP, Intel(R) Pentium(R) 4 CPU 3.20 GHz, and 4 GB of RAM.
The matrices used so far are large and sparse enough to clarify the applicability of the new scheme. For some classes of problems, however, there are collections of matrices that are commonly used when testing new iterative linear system solvers or new preconditioners, such as the Matrix Market, http://math.nist.gov/MatrixMarket/. In the following test we compare the computational time of solving a practical problem using a matrix from this source.
Example 9.
Consider the linear system Ax = b, in which A is defined by A = ExampleData["Matrix", "Bai/pde900"] and the right-hand side vector is b = Table[1., {i, 1, 900}]. In Table 3, we compare the computational time (in seconds) of solving this system by the GMRES solver and by its left-preconditioned counterpart V1Ax = V1b. In Table 3, for example, PGMRES-(9) denotes that the linear system V1Ax = V1b, in which V1 has been calculated by (9), is solved by GMRES. The tolerance in this test is a residual norm less than 10^-14. The results fully support the efficiency of the new scheme (9) in constructing robust preconditioners.
Results of elapsed computational time for solving Example 9.
Methods          GMRES   PGMRES-(1)   PGMRES-(3)   PGMRES-(4)   PGMRES-(9)
The total time   0.29    0.14         0.14         0.09         0.07
Example 10.
This experiment evaluates the applicability of the new method in finding the Moore-Penrose inverse of 20 random sparse matrices (possessing sparse pseudoinverses) of size m×k = 1200×1500, as shown in Algorithm 4.
Note that the identity matrix should then be an m×m matrix and the approximate pseudoinverses are of size k×m. In this example, the initial approximate Moore-Penrose inverse is constructed by V0 = ConjugateTranspose[A[j]]*(1./((SingularValueList[A[j], 1][[1]])^2)) for each random test matrix, and the stopping criterion is ∥Vn+1 - Vn∥1 ≤ 10^-8. We compare the methods (1), denoted by "Schulz", (3), denoted by "KMS", and the new iterative scheme (9), denoted by "PM".
The results for the number of iterations and the running times are compared in Figures 3 and 4; they show a clear advantage of the new scheme in finding the Moore-Penrose inverse.
The results of comparisons for Example 10 in terms of the elapsed time.
5. Conclusions
In this paper, we have developed an iterative method for finding matrix inverses, applicable to real or complex matrices, which preserves sparsity in the matrix-by-matrix multiplications by means of a threshold. This allows us to markedly reduce the computational time and memory usage of the new iterative method. We have shown that the suggested method (9) reaches ninth order of convergence. The extension of the scheme to the pseudoinverse has also been discussed. The applicability of the new scheme was illustrated numerically in Section 4.
Finally, according to the numerical results obtained and the concrete application of such solvers, we can conclude that the new method is efficient.
The new scheme (9) has been tested for matrix inversion. However, we believe that such iterations could also be applied for finding the inverse of an integer modulo p^n [34]. Following this idea would open a new topic of research combining numerical analysis and number theory, which we will pursue in future studies.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgment
The authors thank the referees for their valuable comments and for the suggestions to improve the readability of the paper.
References

[1] R. B. Sidje and Y. Saad, "Rational approximation to the Fermi-Dirac function with applications in density functional theory," vol. 56, no. 3, pp. 455–479, 2011. doi: 10.1007/s11075-010-9397-6.
[2] F. Soleymani and P. S. Stanimirović, "A higher order iterative method for computing the Drazin inverse," vol. 2013, Article ID 708647, 2013. doi: 10.1155/2013/708647.
[3] G. Schulz, "Iterative Berechnung der reziproken Matrix," vol. 13, pp. 57–59, 1933.
[4] L. Lin, J. Lu, R. Car, and E. Weinan, "Multipole representation of the Fermi operator with application to the electronic structure analysis of metallic systems," vol. 79, no. 11, Article ID 115133, 2009. doi: 10.1103/PhysRevB.79.115133.
[5] W. Li and Z. Li, "A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix," vol. 215, no. 9, pp. 3433–3442, 2010. doi: 10.1016/j.amc.2009.10.038.
[6] H.-B. Li, T.-Z. Huang, Y. Zhang, X.-P. Liu, and T.-X. Gu, "Chebyshev-type methods and preconditioning techniques," vol. 218, no. 2, pp. 260–270, 2011. doi: 10.1016/j.amc.2011.05.036.
[7] W. Hackbusch, B. N. Khoromskij, and E. E. Tyrtyshnikov, "Approximate iterations for structured matrices," vol. 109, no. 3, pp. 365–383, 2008. doi: 10.1007/s00211-008-0143-0.
[8] T. Söderström and G. W. Stewart, "On the numerical properties of an iterative method for computing the Moore-Penrose generalized inverse," vol. 11, pp. 61–74, 1974. doi: 10.1137/0711008.
[9] X. Liu and Y. Qin, "Successive matrix squaring algorithm for computing the generalized inverse AT,S(2)," vol. 2012, Article ID 262034, 2012. doi: 10.1155/2012/262034.
[10] X. Liu and S. Huang, "Proper splitting for the generalized inverse AT,S(2) and its application on Banach spaces," vol. 2012, Article ID 736929, 2012.
[11] X. Liu and F. Huang, "Higher-order convergent iterative method for computing the generalized inverse over Banach spaces," vol. 2013, Article ID 356105, 2013.
[12] X. Liu, H. Jin, and Y. Yu, "Higher-order convergent iterative method for computing the generalized inverse and its application to Toeplitz matrices," vol. 439, no. 6, pp. 1635–1650, 2013. doi: 10.1016/j.laa.2013.05.005.
[13] J. Rajagopalan, Thesis, Concordia University, Quebec, Canada, 1996.
[14] L. Grosz, "Preconditioning by incomplete block elimination," vol. 7, no. 7-8, pp. 527–541, 2000. doi: 10.1002/1099-1506(200010/12)7:7/8<527::AID-NLA211>3.3.CO;2-F.
[15] G. Codevico, V. Y. Pan, and M. Van Barel, "Newton-like iteration based on a cubic polynomial for structured matrices," vol. 36, no. 4, pp. 365–380, 2004. doi: 10.1007/s11075-004-3996-z.
[16] V. Pan and R. Schreiber, "An improved Newton iteration for the generalized inverse of a matrix, with applications," vol. 12, no. 5, pp. 1109–1130, 1991. doi: 10.1137/0912058.
[17] A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications, 2nd ed., Springer, 2003.
[18] F. Khaksar Haghani and F. Soleymani, "A new high-order stable numerical method for matrix inversion," vol. 2014, Article ID 830564, 2014. doi: 10.1155/2014/830564.
[19] F. Soleymani, S. Karimi Vanani, H. I. Siyyam, and I. A. Al-Subaihi, "Numerical solution of nonlinear equations by an optimal eighth-order class of iterative methods," vol. 59, no. 1, pp. 159–171, 2013. doi: 10.1007/s11565-012-0165-5.
[20] X. Wang and T. Zhang, "High-order Newton-type iterative methods with memory for solving nonlinear equations," vol. 19, pp. 91–109, 2014.
[21] H. Hotelling, "Analysis of a complex of statistical variables into principal components," vol. 24, no. 6, pp. 417–441, 1933. doi: 10.1037/h0071325.
[22] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, NY, USA, 1966.
[23] E. V. Krishnamurthy and S. K. Sen, Affiliated East-West Press, New Delhi, India, 2007.
[24] L. Weiguo, L. Juan, and Q. Tiantian, "A family of iterative methods for computing Moore-Penrose inverse of a matrix," vol. 438, no. 1, pp. 47–56, 2013. doi: 10.1016/j.laa.2012.08.004.
[25] F. Soleymani, "A fast convergent iterative solver for approximate inverse of matrices," 2013. doi: 10.1002/nla.1890.
[26] M. Zaka Ullah, F. Soleymani, and A. S. Al-Fhaid, "An efficient matrix iteration for computing weighted Moore-Penrose inverse," vol. 226, pp. 441–454, 2014. doi: 10.1016/j.amc.2013.10.046.
[27] V. Y. Pan, Structured Matrices and Polynomials, Birkhäuser, Boston, Mass, USA, 2001. doi: 10.1007/978-1-4612-0129-8.
[28] M. Miladinović, S. Miljković, and P. Stanimirović, "Modified SMS method for computing outer inverses of Toeplitz matrices," vol. 218, no. 7, pp. 3131–3143, 2011. doi: 10.1016/j.amc.2011.08.046.
[29] H. Chen and Y. Wang, "A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse," vol. 218, no. 8, pp. 4012–4016, 2011. doi: 10.1016/j.amc.2011.05.066.
[30] A. R. Soheili, F. Soleymani, and M. D. Petković, "On the computation of weighted Moore-Penrose inverse using a high-order matrix method," vol. 66, no. 11, pp. 2344–2351, 2013. doi: 10.1016/j.camwa.2013.09.007.
[31] F. Soleymani and P. S. Stanimirović, "A note on the stability of a pth order iteration for finding generalized inverses," vol. 28, pp. 77–81, 2014. doi: 10.1016/j.aml.2013.10.004.
[32] P. S. Stanimirović and F. Soleymani, "A class of numerical algorithms for computing outer inverses," vol. 263, pp. 236–245, 2014. doi: 10.1016/j.cam.2013.12.033.
[33] S. Wolfram, The Mathematica Book, 5th ed., Wolfram Media, 2003.
[34] M. P. Knapp and C. Xenophontos, "Numerical analysis meets number theory: using rootfinding methods to calculate inverses mod pn," vol. 4, no. 1, pp. 23–31, 2010. doi: 10.2298/AADM100201012K.
K.2007New Delhi, IndiaAffiliated East-West PressMR848359WeiguoL.JuanL.TiantianQ.A family of iterative methods for computing Moore-Penrose inverse of a matrix20134381475610.1016/j.laa.2012.08.004MR2993363ZBL1258.65035SoleymaniF.A fast convergent iterative solver for approximate inverse of matrices201310.1002/nla.1890Zaka UllahM.SoleymaniF.Al-FhaidA. S.An efficient matrix iteration for computing weighted Moore-Penrose inverse201422644145410.1016/j.amc.2013.10.046MR3144324PanV. Y.2001Boston, Mass, USASpringer, Birkhäauser10.1007/978-1-4612-0129-8MR1843842MiladinovićM.MiljkovićS.StanimirovićP.Modified SMS method for computing outer inverses of Toeplitz matrices201121873131314310.1016/j.amc.2011.08.046MR2851415ZBL1262.65052ChenH.WangY.A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse201121884012401610.1016/j.amc.2011.05.066MR2862074ZBL06045802SoheiliA. R.SoleymaniF.PetkovićM. D.On the computation of weighted Moore-Penrose inverse using a high-order matrix method201366112344235110.1016/j.camwa.2013.09.007MR3125379SoleymaniF.StanimirovićP. S.A note on the stability of a pth order iteration for finding generalized inverses201428778110.1016/j.aml.2013.10.004MR3128652StanimirovićP. S.SoleymaniF.A class of numerical algorithms for computing outer inverses201426323624510.1016/j.cam.2013.12.033MR3162349WolframS.20035thWolfram MediaMR1721106KnappM. P.XenophontosC.Numerical analysis meets number theory: using rootfinding methods to calculate inverses mod pn201041233110.2298/AADM100201012KMR2654927ZBL1265.11001