This paper presents a new class of accelerated restarted GMRES methods for computing the stationary probability vector of an irreducible Markov chain. We describe the mechanism of this hybrid method, showing how to periodically combine the GMRES method and a vector extrapolation method into a more efficient scheme that improves the convergence rate on Markov chain problems. Numerical experiments on several typical Markov chain problems demonstrate the efficiency of the new algorithm.
1. Introduction
The Markov chain is a robust tool for studying stochastic systems over time and has a wide range of applications, including queueing systems [1, 2], computer and communication systems [3], information retrieval, and Web ranking [4–7]. For both discrete and continuous Markov chains, obtaining numerical solutions by appropriate methods has always been a central task.
In this paper, we consider a class of Krylov subspace methods, the restarted GMRES (GMRES(m)) method, for computing the stationary probability vector of an irreducible Markov chain. Let B=(bij)∈ℝn×n be a column-stochastic matrix; that is, 0≤bij≤1 (∀i,j) and eTB=eT, with e being the column vector of all ones. We seek the stationary vector x∈ℝn that satisfies
(1)Bx=x,xi≥0(∀i),∥x∥=1.
By the Perron-Frobenius theorem [8, 9], if B is irreducible, then there exists a unique solution x to (1), which is strictly positive (xi>0,∀i).
Equation (1) is equivalent to the following singular linear problem:
(2)Ax=0,xi≥0,∀i,∥x∥=1,
where A=I-B, with I being the identity matrix; A is a singular M-matrix whose diagonal elements equal the negative of the column sums of its off-diagonal elements, so every column of A sums to zero.
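To make this setup concrete, the following NumPy sketch builds A = I - B for a small illustrative 3-state chain (the entries of B are our own example, not from the paper) and verifies the stated properties: zero column sums, and a strictly positive stationary vector spanning the null space of A.

```python
import numpy as np

# A small illustrative column-stochastic matrix B (each column sums to 1);
# the entries are hypothetical, chosen only to demonstrate the properties.
B = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.3],
              [0.2, 0.2, 0.4]])
A = np.eye(3) - B                       # A = I - B, a singular M-matrix

# Every column of A sums to zero, i.e., each diagonal entry equals the
# negative of the column sum of the off-diagonal entries.
assert np.allclose(A.sum(axis=0), 0.0)

# The stationary vector spans the null space of A; recover it from the
# eigenvector of B for the eigenvalue 1 and normalize it to sum to one.
eigvals, eigvecs = np.linalg.eig(B)
x = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
x = x / x.sum()
assert np.all(x > 0) and np.allclose(A @ x, 0.0)
```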
There are many numerical methods for solving (2), such as traditional iterative methods like the power method or the weighted Jacobi method [5, 10–12], aggregation/disaggregation algorithms [13–16], and Krylov subspace methods such as the Arnoldi method and GMRES [3, 17–21]. However, these iterative methods may converge very slowly when the subdominant eigenvalue of B satisfies λ2≈1 [3, 11]. It is therefore highly desirable to accelerate the computation for Markov chain problems.
In this paper, we consider the restarted GMRES method and propose a new way to accelerate it using polynomial-type vector extrapolation methods. Using vector extrapolation as an accelerator is in fact quite common; see [5, 12, 22] for details. We start from the mechanisms of the GMRES(m) method and the vector extrapolation methods and then show how to periodically combine the two. In numerical experiments, the proposed extrapolation-accelerated GMRES(m) methods are tested on several Markov chain problems, and the results demonstrate their effectiveness.
This paper is organized as follows. Section 2 briefly describes the mechanism of the GMRES methods and some accelerated variants. In Section 3, we first review the vector extrapolation methods and then consider how to periodically combine extrapolation with GMRES for the numerical solution of Markov chains. Section 4 provides experimental evidence of the effectiveness of our approach. Section 5 summarizes the paper and points out directions for future research.
2. Background
In this section, we briefly introduce the mechanism of the GMRES methods and describe some existing modifications to the standard GMRES aimed at accelerating its convergence.
The GMRES method, proposed by Saad and Schultz in [20], is a popular method for the iterative solution of sparse linear systems with a nonsymmetric matrix,
(3)Ax=b,A∈ℝn×n.
If x0 is an initial guess for the true solution of (3) and r0=b-Ax0 is the initial residual, we have the equivalent system:
(4)Az=r0,A∈ℝn×n,
where x=x0+z. Let 𝒦k be the Krylov subspace:
(5)𝒦k≡span{r0,Ar0,…,Ak-1r0}.
GMRES is to find an approximate solution
(6)xk=x0+zkwithzk∈𝒦k,
such that
(7)∥b-Axk∥2=minx∈x0+𝒦k∥b-Ax∥2=minz∈𝒦k∥r0-Az∥2.
Here, ∥·∥2 denotes the Euclidean norm on ℝn, as well as the associated-induced matrix norm on ℝn×n.
GMRES is often referred to as an “optimal” method in the sense that it finds the approximate solution in the Krylov subspace that minimizes the residual [20]. However, the storage grows linearly and the computational work quadratically with the number of steps, so the restarted version of the algorithm is often used, as suggested in [20]. In restarted GMRES (GMRES(m)), the method is restarted after each cycle of m iteration steps, and the current approximate solution becomes the new initial guess for the next m iterations. The mechanism of GMRES(m) for Markov chains in the form of system (2) is described as follows; for more details, see [20, 23].
Algorithm 1.
The Restarted GMRES.
Step 1 (start). Choose x0, m, and tol, and compute
(8)r0←-Ax0, β←∥r0∥2, v1←r0/β.
Step 2 (Arnoldi process). For j=1,2,…,m, do
(9)hij=(Avj,vi), i=1,2,…,j; vj+1^=Avj-∑i=1jhijvi; hj+1,j=∥vj+1^∥2; vj+1=vj+1^/hj+1,j.
Step 3 (approximate solution). Form xm=x0+Vmym, where ym minimizes ∥βe1-H¯my∥2 over y∈ℝm.
Step 4 (restart). Compute
(10)rm=-Axm; if ∥rm∥2>tol, set x0←xm, v1=rm/∥rm∥2, and go to Step 2; otherwise stop.
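The cycle above can be sketched in a few lines of dense NumPy. This is a minimal illustration of Algorithm 1 for the singular system Ax = 0, not an optimized implementation; the renormalization of the iterate between cycles is our own addition, used to keep the iterate on the probability simplex.

```python
import numpy as np

def gmres_cycle(A, x0, m):
    """One GMRES(m) cycle for A x = 0: Arnoldi on r0 = -A x0, then
    minimize ||beta e1 - H_bar y||_2 over the Krylov subspace."""
    n = len(x0)
    r0 = -A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:                      # x0 is already an exact solution
        return x0
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):                   # Arnoldi with modified Gram-Schmidt
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H, e1, rcond=None)[0]   # min ||beta e1 - H_bar y||
    return x0 + V[:, :m] @ y

def restarted_gmres(A, x0, m=10, tol=1e-10, max_restarts=500):
    """GMRES(m): restart until the residual 1-norm drops below tol."""
    x = x0.copy()
    for _ in range(max_restarts):
        x = gmres_cycle(A, x, m)
        x = x / np.abs(x).sum()          # keep the probability normalization
        if np.linalg.norm(A @ x, 1) < tol:
            break
    return x
```

For instance, on A = I - B with B a random column-stochastic matrix and x0 = e/n, `restarted_gmres(A, x0, m=5)` drives ∥Ax∥1 below the tolerance within a few restarts.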
In general, the convergence rate depends on the restart parameter [24, 25]. Even when an appropriate restart parameter yields satisfactory convergence, the convergence behavior may not be optimal: whenever the iteration is restarted, the current approximation space is discarded, and the orthogonality between approximation spaces generated at successive restarts is not preserved. For this reason, slow convergence can occur. It is therefore necessary to develop more efficient algorithms that enhance the robustness of restarted GMRES, and its improvements and modifications continue to receive considerable attention. Augmented methods are a class of acceleration techniques that seek to avoid stalling by retaining useful information in GMRES at the time of the restart; such methods were presented by Morgan [19, 26], Saad [27], and Chapman and Saad [28]. Preconditioning techniques are also often used to accelerate GMRES; see [17, 23, 29–32] for more details. This paper undertakes similar work, namely, speeding up the process and improving the robustness of restarted GMRES.
3. The Main Algorithm and Practical Implementations
Let us first present the rationale and theory behind vector extrapolation methods. When solving systems of linear or nonlinear equations by an iterative method, a sequence of vectors (approximate solutions) is produced. As the classical iteration process may converge slowly, an extrapolation strategy can often be applied to enhance the convergence rate. A detailed review of extrapolation methods can be found in the works of Smith et al. [22], Sidi [12], and Jbilou and Sadok [33]. There are mainly two classes of vector extrapolation methods: (1) polynomial-type methods, namely, the minimal polynomial extrapolation (MPE) of Cabay and Jackson [34], the reduced rank extrapolation (RRE) of Eddy [35] and Mešina [36], and the modified minimal polynomial extrapolation (MMPE) of Sidi et al. [37], Brezinski [38], and Pugachev [39]; and (2) epsilon algorithms, namely, the topological epsilon algorithm of Brezinski [38] and the scalar and vector epsilon algorithms of Wynn [40]. Numerical experience has shown that the polynomial-type vector extrapolation methods are in general more economical than the epsilon algorithms with respect to computing time and storage requirements. In particular, Kamvar et al. recently proposed a new extrapolation method, quadratic extrapolation, for computing PageRank in [5], and Sidi [12] generalized this method (the generalization is denoted GQE) and proved that the resulting generalization is closely related to MPE. We therefore focus on two polynomial extrapolation methods, RRE and GQE, in this paper. We now present the implementations of these two algorithms; for more details, we refer the reader to the paper [12] by Sidi.
Vector extrapolation algorithms are derived by considering vector sequences xi, generated from a fixed-point iterative method of the following form:
(11)xn+1=Axn+b,n=0,1,…,
where A is a fixed N×N matrix, b is a fixed N-dimensional vector, and x0 is an initial vector.
Suppose that we have produced a sequence of iterates {xi}, with xi≥0. Then at the kth outer iteration, let
(12)X=[xk,xk-1,…,xk-m+1]∈ℝn×m
be the matrix consisting of the last m iterates, with xk being the newest; m is usually called the window size. Evidently, the iterates have the following properties:
(13)xi≥0,∥xi∥1=1,i=1,2,….
The problem to be solved is then transformed into finding a vector z satisfying ∑i=1mzi=1, which gives the updated probability vector
(14)x^k=Xz=z1xk+z2xk-1+⋯+zmxk-m+1,
a linear combination of the last m iterates. Assume that ρ(A)<1, which guarantees a unique solution s of the linear system
(15)x=Ax+b.
As illustrated in [12], the efficient implementations of the vector extrapolation methods (RRE and GQE) are presented as follows.
Algorithm 2.
The Reduced Rank Extrapolation Method (RRE).
Step 1. Input the vectors x0,x1,…,xk+1.
Step 2. Compute ui=xi+1-xi, i=0,1,…,k; set Uk=[u0,u1,…,uk]; and compute the QR-factorization of Uk, namely, Uk=QkRk.
Step 3. Solve the linear system RkTRkd=e, where d=[d0,d1,…,dk]T and e=[1,1,…,1]T∈ℝk+1; this amounts to solving two triangular systems: RkTa=e for a and Rkd=a for d. Set λ=1/∑i=0kdi and compute γ=[γ0,γ1,…,γk]T, with γi=λdi, i=0,1,…,k.
Step 4. Compute ξ=[ξ0,ξ1,…,ξk-1]T by ξ0=1-γ0, ξj=ξj-1-γj, j=1,2,…,k-1.
Step 5. Compute η=[η0,η1,…,ηk-1]T=Rk-1ξ, and form the RRE approximation to the exact solution of system (15) by x^k+1=x0+Qk-1η=x0+∑i=0k-1ηiqi.
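A compact NumPy sketch of Algorithm 2, assuming the input iterates x0,…,xk+1 are supplied as the columns of a matrix X (NumPy's QR stands in here for the MGS factorization discussed below):

```python
import numpy as np

def rre(X):
    """Reduced Rank Extrapolation (Algorithm 2).
    X holds the iterates x_0, ..., x_{k+1} as columns."""
    U = np.diff(X, axis=1)                 # u_i = x_{i+1} - x_i
    Q, R = np.linalg.qr(U)                 # U_k = Q_k R_k
    e = np.ones(U.shape[1])
    a = np.linalg.solve(R.T, e)            # R_k^T a = e
    d = np.linalg.solve(R, a)              # R_k d = a
    gamma = d / d.sum()                    # gamma_i = lambda * d_i
    xi = 1.0 - np.cumsum(gamma)[:-1]       # xi_j = 1 - (gamma_0+...+gamma_j)
    eta = R[:-1, :-1] @ xi                 # eta = R_{k-1} xi
    return X[:, 0] + Q[:, :-1] @ eta       # x0 + Q_{k-1} eta
```

The result equals the combination ∑γixi of the input iterates; on a linearly generated sequence xn+1 = Mxn + b with ρ(M) < 1, the extrapolated vector is typically much closer to the fixed point than the last iterate used.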
Algorithm 3.
The Generalization of Quadratic Extrapolation (GQE).
Step 1. Input the vectors x0,x1,…,xk+1.
Step 2. Compute ui=xi+1-x0, i=0,1,…,k; set Uk=[u0,u1,…,uk]; and compute the QR-factorization of Uk, namely, Uk=QkRk.
Step 3. Solve the linear system Rk-1d=-Qk-1Tuk for d=[d0,d1,…,dk-1]T.
Step 4. Set dk=1 and compute c=[c0,c1,…,ck]T by ci=∑j=ikdj, i=0,1,…,k (thus, ck=dk=1).
Step 5. Compute x^k+1=(∑i=0kci)x0+Qk(Rkc).
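A matching NumPy sketch of Algorithm 3, implemented literally as stated (ui = xi+1 - x0, with the leading k × k block of R used in the triangular solve):

```python
import numpy as np

def gqe(X):
    """Generalization of Quadratic Extrapolation (Algorithm 3).
    X holds the iterates x_0, ..., x_{k+1} as columns."""
    x0 = X[:, 0]
    U = X[:, 1:] - x0[:, None]            # u_i = x_{i+1} - x_0
    Q, R = np.linalg.qr(U)                # U_k = Q_k R_k
    # Solve R_{k-1} d = -Q_{k-1}^T u_k, then append d_k = 1.
    d = np.linalg.solve(R[:-1, :-1], -(Q[:, :-1].T @ U[:, -1]))
    d = np.append(d, 1.0)
    c = np.cumsum(d[::-1])[::-1]          # c_i = d_i + d_{i+1} + ... + d_k
    return c.sum() * x0 + Q @ (R @ c)     # (sum c_i) x0 + Q_k (R_k c)
```

As a small check of the construction: for a 3 × 3 stochastic matrix B, the power sequence xn = B^n x0 involves only three eigenvalues, and GQE with k = 2 recovers the stationary vector (up to scaling) essentially exactly.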
Clearly, Algorithms 2 and 3 share a common feature: both contain a QR-factorization Uk=QkRk in step 2, where Qk∈ℝn×(k+1) has orthonormal columns and Rk∈ℝ(k+1)×(k+1) is an upper triangular matrix with positive diagonal elements. The QR-factorization can always be carried out inexpensively by applying the modified Gram-Schmidt process (MGS) to the vectors u0,u1,…,uk; see [12] for more details. The MGS algorithm is given as follows.
MGS Algorithm
Step 1. Compute r00=(u0,u0)1/2 and set q0=u0/r00.
Step 2. For j=1,…,k, do:
set uj(0)=uj;
for i=0,1,…,j-1, compute rij=(qi,uj(i)) and uj(i+1)=uj(i)-rijqi;
compute rjj=(uj(j),uj(j))1/2 and qj=uj(j)/rjj.
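A direct NumPy transcription of the MGS algorithm above (a hedged sketch; in practice `np.linalg.qr` may be used instead):

```python
import numpy as np

def mgs(U):
    """Modified Gram-Schmidt QR factorization of U = [u_0, ..., u_k]:
    returns Q with orthonormal columns and upper triangular R with
    positive diagonal, such that U = Q R."""
    n, k1 = U.shape
    Q = np.zeros((n, k1))
    R = np.zeros((k1, k1))
    for j in range(k1):
        w = U[:, j].astype(float).copy()   # u_j^(0)
        for i in range(j):
            R[i, j] = Q[:, i] @ w          # r_ij = (q_i, u_j^(i))
            w = w - R[i, j] * Q[:, i]      # u_j^(i+1)
        R[j, j] = np.sqrt(w @ w)           # r_jj = (u_j^(j), u_j^(j))^(1/2)
        Q[:, j] = w / R[j, j]
    return Q, R
```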
As previously mentioned, restarting GMRES may entail slow convergence, since the current approximation space is discarded at each restart cycle. Moreover, changing the restart parameter m affects both the number of iterations and the execution time of GMRES. Increasing m decreases the number of iterations needed to converge but increases the computational work and storage required per iteration, especially for large systems of equations such as large-scale Markov chains and PageRank computations in information retrieval. It is therefore necessary to enhance the robustness of GMRES(m), and our work falls into the category of acceleration techniques. Next, we discuss the idea and implementation of the new method.
Recall that in restarted GMRES, once the Krylov subspace reaches dimension m, the current approximate solution becomes the new initial guess for the next m iterations. The motivation for our new method comes from these successive restarts: after the m iterations of a restart cycle, we use the current approximation as the initial vector for the extrapolation procedure, and the resulting extrapolated vector in turn becomes the new, improved starting vector for the next m iterations of GMRES(m). In summary, the mechanism of our new restarted GMRES method can be characterized as follows.
Algorithm 4.
The Extrapolation-Accelerated GMRES(m) (EGMRES)
Step 1. Choose x0, m, n, and tol, and set k=1.
Step 2. Compute xk using Algorithm 1 (GMRES(m)).
Step 3. Set X←[xk,xk-1,…,xk-n-1].
Step 4. Compute x^k from X by applying Algorithm 2 (RRE) or Algorithm 3 (GQE).
Step 5. If ∥Ax^k∥1>∥Axk∥1, then set x^k←xk.
Step 6. Check convergence: if ∥Ax^k-x^k∥1/∥x^k∥1<tol, stop; otherwise, set k←k+1 and go to Step 2.
Note that, in step 1, tol is the user-prescribed tolerance for the residual 1-norm in GMRES(m), m is the restart number in GMRES, and n is the window size for extrapolation. From Algorithms 2 and 3, both the RRE and GQE methods require n+2 vectors x0,x1,…,xn+1 as inputs; for instance, when n=2, four approximate vectors are needed as input to the extrapolation procedure. Step 5 checks whether the extrapolation was effective in the current iteration. Step 6 determines when to switch between the extrapolation procedure and the GMRES procedure. The performance of the new algorithm and comparisons with the original algorithms are discussed in detail below.
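Putting the pieces together, the following is a self-contained sketch of Algorithm 4 with RRE as the extrapolator (GQE could be slotted in identically at step 4). The inner GMRES(m) cycle, the RRE step, the safeguard of step 5, and a stopping test in the spirit of step 6 (we use ∥Ax∥1/∥x∥1 as the residual measure) all follow the descriptions above; the test matrix and parameter values are illustrative only.

```python
import numpy as np

def gmres_cycle(A, x0, m):
    # One GMRES(m) cycle for A x = 0 (dense sketch of Algorithm 1).
    r0 = -A @ x0
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0
    n = len(x0)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] > 1e-14:
            V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y = np.linalg.lstsq(H, e1, rcond=None)[0]
    return x0 + V[:, :m] @ y

def rre(X):
    # Reduced Rank Extrapolation (Algorithm 2) on the columns of X.
    U = np.diff(X, axis=1)
    Q, R = np.linalg.qr(U)
    d = np.linalg.solve(R, np.linalg.solve(R.T, np.ones(U.shape[1])))
    gamma = d / d.sum()
    xi = 1.0 - np.cumsum(gamma)[:-1]
    return X[:, 0] + Q[:, :-1] @ (R[:-1, :-1] @ xi)

def egmres(A, x0, m=10, nwin=2, tol=1e-10, maxit=200):
    """Extrapolation-accelerated GMRES(m) (Algorithm 4, RRE variant)."""
    x = x0 / np.abs(x0).sum()
    history = [x]
    for _ in range(maxit):
        x = gmres_cycle(A, history[-1], m)      # step 2
        x = x / np.abs(x).sum()
        history.append(x)
        if len(history) >= nwin + 2:            # steps 3-4: RRE window
            try:
                xh = rre(np.column_stack(history[-(nwin + 2):]))
                xh = xh / np.abs(xh).sum()
            except np.linalg.LinAlgError:
                xh = x
            # Step 5: keep the extrapolated vector only if it helps.
            if np.linalg.norm(A @ xh, 1) <= np.linalg.norm(A @ x, 1):
                x = xh
                history[-1] = x
        # Step 6 (residual measured here as ||A x||_1 / ||x||_1).
        if np.linalg.norm(A @ x, 1) < tol * np.linalg.norm(x, 1):
            break
    return x
```

On a random column-stochastic test matrix, `egmres(np.eye(n) - B, np.ones(n)/n, m=5, nwin=2)` returns a normalized approximate stationary vector; with GQE, step 4 would simply call the GQE routine instead.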
4. Numerical Results in Markov Chain Problems
In this section, we report numerical experiments that examine the efficiency of our new accelerated algorithm for the numerical solution of the stationary probability vector of Markov chains. Three typical Markov chain problems are used in the experiments. All numerical results were obtained with a MATLAB R2010a implementation on Windows 8 with a 2.5 GHz i5 processor and 4 GB of memory.
For fairness, the same starting vector x0=(1/n)e, with e being the column vector of all ones, is used for all the algorithms. All iterations are terminated when ∥Ax^k-x^k∥1/∥x^k∥1<tol, where x^k is the approximation obtained at the current iteration. For convenience, in all the tables and figures below, we abbreviate the RRE-accelerated GMRES algorithm as RGMRES and the GQE-accelerated GMRES algorithm as GGMRES. We denote by “rest” the restart number in GMRES, by “CPU” the CPU time in seconds, by “it” the iteration count, and by “Res” the residual 1-norm.
Example 1.
In this example, we compare the GMRES(m), RGMRES, and GGMRES algorithms on a Markov chain problem. The test problem is a one-dimensional (1D) Markov chain, which often arises from an M/M/1 queueing system. The graph of this problem is displayed in Figure 1, where the transition rates are identical. Numerical results are presented in Table 1.
Table 1: Numerical results of the three algorithms for the one-dimensional Markov chain with identical transition rates.
rest    GMRES                      RGMRES                     GGMRES
        CPU      it   Res          CPU      it   Res          CPU      it   Res
5       0.1458   227  9.9857e-7    0.2071   97   1.0240e-6    0.0544   32   5.6827e-6
10      0.2708   78   9.7092e-7    0.1832   76   9.8590e-7    0.0612   30   4.0731e-6
15      0.5075   59   9.9114e-7    0.1317   40   1.0653e-6    0.0752   23   2.2496e-6
20      0.6750   45   9.8919e-7    0.1762   33   1.0609e-6    0.1113   21   1.5214e-6
30      0.6617   23   9.2978e-7    0.1963   18   8.5427e-7    0.1219   12   2.3510e-6
Figure 1: Graph for the one-dimensional Markov chain with identical transition rates.
Table 1 shows that the accelerated GMRES methods are more effective than the unaccelerated one, both in CPU time and in iteration counts, regardless of the restart number. In particular, the GMRES method accelerated by GQE (GGMRES) performs best in most cases. For instance, when the restart number is 20, compared with unaccelerated GMRES, the CPU time is reduced by 74% for RGMRES and about 84% for GGMRES, and the iteration count is reduced by 27% for RGMRES and 53% for GGMRES. Figure 2 compares the convergence speed of the three algorithms. The convergence rate is clearly enhanced by the accelerating techniques, and GGMRES in particular converges faster than the other algorithms, meeting the convergence tolerance quickly.
Figure 2: Convergence curves of the three algorithms for Example 1, tol=10-7.
Example 2.
This test problem, displayed in Figure 3, is a 1D chain with uniform weights, except for two weak links with weight μ in the middle of the chain. This is a typical nearly completely decomposable (NCD) Markov chain problem and has been discussed in [41–43]. We run the GMRES algorithm and the extrapolation-accelerated GMRES algorithms, RGMRES and GGMRES, on this problem for several values of the extrapolation window size. All algorithms are stopped as soon as the residual norm falls below tol=10-7. We set μ=1e-3 in the numerical simulation; the experimental results are listed in Table 2.
Table 2: Numerical comparison of the three algorithms for the uniform chain with two weak links in the middle (μ=1e-3).
Window size      2              3              4              5
                 CPU     it     CPU     it     CPU     it     CPU     it
GMRES            0.2708  78     0.4917  98     0.4807  98     0.4763  98
RGMRES           0.1832  76     0.1924  76     0.1526  61     0.1593  65
GGMRES           0.0612  30     0.0915  40     0.0679  28     0.0927  38
Figure 3: Graph for a uniform chain with two weak links in the middle (μ=1e-3).
Table 2 shows that the extrapolation-accelerated GMRES methods perform better than the unaccelerated GMRES method in both iteration counts and CPU time, regardless of window size. The GMRES method accelerated by GQE clearly outperforms the other two methods. For instance, when the window size is 4, the iteration count of GGMRES is only 46% of that of RGMRES and about 29% of that of GMRES. The advantage is even more pronounced in CPU time: GGMRES reduces the convergence time by 55% compared with RGMRES and by nearly 86% compared with GMRES.
Example 3.
This problem is a 2D lattice (grid) with uniform weights, which has been discussed in [43, 44]. Aggregates of size 3×3 are used, and the gauge variables, defined on the lattice edges, are scalars equal to one, as shown in Figure 4.
In this problem, we run GMRES, RGMRES, and GGMRES and compare their convergence rates as the problem size varies, with restart number 10 and window size 2. Numerical results are presented in Table 3. The results for the 2D lattice problem clearly demonstrate the effectiveness of acceleration by vector extrapolation: RGMRES converges faster than GMRES, while GGMRES performs best.
Table 3: Numerical results of the three algorithms with various problem sizes for the uniform 2D lattice.
N       GMRES                      RGMRES                     GGMRES
        CPU      it   Res          CPU      it   Res          CPU      it   Res
50      0.0393   20   9.2658e-7    0.0515   20   9.2658e-7    0.0423   17   7.4865e-7
100     0.3117   55   9.6876e-7    0.2056   27   8.8903e-7    0.1850   25   1.5432e-6
200     1.6468   71   9.5389e-7    1.5973   56   9.9972e-7    1.1040   35   2.1738e-6
300     8.0600   118  9.7536e-7    7.2827   77   1.3757e-6    5.5989   59   1.9483e-6
Figure 4: Graph for a 2D lattice with uniform weights.
5. Conclusions
In this paper, we have presented a new GMRES method, accelerated by vector extrapolation techniques, for computing the stationary probability vector of an irreducible Markov chain. Experimental results on several typical Markov chain problems demonstrate that vector extrapolation is an attractive option for accelerating the convergence of Markov chain computations, especially with the GQE method (proposed by Sidi in [12]) as the accelerator. As mentioned previously, preconditioning may also be an appropriate strategy for improving the convergence rate for Markov chains, and it will be one of our future research topics in this field.
Acknowledgments
This research is supported by Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), NSFC (61170309), Sichuan Province Sci. and Tech. Research Project (2012GZX0080), and Scientific Research Fund of Sichuan Provincial Education Department (T11008).
References
[1] W. K. Ching, “Iterative methods for queuing systems with batch arrivals and negative customers,” vol. 43, no. 2, pp. 285–296, 2003.
[2] B. Meini, “Solving M/G/1 type Markov chains: recent advances and applications,” vol. 14, no. 1-2, pp. 479–496, 1998.
[3] W. J. Stewart, Princeton University Press, Princeton, NJ, USA, 1994.
[4] A. Z. Broder, R. Lempel, F. Maghoul, and J. Pedersen, “Efficient PageRank approximation via graph aggregation,” vol. 9, no. 2, pp. 123–138, 2006.
[5] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub, “Extrapolation methods for accelerating PageRank computations,” in Proceedings of the 12th International World Wide Web Conference, Budapest, Hungary, May 2003.
[6] A. N. Langville and C. D. Meyer, “A survey of eigenvector methods for Web information retrieval,” vol. 47, no. 1, pp. 135–161, 2005.
[7] L. Page, S. Brin, R. Motwani, and T. Winograd, “The PageRank citation ranking: bringing order to the web,” Tech. Rep. 1999-0120, Computer Science Department, Stanford, Calif, USA, 1999.
[8] A. Berman and R. J. Plemmons, SIAM, Philadelphia, Pa, USA, 1987.
[9] H. De Sterck, T. A. Manteuffel, S. F. McCormick, Q. Nguyen, and J. Ruge, “Multilevel adaptive aggregation for Markov chains, with application to web ranking,” vol. 30, no. 5, pp. 2235–2262, 2008.
[10] S. Kamvar, T. Haveliwala, and G. Golub, “Adaptive methods for the computation of PageRank,” vol. 386, pp. 51–65, 2004.
[11] B. Philippe, Y. Saad, and W. J. Stewart, “Numerical methods in Markov chain modeling,” vol. 40, pp. 1156–1179, 1992.
[12] A. Sidi, “Vector extrapolation methods with applications to solution of large systems of equations and to PageRank computations,” vol. 56, no. 1, pp. 1–24, 2008.
[13] I. Marek and P. Mayer, “Convergence analysis of an iterative aggregation/disaggregation method for computing stationary probability vectors of stochastic matrices,” vol. 54, pp. 253–274, 1998.
[14] I. Marek and P. Mayer, “Convergence theory of some classes of iterative aggregation/disaggregation methods for computing stationary probability vectors of stochastic matrices,” vol. 363, pp. 177–200, 2003.
[15] P. J. Schweitzer and K. W. Kindle, “An iterative aggregation-disaggregation algorithm for solving linear equations,” vol. 18, no. 4, pp. 313–354, 1986.
[16] W. J. Stewart and W. L. Cao, “Iterative aggregation/disaggregation techniques for nearly uncoupled Markov chains,” vol. 32, no. 3, pp. 702–719, 1985.
[17] A. H. Baker, E. R. Jessup, and T. Manteuffel, “A technique for accelerating the convergence of restarted GMRES,” vol. 26, no. 4, pp. 962–984, 2005.
[18] G. H. Golub and C. Greif, “An Arnoldi-type algorithm for computing PageRank,” vol. 46, no. 4, pp. 759–771, 2006.
[19] R. B. Morgan, “Implicitly restarted GMRES and Arnoldi methods for nonsymmetric systems of equations,” vol. 21, no. 4, pp. 1112–1135, 2000.
[20] Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” vol. 7, no. 3, pp. 856–869, 1986.
[21] G. Wu and Y. Wei, “An Arnoldi-extrapolation algorithm for computing PageRank,” vol. 234, no. 11, pp. 3196–3212, 2010.
[22] D. A. Smith, W. F. Ford, and A. Sidi, “Extrapolation methods for vector sequences,” vol. 29, no. 2, pp. 199–233, 1987.
[23] J. Baglama, D. Calvetti, G. H. Golub, and L. Reichel, “Adaptively preconditioned GMRES algorithms,” vol. 20, no. 1, pp. 243–269, 1999.
[24] M. Embree, “The tortoise and the hare restart GMRES,” Tech. Rep. 01/22, Oxford University Computing Laboratory Numerical Analysis, Oxford, UK, 2001.
[25] W. Joubert, “On the convergence behavior of the restarted GMRES algorithm for solving nonsymmetric linear systems,” vol. 1, no. 5, pp. 427–447, 1994.
[26] R. B. Morgan, “A restarted GMRES method augmented with eigenvectors,” vol. 16, no. 4, pp. 1154–1171, 1995.
[27] Y. Saad, “Analysis of augmented Krylov subspace methods,” vol. 18, no. 2, pp. 435–449, 1997.
[28] A. Chapman and Y. Saad, “Deflated and augmented Krylov subspace techniques,” vol. 4, no. 1, pp. 43–66, 1997.
[29] M. Benzi, B. Uçar, A. N. Langville, and W. J. Stewart, “Product preconditioning for Markov chain problems,” Boson Books, Raleigh, NC, USA, pp. 239–256.
[30] M. Benzi and B. Uçar, “Block triangular preconditioners for M-matrices and Markov chains,” vol. 26, pp. 209–227, 2007.
[31] J. Erhel, K. Burrage, and B. Pohl, “Restarted GMRES preconditioned by deflation,” vol. 69, no. 2, pp. 303–318, 1996.
[32] S. A. Kharchenko and A. Yu. Yeremin, “Eigenvalue translation based preconditioners for the GMRES(k) method,” vol. 2, no. 1, pp. 51–77, 1995.
[33] K. Jbilou and H. Sadok, “Vector extrapolation methods. Applications and numerical comparison,” vol. 122, no. 1-2, pp. 149–165, 2000.
[34] S. Cabay and L. W. Jackson, “A polynomial extrapolation method for finding limits and antilimits of vector sequences,” vol. 13, no. 5, pp. 734–752, 1976.
[35] R. P. Eddy, “Extrapolating to the limit of a vector sequence,” in P. C. C. Wang (Ed.), Academic Press, New York, NY, USA, pp. 387–396, 1979.
[36] M. Mešina, “Convergence acceleration for the iterative solution of the equations X=AX+f,” vol. 10, no. 2, pp. 165–173, 1977.
[37] A. Sidi, W. F. Ford, and D. A. Smith, “Acceleration of convergence of vector sequences,” vol. 23, no. 1, pp. 178–196, 1986.
[38] C. Brezinski, “Généralisations de la transformation de Shanks, de la table de Padé et de l'ϵ-algorithme,” vol. 11, no. 4, pp. 317–360, 1975.
[39] B. P. Pugachev, “Acceleration of the convergence of iterative processes and a method of solving systems of non-linear equations,” vol. 17, no. 5, pp. 199–207, 1978.
[40] P. Wynn, “Acceleration techniques for iterated vector and matrix problems,” vol. 16, pp. 301–322, 1962.
[41] C. Isensee and G. Horton, “A multi-level method for the steady state solution of Markov chains,” SCS, Magdeburg, Germany, 2004.
[42] C. Isensee and G. Horton, “A multi-level method for the steady state solution of discrete-time Markov chains,” in Proceedings of the 2nd Balkan Conference in Informatics, Ohrid, Macedonia, November 2005, pp. 413–420.
[43] H. De Sterck, T. A. Manteuffel, S. F. McCormick, K. Miller, J. Pearson, J. Ruge, and G. Sanders, “Smoothed aggregation multigrid for Markov chains,” vol. 32, no. 1, pp. 40–61, 2010.
[44] H. De Sterck, K. Miller, G. Sanders, and M. Winlaw, “Recursively accelerated multilevel aggregation for Markov chains,” vol. 32, no. 3, pp. 1652–1671, 2010.