Approximate Schur-Block ILU Preconditioners for Regularized Solution of Discrete Ill-Posed Problems

High order iterative methods with a recurrence formula for approximate matrix inversion are proposed, in which the matrix multiplications and additions needed to evaluate the matrix polynomials of the hyperpower methods of convergence orders p = 4k + 3, where k ≥ 1 is an integer, are reduced through factorizations and nested loops defined by a recurrence formula. The computational cost is thereby lowered from κ = 4k + 3 to κ = k + 4 matrix multiplications per step. An algorithm is proposed to obtain regularized solutions of discrete ill-posed problems with noisy data by constructing an approximate Schur-Block Incomplete LU (Schur-BILU) preconditioner and preconditioning the one step stationary iterative method. From the proposed methods of approximate matrix inversion, the methods of orders p = 7, 11, 15, 19 are applied to approximate the Schur complement matrices. This algorithm is applied to solve two problems of Fredholm integral equations of the first kind: the harmonic continuation problem and Phillips' problem. Furthermore, an experimental study on some nonsymmetric linear systems whose coefficient matrices have strongly indefinite symmetric components, taken from the Harwell-Boeing collection, is also given. Numerical analysis of the regularized solutions of the considered problems is given, and numerical comparisons with methods from the literature are provided through tables and figures.


Introduction
The numerical solution of many scientific and engineering problems requires the solution of large linear systems of equations of the form

Ax = b, (1)

where x, b ∈ R^n and A ∈ R^{n×n} is nonsingular and usually unsymmetric and unstructured. When A is large and sparse, the solution of (1) is generally obtained through Krylov subspace methods such as simple iteration, generalized minimal residual (GMRES), biconjugate gradient (BICG), quasi-minimal residual (QMR), conjugate gradient squared (CGS), biconjugate gradient stabilized (BICGSTAB), and, for Hermitian positive-definite problems, conjugate gradient (CG) and, for Hermitian indefinite problems, minimum residual (MINRES). These methods converge very rapidly when the coefficient matrix A is close to the identity matrix [1]. Unfortunately, in many practical applications A is not close to the identity matrix, for example in discretizations of Fredholm integral equations of the first kind [2]. The robustness and efficiency of Krylov subspace methods can be improved dramatically by using a suitable preconditioner [1,3]. Basically there are two classes of preconditioners: implicit and explicit preconditioners [4].
In implicit preconditioning methods, typically an approximate factorization A = C + E is first computed, where E is the defect matrix, and C is used in an implicit way, as in the incomplete LU (ILU) factorization of A [4]. An experimental study of ILU preconditioners for indefinite matrices is given in [5] to gain a better practical understanding of ILU preconditioners and to help improve their reliability, because besides fatal breakdowns due to zero pivots, the major causes of failure are inaccuracy and instability of the triangular solves. Another example of implicit preconditioning is the algorithm BBI (Band Block Inverse), which computes the blocks of the exact inverse of a block band matrix A within a band consisting of p1 ≥ p blocks to the left and q1 ≥ q blocks to the right of the diagonal in each block row; it is practically viable only when the bandwidths p, q are small and the entries are scalars or block matrices of small order (see Chapter 8 of [4]).
The explicit preconditioning methods compute an approximation G to the inverse A^{-1} in explicit form and use it as a preconditioning matrix. That is, for solving (1), the system GAx = Gb is considered by iteration when G and A are explicitly available [4]. As an example, an approximate inverse preconditioner for Toeplitz systems with multiple right-hand sides is given in [6]. Factorized sparse approximate inverse (FSAI) preconditioning for solving linear algebraic systems with symmetric positive-definite coefficient matrices is developed theoretically in [7], whereas the corresponding iterative construction of preconditioners is given in [8]. "For matrices with irregular structure, it is unclear how to choose the nonzero pattern of the FSAI preconditioner at least in a nearly optimal way and the serial costs of constructing this preconditioner can be very high if its nonzero pattern is sufficiently dense" [9], and the authors proposed two new approaches to raise the efficiency. Most recently, in [10] two classes of iterative methods for matrix inversion are proposed and used as approximate inverse preconditioners for the BICG method when solving linear systems (1) with the same coefficient matrix and multiple right-hand sides.
There are also hybrid preconditioning methods formed in a two-stage way, where an incomplete factorization (implicit) method is used for a matrix partitioned in blocks and a direct (explicit) method is used to approximate the inverses of the pivot blocks that occur at each stage of the factorization. As an example, in Chapter 7 of [4], when A is an M-matrix, the methods are constructed by recursively applying a 2 × 2 block partitioning of A in which elimination takes place and then computing an approximation of its Schur complement. Using an approximate version of Young's "Property A" for symmetric positive-definite linear systems with a 2 × 2 block matrix, it is proved in [11] that the condition number of the Schur complement is smaller than the condition number obtained by block-diagonal preconditioning.
When the matrix A is large and sparse, a technique based on the idea of successive independent sets has been used to develop preconditioners akin to multilevel preconditioners [12,13], called ILU with multielimination (ILUM). It is well known that block ILU preconditioners usually perform better than their point counterparts, and block versions of the ILUM factorization preconditioning technique (BILUM) are proposed in [14]. Robustness of the BILUM preconditioner is improved in [15] by adding interlevel acceleration in the form of solving the interlevel Schur complement matrices approximately, and two implementation strategies are given that utilize the Schur complement technique in multilevel recursive ILU preconditioning (RILUM). For problems arising from practical applications, such as those from computational fluid dynamics, the coefficient matrices are often irregularly structured, ill-conditioned, and of very large size. If the underlying PDEs are discretized by high order finite elements on unstructured domains, the coefficient matrices may have many nonzeros in each row. Furthermore, some large blocks may be ill-conditioned or near singular, and the standard techniques used to invert these blocks may produce unstable or inaccurate LU factors, yielding less accurate preconditioners. Because of these drawbacks, the simple strategies used in the standard BILUM technique of [14] may become inefficient [16].
To determine a meaningful approximation of the solution of (1) when the coefficient matrix A is severely ill-conditioned or near singular, one typically replaces the linear system (1) by a nearby system that is less sensitive to perturbations of the right-hand side b and the coefficient matrix A. This replacement is commonly referred to as regularization. Discrete ill-posed problems, in which both the coefficient matrix A and the right-hand side b are contaminated by noise, appear in a variety of engineering applications [17-19]. The most popular regularization methods are the Tikhonov regularization method [20-23], the Singular Value Decomposition (SVD) method [24,25], the well-posed stochastic extensions method [26], extrapolation techniques [27], the iterative method yielding the best accessible estimate of the solution of the integral equation [28], and a technique for the numerical solution of certain integral equations of the first kind [29].
The motivation of this study is, firstly, to propose high order iterative methods for approximate matrix inversion of a real nonsingular matrix A such that the matrix multiplications and additions in the evaluation of the matrix polynomials of the hyperpower methods of convergence orders p = 4k + 3, where k ≥ 1 is an integer, are reduced through factorizations and nested loops whose iterations are defined by a recurrence formula; and, secondly, to give an algorithm that constructs a 2 × 2 block incomplete LU decomposition based on an approximate Schur complement for the nonsingular coefficient matrix Ã ∈ R^{n×n} (n even) of the algebraic linear system of equations arising from ill-posed discrete problems with noisy data. In this algorithm the methods of orders p = 7, 11, 15, 19 are applied to approximate the Schur complement matrices, and the obtained preconditioners are used to precondition the one step stationary iterative method. Section 6 of the study is devoted to numerical examples. Algorithm 6, based on Algorithm 1 in [10], and the newly proposed Algorithm 11 are applied to find regularized solutions of algebraic linear systems obtained by discretization of two Fredholm integral equations of the first kind with noisy data: the harmonic continuation problem [2,26] and Phillips' problem [29]. For numerical comparison, the same problems are solved by Algorithms 6 and 11 with the method of order p = 7 proposed in [30] and the methods of orders p = 11, 15, 19 proposed in [31]. Furthermore, experiments with both algorithms are conducted on some nonsymmetric linear systems whose coefficient matrices have strongly indefinite symmetric components, taken from the Harwell-Boeing collection. Analysis and investigation of the obtained numerical results are provided. In the last section, concluding remarks are given based on the theoretical and numerical analysis.

Hyperpower Iterative Methods for Approximate Matrix Inversion
Exhaustive studies have been published investigating iterative methods, based on matrix multiplication and addition, for computing the approximate inverse of a nonsingular square matrix A (see [10,32-34] and references therein), generalized inverses A_{T,S}^{(2)} [30,35], outer inverses [31], and Moore-Penrose inverses [36]. Let I denote the n × n unit matrix and A ∈ R^{n×n} be a nonsingular matrix. We denote the approximate inverse of A at the k-th iteration by V_k, and the residual matrix by R_k = I − A V_k. The best known iterative methods for obtaining the approximate inverse of A constitute the p-th order hyperpower method [35] for p ≥ 2,

V_{k+1} = V_k (I + R_k + R_k^2 + ⋯ + R_k^{p−1}), (3)

which requires p matrix-by-matrix multiplications per iteration.
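As a concrete illustration, the p-th order hyperpower iteration (3) can be sketched as follows. This is a minimal NumPy sketch, not the paper's Fortran implementation; the initial guess V0 = A^T/(‖A‖₁‖A‖∞) is the standard choice, which keeps the spectral norm of R0 below 1 for nonsingular A.

```python
import numpy as np

def hyperpower(A, p=7, tol=1e-12, max_iter=100):
    """p-th order hyperpower iteration for the approximate inverse of A."""
    n = A.shape[0]
    I = np.eye(n)
    # Standard initial guess: V0 = A^T / (||A||_1 ||A||_inf)
    V = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ V                  # residual matrix R_k = I - A V_k
        if np.linalg.norm(R, np.inf) <= tol:
            break
        # Horner evaluation of I + R + ... + R^{p-1}: together with
        # forming R and the final update, p multiplications per step
        P = I + R
        for _ in range(p - 2):
            P = I + R @ P
        V = V @ P                      # V_{k+1} = V_k * polynomial
    return V
```

The Horner loop makes the naive p-multiplication cost of (3) explicit; the factorizations proposed in the paper reduce this count.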
When A is an m × n matrix of rank r with real or complex elements, equation (3) may be used to find generalized inverses such as the Moore-Penrose inverse, based on the choice of the initial inverse; see [35]. One of the recent studies includes a factorization of (3) for p = 7, given in [30] for computing the generalized inverse A_{T,S}^{(2)} and stated in (4); the method in (4) performs 5 matrix-by-matrix multiplications. Another study is [31], in which several systematic algorithms for factorizations of the hyperpower iterative family of arbitrary order are proposed for computing outer inverses. Among these methods, the authors state an 11th order method, a 15th order method, and a 19th order method requiring 7, 7, and 8 matrix-by-matrix multiplications, respectively. In [10], two classes of such factorizations are proposed.

A New Family of Methods with Recurrence Formula
Let I denote the n × n unit matrix and A ∈ R^{n×n} be a nonsingular matrix. We define matrix-valued functions consisting of matrix multiplications and additions, and call the matrix-valued function Φ_k(R_k) given in (14), built from the recurrence terms Γ_j(R_k), the Family Generator Function. We say that (I + Γ(R_k)) is 1 nested loop and (I + Γ(R_k)(I + Γ(R_k))) is 2 nested loops. Using this presentation we express (14) in Horner's form with k − 1 nested loops, which can be given by a recurrence formula. For an integer k ≥ 1 and the corresponding Family Generator Function Φ_k, p = 4k + 3, we present the family of iterative methods (17) for approximate matrix inversion of A.

Theorem 1. Let A ∈ R^{n×n} be nonsingular. The family of methods (17) is a factorization of (3) with k − 1 nested loops for the orders p = 4k + 3, k ≥ 1 an integer.

Proof.
The proof follows by induction, using the proposed family of methods (17).
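The paper's exact factorization (17) is defined by the recurrence above. To illustrate how such factorizations reduce the multiplication count, the following sketch compares a naive Horner evaluation with the classical geometric-series split I + R + ⋯ + R^7 = (I + R)(I + R^2)(I + R^4) for order p = 8; this even-order split is an illustration only, not the proposed p = 4k + 3 scheme.

```python
import numpy as np

def step_naive(A, V, p=8):
    """One order-p hyperpower step in Horner form: p matrix multiplications."""
    n = A.shape[0]
    I = np.eye(n)
    R = I - A @ V                      # 1 multiplication
    P = I + R
    for _ in range(p - 2):             # p - 2 multiplications
        P = I + R @ P
    return V @ P                       # 1 multiplication: total p

def step_factored(A, V):
    """One order-8 step via (I+R)(I+R^2)(I+R^4): 6 multiplications, not 8."""
    n = A.shape[0]
    I = np.eye(n)
    R = I - A @ V                      # 1
    R2 = R @ R                         # 2
    R4 = R2 @ R2                       # 3
    P = (I + R) @ (I + R2)             # 4
    P = P @ (I + R4)                   # 5: equals I + R + ... + R^7
    return V @ P                       # 6
```

Both steps produce the same iterate; only the multiplication count differs, which is the effect the factorizations (17) achieve for the orders p = 4k + 3.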

Computational Complexity
Let x_k and r_k = b − A x_k be the corresponding approximate solution of (1) and its residual error, respectively [10]. Let κ be the number of matrix-by-matrix multiplications (MMs) per iteration of the p-th order hyperpower method given in factorized form. Let ν be the number of matrix-by-matrix additions (MAs) other than additions of the identity, and η the number of matrix additions of the identity per step. To obtain an error ‖r_k‖/‖b‖ ≤ ε by an iterative method obtained by factorization of the hyperpower method (3) using nested loops, the authors of [10] showed that a total of m = ln(ln ε / ln δ)(1/ln p) iterations are required, where ‖R_0‖ = δ < 1. Therefore, as a criterion to compare iterative methods originating from (3), the authors of [10] defined the Asymptotic Convergence Factor C = κ/ln p, which occurs in the product of κ with the iteration number m as κm = ln(ln ε / ln δ)(κ/ln p). When two factorizations of the p-th order hyperpower method have the same C values, it is necessary to use another quantity that measures the efficiency of the hyperpower method with respect to both matrix-by-matrix multiplications and matrix-by-matrix additions.
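The iteration-count formula can be checked numerically. The sketch below computes the smallest m with δ^(p^m) ≤ ε and the resulting total multiplication counts κm for the naive (κ = p) and proposed (κ = k + 4) variants; the values ε = 10⁻¹¹ and δ = 0.5 are illustrative assumptions.

```python
import math

def iterations(p, eps, delta):
    """Smallest m with delta**(p**m) <= eps, i.e.
    m = ceil( ln(ln eps / ln delta) / ln p ), given ||R_0|| = delta < 1."""
    return math.ceil(math.log(math.log(eps) / math.log(delta)) / math.log(p))

# total multiplications: naive kappa = p = 4k+3 versus proposed kappa = k+4
costs = {}
for k in range(1, 5):
    p = 4 * k + 3
    m = iterations(p, 1e-11, 0.5)
    costs[p] = ((4 * k + 3) * m, (k + 4) * m)   # (naive, proposed)
```

For these values every order p = 7, 11, 15, 19 needs m = 2 iterations, so the proposed κ = k + 4 count cuts the total multiplications from 14, 22, 30, 38 down to 10, 12, 14, 16.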
Definition 4. We define the asymptotic convergence values by (27), whose components account for the counts κ, ν, η; see also [34].

Algorithms for Numerical Regularized Solution
Discrete ill-posed problems arise from the discretization of ill-posed problems such as Fredholm integral equations of the first kind with smooth kernel [37]. If the discretization leads to the linear system

A_h x = b,  A ∈ R^{m×n},  m ≥ n,  b ∈ R^m, (31)

then all the singular values of A, as well as the SVD components of the solution, on average decay gradually to zero, and we say that a discrete Picard condition is satisfied; see [37]. In this case the solution is very sensitive to perturbations in the data, such as measurement or approximation errors, and thus regularization methods are essential for computing meaningful approximate solutions. In this study we consider the problem of computing stable approximations of discrete ill-posed problems (31) in which the data obey the noise assumptions (32). Discrete ill-posed problems with data satisfying (32) appear in a number of applications such as inverse scattering [17] and potential theory [18]. If m = n and Ã is nonsingular, the obtained perturbed algebraic linear system (33) may be solved using Algorithms 6 and 11, given in the following subsections. When m > n, denoting the set of all least squares solutions of (33) by S̃, we have x ∈ S̃ if and only if the orthogonality condition Ã^T(b̃ − Ãx) = 0 is satisfied. On the basis of Theorem 1.1.3 in [38], if rank(Ã) = n, then the unique regularized least squares solution is obtained from the normal equations

Ã^T Ã x = Ã^T b̃. (35)

In this case we may apply Algorithms 6 and 11 to the normalized system (35).
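For the overdetermined case, the following sketch illustrates solving a perturbed system through the normal equations (35). The random test matrix, the noise model, and the sizes are hypothetical stand-ins; the paper's perturbation ΔA = ε_A I applies to the square case.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overdetermined discrete problem A_h x = b with m > n
m, n = 60, 40
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

# Noise levels mirroring the paper's relation eps_b = 0.5 * eps_A**1.5
eps_A = 1e-8
eps_b = 0.5 * eps_A ** 1.5
A_t = A + eps_A * rng.standard_normal((m, n))   # perturbed matrix A~
b_t = b + eps_b * rng.standard_normal(m)        # perturbed right-hand side b~

# rank(A~) = n, so the unique least squares solution solves (35)
y = np.linalg.solve(A_t.T @ A_t, A_t.T @ b_t)
```

The computed y satisfies the orthogonality condition Ã^T(b̃ − Ãy) ≈ 0 up to rounding, which is exactly the least squares characterization quoted above.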
Let ỹ_m and r̃_m = b̃ − Ã ỹ_m be the corresponding numerical regularized solution and the regularized residual error, respectively; we also have T̃_m = I − Ã Ṽ_m. In this section and the following sections, the norm ‖·‖ of a matrix is taken to be a subordinate matrix norm. For a given integer 1 ≤ k ≤ 4 and a given prescribed accuracy ε > 0, Algorithm 6 finds the approximate solution ỹ_m given in (37) for the perturbed system (33), using the methods (17) and performing m iterations to reach the accuracy ‖r̃_m‖_∞/‖b̃‖_∞ ≤ ε.
Algorithm 6. This algorithm finds the numerical regularized solution ỹ_m of the perturbed system Ãx = b̃ using the proposed methods (17) for 1 ≤ k ≤ 4, and is given on the basis of Algorithm 1 in [10].
Proof. The proof is obtained using equation (47) of Theorem 8 and is analogous to the proof of Theorem 5 in [10].

ρ((L̃_{m*₁} Ũ_{m*₁})^{−1}(L̃_{m*₁} Ũ_{m*₁} − Ã)) < 1. (57)

Therefore, on the basis of the convergence of the one step stationary iterative method (see Theorem 5.3, Chapter 5), the following algorithm is given.

Algorithm 11. This algorithm constructs an approximate 2 × 2 block Schur-BILU decomposition of Ã, using the proposed methods (17) for 1 ≤ k ≤ 4 to approximate the Schur complement matrices, and then finds the regularized solution ỹ* of the perturbed system (33) by using the constructed Schur-BILU decomposition of Ã to precondition the one step stationary iterative method. Let the stage number j = 1, and let the subblock B = Ã₁₁.
Step 4. Let m*₁ be the total number of iterations performed in Steps 2 and 3, and denote the approximate inverse of Ã₁₁ = B at m*₁ iterations by Ṽ_{1,m*₁}. Find S̃ = Ã₂₂ − Ã₂₁ Ṽ_{1,m*₁} Ã₁₂. (i) An approximate inverse preconditioner for S̃ can be constructed by taking j = 2 and B = S̃ and repeating Steps 1-3. Let m*₂ be the total number of iterations performed; the obtained approximate inverse preconditioner of S̃ is denoted by Ṽ_{2,m*₂}.
Step 5. Construct the lower block triangular matrix L̃_{m*₁} and the upper block triangular matrix Ũ_{m*₁} as defined in (51).
Step 7. Solve the system L̃_{m*₁} Ũ_{m*₁} d₁ = r̃₀ by solving the block lower and block upper triangular systems. For the solution of the block upper triangular system, the resulting subsystems may be solved using Algorithm 6. Alternatively, the preconditioned BICG or preconditioned GMRES method may be used to solve these block subsystems, with the approximate inverses Ṽ_{1,m*₁} and Ṽ_{2,m*₂} obtained in Step 4 used as preconditioners. Next, denote the approximate solution by d̃₁, find ‖d̃₁‖_∞, and increase the value of the iteration index to 1.
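Applying the 2 × 2 block Schur-BILU preconditioner amounts to one block forward and one block backward substitution. A minimal sketch, using np.linalg.inv as a stand-in for the approximate inverses that the paper builds with the methods (17):

```python
import numpy as np

def schur_bilu_apply(A11, A12, A21, A22, V1, r):
    """Apply z = (L U)^{-1} r for the 2x2 block Schur-BILU splitting
    L = [[A11, 0], [A21, S]],  U = [[I, V1 A12], [0, I]],
    with S = A22 - A21 V1 A12 and V1 an approximate inverse of A11."""
    n1 = A11.shape[0]
    S = A22 - A21 @ V1 @ A12
    V2 = np.linalg.inv(S)       # stand-in for the hyperpower approximation of S^{-1}
    r1, r2 = r[:n1], r[n1:]
    # forward solve L w = r, using the approximate inverses as the block solves
    w1 = V1 @ r1
    w2 = V2 @ (r2 - A21 @ w1)
    # backward solve U z = w
    z2 = w2
    z1 = w1 - V1 @ (A12 @ z2)
    return np.concatenate([z1, z2])
```

When V1 is the exact inverse of A11, the splitting L U reproduces Ã exactly, so the preconditioner solve returns the exact solution; with the hyperpower approximations it yields the splitting used by the one step stationary iteration.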
Remark 13. Let y_{ℓ+1} be as given in Theorem 12 and ỹ_{ℓ+1} be the approximate solution obtained by Algorithm 11; the error ẽ_{ℓ+1} = y_{ℓ+1} − ỹ_{ℓ+1} occurs due to floating point arithmetic or due to the preconditioned iterative method used to solve the block subsystems in the backward block substitution in Steps 7-9 of Algorithm 11.

Numerical Results
The experimental investigation of the proposed algorithms is given on two examples of Fredholm integral equations of the first kind and on some nonsymmetric linear systems with strongly indefinite symmetric components, sourced from the simulation of computer systems and a nuclear reactor core model. All computations are performed on a personal computer with an AMD Ryzen 7 1800X eight-core processor at 3.60 GHz. Calculations are carried out by Fortran programs in double precision. In this section the figures and the tables adopt the following notations: TCS₁ is the total solution cost in seconds of Algorithm 6 for Steps 1-6. TCSSP₁ is the total solution cost in seconds of Algorithm 6 for successive perturbations. PCS₂ is the cost in seconds of constructing the preconditioner L̃_{m*₁} Ũ_{m*₁} in Steps 1-5 of Algorithm 11. ICS₂ is the cost in seconds of the total iterations performed by the preconditioned one step stationary iterative method in Steps 6-9 of Algorithm 11. TCS₂ is the total solution cost in seconds of Algorithm 11, that is, TCS₂ = PCS₂ + ICS₂.
TBMMs denote the total block matrix by matrix multiplications.
TMMs denote the total matrix by matrix multiplications.
Application 14 (harmonic continuation problem). The first application is the harmonic continuation problem [2,26]. Given a harmonic function u(r,θ) in the unit circle with known values u(r,θ) = g(θ) for some r < 1, find its values h(θ) for r = 1. Here g(θ) and h(θ) are related by the Poisson integral (66). We use the same data as Franklin [26], for which the answer is known from the real part of an analytic function, giving (67) for |z| = r = 0.5. In all applications given in this section we take ΔA = ε_A I and Δb = [ε_b, ε_b, ..., ε_b]^T ∈ R^n satisfying (32), where I is the n × n identity matrix. The trace of the exact solution h(θ) at the grid points θ_i = 2πi/n, i = 1, ..., n, is denoted by h. The condition number of the matrix Ã = A + ΔA is denoted by cond(Ã); for Application 14, cond(Ã) with respect to ε_b, together with the values of ε_b, is given in Table 3, where ε_b = 0.5 ε_A^{1.5}. The shape of the coefficient matrix Ã when ε_b = 0.5 × (5 × 10^{−10})^{1.5} and n = 800 for Application 14 is given in Figure 1, illustrating that the coefficient matrix is very dense. The TCS₁ required to achieve the desired accuracy ‖r̃_m‖_∞/‖b̃‖_∞ ≤ 5 × 10^{−11} when n = 800 and Ṽ₀ = Ã^T/(‖Ã‖₁‖Ã‖∞) for Application 14 with respect to ε_b is given in Figure 2.
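A minimal sketch of how such a discretization can be set up (trapezoid rule on uniform angles; the paper's exact quadrature and data are not reproduced here) shows the resulting dense, severely ill-conditioned matrix:

```python
import numpy as np

def poisson_matrix(n, r=0.5):
    """Trapezoid-rule discretization of the Poisson integral relating the
    boundary values h(phi) on |z| = 1 to the interior values g(theta) on
    |z| = r. A sketch of the setup only; the paper's data may differ."""
    theta = 2 * np.pi * np.arange(1, n + 1) / n
    d = theta[:, None] - theta[None, :]
    # Poisson kernel P_r(theta - phi), with the trapezoid weight (2 pi / n)
    # and the 1/(2 pi) factor of the Poisson integral combining to 1/n
    return (1 - r**2) / (1 - 2 * r * np.cos(d) + r**2) / n
```

Since the Fourier coefficients of the Poisson kernel decay like r^|k|, the discrete singular values decay geometrically, which is why the resulting linear system is a discrete ill-posed problem: each row integrates the kernel (row sums are 1), yet the condition number grows roughly like r^{−n/2}.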
Moreover, in [2] the best error of the numerical solution obtained by singular value decomposition with n = 50 for the harmonic continuation problem (Application 14) was approximately 10^{−3}, using single precision. In contrast, the relative maximum errors of the proposed methods M₁, M₃, M₅, M₇ for the corresponding discretization are 2.05 × 10^{−9} by Algorithm 6 when ε_b = 10^{−11}, and 1.19 × 10^{−6} by Algorithm 11 when ε_b = 10^{−6}. Furthermore, we solved the harmonic continuation problem by direct LU with pivoting, and the accuracy of the solution is ≃ 10^{−5} for n = 800 and ε_b = 10^{−11}. When this problem is solved for the same values of n and ε_b by Algorithm 6 using the proposed methods M₁, M₃, M₅, M₇, the relative ℓ₂ norm of the errors is less than 5.5 × 10^{−11} for the third successive perturbation, as shown in column seven of Table 5.

Application 15 (Phillips' problem). We take the Fredholm integral equation of the first kind discussed in [29], recently reconsidered by the authors of [41,42] and solved using a new L-curve technique. Its solution, kernel, and right-hand side are given by (69) and (70), respectively. We take quadrature nodes t_i = −6 + 12i/n, i = 1, ..., n, evaluating g(s) at the points s_i = −6 + 12i/n, i = 1, ..., n, and the discretization of (69), (70) gives the algebraic linear system A_h x = b. The trace of the exact solution h(t) at the grid points t_i = −6 + 12i/n, i = 1, ..., n, is denoted by h. The condition number cond(Ã) for Application 15 with respect to ε_b, where ε_b = 0.5 ε_A^{1.5}, together with the values of ε_b, is given in Table 7. TCS₁ with respect to ε_b using the methods M_i, i = 1, ..., 8, for Phillips' problem is given in Figure 4 when n = 800. This figure demonstrates that the proposed method M₃ of (17) requires the minimum total solution cost in seconds with respect to the perturbations ε_b by Algorithm 6 to achieve the desired accuracy ‖r̃_m‖_∞/‖b̃‖_∞ ≤ 5 × 10^{−7} when n = 800 for Application 15. Table 8 presents the iteration number m performed, TMMs, TCS₁ for an accuracy of ‖r̃_m‖_∞/‖b̃‖_∞ ≤ 5 × 10^{−7}, and the relative ℓ₂ norm of the errors using the methods M_i, i = 1, ..., 8, by Algorithm 6 for Application 15 when n = 800 and ε_b = 10^{−7}. It can be seen from this table that the best relative ℓ₂ norm error is approximately 1.85 × 10. The total iteration number m̄ = Σ m̄_i performed, the TCSSP₁ and TMMs, and the relative ℓ₂ norm of the errors for the last three perturbed systems using the methods M_i, i = 1, ..., 8, by Algorithm 6 for Application 15 when n = 800 are presented in Table 9. This table shows that the best relative ℓ₂ norm error obtained after the fifth successive perturbation is 1.33 × 10^{−6}. Thus, the proposed methods M₃, M₅, and M₇ with Algorithm 6 give solutions at lower total solution cost in seconds compared with the methods of the same orders M₄, M₆, and M₈, respectively, for Application 15. TCS₂ with respect to ε_b by Algorithm 11 applied with the methods M_i, i = 1, ..., 8, for Application 15 when n = 800 is illustrated in Figure 5, which shows that, for the considered values of ε_b, the proposed method M₃ of (17) requires the minimum TCS₂ by Algorithm 11 to achieve the desired accuracy ‖T̃₁‖_∞ < 0.05 and ‖d̃_ℓ‖_∞ < 5 × 10^{−7} when n = 800 for Application 15. The PCS₂, ICS₂, and TCS₂, the iteration numbers m*₁, m*₂ and TBMMs, the maximum norm errors, and the relative ℓ₂ norm errors with respect to ε_b = 10^{−7} of the methods M_i, i = 1, 2, ..., 8, by Algorithm 11 when n = 800 for Application 15 are shown in Table 10.
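Phillips' problem has a standard closed form for the solution, the kernel K(s,t) = φ(s − t), and the right-hand side g; a midpoint-rule sketch of the discretization follows (an assumption for illustration; the paper evaluates at the nodes t_i = −6 + 12i/n):

```python
import numpy as np

def phillips(n):
    """Midpoint-rule discretization of Phillips' problem on [-6, 6], using
    the standard formulation: solution f = phi, kernel K(s,t) = phi(s - t)."""
    h = 12.0 / n
    t = -6 + h * (np.arange(n) + 0.5)            # midpoint quadrature nodes
    phi = lambda u: np.where(np.abs(u) < 3, 1 + np.cos(np.pi * u / 3), 0.0)
    A = h * phi(t[:, None] - t[None, :])         # quadrature of the kernel
    x = phi(t)                                   # exact solution samples
    s = np.abs(t)
    # closed-form right-hand side g(s) of the standard Phillips problem
    b = (6 - s) * (1 + 0.5 * np.cos(np.pi * t / 3)) \
        + 9 / (2 * np.pi) * np.sin(np.pi * s / 3)
    return A, b, x
```

The discretized operator applied to the sampled exact solution reproduces the closed-form right-hand side up to the O(h²) quadrature error, confirming the setup before any noise is added.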
Moreover, to compare our results with existing ones from the literature, we take n = 64 for Phillips' problem. The best relative error of the numerical solution reported in [42], obtained by the Galerkin method with the code phillips, was 1.84 × 10^{−3}. On the other hand, the relative maximum error using the proposed methods M₁, M₃, M₅, M₇ is 1.52 × 10^{−4} by Algorithm 6 when ε_b = 10^{−7}, and 6.45 × 10^{−4} by Algorithm 11 when ε_b = 10^{−6}. The numerical solution of the problems GRE216A, GRE216B, GRE343, GRE512, and GRE1107 is also studied in [43], where the authors provided results obtained by the balance scheme in Table III of [43]. The experimental study of ILU preconditioners for the problems BP1000, GRE1107, and NNC1374 is given in [5], and the authors classify these problems as hard to solve. We present the numerical results for these test problems from the studies [5,43] in Table 12, in which Pnc means the problem is not considered, F denotes failure, and RNEng means the relative norm error (RNE) obtained by the method cited in the corresponding reference is not given. In Table 12, the second column presents the relative norm errors (RNE) and the iteration numbers (iter) of the problems from the GRENOBLE set obtained by the balance scheme, taken from Table III of [43], in which the authors mention that the preconditioned GMRES failed to converge for any of these problems. This table also demonstrates the iteration numbers of the preconditioned GMRES method, or the possible reasons for its failure, for the solution of the problems BP1000, GRE1107, and NNC1374 with the preconditioners ILU(0) taken from Table 3, ILUTP(30, 1.00) taken from Table 5, and ILUTP(30, 0.01) taken from Table 6 of [5], presented in the third, fourth, and fifth columns of Table 12, respectively. Furthermore, we present the solution of some problems from the GRENOBLE set in Application 16 for the minimal values of the relative ℓ₂ norm of the errors with respect to ε_b that achieve the desired accuracy ‖T̃₁‖_∞ < 0.05 and ‖d̃_ℓ‖_∞ < ε
using the method M₃ by Algorithm 11, in Table 13. We conclude that the proposed algorithms give stable and highly accurate solutions of the considered test problems when compared with the results in Table 12. Moreover, taking into consideration that the authors of [5] classified the problem NNC1374 as a very hard problem to solve, Algorithm 6 gave a stable and reasonably accurate solution of this problem.

Concluding Remarks
In this study, for any integer k ≥ 1, we propose a family of methods with recursive structure for computing the approximate matrix inverse of a real nonsingular square matrix with convergence order p = 4k + 3. It is proven that these methods require κ = k + 4 matrix-by-matrix multiplications per iteration, fewer than the κ = p = 4k + 3 of the standard hyperpower method of the same order. The proposed family of methods performs ν = 2k matrix-by-matrix additions other than additions of the identity matrix, and η = k + 2 matrix additions of the identity per iteration. An algorithm is proposed that constructs a 2 × 2 block ILU decomposition based on an approximate Schur complement (Schur-BILU) for the coefficient matrix of the algebraic linear system arising from ill-posed discrete problems with noisy data. From the proposed methods of approximate matrix inversion, the methods of orders p = 7, 11, 15, 19 are applied to approximate the Schur complement matrices, and the obtained approximate Schur-BILU preconditioner is used to precondition the one step stationary iterative method. This computational economy is useful in many practical problems, such as the solution of Fredholm integral equations of the first kind, and the numerical results justify the theoretical results. Furthermore, the cost of constructing the approximate Schur-BILU preconditioner can be amortized over systems with the same coefficient matrix and different right-hand sides, since the preconditioner can be reused several times. Table 14 shows that Algorithm 11 is at least two times faster than Algorithm 6 used with the methods M_i, i = 1, 2, ..., 8, in both considered problems when n = 800, with ε_b = 10^{−5} for Application 14 and ε_b = 10^{−7} for Application 15.

Data Availability
Data from the "Matrix Market" repository, organized by the National Institute of Standards and Technology, are used to support this study.

Table 1 :
Computational complexity of the proposed methods for real nonsingular matrices of size n × n.

Table 2 :
Asymptotic convergence values (27) for the proposed family of methods (17) and for the methods from the literature of orders p = 7, 11, 15, 19.

Table 7 :
Condition number of the matrix Ã of Application 15, with respect to ε_b = 0.5 ε_A^{1.5}, when n = 800.

Table 11 :
TCSSP₁, total iteration numbers, and relative ℓ₂ norm errors obtained by Algorithm 6 with method M₃ for the test problems of Application 16. Let ỹ_{m̄_i}, i = 1, ..., 6, be the approximate solutions obtained by Algorithm 6 by performing m̄_i iterations to the prescribed accuracy of ‖r̃_m‖_∞/‖b̃‖_∞ for the considered perturbed systems. The minimal values of the relative ℓ₂ norm of the errors with respect to ε_b using the method M₃ by Algorithm 6 for the test problems, the total iteration number m̄ = Σ_{i=1}^{6} m̄_i performed, and TCSSP₁ are presented in the table.

Table 12 :
Results of test problems from the literature.

Table 13 :
Computational costs in seconds, iteration numbers, and relative ℓ₂ norm errors obtained by Algorithm 11 with method M₃ for some problems of Application 16.

Table 14 :
Ratios of total solution costs in seconds of Algorithms 6 and 11 for both applications using the methods M_i, i = 1, ..., 8, when n = 800.