Generalized Refinement of Gauss-Seidel Method for Consistently Ordered 2-Cyclic Matrices

This paper presents a generalized refinement of the Gauss-Seidel method for solving systems of linear equations whose coefficient matrices are consistently ordered 2-cyclic. Consistently ordered 2-cyclic matrices arise when the finite difference method is applied to solve differential equations. Suitable theorems are introduced to verify the convergence of the proposed method, and several numerical examples illustrate its effectiveness. The study shows that the generalized refinement of the Gauss-Seidel method reaches the solution of a problem in fewer iterations and attains a higher rate of convergence than the earlier methods.


Introduction
Consider the problem of large and sparse linear systems of the form Ax = b (1), where A = (a_ij) is a nonsingular real matrix of order n, b is a given n-dimensional real vector, and x is the n-dimensional vector to be determined. By the splitting A = D − L − U, where D is a diagonal matrix with a_ii ≠ 0, −L is the strictly lower triangular part of A, and −U is the strictly upper triangular part of A, different iterative methods have been developed. Recent research shows that generalization, refinement, and extrapolation (acceleration or relaxation) are used to modify the Gauss-Seidel method. Salkuyeh [1] introduced the generalized Gauss-Seidel (GGS) method and discussed its convergence for strictly diagonally dominant (SDD) matrices and M-matrices. The refinement of the Gauss-Seidel (RGS) method was studied by Vatti and Eneyew [2], who proved its convergence for SDD matrices. Recently, Enyew et al. [3] developed a second refinement of the Gauss-Seidel method for SDD, symmetric positive definite (SPD), and M-matrices. All of these works aim to minimize the number of iterations and improve the rate of convergence, and research on iterative methods remains active. In line with this, in this paper we study the generalized refinement of Gauss-Seidel (GRGS) iterative method, which accelerates the convergence of the basic Gauss-Seidel method. By the GRGS method, we mean the k-th RGS method, where k = 0, 1, 2, 3, ⋯.
The Jacobi iterative method is x^(n+1) = D^(−1)(L + U)x^(n) + D^(−1)b (2), the Gauss-Seidel (GS) iterative method is x^(n+1) = (D − L)^(−1)Ux^(n) + (D − L)^(−1)b (3), and the successive overrelaxation (SOR) iterative method is x^(n+1) = (D − ωL)^(−1)[(1 − ω)D + ωU]x^(n) + ω(D − ωL)^(−1)b (4), where 0 < ω < 2. Equations (2), (3), and (4) can be written in the common form x^(n+1) = Gx^(n) + c, with iteration matrices G_J = D^(−1)(L + U), G_GS = (D − L)^(−1)U, and G_SOR = (D − ωL)^(−1)[(1 − ω)D + ωU], respectively.
Definition 1. The spectrum of a matrix A ∈ ℂ^(n×n) is defined by σ(A) = {λ ∈ ℂ : det(A − λI) = 0}; that is, the spectrum of A is the set of all its eigenvalues.
Definition 2. The spectral radius of a matrix A is denoted by ρ(A) and is defined by ρ(A) = max_{λ∈σ(A)} |λ|.
Definition 3. The (asymptotic) rate of convergence of A is denoted by R_∞(A) and is defined by R_∞(A) = −log ρ(A).
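The GS and SOR sweeps of (3) and (4) and the quantities of Definitions 2 and 3 can be sketched in NumPy as follows (a minimal illustration; the function names are ours, and Definition 3 is taken here with the base-10 logarithm):

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One GS sweep, equivalent to x_new = (D - L)^(-1) (U x + b)."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    return x

def sor_step(A, b, x, omega):
    """One SOR sweep with relaxation factor 0 < omega < 2."""
    x = x.copy()
    for i in range(len(b)):
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
        x[i] = (1 - omega) * x[i] + omega * (b[i] - s) / A[i, i]
    return x

def spectral_radius(G):
    """Definition 2: rho(G) = max |lambda| over sigma(G)."""
    return max(abs(np.linalg.eigvals(G)))

def rate_of_convergence(G):
    """Definition 3 (base-10 logarithm assumed): R_inf(G) = -log rho(G)."""
    return -np.log10(spectral_radius(G))

# Small SDD test system: GS converges to the exact solution
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 6.0, 2.0])
x = np.zeros(3)
for _ in range(30):
    x = gauss_seidel_step(A, b, x)
print(np.allclose(A @ x, b, atol=1e-8))
```

Note that with ω = 1 the SOR sweep reduces to the GS sweep, consistent with (3) and (4).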

Theorem 5 ([5]). Let A be consistently ordered and 2-cyclic with nonvanishing diagonal elements, and let G_J = I − (diag A)^(−1)A = D^(−1)(L + U). Then, (a) if μ is any eigenvalue of G_J of multiplicity p, then −μ is also an eigenvalue of G_J of multiplicity p, and (b) λ ≠ 0 is an eigenvalue of the SOR iteration matrix if and only if λ satisfies λ + ω − 1 = ωμλ^(1/2) for some eigenvalue μ of G_J.
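Both parts of the theorem can be checked numerically on a tridiagonal (hence consistently ordered 2-cyclic) test matrix of our choosing; with ω = 1, part (b) gives λ = μ², i.e., ρ(G_GS) = ρ(G_J)²:

```python
import numpy as np

# A consistently ordered 2-cyclic (tridiagonal) test matrix
n = 5
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
D_inv = np.diag(1.0 / np.diag(A))
G_J = np.eye(n) - D_inv @ A                      # Jacobi iteration matrix

mu = np.sort(np.linalg.eigvals(G_J).real)
# Part (a): the spectrum of G_J is symmetric about the origin
print(np.allclose(mu, -mu[::-1]))

# Part (b) with omega = 1: lambda + omega - 1 = omega*mu*lambda^(1/2)
# gives lambda = mu^2, so rho(G_GS) = rho(G_J)^2
G_GS = np.linalg.solve(np.tril(A), -np.triu(A, 1))   # (D - L)^(-1) U
rho = lambda G: max(abs(np.linalg.eigvals(G)))
print(np.isclose(rho(G_GS), rho(G_J) ** 2))
```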

Main Result
The system Ax = b could be written as (D − L)x = Ux + b. After rearranging and multiplying both sides by (D − L)^(−1), we get the identity x = x + (D − L)^(−1)(b − Ax) (10), or, in iterative form, x^(n+1) = x̃^(n+1) + (D − L)^(−1)(b − Ax̃^(n+1)) (11), where x̃^(n+1) is an approximation of x^(n+1). Equation (10) or (11) is a refiner of the Gauss-Seidel method: substituting the Gauss-Seidel scheme (3) in the place of x̃^(n+1) gives the refinement of the Gauss-Seidel method. Applying the refinement step k times, we derive the generalized refinement of the Gauss-Seidel method: x^(n+1) = G_GS^(k+1) x^(n) + (I + G_GS + ⋯ + G_GS^k)(D − L)^(−1)b (12), where G_GS = (D − L)^(−1)U. Equation (12) is called the generalized refinement of Gauss-Seidel (GRGS) method or the k-th refinement of Gauss-Seidel (k-th RGS) method. Equation (12) can also be denoted compactly by x^(n+1) = G_GRGS x^(n) + c_GRGS with G_GRGS = G_GS^(k+1). If k = 0, the scheme is reduced to the Gauss-Seidel method.
If k = 1, the method is reduced to the refinement of Gauss-Seidel method.
If k = 2, the scheme is reduced to the second refinement of Gauss-Seidel method and so on.
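A quick sanity check of the derivation (a sketch on a random diagonally dominant matrix of our choosing): a single refinement step (11) applied to any vector coincides with one GS sweep (3) started from that vector, which is why each added refinement compounds one more GS sweep.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = np.diag(5.0 + rng.random(n))            # dominant diagonal D
A -= np.tril(rng.random((n, n)), -1)        # subtract strictly lower part (-L)
A -= np.triu(rng.random((n, n)), 1)         # subtract strictly upper part (-U)
b = rng.random(n)
x = rng.random(n)

DL = np.tril(A)                             # D - L
U = -np.triu(A, 1)
refined = x + np.linalg.solve(DL, b - A @ x)   # refinement step (11)
gs_sweep = np.linalg.solve(DL, U @ x + b)      # GS sweep (3) started from x
print(np.allclose(refined, gs_sweep))
```

The identity holds for any x whenever D − L is nonsingular, since b − Ax = b − (D − L)x + Ux.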

Algorithm for GRGS Method
Given the splitting A = D − L − U, where D − L is nonsingular and U is a nonzero matrix, the inputs are an initial approximation x^(0), a tolerance TOL, and a maximum number of iterations N.
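The algorithm can be sketched in NumPy as follows (a minimal illustration; the function name `grgs` and the stopping test on the infinity norm of successive differences are our choices):

```python
import numpy as np

def grgs(A, b, x0, k, tol=1e-10, N=500):
    """Generalized refinement of Gauss-Seidel (k-th RGS) method.

    Each outer iteration performs one GS sweep followed by k refinement
    steps x <- x + (D - L)^(-1) (b - A x).  k = 0 is plain GS, k = 1 the
    RGS method, k = 2 the second refinement, and so on.
    """
    DL = np.tril(A)                 # D - L (lower triangle incl. diagonal)
    U = -np.triu(A, 1)              # strictly upper part with sign flipped
    x = x0.astype(float).copy()
    for it in range(1, N + 1):
        x_old = x
        x = np.linalg.solve(DL, U @ x + b)        # GS sweep (3)
        for _ in range(k):                         # k refinement steps (11)
            x = x + np.linalg.solve(DL, b - A @ x)
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it
    return x, N

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x_gs, n_gs = grgs(A, b, np.zeros(3), k=0)   # plain Gauss-Seidel
x_2, n_2 = grgs(A, b, np.zeros(3), k=2)     # second refinement
print(n_2 <= n_gs, np.allclose(A @ x_2, b))
```

As expected, the k = 2 scheme needs no more outer iterations than plain GS at the same tolerance.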

Convergence of Generalized Refinement of Gauss-Seidel Method
Theorem 8. If A is a consistently ordered 2-cyclic matrix, then ρ(G_GRGS) = ρ(G_GS)^(k+1) = ρ(G_J)^(2(k+1)). The Gauss-Seidel method converges if and only if the generalized refinement of Gauss-Seidel method converges, and the GRGS method converges (k + 1) times as fast as the Gauss-Seidel method and 2(k + 1) times as fast as the Jacobi method.
Proof. Let A be a consistently ordered 2-cyclic matrix with nonvanishing diagonal elements. Then, by Theorem 6, we have ρ(G_GS) = ρ(G_J)², which implies that the Gauss-Seidel method converges twice as fast as the Jacobi method, and, by Theorem 7, the Jacobi method is convergent if and only if the Gauss-Seidel method is convergent. Similarly, if GS converges, then the RGS, SRGS, 3rd-RGS, …, and GRGS methods converge. Since G_GRGS = G_GS^(k+1) by (12), the spectral radius of the GRGS method is ρ(G_GRGS) = ρ(G_GS^(k+1)) = ρ(G_GS)^(k+1) = ρ(G_J)^(2(k+1)). Therefore, the GRGS method converges (k + 1) times as fast as the Gauss-Seidel method and 2(k + 1) times as fast as the Jacobi method.
Here, one can deduce that the number of iterations of the GRGS method, denoted n(G_GRGS), satisfies n(G_GRGS) ≈ n(G_GS)/(k + 1). Similarly, the rate of convergence of GRGS is R_∞(G_GRGS) = −log ρ(G_GRGS) = (k + 1)R_∞(G_GS) = 2(k + 1)R_∞(G_J).
Theorem 9. The generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel method and the refinement of Gauss-Seidel method whenever the Gauss-Seidel method is convergent.
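The spectral radius and rate-of-convergence relations of Theorem 8 can be verified numerically on a tridiagonal test matrix of our choosing:

```python
import numpy as np

n = 6
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
DL = np.tril(A)                                   # D - L
U = -np.triu(A, 1)
G_J = np.eye(n) - np.diag(1.0 / np.diag(A)) @ A   # Jacobi iteration matrix
G_GS = np.linalg.solve(DL, U)                     # GS iteration matrix
rho = lambda G: max(abs(np.linalg.eigvals(G)))

k = 3
G_GRGS = np.linalg.matrix_power(G_GS, k + 1)      # GRGS iterates G_GS^(k+1)
print(np.isclose(rho(G_GRGS), rho(G_GS) ** (k + 1)),
      np.isclose(rho(G_GRGS), rho(G_J) ** (2 * (k + 1))))

# Rate of convergence: R_inf(G_GRGS) = (k + 1) R_inf(G_GS)
R = lambda G: -np.log10(rho(G))
print(np.isclose(R(G_GRGS), (k + 1) * R(G_GS)))
```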
Proof. Suppose the Gauss-Seidel method is convergent, so that ρ(G_GS) < 1. If we consider the Gauss-Seidel method, the error satisfies e^(n+1) = G_GS e^(n). For the refinement of Gauss-Seidel method, e^(n+1) = G_GS² e^(n); for the second refinement of Gauss-Seidel method, e^(n+1) = G_GS³ e^(n); and, finally, for the generalized refinement of Gauss-Seidel method, e^(n+1) = G_GS^(k+1) e^(n). Since ρ(G_GS) < 1, we have ρ(G_GS^(k+1)) = ρ(G_GS)^(k+1) ≤ ρ(G_GS)³ ≤ ρ(G_GS)² < ρ(G_GS) for k ≥ 2.
Therefore, the generalized refinement of Gauss-Seidel method converges faster than the Gauss-Seidel method, refinement of Gauss-Seidel method, and second-refinement of Gauss-Seidel method.

Numerical Examples
Example 1. Consider an M-matrix A (or a 2-cyclic matrix A) arising from the discretization of the Poisson equation ∂²T/∂x² + ∂²T/∂y² = f on the unit square, as considered in [6].
Solve the resulting system in the unknowns x_1, …, x_5 with tol = 0.0001.
From Table 1, one can see that the GRGS method performs much better than the SOR method. Choosing the 8th-RGS method, we obtain the solution at the first iteration, so the method behaves, in effect, like a direct method. The 8th-RGS method even outperforms nonstationary methods such as CG, BiCG, and MINRES.
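The 5 × 5 system of this example is not reproduced here, so as a stand-in we take a small 1D Poisson (tridiagonal 2-cyclic M-) matrix and count outer iterations of the k-th RGS scheme at the same tolerance; the qualitative picture, far fewer iterations as k grows, matches Table 1:

```python
import numpy as np

def grgs_iters(A, b, k, tol=1e-4, N=200):
    """Outer iterations of the k-th RGS method to reach tolerance tol."""
    DL = np.tril(A)
    U = -np.triu(A, 1)
    x = np.zeros(len(b))
    for it in range(1, N + 1):
        x_old = x
        x = np.linalg.solve(DL, U @ x + b)        # GS sweep
        for _ in range(k):                         # k refinement steps
            x = x + np.linalg.solve(DL, b - A @ x)
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return it
    return N

# Stand-in 2-cyclic M-matrix (1D Poisson); the paper's 5x5 system
# from [6] is not reproduced in the extraction
A = np.diag(2.0 * np.ones(5)) + np.diag(-np.ones(4), 1) + np.diag(-np.ones(4), -1)
b = np.ones(5)
iters = [grgs_iters(A, b, k) for k in range(9)]
print(iters[8] < iters[0])
```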
Example 2. Consider the steady-state heat distribution in a thin square metal plate with dimensions 0.5 m by 0.5 m using n = m = 4. Two adjacent boundaries are held at 0°C, and the heat on the other boundaries increases linearly from 0°C at one corner to 100°C where the sides meet.
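A sketch of the standard 5-point finite difference setup for this plate (with n = m = 4 the interior is a 3 × 3 grid and h = 0.125 m; the boundary orientation below, zero on the sides x = 0 and y = 0 and linear on the other two, is our assumed layout):

```python
import numpy as np

h = 0.125                       # 0.5 m side with n = m = 4 subdivisions
m = 3                           # interior points per direction

def boundary(x, y):
    """Assumed layout: 0 C on the sides x = 0 and y = 0, rising
    linearly to 100 C where the sides x = 0.5 and y = 0.5 meet."""
    if np.isclose(x, 0.0) or np.isclose(y, 0.0):
        return 0.0
    if np.isclose(x, 0.5):
        return 200.0 * y
    return 200.0 * x            # side y = 0.5

n = m * m
A = np.zeros((n, n))
rhs = np.zeros(n)
for j in range(m):              # y index of interior point
    for i in range(m):          # x index of interior point
        p = j * m + i
        A[p, p] = 4.0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < m and 0 <= jj < m:
                A[p, jj * m + ii] = -1.0           # interior neighbour
            else:                                  # neighbour on the boundary
                rhs[p] += boundary((ii + 1) * h, (jj + 1) * h)

w = np.linalg.solve(A, rhs)     # interior temperatures
print(0.0 < w.min() and w.max() < 100.0)
```

The resulting 9 × 9 matrix is the consistently ordered 2-cyclic system to which the GRGS method is applied; by the maximum principle, all interior temperatures lie strictly between 0°C and 100°C, and the solution is symmetric under swapping x and y.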

Example 3. Consider the Poisson equation
on the square bounded by the lines x = 0, y = 0, x = 1, and y = 1, with u = 0 on the boundary and mesh h = k = 1/4.

Solution.
Using a finite difference formula, we obtain a linear system. For μ = ρ(G_J) = 0.7071, one can get the optimal relaxation factor ω = 2/(1 + √(1 − μ²)) = 1.1716. Table 3 shows that the 9th-RGS method has the minimum spectral radius and the maximum rate of convergence among the methods listed in the table. We deduce that the GRGS method is preferable to the other iterative methods, including the nonstationary methods, and behaves, in effect, like a direct method.
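The optimal ω and the comparison with the 9th-RGS spectral radius can be reproduced directly from the numbers in the text (using the classical formula ρ(G_SOR) = ω − 1 at the optimal ω for consistently ordered matrices):

```python
import numpy as np

mu = 0.7071                                  # rho(G_J) from the example
omega = 2.0 / (1.0 + np.sqrt(1.0 - mu ** 2))
print(round(omega, 4))                       # 1.1716, matching the text

rho_sor = omega - 1.0                        # optimal-SOR spectral radius
rho_9rgs = (mu ** 2) ** 10                   # rho(G_GS)^(k+1) with k = 9
print(rho_9rgs < rho_sor)                    # 9th-RGS beats optimal SOR
```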
(iv) The SOR iterative method requires the smallest number of iterations compared to the Jacobi and GS methods, but it requires more iterations than the GRGS method. Even though the SOR method is the best classical method for consistently ordered matrices, our new modified method, GRGS, is much better than the SOR method.

Data Availability
No data were used to support this study.