Refinement of Multiparameters Overrelaxation (RMPOR) Method

In this paper, we present the refinement of multiparameter overrelaxation (RMPOR) method, which is used to solve linear systems of equations. We investigate its convergence properties for different classes of matrices, namely strictly diagonally dominant matrices, symmetric positive definite matrices, and M-matrices. The proposed method requires fewer iterations than the multiparameter overrelaxation (MPOR) method, and its spectral radius is smaller. To show the efficiency of the proposed method, we prove some theorems and present some numerical examples.


Introduction
Large and sparse linear systems of the form $Ax = b$ (1) can be solved using iterative methods. One technique for obtaining iterative methods is splitting the coefficient matrix. $A = M - N$ and $A = D - \sum_{i=1}^{k} E_i - F$ are two such splittings of the matrix $A$. Using either splitting, we can derive iterative methods to solve the linear system (1).
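As an illustration of the second splitting, the following sketch builds $D$, $E_1$, $E_2$, and $F$ for a hypothetical $3 \times 3$ matrix with $k = 2$ (the matrix and the way the lower part is divided are illustrative choices only) and verifies that $A = D - (E_1 + E_2) - F$:

```python
import numpy as np

# Hypothetical 3x3 matrix, used only for illustration.
A = np.array([[ 4.0, -1.0, -1.0],
              [-2.0,  6.0, -1.0],
              [-1.0, -1.0,  5.0]])

D = np.diag(np.diag(A))      # diagonal part of A
L = -np.tril(A, -1)          # strictly lower triangular part of D - A
F = -np.triu(A,  1)          # strictly upper triangular part of D - A

# Split the strictly lower part into k = 2 pieces:
# E1 carries the first subdiagonal, E2 the remaining lower entries.
E1 = np.diag(np.diag(L, -1), -1)
E2 = L - E1

# The splitting reproduces A exactly: A = D - (E1 + E2) - F.
recombined = D - (E1 + E2) - F
```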
Recently, many researchers have used the splitting $A = D - \sum_{i=1}^{k} E_i - F$, where $\sum_{i=1}^{k} E_i$ is the strictly lower triangular part of $D - A$ and $F$ is the strictly upper triangular part of $D - A$, to obtain new iterative methods. Song and Dai [1] introduced the multiparameters overrelaxation (MPOR) method, whose special cases include many classical iterative methods. The convergence conditions with $A$ being a Hermitian matrix, an L-matrix, an M-matrix, an H-matrix, and a strictly diagonally dominant matrix were derived. Kuang and Ji [2] presented a two-parameter iterative method called the TOR method, which is effective for the numerical solution of partial differential equations. Wang extended the TOR method to the GTOR method and improved some results of Ju, Wang, and Zeng. In [3], O'Leary and White proposed multisplitting methods, which are based on several splittings of the matrix $A$. More precisely, a multisplitting of $A$ is defined as a collection of triples $(M_k, N_k, E_k)$, $k = 1, 2, \ldots, K$, such that for all $k$, $M_k$, $N_k$, and $E_k$ are $n \times n$ matrices, each $M_k$ is nonsingular, $A = M_k - N_k$, and $E_k$ is a diagonal matrix with nonnegative entries satisfying $\sum_{k=1}^{K} E_k = I$. The corresponding multisplitting method to solve (1) is given by the iteration $x^{m+1} = \sum_{k=1}^{K} E_k y^{m,k}$, $m = 0, 1, 2, \ldots$, where $M_k y^{m,k} = N_k x^m + b$, $k = 1, 2, \ldots, K$. Similarly, the convergence of the parallel multisplitting TOR method is studied for M-matrices in [4]. Chang studied the convergence of the parallel multisplitting TOR method for H-matrices, but Zhang et al. [5] found some gaps in the proof of the theorem.
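A minimal sketch of a multisplitting iteration of this type, on a hypothetical $2 \times 2$ system with $K = 2$ splittings (a Jacobi-like and a Gauss-Seidel-like one) and equal diagonal weights; these particular choices are illustrative assumptions, not taken from [3]:

```python
import numpy as np

# Hypothetical 2x2 system used only for illustration.
A = np.array([[ 4.0, -1.0],
              [-2.0,  5.0]])
b = np.array([3.0, 3.0])

# Two splittings A = M_k - N_k, with diagonal weights E_k >= 0
# that sum to the identity.
M1 = np.diag(np.diag(A)); N1 = M1 - A   # Jacobi-like
M2 = np.tril(A);          N2 = M2 - A   # Gauss-Seidel-like
E1 = np.diag([0.5, 0.5]); E2 = np.eye(2) - E1

# Multisplitting iteration: solve M_k y_k = N_k x + b for each k,
# then combine x <- E1 y1 + E2 y2.
x = np.zeros(2)
for _ in range(200):
    y1 = np.linalg.solve(M1, N1 @ x + b)
    y2 = np.linalg.solve(M2, N2 @ x + b)
    x = E1 @ y1 + E2 @ y2
```

In a parallel setting, the two inner solves are independent and can run on separate processors, which is the main attraction of multisplitting methods.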
There are many other techniques for obtaining new iterative methods, such as extrapolation, overrelaxation, and acceleration. Up to now, many first-order stationary iterative methods have been proposed. They include the well-known Jacobi, JOR, Gauss-Seidel, SOR, AOR, GAOR, and TOR methods, among others [1].
Partial differential equations (PDEs) can be solved using finite difference and finite element methods. These two methods have deficiencies in accuracy and code implementation, but there is a novel, efficient matrix approach for solving second-order linear matrix partial differential equations (MPDEs) under given initial conditions, proposed by Tohidi and Khorsand Zak [6]. Similarly, Asgari et al. [7] introduced an extended block Golub-Kahan algorithm for large algebraic and differential matrix Riccati equations, providing a new theoretical analysis of the method and of the norm of the residual. The remainder of this paper is organized as follows. In Section 2, we review some preliminaries which underlie our basic idea. In Section 3, the main result of this paper is presented; convergence analysis of the proposed scheme is provided, and numerical examples are considered. In Section 4, we draw a brief conclusion from our main results.

Preliminaries
The solution of equation (1) can be obtained using the following methods. In [2], the lower part of $A$ is split into two parts, $E_1$ and $E_2$. Moreover, $E_1$ and $E_2$ are multiplied by different factors, and thus a new scheme was introduced.
That is, the spectrum of a matrix $A$, denoted $\sigma(A)$, is the set of all its eigenvalues.

Definition 3.
The spectral radius of a matrix $A$ is denoted by $\rho(A)$ and is defined by $\rho(A) = \max_{\lambda \in \sigma(A)} |\lambda|$.
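The definition translates directly into a short NumPy routine; the $2 \times 2$ matrix below is a hypothetical example whose eigenvalues are $3$ and $1$, so its spectral radius is $3$:

```python
import numpy as np

# rho(A) = max over lambda in sigma(A) of |lambda|,
# computed here from the full set of eigenvalues.
def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

# Hypothetical example: eigenvalues of [[2, 1], [1, 2]] are 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
rho = spectral_radius(A)
```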
provided that $\det(D - cE) \neq 0$, and the AOR method converges.

Main Results
Let $Ax = b$ be given, where $A$ is any nonsingular square matrix.
After some steps, one can get the following. Equation (5) is called the refiner for the multiparameter overrelaxation (MPOR) method. To obtain the RMPOR method, we first substitute the MPOR method into equation (5) or equation (4) and then simplify and rearrange; the result is equation (6), the refinement of multiparameter overrelaxation (RMPOR) iterative method.
Equation (6) can also be written in an equivalent form, and special cases are obtained by fixing the parameters: one case arises if $a_1 = a_2 = a_3 = \cdots = a_k = c$ for any $\omega$; (vi) the refinement of two-parameter overrelaxation (RTOR) method arises if $a_1$, $a_2$, and $\omega$ are kept and the remaining parameters satisfy $a_3 = a_4 = \cdots = a_k = 0$; and so on.
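Since equation (6) is not reproduced here, the following sketch illustrates only the general refinement idea behind such schemes: if the base method is $x \leftarrow Gx + c$, a refinement step applies the update twice, so the iteration matrix becomes $G^2$ and the spectral radius drops from $\rho(G)$ to $\rho(G)^2$. SOR is used as a stand-in for the MPOR family (the special case $a_1 = \cdots = a_k = \omega$); the matrix, right-hand side, and $\omega$ are hypothetical:

```python
import numpy as np

# Hypothetical SPD tridiagonal system with exact solution [1, 2, 3].
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
omega = 0.9

D = np.diag(np.diag(A))
E = -np.tril(A, -1)          # strictly lower part of D - A
F = -np.triu(A,  1)          # strictly upper part of D - A

# SOR splitting: x <- G x + c with M = D - omega E, N = (1-omega) D + omega F.
M = D - omega * E
N = (1.0 - omega) * D + omega * F
G = np.linalg.solve(M, N)
c = omega * np.linalg.solve(M, b)

def base_step(x):            # one base (MPOR-type) step
    return G @ x + c

def refined_step(x):         # refinement: apply the base update twice
    return base_step(base_step(x))

rho_base = max(abs(np.linalg.eigvals(G)))
rho_refined = max(abs(np.linalg.eigvals(G @ G)))

x = np.zeros(3)
for _ in range(100):
    x = refined_step(x)
```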

Convergence of RMPOR Method
Theorem 6. If $A$ is an SDD matrix, then the MPOR method is convergent for $0 \le a_i \le 1$ and $0 < \omega < 1$. Proof.
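Strict diagonal dominance, the hypothesis of Theorem 6, is easy to verify row by row; the matrices below are hypothetical examples used only to exercise the check:

```python
import numpy as np

# A is strictly diagonally dominant (SDD) if |a_ii| > sum_{j != i} |a_ij|
# holds for every row i.
def is_sdd(A):
    d = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - d
    return bool(np.all(d > off))

A_sdd = np.array([[6.0, 2.0, 2.0],
                  [2.0, 5.0, 1.0],
                  [1.0, 1.0, 4.0]])   # 6 > 4, 5 > 3, 4 > 2
A_not = np.array([[1.0, 2.0],
                  [3.0, 1.0]])        # 1 < 2 in the first row
```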

Theorem 7. If $A$ is symmetric and positive definite (SPD) and $D - \sum_{i=1}^{k} a_i E_i$ is nonsingular, then the MPOR method is convergent. Proof.
It now suffices to show that the spectral radius of $G$ satisfies $\rho(G) < 1$. Let $\lambda$ be an eigenvalue of $G$, and let $x$ be a corresponding eigenvector.

Theorem 8. If $A$ is an M-matrix and $0 \le a_i \le \omega < 1$, then the MPOR iterative method is convergent.
From this, we have $A \le M$. Therefore, $M$ is an M-matrix and, as a result, $M^{-1} \ge O$. On the other hand, for $0 < a_i < 1$, we have $\rho(\sum_{i=1}^{k} a_i D^{-1} E_i) < 1$, and therefore the corresponding bound holds for $0 \le a_i \le \omega < 1$.
Hence, we conclude that $A = M - N$ is a weak regular splitting. Thus, using Theorem 2, we conclude that $\rho(M^{-1}N) < 1$. This completes the proof.
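The M-matrix hypothesis of Theorem 8 can be checked numerically via its standard characterization: nonpositive off-diagonal entries and an entrywise nonnegative inverse. The test matrix below is a hypothetical example (its inverse has every entry $\ge 0$):

```python
import numpy as np

# A nonsingular M-matrix has a_ij <= 0 for i != j and A^{-1} >= 0 entrywise.
def is_m_matrix(A, tol=1e-12):
    off_ok = np.all(A - np.diag(np.diag(A)) <= tol)
    inv_ok = np.all(np.linalg.inv(A) >= -tol)
    return bool(off_ok and inv_ok)

A = np.array([[ 3.0, -1.0, -1.0],
              [-1.0,  3.0, -1.0],
              [-1.0, -1.0,  3.0]])
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])   # positive off-diagonal entry: not an M-matrix
```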

Theorem 9. Let $A$ be strictly diagonally dominant. Then the RMPOR method converges.
Taking norms on both sides and applying the triangle inequality, from the fact that $\|\sum_{i=1}^{k} a_i L_i\|_\infty < 1$ and $\lambda < 1$, the result follows. Hence, the theorem is proved.

Theorem 10. If $A$ is symmetric and positive definite (SPD) and $\det(D - \sum_{i=1}^{k} a_i L_i) \neq 0$, then the refinement of multiparameter overrelaxation (RMPOR) method is convergent for any initial guess $x^{(0)}$.
On the other hand, (2) if $x^* \sum_{i=1}^{k} a_i E_i x < 0$, then $(\lambda + \omega - 1 - \omega\mu)/(\lambda - 1) < 0$. Again, from this we have two cases: (i) $\lambda - 1 < 0$ and $\lambda + \omega - 1 - \omega\mu > 0$; in this case, we have $\lambda < 1$. Thus, we must choose the second possibility, which is $x^* \sum_{i=1}^{k} a_i E_i x < 0$. Therefore, we can conclude the required bound. Now, let us prove the theorem using equation (27).
Using equation (34) and the SPD property of the matrix $A$, we obtain (36). Therefore, the MPOR method is convergent. Now, let us prove the convergence of the RMPOR iterative method.
From this, we can conclude that the RMPOR method is convergent if $A$ is an SPD matrix.

Theorem 11. Let $A$ be an M-matrix. If the stated conditions hold, then $\rho(L^2_{a_i,\omega}) < 1$. That is, the RMPOR iterative method is convergent.
Proof. For the case $\omega \le 1$, the statement has already been proved. Now, we consider the case $\omega > 1$. We assume $\rho(L_{a_i,\omega}) > 1$ and denote (40)

Journal of Mathematics
By direct comparison, there exists a nonzero vector $x \ge 0$ satisfying the corresponding inequality; thus, and consequently, this contradicts the hypothesis. Therefore, we have $\rho(L_{a_i,\omega}) < 1$.

Numerical Examples
The spectral radius of $L_{a_i,\omega}$ is $\rho(L_{a_i,\omega}) = 0.5731$ and $\rho(L^2_{a_i,\omega}) = 0.3284$, which means that RMPOR is faster than the MPOR iterative method.
From Table 1, the RMPOR method converges to the exact solution faster than the MPOR method. Its spectral radius is also smaller than that of the MPOR method. Similarly, the rate of convergence of RMPOR is larger than that of the MPOR method. Hence, the proposed method is better than the MPOR method.
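A quick arithmetic sanity check, under the assumption (suggested by Theorem 11's use of $\rho(L^2_{a_i,\omega})$) that the RMPOR iteration matrix is the square of the MPOR one, so the two reported spectral radii should satisfy $\rho_{\mathrm{RMPOR}} = \rho_{\mathrm{MPOR}}^2$:

```python
# Reported values from the example: rho(L) = 0.5731 and rho(L^2) = 0.3284.
# Squaring the iteration matrix squares the spectral radius, so the two
# figures should agree up to rounding: 0.5731**2 = 0.32844...
rho_mpor = 0.5731
rho_rmpor = 0.3284
discrepancy = abs(rho_mpor ** 2 - rho_rmpor)
```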
Example 2 (see [10]). Consider the system of linear equations whose first equation is $6x_1 + 2x_2 + 2x_3 = 5$. The coefficient matrix $A$ is both an SDD and an SPD matrix. The system is solved with tolerance 0.0001 using the SOR, AOR, TOR, MPOR, and RMPOR iterative methods, with $a_1 = 0.4$, $a_2 = 0.5$, $a_3 = 0.6$, and $\omega = 0.9$. We also used the splitting of $E$ as $E = E_1 + E_2$ for the TOR method and $E = E_1 + E_3 + E_4$ for the MPOR and RMPOR iterative methods. From Table 2, the number of iterations of RMPOR is smaller than that of the MPOR method. The spectral radius of the RMPOR method is smaller than that of the MPOR method. Similarly, the rate of convergence of RMPOR is larger than that of the MPOR method. Hence, the proposed method is better than the MPOR method.

Conclusion
Iterative methods are essential for solving large and sparse linear systems of equations, which arise from the discretization of PDE and ODE problems. Iterative methods can be obtained by splitting the coefficient matrix.
There are many techniques for splitting the matrix $A$. One of them is the multiparameter splitting technique discussed in this paper. The method obtained by this technique efficiently solves systems of linear equations whose coefficient matrices are SDD, SPD, or M-matrices, and the RMPOR iterative method is better than the MPOR iterative method.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.