Numerical Solution of Sylvester Equation Based on Iterative Predictor-Corrector Method

The inspiration of the study concerns an iterative predictor-corrector method with order of convergence p = 45 for computing the inverse of the coefficient matrix Λ = (I_n ⊗ A) + (B^T ⊗ I_m), which is obtained from the Sylvester equation AX + XB = C. The numerical solutions of three examples by the predictor-corrector algorithm are given. The final numerical results also support the applicability, fast convergence, and high accuracy of the method for finding matrix inverses.


Introduction
Consider the Sylvester matrix equations of the form AX + XB = C, where A ∈ R^{m×m}, B ∈ R^{n×n}, C ∈ R^{m×n} are the constant coefficient matrices and X ∈ R^{m×n} is the unknown solution matrix. These equations play important roles in matrix eigenvalue decompositions, in control theory, in model reduction, in numerical solutions of the matrix differential Riccati equations, and in image processing (see [1][2][3][4][5][6]).
Solution of Sylvester matrix equations follows the transformation to the linear form Λcol(X) = col(C), where Λ = (I_n ⊗ A) + (B^T ⊗ I_m) is an mn × mn matrix [7]. The high dimension of Λ can therefore create problems, which can be avoided with iterative algorithms for finding matrix inverses. An iterative optimization-based method using particle swarm theory to solve the Sylvester equation was given in [8]. An iterative gradient-based algorithm for solving the equations by minimizing specific criterion functions was presented in [7]. A preconditioned iterative gradient-based method, obtained by selecting two auxiliary matrices via the Newton method, can be viewed as a generalization of the iterative splitting method for solving systems of linear equations [9]. Later, a preconditioned positive-definite and skew-Hermitian splitting iteration method for continuous Sylvester equations with positive-definite/semi-definite matrices was presented in [10]. An implicit alternating-direction iteration for solving the continuous Sylvester equation, where the coefficient matrices are taken as positive semi-definite with at least one of them positive definite, was given in [11]; a related implicit alternating-direction method under the same assumptions was given in [12]. A method based on the matrix sign function iteration for solving large-scale Sylvester equations was presented in [13]. A transformation method, the Hessenberg-Schur algorithm, was also used to solve the Sylvester equation by reducing the coefficient matrices A and B to triangular form in [14]. A numerical Arnoldi-based method for solving the Sylvester equation when A is sparse and large, for the partial pole-assignment problem for large matrices, was given in [15].
A method consisting of orthogonal reduction of the coefficient matrix A of the Sylvester matrix equation to a block-upper-Hessenberg form was given in [16].
Other than these methods, direct methods generally take time and require a lot of storage. Of special interest is the use of an iterative method for computing the Moore-Penrose inverse Λ† ∈ C^{n2×n1} of the matrix Λ ∈ C^{n1×n2}. This method is derived from the Schulz method of second order. A pth-order iterative method for computing the Moore-Penrose inverse was studied in [17, 18] for the orders p ≥ 2, and it is known as the hyperpower iterative method. Let Λ ∈ C^{n1×n2}, and let V_k denote the approximate Moore-Penrose inverse of Λ at the kth iteration. For n1 ≤ n2, the method is usually presented as

V_{k+1} = V_k (I + T_k + T_k^2 + ⋯ + T_k^{p−1}),  T_k = I − ΛV_k, (1)

which performs p matrix-by-matrix multiplications per step. There are different representations of (1) which differ in the calculation of the power sum. If nested loops are used through the factorizations, then the computational effort of the pth-order method decreases and the number of matrix-by-matrix multiplications and additions in the polynomial is reduced (see [19, 20]). Most recently, in [21], for a given integer t ≥ 1, two different classes of iterative methods for finding the inverses of square nonsingular matrices were proposed, and they were used as approximate inverse preconditioners for solving linear systems. Among these classes, class 1 converges with order p = 3·2^t + 1 and class 2 converges with order p = 5·2^t − 1, requiring 2t + 4 and 3t + 4 matrix-by-matrix multiplications per step, respectively. The methods of orders p = 7, 11, 15, 19 for approximating matrix inverses were applied to approximate Schur complement matrices, and the given algorithm was applied to solve Fredholm integral equations of the first kind in [22]. Another recursive approach, constructing an incomplete block-matrix factorization of M-matrices by an iterative two-step method that approximates the inverses of the pivoting diagonal block matrices at each stage of the recursion, was given in [23].
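As an illustration, the hyperpower iteration can be sketched in Python with NumPy. This is a hedged sketch, not the order-45 method of this paper: the function name and stopping rule are ours, and the initial scalar follows the standard Schulz-type bound.

```python
import numpy as np

def hyperpower_inverse(L, p=3, tol=1e-12, max_iter=100):
    """Order-p hyperpower iteration for the Moore-Penrose inverse:
    V_{k+1} = V_k (I + T_k + ... + T_k^{p-1}), with T_k = I - L V_k."""
    n1, _ = L.shape
    # Standard Schulz-type initial guess: V_0 = alpha * L^T with
    # 0 < alpha < 2 / sigma_max(L)^2 guarantees convergence.
    alpha = 1.0 / np.linalg.norm(L, 2) ** 2
    V = alpha * L.T
    I = np.eye(n1)
    for _ in range(max_iter):
        T = I - L @ V
        # Horner (nested) evaluation of the power sum I + T + ... + T^{p-1}
        S = I.copy()
        for _ in range(p - 1):
            S = I + T @ S
        V_new = V @ S
        if np.linalg.norm(V_new - V) <= tol * np.linalg.norm(V_new):
            return V_new
        V = V_new
    return V
```

The nested (Horner) evaluation of the power sum is what reduces the multiplication count, as noted for the factorized representations in [19, 20].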
In this study, an iterative predictor-corrector method of order p = 45 is used for computing the inverse of the nonzero coefficient matrix Λ ∈ R^{mn×mn} of the linear form Λcol(X) = col(C) of the Sylvester equations. The method was established in [24]. It requires 5 matrix-by-matrix multiplications per step for the predictor step and 5 matrix-by-matrix multiplications per step for the corrector step. Then, to verify the predictor-corrector method's fast convergence, experimental analysis is conducted for three examples. For large-dimensional matrices, the computation of the Moore-Penrose inverse is costly when a direct method such as singular value decomposition is used; therefore, fast-converging iterative methods for approximating Moore-Penrose inverses that are also computationally efficient are essential.
The need to compute the Moore-Penrose inverse for the solution of the Sylvester equation leads to one of the application areas. As stated before, these equations appear in control theory and other important real-life phenomena.

Sylvester Matrix Equation
Consider the Sylvester matrix equations of the form

AX + XB = C, (2)

where A ∈ R^{m×m}, B ∈ R^{n×n}, and C ∈ R^{m×n} are constant matrices and X ∈ R^{m×n} is the matrix of unknowns. Linear system (2) has a unique solution if and only if λ_i[A] + λ_j[B] ≠ 0 for any i and j [7], where ⊗ denotes the Kronecker product in

Λ = (I_n ⊗ A) + (B^T ⊗ I_m). (4)

In practice, if B^T = A, then equation (2) becomes the continuous-time Lyapunov equation. For the numerical solution of the equivalent linear system

Λcol(X) = col(C), (7)

the following iterative predictor-corrector method is used to find the inverse of Λ in (4) for solving (7).
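To make the vectorized formulation concrete, the following NumPy sketch builds Λ from (4) and solves (7) directly. This is only an illustration (the helper name `sylvester_kron` is ours): the dense solve is O((mn)^3) and feasible only for moderate mn, which is precisely the motivation for the iterative approach.

```python
import numpy as np

def sylvester_kron(A, B, C):
    """Solve AX + XB = C via the vectorized system (7),
    Lambda col(X) = col(C), with Lambda as in (4)."""
    m, n = C.shape
    Lam = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    # col(.) stacks columns, i.e. column-major (Fortran) order
    x = np.linalg.solve(Lam, C.flatten(order="F"))
    return x.reshape((m, n), order="F")
```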

The Iterative Predictor-Corrector Method for Approximating Moore-Penrose Inverse of the Matrix
Let Λ ∈ C^{n1×n2}, let I denote the unit matrix, Λ* the conjugate transpose of Λ, R(Λ) the range of Λ, and P_{R(Λ)} the orthogonal projection onto R(Λ). The matrix-valued functions of the method are defined in terms of T_k = (I − ΛV_k). We propose the predictor-corrector method of [24] as follows.
Predictor-corrector method: in (11), In represents the initialization step, G the generator, P the predictor at the fractional step k + 1/2, and C the corrector for approximating the Moore-Penrose inverse V_{k+1} of Λ at the (k + 1)th step. The predictor-corrector method (11) is effective in computational aspects and is useful for n1 ≤ n2. If n1 > n2, then the dual version of the predictor-corrector iterative algorithm (11) can be used.

Theorem 1 (see [24]). Let Λ ∈ C^{n1×n2}, and let the nonzero real scalar α satisfy the convergence condition, whose degree of practicality increases in the given order and degree of precision decreases in the same order, with R_0 = P_{R(Λ)} − αΛΛ*.

Error Analysis.
Consider the minimum-norm solution to the linear system

Λcol(X) = col(C), (13)

where col(C) ∈ R(Λ), with R(Λ) denoting the range of Λ, col(X) ∈ R^{mn} denoting the unknown solution, and rank(Λ) = r. The general least-squares solution of (13), or minimum-norm least-squares solution, is the solution of the problem

min ‖col(X)‖_2 subject to ‖Λcol(X) − col(C)‖_2 minimal. (14)

System (14) has a unique solution obtained by the equation col(X) = Λ†col(C), and it is called the pseudoinverse solution [25].
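For illustration, NumPy's SVD-based `pinv` and `lstsq` both return exactly this minimum-norm least-squares solution; the small rank-deficient system below is our own example, not one from the paper.

```python
import numpy as np

# Rank-1 matrix: every column is a multiple of (1, 2)^T, so the system
# Lam x = b is consistent but underdetermined.
Lam = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0]])
b = np.array([6.0, 12.0])

x_pinv = np.linalg.pinv(Lam) @ b                    # col(X) = Lam^† col(C)
x_lstsq, *_ = np.linalg.lstsq(Lam, b, rcond=None)   # same solution via SVD
```

Among all solutions of Λx = b, x_pinv has the smallest 2-norm; any other solution differs by a null-space component and is strictly longer.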

Algorithm for Approximating Inverse of Matrix Λ to Solve Sylvester Equation.
Let col(X)_{k+1} = V_{k+1}col(C) be the approximate solution of (14), where V_{k+1} is the approximate inverse of Λ obtained by the predictor-corrector method (11) at the (k + 1)th iteration using the initial approximation V_0 = αΛ^T, where α satisfies 0 < α < 2/λ_1(ΛΛ^T) with λ_1 the largest eigenvalue. Therefore, the residual error at the corresponding step is τ_{k+1} = col(C) − Λcol(X)_{k+1}. Given a prescribed accuracy ε > 0, we propose the following algorithm based on Algorithm A in [24], which uses the predictor-corrector method (11) and approximates the pseudoinverse of Λ (see Algorithm 1).
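The structure of the algorithm — iteratively approximate the inverse of Λ, update col(X)_k = V_k col(C), and stop once the relative residual drops below ε — can be sketched as follows. Since the order-45 predictor-corrector steps of [24] are not reproduced here, a second-order Schulz update stands in for them, and the function name is ours.

```python
import numpy as np

def iterative_sylvester(A, B, C, eps=1e-6, max_iter=200):
    """Iteratively approximate Lambda's inverse and stop once
    ||tau_k||_2 / ||col(C)||_2 <= eps, tau_k = col(C) - Lambda col(X)_k."""
    m, n = C.shape
    Lam = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    c = C.flatten(order="F")
    # Initial approximation V_0 = alpha * Lam^T,
    # 0 < alpha < 2 / lambda_1(Lam Lam^T)
    alpha = 1.0 / np.linalg.norm(Lam, 2) ** 2
    V = alpha * Lam.T
    I = np.eye(Lam.shape[0])
    for k in range(1, max_iter + 1):
        V = V @ (2 * I - Lam @ V)   # Schulz step, standing in for (11)
        x = V @ c
        if np.linalg.norm(c - Lam @ x) <= eps * np.linalg.norm(c):
            break
    return x.reshape((m, n), order="F"), k
```

Swapping the Schulz step for a higher-order update changes only the marked line; the residual test and the reshape back to X are the same in either case.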

Numerical Examples
In this section, all the calculations are carried out with the Mathematica program in double precision for Examples 1-3. The tables adopt the following notation: TCP: total CPU time in seconds.
In the first example, the matrix Λ is obtained as in (4), and then the approximate solution of the Sylvester equation (2) is computed. In this example, the right-hand side C is calculated by AX + XB. We apply the proposed predictor-corrector algorithm to obtain the inverse of Λ, performing k iterations for an accuracy of ε ≤ 1 × 10^{−6} for the corresponding Sylvester matrix equation (7). Table 1 presents the L2 norms of the errors using the predictor-corrector algorithm and the condition numbers.
Step 7. If k is the performed iteration number, then the approximate solution is y_k = col(X)_k, satisfying ‖τ_k‖_2/‖col(C)‖_2 ≤ ε. Comparing Table 1 with the literature [7], the predictor-corrector method is more accurate, faster, and more effective.

Example 2. Sylvester equation (2) with the following coefficient matrices is considered:
A = triu(rand(p, p), 1) + diag(α + diag(rand(p))), B = triu(rand(t, t), 1) + diag(α + diag(rand(t))), (19)

where triu(·, 1) extracts the strictly upper-triangular part of a matrix, rand generates random entries, diag forms a diagonal matrix, and α weights the diagonal entries [9]. For Example 2, the exact solution is obtained with X_0 = ones(p, t) × 10^{−6}, where speye represents a sparse identity matrix and α = 2. The stopping criterion is ε = 1 × 10^{−9}. Table 2 presents TCP, the total iteration number k, and the L2 norm errors for the given dimensions p and t. As expected, the error increases with the condition number, as shown in Table 2. Figure 1 illustrates the error err = ‖τ_k‖_2/‖col(C)‖_2 for the test problems in Example 2 with respect to the iteration number k for α = 0.1, 0.5, 1, 5 when (p, t) = (10, 10). The obtained numerical results are compared with those given in [7] and in [9]; the results of the predictor-corrector algorithm are more accurate and faster than the methods given in [7, 9].
According to Figure 1, faster and more accurate results are obtained as α increases.

Example 3. Consider randomly generated coefficient matrices A, B, C ∈ R^{6×6} [16] to solve the Sylvester equation (2) via (7) by the predictor-corrector algorithm with stopping criterion ε = 1 × 10^{−15}. Figure 2 illustrates the error err = ‖τ_k‖_2/‖col(C)‖_2 for the test problems in Example 3, and Table 3 presents the L2 norms of the errors using the predictor-corrector algorithm and the condition numbers. The accuracy of the predictor-corrector algorithm is remarkable.

Concluding Remarks
An iterative predictor-corrector method of convergence order p = 45 is used for computing the inverse of a nonzero matrix Λ ∈ R^{mn×mn} in (4) to solve the Sylvester equation (2). It converges fast and is a highly accurate method. It can also be useful when Λ is a rectangular or ill-conditioned matrix. The predictor-corrector iterative method may also be applied to precondition the coefficient matrices of the linear algebraic systems obtained by the finite difference method for boundary value problems, for example, for the solution of Laplace's equation with singularities (see [26, 27]), for the solution of the heat equation on a hexagonal grid (see [28]), and for the approximation of the derivatives of the solution of the heat equation (see [29, 30]). Moreover, the predictor-corrector iterative method for finding matrix inverses may be applied to precondition the coefficient matrices of the systems of equations obtained by the finite element method for parabolic partial differential equations (see [31]).

Data Availability
No data were used in this study.

Conflicts of Interest
The author declares that there are no conflicts of interest.