Mathematical Problems in Engineering, vol. 2017, Article ID 1624969. DOI: 10.1155/2017/1624969

Research Article

The Relaxed Gradient Based Iterative Algorithm for the Symmetric (Skew Symmetric) Solution of the Sylvester Equation AX + XB = C

Xiaodan Zhang¹ and Xingping Sheng²

¹ School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
² School of Mathematics and Statistics, Fuyang Normal College, Anhui 236037, China

Academic Editor: Jean Jacques Loiseau

Received 26 February 2017; Accepted 23 March 2017; Published 2017

Copyright © 2017 Xiaodan Zhang and Xingping Sheng. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In this paper, we present two relaxed gradient based iterative (RGI) algorithms for computing the symmetric and skew symmetric solutions of the Sylvester matrix equation AX + XB = C. For both iterative methods, it is proved that the iterative sequence converges to the true symmetric (skew symmetric) solution under appropriate assumptions for any initial matrix X(0). Finally, two numerical examples are given to illustrate the efficiency of the introduced iterative algorithms.

1. Introduction

For the convenience of our statements, the following notation will be used throughout the paper: $\mathbb{R}^{m\times n}$ denotes the set of $m\times n$ real matrices. For $A\in\mathbb{R}^{m\times n}$, we write $A^T$, $R(A)$, $\operatorname{tr}(A)$, $\rho(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, $\|A\|_2$, and $\|A\|_F$ to denote the transpose, the range space, the trace, the spectral radius, the maximal eigenvalue, the minimal eigenvalue, the spectral norm, and the Frobenius norm of a matrix $A$, respectively; that is, $\|A\|_2=\sqrt{\lambda_{\max}(A^TA)}$ and $\|A\|_F=[\operatorname{tr}(A^TA)]^{1/2}$. $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal singular value and the minimal nonzero singular value of $A$. The symbol $I_n$ represents the identity matrix of order $n$, $\mathbf{1}_{m\times n}$ is the $m\times n$ matrix whose elements are all 1, and $\operatorname{cond}(A)=\sigma_{\max}(A)/\sigma_{\min}(A)$ is the condition number of $A$. The inner product on $\mathbb{R}^{m\times n}$ is defined as $\langle A,B\rangle=\operatorname{tr}(A^TB)$; in particular, $\|A\|_F^2=\operatorname{tr}(A^TA)=\operatorname{tr}(AA^T)=\langle A,A\rangle$. $A\otimes B$ denotes the Kronecker product, defined as $A\otimes B=(a_{ij}B)$, $i=1,\dots,m$, $j=1,\dots,n$. For any matrix $X=(x_1,x_2,\dots,x_n)\in\mathbb{R}^{m\times n}$, the vec operator is defined as $\operatorname{vec}(X)=(x_1^T,x_2^T,\dots,x_n^T)^T\in\mathbb{R}^{mn}$. Using the vec operator and the Kronecker product, we have $\operatorname{vec}(AXB)=(B^T\otimes A)\operatorname{vec}(X)$.

Consider the symmetric (skew symmetric) solution of the Sylvester matrix equation
$$AX+XB=C,\tag{1}$$
where $A,B,C\in\mathbb{R}^{n\times n}$ and $X\in\mathbb{R}^{n\times n}$ is the unknown matrix.

The Sylvester matrix equation (1) has many applications in linear system theory, for example, pole/eigenstructure assignment, robust pole assignment, robust partial pole assignment, observer design, the model matching problem, regularization of descriptor systems [12, 13], the disturbance decoupling problem, and noninteracting control.

As is well known, (1) has a unique solution if and only if $A$ and $-B$ possess no common eigenvalues, in which case the solution can be computed by solving the linear system $(I\otimes A+B^T\otimes I)\operatorname{vec}(X)=\operatorname{vec}(C)$. However, this approach greatly increases the computational cost and storage requirements, so it is applicable only to small sized Sylvester equations.
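As a concrete illustration (our own sketch, not part of the paper), the vectorized linear system above can be formed and solved directly with NumPy for a small hypothetical example; the matrices here are arbitrary choices with $A$ and $-B$ sharing no eigenvalues:

```python
import numpy as np

# Solve AX + XB = C via vec(AX + XB) = (I ⊗ A + B^T ⊗ I) vec(X),
# using column-stacking vec, for which vec(AXB) = (B^T ⊗ A) vec(X).
def sylvester_via_kron(A, B, C):
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(I, A) + np.kron(B.T, I)           # n^2 x n^2 system matrix
    x = np.linalg.solve(M, C.flatten(order="F"))  # column-stacked vec(C)
    return x.reshape((n, n), order="F")

A = np.array([[4.0, 1.0], [0.0, 3.0]])   # eigenvalues {4, 3}
B = np.array([[2.0, 0.0], [1.0, 5.0]])   # eigenvalues {2, 5}, so A and -B share none
X = np.array([[1.0, -1.0], [2.0, 0.5]])
C = A @ X + X @ B
X_rec = sylvester_via_kron(A, B, C)
print(np.allclose(X_rec, X))  # True: the unique solution is recovered
```

Forming the $n^2\times n^2$ Kronecker matrix is exactly the storage blow-up the text warns about, which motivates the iterative methods below.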

Due to these drawbacks, many other methods for solving (1) have appeared in the literature. The idea of transforming the coefficient matrices into Schur or Hessenberg form to solve (1) was presented in [16, 17]. When the linear matrix equation (1) is inconsistent, a finite iterative method for computing its Hermitian minimum norm solutions has been presented. An efficient iterative method based on Hermitian and skew Hermitian splitting has also been proposed. Krylov subspace based methods have been presented for solving Sylvester equations and generalized Sylvester equations. Recently, based on the idea of a hierarchical identification principle, some efficient gradient based iterative algorithms for solving generalized Sylvester equations and coupled (general coupled) Sylvester equations have been proposed in [27, 30–32]. In particular, for Sylvester equations of form (1), it has been shown that the unknown matrix to be identified can be computed by a gradient based iterative algorithm; the convergence properties of these methods are investigated in [27, 32]. Niu et al. proposed a relaxed gradient based iterative algorithm for solving Sylvester equations, and Wang et al. proposed a modified gradient based iterative algorithm for solving (1). More recently, Xie and Ma gave an accelerated gradient based iterative algorithm for solving (1). In [37, 38] Xie et al. studied special structured solutions of matrix equation (1) by iterative methods.

In this paper, inspired by [28, 34–36], we first derive a relaxed gradient based iterative (RGI) algorithm for computing the symmetric solution of matrix equation (1). Theoretical analysis shows that our method converges to the exact symmetric solution for any initial value under appropriate assumptions. The proposed algorithm is then adapted to the skew symmetric solution of matrix equation (1). Numerical results illustrate that the proposed methods are correct and feasible. We point out that the ideas in this paper differ in several respects from those in [28, 34–36].

The rest of the paper is organized as follows. In Section 2, some preliminaries are provided. In Section 3, the relaxed gradient based iterative methods are studied. Finally, in Section 4, two numerical examples are included to verify the convergence of the algorithms.

2. Preliminaries

In this section, we review the ideas and principles of the gradient based iterative (GI) method, the relaxed gradient based iterative (RGI) method, and the modified gradient based iterative (MGI) method.

Let $\mu>0$ be the convergence factor or step factor. The gradient based iterative method for $Ax=b$ is as follows:
$$x_k=x_{k-1}-\mu A^T\left(Ax_{k-1}-b\right).\tag{2}$$

The convergence of the gradient based iterative method is stated as follows.

Lemma 1 (see [<xref ref-type="bibr" rid="B33">32</xref>]).

Assume that the matrix $A$ has full column rank and $0<\mu<2/\|A\|_2^2$; then the gradient based iterative sequence $\{x(k)\}$ in (2) converges to $x^*$; that is, $\lim_{k\to\infty}x(k)=x^*$, or equivalently the error $\tilde{x}(k)=x(k)-x^*$ converges to zero for any initial value $x(0)$. Moreover, the maximal convergence rate is attained at $\mu_{\mathrm{opt}}$, where
$$\beta_{\mu_{\mathrm{opt}}}=\frac{\operatorname{cond}^2(A)-1}{\operatorname{cond}^2(A)+1},\qquad \mu_{\mathrm{opt}}=\frac{2}{\sigma_{\max}^2(A)+\sigma_{\min}^2(A)}.\tag{3}$$
In this case, the error vector $\tilde{x}(k)$ satisfies
$$\|\tilde{x}(k)\|\le\beta_{\mu_{\mathrm{opt}}}^{k}\|\tilde{x}(0)\|,\quad k\ge0.\tag{4}$$
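Lemma 1 can be checked numerically; the following sketch (ours, not the paper's) uses a small full column-rank $A$ chosen so that $\sigma_{\max}^2(A)=3$ and $\sigma_{\min}^2(A)=1$, giving $\mu_{\mathrm{opt}}=0.5$ and contraction factor $\beta_{\mu_{\mathrm{opt}}}=0.5$ per step:

```python
import numpy as np

# A = [I; 0.5*ones(2,4)] has A^T A = I + 0.5*J with eigenvalues {3, 1, 1, 1},
# so sigma_max^2 = 3, sigma_min^2 = 1 and mu_opt = 2/(3+1) = 0.5.
A = np.vstack([np.eye(4), 0.5 * np.ones((2, 4))])   # 6x4, full column rank
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true

sv = np.linalg.svd(A, compute_uv=False)             # singular values, descending
mu_opt = 2.0 / (sv[0]**2 + sv[-1]**2)               # optimal step of Lemma 1

x = np.zeros(4)
for _ in range(200):
    x = x - mu_opt * A.T @ (A @ x - b)              # iteration (2)
print(np.linalg.norm(x - x_true))                   # essentially zero
```

With contraction $0.5$ per step, 200 iterations reduce the error far below machine precision, consistent with the bound (4).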

Ding and Chen presented the following gradient based algorithm for solving (1).

Algorithm 2 (see [<xref ref-type="bibr" rid="B28">28</xref>] (the gradient based iterative (GI) algorithm)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$ and a small tolerance $\epsilon>0$. Choose initial matrices $X_1(0)$ and $X_2(0)$, compute $X(0)=(X_1(0)+X_2(0))/2$, and set $k:=1$.

Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequences
$$\begin{aligned}
X_1(k)&=X(k-1)-\mu A^T\left[AX(k-1)+X(k-1)B-C\right],\\
X_2(k)&=X(k-1)-\mu\left[AX(k-1)+X(k-1)B-C\right]B^T,\\
X(k)&=\frac{X_1(k)+X_2(k)}{2}.
\end{aligned}\tag{5}$$

Step 4. Set $k:=k+1$; return to Step 2.

The authors also pointed out that if the convergence factor $\mu$ is chosen in $\left(0,\,2/\left(\sigma_{\max}^2(A)+\sigma_{\max}^2(B)\right)\right)$, Algorithm 2 converges to the exact solution of (1).
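A runnable sketch of Algorithm 2 is given below; the matrices and names are our own illustrative choices, not the paper's data, and $\mu$ is taken at half the stated upper bound:

```python
import numpy as np

# Gradient based iterative (GI) method for AX + XB = C, as we read Algorithm 2.
def gi_sylvester(A, B, C, mu, iters):
    X = np.zeros_like(C)
    for _ in range(iters):
        R = A @ X + X @ B - C        # residual of AX + XB = C
        X1 = X - mu * A.T @ R        # half-update driven by the left factor
        X2 = X - mu * R @ B.T        # half-update driven by the right factor
        X = 0.5 * (X1 + X2)          # Step 3: average the two half-updates
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [-1.0, 0.5]])
C = A @ X_true + X_true @ B          # consistent right-hand side

# mu inside (0, 2/(sigma_max^2(A) + sigma_max^2(B)))
s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
X = gi_sylvester(A, B, C, 1.0 / s, 10000)
print(np.linalg.norm(X - X_true))    # essentially zero
```

Since $A$ and $-B$ share no eigenvalues here, (1) has a unique solution and the iterates converge to it.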

Niu et al. gave a relaxed gradient based iterative algorithm for solving (1). When $\mu$ lies in $\left(0,\,2/\left[\omega(1-\omega)\left(\sigma_{\max}^2(A)+\sigma_{\max}^2(B)+\sigma_{\max}(BA^T)\right)\right]\right)$, the following algorithm has been proven to be convergent.

Algorithm 3 (see [<xref ref-type="bibr" rid="B35">34</xref>] (the relaxed gradient based iterative (RGI) algorithm)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small tolerance $\epsilon>0$, and an appropriate positive number $\omega$. Choose initial matrices $X_1(0)$ and $X_2(0)$, compute $X(0)=\omega X_1(0)+(1-\omega)X_2(0)$, and set $k:=1$.

Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequences
$$\begin{aligned}
X_1(k)&=X(k-1)-(1-\omega)\mu A^T\left[AX(k-1)+X(k-1)B-C\right],\\
X_2(k)&=X(k-1)-\omega\mu\left[AX(k-1)+X(k-1)B-C\right]B^T,\\
X(k)&=\omega X_1(k)+(1-\omega)X_2(k).
\end{aligned}\tag{6}$$

Step 4. Set $k:=k+1$; return to Step 2.
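A sketch of Algorithm 3 follows (again on our own illustrative data, not the paper's):

```python
import numpy as np

# Relaxed gradient based iterative (RGI) method, as we read Algorithm 3.
def rgi_sylvester(A, B, C, mu, omega, iters):
    X = np.zeros_like(C)
    for _ in range(iters):
        R = A @ X + X @ B - C
        X1 = X - (1 - omega) * mu * A.T @ R   # relaxed left half-update
        X2 = X - omega * mu * R @ B.T         # relaxed right half-update
        X = omega * X1 + (1 - omega) * X2     # weighted recombination
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [-1.0, 0.5]])
C = A @ X_true + X_true @ B

s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
X = rgi_sylvester(A, B, C, 1.0 / s, 0.5, 10000)
print(np.linalg.norm(X - X_true))
```

Note that substituting $X_1(k)$ and $X_2(k)$ into the recombination gives $X(k)=X(k-1)-\omega(1-\omega)\mu\left[A^TR+RB^T\right]$, so $\omega$ balances the weight given to the two gradient directions.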

Recently, Wang et al. proposed a modified gradient based iterative (MGI) algorithm to solve (1). The main difference is that, in the step computing $X_2(k)$, the latest approximate solution $X_1(k)$ is fully exploited to update $X(k-1)$.

Algorithm 4 (see [<xref ref-type="bibr" rid="B36">35</xref>] (the modified gradient based iterative (MGI) algorithm)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$ and a small tolerance $\epsilon>0$. Choose initial matrices $X_1(0)$ and $X_2(0)$, compute $X(0)=(X_1(0)+X_2(0))/2$, and set $k:=1$.

Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequence
$$X_1(k)=X(k-1)-\mu A^T\left[AX(k-1)+X(k-1)B-C\right].\tag{7}$$

Step 4. Compute
$$X(k-1)=\frac{X_1(k)+X_2(k-1)}{2}.\tag{8}$$

Step 5. Update the sequence
$$X_2(k)=X(k-1)-\mu\left[AX(k-1)+X(k-1)B-C\right]B^T.\tag{9}$$

Step 6. Compute
$$X(k)=\frac{X_1(k)+X_2(k)}{2}.\tag{10}$$

Step 7. Set $k:=k+1$; return to Step 2.
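The seven steps above can be sketched as follows (our reading of MGI, on our own illustrative data; the key point is that the freshly computed $X_1(k)$ is folded in before the second half-update):

```python
import numpy as np

# Modified gradient based iterative (MGI) method, as we read Algorithm 4.
def mgi_sylvester(A, B, C, mu, iters):
    X1 = np.zeros_like(C)
    X2 = np.zeros_like(C)
    X = 0.5 * (X1 + X2)
    for _ in range(iters):
        R = A @ X + X @ B - C
        X1 = X - mu * A.T @ R            # Step 3
        X = 0.5 * (X1 + X2)              # Step 4: reuse the fresh X1 with old X2
        R = A @ X + X @ B - C
        X2 = X - mu * R @ B.T            # Step 5
        X = 0.5 * (X1 + X2)              # Step 6
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [-1.0, 0.5]])
C = A @ X_true + X_true @ B

s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
X = mgi_sylvester(A, B, C, 0.5 / s, 20000)   # conservative step size (our choice)
print(np.linalg.norm(X - X_true))
```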

More recently, Xie and Ma presented the following AGBI algorithm for solving (1), based on the idea of MGI.

Algorithm 5 (see [<xref ref-type="bibr" rid="B37">36</xref>] (the accelerated gradient based iterative (AGBI) algorithm)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small tolerance $\epsilon>0$, and an appropriate positive number $\omega$. Choose initial matrices $X_1(0)$ and $X_2(0)$, compute $X(0)=\omega X_1(0)+(1-\omega)X_2(0)$, and set $k:=1$.

Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequence
$$X_1(k)=X(k-1)-(1-\omega)\mu A^T\left[AX(k-1)+X(k-1)B-C\right].\tag{11}$$

Step 4. Compute
$$X(k-1)=\omega X_1(k)+(1-\omega)X_2(k-1).\tag{12}$$

Step 5. Update the sequence
$$X_2(k)=X(k-1)-\omega\mu\left[AX(k-1)+X(k-1)B-C\right]B^T.\tag{13}$$

Step 6. Compute
$$X(k)=\omega X_1(k)+(1-\omega)X_2(k).\tag{14}$$

Step 7. Set $k:=k+1$; return to Step 2.
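The AGBI steps above can be sketched in the same style (our reading, on our own illustrative data); it generalizes MGI by weighting the recombinations with $\omega$:

```python
import numpy as np

# Accelerated gradient based iterative (AGBI) method, as we read Algorithm 5.
def agbi_sylvester(A, B, C, mu, omega, iters):
    X1 = np.zeros_like(C)
    X2 = np.zeros_like(C)
    X = omega * X1 + (1 - omega) * X2
    for _ in range(iters):
        R = A @ X + X @ B - C
        X1 = X - (1 - omega) * mu * A.T @ R   # Step 3
        X = omega * X1 + (1 - omega) * X2     # Step 4: fold in the fresh X1
        R = A @ X + X @ B - C
        X2 = X - omega * mu * R @ B.T         # Step 5
        X = omega * X1 + (1 - omega) * X2     # Step 6
    return X

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [-1.0, 0.5]])
C = A @ X_true + X_true @ B

s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
X = agbi_sylvester(A, B, C, 0.5 / s, 0.5, 20000)   # conservative step size (our choice)
print(np.linalg.norm(X - X_true))
```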

3. Main Results

In this section, we first establish a necessary and sufficient condition for the symmetric solution of (1). Then the relaxed gradient based iterative algorithm for the symmetric solution of equation (1) is proposed. Along the same lines, the relaxed gradient based iterative algorithm for the skew symmetric solution of equation (1) is also presented.

Theorem 6.

The matrix equation (1) has a unique symmetric solution $X_s$ if and only if the pair of matrix equations
$$AX+XB=C,\qquad B^TX+XA^T=C^T\tag{15}$$
has a unique common solution $X^*$, and in that case $X_s=(X^*+(X^*)^T)/2$.

Proof.

If $X_s$ is the unique symmetric solution of (1), then $(X_s)^T=X_s$ and $AX_s+X_sB=C$; further we have
$$B^TX_s+X_sA^T=\left(AX_s+X_sB\right)^T=C^T.\tag{16}$$
This shows that $X_s$ is also a common solution of the pair of matrix equations (15).

Conversely, if the system of matrix equations (15) has a common solution $X^*$, let us denote $X_s=(X^*+(X^*)^T)/2$; then we can check that
$$\begin{aligned}
AX_s+X_sB&=A\,\frac{X^*+(X^*)^T}{2}+\frac{X^*+(X^*)^T}{2}\,B\\
&=\frac{AX^*+X^*B}{2}+\frac{A(X^*)^T+(X^*)^TB}{2}\\
&=\frac{AX^*+X^*B}{2}+\frac{\left(B^TX^*+X^*A^T\right)^T}{2}\\
&=\frac{C+\left(C^T\right)^T}{2}=C.
\end{aligned}\tag{17}$$
This implies that $X_s$ is the unique symmetric solution of (1).

According to Theorem 6, if the unique common solution $X^*$ of equations (15) can be obtained, then the unique symmetric solution of (1) is $X_s=(X^*+(X^*)^T)/2$.

Guided by this observation, we construct a relaxed gradient based iterative algorithm to compute the symmetric solution of (1).

Algorithm 7 (the relaxed gradient based iterative (RGI) algorithm for symmetric solution of (<xref ref-type="disp-formula" rid="EEq1.1">1</xref>)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small tolerance $\epsilon>0$, and an appropriate positive number $\omega$ with $0<\omega<1$. Choose any initial matrix $X(0)$ and set $k:=1$.

Step 2. If $\delta_k=\|AX_s(k)+X_s(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequences
$$\begin{aligned}
X_1(k)&=X(k-1)-\mu_1\left[A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right],\\
X_2(k)&=X(k-1)-\mu_2\left[B\left(B^TX(k-1)+X(k-1)A^T-C^T\right)+\left(B^TX(k-1)+X(k-1)A^T-C^T\right)A\right],\\
X(k)&=\omega X_1(k)+(1-\omega)X_2(k),\\
X_s(k)&=\frac{X(k)+X^T(k)}{2}.
\end{aligned}\tag{18}$$

Step 4. Set $k:=k+1$; return to Step 2.
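A runnable sketch of Algorithm 7 is given below; the test problem is our own construction (a symmetric $X_{\mathrm{true}}$ defines a consistent $C$), not the paper's example, and the step sizes are taken well inside the bounds of Theorem 8:

```python
import numpy as np

# RGI algorithm for the symmetric solution of AX + XB = C (Algorithm 7, as we read it).
def rgi_symmetric(A, B, C, mu1, mu2, omega, iters):
    X = np.zeros_like(C)
    for _ in range(iters):
        R1 = A @ X + X @ B - C            # residual of AX + XB = C
        R2 = B.T @ X + X @ A.T - C.T      # residual of B^T X + X A^T = C^T
        X1 = X - mu1 * (A.T @ R1 + R1 @ B.T)
        X2 = X - mu2 * (B @ R2 + R2 @ A)
        X = omega * X1 + (1 - omega) * X2
    return 0.5 * (X + X.T)                # X_s(k), the symmetric iterate

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[1.0, 2.0], [2.0, 0.5]])   # symmetric
C = A @ X_true + X_true @ B

omega = 0.5
s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
mu1 = 0.25 / (omega * s)          # conservative choices inside the bounds (19)
mu2 = 0.25 / ((1 - omega) * s)
Xs = rgi_symmetric(A, B, C, mu1, mu2, omega, 10000)
print(np.linalg.norm(Xs - X_true))
```

Since the pair (15) has a unique common solution here (it equals the symmetric $X_{\mathrm{true}}$), the iterate converges to it, as Theorem 8 below asserts.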

In the following, we investigate the convergence of Algorithm 7.

Theorem 8.

Assume that the matrix equations (15) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 7 converges to $X^*$ if
$$0<\mu_1<\frac{1}{\omega\left(\|A\|_2^2+\|B\|_2^2\right)},\qquad 0<\mu_2<\frac{1}{(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)};\tag{19}$$
that is, $\lim_{k\to\infty}X(k)=X^*$, or equivalently the error $X(k)-X^*$ converges to zero for any initial value $X(0)$.

Furthermore, the sequence $\{X_s(k)\}$ converges to $X_s$, where $X_s$ is the unique symmetric solution of (1).

Proof.

Define the error matrices
$$\tilde{X}(k)=X(k)-X^*,\qquad \theta_1(k-1)=A\tilde{X}(k-1)+\tilde{X}(k-1)B,\qquad \theta_2(k-1)=B^T\tilde{X}(k-1)+\tilde{X}(k-1)A^T,\tag{20}$$
and write $\theta_1=\theta_1(k-1)$, $\theta_2=\theta_2(k-1)$ for brevity. We have
$$\begin{aligned}
\tilde{X}(k)&=X(k)-X^*=\omega X_1(k)+(1-\omega)X_2(k)-X^*=\omega\left[X_1(k)-X^*\right]+(1-\omega)\left[X_2(k)-X^*\right]\\
&=\omega\left\{X(k-1)-\mu_1\left[A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right]-X^*\right\}\\
&\quad+(1-\omega)\left\{X(k-1)-\mu_2\left[B\left(B^TX(k-1)+X(k-1)A^T-C^T\right)+\left(B^TX(k-1)+X(k-1)A^T-C^T\right)A\right]-X^*\right\}\\
&=\tilde{X}(k-1)-\omega\mu_1\left[A^T\theta_1+\theta_1B^T\right]-(1-\omega)\mu_2\left[B\theta_2+\theta_2A\right].
\end{aligned}\tag{21}$$
The following two inequalities are easily derived:
$$\left\|A^T\theta_1+\theta_1B^T\right\|_F^2\le\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2,\qquad \left\|B\theta_2+\theta_2A\right\|_F^2\le\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2.\tag{22}$$
Moreover, noting that $\operatorname{tr}\left[\tilde{X}(k-1)^T\left(A^T\theta_1+\theta_1B^T\right)\right]=\left\langle A\tilde{X}(k-1)+\tilde{X}(k-1)B,\theta_1\right\rangle=\|\theta_1\|_F^2$, and similarly for $\theta_2$, we obtain
$$\begin{aligned}
\|\tilde{X}(k)\|_F^2&=\left\|\tilde{X}(k-1)-\omega\mu_1\left[A^T\theta_1+\theta_1B^T\right]-(1-\omega)\mu_2\left[B\theta_2+\theta_2A\right]\right\|_F^2\\
&=\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left\|A^T\theta_1+\theta_1B^T\right\|_F^2+(1-\omega)^2\mu_2^2\left\|B\theta_2+\theta_2A\right\|_F^2\\
&\quad-2\omega\mu_1\operatorname{tr}\left[\tilde{X}(k-1)^T\left(A^T\theta_1+\theta_1B^T\right)\right]-2(1-\omega)\mu_2\operatorname{tr}\left[\tilde{X}(k-1)^T\left(B\theta_2+\theta_2A\right)\right]\\
&\quad+2\omega(1-\omega)\mu_1\mu_2\operatorname{tr}\left[\left(A^T\theta_1+\theta_1B^T\right)^T\left(B\theta_2+\theta_2A\right)\right]\\
&\le\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2+(1-\omega)^2\mu_2^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2\\
&\quad-2\omega\mu_1\|\theta_1\|_F^2-2(1-\omega)\mu_2\|\theta_2\|_F^2+2\omega\mu_1\left\|A^T\theta_1+\theta_1B^T\right\|_F\,(1-\omega)\mu_2\left\|B\theta_2+\theta_2A\right\|_F\\
&\le\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2+(1-\omega)^2\mu_2^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2\\
&\quad-2\omega\mu_1\|\theta_1\|_F^2-2(1-\omega)\mu_2\|\theta_2\|_F^2+\omega^2\mu_1^2\left\|A^T\theta_1+\theta_1B^T\right\|_F^2+(1-\omega)^2\mu_2^2\left\|B\theta_2+\theta_2A\right\|_F^2\\
&\le\|\tilde{X}(k-1)\|_F^2-2\omega\mu_1\left[1-\omega\mu_1\left(\|A\|_2^2+\|B\|_2^2\right)\right]\|\theta_1\|_F^2-2(1-\omega)\mu_2\left[1-(1-\omega)\mu_2\left(\|A\|_2^2+\|B\|_2^2\right)\right]\|\theta_2\|_F^2,
\end{aligned}\tag{23}$$
where the second inequality uses $2ab\le a^2+b^2$. Therefore, (23) implies that
$$\|\tilde{X}(k)\|_F^2\le\|\tilde{X}(0)\|_F^2-2\omega\mu_1\left[1-\omega\mu_1\left(\|A\|_2^2+\|B\|_2^2\right)\right]\sum_{i=1}^{k}\|\theta_1(i-1)\|_F^2-2(1-\omega)\mu_2\left[1-(1-\omega)\mu_2\left(\|A\|_2^2+\|B\|_2^2\right)\right]\sum_{i=1}^{k}\|\theta_2(i-1)\|_F^2.\tag{24}$$
It follows from $0<\mu_1<1/\left[\omega\left(\|A\|_2^2+\|B\|_2^2\right)\right]$ and $0<\mu_2<1/\left[(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)\right]$ that
$$\sum_{i=1}^{\infty}\|\theta_1(i-1)\|_F^2<\infty,\qquad \sum_{i=1}^{\infty}\|\theta_2(i-1)\|_F^2<\infty.\tag{25}$$
The two inequalities (25) imply that
$$\lim_{k\to\infty}\theta_1(k-1)=0,\qquad \lim_{k\to\infty}\theta_2(k-1)=0.\tag{26}$$
In other words,
$$\lim_{k\to\infty}\left[AX(k-1)+X(k-1)B\right]=C,\qquad \lim_{k\to\infty}\left[B^TX(k-1)+X(k-1)A^T\right]=C^T.\tag{27}$$
Since the common solution of (15) is unique, (27) implies that
$$\lim_{k\to\infty}X(k)=X^*.\tag{28}$$

From Theorem 6 and the limit (28), we have
$$\lim_{k\to\infty}X_s(k)=\lim_{k\to\infty}\frac{X(k)+X^T(k)}{2}=\frac{X^*+(X^*)^T}{2}=X_s.\tag{29}$$

Along the same lines, the idea of Algorithm 7 can be extended to the skew symmetric solution of (1). First, we need the following theorem.

Theorem 9.

The matrix equation (1) has a unique skew symmetric solution $X_{ss}$ if and only if the pair of matrix equations
$$AX+XB=C,\qquad B^TX+XA^T=-C^T\tag{30}$$
has a unique common solution $X^*$, and in that case $X_{ss}=(X^*-(X^*)^T)/2$.

The relaxed gradient based iterative algorithm for solving the skew symmetric solution of (1) can be stated as follows.

Algorithm 10 (the relaxed gradient based iterative (RGI) algorithm for skew symmetric solution of (<xref ref-type="disp-formula" rid="EEq1.1">1</xref>)).

Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small tolerance $\epsilon>0$, and an appropriate positive number $\omega$ with $0<\omega<1$. Choose any initial matrix $X(0)$ and set $k:=1$.

Step 2. If $\delta_k=\|AX_{ss}(k)+X_{ss}(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.

Step 3. Update the sequences
$$\begin{aligned}
X_1(k)&=X(k-1)-\mu_1\left[A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right],\\
X_2(k)&=X(k-1)-\mu_2\left[B\left(B^TX(k-1)+X(k-1)A^T+C^T\right)+\left(B^TX(k-1)+X(k-1)A^T+C^T\right)A\right],\\
X(k)&=\omega X_1(k)+(1-\omega)X_2(k),\\
X_{ss}(k)&=\frac{X(k)-X^T(k)}{2}.
\end{aligned}\tag{31}$$

Step 4. Set $k:=k+1$; return to Step 2.
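The only change from Algorithm 7 is the sign of $C^T$ in the second residual and the skew symmetric projection, as the following sketch (ours, on our own illustrative data) shows:

```python
import numpy as np

# RGI algorithm for the skew symmetric solution of AX + XB = C (Algorithm 10, as we read it).
def rgi_skew(A, B, C, mu1, mu2, omega, iters):
    X = np.zeros_like(C)
    for _ in range(iters):
        R1 = A @ X + X @ B - C
        R2 = B.T @ X + X @ A.T + C.T      # residual of B^T X + X A^T = -C^T
        X1 = X - mu1 * (A.T @ R1 + R1 @ B.T)
        X2 = X - mu2 * (B @ R2 + R2 @ A)
        X = omega * X1 + (1 - omega) * X2
    return 0.5 * (X - X.T)                # X_ss(k), the skew symmetric iterate

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
X_true = np.array([[0.0, 1.5], [-1.5, 0.0]])  # skew symmetric
C = A @ X_true + X_true @ B

omega = 0.5
s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
Xss = rgi_skew(A, B, C, 0.25 / (omega * s), 0.25 / ((1 - omega) * s), omega, 10000)
print(np.linalg.norm(Xss - X_true))
```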

Similarly, we have the following theorem, which ensures the convergence of Algorithm 10.

Theorem 11.

Assume that the matrix equations (30) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 10 converges to $X^*$ if
$$0<\mu_1<\frac{1}{\omega\left(\|A\|_2^2+\|B\|_2^2\right)},\qquad 0<\mu_2<\frac{1}{(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)};\tag{32}$$
that is, $\lim_{k\to\infty}X(k)=X^*$, or equivalently the error $X(k)-X^*$ converges to zero for any initial value $X(0)$.

Furthermore, the sequence {Xss(k)} converges to Xss, where Xss is the unique skew symmetric solution of (1).

4. Numerical Examples

In this section, two numerical examples are used to show the efficiency of the RGI method. All computations were performed on an Intel® Core™ i7-4500U CPU @ 1.80 GHz (up to 2.40 GHz) using MATLAB 7.0. EER denotes the Frobenius norm of the absolute error matrix, defined as $\mathrm{EER}=\|X(k)-X^*\|_F$, where $X(k)$ is the $k$th iterate of the RGI method.

Example 1.

In matrix equation (1), we choose
$$A=\begin{pmatrix}4.5&3.5&3.0&-5.5\\-5.0&-6.0&-3.0&2.0\\4.0&2.0&5.0&-3.0\\1.5&1.5&0.0&-0.5\end{pmatrix},\quad B=\begin{pmatrix}-1.5&-6.5&-3.0&-2.5\\1.0&6.0&3.0&1.0\\-4.0&-5.0&-4.0&0.0\\-1.5&-1.5&0.0&-0.5\end{pmatrix},\quad C=\begin{pmatrix}-5.5&12.5&11.0&4.5\\3.5&-3.5&2.0&-4.5\\-8.5&-10.5&-1.0&-1.5\\2.0&0.0&0.0&-1.0\end{pmatrix}.\tag{33}$$

It is easy to show that the matrix equation (1) is consistent and has a unique symmetric solution. By computing, the unique symmetric solution $X_s$ is given as
$$X_s=\begin{pmatrix}-1&1&0&1\\1&0&-1&0\\0&-1&1&-1\\1&0&-1&0\end{pmatrix}.\tag{34}$$

Taking $X(0)=10^{-6}\cdot\mathbf{1}_{4\times4}$ and applying the RGI method (Algorithm 7) to compute the symmetric solution of $AX+XB=C$, the sum $\|A\|_2^2+\|B\|_2^2$ is 962.2175. Taking $\omega=0.4$, $\mu_1=0.0026$, and $\mu_2=0.0017$, the iterative errors $\delta_k=\|X_s(k)-X_s\|_F^2$ versus $k$ are shown in Figure 1.

Figure 1: Convergence curve of the symmetric solution ($\omega=0.4$, $\mu_1=0.0026$, and $\mu_2=0.0017$).
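The data of Example 1 can be checked numerically. The following sketch (ours, not the authors' MATLAB code) verifies that the stated $X_s$ solves (1) and runs Algorithm 7 with conservative step sizes recomputed from the matrix norms (so they differ from the paper's $\mu_1=0.0026$, $\mu_2=0.0017$):

```python
import numpy as np

A = np.array([[ 4.5,  3.5,  3.0, -5.5],
              [-5.0, -6.0, -3.0,  2.0],
              [ 4.0,  2.0,  5.0, -3.0],
              [ 1.5,  1.5,  0.0, -0.5]])
B = np.array([[-1.5, -6.5, -3.0, -2.5],
              [ 1.0,  6.0,  3.0,  1.0],
              [-4.0, -5.0, -4.0,  0.0],
              [-1.5, -1.5,  0.0, -0.5]])
C = np.array([[-5.5, 12.5, 11.0,  4.5],
              [ 3.5, -3.5,  2.0, -4.5],
              [-8.5,-10.5, -1.0, -1.5],
              [ 2.0,  0.0,  0.0, -1.0]])
Xs = np.array([[-1.,  1.,  0.,  1.],
               [ 1.,  0., -1.,  0.],
               [ 0., -1.,  1., -1.],
               [ 1.,  0., -1.,  0.]])
print(np.allclose(A @ Xs + Xs @ B, C))   # True: Xs is a symmetric solution of (1)

omega = 0.4
s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
mu1 = 0.25 / (omega * s)          # well inside the bounds (19)
mu2 = 0.25 / ((1 - omega) * s)

X = 1e-6 * np.ones((4, 4))
err0 = np.linalg.norm(0.5 * (X + X.T) - Xs)
for _ in range(20000):            # Algorithm 7, Step 3
    R1 = A @ X + X @ B - C
    R2 = B.T @ X + X @ A.T - C.T
    X1 = X - mu1 * (A.T @ R1 + R1 @ B.T)
    X2 = X - mu2 * (B @ R2 + R2 @ A)
    X = omega * X1 + (1 - omega) * X2
err = np.linalg.norm(0.5 * (X + X.T) - Xs)
print(err < err0)                 # the symmetric iterate approaches Xs
```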

Example 2.

In matrix equation (1), if we choose
$$A=\begin{pmatrix}4.5&8.5&3.0&-5.5\\-5.0&-9.0&-3.0&7.0\\8.0&14.0&5.0&-12.0\\1.5&1.5&0.0&-0.5\end{pmatrix},\quad B=\begin{pmatrix}-1.5&-6.5&-3.0&-2.5\\1.0&6.0&3.0&1.0\\-4.0&-10.0&-4.0&0.0\\-1.5&-1.5&0.0&-0.5\end{pmatrix},\quad C=\begin{pmatrix}-11.0&8.0&28.5&0.5\\7.5&-2.5&-27.0&6.5\\-19.5&-19.5&34.0&-3.5\\4.0&25.0&13.5&3.5\end{pmatrix},\tag{35}$$

it is easy to check that the matrix equation (1) is consistent and has a unique skew symmetric solution. The unique skew symmetric solution $X_{ss}$ is computed as
$$X_{ss}=\begin{pmatrix}0&2&0&1\\-2&0&2&-1\\0&-2&0&1\\-1&1&-1&0\end{pmatrix}.\tag{36}$$

Again taking $X(0)=10^{-6}\cdot\mathbf{1}_{4\times4}$, $\omega=0.5$, and $\mu_1=\mu_2=0.0021$, and using Algorithm 10, the convergence curve of the skew symmetric solution is shown in Figure 2.

Figure 2: Convergence curve of the skew symmetric solution ($\omega=0.5$, $\mu_1=0.0021$, and $\mu_2=0.0021$).
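Example 2 can be checked in the same way (our sketch, with conservative step sizes recomputed from the matrices rather than the paper's $\mu$ values):

```python
import numpy as np

A = np.array([[ 4.5,  8.5,  3.0, -5.5],
              [-5.0, -9.0, -3.0,  7.0],
              [ 8.0, 14.0,  5.0,-12.0],
              [ 1.5,  1.5,  0.0, -0.5]])
B = np.array([[-1.5, -6.5, -3.0, -2.5],
              [ 1.0,  6.0,  3.0,  1.0],
              [-4.0,-10.0, -4.0,  0.0],
              [-1.5, -1.5,  0.0, -0.5]])
C = np.array([[-11.0,  8.0, 28.5,  0.5],
              [  7.5, -2.5,-27.0,  6.5],
              [-19.5,-19.5, 34.0, -3.5],
              [  4.0, 25.0, 13.5,  3.5]])
Xss = np.array([[ 0.,  2.,  0.,  1.],
                [-2.,  0.,  2., -1.],
                [ 0., -2.,  0.,  1.],
                [-1.,  1., -1.,  0.]])
print(np.allclose(A @ Xss + Xss @ B, C))   # True: Xss is a skew symmetric solution of (1)

omega = 0.5
s = np.linalg.norm(A, 2)**2 + np.linalg.norm(B, 2)**2
mu1 = 0.25 / (omega * s)
mu2 = 0.25 / ((1 - omega) * s)

X = 1e-6 * np.ones((4, 4))
err0 = np.linalg.norm(0.5 * (X - X.T) - Xss)
for _ in range(20000):             # Algorithm 10, Step 3
    R1 = A @ X + X @ B - C
    R2 = B.T @ X + X @ A.T + C.T
    X1 = X - mu1 * (A.T @ R1 + R1 @ B.T)
    X2 = X - mu2 * (B @ R2 + R2 @ A)
    X = omega * X1 + (1 - omega) * X2
err = np.linalg.norm(0.5 * (X - X.T) - Xss)
print(err < err0)                  # the skew symmetric iterate approaches Xss
```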

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This project was supported by NSF China (no. 11471122), Anhui Provincial Natural Science Foundation (no. 1508085MA12), Key Projects of Anhui Provincial University Excellent Talent Support Program (no. gxyqZD2016188), and the University Natural Science Research Key Project of Anhui Province (no. KJ2015A161).

References

1. S. P. Bhattacharyya and E. de Souza, "Pole assignment via Sylvester's equation," Systems & Control Letters, vol. 1, no. 4, pp. 261–263, 1982.
2. G. R. Duan, "Solutions to matrix equation AV+BW=VF and their application to eigenstructure assignment in linear systems," IEEE Transactions on Automatic Control, vol. 38, no. 2, pp. 276–280, 1993.
3. G. R. Duan, "On the solution to the Sylvester matrix equation AV+BW=EVF," IEEE Transactions on Automatic Control, vol. 41, no. 4, pp. 612–614, 1996.
4. B. Zhou and G. R. Duan, "A new solution to the generalized Sylvester matrix equation AV−EVF=BW," Systems & Control Letters, vol. 55, no. 3, pp. 193–198, 2006.
5. T. S. Hu, Z. L. Lin, and J. Lam, "Unified gradient approach to performance optimization under a pole assignment constraint," Journal of Optimization Theory and Applications, vol. 121, no. 2, pp. 361–383, 2004.
6. J. Lam and W. Y. Yan, "A gradient flow approach to the robust pole-placement problem," International Journal of Robust and Nonlinear Control, vol. 5, no. 3, pp. 175–185, 1995.
7. J. Lam and W.-Y. Yan, "Pole assignment with optimal spectral conditioning," Systems & Control Letters, vol. 29, no. 5, pp. 241–253, 1997.
8. J. Lam, W.-Y. Yan, and T. Hu, "Pole assignment with eigenvalue and stability robustness," International Journal of Control, vol. 72, no. 13, pp. 1165–1174, 1999.
9. B. N. Datta, W.-W. Lin, and J.-N. Wang, "Robust partial pole assignment for vibrating systems with aerodynamic effects," IEEE Transactions on Automatic Control, vol. 51, no. 12, pp. 1979–1984, 2006.
10. B. Zhou and G. R. Duan, "Parametric approach for the normal Luenberger function observer design in second-order linear systems," in Proceedings of the 45th IEEE Conference on Decision and Control, pp. 1423–1428, San Diego, Calif, USA, 2006.
11. D. L. Chu and P. Van Dooren, "A novel numerical method for exact model matching problem with stability," Automatica, vol. 42, no. 10, pp. 1697–1704, 2006.
12. D. L. Chu, H. C. Chan, and D. W. C. Ho, "Regularization of singular systems by derivative and proportional output feedback," SIAM Journal on Matrix Analysis and Applications, vol. 19, no. 1, pp. 21–38, 1998.
13. D. L. Chu, V. Mehrmann, and N. K. Nichols, "Minimum norm regularization of descriptor systems by mixed output feedback," Linear Algebra and Its Applications, vol. 296, no. 1–3, pp. 39–77, 1999.
14. D. L. Chu and V. Mehrmann, "Disturbance decoupling for descriptor systems by state feedback," SIAM Journal on Control and Optimization, vol. 38, no. 6, pp. 1830–1858, 2000.
15. D. L. Chu and R. C. E. Tan, "Numerically reliable computing for the row by row decoupling problem with stability," SIAM Journal on Matrix Analysis and Applications, vol. 23, no. 4, pp. 1143–1170, 2002.
16. G. H. Golub, S. Nash, and C. Van Loan, "A Hessenberg–Schur method for the problem AX+XB=C," IEEE Transactions on Automatic Control, vol. 24, pp. 909–913, 1979.
17. R. H. Bartels and G. W. Stewart, "Algorithm 432: solution of the matrix equation AX+XB=C," Communications of the ACM, vol. 15, no. 9, pp. 820–826, 1972.
18. Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," Numerical Linear Algebra with Applications, vol. 13, no. 10, pp. 801–823, 2006.
19. Z.-Z. Bai, "On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations," Journal of Computational Mathematics, vol. 29, no. 2, pp. 185–198, 2011.
20. A. Bouhamidi and K. Jbilou, "A note on the numerical approximate solutions for generalized Sylvester matrix equations with applications," Applied Mathematics and Computation, vol. 206, no. 2, pp. 687–694, 2008.
21. A. Kaabi, F. Toutounian, and A. Kerayechian, "Preconditioned Galerkin and minimal residual methods for solving Sylvester equations," Applied Mathematics and Computation, vol. 181, no. 2, pp. 1208–1214, 2006.
22. A. Kaabi, A. Kerayechian, and F. Toutounian, "A new version of successive approximations method for solving Sylvester matrix equations," Applied Mathematics and Computation, vol. 186, no. 1, pp. 638–645, 2007.
23. Y.-Q. Lin, "Implicitly restarted global FOM and GMRES for nonsymmetric matrix equations and Sylvester equations," Applied Mathematics and Computation, vol. 167, no. 2, pp. 1004–1025, 2005.
24. Y. Lin, "Minimal residual methods augmented with eigenvectors for solving Sylvester equations and generalized Sylvester equations," Applied Mathematics and Computation, vol. 181, no. 1, pp. 487–499, 2006.
25. D. Khojasteh Salkuyeh and F. Toutounian, "New approaches for solving large Sylvester equations," Applied Mathematics and Computation, vol. 173, no. 1, pp. 9–18, 2006.
26. J.-J. Zhang, "A note on the iterative solutions of general coupled matrix equation," Applied Mathematics and Computation, vol. 217, no. 22, pp. 9380–9386, 2011.
27. F. Ding and T. Chen, "Hierarchical gradient-based identification of multivariable discrete-time systems," Automatica, vol. 41, no. 2, pp. 315–325, 2005.
28. F. Ding and T. Chen, "Gradient-based iterative algorithms for solving a class of matrix equations," IEEE Transactions on Automatic Control, vol. 50, no. 8, pp. 1216–1221, 2005.
29. F. Ding and T. Chen, "Hierarchical identification of lifted state-space models for general dual-rate systems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 6, pp. 1179–1187, 2005.
30. J.-P. Chehab and M. Raydan, "An implicit preconditioning strategy for large-scale generalized Sylvester equations," Applied Mathematics and Computation, vol. 217, no. 21, pp. 8793–8803, 2011.
31. F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," SIAM Journal on Control and Optimization, vol. 44, no. 6, pp. 2269–2284, 2006.
32. F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," Applied Mathematics and Computation, vol. 197, no. 1, pp. 41–50, 2008.
33. A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to the extended Sylvester-conjugate matrix equations," Applied Mathematics and Computation, vol. 217, no. 1, pp. 130–142, 2010.
34. Q. Niu, X. Wang, and L.-Z. Lu, "A relaxed gradient based algorithm for solving Sylvester equations," Asian Journal of Control, vol. 13, no. 3, pp. 461–464, 2011.
35. X. Wang, L. Dai, and D. Liao, "A modified gradient based algorithm for solving Sylvester equations," Applied Mathematics and Computation, vol. 218, no. 9, pp. 5620–5628, 2012.
36. Y.-J. Xie and C.-F. Ma, "The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester-transpose matrix equation," Applied Mathematics and Computation, vol. 273, pp. 1257–1269, 2016.
37. Y. Xie and C. Ma, "Iterative methods to solve the generalized coupled Sylvester-conjugate matrix equations for obtaining the centrally symmetric (centrally antisymmetric) matrix solutions," Journal of Applied Mathematics, vol. 2014, Article ID 515816, 17 pages, 2014.
38. Y. Xie, N. Huang, and C. Ma, "Iterative method to solve the generalized coupled Sylvester-transpose linear matrix equations over reflexive or anti-reflexive matrix," Computers & Mathematics with Applications, vol. 67, no. 11, pp. 2071–2084, 2014.