In this paper, we present two relaxed gradient based iterative (RGI) algorithms for computing the symmetric and skew symmetric solutions of the Sylvester matrix equation AX+XB=C. For both iterative methods, it is proved that the iterative solution converges to the true symmetric (skew symmetric) solution under appropriate assumptions, for any initial symmetric (skew symmetric) matrix X0. Finally, two numerical examples are given to illustrate the efficiency of the introduced iterative algorithms.
1. Introduction
For the convenience of our statements, the following notation will be used throughout the paper: $\mathbb{R}^{m\times n}$ represents the set of $m\times n$ real matrices. For $A\in\mathbb{R}^{m\times n}$, we write $A^T$, $R(A)$, $\mathrm{tr}(A)$, $\rho(A)$, $\lambda_{\max}(A)$, $\lambda_{\min}(A)$, $\|A\|_2$, and $\|A\|_F$ to denote the transpose, the range space, the trace, the spectral radius, the maximal eigenvalue, the minimal eigenvalue, the spectral norm, and the Frobenius norm of a matrix $A$, respectively; that is, $\|A\|_2=\sqrt{\lambda_{\max}(A^TA)}$ and $\|A\|_F=[\mathrm{tr}(A^TA)]^{1/2}$. $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal singular value and the minimal nonzero singular value of $A$. The symbol $I_n$ represents the identity matrix of order $n$, $\mathbf{1}_{m\times n}$ is the $m\times n$ matrix whose elements are all 1, and $\mathrm{cond}(A)=\sigma_{\max}(A)/\sigma_{\min}(A)$ is the condition number of the matrix $A$. The inner product on $\mathbb{R}^{m\times n}$ is defined as $\langle A,B\rangle=\mathrm{tr}(A^TB)$; in particular, $\|A\|_F^2=\mathrm{tr}(A^TA)=\mathrm{tr}(AA^T)=\langle A,A\rangle$. $A\otimes B$ denotes the Kronecker product, defined as $A\otimes B=(a_{ij}B)$, $i=1,\dots,m$, $j=1,\dots,n$. For any matrix $X=(x_1,x_2,\dots,x_n)\in\mathbb{R}^{m\times n}$, the vector operator is defined as $\mathrm{vec}(X)=(x_1^T,x_2^T,\dots,x_n^T)^T\in\mathbb{R}^{mn}$. Using the vector operator and the Kronecker product, we have $\mathrm{vec}(AXB)=(B^T\otimes A)\mathrm{vec}(X)$.
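The identity $\mathrm{vec}(AXB)=(B^T\otimes A)\mathrm{vec}(X)$ (with the column-stacking vec operator defined above) can be checked numerically; the following is a minimal self-contained Python sketch, where the $2\times2$ matrices are arbitrary illustrative values.

```python
def mm(A, B):
    # Matrix product of two lists-of-rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    # Kronecker product A (x) B = (a_ij * B), laid out as a block matrix.
    return [[A[i][j] * B[p][q] for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

def vec(X):
    # Column-stacking operator: vec(X) = (x_1^T, ..., x_n^T)^T.
    m, n = len(X), len(X[0])
    return [X[i][j] for j in range(n) for i in range(m)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Arbitrary test matrices (illustrative values only).
A = [[1, 2], [3, 4]]
X = [[5, 6], [7, 8]]
B = [[9, 10], [11, 12]]

lhs = vec(mm(mm(A, X), B))                    # vec(AXB)
rhs = matvec(kron(transpose(B), A), vec(X))   # (B^T (x) A) vec(X)
```

Both sides agree entrywise, confirming the identity for this data.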
We consider the symmetric (skew symmetric) solution of the Sylvester matrix equation
(1) $AX+XB=C$,
where $A,B,C\in\mathbb{R}^{n\times n}$ are given and $X\in\mathbb{R}^{n\times n}$ is the unknown matrix.
The Sylvester matrix equation (1) has many applications in linear system theory, for example, pole/eigenstructure assignment [1–4], robust pole assignment [5–8], robust partial pole assignment [9], observer design [10], model matching problem [11], regularization of descriptor systems [12, 13], disturbance decoupling problem [14], and noninteracting control [15].
As is well known, (1) has a unique solution if and only if $A$ and $-B$ possess no common eigenvalues [16], and the solution can then be computed by solving the linear system $(I\otimes A+B^T\otimes I)\mathrm{vec}(X)=\mathrm{vec}(C)$. However, this method greatly increases the computational cost and storage requirements, so the approach is applicable only to small Sylvester equations.
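As a concrete illustration of this Kronecker-product approach, one can assemble $(I\otimes A+B^T\otimes I)$ and solve the resulting $n^2\times n^2$ system directly. The Python sketch below uses small hypothetical matrices, chosen so that $A$ and $-B$ share no eigenvalues and $C$ is built from the known solution $\mathrm{diag}(1,2)$.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def kron(A, B):
    return [[A[i][j] * B[p][q] for j in range(len(A[0])) for q in range(len(B[0]))]
            for i in range(len(A)) for p in range(len(B))]

def vec(X):
    m, n = len(X), len(X[0])
    return [X[i][j] for j in range(n) for i in range(m)]

def solve(M, b):
    # Gaussian elimination with partial pivoting (adequate for tiny systems).
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][k] * x[k] for k in range(r + 1, n))) / aug[r][r]
    return x

A = [[3.0, 1.0], [0.0, 2.0]]   # eigenvalues {3, 2}
B = [[2.0, 0.0], [1.0, 3.0]]   # eigenvalues of -B are {-2, -3}: disjoint from A's
C = [[5.0, 2.0], [2.0, 10.0]]  # constructed so that X = diag(1, 2) is the solution

n = 2
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
M = [[p + q for p, q in zip(r1, r2)] for r1, r2 in zip(kron(I, A), kron(transpose(B), I))]
x = solve(M, vec(C))
X = [[x[j * n + i] for j in range(n)] for i in range(n)]  # un-vec (column-stacking)

residual = max(abs(a + b - c)
               for ra, rb, rc in zip(mm(A, X), mm(X, B), C)
               for a, b, c in zip(ra, rb, rc))
```

Already for this $2\times2$ example the linear system is $4\times4$; the $n^2\times n^2$ growth is exactly the storage drawback noted above.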
Due to these drawbacks, many other solution methods have appeared in the literature. The idea of transforming the coefficient matrices into Schur or Hessenberg form to solve (1) was presented in [16, 17]. When the matrix equation (1) is inconsistent, a finite iterative method for computing its Hermitian minimum norm solutions was presented in [18]. An efficient iterative method based on Hermitian and skew Hermitian splitting was proposed in [19]. Krylov subspace based methods were presented in [20–26] for solving Sylvester equations and generalized Sylvester equations. Recently, based on the hierarchical identification principle [27–29], efficient gradient based iterative algorithms for solving generalized Sylvester equations and coupled (general coupled) Sylvester equations were proposed in [27, 30–32]. In particular, for Sylvester equations of form (1), it was shown in [33] that the unknown matrix can be identified by a gradient based iterative algorithm. The convergence properties of these methods were investigated in [27, 32]. Niu et al. [34] proposed a relaxed gradient based iterative algorithm for solving Sylvester equations. Wang et al. [35] proposed a modified gradient based iterative algorithm for solving Sylvester equation (1). More recently, Xie and Ma [36] gave an accelerated gradient based iterative algorithm for solving (1). In [37, 38] Xie et al. studied special structured solutions of matrix equation (1) by iterative methods.
In this paper, inspired by [28, 34–36], we first derive a relaxed gradient based iterative (RGI) algorithm for the symmetric solution of matrix equation (1). Theoretical analysis shows that our method converges to the exact symmetric solution for any initial value under appropriate assumptions. The proposed algorithm is then also applied to the skew symmetric solution of matrix equation (1). Numerical results illustrate that the proposed method is correct and feasible. We point out that the ideas in this paper differ in some respects from those in [28, 34–36].
The rest of the paper is organized as follows. In Section 2, some preliminaries are provided. In Section 3, the relaxed gradient based iterative methods are studied. Finally, numerical examples are included to verify the convergence of the algorithms.
2. Preliminaries
In this section, we review the ideas and principles of the gradient based iterative (GI) method, the relaxed gradient based iterative (RGI) method, and the modified gradient based iterative (MGI) method.
Let $\mu>0$ be the convergence factor (step size). The gradient based iterative method for $Ax=b$ is as follows:
(2) $x_k=x_{k-1}-\mu A^T\left(Ax_{k-1}-b\right)$.
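Iteration (2) can be sketched in a few lines of Python; the system below is a small hypothetical full column-rank example, with $\mu$ chosen below $2/\|A\|_2^2$ (bounded here via the Frobenius norm, since $\|A\|_2\le\|A\|_F$).

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Illustrative data: A has full column rank, b = A @ [1, 2].
A = [[2.0, 0.0], [1.0, 1.0]]
b = [2.0, 3.0]

# ||A||_2^2 <= ||A||_F^2 = 6, so mu = 0.2 < 2/6 satisfies the bound of Lemma 1.
mu = 0.2
x = [0.0, 0.0]
for _ in range(500):
    r = [ax - bi for ax, bi in zip(matvec(A, x), b)]  # A x_{k-1} - b
    g = matvec(transpose(A), r)                       # A^T (A x_{k-1} - b)
    x = [xi - mu * gi for xi, gi in zip(x, g)]
```

The iterate approaches the exact solution $[1,2]^T$ at a linear rate governed by the spectrum of $I-\mu A^TA$.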
The convergence of the gradient based iterative method is stated as follows.
Lemma 1 (see [32]).
Assume that the matrix $A$ has full column rank and $0<\mu<2/\|A\|_2^2$; then the gradient based iterative sequence $\{x(k)\}$ in (2) converges to $x^*$; that is, $\lim_{k\to\infty}x(k)=x^*$, or the error $\tilde{x}(k)=x(k)-x^*$ converges to zero for any initial value $x(0)$. Moreover, the fastest convergence rate is attained at the optimal convergence factor $\mu_{\mathrm{opt}}$, where
(3) $\beta(\mu_{\mathrm{opt}})=\dfrac{\mathrm{cond}^2(A)-1}{\mathrm{cond}^2(A)+1},\qquad \mu_{\mathrm{opt}}=\dfrac{2}{\sigma_{\max}^2(A)+\sigma_{\min}^2(A)}.$
In this case, the error vector $\tilde{x}(k)$ satisfies
(4) $\|\tilde{x}(k)\|\le\beta^k(\mu_{\mathrm{opt}})\,\|\tilde{x}(0)\|,\quad k\ge0.$
In [28], Ding and Chen presented the following gradient based algorithm for solving (1).
Algorithm 2 (see [28] (the gradient based iterative (GI) algorithm)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$ and a small positive tolerance $\epsilon$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0)=(X_1(0)+X_2(0))/2$. Set $k:=1$.
Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
(5) $X_1(k)=X(k-1)-\mu A^T\left(AX(k-1)+X(k-1)B-C\right)$,
$X_2(k)=X(k-1)-\mu\left(AX(k-1)+X(k-1)B-C\right)B^T$,
$X(k)=\dfrac{X_1(k)+X_2(k)}{2}$.
Step 4. Set $k:=k+1$; return to Step 2.
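The steps above can be sketched compactly in Python. The matrices below are small hypothetical data ($C$ is built from the known solution $\mathrm{diag}(1,2)$), and $\mu$ is chosen inside the convergence interval discussed for Algorithm 2, using Frobenius norms as upper bounds on the squared maximal singular values.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def axpy(M, N, s):
    # M + s * N, elementwise.
    return [[x + s * y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

def fro(M):
    return sum(x * x for row in M for x in row) ** 0.5

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[2.0, 0.0], [1.0, 3.0]]
C = [[5.0, 2.0], [2.0, 10.0]]  # built so that the exact solution is diag(1, 2)

mu = 0.05  # < 2 / (||A||_F^2 + ||B||_F^2) = 2/28
X = [[0.0, 0.0], [0.0, 0.0]]   # X(0) from X1(0) = X2(0) = 0
for _ in range(1500):
    R = axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0)   # A X + X B - C
    X1 = axpy(X, mm(transpose(A), R), -mu)             # X - mu A^T R
    X2 = axpy(X, mm(R, transpose(B)), -mu)             # X - mu R B^T
    X = [[(p + q) / 2 for p, q in zip(r1, r2)] for r1, r2 in zip(X1, X2)]

res = fro(axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0))
```

Averaging the two half-steps is equivalent to a single update $X\leftarrow X-(\mu/2)(A^TR+RB^T)$, i.e. gradient descent on the squared residual.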
The authors of [28] also pointed out that if the convergence factor $\mu$ is chosen in $\left(0,\,2/\left(\sigma_{\max}^2(A)+\sigma_{\max}^2(B)\right)\right)$, Algorithm 2 converges to the exact solution of (1).
Niu et al. [34] gave a relaxed gradient based iterative algorithm for solving (1). The following algorithm has been proven to converge when $\mu$ lies in $\left(0,\,2/\left[\omega(1-\omega)\left(\sigma_{\max}^2(A)+\sigma_{\max}^2(B)\right)+\sigma_{\max}(BA^T)\right]\right)$.
Algorithm 3 (see [34] (the relaxed gradient based iterative (RGI) algorithm)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small positive tolerance $\epsilon$, and an appropriate positive number $\omega$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0)=\omega X_1(0)+(1-\omega)X_2(0)$. Set $k:=1$.
Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
(6) $X_1(k)=X(k-1)-(1-\omega)\mu A^T\left(AX(k-1)+X(k-1)B-C\right)$,
$X_2(k)=X(k-1)-\omega\mu\left(AX(k-1)+X(k-1)B-C\right)B^T$,
$X(k)=\omega X_1(k)+(1-\omega)X_2(k)$.
Step 4. Set $k:=k+1$; return to Step 2.
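A Python sketch of Algorithm 3 on small hypothetical data ($C$ is built from the known solution $\mathrm{diag}(1,2)$; $\omega$ and $\mu$ are illustrative choices). Combining the two weighted half-steps shows that RGI moves along $-\omega(1-\omega)\mu\,(A^TR+RB^T)$, where $R$ is the residual.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def axpy(M, N, s):
    # M + s * N, elementwise.
    return [[x + s * y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

def fro(M):
    return sum(x * x for row in M for x in row) ** 0.5

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[2.0, 0.0], [1.0, 3.0]]
C = [[5.0, 2.0], [2.0, 10.0]]  # built so that the exact solution is diag(1, 2)

omega, mu = 0.4, 0.05          # illustrative relaxation and step parameters
X = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2500):
    R = axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0)      # A X + X B - C
    X1 = axpy(X, mm(transpose(A), R), -(1 - omega) * mu)  # X - (1-w) mu A^T R
    X2 = axpy(X, mm(R, transpose(B)), -omega * mu)        # X - w mu R B^T
    X = axpy([[omega * v for v in row] for row in X1], X2, 1 - omega)

res = fro(axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0))
```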
Recently, in [35] Wang et al. proposed a modified gradient based iterative (MGI) algorithm to solve (1). The main difference is that, in the step computing $X_2(k)$, the most recent approximation $X_1(k)$ is used immediately to update $X(k-1)$.
Algorithm 4 (see [35] (the modified gradient based iterative (MGI) algorithm)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$ and a small positive tolerance $\epsilon$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0)=(X_1(0)+X_2(0))/2$. Set $k:=1$.
Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequence
(7) $X_1(k)=X(k-1)-\mu A^T\left(AX(k-1)+X(k-1)B-C\right)$.
Step 4. Compute
(8) $X(k-1)=\dfrac{X_1(k)+X_2(k-1)}{2}$.
Step 5. Update the sequence
(9) $X_2(k)=X(k-1)-\mu\left(AX(k-1)+X(k-1)B-C\right)B^T$.
Step 6. Compute
(10) $X(k)=\dfrac{X_1(k)+X_2(k)}{2}$.
Step 7. Set $k:=k+1$; return to Step 2.
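A Python sketch of Algorithm 4 on small hypothetical data ($C$ is built from the known solution $\mathrm{diag}(1,2)$; the step size is an illustrative small value). The intermediate averaging of Steps 4–6 is implemented literally, with the residual recomputed after Step 4.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def axpy(M, N, s):
    return [[x + s * y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

def fro(M):
    return sum(x * x for row in M for x in row) ** 0.5

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[2.0, 0.0], [1.0, 3.0]]
C = [[5.0, 2.0], [2.0, 10.0]]  # built so that the exact solution is diag(1, 2)

def resid(X):
    return axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0)  # A X + X B - C

mu = 0.02                       # illustrative small step size
X = [[0.0, 0.0], [0.0, 0.0]]
X2 = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5000):
    X1 = axpy(X, mm(transpose(A), resid(X)), -mu)                            # Step 3
    Xh = [[(p + q) / 2 for p, q in zip(r1, r2)] for r1, r2 in zip(X1, X2)]   # Step 4
    X2 = axpy(Xh, mm(resid(Xh), transpose(B)), -mu)                          # Step 5
    X = [[(p + q) / 2 for p, q in zip(r1, r2)] for r1, r2 in zip(X1, X2)]    # Step 6

res = fro(resid(X))
```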
More recently, Xie and Ma [36] presented the following AGBI algorithm for solving (1) based on the idea of MGI.
Algorithm 5 (see [36] (the accelerated gradient based iterative (AGBI) algorithm)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small positive tolerance $\epsilon$, and an appropriate positive number $\omega$. Choose the initial matrices $X_1(0)$ and $X_2(0)$. Compute $X(0)=\omega X_1(0)+(1-\omega)X_2(0)$. Set $k:=1$.
Step 2. If $\delta_k=\|AX(k)+X(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequence
(11) $X_1(k)=X(k-1)-(1-\omega)\mu A^T\left(AX(k-1)+X(k-1)B-C\right)$.
Step 4. Compute
(12) $X(k-1)=\omega X_1(k)+(1-\omega)X_2(k-1)$.
Step 5. Update the sequence
(13) $X_2(k)=X(k-1)-\omega\mu\left(AX(k-1)+X(k-1)B-C\right)B^T$.
Step 6. Compute
(14) $X(k)=\omega X_1(k)+(1-\omega)X_2(k)$.
Step 7. Set $k:=k+1$; return to Step 2.
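A Python sketch of Algorithm 5 on small hypothetical data ($C$ is built from the known solution $\mathrm{diag}(1,2)$; $\omega$ and $\mu$ are illustrative choices). The structure mirrors MGI, with the plain averages replaced by $\omega$-weighted combinations.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def axpy(M, N, s):
    return [[x + s * y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

def fro(M):
    return sum(x * x for row in M for x in row) ** 0.5

A = [[3.0, 1.0], [0.0, 2.0]]
B = [[2.0, 0.0], [1.0, 3.0]]
C = [[5.0, 2.0], [2.0, 10.0]]  # built so that the exact solution is diag(1, 2)

def resid(X):
    return axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0)  # A X + X B - C

omega, mu = 0.5, 0.02           # illustrative parameters
X = [[0.0, 0.0], [0.0, 0.0]]
X2 = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(5000):
    X1 = axpy(X, mm(transpose(A), resid(X)), -(1 - omega) * mu)         # Step 3
    Xh = axpy([[omega * v for v in row] for row in X1], X2, 1 - omega)  # Step 4
    X2 = axpy(Xh, mm(resid(Xh), transpose(B)), -omega * mu)             # Step 5
    X = axpy([[omega * v for v in row] for row in X1], X2, 1 - omega)   # Step 6

res = fro(resid(X))
```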
3. Main Results
In this section, we first establish a necessary and sufficient condition for the symmetric solution of (1). Then a relaxed gradient based iterative algorithm for the symmetric solution of equation (1) is proposed. Following the same line, a relaxed gradient based iterative algorithm for the skew symmetric solution of equation (1) is also presented.
Theorem 6.
The matrix equation (1) has a unique symmetric solution $X_s$ if and only if the pair of matrix equations
(15) $AX+XB=C$, $\qquad B^TX+XA^T=C^T$
has a unique common solution $X^*$, and in that case $X_s=(X^*+(X^*)^T)/2$.
Proof.
If $X_s$ is the unique symmetric solution of (1), then $(X_s)^T=X_s$ and $AX_s+X_sB=C$; further, we have
(16) $B^TX_s+X_sA^T=\left(AX_s+X_sB\right)^T=C^T.$
This shows that $X_s$ is also a solution of the pair of matrix equations (15).
Conversely, if the system of matrix equations (15) has a common solution $X^*$, let us denote $X_s=(X^*+(X^*)^T)/2$; then we can check that
(17) $AX_s+X_sB=A\dfrac{X^*+(X^*)^T}{2}+\dfrac{X^*+(X^*)^T}{2}B=\dfrac{AX^*+X^*B}{2}+\dfrac{A(X^*)^T+(X^*)^TB}{2}=\dfrac{AX^*+X^*B}{2}+\dfrac{\left(B^TX^*+X^*A^T\right)^T}{2}=\dfrac{C+(C^T)^T}{2}=C.$
This implies that $X_s$ is the unique symmetric solution of (1).
According to Theorem 6, once the unique common solution $X^*$ of equations (15) has been obtained, the unique symmetric solution of (1) is $X_s=(X^*+(X^*)^T)/2$. Guided by this, we construct a relaxed gradient based iterative algorithm for the symmetric solution of (1).
Algorithm 7 (the relaxed gradient based iterative (RGI) algorithm for symmetric solution of (1)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small positive tolerance $\epsilon$, and an appropriate positive number $\omega$ with $0<\omega<1$. Choose any initial matrix $X(0)$. Set $k:=1$.
Step 2. If $\delta_k=\|AX_s(k)+X_s(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
(18) $X_1(k)=X(k-1)-\mu_1\left[A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right]$,
$X_2(k)=X(k-1)-\mu_2\left[B\left(B^TX(k-1)+X(k-1)A^T-C^T\right)+\left(B^TX(k-1)+X(k-1)A^T-C^T\right)A\right]$,
$X(k)=\omega X_1(k)+(1-\omega)X_2(k)$, $\qquad X_s(k)=\dfrac{X(k)+X^T(k)}{2}$.
Step 4. Set $k:=k+1$; return to Step 2.
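A Python sketch of Algorithm 7 on small hypothetical data: taking $B=A^T$ makes equation (1) a Lyapunov-type equation whose unique solution is symmetric, and $C$ is built from the known symmetric solution $\mathrm{diag}(1,2)$. The steps $\mu_1,\mu_2$ are chosen inside the bounds of Theorem 8 below, using $\|A\|_2^2+\|B\|_2^2\le\|A\|_F^2+\|B\|_F^2=28$.

```python
def mm(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def axpy(M, N, s):
    return [[x + s * y for x, y in zip(rm, rn)] for rm, rn in zip(M, N)]

def fro(M):
    return sum(x * x for row in M for x in row) ** 0.5

A = [[3.0, 1.0], [0.0, 2.0]]
B = transpose(A)               # B = A^T: the unique solution is then symmetric
C = [[6.0, 2.0], [2.0, 8.0]]   # built from the symmetric solution diag(1, 2)

omega, mu1, mu2 = 0.5, 0.02, 0.02  # inside the bounds of Theorem 8
X = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(3000):
    R1 = axpy(axpy(mm(A, X), mm(X, B), 1.0), C, -1.0)              # A X + X B - C
    D1 = axpy(mm(transpose(A), R1), mm(R1, transpose(B)), 1.0)     # A^T R1 + R1 B^T
    X1 = axpy(X, D1, -mu1)
    R2 = axpy(axpy(mm(transpose(B), X), mm(X, transpose(A)), 1.0),
              transpose(C), -1.0)                                  # B^T X + X A^T - C^T
    D2 = axpy(mm(B, R2), mm(R2, A), 1.0)                           # B R2 + R2 A
    X2 = axpy(X, D2, -mu2)
    X = axpy([[omega * v for v in row] for row in X1], X2, 1 - omega)

Xs = [[(X[i][j] + X[j][i]) / 2 for j in range(2)] for i in range(2)]
res = fro(axpy(axpy(mm(A, Xs), mm(Xs, B), 1.0), C, -1.0))
```

By construction $X_s(k)$ is symmetric at every step; the residual check confirms it also solves (1) in the limit.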
In the following paragraph, we will investigate the convergence of Algorithm 7.
Theorem 8.
Assume that the matrix equations (15) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 7 converges to $X^*$ provided that
(19) $0<\mu_1<\dfrac{1}{\omega\left(\|A\|_2^2+\|B\|_2^2\right)},\qquad 0<\mu_2<\dfrac{1}{(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)};$
that is, $\lim_{k\to\infty}X(k)=X^*$, or the error $X(k)-X^*$ converges to zero for any initial value $X(0)$.
Furthermore, the sequence $\{X_s(k)\}$ converges to $X_s$, where $X_s$ is the unique symmetric solution of (1).
Proof.
Define the error matrix and the residuals
(20) $\tilde{X}(k)=X(k)-X^*,\qquad \theta_1(k-1):=A\tilde{X}(k-1)+\tilde{X}(k-1)B,\qquad \theta_2(k-1):=B^T\tilde{X}(k-1)+\tilde{X}(k-1)A^T.$
We have
(21) $\tilde{X}(k)=X(k)-X^*=\omega X_1(k)+(1-\omega)X_2(k)-X^*=\omega\left(X_1(k)-X^*\right)+(1-\omega)\left(X_2(k)-X^*\right)$
$=\omega\left[X(k-1)-\mu_1\left(A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right)-X^*\right]$
$\quad+(1-\omega)\left[X(k-1)-\mu_2\left(B\left(B^TX(k-1)+X(k-1)A^T-C^T\right)+\left(B^TX(k-1)+X(k-1)A^T-C^T\right)A\right)-X^*\right]$
$=\tilde{X}(k-1)-\omega\mu_1\left(A^T\theta_1(k-1)+\theta_1(k-1)B^T\right)-(1-\omega)\mu_2\left(B\theta_2(k-1)+\theta_2(k-1)A\right).$
The following two inequalities are easily derived:
(22) $\left\|A^T\theta_1(k-1)+\theta_1(k-1)B^T\right\|_F^2\le\left(\|A\|_2^2+\|B\|_2^2\right)\left\|\theta_1(k-1)\right\|_F^2,\qquad \left\|B\theta_2(k-1)+\theta_2(k-1)A\right\|_F^2\le\left(\|A\|_2^2+\|B\|_2^2\right)\left\|\theta_2(k-1)\right\|_F^2.$
Moreover, writing $\theta_i=\theta_i(k-1)$ for brevity,
(23) $\|\tilde{X}(k)\|_F^2=\left\|\tilde{X}(k-1)-\omega\mu_1\left(A^T\theta_1+\theta_1B^T\right)-(1-\omega)\mu_2\left(B\theta_2+\theta_2A\right)\right\|_F^2$
$=\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left\|A^T\theta_1+\theta_1B^T\right\|_F^2+(1-\omega)^2\mu_2^2\left\|B\theta_2+\theta_2A\right\|_F^2$
$\quad-2\omega\mu_1\,\mathrm{tr}\left(\tilde{X}(k-1)^T\left(A^T\theta_1+\theta_1B^T\right)\right)-2(1-\omega)\mu_2\,\mathrm{tr}\left(\tilde{X}(k-1)^T\left(B\theta_2+\theta_2A\right)\right)$
$\quad+2\omega(1-\omega)\mu_1\mu_2\,\mathrm{tr}\left(\left(A^T\theta_1+\theta_1B^T\right)^T\left(B\theta_2+\theta_2A\right)\right)$
$\le\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2+(1-\omega)^2\mu_2^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2$
$\quad-2\omega\mu_1\|\theta_1\|_F^2-2(1-\omega)\mu_2\|\theta_2\|_F^2+2\omega(1-\omega)\mu_1\mu_2\left\langle A^T\theta_1+\theta_1B^T,\,B\theta_2+\theta_2A\right\rangle$
$\le\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2+(1-\omega)^2\mu_2^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2$
$\quad-2\omega\mu_1\|\theta_1\|_F^2-2(1-\omega)\mu_2\|\theta_2\|_F^2+2\,\omega\mu_1\left\|A^T\theta_1+\theta_1B^T\right\|_F\,(1-\omega)\mu_2\left\|B\theta_2+\theta_2A\right\|_F$
$\le\|\tilde{X}(k-1)\|_F^2+\omega^2\mu_1^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_1\|_F^2+(1-\omega)^2\mu_2^2\left(\|A\|_2^2+\|B\|_2^2\right)\|\theta_2\|_F^2$
$\quad-2\omega\mu_1\|\theta_1\|_F^2-2(1-\omega)\mu_2\|\theta_2\|_F^2+\omega^2\mu_1^2\left\|A^T\theta_1+\theta_1B^T\right\|_F^2+(1-\omega)^2\mu_2^2\left\|B\theta_2+\theta_2A\right\|_F^2$
$\le\|\tilde{X}(k-1)\|_F^2-2\omega\mu_1\left(1-\omega\mu_1\left(\|A\|_2^2+\|B\|_2^2\right)\right)\|\theta_1\|_F^2-2(1-\omega)\mu_2\left(1-(1-\omega)\mu_2\left(\|A\|_2^2+\|B\|_2^2\right)\right)\|\theta_2\|_F^2.$
Therefore, the above inequality implies that
(24) $\|\tilde{X}(k)\|_F^2\le\|\tilde{X}(0)\|_F^2-2\omega\mu_1\left(1-\omega\mu_1\left(\|A\|_2^2+\|B\|_2^2\right)\right)\sum_{i=1}^{k}\|\theta_1(i-1)\|_F^2-2(1-\omega)\mu_2\left(1-(1-\omega)\mu_2\left(\|A\|_2^2+\|B\|_2^2\right)\right)\sum_{i=1}^{k}\|\theta_2(i-1)\|_F^2.$
It follows from $0<\mu_1<1/\left[\omega\left(\|A\|_2^2+\|B\|_2^2\right)\right]$ and $0<\mu_2<1/\left[(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)\right]$ that
(25) $\sum_{i=1}^{\infty}\|\theta_1(i-1)\|_F^2<\infty,\qquad \sum_{i=1}^{\infty}\|\theta_2(i-1)\|_F^2<\infty.$
The two inequalities (25) imply that
(26) $\lim_{k\to\infty}\theta_1(k-1)=0,\qquad \lim_{k\to\infty}\theta_2(k-1)=0.$
In other words,
(27) $\lim_{k\to\infty}\left(AX(k-1)+X(k-1)B\right)=C,\qquad \lim_{k\to\infty}\left(B^TX(k-1)+X(k-1)A^T\right)=C^T.$
Since the pair (15) has a unique common solution, (27) implies that
(28) $\lim_{k\to\infty}X(k)=X^*.$
From Theorem 6 and the limit (28), we have
(29) $\lim_{k\to\infty}X_s(k)=\lim_{k\to\infty}\dfrac{X(k)+X^T(k)}{2}=\dfrac{X^*+(X^*)^T}{2}=X_s.$
Following the same line, the idea of Algorithm 7 can be extended to solve the skew symmetric solution of (1). First, we need the following theorem.
Theorem 9.
The matrix equation (1) has a unique skew symmetric solution $X_{ss}$ if and only if the pair of matrix equations
(30) $AX+XB=C$, $\qquad B^TX+XA^T=-C^T$
has a unique common solution $X^*$, and in that case $X_{ss}=(X^*-(X^*)^T)/2$.
The relaxed gradient based iterative algorithm for solving the skew symmetric solution of (1) can be stated as follows.
Algorithm 10 (the relaxed gradient based iterative (RGI) algorithm for skew symmetric solution of (1)).
Step 1. Input matrices $A,B,C\in\mathbb{R}^{n\times n}$, a small positive tolerance $\epsilon$, and an appropriate positive number $\omega$ with $0<\omega<1$. Choose any initial matrix $X(0)$. Set $k:=1$.
Step 2. If $\delta_k=\|AX_{ss}(k)+X_{ss}(k)B-C\|_F/\|C\|_F<\epsilon$, stop; otherwise, go to Step 3.
Step 3. Update the sequences
(31) $X_1(k)=X(k-1)-\mu_1\left[A^T\left(AX(k-1)+X(k-1)B-C\right)+\left(AX(k-1)+X(k-1)B-C\right)B^T\right]$,
$X_2(k)=X(k-1)-\mu_2\left[B\left(B^TX(k-1)+X(k-1)A^T+C^T\right)+\left(B^TX(k-1)+X(k-1)A^T+C^T\right)A\right]$,
$X(k)=\omega X_1(k)+(1-\omega)X_2(k)$, $\qquad X_{ss}(k)=\dfrac{X(k)-X^T(k)}{2}$.
Step 4. Set $k:=k+1$; return to Step 2.
Similarly, we have the following theorem, which ensures the convergence of Algorithm 10.
Theorem 11.
Assume that the matrix equations (30) have a unique common solution $X^*$; then the iterative sequence $\{X(k)\}$ generated by Algorithm 10 converges to $X^*$ provided that
(32) $0<\mu_1<\dfrac{1}{\omega\left(\|A\|_2^2+\|B\|_2^2\right)},\qquad 0<\mu_2<\dfrac{1}{(1-\omega)\left(\|A\|_2^2+\|B\|_2^2\right)};$
that is, $\lim_{k\to\infty}X(k)=X^*$, or the error $X(k)-X^*$ converges to zero for any initial value $X(0)$.
Furthermore, the sequence {Xss(k)} converges to Xss, where Xss is the unique skew symmetric solution of (1).
4. Numerical Examples
In this section, two numerical examples are used to show the efficiency of the RGI method. All computations were performed on an Intel® Core™ i7-4500U CPU @ 1.80 GHz using MATLAB 7.0. EER denotes the Frobenius norm of the absolute error matrix, defined as $\mathrm{EER}=\|X(k)-X^*\|_F$, where $X(k)$ is the $k$th iterate of the RGI method.
Example 1.
In matrix equation (1), we choose
(33) $A=\begin{pmatrix}4.5&3.5&3.0&-5.5\\-5.0&-6.0&-3.0&2.0\\4.0&2.0&5.0&-3.0\\1.5&1.5&0.0&-0.5\end{pmatrix},\quad B=\begin{pmatrix}-1.5&-6.5&-3.0&-2.5\\1.0&6.0&3.0&1.0\\-4.0&-5.0&-4.0&0.0\\-1.5&-1.5&0.0&-0.5\end{pmatrix},\quad C=\begin{pmatrix}-5.5&12.5&11.0&4.5\\3.5&-3.5&2.0&-4.5\\-8.5&-10.5&-1.0&-1.5\\2.0&0.0&0.0&-1.0\end{pmatrix}.$
It is easy to show that the matrix equation (1) is consistent and has a unique symmetric solution, given by
(34) $X_s=\begin{pmatrix}-1&1&0&1\\1&0&-1&0\\0&-1&1&-1\\1&0&-1&0\end{pmatrix}.$
Taking $X(0)=10^{-6}\mathbf{1}_{4\times4}$ and applying the RGI method (Algorithm 7) to compute the symmetric solution of $AX+XB=C$, we have $\|A\|_2^2+\|B\|_2^2=962.2175$. Taking $\omega=0.4$, $\mu_1=0.0026$, and $\mu_2=0.0017$, the iterative errors $\delta_k$ versus $k$ are shown in Figure 1, where $\delta_k=\|X_s(k)-X_s\|_F^2$.
Figure 1: Convergence curve of the symmetric solution ($\omega=0.4$, $\mu_1=0.0026$, $\mu_2=0.0017$).
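The data of Example 1 can be checked directly: substituting the stated $X_s$ into $AX+XB-C$ gives the zero matrix, and $X_s$ is symmetric. A short Python verification:

```python
def mm(M, N):
    # Matrix product of two lists-of-rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)] for row in M]

A = [[4.5, 3.5, 3.0, -5.5], [-5.0, -6.0, -3.0, 2.0],
     [4.0, 2.0, 5.0, -3.0], [1.5, 1.5, 0.0, -0.5]]
B = [[-1.5, -6.5, -3.0, -2.5], [1.0, 6.0, 3.0, 1.0],
     [-4.0, -5.0, -4.0, 0.0], [-1.5, -1.5, 0.0, -0.5]]
C = [[-5.5, 12.5, 11.0, 4.5], [3.5, -3.5, 2.0, -4.5],
     [-8.5, -10.5, -1.0, -1.5], [2.0, 0.0, 0.0, -1.0]]
Xs = [[-1, 1, 0, 1], [1, 0, -1, 0], [0, -1, 1, -1], [1, 0, -1, 0]]

AX, XB = mm(A, Xs), mm(Xs, B)
residual = max(abs(AX[i][j] + XB[i][j] - C[i][j]) for i in range(4) for j in range(4))
symmetric = all(Xs[i][j] == Xs[j][i] for i in range(4) for j in range(4))
```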
Example 2.
In matrix equation (1), if we choose
(35) $A=\begin{pmatrix}4.5&8.5&3.0&-5.5\\-5.0&-9.0&-3.0&7.0\\8.0&14.0&5.0&-12.0\\1.5&1.5&0.0&-0.5\end{pmatrix},\quad B=\begin{pmatrix}-1.5&-6.5&-3.0&-2.5\\1.0&6.0&3.0&1.0\\-4.0&-10.0&-4.0&0.0\\-1.5&-1.5&0.0&-0.5\end{pmatrix},\quad C=\begin{pmatrix}-11.0&8.0&28.5&0.5\\7.5&-2.5&-27.0&6.5\\-19.5&-19.5&34.0&-3.5\\4.0&25.0&13.5&3.5\end{pmatrix},$
it is easy to check that the matrix equation (1) is consistent and has a unique skew symmetric solution, computed as
(36) $X_{ss}=\begin{pmatrix}0&2&0&1\\-2&0&2&-1\\0&-2&0&1\\-1&1&-1&0\end{pmatrix}.$
Also taking $X(0)=10^{-6}\mathbf{1}_{4\times4}$, $\omega=0.5$, and $\mu_1=\mu_2=0.0021$, and using Algorithm 10, the convergence curve of the skew symmetric solution is shown in Figure 2.
Figure 2: Convergence curve of the skew symmetric solution ($\omega=0.5$, $\mu_1=0.0021$, $\mu_2=0.0021$).
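As with Example 1, the data of Example 2 can be checked directly: the stated $X_{ss}$ satisfies $AX+XB=C$ exactly and is skew symmetric. A short Python verification:

```python
def mm(M, N):
    # Matrix product of two lists-of-rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*N)] for row in M]

A = [[4.5, 8.5, 3.0, -5.5], [-5.0, -9.0, -3.0, 7.0],
     [8.0, 14.0, 5.0, -12.0], [1.5, 1.5, 0.0, -0.5]]
B = [[-1.5, -6.5, -3.0, -2.5], [1.0, 6.0, 3.0, 1.0],
     [-4.0, -10.0, -4.0, 0.0], [-1.5, -1.5, 0.0, -0.5]]
C = [[-11.0, 8.0, 28.5, 0.5], [7.5, -2.5, -27.0, 6.5],
     [-19.5, -19.5, 34.0, -3.5], [4.0, 25.0, 13.5, 3.5]]
Xss = [[0, 2, 0, 1], [-2, 0, 2, -1], [0, -2, 0, 1], [-1, 1, -1, 0]]

AX, XB = mm(A, Xss), mm(Xss, B)
residual = max(abs(AX[i][j] + XB[i][j] - C[i][j]) for i in range(4) for j in range(4))
skew = all(Xss[i][j] == -Xss[j][i] for i in range(4) for j in range(4))
```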
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This project was supported by NSF China (no. 11471122), Anhui Provincial Natural Science Foundation (no. 1508085MA12), Key Projects of Anhui Provincial University Excellent Talent Support Program (no. gxyqZD2016188), and the University Natural Science Research Key Project of Anhui Province (no. KJ2015A161).
References
[1] S. P. Bhattacharyya and E. de Souza, "Pole assignment via Sylvester's equation," 1982, vol. 1, no. 4, pp. 261–263. doi:10.1016/s0167-6911(82)80009-1.
[2] G. R. Duan, "Solutions to matrix equation AV+BW=VF and their application to eigenstructure assignment in linear systems," 1993, vol. 38, no. 2, pp. 276–280.
[3] G. R. Duan, "On the solution to the Sylvester matrix equation AV+BW=EVF," 1996, vol. 41, no. 4, pp. 612–614. doi:10.1109/9.489286.
[4] B. Zhou and G. R. Duan, "A new solution to the generalized Sylvester matrix equation AV−EVF=BW," 2006, vol. 55, no. 3, pp. 193–198. doi:10.1016/j.sysconle.2005.07.002.
[5] T. S. Hu, Z. L. Lin, and J. Lam, "Unified gradient approach to performance optimization under a pole assignment constraint," 2004, vol. 121, no. 2, pp. 361–383. doi:10.1023/B:JOTA.0000037409.86566.f9.
[6] J. Lam and W. Y. Yan, "A gradient flow approach to the robust pole-placement problem," 1995, vol. 5, no. 3, pp. 175–185. doi:10.1002/rnc.4590050303.
[7] J. Lam and W.-Y. Yan, "Pole assignment with optimal spectral conditioning," 1997, vol. 29, no. 5, pp. 241–253. doi:10.1016/s0167-6911(97)90007-4.
[8] J. Lam, W.-Y. Yan, and T. Hu, "Pole assignment with eigenvalue and stability robustness," 1999, vol. 72, no. 13, pp. 1165–1174. doi:10.1080/002071799220326.
[9] B. N. Datta, W.-W. Lin, and J.-N. Wang, "Robust partial pole assignment for vibrating systems with aerodynamic effects," 2006, vol. 51, no. 12, pp. 1979–1984. doi:10.1109/tac.2006.886543.
[10] B. Zhou and G. R. Duan, "Parametric approach for the normal Luenberger function observer design in second-order linear systems," in Proceedings of the 45th IEEE Conference on Decision and Control, San Diego, Calif, USA, 2006, pp. 1423–1428.
[11] D. L. Chu and P. Van Dooren, "A novel numerical method for exact model matching problem with stability," 2006, vol. 42, no. 10, pp. 1697–1704. doi:10.1016/j.automatica.2006.04.024.
[12] D. L. Chu, H. C. Chan, and D. W. C. Ho, "Regularization of singular systems by derivative and proportional output feedback," 1998, vol. 19, no. 1, pp. 21–38. doi:10.1137/s0895479895270963.
[13] D. L. Chu, V. Mehrmann, and N. K. Nichols, "Minimum norm regularization of descriptor systems by mixed output feedback," 1999, vol. 296, no. 1–3, pp. 39–77. doi:10.1016/s0024-3795(99)00108-1.
[14] D. L. Chu and V. Mehrmann, "Disturbance decoupling for descriptor systems by state feedback," 2000, vol. 38, no. 6, pp. 1830–1858. doi:10.1137/s0363012900331891.
[15] D. L. Chu and R. C. E. Tan, "Numerically reliable computing for the row by row decoupling problem with stability," 2002, vol. 23, no. 4, pp. 1143–1170. doi:10.1137/s0895479801362546.
[16] G. H. Golub, S. Nash, and C. Van Loan, "A Hessenberg-Schur method for the problem AX+XB=C," 1979, vol. 24, pp. 909–913.
[17] R. H. Bartels and G. W. Stewart, "Algorithm 432: Solution of the matrix equation AX+XB=C," 1972, vol. 15, no. 9, pp. 820–826.
[18] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, "Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations," 2006, vol. 13, no. 10, pp. 801–823. doi:10.1002/nla.496.
[19] Z.-Z. Bai, "On Hermitian and skew-Hermitian splitting iteration methods for continuous Sylvester equations," 2011, vol. 29, no. 2, pp. 185–198. doi:10.4208/jcm.1009-m3152.
[20] A. Bouhamidi and K. Jbilou, "A note on the numerical approximate solutions for generalized Sylvester matrix equations with applications," 2008, vol. 206, no. 2, pp. 687–694. doi:10.1016/j.amc.2008.09.022.
[21] A. Kaabi, F. Toutounian, and A. Kerayechian, "Preconditioned Galerkin and minimal residual methods for solving Sylvester equations," 2006, vol. 181, no. 2, pp. 1208–1214. doi:10.1016/j.amc.2006.02.021.
[22] A. Kaabi, A. Kerayechian, and F. Toutounian, "A new version of successive approximations method for solving Sylvester matrix equations," 2007, vol. 186, no. 1, pp. 638–645. doi:10.1016/j.amc.2006.08.007.
[23] Y.-Q. Lin, "Implicitly restarted global FOM and GMRES for nonsymmetric matrix equations and Sylvester equations," 2005, vol. 167, no. 2, pp. 1004–1025. doi:10.1016/j.amc.2004.06.141.
[24] Y. Lin, "Minimal residual methods augmented with eigenvectors for solving Sylvester equations and generalized Sylvester equations," 2006, vol. 181, no. 1, pp. 487–499. doi:10.1016/j.amc.2005.12.055.
[25] D. Khojasteh Salkuyeh and F. Toutounian, "New approaches for solving large Sylvester equations," 2006, vol. 173, no. 1, pp. 9–18. doi:10.1016/j.amc.2005.02.063.
[26] J.-J. Zhang, "A note on the iterative solutions of general coupled matrix equation," 2011, vol. 217, no. 22, pp. 9380–9386. doi:10.1016/j.amc.2011.04.026.
[27] F. Ding and T. Chen, "Hierarchical gradient-based identification of multivariable discrete-time systems," 2005, vol. 41, no. 2, pp. 315–325. doi:10.1016/j.automatica.2004.10.010.
[28] F. Ding and T. Chen, "Gradient-based iterative algorithms for solving a class of matrix equations," 2005, vol. 50, no. 8, pp. 1216–1221. doi:10.1109/tac.2005.852558.
[29] F. Ding and T. Chen, "Hierarchical identification of lifted state-space models for general dual-rate systems," 2005, vol. 52, no. 6, pp. 1179–1187. doi:10.1109/TCSI.2005.849144.
[30] J.-P. Chehab and M. Raydan, "An implicit preconditioning strategy for large-scale generalized Sylvester equations," 2011, vol. 217, no. 21, pp. 8793–8803. doi:10.1016/j.amc.2011.03.148.
[31] F. Ding and T. Chen, "On iterative solutions of general coupled matrix equations," 2006, vol. 44, no. 6, pp. 2269–2284. doi:10.1137/S0363012904441350.
[32] F. Ding, P. X. Liu, and J. Ding, "Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle," 2008, vol. 197, no. 1, pp. 41–50. doi:10.1016/j.amc.2007.07.040.
[33] A.-G. Wu, X. Zeng, G.-R. Duan, and W.-J. Wu, "Iterative solutions to the extended Sylvester-conjugate matrix equations," 2010, vol. 217, no. 1, pp. 130–142. doi:10.1016/j.amc.2010.05.029.
[34] Q. Niu, X. Wang, and L.-Z. Lu, "A relaxed gradient based algorithm for solving Sylvester equations," 2011, vol. 13, no. 4, pp. 461–464. doi:10.1002/asjc.328.
[35] X. Wang, L. Dai, and D. Liao, "A modified gradient based algorithm for solving Sylvester equations," 2012, vol. 218, no. 9, pp. 5620–5628. doi:10.1016/j.amc.2011.11.055.
[36] Y.-J. Xie and C.-F. Ma, "The accelerated gradient based iterative algorithm for solving a class of generalized Sylvester-transpose matrix equation," 2016, vol. 273, pp. 1257–1269. doi:10.1016/j.amc.2015.07.022.
[37] Y. Xie and C. Ma, "Iterative methods to solve the generalized coupled Sylvester-conjugate matrix equations for obtaining the centrally symmetric (centrally antisymmetric) matrix solutions," 2014, vol. 2014, Article ID 515816. doi:10.1155/2014/515816.
[38] Y. Xie, N. Huang, and C. Ma, "Iterative method to solve the generalized coupled Sylvester-transpose linear matrix equations over reflexive or anti-reflexive matrix," 2014, vol. 67, no. 11, pp. 2071–2084. doi:10.1016/j.camwa.2014.04.012.