Algorithms for Solving Nonhomogeneous Generalized Sylvester Matrix Equations

This paper considers a new method to solve the first-order and second-order nonhomogeneous generalized Sylvester matrix equations AV + BW = EVF + R and MVF^2 + DVF + KV = BW + R, respectively, where A, E, M, D, K, B, and F are arbitrary known real matrices and V and W are the matrices to be determined. An explicit solution for these equations is proposed, based on the orthogonal reduction of the matrix F to an upper Hessenberg form H. The technique is very simple and does not require the eigenvalues of the matrix F to be known. The proposed method is illustrated by numerical examples.


Introduction
Consider the following two homogeneous generalized Sylvester matrix equations:

AV + BW = EVF,   (1)

MVF^2 + DVF + KV = BW.   (2)

Matrix equation (1) is called a first-order homogeneous generalized Sylvester matrix equation; it is closely related to many problems in linear systems theory, such as eigenstructure assignment [1-5] and control of systems with input constraints [6]. The second-order homogeneous generalized Sylvester matrix equation (2) has found applications in many control problems, for example, pole assignment [7-9] and eigenstructure assignment [10, 11].
As a generalization of the above matrix equations, we consider the following nonhomogeneous generalized Sylvester matrix equations:

AV + BW = EVF + R,   (3)

MVF^2 + DVF + KV = BW + R,   (4)

where A, E, M, D, K, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p} are the known matrices, while V ∈ R^{n×p} and W ∈ R^{n×p} need to be determined. Several authors have studied different methods for matrix equation (3) (see, for example, Song and Chen [12], Ramadan et al. [13], Duan [14], and Wu et al. [15]). The second-order nonhomogeneous Sylvester matrix equation was introduced by Duan [16]. Recently, Ramadan et al. [17] developed the Hessenberg method to solve the Sylvester matrix equation XA + BX = C by reducing only one coefficient matrix to a block upper Hessenberg form. The main goal of this paper is to present algorithms for solving the well-known nonhomogeneous generalized Sylvester matrix equations (3) and (4). The proposed algorithm differs from the preceding standard methods: in our algorithms, F is only reduced to an unreduced upper Hessenberg matrix H, and the eigenvalues of the matrix F must be distinct, because if any eigenvalue of H repeats, then H is defective. An unreduced Hessenberg matrix is always nonsingular, so F must be nonsingular. Throughout this paper, the notation GSME is used for generalized Sylvester matrix equation, and we assume that det(E) ≠ 0, det(F) ≠ 0, det(B) ≠ 0, and det(M) ≠ 0.
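For later reference, a pair (V, W) solves (3) or (4) exactly when the corresponding residual vanishes. The following NumPy helpers (the function names are ours, not the paper's) make that check explicit:

```python
import numpy as np

def first_order_residual(A, E, B, F, R, V, W):
    """Frobenius norm of AV + BW - (EVF + R), i.e., the defect in equation (3)."""
    return np.linalg.norm(A @ V + B @ W - E @ V @ F - R)

def second_order_residual(M, D, K, B, F, R, V, W):
    """Frobenius norm of MVF^2 + DVF + KV - (BW + R), i.e., the defect in equation (4)."""
    return np.linalg.norm(M @ V @ F @ F + D @ V @ F + K @ V - B @ W - R)
```

Accuracy figures such as those reported later in Tables 1 and 2 are naturally measured by residuals of this kind.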

The Proposed Method for AV + BW = EVF + R

Consider the following nonhomogeneous generalized first-order Sylvester matrix equation:

AV + BW = EVF + R,   (5)

where A, E, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p} are the known matrices, while V, W ∈ R^{n×p} are to be determined. The following lemma plays a vital role in this paper.
Lemma 1 (see [18, 19]). Let H ∈ R^{p×p} be an unreduced upper Hessenberg matrix. Then the matrix X ∈ R^{p×p} generated by (6) is nonsingular and satisfies HX = XH.

Algorithm 1 constructs an unknown matrix L = [l_1, l_2, ..., l_p] ∈ R^{n×p} and computes a matrix G = [0, 0, ..., 0, g] ∈ R^{n×p} for the following matrix equation:

AL + BG = ELH + S,   (7)

or, equivalently, its column form, where l_1, l_2, ..., l_p are the columns of L, 0 is the zero vector, g is an unknown vector, and S = [s_1, s_2, ..., s_p] is a known real n × p matrix.
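The listing of Algorithm 1 appears only in the floats, but a plausible column-by-column reading of (7) is: equating the first p − 1 columns (where G's columns are zero) determines l_{j+1} from l_1, ..., l_j, using the Hessenberg structure of H (this needs h_{j+1,j} ≠ 0 and det(E) ≠ 0), and the last column determines g (needing det(B) ≠ 0). The following NumPy sketch implements that reading; the function name and the random choice of l_1 are ours:

```python
import numpy as np

def algorithm1_sketch(A, E, B, H, S, rng=None):
    """Sketch of Algorithm 1: build L and G with A L + B G = E L H + S.

    Assumes H is unreduced upper Hessenberg (H[j+1, j] != 0) and E, B are
    nonsingular, matching the paper's standing assumptions.
    """
    n, p = S.shape
    rng = np.random.default_rng(0) if rng is None else rng
    L = np.zeros((n, p))
    L[:, 0] = rng.standard_normal(n)  # Step 1: choose l_1 arbitrarily
    for j in range(p - 1):
        # Column j of AL = ELH + S involves l_1, ..., l_{j+1} only,
        # because H is upper Hessenberg; solve it for l_{j+1}.
        rhs = A @ L[:, j] - E @ (L[:, : j + 1] @ H[: j + 1, j]) - S[:, j]
        L[:, j + 1] = np.linalg.solve(E, rhs) / H[j + 1, j]
    # The last column determines g; G = [0, ..., 0, g].
    g = np.linalg.solve(B, E @ (L @ H[:, -1]) + S[:, -1] - A @ L[:, -1])
    G = np.zeros((n, p))
    G[:, -1] = g
    return L, G
```

By construction every column of (7) is satisfied exactly, whatever l_1 is chosen.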

Theorem 1. The solution of matrix equation (5) is V = LXQ^T and W = GXQ^T, where A, E, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p} are the known matrices and the matrix X is generated as in Lemma 1.
Proof. Matrix equation (5) can be rewritten in the following form:

AVQ + BWQ = EVQ(Q^T F Q) + RQ.

That is,

AV̄ + BW̄ = EV̄H + R̄,

where V̄ = VQ, W̄ = WQ, H = Q^T F Q ∈ R^{p×p} is an unreduced upper Hessenberg matrix, R̄ = RQ, and Q ∈ R^{p×p} is an orthogonal similarity transformation. The matrix X ∈ R^{p×p} is generated by (6), and matrix equation (7) is multiplied by X to get

ALX + BGX = ELHX + SX.

By using Lemma 1 (HX = XH), assume that SX = R̄, V̄ = LX, and W̄ = GX, where L and G are computed from Algorithm 1. Then, we recover the original problem via the relations V = V̄Q^T, W = W̄Q^T, R = R̄Q^T, and F = QHQ^T, and the solution of (5) is V = LXQ^T and W = GXQ^T (see Algorithm 2). □
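The back-transformation at the end of this proof can be checked numerically: if V̄ and W̄ satisfy the Hessenberg-form equation, then V = V̄Q^T and W = W̄Q^T satisfy (5). A small self-contained check with random matrices (R̄ is manufactured so that the transformed equation holds by construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3
A, E, B = (rng.standard_normal((n, n)) for _ in range(3))
Vb = rng.standard_normal((n, p))                  # plays the role of V-bar = VQ
Wb = rng.standard_normal((n, p))                  # plays the role of W-bar = WQ
H = np.triu(rng.standard_normal((p, p)), -1)      # upper Hessenberg H
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # any orthogonal Q
Rb = A @ Vb + B @ Wb - E @ Vb @ H                 # choose R-bar so AVb + BWb = EVbH + Rb
# Undo the similarity transformation exactly as in the proof:
F, V, W, R = Q @ H @ Q.T, Vb @ Q.T, Wb @ Q.T, Rb @ Q.T
assert np.allclose(A @ V + B @ W, E @ V @ F + R)  # (5) is satisfied
```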

The Proposed Method for MVF^2 + DVF + KV = BW + R

Consider the following second-order nonhomogeneous generalized Sylvester matrix equation:

MVF^2 + DVF + KV = BW + R,   (12)

where M, D, K, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p} are the known matrices, while V ∈ R^{n×p} and W ∈ R^{n×p} are to be determined. The following algorithm constructs an unknown matrix L ∈ R^{n×p} and computes a matrix G ∈ R^{n×p} for the following matrix equation:

MLH^2 + DLH + KL = BG + S.   (13)

Putting U = LH in (13), we get

MUH + DU + KL = BG + S,   (14)

or, column by column,

M(h_{1j}u_1 + ··· + h_{j+1,j}u_{j+1}) + Du_j + Kl_j = s_j,  j = 1, 2, ..., p − 2,   (15)

with the last two columns of (14) determining Bg_1 and Bg_2. Since U = LH, then

u_j = h_{1j}l_1 + ··· + h_{j+1,j}l_{j+1},  j = 1, 2, ..., p − 1,   (16)

and u_p = h_{1p}l_1 + ··· + h_{pp}l_p. Here, l_1, ..., l_p, u_1, ..., u_p, and s_1, ..., s_p are the columns of L, U, and S, respectively, and G = [0, 0, ..., 0, g_1, g_2] is an n × p real matrix, where 0 is the zero vector and g_1 and g_2 are the unknown vectors. From (15) and (16), we design Algorithm 3.
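Recursions (15) and (16) can be assembled into a sketch of Algorithm 3: l_1 and u_1 are chosen arbitrarily (Step 1), (16) yields l_{j+1} from u_j, (15) yields u_{j+1} while the corresponding column of G is zero, and the last two columns of (14) yield g_1 and g_2. A NumPy sketch under the paper's assumptions (h_{j+1,j} ≠ 0, det(M) ≠ 0, det(B) ≠ 0); the function name is ours:

```python
import numpy as np

def algorithm3_sketch(M, D, K, B, H, S, rng=None):
    """Sketch of Algorithm 3: build L and G with M L H^2 + D L H + K L = B G + S.

    Uses the substitution U = LH from the text; H must be unreduced upper
    Hessenberg, and M, B nonsingular. l_1 and u_1 are chosen arbitrarily.
    """
    n, p = S.shape
    rng = np.random.default_rng(0) if rng is None else rng
    L, U, G = np.zeros((n, p)), np.zeros((n, p)), np.zeros((n, p))
    L[:, 0] = rng.standard_normal(n)   # Step 1: choose l_1 ...
    U[:, 0] = rng.standard_normal(n)   # ... and u_1 arbitrarily
    for j in range(p - 1):
        # Column j of U = LH, i.e., recursion (16), gives l_{j+1}.
        L[:, j + 1] = (U[:, j] - L[:, : j + 1] @ H[: j + 1, j]) / H[j + 1, j]
        if j < p - 2:
            # Column j of MUH + DU + KL = BG + S with zero G-column,
            # i.e., recursion (15), gives u_{j+1}.
            rhs = S[:, j] - D @ U[:, j] - K @ L[:, j] - M @ (U[:, : j + 1] @ H[: j + 1, j])
            U[:, j + 1] = np.linalg.solve(M, rhs) / H[j + 1, j]
    U[:, -1] = L @ H[:, -1]            # last column of U = LH needs no division
    for j in (p - 2, p - 1):           # last two columns of (14) give g_1, g_2
        G[:, j] = np.linalg.solve(B, M @ (U @ H[:, j]) + D @ U[:, j] + K @ L[:, j] - S[:, j])
    return L, G
```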

Theorem 2. The solution of matrix equation (12) is V = LXQ^T and W = GXQ^T, where M, D, K, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p} are the known matrices, and the matrix X is generated as in Lemma 1.
Proof. Matrix equation (12) can be rewritten in the Hessenberg form as follows:

MVQ(Q^T F^2 Q) + DVQ(Q^T F Q) + KVQ = BWQ + RQ.

That is,

MV̄H^2 + DV̄H + KV̄ = BW̄ + R̄,

where W̄ = WQ and V̄ = VQ are to be determined, while H = Q^T F Q ∈ R^{p×p} is an unreduced upper Hessenberg matrix, R̄ = RQ, and Q ∈ R^{p×p} is an orthogonal similarity transformation. The main idea is to find a matrix L and a matrix G for the new matrix equation (13), as shown in Algorithm 3, where S = R̄X^{-1} is a known n × p real matrix, while the matrix X ∈ R^{p×p} is generated by (6). Multiply matrix equation (13) by X; then,

MLH^2X + DLHX + KLX = BGX + SX.

By using Lemma 1 (HX = XH, and hence H^2X = XH^2) and assuming that V̄ = LX, W̄ = GX, and S = R̄X^{-1}, we have

MV̄H^2 + DV̄H + KV̄ = BW̄ + R̄.

Step 1: Choose l_1 arbitrarily.
Algorithm 1: The proposed algorithm for AL + BG = ELH + S.

Input: Matrices A, E, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p}.
Output: Matrices V and W.
Assumptions: det(E) ≠ 0, det(F) ≠ 0, det(B) ≠ 0, X is a nonsingular matrix as shown in [19], and the eigenvalues of the matrix F are distinct.
Step 1: Reduce F to an unreduced upper Hessenberg form H = Q^T F Q ∈ R^{p×p}, where Q is an orthogonal matrix.
Step 2: Construct the matrix X ∈ R^{p×p} generated by (6).
Step 3: Compute the matrix S ∈ R^{n×p} by solving SX = RQ.
Step 4: Construct the matrices L and G as shown in Algorithm 1.
Step 5: Compute V = LXQ^T and W = GXQ^T.
Algorithm 2: The proposed algorithm for AV + BW = EVF + R.
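Step 1 of the algorithm above is the standard orthogonal reduction to Hessenberg form; for instance, SciPy exposes it as scipy.linalg.hessenberg, which returns H and an orthogonal Q with F = QHQ^T:

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
p = 6
F = rng.standard_normal((p, p))
H, Q = hessenberg(F, calc_q=True)        # H = Q^T F Q, with Q orthogonal
assert np.allclose(Q @ H @ Q.T, F)       # F = Q H Q^T
assert np.allclose(np.tril(H, -2), 0)    # zeros below the first subdiagonal
# The method additionally requires H to be *unreduced*,
# i.e., every subdiagonal entry h_{i+1,i} is nonzero:
assert np.abs(np.diag(H, -1)).min() > 0
```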

Mathematical Problems in Engineering 3
We then recover the original problem via the relations V = V̄Q^T, W = W̄Q^T, and F = QHQ^T; then, the solution of (12) is V = LXQ^T and W = GXQ^T (see Algorithm 4). □
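As in the first-order case, the recovery step can be verified numerically; the key fact is that F = QHQ^T implies F^2 = QH^2Q^T. A small check with manufactured R̄:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3
M, D, K, B = (rng.standard_normal((n, n)) for _ in range(4))
Vb = rng.standard_normal((n, p))                  # plays the role of V-bar = VQ
Wb = rng.standard_normal((n, p))                  # plays the role of W-bar = WQ
H = np.triu(rng.standard_normal((p, p)), -1)      # upper Hessenberg H
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))  # orthogonal Q
# Choose R-bar so that M Vb H^2 + D Vb H + K Vb = B Wb + R-bar holds:
Rb = M @ Vb @ H @ H + D @ Vb @ H + K @ Vb - B @ Wb
# Recover the original problem: F = Q H Q^T, hence F^2 = Q H^2 Q^T.
F, V, W, R = Q @ H @ Q.T, Vb @ Q.T, Wb @ Q.T, Rb @ Q.T
assert np.allclose(M @ V @ F @ F + D @ V @ F + K @ V, B @ W + R)  # (12) holds
```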

Numerical Examples
In this section, we present two numerical examples to illustrate the application of our proposed method.

Example 1. We solve the first-order GSME (3), where A, E, B ∈ R^{n×n} and F ∈ R^{p×p} are random matrices. The accuracy of the proposed method is reported for different values of n, when p = 2, and for different values of p, when n = 10, as in Table 1.

Example 2.
We solve the second-order GSME (4), where M, D, K, B ∈ R^{n×n} and F ∈ R^{p×p} are random matrices. The accuracy of the proposed method is reported for different values of n, when p = 2, and for different values of p, when n = 10, as in Table 2.

Discussion
The accuracy of the proposed methods depends remarkably on p. If p is too large, one of the subdiagonal entries h_{i+1,i} of H may tend to zero, which can make the matrix X nearly singular. Thus, the smaller the value of p, the greater the precision. Previous studies of the GSMEs AV + BW = EVF + R [12-15] and MVF^2 + DVF + KV = BW + R [16] worked under the hypothesis that the size n of the known matrices is very small; in our method, n can be very large.
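This sensitivity can be inspected directly: the algorithms divide by the subdiagonal entries h_{i+1,i} of H, so the smallest |h_{i+1,i}| indicates how close X is to singularity. A quick scan over growing p (random matrices, for illustration only):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(0)
for p in (5, 20, 80):
    F = rng.standard_normal((p, p))
    H = hessenberg(F)                      # upper Hessenberg form of F
    # The smallest subdiagonal magnitude governs the divisions
    # in the algorithms; a tiny value signals trouble.
    print(p, np.abs(np.diag(H, -1)).min())
```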

Conclusion
In this study, the solution of the GSMEs AV + BW = EVF + R and MVF^2 + DVF + KV = BW + R, where A, B, E, M, D, K, R, and F are arbitrary known real matrices and V and W are the matrices to be determined, is investigated. With the help of an orthogonal similarity transformation and reduction to Hessenberg form, some good results are obtained. The proposed techniques are tested by solving two test problems, where the accuracy is seen to be remarkably high.

Step 1: Choose l_1 and u_1 arbitrarily.
Algorithm 3: The proposed algorithm for MLH^2 + DLH + KL = BG + S.

Input: Matrices M, D, K, B ∈ R^{n×n}, R ∈ R^{n×p}, and F ∈ R^{p×p}.
Output: Matrices V and W.
Assumptions: det(M) ≠ 0, det(F) ≠ 0, det(B) ≠ 0, X is a nonsingular matrix as shown in [19], and the eigenvalues of the matrix F are distinct.
Step 1: Reduce F to an unreduced upper Hessenberg form H = Q^T F Q ∈ R^{p×p}, where Q is an orthogonal matrix.
Step 2: Construct the matrix X ∈ R^{p×p} generated by (6).
Step 3: Compute the matrix S ∈ R^{n×p} by solving SX = RQ.
Step 4: Construct the matrices L and G as shown in Algorithm 3.
Step 5: Compute V = LXQ^T and W = GXQ^T.
Algorithm 4: The proposed algorithm for MVF^2 + DVF + KV = BW + R.

Open Problem
Extend the Hessenberg method to solve the Sylvester quaternion matrix equation [20] and the coupled matrix equation [21].

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.