We propose a feasible and effective iterative method for finding solutions to the matrix equation AXB=C subject to the matrix inequality constraint DXE≥F, where DXE≥F means that every entry of the matrix DXE-F is nonnegative. Global convergence results are established, and numerical results are reported to illustrate the applicability of the method.

1. Introduction

In this paper, we consider the following problem:
(1)AXB=C,DXE≥F,
where A∈Rp×m, B∈Rn×q, C∈Rp×q, D∈Rs×m, E∈Rn×t, and F∈Rs×t are known constant matrices and X∈Rm×n is the unknown matrix.

Solutions X of linear matrix equations with special structure have been widely studied, for example, symmetric solutions (see [1–5]), R-symmetric solutions (see [6]), (R,S)-symmetric solutions (see [7, 8]), bisymmetric solutions (see [9–12]), centrosymmetric solutions (see [13]), and other general solutions (see [14–23]). Iterative methods for solving a pair of linear matrix equations have been studied as well (see [24–31]).

However, very little research has addressed solutions of a matrix equation subject to a matrix inequality constraint. In 2012, Peng et al. (see [32]) proposed a feasible and effective algorithm, based on the polar decomposition in Hilbert space, for finding solutions to the matrix equation AX=B subject to the matrix inequality constraint CXD≥E. The following year, Li et al. (see [33]) used a similar approach to study the bisymmetric solutions of the same problem. Motivated by the work mentioned above, in this paper we consider the solutions of the matrix equation AXB=C under the linear inequality constraint DXE≥F. We use the theory of the analytical solution of the matrix equation AXB=C to transform the problem into a smallest nonnegative deviation problem for a matrix inequality. Then, combined with the polar decomposition theory, an iterative method for solving this transformed problem is proposed, and global convergence results are obtained. Numerical results are reported and indicate that the proposed method is quite effective.

Throughout this paper, we use the following notation: for A∈Rm×n, we write AT, A+, and ∥A∥ to denote the transpose, the Moore-Penrose generalized inverse, and the Frobenius norm of the matrix A, respectively. For any A=(aij), B=(bij), we write A≥B if aij≥bij. A⊗B denotes the Kronecker product defined as A⊗B=(aijB). For the matrix X=(x1,x2,…,xn)∈Rm×n, vec(X) denotes the vec operator defined as vec(X)=(x1T,x2T,…,xnT)T. For A=(aij)∈Rm×n, [A]+ is a matrix with ijth entry equal to max{0,aij}. Obviously, A=[A]+-[-A]+. The inner product in space Rm×n is defined as
(2)〈A,B〉=tr(BTA),∀A,B∈Rm×n.
Hence Rm×n is a Hilbert inner product space, and the norm of a matrix generated by this inner product space is the Frobenius norm.
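This notation can be checked numerically; the following NumPy sketch (with arbitrary example matrices) verifies the splitting A=[A]+-[-A]+, the vec operator, and that the trace inner product induces the Frobenius norm:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])
B = np.array([[0.0, 1.0], [2.0, -1.0]])

# [A]+ : entrywise max{0, a_ij}; every matrix splits as A = [A]+ - [-A]+.
plus = lambda M: np.maximum(M, 0.0)
assert np.allclose(A, plus(A) - plus(-A))

# vec(X): stack the columns of X into one long vector.
vecA = A.reshape(-1, order="F")

# Inner product <A, B> = tr(B^T A); the induced norm is the Frobenius norm.
inner = np.trace(B.T @ A)
assert np.isclose(inner, np.sum(A * B))
assert np.isclose(np.sqrt(np.trace(A.T @ A)), np.linalg.norm(A, "fro"))
```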

This paper is organized as follows. In Section 2, we transform problem (1) into a matrix inequality smallest nonnegative deviation problem. Then we study the existence of the solutions for problem (1) in Section 3. The iterative method to the transformed problem and convergence analysis are presented in Section 4. Section 5 shows some numerical experiments. Finally, we conclude this paper in Section 6.

2. Transforming the Original Problem

In this section, we use the theory on the analytical solution of matrix equation AXB=C to transform problem (1) into a matrix inequality smallest nonnegative deviation problem. Firstly, we present the following lemma about the analytical solution of matrix equation AXB=C.

Lemma 1 (see Theorem 1.21 in [34]).

Given A∈Rp×m, B∈Rn×q, and C∈Rp×q, the matrix equation AXB=C is solvable for X in Rm×n if and only if AA+CB+B=C. Moreover, if the matrix equation AXB=C is solvable, then the general solutions can be expressed as
(3)X=A+CB++G-A+AGBB+,
where G is an arbitrary m×n matrix.
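Lemma 1 can be illustrated numerically; the sketch below (NumPy, with randomly generated matrices of compatible sizes, an assumed toy setup) builds a consistent equation and checks both the solvability test and the general solution formula (3):

```python
import numpy as np

rng = np.random.default_rng(0)
p, m, n, q = 3, 4, 5, 2
A = rng.standard_normal((p, m))
B = rng.standard_normal((n, q))
X0 = rng.standard_normal((m, n))
C = A @ X0 @ B          # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability test of Lemma 1: A A^+ C B^+ B = C.
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# General solution X = A^+ C B^+ + G - A^+ A G B B^+ for arbitrary G.
G = rng.standard_normal((m, n))
X = Ap @ C @ Bp + G - Ap @ A @ G @ B @ Bp
assert np.allclose(A @ X @ B, C)
```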

Assume that AA+CB+B=C; that is, assume that the matrix equation AXB=C is solvable. Substituting (3) into the second inequality of (1), we get
(4)D(A+CB++G-A+AGBB+)E≥F.
By simple calculation, we have
(5)D(G-A+AGBB+)E≥F-DA+CB+E.
Hence (3) is a solution of (1) if and only if the matrix G in (3) satisfies (5). However, inequality (5) may be unsolvable; namely,
(6){G∈Rm×n∣D(G-A+AGBB+)E≥F~}=∅,
where F~=F-DA+CB+E. In this case, we can find Y∈R+s×t such that
(7)D(G-A+AGBB+)E+Y≥F~
is solvable (if (5) is solvable, then Y=0). Obviously, many Y∈R+s×t satisfy (7). Here we seek a smallest such Y, that is, a Y with ∥Y∥≤∥Y¯∥ for every Y¯ satisfying (7). Thus we consider the following smallest nonnegative deviation problem for the matrix inequality, which is also a quadratic programming problem:
(8) minimize P(G,Y)=∥Y∥2, subject to D(G-A+AGBB+)E+Y≥F~, Y≥0, G∈Rm×n.
If G and Y solve (8) with Y=0, then G solves (5), and (3) is a solution of (1). If G and Y solve (8) with Y≠0, then G solves (7) in the smallest nonnegative deviation sense, and (3) is a solution of the matrix equation AXB=C under the smallest nonnegative deviation constraint of the inequality DXE≥F. Conversely, if G solves (5), then G and Y=0 solve (8). So, to find X∈Rm×n satisfying (1), we only need to solve the smallest nonnegative deviation problem (8).

Suppose that G and Y solve (8); then
(9)Y=[F~-D(G-A+AGBB+)E]+.
On the other hand, if a pair of matrices G and Y solves the smallest nonnegative deviation problem (8), then there exists a nonnegative matrix Z satisfying
(10)D(G-A+AGBB+)E+Y-Z=F~.
Consequently, the smallest nonnegative deviation problem (8) is equivalent to the following optimization problem:
(11) minimize P(G,Y,Z)=∥Y∥2, subject to D(G-A+AGBB+)E+Y-Z=F~, Y≥0, Z≥0, G∈Rm×n.
Eliminating Y from (11) yields the following optimization problem:
(12) minimize F(G,Z)=∥D(G-A+AGBB+)E-Z-F~∥2, subject to Z≥0, G∈Rm×n.
Suppose that a pair (G,Z) solves (12); then
(13)Z=[D(G-A+AGBB+)E-F~]+,
which together with the matrices Y=[F~-D(G-A+AGBB+)E]+ and G solves (11), and hence G and Y solve (8). This allows one to determine whether or not (3) is a solution of (1). Therefore, to solve (1), we first solve optimization problem (12). Our iteration method proposed below will take advantage of these equivalent forms of (1).
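The relations (9), (10), and (13) can be checked numerically; in the sketch below (an assumed toy example), S stands for D(G-A+AGBB+)E and Ftil for F~:

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((2, 3))     # plays the role of D(G - A+AGBB+)E
Ftil = rng.standard_normal((2, 3))  # plays the role of F~

Y = np.maximum(Ftil - S, 0.0)  # Y = [F~ - S]+ , as in (9)
Z = np.maximum(S - Ftil, 0.0)  # Z = [S - F~]+ , as in (13)

assert np.all(Y >= 0) and np.all(Z >= 0)
assert np.allclose(S + Y - Z, Ftil)   # the equality constraint (10)
assert np.all(S + Y >= Ftil - 1e-12)  # inequality (7) holds with this Y
```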

3. The Solution of the Problem

To establish the existence of the solutions G*, Y*, and Z* of (8), (11), and (12), we first give the following theorem.

Theorem 2 (see [35, 36]).

Let M⊆Rs×t be a closed convex cone (i.e., M is closed convex set and αu∈M, for all u∈M and α≥0). Let M* be the polar cone of M; that is,
(14)M*={y∈Rs×t∣〈u,y〉≤0,∀u∈M,y≥0}.
Then for all f∈Rs×t,f has unique polar decomposition of the form
(15)f=u^+y^,u^∈M,y^∈M*,〈u^,y^〉=0.

Theorem 2 implies that u^ is the projection of f onto M and y^ is the projection of f onto M*.
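Theorem 2 can be illustrated for a concrete cone; the sketch below is a toy example of our own (not the cone M used later): M is taken as the cone of entrywise nonpositive matrices, whose polar cone consists of the nonnegative matrices, and the decomposition (15) is checked:

```python
import numpy as np

# Moreau's polar decomposition, illustrated for the closed convex cone
# M = {u <= 0} (entrywise), whose polar cone is M* = {y >= 0}.
f = np.array([[1.5, -2.0], [0.0, -0.3], [4.0, 1.0]])

u_hat = np.minimum(f, 0.0)   # projection of f onto M
y_hat = np.maximum(f, 0.0)   # projection of f onto M*

assert np.allclose(f, u_hat + y_hat)           # f = u^ + y^
assert np.isclose(np.sum(u_hat * y_hat), 0.0)  # <u^, y^> = 0
```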

For problem (12), we give the following two matrix sets:
(16) M={Q∈Rs×t∣Q=D(G-A+AGBB+)E-Z, Z≥0, G∈Rm×n},
N={Y∈Rs×t∣DTYET-A+ADTYETBB+=0, Y≥0}.
Now we will prove that M is a closed convex cone and N=M*.

Lemma 3.

The matrix set M is a closed convex cone in the Hilbert space Rs×t.

Proof.

For all Q∈M, there exist Z≥0 and G∈Rm×n such that Q=D(G-A+AGBB+)E-Z. By the definition of the Kronecker product, we have
(17) vec(Q)=[ET⊗D-(BB+E)T⊗(DA+A)]vec(G)-(I⊗I)vec(Z)=[ET⊗D-(ETBB+)⊗(DA+A)]([vec(G)]+-[vec(-G)]+)-(I⊗I)vec(Z)=(ET⊗D-(ETBB+)⊗(DA+A), -ET⊗D+(ETBB+)⊗(DA+A), -I⊗I)·([vec(G)]+T,[vec(-G)]+T,vec(Z)T)T=Hβ,
where the second equality follows from the symmetry of BB+ (a defining property of the Moore-Penrose generalized inverse) and the splitting G=[G]+-[-G]+, and
(18) H=(ET⊗D-(ETBB+)⊗(DA+A), -ET⊗D+(ETBB+)⊗(DA+A), -I⊗I),
β=([vec(G)]+T,[vec(-G)]+T,vec(Z)T)T.
It is easy to see that β≥0. As G and Z≥0 are arbitrary, β is arbitrary as well. Let H=(h1,h2,…,hl) and β=(β1,β2,…,βl)T, where l=2mn+st; then vec(Q)=∑i=1lβihi. Thus M is equivalent to the following set:
(19)K={q∈Rst∣q=∑i=1lβihi,βi≥0}.
By the result in [37], we know that the set K is a closed convex cone in the Hilbert space Rst. Hence M is a closed convex cone in the Hilbert space Rs×t.
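The vec-Kronecker identity vec(DGE)=(ET⊗D)vec(G) underlying (17) can be verified numerically (NumPy sketch with arbitrary small matrices; vec stacks columns, which is `order="F"` in NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((3, 4))
G = rng.standard_normal((4, 5))
E = rng.standard_normal((5, 2))

# vec(DGE) = (E^T kron D) vec(G), with vec stacking columns (order="F").
vec = lambda M: M.reshape(-1, order="F")
assert np.allclose(vec(D @ G @ E), np.kron(E.T, D) @ vec(G))
```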

Lemma 4.

The matrix set N is the polar cone of the matrix set M.

Proof.

By the definition of the polar cone, we get
(20)M*={Y∈Rs×t∣〈Q,Y〉≤0,∀Q∈M,Y≥0}.
So we just need to prove N=M*. Firstly, we prove N⊆M*.

For all Y∈N and Q=D(G-A+AGBB+)E-Z∈M, we have
(21)〈Q,Y〉=〈D(G-A+AGBB+)E-Z,Y〉=〈DGE,Y〉-〈DA+AGBB+E,Y〉-〈Z,Y〉=〈G,DTYET〉-〈G,A+ADTYETBB+〉-〈Z,Y〉=〈G,DTYET-A+ADTYETBB+〉-〈Z,Y〉=-〈Z,Y〉≤0.
Thus Y∈M*. Then N⊆M*.

Now we prove M*⊆N. For all Y∈M*, if Y∉N, then DTYET-A+ADTYETBB+≠0. So there exists a positive real number α and Z≥0 such that
(22)α〈DTYET-A+ADTYETBB+,DTYET-A+ADTYETBB+〉>〈Y,Z〉.
Let Q=αD(G-A+AGBB+)E-Z, where G=DTYET-A+ADTYETBB+:
(23) 〈Q,Y〉=〈αD(G-A+AGBB+)E-Z,Y〉=α〈G,DTYET-A+ADTYETBB+〉-〈Z,Y〉=α〈DTYET-A+ADTYETBB+,DTYET-A+ADTYETBB+〉-〈Z,Y〉>0.
This contradicts the assumption that Y∈M*. Hence Y∈N; that is, M*⊆N. Therefore N=M*.

Theorem 5.

Assume that the matrices G* and Z* solve (12). Define matrices Q*∈Rs×t and Y*∈Rs×t as
(24)Q*=D(G*-A+AG*BB+)E-Z*,Y*=[F~-D(G*-A+AG*BB+)E]+.
Then F~=Q*+Y*, Q*∈M, Y*∈N, and 〈Q*,Y*〉=0; namely, Q* and Y* are the polar decomposition of F~.

Proof.

As G* and Z* solve (12), we have Z*=[D(G*-A+AG*BB+)E-F~]+. Then
(25)Q*=D(G*-A+AG*BB+)E-[D(G*-A+AG*BB+)E-F~]+=D(G*-A+AG*BB+)E-F~-[D(G*-A+AG*BB+)E-F~]++F~=-[F~-D(G*-A+AG*BB+)E]++F~=F~-Y*.
Thus F~=Q*+Y*.

By Lemmas 3 and 4 and Theorem 2, we get that F~ has a unique polar decomposition with respect to M and N; that is, there exist unique Q^∈M and Y^∈N such that F~=Q^+Y^ and 〈Q^,Y^〉=0.

Consider optimization problem (12). The objective function in (12) is
(26)F(G,Z)=∥D(G-A+AGBB+)E-Z-F~∥2=∥D(G-A+AGBB+)E-Z-Q^-Y^∥2=∥D(G-A+AGBB+)E-Z-Q^∥2+∥Y^∥2-2〈D(G-A+AGBB+)E-Z-Q^,Y^〉=∥D(G-A+AGBB+)E-Z-Q^∥2+∥Y^∥2-2〈D(G-A+AGBB+)E,Y^〉+2〈Z,Y^〉=∥D(G-A+AGBB+)E-Z-Q^∥2+∥Y^∥2-2〈G,DTY^ET-A+ADTY^ETBB+〉+2〈Z,Y^〉=∥D(G-A+AGBB+)E-Z-Q^∥2+∥Y^∥2+2〈Z,Y^〉≥∥Y^∥2.
Since G* and Z* solve (12), the minimum value ∥Y^∥2 in (26) is attained, which forces D(G*-A+AG*BB+)E-Z*-Q^=0; that is, Q*=D(G*-A+AG*BB+)E-Z*=Q^∈M. Thus Y*=F~-Q*=F~-Q^=Y^∈N. Then 〈Q*,Y*〉=〈Q^,Y^〉=0. This completes the proof.

Remark 6.

Theorem 5 implies that if a pair of matrices G* and Z* solves optimization problem (12), then D(G*-A+AG*BB+)E-Z* and [F~-D(G*-A+AG*BB+)E]+ are the projections of F~ onto M and N, respectively. Conversely, by Theorem 2 we get that F~ has a unique polar decomposition of the form F~=Q*+Y*, Q*∈M, and Y*∈N. By the definition of M, there exist G* and Z*≥0 such that Q*=D(G*-A+AG*BB+)E-Z* and Y*=[F~-D(G*-A+AG*BB+)E]+. Moreover, G*, Z*, and Y* solve optimization problem (11). Thus problem (11) is solvable.

By the above analysis, we get the following theorem immediately.

Theorem 7.

Problem (1) is solvable if and only if AA+CB+B=C and Y*=[F~-D(G*-A+AG*BB+)E]+=0, where G* and Z* are the solutions of optimization problem (12).

4. Iterative Method and Convergence Analysis

In this section, we present an iteration method to solve (1) and give the convergence analysis. We are now in a position to give our algorithm to compute the solutions G*, Y*, and Z* of (8), (11), and (12).

Algorithm 8 (an iteration method for (1)).

Step 0. Input matrices A, B, C, D, E, and F. Choose the initial matrix G0. Compute F~=F-DA+CB+E, Y0=[F~-D(G0-A+AG0BB+)E]+, and Z0=[D(G0-A+AG0BB+)E-F~]+. Take the stopping criterion ɛ>0. Set k:=0.

Step 1. Find a solution Wk of the least squares problem
(27)minimize∥D(W-A+AWBB+)E-Yk∥.

Step 2. Update the sequences
(28)Gk+1=Gk+Wk,Yk+1=[F~-D(Gk+1-A+AGk+1BB+)E]+,Zk+1=[D(Gk+1-A+AGk+1BB+)E-F~]+.

Step 3. If ∥Yk+1-Yk∥≤ɛ or ∥Zk+1-Zk∥≤ɛ, then stop; otherwise, set k:=k+1 and go to Step 1.
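Steps 0–3 can be sketched in NumPy. The version below is illustrative: it solves the inner least squares problem (27) by forming the Kronecker system (41) and calling a dense least squares routine rather than the modified conjugate gradient method used later, which is adequate only for small sizes; the toy data at the end are our own assumption:

```python
import numpy as np

def algorithm8(A, B, C, D, E, F, G0=None, tol=1e-12, maxit=500):
    """Illustrative sketch of Algorithm 8 (small dense problems only)."""
    Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
    m, n = A.shape[1], B.shape[0]
    Ftil = F - D @ Ap @ C @ Bp @ E
    phi = lambda G: D @ (G - Ap @ A @ G @ B @ Bp) @ E  # G -> D(G-A+AGBB+)E
    # Kronecker matrix of (41): E^T kron D - (BB+E)^T kron (DA+A).
    H = np.kron(E.T, D) - np.kron((B @ Bp @ E).T, D @ Ap @ A)
    G = np.zeros((m, n)) if G0 is None else G0
    Y = np.maximum(Ftil - phi(G), 0.0)
    Z = np.maximum(phi(G) - Ftil, 0.0)
    for _ in range(maxit):
        # Step 1: solve the least squares problem (27) for W_k.
        w = np.linalg.lstsq(H, Y.reshape(-1, order="F"), rcond=None)[0]
        G = G + w.reshape(m, n, order="F")            # Step 2: G_{k+1}
        Ynew = np.maximum(Ftil - phi(G), 0.0)
        Znew = np.maximum(phi(G) - Ftil, 0.0)
        done = (np.linalg.norm(Ynew - Y) <= tol
                or np.linalg.norm(Znew - Z) <= tol)   # Step 3
        Y, Z = Ynew, Znew
        if done:
            break
    X = Ap @ C @ Bp + G - Ap @ A @ G @ B @ Bp         # solution form (3)
    return X, G, Y, Z

# Toy usage (assumed data): a consistent equation with a loose constraint.
rng = np.random.default_rng(7)
A = rng.standard_normal((3, 4)); B = rng.standard_normal((5, 2))
C = A @ rng.standard_normal((4, 5)) @ B               # consistent by construction
D, E, F = np.eye(4), np.eye(5), -100.0 * np.ones((4, 5))
X, G, Y, Z = algorithm8(A, B, C, D, E, F)
```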

Next we give the following lemma.

Lemma 9.

Rs×t is the direct sum of M¯ and N¯; that is,
(29)Rs×t=M¯⊕N¯,
where
(30)M¯={D(X-A+AXBB+)E∣X∈Rm×n},N¯={Y∈Rs×t∣DTYET-A+ADTYETBB+=0}.

Proof.

Obviously, M¯ and N¯ are linear subspaces of Rs×t. By the orthogonal decomposition theorem in Hilbert space, we obtain Rs×t=M¯⊕M¯⊥, where M¯⊥ is the orthogonal complement of M¯. So we just need to prove N¯=M¯⊥.

We prove N¯⊆M¯⊥ firstly. For all Y∈N¯ and D(X-A+AXBB+)E∈M¯, we have
(31)〈D(X-A+AXBB+)E,Y〉=〈DXE,Y〉-〈DA+AXBB+E,Y〉=〈X,DTYET〉-〈X,(DA+A)TY(BB+E)T〉=〈X,DTYET〉-〈X,A+ADTYETBB+〉=〈X,DTYET-A+ADTYETBB+〉=0,
where the third equality follows from the definition of Moore-Penrose generalized inverse. Hence Y∈M¯⊥; namely, N¯⊆M¯⊥.

Then we prove M¯⊥⊆N¯. For all Y∈M¯⊥ and D(X-A+AXBB+)E∈M¯, we have 〈D(X-A+AXBB+)E,Y〉=0. In the same way, we get 〈X,DTYET-A+ADTYETBB+〉=0. As X∈Rm×n is arbitrary, we take X=DTYET-A+ADTYETBB+. Then
(32)〈DTYET-A+ADTYETBB+,DTYET-A+ADTYETBB+〉=0.
So DTYET-A+ADTYETBB+=0; that is, Y∈N¯. Thus M¯⊥⊆N¯.

From the above, we get N¯=M¯⊥. Therefore, Rs×t=M¯⊕N¯.

Now we present the convergence theorem.

Theorem 10.

Let F~=Q*+Y* be the unique polar decomposition of F~. Then
(33)limk→∞Yk=Y*,limk→∞(D(Gk-A+AGkBB+)E-Zk)=Q*.

Proof.

Since matrix Wk solves (27), we have ∥D(Wk-A+AWkBB+)E-Yk∥≤∥Yk∥ (take W=0 in (27)). This together with Algorithm 8 yields
(34) ∥Yk+1∥=∥[F~-D(Gk+1-A+AGk+1BB+)E]+∥=∥[F~-D(Gk-A+AGkBB+)E-D(Wk-A+AWkBB+)E]+∥=∥[Yk-Zk-D(Wk-A+AWkBB+)E]+∥≤∥[Yk-D(Wk-A+AWkBB+)E]+∥≤∥Yk-D(Wk-A+AWkBB+)E∥≤∥Yk∥,
where the first inequality follows from Zk≥0 and the second from the nonexpansive property of the projection. This implies that the sequence {∥Yk∥} is monotonically decreasing and bounded from below. So there exists a constant α≥0 such that limk→∞∥Yk∥=α. Furthermore, {Yk} is bounded and hence has at least one cluster point. Next we show that any cluster point of the sequence {Yk} equals Y*; consequently, {Yk} converges to Y*.

Let Y~ be any cluster point of the sequence {Yk}. Without loss of generality, we suppose limk→∞Yk=Y~. Obviously, Y~≥0. It follows from Lemma 9 that Yk has unique orthogonal decomposition of the form
(35)Yk=Y^k+Y~k,Y^k∈M¯,Y~k∈N¯.
Moreover, since Wk solves (27), D(Wk-A+AWkBB+)E=Y^k, the projection of Yk onto M¯. Thus
(36)∥Yk+1∥≤∥Yk-D(Wk-A+AWkBB+)E∥=∥Y~k∥≤∥Yk∥.
Let k→∞; we get limk→∞∥Yk∥=limk→∞∥Y~k∥=α. This together with ∥Yk∥2=∥Y^k∥2+∥Y~k∥2 yields that
(37)limk→∞∥Y^k∥=0.
So limk→∞Y^k=0. Therefore
(38)limk→∞Y~k=limk→∞(Yk-Y^k)=Y~.
Since Y~k∈N¯ and N¯ is closed, Y~∈N¯ as well. This together with Y~≥0 implies that Y~∈N.

Since
(39)D(Gk-A+AGkBB+)E-Zk=D(Gk-A+AGkBB+)E-[D(Gk-A+AGkBB+)E-F~]+=F~-Yk,
we get limk→∞(D(Gk-A+AGkBB+)E-Zk)=limk→∞(F~-Yk)=F~-Y~. By Lemma 3 and D(Gk-A+AGkBB+)E-Zk∈M, we obtain F~-Y~∈M, since M is closed. By the definition of M, there exist G~∈Rm×n and Z~≥0 such that F~-Y~=D(G~-A+AG~BB+)E-Z~. Let Q~=D(G~-A+AG~BB+)E-Z~. Then F~=Q~+Y~ is the unique polar decomposition of F~. Hence Q~=Q* and Y~=Y*. Furthermore,
(40)limk→∞(D(Gk-A+AGkBB+)E-Zk)=F~-Y~=F~-Y*=Q*.
This completes the proof of the theorem.

5. Numerical Experiments

In this section, we present two numerical examples to illustrate the efficiency and the performance of Algorithm 8. Firstly, we consider least squares problem (27) in Algorithm 8.

By the definition of Kronecker product, least squares problem (27) in Algorithm 8 is equivalent to
(41) minimize ∥(ET⊗D-E1T⊗D1)vec(W)-vec(Yk)∥,
where D1=DA+A and E1=BB+E. It is well known that the normal equation of problem (41) is
(42)(ET⊗D-E1T⊗D1)T×[(ET⊗D-E1T⊗D1)vec(W)-vec(Yk)]=0.
Notice that
(43)(ET⊗D-E1T⊗D1)T(ET⊗D-E1T⊗D1)=(E⊗DT-E1⊗D1T)(ET⊗D-E1T⊗D1)=(E⊗DT)(ET⊗D)-(E⊗DT)(E1T⊗D1)-(E1⊗D1T)(ET⊗D)+(E1⊗D1T)(E1T⊗D1)=(EET)⊗(DTD)-(EE1T)⊗(DTD1)-(E1ET)⊗(D1TD)+(E1E1T)⊗(D1TD1),
and therefore (42) is equal to
(44)DTDWEET-DTD1WE1ET-D1TDWEE1T+D1TD1WE1E1T=DTYkET-D1TYkE1T.
Substituting the definitions of D1 and E1 into the above equation, we get
(45)DTDWEET-DTDA+AWBB+EET-A+ADTDWEETBB++A+ADTDA+AWBB+EETBB+=DTYkET-A+ADTYkETBB+,
which is the normal equation of problem (27). Let
(46)L(W)=DTDWEET-DTDA+AWBB+EET-A+ADTDWEETBB++A+ADTDA+AWBB+EETBB+
and Q=DTYkET-A+ADTYkETBB+. Then problem (27) is equivalent to L(W)=Q, which can be solved by the modified conjugate gradient method (see [33]).
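The operator form (45)-(46) can be checked against the Kronecker normal equation (42)-(43) numerically; the NumPy sketch below uses arbitrary small matrices (our own assumed sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
p, m, n, q, s, t = 3, 4, 3, 2, 2, 3
A = rng.standard_normal((p, m)); B = rng.standard_normal((n, q))
D = rng.standard_normal((s, m)); E = rng.standard_normal((n, t))
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)
D1, E1 = D @ Ap @ A, B @ Bp @ E   # D1 = D A+A, E1 = B B+ E

def L(W):
    # L(W) = D^T D W E E^T - D^T D1 W E1 E^T - D1^T D W E E1^T
    #        + D1^T D1 W E1 E1^T, as in (46)
    return (D.T @ D @ W @ E @ E.T - D.T @ D1 @ W @ E1 @ E.T
            - D1.T @ D @ W @ E @ E1.T + D1.T @ D1 @ W @ E1 @ E1.T)

# Consistency with the Kronecker form: vec(L(W)) = K^T K vec(W),
# where K = E^T kron D - E1^T kron D1 is the matrix in (41)-(42).
vec = lambda M: M.reshape(-1, order="F")
K = np.kron(E.T, D) - np.kron(E1.T, D1)
W = rng.standard_normal((m, n))
assert np.allclose(vec(L(W)), K.T @ K @ vec(W))
```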

Algorithm 11 (a modified conjugate gradient method for (27)).

Step 0. Input matrices A, B, C, D, E, F, and Yk. Choose the initial matrix W(0). Compute L(W(0)), R(0)=Q-L(W(0)), R~(0)=L(R(0)), and P(0)=R~(0). Take the stopping criterion ϵ2>0. Set j:=0.

Step 1. If ∥R(j)∥≤ϵ2, then stop; otherwise, set j:=j+1 and go to Step 2.

Step 2. Update the sequences
(47) W(j)=W(j-1)+(∥R(j-1)∥2/∥P(j-1)∥2)P(j-1), R(j)=Q-L(W(j)), R~(j)=L(R(j)), P(j)=R~(j)+(∥R(j)∥2/∥R(j-1)∥2)P(j-1).

Step 3. Return to Step 1.
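Steps 0–3 above can be sketched as follows (an illustrative NumPy version written for a generic self-adjoint operator L; the variable names and the toy operator L(W)=MWN with symmetric positive definite M and N in the usage example are our own assumptions):

```python
import numpy as np

def mcg(L, Q, W0, eps=1e-10, maxit=1000):
    """Modified conjugate gradient sketch for L(W) = Q, following
    Steps 0-3; assumes L is a self-adjoint positive (semi)definite
    linear operator on matrices."""
    W = W0
    R = Q - L(W)
    P = L(R)                               # R~(0) = L(R(0)), P(0) = R~(0)
    while np.linalg.norm(R) > eps and maxit > 0:
        alpha = np.linalg.norm(R) ** 2 / np.linalg.norm(P) ** 2
        W = W + alpha * P
        Rnew = Q - L(W)
        beta = np.linalg.norm(Rnew) ** 2 / np.linalg.norm(R) ** 2
        P = L(Rnew) + beta * P
        R = Rnew
        maxit -= 1
    return W

# Toy usage: L(W) = M W N is self-adjoint positive definite when
# M and N are symmetric positive definite.
rng = np.random.default_rng(5)
Msym = rng.standard_normal((3, 3)); Msym = Msym @ Msym.T + np.eye(3)
Nsym = rng.standard_normal((4, 4)); Nsym = Nsym @ Nsym.T + np.eye(4)
Lop = lambda W: Msym @ W @ Nsym
Q = Lop(rng.standard_normal((3, 4)))       # consistent right-hand side
W = mcg(Lop, Q, np.zeros((3, 4)))
```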

In our experiments, all computations were done on a PC with a Pentium Dual-Core CPU E5800 @ 2.40 GHz. All programs were implemented in MATLAB R2011b. The initial matrix G0 in Algorithm 8 is taken as the zero matrix, and the termination criterion is ∥Yk+1-Yk∥F≤ɛ=2.22×10-16 or ∥Zk+1-Zk∥F≤2.22×10-16.

Example 12.

Matrices A, B, C, D, E, and F are given as follows:
(48)
A = (  1.7   1.2  -1.7   1.9
       3.4   2.4  -3.4   3.8
      -2.1  -1.2   2.3  -2.7
       2.1   1.2  -2.3   2.7 ),
B = (  1.2   0.3  -0.3   0.1  -0.1
       0.1   1.3   0.2  -0.1   0.2
       0.2   0.2   1.2   0.2  -0.2
      -0.1  -0.2   0.2   1.1   0.2
      -0.2  -0.3   0.3  -0.2   1.2 ),
C = (  7.22   6.47   5.35   5.44   5.84
      14.44  12.94  10.70  10.88  11.68
      -8.92  -7.93  -6.51  -6.84  -6.70
       8.92   7.93   6.51   6.84   6.70 ),
D = ( -2.9   1.3  -2.4   2.1
       2.9  -1.3   2.4  -2.1
       2.2   2.2   2.3   2.9
       2.2   2.2   1.4   1.7 ),
E = ( -1.9   2.3  -3.4   2.1  -2.6
       1.9  -2.3   3.4  -2.1   2.6
       2.2   2.4  -4.6   3.8  -3.4
      -1.1  -1.2   2.3  -1.9   1.7
      -3.2  -1.3   5.3  -3.7   1.8 ),
F = (  -5.8   3.9  -8.2    2.7  -8.1
        4.8  -6.9   6.2   -5.7   6.1
      -20.0   2.2   9.0  -12.0  -3.9
      -21.0  -0.8   2.6  -15.0   0.12 ).

Computing by Algorithm 8, we have Y*=0,
(49)
Z* = ( 0.9936  1.3564   2.0000  1.3722  1.9997
       0.0064  1.6436   0.0000  1.6278  0.0003
       1.7826  0.0000   9.8265  0.9252  1.2106
       2.6781  0.3420  19.0882  1.2350  0.8386 )
and a solution G~ to inequality (5) as follows:
(50)
G~ = ( -0.1584   0.1584  -0.1071   0.0536  -0.1952
        0.0960  -0.0960   0.1873  -0.0936   0.2442
       -0.0132   0.0132   0.5250  -0.2625   0.5331
        0.0693  -0.0693   0.4474  -0.2237   0.4974 ).
It follows from AA+CB+B=C and Y*=[F~-D(G~-A+AG~BB+)E]+=0 that problem (1) is solvable. By substituting G~ into (3), we obtain a solution X~ to problem (1) as follows:
(51)
X~ = A+CB+ + G~ - A+AG~BB+
   = (  1.0456   1.1341  0.5051   0.9876  0.8328
        1.5637   1.1880  0.8719   0.9651  1.9153
       -0.7795  -0.5409  0.0916  -0.9136  0.1747
        0.6426   0.2772  0.8161   0.3210  0.4646 ).
Furthermore, denote
(52)δk=∥Yk+1-Yk∥F,ρk=∥Zk+1-Zk∥F.
We have the following iterative error curve in Figure 1.

∥Yk+1-Yk∥F and ∥Zk+1-Zk∥F for Example 12.

Example 13.

The matrices A, B, and C are the same as in Example 12, D∈R4×4 and E∈R5×5 are identity matrices, and F is given as follows:
(53)
F = ( 0.1  0.1  0.1  0.1  0.1
      0.1  0.1  0.1  0.1  0.1
      0.1  0.1  0.1  0.1  0.1
      0.1  0.1  0.1  0.1  0.1 ).

Following Algorithm 8, we get Y*=0,
(54)
Z* = ( 1.3776  1.0824  0.6808  1.0713  1.0324
       1.3952  1.2048  0.6016  0.9825  1.6110
       0.0000  0.0000  0.0000  0.0000  0.0000
       0.9862  0.6338  0.5846  0.8894  0.1588 )
and a solution G~ to inequality (5) as follows:
(55)
G~ = ( 0.2736  0.2066  0.1685  0.2373  0.1043
       0.0275  0.0208  0.0169  0.0238  0.0398
       0.8663  0.6541  0.5335  0.7511  0.4584
       0.5129  0.3873  0.3158  0.4447  0.2916 ).
It follows from AA+CB+B=C and Y*=[F~-D(G~-A+AG~BB+)E]+=0 that problem (1) is solvable. By substituting G~ into (3), we obtain a solution X~ to problem (1) as follows:
(56)
X~ = A+CB+ + G~ - A+AG~BB+
   = ( 1.4776  1.1824  0.7808  1.1713  1.1324
       1.4952  1.3048  0.7016  1.0825  1.7110
       0.1000  0.1000  0.1000  0.1000  0.1000
       1.0862  0.7338  0.6846  0.9894  0.2588 ).
Furthermore, we have the following iterative error curve in Figure 2.

∥Yk+1-Yk∥F and ∥Zk+1-Zk∥F for Example 13.

6. Conclusion

In this paper, we propose Algorithm 8 for finding solutions to the matrix equation AXB=C subject to the matrix inequality constraint DXE≥F, and we establish global convergence results. The least squares subproblem (27) in Algorithm 8 is solved by the modified conjugate gradient method. Numerical results confirm the good theoretical properties of our approach.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Authors’ Contribution

All authors contributed equally and significantly to the writing of this paper. All authors read and approved the final paper.

Acknowledgments

The project is supported by the National Natural Science Foundation of China (Grant nos. 11071041, 11201074), Fujian Natural Science Foundation (Grant no. 2013J01006), the University Special Fund Project of Fujian (Grant no. JK2013060), and R&D of Key Instruments and Technologies for Deep Resources Prospecting (the National R&D Projects for Key Scientific Instruments) under Grant no. ZDYZ2012-1-02-04.

References

[1] H. Dai, On the symmetric solutions of linear matrix equations.
[2] K. W. E. Chu, Symmetric solutions of linear matrix equations by matrix decompositions.
[3] F. J. H. Don, On the symmetric solutions of a linear matrix equation.
[4] Y.-B. Deng, Z.-Z. Bai, and Y.-H. Gao, Iterative orthogonal direction methods for Hermitian minimum norm solutions of two consistent matrix equations.
[5] W. F. Trench, Characterization and properties of matrices with generalized symmetry or skew symmetry.
[6] W. F. Trench, Hermitian, Hermitian R-symmetric, and Hermitian R-skew symmetric Procrustes problems.
[7] W. F. Trench, Inverse eigenproblems and associated approximation problems for matrices with generalized symmetry or skew symmetry.
[8] W. F. Trench, Minimization problems for (R,S)-symmetric and (R,S)-skew symmetric matrices.
[9] D. X. Xie, Y. P. Sheng, and X. Hu, The least-squares solutions of inconsistent matrix equation over symmetric and antipersymmetric matrices.
[10] L. Zhao, X. Hu, and L. Zhang, Least squares solutions to AX=B for bisymmetric matrices under a central principal submatrix constraint and the optimal approximation.
[11] Q. Wang, J. Sun, and S. Li, Consistency for bi(skew)symmetric solutions to systems of generalized Sylvester equations over a finite central algebra.
[12] M. Dehghan and M. Hajarian, An iterative method for solving the generalized coupled Sylvester matrix equations over generalized bisymmetric matrices.
[13] M. Dehghan and M. Hajarian, Analysis of an iterative algorithm to solve the generalized coupled Sylvester matrix equations.
[14] C. Song, G. Chen, and L. Zhao, Iterative solutions to coupled Sylvester-transpose matrix equations.
[15] C. Q. Gu and H. J. Qian, Skew-symmetric methods for nonsymmetric linear systems with multiple right-hand sides.
[16] G. Konghua, X. Hu, and L. Zhang, A new iteration method for the matrix equation AX=B.
[17] K. Jbilou, Smoothing iterative block methods for linear systems with multiple right-hand sides.
[18] S. Karimi and F. Toutounian, The block least squares method for solving nonsymmetric linear systems with multiple right-hand sides.
[19] F. Li, L. Gong, X. Hu, and L. Zhang, Successive projection iterative method for solving matrix equation AX=B.
[20] F. Toutounian and S. Karimi, Global least squares method (Gl-LSQR) for solving general linear systems with several right-hand sides.
[21] Q. Wang, A system of matrix equations and a linear matrix equation over arbitrary regular rings with identity.
[22] G. X. Huang, F. Yin, and K. Guo, An iterative method for the skew-symmetric solution and the optimal approximate solution of the matrix equation AXB=C.
[23] J. K. Baksalary and R. Kala, The matrix equation AXB+CYD=E.
[24] S. K. Mitra, Common solutions to a pair of linear matrix equations A1XB1=C1 and A2XB2=C2.
[25] S. K. Mitra, A pair of simultaneous linear matrix equations A1XB1=C1, A2XB2=C2 and a matrix programming problem.
[26] N. Shinozaki and M. Sibuya, Consistency of a pair of matrix equations with an application.
[27] A. Navarra, P. L. Odell, and D. M. Young, A representation of the general common solution to the matrix equations A1XB1=C1 and A2XB2=C2 with applications.
[28] M. Hajarian, Matrix form of the CGS method for solving general coupled matrix equations.
[29] M. Hajarian, Matrix iterative methods for solving the Sylvester-transpose and periodic Sylvester matrix equations.
[30] M. Hajarian, Matrix form of the Bi-CGSTAB method for solving the coupled Sylvester matrix equations.
[31] M. Hajarian, Solving the general coupled and the periodic coupled matrix equations via the extended QMRCGSTAB algorithms.
[32] Z.-Y. Peng, L. Wang, and J.-J. Peng, The solutions of matrix equation AX=B over a matrix inequality constraint.
[33] J. F. Li, Z. Y. Peng, and J. J. Peng, Bisymmetric solution of the matrix equation AX=B under a matrix inequality constraint.
[34] K. Y. Zhang and Z. Xu.
[35] J. J. Moreau, Décomposition orthogonale d'un espace hilbertien selon deux cônes mutuellement polaires.
[36] A. P. Wierzbicki and S. Kurcyusz, Projection on a cone, penalty functionals and duality theory for problems with inequality constraints in Hilbert space.
[37] P. G. Ciarlet.