1. Introduction
Throughout, we denote the complex m × n matrix space by ℂ^{m×n}. The symbols I, A*, A^{-1}, and ∥A∥ stand for the identity matrix of appropriate size, the conjugate transpose, the inverse, and the Frobenius norm of A ∈ ℂ^{m×n}, respectively.
The study of solutions to various matrix equations is a very active research topic [1–4]. Many authors have investigated the classical matrix equation
(1) AX = B
with different constraints, such as symmetric, reflexive, Hermitian generalized Hamiltonian, and Re-positive definite constraints [5–9]. By means of special matrix decompositions such as the singular value decomposition (SVD) and the CS decomposition [10–12], Hu and his collaborators [13–15], Dai [16], and Don [17] have presented existence conditions and explicit representations of constrained solutions of (1) under the corresponding constraints. For instance, Peng and Hu [18] presented the eigenvector-involved solutions of (1) with reflexive and antireflexive constraints; Wang and Yu [19] derived the bi(skew)symmetric solutions and the minimum-norm bi(skew)symmetric least squares solutions of this matrix equation; Qiu and Wang [20] proposed an eigenvector-free method for (1) with the constraints PX = XP and X* = sX, where P is a Hermitian involutory matrix and s = ±1.
Inspired by the work mentioned above, we focus on the matrix equation (1) with the constraints PX = XP and X* = X, which can be described as follows: find X such that
(2) {∥AX - B∥^2 = min, PX = XP, X* = X}.
Moreover, we also discuss the least squares solutions of (1) with the constraints PX = XGPG* and X* = X, where G is a given unitary matrix of order n.
In Section 2, we present the least squares solutions to the matrix equation (1) with the constraints PX = XP and X* = X. In Section 3, we derive the least squares solutions to the matrix equation (1) with the constraints PX = XGPG* and X* = X. In Section 4, we give an algorithm and a numerical example to illustrate our results.
2. Least Squares Solutions to the Matrix Equation (<xref reftype="dispformula" rid="EEq1.1">1</xref>) with the Constraints <inlineformula>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M32">
<mml:mi>P</mml:mi>
<mml:mi>X</mml:mi>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mo mathvariant="bold">=</mml:mo>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mi>X</mml:mi>
<mml:mi>P</mml:mi></mml:math>
</inlineformula> and <inlineformula>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M33">
<mml:msup>
<mml:mrow>
<mml:mi>X</mml:mi></mml:mrow>
<mml:mrow>
<mml:mi>*</mml:mi></mml:mrow>
</mml:msup>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mo mathvariant="bold">=</mml:mo>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mi>X</mml:mi></mml:math>
</inlineformula>
To solve the constrained problem, we first transform it into an unconstrained one. To this end, let
(3) P = U diag(I_k, -I_{n-k}) U*
be the eigenvalue decomposition of the Hermitian matrix P with a unitary matrix U. Obviously, PX = XP holds if and only if
(4) diag(I_k, -I_{n-k}) X̄ = X̄ diag(I_k, -I_{n-k}),
where X̄ = U*XU. Partitioning
(5) X̄ = [X11, X12; X21, X22], X11 ∈ ℂ^{k×k}, X22 ∈ ℂ^{(n-k)×(n-k)},
we see that (4) is equivalent to
(6) X12 = -X12, X21 = -X21,
that is, X12 = 0 and X21 = 0.
Therefore,
(7) X = U diag(X11, X22) U*, X11 ∈ ℂ^{k×k}, X22 ∈ ℂ^{(n-k)×(n-k)}.
Taking the constraint X* = X into account as well, X satisfies both constraints if and only if
(8) X = U diag(X1, X2) U*, Xi* = Xi, i = 1, 2,
with X1 ∈ ℂ^{k×k}, X2 ∈ ℂ^{(n-k)×(n-k)}.
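The reduction (3)–(8) is easy to check numerically. The following sketch (in NumPy, with a hypothetical involutory P and randomly generated Hermitian blocks; all variable names are illustrative, not from the paper) constructs an X of the form (8) and verifies that it satisfies both constraints:

```python
import numpy as np

# Hypothetical 4x4 Hermitian involutory P (eigenvalues +1 and -1).
P = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

# Eigenvalue decomposition P = U diag(I_k, -I_{n-k}) U* as in (3).
# np.linalg.eigh returns eigenvalues in ascending order, so reorder
# to put the k eigenvalues equal to +1 first.
w, U = np.linalg.eigh(P)
order = np.argsort(-w)
w, U = w[order], U[:, order]
k = int(np.sum(w > 0))

rng = np.random.default_rng(0)
def rand_herm(p):
    Z = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return (Z + Z.conj().T) / 2

# Any X of the form (8) -- U diag(X1, X2) U* with X1, X2 Hermitian --
# satisfies both PX = XP and X* = X.
n = P.shape[0]
X1, X2 = rand_herm(k), rand_herm(n - k)
Xbar = np.zeros((n, n), dtype=complex)
Xbar[:k, :k], Xbar[k:, k:] = X1, X2
X = U @ Xbar @ U.conj().T

assert np.allclose(P @ X, X @ P)    # PX = XP
assert np.allclose(X, X.conj().T)   # X* = X
```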
Partition U = (U1, U2) with U1 ∈ ℂ^{n×k} and denote
(9) A1 = AU1, A2 = AU2, B1 = BU1, B2 = BU2;
then let the singular value decompositions of A1 and A2 be as follows:
(10) A1 = M1 [Σ1, 0; 0, 0] N1*, A2 = M2 [Σ2, 0; 0, 0] N2*,
where M1, M2, N1, and N2 are unitary matrices, Σ1 = diag(α1, …, αr) with αi > 0 (i = 1, …, r), r = rank(A1), Σ2 = diag(β1, …, βl) with βj > 0 (j = 1, …, l), and l = rank(A2).
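The quantities in (9) and (10) can be computed directly with standard SVD routines. Below is a small sketch with random data (the matrices U, A, B here are random stand-ins, not the ones used later in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 5, 4, 2

# Random stand-ins: a unitary U partitioned as U = (U1, U2), U1 n-by-k.
U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

U1, U2 = U[:, :k], U[:, k:]
A1, A2 = A @ U1, A @ U2          # (9)
B1, B2 = B @ U1, B @ U2

# Full SVDs A1 = M1 [Sigma1 0; 0 0] N1*, A2 = M2 [Sigma2 0; 0 0] N2*.
M1, s1, N1h = np.linalg.svd(A1)  # N1h is N1*
M2, s2, N2h = np.linalg.svd(A2)
r = int(np.sum(s1 > 1e-12))      # r = rank(A1)
l = int(np.sum(s2 > 1e-12))      # l = rank(A2)
Sigma1, Sigma2 = np.diag(s1[:r]), np.diag(s2[:l])

# Reassemble A1 from its SVD factors to confirm the decomposition (10).
S = np.zeros((m, k))
S[:r, :r] = Sigma1
assert np.allclose(M1 @ S @ N1h, A1)
```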
Theorem 1.
Let A, B ∈ ℂ^{m×n}. Then the least squares solutions to the matrix equation (1) with the constraints PX = XP and X* = X can be expressed as
(11) X = U diag(N1 [(Σ1^{-1}B11 + B11*Σ1^{-1})/2, Σ1^{-1}B12; B12*Σ1^{-1}, X14] N1*, N2 [(Σ2^{-1}B21 + B21*Σ2^{-1})/2, Σ2^{-1}B22; B22*Σ2^{-1}, X24] N2*) U*,
where X14 = X14* and X24 = X24* are arbitrary Hermitian matrices.
Proof.
According to (8) and the unitary invariance of the Frobenius norm,
(12) ∥AX - B∥ = ∥AU diag(X1, X2) U* - B∥ = ∥AU diag(X1, X2) - BU∥.
By (9), the least squares problem is equivalent to minimizing
(13) ∥AX - B∥ = ∥(A1X1 - B1, A2X2 - B2)∥.
We get
(14) ∥AX - B∥^2 = ∥A1X1 - B1∥^2 + ∥A2X2 - B2∥^2.
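The splitting (14) is a consequence of the unitary invariance of the Frobenius norm; here is a quick numerical check with random data (all matrices below are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 5, 4, 2

U, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

def rand_herm(p):
    Z = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return (Z + Z.conj().T) / 2

# X of the constrained form (8): X = U diag(X1, X2) U*.
X1, X2 = rand_herm(k), rand_herm(n - k)
Xbar = np.block([[X1, np.zeros((k, n - k))],
                 [np.zeros((n - k, k)), X2]])
X = U @ Xbar @ U.conj().T

A1, A2 = A @ U[:, :k], A @ U[:, k:]
B1, B2 = B @ U[:, :k], B @ U[:, k:]

lhs = np.linalg.norm(A @ X - B) ** 2   # ||AX - B||^2
rhs = np.linalg.norm(A1 @ X1 - B1) ** 2 + np.linalg.norm(A2 @ X2 - B2) ** 2
assert np.isclose(lhs, rhs)            # the splitting (14)
```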
According to (10), the least squares problem is equivalent to
(15) ∥AX - B∥^2 = ∥M1 [Σ1, 0; 0, 0] N1* X1 - B1∥^2 + ∥M2 [Σ2, 0; 0, 0] N2* X2 - B2∥^2 = ∥[Σ1, 0; 0, 0] N1* X1 N1 - M1* B1 N1∥^2 + ∥[Σ2, 0; 0, 0] N2* X2 N2 - M2* B2 N2∥^2.
Assume that
(16) N1* X1 N1 = [X11, X12; X13, X14], N2* X2 N2 = [X21, X22; X23, X24], M1* B1 N1 = [B11, B12; B13, B14], M2* B2 N2 = [B21, B22; B23, B24],
partitioned conformally with (10). Since X1 and X2 are Hermitian, X13 = X12*, X23 = X22*, and X11, X14, X21, X24 are Hermitian.
Then we have
(17) ∥AX - B∥^2 = ∥[Σ1, 0; 0, 0][X11, X12; X13, X14] - [B11, B12; B13, B14]∥^2 + ∥[Σ2, 0; 0, 0][X21, X22; X23, X24] - [B21, B22; B23, B24]∥^2 = ∥Σ1X11 - B11∥^2 + ∥Σ2X21 - B21∥^2 + ∥Σ1X12 - B12∥^2 + ∥Σ2X22 - B22∥^2 + ∥B13∥^2 + ∥B14∥^2 + ∥B23∥^2 + ∥B24∥^2.
Hence
(18) ∥AX - B∥^2 = min
is attained if and only if there exist X11, X12, X21, X22 such that
(19) ∥Σ1X11 - B11∥^2 = min, ∥Σ1X12 - B12∥^2 = min, ∥Σ2X21 - B21∥^2 = min, ∥Σ2X22 - B22∥^2 = min.
It follows from (19), together with the Hermitian requirements on X11 and X21, that
(20) X11 = (Σ1^{-1}B11 + B11*Σ1^{-1})/2, X12 = Σ1^{-1}B12, X21 = (Σ2^{-1}B21 + B21*Σ2^{-1})/2, X22 = Σ2^{-1}B22.
Substituting (20) into (16) and then into (8), we obtain the expression (11) for X.
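The block formulas in (20) can be sketched as follows (random data; names are illustrative). Note that X12 attains a zero residual because Σ1 is invertible, while X11 is the symmetrized, hence Hermitian, version of Σ1^{-1}B11:

```python
import numpy as np

rng = np.random.default_rng(3)
r, k = 3, 5  # hypothetical sizes: Sigma1 is r-by-r, X1 is k-by-k

s = rng.uniform(1.0, 2.0, size=r)       # positive singular values
Sigma1 = np.diag(s)
S1inv = np.diag(1.0 / s)
B11 = rng.standard_normal((r, r)) + 1j * rng.standard_normal((r, r))
B12 = rng.standard_normal((r, k - r)) + 1j * rng.standard_normal((r, k - r))

# Block choices of (20): X12 solves its subproblem exactly, and X11 is
# the symmetrization of Sigma1^{-1} B11, which keeps it Hermitian.
X11 = (S1inv @ B11 + B11.conj().T @ S1inv) / 2
X12 = S1inv @ B12

assert np.allclose(X11, X11.conj().T)                     # Hermitian
assert np.isclose(np.linalg.norm(Sigma1 @ X12 - B12), 0)  # exact fit
```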
3. Least Squares Solutions to the Matrix Equation (<xref reftype="dispformula" rid="EEq1.1">1</xref>) with the Constraints <inlineformula>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M76">
<mml:mi>P</mml:mi>
<mml:mi>X</mml:mi>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mo mathvariant="bold">=</mml:mo>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mi>X</mml:mi>
<mml:mi>G</mml:mi>
<mml:mi>P</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mi>G</mml:mi></mml:mrow>
<mml:mrow>
<mml:mi>*</mml:mi></mml:mrow>
</mml:msup></mml:math>
</inlineformula> and <inlineformula>
<mml:math xmlns:mml="http://www.w3.org/1998/Math/MathML" id="M77">
<mml:msup>
<mml:mrow>
<mml:mi>X</mml:mi></mml:mrow>
<mml:mrow>
<mml:mi>*</mml:mi></mml:mrow>
</mml:msup>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mo mathvariant="bold">=</mml:mo>
<mml:mo> </mml:mo>
<mml:mo> </mml:mo>
<mml:mi>X</mml:mi></mml:math>
</inlineformula>
In this section, we generalize the constraint PX = XP to PX = XGPG*, where G is a given unitary matrix of order n. Obviously, this constraint is equivalent to
(21) PXG = XGP.
Notice that, since G is unitary, (1) can be equivalently rewritten as
(22) AXG = BG.
Setting Y = XG and C = BG, the equation becomes
(23) AY = C,
with the constraints PY = YP and Y* = Y. Therefore, the least squares solutions to the matrix equation (1) with the constraints PX = XGPG* and X* = X can be obtained in the same way as in Theorem 1.
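The substitution Y = XG, C = BG leaves the residual unchanged because G is unitary; here is a minimal numerical check (random stand-ins for A, B, G, X):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 4, 3

# Random unitary G and random A, B, X as illustrative data.
G, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
B = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

Y, C = X @ G, B @ G  # the substitution of (22)-(23)

# AY - C = (AX - B)G, and right-multiplication by the unitary G
# preserves the Frobenius norm, so the two residuals agree.
assert np.isclose(np.linalg.norm(A @ X - B), np.linalg.norm(A @ Y - C))
```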
Theorem 2.
Let A, B ∈ ℂ^{m×n}. Then the least squares solutions to the matrix equation (1) with the constraints PX = XGPG* and X* = X can be expressed as
(24) X = U diag(N1 [(Σ1^{-1}C11 + C11*Σ1^{-1})/2, Σ1^{-1}C12; C12*Σ1^{-1}, Y14] N1*, N2 [(Σ2^{-1}C21 + C21*Σ2^{-1})/2, Σ2^{-1}C22; C22*Σ2^{-1}, Y24] N2*) U* G*,
where Y14 = Y14* and Y24 = Y24* are arbitrary Hermitian matrices and the blocks Cij are obtained from C = BG exactly as the Bij are obtained from B in (16).
4. An Algorithm and Numerical Examples
Based on the main results of this paper, we in this section propose an algorithm for finding the least squares solutions to the matrix equation
A
X
=
B
with the constraints
P
X
=
X
P
and
X
*
=
X
. All the tests are performed by MATLAB 6.5 which has a machine precision of around
1
0

16
.
Algorithm 3.
(1) Input A, B ∈ ℂ^{m×n} and P ∈ ℂ^{n×n}; compute U ∈ ℂ^{n×n}, I_k ∈ ℂ^{k×k}, and -I_{n-k} ∈ ℂ^{(n-k)×(n-k)} by the eigenvalue decomposition of P.
(2) Compute A1, A2, B1, B2 according to (9).
(3) Compute N1, N2, M1, M2, Σ1, Σ2 by the singular value decompositions of A1 and A2.
(4) Compute B11, B12, B21, B22 according to (16).
(5) Compute X by Theorem 1.
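The five steps above can be collected into a single routine. Below is a sketch of Algorithm 3 in NumPy (not the authors' MATLAB code; the free blocks X14, X24 are set to zero, and P is assumed to be Hermitian involutory):

```python
import numpy as np

def lsq_commuting_hermitian(A, B, P, tol=1e-12):
    """Least squares solution of AX = B with PX = XP and X* = X,
    following Algorithm 3; the arbitrary Hermitian blocks X14, X24
    of Theorem 1 are set to zero."""
    n = P.shape[0]
    # Step 1: eigenvalue decomposition of P, +1 eigenvalues first.
    w, U = np.linalg.eigh(P)
    order = np.argsort(-w)
    w, U = w[order], U[:, order]
    k = int(np.sum(w > 0))

    def block_solution(Ai, Bi):
        # Steps 3-4 for one block, then the formulas of (20) / (11).
        M, s, Nh = np.linalg.svd(Ai)
        r = int(np.sum(s > tol))
        N = Nh.conj().T
        Bt = M.conj().T @ Bi @ N        # partitioned as in (16)
        B11, B12 = Bt[:r, :r], Bt[:r, r:]
        Sinv = np.diag(1.0 / s[:r])
        p = Ai.shape[1]
        Xi = np.zeros((p, p), dtype=complex)
        Xi[:r, :r] = (Sinv @ B11 + B11.conj().T @ Sinv) / 2
        Xi[:r, r:] = Sinv @ B12
        Xi[r:, :r] = B12.conj().T @ Sinv
        return N @ Xi @ N.conj().T      # Hermitian by construction

    # Step 2: split A and B; Step 5: assemble X as in (11).
    X1 = block_solution(A @ U[:, :k], B @ U[:, :k])
    X2 = block_solution(A @ U[:, k:], B @ U[:, k:])
    Xbar = np.zeros((n, n), dtype=complex)
    Xbar[:k, :k], Xbar[k:, k:] = X1, X2
    return U @ Xbar @ U.conj().T
```

By construction the returned X is Hermitian and commutes with P, and its blocks are chosen as in (19)-(20).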
Example 4.
Suppose
(25)
A = [0, 0, 0, 0; 0, 1.2i, 0, 0; 0, 0, 0, 0.8i],
B = [-3, -0.8i, -1-3i, -1; -1-i, -1, 9i, -7; -2, -2, 2i, -2],
P = [1, 0, 0, 0; 0, 1, 0, 0; 0, 0, -1, 0; 0, 0, 0, -1].
Applying Algorithm 3, we obtain the following:
(26)
U = [0, -i, 0, 0; i, 0, 0, 0; 0, 0, 0, 1; 0, 0, 1, 0],
A1 = [0, 0; 1.2, 0; 0, 0], A2 = [0, 0; 0, 0; 0.8i, 0],
B1 = [0.8, 3i; -i, -1+i; -2i, 2i], B2 = [-1, -1-3i; -7, 9i; -2, 2i],
M1 = [0, i, 0; -i, 0, 0; 0, 0, 1], M2 = [0, 1, 0; 0, 0, i; 1, 0, 0],
N1 = [i, 0; 0, 1], N2 = [-i, 0; 0, i],
Σ1 = [1.2], Σ2 = [0.8],
B11 = [i], B12 = [-1-i], B21 = [2i], B22 = [-2],
X = [3, -0.83+0.83i, 0, 0; -0.83-0.83i, 0, 0, 0; 0, 0, -2, 2.5; 0, 0, 2.5, 0].