A New Method of Kronecker Product Decomposition

Kronecker product decomposition is often applied in various fields such as particle physics, signal processing, image processing, semidefinite programming, quantum computing, and matrix time series analysis. In this paper, a new method of Kronecker product decomposition is proposed. Theoretical results ensure that the new method is convergent and stable. The simulation results show that the new method is far faster than the known method. In fact, the new method is well suited for exact decomposition, fast decomposition, big-matrix decomposition, and online decomposition of Kronecker products. Finally, directions for extending the new method are discussed.


Introduction
The Kronecker product, as a special case of the tensor product, is a concept having its origin in group theory and has been successfully applied in various fields such as particle physics, signal processing, image processing, semidefinite programming, and quantum computing [1]. Ford and Tyrtyshnikov [2] combined a discrete wavelet transform approximation with an approximation by a sum of Kronecker products to enable the solution of very large dense linear systems by an iterative technique using a Kronecker product approximation represented in a wavelet basis. Yang et al. [3] studied the generalized Kronecker product linear system associated with a class of consecutive-rank-descending matrices arising from bivariate interpolation problems. Muñoz-Matute et al. [4] introduced an algorithm to speed up the computation of the φ-function action on vectors for two-dimensional (2D) matrices expressed as a Kronecker sum using Kronecker products of one-dimensional matrices. For further literature, see Rifa and Zinoviev [5], Enríquez and Rosas-Ortiz [6], Hao et al. [7], Marco et al. [8], Chen and Kressner [9], and the references cited therein.
Definition 1 (see [10]). Assume matrices A = (a_ij)_(m×n) and B = (b_ij)_(p×q); then the m × n block matrix (a_ij B)_(m×n), of dimension mp × nq, is called the Kronecker product of A and B, denoted by A ⊗ B. The Kronecker product decomposition of a matrix C is the factorization of C into the Kronecker product of two matrices, C = A ⊗ B, where the dimensions of A and B are given.
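Definition 1 can be checked numerically. The sketch below (in Python/NumPy rather than the paper's MATLAB) builds A ⊗ B block by block, exactly as in the definition, and compares the result with NumPy's built-in `numpy.kron`:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])          # 2 x 2
B = np.array([[0, 5],
              [6, 7]])          # 2 x 2

# Definition 1: the (i, j) block of A ⊗ B is a_ij * B
blocks = [[A[i, j] * B for j in range(A.shape[1])] for i in range(A.shape[0])]
C = np.block(blocks)            # mp x nq = 4 x 4

assert np.array_equal(C, np.kron(A, B))
print(C.shape)                  # (4, 4)
```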
As the theory of matrix time series analysis develops, we often need to deal with Kronecker product decomposition; see Chen et al. [11] and Wu and Hua [12]. For example, consider the first-order autoregressive model for a centralized matrix time series in bilinear form, X_t = ΦX_(t−1)Ψ + ε_t, t = 1, 2, . . ., (2) where {X_t, t ∈ Z} is a matrix time series, {ε_t, t ∈ Z} is a matrix white noise, and Φ and Ψ are two constant matrices.
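The reason a Kronecker product appears in the moment equations below is the standard vectorization identity vec(ΦXΨ) = (Ψ^⊤ ⊗ Φ)vec(X); applied to model (2), it gives:

```latex
\operatorname{vec}(X_t)
  = \operatorname{vec}(\Phi X_{t-1} \Psi) + \operatorname{vec}(\varepsilon_t)
  = \bigl(\Psi^{\top} \otimes \Phi\bigr)\operatorname{vec}(X_{t-1})
    + \operatorname{vec}(\varepsilon_t).
```

Thus estimating the transition matrix of the vectorized model and recovering Φ and Ψ from it is precisely a Kronecker product decomposition problem.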
According to the moment estimation method, it is easy to obtain equation (3), where vec(•) denotes the vectorization of a matrix by columns and N is the length of the observation sequence. Then, we need to solve for Φ and Ψ, which is a problem of Kronecker product decomposition. Denote (4); then we need to solve for Φ and Ψ such that (5) holds. As far as we know, there is one existing method to solve (5), namely, the optimization method [11]. That is, (5) is transformed into the minimum problem on matrices (6), where ‖ • ‖_F is the Frobenius norm of a matrix. However, solving (6) is very slow when C is a big matrix.
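The paper does not spell out how the minimum problem (6) is solved. One simple way to minimize ‖C − A ⊗ B‖_F is alternating least squares, sketched below in Python/NumPy; the function name `kron_als`, the initialization, and the iteration count are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def kron_als(C, m, n, p, q, iters=50):
    """Alternating least squares for min ||C - A kron B||_F (illustrative sketch)."""
    # gather the m*n blocks of C, each of size p x q, into a 4-D array
    blocks = np.array([[C[i*p:(i+1)*p, j*q:(j+1)*q] for j in range(n)]
                       for i in range(m)])
    B = np.ones((p, q))
    for _ in range(iters):
        # with B fixed, the best a_ij is <C_ij, B> / <B, B>
        A = np.einsum("ijuv,uv->ij", blocks, B) / np.sum(B * B)
        # with A fixed, the best B is sum_ij a_ij C_ij / sum_ij a_ij^2
        B = np.einsum("ijuv,ij->uv", blocks, A) / np.sum(A * A)
    return A, B
```

On a matrix that exactly equals a Kronecker product, this iteration converges after the first sweep; like problem (6) itself, it only determines A and B up to the scaling (lA) ⊗ ((1/l)B).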
In this paper, we propose a new method of Kronecker product decomposition. The method is convergent and stable. Also, the new method is far easier and faster than the optimization method (6).

The New Method of Kronecker Product Decomposition
In this section, we propose a new method to find A = (a_ij)_(m×n) and B = (b_ij)_(p×q) satisfying C = A ⊗ B, (7) where C = (c_ij)_(mp×nq) is given. For (7) and any constant l ≠ 0, it follows that A ⊗ B = (lA) ⊗ ((1/l)B). (8) That is, the Kronecker product decomposition in (7) is not unique. Thus, we add the constraint condition that long(B̂) = 1. (9) For the sake of convenience, we denote by long(B) the first element of B, scanning column by column, whose absolute value is largest, i.e., long(B) = b_(u0 v0) with (u0, v0) = arg max_(u,v) |b_uv|. (10) For example, let B = [−3, 1; 0, 2]; then long(B) = −3.
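The quantity long(B) takes only a few lines to compute. The sketch below is a Python/NumPy stand-in (the column-major scan is assumed here to match MATLAB's element ordering) and reproduces the example long([−3, 1; 0, 2]) = −3:

```python
import numpy as np

def long_elem(M):
    """First element of M, scanning column by column, with the largest |value|."""
    flat = np.asarray(M).flatten(order="F")   # column-major, as in MATLAB
    return flat[np.argmax(np.abs(flat))]      # argmax returns the first maximizer

B = np.array([[-3, 1],
              [ 0, 2]])
print(long_elem(B))   # -3
```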
For any mp × nq dimensional matrix C, if C = O_(mp×nq), then we can take A = O_(m×n) and B = ones(p, q), where ones(p, q) is the p × q dimensional matrix each element of which is one. Thus, we always assume C ≠ O_(mp×nq) in Kronecker product decomposition.
(i) Steps of Kronecker product decomposition with the constraint condition (9): (1) Block the matrix C into C = (C_ij)_(m×n), where each block C_ij is p × q, and relabel the blocks as C_1, C_2, . . ., C_mn with C_((i−1)n+j) = C_ij. (2) Accumulate the blocks into S_1, S_2, . . ., S_mn, choosing the sign of each added block so that the contributions do not cancel (see Appendix B). (3) Set B̂ = S_mn/long(S_mn), so that long(B̂) = 1. (4) Compute â_ij = mean{C_ij(u, v)/b̂_uv : b̂_uv ≠ 0}, where mean{•} means taking the average value, and C_ij(u, v) and b̂_uv are the (u, v) elements of C_ij and B̂, respectively, for each i = 1, 2, . . ., m and j = 1, 2, . . ., n.
For example, consider decomposing a given C into the Kronecker product of A and B, where B is a 2 × 2 dimensional matrix. Applying the steps of Kronecker product decomposition with the constraint condition (9) yields Â and B̂ such that C = Â ⊗ B̂.
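The displayed equations for Steps (2)–(4) live in the paper's appendix, so the Python/NumPy sketch below is only a simplified stand-in for "KronDecomposition.m": instead of the paper's signed accumulation in Step (2), it takes the block with the largest long-element as a proxy for B̂, normalizes so that long(B̂) = 1, and then averages the elementwise ratios as in Step (4). On a matrix that exactly equals a Kronecker product, this recovers C = Â ⊗ B̂:

```python
import numpy as np

def long_elem(M):
    flat = np.asarray(M).flatten(order="F")       # column-major scan
    return flat[np.argmax(np.abs(flat))]

def kron_decompose(C, m, n, p, q):
    """Simplified sketch: A_hat, B_hat with C ≈ A_hat ⊗ B_hat and long(B_hat) = 1."""
    # Step (1): block C into an m x n grid of p x q blocks
    blocks = [[C[i*p:(i+1)*p, j*q:(j+1)*q] for j in range(n)] for i in range(m)]
    # stand-in for Steps (2)-(3): pick the block with the largest long-element
    ref = max((blocks[i][j] for i in range(m) for j in range(n)),
              key=lambda blk: abs(long_elem(blk)))
    B_hat = ref / long_elem(ref)                  # enforce long(B_hat) = 1
    # Step (4): average the ratios over the nonzero elements of B_hat
    mask = B_hat != 0
    A_hat = np.array([[np.mean(blocks[i][j][mask] / B_hat[mask])
                       for j in range(n)] for i in range(m)])
    return A_hat, B_hat

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[-3., 1.], [0., 2.]])
A_hat, B_hat = kron_decompose(np.kron(A, B), 2, 2, 2, 2)
assert np.allclose(np.kron(A_hat, B_hat), np.kron(A, B))
```

Note that the recovered factors are Â = long(B)·A and B̂ = B/long(B), consistent with the normalization (9).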

Theoretical Properties of the New Method
As for the new method of Kronecker product decomposition, we present some of its properties in this section. First, we show that the new method is always applicable.
Proposition 2. If C ≠ O_(mp×nq), then long(S_mn) ≠ 0 in Step (3) and the set {b̂_uv ≠ 0} in Step (4) is not empty; that is, the new method is always applicable. The proof of the proposition is presented in Appendix A. Theorem 3. If C = A ⊗ B is not a null matrix, where A and B are m × n and p × q dimensional matrices, respectively, then it yields from the new method of Kronecker product decomposition that C is decomposed into the Kronecker product of Â and B̂, where Â = long(B)·A and B̂ = (1/long(B))·B. The proof of the theorem is presented in Appendix B.

it yields from the new method of Kronecker product decomposition that C is decomposed into the Kronecker product of Â and B̂; then
Theorem 3 shows that the new method of Kronecker product decomposition obtains the exact solution if C exactly equals a Kronecker product of A and B. However, in practice C is obtained by some estimation method and will be affected by some random disturbance. That is,

Journal of Mathematics
where ε = (ε_uv)_(mp×nq) is a matrix-valued white noise. That ε is a matrix-valued white noise means that vec(ε) is a vector-valued white noise. Theorem 5. If C = A ⊗ B + ε is not a null matrix, where A and B are m × n and p × q dimensional matrices, respectively, A is a nonzero matrix, and long(B) = 1, then it yields from the new method of Kronecker product decomposition that C is decomposed into the Kronecker product of Â and B̂, with Â and B̂ close to A and B, respectively, when ε is small. The proof of the theorem is presented in Appendix C. Theorem 5 shows that the new method of Kronecker product decomposition is effective; that is, the results of decomposition are close to the original matrices as long as the disturbance ε is not too large.

Simulation
For the sake of convenience, we have compiled a MATLAB program for the new method of Kronecker product decomposition, named "KronDecomposition.m" and based on MATLAB R2020b, which is presented in the appendix.

Simulation for Convergence. Consider
and C is decomposed into a Kronecker product by our "KronDecomposition.m"; that is, at this time the Kronecker product decomposition has no error.
In the following, we consider the Kronecker product decomposition of C with a random disturbance, C = A ⊗ B + λε, where ε follows the uniform distribution on the interval [−1, 1] or the standard normal distribution, i.e., ε ∼ U(−1, 1) or ε ∼ N(0, 1). For each λ, we simulate ε N times and compute the mean μ, standard deviation σ, and maximum value maxError of the absolute value of the maximum error, as well as the running time t of the decomposition, where the Kronecker product decomposition is by our "KronDecomposition.m"; see Table 1. Also, the corresponding results where the Kronecker product decomposition is by the optimization method (6) are presented in Table 2.
Table 1 shows that the mean μ, standard deviation σ, and maximum value maxError of the absolute value of the maximum error by the new method decrease as the disturbance λε decreases, whether ε obeys a uniform distribution or a normal distribution, which is consistent with Theorem 5.
Comparing Tables 1 and 2 shows that the absolute values of the maximum errors by the new method are a little greater than those by the optimization method (6). However, the computing time of the new method is far less than that of the optimization method (6).

Simulation for Computing Speed.
In this subsection, we present a comparison of the new method and the optimization method (6) in terms of computing speed. We consider the Kronecker product decomposition of C with different dimensions as follows. For the sake of simplicity, we set n = 4, q = 2, and p = m, where we take p = m only to make the optimization method (6) easier. Also, each row of A_(m×4) is [1, 2, 3, 4] and each row of B_(m×2) is [1, 1], where m = 1, 2, 3, . . .. We decompose C_(m²×8) into a Kronecker product of m × 4 and m × 2 matrices by the new method "KronDecomposition.m" and the optimization method (6) for m = 2, 4, 6, . . ., 50, and present the maximum error and running time of the decomposition in Table 3, where Time1 is the running time of the new method "KronDecomposition.m," Time2 is the running time of the optimization method (6), maxError1 is the maximum error of the new method "KronDecomposition.m," and maxError2 is the maximum error of the optimization method (6). Then we draw the running times of the new method "KronDecomposition.m" and the optimization method (6) in Figure 1.
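The test matrices of this subsection are easy to build. The Python/NumPy lines below (a stand-in for the paper's MATLAB setup) construct A_(m×4), B_(m×2), and C_(m²×8) for one value of m:

```python
import numpy as np

m = 4
A = np.tile([1, 2, 3, 4], (m, 1))   # each row of A is [1, 2, 3, 4]
B = np.tile([1, 1], (m, 1))         # each row of B is [1, 1] (p = m, q = 2)
C = np.kron(A, B)                   # (m*m) x 8, i.e. m^2 x 8
print(C.shape)                      # (16, 8)
```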
Table 3 shows that there is no error when using the new method "KronDecomposition.m" to decompose C_(m²×8) into a Kronecker product for all m = 2, 4, 6, . . ., 50, and the error using the optimization method (6) is also very small. Then, Table 3 and Figure 1 show that the running time of the new method "KronDecomposition.m" to decompose C_(m²×8) into a

Kronecker product is far less than that of the optimization method (6) for all m = 2, 4, 6, . . ., 50. In summary, for decomposing C_(m²×8) into a Kronecker product, the new method "KronDecomposition.m" is much better than the optimization method (6).

Applications of Kronecker Product Decomposition
For the sake of clarity, we denote the matrix time series by X_t, where P_1(t) and V_1(t) are the daily closing price and daily volume of Stock 000046, P_2(t) and V_2(t) are the daily closing price and daily volume of Stock 000563, and P_3(t) and V_3(t) are the daily closing price and daily volume of Stock 000617.
In the following, we consider the logarithmic rates (log rates) of the daily closing prices and daily volumes of the three stocks, and fit the first-order matrix autoregressive model, MAR(1), X_t = ΦX_(t−1)Ψ + ε_t, where {ε_t, t ≥ 2} is a 3 × 2-dimensional matrix white noise series.
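The log rates used here are the standard first differences of log prices. A minimal Python/NumPy sketch, using made-up illustrative numbers rather than the actual stock data:

```python
import numpy as np

# hypothetical daily closing prices of one stock (not the real data)
prices = np.array([10.0, 10.5, 10.2, 10.8])

# log rate: r_t = log(P_t) - log(P_{t-1})
log_rates = np.diff(np.log(prices))
print(log_rates.shape)   # (3,)
```

The same transformation, applied entrywise to the 3 × 2 series of prices and volumes, produces the matrix series fed into the MAR(1) model.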

Conclusion
A new method of Kronecker product decomposition is proposed, which is easy, convergent, stable, and fast. The new method is well suited for exact decomposition, fast decomposition, big-matrix decomposition, and online decomposition of Kronecker products.
Compared with the known method of Kronecker product decomposition, i.e., the optimization method, the new method is much faster. If the matrix to be decomposed exactly equals a Kronecker product of two matrices, the new method quickly obtains the exact solution, whereas the known method has a small error. If the matrix does not exactly equal a Kronecker product of two matrices, the error of the new method is a little larger than that of the known method, but the new method is still much faster.
There are many directions in which to extend the new method. One possible extension is to use a weighted average instead of the arithmetic average in Step (4). Furthermore, the method can be applied in many fields such as group theory, particle physics, matrix time series analysis, and dynamic complex network modeling.

A. Proof of Proposition 2
Assume long(S_mn) = 0 in Step (3), that is, S_mn = O_(p×q). It then yields from Step (2), by the recursive method, that C = O_(mp×nq), which contradicts the assumption C ≠ O_(mp×nq). That is, long(S_mn) ≠ 0 holds in Step (3). Furthermore, it yields from long(S_mn) ≠ 0 that S_mn ≠ O_(p×q); then B̂ ≠ O_(p×q), so the set {b̂_uv ≠ 0, u = 1, 2, . . ., p; v = 1, 2, . . ., q} is not empty in Step (4).

B. Proof of Theorem 3
It follows from Step (1) that C_((i−1)n+j) = C_ij = a_ij B, where a_ij is the (i, j) element of A. Without loss of generality, we assume a_11 ≠ 0; otherwise, S_1 = O_(p×q) and we consider whether a_12 equals zero, and so on. Owing to a_11 ≠ 0, it obtains from Step (2) that S_mn = Σ_{l=1}^{mn} (−1)^{I_l} a_l B, where a_((i−1)n+j) = a_ij for all i = 1, 2, . . ., m and j = 1, 2, . . ., n, and sign(•) is the sign function, i.e., sign(x) = 1 for x > 0, sign(x) = 0 for x = 0, and sign(x) = −1 for x < 0. Thus,

C. Proof of Theorem 5
First, we block the matrix-valued white noise ε into ε = (E_ij)_(m×n) and denote E_((i−1)n+j) = E_ij, where E_ij is a p × q dimensional matrix for all i = 1, 2, . . ., m and j = 1, 2, . . ., n. It follows from Step (1) that C_((i−1)n+j) = C_ij = a_ij B + E_ij, where a_((i−1)n+j) = a_ij and a_ij is the (i, j) element of A.
In the following, we show how the addition or subtraction used to compute S_k in Step (2) is determined. Denote S_(l0−1) = Σ_{l=1}^{l0−1} (−1)^{I_l} E_l, where we stipulate I_1 = 0. Denote the sign of the first element whose absolute value equals ‖S_(l0−1)‖_M by "κ"; then it follows from (C.6) that the sign of the first element with the largest absolute value in S_(l0) is also κ. Furthermore, using a series of complex calculations, we can obtain the stated bound for Â.

Case C.2. The minimum of all elements of B equals −1. Similar to Case C.1, we can obtain from Step (2) that S_mn = S_(l0−1) + Σ_{l=l0}^{mn} (−1)^{I_l} C_l = Σ_{l=1}^{mn} (−1)^{I_l} E_l + Σ_{l=l0}^{mn} (−1)^{I_l} a_l B.

Table 1: Kronecker product decomposition for different random disturbances by the new method.

Table 3: Comparison of running time and maximum error for Kronecker product decomposition.