The Tensor Padé-Type Approximant with Application in Computing the Tensor Exponential Function

The tensor exponential function is an important and widely used function. In this paper, the tensor Padé-type approximant (TPTA) is defined for the first time by introducing a generalized linear functional. The expression of TPTA is provided in generating-function form. Moreover, by means of formal orthogonal polynomials, we propose an efficient algorithm for computing TPTA. As an application, the TPTA for computing the tensor exponential function is presented. Numerical examples are given to demonstrate the efficiency of the proposed algorithm.


Introduction
The tensor exponential function is an important and widely used function, owing to its key role in the solution of tensor differential equations [1][2][3][4]. For instance, the Markovian master equation can be written as the tensor differential equation $(d/dt)\mathcal{P}(t) = \mathcal{A} \cdot \mathcal{P}(t)$, where the probability tensor $\mathcal{P}(t) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ [5]. Consider the initial value problem defined by the tensor ordinary differential equation [6, 7]
$$\dot{\mathcal{Y}}(t) = \mathcal{A}\,\mathcal{Y}(t), \qquad \mathcal{Y}(t_0) = \mathcal{Y}_0, \tag{1}$$
where the superimposed dot denotes differentiation with respect to $t$, and $\mathcal{A}$ and $\mathcal{Y}_0$ are given constant tensors. The solution to system (1) is $\mathcal{Y}(t) = \exp[(t - t_0)\mathcal{A}]\,\mathcal{Y}_0$, where $\exp(\cdot)$ is the tensor exponential function.
The preceding series is absolutely convergent for any argument $\mathcal{A}$ and, like its scalar counterpart, can be used to evaluate the tensor exponential function to any prescribed degree of accuracy [16]. The computation of (2) is carried out by simply truncating the infinite series at $n_t$ terms, with $n_t$ chosen such that $(1/n_t!)\,\|\mathcal{A}^{n_t}\| \le \epsilon_{\mathrm{tol}}$. However, the accuracy and effectiveness of this algorithm are limited by round-off error and the choice of termination criterion [16]. The Padé approximant has become by far the most widely used tool for evaluating exponential functions and formal power series, for two reasons: first, the series may converge too slowly to be of any use, and the approximant can accelerate its convergence; second, only a few coefficients of the series may be known, and a good approximation to the series is needed to obtain properties of the function it represents [20]. For instance, the matrix Padé-type approximant (MPTA) [21] can be used to simplify a high-degree multivariable system by approximating its transfer function matrix $G(s)$, which can be expanded into a power series with matrix coefficients, i.e., $G(s) = \sum_{i=0}^{\infty} C_i s^i$, where $C_i \in \mathbb{C}^{m \times n}$. The key to constructing TPTA is to keep the order of the tensor $\mathcal{A}$ the same under different powers. For this purpose, we introduce the t-product [22, 23] of two tensors. In addition, in order to give the definition of TPTA, we introduce a generalized linear functional on the tensor space for the first time.
This paper is organized as follows. In Section 2, we provide some preliminaries: we first introduce the t-product of two tensors, and then recall the definitions of the tensor exponential function and the Frobenius norm of a tensor. In Section 3.1, we define the tensor Padé-type approximant by means of a generalized linear functional; the expression of TPTA takes the form of a tensor numerator and a scalar denominator. We then introduce the definition of an orthogonal polynomial with respect to the generalized linear functional and sketch an algorithm to compute the TPTA. Numerical examples are given and analyzed in Section 4. Finally, we finish the paper with concluding remarks in Section 5.

Preliminaries
One main problem arises in approximating the tensor exponential function: how to expand $e^{\mathcal{A}}$ into a power series for order-$N$ ($N \ge 3$) tensors. For a symmetric second-order tensor $A$, higher powers of $A$ can be computed by the Cayley-Hamilton theorem [24], but this fails for order-$N$ ($N \ge 3$) tensors. Therefore, in this section we utilize the t-product to obtain higher powers of order-$N$ ($N \ge 3$) tensors. First, we introduce some notation and basic definitions which will be used in the sequel. Throughout this paper, tensors are denoted by calligraphic letters (e.g., $\mathcal{A}$, $\mathcal{B}$), capital letters represent matrices, and lowercase letters refer to scalars.
An order-$N$ tensor can be written as $\mathcal{A} = (a_{i_1 i_2 \cdots i_N}) \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$. Thus, a matrix is a second-order tensor and a vector is a first-order tensor [22]. For $i = 1, \ldots, I_N$, we denote by $\mathcal{A}_i \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_{N-1}}$ the tensor of order $N - 1$ created by holding the $N$th index of $\mathcal{A}$ fixed at $i$. For example, consider a third-order tensor $\mathcal{A} = (a_{i_1 i_2 i_3}) \in \mathbb{R}^{3 \times 3 \times 3}$. Fixing the third index of $\mathcal{A}$, we obtain three $3 \times 3$ matrices, i.e., second-order tensors. Now, we define the t-product of two tensors.
Remark 4. The $k$th power of $\mathcal{A}$ is defined as $\mathcal{A}^k = \mathcal{A} * \mathcal{A} * \cdots * \mathcal{A}$ ($k$ times), where "$*$" denotes the t-product.
Remark 6. One characteristic feature of the t-product is that the order of the product of two tensors does not change, whereas other tensor multiplications lack this feature; that is why we choose the t-product as the multiplication of tensors.
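As an illustrative sketch (not the paper's own code), the t-product of two third-order tensors can be computed in the Fourier domain, following the construction of [22, 23]: apply an FFT along the third mode, multiply the frontal slices facewise, and transform back. The function name `t_product` and the random test data are our own.

```python
import numpy as np

def t_product(A, B):
    """t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x m x n3).

    Computed in the Fourier domain: FFT along the third mode, facewise
    matrix products, inverse FFT.  The result is again third-order,
    which is the property Remark 6 highlights.
    """
    n3 = A.shape[2]
    Ah = np.fft.fft(A, axis=2)
    Bh = np.fft.fft(B, axis=2)
    Ch = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
    for k in range(n3):
        Ch[:, :, k] = Ah[:, :, k] @ Bh[:, :, k]
    return np.real(np.fft.ifft(Ch, axis=2))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))
C = t_product(A, A)    # A^2 in the t-product sense; still 3 x 3 x 3
```

Note that the identity tensor $\mathcal{I}$ for this product has the identity matrix as its first frontal slice and zeros elsewhere, so $\mathcal{A} * \mathcal{I} = \mathcal{A}$.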
The tensor exponential function is a tensor function on tensors analogous to the ordinary exponential function, which can be defined as follows.
Definition 7. Let $\mathcal{A}$ be an $I_1 \times I_2 \times \cdots \times I_N$ real or complex tensor. The tensor exponential function of $\mathcal{A}$, denoted by $e^{\mathcal{A}}$ or $\exp(\mathcal{A})$, is the $I_1 \times I_2 \times \cdots \times I_N$ tensor given by the power series
$$e^{\mathcal{A}} = \sum_{k=0}^{\infty} \frac{1}{k!} \mathcal{A}^k,$$
where $\mathcal{A}^0$ is defined to be the identity tensor $\mathcal{I}$ (see Definition 8) with the same orders as $\mathcal{A}$.
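A minimal sketch of evaluating Definition 7 by truncating the series, with t-product powers computed in the Fourier domain and a tolerance-based stopping rule mirroring the criterion quoted in the introduction; the function names and parameter defaults are our own assumptions, not the paper's algorithms.

```python
import numpy as np

def t_product(A, B):
    """t-product via FFT along the third mode (third-order tensors)."""
    Ah, Bh = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)   # facewise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))

def t_exp(A, tol=1e-12, max_terms=50):
    """Truncated series exp(A) = sum_k A^k / k! under the t-product,
    stopping once the Frobenius norm of the current term drops below tol."""
    n1 = A.shape[0]
    term = np.zeros_like(A)
    term[:, :, 0] = np.eye(n1)          # A^0 = identity tensor
    E = term.copy()
    for k in range(1, max_terms):
        term = t_product(term, A) / k   # next series term A^k / k!
        E += term
        if np.linalg.norm(term) < tol:
            break
    return E
```

When the third dimension equals 1, the t-product reduces to the ordinary matrix product, so `t_exp` then reproduces the matrix exponential; this gives a simple sanity check.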
By Definition 8, we can define the tensor inverse, transpose, and orthogonality. However, we do not discuss these notions here, as they are beyond the scope of the present work. For the details of these definitions, we refer the reader to [22, 23, 25] and the references therein.
Let $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$; then the norm of a tensor is the square root of the sum of the squares of all its entries [25], i.e.,
$$\|\mathcal{A}\| = \sqrt{\sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} a_{i_1 \cdots i_N}^2}.$$
This is analogous to the matrix Frobenius norm. The inner product of two same-sized tensors $\mathcal{A}, \mathcal{B} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ is the sum of the products of their entries [25], i.e.,
$$(\mathcal{A}, \mathcal{B}) = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} a_{i_1 \cdots i_N} b_{i_1 \cdots i_N}.$$
It follows immediately that $(\mathcal{A}, \mathcal{A}) = \|\mathcal{A}\|^2$.
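These two definitions and the identity $(\mathcal{A}, \mathcal{A}) = \|\mathcal{A}\|^2$ can be checked numerically in a few lines (the random tensors are our own illustration):

```python
import numpy as np

# Frobenius norm and inner product of same-sized tensors, as defined above.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3, 4))
B = rng.standard_normal((2, 3, 4))

fro = np.sqrt(np.sum(A ** 2))   # ||A||: square root of the sum of squares
inner = np.sum(A * B)           # (A, B): sum of elementwise products
```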

Tensor Padé-Type Approximant
Let $f(x)$ be a given power series with tensor coefficients, i.e.,
$$f(x) = \sum_{i=0}^{\infty} \mathcal{A}_i x^i, \qquad \mathcal{A}_i \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}. \tag{15}$$
Let $\mathcal{P}$ denote the set of scalar polynomials in one real variable whose coefficients belong to the real field $\mathbb{R}$, and let $\mathcal{P}_k$ denote the set of elements of $\mathcal{P}$ of degree less than or equal to $k$.
Let $c: \mathcal{P} \to \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ be a linear functional on $\mathcal{P}$, acting on $t^i$ by
$$c(t^i) = \mathcal{A}_i, \qquad i = 0, 1, 2, \ldots$$
Then, by the linearity of $c$, we have
$$f(x) = c\big((1 - x t)^{-1}\big),$$
where $c$ acts on the variable $t$.
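The action of $c$ by linearity can be sketched numerically; representing $c$ simply by the list of series coefficients $\mathcal{A}_0, \mathcal{A}_1, \ldots$ and the function name `apply_c` are our own illustrative choices.

```python
import numpy as np

def apply_c(poly_coeffs, series_coeffs):
    """Apply the linear functional c, defined by c(t^i) = A_i, to the
    scalar polynomial sum_i p_i t^i by linearity: c(p) = sum_i p_i A_i."""
    return sum(p * A for p, A in zip(poly_coeffs, series_coeffs))

rng = np.random.default_rng(2)
coeffs = [rng.standard_normal((2, 2, 2)) for _ in range(4)]   # A_0 .. A_3
result = apply_c([1.0, 2.0], coeffs)     # c(1 + 2t) = A_0 + 2 A_1
```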

Definition of Tensor Padé-Type Approximant
Let
$$V_k(x) = \sum_{i=0}^{k} b_i x^i \tag{18}$$
be a scalar polynomial of $\mathcal{P}_k$ of exact degree $k$; in this case, $V_k(x)$ is said to be quasi-monic. Define the tensor polynomial $W_{k-1}(x)$ associated with $V_k(x)$, with tensor-valued coefficients, by
$$W_{k-1}(x) = c\left(\frac{V_k(x) - V_k(t)}{x - t}\right),$$
where $c$ acts on the variable $t$. The polynomials $\tilde{V}_k(x)$ and $\tilde{W}_{k-1}(x)$ are obtained from $V_k(x)$ and $W_{k-1}(x)$, respectively, by reversing the numbering of the coefficients, i.e., $\tilde{V}_k(x) = x^k V_k(1/x)$ and $\tilde{W}_{k-1}(x) = x^{k-1} W_{k-1}(1/x)$. By the procedure given above, the following conclusion is obtained.

Proof. Expanding $(V_k(x) - V_k(t))/(x - t)$ in (18) and applying $c$ yields the coefficients of $W_{k-1}(x)$. Computing $\tilde{V}_k(x) f(x)$, we find that $\tilde{W}_{k-1}(x)/\tilde{V}_k(x)$ matches the series (15) up to order $k$; it is therefore a Padé-type approximant of order $k$ for the given power series (15) and is denoted by $(k-1/k)_f(x)$.
Remark 11. The polynomial $V_k(x)$, called the generating polynomial of $(k-1/k)_f$ with respect to the power series $f(x)$, can be chosen arbitrarily.
Remark 12. The tensor Padé-type approximant $(k-1/k)_f$ carries a degree constraint arising from its construction: the method does not produce a tensor Padé-type approximant of type $(m/k)$ when $m$ differs from $k - 1$.
To fill this gap, we define a new tensor Padé-type approximant by introducing a generalized linear functional.
Let $c^{(m)}: \mathcal{P} \to \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$ be a generalized linear functional on $\mathcal{P}$, acting on $t^i$ by
$$c^{(m)}(t^i) = \mathcal{A}_{m+i}, \qquad i = 0, 1, 2, \ldots$$
Similarly to what was done for $W_{k-1}(x)$, we consider the polynomial $W_k(x)$ associated with $V_k(x)$, now defined by applying $c^{(m)}$ in place of $c$. Setting the numerator and denominator accordingly, we define the approximant of type $(m/k)$. Then we have the following conclusion.
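Under the reading that $c^{(m)}$ simply shifts the moments, its action can be sketched by reusing the same idea as before; the shift-by-slicing representation and the name `apply_cm` are our own assumptions.

```python
import numpy as np

def apply_cm(m, poly_coeffs, series_coeffs):
    """Generalized linear functional c^(m), read here as the shifted
    moments c^(m)(t^i) = A_{m+i}, applied to a polynomial by linearity."""
    return sum(p * A for p, A in zip(poly_coeffs, series_coeffs[m:]))

rng = np.random.default_rng(3)
coeffs = [rng.standard_normal((2, 2)) for _ in range(5)]   # A_0 .. A_4
out = apply_cm(2, [1.0, 1.0], coeffs)    # c^(2)(1 + t) = A_2 + A_3
```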
Example 16. Let $f(x)$ be the given power series. We now apply Algorithm 15 to compute the TPTA of type (2/2) for this example. (2) Use (19) to compute $\tilde{V}_2(x)$. (3) By using (25) and (26), we obtain the associated polynomials. (4) Substituting $\tilde{V}_2(x)$ and $\tilde{W}_1(x)$ into (27), we obtain the numerator. (5) Set $R_{22}(x) = \mathcal{P}_{22}(x)/\tilde{V}_2(x)$; it is easy to verify the approximation order.

Algorithm for Computing TPTA

Generally, the precision of a TPTA is limited, since its denominator polynomial is arbitrarily prescribed. In this subsection, in order to improve the precision of approximation, we propose an algorithm for computing the denominator polynomials and illustrate its efficiency in the next section. First, we give the following conclusion.
Proof. Note that $c$ is a linear functional on $\mathcal{P}$, acting only on $t$. From (18) and (20) we deduce the claim, and the error formula holds.
In terms of the error formula, if we impose that $V_k(x)$ satisfies the condition $c(V_k(x)) = 0$, then the first term of (42) disappears and the order of approximation becomes $k + 1$. If, in addition, we impose the condition $c(x V_k(x)) = 0$, the second term in the expansion of the error also disappears and the order of approximation becomes $k + 2$, and so on. Note that $V_k(x)$ depends on $k + 1$ arbitrary constants; on the other hand, a rational function is defined only up to a common multiplying factor in its numerator and denominator, so $(k-1/k)_f(x)$ depends on $k$ arbitrary constants. Let us therefore take $V_k(x)$ such that
$$c(x^i V_k(x)) = 0, \qquad i = 0, 1, \ldots, k - 1. \tag{43}$$

Definition 18. $V_k(x)$ in (43) is called an orthogonal polynomial with respect to the linear functional $c$, and $(k-1/k)_f(x)$ in (42) is also called a TPTA for the given power series (15) when (43) is satisfied.
From (43) we obtain
$$\sum_{j=0}^{k} b_j \mathcal{A}_{i+j} = 0, \qquad i = 0, 1, \ldots, k - 1. \tag{44}$$
Let $b_k = 1$ in (44); then it follows that
$$\sum_{j=0}^{k-1} b_j \mathcal{A}_{i+j} = -\mathcal{A}_{i+k}, \qquad i = 0, 1, \ldots, k - 1. \tag{45}$$
Forming the scalar product of both sides of (45) with $\mathcal{A}_0, \mathcal{A}_1, \ldots, \mathcal{A}_{k-1}$, respectively, we obtain a system of scalar linear equations. Denote its coefficient matrix by $H_k(\mathcal{A}_0)$, and call $\det(H_k(\mathcal{A}_0))$ the Hankel determinant of $f(x)$ with respect to the coefficients $\mathcal{A}_0, \mathcal{A}_1, \ldots, \mathcal{A}_{k-1}$. Then (46) is converted into the matrix system (48) for the unknown coefficients $b_0, \ldots, b_{k-1}$. In the case of TPTA, $V_k(x)$ is no longer chosen arbitrarily but is determined by the preceding system. This choice of $V_k(x)$ helps to improve the accuracy of approximation, but unfortunately we have not yet been able to guarantee that a solution of system (48) exists. We only give the following basic theorem about system (48), based on linear algebra.

Theorem 19. The solution of (48) exists if and only if $\operatorname{rank}(B) = \operatorname{rank}(B : b)$, where $B$ is the coefficient matrix of (48) and $b$ is its right-hand-side vector.
Proof. The assertion follows from the simple fact that, for a system of linear equations $Bx = b$, where $B$ is a matrix and $x$, $b$ are vectors, a solution exists if and only if $\operatorname{rank}(B) = \operatorname{rank}(B : b)$; i.e., the right-hand-side vector must lie in the space spanned by the columns of the coefficient matrix $B$. Moreover, if $\det(H_k(\mathcal{A}_0)) \ne 0$, then by Cramer's rule the solution is unique.
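The existence test of Theorem 19 is easy to apply in practice before attempting to solve for the denominator coefficients. A minimal sketch (the small matrix below is a stand-in for a Hankel-type moment matrix, not data from the paper):

```python
import numpy as np

def solve_if_consistent(B, b):
    """Solve B x = b only when the existence test of Theorem 19 holds:
    rank(B) == rank([B | b]).  A nonzero determinant would additionally
    give uniqueness via Cramer's rule."""
    aug = np.column_stack([B, b])
    if np.linalg.matrix_rank(B) != np.linalg.matrix_rank(aug):
        return None                      # inconsistent system: no solution
    return np.linalg.lstsq(B, b, rcond=None)[0]

B = np.array([[2.0, 1.0], [1.0, 3.0]])   # stand-in Hankel-type matrix
b = np.array([1.0, 2.0])
x = solve_if_consistent(B, b)            # consistent: a solution is returned
```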
The proof of existence and uniqueness of $(m/k)_f$ is similar to the preceding argument.

Application for Computing the Tensor Exponential Function
The method of truncated infinite series has broad applications in finite single-crystal plasticity for computing the tensor exponential function [16]. However, the accuracy and effectiveness of such an algorithm are limited by round-off error and the choice of termination criterion. In this section, we utilize the TPTA method to compute the tensor exponential function. We start by briefly reviewing some basic equations that model the behaviour of single crystals in the finite strain range [16]. Consider a single-crystal model with the multiplicative split of the deformation gradient, $F = F^e F^p$, where $F^e$ and $F^p$ denote the elastic part and the plastic part, respectively.
For a single crystal with a total number $n_{\mathrm{syst}}$ of slip systems, the evolution of the inelastic deformation gradient $F^p$ is defined by means of the following rate form:
$$\dot{F}^p = \left( \sum_{\alpha=1}^{n_{\mathrm{syst}}} \dot{\gamma}^\alpha \, s_0^\alpha \otimes m_0^\alpha \right) F^p,$$
where $\dot{\gamma}^\alpha$ denotes the contribution of slip system $\alpha$ to the total inelastic rate of deformation. The vectors $s_0^\alpha$ and $m_0^\alpha$ denote, respectively, the slip direction and the normal direction of slip system $\alpha$.
The above tensor differential equation can be discretized in an implicit fashion using the tensor exponential function. The implicit exponential approximation to the inelastic flow equation results in the discrete form (56). This formula is analogous to the exact solution of the initial value problem (1), and it requires evaluating the tensor exponential in (56). In [7], the author used Algorithm 23 to calculate (56).
From Table 1, it is observed that the estimates from TPTA can reach the desired accuracy.
From Table 3, we can see that the $(3/3)$ approximant of $e^{\mathcal{A}}$ gives the best approximation for this example. We also compute $\exp(\mathcal{A})$ using Algorithm 23, and the corresponding numerical results are listed in Table 4. Comparing Table 3 with Table 4, we find that Algorithm 22 requires at most 6 coefficients (since $k = 3$) of the power series expansion of $\exp(\mathcal{A})$ to achieve an error of $10^{-5}$, while Algorithm 23 requires 11 coefficients. It is straightforward to see that Algorithm 23 is more expensive than Algorithm 22, especially for higher-order tensor exponential functions. In practical applications, only a few coefficients of the series may be known, so we can obtain the desired results by means of TPTA. Thus the effectiveness of the proposed Algorithm 22 is verified.
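The cost comparison above has a simple matrix-level analogue (a sketch, not the paper's Algorithms 22/23 and not its tensor data): the diagonal $(3/3)$ Padé approximant of the exponential uses series information only up to degree 6, yet is markedly more accurate than the degree-6 truncated Taylor sum built from the same number of coefficients.

```python
import numpy as np
from math import factorial

def pade_exp(A, p=3, q=3):
    """Diagonal (p/q) Pade approximant of the matrix exponential,
    using the classical numerator/denominator coefficients for exp."""
    N = sum(factorial(p + q - j) * factorial(p)
            / (factorial(p + q) * factorial(j) * factorial(p - j))
            * np.linalg.matrix_power(A, j) for j in range(p + 1))
    D = sum(factorial(p + q - j) * factorial(q)
            / (factorial(p + q) * factorial(j) * factorial(q - j))
            * np.linalg.matrix_power(-A, j) for j in range(q + 1))
    return np.linalg.solve(D, N)        # scalar denominator analogue

def taylor_exp(A, terms):
    """Truncated Taylor series with the given number of terms."""
    E = np.eye(A.shape[0]); term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

A = np.array([[0.0, 0.5], [-0.5, 0.2]])
ref = taylor_exp(A, 40)                               # well-converged reference
err_pade = np.linalg.norm(pade_exp(A) - ref)          # uses coefficients up to degree 6
err_taylor = np.linalg.norm(taylor_exp(A, 7) - ref)   # same 7 coefficients, Taylor only
```

Both approximations match the series to order 6, but the Padé error constant is much smaller, which is the qualitative behaviour reported for TPTA versus the truncated series.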

Conclusion
In this paper, we presented the tensor Padé-type approximant (TPTA) method for computing the tensor exponential function; the expression of TPTA takes the form of a tensor numerator and a scalar denominator. In order to obtain a tensor Padé-type approximant with the highest possible precision of approximation, we proposed an algorithm for computing the denominator polynomials of TPTA, and its effectiveness was investigated on an example of the tensor exponential function. The key to applying TPTA to the tensor exponential function is that, by means of the t-product, the function can be expanded into a power series whose tensor coefficients all have the same order. Of course, there are several other ways to multiply tensors [26][27][28][29][30], but the order of the resulting tensor may change. For example, if $\mathcal{A}$ is $I_1 \times I_2 \times I_3$ and $\mathcal{B}$ is $I_1 \times J_2 \times J_3$, then the contracted product [26] of $\mathcal{A}$ and $\mathcal{B}$ is $I_2 \times I_3 \times J_2 \times J_3$. The choice of tensor multiplication thus remains an open question for expanding the tensor exponential function, and the corresponding tensor Padé approximant theory is a subject of further research.

Table 1 :
Numerical results of Example 24 at different points by using Algorithm 22.

Table 2 :
The exact value of exp(A).

Table 3 :
Numerical approximations of exp(A) using Algorithm 22 for Example 25.

Table 4 :
Numerical results of Example 25 by using Algorithm 23.