Algorithm of J-factorization of rational matrices with zeros and poles on the imaginary axis

The problem of J-factorization of rational matrices that have zeros and poles on the imaginary axis is reduced to the construction of solutions of two algebraic Riccati equations. Appropriate algorithms are proposed for constructing these solutions; they make it possible to find the solutions even when the Hamiltonian matrices corresponding to these equations have eigenvalues on the imaginary axis. The proposed factorization algorithms also make it possible to solve the problem when the matrix to be factored has zeros at infinity.


Introduction.
It is known that the procedure of J-factorization of rational matrices arises in modern control theory [6]. For the development of numerical algorithms realizing this procedure, investigation of the connection between J-factorization and the solution of the algebraic Riccati equation (ARE) [8] is very important. The point is that in some cases the J-factorization procedure can be reduced to the solution of an ARE [10]. However, if the matrix has zeros and poles on the imaginary axis, this must be taken into account when reducing the procedure to the solution of an ARE. In that case, special algorithms are needed to solve the ARE, because the Hamiltonian matrix corresponding to the ARE has eigenvalues on the imaginary axis. Below, an algorithm of J-factorization of rational matrices with zeros and poles on the imaginary axis is stated. This algorithm was published in [15].
Before formulating the problem, we recall some known facts. According to [2, 3], the problem of factorization of a rational matrix relative to the imaginary axis is formulated as follows. A rational matrix Φ(s), Hermitian and positive definite on the imaginary axis, is given, that is, Φ(iω) > 0 for any real ω. (1.1) The superscript T denotes transposition. It is necessary to determine a matrix G(s) which satisfies this relation and such that G(s) and G^{-1}(s) have no poles in the open right half-plane C+ = {s : Re s > 0}. Frequently, in such problems the matrix Φ(s) is stated as in [4]: (1.3) Identity matrices of the corresponding dimensions are denoted by I. If the eigenvalues of the matrix A do not belong to C+ and R^{-1} exists (Φ(∞) is invertible), the problem of determining G(s) can be reduced to the solution of an ARE. So, according to [4], in this relation the matrix P is the stabilizing solution of the ARE, that is, the solution which places the eigenvalues of the matrix T in the left half-plane. In this case the matrix has no poles in C+. However, the poles of G(s) are determined by the eigenvalues of the matrix A. For a generalization of this problem to the case when the matrix (1.3) has zeros on the imaginary axis and R ≥ 0, see [5].
The problem of J-factorization of the matrix (1.3) was considered in [10]. If R^{-1} exists, the problem of J-factorization of the matrix (1.3) consists in representing it as follows (see [10, relation (89)]): (1.9) The matrix P is the stabilizing solution of ARE (1.10). Note that if the zeros of Φ(s) lie on the imaginary axis, then the Hamiltonian matrix corresponding to ARE (1.10) will also have eigenvalues on the imaginary axis. In this case, it is difficult to use the standard procedures for solving the ARE (the Schur method [16], the matrix sign function method [9]) to determine the stabilizing solution of ARE (1.10); instead, the algorithms of [7, 12, 13, 14] can be used. However, it should be noted that if such an algorithm of factorization is used, the poles of G(s), as in (1.4), are determined by the eigenvalues of the matrix A.
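For the non-degenerate case (no imaginary-axis eigenvalues of the Hamiltonian matrix), the stabilizing ARE solution can be obtained with a standard solver. A minimal Python/SciPy sketch follows; the matrices A, B, Q, R here are hypothetical illustrations, not taken from the paper, and this routine, like the Schur method, is not applicable in the degenerate situation that the special algorithms [7, 12, 13, 14] address.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical system (not from the paper); solve_continuous_are solves
#   A^T P + P A - P B R^{-1} B^T P + Q = 0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)

# P is stabilizing: the closed-loop matrix A - B R^{-1} B^T P has all of
# its eigenvalues in the open left half-plane.
Acl = A - B @ np.linalg.solve(R, B.T) @ P
assert np.all(np.linalg.eigvals(Acl).real < 0)
```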
There remains the problem of generalizing the algorithm [10] of J-factorization of the matrix (1.3) (representing this matrix as (1.7)). Such an algorithm should guarantee the absence of zeros and poles of the matrix G(s) in C+ for an arbitrary spectrum of the matrix A.
In [1, 11], an algorithm of factorization of the matrix (1.3) was offered which guarantees the absence of zeros and poles of the matrix G(s) in C+. That algorithm made it possible to solve the problem with a singular matrix R. However, it was based on the assumption that the matrix A has no eigenvalues on the imaginary axis.
It is possible to construct an algorithm of J-factorization of the matrix (1.3), which has zeros and poles on the imaginary axis, by generalizing the factorization algorithms [11, 14] and using the algorithms of [12, 13, 14] (see Appendix A) for the solution of the ARE. This algorithm makes it possible to solve the problem with a singular matrix R. In contrast to the algorithm of [12, 13], it does not use the assumption that the matrix A has no eigenvalues on the imaginary axis.

Existence of
As was noted, if the matrix A has eigenvalues in C+, the algorithm of J-factorization defined by relations (1.7), (1.10) does not ensure the absence of poles of the matrix G(s) in C+. Therefore, if the matrix A has eigenvalues in C+, this algorithm must be modified. The modification involves an additional J-factorization of an auxiliary matrix. In this connection, we deduce the expression for the matrix Φ^{-T}(s) (Φ^{-T}(s) = (Φ^{-1}(s))^T) using relation (1.9): (2.1) Note that the matrix Ac has no eigenvalues in C+. As the outcome of a J-factorization similar to (1.7), (1.10), we represent the matrix Φ^{-T}(s) as follows: The matrix S is a stabilizing solution of the following ARE: that is, a solution for which the matrix A^T − K R K^T S has no eigenvalues in C+. We note that the matrices G0(s) and G0^{-1}(s) have no poles in C+ because the corresponding matrix has no eigenvalues in C+. It is essential that its spectrum is determined only by the spectrum of the matrix A. Indeed, the Hamiltonian matrix corresponding to ARE (2.3) has the form Let Λ− be the eigenvalues of the matrix A for which Re(λ) < 0, and let Λ0 and Λ+ be defined accordingly. The eigenvalues of the matrix H will accordingly be ±Λ−, ±Λ0, and ±Λ+. Therefore, the spectrum of the matrix Acc is the union of Λ−, Λ0, and −Λ+. Thus, relation (2.2) allows us to modify the factorization algorithm (1.7) as follows: (2.7) The matrices P and S are stabilizing solutions of AREs (1.10) and (2.3), respectively.
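The eigenvalue pairing used above (eigenvalues of H occurring in pairs ±λ) can be checked numerically. The sketch below builds a Hamiltonian matrix of the standard form H = [[A, −G], [−Q, −A^T]] from hypothetical blocks (the paper's matrices are not reproduced here) and verifies the pairing.

```python
import numpy as np

# Hypothetical blocks (not the paper's matrices); G and Q symmetric.
A = np.array([[1.0, 2.0], [0.0, -1.0]])
G = np.eye(2)
Q = np.eye(2)

# A real matrix of the form H = [[A, -G], [-Q, -A^T]] with G, Q symmetric
# is Hamiltonian, so its eigenvalues occur in pairs +lambda, -lambda.
H = np.block([[A, -G], [-Q, -A.T]])
ev = np.linalg.eigvals(H)

# Every eigenvalue has its negative in the spectrum (up to round-off).
for lam in ev:
    assert np.min(np.abs(ev + lam)) < 1e-6
```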

Computation of the matrix S.
In the general case (when the matrix (1.3) has zeros on the imaginary axis), the algorithms of [7, 12, 13, 14] can be used to determine the matrices P and S. However, the construction of a solution of ARE (2.3) (determination of the matrix S) can be reduced to the determination of a solution of a Lyapunov equation. So, let the orthogonal matrix U reduce the matrix A to the upper triangular Schur form, where Re λ(A+) > 0 and Re λ(A−) ≤ 0. To determine the matrix U, the procedures schur.m and schord.m of the Matlab package can be used. We transform ARE (2.3) using the matrix U: The sizes of the blocks gi (i = 1, 2, 3) correspond to the partitioning (3.1). We seek the solution of ARE (2.3) in the following form: The sizes of the blocks S+ in (3.3) and A+ in (3.1) coincide.
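The Schur reduction just described, together with the Lyapunov-equation step of the next paragraph, can be sketched in Python/SciPy (schur with eigenvalue sorting plays the role of schur.m/schord.m). Since the paper's displayed equations (3.1)-(3.6) are not reproduced here, all matrices and the right-hand side of the Lyapunov equation are assumptions chosen for illustration.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_lyapunov

# Hypothetical data: A has one unstable eigenvalue; G plays the role of
# the (assumed) symmetric right-hand side of the Lyapunov equation.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, -1.0, 1.0],
              [0.0, 0.0, -2.0]])
G = np.eye(3)

# Ordered real Schur form: eigenvalues with Re > 0 sorted to the top.
T, U, k = schur(A, sort='rhp')
Aplus = T[:k, :k]                 # unstable block A+
Gplus = (U.T @ G @ U)[:k, :k]     # corresponding block of the transformed G

# Solve A+ X + X A+^T = G+ for X = S+^{-1} (assumed form of (3.5)).
X = solve_continuous_lyapunov(Aplus, Gplus)

# Assemble S = U diag(X^{-1}, 0) U^T (assumed form of (3.3)/(3.6)).
S = np.zeros_like(A)
S[:k, :k] = np.linalg.inv(X)
S = U @ S @ U.T
```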
As follows from (3.1) and (3.3), the matrix S+ satisfies the following ARE: We assume that ARE (3.4) has a stabilizing solution S+ and that the matrix S+ is invertible. In this case, the matrix X = S+^{-1} is uniquely determined by the Lyapunov equation (3.5). Thus, the desired solution of ARE (2.3) has the form where the matrix X is determined by the Lyapunov equation (3.5). Note that if the matrix A has no eigenvalues in C+, the stabilizing solution of ARE (2.3) is S = 0. In this case, the matrices G̃(s) and G̃^{-1}(s), which appear in (2.7), coincide with G(s) and G^{-1}(s) determined by expressions (1.8) and (1.9). Following [1, 11], it is possible to modify the problem of J-factorization of the matrix (1.3) as follows. After multiplying the matrix Φ(s) on the left by the matrix Ψ*(s) and on the right by Ψ(s) (the matrices Ψ(s) and Ψ^{-1}(s) have no poles in C+), we obtain

Case R = 0.
We note that the zeros and poles of the matrix Ψ(s) should not coincide with the zeros and poles of the matrix Φ(s); that is, in the process of evaluating the matrix Φ1(s), cancellation of zeros and poles must not occur.
After the matrix Φ1(s) has been factorized (the matrix G1(s) is found), the desired solution (the matrix G(s)) is determined by the following relation: It is expedient to select the matrix Ψ(s) in such a manner that the structure of the matrix Φ1(s) is similar to that of the matrix (1.3), but Φ1(∞) is invertible. Guided by these considerations, we select as Ψ(s) the following matrix: where the constant α > 0 does not belong to the spectrum of the matrices A and −A. This matrix satisfies the above-noted condition that Ψ(s) and Ψ^{-1}(s) have no poles in C+. Besides, the poles of Φ(s) cannot cancel with the zeros of Ψ(s), Ψ*(s).
Further, to simplify the calculations, we take α = 1. We note that this restriction (±1 does not belong to the spectrum of the matrix A) is not essential. Indeed, by substituting αs for s (α > 0), it is possible to transform the matrix Φ(s) so that it has no poles equal to ±1 [1] (see Example 4.3). Taking into consideration that, for appropriate sizes of the identity matrices appearing in the left- and right-hand sides of (4.4), and also, we have, for Thus, if R1^{-1} exists and C^T B = B^T C, the above-described algorithm of J-factorization can be used to determine the matrix G1(s).
So, according to (2.7), the matrices P and S are stabilizing solutions of the following AREs: We now pass to the determination of the matrix G(s), which is defined by (4.2). By assumption, the numbers ±1 do not belong to the spectrum of the matrix A. Taking into consideration the above-mentioned structure of the spectrum of the matrix Acc (which differs from Ã only by transposition), we can state that −1 does not belong to the spectrum of the matrix Ã. This guarantees that the matrix I + Ã is invertible and allows us to rewrite the matrix G1(s) as follows: Using relations (4.4), (4.5), we obtain the following expression for G1(s): We will show that I − K^T K1 = 0, and therefore, according to (4.1), (4.2), where Indeed, according to (4.1), we have However, the matrices G1*(−1) and R1 are invertible; therefore I − K^T K1 = 0, and the outcome of the J-factorization of the matrix (1.3) for R = 0 is determined by expression (4.12). Note that the matrices G(s) and G^{-1}(s) = (Is + I)G1^{-1}(s) have no poles in C+ because the matrices G1(s) and G1^{-1}(s) have no poles in C+.
Thus, as shown, the problem of J-factorization of the matrix (1.3) in which R = 0 can be reduced to an equivalent problem with an invertible matrix R. A similar reduction of the initial problem can be successfully used in the general case when the matrix R is singular but not equal to zero (in [1, 11] such a problem was explicitly considered for Q = 0).

Example 4.1.
The matrix Φ(s) has zeros (s = 0) and poles (s = ±i) on the imaginary axis. The Hamiltonian matrix corresponding to ARE (4.8) therefore has eigenvalues on the imaginary axis. According to (4.6), we have R1 = 1. Using the algorithms of [12, 13, 14] for the solution of ARE (4.8), we obtain the following matrices: Using these matrices in the procedure ss2tf.m of the Matlab package, we get the following expression for G(s): Obviously, the zeros and poles of G(s) do not lie in C+.
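The final conversion from a state-space realization to the transfer function G(s), done in the text with Matlab's ss2tf.m, has a direct counterpart in SciPy. The matrices (A, B, C, D) below are hypothetical stand-ins, not the matrices of Example 4.1.

```python
import numpy as np
from scipy.signal import ss2tf

# Hypothetical state-space data standing in for a realization of G(s);
# scipy.signal.ss2tf is the counterpart of Matlab's ss2tf.m.
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[1.0]])

# Coefficients of G(s) = C (sI - A)^{-1} B + D, highest power first:
# here G(s) = (s^2 + 5s + 5) / (s^2 + 3s + 2).
num, den = ss2tf(A, B, C, D)
```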
Example 4.2. Continuing Example 4.1, we increase the multiplicity of the zeros and poles located on the imaginary axis:

(4.18)
In this case, having executed the procedures mentioned in Example 4.1, we find that the matrix P has all zero elements except P(4, 4) = 4; the matrix S looks as follows: As in Example 4.1, using these matrices in the procedure ss2tf.m of the Matlab package, we get the following expression for G(s): Obviously, the zeros and poles of G(s) do not lie in C+.
Example 4.3 [1]. Let in (1.3) A = −I, B = I, C = 0, R = 0, and let the matrix Q be invertible. In this case, the matrix Φ(s) has poles equal to ±1, which would be cancelled by the zeros of the matrices Ψ(s), Ψ*(s) in the process of evaluating the matrix Φ1(s) according to (4.6).
We illustrate in this example that, as mentioned above, the passage to the variable s̃ = αs removes the complications connected with coinciding zeros of the matrices Ψ(s), Ψ*(s) and poles of the matrix Φ(s). Let α = 0.5, that is, s̃ = 0.5s. In this case,

Appendices
A. Construction of the stabilizing solution of the ARE. Following [13, 14], we briefly consider the algorithm for constructing the stabilizing solution of ARE (A.1). The solution sought below has the following property: the matrix F − SG has no eigenvalues in C+. This algorithm can be interpreted as a generalization of the sign function method [9] to the construction of the stabilizing solution of an ARE whose Hamiltonian matrix has eigenvalues on the imaginary axis. The algorithm includes a procedure for excluding stable unobservable modes, which permits reduction of the initial problem. So, we assume that there is an orthogonal matrix Z which transforms the matrices F, Q to the following form (for the algorithm of construction of the matrix Z, see Appendix B): In (A.2), the sizes of the blocks F1 and Q1 coincide, and the matrix f− has no eigenvalues in C+ (Re(λ(f−)) ≤ 0). The entries of the matrix F denoted by * are not important. That is, the matrix f− determines the stable unobservable modes. We transform ARE (A.1) to the following form: The sizes of the blocks g1, g2, g3 correspond to the partitioning (A.2). We seek the solution of ARE (A.1) in the form (A.4), where the size of the block S1 coincides with the size of the matrix F1. As follows from (A.2), (A.3), and (A.4), the matrix S1 satisfies ARE (A.5), which (by assumption) has a symmetric solution. This solution ensures that the matrix F1 − g3 S1 has no eigenvalues in C+. The Hamiltonian matrix corresponding to ARE (A.5) has the form Let the matrix T transform the matrix (A.6) to the block-diagonal form where Re(λ(A)) ≠ 0, λ(B) = 0, and Re(λ(C)) = 0, λ(C) ≠ 0. The sizes of the matrices B and C are 2nb × 2nb and 2nc × 2nc, respectively. Let ±iω1, ..., ±iωm be the distinct eigenvalues of the matrix C.
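The matrix sign function underlying the method of [9] (and the quantity h1 = I + sign(A) used later in this appendix) can be computed by the classical Newton iteration. A minimal sketch follows, with a hypothetical matrix having no eigenvalues on the imaginary axis, as the iteration requires.

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Matrix sign function via the Newton iteration Z <- (Z + Z^{-1})/2.

    Converges when A has no eigenvalues on the imaginary axis."""
    Z = A.copy()
    for _ in range(max_iter):
        Z_next = 0.5 * (Z + np.linalg.inv(Z))
        if np.linalg.norm(Z_next - Z, 1) <= tol * np.linalg.norm(Z_next, 1):
            return Z_next
        Z = Z_next
    return Z

# Hypothetical example: eigenvalues 2 and -3 map to +1 and -1.
A = np.array([[2.0, 1.0], [0.0, -3.0]])
S = matrix_sign(A)
assert np.allclose(S @ S, np.eye(2))  # sign(A)^2 = I
```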
Let f(λ) denote the polynomial whose degree is equal to 2m and whose zeros coincide with the above-mentioned eigenvalues. It is supposed that there exist integers kb, kc ≥ 1 such that the ranks of the matrices B^kb and (f(C))^kc are equal to nb and nc, respectively.
Under these suppositions, the unknown solution of ARE (A.5) is determined by the following relation: where h1 = I + sign(A), h2 = B^kb, and h3 = (f(C))^kc. Finally, according to (A.4), we find the solution of ARE (A.1).
B. Elimination of unobserved modes. To construct the matrix Z appearing in (A.2), it is possible to use orthogonal transformations which bring the system matrices to one or another canonical form (see, e.g., [17]). So, let the matrices F, Q ∈ R^{n×n} be given, and let the rank of the matrix Q be equal to r (0 < r ≤ n). We represent the matrix Q in the form where Λ ∈ R^{r×r} is a diagonal matrix and c ∈ R^{r×n}. It is supposed that the pair of matrices (F, c) is not completely observable, that is, the observability matrix has rank p < n. Using an algorithm of orthogonal transformations (the procedure obsvf.m of the Matlab package can be used), we transform the pair (F, c) to the canonical observability form; namely, we construct an orthogonal matrix U such that By an orthogonal transformation with a matrix V, we transform the square matrix f11 (which determines the unobserved modes) to the block triangular form As the matrix Z, it is possible to take the following orthogonal matrix: (the size of the identity block I in the matrix D coincides with the size of the matrix f22). Indeed, the lower diagonal block of the matrix ZQZ^T can be taken as the matrix Q1 ∈ R^{(n−q)×(n−q)}, which appears in (A.2).
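The observability test that motivates the staircase (obsvf.m-type) transformation can be sketched directly from the rank of the observability matrix. The pair (F, c) below is a hypothetical example with one unobservable mode.

```python
import numpy as np

def obsv_matrix(F, c):
    """Observability matrix [c; cF; ...; cF^{n-1}] of the pair (F, c)."""
    rows = [c]
    for _ in range(F.shape[0] - 1):
        rows.append(rows[-1] @ F)
    return np.vstack(rows)

# Hypothetical pair: the mode with eigenvalue -2 is unobservable,
# so the observability matrix has rank p = 1 < n = 2.
F = np.array([[-1.0, 0.0], [0.0, -2.0]])
c = np.array([[1.0, 0.0]])
p = np.linalg.matrix_rank(obsv_matrix(F, c))
```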
