An Improved Iterative Method for Solving the Discrete Algebraic Riccati Equation

The discrete algebraic Riccati equation has wide applications, especially in networked systems and optimal control systems. In this paper, based on the damped Newton method, two iterative algorithms with a stepsize parameter are proposed to solve the discrete algebraic Riccati equation, one of which is an extension of Algorithm 4.1 in Dai and Bai (2011). A numerical example demonstrates the convergence behavior of the presented algorithms.


Introduction and Preliminaries
The discrete algebraic Riccati equation plays an important part in engineering, such as optimal control systems [1], modified filtering [2,3], and networked systems [4][5][6][7]. Consider the following discrete-time linear system:

x(k + 1) = A x(k) + B u(k),  (1)

where x(k) ∈ R^n is the state variable, u(k) ∈ R^r is the input variable, B ∈ R^{n×r} is the input matrix, and A ∈ R^{n×n} is the system matrix, which is always assumed to be invertible [8]. The optimal state feedback controller of (1) is

u(k) = −(G + B^T P B)^{−1} B^T P A x(k),  (2)

which minimizes the quadratic performance index of (1) and is closely related to the discrete algebraic Riccati equation (DARE)

P = A^T P A − A^T P B (G + B^T P B)^{−1} B^T P A + Q,  (3)

where Q ∈ R^{n×n} is positive semidefinite, G ∈ R^{r×r} is positive definite, and P ∈ R^{n×n} is the positive definite solution of the DARE (3). Let R = B G^{−1} B^T ≥ 0. According to the matrix identity

(P^{−1} + R)^{−1} = P − P B (G + B^T P B)^{−1} B^T P,  (4)

equation (3) can be transformed into

P = A^T (P^{−1} + R)^{−1} A + Q.  (5)

Due to the wide applications of the DARE, many works have discussed it. Various bounds and solutions of the DARE have been provided, such as upper and lower solution bounds [9][10][11][12][13][14], bounds on the sum and product of eigenvalues [15,16], the determinant of the solution [17], and the existence of the solution [18][19][20][21]. However, in an optimal control system, we often need to compute the solution of the DARE in order to find the optimal state feedback controller that minimizes the quadratic performance index. Solving the DARE is very difficult, especially when the dimensions of the coefficient matrices are high, so many researchers have proposed iterative methods for this equation. Komaroff presented a fixed-point iterative algorithm that requires two matrix inversions at each step [22]. By Newton's method, Guo derived the maximal symmetric solution of the DARE in [23]. Structure-preserving doubling algorithms are discussed in [24][25][26][27].
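The transformation from (3) to (5) rests on the Sherman–Morrison–Woodbury formula with R = B G^{−1} B^T. A minimal numerical check of identity (4), using arbitrary illustrative matrices (all values below are hypothetical), is:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 2
B = rng.standard_normal((n, r))
M = rng.standard_normal((r, r))
G = M @ M.T + np.eye(r)            # positive definite weight G
S = rng.standard_normal((n, n))
P = S @ S.T + np.eye(n)            # arbitrary positive definite P
R = B @ np.linalg.inv(G) @ B.T     # R = B G^{-1} B^T >= 0

# Woodbury identity (4): (P^{-1} + R)^{-1} = P - P B (G + B^T P B)^{-1} B^T P
lhs = np.linalg.inv(np.linalg.inv(P) + R)
rhs = P - P @ B @ np.linalg.inv(G + B.T @ P @ B) @ B.T @ P
assert np.allclose(lhs, rhs)
```

The identity replaces the inner inverse of (3) by (P^{−1} + R)^{−1}, which is what makes the fixed-point form (5) convenient for iteration.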
The Schur method has been adopted to solve algebraic Riccati equations [28]. Recently, Dai and Bai proposed an iterative algorithm that partially avoids computing matrix inversions by making use of the Schulz iteration [29].
In Section 2, we propose two iterative algorithms with a stepsize parameter to solve the DARE by the damped Newton method. One of the iterative algorithms is an extension of Algorithm 4.1 in [29]. A numerical example is given in Section 3 to demonstrate the convergence behavior of our algorithms.
We first introduce some notational conventions. R denotes the real number field, and R^{n×m} denotes the set of n × m real matrices. For X = (x_{ij}) ∈ R^{n×n}, let X^T, X^{−1}, ‖X‖, and λ_min(X) denote the transpose, inverse, spectral norm, and minimal eigenvalue of the matrix X, respectively. The inequality X > (≥) 0 means X is a symmetric positive (semi-)definite matrix, and the inequality X > (≥) Y means X − Y is a symmetric positive (semi-)definite matrix. The identity matrix with appropriate dimensions is represented by I.
Lemma 1 (see [30]). If A, B ∈ R^{n×n} are symmetric positive definite matrices, then

Lemma 2 (see [31]). Let C and P be Hermitian matrices of the same order and let P > 0. Then,

Lemma 3 (see [32]). Let S and T be symmetric positive definite matrices. Then,

Improved Iterative Algorithms for Solving the DARE
To find the positive definite solution of the DARE (5), Dai and Bai in [29] proposed the following algorithm, which partially avoids computing matrix inversions.
In this section, we propose two iterative algorithms to solve the DARE (5), motivated by the damped Newton method [33] and the methods in [34,35]. Let us recall the damped Newton method for finding a root of F(X) = 0:

X_{k+1} = X_k − t F′(X_k)^{−1} F(X_k),  (10)

where t > 0 is a stepsize parameter. If the initial matrix is near the solution of the problem, the unit stepsize t = 1 is acceptable in the local Newton method. However, t = 1 is not suitable if the initial matrix is far from the solution of the problem [33].
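The damped Newton update with stepsize t can be illustrated with a scalar example (the function f and the starting point below are hypothetical, not from the paper):

```python
def damped_newton(f, df, x0, t=1.0, tol=1e-12, max_iter=200):
    """Damped Newton iteration x_{k+1} = x_k - t * f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x = x - t * f(x) / df(x)
        if abs(f(x)) < tol:
            break
    return x

# Hypothetical example: root of f(x) = x^3 - 2 from a distant start x0 = 10.
root_full = damped_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=10.0, t=1.0)
root_damp = damped_newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=10.0, t=0.5)
```

With t = 1 this is the classical Newton method; with 0 < t < 1 each step is shortened, which trades local quadratic convergence for more robustness far from the root.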
The DARE (5) can be rewritten as F(P) = 0, where F(P) = A^T (P^{−1} + R)^{−1} A + Q − P. Then, finding the root of F(P) = 0 is equivalent to solving the DARE (5), which we do by constructing an iterative scheme. According to (10), we present the following iterative algorithms for the DARE (5).
Step 2: compute the next iterates P_{k+1} and Y_{k+1}. For Algorithms 2 and 3, we have the following results.
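Since the displayed algorithm steps are not reproduced here, the following sketch shows one plausible reading of such a damped scheme for (5): P_k is updated by a damped step toward A^T Y_k A + Q, while Y_k tracks (P_k^{−1} + R)^{−1} by a Schulz step, so the inversion of (P^{−1} + R) is avoided; the stopping test ‖I − Y_k(P_k^{−1} + R)‖ < ε is the one appearing in Theorem 2. All matrix values below are hypothetical.

```python
import numpy as np

def dare_damped(A, R, Q, t=1.0, eps=1e-10, max_iter=2000):
    """Damped fixed-point sketch for P = A^T (P^{-1} + R)^{-1} A + Q.

    Y approximates (P^{-1} + R)^{-1} via one Schulz step per iteration,
    avoiding the explicit inversion of (P^{-1} + R).
    """
    n = A.shape[0]
    P = Q.copy()                                 # P_0 = Q
    Y = np.linalg.inv(np.linalg.inv(Q) + R)      # Y_0 = (Q^{-1} + R)^{-1}
    for _ in range(max_iter):
        P = P + t * (A.T @ Y @ A + Q - P)        # damped update of P_k
        M = np.linalg.inv(P) + R
        Y = Y @ (2.0 * np.eye(n) - M @ Y)        # Schulz step toward M^{-1}
        if (np.linalg.norm(np.eye(n) - Y @ M) < eps
                and np.linalg.norm(A.T @ Y @ A + Q - P) < eps):
            break
    return P

# Hypothetical data: a small stable system.
A = np.array([[0.3, 0.1], [0.0, 0.2]])
B = np.array([[1.0], [0.0]])
G = np.array([[1.0]])
Q = np.eye(2)
R = B @ np.linalg.inv(G) @ B.T
P = dare_damped(A, R, Q, t=1.0)
residual = np.linalg.norm(A.T @ np.linalg.inv(np.linalg.inv(P) + R) @ A + Q - P)
```

At exit, P satisfies (5) up to the requested tolerance, and P ≥ Q, as the monotonicity results below predict.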

Theorem 1.
Let P̄ be the positive definite solution of the DARE (5) and Q > 0, and let the iterative sequences P_k and Y_k be generated by Algorithm 2 with t ∈ (0, 1]. Then, P_k is monotone increasing and converges to P̄, and Y_k is monotone increasing and converges to (P̄^{−1} + R)^{−1}.

Proof. We first prove that P_k and Y_k are monotone increasing by induction. Since P̄ is the positive definite solution of the DARE (5), we have P̄ = A^T (P̄^{−1} + R)^{−1} A + Q ≥ Q; thus, P̄ ≥ Q.
(i) By Lemma 1, we obtain the base-case inequality, and from (17) we also obtain P_1 ≥ Q = P_0. By Lemma 2 and (20), and then by (20) and Lemma 1, we obtain the corresponding bounds; thereby, by (21), the base case of the induction holds.

(ii) Assume the claim holds for step k. From (26), (28), and (29), the claim follows for step k + 1. Thus, the induction is completed. Moreover, as P_k and Y_k are monotone increasing and bounded, lim_{k→∞} P_k exists and solves (5). □

Theorem 2. Let P̄ be the positive definite solution of the DARE (5). After k steps of iteration of Algorithm 2, we have ‖I − Y_k(P_k^{−1} + R)‖ < ε; then, the corresponding error bound holds.

Proof. According to (15), and then by Algorithm 2, Lemma 3, and (34), we obtain the bound, because P_k ≤ P̄. As the proof method is similar to Theorem 1, we state the monotonicity and convergence of Algorithm 3 without proof. □

Theorem 3. Let P̄ be the positive definite solution of the DARE (5) and Q > 0, and let the iterative sequences P_k and Y_k be generated by Algorithm 3 with t ∈ (0, 1], starting from Y_0 = (Q^{−1} + R)^{−1} and P_0 = Q. Then, P_k is monotone increasing and converges to P̄, and Y_k is monotone increasing and converges to (P̄^{−1} + R)^{−1}.
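The monotonicity claimed in Theorems 1 and 3 can be observed numerically. The sketch below (hypothetical matrices; exact inverses are used for simplicity, and t = 1) starts from P_0 = Q and checks that P_{k+1} − P_k stays positive semidefinite:

```python
import numpy as np

A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[1.0], [0.0]])
Q = np.eye(2)
R = B @ B.T                                   # G = I, so R = B B^T
P = Q.copy()                                  # P_0 = Q
monotone = True
for _ in range(30):
    # one undamped fixed-point step of (5): P_{k+1} = A^T (P_k^{-1}+R)^{-1} A + Q
    P_next = A.T @ np.linalg.inv(np.linalg.inv(P) + R) @ A + Q
    # P_{k+1} - P_k should be positive semidefinite (monotone increasing)
    if np.linalg.eigvalsh(P_next - P).min() < -1e-12:
        monotone = False
    P = P_next
```

After the loop, `monotone` remains true and P has essentially reached the fixed point, consistent with the bounded monotone convergence argument in the proof.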
Remark 1. From the numerical examples, we find that Algorithm 2 requires fewer iteration steps than Algorithm 3 and converges faster. Therefore, in the following example, we only discuss the superiority and effectiveness of Algorithm 2.

Numerical Examples
In this section, we present the following numerical example to show the effectiveness of our results. We also discuss the performance of Algorithm 2 with different values of t. The whole process is carried out in Matlab 7.1, and the precision is 10^{−8}.
In [29], Dai and Bai choose the starting matrix Y_0 = (Q^{−1} + R)^{−1}; after 17 steps of iteration, the required precision is reached. For Algorithm 2, we choose P_0 = Q and Y_0 = (Q^{−1} + R)^{−1}, and Table 1 gives the steps of iteration and the residual for different values of the parameter t when the process is stopped at the required precision. When t is near 1, we find that the steps of iteration are fewer than in [29]. In particular, when t = 1.2, Algorithm 2 needs only 10 steps to converge to the iterative solution, with residual ‖A^T(P^{−1} + R)^{−1}A + Q − P‖ = 5.6438e−009, and Figure 1 shows that Algorithm 2 converges faster than Algorithm 1. Moreover, from Table 1, we see that Algorithm 2 is more efficient when t > 1. Although in this paper we only prove the convergence of Algorithm 2 for t ∈ (0, 1], Algorithm 2 works well in practical computation when t > 1.
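The influence of the stepsize can be explored with a small experiment in the same spirit (the matrices below are hypothetical and are not the example of [29]): run the damped fixed-point iteration for several values of t and record the number of steps needed to reach the 10^{−8} precision.

```python
import numpy as np

def dare_steps(A, R, Q, t, tol=1e-8, max_iter=5000):
    """Return (P, steps) for the damped iteration P <- P + t (A^T Y A + Q - P)."""
    P = Q.copy()
    for k in range(1, max_iter + 1):
        Y = np.linalg.inv(np.linalg.inv(P) + R)   # exact inverse for simplicity
        F = A.T @ Y @ A + Q - P                   # residual of (5)
        if np.linalg.norm(F) < tol:
            return P, k
        P = P + t * F
    return P, max_iter

# Hypothetical data.
A = np.array([[0.6, 0.2], [0.1, 0.5]])
B = np.array([[1.0], [0.5]])
G = np.array([[1.0]])
Q = np.eye(2)
R = B @ np.linalg.inv(G) @ B.T
counts = {t: dare_steps(A, R, Q, t)[1] for t in (0.5, 0.8, 1.0, 1.2)}
```

Printing `counts` shows how the iteration count varies with t on this example, including a stepsize above 1, which mirrors the empirical observation that t > 1 can still converge in practice.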

Data Availability
All data generated or analyzed during this study are included within this article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.