An Augmented Lagrangian Method for the Optimal H∞ Model Order Reduction Problem

This paper treats the computation of the optimal H∞ model order reduction (MOR) problem for linear time-invariant (LTI) systems. The optimal solution of the MOR problem of an LTI system can be obtained by solving an LMI feasibility problem coupled with a rank inequality constraint, which makes the solution much harder to obtain. In this paper, we show that the rank inequality constraint can be reformulated as an equality constraint on a linear rank function, and we discuss the properties of this function. We present an iterative algorithm based on the augmented Lagrangian method, obtained by replacing the rank inequality with the linear rank function equality. A convergence analysis of the algorithm is given, in contrast to the available heuristic methods. Numerical experiments on MOR problems for continuous LTI systems illustrate the practicality of our method.


Introduction
The model order reduction (MOR) problem has received considerable attention since it was first posed. In physical and engineering systems, mathematical modeling often results in high-order controllers, whose simulation and physical implementation are difficult to realize precisely because of the high order. The purpose of MOR is to obtain a stable lower-order system that approximates the high-order one according to some given criterion. Many techniques and results on MOR problems have been reported, including the aggregation method [1], the balanced truncation approach [2], and optimal Hankel norm approximation methods [3,4]; it would be difficult to list all the references in this field. The H∞ MOR problem has been studied by many researchers, such as [4][5][6][7][8][9][10][11][12][13] and the references therein. It mainly consists of finding a lower-order model Ĝ of the original system G such that the H∞ error ‖G − Ĝ‖∞ is small. It has attracted considerable interest because the H∞ norm of the difference between two systems is one of the most meaningful measures of approximation. A linear matrix inequality (LMI) is a well-known convex feasibility problem that can be solved by semidefinite programming (SDP) interior-point algorithms [14]. LMIs are also a powerful tool in control and systems theory and are widely used in output feedback stabilization, reduced-order H∞ synthesis, synthesis with constant scaling [15], and so on. In [6], an LMI approach to the H∞ MOR problem was followed, utilizing the Bounded Real Lemma [16], and an iterative two-step algorithm was proposed, but with no guaranteed convergence to the global optimum. More recently, [5] studied the H∞ MOR problem for the deterministic case; necessary and sufficient conditions were derived for γ-suboptimal MOR problems in terms of LMIs, and numerical techniques based on alternating projections were proposed to solve the MOR problems. Reference [7] extends these results to the stochastic MOR problem.
The optimal index γ of the H∞ MOR problem is computed by minimizing the model given in [5], and an algorithm to solve the related LMI feasibility problem coupled with a rank constraint was also proposed in that paper. The rank-constrained LMI problem can be considered a generalization of the LMI problem, but the lack of convexity makes it much harder to solve. The available algorithms for rank-constrained LMI problems are largely heuristic in nature, and most of them are not convergent. Existing methods for LMIs coupled with rank constraints include those based on linearization [17], alternating projections [18,19], trace minimization methods that solve the problem through a related convex problem [20], augmented Lagrangian function (ALF) methods [21], sequential semidefinite programming [22], and so on. Aside from [21,22], these methods do not come with a proof of convergence, and the challenge remains to find numerical schemes with verifiable local/global convergence. More recently, nuclear norm minimization has emerged as a good relaxation approximation for this problem [23].
The contribution of this paper is as follows: an algorithm is presented using the augmented Lagrangian method, by means of an equivalent condition to the rank constraint. The ALF method has advantages over the penalty function (PF) method: for the PF method, the penalty term tends to become ill-conditioned as the penalty parameter decreases, which makes the solution more difficult to find. The ALF method is one modification that overcomes this problem, with better robustness and convergence rate, so we propose an ALF method for Problem 10. Convergence of the algorithm is also investigated, and numerical experiments show that our algorithm is effective.
The rest of this paper is structured as follows. Section 2 introduces the H∞ MOR problem and the feasible set of convex LMI constraints coupled with a rank constraint. Section 3 presents an equivalent condition for the rank constraint and reformulates the MOR problem as an NLP problem using a cost function that combines the H∞ gain index γ and a penalty term accounting for the rank constraint. An augmented Lagrangian method for solving the NLP problem is proposed in Section 4, together with convergence results. Section 5 gives some computational details of the algorithm. Numerical experiments on continuous LTI MOR problems are reported in Section 6. Section 7 concludes the paper.
The standard notation used throughout this paper is as follows. S^n denotes the set of all symmetric matrices in R^{n×n}, and I_n denotes the n×n identity matrix. For a matrix A, its transpose, rank, inverse, and trace are denoted by A^T, rank(A), A^{−1}, and trace(A), respectively. If A is a symmetric matrix, A > (≥) 0 means that A is positive (semi)definite. The symbols ∇ and ∇² denote the gradient and Hessian matrix of a function, σ(A) denotes the maximum singular value of a matrix, ‖·‖∞ denotes the H∞ norm of a rational transfer function, and ⟨·,·⟩ denotes the inner product of two matrices.

Problem Formulation
Consider an asymptotically stable Nth-order linear time-invariant continuous system G with a state space representation (A, B, C, D), where A ∈ R^{N×N}, B ∈ R^{N×m}, C ∈ R^{p×N}, and D ∈ R^{p×m}. The optimal H∞ MOR problem is to find a stable nth-order system Ĝ (n ≪ N) with a state space representation (Â, B̂, Ĉ, D̂), where Â ∈ R^{n×n}, B̂ ∈ R^{n×m}, Ĉ ∈ R^{p×n}, and D̂ ∈ R^{p×m}, such that the H∞ gain index ‖G − Ĝ‖∞ with respect to Ĝ is minimized. The balanced truncation method [2] and the optimal Hankel norm method [3] are the most popular of the available MOR methods. LMI-based methods reformulate the MOR problem as LMIs by means of the Bounded Real Lemma: the optimal MOR solution can be obtained by finding a feasible point of an LMI feasible set coupled with a rank constraint. This approach was proposed in [6] in terms of the time-domain response of the error system; [5] extends the result to the H∞ MOR problem in the state space, and [7] extends the results to the stochastic case. More recently, based on the LMI method, [24] presents a necessary and sufficient condition for positivity-preserving H∞ MOR of LTI positive systems.
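As an illustration of the objective being minimized, the H∞ norm of the error system G − Ĝ can be estimated numerically by gridding the frequency axis and taking the largest maximum singular value of the frequency response. The sketch below is a minimal NumPy illustration (the function names, grid bounds, and example system are assumptions for this sketch, not taken from the paper): it evaluates the maximum singular value of C(jωI − A)⁻¹B + D over a log-spaced grid and also forms the state space of the error system.

```python
import numpy as np

def hinf_norm_estimate(A, B, C, D, wmin=1e-3, wmax=1e3, n_grid=2000):
    """Grid-based lower bound on ||G||_inf for G(s) = C (sI - A)^{-1} B + D."""
    N = A.shape[0]
    est = np.linalg.norm(D, 2)          # gain as |w| -> infinity
    for w in np.logspace(np.log10(wmin), np.log10(wmax), n_grid):
        Gjw = C @ np.linalg.solve(1j * w * np.eye(N) - A, B) + D
        est = max(est, np.linalg.svd(Gjw, compute_uv=False)[0])
    return est

def error_system(A, B, C, D, Ar, Br, Cr, Dr):
    """State space of G - Ghat: block-diagonal dynamics, differenced output."""
    Ae = np.block([[A, np.zeros((A.shape[0], Ar.shape[0]))],
                   [np.zeros((Ar.shape[0], A.shape[0])), Ar]])
    Be = np.vstack([B, Br])
    Ce = np.hstack([C, -Cr])
    De = D - Dr
    return Ae, Be, Ce, De
```

For example, for the first-order system G(s) = 1/(s + 1) the estimate approaches the true H∞ norm of 1, attained at ω = 0; a finer grid or a bisection method would be needed for a guaranteed value.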
Theorem 1 (see [5]). There exists a stable system Ĝ such that the H∞ MOR problem of the Nth-order continuous-time system in the form of (1) is solvable for a given γ > 0 if and only if there exist positive definite matrices P > 0, Q > 0 satisfying the associated LMIs. The optimal H∞ error and the corresponding feasible solution for P, Q can be obtained by solving the following optimization Problem 2, and all γ-suboptimal nth-order models that correspond to a feasible matrix pair (P, Q) are given by (6)-(8): they are parametrized by an arbitrary contraction matrix (norm less than 1) of appropriate dimension, an arbitrary positive definite matrix in R^{n×n}, and an arbitrary matrix factor satisfying the associated negative-definiteness condition; see Theorem 1 of [5] for the explicit formulas.

Assumption 3. Throughout the paper, we assume that Problem 2 has at least one feasible solution.
Theorem 4 appears as a lemma in [12]; we give a detailed, different proof here.
Proof. Given a symmetric positive semidefinite matrix X ≥ 0 with rank(X) ≤ n and γ > 0, there exists an orthogonal matrix Q that diagonalizes X, with the eigenvalues ordered λ_1 ≥ λ_2 ≥ ⋅⋅⋅ ≥ λ_N ≥ 0. Since the rank of X is less than or equal to n, at most n eigenvalues are nonzero. The columns of Q are the eigenvectors corresponding to these eigenvalues; take V = (q_{n+1}, …, q_N) ∈ R^{N×(N−n)}, so that the columns of V are the eigenvectors corresponding to the N − n smallest eigenvalues of X, with V^T V = I_{N−n}. This ends the necessity part.
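The construction in this proof can be checked numerically: for a PSD matrix of rank at most n, the eigenvectors of the N − n smallest eigenvalues span its null space, so V^T X V vanishes. A minimal sketch (the names X, V, Q and the sizes N = 6, n = 2 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 6, 2
F = rng.standard_normal((N, n))
X = F @ F.T                       # symmetric PSD with rank <= n

# np.linalg.eigh returns eigenvalues in ascending order
eigvals, Q = np.linalg.eigh(X)
V = Q[:, :N - n]                  # eigenvectors of the N-n smallest eigenvalues

# V is a submatrix of an orthogonal matrix, and V^T X V = 0
assert np.allclose(V.T @ V, np.eye(N - n))
assert np.allclose(V.T @ X @ V, 0.0)
```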
Proof. From Theorem 1, the matrices P, Q are functions of the variable γ. According to the definition of the function h(γ) and of the matrix inner product, it is obvious that h(γ) is a linear function of P, Q for a given γ and a fixed V, but keep in mind that P, Q depend on γ, and V varies with the matrix it is computed from; V can be obtained from the singular value decomposition of that matrix.
Proof. From Theorem 4, V is a submatrix of an orthogonal transformation matrix, so V^T X V has eigenvalues among those of X, and the value of h(γ; V) is a sum of certain eigenvalues of X. From the definition of eigenvalues and the Ostrowski theorem, the eigenvalues of X are continuous functions of the entries of X (while the eigenvector matrix V is not continuous in general). The conclusion then follows from the continuity of h(γ) and Proposition 6.
Write P = R², where R is also a symmetric positive definite matrix because P is symmetric positive definite; then we have
rank(γ²I − PQ) = rank{R⁻¹(γ²I − PQ)R} = rank(γ²I − RQR), (20)
and since γ²I − RQR is symmetric, there exists an orthogonal matrix U diagonalizing it. Hence rank(γ²I − PQ) ≤ n implies that at least N − n of the eigenvalues are zero; that is, γ² = λ_i(RQR), i = 1, 2, …, N − n. The property h(0) > 0 follows from the positive definiteness of the matrices P, Q. Propositions 5-8 show that the rank constraint in (15) can be replaced by the equality constraint h(γ; V) = 0.
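The resulting rank function can be implemented directly: taking h as the sum of the N − n smallest eigenvalues of a symmetric PSD matrix gives h = 0 exactly when the rank is at most n, and h > 0 otherwise (the analogue of h(0) > 0). A small sketch, with illustrative matrices not drawn from Problem 2:

```python
import numpy as np

def h_rank(X, n):
    """Sum of the N-n smallest eigenvalues of symmetric PSD X.
    Zero iff rank(X) <= n; equals trace(V^T X V) for V as in Theorem 4."""
    N = X.shape[0]
    eigvals = np.linalg.eigvalsh(X)   # ascending order
    return float(np.sum(eigvals[:N - n]))

rng = np.random.default_rng(1)
F = rng.standard_normal((5, 2))
X_low = F @ F.T                       # rank 2, so h_rank(X_low, 2) = 0
X_full = X_low + np.eye(5)            # full rank, so h_rank(X_full, 2) > 0
```

This makes the replacement of the rank inequality by the scalar equality h = 0 concrete: the constraint is checked by an eigenvalue computation rather than a rank test.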

Reformulation of Problem 2
By means of the function h(γ, P, Q) and Theorem 1, Problem 2 can be reformulated as follows.
The above minimization Problem 9 can be written as the following standard optimization Problem 10, but keep in mind that P, Q depend on γ. We write x = (γ, P, Q) (omitting the matrix dimensions here) for the decision variable.
Problem 10 is an optimization problem with an equality constraint and a closed feasible set, so standard optimization techniques and algorithms can be used to solve it. Recently, [12] presented a penalty function (PF) algorithm based on the rank function given in this paper. Numerical experiments showed that that algorithm is effective, but no convergence analysis was given, and penalty function methods suffer from disadvantages such as ill-conditioning. This motivates us to present a convergent algorithm with good numerical performance for Problem 10 (and hence Problem 2) by means of the function h(γ; V).

Augmented Lagrangian Function Algorithm
4.1. ALF Algorithm. One effective convergent approach to Problem 10 with equality constraints is the augmented Lagrangian function (ALF) method. The ALF method has advantages over the PF method: for the PF method, the penalty term tends to become ill-conditioned as the penalty parameter decreases, which makes the solution more difficult to find. The ALF method is a modification that overcomes this problem and offers better robustness and convergence rate, so we propose an ALF method for Problem 10 in this paper. For more detailed information about ALF methods, refer to [25][26][27][28].
Problem 10 can be considered a standard equality-constrained nonlinear programming (NLP) problem, and the augmented Lagrangian function method is an effective method for optimization with equality constraints. We reformulate Problem 10 as the augmented Lagrangian optimization Problem 11, where λ is the multiplier and σ is the penalty parameter. Suppose the kth iterate of Problem 11 is x_k = (γ_k, P_k, Q_k). The rationale for the ALF method is that when λ_k is bounded and σ_k → ∞, the term λh(γ; V) + (σ/2)h²(γ; V) tends to infinity if h(γ; V) ≠ 0 and equals 0 if h(γ; V) = 0. The minimum of Problem 10 can thus be obtained by solving the unconstrained Problem 11 under some conditions.
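The update scheme behind Problem 11 can be illustrated on a toy equality-constrained problem: minimize Φ_σ(x, λ) = f(x) + λh(x) + (σ/2)h²(x) in x, update λ ← λ + σh(x), and grow σ moderately. The sketch below uses a small quadratic toy problem (an assumption for illustration, not the MOR problem itself), so the inner minimization has a closed-form solution; the iteration recovers the known KKT point.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + 2*x2^2  subject to  h(x) = x1 + x2 - 1 = 0.
# KKT conditions give x* = (2/3, 1/3) and multiplier lam* = -4/3.

def solve_inner(lam, sigma):
    """Closed-form minimizer of Phi(x) = f(x) + lam*h(x) + (sigma/2)*h(x)^2.
    Stationarity: [[2+s, s], [s, 4+s]] x = (s - lam) * [1, 1]^T."""
    M = np.array([[2.0 + sigma, sigma], [sigma, 4.0 + sigma]])
    rhs = (sigma - lam) * np.ones(2)
    return np.linalg.solve(M, rhs)

lam, sigma, tau = 0.0, 10.0, 1.1   # multiplier, penalty, growth factor
for _ in range(80):
    x = solve_inner(lam, sigma)
    h = x[0] + x[1] - 1.0
    lam += sigma * h               # first-order multiplier update
    sigma *= tau                   # increase the penalty moderately
```

Unlike a pure penalty method, λ absorbs the constraint force, so x converges to the constrained minimizer without σ having to grow without bound before useful accuracy is reached.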
Proposition 12. ALF algorithm is well defined.
Proof. From Assumption 3, the feasible set of Problem 11 is nonempty, and Propositions 5 and 6 show that there exists γ such that h(γ) = 0; hence the algorithm is well defined.

The Optimality Conditions for Problem 11.
We denote the Lagrangian part of Φ_σ(x, λ) by Φ(x, λ). Let x* be a local minimizer of Problem 10, and suppose there exists a Lagrangian multiplier λ* such that the first- and second-order optimality conditions for Problem 10 are satisfied [25]. Then there exists a sufficiently large σ̄ > 0 such that for any σ > σ̄, at the point (x*, λ*) we have, for any x ∈ Ω,
∇_x Φ_σ(x*, λ*) = 0 (first-order condition),
d^T ∇²_{xx} Φ_σ(x*, λ*) d > 0 for any d ≠ 0 (second-order optimality condition).
Proposition 13. From inequality (35) above, we conclude that for every σ > σ̄ there exists a local minimum of Φ_σ(x, λ) close to x* whenever λ is close to λ*.

The Convergence of ALF Algorithm
Then determine γ_0, P_0, Q_0 such that h(γ) is as close as possible to 0. The initial value γ_0 can be chosen as the square root of the minimum eigenvalue of the relevant matrix in Theorem 1, since γ_k → γ* as k → ∞.

The Inner Optimization for the Subproblem in Step 2.
For the inner optimization in the subproblem of Step 2, we use interior-point algorithms for LMI feasibility, namely SeDuMi [29] and YALMIP [30], in our numerical experiments.

5.3.1. Penalty Parameter σ_k. The parameter σ_k should not be increased too fast, since otherwise (27) and (28) become ill-conditioned; it should also not be increased too slowly, since otherwise the first-order update of the Lagrange multiplier has a poor convergence rate.
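A common way to balance these two concerns is a safeguarded update that grows σ_k only when the constraint violation has not decreased sufficiently. The rule below is a standard heuristic sketch (the factors τ and θ are illustrative assumptions, not the specific schedule used in this paper):

```python
def update_penalty(sigma, h_new, h_old, tau=10.0, theta=0.25):
    """Grow the penalty only if |h| failed to shrink by the factor theta.

    Growing sigma too fast risks ill-conditioning of the inner problem;
    never growing it makes the first-order multiplier update converge slowly.
    """
    if abs(h_new) > theta * abs(h_old):
        return tau * sigma
    return sigma
```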

5.3.2. Lagrangian Multiplier λ_0. Any prior knowledge should be exploited to select λ_0 close to λ*, but this is generally difficult; many trials were needed in our experiments.

Numerical Experiments
We provide two examples of the applicability of the above method for determining the suboptimal model errors and the corresponding reduced state space, using the benchmark model reduction examples of Wilson's example and the Boiler System. Both are continuous-time LTI H∞ model reduction problems.
Example 1. Consider reducing the fourth-order system to a second-order system (Wilson's example in [31]).
Example 2. Consider reducing the ninth-order system to a third-order system (the Boiler System in [32]).
Tables 1 and 2 list the optimal indexes obtained by our algorithm, and Figures 1 and 2 show the error and the convergence of the ALF algorithm. The computation parameters used in the ALF algorithm were a tolerance of 10⁻¹², factors 0.85 and 1.1, σ_0 = 1000, and λ_0 = 1000. Our experience shows that this algorithm works quite well on both problems compared with the available algorithms [31,32] within the framework of LMI feasibility coupled with a rank constraint. In both cases fewer than 100 iterations were needed before the termination criterion was satisfied, which shows that our algorithm has good convergence performance; the error is also smaller than that obtained by other methods, and on the Boiler System in particular our algorithm performed considerably better than that in [32]. The suboptimal errors can be obtained by our algorithm, together with the corresponding positive definite matrices P, Q. The reduced state space can then be evaluated according to (6)-(8) (Theorem 1 in [5]), and the obtained reduced system is stable.

Conclusion
We presented an ALF algorithm for the optimal H∞ MOR problem of LTI systems. First, we gave a rank function that reformulates the rank constraint of the optimal H∞ MOR problem as a linear function equality constraint; we then recast the original problem as an optimization problem with LMI constraints and a linear equality constraint, and presented an ALF algorithm for it. Compared with the available algorithms, the advantage of our algorithm is its proven convergence, in contrast to the heuristic algorithms; it also avoids the ill-conditioning of the PF algorithm during the iteration. The algorithm is applicable to both continuous and discrete systems. Numerical experiments showed that the algorithm is effective for the deterministic H∞ MOR problem. MOR is an interesting and practical topic; it is applied not only in physical and engineering systems but also in some models of biological systems [33][34][35][36].

Proposition 7 suggests that |h(γ; V)| ≤ ε can be used as the iteration termination criterion, even though the rank condition on the matrix in Theorem 1 then holds only approximately.

Proposition 8. If the matrix Z has full rank, then h(γ; V) > 0 for any γ ≥ 0 and any P, Q satisfying the LMIs; if rank(Z) ≤ N + n (n ≪ N), there must exist γ > 0 such that h(γ; V) = 0.

Proof. It is obvious that rank(Z) ≥ N, since Z contains a positive definite block P. If rank(Z) ≤ N + n, then
rank(Z) = rank(diag(P, γ²I − PQ)) = rank(P) + rank(γ²I − PQ) ≤ N + n.
Theorem 14 (local convergence of the algorithm). Suppose the feasible set {x | x = (γ, P, Q) ∈ Ω} is nonempty. For k = 1, 2, …, let x_k be a global minimum of the objective function in Problem 10 over the feasible set. If the Lagrangian multiplier sequence λ_k is bounded, 0 < σ_k < σ_{k+1} for all k, and ε_k → 0, then every convergent point of the sequence x_k is a global optimal solution of Problem 10.

Proof. By Propositions 5 and 7, f(x) and h(x) are continuous on the closed set Ω; the proof of the theorem is similar to that of Bertsekas in [25].

Remark 15. The constraint set of Problem 11 is the closure of the open set in Problem 2. According to Theorem 14 and the Weierstrass theorem, Problem 11 must have a global minimum, but whether the ALF algorithm obtains a unique global minimum point depends on the concrete problem; usually the global point is not unique for LMI problems. Moreover, since the feasible set of Problem 2 is not closed, the convergent point of Problem 11 may be only a local solution to Problem 2.

Remark 16. The convergence rate of the ALF algorithm presented in this paper depends on the optimization method used in the subproblem. If the inner algorithm is superlinearly convergent, then the ALF algorithm is superlinearly convergent.