Abstract and Applied Analysis, Volume 2013, Article ID 146137, doi:10.1155/2013/146137. Hindawi Publishing Corporation. ISSN 1687-0409 (online), 1085-3375 (print).

Research Article: Exact Asymptotic Stability Analysis and Region-of-Attraction Estimation for Nonlinear Systems

Min Wu,1 Zhengfeng Yang,1 and Wang Lin2
1 Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai 200062, China
2 College of Mathematics and Information Science, Wenzhou University, Wenzhou, Zhejiang 325035, China
Academic Editor: Fabio M. Camilli

Received 14 November 2012; Revised 11 February 2013; Accepted 20 February 2013; Published 24 February 2013

Copyright © 2013 Min Wu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We address the problem of asymptotic stability and region-of-attraction analysis of nonlinear dynamical systems. A hybrid symbolic-numeric method is presented to compute exact Lyapunov functions and exact estimates of regions of attraction of nonlinear systems efficiently. A numerical Lyapunov function and an estimate of the region of attraction can be obtained by solving a (bilinear) SOS program via a BMI solver; then modified Newton refinement and rational vector recovery techniques are applied to obtain exact Lyapunov functions and verified estimates of regions of attraction with rational coefficients. Experiments on some benchmarks are given to illustrate the efficiency of our algorithm.

1. Introduction

Lyapunov's stability theory, which is concerned with the behavior of trajectories of differential systems, plays an important role in the analysis and synthesis of nonlinear continuous systems. Lyapunov functions can be used to verify asymptotic stability, including both local and global asymptotic stability. In the literature, there has been a great deal of work on constructing Lyapunov functions. For example, [4, 5] used the linear matrix inequality (LMI) method to compute quadratic Lyapunov functions. In [8, 13], based on the sum of squares (SOS) decomposition, a method is proposed to construct high-degree numerical polynomial Lyapunov functions. Reference  proposed a new method for computing polynomial Lyapunov functions by solving semialgebraic constraints via the tool DISCOVERER . Reference  constructed Lyapunov functions beyond polynomials by using radial basis functions. In , a Gröbner-basis-based method is used to choose the parameters in Lyapunov functions in an optimal way.

Since the analysis of asymptotic stability alone is not sufficient for safety-critical systems, the analysis of the region of attraction (ROA) of an asymptotically stable equilibrium point is a topic of significant importance. The ROA is the set of initial states from which the system converges to the equilibrium point. Computing exact regions of attraction (ROAs) for nonlinear dynamical systems is very hard, if not impossible; therefore, researchers have focused on finding estimates of the actual ROAs. There are many well-established techniques for the computation of ROAs [5, 16–22]. Among them, those based on Lyapunov functions are dominant in the literature. These methods compute both a Lyapunov function as a stability certificate and sublevel sets of this Lyapunov function, which give rise to estimates of the ROA. In , the authors computed quadratic Lyapunov functions to optimize the volume of an ellipsoid contained in the ROA by using nonlinear programming. Reference  employed an LMI-based method to compute the optimal quadratic Lyapunov function for maximizing the volume of an ellipsoid contained in the ROA of odd polynomial systems. In , based on the SOS decomposition, a method is presented to search for polynomial Lyapunov functions that enlarge a provable ROA of nonlinear polynomial systems. Reference  used discretization (in time) to flow invariant sets backwards along the flow of the vector field, obtaining larger and larger estimates of the ROA. Reference  employed a quantifier elimination (QE) method, via QEPCAD, to find Lyapunov functions for estimating the ROA.

Taking advantage of the efficiency of numerical computation and the error-free property of symbolic computation, in this paper we present a hybrid symbolic-numeric algorithm to compute exact Lyapunov functions for asymptotic stability and verified estimates of the ROAs of continuous systems. The algorithm is based on the SOS relaxation  of a parametric polynomial optimization problem with LMI or bilinear matrix inequality (BMI) constraints, which can be solved directly using LMI or BMI solvers in MATLAB, such as SOSTOOLS , YALMIP , SeDuMi , and PENBMI , together with the exact SOS representation recovery techniques presented in [28, 29]. Unlike purely numerical approaches, our method can yield exact Lyapunov functions and verified estimates of ROAs, which overcomes the unsoundness in the analysis of nonlinear systems caused by numerical errors . In comparison with symbolic approaches based on quantifier elimination, our approach is more efficient and practical, because a parametric polynomial optimization problem based on SOS relaxation with fixed size can, in theory, be solved in polynomial time.

The rest of the paper is organized as follows. In Section 2, we introduce some definitions and notions about Lyapunov stability and the ROA. Section 3 is devoted to transforming the problem of computing Lyapunov functions and estimates of ROAs into a parametric program with LMI or BMI constraints. In Section 4, a symbolic-numeric approach via SOS relaxation and exact rational vector recovery is proposed to compute Lyapunov functions and estimates of ROAs, and an algorithm is described. In Section 5, experiments on some benchmarks are given to illustrate our algorithm on asymptotic stability and ROA analysis. Finally, we conclude in Section 6.

2. Lyapunov Stability and Region of Attraction

In this section, we will present the notion of Lyapunov stability and regions of attraction (ROAs) of dynamical systems.

Consider the autonomous system (1) ẋ = f(x), where f: ℝⁿ → ℝⁿ is continuous and satisfies a Lipschitz condition. Denote by ϕ(t; x₀) the solution of (1) corresponding to the initial state x₀ = x(0), evaluated at time t > 0.

A vector x ∈ ℝⁿ is an equilibrium point of the system (1) if f(x) = 0. Since any equilibrium point can be shifted to the origin 0 via a change of variables, without loss of generality we may always assume that the equilibrium point of interest occurs at the origin.

Lyapunov theory is concerned with the behavior of the solution ϕ(t;x0) where the initial state x0 is not at the equilibrium 0 but is “close” to it.

Definition 1 (<italic>Lyapunov stability</italic>).

The equilibrium point 0 of (1) is

stable, if for any ϵ > 0 there exists δ = δ(ϵ) > 0 such that (2) ‖x₀‖ < δ ⟹ ‖ϕ(t; x₀)‖ < ϵ for all t > 0, where ‖·‖ denotes any norm defined on ℝⁿ;

unstable, if it is not stable;

asymptotically stable, if it is stable and δ can be chosen such that (3) ‖x₀‖ < δ ⟹ lim_{t→∞} ϕ(t; x₀) = 0;

globally asymptotically stable, if it is stable and, for all x₀ ∈ ℝⁿ, (4) lim_{t→∞} ϕ(t; x₀) = 0.

Intuitively, the equilibrium point 0 is stable if all solutions starting near the origin (meaning that the initial conditions are in a neighborhood of the origin) remain near the origin for all time. The equilibrium point 0 is asymptotically stable if all solutions starting at nearby points not only stay nearby but also converge to the equilibrium point as time approaches infinity. And the equilibrium point 0 is globally asymptotically stable if it is asymptotically stable for all initial conditions x₀ ∈ ℝⁿ. Stability is an important property in practice, because it means that arbitrarily small perturbations of the initial state about the equilibrium point 0 result in arbitrarily small perturbations in the corresponding solution trajectories of (1).

A sufficient condition for stability of the zero equilibrium is the existence of a Lyapunov function, as shown in the following theorem.

Theorem 2 ([<xref ref-type="bibr" rid="B16">31</xref>, Theorem 4.1]).

Let D ⊆ ℝⁿ be a domain containing the equilibrium point 0 of (1). If there exists a continuously differentiable function V: D → ℝ such that (5) V(0) = 0 and V(x) > 0 in D ∖ {0}, and (6) V̇(x) := (∂V/∂x) · f(x) ≤ 0 in D, then the origin is stable. Moreover, if (7) V̇(x) < 0 in D ∖ {0}, then the origin is asymptotically stable.
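As a concrete illustration of Theorem 2 (a toy example of ours, not one from the paper), conditions (5)-(7) can be checked symbolically for the system ẋ₁ = −x₁, ẋ₂ = −x₂ with the candidate V(x) = x₁² + x₂², for instance with Python's sympy:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = [-x1, -x2]                 # illustrative system, chosen for simplicity
V = x1**2 + x2**2              # candidate Lyapunov function (a sum of squares)

# condition (5): V(0) = 0; V > 0 away from the origin since V is a sum of squares
assert V.subs({x1: 0, x2: 0}) == 0

# Lie derivative Vdot = (dV/dx) . f(x), cf. (6)-(7)
Vdot = sp.expand(sp.diff(V, x1)*f[0] + sp.diff(V, x2)*f[1])
print(Vdot)
```

Here V̇ expands to −2x₁² − 2x₂², which is negative definite on any domain D, so the origin is asymptotically stable.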

A function V(x) satisfying conditions (5) and (6) in Theorem 2 is commonly known as a Lyapunov function. We can verify global asymptotic stability of system (1) by using Lyapunov functions, as stated in the following theorem.

Theorem 3 ([<xref ref-type="bibr" rid="B16">31</xref>, Theorem 4.2]).

Let the origin be an equilibrium point for (1). If there exists a continuously differentiable function V: ℝⁿ → ℝ such that (8) V(0) = 0 and V(x) > 0 for all x ≠ 0, (9) ‖x‖ → ∞ ⟹ V(x) → ∞, and (10) V̇(x) < 0 for all x ≠ 0, then the origin is globally asymptotically stable.

Remark that a function satisfying condition (9) is said to be radially unbounded.

It is known that global asymptotic stability is very desirable, but in many applications it is difficult to achieve. When the equilibrium point is asymptotically stable, we are interested in determining how far from the origin the trajectory can start and still converge to the origin as t approaches ∞. This gives rise to the following definition.

Definition 4 (region of attraction).

The region of attraction (ROA) Ω of the equilibrium point 0 is defined as Ω = {x ∈ ℝⁿ : lim_{t→∞} ϕ(t; x) = 0}.

The ROA of the equilibrium point 0 is a collection of all points x such that any trajectory starting at initial state x at time 0 will be attracted to the equilibrium point. In the literature, the terms “domain of attraction” and “attraction basin” are also used instead of “region of attraction”.

Definition 5 (<italic>positively invariant set</italic>).

A set M ⊆ ℝⁿ is called a positively invariant set of the system (1) if x₀ ∈ M implies ϕ(t; x₀) ∈ M for all t ≥ 0. Namely, if a solution belongs to a positively invariant set M at some time instant, then it belongs to M for all future time.

In general, finding the exact ROA analytically might be difficult or even impossible. Usually, Lyapunov functions are used to find underestimates of the ROA, that is, sets contained in the region of attraction, as stated in the following theorem.

Theorem 6 ([<xref ref-type="bibr" rid="B12">20</xref>, Theorem 10]).

Let D ⊆ ℝⁿ be a domain containing the equilibrium point 0 of (1). If there exists a continuously differentiable function V: D → ℝ satisfying (5) and (7), then any region Ω_c = {x ∈ ℝⁿ : V(x) ≤ c} with c ≥ 0 such that Ω_c ⊆ D is a positively invariant set contained in the ROA of the equilibrium 0.

Theorem 6 describes an approach to compute estimates of the ROA through Lyapunov functions. More specifically, in the case of asymptotic stability, if Ω_c = {x ∈ ℝⁿ : V(x) ≤ c} is bounded and contained in D, then every trajectory starting in Ω_c remains in Ω_c and approaches the origin as t → ∞. Thus, Ω_c is an estimate of the ROA. Remark that when the origin is globally asymptotically stable, the region of attraction is the whole space ℝⁿ.
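To make the role of the sublevel set Ω_c tangible, consider the scalar system ẋ = −x + x³ (an illustrative example of ours, not from the paper). With V(x) = x², V̇ = 2x(−x + x³) = −2x²(1 − x²) < 0 for 0 < |x| < 1, so by Theorem 6 every sublevel set {V ≤ c} with c < 1 is a positively invariant subset of the ROA, which here is exactly (−1, 1). A quick numerical check by forward-Euler integration:

```python
# For x' = -x + x^3, V(x) = x^2 is a Lyapunov function on D = (-1, 1),
# and {V <= c} with c < 1 lies in D, hence is invariant and inside the ROA.

def simulate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration of x' = -x + x^3 over time dt*steps."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + x**3)
    return x

x = simulate(0.9)   # starts inside the sublevel set {x^2 <= 0.81}, c = 0.81 < 1
print(abs(x))       # trajectory has converged very close to the origin
```

Starting just outside the ROA (e.g., x₀ = 1.1) the same integration diverges, which is why the bound c < 1 matters.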

3. Problem Reformulation

In this section, we transform the problem of asymptotic stability and ROA analysis of system (1) into a parametric program with LMI or BMI constraints. In the sequel, we suppose that system (1) is a polynomial dynamical system with fᵢ(x) ∈ ℝ[x] for 1 ≤ i ≤ n, where ℝ[x] denotes the ring of real polynomials in the variables x.

3.1. Asymptotic Stability

Firstly, we consider the asymptotic stability of system (1). As shown in Theorem 2, the existence of a Lyapunov function V(x) satisfying conditions (5) and (7) is a certificate of asymptotic stability of the equilibrium point 0, and the problem of computing such a V(x) can be transformed into the following problem: (11) find V(x) ∈ ℝ[x] s.t. V(0) = 0, V(x) > 0 for all x ∈ D ∖ {0}, and V̇(x) = (∂V/∂x) f(x) < 0 for all x ∈ D ∖ {0}. In general, D can be an arbitrary neighborhood of the equilibrium point 0. However, to simplify the calculation, in practice we can assume D = {x ∈ ℝⁿ : g(x) ≤ 0}, where g(x) ∈ ℝ[x] is chosen to be g(x) = ∑ᵢ₌₁ⁿ xᵢ² − r², with a given constant r ∈ ℝ₊, for instance, r = 10⁻².

Let us first predetermine a template of Lyapunov functions of given degree d. We assume that (12) V(x) = ∑_α c_α x^α, where x^α = x₁^{α₁} ⋯ x_n^{α_n}, α = (α₁, …, α_n) ∈ ℤ≥0ⁿ with ∑ᵢ₌₁ⁿ αᵢ ≤ d, and the c_α are parameters. We can rewrite V(x) = cᵀ · T(x), where T(x) is the (column) vector of all terms in x₁, …, x_n of total degree at most d, and c ∈ ℝ^ν, with ν = C(n + d, n), is the coefficient vector of V(x). In the sequel, we write V(x) as V(x, c) for clarity.

For a polynomial φ(x) ∈ ℝ[x], we say that φ(x) is positive definite (resp., positive semidefinite) if φ(x) > 0 for all x ∈ ℝⁿ ∖ {0} (resp., φ(x) ≥ 0 for all x ∈ ℝⁿ). Observe that if φ(x) is a sum of squares (SOS), then φ(x) is globally nonnegative. To ensure positive definiteness of φ(x), we can use a polynomial l(x) of the form (13) l(x) = ∑ᵢ₌₁ⁿ ϵᵢ xᵢᵏ, where ϵᵢ ∈ ℝ₊ and k is assumed to be even. Clearly, the condition that φ(x) − l(x) is an SOS polynomial guarantees the positive definiteness of φ(x). Therefore, to ensure positive definiteness of the second and third constraints in (11), we can use two polynomials l₁(x), l₂(x) of the form (13). It is notable that SOS programming can be applied to determine the nonnegativity of a multivariate polynomial over a semialgebraic set. Consider the problem of verifying whether the implication (14) ⋀ᵢ₌₁ᵐ (pᵢ(x) ≥ 0) ⟹ q(x) ≥ 0 holds, where pᵢ(x) ∈ ℝ[x] for 1 ≤ i ≤ m and q(x) ∈ ℝ[x]. According to Stengle's Positivstellensatz, Schmüdgen's Positivstellensatz, or Putinar's Positivstellensatz , if there exist SOS polynomials σᵢ ∈ ℝ[x] for i = 0, …, m such that (15) q(x) = σ₀(x) + ∑ᵢ₌₁ᵐ σᵢ(x) pᵢ(x), then the assertion (14) holds. Therefore, the existence of SOS representations provides a sufficient condition for determining the nonnegativity of q(x) over {x ∈ ℝⁿ : ⋀ᵢ₌₁ᵐ pᵢ(x) ≥ 0}.
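A toy instance of the certificate (15), checked symbolically (the polynomials are ours, chosen for illustration): with p(x) = 1 − x² and q(x) = 2 − x², the constant multipliers σ₀ = σ₁ = 1 are trivially SOS and witness the implication p(x) ≥ 0 ⟹ q(x) ≥ 0.

```python
import sympy as sp

x = sp.symbols('x', real=True)
p = 1 - x**2            # constraint polynomial: p(x) >= 0 defines [-1, 1]
q = 2 - x**2            # claim: q(x) >= 0 whenever p(x) >= 0

# certificate in the form (15): q = sigma0 + sigma1 * p with SOS multipliers
sigma0 = sp.Integer(1)  # a nonnegative constant is trivially SOS
sigma1 = sp.Integer(1)
assert sp.expand(q - (sigma0 + sigma1 * p)) == 0   # identity holds exactly
```

Since σ₀, σ₁ ≥ 0 and p(x) ≥ 0 on the set in question, the identity immediately gives q(x) ≥ 0 there; searching for such σᵢ of bounded degree is precisely what the SOS programs below automate.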

Based on the above observation, the problem (11) can be further transformed into the following SOS programming: (16) find c ∈ ℝ^ν s.t. V(0, c) = 0, V(x, c) − l₁(x) = ρ₁(x) − σ₁(x) g(x), −(∂V/∂x) f(x) − l₂(x) = ρ₂(x) − σ₂(x) g(x), where σⱼ(x) and ρⱼ(x) are SOS polynomials in ℝ[x] for j = 1, 2. Moreover, the degree bound of the unknown SOS polynomials σⱼ, ρⱼ is exponential in n, deg(V), deg(f), and deg(g). In practice, to avoid high computational complexity, we simply set up a truncated SOS programming by fixing a priori a (much smaller) degree bound 2e, with e ∈ ℤ₊, for the unknown SOS polynomials. Consequently, the existence of a solution c of (16) guarantees the asymptotic stability of the given system.

Similarly, the problem of computing a Lyapunov function V(x) for global asymptotic stability of system (1), satisfying the conditions in Theorem 3, can be rewritten as the following problem: (17) find V(x) ∈ ℝ[x] s.t. V(0) = 0, V(x) > 0 for all x ≠ 0, ‖x‖ → ∞ ⟹ V(x) → ∞, and V̇(x) < 0 for all x ≠ 0. By introducing two polynomials lᵢ(x), i = 1, 2, of the form (13), the condition that V(x, c) − l₁(x) is SOS guarantees the radial unboundedness of V(x, c). Consequently, the problem (17) can be further transformed into the following SOS programming: (18) find c ∈ ℝ^ν s.t. V(0, c) = 0, V(x, c) − l₁(x) = ρ₁(x), −(∂V/∂x) f(x) − l₂(x) = ρ₂(x), where the ρⱼ(x) are SOS polynomials in ℝ[x] for j = 1, 2. The decision variables are the coefficients of all the polynomials appearing in (18), such as V(x, c), lᵢ(x), and ρⱼ(x).

3.2. Region of Attraction

Suppose that the equilibrium point 0 is asymptotically stable. In this section, we consider how to find a large enough underestimate of the ROA. In the case where the equilibrium point 0 is globally asymptotically stable, the ROA is the whole space ℝⁿ.

Our idea for computing an estimate of the ROA is similar to that of the algorithm in [20, Section 4.2.2]. Suppose that D is the semialgebraic set (19) D = {x ∈ ℝⁿ : V(x) ≤ 1} given by a Lyapunov function V(x). In order to enlarge the computed positively invariant set contained in the ROA, we define a variable-sized region (20) P_β = {x ∈ ℝⁿ : p(x) ≤ β}, where p(x) ∈ ℝ[x] is a fixed positive definite polynomial, and maximize β subject to the constraint P_β ⊆ D.

Fix a template of V of the form (12). The problem of finding an estimate P_β of the ROA can be transformed into the following polynomial optimization problem: (21) max_{c ∈ ℝ^ν} β s.t. V(0, c) = 0, V(x, c) > 0 for all x ∈ D ∖ {0}, V̇(x, c) = (∂V/∂x) f(x) < 0 for all x ∈ D ∖ {0}, and p(x) ≤ β ⟹ V(x, c) ≤ 1. By introducing two polynomials lᵢ(x), i = 1, 2, of the form (13) and based on SOS relaxation, the problem (21) can be further transformed into the following SOS programming: (22) max_{c ∈ ℝ^ν} β s.t. V(0, c) = 0, V(x, c) − l₁(x) = ρ₁(x) − σ₁(x)(V(x, c) − 1), −(∂V/∂x) f(x) − l₂(x) = ρ₂(x) − σ₂(x)(V(x, c) − 1), 1 − V(x, c) = ρ₃(x) − σ₃(x)(p(x) − β), where σᵢ(x) and ρᵢ(x) are SOS polynomials in ℝ[x] for 1 ≤ i ≤ 3. In (22), the decision variables are β and the coefficients of all the polynomials appearing in (22), such as V(x, c), σᵢ(x), and ρᵢ(x). Since β and the coefficients of V(x, c) and σᵢ(x) are unknown, nonlinear terms that are products of these coefficients occur in the second, third, and fourth constraints of (22), which yields a nonconvex bilinear matrix inequality (BMI) problem. We discuss in Section 4 how to handle the SOS programming (22) directly, using a BMI solver or an iterative method.
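For intuition about the last constraint of (21), consider the special case of a quadratic Lyapunov function V(x) = xᵀPx with p(x) = ‖x‖² (a simplification of ours; the paper handles general polynomial templates via the BMI formulation above). Then the largest admissible β has the closed form 1/λ_max(P):

```python
import numpy as np

def largest_beta(P):
    """Largest beta such that {x : ||x||^2 <= beta} is contained in
    {x : x^T P x <= 1}, for symmetric positive definite P.
    Since max_{||x||^2 = beta} x^T P x = beta * lambda_max(P),
    containment holds iff beta * lambda_max(P) <= 1."""
    return 1.0 / max(np.linalg.eigvalsh(P))

P = np.array([[2.0, 0.0],
              [0.0, 0.5]])
print(largest_beta(P))   # 0.5: the ball of radius sqrt(0.5) just fits in the ellipse
```

For nonquadratic V and p no such closed form exists, which is exactly why (22) couples β with the unknown coefficients and becomes a BMI problem.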

4. Exact Certificate of Sum of Squares Decomposition

According to Theorems 2, 3, and 6, the key to asymptotic stability analysis and ROA estimation lies in finding a real function V(x) and a constant β satisfying the desired conditions. We present a symbolic-numeric hybrid method, based on SOS relaxation and rational vector recovery, to compute an exact polynomial V(x) and constant β.

4.1. Approximate Solution from LMI or BMI Solver

In this section, we discuss how to solve the SOS programs (16), (18), and (22) directly, using an LMI or BMI solver or an iterative method.

Using the Gram matrix representation method  (also known as the square matrix representation (SMR) ), a polynomial ψ(x) of degree 2m is SOS if and only if there exists a positive semidefinite matrix Q such that (23) ψ(x) = Z(x)ᵀ Q Z(x), where Z(x) is a monomial vector in x of degree m. Thus, the SOS programming (16) is equivalent to the following semidefinite programming (SDP) problem: (24) inf_{W^[j], Q^[j]} ∑ⱼ₌₁² Trace(W^[j] + Q^[j]) s.t. V(0, c) = 0, V(x, c) − l₁(x) = m₁(x)ᵀ · W^[1] · m₁(x) − p₁(x)ᵀ · Q^[1] · p₁(x) g(x), −(∂V/∂x) f(x) − l₂(x) = m₂(x)ᵀ · W^[2] · m₂(x) − p₂(x)ᵀ · Q^[2] · p₂(x) g(x), where all the matrices W^[j], Q^[j] are symmetric and positive semidefinite, and ∑ⱼ₌₁² Trace(W^[j] + Q^[j]) denotes the sum of the traces of all these matrices, which acts as a dummy objective function commonly used in SDP for optimization problems with no objective function.
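The Gram matrix condition (23) is easy to check numerically. The following sketch uses a standard textbook SOS example (not one of the paper's benchmarks): ψ(x, y) = 2x⁴ + 2x³y − x²y² + 5y⁴ with monomial vector Z = (x², y², xy)ᵀ.

```python
import numpy as np

# A known Gram matrix for psi(x, y) = 2x^4 + 2x^3 y - x^2 y^2 + 5y^4
# with respect to Z = (x^2, y^2, xy)
Q = np.array([[ 2.0, -3.0, 1.0],
              [-3.0,  5.0, 0.0],
              [ 1.0,  0.0, 5.0]])

# verify psi = Z^T Q Z by sampling random points
rng = np.random.default_rng(0)
for x, y in rng.uniform(-2, 2, size=(5, 2)):
    Z = np.array([x*x, y*y, x*y])
    psi = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4
    assert abs(Z @ Q @ Z - psi) < 1e-8

# Q is positive semidefinite (its smallest eigenvalue is ~0), hence psi is SOS
print(min(np.linalg.eigvalsh(Q)))
```

In (24) the Gram matrices W^[j], Q^[j] are not given but are the decision variables, and the SDP solver searches for positive semidefinite instances satisfying the coefficient-matching constraints.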

Similarly, the SOS programming (18) can be rewritten as the following SDP problem: (25) inf_{W^[1], W^[2]} Trace(W^[1]) + Trace(W^[2]) s.t. V(0, c) = 0, V(x, c) − l₁(x) = m₁(x)ᵀ · W^[1] · m₁(x), −(∂V/∂x) f(x) − l₂(x) = m₂(x)ᵀ · W^[2] · m₂(x), where the matrices W^[1], W^[2] are symmetric and positive semidefinite.

Many MATLAB packages of SDP solvers, such as SOSTOOLS , YALMIP , and SeDuMi , are available to solve problems (24) and (25) efficiently.

Now, let us consider the problem (22). The following example shows how to transform nonlinear parametric polynomial constraints into a BMI problem.

Example 7.

To find a polynomial φ(x) satisfying (φ(x) ≥ 0) ∧ (1 − x² ≥ 0) ⟹ 2x(dφ/dx) ≥ 0, it suffices to find φ(x) such that (26) 2x(dφ/dx) = ϕ₀(x) + ϕ₁(x)(1 − x²) + ϕ₂(x)φ(x), where ϕ₀(x), ϕ₁(x), ϕ₂(x) are SOS polynomials. Suppose that deg(φ) = 1, deg(ϕ₀) = 2, and deg(ϕ₁) = deg(ϕ₂) = 0, and that φ(x) = u₀ + u₁x, ϕ₁ = u₂, and ϕ₂ = v₁, with u₀, u₁, u₂, v₁ parameters. From (26) we have (27) ϕ₀(x) = u₂x² + (2u₁ − u₁v₁)x − u₂ − u₀v₁, whose square matrix representation (SMR)  is ϕ₀(x) = ZᵀQZ, where (28) Q = [−u₂ − u₀v₁, u₁ − (1/2)u₁v₁; u₁ − (1/2)u₁v₁, u₂], Z = [1; x]. Since all the ϕᵢ(x) are SOS, we have u₂ ≥ 0, v₁ ≥ 0, and Q ⪰ 0. Therefore, the original constraint is translated into a BMI constraint on the block-diagonal matrix diag(u₂, v₁, Q): (29) ℬ(u₀, u₁, u₂, v₁) = u₁[0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, 1; 0, 0, 1, 0] + u₂[1, 0, 0, 0; 0, 0, 0, 0; 0, 0, −1, 0; 0, 0, 0, 1] + v₁[0, 0, 0, 0; 0, 1, 0, 0; 0, 0, 0, 0; 0, 0, 0, 0] + u₀v₁[0, 0, 0, 0; 0, 0, 0, 0; 0, 0, −1, 0; 0, 0, 0, 0] + u₁v₁[0, 0, 0, 0; 0, 0, 0, 0; 0, 0, 0, −1/2; 0, 0, −1/2, 0] ⪰ 0.
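The algebra in Example 7 can be verified mechanically, for instance with Python's sympy (a check of ours, mirroring (26)-(28)):

```python
import sympy as sp

x, u0, u1, u2, v1 = sp.symbols('x u0 u1 u2 v1')

phi = u0 + u1*x                  # candidate phi(x) of degree 1
phi1, phi2 = u2, v1              # degree-0 SOS multipliers

# rearranging (26): phi0 = 2x*phi' - phi1*(1 - x^2) - phi2*phi
phi0 = sp.expand(2*x*sp.diff(phi, x) - phi1*(1 - x**2) - phi2*phi)

# (27): phi0 = u2*x^2 + (2*u1 - u1*v1)*x - u2 - u0*v1
assert sp.expand(phi0 - (u2*x**2 + (2*u1 - u1*v1)*x - u2 - u0*v1)) == 0

# (28): square matrix representation phi0 = Z^T Q Z with Z = (1, x)
Q = sp.Matrix([[-u2 - u0*v1,  u1 - u1*v1/2],
               [u1 - u1*v1/2, u2]])
Z = sp.Matrix([1, x])
assert sp.expand((Z.T * Q * Z)[0, 0] - phi0) == 0
```

Note that the entries of Q contain the products u₀v₁ and u₁v₁ of decision variables, which is precisely what makes the resulting matrix constraint bilinear rather than linear.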

Similar to Example 7, the SOS programming (22) can be transformed into a BMI problem of the form (30) inf −β s.t. ℬ(u, v) = A₀ + ∑ᵢ₌₁ᵐ uᵢAᵢ + ∑ⱼ₌₁ᵏ vⱼA_{m+j} + ∑_{1≤i≤m, 1≤j≤k} uᵢvⱼB_{ij} ⪰ 0, where the Aᵢ, B_{ij} are constant symmetric matrices, and u = (u₁, …, u_m), v = (v₁, …, v_k) are the parameter coefficients of the SOS polynomials occurring in the original SOS programming (22).

Many methods can be used to solve the BMI problem (30) directly, such as interior-point constrained trust region methods  and an augmented Lagrangian strategy . The PENBMI solver  is a MATLAB package for solving general BMI problems, whose idea is based on the choice of a penalty/barrier function Φ_p that penalizes the inequality constraints. This function satisfies a number of properties  such that, for any p > 0, we have (31) ℬ(u, v) ⪰ 0 ⟺ Φ_p(ℬ(u, v)) ⪰ 0. This means that, for any p > 0, the problem (30) is equivalent to the following augmented problem: (32) inf −β s.t. Φ_p(ℬ(u, v)) ⪰ 0. The associated Lagrangian of (32) can be viewed as a (generalized) augmented Lagrangian of (30): (33) F(u, v, U, p) = −β + Tr(U Φ_p(ℬ(u, v))), where U is the Lagrange multiplier associated with the inequality constraint. Remark that U is a real symmetric matrix of the same order as the matrix operator ℬ. For more details, please refer to .

Alternatively, observe that ℬ(u, v) in (30) involves no cross products uᵢuⱼ or vᵢvⱼ. Taking this special form into account, an iterative method can be applied by fixing u and v alternately, which leads to a sequence of convex LMI problems . Remark that although the convergence of the iterative method cannot be guaranteed, this method is easier to implement than the PENBMI solver and can yield a feasible solution (ū, v̄) efficiently in practice.

4.2. Exact SOS Recovery

Since the SDP (LMI) or BMI solvers in MATLAB run in fixed (floating-point) precision, applying the techniques in Section 4.1 only yields numerical solutions of (16), (18), and (22). In the sequel, we propose an improved algorithm, based on a modified Newton refinement and a rational vector recovery technique, to compute exact solutions of polynomial optimization problems with LMI or BMI constraints.

Without loss of generality, we can reduce the problems (16), (18), and (22) to the following problem: (34) find c ∈ ℝ^ν s.t. V₁(x, c) = m₁(x)ᵀ · W^[1] · m₁(x), V₂(x, c) = m₂(x)ᵀ · W^[2] · m₂(x) + (m₃(x)ᵀ · W^[3] · m₃(x)) · V₃(x, c), W^[i] ⪰ 0, i = 1, 2, 3, where the coefficients of the polynomials Vᵢ(x, c), 1 ≤ i ≤ 3, are affine in c. Note that (34) involves both LMI and BMI constraints. After solving the SDP system (34) by applying the techniques in Section 4.1, the numerical vector c and the numerical positive semidefinite matrices W^[i], i = 1, 2, 3, may not satisfy the conditions in (34) exactly, that is, (35) V₁(x, c) ≈ m₁(x)ᵀ · W^[1] · m₁(x), V₂(x, c) ≈ m₂(x)ᵀ · W^[2] · m₂(x) + (m₃(x)ᵀ · W^[3] · m₃(x)) · V₃(x, c), as illustrated by the following example.

Example 8.

Consider the following nonlinear system: (36) ẋ₁ = −0.5x₁ + x₂ + 0.24999x₁² + 0.083125x₁³ + 0.0205295x₁⁴ + 0.0046875x₁⁵ + 0.0015191x₁⁶, ẋ₂ = −x₂ + x₁x₂ − 0.49991x₁³ + 0.040947x₁⁵. We want to find a certified estimate of the ROA. In the associated SOS programming (22) with BMI constraints, we take p(x₁, x₂) = x₁² + x₂². When d = 2, we obtain (37) V(x₁, x₂) = 3.112937368x₁² + 3.112937368x₂², β = 0.32124. However, V(x₁, x₂) and β cannot satisfy the conditions in (21) exactly, because there exists a sample point (15/32, 85/256) at which the third condition in (21) fails. Therefore, (38) Ω_V = {(x₁, x₂) ∈ ℝ² : V(x₁, x₂) ≤ 1} is not an estimate of the ROA of this system.
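The failure at the sample point can be confirmed in exact rational arithmetic (our own check; the point (15/32, 85/256) and the system data are those of Example 8):

```python
from fractions import Fraction as F

# At (15/32, 85/256), the Lie derivative of V = c*(x1^2 + x2^2) along (36)
# is positive, so the third condition of (21) fails for the numerical
# solution (37).
x1, x2 = F(15, 32), F(85, 256)

f1 = (-F(1, 2)*x1 + x2 + F('0.24999')*x1**2 + F('0.083125')*x1**3
      + F('0.0205295')*x1**4 + F('0.0046875')*x1**5 + F('0.0015191')*x1**6)
f2 = -x2 + x1*x2 - F('0.49991')*x1**3 + F('0.040947')*x1**5

# Vdot = 2c*(x1*f1 + x2*f2) with c = 3.112937368 > 0, so the sign of Vdot
# is the sign of x1*f1 + x2*f2
lie = x1*f1 + x2*f2
assert lie > 0      # Vdot > 0 at a nonzero point: (37) is not an exact certificate
print(float(lie))   # a small but strictly positive rational number
```

This is the unsoundness the exact recovery step is designed to eliminate: the numerical solution violates the constraint by a tiny margin that floating-point verification can easily miss.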

In our former papers [36, 37], we applied Gauss-Newton iteration and rational vector recovery to obtain exact solutions that satisfy the constraints in (34) exactly. However, these techniques may fail in some cases, as shown in [38, Example 2]. The reason may lie in the fact that we recovered the vector c and the associated positive semidefinite matrices separately. Here, we compute exact solutions of (34) by using a modified Newton refinement and rational vector recovery technique , which applies to the vector c and the associated positive semidefinite matrices simultaneously. The main idea is as follows.

We first convert W^[3] to a nearby rational positive semidefinite matrix W̃^[3] by a nonnegative truncated PLDLᵀPᵀ-decomposition. In practice, W^[3] is a numerically diagonal matrix; in other words, its off-diagonal entries are very tiny and its diagonal entries are nonnegative. Therefore, by setting the small entries of W^[3] to zero, we easily obtain the nearby rational positive semidefinite matrix W̃^[3]. We then apply Gauss-Newton iteration to refine c, W^[1], and W^[2] simultaneously, with respect to a given tolerance τ.

Now we discuss how to recover, from the refined c, W^[1], W^[2], a rational vector c̃ and rational positive semidefinite matrices W̃^[1] and W̃^[2] that satisfy exactly (39) V₁(x, c̃) − m₁(x)ᵀ · W̃^[1] · m₁(x) = 0, V₂(x, c̃) − m₂(x)ᵀ · W̃^[2] · m₂(x) − V₃(x, c̃) m₃(x)ᵀ · W̃^[3] · m₃(x) = 0. Since the equations in (39) are affine in the entries of c̃ and W̃^[1], W̃^[2], one can define an affine linear hyperplane (40) 𝒳 = {(c, W^[1], W^[2]) : V₁(x, c) − m₁(x)ᵀ · W^[1] · m₁(x) = 0, V₂(x, c) − m₂(x)ᵀ · W^[2] · m₂(x) − V₃(x, c) m₃(x)ᵀ · W̃^[3] · m₃(x) = 0}. Note that the hyperplane (40) can be constructed from a linear system Ay = b, where y consists of the entries of c, W^[1], and W^[2]. If A has full row rank, such a hyperplane is guaranteed to exist. Then, for a given bound D of the common denominator, the rationalized SOS solutions of (34) can be computed by orthogonal projection if the matrices W^[1], W^[2] have full rank with respect to τ, or by rational vector recovery otherwise. Finally, we check whether the matrices W̃^[i] are positive semidefinite for i = 1, 2. If so, the rational vector c̃ and the rational positive semidefinite matrices W̃^[i], 1 ≤ i ≤ 3, satisfy the conditions of problem (34) exactly. For more details, the reader can refer to .
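A drastically simplified sketch of the rational recovery step, using Python's fractions module with illustrative coefficient values of our own: it performs only component-wise rounding under a denominator bound, whereas the method above recovers c and the Gram matrices simultaneously via projection onto the hyperplane (40) and then re-verifies positive semidefiniteness.

```python
from fractions import Fraction

def recover(vec, D):
    """Round each numerical coefficient to the nearest rational with
    denominator at most D (component-wise sketch only; the paper's method
    instead recovers c and the Gram matrices simultaneously)."""
    return [Fraction(v).limit_denominator(D) for v in vec]

# coefficients of a numerically computed Lyapunov function (illustrative values)
c = [0.8200000001, 0.1799999998, 1.6500000003]
print(recover(c, 100))   # recovers the rationals 41/50, 9/50, 33/20
```

Rounding alone is not enough in general: the rounded candidate must still satisfy the affine identities (39) exactly and keep the Gram matrices positive semidefinite, which is why the algorithm below re-checks both before declaring success.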

4.3. Algorithm

The results in Sections 4.1 and 4.2 yield an algorithm to find exact solutions to the problem (34).

Algorithm 9.

Verified Parametric Optimization Solver.

Input

A polynomial optimization problem of the form (34).

D > 0: the bound of the common denominator.

e ∈ ℤ₊: the degree bound 2e of the SOS polynomials used to construct the SOS programming.

τ > 0: the given tolerance.

Output

The verified solution c̃ of (34), with the matrices W̃^[i], 1 ≤ i ≤ 3, positive semidefinite.

(Compute the numerical solutions) Apply an LMI or BMI solver to compute numerical solutions of the associated polynomial optimization problem (34). If this problem has no feasible solutions, return "we cannot find solutions of (34) with the given degree bound 2e." Otherwise, obtain c and W^[i] ⪰ 0, 1 ≤ i ≤ 3.

(Compute the verified solution c~)

Convert W^[3] to a nearby rational positive semidefinite matrix W̃^[3] by a nonnegative truncated PLDLᵀPᵀ-decomposition.

For the tolerance τ, apply the modified Newton iteration to refine c and W^[1], W^[2].

Determine the singularity of W^[1] and W^[2] with respect to τ. Then, for a given common denominator bound, the rational vector c̃ and rational matrices W̃^[1], W̃^[2] can be obtained by orthogonal projection if W^[1], W^[2] are of full rank, or by the rational vector recovery method otherwise.

Check whether the matrices W̃^[1], W̃^[2] are positive semidefinite. If so, return c̃ and W̃^[i], 1 ≤ i ≤ 3. Otherwise, return "we cannot find the solutions of (34) with the given degree bound."

5. Experiments

Let us present some examples of asymptotic stability and ROA analysis of nonlinear systems.

Example 10 ([<xref ref-type="bibr" rid="B26">8</xref>, Example 1]).

Consider the nonlinear continuous system (41) ẋ₁ = −x₁³ + 4x₂³ − 6x₃x₄, ẋ₂ = −x₁ − x₂ + x₅³, ẋ₃ = x₁x₄ − x₃ + x₄x₆, ẋ₄ = x₁x₃ + x₃x₆ − x₄³, ẋ₅ = −2x₂³ − x₅ + x₆, ẋ₆ = −3x₃x₄ − x₅³ − x₆. Firstly, we certify the local asymptotic stability of this system. According to Theorem 2, we need to find a Lyapunov function V(x) which satisfies all the conditions in Theorem 2. We can set up the associated SOS programming (16) with g(x) = x₁² + x₂² + x₃² − 0.0001. When d = 4, we obtain a feasible solution of the associated SDP system. Here we list just one approximate polynomial: (42) V(x) = 0.81569x₁² + 0.18066x₂² + 1.6479x₃² + 2.1429x₄² + 1.2996x₆² + ⋯ + 0.67105x₄⁴ + 0.52855x₅⁴. Let the tolerance be τ = 10⁻², and let the bound on the common denominator of the polynomial coefficient vector be 100. By use of the rational SOS recovery technique described in Section 4.2, we can obtain an exact Lyapunov function and the corresponding SOS polynomials. Here we list only the Lyapunov function: (43) Ṽ(x) = (41/50)x₁² + (9/50)x₂² + (33/20)x₃² + (107/50)x₄² + (69/100)x₅² + (13/10)x₆² + ⋯ + (67/100)x₄⁴ + (53/100)x₅⁴. Therefore, the local asymptotic stability is certified.

Furthermore, we consider global asymptotic stability. It suffices to find a Lyapunov function V(x) with rational coefficients which satisfies all the conditions in Theorem 3. By solving the associated SOS programming (18), we obtain (44) V(x) = 0.78997x₁² + 0.19x₂² + 2.2721x₃² + 1.4213x₆² + ⋯ + 1.4213x₂⁴ + 0.71066x₅⁴. By use of the rational SOS recovery technique described in Section 4.2, we then obtain (45) Ṽ(x) = (79/100)x₁² + (19/50)x₁x₂ + (9/100)x₁x₅ + (19/100)x₂² + (227/100)x₃² + (199/100)x₄² + (9/100)x₂x₅ + (71/100)x₅² + (71/50)x₆² + (71/50)x₂⁴ + (71/100)x₅⁴, which exactly satisfies the conditions in Theorem 3. Therefore, the global asymptotic stability of this system is certified.

Table 1 shows the performance of Algorithm 9 on another six examples, for global asymptotic stability analysis of dynamical systems from the literature. All the computations have been performed on an Intel Core 2 Duo 2.0 GHz processor with 2 GB of memory. In Table 1, Examples 1–3 correspond to [9, Examples 2, 3, 9] and Examples 4–6 correspond to [39, Example 7], [6, Example 22], and [40, page 1341]. For all these examples, the degree bound of the SOS polynomials is 4, and we set τ = 10⁻² and D = 100. Here n and Num denote the number of system variables and the number of decision variables in the LMI problem, respectively; deg(Ṽ) denotes the degree of Ṽ(x) obtained by Algorithm 9; Time is the total running time of Algorithm 9 in seconds.

Algorithm performance on benchmarks (globally asymptotic stability).

Example n Num deg(Ṽ) Time (s)
1 2 11 2 1.391
2 2 11 2 1.418
3 3 18 2 1.674
4 4 81 4 2.951
5 2 24 4 1.832
6 2 24 4 1.830
Example 11 ([<xref ref-type="bibr" rid="B38">41</xref>, Example A]).

Consider the nonlinear continuous system (46) ẋ₁ = −x₂, ẋ₂ = x₁ + (x₁² − 1)x₂. Firstly, we certify the local asymptotic stability of this system. It suffices to find a Lyapunov function V(x₁, x₂) with rational coefficients which satisfies all the conditions in Theorem 2. We can set up the associated SOS programming (16) with g(x) = x₁² + x₂² − 0.0001. When d = 2, we obtain (47) V(x) = 1.4957x₁² − 0.7279x₁x₂ + 1.1418x₂². Let the tolerance be τ = 10⁻², and let the bound on the common denominator of the polynomial coefficient vector be 100. By use of the rational SOS recovery technique described in Section 4.2, we then obtain a Lyapunov function (48) Ṽ(x) = (127/85)x₁² − (62/85)x₁x₂ + (97/85)x₂². Therefore, the local asymptotic stability is certified.
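Two sanity checks of ours on the recovered function (48) (spot checks, not a substitute for the exact SOS certificate): positive definiteness of its Gram matrix, and negativity of V̇ at a sample point near the origin, in exact arithmetic.

```python
import numpy as np
from fractions import Fraction as F

# V(x) = (127/85)x1^2 - (62/85)x1x2 + (97/85)x2^2; its Gram matrix puts
# half of the x1x2 coefficient on each off-diagonal entry
P = np.array([[127/85, -31/85],
              [-31/85,  97/85]])
assert min(np.linalg.eigvalsh(P)) > 0     # V is positive definite

# spot-check Vdot < 0 at a nonzero point near the origin for system (46)
a, b, c = F(127, 85), F(-62, 85), F(97, 85)
x1, x2 = F(1, 100), F(1, 100)
f1 = -x2
f2 = x1 + (x1**2 - 1)*x2
vdot = (2*a*x1 + b*x2)*f1 + (b*x1 + 2*c*x2)*f2
assert vdot < 0
```

The full certificate produced by Algorithm 9 establishes V̇ < 0 on all of D ∖ {0}, not just at sampled points; the check above merely illustrates what the SOS identity guarantees.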

We now construct Lyapunov functions to find certified estimates of the ROA. In the associated SOS programming (22) with BMI constraints, we take p(x₁, x₂) = x₁² + x₂². When d = 2, we obtain (49) V(x₁, x₂) = 0.6174455x₁² − 0.40292x₁x₂ + 0.43078x₂², β = 1.3402225. Let the tolerance be τ = 10⁻⁵, and let the bound on the common denominator of the polynomial coefficient vector be 10⁵. By use of the rational SOS recovery technique described in Section 4.2, we obtain (50) Ṽ(x₁, x₂) = (6732/10903)x₁² − (4393/10903)x₁x₂ + (42271/10903)x₂², β̃ = 6701/5000, which exactly satisfy the conditions in Theorem 6. Therefore, (51) Ω_Ṽ = {(x₁, x₂) ∈ ℝ² : Ṽ(x₁, x₂) ≤ 1} is a certified estimate of the ROA of the given system.

Table 2 shows the performance of Algorithm 9 on another three examples, for computing verified estimates of ROAs of dynamical systems with the same fixed positive definite polynomial p(x) given in the literature. All the computations have been performed on an Intel Core 2 Duo 2.0 GHz processor with 2 GB of memory. In Table 2, Examples 1–3 correspond to [22, Examples 1, 2] and [20, page 75]. For all these examples, the degree bound of the SOS polynomials is 4, τ = 10⁻⁴, and D = 10000. Here n and Num denote the number of system variables and the number of decision variables in the BMI problem, respectively; deg(Ṽ) and β̃ denote, respectively, the degree of Ṽ(x) and the value of β obtained by Algorithm 9, whereas β is the value reported in the literature; Time is the total running time of Algorithm 9 in seconds.

Algorithm performance on benchmarks (ROA).

Example n Num deg(Ṽ) β̃ β Time (s)
1 2 28 2 5415 / 9277 0.593 19.417
2 3 46 2 2650 / 9902 2.76 26.635
3 2 28 2 99 / 41 2.05 9.915
6. Conclusion

In this paper, we present a symbolic-numeric method for asymptotic stability and ROA analysis of nonlinear dynamical systems. A numerical Lyapunov function and an estimate of the ROA can be obtained by solving a (bilinear) SOS program via a BMI solver. Then a method based on modified Newton iteration and rational vector recovery techniques is deployed to obtain exact polynomial Lyapunov functions and verified estimates of ROAs with rational coefficients. Some experimental results are given to show the efficiency of our algorithm. For future work, we will consider the problem of stability region analysis of nonpolynomial systems by applying a rigorous polynomial approximation technique to compute an uncertain polynomial system whose set of trajectories contains that of the given nonpolynomial system.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 91118007 and 61021004 (Wu and Yang), the Fundamental Research Funds for the Central Universities under Grant 78210043 (Wu and Yang), and the Education Department of Zhejiang Province under Project Grant Y201120383 (Lin).

References

1. A. Polański, "Lyapunov function construction by linear programming," IEEE Transactions on Automatic Control, 1997, 42(7), 1013–1016.
2. T. A. Johansen, "Computation of Lyapunov functions for smooth nonlinear systems using convex optimization," Automatica, 2000, 36(11), 1617–1626.
3. L. Grüne and F. Wirth, "Computing control Lyapunov functions via a Zubov type algorithm," in Proceedings of the 39th IEEE Conference on Decision and Control, IEEE, 2000, vol. 3, 2129–2134.
4. R. C. L. F. Oliveira and P. L. D. Peres, "LMI conditions for robust stability analysis based on polynomially parameter-dependent Lyapunov functions," Systems & Control Letters, 2006, 55(1), 52–61.
5. G. Chesi, A. Garulli, A. Tesi, and A. Vicino, "LMI-based computation of optimal quadratic Lyapunov functions for odd polynomial systems," International Journal of Robust and Nonlinear Control, 2005, 15(1), 35–49.
6. J. Liu, N. Zhan, and H. Zhao, "Automatically discovering relaxed Lyapunov functions for polynomial dynamical systems," Mathematics in Computer Science, 2012, 6(4), 395–408.
7. T. Nguyen, Y. Mori, T. Mori, and Y. Kuroe, "QE approach to common Lyapunov function problem," Journal of Japan Society for Symbolic and Algebraic Computation, 2003, 10(1), 52–62.
8. A. Papachristodoulou and S. Prajna, "On the construction of Lyapunov functions using the sum of squares decomposition," in Proceedings of the 41st IEEE Conference on Decision and Control, IEEE, 2002, vol. 3, 3482–3487.
9. Z. She, B. Xia, R. Xiao, and Z. Zheng, "A semi-algebraic approach for asymptotic stability analysis," Nonlinear Analysis: Hybrid Systems, 2009, 3(4), 588–596.
10. B. Grosman and D. R. Lewin, "Lyapunov-based stability analysis automated by genetic programming," Automatica, 2009, 45(1), 252–256.
11. K. Forsman, "Construction of Lyapunov functions using Gröbner bases," in Proceedings of the 30th IEEE Conference on Decision and Control, IEEE, 1991, 798–799.
12. Z. She, B. Xue, and Z. Zheng, "Algebraic analysis on asymptotic stability of continuous dynamical systems," in Proceedings of the 36th International Symposium on Symbolic and Algebraic Computation, ACM, 2011, 313–320.
13. S. Prajna, P. A. Parrilo, and A. Rantzer, "Nonlinear control synthesis by convex optimization," IEEE Transactions on Automatic Control, 2004, 49(2), 310–314.
14. B. Xia, "DISCOVERER: a tool for solving semi-algebraic systems," ACM Communications in Computer Algebra, 2007, 41(3), 102–103.
15. P. Giesl, "Construction of a local and global Lyapunov function using radial basis functions," IMA Journal of Applied Mathematics, 2008, 73(5), 782–802.
16. H.-D. Chiang and J. S. Thorp, "Stability regions of nonlinear dynamical systems: a constructive methodology," IEEE Transactions on Automatic Control, 1989, 34(12), 1229–1241.
17. E. J. Davison and E. M. Kurak, "A computational method for determining quadratic Lyapunov functions for non-linear systems," 1971, 7(5), 627–636.
18. E. Cruck, R. Moitie, and N. Seube, "Estimation of basins of attraction for uncertain systems with affine and Lipschitz dynamics," Dynamics and Control, 2001, 11(3), 211–227.
19. R. Genesio, M. Tartaglia, and A. Vicino, "On the estimation of asymptotic stability regions: state of the art and new proposals," IEEE Transactions on Automatic Control, 1985, 30(8), 747–755.
20. Z. Jarvis-Wloszek, "Lyapunov based analysis and controller synthesis for polynomial systems using sum-of-squares optimization," Ph.D. thesis, University of California, Berkeley, Calif, USA, 2003.
21. S. Prakash, J. Vanualailai, and T. Soma, "Obtaining approximate region of asymptotic stability by computer algebra: a case study," The South Pacific Journal of Natural and Applied Sciences, 2002, 20(1), 56–61.
22. T. Weehong and A. Packard, "Stability region analysis using sum of squares programming," in Proceedings of the American Control Conference (ACC '06), IEEE, 2006, 2297–2302.
23. P. A. Parrilo, "Semidefinite programming relaxations for semialgebraic problems," Mathematical Programming Series B, 2003, 96(2), 293–320.
24. S. Prajna, A. Papachristodoulou, and P. Parrilo, SOSTOOLS: sum of squares optimization toolbox for MATLAB, 2002, http://www.cds.caltech.edu/sostools.
25. J. Löfberg, "YALMIP: a toolbox for modeling and optimization in MATLAB," in Proceedings of the 2004 IEEE International Symposium on Computer Aided Control System Design, 2004, 284–289.
26. J. F. Sturm, "Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones," Optimization Methods and Software, 1999, 11(1–4), 625–653.
27. M. Kocvara and M. Stingl, PENBMI User's Guide (version 2.0), http://www.penopt.com, 2005.
28. E. Kaltofen, B. Li, Z. Yang, and L. Zhi, "Exact certification of global optimality of approximate factorizations via rationalizing sums-of-squares with floating point scalars," in Proceedings of the 21st International Symposium on Symbolic and Algebraic Computation (ISSAC '08), ACM, Hagenberg, Austria, 2008, 155–164.
29. E. Kaltofen, B. Li, Z. Yang, and L. Zhi, "Exact certification in global polynomial optimization via sums-of-squares of rational functions with rational coefficients," Journal of Symbolic Computation, 2012, 47(1), 1–15.
30. A. Platzer and E. M. Clarke, "The image computation problem in hybrid systems model checking," in Proceedings of the 10th International Conference on Hybrid Systems: Computation and Control (HSCC '07), LNCS 4416, Springer, 2007, 473–486.
31. H. Khalil, Nonlinear Systems, 3rd edition, Prentice Hall, Upper Saddle River, NJ, USA, 2002.
32. J. Bochnak, M. Coste, and M. Roy, Real Algebraic Geometry, Springer, Berlin, Germany, 1998.
33. G. Chesi, "LMI techniques for optimization over polynomials in control: a survey," IEEE Transactions on Automatic Control, 2010, 55(11), 2500–2510.
34. F. Leibfritz and E. M. E. Mostafa, "An interior point constrained trust region method for a special class of nonlinear semidefinite programming problems," SIAM Journal on Optimization, 2002, 12(4), 1048–1074.
35. M. Kočvara and M. Stingl, "PENNON: a code for convex nonlinear and semidefinite programming," Optimization Methods & Software, 2003, 18(3), 317–333.
36. M. Wu and Z. Yang, "Generating invariants of hybrid systems via sums-of-squares of polynomials with rational coefficients," in Proceedings of the International Workshop on Symbolic-Numeric Computation, ACM Press, 2011, 104–111.
37. W. Lin, M. Wu, Z. Yang, and Z. Zeng, "Exact safety verification of hybrid systems using sums-of-squares representation," Science China Information Sciences, 2012.
38. Z. Yang, M. Wu, and W. Lin, "Exact verification of hybrid systems based on bilinear SOS representation," 19 pages, 2012, http://arxiv.org/abs/1201.4219.
39. A. Papachristodoulou and S. Prajna, "A tutorial on sum of squares techniques for systems analysis," in Proceedings of the American Control Conference (ACC '05), IEEE, 2005, 2686–2700.
40. J. Löfberg, "Modeling and solving uncertain optimization problems in YALMIP," in Proceedings of the 17th IFAC World Congress, 2008, 1337–1341.
41. U. Topcu, A. Packard, and P. Seiler, "Local stability analysis using simulations and sum-of-squares programming," Automatica, 2008, 44(10), 2669–2675.