A Novel Dissipativity-Based Control for Inexact Nonlinearity Cancellation Problems

When dealing with linear systems feedback interconnected with memoryless nonlinearities, a natural control strategy is to make the overall dynamics linear first and then design a linear controller for the remaining linear dynamics. By canceling the original nonlinearity via a first feedback loop, global linearization can be achieved. However, when the controller is not capable of exactly canceling the nonlinearity, such a control strategy may provide unsatisfactory performance or even induce instability. Here, the interplay between the accuracy of the nonlinearity approximation, the quality of the state estimation, and the robustness of the linear controller is investigated, and explicit conditions for stability are derived. An alternative controller design based on such conditions is proposed, and its effectiveness is compared with standard methods on a benchmark system.


Introduction and Motivations
One of the most prolific areas of interest in nonlinear control theory deals with the existence of coordinate changes and nonlinear inputs capable of making the complete system linear, so that the mathematical tools of the linear control framework can be successfully exploited. The idea of suppressing the process nonlinearities by means of a properly designed controller dates back to the early stages of control theory [1], and it can be well illustrated by considering a system in the Lur'e form [2, ch. 7], which features a linear dynamic part feedback interconnected through a static (or memoryless) nonlinear operator:

ẋ = Ax + B(u − φ(x)). (1)

Such a system, indeed, can be globally feedback linearized by exploiting the control input

u = φ(x) + v, (2)

which transforms the original nonlinear model into the new linear system

ẋ = Ax + Bv, (3)

which can be controlled through v using standard linear control techniques [3]. Observe that (3) is qualitatively different from the direct linearization of (1) around the desired nominal solution [2, ch. 4], as it has global validity. Despite this noteworthy result, the nonlinear input (2) can only be used under two very strict conditions which often prevent its application to real cases: (i) the nonlinearity φ must be exactly known a priori and (ii) the controller must be capable of exactly reproducing it. The extension to generic systems, for which only an output y = h(x) is measurable, is known as output feedback linearization; it exploits a linearizing control input of the form u = α(x) + β(x)v to make the relationship between the output y and the new input v linear. In his pioneering work, Krener was the first to provide sufficient conditions for the exact cancellation of the nonlinearity [4], while the complete solution of the output feedback problem was found later by Isidori [5]. State feedback linearization, instead, aims to design a suitable change of coordinates such that the transformed system turns out to be locally diffeomorphic to a linear system. Necessary and sufficient conditions for the single-input case were already presented in the work of Krener [4], while the multiple-input problem was independently solved by Su [6] and Hunt et al. [7]. Moreover, a large number of techniques have been developed to partially feedback linearize the system when it does not satisfy the previous scenarios (see, e.g., [8, 9] and the references therein).
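As a concrete (hypothetical) illustration of the cancellation mechanism in (1)-(3), the following sketch simulates a two-state Lur'e system in which the inner loop exactly cancels the nonlinearity, so the closed loop reduces to the linear dynamics ẋ = (A − BK)x. The matrices, gain, and nonlinearity below are example data, not taken from the paper.

```python
import numpy as np

# Hypothetical 2-state Lur'e system: x' = A x + B (u - phi(x)).
A = np.array([[0.0, 1.0], [-1.0, 0.5]])    # open-loop unstable linear part
B = np.array([0.0, 1.0])
phi = lambda x: np.sin(x[0]) + x[0] ** 3   # memoryless nonlinearity, phi(0) = 0

K = np.array([2.0, 3.0])                   # A - B K is Hurwitz

def simulate(exact=True, dt=1e-3, T=10.0):
    x = np.array([1.0, -0.5])
    for _ in range(int(T / dt)):
        v = -K @ x                         # linear outer-loop control
        u = phi(x) + v if exact else v     # inner loop: cancel phi (or not)
        x = x + dt * (A @ x + B * (u - phi(x)))
    return x

# With exact cancellation the dynamics reduce to x' = (A - B K) x,
# so the state converges to the origin regardless of phi.
print(np.linalg.norm(simulate(exact=True)))  # small
```

With `exact=False` the residual −φ(x) survives as an unmodeled term, which is exactly the scenario the paper studies.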
Despite the rigorous results available in the literature, all feedback linearization methods suffer from a lack of robustness with respect to inexact nonlinearity cancellations when applied to real-world cases. Indeed, inexact cancellations give rise to unmodeled dynamics, which usually have detrimental effects on the controlled system performance and can even lead to instability (see, e.g., [10-12] and the references therein). Unfortunately, such an issue is quite common when dealing with real problems, where the exact cancellation of the nonlinearity may not be possible for several reasons. The detrimental effect of inexact nonlinearity estimation has been highlighted in autopilot design [13], aeroelastic systems [14, 15], and DC motor control [16], just to cite a few examples. With reference to system (1)-(2), the following negative scenarios can occur.
(1) The nonlinear operator φ cannot be completely and exactly known a priori. For instance, only a limited set of sample points may be available, or just its lower and upper bounds can be experimentally investigated.
(2) The system state x cannot be directly observed and must be reconstructed. The use of a state observer negatively affects the cancellation, since the state estimates may deviate from the true values during transients.
(3) The control input u can be designed by using only a finite set of basis functions, preventing a perfect reproduction of the nonlinear operator φ.
These issues can be well illustrated by considering a Lur'e system of the form

ẋ = Ax + B(u − φ(z)), y = Cx, z = Hx, (4)

where x ∈ Rⁿ is the state, u ∈ R the control input, y ∈ Rᵖ the measured output, and z ∈ R the actual argument of the nonlinear operator, which does not necessarily coincide with x. Here, for the sake of simplicity, let us assume that φ : R → R, although the general case φ : Rᵐ → Rᵐ is straightforward, obtained by considering multidimensional signals for z and the signals defined in Section 2.1. Finally, without any loss of generality, suppose that φ(0) = 0, so that the desired equilibrium sits at the origin of the state space.
Let us assume that the pair (A, C) is observable. The standard theory then recommends the use of a state observer to recover the missing information on x and z [17]. Hence, if (A, B) is also controllable, the control input can be designed as

u = φ̂(ẑ) + Kx̂,

where x̂ and ẑ denote the observer estimates of x and z, and the function φ̂ is the approximation of the nonlinearity φ. Here the controller might not be able to exactly reproduce the actual nonlinear operator φ via φ̂, potentially causing an inexact nonlinearity cancellation. However, φ̂ is still supposed to satisfy at least the condition φ̂(0) = 0, in order to preserve the equilibrium at the origin. By defining the cancellation residual n := φ̂(ẑ) − φ(z) and the state estimation error e := x̂ − x, the controlled system in the traditional framework can be written in the form (7)-(9). The corresponding block diagram is reported in Figure 1. It is worth noticing that when the cancellation residual is the null signal, that is, n ≡ 0, the separation principle holds, and therefore the linear dynamics of the controlled system can be arbitrarily set by separately choosing K and L [18, ch. 16]. Unfortunately, such a condition is hardly met in any real-world scenario, since it requires achieving both exact nonlinearity cancellation and perfect state estimation. Hence, in the most common situations the cancellation residual plays the role of a disturbance that simultaneously affects both the original system state and its estimate, thus introducing a mutual interplay between the controller and the observer that prevents one from independently designing K and L. The stability of the system (7)-(9) under the interference of the cancellation residual n can be investigated through standard approaches by observing that it is in (extended) Lur'e form; that is, it consists of a linear subsystem feedback interconnected with the nonlinearity (7). For the sake of completeness, a short review of traditional methods is presented hereafter, highlighting the pros and cons of each solution.

(i) Stability through the Analysis of the Linearized Dynamics around the Equilibrium [2, ch. 4]. This category groups all the methods which aim to place the poles of the system linearized around the equilibrium. They therefore assume that the local behaviour of φ at the fixed point, that is, the value of its derivative there, is known. On the one hand, when K and L have already been fixed, the computation of the equilibrium eigenvalues can be efficiently carried out via numerical techniques. On the other hand, the complex mutual dependence between the controller and the observer prevents deriving explicit formulas for designing the gains K and L so that all the eigenvalues at the equilibrium have negative real parts. This problem is usually overcome by enforcing a sufficient degree of separation between the observer and the controller, bringing a very high degree of conservatism into the solution [17].
(ii) Closed Loop Methods Based on the Lur'e Problem Formulation [2, ch. 7]. These techniques analyze the equilibrium stability by exploiting the closed loop form that features the feedback interconnection between a linear subsystem and a static nonlinearity satisfying a sector condition. The originating method of this family is the circle criterion. Since these techniques explicitly take into account the nonlinearity n, that is, the cancellation residual, their results are partially based on the knowledge of the interplay between controller and observer. Unfortunately, this advantage is nullified by the very complicated dependence of the linear subsystem properties on K and L, which prevents their explicit computation.
(iii) Closed Loop Methods Based on the Input-to-Output Properties of the Loop Branches [2, ch. 6]. This family counts approaches inspired by the passivity theorem for feedback systems. The basic idea consists in representing the system as a loop of subsystems whose input-to-output properties prevent the existence of self-sustained nonzero signals. Suitable ad hoc mathematical manipulations allow one to decouple the model so as to enclose φ and φ̂ in different subsystems, but this causes the presence of nonlinear dynamic subsystems, whose input-to-output features are quite formidable to design explicitly in terms of K and L.
As highlighted above, designing a stabilizing controller with guaranteed performance in the presence of state estimation and inexact nonlinearity cancellation is a challenging problem. In the rest of the paper a simple and effective strategy based on dissipativity theory [19, ch. 9] will be presented, along with a comparative example against the traditional approaches.
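To make the interplay concrete, here is a minimal sketch of the traditional observer-based scheme: a plant, a Luenberger observer, and the canceling input built from the observer estimates. All matrices and the pair φ(z) = sin z, φ̂(z) = z (a first-order approximation) are assumed example data; the residual φ̂(ẑ) − φ(z) enters both the state and the estimation-error dynamics, which is exactly the coupling discussed above.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
H = np.array([1.0, 0.0])          # z = H x
phi  = lambda z: np.sin(z)        # true nonlinearity
phih = lambda z: z                # inexact approximation of phi
K = np.array([2.0, 3.0])          # A - B K Hurwitz
L = np.array([2.0, 1.0])          # A - L C Hurwitz

dt = 1e-3
x  = np.array([0.5, 0.0])         # plant state
xh = np.zeros(2)                  # observer state
for _ in range(10000):
    y  = C @ x
    zh = H @ xh
    u  = phih(zh) - K @ xh        # attempted cancellation + linear feedback
    # plant: driven by the true nonlinearity evaluated at the true argument
    x  = x  + dt * (A @ x + B * (u - phi(H @ x)))
    # observer: driven by the approximate nonlinearity at the estimated argument
    xh = xh + dt * (A @ xh + B * (u - phih(zh)) + L * (y - C @ xh))
print(np.linalg.norm(x), np.linalg.norm(xh - x))
```

Near the origin the residual is small here (z − sin z = O(z³)), so the scheme converges; a worse φ̂ or a larger initial condition would let the residual act as a destabilizing disturbance.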
A function φ : R → R is Lipschitz continuous on a subset of R if there exists a constant γ_φ ≥ 0 such that |φ(a) − φ(b)| ≤ γ_φ |a − b| for all a, b in that subset, (10) and γ_φ is referred to as its Lipschitz constant. In particular, φ is globally Lipschitz continuous if (10) holds over the entire real axis, whereas the function will be called locally Lipschitz continuous if (10) holds only on limited subsets of R.
Given a signal w(t) : [0, T] → Rᵐ, its L_p-norm is defined as

‖w‖_p = ( ∫₀ᵀ |w(t)|ᵖ dt )^(1/p),

where |·| is the Euclidean norm. The set of signals w with finite L_p-norm ‖w‖_p < ∞ forms a Banach space denoted by L_p. Finally, given an operator H : L_p → L_q, its induced (p, q)-norm is defined as

‖H‖_(p,q) = sup_{‖w‖_p ≠ 0} ‖Hw‖_q / ‖w‖_p.

The induced norm ‖H‖_(2,2) is also referred to as the H∞-norm of H.
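These definitions can be checked numerically on a toy example (signal and transfer function chosen here purely for illustration): the L₂-norm of w(t) = e^(−t) on [0, 10] is close to √(1/2), and for the stable SISO system H(s) = 1/(s + 1) the induced (2, 2)-norm equals the peak of |H(jω)| over frequency, attained at ω = 0.

```python
import numpy as np

# L2-norm of a sampled signal w : [0, T] -> R via a Riemann sum.
T, n = 10.0, 100000
t = np.linspace(0.0, T, n)
w = np.exp(-t)
l2 = np.sqrt(np.sum(w**2) * (t[1] - t[0]))      # ~ sqrt((1 - e^(-2T)) / 2)

# Induced (2,2)-norm (H-infinity norm) of H(s) = 1/(s+1): peak of |H(jw)|.
omega = np.linspace(0.0, 100.0, 100001)
hinf = np.max(np.abs(1.0 / (1j * omega + 1.0)))
print(l2, hinf)                                 # ~ 0.7071, 1.0
```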

An Alternative Control Strategy
In the previous section the remarkable conservatism of methods based on the linearized dynamics around the equilibrium has been highlighted, whereas the approaches inspired by the circle criterion and passivity theory would be more suitable for taking into account the interplay between observer and controller. Nonetheless, it has also been stressed that the complexity of these latter methods prevents an explicit solution of the problem. Here an alternative strategy to overcome the controller design difficulties that are intrinsic in the traditional observer-based approach is introduced in Section 2.1, whereas in Sections 2.2 and 2.3 the cases of globally and locally Lipschitz nonlinearities φ are considered, respectively. Finally, an explicit derivation of the optimal piecewise linear approximation of φ is introduced in Section 2.4.

Control Scheme.
Let us consider the nonlinear system (4) along with an output-feedback controller of the form (13). By denoting the implicit estimation error as ε(t) := ẑ(t) − z(t) and defining the useful output signal s := [z, ε]ᵀ, the complete system can be rewritten as (14)-(16). The corresponding scheme is illustrated in Figure 2. The proposed model features two feedback loops: the inner interconnection explicitly accounts for the actual control input u₁ regulating the linear subsystem, while the outer branch is responsible for the nonlinearity cancellation residual, which acts as a disturbance. Conditions for the inner loop to compensate the negative effects of the outer one are investigated hereafter.
A comparison with the previous model (7)-(9) also highlights a similar (extended) Lur'e structure, comprising a linear part feedback connected through the very same nonlinear function n representing the cancellation residual. However, despite the strong analogies, the two control strategies turn out not to be completely equivalent, as stated by the following proposition (see also [20] for further insights).
Proof. First, notice that the observer (5) and (6) can be rewritten in controller form. Therefore, by comparing these equations with controller (13), the two schemes are algebraically equivalent if and only if there exists an invertible transformation v = Tx̂ such that the matching conditions hold. These conditions prove that for each given (K, L) any invertible T provides an equivalent controller (F, G, H, E). On the other hand, by substituting back into the first equation, it follows that, given a certain (F, G, H, 0), there exists an equivalent observer-based scheme if and only if the resulting quadratic and linear matrix equations admit an invertible solution T. Observe that the equivalence condition can be relaxed by noticing that the feedback acts on a signal that is not directly measured, so that all the differences between the two schemes can be conveniently lumped into the nonlinear operator and eventually kept out of the problem. This is indeed the same as requiring that only the linear and directly accessible part of the system participate in the equivalence condition, which consequently reduces to equation (19) alone. In general, even the existence of a solution of just (19) requires the right combination of the system and controller matrices, which cannot be assumed a priori. For example, in the scalar case equation (19) reduces to a quadratic equation in T, which admits real solutions T only when its discriminant is nonnegative. The rest of this section will be devoted to setting up the results necessary for proving the following proposition.
Proposition 2. The interconnection scheme (14)-(16) can be studied via closed loop techniques based on the dissipativity of the linear subsystem and of the cancellation residual, also providing, under mild conditions, an explicit solution in terms of the matrices (F, G, H, E) representing controller (13).
In order to show the validity of this proposition, let us consider the linear subsystem described by (14)-(15) and observe that it is the feedback interconnection of (A, B, C, 0) and (F, G, H, E). The input-to-output behavior of this subsystem from the signal n to s is related to dissipativity by the existence of a positive definite storage function S(x, v) and a supply function W(s, n) such that (22) holds, or equivalently (23). Assuming that both the storage and the supply functions are quadratic (see [19, ch. 9]), condition (23) boils down to the matrix inequalities (24)-(25), which can be recast as (27) through the nonlinear transformations (28)-(29) for nonsingular matrices U and V satisfying (30). Proof. The proof follows the same outline as in [21, Section 4.2].
Observe that problem (27) is more appealing than (25) because it can be cast as an LMI, depending on the nature of the supply function W.
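For the L₂-gain supply function, the dissipativity of a given linear subsystem can also be checked without solving the LMI explicitly, via the standard bounded-real Hamiltonian test; the state-space data below are assumed example values, not taken from the paper.

```python
import numpy as np

# Bounded-real test for (A, B, C, 0) with A Hurwitz: the L2-gain is < g iff the
# Hamiltonian matrix [[A, B B'/g^2], [-C'C, -A']] has no purely imaginary eigenvalues.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

def gain_less_than(g):
    Ham = np.block([[A, (B @ B.T) / g**2],
                    [-C.T @ C, -A.T]])
    # no eigenvalue on the imaginary axis <=> all real parts bounded away from 0
    return bool(np.min(np.abs(np.linalg.eigvals(Ham).real)) > 1e-7)

# Here G(s) = 1/(s^2 + s + 2) has peak gain 1/sqrt(1.75) ~ 0.756.
print(gain_less_than(1.0), gain_less_than(0.6))  # True False
```

The same test underlies the solvability of the γ-dependent LMIs used later in the controller synthesis.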

Globally Lipschitz Nonlinearity.
Thanks to Proposition 3, the dissipativity conditions (25) can be enforced onto the linear subsystem (14)-(15) via the controller (F, G, H, E). Moreover, such a result can be achieved by designing the controller as in (29) by solving (27). The following result ensures that such a controller is capable of stabilizing the equilibrium at the origin if the supply function W(s, n) is properly chosen on the basis of the original nonlinearity φ and its approximation φ̂.

Proposition 5. Suppose that W satisfies the hypotheses of Proposition 4 and assume that the functions φ̂ and φ − φ̂ are globally Lipschitz with constants γ_φ̂ and γ_(φ−φ̂), respectively. Then, a sufficient condition for the controller (29) to make the equilibrium at the origin globally asymptotically stable is the existence of α ∈ (0, 1) such that the weights of W satisfy (31).

Proof. First, consider the nonlinear operator n(s) in (16) and observe that (32) holds. Then notice that a sufficient condition for (33) to be satisfied is (34). Indeed, by substituting the corresponding value into (33), one obtains an expression equivalent to inequality (36), which is always satisfied. Therefore, combining (32) and (33), it follows that (37) holds, which directly implies (38); that is, the nonlinearity (16) on the outer feedback loop of the proposed control scheme is finite-gain stable. Observe that such a property can be conveniently expressed by means of a constant storage function S_n and a corresponding supply function W_n for the nonlinear operator n(s), as in (39). Assume now that α ∈ (0, 1) satisfies (31) and consider the candidate Lyapunov function (40). By assumption, the controller (F, G, H, E) ensures that the linear subsystem (14)-(15) satisfies the dissipativity conditions (25). Therefore, by choosing S_n ≡ 0, one has that V(x, v) ≥ 0 with V(x, v) = 0 if and only if x = v = 0. Moreover, exploiting (39), it follows that (41) holds, showing that V(x, v) is a proper Lyapunov function for the complete system, and therefore the equilibrium at the origin is globally asymptotically stable.
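The key inequality behind the proof, |φ̂(z + ε) − φ(z)| ≤ √(γ_φ̂² + γ_(φ−φ̂)²) · |(z, ε)|, follows from the triangle and Cauchy-Schwarz inequalities: split the residual as [φ̂(z+ε) − φ̂(z)] + [(φ̂ − φ)(z)]. The sketch below verifies it on sampled points for an assumed pair φ(z) = tanh(2z), φ̂(z) = 2z (so both Lipschitz constants equal 2).

```python
import numpy as np

# phi(z) = tanh(2z); phih(z) = 2z is its tangent-line approximation at the origin.
# phih is Lipschitz with g1 = 2; (phi - phih)'(z) = 2 sech^2(2z) - 2 in [-2, 0], so g2 = 2.
phi  = lambda z: np.tanh(2.0 * z)
phih = lambda z: 2.0 * z
g1, g2 = 2.0, 2.0

rng = np.random.default_rng(0)
z = rng.uniform(-5.0, 5.0, 10000)
e = rng.uniform(-5.0, 5.0, 10000)          # estimation error on the argument
n = phih(z + e) - phi(z)                   # cancellation residual
bound = np.hypot(g1, g2) * np.hypot(z, e)  # sqrt(g1^2 + g2^2) * |(z, e)|
print(np.all(np.abs(n) <= bound + 1e-12))  # True
```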

Corollary 6.
A controller (F, G, H, E) ensuring the dissipativity conditions (25) on the linear subsystem, for a supply function chosen as in (42)-(43), can be designed via LMI resolution; moreover, it makes the equilibrium at the origin of the complete system globally asymptotically stable.
Proof. First, observe that the hypotheses of Proposition 4 are satisfied. Indeed, with the supply function weights chosen as in (42), one obtains conditions that can be transformed into proper LMIs by applying Schur's complement formula to (46), yielding (47)-(48). Then, notice that (31) becomes an expression equivalent to a pair of inequalities, which have to be solved for α ∈ (0, 1) under assumption (43). This problem is equivalent to finding α such that the corresponding interval is nonempty. Finally, notice that such a problem is well posed and can be solved if and only if (43) holds (see (47)-(48)). Nonetheless, the approach presented so far can be effectively applied only if the problem nonlinearity is globally Lipschitz continuous. Such an assumption might be quite restrictive in many applications; for instance, it prevents the exploitation of the proposed technique for systems having polynomial nonlinearities. Then, to widen the scope of the proposed approach, in the following the previous results will be adjusted under the assumption that the nonlinearities φ and φ̂ are just locally Lipschitz continuous.
Remark 7. The assumptions on the nonlinear operators cannot be relaxed much further, as local Lipschitz continuity is the standard condition guaranteeing existence and uniqueness of the solution of an ordinary differential equation [22].
Suppose that φ̂ is designed so that the local finite-gain conditions hold on the region of interest, and define the two projection vectors (52).

Proposition 8. A controller (F, G, H, E), designed via Proposition 5 or Corollary 6 using the gains in (53), locally stabilizes the equilibrium at the origin. Moreover, all the initial conditions satisfying (54) belong to its domain of attraction, K being defined as in (28) and C_s as in (26).
Proof. First, observe that (F, G, H, E) is guaranteed to stabilize the equilibrium only if the system dynamics satisfy (55) for an open set of initial conditions containing it. Indeed, in such a region the nonlinear operator (16) on the outer feedback loop satisfies condition (38) thanks to (32) and (53). The candidate Lyapunov function (40) with S_n ≡ 0 is again a proper Lyapunov function, though this time with only local validity. Therefore, in the region where V(x, v) is a local Lyapunov function, its level curves represent invariant ellipsoids of the system; that is, any initial condition inside one of these ellipsoids generates trajectories which never exit it. Moreover, observe that these regions can also be characterized via (56) for positive values of c, by getting rid of the constant term in V(x, v), which can be eliminated without any loss of generality thanks to S_n ≡ 0. Then, since γ_φ̂, γ_(φ−φ̂) > 0, the region singled out by (56) is not empty, and therefore one can always find a sufficiently small c such that the invariant ellipsoid (57) is completely contained inside it. This is sufficient to guarantee the local stability of the equilibrium. Let us then look for the biggest ellipsoid (57) belonging to the domain of attraction of the equilibrium. A given initial condition (x₀, v₀) belongs to an invariant ellipsoid if and only if (58) holds. Since by assumption K > 0, (58) can be rewritten as (59) using the Schur complement. Then, consider the matrix C_s as in (26) and observe that (60) holds. Hence, by using the Schur complement again, one finally obtains (61), which represents an output invariant ellipsoid for the linear subsystem (14)-(15). In order to find when such an ellipsoid is tangent to the level curves of the bounding region, one has to impose that its gradient is parallel to theirs, that is, (62) for some real value λ. The corresponding outputs, that is, the tangent points, satisfy (63), and then, since they are related through (56), one obtains (64). Substituting the tangent point into the equation of the output invariant ellipsoid (61),
instead, one obtains (65), and therefore the maximum and minimum values of c derive from (66). Combining (64) and (66), it follows that (67) holds. Then, repeating the same process for the second projection vector and the gain γ_(φ−φ̂), and taking the strictest constraint, concludes the proof.
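The tangency computation above can be reproduced numerically for an assumed two-dimensional example: the largest level set {x : xᵀPx ≤ c} of a quadratic Lyapunov function contained in a slab |z| ≤ δ, z = Hx, is obtained at c = δ²/(HP⁻¹Hᵀ), where the ellipsoid is tangent to the bounding lines. All data below are illustrative, not the paper's.

```python
import numpy as np

# Largest sublevel set of V(x) = x' P x inside the validity region |z| <= delta.
# The maximum of H x over {x' P x <= c} is sqrt(c * H P^{-1} H'),
# so tangency occurs at c_max = delta^2 / (H P^{-1} H').
P = np.array([[2.0, 0.5], [0.5, 1.0]])     # Lyapunov matrix, P > 0
H = np.array([1.0, 0.0])                   # z = x_1
delta = 0.8                                # local Lipschitz bound |z| <= delta

c_max = delta**2 / (H @ np.linalg.solve(P, H))

# Numerical check of tangency: max |H x| on the ellipsoid boundary equals delta.
th = np.linspace(0.0, 2.0 * np.pi, 100000)
Lc = np.linalg.cholesky(np.linalg.inv(P))  # boundary: x = sqrt(c) * Lc @ (cos, sin)
xs = np.sqrt(c_max) * Lc @ np.vstack([np.cos(th), np.sin(th)])
print(np.max(np.abs(H @ xs)))              # ~ delta
```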
The previous results highlight that in the presence of inexact cancellation, that is, φ − φ̂ ≢ 0, a (local) stabilizing controller can be designed with reference to the combined (local) gain √(γ_φ̂² + γ_(φ−φ̂)²). Therefore, the approximating nonlinearity φ̂ should aim to minimize this value rather than the cancellation error ‖φ − φ̂‖₂ alone. Such a problem admits an interesting interpretation when both φ and φ̂ are continuous on R, differentiable almost everywhere, and with derivatives locally bounded on every compact subset of R. Indeed, under these assumptions the above L₂-gains directly depend on the maxima and minima of the derivatives of φ̂ and φ − φ̂, and the approximation process can be split over a finite partition of R, resulting in a convex optimization problem.
In order to illustrate such a situation, and for the sake of simplicity, in the following φ will be locally approximated on a limited interval by means of a piecewise linear function φ̂.

Optimal Piecewise Linear Approximation.
where Ω denotes the set of knots of the partition, with the central knot placed at the origin. To ensure that φ̂ is continuous, a matching condition at the knots must be satisfied. Similarly, one has to impose φ̂(0) = 0 in order to preserve the equilibrium at the origin. Finally, let us suppose that φ is only partially known, but that at least the maxima and minima of its derivative φ′ inside the natural partition of I given by Ω are known. In order to solve the above optimization problem, define the functions ψ₊ and ψ₋ and observe the form of the argument of the previous minimum. Points of maxima and minima of ψ₊ and ψ₋ can be derived by setting their derivatives to zero, thus obtaining the corresponding stationarity conditions and the resulting values. Observe that the minimum value of γ for which (47)-(48) are solvable is γ = 1, and therefore the maximum inexact cancellation Lipschitz constant for which the system is guaranteed to be stable is √(γ_φ̂² + γ_(φ−φ̂)²) = 1 according to Corollary 6, although such a bound is conservative.
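Under the stated differentiability assumptions, the per-interval optimization is elementary: on each interval of the partition, the constant slope of φ̂ that minimizes the worst-case local Lipschitz constant of φ − φ̂ is the midpoint of the range of φ′ there. The sketch below illustrates this for an assumed φ(z) = sin z, ignoring for simplicity the continuity constraint that couples neighboring intervals in the full convex program.

```python
import numpy as np

# Assumed example: phi(z) = sin(z) on I = [-2, 2]; phi'(z) = cos(z).
phi_prime = lambda z: np.cos(z)
knots = np.linspace(-2.0, 2.0, 9)          # partition Omega of I

slopes, local_gains = [], []
for a, b in zip(knots[:-1], knots[1:]):
    zz = np.linspace(a, b, 1001)
    lo, hi = phi_prime(zz).min(), phi_prime(zz).max()
    slopes.append(0.5 * (lo + hi))         # minimax-optimal constant slope
    local_gains.append(0.5 * (hi - lo))    # resulting local gain of phi - phih

print(max(local_gains))                    # worst-case derivative of the mismatch
```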
The controller is designed by solving (47)-(48) with γ = (γ_φ̂² + γ_(φ−φ̂)²)^(−1/2), in order to account for the worst-case scenario in terms of expected performance. The resulting controlled dynamics are now stable and converge to the equilibrium at the origin, as shown in Figure 5. In Figure 6 the predicted invariant ellipsoid (in green) is plotted together with the trajectories (solid lines) and the bounds defined by γ_φ̂ and γ_(φ−φ̂) (red). Note that the invariant ellipsoid is the biggest one contained within the bounds and that the trajectories are entirely contained within such an ellipsoid, as expected.
To compare the performance of the proposed technique with that of the standard observer-based approach, a controller was designed according to the procedure described by Boyd et al. in [17], which also allows one to take into account the discrepancy between the slopes of φ and φ̂ at the origin. A comparison of the dynamics controlled by the two techniques is shown in Figure 7, where it is evident that both schemes are capable of stabilizing the dynamics, but the convergence is faster when using the control scheme proposed in this paper. In Figure 8 the Bode diagrams of the two controllers highlight the differences in dynamic properties that make the proposed solution preferable. First, observe that the traditional controller has higher gains at low frequencies. This is due to its overconservative design, which is mainly based on the knowledge of the nonlinearity's slope at the origin. In comparison, the proposed solution uses less power when the trajectory is close to the equilibrium; that is, it is more efficient around the fixed point. Moreover, notice that this latter controller has a larger bandwidth than the observer-based one. Hence, the traditional solution is less responsive when the system dynamics are faster, as happens, for instance, when the trajectory is not close to the equilibrium. This explains the better convergence rate exhibited by the proposed controller in Figure 7, where the starting point is sufficiently far from the fixed point. Therefore, the proposed solution is more effective on larger domains around the equilibrium. Finally, the traditional approach does not provide any a priori guarantee that the closed loop dynamics will be stable within a given region; that is, it is not possible to plot a figure like Figure 6 in the traditional framework.
The above comparison suggests that the approach presented in this work is able to fruitfully exploit a minimal amount of information about the quality of the nonlinearity approximation (basically only the Lipschitz constants γ_φ̂ and γ_(φ−φ̂)) in order to achieve better performance than the traditional technique.

Conclusions
Classical nonlinear control techniques based on nonlinearity cancellation exhibit poor robustness properties with respect to model uncertainty and inexact cancellations, especially when a state estimate must be used in place of the true state. This may induce poor closed loop performance and even instability. In this paper, a new control strategy has been introduced to explicitly take into account the interplay between the accuracy of the nonlinearity cancellation and the performance achievable by the controller acting on the linear part of the system. By rewriting the problem as a nonstandard robust control problem and using dissipativity methods, sufficient conditions for the system to be input-to-output stable have been provided and exploited to derive explicit controller design and nonlinearity approximation strategies. Both globally and locally Lipschitz nonlinearities can be dealt with in this framework. For the case of locally Lipschitz nonlinearities, an estimate of a region of the state space where the closed loop is guaranteed a priori to be stable is also provided. The effectiveness of the proposed technique has been tested on a benchmark Chua system, showing that better performance can be achieved when using the design introduced in this paper.

Figure 1: The traditional connection scheme made of a nonlinear state observer, a canceling nonlinearity, and a linear static controller.

Figure 2: The alternative connection scheme featuring the double feedback loop comprising the input-to-output controller and the nonlinearity cancellation residual.

Proposition 4.
Sufficient conditions for problem (27) to have an LMI formulation are either W_n = 0 or W_n = MᵀN⁻¹M for matrices M and N of compatible dimensions such that N < 0. Proof. See [21, Lemma 4.2] and the references therein.

Figure 6: Examples of controlled trajectories (solid lines) projected on the s = (z, ε) plane, together with the a priori bounds defined by γ_φ̂ and γ_(φ−φ̂) (dashed red line) and the predicted invariant ellipsoid (dashed green line).

Figure 7: Comparison between the trajectories obtained by simulating the Chua system (82) controlled by the proposed controller (solid blue) and an observer-based controller designed according to [17] (dashed green).

Figure 8: Bode diagrams of the proposed controller (solid blue) and the traditional solution described in [17] (dashed green). The first controller exhibits a larger bandwidth with lower gains at low frequencies, as a result of better efficiency over a larger domain of effectiveness.