New directions in model predictive control (MPC) are introduced. On the one hand, we combine input-to-state dynamical stability (ISDS) with MPC for single and interconnected systems. On the other hand, we introduce MPC schemes guaranteeing input-to-state stability (ISS) of single systems and of networks with time delays. In both directions, recent results of the stability analysis from the mentioned areas are applied using Lyapunov function(al)s: we show that the corresponding cost function(al) of the MPC scheme is a Lyapunov function(al). For networks, we show that under a small-gain condition, the network with an optimal control obtained by an MPC scheme has the ISDS or the ISS property, respectively.
1. Introduction
The approach of MPC started in the late 1970s and became widespread in the 1990s with the increasing use of automation in industry. It has a wide range of applications, see the survey papers [1, 2].
The aim of MPC is to control a system such that it follows a certain trajectory, or to steer the solution of a system into an equilibrium point, under constraints and unknown disturbances. Additionally, the control should be optimal with respect to given criteria, for example, the control effort. An overview of MPC can be found in the books [3–5] and the Ph.D. theses [6–8], for example.
We consider systems with disturbances of the form
(1.1) ẋ(t) = f(x(t), w(t), u(t)),
where w∈𝒲⊆L∞(ℝ+,ℝP) is the unknown disturbance and 𝒲 is a compact and convex set containing the origin. The input u is a measurable and essentially bounded control subject to input constraints u∈𝒰, where 𝒰⊆ℝm is a compact and convex set containing the origin in its interior. The function f is assumed to be locally Lipschitz in x uniformly in w and u to guarantee that a unique solution of (1.1) exists, which is denoted by x(t;x0,w,u) or x(t) in short.
The control input is obtained by an MPC scheme and applied to the system. We are interested in the stability of MPC. It was shown in [9] that applying the control obtained by an MPC scheme does not, in general, guarantee asymptotic stability, even for a system without disturbances. For applications, it is therefore important to know under which conditions stability can be achieved using an MPC scheme. An overview of existing results regarding stability and MPC for systems without disturbances can be found in [10], and recent results are included in [5–8]. A general framework for the design of stabilizing MPC controllers for nonlinear systems can be found in [11].
Taking the unknown disturbance w∈𝒲 into account, MPC schemes which guarantee input-to-state stability (ISS) were developed. First results can be found in [12] regarding ISS for MPC of nonlinear discrete-time systems. Furthermore, results using the ISS property with initial states from a compact set, namely, regional-ISS, are given in [6, 13]. In [14, 15], an MPC scheme that guarantees ISS using the so-called min-max approach was given. The approach uses a closed-loop formulation of the optimization problem to compensate the effect of the unknown disturbance.
Stable MPC schemes for interconnected systems were investigated in [6, 16, 17]: in [6, 16], conditions to assure ISS of the whole system were derived, and in [17], asymptotically stable MPC schemes without terminal constraints were provided. Note that in [17], the subsystems are not directly coupled, but they exchange information over the network in order to satisfy state constraints.
One research topic of this paper provides a new direction in MPC: we combine the input-to-state dynamical stability (ISDS) property, introduced in [18], with MPC for single and interconnected systems. The provided MPC scheme uses the min-max approach (see [14, 15]). Conditions are derived such that single closed-loop systems and whole closed-loop networks with an optimal control obtained by an MPC scheme have the ISDS property. The results of [18] for single systems and the ISDS small-gain theorem for networks (see [19]) are applied to prove the main results of the corresponding section.
The advantage of using ISDS instead of ISS for MPC is that the ISDS estimate takes only recent values of the disturbance into account, due to the memory-fading effect, see [18, 19]. In particular, if the disturbance tends to zero, then the ISDS estimate tends to zero. Moreover, the decay rate can be derived using ISDS-Lyapunov functions. This information can be useful in applications of MPC.
In practice, there are problems where the advantages of ISDS over ISS, in particular the memory-fading effect of the ISDS estimate, lead to more cost-efficient controllers. Examples are the control of airplanes, robots, or automated transportation vehicles.
A second research topic of this paper is the stability analysis of MPC schemes for systems with time-delays. Time-delays occur in many applications, for example, in communication networks, logistic networks, or biological systems. The presence of time-delays can lead to instability of a network; recall from [9] that the application of the control obtained by an MPC scheme does not by itself guarantee asymptotic stability, even without disturbances.
Therefore, we are interested in the analysis of networks with time-delays in view of input-to-state stability (ISS). In [2, 20], tools based on the Lyapunov-Razumikhin and Lyapunov-Krasovskii approaches were developed to check whether a single system with time-delays has the ISS property. For networks with time-delays, recent ISS results were given in [21] using a small-gain condition.
Considering time-delay systems (TDSs) and MPC, recent results on asymptotically stable MPC schemes for single systems can be found in [22, 23]. In these works, continuous-time TDSs were investigated and conditions were derived which guarantee asymptotic stability of a TDS using a Lyapunov-Krasovskii approach. Moreover, with the help of Lyapunov-Razumikhin arguments, it was shown how to determine the terminal cost and the terminal region, and how to compute a locally stabilizing controller.
As a second part of this paper, we investigate the ISS property for MPC of single systems and networks with time-delays. Conditions are derived such that single closed-loop TDSs and whole closed-loop time-delay networks with an optimal control obtained by an MPC scheme have the ISS property. The results of the Lyapunov-Krasovskii approach, introduced in [20] for single systems and the corresponding small-gain theorem proved in [21] for networks with time-delays, are applied to prove the main results of the corresponding section.
Since time-delays and disturbances appear in many problems, the results of the second part of this paper, regarding ISS for MPC of time-delay systems, can be applied to a wide range of practical problems. Classical examples are not only communication networks, transportation, or production systems, but also biological or chemical networks.
In comparison to existing results in the literature, where ISS for MPC was only investigated for single systems (see [6, 13–15]) and networks (see [6, 16]) without time-delays, we use, on the one hand, the advantages of ISDS for MPC, in particular the memory-fading effect. On the other hand, we use the stability notion ISS for MPC of systems with time-delays and disturbances, whereas in the literature only MPC schemes for single time-delay systems without disturbances were investigated, in view of asymptotic stability (see [22, 23]). To the best of our knowledge, both approaches presented in this paper are new, and this paper is a first theoretical step in the mentioned directions.
This paper is organized as follows: the preliminaries are given in Section 2. In Section 3.1, an MPC scheme of single systems guaranteeing ISDS is provided. ISDS for MPC of networks is investigated in Section 3.2, where we prove that each subsystem has the ISDS property and the whole network has the ISDS property using the control obtained by an MPC scheme. In Section 4.1, the ISS property for MPC of single systems is investigated. Networks with time-delays are considered in Section 4.2. Finally, the conclusions and an outlook for future research possibilities can be found in Section 5.
2. Preliminaries
By xT we denote the transposition of a vector x∈ℝn,n∈ℕ; furthermore, ℝ+∶=[0,∞) and ℝ+n denotes the positive orthant {x∈ℝn:x≥0}, where we use the standard partial order for x,y∈ℝn given by
(2.1) x ≥ y ⟺ xi ≥ yi, i = 1,…,n;  x ≱ y ⟺ ∃i: xi < yi;  x > y ⟺ xi > yi, i = 1,…,n.
|·| denotes the Euclidean norm in ℝn. The essential supremum norm of a (Lebesgue-) measurable function f:ℝ→ℝn is denoted by ∥f∥. We denote the set of essentially bounded (Lebesgue-) measurable functions u from ℝ to ℝm by
(2.2) L∞(ℝ,ℝm) ∶= {u: ℝ → ℝm measurable ∣ ∃K > 0: |u(t)| ≤ K for almost all (f.a.a.) t}.
∇V is the gradient of a function V:ℝn→ℝ+.
For t1,t2 ∈ ℝ, t1 < t2, let C([t1,t2]; ℝN) denote the Banach space of continuous functions defined on [t1,t2] with values in ℝN, equipped with the norm ∥ϕ∥[t1,t2] ∶= sup t1≤s≤t2 |ϕ(s)|. Let θ ∈ ℝ+. The function xt ∈ C([-θ,0]; ℝN) is given by xt(τ) ∶= x(t+τ), τ ∈ [-θ,0].
For a function v:ℝ+→ℝm, we define its restriction to the interval [s1,s2] by
(2.3) v[s1,s2](t) ∶= v(t), if t ∈ [s1,s2], and 0 otherwise, for t, s1, s2 ∈ ℝ+.
Definition 2.1.
We define the following classes of functions:
(2.4)
𝒫 ∶= {f: ℝn → ℝ+ ∣ f(0) = 0, f(x) > 0 for x ≠ 0},
𝒦 ∶= {γ: ℝ+ → ℝ+ ∣ γ is continuous, γ(0) = 0, and strictly increasing},
𝒦∞ ∶= {γ ∈ 𝒦 ∣ γ is unbounded},
ℒ ∶= {γ: ℝ+ → ℝ+ ∣ γ is continuous and decreasing with limt→∞ γ(t) = 0},
𝒦ℒ ∶= {β: ℝ+ × ℝ+ → ℝ+ ∣ β is continuous, β(·,t) ∈ 𝒦, β(r,·) ∈ ℒ, ∀t, r ≥ 0},
𝒦ℒ𝒟 ∶= {μ ∈ 𝒦ℒ ∣ μ(r, t+s) = μ(μ(r,t), s), ∀r, t, s ≥ 0}.
We will call functions of class 𝒫 positive definite.
Now, we recall some results related to ISDS. Therefore, we consider systems of the form
(2.5) ẋ(t) = f(x(t), u(t)),
where t∈ℝ+ is the (continuous) time, x˙ denotes the derivative of the state x∈ℝN, and u∈L∞(ℝ+,ℝm) is the input. The function f:ℝN+m→ℝN, N,m∈ℕ, is assumed to be locally Lipschitz continuous in x uniformly in u to have existence and uniqueness of the solution, denoted by x(t;x0,u) or x(t) for short, for the given initial value x(0)=x0.
Definition 2.2 (ISDS).
The system (2.5) is called input-to-state dynamically stable (ISDS) if there exist μ ∈ 𝒦ℒ𝒟 and η, γ ∈ 𝒦∞ such that for all initial values x0 and all inputs u, it holds that
(2.6) |x(t)| ≤ max{μ(η(|x0|), t), ess supτ∈[0,t] μ(γ(|u(τ)|), t-τ)},
for all t ∈ ℝ+. Here, μ is called the decay rate, η the overshoot gain, and γ the robustness gain.
A useful tool to check whether a system has the ISDS property is the following.
Definition 2.3 (ISDS-Lyapunov function).
Given ε>0, a function V:ℝN→ℝ+, which is locally Lipschitz on ℝN∖{0}, is called an ISDS-Lyapunov function of the system (2.5) if there exist η,γ∈𝒦∞,μ∈𝒦ℒ𝒟 such that
(2.7) |x|/(1+ε) ≤ V(x) ≤ η(|x|), ∀x ∈ ℝN,
(2.8) V(x) > γ(|u|) ⇒ ∇V(x) f(x,u) ≤ -(1-ε) g(V(x))
holds for almost all x ∈ ℝN∖{0} and all u, where μ solves
(2.9) (d/dt) μ(r,t) = -g(μ(r,t)), r, t > 0,
for a locally Lipschitz continuous function g:ℝ+→ℝ+.
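As a quick illustration of how the decay rate μ arises from (2.9), the following sketch integrates dμ/dt = -g(μ) numerically for an assumed example rate g(r) = c·r (so that the closed form is μ(r,t) = r·e^{-ct}) and checks the 𝒦ℒ𝒟 semigroup property μ(r, t+s) = μ(μ(r,t), s). All concrete choices here (the linear g, the constant c, the explicit Euler scheme) are illustrative assumptions, not part of the theory above.

```python
import math

def decay(r, t, c=0.5, steps=2000):
    """Numerically integrate d/dt mu = -g(mu) with g(r) = c*r (an assumed
    example rate) from mu(r, 0) = r, using explicit Euler for illustration."""
    mu = r
    h = t / steps
    for _ in range(steps):
        mu -= h * c * mu
    return mu

# closed form for this g: mu(r, t) = r * exp(-c t)
r, t = 2.0, 1.0
approx = decay(r, t)
exact = r * math.exp(-0.5 * t)

# KLD semigroup property: mu(r, t+s) == mu(mu(r, t), s)
lhs = decay(r, 1.5)
rhs = decay(decay(r, 1.0), 0.5)
```

For nonlinear g the same integration applies; only the closed-form comparison is specific to the linear choice.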
The equivalence of ISDS and the existence of an ISDS-Lyapunov function were proved in [18].
Theorem 2.4.
The system (2.5) is ISDS with μ∈𝒦ℒ𝒟 and η,γ∈𝒦∞ if and only if for each ε>0 there exists an ISDS-Lyapunov function V.
Remark 2.5.
Note that for a system which possesses the ISDS property, the decay rate μ and the gains η, γ in Definition 2.2 are exactly the same as in Definition 2.3.
Now, consider networks of the form
(2.10)x˙i(t)=fi(x1(t),…,xn(t),ui(t)),i=1,…,n,
where n∈ℕ, xi∈ℝNi,Ni∈ℕ, ui∈L∞(ℝ+,ℝMi), and fi:ℝ∑j=1nNj+Mi→ℝNi are locally Lipschitz in x=(x1T,…,xnT)T uniformly in ui,i=1,…,n. If we define N∶=∑Ni,m=∑Mi, and f∶=(f1T,…,fnT)T, then (2.10) can be written as a system of the form (2.5), which we call the whole system.
The ith subsystem of (2.10) is called ISDS if there exists a 𝒦ℒ𝒟-function μi and functions ηi,γi, and γij∈𝒦∞∪{0} such that the solution xi(t;xi0,ui)=xi(t) for all initial values xi0, all inputs xj,j≠i,ui, and for all t∈ℝ+ satisfies
(2.11) |xi(t)| ≤ max{μi(ηi(|xi0|), t), maxj≠i νij(xj, t), νi(ui, t)},
where
νi(ui, t) ∶= ess supτ∈[0,t] μi(γi(|ui(τ)|), t-τ),
νij(xj, t) ∶= supτ∈[0,t] μi(γij(|xj(τ)|), t-τ), i, j = 1,…,n, i ≠ j.
The functions γij are called gains.
We collect all the gains in a matrix Γ, defined by Γ∶=(γij)n×n with γii≡0,i,j=1,…,n. This defines a map Γ:ℝ+n→ℝ+n for s∈ℝ+n by
(2.12)Γ(s)∶=(maxjγ1j(sj),…,maxjγnj(sj))T.
In view of ISDS of the whole network, we say that Γ satisfies the small-gain condition (SGC) (see [24]) if
(2.13)Γ(s)≱s,∀s∈ℝ+n∖{0}.
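For intuition, the small-gain condition (2.13) can be spot-checked numerically for simple gain choices. The sketch below uses assumed linear gains γij(r) = aij·r, so that Γ is the map (2.12) with entries maxj aij·sj, and random sampling; such a test can only refute (2.13) by exhibiting a point s with Γ(s) ≥ s, never prove it.

```python
import numpy as np

def gamma_map(A, s):
    # Gamma(s)_i = max_j gamma_ij(s_j) as in (2.12), for linear gains a_ij * r
    return np.max(A * s, axis=1)

def sgc_spot_check(A, n_samples=10000, seed=0):
    """Search randomly for a point s in R^n_+ \\ {0} with Gamma(s) >= s;
    finding one refutes the small-gain condition (2.13)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    for _ in range(n_samples):
        s = rng.uniform(0.0, 1.0, n)
        if not s.any():
            continue
        if np.all(gamma_map(A, s) >= s):   # Gamma(s) >= s violates (2.13)
            return False, s
    return True, None

# two subsystems with cycle gain a_12 * a_21 = 0.12 < 1: no violation expected
ok, _ = sgc_spot_check(np.array([[0.0, 0.4],
                                 [0.3, 0.0]]))
```

Replacing the matrix by gains whose cycle product exceeds one (e.g. a12 = 2, a21 = 1) makes the check return a violating point.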
To recall the Lyapunov version of the small-gain theorem for ISDS, we need the following.
Definition 2.6.
A continuous path σ∈𝒦∞n is called an Ω-path with respect to Γ if
(i) for each i, the function σi⁻¹ is locally Lipschitz continuous on (0,∞);
(ii) for every compact set P ⊂ (0,∞), there are constants 0 < K1 < K2 such that for all points of differentiability of σi⁻¹ and i = 1,…,n we have
(2.14) 0 < K1 ≤ (σi⁻¹)′(r) ≤ K2, ∀r ∈ P;
(iii) it holds that Γ(σ(r)) < σ(r), for all r > 0.
More details about an Ω-path can be found in [24–26].
The following proposition is useful for the construction of an ISDS-Lyapunov function for the whole system.
Proposition 2.7.
Let Γ∈(𝒦∞∪{0})n×n be a gain-matrix. If Γ satisfies the small-gain condition (2.13), then there exists an Ω-path σ with respect to Γ.
The proof can be found in [24], for example.
We assume that for each subsystem of (2.10) there exists a locally Lipschitz continuous and positive definite function Vi: ℝNi → ℝ+. Given εi > 0, a function Vi: ℝNi → ℝ+, which is locally Lipschitz continuous on ℝNi∖{0}, is an ISDS-Lyapunov function of the ith subsystem in (2.10) if it satisfies the following:
there exists a function ηi∈𝒦∞ such that for all xi∈ℝNi it holds
(2.15) |xi|/(1+εi) ≤ Vi(xi) ≤ ηi(|xi|);
there exist functions μi∈𝒦ℒ𝒟,γi∈𝒦∞∪{0}, γij∈𝒦∞∪{0},j=1,…,n,i≠j such that for almost all xi∈ℝNi∖{0}, all inputs xj,j≠i, and ui it holds that
(2.16) Vi(xi) > max{maxj≠i γij(Vj(xj)), γi(|ui|)} ⇒ ∇Vi(xi) fi(x, ui) ≤ -(1-εi) gi(Vi(xi)),
where μi∈𝒦ℒ𝒟 solves (d/dt)μi(r,t)=-gi(μi(r,t)),r,t>0 for some locally Lipschitz function gi:ℝ+→ℝ+.
Now, we recall the main result of [19], which establishes ISDS for networks using Lyapunov functions.
Theorem 2.8.
Assume that each subsystem of (2.10) has the ISDS property. This means that for each subsystem and for each εi>0 there exists an ISDS-Lyapunov function Vi, which satisfies (2.15) and (2.16). Let Γ be given by (2.12), satisfying the small-gain condition (2.13), and let σ∈𝒦∞n be an Ω-path from Proposition 2.7 with respect to Γ. Then, the whole system (2.5) has the ISDS property and its ISDS-Lyapunov function is given by
(2.17)V(x)=ψ-1(maxi{σi-1(Vi(xi))}),
where ψ(|x|)=miniσi-1(|x|/n).
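To make the construction (2.17) concrete, here is a minimal numerical sketch for n = 2 with assumed choices Vi(xi) = |xi| and a linear Ω-path σi(r) = ci·r (taken as given, i.e., assumed to satisfy Γ(σ(r)) < σ(r) for the gains at hand); for these choices σi⁻¹(v) = v/ci and ψ⁻¹(v) = n·(maxi ci)·v.

```python
import numpy as np

# Assumed illustrative data: n = 2 subsystems, V_i(x_i) = |x_i|,
# linear Omega-path sigma_i(r) = c[i] * r.
c = np.array([1.0, 2.0])
n = len(c)

def V_sub(x_i):
    # subsystem ISDS-Lyapunov function V_i (assumed: Euclidean norm)
    return float(np.linalg.norm(x_i))

def sigma_inv(i, v):
    # sigma_i^{-1}(v) = v / c_i for the linear path
    return v / c[i]

def psi_inv(v):
    # psi(s) = min_i sigma_i^{-1}(s/n) = s / (n * max_i c_i),
    # hence psi^{-1}(v) = n * (max_i c_i) * v
    return n * c.max() * v

def V_whole(xs):
    """Composite candidate (2.17): V(x) = psi^{-1}(max_i sigma_i^{-1}(V_i(x_i)))."""
    return psi_inv(max(sigma_inv(i, V_sub(x_i)) for i, x_i in enumerate(xs)))
```

The composite V inherits positive definiteness from the Vi, which is the qualitative content of Theorem 2.8 in this toy setting.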
As a second topic of this paper, we are going to establish ISS with the help of MPC for TDSs of the form
(2.18)x˙(t)=f(xt,u(t)),t∈ℝ+,x0(τ)=ξ(τ),τ∈[-θ,0],
where x∈ℝN,u∈L∞(ℝ+,ℝm), and “·” represents the right-hand side derivative. θ is the maximum involved delay, and f:C([-θ,0];ℝN)×ℝm→ℝN is locally Lipschitz continuous on any bounded set. This guarantees that the system (2.18) admits a unique solution on a maximal interval [-θ,Tmax), 0<Tmax≤+∞, which is locally absolutely continuous, see [27, Section 2.6]. We denote the solution by x(t;ξ,u) or x(t) for short, satisfying the initial condition x0≡ξ for any ξ∈C([-θ,0],ℝN).
The notion of ISS for TDSs reads as follows.
Definition 2.9 (ISS for TDSs).
The system (2.18) is called ISS if there exist β∈𝒦ℒ and γ∈𝒦 such that for all ξ, all u, and all t∈ℝ+ it holds that
(2.19)|x(t)|≤max{β(∥ξ∥[-θ,0],t),γ(∥u∥)}.
In [20], ISS-Lyapunov-Krasovskii functionals are introduced to check whether a TDS has the ISS property. Given a locally Lipschitz continuous functional V: C([-θ,0]; ℝN) → ℝ+, the upper right-hand derivative D+V of the functional V along the solution x(t;ξ,u) is defined according to [27, Chapter 5.2] by
(2.20) D+V(ϕ,u) ∶= lim suph→0+ (1/h)(V(xt+h) - V(ϕ)),
where xt+h∈C([-θ,0];ℝN) is generated by the solution x(t;ϕ,u) of x˙(t)=f(xt,u(t)), and t∈(t0,t0+h) with xt0∶=ϕ∈C([-θ,0];ℝN).
Remark 2.10.
Note that in contrast to (2.20), the definition of D+V in [20] is slightly different, since there the functional is assumed to be only continuous and in that case, D+V can take infinite values. Nevertheless, the results in [20] also hold true if the functional is chosen to be locally Lipschitz, according to the results in [28] and using (2.20).
By ∥·∥a, we indicate any norm in C([-θ,0];ℝN) such that for some c1,c2∈ℝ+∖{0} the following inequalities hold:
(2.21)c1|ϕ(0)|≤∥ϕ∥a≤c2∥ϕ∥[-θ,0],∀ϕ∈C([-θ,0];ℝN).
Definition 2.11 (ISS-Lyapunov-Krasovskii functional).
A locally Lipschitz continuous functional V: C([-θ,0]; ℝN) → ℝ+ is called an ISS-Lyapunov-Krasovskii functional for the system (2.18) if there exist functions ψ1, ψ2 ∈ 𝒦∞ and functions χ, α ∈ 𝒦 such that
(2.22) ψ1(|ϕ(0)|) ≤ V(ϕ) ≤ ψ2(∥ϕ∥a),
(2.23) V(ϕ) ≥ χ(|u|) ⇒ D+V(ϕ,u) ≤ -α(V(ϕ)),
for all ϕ∈C([-θ,0];ℝN),u∈L∞(ℝ+,ℝm).
The next theorem was proved in [20].
Theorem 2.12.
If there exists an ISS-Lyapunov-Krasovskii functional V for the system (2.18), then the system (2.18) has the ISS property.
Now, we investigate networks with time-delays: we consider n∈ℕ interconnected TDSs of the form
(2.24)x˙i(t)=fi(x1t,…,xnt,ui(t)),i=1,…,n,
where xit∈C([-θ,0];ℝNi),xit(τ)∶=xi(t+τ),τ∈[-θ,0],xi∈ℝNi, and ui∈L∞(ℝ+,ℝMi). θ denotes the maximal involved delay and xjt,j≠i can be interpreted as internal inputs of the ith subsystem. The functionals fi:C([-θ,0];ℝN1)×⋯×C([-θ,0];ℝNn)×ℝMi→ℝNi are locally Lipschitz continuous on any bounded set. We denote the solution of a subsystem by xi(t;ξi,u) or xi(t) for short, satisfying the initial condition xi0≡ξi for any ξi∈C([-θ,0],ℝNi).
The ISS property for a subsystem of (2.24) reads as follows: the ith subsystem of (2.24) is ISS if there exist βi ∈ 𝒦ℒ and γij, γi ∈ 𝒦∞∪{0}, j = 1,…,n, j ≠ i, such that for all t ∈ ℝ+ it holds that
(2.25) |xi(t)| ≤ max{βi(∥ξi∥[-θ,0], t), maxj,j≠i γij(∥xj∥[-θ,t]), γi(∥ui∥)}.
If we define N∶=∑Ni, m∶=∑Mi, x∶=(x1T,…,xnT)T, u=(u1T,…,unT)T, and f∶=(f1T,…,fnT)T, then (2.24) can be written as a system of the form (2.18), which we call the whole system. The Krasovskii functionals for subsystems are as follows.
A locally Lipschitz continuous functional Vi: C([-θ,0]; ℝNi) → ℝ+ is an ISS-Lyapunov-Krasovskii functional of the ith subsystem of (2.24) if there exist functionals Vj, j = 1,…,n, which are positive definite and locally Lipschitz continuous on C([-θ,0]; ℝNj), functions ψ1i, ψ2i ∈ 𝒦∞, χ~ij, χ~i ∈ 𝒦∪{0}, and α~i ∈ 𝒦, j = 1,…,n, i ≠ j, such that
(2.26) ψ1i(|ϕi(0)|) ≤ Vi(ϕi) ≤ ψ2i(∥ϕi∥a),
Vi(ϕi) ≥ max{maxj,j≠i χ~ij(Vj(ϕj)), χ~i(|ui|)} ⇒ D+Vi(ϕi, u) ≤ -α~i(Vi(ϕi)),
for all ϕi∈C([-θ,0],ℝNi), u∈L∞(ℝ+,ℝm).
The gain-matrix is defined by Γ ∶= (χ~ij)n×n, χ~ii ≡ 0, i = 1,…,n, which defines a map Γ: ℝ+n → ℝ+n as in (2.12).
The next theorem is one of the main results of [21] and provides a construction for an ISS-Lyapunov-Krasovskii functional of the whole system.
Theorem 2.13 (ISS-Lyapunov-Krasovskii theorem for general networks with time-delays).
Consider an interconnected system of the form (2.24). Assume that each subsystem has an ISS-Lyapunov-Krasovskii functional Vi, which satisfies the conditions (2.26), i=1,…,n. If the corresponding gain-matrix Γ satisfies the small-gain condition (2.13), then
(2.27)V(ϕ)∶=maxi{σi-1(Vi(ϕi))}
is the ISS-Lyapunov-Krasovskii functional for the whole system of the form (2.18), which is then ISS, where σ = (σ1,…,σn)T is an Ω-path as in Definition 2.6 and ϕ = (ϕ1,…,ϕn)T ∈ C([-θ,0]; ℝN). The Lyapunov gain is given by χ(r) ∶= maxi σi-1(χ~i(r)), r > 0.
Now, we present the new directions in MPC: ISDS and ISS for single and interconnected systems with and without time-delays. We start with MPC schemes guaranteeing ISDS.
3. MPC and ISDS
In this section, we combine ISDS and MPC for nonlinear single and interconnected systems. Conditions are derived which assure ISDS of the closed-loop system obtained by applying the control, calculated by an MPC scheme, to the system (1.1).
3.1. Single Systems
We consider systems of the form (1.1) and use the min-max approach to calculate an optimal control: to compensate the effect of the disturbance w, we apply a feedback control law π(t,x(t)) to the system. An optimal control law is obtained by solving the finite horizon optimal control problem (FHOCP), which consists of minimizing the cost function J with respect to π(t,x(t)) and maximizing the cost function J with respect to the disturbance w. The following definition is taken from [14, 15] with a slight adjustment: we use ε here in order to apply the ISDS property to the FHOCP.
Definition 3.1 (Finite horizon optimal control problem (FHOCP)).
Let 1>ε>0 be given. Let T>0 be the prediction horizon and u(t)=π(t,x(t)) a feedback control law. The finite horizon optimal control problem for a system of the form (1.1) is formulated as
(3.1) minπ maxw J(x̄0, π, w; t, T) ∶= minπ maxw (1-ε) ∫_t^{t+T} (l(x(t′), π(t′,x(t′))) - lw(w(t′))) dt′ + Vf(x(t+T))
subject to ẋ(t′) = f(x(t′), w(t′), u(t′)), x(t) = x̄0, t′ ∈ [t, t+T],
x ∈ 𝒳, w ∈ 𝒲, π ∈ Π, x(t+T) ∈ Ω ⊆ ℝN,
where x̄0 ∈ ℝN is the initial value of the system at time t, the terminal region Ω is a compact and convex set with the origin in its interior, and π(t,x(t)) is essentially bounded, locally Lipschitz in x, and measurable in t. l - lw is the stage cost, where l: ℝN × ℝm → ℝ+ penalizes the distance of the state from the equilibrium point 0 of the system as well as the control effort, and lw: ℝP → ℝ+ penalizes the disturbance, which influences the system's behavior. l and lw are locally Lipschitz continuous with l(0,0) = 0, lw(0) = 0, and Vf: Ω → ℝ+ is the terminal penalty.
The FHOCP is solved at the sampling instants t = kΔ, k ∈ ℕ, Δ ∈ ℝ+. The optimal solution is denoted by π*(t′,x(t′); t, T) and w*(t′), t′ ∈ [t, t+T]. The optimal cost function is denoted by J*(x̄0, π*, w*; t, T). The control input to the system (1.1) is defined in the usual receding-horizon fashion as
(3.2) u(t′) = π*(t′, x(t′); t, T), t′ ∈ [t, t+Δ].
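The receding-horizon mechanism of the FHOCP and (3.2) can be sketched in a toy setting. The following is only an illustrative surrogate: an assumed scalar integrator ẋ = u + w with quadratic stage cost, the feedback law π replaced by a constant control over the horizon, and the min-max in the FHOCP approximated by grid search; none of these concrete choices come from the paper.

```python
import numpy as np

DT = 0.1  # sampling interval (assumed)

def f_step(x, u, w):
    # Euler step of the assumed toy dynamics x_dot = u + w
    return x + DT * (u + w)

def horizon_cost(x, u, w, T_steps=20):
    # finite-horizon cost: stage cost l - l_w plus terminal penalty V_f
    J = 0.0
    for _ in range(T_steps):
        J += x**2 + 0.1 * u**2 - 0.1 * w**2
        x = f_step(x, u, w)
    return J + x**2

def mpc_step(x, U=np.linspace(-2, 2, 81), W=np.linspace(-0.2, 0.2, 5)):
    # min over u of the worst-case cost over w: an open-loop grid-search
    # surrogate of the min-max FHOCP
    return min(U, key=lambda u: max(horizon_cost(x, u, w) for w in W))

# receding horizon: apply the first control, re-solve at the next instant
x = 1.0
for _ in range(50):
    x = f_step(x, mpc_step(x), w=0.0)   # realized disturbance: zero
```

At each sampling instant only the first piece of the minimizing control is applied before the problem is re-solved, which is exactly the receding-horizon pattern in (3.2).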
In the following, we need some definitions, which can be found, for example, in [5].
Definition 3.2.
(i) A feedback control π is called a feasible solution of the FHOCP at time t if, for a given initial value x̄0 at time t, the feedback π(t′,x(t′)), t′ ∈ [t, t+T], controls the state of the system (1.1) into Ω at time t+T, that is, x(t+T) ∈ Ω, for all w ∈ 𝒲.
(ii) A set Ω⊆ℝN is called positively invariant if for all x0∈Ω a feedback control π keeps the trajectory of the system (1.1) in Ω, that is,
(3.3)x(t;x0,w,π)∈Ω,∀t∈(0,∞),
for all w∈𝒲.
To prove that the system (1.1) with the control obtained by solving the FHOCP has the ISDS property, we need the following assumption.
Assumption 3.3.
(1) There exist functions αl,αw∈𝒦∞, where αl is locally Lipschitz continuous such that
(3.4) l(x,π) ≥ αl(|x|), x ∈ 𝒳, π ∈ Π,
lw(w) ≤ αw(|w|), w ∈ 𝒲.
(2) The FHOCP in Definition 3.1 admits a feasible solution at the initial time t=0.
(3) There exists a controller u(t)=π(t,x(t)) such that the system (1.1) has the ISDS property.
(4) For each 1>ε>0, there exists a locally Lipschitz continuous function Vf(x) such that the terminal region Ω is a positively invariant set and we have
(3.5) Vf(x) ≤ η(|x|), ∀x ∈ Ω,
(3.6) V̇f(x) ≤ -(1-ε) l(x,π) + (1-ε) lw(w), f.a.a. x ∈ Ω,
where η∈𝒦∞, w∈𝒲, and V˙f denotes the derivative of Vf along the solution of system (1.1) with the control u≡π from point 3 of this assumption.
(5) For each sufficiently small ε>0, it holds that
(3.7) (1-ε) ∫_t^{t+T} l(x(t′), π(t′,x(t′))) dt′ ≥ |x(t)|/(1+ε).
(6) The optimal cost function J*(x-0,π*,w*;t,T) is locally Lipschitz continuous.
Remark 3.4.
In [6], it is discussed that a different stage cost can be used for the FHOCP, defined, for example, by ls ∶= l - lw. In view of stability, the stage cost ls has to fulfill some additional assumptions, see [6, Chapter 3.4].
Remark 3.5.
The assumption (3.7) is needed to assure that the cost function satisfies the lower estimate in (2.7). However, we have not investigated whether this condition is restrictive or not. In the case of discrete-time systems and the corresponding cost function, the assumption (3.7) is not necessary, see the proofs in [6, 12–15].
The following theorem establishes ISDS of the system (1.1), using the optimal control input u≡π* obtained from solving the FHOCP.
Theorem 3.6.
Consider a system of the form (1.1). Under Assumption 3.3, the system resulting from the application of the predictive control strategy to the system, namely, x˙(t)=f(x(t),w(t),π*(t,x(t))),t∈ℝ+,x(0)=x0, possesses the ISDS property.
Remark 3.7.
Note that the gains and the decay rate of the definition of the ISDS property, Definition 2.2, can be calculated using Assumption 3.3, as it is partially displayed in the following proof.
Proof.
We show that the optimal cost function J*(x̄0, π*, w*; t, T) =: V(x̄0) is an ISDS-Lyapunov function, following these steps:
(i) the control problem admits a feasible solution π for all times t > 0;
(ii) J*(x̄0, π*, w*; t, T) satisfies the conditions (2.7) and (2.8).
Then, by application of Theorem 2.4, the ISDS property follows.
First, we prove feasibility: suppose that a feasible solution π̃(t′,x(t′)), t′ ∈ [t, t+T], exists at time t. For Δ > 0, we construct a control by
(3.8) π̂(t′,x(t′)) ∶= π̃(t′,x(t′)) for t′ ∈ [t+Δ, t+T], and π̂(t′,x(t′)) ∶= π(t′,x(t′)) for t′ ∈ (t+T, t+T+Δ],
where π is the controller from Assumption 3.3, point 3. Since π̃ controls x(t+Δ) into x(t+T) ∈ Ω and Ω is a positively invariant set, π(t′,x(t′)) keeps the system's trajectory in Ω for t+T < t′ ≤ t+T+Δ under the constraints of the FHOCP. This means that the existence of a feasible solution at time t yields a feasible solution at time t+Δ. Since we assume that a feasible solution of the FHOCP exists at the initial time t = 0 (Assumption 3.3, point 2), it follows that a feasible solution exists for every t > 0.
We replace π̃ in (3.8) by π*. Then, it follows from (3.6) that
(3.9) J*(x̄0, π*, w*; t, T+Δ) ≤ J(x̄0, π̂, w*; t, T+Δ)
= (1-ε) ∫_t^{t+T} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′ + (1-ε) ∫_{t+T}^{t+T+Δ} (l(x(t′), π(t′,x(t′))) - lw(w*(t′))) dt′ + Vf(x(t+T+Δ))
= J*(x̄0, π*, w*; t, T) - Vf(x(t+T)) + Vf(x(t+T+Δ)) + (1-ε) ∫_{t+T}^{t+T+Δ} (l(x(t′), π(t′,x(t′))) - lw(w*(t′))) dt′
≤ J*(x̄0, π*, w*; t, T)
holds. From this and with (3.5), it holds that
(3.10) J*(x̄0, π*, w*; t, T) ≤ J*(x̄0, π*, w*; t, 0) = Vf(x̄0) ≤ η(|x̄0|).
Now, with Assumption 3.3, point 5, we have
(3.11) V(x̄0) ≥ J(x̄0, π*, 0; t, T) ≥ (1-ε) ∫_t^{t+T} l(x(t′), π*(t′,x(t′))) dt′ ≥ |x̄0|/(1+ε).
This shows that J* satisfies (2.7). Now, denote x̃0 ∶= x(t+h). From J*(x̄0, π*, w*; t, T+Δ) ≤ J*(x̄0, π*, w*; t, T), we get
(3.12) (1-ε) ∫_t^{t+h} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′ + J*(x̃0, π*, w*; t+h, T+Δ-h) ≤ (1-ε) ∫_t^{t+h} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′ + J*(x̃0, π*, w*; t+h, T-h),
and therefore
(3.13) J*(x̃0, π*, w*; t+h, T+Δ-h) ≤ J*(x̃0, π*, w*; t+h, T-h).
Now, we show that J* satisfies the condition (2.8). Note that by Assumption 3.3, point 6, J* is locally Lipschitz continuous. With (3.13), it holds that
(3.14) J*(x̄0, π*, w*; t, T) = (1-ε) ∫_t^{t+h} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′ + J*(x̃0, π*, w*; t+h, T-h)
≥ (1-ε) ∫_t^{t+h} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′ + J*(x̃0, π*, w*; t+h, T).
This leads to
(3.15) (J*(x̃0, π*, w*; t+h, T) - J*(x̄0, π*, w*; t, T))/h ≤ -(1/h)(1-ε) ∫_t^{t+h} (l(x(t′), π*(t′,x(t′); t, T)) - lw(w*(t′))) dt′.
For h → 0 and using the first point of Assumption 3.3, we obtain
(3.16) V̇(x̄0) ≤ -(1-ε) αl(|x̄0|) + (1-ε) αw(|w*|), f.a.a. x̄0 ∈ 𝒳, ∀w ∈ 𝒲.
By the definitions γ(r) ∶= η(αl⁻¹(2αw(r))) and g(r) ∶= (1/2) αl(η⁻¹(r)), r ≥ 0, this implies
(3.17) V(x̄0) > γ(|w*|) ⇒ V̇(x̄0) ≤ -(1-ε) g(V(x̄0)),
where the function g is locally Lipschitz continuous. We conclude that J* is an ISDS-Lyapunov function for the system
(3.18)x˙(t)=f(x(t),w(t),π*(t,x(t))),
and by application of Theorem 2.4 the system has the ISDS property.
In the next subsection, we transform the analysis of ISDS for MPC of single systems to interconnected systems.
3.2. Interconnected Systems
We consider interconnected systems with disturbances of the form
(3.19)x˙i(t)=fi(x1(t),…,xn(t),wi(t),ui(t)),i=1,…,n,
where ui∈ℝMi, measurable and essentially bounded, are the control inputs and wi∈ℝPi are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints
(3.20)xi∈𝒳i,wi∈𝒲i,ui∈𝒰i,i=1,…,n,
where 𝒳i⊆ℝNi,𝒲i⊆L∞(ℝ+,ℝPi), and 𝒰i⊆ℝMi are compact and convex sets containing the origin in their interior.
Now, we are going to determine an MPC scheme for interconnected systems. An overview of existing distributed and hierarchical MPC schemes can be found in [29]. The used scheme in this work is inspired by the min-max approach for single systems as in Definition 3.1, see [14, 15].
At first, we determine the cost function of the ith subsystem by
(3.21) Ji(x̄i0, (xj)j≠i, πi, wi; t, T) ∶= (1-εi) ∫_t^{t+T} (li(xi(t′), πi(t′,x(t′))) - (lw)i(wi(t′)) - Σj≠i lij(xj(t′))) dt′ + (Vf)i(xi(t+T)),
where 1 > εi > 0, x̄i0 ∈ 𝒳i is the initial value of the ith subsystem at time t, and πi ∈ Πi is a feedback, essentially bounded, locally Lipschitz in x and measurable in t, where Πi ⊆ ℝMi is a compact and convex set containing the origin in its interior. li - (lw)i - Σj≠i lij is the stage cost, where li: ℝNi × ℝMi → ℝ+ penalizes the distance of the state of the ith subsystem from the equilibrium point 0 as well as the control effort, (lw)i: ℝPi → ℝ+ penalizes the disturbance, and lij: ℝNj → ℝ+ penalizes the internal inputs for all j = 1,…,n, j ≠ i. li, (lw)i, and lij are locally Lipschitz continuous functions with li(0,0) = 0, (lw)i(0) = 0, lij(0) = 0, and (Vf)i: Ωi → ℝ+ is the terminal penalty of the ith subsystem, Ωi ⊆ ℝNi.
In contrast to single systems, we add the terms lij(xj),j≠i, to the cost function due to the interconnected structure of the subsystems. Here, two problems arise: the formulation of an optimal control problem for each subsystem and the calculation/determination of the internal inputs xj,j≠i.
We retain the minimization of Ji with respect to πi and the maximization of Ji with respect to wi, as in Definition 3.1 for single systems. In the spirit of ISS/ISDS, which treats the internal inputs as "disturbances," we maximize the cost function with respect to xj, j ≠ i (worst-case approach). Since we assume that xj ∈ 𝒳j, we get an optimal solution πi*, wi*, xj*, j ≠ i, of the control problem.
The drawbacks of this approach are that, on the one hand, we do not use the system equations (3.19) to predict xj, j ≠ i, and, on the other hand, the computation of the optimal solution could be numerically inefficient, especially if the number of subsystems n is large and/or the sets 𝒳i are large. Moreover, due to the worst-case approach, namely the maximization over xj, the obtained optimal control πi* for each subsystem could be extremely conservative, which leads to extremely conservative ISS or ISDS estimates.
To avoid these drawbacks of the maximization of Ji with respect to xj,j≠i, one could use the system equation (3.19) to predict xj,j≠i instead.
A numerically efficient way to calculate the optimal solutions πi*, wi* of the subsystems would be a parallel calculation. Due to the interconnected structure of the system, information about the states of the subsystems has to be exchanged. However, this exchange of information may prevent an optimal solution πi*, wi* from being calculated: to the best of our knowledge, no theorem has been proved that guarantees the existence of an optimal solution of the optimal control problem using such a parallel strategy. We conclude that a parallel calculation cannot help in our case.
Another approach to an MPC scheme for networks is inspired by the hierarchical MPC scheme in [30]. One could use predictions of the internal inputs xj, j ≠ i, as follows: at sampling time t = kΔ, k ∈ ℕ, Δ > 0, all subsystems calculate the optimal solution iteratively. This means that for the calculation of the optimal solution of the ith subsystem, the currently "optimized" trajectories of the subsystems 1,…,i-1 are used, denoted by xp^{opt,kΔ}, p = 1,…,i-1, and the "optimal" trajectories of the subsystems i+1,…,n from the optimization at sampling time t = (k-1)Δ are used, denoted by xp^{opt,(k-1)Δ}, p = i+1,…,n.
The advantage of this approach would be that the optimal solution is not as conservative as with the min-max approach, and the calculation of the optimal solution could be performed in a numerically efficient way, since the model is used to predict the "optimal" trajectories and the maximization over xj, j ≠ i, is avoided. The drawback is that, using this hierarchical approach, the optimal cost function of each subsystem depends on the trajectories xj^{opt,·}, j ≠ i. Then, to the best of our knowledge, it is not possible to show that the optimal cost functions are ISDS-Lyapunov functions of the subsystems, which is a crucial step in proving ISDS of a subsystem or of the whole network, because the dependence of the optimal cost functions on the trajectories xj^{opt,·}, j ≠ i, prevents the necessary estimates for the Lyapunov function properties.
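The information pattern of such a hierarchical scheme (a Gauss-Seidel-like sweep over the subsystems at one sampling instant) can be sketched as follows; `optimize` is a hypothetical stand-in for solving subsystem i's FHOCP given the neighbors' trajectories.

```python
def sweep(trajs_prev, optimize):
    """One sampling instant of the iterative scheme: subsystem i sees the
    freshly optimized trajectories of subsystems 1..i-1 (current instant)
    and last instant's trajectories of subsystems i+1..n."""
    n = len(trajs_prev)
    trajs = list(trajs_prev)
    for i in range(n):
        # neighbors: updated trajectories so far + stale ones from before
        neighbors = trajs[:i] + trajs_prev[i + 1:]
        trajs[i] = optimize(i, neighbors)
    return trajs
```

The sketch only fixes the ordering and the data each subproblem sees; any actual `optimize` would solve the subsystem FHOCP and return the predicted trajectory.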
The FHOCP for the ith subsystem reads as follows:
(3.22)minπimaxwimax(xj)j≠iJi(x-i0,(xj)j≠i,πi,wi;t,T)subject to x˙i(t′)=fi(x1(t′),…,xn(t′),wi(t′),ui(t′)),t′∈[t,t+T],xi(t)=x-i0,xj∈𝒳j,j=1,…,n,wi∈𝒲i,πi∈Πi,xi(t+T)∈Ωi⊆ℝNi,
where the terminal region Ωi is a compact and convex set with the origin in its interior.
The resulting optimal control of each subsystem is a feedback control law, that is, ui*(t)=πi*(t,x(t)), where x=(x1T,…,xnT)T∈ℝN, N=∑iNi, and πi*(t,x(t)) is essentially bounded, locally Lipschitz in x, and measurable in t, for all i=1,…,n.
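To illustrate the structure of the min-max problem, the following toy sketch discretizes a scalar subsystem with the internal inputs xj, j≠i, frozen. The dynamics, costs, and grids are illustrative assumptions; a real implementation would solve (3.22) with a suitable optimizer instead of grid search.

```python
import numpy as np

def cost(x0, u, w, T=1.0, steps=20):
    # Euler discretization of a stand-in scalar subsystem x' = -x + u + w
    # with stage cost l(x,u) = x^2 + u^2, disturbance penalty l_w(w) = 4 w^2,
    # and terminal penalty V_f(x) = x^2 (all choices illustrative).
    dt = T / steps
    x, J = x0, 0.0
    for _ in range(steps):
        J += (x**2 + u**2 - 4.0 * w**2) * dt
        x += (-x + u + w) * dt
    return J + x**2

def minmax_control(x0, u_grid, w_grid):
    # min over the control grid of the worst case over the disturbance grid
    worst = lambda u: max(cost(x0, u, w) for w in w_grid)
    return min(u_grid, key=worst)

u_star = minmax_control(1.0, np.linspace(-2.0, 2.0, 81),
                        np.linspace(-0.5, 0.5, 11))
```

Here the control and disturbance are held constant over the horizon for simplicity; in the receding horizon implementation only the first portion of the resulting control is applied before the problem is solved again.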
To show that each subsystem and the whole system have the ISDS property using the mentioned distributed MPC scheme, we suppose the following assumption for the ith subsystem of (3.19).
Assumption 3.8.
(1) There exist functions αil,αiw,αij∈𝒦∞, j=1,…,n,j≠i such that
(3.23)li(xi,πi)≥αil(|xi|),xi∈𝒳i,πi∈Πi,(lw)i(wi)≤αiw(|wi|),wi∈𝒲i,lij(xj)≤αij(Vj(xj)),xj∈𝒳j,j=1,…,n,j≠i.
(2) The FHOCP admits a feasible solution at the initial time t=0.
(3) There exists a controller ui(t)=πi(t,x(t)) such that the ith subsystem of (3.19) has the ISDS property.
(4) For each 1>εi>0, there exists a locally Lipschitz continuous function (Vf)i(xi) such that the terminal region Ωi is a positively invariant set and we have
(3.24)(Vf)i(xi)≤ηi(|xi|),∀xi∈Ωi,(V˙f)i(xi)≤-(1-εi)li(xi,πi)+(1-εi)(lw)i(wi)+(1-εi)∑j≠ilij(xj),
for almost all xi∈Ωi, where ηi∈𝒦∞, wi∈𝒲i, and (V˙f)i denotes the derivative of (Vf)i along the solution of the ith subsystem of (3.19) with the control ui≡πi from point 3 of this assumption.
(5) For each sufficiently small εi>0 it holds that
(3.25)(1-εi)∫tt+T(li(xi(t′),πi*(t′,x(t′)))-∑j≠ilij(xj(t′)))dt′≥|x(t)|/(1+εi).
(6) The optimal cost function Ji*(x-i0,(xj)j≠i*,πi*,wi*;t,T) is locally Lipschitz continuous.
Now, we can state that each subsystem possesses the ISDS property using the mentioned MPC scheme.
Theorem 3.9.
Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. Then, each subsystem resulting from the application of the control obtained by the FHOCP for each subsystem to the system, namely,
(3.26)x˙i(t)=fi(x1(t),…,xn(t),wi(t),πi*(t,x(t))),t∈ℝ+,xi0=xi(0),
possesses the ISDS property.
Proof.
Consider the ith subsystem. We show that the optimal cost function Vi(x-i0)∶=Ji*(x-i0,(xj)j≠i*,πi*,wi*;t,T) is an ISDS-Lyapunov function for the ith subsystem. We abbreviate the components of (xj)j≠i* by xj.
By following the steps of the proof of Theorem 3.6, we conclude that there exists a feasible solution for all times t>0 and that by (3.25) the functional Vi(x-i0) satisfies the condition
(3.27)|x-i0|/(1+εi)≤Vi(x-i0)≤ηi(|x-i0|),
using |x-0|≥|x-i0|. Note that by Assumption 3.8, point 6, Ji* is locally Lipschitz continuous. Furthermore, it holds that
(3.28)V˙i(x-i0)≤-(1-εi)αil(ηi-1(Vi(x-i0)))+(1-εi)αiw(|wi*|)+(1-εi)∑j≠iαij(Vj((x-j0))),
and, bounding the sum of the n terms on the right-hand side by n times their maximum,
(3.29)V˙i(x-i0)≤-(1-εi)αil(ηi-1(Vi(x-i0)))+(1-εi)max{nαiw(|wi*|),maxj≠inαij(Vj((x-j0)))},
which implies
(3.30)Vi(x-i0)>max{γi(|wi*|),maxj≠iγij(Vj(x-j0))}⇒V˙i(x-i0)≤-(1-εi)gi(Vi(x-i0)),
for almost all x-i0∈𝒳i and all wi*∈𝒲i, where γi(r)∶=ηi((αil)-1(2nαiw(r))), γij(r)∶=ηi((αil)-1(2nαij(r))), and gi(r)∶=(1/2)αil(ηi-1(r)); note that gi is locally Lipschitz continuous.
Since i can be chosen arbitrarily, we conclude that each subsystem has an ISDS-Lyapunov function. It follows that each subsystem has the ISDS property.
To investigate whether the whole system has the ISDS property, we collect all functions γij in a matrix Γ∶=(γij)n×n, γii≡0, which defines a map as in (2.12).
Using the small-gain condition for Γ, the ISDS property for the whole system can be guaranteed.
Corollary 3.10.
Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. If Γ satisfies the small-gain condition (2.13), then the whole system possesses the ISDS property.
Proof.
Each subsystem has an ISDS-Lyapunov function with gains γij. This follows from Theorem 3.9. The matrix Γ satisfies the SGC, and all assumptions of Theorem 2.8 are satisfied. It follows that with x=(x1T,…,xnT)T, w=(w1T,…,wnT)T, and π*(·,x(·))=((π1*(·,x(·)))T,…,(πn*(·,x(·)))T)T, the whole system of the form
(3.31)x˙(t)=f(x(t),w(t),π*(t,x(t)))
has the ISDS property.
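As a numerical illustration of the small-gain condition for Γ, consider the special case of linear gains γij(r)=cij·r. In the summation formulation the condition then reduces to the spectral radius of the nonnegative gain matrix C=(cij) being less than one; the matrices below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# For linear gains gamma_ij(r) = c_ij * r (an illustrative special case),
# the small-gain condition "Gamma(s) not >= s for all s != 0" in the
# summation formulation is equivalent to the spectral radius of the
# nonnegative gain matrix C = (c_ij), c_ii = 0, being less than one.

def satisfies_small_gain(C, tol=1e-12):
    C = np.asarray(C, dtype=float)
    assert (C >= 0).all() and np.allclose(np.diag(C), 0.0)
    return bool(max(abs(np.linalg.eigvals(C))) < 1.0 - tol)

# Three coupled subsystems with weak interconnection gains:
C_weak   = [[0.0, 0.3, 0.1],
            [0.2, 0.0, 0.2],
            [0.1, 0.4, 0.0]]
# A cycle with gain product >= 1 (here 2.0 * 0.6 = 1.2) violates it:
C_strong = [[0.0, 2.0, 0.0],
            [0.6, 0.0, 0.0],
            [0.0, 0.0, 0.0]]
```

For general nonlinear gains the condition has to be checked on the map Γ itself; the linear case is only meant to show what the condition excludes, namely interconnection cycles whose composed gains do not attenuate.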
In the next section, we investigate the ISS property for MPC of TDS.
4. MPC and ISS for Time-Delay Systems
Now, we introduce the ISS property for MPC of TDS. We derive conditions to assure that a single system, a subsystem of a network, and the whole system possess the ISS property applying the control obtained by an MPC scheme for TDS.
4.1. Single Systems
We consider systems of the form (2.18) with disturbances,
(4.1)x˙(t)=f(xt,w(t),u(t)),t∈ℝ+,x0(τ)=ξ(τ),τ∈[-θ,0],
where w∈𝒲⊆L∞(ℝ+,ℝP) is the unknown disturbance and 𝒲 is a compact and convex set containing the origin. The input u is an essentially bounded and measurable control subject to input constraints u∈𝒰, where 𝒰⊆ℝm is a compact and convex set containing the origin in its interior. The functional f has to satisfy the same conditions as in the previous section to assure that a unique solution exists, which is denoted by x(t;ξ,w,u) or x(t) in short.
The aim is to find an (optimal) control u such that the system (4.1) has the ISS property.
Due to the presence of disturbances, we apply a feedback control structure, which compensates the effect of the disturbance. This means that we apply a feedback control law π(t,xt) to the system. In the rest of this section, we assume that π(t,xt)∈Π is essentially bounded, locally Lipschitz in xt, and measurable in t. The set Π⊆ℝm is assumed to be compact and convex containing the origin in its interior. We obtain an MPC control law by solving the following optimal control problem.
Definition 4.1 (Finite horizon optimal control problem with time-delays (FHOCPTD)).
Let T be the prediction horizon and π(t,xt) a feedback control law. The finite horizon optimal control problem with time-delays for a system of the form (4.1) is formulated as
(4.2)minπmaxwJ(ξ-,π,w;t,T)≔minπmaxw∫tt+T(l(x(t′),π(t′,xt′))-lw(w(t′)))dt′+Vf(xt+T)subject to x˙(t′)=f(xt′,w(t′),u(t′)),t'∈[t,t+T],x(t+τ)=ξ-(τ),τ∈[-θ,0],xt'∈𝒳,w∈𝒲,π∈Π,xt+T∈Ω⊆C([-θ,0],ℝN),
where ξ-∈C([-θ,0],ℝN) is the initial function of the system at time t, and the terminal region Ω and the state constraint set 𝒳⊆C([-θ,0],ℝN) are compact and convex sets with the origin in their interior. l-lw is the stage cost, where l:ℝN×ℝm→ℝ+ and lw:ℝP→ℝ+ are locally Lipschitz continuous with l(0,0)=0,lw(0)=0, and Vf:Ω→ℝ+ is the terminal penalty.
The control problem is solved at the sampling instants t=kΔ, k∈ℕ, Δ∈ℝ+. The optimal solution is denoted by π*(t′,xt′;t,T) and w*(t′), t′∈[t,t+T], and the optimal cost functional is denoted by J*(ξ-,π*,w*;t,T). The control input to the system (4.1) is defined in the usual receding horizon fashion as
(4.3)u(t′)=π*(t′,xt′;t,T),t'∈[t,t+Δ].
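The receding-horizon application (4.3) can be sketched for a toy time-delay system as follows. The dynamics, the proportional law standing in for π*, and all constants are illustrative assumptions; for simplicity the control is recomputed at every integration step, that is, Δ equals the step size.

```python
import math

# Toy delayed system x'(t) = -x(t) + 0.5*x(t - theta) + u(t) + w(t) as a
# stand-in for (4.1); pi_star is a simple proportional law replacing the
# true optimal feedback, which would come from solving the FHOCPTD.
theta, dt, T_sim = 1.0, 0.01, 6.0
delay_steps = int(theta / dt)
history = [1.0] * (delay_steps + 1)      # initial function xi(tau) = 1 on [-theta, 0]

def pi_star(x_t):
    # stand-in feedback acting on the state segment x_t (uses only x_t(0))
    return -2.0 * x_t[-1]

for k in range(int(T_sim / dt)):
    x_t = history[-(delay_steps + 1):]   # current segment x_t on [-theta, 0]
    u = pi_star(x_t)                     # control applied as in (4.3)
    w = 0.05 * math.sin(0.01 * k)        # bounded disturbance
    x_now, x_del = history[-1], history[-(delay_steps + 1)]
    history.append(x_now + dt * (-x_now + 0.5 * x_del + u + w))
```

The list `history` plays the role of the state segment xt: at each step the last θ/dt+1 samples form the argument of the feedback, which is exactly the information an ISS feedback for a TDS is allowed to use.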
Definition 4.2.
(i) A feedback control π is called a feasible solution of the FHOCPTD at time t if for a given initial function ξ- at time t the feedback π(t',xt'),t'∈[t,t+T] controls the state of the system (4.1) into Ω at time t+T, that is, xt+T∈Ω, for all w∈𝒲.
(ii) A set Ω⊆C([-θ,0],ℝN) is called positively invariant if for all initial functions ξ-∈Ω a feedback control π keeps the trajectory of the system (4.1) in Ω, that is,
(4.4)xt∈Ω,∀t∈(0,∞),
for all w∈𝒲.
For the goal of this section, establishing ISS of TDS with the help of MPC, we need the following.
Assumption 4.3.
(1) There exist functions αl,αw∈𝒦∞ such that
(4.5)l(ϕ(0),π)≥αl(|ϕ|a),ϕ∈𝒳,π∈Π,lw(w)≤αw(|w|),w∈𝒲.
(2) The FHOCPTD in Definition 4.1 admits a feasible solution at the initial time t=0.
(3) There exists a controller u(t)=π(t,xt) such that the system (4.1) has the ISS property.
(4) There exists a locally Lipschitz continuous functional Vf(ϕ) such that the terminal region Ω is a positively invariant set and for all ϕ∈Ω we have
(4.6)Vf(ϕ)≤ψ2(|ϕ|a),(4.7)D+Vf(ϕ,w)≤-l(ϕ(0),π)+lw(w),
where ψ2∈𝒦∞, w∈𝒲, and D+Vf denotes the upper right-hand derivative of the functional Vf along the solution of (4.1) with the control u≡π from point 3 of this assumption.
(5) There exists a 𝒦∞ function ψ1 such that for all t>0 it holds
(4.8)∫tt+Tl(x(t′),π(t′,xt′))dt'≥ψ1(|ξ-(0)|),ξ-(0)=x(t).
(6) The optimal cost functional J*(ξ-,π*,w*;t,T) is locally Lipschitz continuous.
Now, we can state a theorem that assures ISS of MPC for a single time-delay system with disturbances.
Theorem 4.4.
Let Assumption 4.3 be satisfied. Then, the system resulting from the application of the predictive control strategy to the system, namely, x˙(t)=f(xt,w(t),π*(t,xt)), t∈ℝ+, x0(τ)=ξ(τ),τ∈[-θ,0], possesses the ISS property.
Proof.
The proof goes along the lines of the proof of Theorem 3.6 with changes according to time-delays and functionals, that is, we show that the optimal cost functional V(ξ-)∶=J*(ξ-,π*,w*;t,T) is an ISS-Lyapunov-Krasovskii functional.
For a feasible solution for all times t>0, we suppose that a feasible solution π~(t′,xt′), t′∈[t,t+T], at time t exists. We construct a control by
(4.9)π^(t′,xt′)={π~(t′,xt′),t'∈[t+Δ,t+T],π(t′,xt′),t'∈(t+T,t+T+Δ],
where π is the controller from Assumption 4.3, point 3, and Δ>0. The control π~ steers the state into xt+T∈Ω, and Ω is a positively invariant set. This means that π(t′,xt′) keeps the system trajectory in Ω for t+T<t′≤t+T+Δ under the constraints of the FHOCPTD. Hence, from the existence of a feasible solution at time t, we obtain a feasible solution for the time t+Δ. By Assumption 4.3, point 2, there exists a feasible solution of the FHOCPTD at the time t=0, and it follows that a feasible solution exists for every t>0.
Replacing π~ in (4.9) by π*, it follows from (4.7) that
(4.10)J*(ξ-,π*,w*;t,T+Δ)≤J(ξ-,π^,w*;t,T+Δ)=∫tt+T(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt'+∫t+Tt+T+Δ(l(x(t′),π(t′,xt′))-lw(w*(t′)))dt'+Vf(xt+T+Δ)=J*(ξ-,π*,w*;t,T)-Vf(xt+T)+Vf(xt+T+Δ)+∫t+Tt+T+Δ(l(x(t′),π(t′,xt′))-lw(w*(t′)))dt′≤J*(ξ-,π*,w*;t,T)
holds, and with (4.6) this implies
(4.11)J*(ξ-,π*,w*;t,T)≤J*(ξ-,π*,w*;t,0)=Vf(ξ-)≤ψ2(|ξ-|a).
For the lower bound, it holds that
(4.12)V(ξ-)≥J(ξ-,π*,0;t,T)≥∫tt+Tl(x(t′),π*(t′,xt′))dt′,
and by (4.8) we have V(ξ-)≥ψ1(|ξ-(0)|). This shows that J* satisfies (2.22). Now, we use the notation xt(τ)∶=ξ-(τ),τ∈[-θ+t,t]. With J*(xt,π*,w*;t,T+Δ)≤J*(xt,π*,w*;t,T), we have
(4.13)∫tt+h(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt′+J*(xt+h,π*,w*;t+h,T+Δ-h)≤∫tt+h(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt'+J*(xt+h,π*,w*;t+h,T-h).
This implies
(4.14)J*(xt+h,π*,w*;t+h,T+Δ-h)≤J*(xt+h,π*,w*;t+h,T-h).
Note that by Assumption 4.3, point 6, J* is locally Lipschitz continuous. With (4.14) it holds
(4.15)J*(xt,π*,w*;t,T)=∫tt+h(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt'+J*(xt+h,π*,w*;t+h,T-h)≥∫tt+h(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt'+J*(xt+h,π*,w*;t+h,T),
which leads to
(4.16)J*(xt+h,π*,w*;t+h,T)-J*(xt,π*,w*;t,T)h≤-1h∫tt+h(l(x(t′),π*(t′,xt′;t,T))-lw(w*(t′)))dt'.
Letting h→0+ and using point 1 of Assumption 4.3, we get
(4.17)D+V(xt,w*)≤-αl(|xt|a)+αw(|w*|).
By definition of χ(r)∶=ψ2(αl-1(2αw(r))) and α(r)∶=(1/2)αl(ψ2-1(r)),r≥0, this implies
(4.18)V(xt)≥χ(|w*|)⇒D+V(xt,w*)≤-α(V(xt)),
that is, J* satisfies the condition (2.23).
We conclude that J* is an ISS-Lyapunov-Krasovskii functional for the system
(4.19)x˙(t)=f(xt,w(t),π*(t,xt)),
and by application of Theorem 2.12 the system has the ISS property.
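The gain constructions in the proof can be illustrated numerically with concrete 𝒦∞ choices. The functions αl(r)=r², ψ2(r)=2r, αw(r)=r below are illustrative assumptions; any 𝒦∞ functions work the same way.

```python
import math

# Illustrative K-infinity choices (assumptions, not from the proof itself):
alpha_l     = lambda r: r**2          # lower bound on the stage cost
alpha_l_inv = lambda r: math.sqrt(r)
psi2        = lambda r: 2.0 * r       # upper bound on V
psi2_inv    = lambda r: r / 2.0
alpha_w     = lambda r: r             # bound on the disturbance penalty

# Gains as constructed in the proof:
chi   = lambda r: psi2(alpha_l_inv(2.0 * alpha_w(r)))  # chi = psi2 o alpha_l^-1 o (2 alpha_w)
alpha = lambda r: 0.5 * alpha_l(psi2_inv(r))           # alpha(r) = (1/2) alpha_l(psi2^-1(r))

# Check the implication (4.18) at one point: if V >= chi(|w|), then the
# dissipation estimate (4.17), with |x_t|_a >= psi2^-1(V), yields D+V <= -alpha(V).
V, w = 5.0, 1.0
assert V >= chi(w)
dissipation_bound = -alpha_l(psi2_inv(V)) + alpha_w(w)
assert dissipation_bound <= -alpha(V)
```

The same pattern, with the additional factor n and the gains χ~ij, produces the gains of the interconnected case in the next subsection.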
Now, we consider interconnections of TDS and provide conditions such that the whole network with an optimal control obtained from an MPC scheme has the ISS property.
4.2. Interconnected Systems
We consider interconnected systems with time-delays and disturbances of the form
(4.20)x˙i(t)=f~i(x1t,…,xnt,wi(t),ui(t)),i=1,…,n,
where ui∈ℝMi are the essentially bounded and measurable control inputs and wi∈ℝPi are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints
(4.21)xi∈𝒳i,wi∈𝒲i,ui∈𝒰i,i=1,…,n,
where 𝒳i⊆C([-θ,0],ℝNi),𝒲i⊆L∞(ℝ+,ℝPi), and 𝒰i⊆ℝMi are compact and convex sets containing the origin in their interior.
We assume the same MPC strategy for interconnected TDS as in Section 3.2. The FHOCPTD for the ith subsystem of (4.20) reads as
(4.22)minπimaxwimax(xj)j≠iJi(ξ-i,(xj)j≠i,πi,wi;t,T)∶=minπimaxwimax(xj)j≠i∫tt+T(li(xi(t′),πi(t′,xit′))-(lw)i(wi(t′))-∑j≠ilij(xj(t′)))dt'+(Vf)i(xit+T)subject to x˙i(t′)=fi(x1t′,…,xnt′,wi(t′),ui(t′)),t'∈[t,t+T],xi(t+τ)=ξ-i(τ),τ∈[-θ,0],xj∈𝒳j,j=1,…,n,wi∈𝒲i,πi∈Πi,xit+T∈Ωi⊆C([-θ,0],ℝNi),
where ξ-i∈𝒳i is the initial function of the ith subsystem at time t and the terminal region Ωi is a compact and convex set with the origin in its interior. πi(t,xt) is essentially bounded, locally Lipschitz in xt, and measurable in t, and Πi⊆ℝMi is a compact and convex set containing the origin in its interior. li-(lw)i-∑j≠ilij is the stage cost, where li:ℝNi×ℝMi→ℝ+, (lw)i:ℝPi→ℝ+ penalizes the disturbance, and lij:ℝNj→ℝ+ penalizes the internal input for all j=1,…,n, j≠i. li, (lw)i, and lij are locally Lipschitz continuous functions with li(0,0)=0, (lw)i(0)=0, lij(0)=0, and (Vf)i:Ωi→ℝ+ is the terminal penalty of the ith subsystem.
We obtain an optimal solution πi*,(xj)j≠i*,wi*, where the control of each subsystem is a feedback control law, which depends on the current states of the whole system, that is, ui(t)=πi*(t,xt), where xt=((x1t)T,…,(xnt)T)T∈C([-θ,0],ℝN), N=∑iNi.
For the ith subsystem of (4.20), we suppose the following assumption.
Assumption 4.5.
(1) There exist functions αil,αiw,αij∈𝒦∞, j=1,…,n,j≠i such that
(4.23)li(ϕi(0),πi)≥αil(|ϕi|a),ϕi∈C([-θ,0],ℝNi),πi∈Πi,(lw)i(wi)≤αiw(|wi|),wi∈𝒲i,lij(ϕj(0))≤αij(Vj(ϕj)),ϕj∈C([-θ,0],ℝNj),j=1,…,n,j≠i.
(2) The FHOCPTD admits a feasible solution at the initial time t=0.
(3) There exists a controller ui(t)=πi(t,xt) such that the ith subsystem of (4.20) has the ISS property.
(4) There exists a locally Lipschitz continuous functional (Vf)i(ϕi) such that the terminal region Ωi is a positively invariant set and for all ϕi∈Ωi we have
(4.24)(Vf)i(ϕi)≤ψ2i(|ϕi|a),D+(Vf)i(ϕi,wi)≤-li(ϕi(0),πi)+(lw)i(wi)+∑j≠ilij(ϕj(0)),
where ψ2i∈𝒦∞, ϕj∈C([-θ,0],ℝNj), j=1,…,n, and wi∈𝒲i. D+(Vf)i denotes the upper right-hand derivative of the functional (Vf)i along the solution of the ith subsystem of (4.20) with the control ui≡πi from point 3 of this assumption.
(5) For each i, there exists a 𝒦∞ function ψ1i such that for all t>0 it holds
(4.25)∫tt+Tli(xi(t′),πi(t′,xt′))dt'≥ψ1i(|ξ-(0)|),ξ-(0)=x(t).
(6) The optimal cost functional Ji*(ξ-i,(xj)j≠i*,πi*,wi*;t,T) is locally Lipschitz continuous.
Now, we state that each subsystem of (4.20) has the ISS property by application of the optimal control obtained by the FHOCPTD.
Theorem 4.6.
Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. Then, each subsystem resulting from the application of the predictive control strategy to the system, namely, x˙i(t)=fi(x1t,…,xnt,wi(t),πi*(t,xt)),t∈ℝ+,xi0(τ)=ξi(τ),τ∈[-θ,0], possesses the ISS property.
Proof.
Consider the ith subsystem. We show that the optimal cost functional Vi(ξ-i)∶=Ji*(ξ-i,(xj)j≠i*,πi*,wi*;t,T) is an ISS-Lyapunov-Krasovskii functional for the ith subsystem. We abbreviate the components of (xj)j≠i* by xjt.
Following the lines of the proof of Theorem 4.4, we have that there exists a feasible solution of the ith subsystem for all times t>0 and that the functional Vi(ξ-i) satisfies the condition
(4.26)ψ1i(|ξ-i(0)|)≤Vi(ξ-i)≤ψ2i(|ξ-i|a),
using (4.25) and |ξ-(0)|≥|ξ-i(0)|. Note that by Assumption 4.5, point 6, Ji* is locally Lipschitz continuous. We arrive at the following inequality:
(4.27)D+Vi(xit,wi*)≤-αil(ψ2i-1(Vi(xit)))+αiw(|wi*|)+∑j≠iαij(Vj(xjt)).
Bounding the sum of the n terms on the right-hand side by n times their maximum, we obtain
(4.28)D+Vi(xit,wi*)≤-αil(ψ2i-1(Vi(xit)))+max{nαiw(|wi*|),maxj≠inαij(Vj(xjt))},
which implies
(4.29)Vi(xit)≥max{χ~i(|wi*|),maxj≠iχ~ij(Vj(xjt))}⇒D+Vi(xit,wi*)≤-α-il(Vi(xit)),
where
(4.30)χ~i(r)∶=ψ2i((αil)-1(2nαiw(r))),χ~ij(r)∶=ψ2i((αil)-1(2nαij(r))),α-il(r)∶=12αil(ψ2i-1(r)).
This can be shown for each subsystem, and we conclude that each subsystem has an ISS-Lyapunov-Krasovskii functional. It follows that each subsystem is ISS in maximum formulation.
We collect all functions χ~ij in a matrix Γ∶=(χ~ij)n×n, χ~ii≡0, which defines a map as in (2.12).
Using the small-gain condition for Γ, we obtain the following corollary of Theorem 4.6.
Corollary 4.7.
Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. If Γ satisfies the small-gain condition (2.13), then the whole system possesses the ISS property.
Proof.
We know from Theorem 4.6 that each subsystem of (4.20) has an ISS-Lyapunov-Krasovskii functional with gains χ~ij. Since the matrix Γ satisfies the SGC, all assumptions of Theorem 2.13 are satisfied and it follows that the whole system of the form
(4.31)x˙(t)=f(xt,w(t),π*(t,xt))
is ISS in maximum formulation, where xt=((x1t)T,…,(xnt)T)T, w=(w1T,…,wnT)T, and π*(t,xt)=((π1*(t,xt))T,…,(πn*(t,xt))T)T.
5. Conclusions
We have combined the ISDS property with MPC for nonlinear continuous-time systems with disturbances. For single systems, we have derived conditions such that by application of the control obtained by an MPC scheme to the system, it has the ISDS property, see Theorem 3.6. Considering interconnected systems, we have proved that each subsystem possesses the ISDS property using the control of the proposed MPC scheme, which is Theorem 3.9. Using a small-gain condition, we have shown in Corollary 3.10 that the whole network has the ISDS property.
Considering single systems with time-delays, we have proved in Theorem 4.4 that a TDS has the ISS property using the control obtained by an MPC scheme, where we have used ISS-Lyapunov-Krasovskii functionals. For interconnected TDS, we have established a theorem that guarantees that each closed-loop subsystem, obtained by applying the control of a decentralized MPC scheme, has the ISS property, see Theorem 4.6. From this result and using Theorem 2.13, we have shown that the whole network with time-delays has the ISS property under a small-gain condition, see Corollary 4.7.
In future research, we are going to derive conditions for open-loop MPC schemes to assure ISDS and ISS of TDS, respectively. The differences between the two schemes, closed-loop and open-loop, will be analyzed and applied in practice.
Note that the results presented here are first steps of the approaches of ISDS for MPC and ISS for MPC with time-delays. More detailed studies should be performed in these directions, especially regarding applications of these approaches. To this end, numerical algorithms for the implementation of the proposed schemes, as in [5, 7], for example, should be developed. It could be analyzed whether and how other existing algorithms can be used, or how they should be adapted, to implement the results presented in this work. The advantages of using ISDS for MPC in contrast to ISS for MPC could be investigated and applied in practice.
Furthermore, one can investigate ISDS and ISS for unconstrained nonlinear MPC, as it was done in [17, 31], for example.
Acknowledgment
This research is funded by the German Research Foundation (DFG) as part of the Collaborative Research Centre 637 “Autonomous Cooperating Logistic Processes: A Paradigm Shift and its Limitations” (SFB 637).
References
[1] S. J. Qin and T. A. Badgwell, “An overview of nonlinear model predictive control applications,” in Nonlinear Model Predictive Control, F. Allgöwer and A. Zheng, Eds., pp. 369–392, Birkhäuser, Berlin, Germany, 2000.
[2] S. J. Qin and T. A. Badgwell, “A survey of industrial model predictive control technology,” Control Engineering Practice, vol. 11, pp. 733–764, 2003.
[3] J. Maciejowski, Predictive Control with Constraints, Prentice-Hall, 2001.
[4] E. F. Camacho and C. Bordons Alba, Model Predictive Control, Springer, 2004.
[5] L. Grüne and J. Pannek, Nonlinear Model Predictive Control, Springer, London, UK, 2011.
[6] D. M. Raimondo, Ph.D. thesis, University of Pavia, Pavia, Italy, 2008.
[7] J. Pannek, Ph.D. thesis, University of Bayreuth, Germany, 2009.
[8] M. Lazar, Ph.D. thesis, Technical University of Iasi, Romania, 2009.
[9] T. Raff, S. Huber, Z. K. Nagy, and F. Allgöwer, “Nonlinear model predictive control of a four tank system: an experimental stability study,” in Proceedings of the IEEE Conference on Control Applications, pp. 237–242, 2006.
[10] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, “Constrained model predictive control: stability and optimality,” Automatica, vol. 36, pp. 789–814, 2000.
[11] F. A. C. C. Fontes, “A general framework to design stabilizing nonlinear model predictive controllers,” Systems & Control Letters, vol. 42, no. 2, pp. 127–143, 2001.
[12] D. L. Marruedo, T. Alamo, and E. F. Camacho, “Input-to-state stable MPC for constrained discrete-time nonlinear systems with bounded additive uncertainties,” in Proceedings of the 41st IEEE Conference on Decision and Control, pp. 4619–4624, Las Vegas, Nev, USA, December 2002.
[13] L. Magni, D. M. Raimondo, and R. Scattolini, “Regional input-to-state stability for nonlinear model predictive control,” IEEE Transactions on Automatic Control, vol. 51, no. 9, pp. 1548–1553, 2006.
[14] D. Limon, T. Alamo, F. Salas, and E. F. Camacho, “Input to state stability of min-max MPC controllers for nonlinear systems with bounded uncertainties,” Automatica, vol. 42, no. 5, pp. 797–803, 2006.
[15] M. Lazar, D. Muñoz de la Peña, W. P. M. H. Heemels, and T. Alamo, “On input-to-state stability of min-max nonlinear model predictive control,” Systems & Control Letters, vol. 57, no. 1, pp. 39–48, 2008.
[16] D. M. Raimondo, L. Magni, and R. Scattolini, “Decentralized MPC of nonlinear systems: an input-to-state stability approach,” International Journal of Robust and Nonlinear Control, vol. 17, no. 17, pp. 1651–1667, 2007.
[17] L. Grüne and K. Worthmann, “A distributed NMPC scheme without stabilizing terminal constraints,” in R. Johansson and A. Rantzer, Eds., pp. 259–285, Springer, 2010.
[18] L. Grüne, “Input-to-state dynamical stability and its Lyapunov function characterization,” IEEE Transactions on Automatic Control, vol. 47, no. 9, pp. 1499–1504, 2002.
[19] S. Dashkovskiy and L. Naujok, “ISDS small-gain theorem and construction of ISDS Lyapunov functions for interconnected systems,” Systems & Control Letters, vol. 59, no. 5, pp. 299–304, 2010.
[20] P. Pepe and Z.-P. Jiang, “A Lyapunov-Krasovskii methodology for ISS and iISS of time-delay systems,” Systems & Control Letters, vol. 55, no. 12, pp. 1006–1014, 2006.
[21] S. Dashkovskiy and L. Naujok, “Lyapunov-Razumikhin and Lyapunov-Krasovskii theorems for interconnected ISS time-delay systems,” in Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS '10), pp. 1179–1184, Budapest, Hungary, July 2010.
[22] R. M. Esfanjani, M. Reble, U. Münz, S. K. Y. Nikravesh, and F. Allgöwer, “Model predictive control of constrained nonlinear time-delay systems,” in Proceedings of the 48th IEEE Conference on Decision and Control, pp. 1324–1329, 2009.
[23] M. Reble, F. D. Brunner, and F. Allgöwer, “Model predictive control for nonlinear time-delay systems without terminal constraint,” in Proceedings of the 18th World Congress of the International Federation of Automatic Control (IFAC '11), pp. 9254–9259, Milano, Italy, August 2011.
[24] B. Rüffer, Ph.D. thesis, University of Bremen, Germany, 2007.
[25] B. S. Rüffer, “Monotone inequalities, dynamical systems, and paths in the positive orthant of Euclidean n-space,” Positivity, vol. 14, no. 2, pp. 257–283, 2010.
[26] S. N. Dashkovskiy, B. S. Rüffer, and F. R. Wirth, “Small gain theorems for large scale systems and construction of ISS Lyapunov functions,” SIAM Journal on Control and Optimization, vol. 48, no. 6, pp. 4089–4118, 2010.
[27] J. K. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations, vol. 99 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1993.
[28] P. Pepe, “On Liapunov-Krasovskii functionals under Carathéodory conditions,” Automatica, vol. 43, no. 4, pp. 701–706, 2007.
[29] R. Scattolini, “Architectures for distributed and hierarchical model predictive control—a review,” Journal of Process Control, vol. 19, pp. 723–731, 2009.
[30] A. Richards and J. P. How, “Robust distributed model predictive control,” International Journal of Control, vol. 80, no. 9, pp. 1517–1531, 2007.
[31] L. Grüne, “Analysis and design of unconstrained nonlinear MPC schemes for finite and infinite dimensional systems,” SIAM Journal on Control and Optimization, vol. 48, no. 2, pp. 1206–1228, 2009.