MPC Schemes Guaranteeing ISDS and ISS for Nonlinear (Time-Delay) Systems

New directions in model predictive control (MPC) are introduced. On the one hand, we combine the input-to-state dynamical stability (ISDS) property with MPC for single and interconnected systems. On the other hand, we introduce MPC schemes guaranteeing input-to-state stability (ISS) of single systems and of networks with time-delays. In both directions, recent results of the stability analysis from the mentioned areas are applied, using Lyapunov functions and functionals to show that the corresponding cost function or cost functional of the MPC scheme is a Lyapunov function or functional. For networks, we show that under a small-gain condition the network with an optimal control obtained by an MPC scheme for networks has the ISDS property or the ISS property, respectively.


Introduction
The approach of MPC started in the late 1970s and spread in the 1990s with the increasing use of automation processes in industry. It has a wide range of applications; see the survey papers [1, 2].
The aim of MPC is to control a system such that it follows a certain trajectory or to steer the solution of a system into an equilibrium point under constraints and unknown disturbances. Additionally, the control should be optimal in view of defined goals, for example, optimal regarding effort. An overview of MPC can be found in the books [3-5] and the Ph.D. theses [6-8], for example.
We consider systems with disturbances of the form
ẋ(t) = f(x(t), w(t), u(t)), (1.1)
where w ∈ W ⊆ L_∞(R_+, R^P) is the unknown disturbance and W is a compact and convex set containing the origin. The input u is a measurable and essentially bounded control subject to input constraints u ∈ U, where U ⊆ R^m is a compact and convex set containing the origin in its interior. The function f is assumed to be locally Lipschitz in x, uniformly in w and u, to guarantee that a unique solution of (1.1) exists, which is denoted by x(t; x^0, w, u), or x(t) in short.
The control input is obtained by an MPC scheme and applied to the system. We are interested in the stability of MPC. It was shown in [9] that the application of the control obtained by an MPC scheme to a system does not guarantee that a system without disturbances is asymptotically stable. For applications, it is therefore desirable to analyze under which conditions stability of a system can be achieved using an MPC scheme. An overview of existing results regarding stability and MPC for systems without disturbances can be found in [10], and recent results are included in [5-8]. A general framework to design stabilizing MPC controllers for nonlinear systems can be found in [11].
Taking the unknown disturbance w ∈ W into account, MPC schemes which guarantee input-to-state stability (ISS) were developed. First results regarding ISS for MPC of nonlinear discrete-time systems can be found in [12]. Furthermore, results using the ISS property with initial states from a compact set, namely regional ISS, are given in [6, 13]. In [14, 15], an MPC scheme that guarantees ISS using the so-called min-max approach was given. This approach uses a closed-loop formulation of the optimization problem to compensate the effect of the unknown disturbance.
Stable MPC schemes for interconnected systems were investigated in [6, 16, 17], where in [6, 16] conditions to assure ISS of the whole system were derived and in [17] asymptotically stable MPC schemes without terminal constraints were provided. Note that in [17], the subsystems are not directly connected, but they exchange information over the network to control themselves according to state constraints.
One research topic of this paper provides a new direction in MPC: we combine the input-to-state dynamical stability (ISDS) property, introduced in [18], with MPC for single and interconnected systems. The provided MPC scheme uses the min-max approach, see [14, 15]. Conditions are derived such that single closed-loop systems and whole closed-loop networks with an optimal control obtained by an MPC scheme have the ISDS property. The results of [18] for single systems and the ISDS small-gain theorem for networks, see [19], are applied to prove the main results of the corresponding section.
The advantage of ISDS over ISS for MPC is that the ISDS estimate takes only recent values of the disturbance into account due to the memory-fading effect, see [18, 19]. In particular, if the disturbance tends to zero, then the ISDS estimate tends to zero as well. Moreover, the decay rate can be derived using ISDS-Lyapunov functions. This information can be useful in applications of MPC.
In practice, there are problems where the advantages of ISDS over ISS, in particular the memory-fading effect of the ISDS estimate, lead to more efficient controllers with respect to costs. Examples are the control of airplanes, robots, or automated transportation vehicles.
A second research topic of this paper is the stability analysis of MPC schemes for systems with time-delays. Time-delays occur in many applications, for example, in communication networks, logistic networks, or biological systems. The presence of time-delays can lead to instability of a network, see [9], where it was shown that the application of the control obtained by an MPC scheme to a system does not guarantee that a system without disturbances is asymptotically stable.
Therefore, we are interested in the analysis of networks with time-delays in view of input-to-state stability (ISS). In [2, 20], tools based on the Lyapunov-Razumikhin and Lyapunov-Krasovskii approaches are provided to verify the ISS property of time-delay systems.

Preliminaries
By x^T we denote the transpose of a vector x ∈ R^n, n ∈ N; furthermore, R_+ := [0, ∞) and R^n_+ denotes the positive orthant {x ∈ R^n : x ≥ 0}, where we use the standard partial order for x, y ∈ R^n given by x ≥ y if and only if x_i ≥ y_i for all i. The essential supremum norm of a Lebesgue-measurable function f : R_+ → R^n is denoted by ‖f‖_∞. We denote the set of essentially bounded Lebesgue-measurable functions u from R_+ to R^m by L_∞(R_+, R^m). ∇V denotes the gradient of a function V : R^N → R_+. For a function v : R_+ → R^m, we define its restriction to the interval [s_1, s_2] by v_{[s_1,s_2]}(t) := v(t) for t ∈ [s_1, s_2] and v_{[s_1,s_2]}(t) := 0 otherwise.

Definition 2.1. We define the following classes of functions:
K := {γ : R_+ → R_+ | γ is continuous and strictly increasing with γ(0) = 0},
K_∞ := {γ ∈ K | γ is unbounded},
KL := {β : R_+ × R_+ → R_+ | β(·, t) ∈ K for all t ≥ 0, β(r, ·) is strictly decreasing to zero for all r > 0},
KLD := {μ ∈ KL | μ(r, t + s) = μ(μ(r, t), s) for all r, t, s ≥ 0},
P := {γ : R_+ → R_+ | γ(0) = 0, γ(r) > 0 for all r > 0}. (2.4)

We will call functions of class P positive definite. Now, we recall some results related to ISDS. Therefore, we consider systems of the form
ẋ(t) = f(x(t), u(t)), x(0) = x^0, (2.5)
where t ∈ R_+ is the continuous time, ẋ denotes the derivative of the state x ∈ R^N, u ∈ L_∞(R_+, R^m) is the input, and f : R^N × R^m → R^N is assumed to be locally Lipschitz continuous in x, uniformly in u, to have existence and uniqueness of the solution, denoted by x(t; x^0, u), or x(t) for short, for the given initial value x(0) = x^0. The notion of ISDS was introduced in [18].
Definition 2.2 (Input-to-state dynamical stability (ISDS)). The system (2.5) is called input-to-state dynamically stable (ISDS) if there exist μ ∈ KLD and η, γ ∈ K_∞ such that for all initial values x^0 and all inputs u the ISDS estimate holds for all t ∈ R_+. μ is called the decay rate, η the overshoot gain, and γ the robustness gain.
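For reference, the ISDS estimate in the form introduced in [18] reads

\[
|x(t; x^0, u)| \le \max\Big\{ \mu\big(\eta(|x^0|), t\big),\; \operatorname*{ess\,sup}_{\tau \in [0,t]} \mu\big(\gamma(|u(\tau)|), t-\tau\big) \Big\}, \qquad t \in \mathbb{R}_+ .
\]

The second term makes the memory-fading effect mentioned in the Introduction explicit: the disturbance value at time τ enters the bound discounted by the decay rate over the elapsed time t − τ, so sufficiently old disturbances are eventually forgotten.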
A useful tool to check whether a system has the ISDS property is the following.
Definition 2.3 (ISDS-Lyapunov function). Given ε > 0, a locally Lipschitz continuous function V : R^N → R_+ is an ISDS-Lyapunov function of the system (2.5) if the bound (2.7) and the decay condition (2.8) hold, for almost all x ∈ R^N \ {0} and all u, where μ ∈ KLD solves
(d/dt) μ(r, t) = −g(μ(r, t)), r, t > 0, (2.9)
for a locally Lipschitz continuous function g : R_+ → R_+.
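As a simple illustration of the decay equation (2.9) (a worked example, not taken from the source): the choice g(r) = λr with a constant λ > 0 yields

\[
\mu(r, t) = e^{-\lambda t}\, r ,
\]

which indeed belongs to KLD, since μ(r, t + s) = e^{−λ(t+s)} r = μ(μ(r, t), s), and corresponds to an exponential ISDS decay rate.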
The equivalence of ISDS and the existence of an ISDS-Lyapunov function was proved in [18].
Theorem 2.4. The system (2.5) is ISDS with μ ∈ KLD and η, γ ∈ K_∞ if and only if for each ε > 0 there exists an ISDS-Lyapunov function V.

Remark 2.5. Note that for a system which possesses the ISDS property, the decay rate μ and the gains η, γ in Definition 2.2 are exactly the same as in Definition 2.3.

Now, consider networks of the form
ẋ_i(t) = f_i(x_1(t), ..., x_n(t), u_i(t)), i = 1, ..., n, (2.10)
where n ∈ N and x_i ∈ R^{N_i}. If we define N := N_1 + ... + N_n, x := (x_1^T, ..., x_n^T)^T, u := (u_1^T, ..., u_n^T)^T, and f := (f_1^T, ..., f_n^T)^T, then (2.10) can be written as a system of the form (2.5), which we call the whole system.
The ith subsystem of (2.10) is called ISDS if there exist a KLD-function μ_i and functions η_i, γ_i, γ_ij ∈ K_∞ ∪ {0} such that the solution x_i(t; x_i^0, u_i) = x_i(t) satisfies the corresponding ISDS estimate for all initial values x_i^0, all internal inputs x_j, j ≠ i, all external inputs u_i, and all t ∈ R_+. We collect all the gains in a matrix Γ, defined by Γ := (γ_ij)_{n×n} with γ_ii ≡ 0, i, j = 1, ..., n. This defines a map Γ : R^n_+ → R^n_+ for s ∈ R^n_+ by
Γ(s) := (max_j γ_1j(s_j), ..., max_j γ_nj(s_j))^T. (2.12)
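As a small illustration of the map (2.12) (an example, not from the source): for n = 2 the gain operator reduces to

\[
\Gamma(s) = \begin{pmatrix} \gamma_{12}(s_2) \\ \gamma_{21}(s_1) \end{pmatrix}, \qquad s = (s_1, s_2)^T \in \mathbb{R}^2_+ ,
\]

and the small-gain condition recalled below reduces to the cycle condition γ_12(γ_21(r)) < r for all r > 0.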
In view of ISDS of the whole network, we say that Γ satisfies the small-gain condition (SGC), see [24], if
Γ(s) ≱ s for all s ∈ R^n_+ \ {0}. (2.13)
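For linear gains γ_ij(s) = a_ij·s, the small-gain condition (2.13) for the max-type operator Γ is equivalent to the requirement that the product of the gains along every cycle of the interconnection graph is less than one. The following Python sketch, with purely hypothetical gain values, checks this cycle condition for a small network; it is an illustration added here and not part of the schemes proposed in the paper.

```python
from itertools import permutations

def max_cycle_gain(A):
    """Return the largest gain product over all simple cycles of the
    interconnection graph encoded by A, where A[i][j] is the (linear)
    gain gamma_ij from subsystem j to subsystem i (A[i][i] == 0)."""
    n = len(A)
    best = 0.0
    # brute force over all simple cycles (fine for small networks)
    for length in range(2, n + 1):
        for cycle in permutations(range(n), length):
            if cycle[0] != min(cycle):      # avoid counting rotations of the same cycle
                continue
            prod = 1.0
            nodes = list(cycle) + [cycle[0]]
            for i, j in zip(nodes, nodes[1:]):
                prod *= A[i][j]             # gain from j into i along the cycle
            best = max(best, prod)
    return best

# hypothetical gain matrix for a network of three subsystems
A = [[0.0, 0.4, 0.0],
     [0.5, 0.0, 0.3],
     [0.2, 0.6, 0.0]]

print("small-gain condition satisfied:", max_cycle_gain(A) < 1.0)
```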
To recall the Lyapunov version of the small-gain theorem for ISDS, we need the following.

Definition 2.6. A continuous path σ ∈ K^n_∞ is called an Ω-path with respect to Γ if
(i) for each i, the function σ_i^{-1} is locally Lipschitz continuous on (0, ∞);
(ii) for every compact set P ⊂ (0, ∞), there are constants 0 < K_1 < K_2 such that for all points of differentiability of σ_i^{-1} and all i = 1, ..., n we have K_1 ≤ (σ_i^{-1})'(r) ≤ K_2 for all r ∈ P;
(iii) Γ(σ(r)) < σ(r) for all r > 0.
More details about Ω-paths can be found in [24-26].
The following proposition is useful for the construction of an ISDS-Lyapunov function for the whole system.

Proposition 2.7. Let Γ ∈ (K_∞ ∪ {0})^{n×n} be a gain matrix. If Γ satisfies the small-gain condition (2.13), then there exists an Ω-path σ with respect to Γ.
The proof can be found in [24], for example.
We assume that for each subsystem of (2.10) there exists a function V_i : R^{N_i} → R_+, which is locally Lipschitz continuous and positive definite. Given ε_i > 0, a function V_i : R^{N_i} → R_+, which is locally Lipschitz continuous on R^{N_i} \ {0}, is an ISDS-Lyapunov function of the ith subsystem in (2.10) if it satisfies the following: (i) there exists a function η_i ∈ K_∞ such that for all x_i ∈ R^{N_i} the bound (2.15) holds; (ii) there exist functions γ_ij, γ_i ∈ K_∞ ∪ {0}, j = 1, ..., n, j ≠ i, such that for almost all x_i ∈ R^{N_i} \ {0}, all inputs x_j, j ≠ i, and u_i the decay condition (2.16) holds, where μ_i ∈ KLD solves (d/dt) μ_i(r, t) = −g_i(μ_i(r, t)), r, t > 0, for some locally Lipschitz function g_i : R_+ → R_+.

Now, we recall the main result of [19], which establishes ISDS for networks using Lyapunov functions.

Theorem 2.8. Assume that each subsystem of (2.10) has the ISDS property, that is, for each subsystem and for each ε_i > 0 there exists an ISDS-Lyapunov function V_i which satisfies (2.15) and (2.16). Let Γ be given by (2.12), satisfying the small-gain condition (2.13), and let σ ∈ K^n_∞ be an Ω-path from Proposition 2.7 with respect to Γ. Then, the whole system (2.5) has the ISDS property and its ISDS-Lyapunov function is given by
V(x) := max_i σ_i^{-1}(V_i(x_i)).

As a second topic of this paper, we are going to establish ISS with the help of MPC for TDSs of the form
ẋ(t) = f(x_t, u(t)), t ∈ R_+, x_0(τ) = ξ(τ), τ ∈ [−θ, 0], (2.18)
where x_t ∈ C([−θ, 0]; R^N) denotes the state segment given by x_t(τ) := x(t + τ), τ ∈ [−θ, 0], and ẋ represents the right-hand side derivative. θ is the maximum involved delay, and f : C([−θ, 0]; R^N) × R^m → R^N is locally Lipschitz continuous on any bounded set. This guarantees that the system (2.18) admits a unique solution on a maximal interval [−θ, T_max), 0 < T_max ≤ ∞, which is locally absolutely continuous, see [27, Section 2.6]. We denote the solution by x(t; ξ, u), or x(t) for short, satisfying the initial condition x_0 ≡ ξ for any ξ ∈ C([−θ, 0], R^N). The notion of ISS for TDSs reads as follows.
Definition 2.9 (ISS for TDSs). The system (2.18) is called ISS if there exist β ∈ KL and γ ∈ K such that for all ξ, all u, and all t ∈ R_+ the estimate |x(t)| ≤ max{β(‖ξ‖_{[−θ,0]}, t), γ(‖u‖_∞)} holds.

In [20], ISS-Lyapunov-Krasovskii functionals are introduced to check whether a TDS has the ISS property. Given a locally Lipschitz continuous functional V : C([−θ, 0]; R^N) → R_+, the upper right-hand side derivative D^+V of the functional V along the solution x(t; ξ, u) is defined according to [27, Chapter 5.2] by
D^+V(φ, u) := lim sup_{h → 0^+} (1/h) (V(x_{t_0+h}) − V(φ)), (2.20)
where x_{t_0+h} ∈ C([−θ, 0]; R^N) is generated by the solution x(t; φ, u) of ẋ(t) = f(x_t, u(t)), t ∈ (t_0, t_0 + h), with x_{t_0} := φ ∈ C([−θ, 0]; R^N).

Remark 2.10. Note that in contrast to (2.20), the definition of D^+V in [20] is slightly different, since there the functional is assumed to be only continuous, and in that case D^+V can take infinite values. Nevertheless, the results in [20] also hold true if the functional is chosen to be locally Lipschitz, according to the results in [28] and using (2.20).
Definition 2.11 (ISS-Lyapunov-Krasovskii functional). A locally Lipschitz continuous functional V : C([−θ, 0]; R^N) → R_+ is called an ISS-Lyapunov-Krasovskii functional for the system (2.18) if there exist functions ψ_1, ψ_2 ∈ K_∞ and functions χ, α ∈ K such that the following inequalities hold:
ψ_1(|φ(0)|) ≤ V(φ) ≤ ψ_2(‖φ‖_{[−θ,0]}), (2.21)
V(φ) ≥ χ(|u|) ⟹ D^+V(φ, u) ≤ −α(V(φ)). (2.23)

The next theorem was proved in [20].
Theorem 2.12. If there exists an ISS-Lyapunov-Krasovskii functional V for the system (2.18), then the system (2.18) has the ISS property.

Now, we investigate networks with time-delays: we consider n ∈ N interconnected TDSs of the form
ẋ_i(t) = f_i(x_t^1, ..., x_t^n, u_i(t)), i = 1, ..., n, (2.24)
where θ denotes the maximal involved delay and x_t^j, j ≠ i, can be interpreted as internal inputs of the ith subsystem. The functionals f_i : C([−θ, 0]; R^{N_1}) × ... × C([−θ, 0]; R^{N_n}) × R^{m_i} → R^{N_i} are locally Lipschitz continuous on any bounded set. We denote the solution of a subsystem by x_i(t; ξ_i, u), or x_i(t) for short, satisfying the initial condition x_i^0 ≡ ξ_i for any ξ_i ∈ C([−θ, 0], R^{N_i}). The ISS property for a subsystem of (2.24) reads as follows: the ith subsystem of (2.24) is ISS if there exist β_i ∈ KL and γ_ij, γ_i ∈ K_∞ ∪ {0}, j = 1, ..., n, j ≠ i, such that the corresponding ISS estimate holds for all t ∈ R_+. If we define N := N_1 + ... + N_n, x := (x_1^T, ..., x_n^T)^T, u := (u_1^T, ..., u_n^T)^T, and f := (f_1^T, ..., f_n^T)^T, then (2.24) can be written as a system of the form (2.18), which we call the whole system. The ISS-Lyapunov-Krasovskii functionals for the subsystems are defined as follows.
A locally Lipschitz continuous functional V_i : C([−θ, 0]; R^{N_i}) → R_+ is an ISS-Lyapunov-Krasovskii functional of the ith subsystem of (2.24) if there exist functionals V_j, j = 1, ..., n, which are positive definite and locally Lipschitz continuous on C([−θ, 0]; R^{N_j}), and functions ψ_{1i}, ψ_{2i} ∈ K_∞, χ_ij, χ_i ∈ K ∪ {0}, j ≠ i, and α_i ∈ K such that the conditions (2.26) hold. The gain matrix is defined by Γ := (χ_ij)_{n×n}, χ_ii ≡ 0, i = 1, ..., n, which defines a map Γ : R^n_+ → R^n_+ as in (2.12).
The next theorem is one of the main results of [21] and provides a construction of an ISS-Lyapunov-Krasovskii functional for the whole system.

Theorem 2.13 (ISS-Lyapunov-Krasovskii theorem for general networks with time-delays). Consider an interconnected system of the form (2.24). Assume that each subsystem has an ISS-Lyapunov-Krasovskii functional V_i, which satisfies the conditions (2.26), i = 1, ..., n. If the corresponding gain matrix Γ satisfies the small-gain condition (2.13), then
V(φ) := max_i σ_i^{-1}(V_i(φ_i))
is an ISS-Lyapunov-Krasovskii functional for the whole system of the form (2.18), which is then ISS, where σ = (σ_1, ..., σ_n)^T is an Ω-path as in Definition 2.6 and φ = (φ_1, ..., φ_n)^T ∈ C([−θ, 0]; R^N). The Lyapunov gain is given by χ(r) := max_i σ_i^{-1}(χ_i(r)), r > 0.

Now, we present the new directions in MPC: ISDS and ISS for single and interconnected systems with and without time-delays. We start with MPC schemes guaranteeing ISDS.

MPC and ISDS
In this section, we combine ISDS and MPC for nonlinear single and interconnected systems. Conditions are derived which assure ISDS of the closed-loop system obtained by applying to the system (1.1) the control calculated by an MPC scheme.

Single Systems
We consider systems of the form (1.1) and we use the min-max approach to calculate an optimal control: to compensate the effect of the disturbance w, we apply a feedback control law π(t, x(t)) to the system. An optimal control law is obtained by solving the finite horizon optimal control problem (FHOCP), which consists of the minimization of the cost function J with respect to π(t, x(t)) and the maximization of the cost function J with respect to the disturbance w. The following definition is taken from [14, 15] with a slight adjustment, using ε here in order to apply the ISDS property to the FHOCP.

Definition 3.1 (Finite horizon optimal control problem (FHOCP)). Let 1 > ε > 0 be given. Let T > 0 be the prediction horizon and u(t) = π(t, x(t)) a feedback control law. The finite horizon optimal control problem for a system of the form (1.1) is formulated as (3.1), where x^0 ∈ R^N is the initial value of the system at time t, the terminal region Ω is a compact and convex set with the origin in its interior, and π(t, x(t)) is essentially bounded, locally Lipschitz in x, and measurable in t. l − l_w is the stage cost, where l : R^N × R^m → R_+ penalizes the distance of the state from the equilibrium point 0 of the system as well as the control effort, and l_w : R^P → R_+ penalizes the disturbance, which influences the system's behavior. l and l_w are locally Lipschitz continuous with l(0, 0) = 0 and l_w(0) = 0, and V_f : Ω → R_+ is the terminal penalty.
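A sketch of the min-max problem described in Definition 3.1, written out under the assumption that the cost is the integral of the stage cost plus the terminal penalty; the ε-dependent weighting and the precise constraint formulation follow the source and are omitted here:

\[
\min_{\pi}\;\max_{w \in \mathcal{W}}\; J(x^0, \pi, w; t, T), \qquad
J = \int_t^{t+T} \big[\, l\big(x(s), \pi(s, x(s))\big) - l_w\big(w(s)\big) \,\big]\, ds \;+\; V_f\big(x(t+T)\big),
\]

subject to ẋ(s) = f(x(s), w(s), π(s, x(s))), x(t) = x^0, the input constraint u ∈ U, the disturbance constraint w ∈ W, and the terminal constraint x(t + T) ∈ Ω.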
The FHOCP is solved at the sampling instants t = kΔ, k ∈ N, Δ ∈ R_+. The optimal solution is denoted by π*(t', x(t'); t, T) and w*(t'), t' ∈ [t, t + T]. The optimal cost function is denoted by J*(x^0, π*, w*; t, T). The control input to the system (1.1) is defined in the usual receding horizon fashion as
u(t') = π*(t', x(t'); t, T), t' ∈ [t, t + Δ). (3.2)
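The receding horizon mechanism in (3.2) can be summarized by the following Python sketch. The solver call solve_fhocp is a hypothetical placeholder for whatever numerical routine solves the min-max FHOCP; only the interplay of sampling, optimization, and applying the first portion of the optimal feedback is illustrated.

```python
import numpy as np

def run_mpc(f, solve_fhocp, x0, T, delta, n_steps, dt=1e-2):
    """Receding-horizon loop: at every sampling instant t_k = k*delta the
    FHOCP is solved on [t_k, t_k + T] and the resulting optimal feedback
    pi_star is applied on [t_k, t_k + delta) only."""
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for k in range(n_steps):
        t_k = k * delta
        # hypothetical solver: returns the optimal feedback law pi*(t, x)
        pi_star = solve_fhocp(f, x, t_k, T)
        # apply pi* over one sampling interval (simple Euler integration;
        # the disturbance w is unknown and here simply set to zero)
        t = t_k
        while t < t_k + delta:
            u = pi_star(t, x)
            w = np.zeros(1)            # placeholder for the unknown disturbance
            x = x + dt * f(x, w, u)
            t += dt
        trajectory.append(x.copy())
    return np.array(trajectory)
```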
In the following, we need some definitions, which can be found, for example, in [5].

Definition 3.2. (i) A feedback control π is called a feasible solution of the FHOCP at time t if, for a given initial value x^0 at time t, the feedback π(t', x(t')), t' ∈ [t, t + T], controls the state of the system (1.1) into Ω at time t + T, that is, x(t + T) ∈ Ω, for all w ∈ W.

(ii) A set Ω ⊆ R^N is called positively invariant if for all x^0 ∈ Ω a feedback control π keeps the trajectory of the system (1.1) in Ω, that is, x(t) ∈ Ω for all t and all w ∈ W.
To prove that the system (1.1) with the control obtained by solving the FHOCP has the ISDS property, we need the following assumption.

Assumption 3.3. (1) There exist functions α_l, α_w ∈ K_∞, where α_l is locally Lipschitz continuous, such that the stage-cost bounds (3.4) hold.

(2) The FHOCP in Definition 3.1 admits a feasible solution at the initial time t = 0.

(3) There exists a controller u(t) = π̄(t, x(t)) such that the system (1.1) has the ISDS property.

(4) For each 1 > ε > 0, there exists a locally Lipschitz continuous function V_f(x) such that the terminal region Ω is a positively invariant set and the conditions (3.5) and (3.6) hold, where η ∈ K_∞, w ∈ W, and V̇_f denotes the derivative of V_f along the solution of the system (1.1) with the control u ≡ π̄ from point (3) of this assumption.

(5) For each sufficiently small ε > 0, the condition (3.7) holds.

(6) The optimal cost function J*(x^0, π*, w*; t, T) is locally Lipschitz continuous.

Remark 3.4. In [6], it is discussed that a different stage cost, defined, for example, by l_s := l − l_w, can be used for the FHOCP. In view of stability, the stage cost l_s has to fulfill some additional assumptions, see [6, Chapter 3.4].
Remark 3.5. The assumption (3.7) is needed to assure that the cost function satisfies the lower estimation in (2.7). However, we have not investigated whether this condition is restrictive or not. In the case of discrete-time systems and the corresponding cost function, the assumption (3.7) is not necessary, see the proofs in [6, 12-15].
The following theorem establishes ISDS of the system (1.1), using the optimal control input u ≡ π* obtained from solving the FHOCP.

Theorem 3.6. Consider a system of the form (1.1). Under Assumption 3.3, the system resulting from the application of the predictive control strategy, namely,
ẋ(t) = f(x(t), w(t), π*(t, x(t))), t ∈ R_+, x(0) = x^0,
possesses the ISDS property.
Remark 3.7. Note that the gains and the decay rate in the definition of the ISDS property, Definition 2.2, can be calculated using Assumption 3.3, as partially displayed in the following proof.
Proof. We show that the optimal cost function V(x^0) := J*(x^0, π*, w*; t, T) is an ISDS-Lyapunov function, following these steps: (i) the control problem admits a feasible solution π for all times t > 0; (ii) J*(x^0, π*, w*; t, T) satisfies the conditions (2.7) and (2.8).
Then, by application of Theorem 2.4, the ISDS property follows.
Let us first prove feasibility: we suppose that a feasible solution π(t', x(t')), t' ∈ [t, t + T], at time t exists. For Δ > 0, we construct a control by
π̃(t', x(t')) := π(t', x(t')), t' ∈ [t + Δ, t + T], π̃(t', x(t')) := π̄(t', x(t')), t' ∈ (t + T, t + T + Δ], (3.8)
where π̄ is the controller from Assumption 3.3, point (3). Since π steers x(t + Δ) into x(t + T) ∈ Ω and Ω is a positively invariant set, π̃(t', x(t')) keeps the system's trajectory in Ω for t + T < t' ≤ t + T + Δ under the constraints of the FHOCP. This means that from the existence of a feasible solution for the time t, we obtain a feasible solution for the time t + Δ. Since we assume that a feasible solution of the FHOCP at the time t = 0 exists (Assumption 3.3, point (2)), it follows that a feasible solution exists for every t > 0.

We now replace π in (3.8) by π*. Then, it follows from (3.6) that (3.9) holds. From this and with (3.5), it holds that (3.10). Now, with Assumption 3.3, point (5), we have (3.11). This shows that J* satisfies (2.7). Now, denote x^0 := x(t + Δ); then (3.12) holds and therefore (3.13) follows. Now, we show that J* satisfies the condition (2.8). Note that by Assumption 3.3, point (6), J* is locally Lipschitz continuous. With (3.13), it holds that (3.14).

This leads to (3.15).
For h → 0+ and using the first point of Assumption 3.3, we obtain (3.16). By definition of γ(r) := η(α_l^{-1}(2α_w(r))) and g(r) := 1/2 α_l(η^{-1}(r)), r ≥ 0, this implies (3.17), where the function g is locally Lipschitz continuous. We conclude that J* is an ISDS-Lyapunov function for the system
ẋ(t) = f(x(t), w(t), π*(t, x(t))), (3.18)
and by application of Theorem 2.4 the system has the ISDS property.
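To illustrate the gain and decay-rate formulas just obtained (a worked example with hypothetical quadratic bounds, not taken from the source): suppose α_l(r) = c_l r², α_w(r) = c_w r², and η(r) = c r² with constants c_l, c_w, c > 0. Then

\[
\gamma(r) = \eta\big(\alpha_l^{-1}(2\alpha_w(r))\big) = \frac{2\, c\, c_w}{c_l}\, r^2,
\qquad
g(r) = \tfrac{1}{2}\,\alpha_l\big(\eta^{-1}(r)\big) = \frac{c_l}{2c}\, r ,
\]

so the decay equation (2.9) becomes linear and yields the exponential decay rate μ(r, t) = e^{−c_l t/(2c)} r, as in the example following (2.9).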
In the next subsection, we extend the analysis of ISDS for MPC from single systems to interconnected systems.

Interconnected Systems
We consider interconnected systems with disturbances of the form
ẋ_i(t) = f_i(x_1(t), ..., x_n(t), w_i(t), u_i(t)), i = 1, ..., n, (3.19)
where u_i ∈ R^{M_i}, measurable and essentially bounded, are the control inputs and w_i ∈ R^{P_i} are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints x_i ∈ X_i, w_i ∈ W_i, and u_i ∈ U_i (3.20), where X_i ⊆ R^{N_i}, W_i ⊆ R^{P_i}, and U_i ⊆ R^{M_i} are compact and convex sets containing the origin in their interior. Now, we are going to determine an MPC scheme for interconnected systems. An overview of existing distributed and hierarchical MPC schemes can be found in [29]. The scheme used in this work is inspired by the min-max approach for single systems as in Definition 3.1, see [14, 15].
At first, we determine the cost function of the ith subsystem, J_i(x_i^0, (x_j)_{j≠i}, π_i, w_i; t, T), given in (3.21), where 1 > ε_i > 0, x_i^0 ∈ X_i is the initial value of the ith subsystem at time t, and π_i ∈ Π_i is a feedback, essentially bounded, locally Lipschitz in x, and measurable in t, where Π_i ⊆ R^{M_i} is a compact and convex set containing the origin in its interior. l_i − l_{w_i} − l_ij is the stage cost, where l_i : R^{N_i} × R^{M_i} → R_+ penalizes the distance of the state from the equilibrium and the control effort, l_{w_i} : R^{P_i} → R_+ penalizes the disturbance, and l_ij : R^{N_j} → R_+ penalizes the internal input for all j = 1, ..., n, j ≠ i. l_i, l_{w_i}, and l_ij are locally Lipschitz continuous functions with l_i(0, 0) = 0, l_{w_i}(0) = 0, l_ij(0) = 0, and V_{f_i} : Ω_i → R_+ is the terminal penalty of the ith subsystem. In contrast to single systems, we add the terms l_ij(x_j), j ≠ i, to the cost function due to the interconnected structure of the subsystems. Here, two problems arise: the formulation of an optimal control problem for each subsystem and the calculation/determination of the internal inputs x_j, j ≠ i.
We keep the minimization of J_i with respect to π_i and the maximization of J_i with respect to w_i as in Definition 3.1 for single systems. In the spirit of ISS/ISDS, which treat the internal inputs as "disturbances," we also maximize the cost function with respect to x_j, j ≠ i (worst-case approach). Since we assume that x_j ∈ X_j, we obtain an optimal solution (π_i*, w_i*, x_j*), j ≠ i, of the control problem. The drawbacks of this approach are that, on the one hand, we do not use the system equations (3.19) to predict x_j, j ≠ i, and, on the other hand, the computation of the optimal solution could be numerically inefficient, especially if the number of subsystems n is large or/and the sets X_i are large. Moreover, due to the worst-case approach, namely the maximization over x_j, the obtained optimal control π_i* for each subsystem could be very conservative, which leads to very conservative ISS or ISDS estimates.
To avoid these drawbacks of the maximization of J_i with respect to x_j, j ≠ i, one could instead use the system equations (3.19) to predict x_j, j ≠ i.
A numerically efficient way to calculate the optimal solutions (π_i*, w_i*) of the subsystems would be a parallel calculation. Due to the interconnected structure of the system, information about the states of the subsystems has to be exchanged. However, this exchange of information may prevent an optimal solution (π_i*, w_i*) from being calculated: to the best of our knowledge, no theorem has been proved that guarantees the existence of an optimal solution of the optimal control problem using such a parallel strategy. We conclude that a parallel calculation cannot help in our case.
Another approach to an MPC scheme for networks is inspired by the hierarchical MPC scheme in [30]. One could use predictions of the internal inputs x_j, j ≠ i, as follows: at sampling time t = kΔ, k ∈ N, Δ > 0, all subsystems calculate the optimal solution iteratively. This means that for the calculation of the optimal solution of the ith subsystem, the currently "optimized" trajectories of the subsystems 1, ..., i − 1 are used, denoted by x_p^{opt,kΔ}, p = 1, ..., i − 1, and the "optimal" trajectories of the subsystems i + 1, ..., n from the optimization at sampling time t = (k − 1)Δ are used; a sketch of this iteration is given below. The advantage of this approach would be that the optimal solution is not as conservative as with the min-max approach and that the calculation of the optimal solution could be performed in a numerically efficient way, due to the usage of the model to predict the "optimal" trajectories and the avoidance of the maximization over x_j, j ≠ i. The drawback is that, using this hierarchical approach, the optimal cost function of each subsystem depends on the trajectories x_j^{opt,·}, j ≠ i. Then, to the best of our knowledge, it is not possible to show that the optimal cost functions are ISDS-Lyapunov functions of the subsystems, which is a crucial step for proving ISDS of a subsystem or of the whole network, because no helpful estimates for the Lyapunov function properties can be performed due to the dependence of the optimal cost functions on the trajectories x_j^{opt,·}, j ≠ i.
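The sequential update just described can be sketched as follows; the optimizer solve_subproblem is a hypothetical placeholder, and only the bookkeeping of which neighbor trajectories (current iterate versus previous sampling instant) enter each subproblem is illustrated.

```python
def hierarchical_mpc_step(solve_subproblem, prev_trajectories, x_current, k, delta, T):
    """One sampling instant t = k*delta of the iterative scheme described above.

    prev_trajectories[j] holds the 'optimal' trajectory of subsystem j computed
    at the previous sampling instant t = (k-1)*delta; the list is updated in
    place as the subsystems are processed in the order i = 1, ..., n."""
    n = len(prev_trajectories)
    new_trajectories = list(prev_trajectories)   # start from the old predictions
    controls = []
    for i in range(n):
        # subsystems 1, ..., i-1: trajectories already optimized at time k*delta;
        # subsystems i+1, ..., n: trajectories from time (k-1)*delta
        neighbor_traj = [new_trajectories[j] for j in range(n) if j != i]
        pi_i, x_i_traj = solve_subproblem(i, x_current[i], neighbor_traj,
                                          k * delta, T)
        new_trajectories[i] = x_i_traj
        controls.append(pi_i)
    return controls, new_trajectories
```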

The FHOCP for the ith subsystem reads as (3.22), where the terminal region Ω_i is a compact and convex set with the origin in its interior.
The resulting optimal control of each subsystem is a feedback control law, that is, u_i(t) = π_i*(t, x_i*(t)), and π_i*(t, x_i*(t)) is essentially bounded, locally Lipschitz in x, and measurable in t, for all i = 1, ..., n.
To show that each subsystem and the whole system have the ISDS property using the mentioned distributed MPC scheme, we impose the following assumption on the ith subsystem of (3.19).

Assumption 3.8. (1) There exist functions α_{l_i}, α_{w_i} ∈ K_∞, where α_{l_i} is locally Lipschitz continuous, such that the stage cost satisfies bounds analogous to those in Assumption 3.3, point (1).

(2) The FHOCP admits a feasible solution at the initial time t = 0.
(3) There exists a controller u_i(t) = π̄_i(t, x(t)) such that the ith subsystem of (3.19) has the ISDS property.
(4) For each 1 > ε_i > 0, there exists a locally Lipschitz continuous function V_{f_i}(x_i) such that the terminal region Ω_i is a positively invariant set and the corresponding terminal-cost conditions hold for almost all x_i ∈ Ω_i, where η_i ∈ K_∞, w_i ∈ W_i, and V̇_{f_i} denotes the derivative of V_{f_i} along the solution of the ith subsystem of (3.19) with the control u_i ≡ π̄_i from point (3) of this assumption.
(5) For each sufficiently small ε_i > 0, the condition (3.25) holds.

(6) The optimal cost function J_i*(x_i^0, (x_j*)_{j≠i}, π_i*, w_i*; t, T) is locally Lipschitz continuous.

Now, we can state that each subsystem possesses the ISDS property using the mentioned MPC scheme.

Theorem 3.9. Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. Then, each subsystem resulting from the application of the control obtained by the FHOCP for each subsystem, namely,
ẋ_i(t) = f_i(x_1(t), ..., x_n(t), w_i(t), π_i*(t, x(t))), t ∈ R_+, x_i(0) = x_i^0, (3.26)
possesses the ISDS property.
Proof. Consider the ith subsystem. We show that the optimal cost function V_i(x_i^0) := J_i*(x_i^0, (x_j*)_{j≠i}, π_i*, w_i*; t, T) is an ISDS-Lyapunov function for the ith subsystem. We abbreviate x_j := (x_j*)_{j≠i}. By following the steps of the proof of Theorem 3.6, we conclude that there exists a feasible solution for all times t > 0 and that by (3.25) the functional V_i(x_i^0) satisfies the lower and upper bound condition of an ISDS-Lyapunov function. Note that by Assumption 3.8, point (6), J_i* is locally Lipschitz continuous. Moreover, V̇_i satisfies the decay condition of an ISDS-Lyapunov function for almost all x_i^0 ∈ X_i and all w_i* ∈ W_i, where the gains γ_i and γ_ij are defined analogously to the proof of Theorem 3.6 and g_i is locally Lipschitz continuous. Since i was chosen arbitrarily, we conclude that each subsystem has an ISDS-Lyapunov function. It follows that each subsystem has the ISDS property.
To investigate whether the whole system has the ISDS property, we collect all functions γ_ij in a matrix Γ := (γ_ij)_{n×n}, γ_ii ≡ 0, which defines a map as in (2.12).
Using the small-gain condition for Γ, the ISDS property for the whole system can be guaranteed.
Corollary 3.10. Consider an interconnected system of the form (3.19). Let Assumption 3.8 be satisfied for each subsystem. If Γ satisfies the small-gain condition (2.13), then the whole system possesses the ISDS property.
Proof. Each subsystem has an ISDS-Lyapunov function with gains γ_ij; this follows from Theorem 3.9. The matrix Γ satisfies the SGC, and hence all assumptions of Theorem 2.8 are satisfied. It follows that the whole system, with the ISDS-Lyapunov function V(x) = max_i σ_i^{-1}(V_i(x_i)) constructed there, has the ISDS property.
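A small sketch of how the aggregated Lyapunov function from Theorem 2.8 can be evaluated numerically; the subsystem Lyapunov functions and the Ω-path components are passed in as callables and are purely hypothetical placeholders.

```python
def whole_system_lyapunov(V_subsystems, sigma_inverses, x_parts):
    """Evaluate V(x) = max_i sigma_i^{-1}(V_i(x_i)) for given subsystem states.

    V_subsystems   : list of callables V_i mapping the state x_i to a value
    sigma_inverses : list of callables sigma_i^{-1} (inverse Omega-path components)
    x_parts        : list of subsystem states x_i"""
    return max(sig_inv(V_i(x_i))
               for V_i, sig_inv, x_i in zip(V_subsystems, sigma_inverses, x_parts))

# hypothetical example: quadratic subsystem functions and a linear Omega-path
V_subsystems = [lambda x: x**2, lambda x: 2.0 * x**2]
sigma_inverses = [lambda r: r, lambda r: r / 2.0]
print(whole_system_lyapunov(V_subsystems, sigma_inverses, [1.0, -0.5]))  # -> 1.0
```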
In the next section, we investigate the ISS property for MPC of TDS.

MPC and ISS for Time-Delay Systems
Now, we introduce the ISS property for MPC of TDSs. We derive conditions to assure that a single system, a subsystem of a network, and the whole system possess the ISS property when applying the control obtained by an MPC scheme for TDSs.

Single Systems
We consider systems of the form (2.18) with disturbances, namely
ẋ(t) = f(x_t, w(t), u(t)), t ∈ R_+, x_0(τ) = ξ(τ), τ ∈ [−θ, 0], (4.1)
where w ∈ W ⊆ L_∞(R_+, R^P) is the unknown disturbance and W is a compact and convex set containing the origin. The input u is an essentially bounded and measurable control subject to input constraints u ∈ U, where U ⊆ R^m is a compact and convex set containing the origin in its interior. The functional f has to satisfy the same conditions as in the previous sections to assure that a unique solution exists, which is denoted by x(t; ξ, w, u), or x(t) in short.
The aim is to find an optimal control u such that the system (4.1) has the ISS property. Due to the presence of disturbances, we apply a feedback control structure, which compensates the effect of the disturbance. This means that we apply a feedback control law π(t, x_t) to the system. In the rest of this section, we assume that π(t, x_t) ∈ Π is essentially bounded, locally Lipschitz in x_t, and measurable in t. The set Π ⊆ R^m is assumed to be compact and convex, containing the origin in its interior. We obtain an MPC control law by solving the following control problem.

Definition 4.1 (Finite horizon optimal control problem with time-delays (FHOCPTD)). Let T be the prediction horizon and π(t, x_t) a feedback control law. The finite horizon optimal control problem with time-delays for a system of the form (4.1) is formulated as the min-max problem min_π max_w J(ξ, π, w; t, T), where ξ ∈ C([−θ, 0], R^N) is the initial function of the system at time t, and the terminal region Ω and the state constraint set X ⊆ C([−θ, 0], R^N) are compact and convex sets with the origin in their interior. l − l_w is the stage cost, where l : R^N × R^m → R_+ and l_w : R^P → R_+ are locally Lipschitz continuous with l(0, 0) = 0 and l_w(0) = 0, and V_f : Ω → R_+ is the terminal penalty.

Now, we can state a theorem that assures ISS of MPC for a single time-delay system with disturbances.

Theorem 4.4. Let Assumption 4.3 be satisfied. Then, the system resulting from the application of the predictive control strategy, namely,
ẋ(t) = f(x_t, w(t), π*(t, x_t)), t ∈ R_+, x_0(τ) = ξ(τ), τ ∈ [−θ, 0],
possesses the ISS property.
Proof. The proof goes along the lines of the proof of Theorem 3.6 with changes according to time-delays and functionals; that is, we show that the optimal cost functional V(ξ) := J*(ξ, π*, w*; t, T) is an ISS-Lyapunov-Krasovskii functional.
To obtain a feasible solution for all times t > 0, we suppose that a feasible solution π(t', x_{t'}), t' ∈ [t, t + T], at time t exists. We construct a control π̃ as in (4.9), where π̄ is the controller from Assumption 4.3, point (3), and Δ > 0. π steers x(t + Δ) into x(t + T) ∈ Ω, and Ω is a positively invariant set. This means that π̃(t', x_{t'}) keeps the system trajectory in Ω for t + T < t' ≤ t + T + Δ under the constraints of the FHOCPTD. This implies that from the existence of a feasible solution for the time t, we obtain a feasible solution for the time t + Δ. By Assumption 4.3, point (2), there exists a feasible solution of the FHOCPTD at the time t = 0, and it follows that a feasible solution exists for every t > 0.
Replacing π in (4.9) by π*, it follows from (4.7) that (4.16) holds.
Letting h → 0+ and using the first point of Assumption 4.3, we get (4.17). By definition of χ(r) := ψ_2(α_l^{-1}(2α_w(r))) and α(r) := 1/2 α_l(ψ_2^{-1}(r)), r ≥ 0, this implies (4.18), that is, J* satisfies the condition (2.23). We conclude that J* is an ISS-Lyapunov-Krasovskii functional for the system
ẋ(t) = f(x_t, w(t), π*(t, x_t)), (4.19)
and by application of Theorem 2.12 the system has the ISS property.

Now, we consider interconnections of TDSs and provide conditions such that the whole network with an optimal control obtained from an MPC scheme has the ISS property.

Interconnected Systems
We consider interconnected systems with time-delays and disturbances of the form
ẋ_i(t) = f_i(x_t^1, ..., x_t^n, w_i(t), u_i(t)), i = 1, ..., n, (4.20)
where u_i ∈ R^{M_i} are the essentially bounded and measurable control inputs and w_i ∈ R^{P_i} are the unknown disturbances. We assume that the states, disturbances, and inputs fulfill the constraints x_t^i ∈ X_i, w_i ∈ W_i, and u_i ∈ U_i, where X_i ⊆ C([−θ, 0]; R^{N_i}), W_i ⊆ R^{P_i}, and U_i ⊆ R^{M_i} are compact and convex sets containing the origin in their interior. We assume the same MPC strategy for interconnected TDSs as in Section 3.2. The FHOCPTD for the ith subsystem of (4.20) reads as the corresponding min-max problem, where ξ_i ∈ X_i is the initial function of the ith subsystem at time t and the terminal region Ω_i is a compact and convex set with the origin in its interior. π_i(t, x_t) is essentially bounded, locally Lipschitz in x, and measurable in t, and Π_i ⊆ R^{M_i} is a compact and convex set containing the origin in its interior. l_i − l_{w_i} − l_ij is the stage cost, where l_i : R^{N_i} × R^{M_i} → R_+, l_{w_i} : R^{P_i} → R_+ penalizes the disturbance, and l_ij : R^{N_j} → R_+ penalizes the internal input for all j = 1, ..., n, j ≠ i. l_i, l_{w_i}, and l_ij are locally Lipschitz continuous functions with l_i(0, 0) = 0, l_{w_i}(0) = 0, l_ij(0) = 0, and V_{f_i} : Ω_i → R_+ is the terminal penalty of the ith subsystem. We obtain an optimal solution (π_i*, (x_j*)_{j≠i}, w_i*), where the control of each subsystem is a feedback control law which depends on the current states of the whole system. For the ith subsystem of (4.20), we suppose the following assumption.

Assumption 4.5. (1) There exist functions α_{l_i}, α_{w_i} ∈ K_∞, where α_{l_i} is locally Lipschitz continuous, such that the corresponding stage-cost bounds hold.
(2) The FHOCPTD admits a feasible solution at the initial time t = 0.
(3) There exists a controller u_i(t) = π̄_i(t, x_t) such that the ith subsystem of (4.20) has the ISS property.
(4) There exists a locally Lipschitz continuous functional V_{f_i}(φ_i) such that the terminal region Ω_i is a positively invariant set and the corresponding terminal-cost conditions hold for all φ_i ∈ Ω_i, where ψ_{2i} ∈ K_∞, φ_j ∈ C([−θ, 0], R^{N_j}), j = 1, ..., n, and w_i ∈ W_i. D^+V_{f_i} denotes the upper right-hand side derivative of the functional V_{f_i} along the solution of the ith subsystem of (4.20) with the control u_i ≡ π̄_i from point (3) of this assumption.

(6) The optimal cost functional J_i*(ξ_i, (x_j*)_{j≠i}, π_i*, w_i*; t, T) is locally Lipschitz continuous.

Theorem 4.6. Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. Then, each subsystem resulting from the application of the control obtained by the FHOCPTD for each subsystem possesses the ISS property.

Proof. Consider the ith subsystem. We show that the optimal cost functional V_i(ξ_i) := J_i*(ξ_i, (x_j*)_{j≠i}, π_i*, w_i*; t, T) is an ISS-Lyapunov-Krasovskii functional for the ith subsystem. We abbreviate x_t^j := (x_t^j)*, j ≠ i.
Following the lines of the proof of Theorem 4.4, we have that there exists a feasible solution for the ith subsystem for all times t > 0 and that the functional V_i(ξ_i) satisfies the condition
ψ_{1i}(|ξ_i(0)|) ≤ V_i(ξ_i) ≤ ψ_{2i}(‖ξ_i‖_a), (4.26)
using (4.25) and |ξ(0)| ≥ |ξ_i(0)|. Note that by Assumption 4.5, point (6), J_i* is locally Lipschitz continuous. We arrive at the estimate (4.27), which is equivalent to the dissipation condition (4.30).
This can be shown for each subsystem, and we conclude that each subsystem has an ISS-Lyapunov-Krasovskii functional. It follows that each subsystem is ISS in the maximum formulation.
We collect all functions χ_ij in a matrix Γ := (χ_ij)_{n×n}, χ_ii ≡ 0, which defines a map as in (2.12).
Using the small-gain condition for Γ, the following corollary follows from Theorem 4.6.
Corollary 4.7. Consider an interconnected system of the form (4.20). Let Assumption 4.5 be satisfied for each subsystem. If Γ satisfies the small-gain condition (2.13), then the whole system possesses the ISS property.
Proof. We know from Theorem 4.6 that each subsystem of (4.20) has an ISS-Lyapunov-Krasovskii functional with gains χ_ij. Since the matrix Γ satisfies the SGC, all assumptions of Theorem 2.13 are satisfied, and it follows that the whole system, written in the form (2.18), possesses the ISS property.

Conclusions
We have combined the ISDS property with MPC for nonlinear continuous-time systems with disturbances. For single systems, we have derived conditions such that the closed-loop system obtained by applying the control of an MPC scheme has the ISDS property, see Theorem 3.6.
Considering interconnected systems, we have proved in Theorem 3.9 that each subsystem possesses the ISDS property using the control of the proposed MPC scheme. Using a small-gain condition, we have shown in Corollary 3.10 that the whole network has the ISDS property.
Considering single systems with time-delays, we have proved in Theorem 4.4 that a TDS has the ISS property using the control obtained by an MPC scheme, where we have used ISS-Lyapunov-Krasovskii functionals. For interconnected TDSs, we have established a theorem that guarantees that each closed-loop subsystem obtained by applying the control of a decentralized MPC scheme has the ISS property, see Theorem 4.6. From this result and using Theorem 2.13, we have shown that the whole network with time-delays has the ISS property under a small-gain condition, see Corollary 4.7.
In future research, we are going to derive conditions for open-loop MPC schemes to assure ISDS and ISS of TDSs, respectively. The differences between both schemes, closed-loop and open-loop, will be analyzed and applied in practice.
Note that the results presented here are first steps of the approaches of ISDS for MPC and ISS for MPC with time-delays. More detailed studies should be done in these directions, especially regarding applications of these approaches. Therefore, numerical algorithms for the implementation of the proposed schemes, as in [5, 7], for example, should be developed. It could be analyzed if and how other existing algorithms could be used or how they should be adapted for the implementation of the results presented in this work. The advantages of using ISDS for MPC in contrast to ISS for MPC could be investigated and applied in practice.
Furthermore, one can investigate ISDS and ISS for unconstrained nonlinear MPC, as was done in [17, 31], for example.