Abstract and Applied Analysis, Hindawi Publishing Corporation, Volume 2014, Article ID 475808, doi:10.1155/2014/475808

Research Article

A Simplified Predictive Control of Constrained Markov Jump System with Mixed Uncertainties

Yanyan Yin,1 Yanqing Liu,1 Hamid R. Karimi,2 and Shuping He1

1 Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi 214122, China
2 Department of Engineering, Faculty of Engineering and Science, University of Agder, 4898 Grimstad, Norway

Received 17 November 2013; Revised 27 January 2014; Accepted 14 February 2014; Published 26 March 2014

Copyright © 2014 Yanyan Yin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A simplified model predictive control (MPC) algorithm is designed for discrete-time Markov jump systems with mixed uncertainties, namely, polytopic model uncertainty and partly unknown transition probabilities. The simplified algorithm runs in a finite number of steps. In the first steps, a simplified mode-dependent predictive controller drives the state into a neighborhood of the origin; the final-step mode-independent predictive controller then drives the state trajectory to the origin. The computational burden is dramatically reduced, so the algorithm costs much less time while retaining acceptable dynamic performance. Furthermore, polyhedral invariant sets are utilized to enlarge the initial feasible region. A numerical example is provided to illustrate the efficiency of the developed results.

1. Introduction

Hybrid systems are a class of dynamical systems characterized by the interaction of continuous and discrete dynamics. In the control community, researchers often view hybrid systems as continuous states combined with discrete switching, with the focus on the continuous state of the dynamic system. Switched systems are a natural outcome of this viewpoint. Since switched systems can model processes subject to abrupt changes, which arise widely in economics, communications, and manufacturing, they have attracted growing attention (see robust stabilization, finite-time analysis, and asynchronous switching). When the system model is linear and the switching is driven by a Markov process, one obtains the Markov jump linear system (MJS). Specifically, an MJS uses a stochastic Markov chain to describe random changes of the system parameters or structure, with the dynamics switching among models governed by a finite Markov chain. Owing to this capability, MJSs have been widely investigated over the last twenty years, and attractive pioneering results have been obtained (see controller design, 2D MJS control, peak-to-peak filtering, and finite-time control [7, 8]). However, the completely known transition probability (TP) matrices considered in those works are not always available, since the TP is often not fully accessible (consider the delay or packet loss in networked control systems); it is therefore necessary to investigate the partly unknown case.

On the other hand, practical systems are usually subject to input/output constraints. Model predictive control (MPC) is therefore introduced to handle constrained MJSs, since MPC can treat constraints explicitly in the control action. Successful MPC applications to discrete-time MJSs can be found in [13, 14]. Normally, MPC is reformulated as an online quadratic program, and many results have been reported (see stability [15, 16] and enlarged terminal sets). It should be noted that the online computation in this literature leads to a heavy computational burden, so researchers have sought alternative methods; explicit MPC is one such approach. However, as the system size increases, the time needed to search the explicit MPC law also increases sharply.

Based on the above analysis, a simplified MPC design framework is introduced to reduce the online computational burden for constrained MJSs with mixed uncertainties. The basic idea is that (N-1) steps of mode-dependent MPC are designed to steer the state into a final neighborhood of the origin; the final step of robust mode-independent MPC then forces the state toward the origin regardless of model uncertainty and transition probability uncertainty. This simplified MPC dramatically reduces the computational burden with only minor performance loss, striking a good balance between calculation time and dynamic performance. Furthermore, polyhedral invariant sets are applied to further enlarge the initial feasible region.

The paper is organized as follows. Section 2 describes the system dynamics. Section 3 gives the finite-step simplified MPC algorithm, formulated in terms of LMIs. Section 4 presents a numerical example showing the efficiency of the results. Section 5 concludes the paper.

Notations. $\mathbb{R}^n$ denotes the n-dimensional Euclidean space; $A^T$ stands for the transpose of a matrix; $E\{\cdot\}$ denotes the expectation of a stochastic process or vector; $P>0$ means that the matrix $P$ is positive definite; $I$ is the identity matrix of appropriate dimension; and $*$ denotes the symmetric term in a symmetric matrix.

2. Problem Statement and Preliminaries

The constrained discrete-time MJS with mixed uncertainties considered in this paper is
(1) $x_{k+1}=A(r_k)x_k+B(r_k)u_k$, $y_k=C(r_k)x_k$,
where $x_k\in\mathbb{R}^{n_x}$, $u_k\in\mathbb{R}^{n_u}$, and $y_k\in\mathbb{R}^{n_y}$ denote, respectively, the state vector, the input vector, and the controlled output vector. The discrete-time Markov stochastic process $\{r_k, k\ge 0\}$ takes values in a finite set $\Gamma=\{1,2,3,\dots,\sigma\}$ containing the $\sigma$ modes of system (1), and $r_0$ is the initial mode. The uncertain system matrices $A(r_k)$ and $B(r_k)$ belong to the polytopic model set
(2) $\Omega(r_k)=\bigl\{[A(r_k),B(r_k)] : A(r_k)=\sum_{\iota=1}^{L}\alpha_\iota A_\iota(r_k),\ B(r_k)=\sum_{\iota=1}^{L}\alpha_\iota B_\iota(r_k),\ \sum_{\iota=1}^{L}\alpha_\iota=1\bigr\}$.
The inputs and outputs are subject to the constraints
(3) $-u_{\lim}\le u_k\le u_{\lim}$,
(4) $-y_{\lim}\le y_k\le y_{\lim}$.
The transition probability (TP) matrix is denoted by $\Pi(k)=\{\pi_{ij}(k)\}$, $i,j\in\Gamma$, where $\pi_{ij}(k)=P(r_{k+1}=j\mid r_k=i)$ is the transition probability from mode $i$ at time $k$ to mode $j$ at time $k+1$. The elements of the TP matrix satisfy $\pi_{ij}(k)\ge 0$ and $\sum_{j=1}^{\sigma}\pi_{ij}(k)=1$:
(5) $\pi=\begin{bmatrix}\pi_{11}&\pi_{12}&\cdots&\pi_{1\sigma}\\ \pi_{21}&\pi_{22}&\cdots&\pi_{2\sigma}\\ \vdots&\vdots&\ddots&\vdots\\ \pi_{\sigma 1}&\pi_{\sigma 2}&\cdots&\pi_{\sigma\sigma}\end{bmatrix}$.
An uncertain TP matrix means that some elements of $\pi$ are unknown; for example, a four-mode TP matrix may take the form
(6) $\pi=\begin{bmatrix}? & \pi_{12} & ? & ?\\ \pi_{21} & \pi_{22} & ? & ?\\ \pi_{31} & ? & ? & ?\\ ? & ? & \pi_{43} & ?\end{bmatrix}$,
where "?" represents an inaccessible element of the TP matrix. For convenience, for every mode $r_k\in\Gamma$ at sampling time $k$ we split row $r_k$ of $\pi$ as $\pi=\pi_{r_k}^{k}\cup\pi_{r_k}^{uk}$, where $\pi_{r_k}^{k}$ collects the known elements and $\pi_{r_k}^{uk}$ the unknown ones. If $\pi_{r_k}^{k}\neq\varnothing$, we write $\pi_{r_k}^{k}=(\kappa_{r_k}^{1},\dots,\kappa_{r_k}^{\tau})$, where $\kappa_{r_k}^{l}$, $1\le l\le\tau$, denotes the column index of the $l$th known element in row $r_k$ of $\pi$, and define $\Pi_{r_k}^{k}=\sum_{r_{k+1}\in\pi_{r_k}^{k}}\pi_{r_k r_{k+1}}$.
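To make the setup concrete, the following sketch simulates one closed-loop trajectory of a small MJS of the form (1) under a mode-dependent linear feedback. All matrices, gains, and the (here fully known) TP matrix are illustrative placeholders, not the example of Section 4.

```python
import numpy as np

def simulate_mjs(A_modes, B_modes, F_modes, TP, x0, r0, steps, rng):
    """Simulate x_{k+1} = A(r_k) x_k + B(r_k) u_k with u_k = F(r_k) x_k,
    where the mode r_k evolves as a Markov chain with transition matrix TP."""
    x, r = np.asarray(x0, dtype=float), r0
    traj, modes = [x.copy()], [r]
    for _ in range(steps):
        u = F_modes[r] @ x
        x = A_modes[r] @ x + B_modes[r] @ u
        r = rng.choice(len(TP), p=TP[r])   # next mode drawn from row r of TP
        traj.append(x.copy())
        modes.append(r)
    return np.array(traj), modes

# Two illustrative modes (all numbers made up for this sketch)
A = [np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[1.0, 0.1], [0.1, 0.8]])]
B = [np.array([[0.0], [0.1]])] * 2
F = [np.array([[-1.0, -2.0]])] * 2          # fixed stabilizing gains, chosen by hand
TP = np.array([[0.7, 0.3], [0.4, 0.6]])     # fully known TP here for simplicity
rng = np.random.default_rng(0)
traj, modes = simulate_mjs(A, B, F, TP, np.array([-0.65, 1.0]), 0, 30, rng)
```

A partly unknown TP matrix as in (6) would simply replace some entries of `TP` with unknowns; the controller design below never samples those entries, only the known ones.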

Some preliminaries are introduced before proceeding.

Definition 1 (see [6]).

For any initial mode $r_0$ and state $x_0$, the discrete-time MJS (1) is said to be stochastically stable if
(7) $\lim_{k\to\infty}E\{x_k^Tx_k\mid x_0,r_0\}=0$.

Definition 2.

For MJS (1), an ellipsoid set $\Theta=\{x\in\mathbb{R}^{n_x} : x_k^TP_k(r_k)x_k\le\gamma_k\}$ associated with the state is said to be asymptotically mode-dependent stable if the following holds: whenever $x_{k_0}\in\Theta$, then $x_k\in\Theta$ for all $k\ge k_0$ and $x_k\to 0$ as $k\to\infty$.

Next, we first derive the online optimal MPC algorithm for system (1), whose aim is to minimize a cost function related to the worst-case performance; the corresponding simplified MPC algorithm is then derived in Section 3.2. Finally, polyhedral invariant sets are applied to further enlarge the initial feasible region.

3. Simplified MPC Design

3.1. Online Optimal MPC

Theorem 3.

Consider MJS (1) with model uncertainties (2) and partly unknown TP matrix (6). At sampling time $k$, if there exists a set of matrices $F_k(r_k)$ such that
(8) $\min_{F_k(r_k)}\max_{A_\iota(r_k),B_\iota(r_k),\pi_{r_kr_{k+1}},\,r_k,r_{k+1}\in\Gamma}J(k)$
s.t.
(9) $-u_{\lim}\le u_k\le u_{\lim}$,
(10) $-y_{\lim}\le y_k\le y_{\lim}$,
(11) $E\{V(x_{k+1},r_{k+1}\mid x_0,r_0)\}-E\{V(x_k,r_k\mid x_0,r_0)\}\le -E\{x_k^TQ(r_k)x_k+u_k^TR(r_k)u_k\mid x_0,r_0\}$,
then $V(x_k,r_k\mid x_0,r_0)$ provides an upper bound on $J(k)$, where $u_k=F_k(r_k)x_k$, $J(k)=E\{\sum_{i=0}^{\infty}(x_{k+i}^TQ(r_{k+i})x_{k+i}+u_{k+i}^TR(r_{k+i})u_{k+i})\mid x_0,r_0\}$, and $Q(r_k)$, $R(r_k)$ are positive definite weighting matrices.

Proof.

Assume that at sampling time $k$ a state-feedback law $u(k+i\mid k)=F_k(r_k)x(k+i\mid k)$ is applied to minimize the worst-case cost $J(k)$; we show that $V(x_k,r_k\mid x_0,r_0)$ is an upper bound on $J(k)$. Let $V(x_k)=x_k^TP_k(r_k)x_k$, $P_k(r_k)>0$, be a quadratic Lyapunov function. For any $[A_\iota(r_k),B_\iota(r_k)]\in\Omega(r_k)$, the following constraint holds:
(12) $E\{V(x_{k+1},r_{k+1}\mid x_0,r_0)\}-E\{V(x_k,r_k\mid x_0,r_0)\}\le -E\{x_k^TQ(r_k)x_k+u_k^TR(r_k)u_k\mid x_0,r_0\}$.
Summing (12) from $i=0$ to $\infty$ on both sides and using the fact that $x_\infty=0$ and hence $V(x_\infty)=0$, we obtain
(13) $J(k)\le V(x_k,r_k\mid x_0,r_0)=x_k^TP_k(r_k)x_k$,
which implies that $V(x_k,r_k\mid x_0,r_0)$ is an upper bound on $J(k)$.
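The bound (13) can be checked numerically in the special case of a single mode with no uncertainty, where (12) holds with equality and $P$ solves a discrete Lyapunov equation, so the accumulated cost equals $x_0^TPx_0$ exactly. The matrices below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Closed-loop matrices for one mode, no uncertainty (illustrative values)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
F = np.array([[-1.0, -2.0]])
Q = np.eye(2)
R = np.array([[0.01]])
Acl = A + B @ F
Qbar = Q + F.T @ R @ F

# P from the discrete Lyapunov equation Acl^T P Acl - P = -Qbar,
# computed by summing the convergent series P = sum_i (Acl^T)^i Qbar Acl^i
P, term = np.zeros((2, 2)), Qbar.copy()
for _ in range(2000):
    P += term
    term = Acl.T @ term @ Acl

# The accumulated cost from x0 equals x0^T P x0 (equality holds here because
# there is no uncertainty; the theorem gives only an upper bound in general)
x0 = np.array([-0.65, 1.0])
x, cost = x0.copy(), 0.0
for _ in range(2000):
    u = F @ x
    cost += x @ Q @ x + u @ R @ u
    x = Acl @ x
print(np.isclose(cost, x0 @ P @ x0))  # True
```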

Theorem 4.

Consider MJS (1) with polytopic model uncertainties (2) and partly unknown TP matrix (6). If there exist sets of positive definite matrices $X_k(r_k)$ and matrices $Y_k(r_k)$ such that the following optimization problem has an optimal solution:
(14) $\min_{F_k(r_k)}\max_{A_\iota(r_k),B_\iota(r_k),\pi_{r_kr_{k+1}},\,r_k,r_{k+1}\in\Gamma}\gamma_k$
s.t.
(15) $\begin{bmatrix}1 & *\\ x_k & X_k(r_k)\end{bmatrix}\ge 0,\quad r_k\in\Gamma$,
(16) $\begin{bmatrix}Z & Y_k(r_k)\\ * & X_k(r_k)\end{bmatrix}\ge 0,\quad Z_{tt}\le(u_{\lim}^t)^2$,
(17) $\begin{bmatrix}X_k(r_k) & *\\ C(r_k)\theta_l(r_k) & M\end{bmatrix}\ge 0,\quad M_{hh}\le(y_{\lim}^h)^2$,
(18) $\begin{bmatrix}X_k(r_k) & *\\ \theta_l(r_k) & X_k(r_{k+1})\end{bmatrix}\ge 0,\quad r_{k+1}\in\pi_{r_k}^{uk}$,
(19) $\begin{bmatrix}\Pi_{r_k}^kX_k(r_k) & U^T(r_k) & X_k(r_k)Q^{1/2}(r_k) & Y_k^T(r_k)R^{1/2}(r_k)\\ * & W(r_{k+1}) & 0 & 0\\ * & * & \gamma_kI & 0\\ * & * & * & \gamma_kI\end{bmatrix}\ge 0,\quad r_{k+1}\in\pi_{r_k}^k$,
then the mode-dependent state feedback that minimizes the upper bound $\gamma_k$ on $J(k)$ and simultaneously stabilizes the closed-loop system within the ellipsoid $\varepsilon=\{x : x_k^TX_k^{-1}(r_k)x_k\le 1\}$ is given by $u(k+i\mid k)=F_k(r_{k+i})x_{k+i\mid k}$, $F_k(r_{k+i})=Y_k(r_{k+i})X_k^{-1}(r_{k+i})$, where $X_k(r_k)=\gamma_kP_k^{-1}(r_k)$, $\theta_l(r_k)=A_l(r_k)X_k(r_k)+B_l(r_k)Y_k(r_k)$, $U^T(r_k)=[\sqrt{\pi_{r_k\kappa_{r_k}^1}}\,\theta_l^T(r_k),\dots,\sqrt{\pi_{r_k\kappa_{r_k}^\tau}}\,\theta_l^T(r_k)]$, $W(r_{k+1})=\mathrm{diag}\{X_k(\kappa_{r_k}^1),X_k(\kappa_{r_k}^2),\dots,X_k(\kappa_{r_k}^\tau)\}$; $Z_{tt}$ and $M_{hh}$, respectively, denote the $t$th and $h$th diagonal elements of $Z$ and $M$, and $u_{\lim}^t$ and $y_{\lim}^h$, respectively, denote the $t$th and $h$th elements of the input and output constraints, $t=1,2,\dots,n_u$, $h=1,2,\dots,n_y$.

Proof.

Let $X_k(r_k)=\gamma_kP_k^{-1}(r_k)$; then $J(k)\le\gamma_k$ in (13) is guaranteed by the LMI
(20) $\begin{bmatrix}1 & *\\ x_k & X_k(r_k)\end{bmatrix}\ge 0,\quad r_k\in\Gamma$.
The input/output constraints are guaranteed by (16) and (17); the proof is similar to existing results and is omitted here. With the closed-loop matrix $\theta_l(r_k)=A_l(r_k)+B_l(r_k)F_k(r_k)$, condition (11) is equivalent to
(21) $\Xi(r_k)=P_k(r_k)-\theta_l^T(r_k)\Bigl(\sum_{r_{k+1}\in\pi}\pi_{r_kr_{k+1}}P_k(r_{k+1})\Bigr)\theta_l(r_k)-Q(r_k)-F_k^T(r_k)R(r_k)F_k(r_k)\ge 0$.
Since $\sum_{r_{k+1}\in\pi}\pi_{r_kr_{k+1}}=1$, $\pi_{r_kr_{k+1}}\ge 0$, and $\Pi_{r_k}^k=\sum_{r_{k+1}\in\pi_{r_k}^k}\pi_{r_kr_{k+1}}$, it follows that
(22) $\Xi(r_k)=\Bigl(\sum_{r_{k+1}\in\pi}\pi_{r_kr_{k+1}}\Bigr)P_k(r_k)-\theta_l^T(r_k)\Bigl(\sum_{r_{k+1}\in\pi}\pi_{r_kr_{k+1}}P_k(r_{k+1})\Bigr)\theta_l(r_k)-Q(r_k)-F_k^T(r_k)R(r_k)F_k(r_k)=\Pi_{r_k}^kP_k(r_k)-\theta_l^T(r_k)\Bigl(\sum_{r_{k+1}\in\pi_{r_k}^k}\pi_{r_kr_{k+1}}P_k(r_{k+1})\Bigr)\theta_l(r_k)+\sum_{r_{k+1}\in\pi_{r_k}^{uk}}\pi_{r_kr_{k+1}}\bigl(P_k(r_k)-\theta_l^T(r_k)P_k(r_{k+1})\theta_l(r_k)\bigr)-Q(r_k)-F_k^T(r_k)R(r_k)F_k(r_k)\ge 0$.
A sufficient condition to ensure (22) is
(23) $\Pi_{r_k}^kP_k(r_k)-\theta_l^T(r_k)\Bigl(\sum_{r_{k+1}\in\pi_{r_k}^k}\pi_{r_kr_{k+1}}P_k(r_{k+1})\Bigr)\theta_l(r_k)-Q(r_k)-F_k^T(r_k)R(r_k)F_k(r_k)\ge 0$ together with $P_k(r_k)-\theta_l^T(r_k)P_k(r_{k+1})\theta_l(r_k)\ge 0$ for $r_{k+1}\in\pi_{r_k}^{uk}$.
Applying the Schur complement lemma, (18) and (19) are then derived.

In fact, the feedback controller keeps the closed-loop system stable in the ellipsoid $\varepsilon=\{x : x_k^TX_k^{-1}(r_k)x_k\le 1\}$. Assume the optimal solutions $P_k^*(r_k)$, $F_k^*(r_k)$ at time $k$ are
(24) $P_k^*(r_k)=\gamma_k^*(X_k^*(r_k))^{-1}$, $F_k^*(r_k)=Y_k^*(r_k)(X_k^*(r_k))^{-1}$, $\theta_{lk}^*(r_k)=A_l(r_k)+B_l(r_k)F_k^*(r_k)$, $\vartheta_{lk}^*(r_k)=A_l(r_k)X_k^*(r_k)+B_l(r_k)Y_k^*(r_k)$.
Equations (18) and (19) lead to
(25) $x_k^TP_k^*(r_k)x_k\ge x_k^T(\theta_{lk}^*(r_k))^T\Bigl(\sum_{r_{k+1}\in\pi}\pi_{r_kr_{k+1}}P_k^*(r_{k+1})\Bigr)\theta_{lk}^*(r_k)x_k+x_k^TQ(r_k)x_k+x_k^T(F_k^*(r_k))^TR(r_k)F_k^*(r_k)x_k$, that is, $E\{x_k^TP_k^*(r_k)x_k\}\ge E\{x_{k+1}^TP_k^*(r_{k+1})x_{k+1}\}+x_k^TQ(r_k)x_k+x_k^T(F_k^*(r_k))^TR(r_k)F_k^*(r_k)x_k$.
$P_{k+1}^*(r_{k+1})$ is the optimal value at time $k+1$, while $P_k^*(r_{k+1})$ is merely feasible at time $k+1$; by the definition of optimality,
(26) $x_{k+1}^TP_k^*(r_{k+1})x_{k+1}\ge x_{k+1}^TP_{k+1}^*(r_{k+1})x_{k+1}$;
then
(27) $E\{x_k^TP_k^*(r_k)x_k\}\ge E\{x_{k+1}^TP_{k+1}^*(r_{k+1})x_{k+1}\}+x_k^TQ(r_k)x_k+x_k^T(F_k^*(r_k))^TR(r_k)F_k^*(r_k)x_k$.
This shows that $E\{x_k^TP_k^*(r_k)x_k\}$ is strictly decreasing, and hence $E\{x_k^Tx_k\}\to 0$ as $k\to\infty$.

By Definition 1, the system is stochastically stable. Moreover, (27) implies
(28) $E\{x_k^TP_k^*(r_k)x_k\}\ge E\{x_{k+1}^TP_{k+1}^*(r_{k+1})x_{k+1}\}$.
This implies that the ellipsoid is an asymptotically stable invariant set, which completes the proof.
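The Schur complement step used to pass from (21)-(23) to the LMIs can be sanity-checked numerically: a block matrix $\begin{bmatrix}1 & x^T\\ x & X\end{bmatrix}$ with $X>0$ is positive semidefinite exactly when the scalar Schur complement $1-x^TX^{-1}x$ is nonnegative, which is also how (15) encodes $x_k\in\varepsilon$. A minimal sketch with made-up numbers:

```python
import numpy as np

def psd(M, tol=1e-9):
    """Check symmetric positive semidefiniteness via eigenvalues."""
    return np.all(np.linalg.eigvalsh(M) >= -tol)

X = np.array([[2.0, 0.5], [0.5, 1.0]])     # X > 0 (illustrative)
for x in (np.array([0.5, 0.3]), np.array([3.0, 0.0])):
    block = np.block([[np.ones((1, 1)), x[None, :]],
                      [x[:, None], X]])
    scalar = 1.0 - x @ np.linalg.solve(X, x)   # Schur complement of X
    # The block matrix is PSD exactly when the Schur complement is >= 0
    assert psd(block) == (scalar >= 0)
```

The first test vector lies inside the ellipsoid (both conditions hold), the second lies outside (both fail), matching the equivalence.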

Corollary 5.

Consider MJS (1) with model uncertainties (2) and partly unknown TP matrix (6) at the current time $k$. Suppose there exists a set of positive definite matrices $X$, $Y$ such that the following optimization problem has an optimal solution:
(29) $\min_{\gamma_k,X,Y}\gamma_k$
s.t.
(30) $\begin{bmatrix}1 & *\\ x_k & X\end{bmatrix}\ge 0,\ r_k\in\Gamma$; $\begin{bmatrix}Z & Y\\ * & X\end{bmatrix}\ge 0,\ Z_{tt}\le(u_{\lim}^t)^2$; $\begin{bmatrix}X & *\\ C(r_k)\theta_l(r_k) & M\end{bmatrix}\ge 0,\ M_{hh}\le(y_{\lim}^h)^2$; $\begin{bmatrix}X & *\\ \theta_l(r_k) & X\end{bmatrix}\ge 0,\ r_{k+1}\in\pi_{r_k}^{uk}$; $\begin{bmatrix}\Pi_{r_k}^kX & U^T(r_k) & XQ^{1/2}(r_k) & Y^TR^{1/2}(r_k)\\ * & W(r_{k+1}) & 0 & 0\\ * & * & \gamma_kI & 0\\ * & * & * & \gamma_kI\end{bmatrix}\ge 0,\ r_{k+1}\in\pi_{r_k}^k$.
Then the mode-independent state-feedback law minimizing the upper bound $\gamma_k$ on the objective function $J(k)$ and stabilizing the closed-loop system in the ellipsoid $\varepsilon=\{x : x_k^TX^{-1}x_k\le 1\}$ is obtained as $u(k+i\mid k)=Fx_{k+i\mid k}$, $F=YX^{-1}$, where $X=\gamma_kP^{-1}$, $\theta_l(r_k)=A_l(r_k)X+B_l(r_k)Y$, $U^T(r_k)=[\sqrt{\pi_{r_k\kappa_{r_k}^1}}\,\theta_l^T(r_k),\dots,\sqrt{\pi_{r_k\kappa_{r_k}^\tau}}\,\theta_l^T(r_k)]$, $W(r_{k+1})=\mathrm{diag}\{X,X,\dots,X\}$; $Z_{tt}$ and $M_{hh}$, respectively, denote the $t$th and $h$th diagonal elements of $Z$ and $M$, and $u_{\lim}^t$ and $y_{\lim}^h$, respectively, denote the $t$th and $h$th elements of the input and output constraints, $t=1,2,\dots,n_u$, $h=1,2,\dots,n_y$.

3.2. Simplified MPC Design

In this section, a simplified MPC scheme for uncertain MJS (1) is developed based on the online algorithm of Theorem 4; Figure 1 shows the schematic diagram of the simplified MPC. The final-step mode-independent feedback controller is designed to work regardless of model uncertainty and TP uncertainty: near the origin many more constraints become inactive, and this extra feasibility freedom is exploited in the controller design procedure.

Simplified MPC schematic diagram.

Theorem 6.

Consider uncertain MJS (1) with an initial state $x_0$ satisfying $x_0^TQ_0^{-1}(r_0)x_0\le 1$; the simplified MPC Algorithm 7 robustly stabilizes the closed-loop system.

Proof.

For the $N$-step implementation with chosen states $x_j$, $j=1,\dots,N$, the selection of $x_j$ in Algorithm 7 implies $Q_{j-1}^{-1}(r_k)<Q_j^{-1}(r_k)$, which means the constructed ellipsoid $\xi_j=\{x : x^TQ_j^{-1}(r_k)x\le 1\}$ is embedded in $\xi_{j-1}$, that is, $\xi_j\subset\xi_{j-1}$. For a fixed $x$, $x^TQ_j^{-1}(r_k)x$ increases monotonically with $j$, which guarantees a unique result when searching the table for the largest $j$ with $x\in\xi_j$. If $x_k\in\xi_j$ and $x_k\notin\xi_{j+1}$, $j=1,\dots,N-1$, then by Theorem 4 the control law $u_k=F_j(r_k)x_k$ steers the state into the next ellipsoid $\xi_{j+1}$. Finally, the controller $u_k=F_Nx_k$ (obtained from Corollary 5) keeps the state in $\xi_N$ and drives it to the origin. Furthermore, an LP-based algorithm is utilized to remove redundant constraints and construct a sequence of polyhedral invariant sets for the MJS, which enlarges the feasible domain.
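The unique search-table lookup in the proof above amounts to finding the smallest stored nested ellipsoid still containing the current state. A minimal sketch, with illustrative $P_j$ matrices and dummy gains rather than values produced by Theorem 4:

```python
import numpy as np

def lookup_gain(x, P_list, F_list):
    """Return the gain for the smallest stored ellipsoid containing x.
    P_list is ordered so the ellipsoids are nested (P_1 < P_2 < ...),
    hence x^T P_j x is nondecreasing in j and the largest feasible j is unique."""
    j_best = None
    for j, P in enumerate(P_list):
        if x @ P @ x <= 1.0:
            j_best = j
        else:
            break                      # later ellipsoids are smaller: stop
    return None if j_best is None else F_list[j_best]

# Nested ellipsoids (scaled identities, purely illustrative) and dummy gains
P_list = [np.eye(2) * s for s in (0.25, 1.0, 4.0)]   # radii 2, 1, 0.5
F_list = [np.array([[-0.5, -0.5]]),
          np.array([[-1.0, -1.0]]),
          np.array([[-2.0, -2.0]])]
F = lookup_gain(np.array([0.6, 0.0]), P_list, F_list)   # inside sets 1 and 2
```

A state outside the largest ellipsoid yields `None`, corresponding to initial infeasibility of the stored design.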

Algorithm 7 (simplified MPC applying polyhedral invariant set).

Simplified MPC design is as follows.

Select $x_j$, $j=1,\dots,N$, satisfying $\varepsilon_{j+1}\subset\varepsilon_j$, with $\varepsilon_N=\delta(0)$, a small neighborhood of the origin.

For $j=1$ to $N-1$, calculate the corresponding mode-dependent quantities $\gamma_j(r_k)$, $Q_j(r_k)$, $X_j(r_k)$, $Y_j(r_k)$, $F_j(r_k)$ by applying Theorem 4 and store them in a search table.

For each $F_j(r_k)$, construct the corresponding polyhedral invariant set by the following algorithm. Let $S_j(r_k)=[C^T(r_k),-C^T(r_k),F_j^T(r_k),-F_j^T(r_k)]^T$ and $d_j(r_k)=[y_{\max}^T(r_k),y_{\min}^T(r_k),u_{\max}^T(r_k),u_{\min}^T(r_k)]^T$. Select row $m$ of $(S_j(r_k),d_j(r_k))$ and check whether the constraint $S_{j,m}(r_k)(A_j(r_k)+B_jF_j(r_k))x\le d_{j,m}(r_k)$ is redundant by solving the linear program
(31) $\max\rho_{j,m}$ s.t. $\rho_{j,m}=S_{j,m}(r_k)(A_j(r_k)+B_jF_j(r_k))x-d_{j,m}(r_k)$, $S_j(r_k)x\le d_j(r_k)$.

If $\rho_{j,m}>0$, the constraint $S_{j,m}(r_k)(A_j(r_k)+B_jF_j(r_k))x\le d_{j,m}(r_k)$ is nonredundant; then append it to the constraint set: $S_j(r_k):=[S_j^T(r_k),(S_{j,m}(r_k)(A_j(r_k)+B_j(r_k)F_j(r_k)))^T]^T$, $d_j(r_k):=[d_j^T(r_k),d_{j,m}^T(r_k)]^T$.

Online implementation: look up the current state in the search table to find the required index $j$ ($j<N$), determine the smallest polyhedral invariant set $\chi_j(r_k)=\{x : S_j(r_k)x\le d_j(r_k)\}$ containing the state, and implement $u_k=F_j(r_k)x_k$.

Online implementation: keep checking whether $x\in\chi_N(r_k)=\{x : S_N(r_k)x\le d_N(r_k)\}$ is satisfied; once it holds, apply $u_k=F_Nx_k$.
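The redundancy test (31) is an ordinary linear program; the sketch below implements it with scipy.optimize.linprog for a hypothetical candidate row `a_new` against an existing polyhedron $Sx\le d$ (the box constraints and numbers are illustrative, not those of the paper's example).

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(a_new, d_new, S, d):
    """Check whether a_new @ x <= d_new is implied by S @ x <= d, by
    maximizing rho = a_new @ x - d_new over the polyhedron, as in LP (31)."""
    res = linprog(-a_new, A_ub=S, b_ub=d, bounds=[(None, None)] * len(a_new))
    assert res.success
    rho = -res.fun - d_new
    # rho > 0 means the new constraint cuts the set, i.e. it is nonredundant
    return bool(rho <= 0)

# Unit box |x_i| <= 1 as the current polyhedron (illustrative)
S = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
d = np.ones(4)
print(is_redundant(np.array([1.0, 1.0]), 3.0, S, d))  # True: x1+x2 <= 3 never binds
print(is_redundant(np.array([1.0, 1.0]), 1.5, S, d))  # False: cuts the box corner
```

Only nonredundant rows are appended, so the stored polyhedral sets stay as small as possible, which is what keeps the online set-membership tests cheap.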

Remark 8.

It should be noted that a closer approximation of optimality is obtained as $N$ increases; $N$ can therefore be chosen according to prior requirements, that is, the number of design steps can be adjusted to the requirements at hand.

4. Illustrative Example

Consider the discrete-time MJS with four modes ($\sigma=4$):
(32) $A_{11}=\begin{bmatrix}1&0.1\\0.01&0.99\end{bmatrix}$, $A_{12}=\begin{bmatrix}1&0.1\\0&0.05\end{bmatrix}$, $A_{21}=\begin{bmatrix}1&0.1\\-0.1&0.99\end{bmatrix}$, $A_{22}=\begin{bmatrix}1&0.1\\0.1&0.05\end{bmatrix}$, $A_{31}=\begin{bmatrix}1&0.1\\0.2&0.99\end{bmatrix}$, $A_{32}=\begin{bmatrix}1&0.1\\0.15&0.1\end{bmatrix}$, $A_{41}=\begin{bmatrix}1&0.1\\0.05&0.5\end{bmatrix}$, $A_{42}=\begin{bmatrix}1&0.1\\0.05&0.1\end{bmatrix}$, and $B_{i\iota}=\begin{bmatrix}0.1\\0.187\end{bmatrix}$ for every mode $i$ and vertex $\iota$. The detailed constraints are $u_{\max}=2$ and $y_{\max}=1.5$, the initial state is $x_0=[-0.65\ \ 1]^T$, and $C(r_k)=$ . The positive definite weighting matrices are $Q(r_k)=$  and $R(r_k)=0.00002$. The partly unknown TP matrix, randomly generated, is shown in Table 1.

The partly unknown TP matrix.

Mode    1       2       3       4
1       0.361   ?       0.092   ?
2       ?       0.090   ?       0.248
3       0.162   0.489   ?       ?
4       ?       ?       0.251   ?

We now illustrate the 5-step version of Algorithm 7. Firstly, a state set $\{x_j\}=\{(0.5,-0.9),(0.4,-0.8),(0.3,-0.7),(0.2,-0.6),(0.1,-0.5)\}$ is chosen to compute the corresponding feedback gains $F_j(r_k)$. Note that this sequence of states guarantees that the constructed polyhedral invariant sets are nested, that is, $\chi_j\subset\chi_{j-1}$. In this example, the first four mode-dependent feedback laws $F_j(r_k)$, $j=1,\dots,4$, are obtained. When the state enters the smallest polyhedral invariant set, the final-step (fifth-step) gain $F_5$ is designed to steer the state to the origin regardless of model uncertainty and TP uncertainty.

For each chosen $x_j$ in Figure 2, the 5-step ellipsoidal invariant sets (purple solid lines) and 5-step polyhedral invariant sets (alternating blue and orange dash-dot lines) are labeled 1 to 5. The stabilizable region of the polyhedral invariant sets constructed by Algorithm 7 is dramatically larger than that of the ellipsoidal invariant sets, while the dynamic response of the simplified algorithm remains comparable to that of the online algorithm.

Trajectory of system states.

The results were computed on the same platform (AMD 2.1 GHz, 3.0 GB memory, MATLAB R2010a); the average times and variances over repeated runs of the system are shown in Table 2. The table shows that the computational burden is significantly reduced by the simplified algorithm.

20 times’ average and variance of 30 iterations of the system state.

Algorithm    Average time    Variance
Online       11.6779 s       0.0069
5-step       0.0055 s        1.3173e-06
5. Conclusions

The problem of simplified predictive controller design for MJSs with mixed uncertainties has been investigated. The simplified algorithm drastically reduces the online computational burden with only a small loss of performance. A numerical example is provided to illustrate the validity of the results.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was partially supported by the National Natural Science Foundation of China (61273087), the 111 Project (B12018), the Fundamental Research Funds for the Central Universities (JUSRP11459), the Program for Excellent Innovative Team of Jiangsu Higher Education Institutions, and the Fundamental Research Funds for the Central Universities (JUDCF12029).

References

[1] Xiang Z., Wang R., Chen Q., "Robust reliable stabilization of stochastic switched nonlinear systems under asynchronous switching," Applied Mathematics and Computation, vol. 217, no. 19, pp. 7725–7736, 2011.
[2] Xiang Z., Qiao C., Mahmoud M. S., "Finite-time analysis and H∞ control for switched stochastic systems," Journal of the Franklin Institute, vol. 349, no. 3, pp. 915–927, 2012.
[3] Xiang Z., Wang R., Chen Q., "Robust stabilization of uncertain stochastic switched nonlinear systems under asynchronous switching," Proceedings of the IMechE, Part I: Journal of Systems and Control Engineering, vol. 225, no. 1, pp. 8–20, 2011.
[4] Boukas E. K., "Static output feedback control for stochastic hybrid systems: LMI approach," Automatica, vol. 42, no. 1, pp. 183–188, 2006.
[5] Gao H., Lam J., Xu S., Wang C., "Stabilization and H∞ control of two-dimensional Markovian jump systems," IMA Journal of Mathematical Control and Information, vol. 21, no. 4, pp. 377–392, 2004.
[6] He S., Liu F., "Robust peak-to-peak filtering for Markov jump systems," Signal Processing, vol. 90, no. 2, pp. 513–522, 2010.
[7] He S., Liu F., "Finite-time H∞ control of nonlinear jump systems with time-delays via dynamic observer-based state feedback," IEEE Transactions on Fuzzy Systems, vol. 20, no. 4, pp. 605–614, 2012.
[8] He S., Liu F., "Finite-time boundedness of uncertain time-delayed neural network with Markovian jumping parameters," Neurocomputing, vol. 103, pp. 87–92, 2013.
[9] Internet traffic report, 2008, http://www.internettracreport.com.
[10] Zhang L., Boukas E.-K., "H∞ control for discrete-time Markovian jump linear systems with partly unknown transition probabilities," International Journal of Robust and Nonlinear Control, vol. 19, no. 8, pp. 868–883, 2009.
[11] Xiong J., Lam J., Gao H., Ho D. W. C., "On robust stabilization of Markovian jump systems with uncertain switching probabilities," Automatica, vol. 41, no. 5, pp. 897–903, 2005.
[12] Yin Y., Shi P., Liu F., "Gain-scheduled robust fault detection on time-delay stochastic nonlinear systems," IEEE Transactions on Industrial Electronics, vol. 58, no. 10, pp. 4908–4916, 2011.
[13] do Val J. B. R., Basar T., "Receding horizon control of Markov jump linear systems," Proceedings of the American Control Conference, Albuquerque, NM, USA, June 1997, pp. 3195–3199.
[14] Park B.-G., Lee J.-W., Kwon W. H., "Receding horizon control for linear discrete systems with jump parameters," Proceedings of the 36th IEEE Conference on Decision and Control, San Diego, CA, USA, December 1997, pp. 3956–3957.
[15] Mayne D. Q., Rawlings J. B., Rao C. V., Scokaert P. O. M., "Constrained model predictive control: stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000.
[16] Liu L., Liu Z., Zhang J., "Nonlinear model predictive control with terminal invariant manifolds for stabilization of underactuated surface vessel," Abstract and Applied Analysis, vol. 47, no. 4, pp. 861–864, 2011.
[17] De Doná J. A., Seron M. M., Mayne D. Q., Goodwin G. C., "Enlarged terminal sets guaranteeing stability of receding horizon control," Systems & Control Letters, vol. 47, no. 1, pp. 57–63, 2002.
[18] Lambert R. S. C., Rivotti P., Pistikopoulos E. N., "A novel approximation technique for online and multi-parametric model predictive control," Computer Aided Chemical Engineering, vol. 29, pp. 739–742, 2011.
[19] Kothare M. V., Balakrishnan V., Morari M., "Robust constrained model predictive control using linear matrix inequalities," Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
[20] Pluymers B., Rossiter J. A., Suykens J. A. K., De Moor B., "The efficient computation of polyhedral invariant sets for linear systems with polytopic uncertainty," Proceedings of the American Control Conference (ACC '05), June 2005, pp. 804–809.