We propose the receding horizon H∞ control (RHHC) for input-delayed systems. A new cost function for a finite horizon dynamic game problem is first introduced, which includes two terminal weighting terms, each parameterized by a positive definite matrix called a terminal weighting matrix. Secondly, the RHHC is obtained from the solution to the finite horizon dynamic game problem. Thirdly, we propose an LMI condition under which the saddle point value satisfies nonincreasing monotonicity. Finally, we show the asymptotic stability and H∞ boundedness of the closed-loop system controlled by the proposed RHHC. The proposed RHHC has a guaranteed H∞ performance bound for nonzero external disturbances, and for a nonzero initial condition with zero disturbance the quadratic cost can be improved by adjusting the prediction horizon length, which is not the case for existing memoryless state-feedback controllers. It is shown through a numerical example that the proposed RHHC is stabilizing and satisfies the infinite horizon H∞ performance bound, and that the performance in terms of the quadratic cost is improved by adjusting the prediction horizon length when there is no external disturbance and the initial condition is nonzero.
1. Introduction
In many industrial and natural dynamic processes, time delays in states and/or control inputs are often encountered in the transmission of information or material between different parts of a system. Chemical processing systems, transportation systems, communication systems, and power systems are typical examples of time-delay systems. Among time-delay systems, input-delayed systems are common and are often preferred because they allow easy modeling and tractable analysis. Much research on input-delayed systems has been carried out over the decades in order to compensate for the performance deterioration caused by the presence of input delay [1–5].
For ordinary systems without time delay, the receding horizon control (RHC), or model predictive control (MPC), has attracted much attention from academia and industry because of its many advantages, including ease of computation, good tracking performance, and I/O constraint handling, compared with the popular steady-state infinite horizon linear quadratic (LQ) control [6–8]. The RHC for ordinary systems has been extended to the H∞ problem in order to combine the practical advantages of the RHC with the robustness of H∞ control [9–11]. These works investigated the nonincreasing monotonicity of the saddle point value, which corresponds to the optimal cost in LQ problems.
For time-delay systems, there are several results on the RHC [12–15]. A simple receding horizon control with a special cost function was proposed for state-delayed systems by using a reduction method [12]. However, it does not guarantee closed-loop stability by design, so stability can be checked only after the controller has been designed. The general cost-based RHC for state-delayed systems was introduced in [13]. This method has both state and input weighting terms in the cost function and guarantees closed-loop stability by design. The RHC in [13] is more effective in terms of the cost function since it has a more general form than memoryless state-feedback controllers. This RHC was extended to the receding horizon H∞ control (RHHC) in [14]. Although stability and performance boundedness were shown in [14], the advantage of the RHHC over the memoryless state-feedback H∞ controller was not addressed there. While the results mentioned above deal with state-delayed systems, the result in [15] deals with the RHC for input-delayed systems, extending the idea of [13]. However, to the best of our knowledge, there exists no result on the receding horizon H∞ control for input-delayed systems. The purpose of this paper is to lay the cornerstone of the theory of RHHC for input-delayed systems. Issues such as the solution, stability, existence conditions, and performance boundedness will be addressed in the main results. Furthermore, the advantage of the RHHC for input-delayed systems over the memoryless state-feedback controller will be illustrated by adjusting the prediction horizon length.
The rest of this paper is structured as follows. In Section 2, we obtain a solution to the receding horizon H∞ control problem. In Section 3, we derive an LMI condition, under which the nonincreasing monotonicity of a saddle point value holds. In Section 4, we show that the proposed RHHC has asymptotic stability and satisfies H∞ performance boundedness. In Section 5, a numerical example is given to illustrate that the proposed RHHC is stabilizing as well as guarantees the H∞ performance bound. Finally, the conclusion is drawn in Section 6.
Throughout the paper, the notation P>0(P≥0) implies that the matrix P is symmetric and positive definite (positive semi-definite). Similarly, P<0(P≤0) implies that the matrix P is symmetric and negative definite (negative semidefinite). “⋆” is used to denote the elements under the main diagonal of a symmetric matrix. L2[0,∞) and L2[t0,tf] denote the space of square integrable functions on [0,∞) and [t0,tf], respectively.
2. Receding Horizon H∞ Control for Input-Delayed System
Consider a linear time-invariant system with an input delay
(2.1)x˙(t)=Ax(t)+B0u(t)+B1u(t-h)+Bww(t),z(t)=[Q1/2x(t); R1/2u(t)]
with the initial conditions x(0)=x0 and u(τ)=ϕ(τ) on τ∈[-h,0], where x∈Rn is the state, u∈Rm is the control input, w∈Rl is the disturbance signal that belongs to L2[0,∞), z∈Rp is the controlled output, and h>0 is the constant delay. A, B0, and B1 are constant matrices of appropriate dimensions. ϕ(t)∈Rm is assumed to be a continuous function. In order to obtain the RHHC, we will first consider the finite horizon cost function as follows:
(2.2)J(x(t0),ut0,t0,tf,u,w)=∫t0tf[xT(τ)Qx(τ)+uT(τ)Ru(τ)-γ2wT(τ)w(τ)]dτ+xT(tf)Qfx(tf)+∫tf-htfuT(τ)Rhu(τ)dτ,
where Q>0, R>0, Qf>0, and Rh>0. We can regard J as a function of either L2 signals or feedback strategies. Let ℳ={μ:[t0,tf]×Rn×Cm[-h,0]→Rm} and 𝒩={ν:[t0,tf]×Rn×Cm[-h,0]→Rl}, where Cm[-h,0] is the space of m-dimensional continuous vector functions on [-h,0]. Spaces ℳ and 𝒩 are strategy spaces, and we will write strategies as μ and ν to distinguish them from signals u and w. If ut denotes u(t+θ), θ∈[-h,0], then ut∈Cm[-h,0] by the definition of Cm[-h,0].
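To make the setup concrete, the following sketch simulates the input-delayed system (2.1) by forward Euler with a delay buffer and evaluates the finite horizon cost (2.2) along the resulting trajectory. It is a minimal illustration only: the system data are taken from the numerical example in Section 5, while the input and disturbance signals, the step size, and the horizon are illustrative assumptions.

```python
# Forward-Euler simulation of (2.1) and evaluation of the cost (2.2).
# System data from (5.1)-(5.2); u, w, dt, tf are illustrative choices.
dt, h, tf = 0.001, 0.5, 2.0
A  = [[-1.0, 1.0], [0.5, 1.5]]
B0 = [0.5, 1.4]
B1 = [0.4, 0.1]
Bw = [0.2, 0.2]
R, Rh, gamma = 1.0, 0.1904, 0.3
Qf = [[1.6094, 2.4524], [2.4524, 7.6094]]

n_delay = int(round(h / dt))
u_hist = [0.0] * n_delay            # phi(tau) = 0 on [-h, 0]
x = [1.0, 1.0]                      # x(0) = x0
u_log = []
J = 0.0

for k in range(int(round(tf / dt))):
    t = k * dt
    u = -1.0 if t < 1.0 else 0.0    # illustrative input signal
    w = 0.1                         # illustrative constant disturbance
    ud = u_hist[0]                  # u(t - h)
    # running cost x'Qx + u'Ru - gamma^2 w'w, with Q = I
    J += (x[0]*x[0] + x[1]*x[1] + R*u*u - gamma**2 * w*w) * dt
    # x' = A x + B0 u + B1 u(t-h) + Bw w
    xdot = [A[0][0]*x[0] + A[0][1]*x[1] + B0[0]*u + B1[0]*ud + Bw[0]*w,
            A[1][0]*x[0] + A[1][1]*x[1] + B0[1]*u + B1[1]*ud + Bw[1]*w]
    x = [x[0] + dt*xdot[0], x[1] + dt*xdot[1]]
    u_hist = u_hist[1:] + [u]
    u_log.append(u)

# terminal terms: x(tf)'Qf x(tf) + integral of u'Rh u over [tf-h, tf]
J += Qf[0][0]*x[0]*x[0] + 2*Qf[0][1]*x[0]*x[1] + Qf[1][1]*x[1]*x[1]
J += sum(Rh * ui*ui for ui in u_log[-n_delay:]) * dt
print("finite horizon cost J =", J)
```

Note how the second terminal term integrates the control over the last delay interval [tf-h, tf], which is exactly the history that still influences the state beyond tf.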
Let us formulate a dynamic game problem
(2.3)minμ∈ℳmaxν∈𝒩J(x(t0),ut0,t0,tf,μ,ν),
which is a zero-sum game, where u is the minimizing player and w is the maximizing player. If the extremizing operators in (2.3) can be interchanged, the resulting minimizing and maximizing strategies are called saddle point strategies. A saddle point solution u(τ)=μ*(τ,x(τ),uτ), w(τ)=ν*(τ,x(τ),uτ) satisfies
(2.4)J(x(t0),ut0,t0,tf,μ*,w)≤J(x(t0),ut0,t0,tf,μ*,ν*)≤J(x(t0),ut0,t0,tf,u,ν*),∀u,w∈L2[t0,tf].
The value J(x(t0),ut0,t0,tf,μ*,ν*) is called the saddle point value. For simple notation, the saddle point value will be denoted by J*(x(t0),ut0,t0,tf) throughout this paper, that is,
(2.5)J*(x(t0),ut0,t0,tf)≜J(x(t0),ut0,t0,tf,μ*,ν*).
The purpose of this paper is to develop a method to design a control law, uR, based on the receding horizon concept such that
in case of zero disturbance, the closed-loop system is asymptotically stable and
with zero initial condition, the closed-loop transfer function from w to z, that is, Tzw, satisfies the H∞-norm bound, for given γ>0,(2.6)∥Tzw∥∞≤γ.
Since the proposed control is based on the receding horizon strategy and the closed-loop system satisfies the H∞-norm bound, such a control will be called the receding horizon H∞ control (RHHC).
Remark 2.1.
It is noted that the terminal weighting function consists of two terms, parameterized by the two matrices Qf and Rh, which we call terminal weighting matrices in this paper. The purpose of adding the second terminal weighting term, parameterized by Rh, is to take the delay effect into account in designing a stabilizing RHHC. More specifically, if Rh is chosen properly, the saddle point value satisfies the “nonincreasing monotonicity property,” which will be considered in Section 3.
Before moving on, we introduce a lemma, which establishes a sufficient condition for a control u and a disturbance w to be saddle point strategies. In the lemma, V(τ,x(τ),uτ):[t0,tf]×Rn×Cm[-h,0]→R denotes a continuous and differentiable functional. Furthermore, we will use the notation
(2.7)ddτV(τ,x(τ),uτ)|μ(τ,x(τ),uτ)ν(τ,x(τ),uτ)≜limΔτ→01Δτ[V(τ+Δτ,xμ,ν(τ+Δτ),uτ+Δτ)-V(τ,x(τ),uτ)],
where xμ,ν(τ+Δτ) is the solution of the system (2.1) resulting from the control u(t)=μ(t,x(t),ut) and disturbance w(t)=ν(t,x(t),ut).
Lemma 2.2.
Assume that there exist a continuous functional V(τ,x(τ),uτ):[t0,tf]×Rn×Cm[-h,0]→R and vector functionals μ*(τ,x(τ),uτ):[t0,tf]×Rn×Cm[-h,0]→Rm and ν*(τ,x(τ),uτ):[t0,tf]×Rn×Cm[-h,0]→Rl such that
(2.8)(a)V(tf,x(tf),utf)=xT(tf)Qfx(tf)+∫tf-htfuT(τ)Rhu(τ)dτ,(b)ddτV(τ,x(τ),uτ)|μ*(τ,x(τ),uτ)ν*(τ,x(τ),uτ)+xT(τ)Qx(τ)+μ*T(τ,x(τ),uτ)Rμ*(τ,x(τ),uτ)-γ2ν*T(τ,x(τ),uτ)ν*(τ,x(τ),uτ)=0,(c)ddτV(τ,x(τ),uτ)|μ*(τ,x(τ),uτ)ν(τ,x(τ),uτ)+xT(τ)Qx(τ)+μ*T(τ,x(τ),uτ)Rμ*(τ,x(τ),uτ)-γ2νT(τ,x(τ),uτ)ν(τ,x(τ),uτ)≤ddτV(τ,x(τ),uτ)|μ*(τ,x(τ),uτ)ν*(τ,x(τ),uτ)+xT(τ)Qx(τ)+μ*T(τ,x(τ),uτ)Rμ*(τ,x(τ),uτ)-γ2ν*T(τ,x(τ),uτ)ν*(τ,x(τ),uτ)≤ddτV(τ,x(τ),uτ)|μ(τ,x(τ),uτ)ν*(τ,x(τ),uτ)+xT(τ)Qx(τ)+μT(τ,x(τ),uτ)Rμ(τ,x(τ),uτ)-γ2ν*T(τ,x(τ),uτ)ν*(τ,x(τ),uτ)
for all τ∈[t0,tf], all x(τ)∈Rn, and all uτ∈Cm[-h,0]. Then, V(s,x(s),us)=J(x(s),us,s,tf,μ*,ν*) and
(2.9)J(x(s),us,s,tf,μ*,ν)≤J(x(s),us,s,tf,μ*,ν*)≤J(x(s),us,s,tf,μ,ν*)
for all s∈[t0,tf]. That is, u(τ)=μ*(τ,x(τ),uτ) and w(τ)=ν*(τ,x(τ),uτ) are saddle point solutions and V(τ,x(τ),uτ) is a saddle point value.
Proof.
Similar lemmas are found in [13–16]. Although Lemma 2.2 differs from those lemmas, the idea of the proof can be obtained without difficulty from the cited references. Thus, we omit the proof of the lemma.
From the above lemma, we see that V(τ,x(τ),uτ) is a saddle point value, that is, V(τ,x(τ),uτ)=J*(x(τ),uτ,τ,tf). Furthermore, it is noted that V(s,x(s),us)≥0 for all s∈[t0,tf]. This can be verified as follows.
From (2.9), it follows that
(2.10)V(s,x(s),us)=J(x(s),us,s,tf,μ*,ν*)≥J(x(s),us,s,tf,μ*,0),
where
(2.11)J(x(s),us,s,tf,μ*,0)=∫stf[xT(τ)Qx(τ)+μ*T(τ,x(τ),uτ)Rμ*(τ,x(τ),uτ)]dτ+xT(tf)Qfx(tf)+∫tf-htfuT(τ)Rhu(τ)dτ.
Since Q>0, R>0, Qf>0, and Rh>0, it follows that J(x(s),us,s,tf,μ*,0)≥0. Consequently, V(s,x(s),us)≥0 for all s∈[t0,tf].
Before deriving the RHHC, we first provide the solution to the finite horizon dynamic game problem in (2.3). The derivation is based on Lemma 2.2 and is quite lengthy and tedious but similar to that used in [15]; therefore, we do not provide the details here. In order to apply the result of Lemma 2.2, we assume that the saddle point value has the form
(2.12)V(τ,x(τ),uτ)={xT(τ)P1(τ)x(τ)+2xT(τ)∫-h0P2(τ,s)u(τ+s)ds+∫-h0∫-h0uT(τ+s)P3(τ,r,s)u(τ+r)drds, t0≤τ<tf-h; xT(τ)W1(τ)x(τ)+2xT(τ)∫-htf-τ-hW2(τ,s)u(τ+s)ds+∫-htf-τ-h∫-htf-τ-huT(τ+s)W3(τ,r,s)u(τ+r)drds+∫tf-τ-h0uT(τ+s)Rhu(τ+s)ds, tf-h≤τ≤tf,
where P1(τ)∈Rn×n, P2(τ,s)∈Rn×m, and P3(τ,r,s)∈Rm×m are determined later on. Using the above saddle point value, the saddle point strategies for the dynamic game problem in (2.3) are given by
(2.13)μ*(τ,x(τ),uτ)={-R-1[Ω2(τ)x(τ)+∫-h0Ω3(τ,s)u(τ+s)ds],t0≤τ<tf-h-Ω1B0T[W1(τ)x(τ)+∫-htf-τ-hW2(τ,s)u(τ+s)ds],tf-h≤τ≤tf,ν*(τ,x(τ),uτ)={γ-2BwT[P1(τ)x(τ)+∫-h0P2(τ,s)u(τ+s)ds],t0≤τ<tf-hγ-2BwT[W1(τ)x(τ)+∫-htf-τ-hW2(τ,s)u(τ+s)ds],tf-h≤τ≤tf,
where Ω1, Ω2(τ), and Ω3(τ,s) are defined as follows:
(2.14)Ω1≜(R+Rh)-1,Ω2(τ)≜B0TP1(τ)+P2T(τ,0),Ω3(τ,s)≜B0TP2(τ,s)+P3T(τ,0,s).
P1(·), P2(·), and P3(·) satisfy the following Riccati-type coupled partial differential equations:
(2.15)P˙1(τ)+ATP1(τ)+P1(τ)A+Q-Ω2T(τ)R-1Ω2(τ)+γ-2P1(τ)BwBwTP1(τ)=0,(∂∂τ-∂∂s)P2(τ,s)+ATP2(τ,s)-Ω2T(τ)R-1Ω3(τ,s)+γ-2P1(τ)BwBwTP2(τ,s)=0,(∂∂τ-∂∂r-∂∂s)P3(τ,r,s)-Ω3T(τ,s)R-1Ω3(τ,r)+γ-2P2T(τ,s)BwBwTP2(τ,r)=0
with boundary conditions
(2.16)P2(τ,-h)=P1(τ)B1,P3(τ,-h,s)=P2T(τ,s)B1,
where t0≤τ<tf-h, -h≤r≤0 and -h≤s≤0. Similarly, W1(·), W2(·), and W3(·) satisfy the following Riccati-type partial differential equations:
(2.17)W˙1(τ)+ATW1(τ)+W1(τ)A+Q-W1(τ)[B0Ω1B0T-γ-2BwBwT]W1(τ)=0,(∂∂τ-∂∂s)W2(τ,s)+ATW2(τ,s)-W1(τ)[B0Ω1B0T-γ-2BwBwT]W2(τ,s)=0,(∂∂τ-∂∂r-∂∂s)W3(τ,r,s)-W2T(τ,s)[B0Ω1B0T-γ-2BwBwT]W2(τ,r)=0
with boundary condition
(2.18)W2(τ,-h)=W1(τ)B1,W3(τ,-h,s)=W2T(τ,s)B1,
where tf-h≤τ≤tf, -h≤r≤0 and -h≤s≤0. In addition, P1(·),P2(·),P3(·) and W1(·),W2(·),W3(·) satisfy the following boundary conditions:
(2.19)W1(tf)=Qf,P1(tf-h)=W1(tf-h),P2(tf-h,s)=W2(tf-h,s),P3(tf-h,r,s)=W3(tf-h,r,s).
P1(·),P2(·),P3(·) and W1(·),W2(·),W3(·) are solved backward in time from tf to t0. Because the system is time-invariant, the solutions are characterized only by the difference between the initial and final times, tf-t0; for a fixed tf-t0, their values at the initial time t0 are the same regardless of t0. For example, P1(t0) with t0=1 and tf=5 is equal to P1(t0) with t0=2 and tf=6. Under the receding horizon strategy, t0 and tf correspond to t and t+Tp, respectively, where t denotes the current time, so the difference between the initial and terminal times is fixed at Tp. Therefore, P1(t0) reduces to a constant matrix regardless of the value of t0. Let us introduce the following notation:
(2.20)Ω-2≜Ω2(t0),Ω-3(s)≜Ω3(t0,s),W-1≜W1(t0),W-2(s)≜W2(t0,s).
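The horizon-shift invariance noted above can be checked on a scalar analogue of the Riccati equations: integrating backward over [1,5] and over [2,6] yields the same initial value, because only the horizon length tf-t0 matters. The coefficients a, q, m, the terminal value pf, and the step size below are illustrative.

```python
# Scalar analogue: integrate p' = -(2*a*p + q - m*p*p) backward from
# p(tf) = pf.  Since the coefficients are constant, p(t0) depends only
# on tf - t0.  All numbers here are illustrative assumptions.
def p_at_t0(t0, tf, a=-1.0, q=1.0, m=0.5, pf=2.0, dt=1e-3):
    p = pf
    for _ in range(int(round((tf - t0) / dt))):
        # backward step: p(t - dt) = p(t) - dt * p'(t)
        p += dt * (2*a*p + q - m*p*p)
    return p

print(p_at_t0(1.0, 5.0), p_at_t0(2.0, 6.0))  # identical values
```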
Finally, the RHHC is represented as a distributed feedback strategy as follows:
(2.21)uR(t)={-R-1[Ω-2x(t)+∫-h0Ω-3(s)u(t+s)ds],Tp>h,-Ω1B0T[W-1x(t)+∫-hTp-hW-2(s)u(t+s)ds],0<Tp≤h.
It is noted that the feedback strategy is time-invariant. In order to solve the Riccati-type coupled partial differential equations (PDEs) given in (2.15) and (2.17), we can utilize a numerical algorithm in [16]. The time required to solve the PDEs is proportional to the prediction horizon length Tp. However, the real-time computational load of the RHHC remains the same for any prediction horizon length larger than the delay length h.
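Note that the first equation of (2.17) involves only W1 and is an ordinary matrix Riccati differential equation, so over [tf-h, tf] it can be integrated backward from W1(tf)=Qf directly. The following is a minimal explicit-Euler sketch using the data of the numerical example in Section 5; the step size is an assumption, and a proper implementation would use the algorithm of [16].

```python
# Backward Euler integration of W1' + A'W1 + W1 A + Q
#   - W1 [B0 (R+Rh)^{-1} B0' - gamma^{-2} Bw Bw'] W1 = 0
# over [tf - h, tf] with W1(tf) = Qf.  Data from (5.1)-(5.2).
def mt(M):                       # transpose
    return [[M[j][i] for j in range(len(M))] for i in range(len(M[0]))]

def mm(X, Y):                    # matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def madd(X, Y, sx=1.0, sy=1.0):  # scaled sum sx*X + sy*Y
    return [[sx * X[i][j] + sy * Y[i][j] for j in range(len(X[0]))]
            for i in range(len(X))]

A  = [[-1.0, 1.0], [0.5, 1.5]]
B0 = [[0.5], [1.4]]
Bw = [[0.2], [0.2]]
Q  = [[1.0, 0.0], [0.0, 1.0]]
R, Rh, gamma, h = 1.0, 0.1904, 0.3, 0.5
Qf = [[1.6094, 2.4524], [2.4524, 7.6094]]

omega1 = 1.0 / (R + Rh)          # (R + Rh)^{-1}; scalar input here
# M = B0 Omega1 B0' - gamma^{-2} Bw Bw'
M = madd(mm(B0, mt(B0)), mm(Bw, mt(Bw)), sx=omega1, sy=-gamma**-2)

dt, W = 1e-4, Qf
for _ in range(int(round(h / dt))):
    # W1' = -(A'W1 + W1 A + Q - W1 M W1); step backward in time
    rhs = madd(madd(mm(mt(A), W), mm(W, A)),
               madd(Q, mm(W, mm(M, W)), sy=-1.0))
    W = madd(W, rhs, sy=dt)

print("W1(tf - h) =", W)
```

Consistent with the theory (V≥0 and the nonincreasing monotonicity of Section 3), the computed W1 stays symmetric and positive definite along the interval.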
We have constructed the RHHC from the solution to a finite horizon dynamic game problem. However, the only thing we can say about the control at present is that it is obtained based on the receding horizon strategy. Nothing can be said about the asymptotic stability and H∞-norm boundedness yet. We therefore will investigate those issues in the next two sections.
3. Nonincreasing Monotonicity of a Saddle Point Value
Nonincreasing monotonicity of the saddle point value plays an important role in proving the closed-loop stability and guaranteeing H∞-norm bound for delay-free systems and state-delay systems. As will be shown later, this is also the case with input-delay systems. In what follows, we will show how to choose terminal weighting matrices such that the saddle point value satisfies the nonincreasing monotonicity.
Theorem 3.1.
Given γ>0, assume that there exist X>0, S, Y1, and Y2 such that
(3.1)[(AX+B0Y1)T+(AX+B0Y1) (B1S+B0Y2) Bw XQ1/2 Y1TR1/2 Y1T; ⋆ -S 0 0 Y2TR1/2 Y2T; ⋆ ⋆ -γ2I 0 0 0; ⋆ ⋆ ⋆ -I 0 0; ⋆ ⋆ ⋆ ⋆ -I 0; ⋆ ⋆ ⋆ ⋆ ⋆ -S]≤0.
If one chooses terminal weighting matrices Qf and Rh such that Qf=X-1 and Rh=S-1, the saddle point value J*(x(t0),ut0,t0,σ) satisfies the following nonincreasing monotonicity property:
(3.2)∂J*(x(t0),ut0,t0,σ)∂σ≤0,∀σ>t0.
Proof.
The derivative of J* with respect to the terminal time can be written as
(3.3)∂J*(x(t0),ut0,t0,σ)∂σ=limΔ→01Δ{∫t0σ+Δ[x-T(τ)Qx-(τ)+μ-T(τ,x-(τ),u-τ)Rμ-(τ,x-(τ),u-τ)-γ2ν-T(τ,x-(τ),u-τ)ν-(τ,x-(τ),u-τ)]dτ+x-T(σ+Δ)Qfx-(σ+Δ)+∫σ+Δ-hσ+Δu-T(τ)Rhu-(τ)dτ-∫t0σ[x^T(τ)Qx^(τ)+μ^T(τ,x^(τ),u^τ)Rμ^(τ,x^(τ),u^τ)-γ2ν^T(τ,x^(τ),u^τ)ν^(τ,x^(τ),u^τ)]dτ-x^T(σ)Qfx^(σ)-∫σ-hσu^T(τ)Rhu^(τ)dτ},
where the pair (μ-,ν-) is a saddle point solution for J(x(t0),ut0,t0,σ+Δ,u,w) and the pair (μ^,ν^) is a saddle point solution for J(x(t0),ut0,t0,σ,u,w). x- denotes the state trajectory resulting from the strategies μ- and ν-, and x^ denotes the state resulting from μ^ and ν^. Let us replace μ- by μ^ and ν^ by ν- up to σ, and use u(τ)=K1x(τ)+K2u(τ-h) and w(τ)=ν-(τ,x(τ),uτ) for τ≥σ. It is noted that, since we have changed the strategies, the resulting state trajectory is neither x- nor x^; let us denote it by x. Then we obtain
(3.4)∂J*(x(t0),ut0,t0,σ)∂σ≤limΔ→01Δ{∫σσ+Δ[xT(τ)Qx(τ)+[K1x(τ)+K2u(τ-h)]TR[K1x(τ)+K2u(τ-h)]-γ2wT(τ)w(τ)]dτ+xT(σ+Δ)Qfx(σ+Δ)+∫σ+Δ-hσ+ΔuT(τ)Rhu(τ)dτ-xT(σ)Qfx(σ)-∫σ-hσuT(τ)Rhu(τ)dτ}=xT(σ)Qx(σ)+[K1x(σ)+K2u(σ-h)]TR[K1x(σ)+K2u(σ-h)]-γ2wT(σ)w(σ)+ddσ{xT(σ)Qfx(σ)+∫σ-hσuT(τ)Rhu(τ)dτ}=xT(σ)Qx(σ)+[K1x(σ)+K2u(σ-h)]TR[K1x(σ)+K2u(σ-h)]-γ2wT(σ)w(σ)+2x˙T(σ)Qfx(σ)+uT(σ)Rhu(σ)-uT(σ-h)Rhu(σ-h).
After substituting x˙(σ)=(A+B0K1)x(σ)+(B1+B0K2)u(σ-h)+Bww(σ) into the above, we obtain
(3.5)∂J*(x(t0),ut0,t0,σ)∂σ≤[x(σ); u(σ-h); w(σ)]T[Λ11 Qf(B1+B0K2)+K1T(R+Rh)K2 QfBw; ⋆ K2T(R+Rh)K2-Rh 0; ⋆ ⋆ -γ2I]︸Λ[x(σ); u(σ-h); w(σ)],
where Λ11 is given as
(3.6)Λ11=(A+B0K1)TQf+Qf(A+B0K1)+Q+K1T(R+Rh)K1.
It is apparent that, if Λ≤0, the nonincreasing monotonicity in (3.2) holds. Λ≤0 can be rewritten as follows:
(3.7)Λ=[(A+B0K1)TQf+Qf(A+B0K1) Qf(B1+B0K2) QfBw; ⋆ -Rh 0; ⋆ ⋆ -γ2I]+[Q1/2 0 0; R1/2K1 R1/2K2 0; K1 K2 0]T[I 0 0; 0 I 0; 0 0 Rh-1]-1[Q1/2 0 0; R1/2K1 R1/2K2 0; K1 K2 0]≤0.
Pre- and postmultiplying the above matrix inequality by diag{Qf-1,Rh-1,I}, setting X=Qf-1, S=Rh-1, Y1=K1X, and Y2=K2S, and applying the Schur complement, Λ≤0 is equivalently transformed into (3.1). This completes the proof.
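The Schur complement step used above can be checked numerically. The sketch below uses small illustrative matrices (not the system data of this paper) and tests strict negative definiteness via Sylvester's criterion on leading principal minors; the proof itself relies on the semidefinite version of the same equivalence.

```python
# For symmetric M = [[A, B], [B', -C]] with C > 0:
#   M < 0  iff  A + B C^{-1} B' < 0   (Schur complement).
# Illustrative 2x2/1x1 blocks; definiteness tested via leading minors.
def det(M):
    if len(M) == 1:
        return M[0][0]
    if len(M) == 2:
        return M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

def neg_def(M):
    # Sylvester: M < 0 iff (-1)^k det(M_k) > 0 for every leading block
    return all((-1)**k * det([row[:k] for row in M[:k]]) > 0
               for k in range(1, len(M) + 1))

# A = [[-2, 0], [0, -3]], B = [[1], [0]], C = [[1]]
Mblk = [[-2.0, 0.0, 1.0],
        [0.0, -3.0, 0.0],
        [1.0, 0.0, -1.0]]
# Schur complement A + B C^{-1} B'
schur = [[-2.0 + 1.0, 0.0], [0.0, -3.0]]

print(neg_def(Mblk), neg_def(schur))  # both True: the two tests agree
```

Replacing the (1,1) entry of A with a positive number makes both tests fail simultaneously, which is the equivalence exploited when converting Λ≤0 into the LMI (3.1).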
The nonincreasing monotonicity of the saddle point value implies that the saddle point value does not increase even though we increase the horizon length. As will be shown in the next section, this property plays an important role in RHHC's achieving closed-loop stability and H∞-norm boundedness.
Remark 3.2.
It is mentioned that once we obtain feasible matrices X, S, Y1, and Y2 satisfying the LMI (3.1), the controller u(t)=K1x(t)+K2u(t-h), where K1=Y1X-1 and K2=Y2S-1, is also a stabilizing H∞ controller with guaranteed H∞ performance bound γ, even though we do not provide the proof here due to space limitations. The features of the proposed RHHC compared with the controller u(t)=K1x(t)+K2u(t-h) will be illustrated through a numerical example.
4. Asymptotic Stability and H∞-Norm Boundedness
In this section, we show that the proposed receding horizon control achieves the closed-loop asymptotic stability for zero disturbance and achieves the H∞-norm boundedness for zero initial condition.
Theorem 4.1.
Given Q>0 and γ>0, if ∂J*(x(t0),ut0,t0,σ)/∂σ≤0 for σ>t0, the system (2.1) controlled by the RHHC in (2.21) is asymptotically stable for zero disturbance and satisfies the infinite horizon H∞-norm bound for zero initial condition.
Proof.
Nonincreasing monotonicity of a saddle point value is a sufficient condition for asymptotic stability and H∞-norm boundedness of the RHHC for state-delayed systems. This theorem states that the same holds for the RHHC for input-delayed systems. The complete proof is lengthy, but the idea used in [14] can be applied without much difficulty; thus we omit the proof.
An LMI condition on the terminal weighting matrices under which the saddle point value satisfies the nonincreasing monotonicity is given in Theorem 3.1. Therefore, we arrive at the following corollary.
Corollary 4.2.
Given Q>0, R>0, and γ>0, if the LMI (3.1) is feasible and the two terminal weighting matrices Qf and Rh are obtained from its solution, the system (2.1) controlled by the proposed RHHC in (2.21) is asymptotically stable for zero disturbance and satisfies the infinite horizon H∞ performance bound for zero initial condition.
Remark 4.3.
Memoryless H∞ state-feedback controllers also achieve closed-loop stability and satisfy the H∞ performance bound. In fact, the proposed RHHC does not have an advantage over existing H∞ state-feedback controllers in terms of the H∞ performance bound, as will be shown in the numerical example. However, the proposed RHHC has an advantage in that it can improve the performance represented by the quadratic cost Jq:
(4.1)Jq=∫0∞[xT(t)Qx(t)+uT(t)Ru(t)]dt
by adjusting the prediction horizon length Tp in the case of a nonzero initial condition with zero disturbance. Control systems are not always subject to disturbances, so it is meaningful to consider situations where the disturbance has vanished. The proposed RHHC is then suitable because it has a guaranteed H∞ performance bound and an improved quadratic cost. This feature will be illustrated later through a numerical example.
5. Numerical Example
In this section, a numerical example is presented in order to illustrate the feature of the proposed RHHC. Consider an input-delayed system (2.1) whose model parameters are given by
(5.1)A=[-1 1; 0.5 1.5],B0=[0.5; 1.4],B1=[0.4; 0.1],Bw=[0.2; 0.2],h=0.5.
It is noted that the system is open-loop unstable because the eigenvalues of A are -1.1861 and 1.6861. The state and input weighting matrices Q and R in (2.2) are chosen to be Q=I and R=1. For γ=0.3, the terminal weighting matrices Qf and Rh are obtained from Theorem 3.1 as follows:
(5.2)Qf=[1.6094 2.4524; 2.4524 7.6094],Rh=0.1904.
We chose the prediction horizon length to be 1, that is, Tp=1, and computed the RHHC in (2.21) after solving the partial differential equations (2.15) and (2.17). The obtained RHHC has the form
(5.3)u(t)=-[0.8518 3.4256]x(t)+∫-0.50K(s)u(t+s)ds,
where the shape of K(s) is shown in Figure 1. As mentioned in Remark 3.2, we can also obtain a stabilizing H∞ controller from Theorem 3.1 as follows:
(5.4)u(t)=-[3.2565 9.3240]x(t)-0.0354u(t-h).
The shape of K(s) for Tp=1.
In order to illustrate the system response to a disturbance input, we applied a disturbance w(t) whose shape is given in Figure 2. The state trajectory x1 of the system under the proposed RHHC in (5.3) is compared with that under the controller in (5.4) in Figure 3. It is seen that both controllers stabilize the input-delayed system affected by the external disturbance. It appears that the controller in (5.4) outperforms the proposed RHHC. For a quantitative comparison, we computed the H∞ performance. First, for the proposed RHHC, we obtained
(5.5)[∫zT(t)z(t)dt/∫wT(t)w(t)dt]1/2=0.2265<γ=0.3,
which supports the fact that the controlled system satisfies the H∞ performance bound. For the controller given in (5.4), the obtained H∞ performance was 0.1647, which is even better than that of the proposed RHHC. This shows that the proposed RHHC does not have an advantage over existing methods in terms of the H∞ performance bound. One may then wonder what the merit of the proposed RHHC is. As already mentioned, one prominent advantage is that the control performance, represented in terms of the quadratic cost, can be improved by adjusting the prediction horizon length Tp in the stabilization problem with no external disturbance. For this illustration, we assume that the initial state of the system is x(0)=[1 1]T. In the case of zero disturbance, let us define the quadratic cost as follows:
(5.6)Jq=∫010[xT(t)Qx(t)+uT(t)Ru(t)]dt.
Figure 4 shows the state trajectories obtained by applying the proposed RHHC with different prediction horizon lengths, together with the resulting quadratic costs. It is noted that Tp=0 leads to the controller (5.4). The figure clearly shows that the RHHC with a longer Tp achieves a smaller quadratic cost. This example illustrates that the proposed RHHC guarantees the H∞ performance bound for nonzero external disturbances and that the quadratic performance can be improved by adjusting the prediction horizon length in the case of a nonzero initial condition and zero disturbance. This feature is not achievable with the conventional memoryless state-feedback controller.
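As a rough cross-check of the zero-disturbance case, the following sketch simulates the closed loop under the controller (5.4) (the Tp=0 case) from x(0)=[1 1]T with zero input history and accumulates a quadratic cost of the form (5.6). The forward-Euler discretization and step size are assumptions, so the resulting cost only approximates the values reported in the figure.

```python
# Zero-disturbance closed-loop simulation of (5.4) with x(0) = [1, 1]'
# and phi = 0 on [-h, 0]; accumulates Jq over [0, 10] as in (5.6).
dt, h, T = 0.001, 0.5, 10.0
A  = [[-1.0, 1.0], [0.5, 1.5]]
B0 = [0.5, 1.4]
B1 = [0.4, 0.1]
K1 = [-3.2565, -9.3240]   # state-feedback gain of (5.4)
K2 = -0.0354              # delayed-input gain of (5.4)
R  = 1.0

n_delay = int(round(h / dt))
u_hist = [0.0] * n_delay  # phi(tau) = 0 on [-h, 0]
x = [1.0, 1.0]
Jq = 0.0

for _ in range(int(round(T / dt))):
    ud = u_hist[0]                          # u(t - h)
    u = K1[0]*x[0] + K1[1]*x[1] + K2*ud     # controller (5.4)
    Jq += (x[0]*x[0] + x[1]*x[1] + R*u*u) * dt   # x'Qx + u'Ru, Q = I
    # x' = A x + B0 u + B1 u(t-h), w = 0
    xdot = [A[0][0]*x[0] + A[0][1]*x[1] + B0[0]*u + B1[0]*ud,
            A[1][0]*x[0] + A[1][1]*x[1] + B0[1]*u + B1[1]*ud]
    x = [x[0] + dt*xdot[0], x[1] + dt*xdot[1]]
    u_hist = u_hist[1:] + [u]

print("final state:", x, " quadratic cost Jq =", Jq)
```

The state decays to (numerically) zero, consistent with Remark 3.2; repeating the experiment with the RHHC (5.3) for increasing Tp would reproduce the decreasing costs shown in Figure 4.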
The shape of disturbance, w(t).
State trajectories x1 due to the disturbance input: solid line, RHHC in (5.3); dash-dot, controller in (5.4).
State trajectories x1 for different Tp and the corresponding quadratic costs.
6. Conclusions
In this paper, we proposed the receding horizon H∞ control (RHHC) for input-delayed systems. Firstly, we proposed a new cost function for a dynamic game problem; the cost function has two terminal weighting terms parameterized by two terminal weighting matrices. Secondly, we derived a saddle point solution to the finite horizon dynamic game problem. Thirdly, the receding horizon H∞ control was constructed from the obtained saddle point solution. We showed that, under the nonincreasing monotonicity condition on the saddle point value, the proposed RHHC is stabilizing and satisfies the H∞ performance bound, and we proposed an LMI condition on the terminal weighting matrices under which the saddle point value satisfies the nonincreasing monotonicity. Unlike the conventional memoryless state-feedback controller, the proposed RHHC allows the quadratic performance for a nonzero initial condition to be improved by adjusting the prediction horizon length.
Acknowledgments
This research was supported by an INHA Research Grant and was also supported by the MKE (The Ministry of Knowledge Economy), Korea, under the CITRC (Convergence Information Technology Research Center) support program (NIPA-2012-H0401-12-1007) supervised by the NIPA (National IT Industry Promotion Agency).
References
[1] W. H. Kwon and A. E. Pearson, "Feedback stabilization of linear systems with delayed control," IEEE Transactions on Automatic Control, vol. 25, no. 2, pp. 266–269, 1980.
[2] Z. Artstein, "Linear systems with delayed controls: a reduction," IEEE Transactions on Automatic Control, vol. 27, no. 4, pp. 869–879, 1982.
[3] G. Tadmor, "The standard H∞ problem in systems with a single input delay," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 382–397, 2000.
[4] Y. S. Moon, P. Park, and W. H. Kwon, "Robust stabilization of uncertain input-delayed systems using reduction method," Automatica, vol. 37, no. 2, pp. 307–312, 2001.
[5] M. Basin and J. Rodriguez-Gonzalez, "Optimal control for linear systems with multiple time delays in control input," IEEE Transactions on Automatic Control, vol. 51, no. 1, pp. 91–97, 2006.
[6] M. V. Kothare, V. Balakrishnan, and M. Morari, "Robust constrained model predictive control using linear matrix inequalities," Automatica, vol. 32, no. 10, pp. 1361–1379, 1996.
[7] J. A. Primbs and V. Nevistić, "Feasibility and stability of constrained finite receding horizon control," Automatica, vol. 36, no. 7, pp. 965–971, 2000.
[8] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. M. Scokaert, "Constrained model predictive control: stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000.
[9] G. Tadmor, "Receding horizon revisited: an easy way to robustly stabilize an LTV system," Systems & Control Letters, vol. 18, no. 4, pp. 285–294, 1992.
[10] S. Lall and K. Glover, "A game theoretic approach to moving horizon control," D. Clarke, Ed., Oxford University Press, pp. 131–144, 1994.
[11] J.-W. Lee, W. H. Kwon, and J. H. Lee, "Receding horizon H∞ tracking control for time-varying discrete linear systems," International Journal of Control, vol. 68, no. 2, pp. 385–399, 1997.
[12] W. H. Kwon, J. W. Kang, Y. S. Lee, and Y. S. Moon, "A simple receding horizon control for state delayed systems and its stability criterion," vol. 13, no. 6, pp. 539–551, 2001.
[13] W. H. Kwon, Y. S. Lee, and S. H. Han, "General receding horizon control for linear time-delay systems," Automatica, vol. 40, no. 9, pp. 1603–1611, 2004.
[14] Y. S. Lee, S. H. Han, and W. H. Kwon, "Receding horizon H∞ control for systems with a state-delay," Asian Journal of Control, vol. 8, no. 1, pp. 63–71, 2006.
[15] J. H. Park, H. W. Yoo, S. Han, and W. H. Kwon, "Receding horizon controls for input-delayed systems," IEEE Transactions on Automatic Control, vol. 53, no. 7, pp. 1746–1752, 2008.
[16] D. H. Eller, J. K. Aggarwal, and H. T. Banks, "Optimal control of linear time-delay systems," IEEE Transactions on Automatic Control, vol. 14, no. 6, pp. 678–687, 1969.