1. Introduction

In recent years, the problem of missing measurements caused by unreliable channel transmission has attracted considerable attention [1–3]. Research on packet loss can be roughly divided into two directions: one solves for a linear estimator in the minimum mean square error sense, and the other applies the quadratic filtering method. Nahi [4] considers the case in which the observation sequence may contain only noise when packets are lost and derives a set of recursive formulas, similar to the Kalman filter, that are optimal in the minimum mean square error sense. In [5], a Kalman filter with intermittent observations is developed to address information loss in large wireless sensor networks. Zhang et al. [6] propose an estimator that applies over an infinite horizon and whose iteration only requires solving a Riccati equation; this estimator avoids the convergence analysis problem caused by the Lyapunov-equation computations in traditional estimation methods. The authors in [7] apply the reorganized innovation analysis method to obtain an optimal linear filter, which solves the remote estimation problem for packet-loss networks in which the measurement delay obeys a Bernoulli distribution.
However, with the development of engineering technology [8–10], the performance of traditional linear estimators can no longer meet the requirements of real systems. Therefore, increasing attention has been paid to the optimized design and implementation of estimators. De Santis et al. [11] first propose a method called quadratic filtering, which uses a quadratic function of the measurement equation to improve filter performance. Experiments show that this method outperforms linear filtering in estimation performance, and the approach has attracted the attention of many scholars. Caballero-Águila et al. [12] use innovation methods to solve the least squares linear estimation problem and reduce the quadratic estimation problem to a linear estimation problem in a suitable augmented system. After that, Cacace et al. [13] add a packet loss factor to the measurement model and use the quadratic filtering method to obtain a filter iteration equation with a smaller estimation error. The Kronecker algebraic rules are used in [14] to discuss the stochastic properties of the augmented noise in augmented systems; the linear estimate of a discrete-time non-Gaussian system is then obtained by the projection formula. Cacace et al. [15] propose a feedback quadratic filter that rewrites the system model by introducing an output injection term and prove that the performance of the feedback quadratic filter depends on the gain parameter of the output term.
Meanwhile, researchers have found that time delays are common due to uncertain factors such as limited bandwidth and network failures [16–21]. In particular, the state estimation problem for time-delay systems has received much attention in [22–26]. Time delay often leads to instability and degrades the overall performance of the system, so solving the time-delay problem is bound to be an important research subject. In [27], the solution of a system with continuous observation delay is obtained by solving a partial differential equation. The authors in [28] transform a discrete-time system with observation delay into an estimation problem for a system without observation delay by augmenting the dimensions and then obtain the filter based on standard Kalman filtering theory. Zhang et al. [29] propose an innovation reorganization analysis theory: keeping the observation information unchanged, the observation data from different channels and different time delays are rearranged and combined into a system without time delay. The reorganized observation data are then introduced into the innovation sequence, and the estimator is designed by using projection theory. Song et al. [7] extend the abovementioned innovation reorganization theory to infinite-horizon estimation and give a deeper treatment of the estimation of multistable systems.
Inspired by the above studies, this paper considers the quadratic filtering problem for discrete-time time-delay systems with multichannel multiplicative noise. First, we assume that the measurement is subject to a delay and is transmitted through multiple communication channels, where the packet loss of each channel is described by an independent and identically distributed Bernoulli process. Second, we use the innovation reorganization theory to rearrange and combine the observation data, obtaining a new delay-free observation structure. Finally, we construct the quadratic filtering equations for the new delay-free observation system and obtain a new filter by solving two Riccati equations and one Lyapunov equation. The main contribution of this paper is to effectively combine the quadratic filtering method with the innovation reorganization theory, so as to obtain a quadratic filtering scheme for discrete-time systems with packet loss and measurement delay.
The rest of this article is organized as follows. Section 2 provides the problem statement and preliminaries. Section 3 presents the quadratic filtering solution of the problem with detailed derivations; this part contains the key results of this paper. Section 4 gives a simulation example to demonstrate the effectiveness of the estimator algorithm in Section 3. Finally, the summary of this paper is given in Section 5.
1.1. Notation

Throughout this paper, the superscripts "T" and "−1" denote the transpose and inverse of a matrix, $\mathcal{R}^n$ denotes the $n$-dimensional Euclidean space, $E[\cdot]$ stands for the mathematical expectation operator, $\otimes$ and $\odot$ denote the Kronecker product and the Hadamard product, respectively, $I$ represents an identity matrix of appropriate dimension, and $\delta_{ij}=0$ for $i\neq j$ with $\delta_{ii}=1$. We write $\mathrm{diag}\{\lambda_1,\ldots,\lambda_n\}$ for the diagonal matrix whose diagonal elements are $\lambda_1,\ldots,\lambda_n$. If dimensions are not explicitly stated, matrices are assumed to be compatible with the algebraic operations involved.
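For readers implementing the derivations numerically, the two products used throughout the paper can be computed with NumPy as follows (an illustrative sketch with arbitrary matrices, not part of the paper's algorithm):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

kron = np.kron(A, B)  # Kronecker product: block (i, j) is A[i, j] * B, size 4x4
had = A * B           # Hadamard product: elementwise, size 2x2

print(kron.shape, had.shape)
```

Note that the Kronecker product grows the dimension multiplicatively, while the Hadamard product preserves it; both appear in the second-order moment calculations of Section 3.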
3. Main Results

Since the measurement is subject to a time delay $d$, at instant $k$ the state $x(k-d)$ has an additional measurement $y_1(k)$; that is, when $k\ge d$, the measurement $y(k)$ contains the time delay. According to [29], the linear space $\mathcal{L}\{y(s)\}_{s=0}^{k}$ contains the same information as $\mathcal{L}\big\{\{Y_1(s)\}_{s=0}^{k-d},\{Y_0(s)\}_{s=k-d+1}^{k}\big\}$, where the new observations $Y_1(s)$ and $Y_0(s)$ are given by
$$Y_1(s)=\begin{bmatrix}y_0(s)\\ y_1(s+d)\end{bmatrix},\quad 0\le s\le k-d,\qquad Y_0(s)=y_0(s),\quad k-d< s\le k.\tag{5}$$
For convenience, $Y_0(s)$ and $Y_1(s)$ can be rewritten as
$$Y_1(s)=H_1x(s)+V_1(s),\qquad Y_0(s)=H_0x(s)+V_0(s),\tag{6}$$
where
$$H_1=\begin{bmatrix}\xi(s)B_0\\ \theta(s+d)B_1\end{bmatrix},\quad H_0=\xi(s)B_0,\quad V_1(s)=\begin{bmatrix}v_0(s)\\ v_1(s+d)\end{bmatrix},\quad V_0(s)=v_0(s).\tag{7}$$
Before introducing the quadratic filtering problem, we construct augmented state and measurement vectors by stacking the original vectors together with their second-order Kronecker powers. The Kronecker square of the state, $x^{[2]}(s)=x(s)\otimes x(s)$, evolves according to
$$x^{[2]}(s+1)=A^{[2]}x^{[2]}(s)+f(s)+m_n^{[2]}(s),\tag{8}$$
where $m_n^{[2]}(s)=E[n^{[2]}(s)]$ and
$$f(s)=n^{[2]}(s)+Ax(s)\otimes n(s)+n(s)\otimes Ax(s)-m_n^{[2]}(s),\tag{9}$$
with $E[f(s)]=0$.
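The decomposition in (8)–(9) is simply the Kronecker square of the state equation expanded term by term; it can be checked numerically on a single noise sample (an illustrative sketch with arbitrary matrices, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)       # x(s)
noise = rng.standard_normal(n)   # one sample of n(s)

# x(s+1) = A x(s) + n(s), so its Kronecker square expands as
# (Ax+n) ⊗ (Ax+n) = (A⊗A)(x⊗x) + Ax⊗n + n⊗Ax + n⊗n,
# which is exactly A^{[2]} x^{[2]}(s) + f(s) + m_n^{[2]}(s) in (8)-(9).
x_next = A @ x + noise
lhs = np.kron(x_next, x_next)
A2 = np.kron(A, A)
cross = np.kron(A @ x, noise) + np.kron(noise, A @ x) + np.kron(noise, noise)
rhs = A2 @ np.kron(x, x) + cross
assert np.allclose(lhs, rhs)
```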
Similarly, it is not difficult to obtain the following measurement equations:
$$y_0^{[2]}(s)=\phi^{[2]}B_0^{[2]}x^{[2]}(s)+l(s)+m_{V_0}^{[2]}(s),\qquad y_1^{[2]}(s)=\begin{bmatrix}\phi&0\\0&\varphi\end{bmatrix}^{[2]}\begin{bmatrix}B_0\\B_1\end{bmatrix}^{[2]}x^{[2]}(s)+g(s)+m_{V_1}^{[2]}(s),\tag{10}$$
where
$$\begin{aligned}l(s)&=\big(\xi(s)-\phi\big)^{[2]}B_0^{[2]}x^{[2]}(s)+\xi(s)B_0x(s)\otimes V_0(s)+V_0(s)\otimes\xi(s)B_0x(s)+V_0^{[2]}(s)-m_{V_0}^{[2]}(s),\\ g(s)&=\bar H_1^{[2]}x^{[2]}(s)+V_1^{[2]}(s)+H_1x(s)\otimes V_1(s)+V_1(s)\otimes H_1x(s)-m_{V_1}^{[2]}(s),\\ \bar H_1&=\begin{bmatrix}\xi(s)-\phi&0\\0&\theta(s+d)-\varphi\end{bmatrix}\begin{bmatrix}B_0\\B_1\end{bmatrix},\end{aligned}\tag{11}$$
in which
$$\phi^{[2]}=E[\xi(s)\otimes\xi(s)]=\begin{bmatrix}\phi_{11}&0&\cdots&0\\0&\phi_{22}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\phi_{m_1m_1}\end{bmatrix},\quad \begin{bmatrix}\phi&0\\0&\varphi\end{bmatrix}^{[2]}=E\left[\begin{bmatrix}\xi(s)&0\\0&\theta(s)\end{bmatrix}\otimes\begin{bmatrix}\xi(s)&0\\0&\theta(s)\end{bmatrix}\right]=\begin{bmatrix}\mathrm{diag}\{\phi_{11},\varphi_{11}\}&0&\cdots&0\\0&\mathrm{diag}\{\phi_{22},\varphi_{22}\}&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\varphi_{m_2m_2}\end{bmatrix},\tag{12}$$
with
$$\begin{aligned}\phi_{11}&=\mathrm{diag}\{\alpha_1,\alpha_1\alpha_2,\ldots,\alpha_1\alpha_{m_1}\},&\phi_{22}&=\mathrm{diag}\{\alpha_2\alpha_1,\alpha_2,\ldots,\alpha_2\alpha_{m_1}\},&\phi_{m_1m_1}&=\mathrm{diag}\{\alpha_{m_1}\alpha_1,\alpha_{m_1}\alpha_2,\ldots,\alpha_{m_1}\},\\ \varphi_{11}&=\mathrm{diag}\{\alpha_1\beta_1,\alpha_1\beta_2,\ldots,\alpha_1\beta_{m_2}\},&\varphi_{22}&=\mathrm{diag}\{\alpha_2\beta_1,\alpha_2\beta_2,\ldots,\alpha_2\beta_{m_2}\},&\varphi_{m_2m_1}&=\mathrm{diag}\{\beta_1\alpha_1,\beta_1\alpha_2,\ldots,\beta_1\beta_{m_2}\},\\ \varphi_{m_2m_2}&=\mathrm{diag}\{\beta_{m_2}\alpha_1,\beta_{m_2}\alpha_2,\ldots,\beta_{m_2}\},&&&&\end{aligned}\tag{13}$$
where $\phi=\mathrm{diag}\{\alpha_1,\ldots,\alpha_{m_1}\}$, $\varphi=\mathrm{diag}\{\beta_1,\ldots,\beta_{m_2}\}$, $E[l(s)]=0$, and $E[g(s)]=0$.
Then, the augmented state vector and measurement vector can be written as
$$\mathbf{x}(s)\triangleq\begin{bmatrix}x(s)\\x^{[2]}(s)\end{bmatrix},\qquad \mathbf{y}(s)\triangleq\begin{bmatrix}y(s)\\y^{[2]}(s)\end{bmatrix}.\tag{14}$$
Finally, we can derive the augmented system as follows:
$$\mathbf{x}(s+1)=\mathbf{A}\mathbf{x}(s)+\mathbf{C}U_0(s)+F(s),\tag{15}$$
$$\mathbf{Y}_0(s)=\mathbf{H}_0\mathbf{x}(s)+E_0U_0(s)+L(s),\tag{16}$$
$$\mathbf{Y}_1(s)=\mathbf{H}_1\mathbf{x}(s)+E_1U_1(s)+G(s),\tag{17}$$
where
$$\begin{aligned}\mathbf{A}&\triangleq\begin{bmatrix}A&0\\0&A^{[2]}\end{bmatrix},&\mathbf{C}&\triangleq\begin{bmatrix}0&0\\I_{n^2\times n^2}&0\end{bmatrix},&F(s)&\triangleq\begin{bmatrix}n(s)\\f(s)\end{bmatrix},\\ \mathbf{H}_0&\triangleq\begin{bmatrix}\phi B_0&0\\0&\phi^{[2]}B_0^{[2]}\end{bmatrix},&E_0&\triangleq\begin{bmatrix}0&0\\0&I_{m_1^2\times m_1^2}\end{bmatrix},&U_0(s)&\triangleq\begin{bmatrix}m_n^{[2]}(s)\\m_{V_0}^{[2]}(s)\end{bmatrix},\\ L(s)&\triangleq\begin{bmatrix}(\xi(s)-\phi)B_0x(s)+V_0(s)\\l(s)\end{bmatrix},&U_1(s)&\triangleq\begin{bmatrix}m_n^{[2]}(s)\\m_{V_1}^{[2]}(s)\end{bmatrix},&E_1&\triangleq\begin{bmatrix}0&0\\0&I_{m_2^2\times m_2^2}\end{bmatrix},\\ G(s)&\triangleq\begin{bmatrix}\bar H_1x(s)+V_1(s)\\g(s)\end{bmatrix},&\mathbf{H}_1&\triangleq\begin{bmatrix}\begin{bmatrix}\phi&0\\0&\varphi\end{bmatrix}\begin{bmatrix}B_0\\B_1\end{bmatrix}&0\\[4pt]0&\left(\begin{bmatrix}\phi&0\\0&\varphi\end{bmatrix}\begin{bmatrix}B_0\\B_1\end{bmatrix}\right)^{[2]}\end{bmatrix}.&&\end{aligned}\tag{18}$$
Note that the new measurements $\mathbf{Y}_0(s)$ and $\mathbf{Y}_1(s)$ are delay free. $F(s)$, $L(s)$, and $G(s)$ for all $s$ and the initial state $\mathbf{x}(0)$ are mutually independent. Moreover, $F(s)$, $L(s)$, and $G(s)$ are zero mean with $E[F(s)F^T(j)]=Q_F(s)\delta_{s,j}$, $E[L(s)L^T(j)]=Q_L(s)\delta_{s,j}$, and $E[G(s)G^T(j)]=Q_G(s)\delta_{s,j}$, and the detailed calculations are given below:
$$Q_F(s)=\begin{bmatrix}E[n(s)n^T(s)]&E[n(s)f^T(s)]\\E[f(s)n^T(s)]&E[f(s)f^T(s)]\end{bmatrix}=\begin{bmatrix}E[n(s)n^T(s)]&E[n(s)(n^{[2]}(s))^T]\\E[n^{[2]}(s)n^T(s)]&*_1\end{bmatrix},\tag{19}$$
where
$$*_1=E[n(s)n^T(s)\otimes n(s)n^T(s)]-m_n^{[2]}(s)(m_n^{[2]}(s))^T+(I+\Pi)\big(AD(s)A^T\otimes E[n(s)n^T(s)]\big)(I+\Pi).\tag{20}$$
It should be pointed out that the entries of $E[n(s)(n^{[2]}(s))^T]$, $E[n(s)n^T(s)\otimes n(s)n^T(s)]$, and $E[n(s)n^T(s)]$ are known, since they are elements of the moments $m_n^{[3]}(s)$, $m_n^{[4]}(s)$, and $m_n^{[2]}(s)$, respectively.
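The matrix $\Pi$ appearing in (20) (and $\Pi_1$, $\Pi_2$ below) is a fixed permutation, known as the commutation matrix, that swaps the factors of a Kronecker product. A minimal construction under the standard definition is the following sketch (illustrative, not from the paper):

```python
import numpy as np

def commutation_matrix(m: int, n: int) -> np.ndarray:
    """Permutation K such that K @ np.kron(a, b) == np.kron(b, a)
    for any a in R^m and b in R^n."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # entry a_i * b_j sits at index i*n + j in kron(a, b)
            # and at index j*m + i in kron(b, a)
            K[j * m + i, i * n + j] = 1.0
    return K

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)
K = commutation_matrix(3, 3)
assert np.allclose(K @ np.kron(a, b), np.kron(b, a))
assert np.allclose(K, K.T)  # for m == n the commutation matrix is symmetric
```

The symmetry of $K$ in the square case is why expressions of the form $(I+\Pi)(\cdot)(I+\Pi)$ in (20) and (22) yield symmetric covariance blocks.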
For convenience, let us define
$$D_{0,2}(s)\triangleq E[x^{[2]}(s)],\quad D_{1,2}(s)\triangleq E[x(s)(x^{[2]}(s))^T],\quad D_{2,2}(s)\triangleq E[x^{[2]}(s)(x^{[2]}(s))^T].\tag{21}$$
Then, we can calculate that
$$\begin{aligned}D_{0,2}(s+1)&=E[x^{[2]}(s+1)]=\mathrm{vec}\{D(s+1)\},\\ D_{1,2}(s+1)&=E[x(s+1)(x^{[2]}(s+1))^T]=E\big[(Ax(s)+n(s))(A^{[2]}x^{[2]}(s)+f(s)+m_n^{[2]}(s))^T\big]\\&=AD_{1,2}(s)(A^{[2]})^T+E[n(s)(n^{[2]}(s))^T],\\ D_{2,2}(s+1)&=E[x^{[2]}(s+1)(x^{[2]}(s+1))^T]=E\big[(A^{[2]}x^{[2]}(s)+f(s)+m_n^{[2]}(s))(A^{[2]}x^{[2]}(s)+f(s)+m_n^{[2]}(s))^T\big]\\&=A^{[2]}D_{2,2}(s)(A^{[2]})^T+E[n^{[2]}(s)(n^{[2]}(s))^T]+(I+\Pi)\big(AD(s)A^T\otimes E[n(s)n^T(s)]\big)(I+\Pi),\end{aligned}\tag{22}$$
where $\Pi$ is the commutation matrix that guarantees $n(s)\otimes Ax(s)=\Pi\big(Ax(s)\otimes n(s)\big)$.
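The identity $E[x^{[2]}(s)]=\mathrm{vec}\{D(s)\}$ behind the first line of (22), and the propagation of a second moment under linear dynamics, can both be checked numerically (an illustrative sketch with arbitrary stand-in matrices, not the paper's system):

```python
import numpy as np

# vec(x x^T) equals the Kronecker square x ⊗ x under the
# column-stacking convention, so E[x^{[2]}] = vec(E[x x^T]) = vec(D).
x = np.array([1.0, 2.0, -3.0])
assert np.allclose(np.outer(x, x).flatten(order="F"), np.kron(x, x))

# Second-moment (Lyapunov-type) recursion D(s+1) = A D(s) A^T + Q,
# iterated to its fixed point for a stable A.
A = np.array([[0.5, 0.1],
              [0.0, 0.8]])
Q = 0.1 * np.eye(2)
D = np.eye(2)
for _ in range(300):
    D = A @ D @ A.T + Q
assert np.allclose(D, A @ D @ A.T + Q, atol=1e-10)  # converged
```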
Following a similar procedure as for $Q_F(s)$, one has
$$Q_L(s)=E[L(s)L^T(s)]=\begin{bmatrix}\Gamma_1\odot B_0D(s)B_0^T+E[V_0(s)V_0^T(s)]&*_2\\ *_2^T&*_3\end{bmatrix},\quad Q_G(s)=E[G(s)G^T(s)]=\begin{bmatrix}\mathrm{diag}\{\Gamma_1,\Lambda_1\}\odot BD(s)B^T+E[V_1(s)V_1^T(s)]&*_4\\ *_4^T&*_5\end{bmatrix},\tag{23}$$
where
$$\begin{aligned}*_2&=E[V_0(s)(V_0^{[2]}(s))^T]+E\big[(\xi(s)-\phi)B_0D_{1,2}(s)(B_0^{[2]})^T((\xi(s)-\phi)^{[2]})^T\big],\\ *_3&=(\Gamma_1\otimes\Gamma_1)\odot B_0^{[2]}D_{2,2}(s)(B_0^{[2]})^T+E[V_0^{[2]}(s)(V_0^{[2]}(s))^T]-m_{V_0}^{[2]}(s)(m_{V_0}^{[2]}(s))^T\\&\quad+E[(\xi(s)-\phi)^{[2]}]B_0^{[2]}D_{0,2}(s)(m_{V_0}^{[2]}(s))^T+m_{V_0}^{[2]}(s)D_{0,2}^T(s)(B_0^{[2]})^T(E[(\xi(s)-\phi)^{[2]}])^T\\&\quad+(I+\Pi_1)\big((\Gamma_1\odot B_0D(s)B_0^T)\otimes E[V_0(s)V_0^T(s)]\big)(I+\Pi_1),\\ *_4&=E\big[\bar H_2BD_{1,2}(s)(B^{[2]})^T(\bar H_2^{[2]})^T\big]+E[V_1(s)(V_1^{[2]}(s))^T],\\ *_5&=\big(\mathrm{diag}\{\Gamma_1,\Lambda_1\}\otimes\mathrm{diag}\{\Gamma_1,\Lambda_1\}\big)\odot B^{[2]}D_{2,2}(s)(B^{[2]})^T+E[V_1^{[2]}(s)(V_1^{[2]}(s))^T]\\&\quad+E[\bar H_2^{[2]}]B^{[2]}D_{0,2}(s)(m_{V_1}^{[2]}(s))^T+m_{V_1}^{[2]}(s)D_{0,2}^T(s)(B^{[2]})^T(E[\bar H_2^{[2]}])^T-m_{V_1}^{[2]}(s)(m_{V_1}^{[2]}(s))^T\\&\quad+(I+\Pi_2)\big((\mathrm{diag}\{\Gamma_1,\Lambda_1\}\odot BD(s)B^T)\otimes E[V_1(s)V_1^T(s)]\big)(I+\Pi_2),\end{aligned}\tag{24}$$
where
$$\Gamma_1=\mathrm{diag}\{\alpha_1(1-\alpha_1),\ldots,\alpha_{m_1}(1-\alpha_{m_1})\},\quad \Lambda_1=\mathrm{diag}\{\beta_1(1-\beta_1),\ldots,\beta_{m_2}(1-\beta_{m_2})\},\quad \bar H_2=\begin{bmatrix}\xi(s)-\phi&0\\0&\theta(s+d)-\varphi\end{bmatrix},\quad B=\begin{bmatrix}B_0\\B_1\end{bmatrix}.\tag{25}$$
It should be noted that $\Pi_1$ and $\Pi_2$ are the commutation matrices which ensure $V_0(s)\otimes H_0x(s)=\Pi_1\big(H_0x(s)\otimes V_0(s)\big)$ and $V_1(s)\otimes H_1x(s)=\Pi_2\big(H_1x(s)\otimes V_1(s)\big)$. Then, notice that the entries of $E[V_0(s)V_0^T(s)]$, $E[V_0(s)V_0^T(s)\otimes V_0(s)V_0^T(s)]$, $E[V_0^{[2]}(s)V_0^T(s)]$, $E[V_1(s)V_1^T(s)]$, $E[V_1^{[2]}(s)V_1^T(s)]$, and $E[V_1(s)V_1^T(s)\otimes V_1(s)V_1^T(s)]$ are known because they are elements of $m_{V_0}^{[2]}(s)$, $m_{V_0}^{[4]}(s)$, $m_{V_0}^{[3]}(s)$, $m_{V_1}^{[2]}(s)$, $m_{V_1}^{[3]}(s)$, and $m_{V_1}^{[4]}(s)$, respectively.
We define the quadratic state estimator $\hat{\mathbf{x}}(s+1|s)$ as the projection of $\mathbf{x}(s+1)$ onto the linear space $\mathcal{L}\big\{\{\mathbf{Y}_1(i)\}_{i=0}^{k-d},\{\mathbf{Y}_0(i)\}_{i=k-d+1}^{k}\big\}$. In order to derive the projection, we introduce the innovation sequences
$$w(s,0)=\mathbf{Y}_0(s)-\hat{\mathbf{Y}}_0(s,0)=\mathbf{H}_0\tilde{\mathbf{x}}(s,0)+L(s),\tag{26}$$
$$w(s,1)=\mathbf{Y}_1(s)-\hat{\mathbf{Y}}_1(s,1)=\mathbf{H}_1\tilde{\mathbf{x}}(s,1)+G(s),\tag{27}$$
where $\hat{\mathbf{Y}}_1(s,1)$ is the projection of $\mathbf{Y}_1(s)$ onto the linear space $\mathcal{L}\{\mathbf{Y}_1(i)\}_{i=0}^{s-1}$ and $\hat{\mathbf{Y}}_0(s,0)$ is the projection of $\mathbf{Y}_0(s)$ onto the linear space $\mathcal{L}\big\{\{\mathbf{Y}_1(i)\}_{i=0}^{k-d},\{\mathbf{Y}_0(i)\}_{i=k-d+1}^{s-1}\big\}$. Then, we define the estimation errors $\tilde{\mathbf{x}}(s,0)=\mathbf{x}(s)-\hat{\mathbf{x}}(s,0)$ and $\tilde{\mathbf{x}}(s,1)=\mathbf{x}(s)-\hat{\mathbf{x}}(s,1)$, where $\hat{\mathbf{x}}(s,0)$ and $\hat{\mathbf{x}}(s,1)$ are defined analogously to $\hat{\mathbf{Y}}_0(s,0)$ and $\hat{\mathbf{Y}}_1(s,1)$. In addition, $\big\{\{w(i,1)\}_{i=0}^{k-d},\{w(i,0)\}_{i=k-d+1}^{k}\big\}$ is a white noise sequence and spans the same linear space as $\mathcal{L}\big\{\{\mathbf{Y}_1(i)\}_{i=0}^{k-d},\{\mathbf{Y}_0(i)\}_{i=k-d+1}^{k}\big\}$. Next, we derive the covariances $R_w(s,0)$ and $R_w(s,1)$ of the innovation sequences. For convenience, the following definitions are given:
$$P_0(s)\triangleq E[\tilde{\mathbf{x}}(s,0)\tilde{\mathbf{x}}^T(s,0)],\quad P_1(s)\triangleq E[\tilde{\mathbf{x}}(s,1)\tilde{\mathbf{x}}^T(s,1)],\quad \mathbf{D}(s)\triangleq E[\mathbf{x}(s)\mathbf{x}^T(s)].\tag{28}$$
Then, the covariance matrices of the innovation sequences (26) and (27) are
$$R_w(s,0)=E[w(s,0)w^T(s,0)]=\mathbf{H}_0P_0(s)\mathbf{H}_0^T+Q_L(s),\qquad R_w(s,1)=E[w(s,1)w^T(s,1)]=\mathbf{H}_1P_1(s)\mathbf{H}_1^T+Q_G(s).\tag{29}$$
Finally, the covariance matrices $P_0(s+1)$ and $P_1(s+1)$ can be derived as
$$P_1(s+1)=\mathbf{A}P_1(s)\mathbf{A}^T+Q_F(s)-\mathbf{A}P_1(s)\mathbf{H}_1^TR_w^{-1}(s,1)\mathbf{H}_1P_1(s)\mathbf{A}^T,\tag{30}$$
$$P_1(0)=\mathbf{D}(0),\tag{31}$$
$$P_0(s+1)=\mathbf{A}P_0(s)\mathbf{A}^T+Q_F(s)-\mathbf{A}P_0(s)\mathbf{H}_0^TR_w^{-1}(s,0)\mathbf{H}_0P_0(s)\mathbf{A}^T,\tag{32}$$
$$P_0(k-d+1)=P_1(k-d+1),\tag{33}$$
and $\mathbf{D}(s)$ can be calculated from the Lyapunov equation
$$\mathbf{D}(s+1)=\mathbf{A}\mathbf{D}(s)\mathbf{A}^T+Q_F(s).\tag{34}$$
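The Riccati recursions (30) and (32) share the same one-step map, with the innovation covariance of (29) appearing in the gain. A minimal NumPy sketch of this map, using arbitrary stand-in matrices rather than the augmented matrices of this section, is:

```python
import numpy as np

def riccati_step(P, A, H, Q, R):
    """One covariance step of the form
    P+ = A P A^T + Q - A P H^T (H P H^T + R)^{-1} H P A^T."""
    S = H @ P @ H.T + R                  # innovation covariance, cf. (29)
    K = A @ P @ H.T @ np.linalg.inv(S)   # one-step predictor gain
    return A @ P @ A.T + Q - K @ H @ P @ A.T

A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
H = np.array([[1.0, 0.0]])
Q = 0.05 * np.eye(2)
R = np.array([[0.2]])
P = np.eye(2)
for _ in range(300):
    P = riccati_step(P, A, H, Q, R)
# P converges to the stabilizing solution of the discrete-time Riccati equation
assert np.allclose(P, riccati_step(P, A, H, Q, R), atol=1e-6)
```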
Theorem 1. For the given systems (15)–(17), the quadratic filter $\hat{\mathbf{x}}(k|k)$ can be derived as
$$\hat{\mathbf{x}}(k|k)=\hat{\mathbf{x}}(k,0)+P_0(k)\mathbf{H}_0^TR_w^{-1}(k,0)\big(\mathbf{Y}_0(k)-\mathbf{H}_0\hat{\mathbf{x}}(k,0)-E_0U_0(k)\big),\tag{35}$$
where the estimator $\hat{\mathbf{x}}(s,0)$ is computed by
$$\hat{\mathbf{x}}(s+1,0)=\mathbf{A}\hat{\mathbf{x}}(s,0)+\mathbf{C}U_0(s)+\mathbf{A}P_0(s)\mathbf{H}_0^TR_w^{-1}(s,0)\big(\mathbf{Y}_0(s)-\mathbf{H}_0\hat{\mathbf{x}}(s,0)-E_0U_0(s)\big),\tag{36}$$
with the initial value $\hat{\mathbf{x}}(k-d+1,0)=\hat{\mathbf{x}}(k-d+1,1)$, and $\hat{\mathbf{x}}(s,1)$ is obtained by
$$\hat{\mathbf{x}}(s+1,1)=\mathbf{A}\hat{\mathbf{x}}(s,1)+\mathbf{C}U_0(s)+\mathbf{A}P_1(s)\mathbf{H}_1^TR_w^{-1}(s,1)\big(\mathbf{Y}_1(s)-\mathbf{H}_1\hat{\mathbf{x}}(s,1)-E_1U_1(s)\big).\tag{37}$$
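To make the structure of Theorem 1 concrete, the following sketch implements the generic one-step predictor of (36)/(37) and the filtering update of (35) for a delay-free toy system. All matrices are illustrative stand-ins, and the deterministic inputs $\mathbf{C}U_0(s)$ and $E_0U_0(s)$ are taken to be zero here; this is not the paper's augmented system.

```python
import numpy as np

def predictor_step(x_hat, P, y, A, H, Q, R):
    """One-step predictor in the style of (36)/(37):
    x_hat(s+1) = A x_hat(s) + A P(s) H^T Rw^{-1}(s) (y(s) - H x_hat(s))."""
    Rw = H @ P @ H.T + R                     # innovation covariance, cf. (29)
    K = A @ P @ H.T @ np.linalg.inv(Rw)
    x_next = A @ x_hat + K @ (y - H @ x_hat)
    P_next = A @ P @ A.T + Q - K @ H @ P @ A.T
    return x_next, P_next

def filter_update(x_pred, P, y, H, R):
    """Filtering update in the style of (35):
    x_hat(k|k) = x_hat(k) + P(k) H^T Rw^{-1}(k) (y(k) - H x_hat(k))."""
    Rw = H @ P @ H.T + R
    return x_pred + P @ H.T @ np.linalg.inv(Rw) @ (y - H @ x_pred)

# Toy run on a two-state system with a scalar observation.
rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1], [0.0, 0.7]])
H = np.array([[1.0, 0.0]])
Q, R = 0.05 * np.eye(2), np.array([[0.2]])
x, x_hat, P = rng.standard_normal(2), np.zeros(2), np.eye(2)
for _ in range(100):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x + rng.multivariate_normal(np.zeros(1), R)
    x_filt = filter_update(x_hat, P, y, H, R)            # filtered estimate
    x_hat, P = predictor_step(x_hat, P, y, A, H, Q, R)   # one-step prediction
assert np.isfinite(x_filt).all()
```

In the actual algorithm of Theorem 1, (37) is run over $s=0,\ldots,k-d$ with the reorganized measurements $\mathbf{Y}_1(s)$, its output initializes (36) at $s=k-d+1$, and (35) produces the final filtered estimate at instant $k$.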
Proof. According to (15), we can directly obtain (38) by the projection theorem:
$$\hat{\mathbf{x}}(s+1,0)=\mathbf{A}\hat{\mathbf{x}}(s,0)+E[\mathbf{x}(s+1)w^T(s,0)]R_w^{-1}(s,0)w(s,0)=\mathbf{A}\hat{\mathbf{x}}(s,0)+\mathbf{C}U_0(s)+\mathbf{A}P_0(s)\mathbf{H}_0^TR_w^{-1}(s,0)\big(\mathbf{Y}_0(s)-\mathbf{H}_0\hat{\mathbf{x}}(s,0)-E_0U_0(s)\big).\tag{38}$$
Then, we obtain the estimation error $\tilde{\mathbf{x}}(s+1,0)$ by subtracting (38) from (15):
$$\tilde{\mathbf{x}}(s+1,0)=\mathbf{A}\tilde{\mathbf{x}}(s,0)+F(s)-\mathbf{A}P_0(s)\mathbf{H}_0^TR_w^{-1}(s,0)w(s,0).\tag{39}$$
Therefore, the prediction error covariance $P_0(s+1)$ can be calculated from (39):
$$P_0(s+1)=E[\tilde{\mathbf{x}}(s+1,0)\tilde{\mathbf{x}}^T(s+1,0)]=\mathbf{A}P_0(s)\mathbf{A}^T+Q_F(s)-\mathbf{A}P_0(s)\mathbf{H}_0^TR_w^{-1}(s,0)\mathbf{H}_0P_0(s)\mathbf{A}^T.\tag{40}$$
Similar to formula (38), we can deduce (41) as follows:
$$\hat{\mathbf{x}}(s+1,1)=\mathbf{A}\hat{\mathbf{x}}(s,1)+E[\mathbf{x}(s+1)w^T(s,1)]R_w^{-1}(s,1)w(s,1)=\mathbf{A}\hat{\mathbf{x}}(s,1)+\mathbf{C}U_0(s)+\mathbf{A}P_1(s)\mathbf{H}_1^TR_w^{-1}(s,1)\big(\mathbf{Y}_1(s)-\mathbf{H}_1\hat{\mathbf{x}}(s,1)-E_1U_1(s)\big).\tag{41}$$
By combining (15) and (41), one has
$$\tilde{\mathbf{x}}(s+1,1)=\mathbf{A}\tilde{\mathbf{x}}(s,1)+F(s)-\mathbf{A}P_1(s)\mathbf{H}_1^TR_w^{-1}(s,1)w(s,1).\tag{42}$$
As such, we get the Riccati equation
$$P_1(s+1)=E[\tilde{\mathbf{x}}(s+1,1)\tilde{\mathbf{x}}^T(s+1,1)]=\mathbf{A}P_1(s)\mathbf{A}^T+Q_F(s)-\mathbf{A}P_1(s)\mathbf{H}_1^TR_w^{-1}(s,1)\mathbf{H}_1P_1(s)\mathbf{A}^T.\tag{43}$$
By the definitions of $\hat{\mathbf{x}}(k-d+1,0)$ and $\hat{\mathbf{x}}(k-d+1,1)$, we can conclude that $\hat{\mathbf{x}}(k-d+1,0)=\hat{\mathbf{x}}(k-d+1,1)$, and hence the corresponding error covariances coincide, which proves (33).
The proof is finished.