The least-squares quadratic estimation problem of signals from observations coming from multiple sensors is addressed when there is a nonzero probability that each observation does not contain the signal to be estimated. We assume that, at each sensor, the uncertainty about the signal being present or missing in the observation is modelled by correlated Bernoulli random variables, whose probabilities are not necessarily the same for all the sensors. A recursive algorithm is derived without requiring the knowledge of the signal state-space model but only the moments (up to the fourth-order ones) of the signal and observation noise, the uncertainty probabilities, and the correlation between the variables modelling the uncertainty. The estimators require the autocovariance and cross-covariance functions of the signal and its second-order powers in a semidegenerate kernel form. The recursive quadratic filtering algorithm is derived from a linear estimation algorithm for a suitably defined augmented system.

1. Introduction

In many real systems the signal to be estimated can be randomly missing in the observations due, for example, to intermittent failures in the observation mechanism, fading phenomena in propagation channels, target tracking, accidental loss of some measurements, or data inaccessibility during certain times. Usually, these situations are characterized by including in the observation equation not only an additive noise, but also a multiplicative noise consisting of a sequence of Bernoulli random variables taking the value one if the observation is state plus noise, or the value zero if it is only noise (uncertain observations). Since these models are appropriate in many practical situations with random failures in the transmission, the estimation problem in systems with uncertain observations has been widely studied in the literature under different hypotheses and approaches (see e.g., [1, 2] and references therein).

On the other hand, in some practical situations the state-space model of the signal is not available and another type of information must be processed for the estimation. In recent years, the estimation problem from uncertain observations has been investigated using covariance information, and algorithms with a simpler structure than those obtained when the state-space model is known have been derived (see, e.g., [3]).

Recently, the least-squares linear estimation problem using uncertain observations transmitted by multiple sensors, whose statistical properties are assumed not to be the same, has been studied by several authors under different approaches and hypotheses on the processes (see, e.g., [4, 5] for a state-space approach and [6, 7] for a covariance approach).

In this paper, using covariance information, recursive algorithms for the least-squares quadratic filtering problem from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics are proposed. This paper extends the results in [6] in two directions: on the one hand, correlation at times k and k+r between the random variables modelling the uncertainty in the observations is considered and, on the other, the quadratic estimation problem is addressed. Quadratic estimation is also new with respect to Hermoso-Carazo et al. [7], which also deals with observations whose uncertainty is modelled by Bernoulli variables correlated at times k and k+r with arbitrary r, but coming from a single sensor. Furthermore, the current paper differs from [5] in the correlation model considered and in the information used to derive the algorithms (state-space model in [5], covariance information in the current paper).

To address the quadratic estimation problem, augmented signal and observation vectors are introduced by assembling the original vectors with their second-order powers defined by the Kronecker product, thus obtaining a new augmented system and reducing the quadratic estimation problem in the original system to the linear estimation problem in the augmented system. By using an innovation approach, the linear estimator of the augmented signal based on the augmented observations is obtained, thus providing the required quadratic estimator.

The performance of the proposed filtering algorithms is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times k and k+r, considering several values of r. The linear and quadratic estimation error covariance matrices are compared, showing the superiority of the quadratic estimators over the linear ones.

2. Observation Model and Hypotheses

The problem at hand is to determine the least-squares (LS) quadratic estimator of an n-dimensional discrete signal, z_k, from noisy measurements coming from multiple sensors, each of which may fail to contain the signal with a different probability. In this section, we present the observation model and the hypotheses about the signal and noise processes involved.

Consider m scalar sensors whose measurements at each sampling time k, denoted by y_k^i, may either contain the signal to be estimated, z_k, or be only noise, v_k^i; the uncertainty about the signal being present or missing in each observation is modelled by Bernoulli variables, γ_k^i. The observation model is thus described as follows:

y_k^i = γ_k^i H_k^i z_k + v_k^i,  k ≥ 1,  i = 1, …, m.    (2.1)

If γ_k^i = 1, then y_k^i = H_k^i z_k + v_k^i and the measurement coming from the ith sensor contains the signal; otherwise, if γ_k^i = 0, then y_k^i = v_k^i, which means that such a measurement is only noise. Therefore, the variables {γ_k^i; k ≥ 1} model the uncertainty of the observations coming from the ith sensor.

To simplify the notation, the observation equation (2.1) is rewritten in the compact form

y_k = Υ_k H_k z_k + v_k,  k ≥ 1,

where y_k = (y_k^1, …, y_k^m)^T, H_k = ((H_k^1)^T, …, (H_k^m)^T)^T, Υ_k = Diag(γ_k^1, …, γ_k^m), and v_k = (v_k^1, …, v_k^m)^T.

It is known that if the signal z_k and the observations y_1, …, y_k have finite second-order moments, the LS linear filter of z_k is the orthogonal projection of z_k onto the space of n-dimensional random variables obtained as linear transformations of y_1, …, y_k. So, defining the random vectors ỹ_i = y_i ⊗ y_i (⊗ denotes the Kronecker product [8]) and assuming E[ỹ_i^T ỹ_i] < ∞, the LS quadratic estimator of z_k based on the observations up to the sampling time k is the orthogonal projection of z_k onto the space of n-dimensional linear transformations of y_1, …, y_k and their second-order powers ỹ_1, …, ỹ_k. To guarantee the existence of the second-order moments of the vectors ỹ_i, the pertinent assumptions about the processes in (2.1) are now stated.
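As a concrete illustration of these second-order powers, the following sketch (our code, with arbitrary numerical values for a hypothetical two-sensor measurement) builds the Kronecker square of an observation vector, so that linear operations on the augmented vector are quadratic in the original observations:

```python
import numpy as np

# Hypothetical observation vector from m = 2 sensors (values are arbitrary).
y = np.array([1.5, -0.5])

# Second-order powers: the Kronecker square y ⊗ y stacks every product
# y_i * y_j, so linear estimation based on (y, y ⊗ y) is quadratic in y.
y2 = np.kron(y, y)  # [y1*y1, y1*y2, y2*y1, y2*y2]

# Augmented observation vector used for the quadratic estimation problem:
Y_aug = np.concatenate([y, y2])  # dimension m + m^2 = 2 + 4 = 6
```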

(H1) The n×1 signal process {z_k; k ≥ 1} has zero mean, and its autocovariance function, K^z_{k,s}, as well as the autocovariance function of its second-order powers z̃_k = z_k ⊗ z_k, K^{z̃}_{k,s}, is expressed in a semidegenerate kernel form:

K^z_{k,s} = A_k B_s^T,  s ≤ k;    K^{z̃}_{k,s} = a_k b_s^T,  s ≤ k,

where the n×M matrix functions A, B and the n²×L matrix functions a, b are known. Moreover, it is assumed that the cross-covariance function of the signal and its second-order powers, K^{z z̃}_{k,s}, can also be expressed as

K^{z z̃}_{k,s} = α_k β_s^T,  s ≤ k;    K^{z z̃}_{k,s} = ε_k δ_s^T,  k ≤ s,

where α, β, ε, and δ are n×N, n²×N, n×P, and n²×P known matrix functions, respectively.

(H2) For i = 1, …, m, the sensor additive noises {v_k^i; k ≥ 1} are zero-mean white processes whose moments up to the fourth order are known; we denote R_k = Cov[v_k], R_k^{(3)} = Cov[v_k, v_k ⊗ v_k], and R_k^{(4)} = Cov[v_k ⊗ v_k].

(H3) For i = 1, …, m, the noises {γ_k^i; k ≥ 1} are sequences of Bernoulli random variables with P[γ_k^i = 1] = p_k^i; the variables γ_k^i and γ_s^i are independent for |k − s| ≥ 2, and the covariances Cov[γ_k^i, γ_{k+1}^i] are assumed to be known.

(H4) The signal process {z_k; k ≥ 1} and the noise processes {γ_k; k ≥ 1} and {v_k; k ≥ 1}, where γ_k = (γ_k^1, …, γ_k^m)^T, are mutually independent.

3. Augmented System

Given the observation model (2.1) under assumptions (H1)–(H4), the problem is to find the LS quadratic filter of the signal z_k, which will be denoted ẑ_{k/k}^q. The technique used to obtain this estimator consists of augmenting the signal and the observation by assembling the original vectors and their second-order powers,

𝒵_k = (z_k^T, z̃_k^T)^T,  𝒴_k = (y_k^T, ỹ_k^T)^T,

and deriving the estimator ẑ_{k/k}^q as the vector constituted by the first n entries of the LS linear filter of 𝒵_k based on 𝒴_1, …, 𝒴_k.

To obtain this linear estimator, the first- and second-order statistical properties of the augmented vectors 𝒵k and 𝒴k are now analyzed.

3.1. Properties of the Augmented Vectors

By using the Kronecker product properties and denoting

D_k^γ = Diag(Υ_k, Υ_k ⊗ Υ_k),  ℋ_k = Diag(H_k, H_k ⊗ H_k),
𝒱_k = (v_k^T, [(I_{m²} + K_{m²})((Υ_k H_k z_k) ⊗ v_k) + v_k ⊗ v_k]^T)^T

(I_{m²} is the m²×m² identity matrix and K_{m²} is the m²×m² commutation matrix [8]), the following model with uncertain observations is obtained:

𝒴_k = D_k^γ ℋ_k 𝒵_k + 𝒱_k,  k ≥ 1.

It should be noted that the signal, 𝒵_k, and the noise, 𝒱_k, in this new model have nonzero mean. Nevertheless, this handicap can be overcome by considering the centered augmented vectors Z_k = 𝒵_k − E[𝒵_k] and Y_k = 𝒴_k − E[𝒴_k] which, taking into account that E[D_k^γ ℋ_k 𝒵_k] = E[D_k^γ] ℋ_k E[𝒵_k], satisfy

Y_k = D_k^γ ℋ_k Z_k + V_k,  k ≥ 1,    (3.3)

where

V_k = (v_k^T, [(I_{m²} + K_{m²})((Υ_k H_k z_k) ⊗ v_k) + v_k ⊗ v_k − vec(R_k)]^T)^T + (D_k^γ − D_k^p) ℋ_k E[𝒵_k],

with D_k^p = E[D_k^γ] and vec the operator that vectorizes a matrix [8].
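The commutation matrix K_{m²} appearing in 𝒱_k can be built explicitly. The sketch below (our code; the function name is ours) constructs it and checks the defining property K(a ⊗ b) = b ⊗ a, which is what makes I_{m²} + K_{m²} act as a symmetrizer of Kronecker products:

```python
import numpy as np

def commutation_matrix(m, n):
    """K of size mn x mn with K @ vec(A) = vec(A.T) for A of shape (m, n),
    where vec stacks columns (Fortran order)."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # vec(A)[j*m + i] = A[i, j] must land at vec(A.T)[i*n + j]
            K[i * n + j, j * m + i] = 1.0
    return K

m = 2
K = commutation_matrix(m, m)
a = np.array([1.0, 2.0])
b = np.array([3.0, 5.0])

swap = K @ np.kron(a, b)                    # equals b ⊗ a
sym = (np.eye(m * m) + K) @ np.kron(a, b)   # a ⊗ b + b ⊗ a
```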

Note that the LS linear estimator of 𝒵_k based on 𝒴_1, …, 𝒴_k is obtained from the LS linear estimator of Z_k based on Y_1, …, Y_k just by adding the mean vector E[𝒵_k] = (0_{n×1}^T, (vec(A_k B_k^T))^T)^T. Hence, since the first n components of E[𝒵_k] are zero, the required quadratic estimator ẑ_{k/k}^q is just the vector constituted by the first n entries of the LS linear filter of Z_k. Henceforth, these centered vectors will be referred to as the augmented signal and observation vectors, respectively.

The signal and noise processes {Z_k; k ≥ 1} and {V_k; k ≥ 1} involved in model (3.3) are zero mean. In the following propositions, the second-order statistical properties of these processes are established.

Proposition 3.1.

If the signal process {z_k; k ≥ 1} satisfies (H1), the autocovariance function of the augmented signal process {Z_k; k ≥ 1} can be expressed in a semidegenerate kernel form, namely,

K^Z_{k,s} = E[Z_k Z_s^T] = 𝒜_k ℬ_s^T,  s ≤ k,

where

𝒜_k = ( A_k       α_k       0_{n×P}    0_{n×L}
        0_{n²×M}  0_{n²×N}  δ_k        a_k ),

ℬ_k = ( B_k       0_{n×N}   ε_k        0_{n×L}
        0_{n²×M}  β_k       0_{n²×P}   b_k ).

Proof.

It follows immediately from hypothesis (H1) on the covariance functions of the signal and its second-order powers.

Proposition 3.2.

Under (H1)–(H4), the noise {V_k; k ≥ 1} is a sequence of random vectors with covariance matrices R^V_{k,s} = E[V_k V_s^T] given by

R^V_{k,s} = R̄_k + Cov[Γ_k] ∘ (ℋ_k E[𝒵_k] E[𝒵_k^T] ℋ_k^T),  s = k,
R^V_{k,s} = Cov[Γ_k, Γ_{k+1}] ∘ (ℋ_k E[𝒵_k] E[𝒵_{k+1}^T] ℋ_{k+1}^T),  s = k + 1,
R^V_{k,s} = 0,  |k − s| ≠ 0, 1,

where ∘ denotes the Hadamard product, Γ_k = (γ_k^T, (γ_k ⊗ γ_k)^T)^T, and

R̄_k = ( R_k         R_k^{(3)}
        R_k^{(3)T}   R_k^{22} ),

with

R_k^{22} = (I_{m²} + K_{m²}) [ (E[γ_k γ_k^T] ∘ (H_k A_k B_k^T H_k^T)) ⊗ R_k ] (I_{m²} + K_{m²}) + R_k^{(4)}.

Moreover, {V_k; k ≥ 1} is uncorrelated with the processes {Z_k; k ≥ 1} and {D_k^γ ℋ_k Z_k; k ≥ 1}.

Proof.

It is obvious that E[V_k] = 0 for all k ≥ 1. On the other hand, since V_k = 𝒱_k − E[𝒱_k] + (D_k^γ − D_k^p) ℋ_k E[𝒵_k], and {z_k; k ≥ 1}, {v_k; k ≥ 1}, and {γ_k; k ≥ 1} are mutually independent, it is easy to see that E[(𝒱_k − E[𝒱_k])((D_s^γ − D_s^p) ℋ_s E[𝒵_s])^T] = 0 for all k, s, and hence

E[V_k V_s^T] = Cov[𝒱_k, 𝒱_s] + E[((D_k^γ − D_k^p) ℋ_k E[𝒵_k])((D_s^γ − D_s^p) ℋ_s E[𝒵_s])^T].

Firstly, we prove that

Cov[𝒱_k, 𝒱_s] = ( R^{11}_{k,s}   R^{12}_{k,s}        ( R_k         R_k^{(3)}
                  R^{12T}_{k,s}  R^{22}_{k,s} )  =     R_k^{(3)T}   R_k^{22} ) δ_{k,s},

where δ_{k,s} denotes the Kronecker delta function.

Indeed, since {v_k; k ≥ 1} is a zero-mean white sequence with covariances R_k for all k ≥ 1, it is clear that R^{11}_{k,s} = R_k δ_{k,s}. Moreover, from the mutual independence, the Kronecker and Hadamard product properties lead to

R^{12}_{k,s} = (((Υ_s^p H_s E[z_s])^T ⊗ R_k) δ_{k,s})(I_{m²} + K_{m²}) + R_k^{(3)} δ_{k,s} = R_k^{(3)} δ_{k,s},

R^{22}_{k,s} = (I_{m²} + K_{m²}) [ (E[γ_k γ_s^T] ∘ (H_k A_k B_s^T H_s^T)) ⊗ R_k δ_{k,s} ] (I_{m²} + K_{m²}) + R_k^{(4)} δ_{k,s} = R_k^{22} δ_{k,s}.

On the other hand,

E[((D_k^γ − D_k^p) ℋ_k E[𝒵_k])((D_s^γ − D_s^p) ℋ_s E[𝒵_s])^T] = Cov[Γ_k, Γ_s] ∘ (ℋ_k E[𝒵_k] E[𝒵_s^T] ℋ_s^T),

and since Cov[Γ_k, Γ_s] = 0 for |k − s| ≥ 2, the covariance matrices R^V_{k,s} are obtained.

The lack of correlation between {V_k; k ≥ 1} and the processes {Z_k; k ≥ 1} and {D_k^γ ℋ_k Z_k; k ≥ 1} is derived in a similar way, taking into account that {z_k; k ≥ 1}, {v_k; k ≥ 1}, and {γ_k; k ≥ 1} are mutually independent and using the Kronecker and Hadamard product properties.

4. Recursive Quadratic Filtering Algorithm

As indicated above, to obtain the LS quadratic estimator of the signal z_k based on observations (2.1), we consider the LS linear estimator of the augmented signal, Z_k, based on the augmented observations (3.3). As is known, the LS linear filter of Z_k is the orthogonal projection of the vector Z_k onto ℒ(Y_1, …, Y_k), the linear space spanned by {Y_1, …, Y_k}; so the Orthogonal Projection Lemma (OPL) states that the estimator, Ẑ_{k/k}, is the only linear combination of Y_1, …, Y_k satisfying the orthogonality property

E[(Z_k − Ẑ_{k/k}) Y_s^T] = 0,  s = 1, …, k.

Since the observations are generally nonorthogonal vectors, we use an innovation approach, which consists of transforming the observation process {Y_k; k ≥ 1} into an equivalent process of orthogonal vectors {ν_k; k ≥ 1} (the innovation process), equivalent in the sense that each set {ν_1, …, ν_k} spans the same linear subspace as {Y_1, …, Y_k}; that is, ℒ(ν_1, …, ν_k) = ℒ(Y_1, …, Y_k).

The innovation process is constructed by the Gram–Schmidt orthogonalization procedure, using an inductive reasoning. Starting with ν_1 = Y_1, the projection of the next observation, Y_2, onto ℒ(ν_1) is given by Ŷ_{2/1} = E[Y_2 ν_1^T](E[ν_1 ν_1^T])^{−1} ν_1; then the vector ν_2 = Y_2 − Ŷ_{2/1} is orthogonal to ν_1, and clearly ℒ(ν_1, ν_2) = ℒ(Y_1, Y_2). Let {ν_1, …, ν_{k−1}} be the set of orthogonal vectors satisfying ℒ(ν_1, …, ν_{k−1}) = ℒ(Y_1, …, Y_{k−1}); given an additional observation Y_k, we project it onto ℒ(ν_1, …, ν_{k−1}), and the orthogonality allows us to find this projection by separately projecting onto each of the previous orthogonal vectors; that is,

Ŷ_{k/k−1} = Σ_{j=1}^{k−1} E[Y_k ν_j^T](E[ν_j ν_j^T])^{−1} ν_j.

Then the next vector, ν_k = Y_k − Ŷ_{k/k−1}, is orthogonal to the previous ones and ℒ(ν_1, …, ν_k) = ℒ(Y_1, …, Y_k).

Note that the projection Ŷ_{k/k−1} is the part of the observation Y_k that is determined by knowledge of {Y_1, …, Y_{k−1}}; thus the remainder vector ν_k = Y_k − Ŷ_{k/k−1} can be regarded as the “new information” or the “innovation” provided by Y_k, and the process {ν_k; k ≥ 1} as the innovation process associated with {Y_k; k ≥ 1}. The causal and causally invertible linear relation existing between the observation and innovation processes makes the innovation process unique.

Next, taking into account that the innovations constitute a white process, we derive a general expression for the LS linear estimator of the augmented signal, Z_k, based on {Y_1, …, Y_L}, denoted by Ẑ_{k/L}. Replacing {Y_1, …, Y_L} by the equivalent set of orthogonal vectors {ν_1, …, ν_L}, the signal estimator is

Ẑ_{k/L} = Σ_{j=1}^{L} h_{k,j} ν_j,

where the impulse-response function h_{k,j}, j = 1, …, L, is calculated from the orthogonality property E[(Z_k − Ẑ_{k/L}) ν_s^T] = 0, s ≤ L, which leads to the Wiener–Hopf equation

E[Z_k ν_s^T] = Σ_{j=1}^{L} h_{k,j} E[ν_j ν_s^T],  s ≤ L.

Due to the whiteness of the innovation process, E[ν_j ν_s^T] = 0 for j ≠ s, and the Wiener–Hopf equation reduces to E[Z_k ν_s^T] = h_{k,s} E[ν_s ν_s^T], s ≤ L; consequently, h_{k,s} = E[Z_k ν_s^T](E[ν_s ν_s^T])^{−1}, s ≤ L, and the following general expression for the LS linear filter of the augmented signal is obtained:

Ẑ_{k/L} = Σ_{i=1}^{L} S_{k,i} Π_i^{−1} ν_i,    (4.8)

where S_{k,i} = E[Z_k ν_i^T] and Π_i = E[ν_i ν_i^T].
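The equivalence between the innovation expansion and the direct orthogonal projection can be checked numerically. The sketch below is an illustrative example of ours (a scalar quantity estimated from three correlated scalar observations with arbitrary second-order moments, not the paper's model): it builds the innovations by Gram–Schmidt from exact moments and verifies that the resulting gain solves the normal equations.

```python
import numpy as np

# Illustrative exact moments: scalar Z estimated from three correlated
# scalar observations Y1, Y2, Y3 (values arbitrary, Sigma_Y positive definite).
Sigma_Y = np.array([[2.0, 0.8, 0.3],
                    [0.8, 1.5, 0.6],
                    [0.3, 0.6, 1.2]])   # E[Y Y^T]
c = np.array([0.9, 0.7, 0.4])           # E[Z Y^T]

# Direct LS projection: gain h solving the normal equations Sigma_Y h = c.
h_direct = np.linalg.solve(Sigma_Y, c)

# Gram-Schmidt innovations nu_k = L[k] @ Y, computed from moments alone.
L = np.zeros((3, 3))
for k in range(3):
    row = np.zeros(3)
    row[k] = 1.0                        # start from Y_k
    for j in range(k):
        Pi_j = L[j] @ Sigma_Y @ L[j]    # Pi_j = E[nu_j^2]
        T_kj = row @ Sigma_Y @ L[j]     # E[. nu_j] (modified Gram-Schmidt)
        row -= (T_kj / Pi_j) * L[j]
    L[k] = row

# Innovation expansion  Z_hat = sum_k S_k Pi_k^{-1} nu_k, mapped back to Y.
h_innov = sum((c @ L[k]) / (L[k] @ Sigma_Y @ L[k]) * L[k] for k in range(3))
```

Both gains coincide because {ν_1, ν_2, ν_3} spans the same subspace as {Y_1, Y_2, Y_3} and the LS projection onto that subspace is unique.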

Using the properties of the processes involved in (3.3), as established in Propositions 3.1 and 3.2, and expression (4.8) for the filter, we derive a recursive algorithm for the linear filtering estimators, Ẑk/k, of the augmented signal Zk. As indicated above, the first n entries of these estimators provide the required quadratic filter of the original signal zk.

Theorem 4.1.

The quadratic filter, ẑ_{k/k}^q, of the original signal z_k is given by

ẑ_{k/k}^q = Θ Ẑ_{k/k},  k ≥ 1,

where Θ is the operator which extracts the first n entries of Ẑ_{k/k}, the linear filter of the augmented signal Z_k, which is obtained by

Ẑ_{k/k} = 𝒜_k O_k,  k ≥ 1,    (4.10)

where the vectors O_k are recursively calculated from

O_k = O_{k−1} + J_k Π_k^{−1} ν_k,  k ≥ 1;  O_0 = 0.    (4.11)

The innovation, ν_k, satisfies

ν_k = Y_k − D_k^p ℋ_k 𝒜_k O_{k−1} − Ξ_{k,k−1} ν_{k−1},  k ≥ 2;  ν_1 = Y_1,    (4.12)

with

Ξ_{k,k−1} = (Cov[Γ_k, Γ_{k−1}] ∘ (ℋ_k 𝒜_k ℬ_{k−1}^T ℋ_{k−1}^T) + R^V_{k,k−1}) Π_{k−1}^{−1},  k ≥ 2,    (4.13)

and Π_k, the covariance matrix of the innovation, verifies

Π_k = E[Γ_k Γ_k^T] ∘ (ℋ_k 𝒜_k ℬ_k^T ℋ_k^T) − D_k^p ℋ_k 𝒜_k r_{k−1} 𝒜_k^T ℋ_k^T D_k^p − Ξ_{k,k−1} Π_{k−1} Ξ_{k,k−1}^T − D_k^p ℋ_k 𝒜_k J_{k−1} Ξ_{k,k−1}^T − Ξ_{k,k−1} J_{k−1}^T 𝒜_k^T ℋ_k^T D_k^p + R^V_{k,k},  k ≥ 2,
Π_1 = E[Γ_1 Γ_1^T] ∘ (ℋ_1 𝒜_1 ℬ_1^T ℋ_1^T) + R^V_{1,1}.    (4.14)

The matrix function J is given by

J_k = [ℬ_k^T − r_{k−1} 𝒜_k^T] ℋ_k^T D_k^p − J_{k−1} Ξ_{k,k−1}^T,  k ≥ 2;  J_1 = ℬ_1^T ℋ_1^T D_1^p,    (4.15)

where r_k is recursively obtained from

r_k = r_{k−1} + J_k Π_k^{−1} J_k^T,  k ≥ 1;  r_0 = 0.    (4.16)
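To make the recursive structure concrete, the following sketch implements our own reduction of the algorithm to the scalar, single-sensor, linear special case in which the uncertainty variables are uncorrelated in time (so Ξ_{k,k−1} = 0), H_k = 1, D_k^p = p, E[γ_k²] = p, and R^V_{k,k} = R. The kernel factors A_k, B_k and the numerical values are borrowed from the simulation example of Section 6; this is an illustrative sketch under those assumptions, not the full augmented algorithm.

```python
# Scalar, single-sensor, linear special case of the recursive filter
# (uncorrelated uncertainty: Xi_{k,k-1} = 0; H_k = 1, D_k^p = p).
p = 0.8                                # P[gamma_k = 1] (illustrative)
R = 1.0                                # additive-noise variance (illustrative)
N = 50                                 # number of iterations

A = lambda k: 1.025641 * 0.95**k       # semidegenerate kernel: K_{k,s} = A_k B_s
B = lambda k: 0.95**(-k)

r = 0.0                                # r_0 = 0
Pi_hist, Sigma = [], []
for k in range(1, N + 1):
    Pi = p * A(k) * B(k) - p**2 * A(k) * r * A(k) + R  # innovation variance (Xi = 0)
    J = p * (B(k) - r * A(k))                          # gain function (Xi = 0)
    r = r + J**2 / Pi                                  # r_k = r_{k-1} + J_k^2 / Pi_k
    Pi_hist.append(Pi)
    Sigma.append(A(k) * (B(k) - r * A(k)))             # filtering error variance

# Given observations y_k, the estimate itself would follow with O_0 = 0:
#   nu_k = y_k - p*A(k)*O_{k-1};  O_k = O_{k-1} + J_k*nu_k/Pi_k;  z_hat = A(k)*O_k
```

The error-variance recursion is deterministic (it does not need the data), which is why the variance curves of Section 6 can be computed offline.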

Proof.

We start by obtaining an explicit formula for the innovation, ν_k = Y_k − Ŷ_{k/k−1}, or, equivalently, for the one-stage predictor of Y_k, which, denoting T_{k,i} = E[Y_k ν_i^T], is given by

Ŷ_{k/k−1} = Σ_{i=1}^{k−1} T_{k,i} Π_i^{−1} ν_i,  k ≥ 2;  Ŷ_{1/0} = 0.

Using the hypotheses on the model, it is deduced that T_{k,i} = D_k^p ℋ_k S_{k,i} for i < k − 1, and hence

Ŷ_{k/k−1} = D_k^p ℋ_k Ẑ_{k/k−1} + (T_{k,k−1} − D_k^p ℋ_k S_{k,k−1}) Π_{k−1}^{−1} ν_{k−1}.

Using again the hypotheses on the model, we obtain

T_{k,k−1} − D_k^p ℋ_k S_{k,k−1} = Cov[Γ_k, Γ_{k−1}] ∘ (ℋ_k 𝒜_k ℬ_{k−1}^T ℋ_{k−1}^T) + R^V_{k,k−1};

consequently,

ν_k = Y_k − D_k^p ℋ_k Ẑ_{k/k−1} − Ξ_{k,k−1} ν_{k−1},  k ≥ 2;  ν_1 = Y_1,    (4.21)

with Ξ_{k,k−1} given by expression (4.13).

Next, expression (4.10) for the filter Ẑ_{k/k} is derived. For this purpose, taking into account expression (4.8), we obtain formulas for the coefficients S_{k,i} = E[Z_k ν_i^T], i ≤ k. From the hypotheses on the model, replacing ν_i by its expression in (4.21) and using (4.8) for Ẑ_{i/i−1}, we have

S_{k,i} = 𝒜_k ℬ_i^T ℋ_i^T D_i^p − Σ_{j=1}^{i−1} S_{k,j} Π_j^{−1} S_{i,j}^T ℋ_i^T D_i^p − S_{k,i−1} Ξ_{i,i−1}^T,  2 ≤ i ≤ k;  S_{k,1} = 𝒜_k ℬ_1^T ℋ_1^T D_1^p

or, equivalently,

S_{k,i} = 𝒜_k J_i,  i ≤ k,    (4.23)

where J is a function satisfying

J_i = ℬ_i^T ℋ_i^T D_i^p − Σ_{j=1}^{i−1} J_j Π_j^{−1} S_{i,j}^T ℋ_i^T D_i^p − J_{i−1} Ξ_{i,i−1}^T,  2 ≤ i ≤ k;  J_1 = ℬ_1^T ℋ_1^T D_1^p.    (4.24)

Then, from (4.8) and (4.23), expression (4.10) for the filter is deduced, where O_k, defined by

O_k = Σ_{i=1}^{k} J_i Π_i^{−1} ν_i,  k ≥ 1;  O_0 = 0,

satisfies the recursive relation (4.11). Analogously, it is obtained that the one-stage predictor of the signal is given by Ẑ_{k/k−1} = 𝒜_k O_{k−1}, which, substituted into (4.21), leads to formula (4.12) for the innovation.

Expression (4.15) for J_k is derived by setting i = k in (4.24), using (4.23), and defining the function

r_k = E[O_k O_k^T] = Σ_{j=1}^{k} J_j Π_j^{−1} J_j^T,  k ≥ 1;  r_0 = 0.

From this definition, the recursive relation (4.16) is also immediately derived.

Finally, we obtain expression (4.14) for the innovation covariance matrix. From the hypotheses on the model, expression (4.12), and the definition of r_k, the following equation is obtained:

Π_k = E[Γ_k Γ_k^T] ∘ (ℋ_k 𝒜_k ℬ_k^T ℋ_k^T) + R^V_{k,k} − D_k^p ℋ_k 𝒜_k r_{k−1} 𝒜_k^T ℋ_k^T D_k^p − Ξ_{k,k−1} Π_{k−1} Ξ_{k,k−1}^T − D_k^p ℋ_k 𝒜_k E[O_{k−1} ν_{k−1}^T] Ξ_{k,k−1}^T − Ξ_{k,k−1} E[ν_{k−1} O_{k−1}^T] 𝒜_k^T ℋ_k^T D_k^p,  k ≥ 2.

So expression (4.14) for Π_k is deduced taking into account that E[O_{k−1} ν_{k−1}^T] = J_{k−1}, which follows from (4.11) using that the vector O_{k−2} is orthogonal to ν_{k−1}.

To conclude, as a measure of the estimation accuracy, we calculate the filtering error covariance matrices, Σ_{k/k} = E[Z_k Z_k^T] − E[Ẑ_{k/k} Ẑ_{k/k}^T], which are obtained as Σ_{k/k} = 𝒜_k [ℬ_k^T − r_k 𝒜_k^T], k ≥ 1.

5. Generalization to Correlation at Times k and s, with |k − s| = r

The observation model considered in Section 2 assumes that the uncertainty is modelled by Bernoulli variables correlated at consecutive sampling times, but independent otherwise. In this section, such a model is generalized by assuming correlation in the uncertainty at times k and s differing by r units of time. Specifically, hypothesis (H3) is replaced by the following one.

(H3′) For i = 1, …, m, the noises {γ_k^i; k ≥ 1} are sequences of Bernoulli random variables with P[γ_k^i = 1] = p_k^i. For i, j = 1, …, m, the variables γ_k^i and γ_s^j are assumed to be independent for |k − s| ≠ 0, r, and the covariances Cov[γ_k^i, γ_s^j] are known for |k − s| = r.

This correlation model allows us to consider certain situations where the signal cannot be missing in r + 1 consecutive observations.

Considerations similar to those made in Section 3 for the case of consecutive sampling times now lead to the following expression for the covariance matrices of the noise {V_k; k ≥ 1}:

R^V_{k,s} = R̄_k + Cov[Γ_k] ∘ (ℋ_k E[𝒵_k] E[𝒵_k^T] ℋ_k^T),  s = k,
R^V_{k,s} = Cov[Γ_k, Γ_s] ∘ (ℋ_k E[𝒵_k] E[𝒵_s^T] ℋ_s^T),  |k − s| = r,
R^V_{k,s} = 0,  |k − s| ≠ 0, r.

Then, performing the same steps as in the proof of Theorem 4.1, the following algorithm is deduced.

Theorem 5.1.

The quadratic filter, ẑ_{k/k}^q, of the original signal z_k is given by ẑ_{k/k}^q = Θ Ẑ_{k/k}, k ≥ 1, where Θ is the operator which extracts the first n entries of Ẑ_{k/k}, the linear filter of the augmented signal Z_k, which is obtained by

Ẑ_{k/k} = 𝒜_k O_k,  k ≥ 1,

where the vectors O_k are recursively calculated from

O_k = O_{k−1} + J_k Π_k^{−1} ν_k,  k ≥ 1;  O_0 = 0.

The innovation, ν_k, satisfies

ν_1 = Y_1,
ν_k = Y_k − D_k^p ℋ_k 𝒜_k O_{k−1},  2 ≤ k ≤ r,
ν_k = Y_k − D_k^p ℋ_k 𝒜_k O_{k−1} − Ξ_{k,k−r} [ν_{k−r} − Σ_{i=k−r+1}^{k−1} T_{i,k−r}^T Π_i^{−1} ν_i],  k > r,

with

Ξ_{k,k−r} = (Cov[Γ_k, Γ_{k−r}] ∘ (ℋ_k 𝒜_k ℬ_{k−r}^T ℋ_{k−r}^T) + R^V_{k,k−r}) Π_{k−r}^{−1},
T_{i,k−r} = D_i^p ℋ_i S_{i,k−r} − Ξ_{i,i−r} T_{k−r,i−r}^T,  i = k − r + 1, …, k − 1.

The covariance matrix of the innovation, Π_k, verifies

Π_1 = E[Γ_1 Γ_1^T] ∘ (ℋ_1 𝒜_1 ℬ_1^T ℋ_1^T) + R^V_{1,1},
Π_k = Cov[Γ_k] ∘ (ℋ_k 𝒜_k ℬ_k^T ℋ_k^T) + R^V_{k,k} + D_k^p ℋ_k 𝒜_k J_k,  2 ≤ k ≤ r,
Π_k = Cov[Γ_k] ∘ (ℋ_k 𝒜_k ℬ_k^T ℋ_k^T) + R^V_{k,k} + D_k^p ℋ_k 𝒜_k J_k − Ξ_{k,k−r} [Π_{k−r} + Σ_{i=k−r+1}^{k−1} T_{i,k−r}^T Π_i^{−1} T_{i,k−r}] Ξ_{k,k−r}^T − (D_k^p ℋ_k [ℬ_k − 𝒜_k r_{k−1}] − J_k^T) 𝒜_k^T ℋ_k^T D_k^p,  k > r.

The matrix function J is given by

J_1 = ℬ_1^T ℋ_1^T D_1^p,
J_k = [ℬ_k^T − r_{k−1} 𝒜_k^T] ℋ_k^T D_k^p,  2 ≤ k ≤ r,
J_k = [ℬ_k^T − r_{k−1} 𝒜_k^T] ℋ_k^T D_k^p − (J_{k−r} − Σ_{j=k−r+1}^{k−1} J_j Π_j^{−1} T_{j,k−r}) Ξ_{k,k−r}^T,  k > r,

where r_k is recursively obtained from

r_k = r_{k−1} + J_k Π_k^{−1} J_k^T,  k ≥ 1;  r_0 = 0.

Proof.

It is analogous to that of Theorem 4.1, taking into account that, in this case, the one-stage predictor of Y_k satisfies

Ŷ_{k/k−1} = D_k^p ℋ_k Ẑ_{k/k−1} + Σ_{i=k−r}^{k−1} (T_{k,i} − D_k^p ℋ_k S_{k,i}) Π_i^{−1} ν_i,

and, from the model hypotheses,

T_{k,k−r} − D_k^p ℋ_k S_{k,k−r} = Cov[Γ_k, Γ_{k−r}] ∘ (ℋ_k 𝒜_k ℬ_{k−r}^T ℋ_{k−r}^T) + R^V_{k,k−r},

and, for i > k − r,

T_{k,i} − D_k^p ℋ_k S_{k,i} = −(Cov[Γ_k, Γ_{k−r}] ∘ (ℋ_k 𝒜_k ℬ_{k−r}^T ℋ_{k−r}^T) + R^V_{k,k−r}) Π_{k−r}^{−1} T_{i,k−r}^T.

6. Numerical Simulation Example

To illustrate the application of the proposed filtering algorithm, a numerical simulation example is now presented. To check the effectiveness of the proposed quadratic filter, we ran a MATLAB program which, at each iteration, simulates the signal and the observed values and provides the linear and quadratic filtering estimates, as well as the corresponding error covariance matrices.

For the simulations, this program has been applied to a scalar signal {z_k; k ≥ 1} generated by the following first-order autoregressive model:

z_k = 0.95 z_{k−1} + w_{k−1},  k ≥ 1,

where the initial state z_0 is a zero-mean Gaussian variable with Var[z_0] = 1 and {w_k; k ≥ 0} is a zero-mean white Gaussian noise with Var[w_k] = 1.

The autocovariance functions of the signal and its second-order powers are given in a semidegenerate kernel form; specifically,

K^z_{k,s} = 1.025641 × 0.95^{k−s},  K^{z̃}_{k,s} = 2.1038795 × 0.95^{2(k−s)},  s ≤ k,

and the cross-covariance function of the signal and its second-order powers is K^{z z̃}_{k,s} = 0 for all s, k. According to hypothesis (H1), the functions constituting these covariance functions can be defined as follows:

A_k = 1.025641 × 0.95^k,  B_k = 0.95^{−k},  a_k = 2.1038795 × 0.95^{2k},  b_k = 0.95^{−2k},  α_k = β_k = ε_k = δ_k = 0.
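As a quick check of the factorization (in Python rather than the MATLAB program used for the simulations), these functions reproduce the stated kernels through the semidegenerate form K_{k,s} = A_k B_s for s ≤ k:

```python
# Semidegenerate kernel factors for the example (scalar case).
A = lambda k: 1.025641 * 0.95**k
B = lambda s: 0.95**(-s)
a = lambda k: 2.1038795 * 0.95**(2 * k)
b = lambda s: 0.95**(-2 * s)

def K_z(k, s):      # signal autocovariance, s <= k
    return A(k) * B(s)

def K_z2(k, s):     # autocovariance of the second-order powers, s <= k
    return a(k) * b(s)
```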

Consider two sensors whose measurements, according to our theoretical study, are perturbed by sequences of Bernoulli random variables {γ_k^{(i)}; k ≥ 1}, i = 1, 2, and by additive white noises {v_k^{(i)}; k ≥ 1}, i = 1, 2; that is,

y_k^i = γ_k^{(i)} z_k + v_k^{(i)},  k ≥ 1,  i = 1, 2.

Assume that the additive noises are independent and have the following probability distributions:

P[v_k^{(1)} = −8] = 1/8,  P[v_k^{(1)} = 8/7] = 7/8,  k ≥ 1,
P[v_k^{(2)} = 1] = 15/18,  P[v_k^{(2)} = −3] = 2/18,  P[v_k^{(2)} = −9] = 1/18,  k ≥ 1.

Now, in accordance with the proposed uncertain observation model, we assume that the uncertainty at any time k is correlated with the uncertainty at time s only if |k − s| = r, and independent otherwise.

To model the uncertainty in this way, we consider two independent sequences of independent Bernoulli random variables, {θ_k^{(i)}; k ≥ 1}, i = 1, 2, and define

γ_k^{(i)} = 1 − θ_k^{(i)}(1 − θ_{k+r}^{(i)}),  k ≥ 1.

Obviously, the variables γ_k^{(i)} take the value zero if and only if θ_k^{(i)} = 1 and θ_{k+r}^{(i)} = 0; otherwise, γ_k^{(i)} = 1. Therefore, {γ_k^{(i)}; k ≥ 1} are Bernoulli variables with P[γ_k^{(i)} = 1] = 1 − P[θ_k^{(i)} = 1] P[θ_{k+r}^{(i)} = 0]. Note that γ_k^{(i)} = 0 implies θ_{k+r}^{(i)} = 0 and, consequently, γ_{k+r}^{(i)} = 1; hence, if the signal is missing at time k, the observation at time k + r is guaranteed to contain the signal, so the signal cannot be missing in r + 1 consecutive observations.
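This construction is easy to simulate. The sketch below (our code, with illustrative values θ = 0.5 and r = 4) builds γ_k from the θ_k sequence and checks both the success probability 1 − θ(1 − θ) and the guarantee that a missing signal at time k forces a signal-bearing observation at time k + r:

```python
import numpy as np

rng = np.random.default_rng(1)
r, theta, n = 4, 0.5, 100_000   # illustrative lag and Bernoulli parameter

# Independent Bernoulli(theta) sequence theta_k (n + r values so that
# gamma_k = 1 - theta_k * (1 - theta_{k+r}) is defined for k = 0..n-1).
th = rng.random(n + r) < theta
gamma = np.where(th[:n] & ~th[r:n + r], 0, 1)

p_hat = gamma.mean()            # should approximate 1 - theta*(1 - theta) = 0.75
```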

For the application, we have assumed that the variables θ_k^{(i)} in each sensor are identically distributed; that is, P[θ_k^{(i)} = 1] = θ_i, independent of k. So, in each sensor, the probability that the observation contains the signal, p_i = P[γ_k^{(i)} = 1] = 1 − θ_i(1 − θ_i), is the same for all the observations.

Since {θ_k^{(1)}; k ≥ 1} is independent of {θ_k^{(2)}; k ≥ 1}, {γ_k^{(1)}; k ≥ 1} is also independent of {γ_k^{(2)}; k ≥ 1} and, hence, Cov[γ_k^{(i)}, γ_s^{(j)}] = 0 for i ≠ j and all k, s. For fixed i = 1, 2, the variance of γ_k^{(i)} is Var[γ_k^{(i)}] = p_i(1 − p_i), and the correlation between two different variables γ_k^{(i)} and γ_s^{(i)} is obtained as follows.

For |k − s| ≠ 0, r, the vectors (θ_k^{(i)}, θ_{k+r}^{(i)}) and (θ_s^{(i)}, θ_{s+r}^{(i)}) are independent; hence the variables γ_k^{(i)} = 1 − θ_k^{(i)}(1 − θ_{k+r}^{(i)}) and γ_s^{(i)} = 1 − θ_s^{(i)}(1 − θ_{s+r}^{(i)}) are also independent and, therefore, uncorrelated; that is, Cov[γ_k^{(i)}, γ_s^{(i)}] = E[γ_k^{(i)} γ_s^{(i)}] − E[γ_k^{(i)}] E[γ_s^{(i)}] = 0.

For |k − s| = r, we have E[θ_k^{(i)}(1 − θ_{k+r}^{(i)}) θ_s^{(i)}(1 − θ_{s+r}^{(i)})] = 0, since k = s + r or s = k + r and the product then contains a factor of the form θ^{(i)}(1 − θ^{(i)}) = 0 (a Bernoulli variable times its complement). Then

E[γ_k^{(i)} γ_s^{(i)}] = 1 − E[θ_k^{(i)}(1 − θ_{k+r}^{(i)})] − E[θ_s^{(i)}(1 − θ_{s+r}^{(i)})] = 1 − 2θ_i(1 − θ_i) = 2p_i − 1,

which implies Cov[γ_k^{(i)}, γ_s^{(i)}] = E[γ_k^{(i)} γ_s^{(i)}] − E[γ_k^{(i)}] E[γ_s^{(i)}] = 2p_i − 1 − p_i² = −(1 − p_i)².

Summarizing, the correlation function of {γ_k^{(i)}; k ≥ 1} is given by

Cov[γ_k^{(i)}, γ_s^{(i)}] = p_i(1 − p_i),  |k − s| = 0;
Cov[γ_k^{(i)}, γ_s^{(i)}] = −(1 − p_i)²,  |k − s| = r;
Cov[γ_k^{(i)}, γ_s^{(i)}] = 0,  |k − s| ≠ 0, r,

and, hence, the measurements described above conform to the proposed correlation model.
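The lag-r covariance −(1 − p_i)² can also be verified by exact enumeration over the underlying θ variables (our code; θ = 0.3 is an arbitrary illustrative value):

```python
from itertools import product

theta = 0.3
p = 1 - theta * (1 - theta)                  # P[gamma_k = 1]

def gamma(t_k, t_kr):
    # gamma_k = 1 - theta_k * (1 - theta_{k+r})
    return 1 - t_k * (1 - t_kr)

# gamma_k and gamma_{k+r} share theta_{k+r}; enumerate the three
# independent Bernoulli variables (theta_k, theta_{k+r}, theta_{k+2r}).
E_prod = 0.0
for t0, t1, t2 in product((0, 1), repeat=3):
    prob = 1.0
    for t in (t0, t1, t2):
        prob *= theta if t else 1 - theta
    E_prod += prob * gamma(t0, t1) * gamma(t1, t2)

cov_lag_r = E_prod - p * p                   # should equal -(1 - p)**2
```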

To analyze the performance of the proposed estimators, the linear and quadratic filtering error variances have been calculated for different values of r and also for different θ_1 and θ_2, which provide different values of the probabilities p_1 and p_2. Since p_i is the same if 1 − θ_i is considered instead of θ_i, only the case θ_i ≤ 0.5 is examined here (note that, in this case, p_i is a decreasing function of θ_i); more specifically, the values θ_i = 0.1, 0.2, 0.3, 0.4, 0.5 (which lead to p_i = 0.91, 0.84, 0.79, 0.76, 0.75, resp.) have been used.

First, considering r = 4, the linear and quadratic filtering error variances are calculated for the values θ_1 = θ_2 = 0.1; θ_1 = 0.2 and θ_2 = 0.3; and θ_1 = 0.4 and θ_2 = 0.5. Figure 1 shows the results obtained; for all the values of θ_i, the error variances corresponding to the quadratic filter are always considerably smaller than those of the linear filter, thus confirming the superiority of the quadratic filter over the linear one in estimation accuracy. The figure also shows that, as θ_1 or θ_2 increases (which means that the probability that the signal is present in the observations coming from the corresponding sensor decreases), the filtering error variances become greater and, hence, worse estimations are obtained.

Linear and quadratic filtering error variances for r=4 and different values of θ1 and θ2.

Next, we compare the performance of the linear and quadratic filtering estimators for the values θ_i = 0.1, 0.2, 0.3, 0.4, 0.5; since the linear and quadratic filtering error variances show insignificant variation from the 5th iteration onwards, only the error variances at a specific iteration are considered.

In Figure 2, the linear and quadratic filtering error variances at k = 50 are displayed versus θ_1 (for constant values of θ_2) and, in Figure 3, these variances are shown versus θ_2 (for constant values of θ_1). From these figures it can be seen that, as θ_1 or θ_2 decreases (and, consequently, the probability that the signal is not present in the observations coming from the corresponding sensor, 1 − p_i, decreases), the filtering error variances become smaller and, hence, better estimations are obtained. Note that this improvement is more significant for small values of θ_1 or θ_2, that is, when the probability that the signal is present in the observations coming from one of the sensors is large. On the other hand, both figures show that, for all values of θ_1 and θ_2, the error variances corresponding to the quadratic filter are always considerably smaller than those of the linear filter, confirming again the superiority of the quadratic filter over the linear one.

Linear and quadratic filtering error variances at k=50 versus θ1, for constant values of θ2=0.1,  0.2,  0.3,  0.4,  0.5.

Linear and quadratic filtering error variances at k=50 versus θ2, for constant values of θ1=0.1,  0.2,  0.3,  0.4,  0.5.

Finally, for θ_1 = θ_2 = 0.5 (the values producing the maximum probability that the signal is not present in the observations coming from both sensors) and considering different values of r, specifically r = 1, …, 16, the error variances at k = 50 for the linear and quadratic filters are displayed in Figure 4. From this figure it can be deduced that the performance of the estimators improves for smaller values of r and, hence, that a greater distance between the correlated variables produces worse estimations (in the sense of the mean squared error). As expected, this figure also shows that the estimation accuracy of the quadratic filters is superior to that of the linear filters, and that the error variances show insignificant variation for the larger values of r.

Linear and quadratic filtering error variances at k=50 versus r, when θ1=θ2=0.5.

7. Conclusion

A recursive quadratic filtering algorithm is proposed from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics. This is a realistic assumption in situations concerning sensor data that are transmitted over communication networks where, generally, multiple sensors with different properties are involved. The uncertainty in each sensor is modelled by a sequence of Bernoulli random variables which are correlated at times k and k+r. A real application of such observation model arises, for example, in signal transmission problems where a failure in one of the sensors at time k is detected and the old sensor is replaced at time k+r, thus avoiding the possibility of missing signal in r+1 consecutive observations.

Using covariance information, the algorithm is derived by applying the innovation technique to suitably defined augmented signal and observation vectors, and the LS quadratic estimator of the signal is obtained from the LS linear estimator of the augmented signal based on the augmented observations.

The performance of the proposed filtering algorithm is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times k and k+r, considering several values of r.

Acknowledgment

This paper is supported by Ministerio de Educación y Ciencia (Grant no. MTM2008-05567) and Junta de Andalucía (Grant no. P07-FQM-02701).

References

[1] F. Carravetta and G. Mavelli, "Polynomial filtering for systems with non-independent uncertain observations," in Proceedings of the 43rd IEEE Conference on Decision and Control, vol. 3, Atlantis, Bahamas, December 2004, pp. 3109–3114.
[2] M. Sahebsara, T. Chen, and S. L. Shah, "Optimal filtering with random sensor delay, multiple packet dropout and uncertain observations," International Journal of Control, vol. 80, no. 2, pp. 292–301, 2007.
[3] S. Nakamori, R. Caballero-Águila, A. Hermoso-Carazo, and J. Linares-Pérez, "Linear estimation from uncertain observations with white plus coloured noises using covariance information," Digital Signal Processing, vol. 13, pp. 552–568, 2003.
[4] F. O. Hounkpevi and E. E. Yaz, "Robust minimum variance linear state estimators for multiple sensors with different failure rates," Automatica, vol. 43, no. 7, pp. 1274–1280, 2007.
[5] R. Caballero-Águila, A. Hermoso-Carazo, and J. Linares-Pérez, "Linear and quadratic estimation using uncertain observations from multiple sensors with correlated uncertainty," Signal Processing, vol. 91, no. 2, pp. 330–337, 2011.
[6] J. D. Jiménez-López, J. Linares-Pérez, S. Nakamori, and R. Caballero-Águila, "Signal estimation based on covariance information from observations featuring correlated uncertainty and coming from multiple sensors," Signal Processing, vol. 88, pp. 2998–3006, 2008.
[7] A. Hermoso-Carazo, J. Linares-Pérez, J. D. Jiménez-López, R. Caballero-Águila, and S. Nakamori, "Recursive fixed-point smoothing algorithm from covariances based on uncertain observations with correlation in the uncertainty," Applied Mathematics and Computation, vol. 203, no. 1, pp. 243–251, 2008.
[8] J. R. Magnus and H. Neudecker, Matrix Differential Calculus with Applications in Statistics and Econometrics, John Wiley & Sons, New York, NY, USA, 1999.