Quadratic Filtering Algorithm Based on Covariances Using Correlated Uncertain Observations Coming from Different Sensors

The least-squares quadratic estimation problem of signals from observations coming from multiple sensors is addressed when there is a nonzero probability that each observation does not contain the signal to be estimated. We assume that, at each sensor, the uncertainty about the signal being present or missing in the observation is modelled by correlated Bernoulli random variables, whose probabilities are not necessarily the same for all the sensors. A recursive algorithm is derived without requiring knowledge of the signal state-space model, but only the moments up to the fourth-order ones of the signal and observation noise, the uncertainty probabilities, and the correlation between the variables modelling the uncertainty. The estimators require the autocovariance and cross-covariance functions of the signal and its second-order powers in a semidegenerate kernel form. The recursive quadratic filtering algorithm is derived from a linear estimation algorithm for a suitably defined augmented system.


Introduction
In many real systems, the signal to be estimated can be randomly missing in the observations due, for example, to intermittent failures in the observation mechanism, fading phenomena in propagation channels, target tracking, accidental loss of some measurements, or data inaccessibility during certain times. Usually, these situations are characterized by including in the observation equation not only an additive noise, but also a multiplicative noise consisting of a sequence of Bernoulli random variables taking the value one if the observation is state plus noise, or the value zero if it is only noise (uncertain observations). Since these models are appropriate in many practical situations with random failures in the transmission, the estimation problem in systems with uncertain observations has been widely studied in the literature under different hypotheses and approaches (see, e.g., [1, 2] and references therein).
On the other hand, in some practical situations the state-space model of the signal is not available and another type of information must be processed for the estimation. In recent years, the estimation problem from uncertain observations has been investigated using covariance information, and algorithms with a simpler structure than those obtained when the state-space model is known have been derived (see, e.g., [3]).
Recently, the least-squares linear estimation problem using uncertain observations transmitted by multiple sensors, whose statistical properties are assumed not to be the same, has been studied by several authors under different approaches and hypotheses on the processes (see, e.g., [4, 5] for a state-space approach and [6, 7] for a covariance approach).
In this paper, using covariance information, recursive algorithms for the least-squares quadratic filtering problem from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics are proposed. This paper extends the results in [6] in two directions: on the one hand, correlation at times k and k + r between the random variables modelling the uncertainty in the observations is considered and, on the other, the quadratic estimation problem is addressed. The quadratic estimation problem is also new with respect to the results in Hermoso-Carazo et al. [7], which also refer to observations with uncertainty modelled by Bernoulli variables correlated at times k and k + r with arbitrary r, but coming from a single sensor. Furthermore, the current paper differs from [5] in the correlation model considered and in the information used to derive the algorithms (state-space model in [5] and covariance information in the current paper).
To address the quadratic estimation problem, augmented signal and observation vectors are introduced by assembling the original vectors with their second-order powers defined by the Kronecker product, thus obtaining a new augmented system and reducing the quadratic estimation problem in the original system to the linear estimation problem in the augmented system. By using an innovation approach, the linear estimator of the augmented signal based on the augmented observations is obtained, thus providing the required quadratic estimator.
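The augmentation idea can be illustrated with a minimal numpy sketch (illustrative only, not the paper's implementation; the function name and dimensions are chosen for the example): stacking a vector with its Kronecker square turns any quadratic function of the original vector into a linear function of the augmented one.

```python
import numpy as np

def augment(y):
    """Stack a vector with its second-order Kronecker power y (x) y."""
    return np.concatenate([y, np.kron(y, y)])

rng = np.random.default_rng(0)
y = rng.standard_normal(3)
Y = augment(y)                      # dimension n + n^2 = 3 + 9 = 12

# A quadratic function of y, e.g. y[0]*y[1], is a *linear* function of Y:
c = np.zeros(Y.size)
c[3 + 1] = 1.0                      # picks the entry y[0]*y[1] of y (x) y
assert np.isclose(c @ Y, y[0] * y[1])
```

This is why a quadratic estimator of z based on y can be computed as a linear estimator of the augmented signal based on the augmented observations.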
The performance of the proposed filtering algorithms is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times k and k + r, considering several values of r. The linear and quadratic estimation error covariance matrices are compared, showing the superiority of the quadratic estimators over the linear ones.

Observation Model and Hypotheses
The problem at hand is to determine the least-squares (LS) quadratic estimator of an n-dimensional discrete signal, z(k), from noisy measurements coming from multiple sensors which, with different probabilities, may not contain the signal. In this section, we present the observation model and the hypotheses about the signal and noise processes involved.
Consider m scalar sensors whose measurements at each sampling time, k, denoted by y_i(k), may either contain the signal to be estimated, z(k), or be only noise, v_i(k); the uncertainty about the signal being present or missing in the observation is modelled by Bernoulli variables, γ_i(k). The observation model is thus described as follows:

y_i(k) = γ_i(k) z(k) + v_i(k), k ≥ 1, i = 1, . . ., m. (2.1)

If γ_i(k) = 1, then y_i(k) = z(k) + v_i(k) and the measurement coming from the ith sensor contains the signal; otherwise, if γ_i(k) = 0, then y_i(k) = v_i(k), which means that such measurement is only noise.
Therefore, the variables {γ_i(k); k ≥ 1} model the uncertainty of the observations coming from the ith sensor.
To simplify the notation, the observation equation 2.1 is rewritten in a compact form by stacking the measurements of the m sensors. It is known that if the signal z(k) and the observations y(1), . . ., y(k) have finite second-order moments, the LS linear filter of z(k) is the orthogonal projection of z(k) onto the space of n-dimensional random variables obtained as linear transformations of y(1), . . ., y(k). So, by defining the random vectors y^(2)(i) = y(i) ⊗ y(i), where ⊗ denotes the Kronecker product [8], and provided that the fourth-order moments of y(i) are finite, the LS quadratic estimator of z(k) based on the observations up to the sampling time k is the orthogonal projection of z(k) onto the space of n-dimensional linear transformations of y(1), . . ., y(k) and their second-order powers y^(2)(1), . . ., y^(2)(k). To guarantee the existence of the second-order moments of the vectors y^(2)(i), the pertinent assumptions about the processes in 2.1 are now stated.

Hypotheses about the Model
(H1) The n × 1 signal process {z(k); k ≥ 1} has zero mean, and its autocovariance function, K_z(k,s), as well as the autocovariance function of its second-order powers, K_z^(2)(k,s), is expressed in a semidegenerate kernel form,

K_z(k,s) = A(k)B^T(s), K_z^(2)(k,s) = a(k)b^T(s), s ≤ k,

where the n × M matrix functions A, B and the n² × L matrix functions a, b are known. Moreover, it is assumed that the cross-covariance function of the signal and its second-order powers, K_zz^(2)(k,s), can also be expressed as

K_zz^(2)(k,s) = α(k)β^T(s), s ≤ k; K_zz^(2)(k,s) = ε(k)δ^T(s), k ≤ s,

where α, β, ε, and δ are n × N, n² × N, n × P, and n² × P known matrix functions, respectively.
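As a hypothetical illustration of the semidegenerate kernel form (the concrete signal model used here is an assumption, not taken from this section), a stationary scalar signal with exponentially decaying autocovariance factorizes with M = 1:

```latex
% Illustration only: for K_z(k,s) = \sigma^2 \rho^{\,k-s}, \; s \le k,
% a semidegenerate kernel factorization is
K_z(k,s) = A(k)\,B^{\top}(s), \qquad s \le k,
\qquad A(k) = \rho^{k}, \quad B(s) = \sigma^{2}\rho^{-s}.
```

The factorization into a function of k times a function of s is what makes the recursive filter formulas of Section 4 possible without a state-space model.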
(H2) For i = 1, . . ., m, the sensor additive noises, {v_i(k); k ≥ 1}, are zero-mean white processes, and their moments, up to the fourth-order ones, are known; we will denote them as follows.

Augmented System
Given the observation model 2.1 under assumptions (H1)-(H4), the problem is to find the LS quadratic filter of the signal z(k), which will be denoted z_q(k/k). The technique used to obtain this estimator consists of augmenting the signal and observation by assembling the original vectors and their second-order powers, Z(k) = [z(k)^T, z^(2)(k)^T]^T and Y(k) = [y(k)^T, y^(2)(k)^T]^T, and deriving the estimator z_q(k/k) as the vector constituted by the first n entries of the LS linear filter of Z(k) based on Y(k).
To obtain this linear estimator, the first- and second-order statistical properties of the augmented vectors Z(k) and Y(k) are now analyzed.

Properties of the Augmented Vectors
By using the Kronecker product properties, and denoting by I_m² the m² × m² identity matrix and by K_m² the m² × m² commutation matrix [8], the following model with uncertain observations is obtained. It should be noted that the signal, Z(k), and the noise, V(k), in this new model have nonzero mean. Nevertheless, this handicap can be overcome by considering the centered augmented vectors, where vec denotes the operator that vectorizes a matrix [8]. Hence, since the first n components of E[Z(k)] are zero, the required quadratic estimator z_q(k/k) is just the vector constituted by the first n entries of the LS linear filter of the centered Z(k). Henceforth, these centered vectors will be referred to as the augmented signal and observation vectors, respectively.
The signal and noise processes {Z(k); k ≥ 1} and {V(k); k ≥ 1} involved in model 3.3 are zero mean. In the following propositions, the second-order statistical properties of these processes are established.

Proposition 3.1. If the signal process {z(k); k ≥ 1} satisfies (H1), the autocovariance function of the augmented signal process {Z(k); k ≥ 1} can be expressed in a semidegenerate kernel form, namely, where

3.6
Proof. It is immediate from hypothesis (H1) about the covariance functions of the signal and its second-order powers.

3.7
where ◦ denotes the Hadamard product and the above expression equals 0 for all k, s; hence

3.10
Firstly, we prove that R^11(k,s) = R(k)δ_{k,s}, where δ denotes the Kronecker delta function. Indeed, since {v(k); k ≥ 1} is a zero-mean white sequence with covariances R(k), for all k ≥ 1, it is clear that R^11(k,s) = R(k)δ_{k,s}. Moreover, from the mutual independence, the Kronecker and Hadamard product properties lead to

3.12
On the other hand,

Quadratic Filtering Algorithm
As indicated above, to obtain the LS quadratic estimators of the signal z(k) based on observations 2.1, we consider the LS linear estimators of the augmented signal, Z(k), based on the augmented observations 3.3. As is known, the LS linear filter of Z(k) is the orthogonal projection of the vector Z(k) onto L(Y(1), . . ., Y(k)), the linear space spanned by {Y(1), . . ., Y(k)}; so the Orthogonal Projection Lemma (OPL) states that the estimator, Z(k/k), is the only linear combination satisfying the orthogonality property. Since the observations are generally nonorthogonal vectors, we will use an innovation approach, consisting of transforming the observation process {Y(k); k ≥ 1} into an equivalent process (the innovation process) of orthogonal vectors {ν(k); k ≥ 1}, equivalent in the sense that each set {ν(1), . . ., ν(k)} spans the same linear subspace as {Y(1), . . ., Y(k)}. The innovation process is constructed by the Gram-Schmidt orthogonalization procedure, using inductive reasoning. Starting with ν(1) = Y(1), each additional observation Y(k) is projected onto L(ν(1), . . ., ν(k−1)), and the orthogonality property allows us to find the projection by separately projecting onto each of the previous orthogonal vectors. Note that the projection Y(k/k−1) is the part of the observation Y(k) that is determined by knowledge of {Y(1), . . ., Y(k−1)}; thus the remainder ν(k) = Y(k) − Y(k/k−1) can be regarded as the "new information" or the "innovation" provided by Y(k), and the process {ν(k); k ≥ 1} as the innovation process associated with {Y(k); k ≥ 1}. The causal and causally invertible linear relation existing between the observation and innovation processes makes the innovation process unique.
Next, taking into account that the innovations constitute a white process, we derive a general expression for the LS linear estimator of the augmented signal, Z(k), based on {Y(1), . . ., Y(L)}, which will be denoted by Z(k/L). Replacing {Y(1), . . ., Y(L)} by the equivalent set of orthogonal vectors {ν(1), . . ., ν(L)}, the signal estimator is

Z(k/L) = Σ_{j=1}^{L} h(k,j) ν(j),

where the impulse-response function h(k,j), j = 1, . . ., L, is calculated from the orthogonality property, which leads to the Wiener-Hopf equation

Σ_{s=1}^{L} h(k,s) E[ν(s)ν^T(j)] = E[Z(k)ν^T(j)], j = 1, . . ., L.

Due to the whiteness of the innovation process, E[ν(j)ν^T(s)] = 0 for j ≠ s, and the Wiener-Hopf equation yields h(k,j) = S(k,j)Π(j)^{-1}; therefore, the following general expression for the LS linear filter of the augmented signal is obtained:

Z(k/L) = Σ_{i=1}^{L} S(k,i) Π(i)^{-1} ν(i), (4.8)

where S(k,i) = E[Z(k)ν^T(i)] and Π(i) = E[ν(i)ν^T(i)]. Using the properties of the processes involved in 3.3, as established in Propositions 3.1 and 3.2, and expression 4.8 for the filter, we derive a recursive algorithm for the linear filtering estimators, Z(k/k), of the augmented signal Z(k). As indicated above, the first n entries of these estimators provide the required quadratic filter of the original signal z(k).
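The innovation construction and the resulting filter formula can be checked numerically. The following sketch (illustrative; a generic joint covariance of scalar variables replaces the paper's model) performs the Gram-Schmidt orthogonalization at the covariance level and verifies that the estimator Σ_i S(k,i)Π(i)^{-1}ν(i) coincides with the direct orthogonal projection obtained from the normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
M = rng.standard_normal((L + 1, L + 1))
Sigma = M @ M.T                 # joint covariance of (z, y_1, ..., y_L), all scalar
Szy = Sigma[0, 1:]              # E[z y_j]
Syy = Sigma[1:, 1:]             # E[y_i y_j]

# Gram-Schmidt at the covariance level: each innovation nu_j is stored as its
# coefficient vector C[j] in the y-basis; Pi[j] = E[nu_j^2].
C, Pi = [], []
for j in range(L):
    c = np.zeros(L); c[j] = 1.0
    for i in range(j):
        c -= (Syy[j] @ C[i] / Pi[i]) * C[i]   # subtract projection onto nu_i
    C.append(c)
    Pi.append(c @ Syy @ c)

# Whiteness: E[nu_j nu_s] = 0 for j != s
G = np.array([[C[j] @ Syy @ C[s] for s in range(L)] for j in range(L)])
assert np.allclose(G, np.diag(Pi))

# Filter sum_j S(j) Pi(j)^{-1} nu_j equals the direct orthogonal projection
coef = sum((Szy @ C[j]) / Pi[j] * C[j] for j in range(L))
assert np.allclose(coef, np.linalg.solve(Syy, Szy))
```

The second assertion is exactly the equivalence exploited in the text: the innovation expansion and the one-shot Wiener solution define the same projection, but the former admits a recursive implementation.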
Theorem 4.1. The quadratic filter, z_q(k/k), of the original signal z(k) is given by z_q(k/k) = Θ[Z(k/k)], where Θ is the operator which extracts the first n entries of Z(k/k), the linear filter of the augmented signal Z(k), which is obtained by 4.10, where the vectors O(k) are recursively calculated from 4.11. The innovation, ν(k), satisfies 4.12, and Π(k), the covariance matrix of the innovation, verifies

4.14
The matrix function J is given by

4.15
where r(k) is recursively obtained from 4.16.

Proof. We start by obtaining an explicit formula for the innovations,

or, equivalently, for the one-stage predictor of Y(k), which, by denoting T(k,i) = E[Y(k)ν^T(i)], is given by the expression below. Using the hypotheses on the model, it is deduced that, and hence,

4.19
Using again the hypotheses on the model, we obtain the following, with Ξ(k,k−1) given by expression 4.13. Next, expression 4.10 for the filter Z(k/k) is derived. For this purpose, taking into account expression 4.8, we obtain formulas to calculate the coefficients S(k,i) = E[Z(k)ν^T(i)], i ≤ k. From the hypotheses on the model, replacing ν(i) by its expression in 4.21, and using 4.8 for Z(i/i−1), we have 4.23, where J is a function satisfying

4.24
Then, from 4.8 and 4.23, expression 4.10 for the filter is deduced, where O(k) satisfies the recursive relation 4.11. Analogously, the one-stage predictor of the signal is obtained which, substituted into 4.21, leads to formula 4.12 for the innovation. Expression 4.15 for J(k) is derived by making i = k in 4.24, using 4.23, and defining the function r. From this definition, the recursive relation 4.16 is also immediately derived.
Finally, we obtain expression 4.14 for the innovation covariance matrix; from the hypotheses on the model, expression 4.12, and the definition of r(k), the following equation is obtained:

4.26
So, expression 4.14 for Π(k) is deduced. To conclude, as a measure of the estimation accuracy, we have calculated the filtering error covariance matrices.

Generalization to Correlation at Times k and s, with |k − s| = r
The observation model considered in Section 2 assumes that the uncertainty is modelled by Bernoulli variables correlated at consecutive sampling times, but independent otherwise. In this section, such a model is generalized by assuming correlation in the uncertainty at times k and s differing by r units of time. Specifically, hypothesis (H3) is replaced by the following one.
(H3') For i = 1, . . ., m, the noises {γ_i(k); k ≥ 1} are sequences of Bernoulli random variables with P[γ_i(k) = 1] = p_i(k). For i, j = 1, . . ., m, the variables γ_i(k) and γ_j(s) are assumed to be independent for |k − s| ≠ r, and Cov[γ_i(k), γ_j(s)] are known for |k − s| = r. This correlation model allows us to consider certain situations where the signal cannot be missing at r + 1 consecutive observations. Similar considerations to those made in Section 3 for the case of consecutive sampling times lead now to the following expression for the covariance matrices of the noise {V(k); k ≥ 1}:

5.1
Then, performing the same steps as in the proof of Theorem 4.1, the following algorithm is deduced.
Theorem 5.1. The quadratic filter, z_q(k/k), of the original signal z(k) is given by z_q(k/k) = Θ[Z(k/k)], where Θ is the operator which extracts the first n entries of Z(k/k), the linear filter of the augmented signal Z(k), which is obtained by the expression below, where the vectors O(k) are recursively calculated from the corresponding relation. The innovation, ν(k), satisfies

ISRN Applied Mathematics
The covariance matrix of the innovation, Π k , verifies

5.7
The matrix function J is given by the expression below, where r(k) is recursively obtained from the corresponding relation.

Proof. It is analogous to that of Theorem 4.1, taking into account that, in this case, the one-stage predictor of Y(k) satisfies the relation below and, from the model hypotheses, 5.12

Numerical Simulation Example
To illustrate the application of the proposed filtering algorithm, a numerical simulation example is now presented. To check the effectiveness of the proposed quadratic filter, we ran a program in MATLAB which, at each iteration, simulates the signal and the observed values and provides the linear and quadratic filtering estimates, as well as the corresponding error covariance matrices.
For the simulations, this program has been applied to a scalar signal {z(k); k ≥ 1}, generated by the following first-order autoregressive model, where the initial state is a zero-mean Gaussian variable with Var[z(0)] = 1 and {w(k); k ≥ 0} is a zero-mean white Gaussian noise with Var[w(k)] = 1. The autocovariance functions of the signal and its second-order powers are given in a semidegenerate kernel form and the cross-covariance function of the signal and its second-order powers is K_zz^(2)(k,s) = 0, for all s, k. According to hypothesis (H1), the functions constituting these covariance functions can be defined as follows:
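A quick Monte Carlo check of the semidegenerate kernel structure is sketched below. Note that the autoregressive coefficient is not recoverable from the text, so the value 0.95 is an assumption made purely for illustration:

```python
import numpy as np

rho, N, runs = 0.95, 20, 100_000          # rho is an assumed coefficient
rng = np.random.default_rng(0)
z = np.empty((runs, N))
z[:, 0] = rng.standard_normal(runs)       # Var z(0) = 1
for k in range(1, N):                     # z(k) = rho z(k-1) + w(k-1), Var w = 1
    z[:, k] = rho * z[:, k - 1] + rng.standard_normal(runs)

# Semidegenerate kernel check at (k, s) = (10, 5): K_z(k,s) = A(k) B(s), s <= k,
# with A(k) = rho^k and B(s) = rho^{-s} Var z(s)
emp = (z[:, 10] * z[:, 5]).mean()                  # empirical K_z(10, 5)
var5 = rho**10 + (1 - rho**10) / (1 - rho**2)      # Var z(5) from the recursion
kernel = rho**10 * (rho**-5 * var5)                # A(10) * B(5)
assert abs(emp - kernel) < 0.1
```

The factorization A(k)B(s) mirrors the structure required by hypothesis (H1) for this example.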

6.3
Consider two sensors whose measurements, according to our theoretical study, are perturbed by sequences of Bernoulli random variables {γ_i(k); k ≥ 1}, i = 1, 2, and by additive white noises, {v_i(k); k ≥ 1}, i = 1, 2; that is, the observations are given by 6.4. Assume that the additive noises are independent and have the probability distributions specified in 6.5, for all k ≥ 1. Now, in accordance with the proposed uncertain observation model, we assume that the uncertainty at any time k > r is correlated with the uncertainty at time s only if |k − s| = r, and independent otherwise.
To model the uncertainty in this way, we can consider two independent sequences of independent Bernoulli random variables, {θ_i(k); k ≥ 1}, i = 1, 2. This construction implies that if the signal is missing at time k, it is assured that, at time k + r, the observation contains the signal; therefore, the signal cannot be missing in r + 1 consecutive observations.
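One construction consistent with the stated property is γ(k) = 1 − θ(k)(1 − θ(k − r)) with i.i.d. Bernoulli θ; this is an illustrative assumption, since the paper's exact definition of γ in terms of the θ sequences is not recoverable from the text. Under it, γ(k) = 0 requires θ(k) = 1 and θ(k − r) = 0, which forces γ(k + r) = 1:

```python
import numpy as np

theta_p, r, N = 0.3, 4, 20_000            # illustrative parameter values
rng = np.random.default_rng(0)
theta = (rng.random(N + r) < theta_p).astype(int)   # i.i.d. Bernoulli(theta_p)
gamma = 1 - theta[r:] * (1 - theta[:-r])  # hypothetical construction of gamma

# gamma(k) = 0 implies gamma(k + r) = 1: no r + 1 consecutive signal losses
zeros = np.flatnonzero(gamma[:-r] == 0)
assert np.all(gamma[zeros + r] == 1)

# Under this construction, P(gamma = 1) = 1 - theta_p (1 - theta_p)
p = 1 - theta_p * (1 - theta_p)
assert abs(gamma.mean() - p) < 0.02

# Correlation appears only at lag r; at that lag the covariance is negative,
# Cov(gamma(k), gamma(k + r)) = -(theta_p (1 - theta_p))^2
cov_r = np.mean((gamma[:-r] - p) * (gamma[r:] - p))
assert abs(cov_r + (theta_p * (1 - theta_p)) ** 2) < 0.01
```

Since γ(k) depends only on θ(k) and θ(k − r), two variables γ(k), γ(s) share a θ factor only when |k − s| = 0 or r, which reproduces the lag-r correlation model of hypothesis (H3').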
For the application, we have assumed that the variables θ_i(k) in each sensor have the same distribution; that is, P[θ_i(k) = 1] = θ_i, independent of k. So, in each sensor, the probability that the observation contains the signal, p_i, is constant for all the observations. Summarizing, the correlation function of the variables γ_i(k) vanishes except at lags 0 and r and, hence, the measurements described above are in accordance with the proposed correlation model.
To analyze the performance of the proposed estimators, the linear and quadratic filtering error variances have been calculated for different values of r and also for different θ_1 and θ_2, which provide different values of the probabilities p_1 and p_2. Since the p_i are the same if the value 1 − θ_i is considered instead of θ_i, only the case θ_i ≤ 0.5 is examined here (note that, in such a case, p_i is a decreasing function of θ_i); more specifically, the values θ_i = 0.1, 0.2, 0.3, 0.4, 0.5, which lead to p_i = 0.91, 0.84, 0.78, 0.76, 0.75, respectively, have been used.
First, considering r = 4, the linear and quadratic filtering error variances are calculated for the values θ_1 = θ_2 = 0.1; θ_1 = 0.2 and θ_2 = 0.3; and θ_1 = 0.4 and θ_2 = 0.5. Figure 1 shows the results obtained; for all the values of θ_i, the error variances corresponding to the quadratic filter are always considerably less than those of the linear filter, thus confirming the superior estimation accuracy of the quadratic filter over the linear one. Also, from this figure it can be seen that, as θ_1 or θ_2 increases (which means that the probability that the signal is present in the observations coming from the corresponding sensor decreases), the filtering error variances become greater and, hence, worse estimations are obtained. Next, we compare the performance of the linear and quadratic filtering estimators for the values θ_i = 0.1, 0.2, 0.3, 0.4, 0.5; since the linear and quadratic filtering error variances show insignificant variation from the 5th iteration onwards, only the error variances at a specific iteration are considered.
In Figure 2, the linear and quadratic filtering error variances at k = 50 are displayed versus θ_1 for constant values of θ_2 and, in Figure 3, these variances are shown versus θ_2 for constant values of θ_1. From these figures it can be seen that, as θ_1 or θ_2 decreases (and, consequently, the probability that the signal is not present in the observations coming from the corresponding sensor, 1 − p_i, decreases), the filtering error variances become smaller and, hence, better estimations are obtained. Note that this improvement is more significant for small values of θ_1 or θ_2, that is, when the probability that the signal is present in the observations coming from one of the sensors is large. On the other hand, both figures show that, for all the values of θ_1 and θ_2, the error variances corresponding to the quadratic filter are always considerably less than those of the linear filter, confirming again the superiority of the quadratic filter over the linear one.
Finally, for θ_1 = θ_2 = 0.5 (these values produce the maximum probability that the signal is not present in the observations coming from both sensors) and considering different values of r, specifically r = 1, . . ., 16, the error variances at k = 50 for the linear and quadratic filters are displayed in Figure 4. From this figure it is deduced that the performance of the estimators improves for smaller values of r (hence, a greater distance between the correlated variables produces worse estimations in the mean-squared-error sense). As expected, this figure also shows that the estimation accuracy of the quadratic filters is superior to that of the linear filters, and that the error variances show insignificant variation for larger values of r.

Conclusion
A recursive quadratic filtering algorithm is proposed from correlated uncertain observations coming from multiple sensors with different uncertainty characteristics. This is a realistic assumption in situations concerning sensor data transmitted over communication networks where, generally, multiple sensors with different properties are involved. The uncertainty in each sensor is modelled by a sequence of Bernoulli random variables which are correlated at times k and k + r. A real application of such an observation model arises, for example, in signal transmission problems where a failure in one of the sensors at time k is detected and the old sensor is replaced at time k + r, thus avoiding the possibility of a missing signal in r + 1 consecutive observations. Using covariance information, the algorithm is derived by applying the innovation technique to suitably defined augmented signal and observation vectors, and the LS quadratic estimator of the signal is obtained from the LS linear estimator of the augmented signal based on the augmented observations. The performance of the proposed filtering algorithm is illustrated by a numerical simulation example where the state of a first-order autoregressive model is estimated from uncertain observations coming from two sensors with different uncertainty characteristics correlated at times k and k + r, considering several values of r.

and since Cov[Γ(k), Γ(s)] = 0 for |k − s| ≥ 2, the covariance matrices R_V(k,s) are obtained. The uncorrelation between {V(k); k ≥ 1} and the processes {Z(k); k ≥ 1} and {D_γ(k)H(k)Z(k); k ≥ 1} is derived in a similar way, taking into account that {z(k); k ≥ 0}, {v(k); k ≥ 1}, and {γ(k); k ≥ 1} are mutually independent, and using the Kronecker and Hadamard product properties.
k ≥ 1. It is obvious that the variables γ_i(k) take the value zero if and only if the underlying θ variables take the appropriate values and, hence, for all k, s, Cov[γ_i(k), γ_j(s)] = 0 for i ≠ j. For fixed i = 1, 2, the variance of γ_i(k) is Var[γ_i(k)] = p_i(1 − p_i), and the correlation between two different variables γ_i(k) and γ_i(s) is obtained as follows. (I) For |k − s| ≠ 0, r, the variables θ involved are independent and, hence, uncorrelated; that is, Cov[γ_i(k), γ_i(s)] = 0.

Figure 1: Linear and quadratic filtering error variances for r = 4 and different values of θ_1 and θ_2.