Optimal Fusion Filtering in Multisensor Stochastic Systems with Missing Measurements and Correlated Noises

The optimal least-squares linear estimation problem is addressed for a class of discrete-time multisensor linear stochastic systems with missing measurements and autocorrelated and cross-correlated noises. The stochastic uncertainties in the measurements coming from each sensor (missing measurements) are described by scalar random variables with arbitrary discrete probability distribution over the interval [0, 1]; hence, at each sensor the information may be partially missing, and different sensors may have different missing probabilities. The noise correlation assumptions considered are (i) the process noise and all the sensor noises are one-step autocorrelated; (ii) different sensor noises are one-step cross-correlated; and (iii) the process noise and each sensor noise are two-step cross-correlated. Under these assumptions and by an innovation approach, recursive algorithms for the optimal linear filter are derived using the two basic estimation fusion structures; more specifically, both centralized and distributed fusion estimation algorithms are proposed. The accuracy of these estimators is measured by their error covariance matrices, which allow us to compare their performance in a numerical simulation example that illustrates the feasibility of the proposed filtering algorithms and shows a comparison with other existing filters.


Introduction
For a long time, the least-squares (LS) estimation problem in linear stochastic systems from measurements perturbed by additive noises has received considerable attention in the scientific community due to its wide applicability in many practical situations (e.g., video and laser tracking systems, satellite navigation, radar, and meteorological applications [1]). As is well known, one of the major contributions to this problem is the Kalman filter, which provides a recursive algorithm for the optimal LS estimator when the additive white noises and the initial state are Gaussian and mutually independent (or, equivalently, uncorrelated, owing to the Gaussianity assumption); in that case the optimal LS estimator coincides with the optimal LS linear estimator. Since the publication of the Kalman filter [2] in 1960, numerous results and solution methods have been reported in the literature to address the state estimation problem from noisy observations; these depend on the models representing possible relationships between the unknown state and the observable variables and also on the assumptions on the noise processes.
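To fix ideas, the classical recursion referred to above can be sketched as follows. This is a minimal illustrative implementation of the standard Kalman filter (prediction and innovation-update steps) with placeholder model matrices; it is not the algorithm derived later in this paper, which handles missing measurements and correlated noises.

```python
import numpy as np

def kalman_filter(F, H, Q, R, x0, P0, ys):
    """Classical Kalman filter for x_{k+1} = F x_k + w_k, y_k = H x_k + v_k,
    with mutually independent white noises w_k ~ (0, Q) and v_k ~ (0, R)."""
    x, P = x0, P0
    estimates = []
    for y in ys:
        # one-stage prediction
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the innovation y - H x and its covariance S
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # filter gain
        x = x + K @ (y - H @ x)
        P = P - K @ S @ K.T
        estimates.append(x.copy())
    return estimates, P

# Scalar example with arbitrary (illustrative) model matrices:
rng = np.random.default_rng(0)
F = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
ys = [np.array([v]) for v in rng.standard_normal(50)]
est, P = kalman_filter(F, H, Q, R, np.zeros(1), np.eye(1), ys)
```

The error covariance `P` converges to the steady-state Riccati solution regardless of the data, which is the recursive structure the optimal LS linear filters of this paper generalize.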
Specifically, during the past decades, there has been an increasing interest in the filtering problem in multisensor systems, where sensor networks are used to obtain all the available information on the system state and its estimation must be carried out from the observations provided by all the sensors. A basic question for this class of systems is how to fuse the measurement data from the different sensors to address the estimation problem. Commonly, two methods are used to process the measured data coming from multiple sensors: centralized and distributed fusion methods. In the centralized fusion method, all the measured data from the sensors are communicated to the fusion center to be processed; nevertheless, as is widely known, centralized estimators have many computational disadvantages, which motivates the research into other fusion methods. In the distributed fusion method, each sensor estimates the state based on its own measurement data and then sends such an estimate to the fusion center, where fusion is carried out according to a certain information fusion criterion. Although the use of sensor networks offers several advantages, unreliable network characteristics usually cause problems during data transmission from the sensors to the fusion center, such as missing measurements and random communication packet losses and/or delays. Taking into account these network uncertainties, the models representing the relationships between the state and the measurements preclude direct application of the Kalman filter, and modifications of conventional estimation algorithms have been proposed (see, e.g., [3][4][5][6][7][8][9] and references therein).
As in the Kalman filter, independent white noises are considered in all the mentioned papers; however, this assumption may not be realistic and can be a limitation in many real-world problems in which noise correlation may be present. This situation arises, for example, when a target employs an electronic countermeasure such as noise jamming [10]; also, if the process noise and the sensor measurement noises depend on the system state, there may be cross-correlation between different sensor noises and between the process noise and the sensor noises. Likewise, if all the sensors operate in the same noisy environment, the measurement noises of different sensors are usually correlated.
For these reasons, the estimation problem in systems with correlated noises has received significant research interest in recent years. For example, the optimal Kalman filtering fusion problem in systems with cross-correlated sensor noises is addressed in [10], while [11, 12] study the same problem in systems with cross-correlated process noises and measurement noises; in these papers, correlated noises at the same sampling time are considered. In general, the assumption of correlation and cross-correlation of the process noise and measurement noises at different sampling times makes the derivation of optimal estimators difficult; this limitation has encouraged wider research into suboptimal Kalman-type estimation problems. In [13], a Kalman-type recursive filter is presented for systems with finite-step correlated process noises, and the filtering problem with multistep correlated process and measurement noises is investigated in [14]. The optimal robust nonfragile Kalman-type recursive filtering problem is studied in [15] for a class of uncertain systems with finite-step autocorrelated measurement noises and multiple packet dropouts. The problem of distributed weighted robust Kalman filter fusion is studied in [16] for a class of uncertain systems with autocorrelated and cross-correlated noises. In [17], a stochastic singular system with correlated noises at the same sampling time is transformed into an equivalent nonsingular system with correlated noises at the same and neighboring sampling times. Also, in [18], an augmented parameterized system with correlated noises at the same and neighboring sampling times is used to describe the sensor delay, packet dropout, and uncertain observation phenomena.
On the other hand, as noted above, the use of communication networks for transmitting measured data motivates the need to consider stochastic uncertainties. Missing measurements have been widely treated because they model a large class of real-world problems, such as fading phenomena in propagation channels, target tracking or, in general, situations where there are intermittent failures in the observation mechanism, accidental loss of some measurements, or inaccessibility of the data during certain times. The state estimation problem from missing measurements transmitted by multiple sensors has been studied based on the assumption that all the sensors are identical (see, e.g., [19][20][21][22]); however, this assumption can be unreasonable, since some real systems usually involve multiple sensors with different characteristics. Recently, the filtering problem with missing measurements whose statistical properties are not assumed to be the same in all the sensors has been addressed by several authors under different approaches and hypotheses on the processes involved (see, e.g., [23][24][25][26][27]). In all the above papers, Bernoulli random variables are used to model the missing measurement phenomenon, and hence it is assumed that the measurement signal is either completely lost (if the corresponding Bernoulli variable takes the value zero) or successfully transferred (when the Bernoulli variable is equal to one). Recently, this missing measurement model has been generalized by considering any discrete distribution on the interval [0, 1], which covers some practical applications where only partial information is missing (see [28, 29] and references therein).
Motivated by the above considerations, our attention is focused on investigating the optimal LS linear centralized and distributed fusion estimation problems in multisensor systems with missing measurements and autocorrelated and cross-correlated noises. In each sensor, the missing measurement phenomenon is governed by a scalar random variable with arbitrary discrete probability distribution over the interval [0, 1], and different sensors may have different missing probabilities. It is assumed that the process noise and all the sensor noises are one-step autocorrelated; different sensor noises are one-step cross-correlated; and the process noise and each sensor noise are two-step cross-correlated. This paper makes a twofold substantial novel contribution: (1) unlike most previous results with correlated noises, in which suboptimal Kalman-type estimators are proposed, in this paper optimal LS linear estimators are obtained by using an innovation approach, which provides a simple derivation of the estimation algorithms due to the fact that the innovations constitute a white process; and (2) our missing measurement model considers, at each sensor, the possibility of observations containing only partial information about the state, or even only noise.
The paper is organized as follows. In Section 2 the system model with autocorrelated and cross-correlated noises and missing measurements coming from multiple sensors is described. Also, suitable properties of the state and noise processes are specified, and a brief description of the innovation approach to the optimal LS linear estimation problem is included. In Section 3 a recursive algorithm for the centralized optimal linear filter is presented for the considered model (the derivation is deferred to the Appendix). Next, in Section 4, the local LS linear filters, together with their error covariance matrices and the error cross-covariance matrices between any two local estimators, are provided, and then the distributed optimal weighted fusion estimators and their error covariance matrices are obtained by applying the optimal information fusion criterion weighted by matrices in the linear minimum variance sense. Finally, in Section 5, a numerical simulation example is presented to show the effectiveness of the estimation algorithms proposed in the current paper, and some conclusions are drawn in Section 6.
Notation. The notation used throughout the paper is standard. For any matrix $A$, the symbols $A^T$ and $A^{-1}$ represent its transpose and inverse, respectively; $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space, and $\mathbb{R}^{m \times n}$ is the set of all real matrices of dimension $m \times n$.

Problem Formulation
Our aim is to obtain recursive algorithms for the optimal LS linear filtering problem in a class of discrete-time stochastic systems with missing measurements coming from multiple sensors, by using centralized and distributed fusion methods. In this section, firstly, the system model and the assumptions about the state and noise processes are presented and, secondly, the optimal LS linear estimation problem is formulated using an innovation approach.

Stochastic System
Model. Consider a discrete-time linear stochastic system with autocorrelated and cross-correlated noises and missing measurements coming from $m$ sensors. The phenomenon of missing measurements occurs randomly and, for each sensor, a different sequence of scalar random variables with discrete distribution over the interval $[0, 1]$ is used to model this phenomenon. Specifically, the following state equation is considered:
$$x_{k+1} = F_k x_k + w_k, \quad k \ge 0, \qquad (1)$$
where $x_k \in \mathbb{R}^n$ is the state, $\{w_k;\, k \ge 0\}$ is the process noise, and $F_k$, for $k \ge 0$, are known matrices with compatible dimensions.
Consider $m$ sensors which, at any time $k$, provide scalar measurements of the system state, perturbed by additive and multiplicative noises according to the following model:
$$y_k^i = \gamma_k^i H_k^i x_k + v_k^i, \quad k \ge 1, \; i = 1, 2, \ldots, m, \qquad (2)$$
where $y_k^i$ is the measurement from the $i$th sensor, $\{\gamma_k^i;\, k \ge 1\}$ are the scalar random variables modelling the missing measurements, $H_k^i$ are known matrices with compatible dimensions, and $\{v_k^i;\, k \ge 1\}$ are the sensor noises. The correlation conditions on the process noise and the measurement noises considered in this paper are the same as those in [16]. Systems with only finite-step correlated process noises, or with multistep correlated process and measurement noises, are considered in [13][14][15], among others. The current study can be extended to more general systems involving finite-step autocorrelated and cross-correlated noises with no difficulty beyond a greater complexity in the mathematical derivations.
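The one-step correlation structure assumed here can be realized, for example, by building the noises as moving averages of a shared white sequence. The following sketch uses illustrative coefficients (chosen in the spirit of the simulation section, not taken from the paper's model) and checks the resulting correlations empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
eta = rng.standard_normal(n + 2)  # common zero-mean, unit-variance white source

# One-step autocorrelated process noise and sensor noises built from a
# shared white sequence (illustrative coefficients, not the paper's values):
w  = 0.6 * (eta[1:n+1] + eta[2:n+2])   # process noise w_k
v1 = 0.5 * (eta[0:n]   + eta[1:n+1])   # sensor-1 noise v_k^1
v2 = 0.4 * (eta[0:n]   + eta[1:n+1])   # sensor-2 noise v_k^2

def corr(a, b, lag):
    """Empirical E[a_k b_{k+lag}] for zero-mean sequences."""
    return np.mean(a[:len(a)-lag] * b[lag:])

# w is one-step autocorrelated but two-step uncorrelated:
assert abs(corr(w, w, 1) - 0.36) < 0.02   # E[w_k w_{k+1}] = 0.36
assert abs(corr(w, w, 2)) < 0.02          # E[w_k w_{k+2}] = 0
# different sensor noises are cross-correlated at the same time:
assert abs(corr(v1, v2, 0) - 0.40) < 0.02 # E[v1_k v2_k] = 0.5 * 0.4 * 2
```

Because each noise overlaps its neighbor through one shared white term, the one-step (but not two-step) autocorrelation of the lemma-level assumptions appears automatically.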
Remark 2. From the state equation (1) and assumptions (ii) and (iv), it is easy to deduce that $D_k = E[x_k x_k^T]$ can be calculated recursively. Also, it is easy to see that the state $x_k$ is correlated with the measurement noises $v_k^i$, for $i = 1, 2, \ldots, m$, and the corresponding expectations can be obtained from the model assumptions.

Remark 3. According to assumption (iii), the scalar random variables $\gamma_k^i$ take values over the interval $[0, 1]$ and may follow any arbitrary discrete probability distribution over that interval, for instance, a Bernoulli distribution. Usually, Bernoulli random variables have been used to model the phenomenon of missing measurements (see, e.g., [25] and references therein), with $\gamma_k^i = 1$ meaning that the state $x_k$ is present in the measurement $y_k^i$ coming from the $i$th sensor at time $k$, while $\gamma_k^i = 0$ means that the state is missing from the measured data at time $k$ or, equivalently, that such an observation contains only the additive noise $v_k^i$. However, in practice, the information transmitted at a sampling time can often be neither completely missing nor completely successful, with only part of the information going through; in such situations, only partial information is missing and the proportion of missing data at a given moment is a fraction other than 0 or 1 (see, e.g., [28, 29] and references therein).

Stacked Measurement Equation.
As noted above, our aim is to solve the optimal LS linear estimation problem of the state $x_k$ based on the measurements $\{y_1^i, y_2^i, \ldots, y_k^i\}$, for $i = 1, 2, \ldots, m$, by using centralized and distributed fusion methods to process the measured sensor data. The centralized fusion method considers that all the measurement data coming from the $m$ sensors are transmitted to a fusion center to be processed; for this purpose, and to simplify the notation, the measurement equation (2) is rewritten in the following stacked form:
$$y_k = \Theta_k H_k x_k + v_k, \quad k \ge 1, \qquad (6)$$
where $y_k = (y_k^1, \ldots, y_k^m)^T$, $H_k = (H_k^{1T}, \ldots, H_k^{mT})^T$, $v_k = (v_k^1, \ldots, v_k^m)^T$, and $\Theta_k = \mathrm{Diag}(\gamma_k^1, \ldots, \gamma_k^m)$. The following properties of the noises in (6) are easily inferred from the model assumptions (ii)-(iv) previously stated.
(ii) The state vector $x_k$ and the measurement noise vector $v_k$ are correlated.

(iii) The random matrices $\Theta_k = \mathrm{Diag}(\gamma_k^1, \ldots, \gamma_k^m)$ have known means. Moreover, for any random matrix $G$ independent of $\Theta_k$,
$$E[\Theta_k G \Theta_k] = K_{\Theta_k} \circ E[G],$$
where $K_{\Theta_k} = E[\theta_k \theta_k^T]$, with $\theta_k = (\gamma_k^1, \ldots, \gamma_k^m)^T$, and $\circ$ denotes the Hadamard product [23].
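The Hadamard-product property for the random diagonal matrices can be checked numerically. In the sketch below, the discrete distribution of the entries of $\theta_k$ and the mean matrix $E[G]$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
m, trials = 3, 100_000

# theta entries drawn i.i.d. from a discrete distribution on [0, 1]
vals, probs = np.array([0.0, 0.5, 1.0]), np.array([0.1, 0.5, 0.4])
G_mean = np.array([[2.0, 0.3, 0.0],
                   [0.3, 1.0, 0.2],
                   [0.0, 0.2, 0.5]])

acc = np.zeros((m, m))
for _ in range(trials):
    theta = rng.choice(vals, size=m, p=probs)
    G = G_mean + 0.1 * rng.standard_normal((m, m))  # G independent of theta
    acc += np.diag(theta) @ G @ np.diag(theta)      # Theta G Theta
emp = acc / trials

# K_theta = E[theta theta^T]: off-diagonal E[theta]^2, diagonal E[theta^2]
K = np.full((m, m), 0.65 ** 2)
np.fill_diagonal(K, 0.525)
pred = K * G_mean  # elementwise (Hadamard) product K o E[G]
assert np.max(np.abs(emp - pred)) < 0.05
```

Entrywise, $(\Theta_k G \Theta_k)_{ij} = \gamma_k^i G_{ij} \gamma_k^j$, so taking expectations under independence gives exactly the Hadamard product stated in property (iii).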

Innovation Approach to the Optimal LS Linear Estimation Problem.
To address the optimal LS linear estimation problem of the state $x_k$ based on the measurements $\{y_1^i, y_2^i, \ldots, y_k^i\}$, $i = 1, 2, \ldots, m$, the centralized and distributed fusion methods will be used. In both cases, recursive algorithms for the LS linear estimators will be established using an innovation approach and the orthogonal projection lemma (OPL); more specifically, we have the following.
Centralized Fusion Estimation Problem. Our aim is to obtain, by recursive algorithms, the optimal LS linear filter $\hat{x}_{k/k}$ of the state $x_k$ based on the measurements $\{y_1, y_2, \ldots, y_k\}$ given in (6).
As is known, the LS linear filter $\hat{x}_{k/k}$ is the orthogonal projection of the state $x_k$ onto the linear space spanned by $\{y_1, y_2, \ldots, y_k\}$. These observations are generally nonorthogonal vectors, but the Gram-Schmidt orthogonalization procedure allows us to replace them by a set of orthogonal vectors, called innovations, defined as the difference between each observation and its one-stage predictor. Due to the orthogonality of the innovations, and since the innovation process is uniquely determined by the observations, the LS linear filter $\hat{x}_{k/k}$ can be calculated as a linear combination of the innovations; namely,
$$\hat{x}_{k/k} = \sum_{j=1}^{k} E[x_k \mu_j^T]\, \Pi_j^{-1} \mu_j, \qquad (10)$$
where $\mu_j = y_j - \hat{y}_{j/j-1}$ are the innovation vectors, with $\hat{y}_{j/j-1}$ the one-stage LS linear predictor of $y_j$, and $\Pi_j = E[\mu_j \mu_j^T]$ the innovation covariance matrix.
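For zero-mean observations with a known joint covariance, this Gram-Schmidt orthogonalization is equivalent to a lower-triangular (Cholesky-type) transform of the stacked observation vector: each normalized innovation is a causal linear combination of past observations, and the innovations are mutually uncorrelated. A small sketch with an arbitrary covariance, which is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 4, 200_000

# An arbitrary (non-orthogonal) observation covariance:
A = rng.standard_normal((N, N))
S = A @ A.T + N * np.eye(N)

# With S = L L^T, nu = L^{-1} y gives uncorrelated (normalized) innovations;
# L lower triangular makes the transform causal, as in Gram-Schmidt.
L = np.linalg.cholesky(S)
Y = rng.multivariate_normal(np.zeros(N), S, size=trials)
Nu = np.linalg.solve(L, Y.T).T          # innovation samples

emp_cov = Nu.T @ Nu / trials
assert np.max(np.abs(emp_cov - np.eye(N))) < 0.02       # mutually orthogonal
# nu_j is a linear combination of y_1, ..., y_j only (triangular inverse):
assert np.allclose(np.triu(np.linalg.inv(L), 1), 0.0)
```

This whiteness of the innovation process is precisely what makes the expansion (10) and the recursive derivations of the next sections tractable.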

Optimal LS Linear Centralized Fusion Estimation
In this section a recursive algorithm for the centralized optimal (under the LS criterion) linear filter $\hat{x}_{k/k}$ is derived. This algorithm is deduced using (10) and the OPL, and it is presented in Theorem 5. First, in order to simplify the proof of Theorem 5, the following lemma is established.
Lemma 4. Under assumptions (i)-(iv), recursive expressions hold for the matrices $W_{k,k}$ and $V_{k,k-1}$, which describe the correlation between the noises and the innovations.

Proof. Using (1) and (6), $W_{k,k}$ can be calculated directly from the model assumptions. Taking into account that $v_k$ is independent of $\mu_1, \ldots, \mu_{k-2}$, the calculation of $V_{k,k-1}$ is similar to that of $W_{k,k}$, and hence the proof is omitted.
Theorem 5. For the system model (1) and measurement model (6), under assumptions (i)-(iv), the optimal LS linear filter $\hat{x}_{k/k}$ is obtained from the state predictor $\hat{x}_{k/k-1}$ and the innovation $\mu_k = y_k - \hat{y}_{k/k-1}$, through recursive expressions for the matrix $X_{k,k} = E[x_k \mu_k^T]$, the prediction error covariance matrix $\Sigma_{k/k-1}$, the filtering error covariance matrix $\Sigma_{k/k}$, and the innovation covariance matrix $\Pi_k$. The matrices $D_k$, $H_k$, $W_{k,k}$, and $V_{k,k-1}$ involved are given in (4), (8), (12), and (13), respectively.
Remark 6. In conventional estimation problems in systems with missing measurements and uncorrelated additive white noises, the one-stage state and observation predictors are calculated as $\hat{x}_{k/k-1} = F_{k-1}\hat{x}_{k-1/k-1}$ and $\hat{y}_{k/k-1} = E[\Theta_k] H_k \hat{x}_{k/k-1}$, respectively. However, this is not true for the problem at hand since, due to the correlation assumption (ii), the noise estimators $\hat{w}_{k-1/k-1}$ and $\hat{v}_{k/k-1}$ must be taken into account in the derivation of the predictors. Besides the consideration of missing measurements, this is the main difference between the optimal estimators proposed in the current paper and the suboptimal Kalman-type ones proposed in [16], where the noise estimators are taken to be equal to zero.

Distributed Fusion Estimation
One of the main disadvantages of the centralized fusion estimators derived in Section 3 is that they may have a high computational cost due to augmentation. Moreover, as is widely known, the centralized approach has several other drawbacks, such as difficulties with fault detection and isolation, and poor reliability. To overcome these disadvantages, our aim in this section is to address the optimal distributed fusion estimation problem, in which each sensor provides its local LS linear estimator and the corresponding estimation error covariance matrix; these local estimators, along with the covariance and cross-covariance matrices of the estimation errors between any two sensors, are then sent to the fusion center for fusion based on the matrix-weighted fusion estimation criterion in the linear minimum variance sense [30].
Proof. The proof, based on the innovation approach and the OPL, is omitted since it is analogous to that of Theorem 5. Nevertheless, it should be noted that, in this proof, the Hadamard product is not used since, instead of the diagonal stochastic matrix $\Theta_k$, the scalar variable $\gamma_k^i$ is now involved in the derivation of the estimators.

Remark 8. As indicated in Remark 6 for the centralized estimators, it must be noted that, due to the correlation assumption (ii) on the additive noises $\{w_k\}$ and $\{v_k^i\}$, the local noise estimators must be incorporated, unlike in conventional algorithms with uncorrelated white noises. This issue, together with the consideration of missing measurements at each sensor, constitutes the main difference between the current optimal local estimators and the suboptimal local estimators proposed in [16].

Cross-Covariance Matrices of Local Estimation Errors.
To apply the optimal fusion criterion weighted by matrices in the linear minimum variance sense, the filtering and prediction error cross-covariance matrices, $\Sigma^{ij}_{k/k}$ and $\Sigma^{ij}_{k/k-1}$, between the local estimators of any two subsystems must be calculated.
For simplicity, besides the notation of Theorem 7, for $i \ne j$, $i, j = 1, 2, \ldots, m$, we introduce some additional notation. Also, in order to simplify the calculation of the error cross-covariance matrices, the following lemmas are given.
(a) The expectation $E[\hat{x}^i_{k/k-1}\mu_k^{jT}]$ is computed by using the OPL, from which expression (33) is immediately obtained. Finally, the remaining derivation is similar to that of (13) and hence it is omitted.

(c) Taking into account expression (26) for $\mu_k^j$, with (2) for $y_k^j$, we obtain an expression involving $\Sigma^{ij}_{k/k-1}$, the cross-covariance matrix of the prediction errors between the $i$th and the $j$th sensor subsystems. Following an analogous reasoning, now using (25), and finally using again (24) for $\hat{x}^j_{k-1/k-1}$, the expression for the cross-covariance matrices of the local prediction errors is easily obtained.

Distributed Fusion Filtering Estimators. Once the local LS linear filtering estimators $\hat{x}^i_{k/k}$ and their error covariance matrices $\Sigma^{i}_{k/k}$, given in Theorem 7, along with the error cross-covariance matrices $\Sigma^{ij}_{k/k}$, given in Theorem 12, are available, the distributed optimal weighted fusion estimators and their error covariance matrices are obtained by applying the optimal information fusion criterion weighted by matrices in the linear minimum variance sense [30].
Proof. The proof is omitted because it follows directly from the optimal information fusion criterion weighted by matrices in the linear minimum variance sense [30].
Remark 14. The proposed distributed optimal LS linear fusion filter requires the inversion of an $mn \times mn$ matrix, with $n$ the dimension of the system state and $m$ the number of sensors. Consequently, the proposed distributed fusion method has a computational complexity of $O((mn)^3)$, equal to that of the distributed Kalman-type filter in [16] and less than that of distributed fusion filters based on the state augmentation approach. Hence, our distributed fusion method is superior to the filter proposed in [16] (since it has the same computational burden but better accuracy) and also to the distributed fusion filters based on state augmentation (since it has lower computational complexity).
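The matrix-weighted fusion rule in the linear minimum variance sense [30] that Remark 14 refers to can be sketched as follows, under the standard form of that criterion: the weights are $[A_1 \cdots A_m] = (e^T \Sigma^{-1} e)^{-1} e^T \Sigma^{-1}$ with $e = [I; \ldots; I]$, and the $(mn)^3$ cost comes from inverting the joint error covariance $\Sigma$. The numeric example is hypothetical:

```python
import numpy as np

def fuse(local_estimates, Sigma):
    """Matrix-weighted fusion in the linear minimum variance sense.
    local_estimates: list of m unbiased state estimates of dimension n.
    Sigma: (m*n, m*n) joint covariance of the local estimation errors,
    whose (i, j) block is the cross-covariance between sensors i and j."""
    m = len(local_estimates)
    n = local_estimates[0].shape[0]
    e = np.tile(np.eye(n), (m, 1))        # e = [I; I; ...; I]
    Si = np.linalg.inv(Sigma)             # the O((m n)^3) step of Remark 14
    Pf = np.linalg.inv(e.T @ Si @ e)      # fused error covariance
    W = Pf @ e.T @ Si                     # stacked weights [A_1 ... A_m]
    return W @ np.concatenate(local_estimates), Pf

# Two unbiased scalar estimates with uncorrelated errors reduce to the
# classical inverse-variance weighting (weights 3/4 and 1/4 here):
x, P = fuse([np.array([1.0]), np.array([2.0])],
            np.array([[1.0, 0.0], [0.0, 3.0]]))
assert abs(x[0] - 1.25) < 1e-12
assert abs(P[0, 0] - 0.75) < 1e-12
```

The weights sum to the identity by construction, so the fused estimator stays unbiased, and off-diagonal blocks of `Sigma` let the rule account for the cross-correlations computed in Theorem 12.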

Numerical Simulation Example
In this section, a numerical simulation example is presented to illustrate the effectiveness of the centralized and distributed filtering algorithms proposed in this paper. Consider a scalar first-order autoregressive model with missing measurements coming from two sensors with autocorrelated and cross-correlated noises. According to the proposed observation model, two different independent sequences of random variables with a certain probability distribution over the interval $[0, 1]$ are used to model the missing phenomenon. Specifically, the following model is considered, where the initial state $x_0$ is a zero-mean Gaussian variable with variance $P_0 = 1$. The noise processes $\{w_k;\, k \ge 0\}$ and $\{v_k^i;\, k \ge 1\}$, $i = 1, 2$, are defined by $w_k = 0.6(\eta_{k+1} + \eta_{k+2})$, where $\{\eta_k;\, k \ge 1\}$ is a zero-mean Gaussian white process with known constant variance.
The phenomenon of missing measurements for each sensor is described as follows.
(1) In the first sensor, a sequence of independent and identically distributed (i.i.d.) random variables $\{\gamma_k^1;\, k \ge 1\}$ is considered, with probability distribution given by $P[\gamma_k^1 = 0] = 0.1$, $P[\gamma_k^1 = 0.5] = 0.5$, and $P[\gamma_k^1 = 1] = 0.4$. If $\gamma_k^1 = 0$, which occurs with probability 0.1, the state $x_k$ is missing and the observation $y_k^1$ contains only the noise $v_k^1$; if $\gamma_k^1 = 0.5$, only partial information of the state $x_k$ is missing in such an observation, which happens with probability 0.5; and, finally, the state is present in the observation $y_k^1$ with probability 0.4, when $\gamma_k^1 = 1$. The mean and variance of these variables are easily calculated, being $E[\gamma_k^1] = 0.65$ and $\mathrm{Var}[\gamma_k^1] = 0.1025$. (2) In the second sensor, a sequence of i.i.d. Bernoulli random variables $\{\gamma_k^2;\, k \ge 1\}$ with $P[\gamma_k^2 = 1] = \lambda$ is considered. Clearly, for all $k$, $E[\gamma_k^2] = \lambda$ and $\mathrm{Var}[\gamma_k^2] = \lambda(1 - \lambda)$. To illustrate the feasibility and effectiveness of the proposed estimators, a MATLAB program was run in which fifty iterations of the proposed algorithms were performed for different values of the sensor noise coefficients $c_1$, $c_2$ and the probability $\lambda$. Using simulated values of the state and the corresponding observations, both distributed and centralized filtering estimates of the state were calculated, as well as the corresponding error variances, which provide a measure of the estimation accuracy.
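The first and second moments of the missing-measurement variables stated above can be verified by simulation; in the sketch below, `lam` plays the role of the Bernoulli probability $\lambda$ (the value 0.8 matches the first experiment):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Sensor 1: gamma1 in {0, 0.5, 1} with probabilities 0.1, 0.5, 0.4
g1 = rng.choice([0.0, 0.5, 1.0], size=n, p=[0.1, 0.5, 0.4])
# Sensor 2: Bernoulli with P(gamma2 = 1) = lam
lam = 0.8
g2 = (rng.random(n) < lam).astype(float)

# Moments used by the filtering algorithms:
assert abs(g1.mean() - 0.65) < 5e-3     # E[g1] = 0.1*0 + 0.5*0.5 + 0.4*1
assert abs(g1.var() - 0.1025) < 5e-3    # E[g1^2] - E[g1]^2 = 0.525 - 0.4225
assert abs(g2.mean() - lam) < 5e-3
assert abs(g2.var() - lam * (1 - lam)) < 5e-3
```

The value $\gamma_k^1 = 0.5$ is precisely the "partial information" case that a Bernoulli model cannot represent.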
Firstly, for $\lambda = 0.8$, the local, centralized, and distributed filtering error variances are displayed in Figure 1, considering the values $c_1 = 1$ and $c_2 = 0.5$. According to Theorem 13, this figure corroborates that the optimal distributed fusion filter performs considerably better than each local filter, but slightly worse than the centralized filter. Nevertheless, although the distributed fusion filter has slightly lower accuracy than the centralized one, both filters perform similarly and provide good estimates. Moreover, this slight difference is compensated because the distributed fusion structure is, in general, more robust, reduces the computational cost, and improves reliability due to its parallel structure. For these reasons, the distributed filter is generally preferred in practice.
Next, to analyze the performance of the proposed estimators versus the probability that the state $x_k$ is present in the measurements of the second sensor, the centralized and distributed filtering error variances have been calculated for $c_1 = 1$, $c_2 = 0.5$, and different values of the probability, $\lambda = 0.2$, $0.6$, and $0.8$. The results are displayed in Figure 2; analysis of this figure reveals that, as $\lambda$ increases (or, equivalently, as the probability $1 - \lambda$ that the state is missing from the observations of the second sensor decreases), the filtering error variances become smaller and, hence, better estimates are obtained. This figure also shows that, for all the considered probability values, the error variances corresponding to the centralized filter are always less than those of the distributed one. On the other hand, to compare the performance of the estimators for different degrees of correlation between the state and the observation noises, the centralized and distributed filtering error variances have been calculated considering $c_2 = 0.5$, $\lambda = 0.8$, and different values of $c_1$, specifically, $c_1 = 0.25$, $0.5$, $0.75$, and $1$. These values provide different correlations between the process noise $\{w_k\}$ and the first sensor observation noise $\{v_k^1\}$ and, consequently, different correlations between the state and the first sensor observation noise. The error variances are displayed in Figure 3, from which it is inferred that the error variances are smaller (and, consequently, the performance of the estimators is better) as $c_1$ grows; these results were expected, since the correlation between the state and the observations increases with $c_1$. Analogous results are obtained for other values of $c_2$ and the probability $\lambda$. Now, completing the results of the two previous figures, the performance of the filters is analyzed when $c_2 = 0.5$, the probability $\lambda$ is varied from 0.1 to 0.9, and the values $c_1 = 0.25$, $0.5$, $0.75$, $1$, $1.25$, and $1.5$ are considered. It must be noted that, in all the cases
examined, the error variances show insignificant variation from a certain iteration on and, consequently, only the values at a specific iteration (viz., $k = 50$) are shown. The results are presented in Figure 4, which, for the sake of clarity, displays only the distributed filtering error variances. In agreement with the comments on Figures 2 and 3, this figure shows that, for a fixed value of $c_1$, the performance of the estimators improves as $\lambda$ grows and that, for a fixed value of $\lambda$, more accurate estimates are obtained as $c_1$ increases. Hence, from this figure it can be concluded that, as $c_1$ or $\lambda$ decreases (which means that the correlation between the state and the first sensor observation noise decreases or the probability that the state is present in the second sensor measurements decreases, resp.), the filtering error variances become greater and, consequently, worse estimates are obtained. Finally, a comparative analysis is presented between the classical Kalman filter [2], the Kalman-type filter with correlated and cross-correlated noises given in [16], the filter proposed in [23] for systems with different failure rates in multisensor networks, and the centralized and distributed filters proposed in this paper. For the comparison, the same parameter values as in Figure 1 are considered ($c_1 = 1$, $c_2 = 0.5$, and $\lambda = 0.8$).
On the basis of one thousand independent simulations of the mentioned algorithms, a comparison between the different filtering estimates is performed using the mean square error (MSE) criterion. For $s = 1, \ldots, 1000$, let $\{x_k^{(s)},\ k = 1, \ldots, 50\}$ denote the $s$th set of artificially simulated data (taken as the $s$th set of true values of the state), and $\hat{x}_{k/k}^{(s)}$ the filtering estimate at sampling time $k$ in the $s$th simulation run. For each algorithm, the filtering MSE at time $k$ is calculated by
$$\mathrm{MSE}_k = \frac{1}{1000} \sum_{s=1}^{1000} \left(x_k^{(s)} - \hat{x}_{k/k}^{(s)}\right)^2.$$
The values $\mathrm{MSE}_k$, for $k = 1, \ldots, 50$, are displayed in Figure 5, which shows that, for all $k$, the proposed centralized and distributed filters have approximately the same $\mathrm{MSE}_k$ values, which in turn are smaller than those of the filter in [23] and considerably smaller than those of the filters in [2, 16]. Hence, we can conclude that, according to the MSE criterion, the proposed filtering estimates perform significantly better than other filters in the literature.
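The Monte Carlo MSE criterion described above can be computed as follows; the synthetic data used in the sanity check are illustrative placeholders, not the paper's simulation outputs:

```python
import numpy as np

def mse_curve(true_states, estimates):
    """Monte Carlo mean square error per time step.
    true_states, estimates: arrays of shape (runs, T)."""
    err = np.asarray(true_states) - np.asarray(estimates)
    return np.mean(err ** 2, axis=0)  # MSE_k = (1/runs) * sum_s (x_k - xhat_k)^2

# Sanity check on synthetic data: a perfect estimator has zero MSE,
# and pure-noise errors give an MSE close to the noise variance (0.09):
rng = np.random.default_rng(4)
x = rng.standard_normal((1000, 50))
assert np.allclose(mse_curve(x, x), 0.0)
noisy = x + 0.3 * rng.standard_normal((1000, 50))
mse = mse_curve(x, noisy)
assert mse.shape == (50,)
assert np.all(np.abs(mse - 0.09) < 0.03)
```

Averaging over runs (rather than over time) is what produces the per-iteration $\mathrm{MSE}_k$ curves plotted in Figure 5.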

Conclusions
The LS linear estimation problem from missing measurements has been investigated for multisensor linear discrete-time systems with autocorrelated and cross-correlated noises. The main contributions are summarized as follows.
(1) Using both centralized and distributed fusion methods to process the measurement data from the different sensors, recursive optimal LS linear filtering algorithms are derived by an innovation approach.
(2) At each sensor, the possibility of missing measurements (i.e., observations containing only partial information about the state or even only noise) is modelled by a sequence of independent random variables taking discrete values over the interval [0, 1].
(3) The multisensor system model considered in the current paper covers situations where the sensor and process noises are one-step autocorrelated and two-step cross-correlated. Also, one-step cross-correlations between different sensor noises are considered. This correlation assumption is valid in a wide spectrum of applications, for example, in target tracking systems with process and measurement noises dependent on the system state, or in situations where a target is observed by multiple sensors that all operate in the same noisy environment. Nevertheless, the current study can be extended to more general systems involving finite-step autocorrelated and cross-correlated noises with no difficulty beyond a greater complexity in the mathematical expressions.
(4) The applicability of the proposed centralized and distributed filtering algorithms is illustrated by a numerical simulation example, where a scalar state process generated by a first-order autoregressive model is estimated from missing measurements coming from two sensors with autocorrelated and cross-correlated noises. The results confirm that the centralized and distributed fusion estimators have approximately the same error variances, with a slight inferiority of the distributed one that is compensated by a reduced computational burden and reduced communication demands on the sensor network. Also, compared with some existing estimation methods, the proposed algorithms provide better estimates in the mean square error sense.

Theorem 12 .
and, from (32), the expression for $\Pi^{ij}_{k-1,k}$ is immediately derived. In the following theorem, recursive formulas to calculate the filtering and prediction error cross-covariance matrices, $\Sigma^{ij}_{k/k}$ and $\Sigma^{ij}_{k/k-1}$, respectively, are derived. Under assumptions (i)-(iv), the cross-covariance matrices $\Sigma^{ij}_{k/k}$ of the filtering errors between the $i$th and the $j$th sensor subsystems are recursively computed. Taking into account that $E[x_k \mu_k^{jT}] = X^{ij}_{k,k}$ and the corresponding expression for $E[\hat{x}^i_{k/k-1}\mu_k^{jT}]$, the recursive expression for the cross-covariance matrices of the local filtering errors is immediately deduced.

Figure 1: First and second sensor local filtering error variances, distributed fusion filtering error variances, and centralized fusion filtering error variances.

Figure 5: Comparison of MSE for different filters.
The shorthand $\mathrm{Diag}(a_1, \ldots, a_m)$ denotes a diagonal matrix whose diagonal entries are $a_1, \ldots, a_m$. If the dimensions of matrices are not explicitly stated, they are assumed to be compatible for algebraic operations. $\delta_{k-s}$ is the Kronecker delta function, which is equal to one if $k = s$, and zero otherwise. Moreover, for arbitrary random vectors $\alpha$ and $\beta$, we denote $\mathrm{Cov}[\alpha, \beta] = E[(\alpha - E[\alpha])(\beta - E[\beta])^T]$ and $\mathrm{Cov}[\alpha] = \mathrm{Cov}[\alpha, \alpha]$, where $E[\cdot]$ stands for the mathematical expectation operator. Finally, $\hat{\alpha}$ denotes the estimator of $\alpha$ and $\tilde{\alpha} = \alpha - \hat{\alpha}$ the estimation error.