A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks

Introduction
Multisensor fusion, also known as multisensor data fusion [1,2] or multisensor information fusion [3], is an emerging technology originally developed to serve military needs such as battlefield surveillance, automated target recognition, remote sensing, and guidance and control of autonomous vehicles. In recent years, multisensor fusion technology has been extensively applied to a variety of civilian applications such as monitoring of complex machinery, medical diagnosis, robotics, video and image processing, and smart buildings. The essence of multisensor fusion techniques is to combine data from multiple sensors and related information from associated databases to achieve improved accuracy and more specific inferences than could be achieved by the use of a single sensor alone [1].
Arguably, the Kalman filtering algorithm [4] is one of the most popular multisensor fusion methods, mainly due to its simplicity, ease of implementation, and optimality in the mean-squared error sense [2], and many important fusion results based on it have been reported. Notably, Kalman-filtering-based multisensor fusion algorithms require either independence of the local estimation errors or prior knowledge of their cross covariances to produce consistent results [2]. Unfortunately, in many practical scenarios, these cross covariances are unknown; this situation is usually referred to as unknown correlations [5], unknown cross covariances [6], or unavailable cross-correlations [7]. Unknown correlations, if not dealt with appropriately, may significantly degrade the system performance. It is no wonder that fusion with unknown correlations has drawn constant attention from both theoretical and engineering perspectives.
Recent years have witnessed significant development in the area of sensor networks, with motivating applications ranging from wireless camera networks [8] to simultaneous localization and mapping [9] and distributed multitarget tracking [10]. The general setup is to observe the underlying process through a group of sensors organized according to a given network topology, so that each individual observer estimates the system state based not only on its own measurement but also on its neighbors'. Consequently, a fundamental problem in sensor networks is to develop distributed algorithms that estimate the state of interest more effectively; such a problem is often referred to as the distributed filtering or estimation problem [11,12]. In a typical distributed filtering setting, each node senses only local knowledge of the state of interest.
To combine the limited information from individual nodes, a suitable multisensor fusion scheme is necessary to represent the state of the object appearing in the surrounding environment. Besides, the scalability requirement, the lack of a fusion center, and the limited knowledge of the whole sensor network advocate the use of consensus approaches [13-15] to achieve a collective fusion over the network by iterating local fusion with neighboring nodes. It is these reasons that gave rise to the development of consensus filtering [10,16], whose structure is shown in Figure 1.
In fact, the idea of consensus filtering stems from [17,18]; the term consensus filtering was first dubbed in [19,20] in 2005. It was a golden age for consensus theory [13], seven years after the first practical realization of wireless sensor networks in 1998 by the Smart Dust project at the University of California at Berkeley [21]. Since then, a new wave of research on consensus filtering has been triggered, not only in theoretical development but also in engineering practice.
There are several ways in the literature to design a consensus filter. For a standard Kalman filter, a consensus scheme can be applied to either the update step or the prediction step so as to construct a consensus filter without losing the Kalman filtering features [27]. More broadly, consensus filtering approaches such as consensus on estimates, consensus on measurements, and consensus on information have been proposed to design a variety of consensus filters [24,28]. Recently, the H∞ criterion has also been used to devise the H∞ consensus filter. Following similar classifications to [24,28], we cover the consensus filtering approaches in four groups, that is, consensus on estimates, consensus on measurements, consensus on information, and H∞ consensus.
In this paper, we focus on multisensor fusion and consensus filtering for distributed state estimation problems and provide a systematic review of the advances in these two areas. First, both theories and applications of multisensor fusion techniques are revisited, in particular, multisensor fusion with unknown correlations, which pervasively exist in most distributed filtering settings. Next, we classify the existing consensus filtering schemes into four groups; both benefits and drawbacks of each group are discussed. Furthermore, a series of newly published results on multisensor fusion and consensus filtering are surveyed. Finally, some concluding remarks are drawn and several related future research directions are pointed out.
The remainder of this paper is outlined as follows. The multisensor fusion algorithms with or without unknown correlations are investigated in Section 2. In Section 3, four commonly used consensus approaches for designing consensus filters are reviewed rigorously. Several of the latest results on multisensor fusion and consensus filtering can be found in Section 4. In Section 5, both conclusions and future research topics are given.
Notation. We denote by R^n the n-dimensional Euclidean space. ‖⋅‖ refers to the Euclidean norm in R^n. For a matrix A, A^T and A^(-1) separately represent its transpose and inverse, A > 0 means that matrix A is positive definite, and tr{A} is shorthand for the trace of matrix A. The sensor nodes of the network communicate over a connected digraph G = (N, E), where N = {1, 2, ..., n} is the set of sensor nodes and E denotes the set of connections between nodes. An edge (i, j) ∈ E indicates that node i can receive information from node j. Further, if node i is included in its own neighbor set, we denote the set as N_i (N_i = {j | (i, j) ∈ E}); otherwise, we denote it as N_i \ {i}. The notation |N_i| denotes the cardinality of N_i.
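To make the digraph notation concrete, the following minimal sketch builds the neighbor sets from a directed edge list (the node labels and edges are purely illustrative):

```python
# Sketch of the digraph notation G = (N, E): an edge (i, j) means
# node i can receive information from node j.  Edges are made up.
edges = {(1, 2), (2, 1), (2, 3), (3, 2), (3, 4), (4, 3), (1, 1)}

def neighbors(i):
    """Exclusive neighbor set: nodes j != i that node i can hear from."""
    return {j for (a, j) in edges if a == i and j != i}

def inclusive_neighbors(i):
    """Neighbor set including node i itself."""
    return neighbors(i) | {i}

print(neighbors(2))                  # {1, 3}
print(inclusive_neighbors(2))        # {1, 2, 3}
print(len(inclusive_neighbors(2)))   # cardinality: 3
```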

Multisensor Fusion
The development of multisensor fusion was originally triggered by demands from the military area. Later, numerous applications in nonmilitary areas provided another impetus to multisensor fusion technology. In this section, we investigate the most commonly used multisensor fusion techniques for local estimation errors with correlations, without correlations, or with unknown correlations. The important fusion rules for these three types are listed in Table 1, where (x̂_i, P_i), i ∈ N, and (x̂, P) are the local and fused estimate and error covariance pairs, respectively. Let us now conduct a systematic review of the development of multisensor fusion technology during the past few decades.

Multisensor Fusion.
At the beginning of the 1970s, the US Navy merged data about Soviet naval movements at data fusion centers; the result turned out to be more accurate than using data from a single sonar [29]. This discovery heralded the research on multisensor fusion; since then, voluminous results have been published over the past several decades; see, for example, [3,30-39]. For example, in 1971, the multisensor fusion problem was first studied in [30], where the estimation errors in two track files from different sensors but corresponding to the same target were assumed to be independent. Later, more general cases can be found in [31,32], including Carlson's federated square root filter [32]. Moreover, a decentralized linear estimator in the presence of correlated measurement noise was constructed in [33]. Further, in 1986, [34] came up with a fusion equation that takes the cross covariance between local estimates into consideration. In 1994, [35] generalized the results in [34] and an optimal fusion equation was deduced in the sense of the maximum likelihood estimate, but the posterior probability density function was required to be a standard normal density. Ten years later, this limitation was overcome by [3], where an optimal information fusion criterion was rederived in the linear minimum variance sense with the help of a Lagrange multiplier; from the mathematical point of view, this fusion rule is equivalent to the best linear unbiased estimation fusion rule in [36] and the weighted least squares fusion rule in [37]. In a recent paper [38], a multisensor fusion scheme was devised specifically for nonlinear estimation within the unscented Kalman filtering framework. More recently, for the case of singular estimation error covariances and measurement noise covariances, an optimal distributed Kalman filtering fusion strategy was proposed in [39]. Besides, if the local estimates are generated at different rates, the corresponding multirate fusion results were given in [40,41].
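The cross-covariance-aware fusion of two track estimates, in the spirit of the rule in [34], can be sketched as follows. This is a scalar (1-D) illustration with made-up numbers; with zero cross covariance it reduces to the usual independent-errors fusion:

```python
def fuse_with_cross_cov(x1, p1, x2, p2, p12):
    """Scalar sketch of track-to-track fusion with a known error
    cross covariance p12; with p12 = 0 it reduces to the usual
    independent-errors (inverse-variance) fusion."""
    s = p1 + p2 - 2.0 * p12      # variance of the difference x1 - x2
    k = (p1 - p12) / s           # fusion gain
    x = x1 + k * (x2 - x1)       # fused estimate
    p = p1 - k * (p1 - p12)      # fused error variance
    return x, p

# Illustrative numbers: equal local variances, positive correlation.
x, p = fuse_with_cross_cov(1.0, 2.0, 2.0, 2.0, 0.5)
print(x, p)  # 1.5 1.25
```

Note that the fused variance (1.25) is larger than the 1.0 an independence-assuming fusion would claim for the same local variances: accounting for the positive correlation makes the result less optimistic but consistent.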

Nowadays, with the rapid development of sensor network technology, multisensor fusion over sensor networks has become an active research area. Recently, the focus of multisensor fusion has shifted from centralized fusion to distributed fusion, mainly due to the scalability, robustness to failure, structural flexibility, and lower communication resource demands of distributed sensor networks; see, for example, [17,18]. For instance, [17] proposed a distributed multisensor fusion scheme that allows the nodes of a sensor network to track the average of inverse-covariance-weighted measurements and inverse covariance matrices so as to reach so-called dynamic average consensus. At almost the same time, [18] also introduced a distributed multisensor fusion scheme based on average consensus; the difference is that the scheme diffuses information across the network by updating each node's data with a weighted average of its neighbors', which finally converges to the global maximum-likelihood estimate. In fact, it was these two papers that gave rise to consensus filtering, which is the next focus of this survey.
It is worth noting that the above results are obtained within the Kalman filtering framework (broadly speaking, the Bayesian framework), which is only a small fraction of the whole family of multisensor fusion techniques; other frameworks include, but are not limited to, evidential belief reasoning [42], fuzzy reasoning [43], probabilistic fusion [44], hybrid fusion [45], and random set theoretic fusion [46]; interested readers may consult the survey paper [2] for more details.

Fusion with Unknown Correlations.
Unknown correlations [5] are often referred to as unknown cross covariances [6] or unavailable cross-correlations [7]. The main causes of unknown correlations can be categorized into the following two groups: (1) Lack of knowledge of the true system: (i) Unidentified correlations: for example, correlations from the observation noises, as yet unidentified, may occur during the motion of a vehicle equipped with a suite of navigation sensors [5]. (ii) Approximate implementation: it is often assumed that the prior estimation error and the new measurement error are uncorrelated, which may introduce a certain degree of unknown correlation into the final implementation [47].
(2) Correlations that are too difficult to describe: (i) Involving too many variables: in applications like map building and weather forecasting, the process model can involve thousands of states, which means maintaining a full covariance matrix is impractical [5]. (ii) Data incest [24], or the double counting problem [48]: due to the presence of network loops, the same information is often inadvertently used several times in a distributed fusion setting. (iii) Difficulty of calculation: for example, how to acquire reliable correlations in nonlinear estimation is still an open question.
To summarize, it is widely believed that unknown correlations ubiquitously exist in a diverse range of multisensor fusion problems. Neglecting the effect of unknown correlations can result in grave performance deterioration or even divergence. As such, the problem has attracted sustained attention from researchers for decades. However, owing to their intricate unknown nature, it is not easy to come up with a satisfactory scheme to address fusion problems with unknown correlations. Ignoring the correlations directly, which is the naive fusion [49], may lead to divergence of the filter. To compensate for this kind of divergence, a common suboptimal approach is to increase the system noise artificially. However, this heuristic requires substantial expertise and compromises the integrity of the Kalman filter framework [50].
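A toy scalar calculation illustrates why naive fusion is inconsistent (the numbers are made up): two unit-variance estimates with positively correlated errors, fused as if independent, report a variance smaller than the actual mean-squared error, so the filter becomes overconfident.

```python
# Two local estimates x1 = x + e1, x2 = x + e2 with var(e1) = var(e2) = 1
# and cov(e1, e2) = c > 0.  Naive (independence-assuming) information
# fusion averages them and claims variance (1/p1 + 1/p2)^(-1).
p1 = p2 = 1.0
c = 0.6                                  # true but unknown correlation

claimed = 1.0 / (1.0 / p1 + 1.0 / p2)    # reported by naive fusion
actual = 0.25 * (p1 + p2 + 2.0 * c)      # true var of 0.5 * (e1 + e2)

print(claimed, actual)  # 0.5 0.8 -> the filter is overconfident
```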
Among existing fusion solutions for systems with unknown correlations, it is the covariance intersection algorithm [5], invented by Julier and Uhlmann in 1997, that provides an effective tool to tackle unknown correlations. According to [51], the advantages of the covariance intersection fusion rule lie in the following: (i) The identification and computation of the cross covariances are completely avoided. (ii) It yields a consistent fused estimate, and thus a nondivergent filter is obtained. (iii) The accuracy of the fused estimate outperforms each local one. (iv) It gives a common upper bound of the actual estimation error variances, which provides robustness with respect to unknown correlations. Since then, it has attracted much interest from a wider community; see, for example, [7,49-64]. Some of these works focused on improving the covariance intersection methodology. For instance, a generalized covariance intersection, known as split covariance intersection, was created in [52] for the purpose of incorporating known independent information. Reference [53] improved the main results in [5] by investigating the linear combination gains in an n²-dimensional space. Meanwhile, an information-theoretic justification of the covariance intersection rule and its generalization can be found in [54] and [55], respectively. Later, the Chernoff fusion rule, which includes covariance intersection as a special case, is also reviewed in [49]. Recently, [56,57] separately proposed the largest ellipsoid algorithm and the ellipsoidal intersection state fusion method, which lead to higher accuracy of the fused estimate. Moreover, an accuracy comparison between covariance intersection and three different optimal fusion rules was presented in [58], while other works applied the covariance intersection algorithm to a plethora of areas, such as target tracking [7], fault-tolerant estimation [59], simultaneous localization and mapping [60], image fusion [61], vehicle localization [62], and the NASA Mars rover [63].
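For two estimates, the covariance intersection rule forms a convex combination of the information (inverse-covariance) forms with a weight ω ∈ [0, 1], commonly chosen to minimize the trace of the fused covariance. A minimal scalar sketch using a crude grid search for ω (the numbers are made up; in the scalar case CI is known to fall back to the lower-variance estimate):

```python
def covariance_intersection(x1, p1, x2, p2, steps=1000):
    """Scalar covariance intersection: fused inverse variance is the
    convex combination w/p1 + (1-w)/p2; w is chosen here by a crude
    grid search minimizing the fused variance (the trace criterion)."""
    best = None
    for i in range(steps + 1):
        w = i / steps
        inv_p = w / p1 + (1.0 - w) / p2
        p = 1.0 / inv_p
        x = p * (w * x1 / p1 + (1.0 - w) * x2 / p2)
        if best is None or p < best[1]:
            best = (x, p)
    return best

x, p = covariance_intersection(1.0, 2.0, 2.0, 1.0)
print(x, p)  # 2.0 1.0 (w = 0: scalar CI trusts the better estimate)
```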
One significant drawback of the covariance intersection algorithm and its variants is the computational burden. When the number of information sources to be fused is more than 2, the weight selection becomes a constrained nonlinear optimization problem in Euclidean space R^n, and it rapidly becomes computationally intractable, especially in large distributed sensor networks. Therefore, there is a great need for fast covariance intersection algorithms to circumvent this issue. Fortunately, several solutions have been reported in this spirit. One of them is sequential covariance intersection [51], where the multidimensional nonlinear optimization problem is reduced to the optimization of several one-dimensional nonlinear cost functions by a batch processing procedure. The second method is the suboptimal noniterative algorithm of [50]; this method was further used in [64] for designing a diffusion Kalman filtering scheme and in [7] for an information-theoretic interpretation. The third one is the ellipsoidal intersection [57], whose algebraic fusion formulas make it computationally feasible. Different from the above methods, which try to approximate the optimal value, the fourth solution gives an exact solution of the optimal weights in the case of low-dimensional covariance matrices by closed-form optimization of covariance intersection [65], which reduces the nonlinear optimization problem to a polynomial root-finding problem.
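One widely quoted noniterative weighting in the spirit of the fast algorithms above sets each weight inversely proportional to the trace of the corresponding covariance, avoiding optimization entirely. The sketch below is scalar, and whether this weighting matches [50] exactly is an assumption of this illustration:

```python
def fast_ci(estimates):
    """Noniterative covariance-intersection-style fusion for scalar
    variances: weights are inversely proportional to each variance
    (its trace), so no optimization is needed.
    `estimates` is a list of (x, p) pairs."""
    inv_tr = [1.0 / p for (_, p) in estimates]
    total = sum(inv_tr)
    w = [t / total for t in inv_tr]          # convex weights, sum to 1
    inv_p = sum(wi / pi for wi, (_, pi) in zip(w, estimates))
    p = 1.0 / inv_p                          # fused variance
    x = p * sum(wi * xi / pi for wi, (xi, pi) in zip(w, estimates))
    return x, p, w

# Three made-up local estimates; cost is linear in their number.
x, p, w = fast_ci([(1.0, 1.0), (2.0, 2.0), (0.0, 4.0)])
print(x, p, w)
```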

Consensus Filtering Approaches
This section gives a systematic overview of recent advances on several consensus approaches that are suitable for designing consensus-based filters. In general, most publications on consensus filtering approaches can be classified into four groups: consensus on estimates (CE), consensus on measurements (CM), consensus on information (CI), and H∞ consensus; the mechanisms of these four types of consensus filtering approaches for typical linear time-varying systems with multiple sensor observations are depicted in Table 2. In the following, we investigate these consensus filtering approaches one by one in order to inspire more research interest.

Consensus on Estimates.
The consensus approach belonging to the first group is consensus on estimates (CE), in which only state estimates are averaged out to reach consensus. It is the most basic consensus filtering approach; in the early stage of CE (ẋ_i(t) = Σ_{j ∈ N_i} a_{i,j}(x_j(t) − x_i(t)) + Σ_{j ∈ J_i} a_{i,j}(u_j(t) − x_i(t)), where x_j(t) and u_j(t) are the current state and measurement of node j, respectively, N_i denotes the set of neighbors of node i, J_i denotes the set of inclusive neighbors of node i, and a_{i,j} is the (i,j) entry of the adjacency matrix of the associated communication graph) [19], each sensor node was treated as an agent, and in this respect the full-fledged consensus theory of multiagent systems can be applied directly to distributed filters, plus one extra consensus term to reflect the measurement features. This consensus algorithm was further modified into the Kalman-like distributed estimator that combines the updated state estimate and a consensus term, also called the Kalman consensus filter [22,23]. It should be noticed that CE is not limited to Kalman-like filters (following the same definition as in [23], we refer to any recursive estimator with a filter structure similar to the Kalman filter as a Kalman-like filter); in fact, the well-known H∞ consensus can also be included in this group. However, due to its ever-growing importance and influence in a variety of engineering areas, we treat H∞ consensus as a separate consensus filtering approach for the purpose of better discussing these issues.
From the algorithmic perspective, CE does not necessarily require the local error covariance matrices or local probability density functions. It is no wonder that CE and its variants have been employed to design an extensive body of consensus filters; see, for example, [12,23,66-72].
For instance, by using the consensus strategy, [66] constructed a local estimate based on a node's own measurements and on the estimates from its neighbors, and [67] introduced a consensus-based filter that can provide reliable estimates despite the existence of missing observations and communication faults. In [68], with the help of the theory of synchronization and consensus in complex networks and systems, a novel CE consensus was born with pinning observers. Further, a CE term was embedded in the penalty function [69, Equation (10)] to increase the accuracy of the local estimates, which is fundamental to guaranteeing convergence of the state estimates to the state of the observed system. More specifically, [12] devised a consensus filter in two steps. In the first step, an update is produced using a Luenberger-type observer. In the second step, called the consensus step, every sensor computes a convex combination of its local update and the updates received from its neighbors. Meanwhile, in [70], two different distributed consensus filters were integrated into the proposed distributed sensor fusion algorithm to achieve cooperative sensing among sensor nodes. Recently, based on CE, [71] designed two types of consensus filters for the target tracking problem over heterogeneous sensor networks, and their unbiasedness and optimality were discussed as well. Lastly, [72] presented a decentralized observer with a consensus filter that blends with its neighborhood, so that the state estimates of the agents can reach consensus after a number of consensus iterations at each time interval.
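The CE mechanism above can be sketched in discrete time: each node first corrects its estimate with its own measurement (a Kalman-like step with a fixed gain) and then mixes it with its neighbors' estimates. This is a toy scalar example on a ring network; the gains, the static true state, and the noise-free measurements are all illustrative choices, not the method of any cited paper:

```python
# Consensus-on-estimates sketch: a scalar static state x = 5.0 observed
# noise-free by 4 nodes on a ring; local correction + consensus mixing.
x_true = 5.0
est = [0.0, 2.0, 8.0, 10.0]                 # initial local estimates
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
K, eps = 0.2, 0.2                           # correction gain, consensus gain

for _ in range(50):
    new = []
    for i in range(4):
        y = x_true                          # node i's (noise-free) measurement
        upd = est[i] + K * (y - est[i])     # local Kalman-like correction
        upd += eps * sum(est[j] - est[i] for j in neighbors[i])  # CE term
        new.append(upd)
    est = new                               # synchronous update

print(est)  # all four estimates close to 5.0 and to each other
```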

Consensus on Measurements.
CE does not involve the error covariance matrices, which may lead to a certain degree of conservatism when designing a consensus filter within the Kalman-like filter framework. It is well known that the error covariance matrices may contain valuable information that has been successfully used to improve filter performance. Taking this into account, another consensus approach, consensus on measurements (CM), has been proposed to fulfill this requirement. It performs consensus on local measurements or, more specifically, on local innovation pairs so as to approximate, in a distributed way, the correction step of the centralized Kalman filter. It is worth noticing that the stability of CM consensus filtering can be ensured only when a sufficiently large number of consensus steps are carried out during each sampling interval, so that the local information provided by the innovation pairs has time to spread throughout the whole network. Besides, this kind of approach relies on the assumption that the measurement errors of different sensors are mutually independent, and it is limited to Kalman-like filters.
During the last decade, CM has been broadly used in both the signal processing and control communities; see, for example, [20,22,73-77]. The idea of CM consensus originally appeared in [20] to solve data fusion problems in a distributed way by using low-pass and band-pass consensus filters. A modified version of this consensus approach was presented in [22, Algorithm 1], where two identical consensus filters were employed, making it applicable to sensor networks with different observation matrices. Further, [73] showed that CM can guarantee that the local estimates of the error covariance matrix and the local estimates of the state converge to their centralized counterparts. Later, this consensus approach was applied to design consensus filters for jump Markov systems [74] and for discrete-time nonlinear systems with non-Gaussian noise [75]. In a following paper [76], a pseudo-measurement matrix was reconstructed by embedding a statistical linear error propagation approach [78], which facilitates the use of CM in the unscented Kalman filtering setting. Recently, CM was interpreted as likelihood consensus and applied to the distributed particle filtering setting [77].
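The CM mechanism can be sketched as average consensus on the innovation pairs: each node holds the scalar information terms h·y/r and h²/r, iterates a consensus averaging with its neighbors, and, after enough steps, recovers (up to a factor equal to the network size) the sums used by the centralized correction step. A toy scalar example on a ring (all numbers made up):

```python
# Consensus-on-measurements sketch (scalar): nodes average their
# innovation pairs (h*y/r, h*h/r); with enough consensus steps every
# node recovers the sums used by the centralized correction step.
y = [4.9, 5.2, 5.1, 4.8]                    # local measurements (made up)
h = [1.0, 1.0, 1.0, 1.0]                    # observation coefficients
r = [1.0, 2.0, 1.0, 2.0]                    # measurement noise variances
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

q = [h[i] * y[i] / r[i] for i in range(4)]  # information vectors
w = [h[i] * h[i] / r[i] for i in range(4)]  # information "matrices"
for _ in range(200):                        # many consensus steps
    q = [q[i] + 0.2 * sum(q[j] - q[i] for j in neighbors[i]) for i in range(4)]
    w = [w[i] + 0.2 * sum(w[j] - w[i] for j in neighbors[i]) for i in range(4)]

n = 4
print(n * q[0], n * w[0])  # approximates sum(h*y/r) and sum(h*h/r)
```

This also makes the stability caveat concrete: with only a few iterations the averages have not yet spread through the network, and the recovered sums are wrong.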

Consensus on Information.
Owing to the facts that only one or a few consensus iterations per sampling time can be afforded in order to reduce the communication overhead for higher energy efficiency, particularly in wireless sensor network environments, and that there may not be enough time to wait for CM to converge [79], an alternative consensus approach, namely, consensus on information (CI), was recently invented to circumvent these issues. From an algorithmic standpoint, CI is nothing but forming a local average of information matrices and information vectors. The approach can guarantee stability for any number of consensus steps (even a single one), but its mean-squared estimation error performance may be hampered, since the fusion rule adopts a conservative point of view by assuming that the correlations between local estimates are completely unknown [28]. It was originally introduced in [25] for a distributed state estimation problem; later, a mathematically rigorous treatment was detailed in [24], where CI is interpreted as a consensus on probability density functions in the Kullback-Leibler average sense. Following the same consensus paradigm, [10] presented a novel consensus cardinalized probability hypothesis density filter to study the distributed multitarget tracking problem over a sensor network, and [80] designed a consensus-based multiple-model Bayesian filter for the distributed tracking of a maneuvering target. More recently, [81] applied CI to design distributed unscented Kalman filters for systems with state saturations and sensor saturations.
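The CI mechanism can be sketched as convex averaging of the local information pairs (Ω, q) = (1/p, x/p); even a single consensus step yields a usable (if conservative) local estimate. A toy scalar example on a ring, with uniform weights over the inclusive neighbors (all numbers made up):

```python
# Consensus-on-information sketch (scalar): each node averages its
# information pair (Omega, q) = (1/p, x/p) with its neighbors'; the
# fused local estimate is q / Omega.  One step already suffices for
# a consistent, though conservative, result.
x = [4.0, 6.0, 5.0, 5.5]                    # local estimates (made up)
p = [1.0, 2.0, 1.0, 2.0]                    # local error variances
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

omega = [1.0 / p[i] for i in range(4)]
q = [x[i] / p[i] for i in range(4)]
# one step of convex averaging (uniform weights over inclusive neighbors)
omega = [(omega[i] + sum(omega[j] for j in neighbors[i])) / 3 for i in range(4)]
q = [(q[i] + sum(q[j] for j in neighbors[i])) / 3 for i in range(4)]

fused = [q[i] / omega[i] for i in range(4)]
print(fused)  # each node's estimate pulled toward its neighbors'
```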
H∞ Consensus.
It should be pointed out that the aforementioned consensus filtering algorithms are mainly based on traditional Kalman filtering theory, which requires the statistical information about the plant model to be known perfectly; unfortunately, practical systems are often accompanied by parameter uncertainties and exogenous disturbances. Consequently, there is a great impetus to develop consensus filtering schemes that are as robust as possible. With these needs considered, the H∞ consensus has recently been introduced and widely recognized; see, for example, [11,26,82-88].
The term H∞ consensus was officially coined in [11]; the main intuition behind it arises from the notion of H∞ disagreement between adjacent nodes to quantify the consensus performance of the filter network [85]. In [11], the H∞ consensus performance requirement was defined to quantify bounded consensus regarding the filtering errors (agreements) over a finite horizon; the paper considered a distributed H∞ consensus filtering problem for sensor networks with multiple missing measurements. In a similar vein, [26] studied distributed H∞ filtering with randomly occurring saturations and successive packet dropouts. Furthermore, the H∞ consensus approach has been utilized for solving control problems of multiagent systems, for example, in a novel fuzzy model setting [83] and in systems with missing measurements [82]. Most recently, [84] put forward distributed event-triggered H∞ consensus filtering problems in mobile sensor networks where the transmission of each sensor is triggered by an event. At about the same time as [11], Ugrinovskii followed a different path to investigate H∞ consensus by using a vector dissipativity methodology; see, for example, [85-88]. For instance, by pursuing H∞ consensus on estimates, distributed robust filtering problems have been discussed for uncertain systems with measurement uncertainty [85], with a nonvanishing nonlinear disturbance of unbounded energy [86], and with switching topologies [87]; lately, [88] laid out an H∞ consensus-based synchronization protocol scheduled for each agent to synchronize with a reference parameter-varying system.

Latest Progress
Very recently, research on multisensor fusion and consensus filtering has been receiving increasing attention, and many inspiring results have been published. Here, we highlight some of the newest work on this topic.
(1) Fusion with Incomplete Information. Due to the limited capacity of signal transmission through networked systems, incomplete information [89] (such as missing measurements, packet dropouts, quantization, saturation, and network-induced delays) is inevitable in most real implementations. These phenomena may potentially deteriorate the system performance; accordingly, multisensor fusion for systems with incomplete information has become popular. For example, [90] developed fusion strategies for the case where communication between the sensors and the estimation center is subject to random packet loss. For sensors experiencing randomly delayed measurements and sensor failures, [91] devised a robust information fusion estimator. Further, [92] designed an optimal distributed fusion Kalman filter taking missing sensor measurements, random transmission delays, and packet dropouts into consideration. Very recently, [93] was concerned with the distributed Kalman filtering problem for a class of networked multisensor fusion systems with communication bandwidth constraints.
(2) Hybrid CMCI. In [28], a new class of consensus filters (named hybrid CMCI), which enjoy the complementary benefits of two existing consensus filtering approaches, CM and CI, was introduced for distributed state estimation over sensor networks. The results claim that, under minimal requirements (i.e., collective observability and network connectivity), guaranteed stability of the hybrid CMCI filter can be achieved. The idea first appeared in [94]; more recently, a mathematically rigorous analysis of the hybrid CMCI consensus in the extended Kalman filtering setting was given in [95].
(3) Set-Theoretic Consensus. In [96], under the assumption of unknown but bounded measurement errors, a set-theoretic/set-membership consensus problem was formulated in a set-theoretic framework. Further, the paper analyzed the consensus algorithms in the case of undirected and stationary communication graphs. The results show that, for both types of protocols, asymptotic consensus cannot be guaranteed with respect to all possible noise realizations, but the disagreement among the agent states is asymptotically bounded.
(4) Weighted Average Consensus-Based Unscented Kalman Filtering. Reference [97] investigated consensus-based distributed state estimation problems for a class of sensor networks within the unscented Kalman filter framework. Without approximating a pseudo-measurement matrix, a weighted average consensus-based unscented Kalman filtering algorithm was developed that directly implements consensus on state vectors and error covariance matrices, and its estimation error was proven to be bounded in mean square.
(5) Distributed H∞ Consensus Filtering for Piecewise Discrete-Time Linear Systems. Reference [98] studied the distributed H∞ consensus filtering problem for a class of piecewise discrete-time linear systems. First, the modes and mode transitions of the augmented piecewise linear systems as well as the distributed filters were formulated. Next, the structure of the augmented distributed filter gains was presented by virtue of the adjacency matrix of the sensor network. Besides, a set of sufficient conditions was provided for the distributed filter to ensure that its dynamics are globally asymptotically stable under the H∞ consensus performance constraint.
(6) Distributed Kalman Consensus Filter with Intermittent Observations. Reference [99] considered the distributed state estimation problem for linear time-varying systems with intermittent observations. In the paper, an optimal Kalman consensus filter was developed by minimizing the mean-squared estimation error at each node. To derive a scalable algorithm for the covariance matrix update, a suboptimal filter was also proposed by omitting the edge covariance matrices among nodes. Besides, a sufficient condition was presented for ensuring the stochastic stability of the suboptimal filter by using a Lyapunov-based approach.

(7) Information Weighted Consensus. As noted in [8], the consensus estimate is suboptimal when the cross covariances between the individual state estimates across different nodes are not incorporated in the distributed filtering framework. The cross covariances are usually neglected because of computational and bandwidth limitations. As such, a consensus filtering scheme should guarantee convergence to the optimal centralized estimate while maintaining low computation and communication resource requirements. Motivated by the above discussions, [8] proposed the information weighted consensus algorithm, which can secure the optimal estimate by proper weighting of the prior state and measurement information.
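The flavor of the information-weighting idea of [8] can be conveyed with a scalar sketch: each node down-weights its (shared) prior information by 1/N before averaging, so the prior is not double counted across the network, and the averaged pairs recover the centralized update. The exact bookkeeping of the algorithm in [8] is simplified here, and the averaging step stands in for many consensus iterations; all numbers are made up:

```python
# Information-weighted consensus sketch (scalar, N nodes): the common
# prior's information is divided by N at each node before averaging,
# so it is counted exactly once network-wide.
N = 4
x_prior, p_prior = 5.0, 4.0                  # common prior (made up)
z = [4.9, 5.2, 5.1, 4.8]                     # local measurements, r = 1
r = 1.0

# information pairs with the prior weighted by 1/N
V = [(1.0 / p_prior) / N + 1.0 / r for i in range(N)]
v = [(x_prior / p_prior) / N + z[i] / r for i in range(N)]

# exact averaging (stand-in for many consensus iterations)
V_bar, v_bar = sum(V) / N, sum(v) / N
x_hat = (N * v_bar) / (N * V_bar)            # matches the centralized update

# centralized reference for comparison
x_central = (x_prior / p_prior + sum(z) / r) / (1.0 / p_prior + N / r)
print(x_hat, x_central)
```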

Conclusion and Future Directions
In this paper, we have introduced both classic results and recent advances in multisensor fusion and consensus filtering. First, we recalled some important results in the development of multisensor fusion technology, in particular, multisensor fusion with unknown correlations. Next, we gave a systematic review of several consensus filtering approaches that are widely used to design consensus filters. Further, some of the latest progress on multisensor fusion and consensus filtering was also presented. To conclude this survey, based on the literature reviewed, we offer readers a glimpse of several future directions that may spark their interest.
(1) Further Divide the Unknown Correlations. Although the causes of unknown correlations are abundant, in many real situations we can obtain partial information about the unknowns, such as an uncertain system matrix [100,101], noise with uncertain covariance [102], nonfragile filters [103], and uncertain stochastic nonlinearities [100,104], which can be tackled by robust filtering [100], extended Kalman filtering [105], recursive filtering [106], and H∞ robust filtering [107]. Therefore, according to the nature of the unknown correlations, it is necessary to further divide them into two groups: partially unknown correlations and completely unknown correlations. On this basis, it is possible to develop new fusion mechanisms, or to improve existing ones, to obtain better solutions for fusion with unknown correlations.
(2) Mathematical Characterization of Unknown Correlations. Owing to their intricate nature, most publications involving unknown correlations simply label the error cross covariances as unknown without giving their structures, which unfortunately fails to capture the information contained in the correlations, so the resulting fusion inevitably carries a certain degree of conservatism. Reference [57] has made a few attempts to provide an explicit characterization of unknown correlations; however, the derivation is complex and not intuitive. Hence, it remains an open question to find a general yet concise mathematical model describing the unknown correlations.
(3) Consensus Filtering with Unknown Correlations. Unknown correlations exist ubiquitously in general distributed filtering problems; they are also the major source of the "data incest" phenomenon in the network. It is therefore of great importance to study the consensus filtering problem for systems with unknown correlations. Despite its significance, progress on this topic has been slow. Take the covariance intersection rule, for example: how to design a consensus filter that handles information matrices, information vectors, and optimal weights simultaneously without compromising performance would be another interesting topic.
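For reference, the covariance intersection rule mentioned above fuses two estimates whose cross covariance is unknown via a convex combination of their information matrices, P⁻¹ = ωP₁⁻¹ + (1−ω)P₂⁻¹, with the weight ω chosen to optimize some criterion on the fused covariance. A minimal Python sketch, using a simple grid search that minimizes the trace (one common choice; the function name and grid resolution are ours):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=200):
    """Fuse estimates (x1, P1) and (x2, P2) with unknown cross covariance
    by covariance intersection, picking the weight omega on a grid that
    minimizes the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1):
        # Fused information matrix is a convex combination of the two.
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
            best = (np.trace(P), x, P, w)
    return best[1], best[2], best[3]
```

The fused covariance produced this way is guaranteed consistent for any true cross covariance, which is precisely why the rule is attractive when correlations are unknown.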
(4) Hybrid Consensus. The increasing complexity of system dynamics and the high demands on filter performance call for consensus filter designs that are as good as possible. In this spirit, hybrid consensus filtering schemes, which benefit from different consensus approaches, may meet these requirements. Although initial interest has appeared in recent years, see, for example, [94,95], it is a trend that more and more hybrid consensus filtering schemes, blending two or more consensus filtering approaches, will be constructed in the future.
(5) Stochastic Stability Analysis of Kalman-Like Consensus Filters. In a Kalman-like filter, there are two primary sources of estimation error: initialization error and stochastic errors due to the process and measurement noise [108]. However, most work has ignored the process and measurement noise when analyzing the stability of consensus filters, which is hardly a reasonable treatment. The stochastic stability lemma [109] may provide a possible solution; it has been used as an effective tool for analyzing single Kalman-like filters, see, for example, [109][110][111][112], but for the distributed case the stochastic stability analysis remains to be established.
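For the reader's convenience, the stochastic stability lemma cited above is commonly stated along the following lines (the notation here is ours; see [109] for the precise conditions). If a stochastic process $\zeta_k$ admits a function $V$ satisfying, for some $v_{\min}, v_{\max}, \mu > 0$ and $0 < \lambda \le 1$,

```latex
v_{\min}\|\zeta\|^{2} \le V(\zeta) \le v_{\max}\|\zeta\|^{2},
\qquad
\mathbb{E}\{V(\zeta_{k+1}) \mid \zeta_k\} - V(\zeta_k) \le \mu - \lambda V(\zeta_k),
```

then $\zeta_k$ is exponentially bounded in mean square:

```latex
\mathbb{E}\{\|\zeta_k\|^{2}\}
\le \frac{v_{\max}}{v_{\min}}\,\mathbb{E}\{\|\zeta_0\|^{2}\}(1-\lambda)^{k}
+ \frac{\mu}{v_{\min}}\sum_{i=1}^{k-1}(1-\lambda)^{i}.
```

Extending such a bound to the distributed case would require a common (or coupled) function $V$ across the network, which is exactly where the open problem lies.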
(6) Beyond Consensus. Even though consensus is the most widely used strategy for distributed filtering problems, it may not be the best solution in every circumstance; for example, the diffusion strategy [113] is particularly suited to problems involving the recursive minimization of cost functions, as opposed to the consensus strategy. It is therefore of significant engineering importance to find new strategies, or to trade off between different strategies, for distributed filtering problems with new network-induced phenomena such as randomly occurring nonlinearities and fading measurements [114,115].

Figure 1: The architecture of the consensus filtering algorithm.

Table 2: The mechanisms of four consensus filtering approaches. A is the system matrix. For each node i, H_i, R_i, and y_i are, respectively, the measurement matrix, the covariance matrix of the measurement noise, and the measurement output; K_i and C_i are the filter and consensus gains to be determined; and ω_{i,l} is the consensus weight after l consensus steps. Further, denote Ω_{k|k} ≜ (P_{k|k})^{-1} and q_{k|k} = (P_{k|k})^{-1} x̂_{k|k} as the information matrix and information vector. Throughout the table, z̃_i and e_{i,0} are the filtering error and initial error for node i, respectively; γ is the disturbance attenuation level; v represents the noise; and W_i is the given positive definite matrix.