Consensus algorithms for networked dynamic systems are an important research topic for data fusion in sensor networks. In this paper, distributed filters with consensus strategies, known as the Kalman consensus filter and the information consensus filter, are investigated for state estimation in distributed sensor networks. First, an in-depth comparative analysis of the Kalman consensus filter and the information consensus filter is given, and the result shows that the information consensus filter outperforms the Kalman consensus filter. Second, a novel optimization process for updating the consensus weights is proposed based on the information consensus filter. Finally, numerical simulations are given, and the experimental results show that the proposed method achieves better performance than the existing consensus filter strategies.
1. Introduction
In recent years, there has been a surge of interest in the area of distributed sensor networks. Their advantages lie in low processing-power requirements, cheap memory, scalable sensing features, and fault-tolerance capabilities.
One of the most basic problems for distributed sensor networks is to develop distributed algorithms [1] for the state estimation of a process of interest. When a process is observed by a group of sensors organized in a network, the goal of each sensor node is to obtain an accurate state estimate of the process. Kalman filtering has proved to be an effective algorithm for state estimation of dynamic processes [2, 3]. Because of this, most papers on distributed estimation propose mechanisms that combine the Kalman filter with a consensus filter in order to ensure that the estimates asymptotically converge to the same value; such schemes will henceforth be called consensus-based distributed filtering algorithms. Based on this idea, a scheme for distributed Kalman filtering (DKF) was proposed in [4] based on reaching an average consensus [5, 6], and in [7] Olfati-Saber proposed a scalable and distributed Kalman filtering algorithm based on reaching a dynamic average consensus [8]. Olfati-Saber's algorithm [7] has been further developed by other researchers [9] with similar algorithms. However, methods of this kind yield relatively weak performance: according to [10], their performance is comparable to the collective estimation error of n noncooperative local Kalman filters, which is a trivial baseline performance level for distributed estimation in sensor networks. To address this, Olfati-Saber developed the Kalman consensus filter (KCF) in [11], where a consensus filter runs directly on the estimator state-space variables. In addition, a formal derivation followed by optimality and stability analysis of the discrete-time KCF is elaborated in [10]. However, in distributed implementations of the KCF there is a correlation between local estimates [12]. In general distributed networks this correlation cannot be determined exactly [13], which results in nonoptimal local estimates.
Other techniques for distributed estimation of dynamic systems that rely on the inverse covariance filter, or information filter, have been around for many years [14, 15]. An information consensus filter (ICF) is presented in [16] that applies consensus filters to an information filter. This method does not exactly solve the problem of correlation between local estimates, but it gives insight into the statistical effects of the correlation and works well in distributed sensor networks. Building on the ICF, we focus on designing the consensus weights to improve its performance.
In this paper, we first describe the existing distributed filters with consensus strategies. Then we make an in-depth comparison between the KCF and the ICF. Based on the ICF, we propose a consensus-weight optimization for better system performance and refer to this method as the weights-optimized information consensus filter (WO-ICF). We show experimentally that the proposed method achieves the best performance and comes closest to the optimal centralized performance.
The structure of this paper is organized as follows. In Section 2, background knowledge on the Kalman filter and the centralized information filter is provided. In Section 3, the consensus strategies are discussed. In Section 4, the Kalman consensus filter is presented. In Section 5 we discuss the information consensus filter, and an ICF-based weight optimization method is proposed. Simulation results and performance comparisons between the KCF and ICF algorithms are provided in Section 6. In Section 7, we make a brief summary.
2. Kalman Filter: Information Form
2.1. Kalman Filter
Consider a dynamic process with the linear time-varying model as follows:
(1)x(k+1)=A(k)x(k)+B(k)w(k);x(0)∈N(x¯(0),P0),
where x(k)∈Rn and w(k)∈Rm are the state and input noise of the process at time k∈{0,1,2,…}, respectively; x(0) is the initial state with a Gaussian distribution; A(k) is the model matrix and B(k) is the state noise matrix. We are interested in tracking the state of this process using a sensor network with n sensors and communication topology G=(V,E).
The observations at sensor i and time k are
(2)zi(k)=Hi(k)x(k)+vi(k),
where zi(k)∈Rpi with ∑i=1npi=p, Hi(k)∈Rpi×n is the local observation matrix for sensor i, and vi(k) is the local observation noise. We refer to zi(k) as sensor data. Assume that w(k) and vi(k) are zero mean white Gaussian noise with the following statistics:
(3)E[w(k)w(l)T]=Q(k)δkl,E[vi(k)vi(l)T]=Ri(k)δkl,
where δrs=1 if r=s and δrs=0 otherwise. We stack the observations of all n sensors in the sensor network to get the global observation model as follows.
Let the global observation vector z(k)∈Rp, the global observation matrix H(k)∈Rp×n, and the global observation noise vector v(k)∈Rp be
(4)z(k)=[z1(k),z2(k),…,zn(k)]T,H(k)=[H1(k),H2(k),…,Hn(k)]T,v(k)=[v1(k),v2(k),…,vn(k)]T.
Then the global observation model is given by
(5)z(k)=H(k)x(k)+v(k).
Since observation noises of different sensors are mutually independent, we can combine Ri(k) into one global observation noise covariance matrix R(k) as
(6)R(k)=blockdiag[R1(k),…,Rn(k)].
Given the collective information Z(k)={z(0),z(1),…,z(k)}, the estimation of the state of the process can be expressed as
(7)x^(k):=x^(k∣Z(k))=E[x(k)∣Z(k)],x¯(k):=x^(k∣Z(k-1))=E[x(k)∣Z(k-1)].
We refer to x^(k) and x¯(k) as estimate and prior estimate (or prediction) of the state x(k), respectively. Then, the error covariance matrices associated with the estimates x^(k) and x¯(k) are given by
(8)M(k):=E[(x^(k)-x(k))(x^(k)-x(k))T]=E[η(k)η(k)T],P(k):=E[(x¯(k)-x(k))(x¯(k)-x(k))T]=E[η¯(k)η¯(k)T],
where η(k)=x^(k)-x(k) and η¯(k)=x¯(k)-x(k) denote the estimate errors and P(0)=P0. Then, the Kalman filter is a linear estimator in the form
(9)x^(k)=x¯(k)+K(k)(z(k)-H(k)x¯(k))
with the Kalman gain K(k).
Remark 1.
Throughout this paper, due to the importance of the node indices, we adopt a notation that is free of the time-index k and call it an index-free notation to represent all estimators [10]. The index-free form of the above estimator can be written as
(10)x^=x¯+K(z-Hx¯).
We also use the update operation {·+} defined in [10] to rewrite the sensing model of node i of the sensor network as
(11)x+=Ax+Bw,zi=Hix+vi.
Then, we get the index-free recursive equations of a centralized Kalman filter (CKF) for system (11):
(12)x^=x¯+K(z-Hx¯),K=PHT(R+HPHT)-1,M=P-PHT(R+HPHT)-1HP,P+=AMAT+BQBT,x¯+=Ax^.
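As a concrete illustration, the index-free CKF recursion (12) can be sketched in a few lines of NumPy. The function name `ckf_step` and the argument layout are our own; the equations are exactly those of (12).

```python
import numpy as np

def ckf_step(x_bar, P, z, A, B, H, Q, R):
    """One index-free iteration of the centralized Kalman filter (12):
    filter update on the prior (x_bar, P), then the prediction step."""
    S = R + H @ P @ H.T
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_hat = x_bar + K @ (z - H @ x_bar)   # filtered estimate
    M = P - K @ H @ P                     # filtered error covariance
    P_next = A @ M @ A.T + B @ Q @ B.T    # predicted covariance
    x_bar_next = A @ x_hat                # predicted state
    return x_hat, M, x_bar_next, P_next
```

For instance, with A=B=H=I2, Q=0.1·I2, R=P=I2, a zero prior, and observation z=[1,0], the gain is K=0.5·I2, so the filtered estimate is [0.5, 0] and the predicted covariance is 0.6·I2.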
2.2. Information Form: Centralized Information Filter
Using the matrix inversion lemma
(13)(I+ABC-1BT)-1A=(A-1+BC-1BT)-1=A-AB(BTAB+C)-1BTA.
By use of the identity BTABC-1=[(BTAB+C)C-1-I], we have
(14)(I+ABC-1BT)-1ABC-1=(A-1+BC-1BT)-1BC-1=AB(BTAB+C)-1.
From (14), we have
(15)K=PHT(R+HPHT)-1=(P-1+HTR-1H)-1HTR-1.
Using the matrix inversion lemma
(16)(A+BCD)-1=A-1-A-1B(C-1+DA-1B)-1DA-1,
we have
(17)(P-1+HTR-1H)-1=P-PHT(R+HPHT)-1HP=M.
Then we can rewrite (15) as
(18)K=MHTR-1.
Based on the above derivation, the recursive equations of the Kalman filter can be rewritten as
(19)x^=x¯+K(z-Hx¯),K=MHTR-1,M=(P-1+HTR-1H)-1,P+=AMAT+BQBT,x¯+=Ax^.
From (19), the estimate x^ can be expressed as
(20)x^=x¯+K(z-Hx¯)=x¯+MHTR-1(z-Hx¯)=x¯+M(HTR-1z-HTR-1Hx¯).
Now we define the n-dimensional global observation variables as
(21)y=HTR-1z,S=HTR-1H
and the n-dimensional local observation variables at sensor i as
(22)yi=HiTRi-1zi,Si=HiTRi-1Hi.
When the observations are distributed among the sensors, the KF can be implemented either by collecting all the sensor observations at a central location or by observation fusion: the global observation variables in (21) can be written as
(23)y=HTR-1z=H1TR1-1z1+⋯+HnTRn-1zn=∑i=1nyi.
Similarly,
(24)S=HTR-1H=H1TR1-1H1+⋯+HnTRn-1Hn=∑i=1nSi.
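The additive structure of (23) and (24) is easy to verify numerically: summing the local contributions yi, Si of (22) reproduces the global quantities built from the stacked model (21). The sensor models below are randomly generated placeholders, not the ones used in Section 6.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, dim = 3, 2

# Hypothetical local models: random observation matrices and noises.
Hs = [rng.standard_normal((2, dim)) for _ in range(n_sensors)]
Rs = [np.eye(2) * (i + 1) for i in range(n_sensors)]
zs = [rng.standard_normal(2) for _ in range(n_sensors)]

# Local information contributions (22).
ys = [H.T @ np.linalg.inv(R) @ z for H, R, z in zip(Hs, Rs, zs)]
Ss = [H.T @ np.linalg.inv(R) @ H for H, R in zip(Hs, Rs)]

# Global quantities from the stacked model (21): block-diagonal R.
H = np.vstack(Hs)
R = np.block([[Rs[i] if i == j else np.zeros((2, 2))
               for j in range(n_sensors)] for i in range(n_sensors)])
z = np.concatenate(zs)
y_global = H.T @ np.linalg.inv(R) @ z
S_global = H.T @ np.linalg.inv(R) @ H

# They equal the sums of local contributions, as in (23) and (24).
assert np.allclose(y_global, sum(ys))
assert np.allclose(S_global, sum(Ss))
```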
Recall (20) where the estimate x^ could be written as
(25)x^=x¯+M(y-Sx¯).
Multiplication on the left by M-1 yields a variation of (25) as
(26)M-1x^=M-1x¯+y-Sx¯=(P-1+S)x¯+y-Sx¯=P-1x¯+y.
Let the inverse of M and P be the information matrices, I^ and I¯. Let i^ and i¯ be the information vectors. We have the following relations:
(27)I^=M-1,I¯=P-1,i^=M-1x^,i¯=P-1x¯.
Then (26) can be rewritten as
(28)i^=i¯+y,
and the update of predicted estimate x¯ can be expressed as
(29)x¯+=Ax^=A(MI^)x^=AM(I^x^)=AI^-1i^,
and we have
(30)i¯+=(P-1x¯)+=I¯+AI^-1i^.
Now we get the following simpler form of the filter in (19) and call it centralized information filter (CIF):
(31)I^=I¯+S,i^=i¯+y,I¯+=(AI^-1AT+BQBT)-1,i¯+=I¯+AI^-1i^,
where the first two equations of (31) constitute the filter step and the last two the prediction step (or update step) of the CIF.
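The CIF recursion (31) can be sketched directly in the information variables; the helper name `cif_step` is ours. Converting back to state space via x = I⁻¹i recovers exactly the CKF estimates, which is a convenient sanity check.

```python
import numpy as np

def cif_step(i_bar, I_bar, y, S, A, B, Q):
    """One iteration of the centralized information filter (31)."""
    I_hat = I_bar + S                                  # filter step
    i_hat = i_bar + y
    M = np.linalg.inv(I_hat)
    I_bar_next = np.linalg.inv(A @ M @ A.T + B @ Q @ B.T)  # prediction
    i_bar_next = I_bar_next @ A @ M @ i_hat
    return i_hat, I_hat, i_bar_next, I_bar_next
```

With the same toy numbers as in the CKF sketch (identity model, y=[1,0], S=I2), solving I^⁻¹î yields the state estimate [0.5, 0], matching the Kalman filter.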
3. Consensus Strategy
Consensus strategy defines a set of rules for a team of agents to agree on specific consensus states. With these rules each agent exchanges information with its neighboring agents and finally reaches an agreement (or consensus) concerning the consensus state over time [17, 18]. Furthermore, average consensus occurs when the final consensus state is the average of the initial values.
Consider a team of n agents that must agree on specific consensus states. At any discrete time instant τ, the communication topology among the n agents is described by the undirected graph G[τ]=(V,E[τ]), where V={1,2,…,n} is the vertex set and E[τ]⊆V×V is the edge set. In the consensus algorithm, each agent in the network maintains a local copy of the consensus state ζi∈Rn and updates this value using its neighbors' consensus states according to the rule:
(32)ζi[τ+1]=ζi[τ]+∑j=1nβij[τ](ζj[τ]-ζi[τ]),
where τ indicates the consensus filter iteration step. To choose the weights βij[τ], we can use the maximum-degree weights or the Metropolis weights [19]. Here we use the latter, which preserves averaging in consensus filters and can be computed by
(33)βij[τ]={(1+max{di[τ],dj[τ]})-1if(i,j)∈E[τ],1-∑(i,l)∈E[τ]βil[τ]ifi=j,0otherwise,
where di[τ] is the degree of agent i in the graph G[τ]. Arrange the local consensus states into the vector ζ[τ]=[ζ1T[τ],…,ζnT[τ]]T, and define the matrix (B[τ])ij=βij[τ] for i≠j; otherwise (B[τ])ii=1-∑(i,l)∈E[τ]βil[τ], and we can rewrite the update in (32) as
(34)ζ[τ+1]=(B[τ]⊗I)ζ[τ],
where I is the appropriate size identity matrix and ⊗ denotes the matrix Kronecker product.
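A minimal sketch of the Metropolis rule (33) follows; `metropolis_weights` is a hypothetical helper name. Iterating (34) with the resulting matrix drives every node's state to the average of the initial values.

```python
import numpy as np

def metropolis_weights(adj):
    """Build the weight matrix B of (33)-(34) from a symmetric 0/1
    adjacency matrix with no self-loops. The result is doubly stochastic."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)                      # node degrees d_i
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                B[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        B[i, i] = 1.0 - B[i].sum()             # self-weight closes the row
    return B
```

On a 3-node path graph, repeated application of B converges to the average of the initial states, as Lemma 2 below guarantees for connected graphs.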
The ijth element of B[τ] in (34) satisfies the following four conditions: (1) (B[τ])ij≥0, (2) ∑i(B[τ])ij=1, (3) ∑j(B[τ])ij=1, and (4) each nonzero entry is uniformly bounded above and below. Based on these conditions, we have the following results [20] for average consensus.
Lemma 2.
Under switching interaction topologies, if there exists a finite T≥0 such that for every interval [τ,τ+T] the union of the interaction graphs across the interval is strongly connected, then the consensus protocol (34) achieves average consensus asymptotically; that is, ζi[τ]→(1/n)∑i=1nζi[0] as τ→∞.
Remark 3.
In order to calculate the Metropolis weights in (33), we assume undirected communication throughout this paper. Therefore, if the graph G[τ] is connected, the matrix B[τ] is doubly stochastic and the four conditions on B[τ] are satisfied. This implies that average consensus is achieved asymptotically as long as every graph is connected [21].
4. Distributed Kalman Filter: Consensus on Estimate
In this section, we discuss an alternative approach to distribute the Kalman filtering that relies on communicating state estimates between neighboring nodes and refer to it as Kalman consensus filter (KCF). Before presenting the KCF algorithm, we first need to discuss a more primitive DKF algorithm called local Kalman filter (LKF) which forms the basis of the KCF.
4.1. Local Kalman Filter
In local Kalman filtering, let Ni={j:(i,j)∈E} be the set of neighbors of node i on graph G. Each node i of the sensor network communicates its measurement zi, covariance information Ri, and observation matrix Hi to its neighbors Ni. We assume that information flow to node i from nonneighboring nodes is prohibited. Therefore, node i can run a central Kalman filter that only utilizes the observation vectors and observation matrices of the nodes in Ji=Ni∪{i} [11]. This leads to the following primitive DKF algorithm with no consensus on the state estimate.
LKF Iterations. Assume that node i only receives information from its neighbors. Then, we have the iterations of node i in local Kalman filtering as
(35)yi=∑j∈JiHjTRj-1zj=∑j∈Jiyj,Si=∑j∈JiHjTRj-1Hj=∑j∈JiSj,x^i=x¯i+Mi(yi-Six¯i),Mi=(Pi-1+Si)-1,Pi+=AMiAT+BQBT,x¯i+=Ax^i,
where yi and Si are the local aggregate information vector and matrix, respectively, and node i computes both of them locally.
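A sketch of one LKF iteration (35) is given below. The function name and the `neighborhood` data layout (a list of (H_j, R_j, z_j) triples for the nodes in J_i) are our own conventions.

```python
import numpy as np

def lkf_update(x_bar_i, P_i, neighborhood, A, B, Q):
    """One LKF iteration (35) at node i. `neighborhood` holds the
    (H_j, R_j, z_j) triples of J_i = N_i ∪ {i}."""
    y_i = sum(H.T @ np.linalg.inv(R) @ z for H, R, z in neighborhood)
    S_i = sum(H.T @ np.linalg.inv(R) @ H for H, R, _ in neighborhood)
    M_i = np.linalg.inv(np.linalg.inv(P_i) + S_i)      # filtered covariance
    x_hat_i = x_bar_i + M_i @ (y_i - S_i @ x_bar_i)    # filtered estimate
    P_next = A @ M_i @ A.T + B @ Q @ B.T               # prediction
    return x_hat_i, M_i, A @ x_hat_i, P_next
```

With a single-node neighborhood and identity matrices this reduces to a plain Kalman filter step, which is the expected degenerate case.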
4.2. Kalman Consensus Filter
We now present the Kalman consensus filter (KCF). The KCF applies consensus strategy (32) to the state estimate in a distributed Kalman filter, where each node maintains a local Kalman filter. Corresponding to (32), let ζi[τ]=x¯i be the prior estimate at time τ and ζi[τ+1]=x¯ic be the fused prior estimate; each node fuses the prior estimates from its neighbors according to the rule:
(36)x¯ic=x¯i+∑j∈Niβij[τ](x¯j-x¯i).
Using the fused prior estimate x¯ic, the filter estimate at node i could be implemented by
(37)x^i=x¯ic+Mi(yi-Six¯ic)=x¯i+Mi(yi-Six¯i)+(I-MiSi)∑j∈Niβij[τ](x¯j-x¯i).
The local KCF is summarized in Algorithm 1, where τ is the time index for the consensus strategy and Tp∈Z+ is the time interval between prediction updates. One time step k-1→k is equivalent to Tp steps of the consensus time index τ→τ+1; that is, for each node, information exchanges between neighboring nodes occur faster than the prediction update step. The three steps in the KCF (prediction, local filter estimate, and consensus update) are not necessarily sequential.
Algorithm 1: Kalman consensus filter.
Initialization (for node i):
x¯i=x(0), Pi=P0
τ=1, τp=τ+Tp
Loop {Local iteration on node i}
(1) Consensus update
x¯ic=x¯i+∑j∈Niβij[τ](x¯j-x¯i)
yi=∑j∈JiHjTRj-1zj, Si=∑j∈JiHjTRj-1Hj, τ←τ+1
(2) If new observations are taken, then the Kalman consensus state estimate is computed
(3) If it is time for a prediction step (i.e., τ=τp), then run the prediction step
Pi+=AMiAT+BQBT
x¯i+=Ax^i
τp=τ+Tp
End loop
The last term in (37) is the correction of filter estimate x^i compared to the standard Kalman estimator. Intuitively, adding the consensus term in (37) will force local estimators to reach a consensus regarding state estimates. The structure of node i in the KCF algorithm is shown in Figure 1.
The algorithm structure of node i in the KCF.
In [11], the author proposed the following Kalman consensus estimator of the form
(38)x^i=x¯i+Mi(yi-Six¯i)+Ci∑j∈Ni(x¯j-x¯i),
where Ci is the consensus gain of node i. The choice of the consensus gain Ci is free. A poor choice of Ci leads either to a lack of consensus on the estimates (e.g., setting Ci=0 for all i) or to instability of the error dynamics of the filter. One possible choice is to let
(39)Ci=γPi=εPi/(1+∥Pi∥F),
where ε>0 is a relatively small constant and ∥·∥F denotes the Frobenius norm of a matrix. The derivation of the optimal Kalman consensus filter can be found in [10], where it is also shown that the computational complexity of updating the error covariance Pi+=AMiAT+BQBT of the optimal Kalman consensus filter is not scalable in n. To obtain a suboptimal approximation of the KCF that is distributed and scalable in n, we assume that the consensus gains Ci=O(ε) are of order ε. The resulting stable suboptimal KCF is summarized in Algorithm 2.
Algorithm 2: Suboptimal Kalman consensus filter: DKF Algorithm with an estimator that has a rigorously derived consensus term (message passing during one time cycle for node).
Initialization:
x¯i=x(0), Pi=P0, and message mj={yj,Sj,x¯j}
While new data exists do
(1) Compute local observation vector and matrix of node i:
yi=HiTRi-1zi, Si=HiTRi-1Hi
(2) Broadcast message mi={yi,Si,x¯i} to neighbors.
(3) Receive messages from all neighbors.
(4) Compute the local aggregate information vector and matrix:
yi=∑j∈Jiyj, Si=∑j∈JiSj
(5) Compute the Kalman consensus state estimate
x^i=x¯i+Mi(yi-Six¯i)+γPi∑j∈Ni(x¯j-x¯i)
Mi=(Pi-1+Si)-1
γ=ε/(1+∥Pi∥), ∥X∥=(tr(XTX))1/2
(6) Update the state of the Kalman consensus filter:
Pi+=AMiAT+BQBT
x¯i+=Ax^i
End While
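Steps (5)-(6) of Algorithm 2 can be sketched as follows. The function name `kcf_step` and the dictionary layout for the node's local data are hypothetical conventions for illustration.

```python
import numpy as np

def kcf_step(node, neighbor_priors, A, B, Q, eps=0.01):
    """Suboptimal-KCF estimate and covariance update per (38)-(39).
    `node` holds x_bar, P, and the local aggregates y, S of step (4);
    `neighbor_priors` is the list of prior estimates x_bar_j, j in N_i."""
    x_bar, P, y, S = node["x_bar"], node["P"], node["y"], node["S"]
    M = np.linalg.inv(np.linalg.inv(P) + S)
    gamma = eps / (1.0 + np.linalg.norm(P, "fro"))      # (39)
    consensus = sum((xj - x_bar for xj in neighbor_priors),
                    np.zeros_like(x_bar))
    x_hat = x_bar + M @ (y - S @ x_bar) + gamma * P @ consensus   # (38)
    P_next = A @ M @ A.T + B @ Q @ B.T
    return x_hat, A @ x_hat, P_next
```

When all neighbor priors agree with the local prior, the consensus term vanishes and the update reduces to a plain local Kalman filter step.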
5. Distributed Filter: Consensus on Information Matrix
The KCF discussed previously applies a consensus strategy on the prior estimate within the Kalman filter and improves the state estimate of the KF. However, the error covariance matrix Mi is not improved, because each node in the KCF only fuses the prior estimates from its neighbors and neglects the useful information in the error covariance matrices. In this section, we adopt an information-matrix-weighted consensus strategy to improve the consensus-based distributed Kalman filter algorithm for estimation fusion in sensor networks. We refer to this method as the information consensus filter (ICF) [16]. Before presenting the ICF algorithm, we need to discuss a more primitive DKF algorithm called the local information filter (LIF), which forms the basis of the ICF.
5.1. Local Information Filter
To distribute the estimation of the global state vector x in the CIF, we implement a local information filter (LIF) at each sensor i, which is based on the sensor model (11) and can be derived from the local Kalman filter (LKF) in (35) by writing it in information form. Each LIF computes local objects (matrices and vectors), which are then fused (if required) by exchanging information among the neighbors. In the LIF, the centralized knowledge of the global state estimate that exists in the CIF is not available; however, it can be obtained by fusing the local state vectors.
Let the inverses of Mi and Pi be the local information matrices, I^i and I¯i. Let i^i and i¯i be the local information vector. We have the following relations:
(40)I^i=Mi-1,I¯i=Pi-1,i^i=Mi-1x^i,i¯i=Pi-1x¯i.
Then we get the following simpler form of the filter in (35).
LIF Iterations. The local information filtering iterations for node i are in the form
(41)I^i=I¯i+Si,i^i=i¯i+yi,I¯i+=(AI^i-1AT+BQBT)-1,i¯i+=I¯i+AI^i-1i^i.
5.2. Information Consensus Filter
We now present the information consensus filter (ICF). The ICF uses consensus strategy (32) on both the information state and the information matrix in a distributed Kalman filter, where each node maintains a local information filter. Recalling (32), let i¯ic be the fused local information vector and I¯ic the fused local information matrix; each node fuses the local information from its neighbors according to the rule:
(42)i¯ic=i¯i+∑j∈Niβij[τ](i¯j-i¯i),I¯ic=I¯i+∑j∈Niβij[τ](I¯j-I¯i).
Using the fused local information vector and matrix, i¯ic and I¯ic, the local ICF is summarized in Algorithm 3.
Algorithm 3: Information consensus filter.
Initialization (for node i):
i¯i=i¯(0), I¯i=I¯(0)
τ=1, τp=τ+Tp
Loop {Local iteration on node i}
(1) Consensus update
i¯ic=i¯i+∑j∈Niβij[τ](i¯j-i¯i)
I¯ic=I¯i+∑j∈Niβij[τ](I¯j-I¯i)
yi=∑j∈JiHjTRj-1zj, Si=∑j∈JiHjTRj-1Hj, τ←τ+1
(2) If new observations are taken, then the information consensus estimate is computed
i^i=i¯ic+yi=i¯i+yi+∑j∈Niβij[τ](i¯j-i¯i)
I^i=I¯ic+Si=I¯i+Si+∑j∈Niβij[τ](I¯j-I¯i)
(3) If it is time for a prediction step (i.e., τ=τp), then run the prediction step
I¯i+=(AI^i-1AT+BQBT)-1
i¯i+=I¯i+AI^i-1i^i
τp=τ+Tp
End Loop
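The consensus and filter updates of Algorithm 3 can be sketched compactly; `icf_update`, the neighbor-pair list, and the explicit weight list are illustrative conventions, not part of the original algorithm's interface.

```python
import numpy as np

def icf_update(i_bar_i, I_bar_i, nbr_pairs, weights, y_i, S_i):
    """ICF consensus step (42) followed by the filter update of
    Algorithm 3: fuse the local information pair (i_bar_i, I_bar_i)
    with the neighbors' pairs, then add the local observation
    contribution (y_i, S_i)."""
    i_c = i_bar_i + sum(w * (i_j - i_bar_i)
                        for w, (i_j, _) in zip(weights, nbr_pairs))
    I_c = I_bar_i + sum(w * (I_j - I_bar_i)
                        for w, (_, I_j) in zip(weights, nbr_pairs))
    return i_c + y_i, I_c + S_i     # (i_hat, I_hat)
```

Note that, unlike the KCF, the information matrix itself is fused here, which is exactly the improvement to Mi discussed below.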
Now we make a comparison between ICF and KCF based on the state estimate x^i and the error covariance matrix Mi. Let βii[τ]=1-∑j∈Niβij[τ]; we can rewrite (42) as
(43)i¯ic=βii[τ]i¯i+∑j∈Niβij[τ]i¯j,I¯ic=βii[τ]I¯i+∑j∈Niβij[τ]I¯j.
Then we have
(44)x¯ic=(I¯ic)-1i¯ic=(I¯ic)-1[βii[τ]i¯i+∑j∈Niβij[τ]i¯j]=Wiix¯i+∑j∈NiWijx¯j.
Here we use i¯l=I¯lx¯l and Wil=βil[τ](I¯ic)-1I¯l for l∈Ji. Then we finally get
(45)x^i=x¯ic+Mi(yi-Six¯ic)=Wiix¯i+Mi(yi-SiWiix¯i)+(I-MiSi)∑j∈NiWijx¯j=((I¯ic)-1I¯i)x¯i+Mi[yi-Si((I¯ic)-1I¯i)x¯i]+(I-MiSi)∑j∈Niβij[τ][(I¯ic)-1I¯jx¯j-(I¯ic)-1I¯ix¯i],Mi=(I^i)-1=[I¯i+Si+∑j∈Niβij[τ](I¯j-I¯i)]-1,
where the error covariance matrix Mi is improved by a consensus term compared with Mi=(I^i)-1=[I¯i+Si]-1 in the KCF, and the state estimate x^i is also corrected by a factor ((I¯ic)-1I¯i) compared with x^i in the KCF.
5.3. The Optimization of Consensus Weights in ICF
The weights βij[τ] are important parameters of the ICF, and we can use the Metropolis weights or the maximum-degree weights to determine them. In fact, a more reasonable approach is to choose different weights according to the fused local information Ii and ii. Here, a scheme to optimize the consensus weights is proposed based on the following objective function:
(46)Fi=αi1[tr((I¯ic)-1)/tr((I¯i)-1)]+αi2[(∥i¯ic-i¯i,avc∥2+∑j∈Ni∥i¯j-i¯i,avc∥2)/∑j∈Ji∥i¯j-i¯i,av∥2],
where
(47)i¯i,avc=(1/(1+di))(i¯ic+∑j∈Nii¯j),i¯i,av=(1/(1+di))(i¯i+∑j∈Nii¯j),
and αi1, αi2 are the weight coefficients with αi1+αi2=1 (0<αi1<1, 0<αi2<1).
The first term in the objective function Fi assesses the prior-estimate error covariance of node i after fusing the local information of its neighbors, and the second term assesses the consensus between the fused local information vector of node i and the prior estimates of its neighbors. Based on the objective function Fi, the consensus-weight optimization problem can be described as
(48)βi*=argminβiFis.t.βij≥0,(i,j)∈E[τ]βij=0,(i,j)∉E[τ]∥βi∥1=1,
where βi=[βi1,βi2,…,βin]. To solve the optimization problem (48), we only need the local information of node i and that of its neighbors. We refer to this method as the weights-optimized information consensus filter (WO-ICF).
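One possible way to solve (48) numerically is a constrained solver over the neighbor weights, with the self-weight taken as 1 minus their sum. The sketch below is an assumption-laden illustration (function name, SciPy's SLSQP solver, and the small regularizer in the denominator are our choices), not the paper's prescribed solver.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_weights(i_bar, I_bar, nbr_i, nbr_I, a1=0.5, a2=0.5):
    """Sketch of the WO-ICF weight optimization (46)-(48) at one node.
    The variable b holds the neighbor weights beta_ij, j in N_i; the
    self-weight beta_ii = 1 - sum(b) is implicit, so ||beta_i||_1 = 1."""
    d = len(nbr_i)
    i_all = [i_bar] + list(nbr_i)              # information vectors on J_i
    i_av = sum(i_all) / (1 + d)                # i_bar_{i,av} of (47)

    def F(b):
        I_c = I_bar + sum(bj * (Ij - I_bar) for bj, Ij in zip(b, nbr_I))
        i_c = i_bar + sum(bj * (ij - i_bar) for bj, ij in zip(b, nbr_i))
        i_avc = (i_c + sum(nbr_i)) / (1 + d)   # i_bar_{i,avc} of (47)
        term1 = np.trace(np.linalg.inv(I_c)) / np.trace(np.linalg.inv(I_bar))
        num = np.sum((i_c - i_avc) ** 2) + sum(np.sum((ij - i_avc) ** 2)
                                               for ij in nbr_i)
        den = sum(np.sum((ij - i_av) ** 2) for ij in i_all) + 1e-12
        return a1 * term1 + a2 * num / den     # objective (46)

    res = minimize(F, np.full(d, 1.0 / (1 + d)), method="SLSQP",
                   bounds=[(0.0, 1.0)] * d,
                   constraints={"type": "ineq",
                                "fun": lambda b: 1.0 - b.sum()})
    return res.x
```

As a plausibility check: when a neighbor carries much more information (larger I¯j) and the information vectors agree, the optimizer shifts nearly all the weight to that neighbor, since this shrinks the fused covariance term of (46).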
6. Numerical Simulations
Let the linear system under consideration be represented by a second-order discrete time-varying model:
(49)x(k+1)=Ax(k)+Bw,
where A=I2+δA0+(δ2/2)A02+(δ3/6)A03 with A0=2[0110] and δ=0.015, B=δB0 with B0=I2, and Q=25I2. The initial conditions are x0=[15,-10]T and P0=20I2. A sensor network with 20 randomly located nodes is used in this experiment (see Figure 2). The local observation matrix for sensor i is Hi=[0110], and the local observation noise covariance is Ri=100I2 for i≤10 and Ri=3000I2 otherwise.
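The model parameters above translate directly into code; A is a third-order truncation of the matrix exponential exp(δA0), so for this symmetric A0 it is close to the cosh/sinh form of the exact exponential. Only the quantities stated in the text are used here.

```python
import numpy as np

# Model of Section 6: A ≈ exp(delta * A0) truncated at third order.
delta = 0.015
A0 = 2 * np.array([[0.0, 1.0], [1.0, 0.0]])
A = (np.eye(2) + delta * A0 + delta**2 / 2 * A0 @ A0
     + delta**3 / 6 * A0 @ A0 @ A0)
B = delta * np.eye(2)          # B = delta * B0, B0 = I2
Q = 25 * np.eye(2)
x0 = np.array([15.0, -10.0])
P0 = 20 * np.eye(2)

# Heterogeneous sensors: the first 10 are accurate, the rest noisy.
H_i = np.array([[0.0, 1.0], [1.0, 0.0]])
R_accurate, R_noisy = 100 * np.eye(2), 3000 * np.eye(2)
```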
A sensor network with 20 nodes and 51 links.
Define the averaged estimation error E(k) and the averaged consistency estimation error D(k) as the algorithm performance metrics, which can be computed as follows:
(50)E(k)=1n∑i=1n(x^i(k)-x(k))T(x^i(k)-x(k)),D(k)=1n∑i=1n(x^i(k)-x^av(k))T(x^i(k)-x^av(k)),
where x^av(k)=(1/n)∑i=1nx^i(k) is the averaged estimation of state.
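The two metrics of (50) are straightforward to compute from the stacked node estimates; `performance_metrics` is an illustrative helper name.

```python
import numpy as np

def performance_metrics(x_hats, x_true):
    """Averaged estimation error E(k) and averaged consistency
    estimation error D(k) of (50).
    x_hats: (n, dim) array of the n node estimates at time k."""
    E = np.mean(np.sum((x_hats - x_true) ** 2, axis=1))
    dev = x_hats - x_hats.mean(axis=0)        # deviation from x_hat_av
    D = np.mean(np.sum(dev ** 2, axis=1))
    return E, D
```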
Figure 3 shows the averaged estimation error of the different algorithms. We can see that the ICF and the WO-ICF behave similarly (with comparable performance), and the averaged estimation accuracy of the ICF and WO-ICF is improved compared to the KCF. After 50 iterations, their performance is very close to that of the CKF, because average consensus is achieved through continual information exchange, fusion, and filtering.
The averaged estimation errors of different algorithms.
Figure 4 shows that our WO-ICF performs significantly better than both the KCF and the ICF: it converges fastest, and the consistency of the estimates across nodes is improved by optimizing the consensus weights.
The averaged consistency estimation errors of different algorithms.
Figure 5 compares the traces of the averaged estimation covariance matrices tr((1/n)∑i=1nMi(k)) for the different algorithms. A quick look at Figure 5 reveals that both the ICF and our WO-ICF perform significantly better than the KCF, because they adopt the information-matrix-weighted consensus strategy. Furthermore, by optimizing the consensus weights, the error covariance matrix Mi is improved significantly compared to the ICF.
The traces of the averaged estimation covariance matrices of different algorithms.
7. Conclusions
In this paper, a description of the existing distributed filters with consensus strategies is presented, covering the Kalman consensus filter (KCF) and the information consensus filter (ICF), and an in-depth comparison between the KCF and the ICF is made. Based on the ICF, the weights-optimized information consensus filter (WO-ICF) is proposed to optimize the consensus weights. Simulations show that both the ICF and the WO-ICF perform better than the KCF: they improve not only the state estimate but also the error covariance matrix, and the proposed WO-ICF achieves better consistency-estimation performance than the ICF. Compared with the existing consensus filters, our WO-ICF achieves the best performance and comes closest to the optimal centralized performance.
Acknowledgments
This work is supported in part by NSFC (Natural Science Foundation of China) Project 61202400, Natural Science Foundation of Zhejiang Project Q12F020078, Shaoxing Project of Science and Technology 2011A22013, and Wenzhou Projects of Science and Technology H20090054, S20100029, and H20100095.
References
[1] N. A. Lynch, Distributed Algorithms, Morgan Kaufmann, 1997.
[2] B. D. O. Anderson and J. B. Moore, Optimal Filtering, Prentice Hall, Englewood Cliffs, NJ, USA, 1979.
[3] A. E. Bryson and Y. C. Ho, Applied Optimal Control, Hemisphere, New York, NY, USA, 1975.
[4] D. P. Spanos, R. Olfati-Saber, and R. M. Murray, "Approximate distributed Kalman filtering in sensor networks with quantifiable performance," in Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN '05), pp. 133–139, April 2005.
[5] R. Olfati-Saber and R. M. Murray, "Consensus problems in networks of agents with switching topology and time-delays," IEEE Transactions on Automatic Control, vol. 49, no. 9, pp. 1520–1533, 2004.
[6] R. Olfati-Saber and R. M. Murray, "Consensus protocols for networks of dynamic agents," in Proceedings of the American Control Conference, vol. 2, pp. 951–956, June 2003.
[7] R. Olfati-Saber, "Distributed Kalman filter with embedded consensus filters," in Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference (CDC-ECC '05), pp. 8179–8184, December 2005.
[8] D. Spanos, R. Olfati-Saber, and R. M. Murray, "Dynamic consensus on mobile networks," in Proceedings of the 16th IFAC World Congress, Prague, Czech Republic, 2005.
[9] R. Carli, A. Chiuso, L. Schenato, and S. Zampieri, "Distributed Kalman filtering based on consensus strategies," IEEE Journal on Selected Areas in Communications, vol. 26, no. 4, pp. 622–633, 2008.
[10] R. Olfati-Saber, "Kalman-Consensus filter: optimality, stability, and performance," in Proceedings of the 48th IEEE Conference on Decision and Control held jointly with the 28th Chinese Control Conference (CDC/CCC '09), Shanghai, China, pp. 7036–7042, December 2009.
[11] R. Olfati-Saber, "Distributed Kalman filtering for sensor networks," in Proceedings of the 46th IEEE Conference on Decision and Control (CDC '07), pp. 5492–5498, December 2007.
[12] Y. Bar-Shalom, "On the track-to-track correlation problem," IEEE Transactions on Automatic Control, vol. 26, no. 2, pp. 571–572, 1981.
[13] S. Utete and H. F. Durrant-Whyte, "Reliability in decentralized data fusion networks," in Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI '94), pp. 215–221, October 1994.
[14] S. Grime and H. F. Durrant-Whyte, "Data fusion in decentralized sensor networks," Control Engineering Practice, vol. 2, no. 5, pp. 849–863, 1994.
[15] A. G. O. Mutambara, Decentralized Estimation and Control for Multisensor Systems, CRC Press, Boca Raton, Fla, USA, 1998.
[16] D. W. Casbeer and R. Beard, "Distributed information filtering using consensus filters," in Proceedings of the American Control Conference (ACC '09), St. Louis, Mo, USA, pp. 1882–1887, June 2009.
[17] W. Ren, R. W. Beard, and E. M. Atkins, "Information consensus in multivehicle cooperative control," IEEE Control Systems Magazine, vol. 27, no. 2, pp. 71–82, 2007.
[18] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[19] L. Xiao, S. Boyd, and S. Lall, "A space-time diffusion scheme for peer-to-peer least-squares estimation," in Proceedings of the 5th International Conference on Information Processing in Sensor Networks (IPSN '06), pp. 168–176, April 2006.
[20] D. B. Kingston and R. W. Beard, "Discrete-time average-consensus under switching network topologies," in Proceedings of the American Control Conference, pp. 3551–3556, June 2006.
[21] D. W. Casbeer and R. W. Beard, "Multi-static radar target tracking using information consensus filters," in Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit, Chicago, Ill, USA, August 2009.