We study the reconstruction of time-varying sparse signals in a wireless sensor network under severe bandwidth and energy constraints. A novel particle filter algorithm is proposed to deal with coarsely quantized innovations. To recover the sparsity pattern of the particle filter estimate, we impose a sparsity constraint on the filter estimate by means of two methods. Simulation results demonstrate that the proposed algorithms provide performance comparable to that of full-information (i.e., unquantized) filtering schemes, even when only 1 bit is transmitted to the fusion center.
1. Introduction
In recent years, wireless sensor networks (WSNs) have been widely applied in many areas. A WSN is composed of a large number of battery-powered sensors connected via wireless links. Reconstruction of time-varying signals is a key technology for WSNs and plays an important role in many of their applications (see, e.g., [1–3] and the references therein). The lifetime of a WSN depends on the lifespan of its battery-powered sensors. To prolong sensor lifespan, sensors are often allowed to transmit only partial (e.g., quantized/encoded) information to a fusion center. Therefore, quantization of sensor measurements has been widely considered in practical applications [4–6]. Moreover, it may be infeasible to quantize and transmit sensor measurements directly: for unstable systems the states may grow unbounded, so a large number of quantization bits, and hence higher bandwidth and quantizer rate, would be required at the sensors. However, as demonstrated in [1–3], filtering schemes relying on quantized innovations can provide performance comparable to that of full-information (i.e., unquantized/uncoded) filtering schemes.
On the other hand, because signals in many applications are sparse, recently developed compressed sensing techniques have been extensively applied in WSNs [7–9]. This enables reconstruction of sparse signals from far fewer measurements. Exploiting sparsity therefore lessens the communication demands between sensors and the fusion center, saving both bandwidth and energy [9, 10]. Reconstruction of time-varying sparse signals in WSNs was recently studied in [11] using the group lasso and fused lasso techniques; this is a batch algorithm which relies on quadratic programming to recover the unknown signal. A computationally efficient recursive lasso algorithm (R-lasso) was introduced in [12] for recursively estimating the sparse signal at each point in time. In [13], the SPARLS algorithm uses the expectation-maximization technique to estimate a sparse tap-weight vector from a stream of noisy observations. Recently, many researchers have attempted to solve the problem within the classic framework of signal estimation, such as Kalman filtering (KF) and its variants [14–16]. The KF-based approaches can be divided into two classes: hybrid and self-reliant. In the former, peripheral optimization schemes estimate the support set of the sparse signal, and a reduced-order Kalman filter then reconstructs the signal [14]. In the latter, the sparsity constraint is enforced via a so-called pseudo-measurement (PM) [15]. In [15], two stages of Kalman filtering are employed: one for tracking the temporal changes and the other for enforcing the sparsity constraint at each stage. In [16], an unscented Kalman filter was proposed for the pseudo-measurement update stage.
To the best of the authors’ knowledge, there is limited work on recursive compressed sensing techniques that consider quantization as a means of further reducing the required bandwidth and power resources. However, increasing attention has recently been paid to developing algorithms for reconstructing sparse signals from quantized observations [17–21]. In [17], a convex relaxation approach for reconstructing a sparse signal from quantized observations was proposed, using an l1-norm regularization term and two convex cost functions. In [18], Qiu and Dogandzic proposed an unrelaxed probabilistic model with an l0-norm constrained signal space and derived an expectation-maximization algorithm. In addition, [19–21] investigated the extreme case of reconstructing a sparse source from 1-bit quantized measurements.
In this paper, we study reconstruction of time-varying sparse signals in WSNs by using quantized measurements. Our contributions are as follows:
We propose an improved particle filter algorithm which extends the fundamental results of [2] to the multiple-observation case by employing the information filter form. The algorithm in [2] is derived under the assumption that the fusion center has access to only one measurement source at each time step. The extension to a multiple-measurement scenario is straightforward but, in general, may lead to a computationally involved scheme. In contrast, the proposed algorithm can be implemented in a computationally efficient sequential processing form and avoids any matrix inversion. Moreover, the proposed algorithm offers better numerical stability under inaccurate initialization.
We propose a new method to impose the sparsity constraint on the particle filter estimate. Compared to the iterative method of [15], the resulting method is noniterative and easy to implement.
In particular, the system has an underlying state-space structure, where the state vector is sparse. In each time interval, the fusion center transmits the predicted signal estimate and its corresponding error covariance to a selected subset of sensors. The selected sensors compute quantized innovations and transmit them to the fusion center. The fusion center reconstructs the sparse state by employing the proposed particle filter algorithm and sparse cubature point filter method.
This paper is organized as follows. Section 2 gives a brief overview of basic problems in compressed sensing and introduces sparse signal recovery using Kalman filtering with an embedded pseudo-measurement. In Section 3, we describe the system model. Section 4 develops a particle filter with quantized innovations. To recover the sparsity pattern of the particle filter state estimate, a sparse cubature point filter method with lower complexity than the iterative PM update method is developed in Section 5. The complete adaptively recursive reconstruction algorithm for sparse signals with quantized innovations, together with an analysis of its complexity, is presented in Section 6. Section 7 contains simulation results, and conclusions are drawn in Section 8.
Notation. $N_d(\mu,\Sigma)$ denotes a $d$-dimensional Gaussian random variable with mean $\mu$ and covariance $\Sigma$; the corresponding $d$-dimensional Gaussian probability density is denoted by $\phi_d(\cdot;\mu,\Sigma)$. $\Phi(x;\mu,\Sigma)$ denotes the Gaussian probability distribution with mean $\mu$ and covariance $\Sigma$, and $\Phi(S;\mu,\Sigma)$ denotes the truncated probability, where $S$ belongs to a Borel $\sigma$-field. $u_{i:j}$ denotes the collection of random variables $\{u_i,\dots,u_j\}$. Boldfaced uppercase and lowercase symbols represent matrices and vectors, respectively. For a vector $x$, $x(i)$ denotes its $i$th component and $\mathrm{Cov}[x]$ denotes the error covariance $E[(x-Ex)(x-Ex)^T]$. For a matrix $R$, $R(l,l)$ denotes the $(l,l)$ entry of $R$.
2. Sparse Signal Reconstruction Using Kalman Filter2.1. Sparse Signal Recovery
Compressive sensing is a framework for signal sensing and compression that enables representation of a sensed signal with fewer samples than required by classical sampling theory. Consider a sparse random discrete-time process $\{x_k\}_{k\ge 0}$ in $\mathbb{R}^N$ with $\|x_k\|_0 \ll N$; $x_k$ is called $K$-sparse if $\|x_k\|_0 = K$. Assume $x_k$ evolves according to the dynamical equations
$$x_{k+1} = F_k x_k + w_k, \quad (1)$$
$$y_k = H_k x_k + v_k, \quad (2)$$
where $F_k \in \mathbb{R}^{N\times N}$ is the state transition matrix and $H_k \in \mathbb{R}^{M\times N}$ is the measurement matrix. Moreover, $\{w_k\}_{k\ge 0}$ and $\{v_k\}_{k\ge 0}$ are zero-mean white Gaussian sequences with covariances $W_k \succeq 0$ and $R_k \succeq 0$, respectively, and $y_k$ is the $M$-dimensional linear measurement of $x_k$. When $M < N$ and $\mathrm{rank}(H_k) < N$, reconstructing $x_k$ from this underdetermined system is an ill-posed problem.
However, [22, 23] have shown that $x_k$ can be accurately reconstructed by solving the following optimization problem:
$$\min_{\hat{x}_k \in \mathbb{R}^N} \|\hat{x}_k\|_0 \quad \text{s.t.} \quad \|y_k - H_k \hat{x}_k\|_2^2 \le \epsilon. \quad (3)$$
Unfortunately, the above optimization problem is NP-hard and cannot be solved efficiently. Fortunately, as shown in [23], if the measurement matrix $H_k$ obeys the so-called restricted isometry property (RIP), the solution of (3) can be obtained with overwhelming probability by solving the convex optimization
$$\min_{\hat{x}_k \in \mathbb{R}^N} \|\hat{x}_k\|_1 \quad \text{s.t.} \quad \|y_k - H_k \hat{x}_k\|_2^2 \le \epsilon. \quad (4)$$
This is a fundamental result in compressed sensing (CS). Moreover, for reconstructing a K-sparse signal xk∈RN, M≥C·Klog(N/K) linear measurements are needed, where C is a fixed constant.
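As an illustration, the convex program (4) is often solved in its Lagrangian (lasso) form; a standard solver for that form is iterative soft-thresholding (ISTA). The sketch below is not the paper's method, and the matrix sizes, regularization weight, and iteration count are arbitrary choices for the example:

```python
import numpy as np

def ista(H, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||y - H x||^2 + lam*||x||_1,
    an unconstrained relaxation of the l1 problem in (4)."""
    L = np.linalg.norm(H, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = x - H.T @ (H @ x - y) / L             # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
N, M, K = 64, 32, 4                               # toy sizes, not the paper's setup
H = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))     # RIP-friendly Gaussian matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.normal(0.0, 5.0, K)
y = H @ x_true + 0.01 * rng.normal(size=M)
x_hat = ista(H, y)
```

Despite $M < N$, the recovered `x_hat` is close to `x_true` because the signal is $K$-sparse with $M \gtrsim K\log(N/K)$.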
2.2. Pseudo-Measurement Embedded Kalman Filtering
For the system given in (1) and (2), the estimate of $x_k$ provided by Kalman filtering is the solution of the unconstrained $\ell_2$ minimization problem
$$\min_{\hat{x}_k \in \mathbb{R}^N} E\big[\|x_k - \hat{x}_k\|_2^2 \mid Y_k\big], \quad (5)$$
where $E[\cdot \mid Y_k]$ is the conditional expectation given the measurements $Y_k \triangleq \{y_1,\dots,y_k\}$.
As shown in [15], the stochastic analogue of (4) is
$$\min_{\hat{x}_k \in \mathbb{R}^N} \|\hat{x}_k\|_1 \quad \text{s.t.} \quad E\big[\|x_k - \hat{x}_k\|_2^2 \mid Y_k\big] \le \epsilon_k, \quad (6)$$
and its dual problem is
$$\min_{\hat{x}_k \in \mathbb{R}^N} E\big[\|x_k - \hat{x}_k\|_2^2 \mid Y_k\big] \quad \text{s.t.} \quad \|\hat{x}_k\|_1 \le \epsilon_k. \quad (7)$$
In [15], the authors incorporate the inequality constraint $\|\hat{x}_k\|_1 \le \epsilon_k$ into the filtering process using a fictitious pseudo-measurement equation
$$0 = \tilde{H}_k x_k - \epsilon_k', \quad (8)$$
where $\tilde{H}_k = \mathrm{sign}(x_k^T)$ and $\epsilon_k' \sim N(0, R_{\epsilon'})$ serves as the fictitious measurement noise. The constrained optimization problem (7) can then be solved in the Kalman filtering framework; the specific method is summarized as the CS-embedded KF (CSKF) algorithm with $\ell_1$-norm constraint in [15]. It is apparent from (8) that the measurement matrix $\tilde{H}_k$ is state-dependent; it can be approximated by $\hat{H}_k = \mathrm{sign}(\hat{x}_k^T)$, where $\mathrm{sign}(\cdot)$ is the sign function. The pseudo-measurement equation was interpreted in a Bayesian filtering framework, and a semi-Gaussian prior distribution was discussed in [15]. Furthermore, $R_{\epsilon'}$ is a tuning parameter which regulates the tightness of the $\ell_1$-norm constraint on the state estimate $\hat{x}_k$.
3. System Model and Problem Statement
Consider a WSN configured in a star topology (see Figure 1 for an example). In the star topology, communication is established between the sensors and a single central controller, called the fusion center (FC). The FC is mains-powered, while the sensors are battery-powered, and battery replacement or recharging at short intervals is impractical. Data is exchanged only between the FC and a sensor. In our application, $M$ sensors observe linear combinations of a sparse time-varying signal and send the observations to the FC for signal reconstruction. Here, our attention is focused on Gaussian state-space models; that is, for sensor $l$, the signal and the observation satisfy the discrete-time linear system
$$x_{k+1} = F_k x_k + w_k, \qquad y_{l,k} = h_{l,k} x_k + v_{l,k}, \quad (9)$$
where $h_{l,k} \in \mathbb{R}^{1\times N}$ is the local observation matrix and $x_k \in \mathbb{R}^N$ denotes the time-varying state vector, which is sparse in some transform domain; that is, $x_k = \Psi s_k$, where the majority of the components of $s_k$ are zero and $\Psi$ is an appropriate basis. Without loss of generality, we assume that $x_k$ itself is sparse and has at most $K$ nonzero components whose locations are unknown ($K \ll N$). The fusion center gathers the observations of all $M$ sensors in the $M$-dimensional global real-valued vector $y_k$ and maintains the global observation matrix $H_k = [h_{1,k}^T\; h_{2,k}^T\; \cdots\; h_{M,k}^T]^T \in \mathbb{R}^{M\times N}$, which satisfies the restricted isometry property (RIP) imposed in compressed sensing. The global observation model is then given by (2). The sensors themselves are oblivious to the sparsity. Moreover, $w_k$ and $v_k$ are uncorrelated zero-mean Gaussian white noises with covariances $W_k$ and $R_k$, respectively.
Network topology.
The goal of the WSN is to form an estimate of the sparse signal $x_k$ at the fusion center. Due to the energy and bandwidth constraints in WSNs, the observed analog measurements need to be quantized/coded before being sent to the fusion center; moreover, a quantized innovation scheme can be used. At time $k$, the $l$th sensor observes a measurement $y_{l,k}$ and computes the innovation $e_{l,k} = y_{l,k} - h_l \hat{x}_{k|k-1}$, where $h_l \hat{x}_{k|k-1}$, together with the innovation variance $\mathrm{Cov}[e_{l,k}]$, is received from the fusion center. The innovation $e_{l,k}$ is then quantized to $q_{l,k}$ and sent to the fusion center. As the fusion center has sufficient energy and transmission bandwidth, the data it transmits need not be quantized. Which sensor is active at time $k$, and consequently which innovation $e_{l,k}$ gets transmitted, depends on the underlying scheduling algorithm. The quantized transmission of $e_{l,k}$ also implies that $q_{l,k}$ can be viewed as a nonlinear function of the sensor's analog observation. The procedure is illustrated in Figure 2.
System model.
4. A Particle Filter Algorithm with Coarsely Quantized Observations
Most earlier works on estimation from quantized measurements used numerical integration methods to approximate the optimal state estimate and assumed that the conditional density is approximately Gaussian. However, this assumption does not hold for coarsely quantized measurements, as demonstrated in the following.
Firstly, suppose $\{x_k\}$ and $\{y_{l,k}\}$ are jointly Gaussian; then it is well known that the probability density of $x_k$ conditioned on $y_{l,0:k}$ is Gaussian with the following characterization:
$$x_k \mid y_{l,0:k} \sim \eta_k + \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1} y_{l,0:k}, \quad \text{where } \eta_k \sim N_d\Big(0,\ \underbrace{\Sigma_{x_k} - \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1} \Sigma_{y_{l,0:k} x_k}}_{\triangleq\, \Sigma^{\Delta}_{x_k y_{l,0:k}}}\Big), \quad (10)$$
where $y_{l,0:k} \triangleq \{y_{l,0},\dots,y_{l,k}\}$. When $\{x_k\}$ and $\{y_{l,k}\}$ follow the linear dynamical equations in (9), the covariance $\Sigma^{\Delta}_{x_k y_{l,0:k}} \triangleq P_{k|k} = \mathrm{Cov}[x_k - E[x_k \mid y_{l,k}]]$ can be propagated by the Riccati recursion of the KF. Let $\{q_{l,k}\}$ denote the quantized measurements obtained by quantizing $\{y_{l,k}\}$; that is, $q_{l,k}$ is a measurable function of $y_{l,0:k}$. It will be shown that the probability density of $x_k$ conditioned on the quantized measurements $q_{l,0:k} \triangleq \{q_{l,0},\dots,q_{l,k}\}$ has a characterization similar to (10). The result is stated in the following lemma.
Lemma 1 (akin to Lemma 3.1 in [2]).
The state $x_k$ conditioned on the quantized measurements $q_{l,0:k}$ can be written as a sum of two independent random variables:
$$x_k \mid q_{l,0:k} \sim \eta_k + \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1} \big(y_{l,0:k} \mid q_{l,0:k}\big), \quad \text{where } \eta_k \sim N_d\big(0, \Sigma^{\Delta}_{x_k y_{l,0:k}}\big). \quad (11)$$
Proof .
See Appendix.
It should be noted that the difference between (10) and (11) is the replacement of $y_{l,0:k}$ by the random variable $y_{l,0:k} \mid q_{l,0:k}$. Apparently, $y_{l,0:k} \mid q_{l,0:k}$ is a multivariate Gaussian random variable truncated to lie in the region defined by $q_{l,0:k}$. So, the covariance of $x_k \mid q_{l,0:k}$ can be expressed as
$$\mathrm{Cov}[x_k \mid q_{l,0:k}] = \Sigma^{\Delta}_{x_k y_{l,0:k}} + \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1}\, \mathrm{Cov}[y_{l,0:k} \mid q_{l,0:k}]\, \Sigma_{y_{l,0:k}}^{-1} \Sigma_{y_{l,0:k} x_k}. \quad (12)$$
Under high-rate quantization, $y_{l,0:k} \mid q_{l,0:k}$ converges to $y_{l,0:k}$ and $x_k \mid q_{l,0:k}$ is approximately Gaussian. In general, however, Lemma 1 shows that $x_k \mid q_{l,0:k}$ is not Gaussian. For nonlinear and non-Gaussian signal reconstruction problems, a promising approach is particle filtering [24]. Particle filtering is based on sequential Monte Carlo methods and optimal recursive Bayesian filtering; it uses a set of particles with associated weights to approximate the posterior distribution. The general form of the standard (bootstrap) particle filter is outlined below.
Algorithm 0 (standard particle filtering (SPF))
Initialization. Initialize the $N_p$ particles, $x^i_{0|-1} \sim p(x_0)$, and set $\hat{x}_{0|-1} = 0$.
At time $k$, using the measurement $q_{l,k} = Q(y_{l,k})$, the importance weights are calculated as $\omega^i_k = p\big(q_{l,k} \mid x_k = x^i_{k|k-1}, q_{l,0:k-1}\big)$.
Measurement update is given by
$$\hat{x}^{pf}_{k|k} = \sum_{i=1}^{N_p} \bar{\omega}^i_k\, x^i_{k|k-1}, \quad (13)$$
where $\bar{\omega}^i_k$ are the normalized weights; that is,
$$\bar{\omega}^i_k = \frac{\omega^i_k}{\sum_{j=1}^{N_p} \omega^j_k}. \quad (14)$$
Resample $N_p$ particles with replacement as follows. Generate i.i.d. random variables $\{J_\iota\}_{\iota=1}^{N_p}$ such that $P(J_\iota = i) = \bar{\omega}^i_k$:
$$x^\iota_{k|k} = x^{J_\iota}_{k|k-1}. \quad (15)$$
For $i = 1,\dots,N_p$, predict new particles according to
$$x^i_{k+1|k} \sim p\big(x_{k+1} \mid x_k = x^i_{k|k}, q_{l,0:k}\big), \quad \text{i.e., } x^i_{k+1|k} = F_k x^i_{k|k}. \quad (16)$$
Set $\hat{x}^{pf}_{k+1|k} = F_k \hat{x}^{pf}_{k|k}$, set $k = k + 1$, and iterate from Step (2).
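The cycle above can be sketched for a 1-bit (sign-of-innovation) message, assuming the weight model $\omega^i_k = \Phi\big(q\,h(x^i - \hat{x});0,R_e\big)$ and the noise-free prediction of Step (5); the function names and dimensions are illustrative, not from the paper:

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def spf_step(particles, q, h, x_pred, re, F, rng):
    """One cycle of Algorithm 0 for a 1-bit message q = sign(y - h x_pred):
    weight (Step 2), estimate (Step 3), resample (Step 4), predict (Step 5)."""
    z = q * (particles @ h - h @ x_pred) / np.sqrt(re)
    w = np.array([gauss_cdf(zi) for zi in z])        # P(q | x_i), per (13)-(14)
    w /= w.sum()                                     # normalized weights
    x_hat = w @ particles                            # measurement update (13)
    idx = rng.choice(len(w), size=len(w), p=w)       # resampling (15)
    new_particles = particles[idx] @ F.T             # noise-free prediction (16)
    return new_particles, x_hat
```

A received `q = +1` pulls the weighted estimate toward the region the bit identifies, e.g., above the predicted observation.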
Assume that the channel between the sensors and the fusion center is severely rate-limited and the sign-of-innovation scheme is employed, i.e., $q_{l,k} = \mathrm{sign}(y_{l,k} - \hat{y}_{l,k|k-1})$. The importance weights are then $\omega^i_k = \Phi\big(q_{l,k} h_l (x^i_{k|k-1} - \hat{x}_{k|k-1});\, 0,\, R_{e_k}(l,l)\big)$. We note that $E[x_k \mid q_{l,0:k}] = \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1} E[y_{l,0:k} \mid q_{l,0:k}]$. Therefore, it suffices to propagate particles distributed as $\xi_k \mid q_{l,0:k}$, where
$$\xi_k = \Sigma_{x_k y_{l,0:k}} \Sigma_{y_{l,0:k}}^{-1} y_{l,0:k}. \quad (17)$$
In addition, note that the quantizer output, ql,k at time k, is calculated by quantizing a scalar valued function of yl,k, ql,0:k-1. So, on receipt of ql,k and by using the previously received quantized values ql,0:k-1, some Borel measurable set containing yl,k, that is, yl,k∈Sk,ql,0:k, can be inferred at the fusion center.
In order to develop a particle filter to propagate ξk∣ql,0:k, we need to give the measurement update of the probability density p(ξk-1∣ql,0:k-1)→p(ξk-1∣ql,0:k) and time update of the probability density p(ξk-1∣ql,0:k)→p(ξk∣ql,0:k), which are described by Lemmas 2 and 3, respectively.
Lemma 2.
The likelihood ratio between the conditional laws of $\xi_{k-1} \mid q_{l,0:k}$ and $\xi_{k-1} \mid q_{l,0:k-1}$ is given by
$$\frac{p(\xi_{k-1} \mid q_{l,0:k})}{p(\xi_{k-1} \mid q_{l,0:k-1})} \propto \Phi\big(S_{k,q_{l,0:k}};\, h_{l,k} F_k \xi_{k-1},\, R_{e_k}(l,l)\big). \quad (18)$$
So, if $\{\xi^i_{k-1|k-1}\}_i$ is a set of particles distributed according to $p(\xi_{k-1} \mid q_{l,0:k-1})$, then, by Lemma 2, a new set of particles $\{\xi^\iota_{k-1|k}\}_\iota$ can be generated. For each particle $\xi^i_{k-1|k-1}$, associate a weight $\omega^i = \Phi\big(S_{k,q_{l,0:k}}; h_{l,k} F_k \xi^i_{k-1|k-1}, R_{e_k}(l,l)\big)$, generate i.i.d. random variables $\{J_\iota\}$ such that $P(J_\iota = i) \propto \omega^i$, and set $\xi^\iota_{k-1|k} = \xi^{J_\iota}_{k-1|k-1}$. This is the standard resampling technique of Steps (3) and (4) of Algorithm 0 [25]. Note that this is equivalent to a measurement update, since the conditional law $p(\xi_{k-1} \mid q_{l,0:k-1})$ is updated upon receipt of the new measurement $q_{l,k}$.
Lemma 3.
The random variable $y_{l,k} \mid \xi_{k-1}, q_{l,0:k}$ is a truncated Gaussian and its probability density function can be expressed as $\phi\big(S_{k,q_{l,0:k}};\, h_{l,k} F_k \xi_{k-1},\, R_{e_k}(l,l)\big)$.
This result should be rather obvious. Observe that $\xi_k$ is the MMSE estimate of the state $x_k$ given $y_{l,0:k}$. Since $\{x_k\}$ and $\{y_{l,k}\}$ have a state-space structure, a Kalman filter can propagate $\xi_k$ recursively:
$$\xi_{k|k} = F_k \xi_{k-1|k-1} + K_k\big(y_{l,k} - h_{l,k} F_k \xi_{k-1|k-1}\big), \qquad K_k = \frac{P_{k|k-1} h_{l,k}^T}{h_{l,k} P_{k|k-1} h_{l,k}^T + R_k(l,l)}. \quad (19)$$
However, the information filter (IF), which propagates the information state and the inverse covariance rather than the state and covariance, is an algebraically equivalent form of the Kalman filter. Compared with the KF, the information filter is computationally simpler and can be easily initialized with inaccurate a priori knowledge [26]. Moreover, another great advantage of the information filter is its ability to handle multisensor data fusion [27]: information from different sensors can be fused by simply adding the information contributions to the information matrix and information state.
Hence, we substitute the information form for (19) as follows:
$$Y_{k|k} = Y_{k|k-1} + I_k, \qquad z_{k|k} = z_{k|k-1} + i_k, \quad (20)$$
where $Y_{k|k} = P_{k|k}^{-1}$ and $z_{k|k} = Y_{k|k}\, \xi_{k|k}$ are the information matrix and information state, respectively. The covariance matrix and state can be recovered using MATLAB's left-divide operator, $P_{k|k} = Y_{k|k} \backslash I_N$ and $\xi_{k|k} = Y_{k|k} \backslash z_{k|k}$, where $I_N$ denotes the $N \times N$ identity matrix. The information state contribution $i_k$ and its associated information matrix $I_k$ are
$$I_k = h_{l,k}^T R_k^{-1}(l,l)\, h_{l,k}, \qquad i_k = h_{l,k}^T R_k^{-1}(l,l)\, y_{l,k}. \quad (21)$$
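The information-form update (20)-(21) for one scalar channel can be sketched as follows; the function name is illustrative. The update is additive and needs no matrix inversion, which is what makes fusing multiple sensors a matter of summing contributions:

```python
import numpy as np

def if_update(Y, z, h, r_ll, y_obs):
    """Information-form measurement update (20)-(21) for one scalar channel:
    add the information contributions I_k and i_k; no inversion required."""
    Y_new = Y + np.outer(h, h) / r_ll        # I_k = h^T R^{-1}(l,l) h
    z_new = z + h * (y_obs / r_ll)           # i_k = h^T R^{-1}(l,l) y
    return Y_new, z_new

# The state is recovered with a linear solve (MATLAB's "\" operator):
#   xi = np.linalg.solve(Y_new, z_new);  P = np.linalg.solve(Y_new, np.eye(N))
```

Because the IF is algebraically equivalent to the KF, the recovered state matches the standard Kalman update exactly.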
Together with (20), Lemma 3 completely describes the transition from $p(\xi_{k-1} \mid q_{l,0:k})$ to $p(\xi_k \mid q_{l,0:k})$. Following Step (5) of Algorithm 0, suppose $\{\xi^\iota_{k-1|k}\}_\iota$ is a set of particles distributed as $p(\xi_{k-1} \mid q_{l,0:k})$; then a new set of particles $\{\xi^i_{k|k}\}_i$, distributed as $p(\xi_k \mid q_{l,0:k})$, can be obtained as follows. For each $\xi^\iota_{k-1|k}$, generate $y^i_{l,k|k}$ according to
$$p\big(y_{l,k} \mid \xi^\iota_{k-1|k}, q_{l,0:k}\big) = \phi\big(S_{k,q_{l,0:k}};\, h_l F_k \xi^\iota_{k-1|k},\, R_{e_k}(l,l)\big), \quad (22)$$
and set $z^i_{k|k} = z^\iota_{k|k-1} + h_{l,k}^T R_k^{-1}(l,l)\, y^i_{l,k|k}$.
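For the sign-of-innovation quantizer, the truncated Gaussian in (22) is supported on a half-line, so it can be sampled by inverse-CDF sampling restricted to the region the 1-bit message identifies. A minimal sketch, with a hypothetical interface and without numerical guards for extreme tail probabilities:

```python
from statistics import NormalDist
import numpy as np

def sample_trunc(q, y_hat, mu, sigma, rng):
    """Inverse-CDF draw from N(mu, sigma^2) restricted to the half-line
    {y : sign(y - y_hat) = q} that the 1-bit message q identifies."""
    d = NormalDist(mu, sigma)
    a = d.cdf(y_hat)                     # probability mass below the threshold y_hat
    u = rng.uniform(a, 1.0) if q > 0 else rng.uniform(0.0, a)
    return d.inv_cdf(u)                  # map the restricted uniform back through the CDF
```

Here `mu` would be the particle-dependent mean $h_l F_k \xi^\iota_{k-1|k}$ and `sigma**2` the innovation variance $R_{e_k}(l,l)$, while `y_hat` is the quantizer threshold $\hat{y}_{l,k|k-1}$.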
From the above, the particle filter using coarsely quantized innovations (QPF) has been derived for an individual sensor. The extension to the multisensor scenario is described in Section 6.
5. Sparse Signal Recovery
To ensure that the proposed quantized particle filtering scheme recovers the sparsity pattern of the signal, a sparsity constraint should be imposed on the fused estimate $\hat{x}_{k|k}$. The constraint can be enforced either by iterating the pseudo-measurement update [15] or via the proposed sparse cubature point filter method.
5.1. Iterative Pseudo-Measurement Update Method
As stated in Section 2, the sparsity constraint can be imposed at each time point by bounding the $\ell_1$-norm of the state estimate, $\|\hat{x}_{k|k}\|_1 \le \epsilon_k$. This constraint is readily expressed as a fictitious measurement $0 = \|\hat{x}_{k|k}\|_1 - \epsilon_k'$, where $\epsilon_k'$ can be interpreted as a measurement noise [15, 28]. We now construct an auxiliary state-space model of the form
$$\gamma_{\tau+1} = \gamma_\tau, \qquad 0 = \hat{H}_\tau \gamma_\tau - \epsilon_\tau, \quad (23)$$
where $\gamma_{1|1} = \hat{x}_{k|k}$, $P^{pm}_{1|1} = P_{k|k}$, and $\hat{H}_\tau = [\mathrm{sign}(\hat{\gamma}_{1,\tau|\tau}) \cdots \mathrm{sign}(\hat{\gamma}_{N,\tau|\tau})]$, $\tau = 1,2,\dots,L$; here $\hat{\gamma}_{j,\tau|\tau}$ denotes the $j$th component of the least-mean-square estimate of $\gamma_\tau$ (obtained via the Kalman filter). Finally, we reassign $\hat{x}_{k|k} = \hat{\gamma}_{L|L}$ and $P_{k|k} = P^{pm}_{L|L}$, where the time horizon $L$ of the auxiliary model (23) is chosen such that $\|\hat{\gamma}_{L|L} - \hat{\gamma}_{L-1|L-1}\|_2$ falls below a predetermined threshold. This iterative procedure is formalized as
$$\hat{\gamma}_{\tau+1|\tau+1} = \hat{\gamma}_{\tau|\tau} - \frac{P^{pm}_{\tau|\tau}\, \mathrm{sign}(\hat{\gamma}_{\tau|\tau})\, \|\hat{\gamma}_{\tau|\tau}\|_1}{\mathrm{sign}(\hat{\gamma}_{\tau|\tau})^T P^{pm}_{\tau|\tau}\, \mathrm{sign}(\hat{\gamma}_{\tau|\tau}) + R_{\epsilon'}}, \quad (24)$$
$$P^{pm}_{\tau+1|\tau+1} = P^{pm}_{\tau|\tau} - \frac{P^{pm}_{\tau|\tau}\, \mathrm{sign}(\hat{\gamma}_{\tau|\tau})\, \mathrm{sign}(\hat{\gamma}_{\tau|\tau})^T P^{pm}_{\tau|\tau}}{\mathrm{sign}(\hat{\gamma}_{\tau|\tau})^T P^{pm}_{\tau|\tau}\, \mathrm{sign}(\hat{\gamma}_{\tau|\tau}) + R_{\epsilon'}}. \quad (25)$$
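A compact sketch of the recursion (24)-(25) with the stopping rule described above; `R_eps`, the iteration cap, and the tolerance are illustrative tuning values, not the paper's settings:

```python
import numpy as np

def iterative_pm(x, P, R_eps=0.5, L_max=100, tol=1e-6):
    """Iterates the pseudo-measurement recursions (24)-(25) on the auxiliary
    model (23), stopping when successive iterates stop changing."""
    for _ in range(L_max):
        h = np.sign(x)                      # H_tau = sign(gamma_tau)^T
        S = h @ P @ h + R_eps               # scalar innovation variance
        K = P @ h / S                       # Kalman gain (N-vector)
        x_new = x - K * np.abs(x).sum()     # (24): the innovation is 0 - ||x||_1
        P = P - np.outer(K, h @ P)          # (25)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, P
        x = x_new
    return x, P
```

Each pass is a scalar Kalman update against the fictitious measurement $0 = \hat{H}_\tau \gamma_\tau - \epsilon_\tau$, so the $\ell_1$-norm of the estimate shrinks as the iterations proceed.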
5.2. Sparse Cubature Point Filter Method
We propose a novel refinement method based on the cubature Kalman filter (CKF). Compared to the iterative method described above, the resulting method is noniterative and easy to implement.
It is well known that the unscented Kalman filter (UKF) is broadly used to handle general nonlinear process and measurement models. It relies on the so-called unscented transformation to compute posterior statistics of $\hbar \in \mathbb{R}^m$ related to $x$ by a nonlinear transformation $\hbar = f(x)$, approximating the mean and covariance of $\hbar$ by a weighted sum of projected sigma points in $\mathbb{R}^m$. However, the suggested tuning parameter for the UKF is $\kappa = 3 - N$. For a higher-order system, the number of states far exceeds three, so $\kappa$ becomes negative and may halt the operation. Recently, a more accurate filter, the cubature Kalman filter, has been proposed for nonlinear state estimation based on the third-degree spherical-radial cubature rule [29]. According to the cubature rule, the $2N$ sample points are chosen as
$$\chi_s = \hat{x} + \sqrt{N}\, S_x^{(s)}, \quad \omega_s = \frac{1}{2N}, \qquad \chi_{N+s} = \hat{x} - \sqrt{N}\, S_x^{(s)}, \quad \omega_{N+s} = \frac{1}{2N}, \quad (26)$$
where $s = 1,2,\dots,N$, $S_x \in \mathbb{R}^{N\times N}$ denotes the square-root factor of $P$, i.e., $P = S_x S_x^T$, and $S_x^{(s)}$ is its $s$th column. The mean and covariance of $\hbar = f(x)$ can then be computed by
$$E[\hbar] = \frac{1}{2N} \sum_{s=1}^{2N} \chi^*_s, \qquad \mathrm{Cov}[\hbar] = \frac{1}{2N} \sum_{s=1}^{2N} \chi^*_s \chi^{*T}_s - \hat{\hbar}\hat{\hbar}^T, \quad (27)$$
where $\chi^*_s = f(\chi_s)$ and $\hat{\hbar}$ denotes the computed mean.
From the iterative PM method, note that (24) can be viewed as a nonlinear evolution process during which the state gradually becomes sparse. Motivated by this, we employ the CKF to implement this nonlinear refinement. Let $P_{k|k}$ and $\chi_{s,k|k}$ be the updated covariance and the $s$th cubature point at time $k$, respectively (i.e., after the measurement update). A set of sparse cubature points at time $k$ is then given by
$$\chi^*_{s,k|k} = \chi_{s,k|k} - \frac{P_{k|k}\, \mathrm{sign}(\chi_{s,k|k})\, \|\chi_{s,k|k}\|_1}{\mathrm{sign}(\chi_{s,k|k})^T P_{k|k}\, \mathrm{sign}(\chi_{s,k|k}) + \check{R}_\epsilon}, \quad (28)$$
$$\text{where } \check{R}_\epsilon = O\big(\|\chi_{s,k|k}\|_2^2\big) + g^T P_{k|k}\, g + R_{\epsilon'}, \quad (29)$$
for $s = 1,\dots,2N$, where $g \in \mathbb{R}^N$ is a tunable constant vector. Once the set $\{\chi^*_{s,k|k}\}_{s=1}^{2N}$ is obtained, its sample mean and covariance are computed directly by (27). For readability, we defer the derivation of (29) to the Appendix.
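A sketch of the sparse cubature point refinement (26)-(29). Two modeling assumptions are made for the example: the $O(\|\chi\|_2^2)$ term of (29) is taken as $\|\chi\|_2^2$ itself, and $g$ defaults to an arbitrary small constant vector:

```python
import numpy as np

def sparse_ckf_refine(x_hat, P, g=None, R_eps=0.5):
    """Sparse cubature point refinement: generate the 2N cubature points (26),
    apply one PM-style shrinkage (28)-(29) to each, and recompute the mean and
    covariance by the cubature rule (27). Non-iterative by construction."""
    N = len(x_hat)
    if g is None:
        g = 0.1 * np.ones(N)                     # tunable constant vector (assumption)
    S = np.linalg.cholesky(P)                    # square-root factor, P = S S^T
    pts = np.concatenate([x_hat + np.sqrt(N) * S.T,      # chi_s
                          x_hat - np.sqrt(N) * S.T])     # chi_{N+s}
    out = np.empty_like(pts)
    for s, chi in enumerate(pts):
        hs = np.sign(chi)
        # (29): the O(||chi||^2) term is taken as ||chi||^2 here (assumption)
        R_check = chi @ chi + g @ P @ g + R_eps
        denom = hs @ P @ hs + R_check
        out[s] = chi - (P @ hs) * np.abs(chi).sum() / denom   # (28)
    mean = out.mean(axis=0)                      # cubature mean, (27)
    cov = out.T @ out / (2 * N) - np.outer(mean, mean)        # cubature covariance, (27)
    return mean, cov
```

A single pass over the $2N$ points replaces the $L$ iterations of the PM method, which is the source of the complexity advantage claimed in Section 6.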
6. The Algorithm
We now summarize the intact algorithm as follows (illustrated in Figure 3):
Initialization: at $k = 0$, generate $\{\hat{\xi}^i_{0|-1}, \hat{\xi}_{0|-1}, P_{0|-1}\}$; then compute $\{\hat{z}^i_{0|-1}, \hat{z}_{0|-1}, Y_{0|-1}\}$.
The fusion center transmits $R_{e_k}(l,l)$, the $(l,l)$ entry of the innovation error covariance matrix (see (30)), and the predicted observation $\hat{y}_{l,k}$ (see (31)) to the $l$th sensor:
$$R_{e_k} = H_k P_{k|k-1} H_k^T + R_k, \quad (30)$$
$$\hat{y}_{l,k|k-1} = h_{l,k}\, \hat{\xi}_{k|k-1}. \quad (31)$$
The $l$th sensor computes the quantized innovation (see (32)) and transmits it to the fusion center:
$$q_{l,k} = Q\left(\frac{y_{l,k} - \hat{y}_{l,k|k-1}}{\sqrt{R_{e_k}(l,l)}}\right)\sqrt{R_{e_k}(l,l)}. \quad (32)$$
On receipt of the quantized innovation (32), the Borel $\sigma$-field $S_{k,q_{l,0:k}}$ can be inferred. Then, a set of observation particles (see (33)) with corresponding weights (see (34)) is generated by the fusion center:
$$y^i_{l,k|k} \sim \phi\big(S_{k,q_{l,0:k}};\, h_{l,k}\hat{\xi}^i_{k|k-1},\, R_{e_k}(l,l)\big), \quad (33)$$
$$\omega^i_{l,k} \propto \Phi\big(S_{k,q_{l,0:k}};\, h_{l,k}\hat{\xi}^i_{k|k-1},\, R_{e_k}(l,l)\big). \quad (34)$$
Run the measurement update in information form (see (35)) using the observation particle $y^i_{k|k} = [y^i_{1,k|k} \cdots y^i_{M,k|k}]^T$ generated in Step (4):
$$Y_{k|k} = Y_{k|k-1} + \sum_{l=1}^{M} h_{l,k}^T R_k^{-1}(l,l)\, h_{l,k}, \qquad \hat{z}^i_{k|k} = \hat{z}^i_{k|k-1} + \sum_{l=1}^{M} h_{l,k}^T R_k^{-1}(l,l)\, y^i_{l,k|k}. \quad (35)$$
Resample the particles by using the normalized weights.
Compute the fused filtered estimate $\hat{z}_{k|k}$:
$$\hat{z}_{k|k} = \sum_{i=1}^{N_p} \bar{\omega}^i_k\, \hat{z}^i_{k|k}, \quad (36)$$
where $\bar{\omega}^i_k$ are the normalized versions of $\omega^i_k = \prod_{l=1}^{M} \omega^i_{l,k}$.
Impose the sparsity constraint on fused estimate ξ^k∣k by either (a) or (b):
iterative PM update method;
sparse cubature point filter method.
Determine the time updates $\hat{z}^i_{k+1|k}$, $\hat{z}_{k+1|k}$, $Y_{k+1|k}$, $\hat{\xi}_{k+1|k}$, and $P_{k+1|k}$ for the next time interval:
$$\hat{z}^i_{k+1|k} = F_k \hat{z}^i_{k|k}, \qquad \hat{z}_{k+1|k} = F_k \hat{z}_{k|k}, \qquad Y_{k+1|k} = \big(F_k P_{k|k} F_k^T + W_{k+1}\big)^{-1},$$
$$\hat{\xi}_{k+1|k} = Y_{k+1|k} \backslash \hat{z}_{k+1|k}, \qquad P_{k+1|k} = Y_{k+1|k} \backslash I_N. \quad (37)$$
Illustration of the proposed reconstruction algorithm.
Remarks. The symbol $\xi$ is used here purely for the algorithm description and may be interchanged with $x$. In addition, it should be noted that the proposed algorithm amounts to $N_p$ Kalman filters running in parallel, driven by the observation particles $\{y^i_{l,k}\}_{i=1}^{N_p}$.
6.1. Computational Complexity
The complexity of the sampling Step (4) of the general algorithm is $O(N_p)$. The measurement update Step (5) is of order $O(N^2 M) + O(N N_p)$; the resampling Step (7) has complexity $O(N_p)$. Step (9) has complexity either $O(N^2 L)$ or $O(N)$. The complexity of the time update Step (10) is $O(N^2 N_p) + O(N^2 M)$.
7. Simulation Results
In this section, the performance of the proposed algorithms is demonstrated by a numerical experiment in which sparse signals are reconstructed from a series of coarsely quantized observations. Without loss of generality, we attempt to reconstruct a 10-sparse signal sequence $\{x_k\}$ in $\mathbb{R}^{256}$ and assume that the support set of the sequence is constant. The sparse signal consists of 10 elements that behave as a random walk process; the driving white noise covariance on the support of $x_k$ is $W_k(i,i) = 0.1^2$. This process can be described as
$$x_{k+1}(i) = \begin{cases} x_k(i) + w_k(i), & \text{if } i \in \mathrm{supp}(x_k), \\ 0, & \text{otherwise}, \end{cases} \quad (38)$$
where $i \sim \mathrm{Uint}[1,256]$ and $x_0(i) \sim N(0, 5^2)$. Both the support $\mathrm{supp}(x_k)$ and the values $x_k(i)$ are unknown. The measurement matrix $H \in \mathbb{R}^{72\times 256}$ has entries sampled from $N(0, 1/72)$; this type of matrix has been shown to satisfy the RIP with overwhelming probability for sufficiently sparse signals. The observation noise covariance is $R_k = 0.01^2 I_{72}$. The other parameters are $\hat{x}_{0|-1} = 0$, $N_p = 150$, and $L = 100$. Two scenarios are considered in the numerical experiment: constant support and changing support. In the first scenario, we assume severely limited bandwidth resources and transmit 1-bit quantized innovations (i.e., the sign of innovation). We compare the proposed algorithms with the scheme of [15], which considers a fusion center with full (unquantized/uncoded) innovations. For convenience, we refer to the scheme in [15] as CSKF; the proposed QPF with the iterative PM update method and with the sparse cubature point filter method are referred to as Algorithms 1 and 2, respectively.
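The test signal of (38) can be generated as follows; the function name and defaults mirror the stated parameters but are otherwise illustrative:

```python
import numpy as np

def make_sparse_walk(N=256, K=10, T=100, sigma_w=0.1, sigma_0=5.0, seed=0):
    """Generates the test signal of (38): K components on a fixed random
    support follow independent random walks, all others stay zero."""
    rng = np.random.default_rng(seed)
    supp = rng.choice(N, K, replace=False)   # unknown support, drawn uniformly
    X = np.zeros((T, N))
    X[0, supp] = rng.normal(0.0, sigma_0, K) # x_0(i) ~ N(0, sigma_0^2) on the support
    for k in range(1, T):
        X[k] = X[k - 1]
        X[k, supp] += rng.normal(0.0, sigma_w, K)  # w_k(i) ~ N(0, sigma_w^2)
    return X, supp
```

Row `X[k]` is then observed through the quantized innovation pipeline of Section 6.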
Figure 4 shows how the various algorithms track the nonzero components of the signal. The CSKF algorithm performs best, since it uses full innovations. Algorithm 1 performs almost as well as the CSKF algorithm. The plain QPF clearly performs worse, while Algorithm 2 gradually approaches the performance of Algorithm 1.
Nonzero component tracking performance.
Figure 5 gives a comparison of the instantaneous values of the estimates at time index $k = 100$. All three algorithms correctly identify the nonzero components.
Instantaneous values at k=100.
Finally, the error performance of the algorithms is shown in Figure 6. The normalized RMSE, $\|x_k - \hat{x}_k\|_2 / \|x_k\|_2$, is employed to evaluate performance. As can be seen, Algorithm 1 performs better than Algorithm 2 and very close to the CSKF before $k = 40$; however, the reconstruction accuracies of all algorithms nearly coincide after roughly $k = 45$. Note that this performance is achieved with far fewer measurements than unknowns (<30%). In this example, the complexity of Algorithm 2 is dominated by $O(N^2 M) = 4.915 \times 10^6$, of the same order as that of the QPF, while the complexity of Algorithm 1 is dominated by $O(N^2 L) = 6.553 \times 10^6$. It may therefore be preferable to employ Algorithm 2.
Normalized RMSE.
In the second scenario, we verify the effectiveness of the proposed algorithm for a sparse signal with a slowly changing support set. The simulation parameters are $N = 160$, $M = 40$, $W_k(i,i) = 0.01^2$, and $R_k = 0.25^2 I_{40}$; the others are the same as in the first scenario. We assume that there are only $K = 4$ possible nonzero components and that the actual number of nonzero elements may change over time. In particular, the component $x(4)$ is nonzero for the entire measurement interval, $x(42)$ becomes zero at $k = 61$, $x(91)$ is nonzero from $k = 41$ onwards, and $x(98)$ is nonzero between $k = 41$ and $k = 61$. All other components remain zero throughout the considered time interval. Figure 7 compares the estimates with the actual time variations of the 4 nonzero components. As can be seen, the algorithms track the slowly changing support well.
Support change tracking.
In addition, we study the relationship between the number of quantization bits and reconstruction accuracy. Figure 8 shows the normalized RMSE versus the number of quantization bits at $k = 100$. Performance improves as the number of bits increases, but the gain is small. However, more quantization bits incur greater communication, computation, and storage overheads, and hence more sensor energy consumption. For this reason, the 1-bit quantization scheme has been employed in our algorithms; it is enough to guarantee the reconstruction accuracy.
Normalized RMSE versus number of quantization bits.
Moreover, note that the information filter is employed to propagate the particles in our algorithms. Compared with the KF, apart from its ability to handle multisensor fusion, the IF also has an advantage in numerical stability. In Figure 9, we take Algorithm 1 as an example and compare its behavior when implemented with the KF versus with the IF. As can be seen, Algorithm 1-IF performs well, whereas Algorithm 1-KF diverges.
Comparison of Algorithm 1 implemented with the KF and with the IF.
8. Conclusions
Algorithms for reconstructing time-varying sparse signals under communication constraints have been proposed in this paper. For severely bandwidth-constrained (1-bit) scenarios, a particle filter algorithm based on coarsely quantized innovations was proposed. To recover the sparsity pattern, the algorithm enforces the sparsity constraint on the fused estimate by either the iterative PM update method or the sparse cubature point filter method. Compared with the iterative PM update method, the sparse cubature point filter method is preferable owing to its comparable performance and lower complexity. A numerical example demonstrated that the proposed algorithm is effective with far fewer measurements than the size of the state vector. This is very promising for energy-constrained WSNs, whose lifetime can thereby be prolonged. Nevertheless, the algorithm presented in this paper is only suitable for time-varying sparse signals with an invariant or slowly changing support set; more general methods combining a support-set estimator will be discussed in our future work.
Appendix
Proof.
In order to prove Lemma 1, we show that the moment generating function (MGF) of $x_k \mid q_{l,0:k}$ can be regarded as the product of the two MGFs corresponding to the two random variables in (11). Recall that the MGF of a $d$-dimensional random variable $a$ is $M_a(s) = E[e^{s^T a}]$, $\forall s \in \mathbb{R}^d$. Note that
$$p(x_k \mid q_{l,0:k}) = \int p(x_k, y_{l,0:k} \mid q_{l,0:k})\, dy_{l,0:k} \quad \text{(A.1)}$$
and $p(x_k \mid y_{l,0:k}, q_{l,0:k}) = p(x_k \mid y_{l,0:k})$, so the MGF of $x_k \mid q_{l,0:k}$ can be written as
$$E\!\left[e^{s^T x_k} \mid q_{l,0:k}\right] = \int e^{s^T x_k}\, p(x_k \mid y_{l,0:k})\, p(y_{l,0:k} \mid q_{l,0:k})\, dx_k\, dy_{l,0:k} = e^{\frac{1}{2} s^T \Sigma^{\Delta}_{x_k y_{l,0:k}} s} \underbrace{\int e^{s^T \Sigma_{x_k y_{l,0:k}} \Sigma^{-1}_{y_{l,0:k}} y_{l,0:k}}\, p(y_{l,0:k} \mid q_{l,0:k})\, dy_{l,0:k}}_{\text{MGF of } \Sigma_{x_k y_{l,0:k}} \Sigma^{-1}_{y_{l,0:k}} y_{l,0:k} \mid q_{l,0:k}} \;\Longrightarrow\; M_{x_k \mid q_{l,0:k}}(s) = M_{\eta_k}(s)\, M_{y_{l,0:k} \mid q_{l,0:k}}\!\left(\Sigma^{-1}_{y_{l,0:k}} \Sigma_{y_{l,0:k} x_k}\, s\right), \quad \text{(A.2)}$$
where $\eta_k \sim \mathcal{N}_d(0, \Sigma^{\Delta}_{x_k y_{l,0:k}})$. Here we used the fact that $x_k \mid y_{l,0:k} \sim \mathcal{N}_d(\Sigma_{x_k y_{l,0:k}} \Sigma^{-1}_{y_{l,0:k}} y_{l,0:k}, \Sigma^{\Delta}_{x_k y_{l,0:k}})$ and the identity $M_b(C^T t) = M_{Cb}(t)$. The result then follows directly from (A.2).
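The factorization in the proof rests on the standard Gaussian decomposition $x_k = \Sigma_{x_k y}\Sigma_y^{-1} y + \eta_k$ with $\eta_k$ independent of $y$. A small Monte Carlo sanity check of this decomposition, using arbitrary illustrative covariances (not values from the paper), is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative joint covariance for (x, y); the values are arbitrary
# but chosen so the joint covariance is positive definite.
Sxx = np.array([[2.0, 0.3], [0.3, 1.5]])
Sxy = np.array([[0.8, 0.2], [0.1, 0.6]])
Syy = np.array([[1.0, 0.2], [0.2, 1.0]])
cov = np.block([[Sxx, Sxy], [Sxy.T, Syy]])

samples = rng.multivariate_normal(np.zeros(4), cov, size=200_000)
x, y = samples[:, :2], samples[:, 2:]

# eta = x - Sxy Syy^{-1} y should be uncorrelated with y, with
# covariance Sxx - Sxy Syy^{-1} Syx (the "Delta" covariance above).
A = Sxy @ np.linalg.inv(Syy)
eta = x - y @ A.T
cross = eta.T @ y / len(y)          # sample cross-covariance of eta and y
delta = Sxx - A @ Sxy.T             # theoretical covariance of eta
# Both deviations below are close to zero for large sample sizes.
dev_cross = np.max(np.abs(cross))
dev_cov = np.max(np.abs(np.cov(eta.T) - delta))
```

Because $\eta_k$ is independent of $y$, it is also independent of any function of $y$, including the quantized sequence $q_{l,0:k}$, which is what allows the MGF to factor.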
Proof.
Consider the pseudo-measurement equation
$$0 = \|x_k\|_1 - \epsilon'_k. \quad \text{(A.3)}$$
As $x_k$ is unknown, the relation $x_k = \hat{x}_k + \tilde{x}_k$ can be used to get
$$0 = \operatorname{sign}(x_k)^T x_k - \epsilon'_k = \left(\operatorname{sign}(\hat{x}_k)^T + g^T\right) x_k - \epsilon'_k, \quad \text{(A.4)}$$
where $\|g\| \le c$ almost surely. Equation (A.4) is the approximate pseudo-measurement with observation noise $\check{\epsilon}_k = g^T x_k - \epsilon'_k$. Note that $E[\epsilon'_k] = 0$ and $E[\epsilon'^2_k] = R_{\epsilon'}$. Since the mean of $\check{\epsilon}_k$ cannot be obtained easily, we approximate its second moment:
$$E[\check{\epsilon}^2_k] = E\!\left[g^T (\hat{x}_k + \tilde{x}_k)(\hat{x}_k + \tilde{x}_k)^T g \mid Y_k\right] + R_{\epsilon'}. \quad \text{(A.5)}$$
Here we used the fact that $\hat{x}_k$ and $\epsilon'_k$ are statistically independent. Substituting $P_{k|k}$ obtained by the KF for the error covariance in (A.5), we get
$$\check{R}_\epsilon = E[\check{\epsilon}^2_k] = g^T \hat{x}_k \hat{x}_k^T g + g^T E[\tilde{x}_k \tilde{x}_k^T \mid Y_k]\, g + R_{\epsilon'} \approx O\!\left(\|\hat{x}_k\|_2^2\right) + g^T P_{k|k}\, g + R_{\epsilon'}. \quad \text{(A.6)}$$
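The pseudo-measurement noise variance in (A.6) can be evaluated directly once $\hat{x}_k$, $P_{k|k}$, $g$, and $R_{\epsilon'}$ are in hand. The sketch below (our own illustration; $g$ is treated as a given small perturbation vector, an assumption of the linearization) computes the first term exactly rather than through the $O(\|\hat{x}_k\|_2^2)$ bound:

```python
import numpy as np

def pm_noise_variance(x_hat, P, g, R_eps):
    """Approximate variance of the pseudo-measurement noise, cf. (A.6):
    R_check = g^T x_hat x_hat^T g + g^T P g + R_eps,
    where P stands in for the error covariance E[x~ x~^T | Y_k].
    """
    gx = float(g @ x_hat)          # scalar g^T x_hat
    return gx * gx + float(g @ P @ g) + R_eps
```

In practice this variance is what the PM update stage would plug into the filter as the measurement-noise term for the constraint equation (A.4).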
Competing Interests
The authors declare that they have no competing interests.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grants nos. 60872123 and 61101014) and the Higher-Level Talent Project in Guangdong Province, China (Grant no. N9101070).