Some basic statistical properties of the compressed measurements are investigated. It is well known that statistical properties are a foundation for analyzing the performance of signal detection and the applications of compressed sensing in communication signal processing. Firstly, we discuss the statistical properties of the compressed signal, the compressed noise, and their corresponding energies. Then, the statistical characteristics of the SNR of the compressed measurements, including the mean and the variance, are calculated. Finally, the probability density function and cumulative distribution function of the SNR are derived for the cases of the Gamma distribution and the Gaussian distribution. Numerical simulation results demonstrate the correctness of the theoretical analysis.
1. Introduction
Compressed sensing was proposed to recover the original signal from compressed measurements [1]. Its key technologies consist of sparse representation, the design of the measurement matrix, and the recovery algorithm. Sparse representation of the signal is the premise and basis of compressed sensing, the restricted isometry property (RIP) of the measurement matrix mathematically guarantees a unique solution of signal reconstruction [2], and the recovery algorithm finds that unique solution through different methods. Therefore, to some extent, compressed sensing is considered a sampling technology similar to Shannon's sampling theorem. However, the signal is sampled according to its sparsity rather than the bandwidth required by Shannon's sampling theorem. In other words, compressed sensing extracts only the useful information rather than the signal itself. Based on these advantages, compressed sensing is viewed as a promising technology for many research fields, such as remote sensing, image processing, and wireless communication [3].
Currently, wideband signals must be employed in wireless communication to satisfy the demand for high-rate data transmission, which brings many challenges for signal sampling devices, such as high sampling rate and cost. To cope with these difficulties, compressed sensing has been employed in many signal processing tasks of wireless communication, for example, channel estimation and spectrum sensing [4].
In the beginning, the communication signal was processed in accordance with the standard processing chain of compressed sensing [2, 3]; that is, the signal is sampled by means of the measurement matrix to obtain the compressed measurements. Then, the original signal is reconstructed by the recovery algorithm. Finally, the recovered signal is further processed to accomplish different signal processing tasks. It has been proved that the recovery algorithm has high computational complexity [2]. Nevertheless, some signal processing tasks, especially inference problems, concentrate only on the decision results and the related parameters rather than the signal itself, such as signal detection, signal classification, and signal parameter identification. The recovered signal is meaningless for the inference problem. That is to say, reconstruction-based signal processing methods cannot fully exploit the merits of compressed sensing. Consequently, the nonreconstruction framework of signal processing has been presented [5, 6]. Under the nonreconstruction framework, the compressed measurements are directly employed to deal with the inference problem without resorting to a full-scale signal reconstruction.
It is widely recognized that the communication signal is inevitably corrupted by noise. Consequently, the performance of these signal processing tasks is closely related to the SNR of the compressed measurements, which involves the energy of the compressed noise and the compressed signal. In [7], noise folding in compressed sensing is considered, and the relation between the SNR of the compressed measurements and the compression ratio N/M is derived. Here, N is the dimension of the signal and M is the number of compressed measurements. The impact of noise folding on wideband signal acquisition is discussed in [8], where the relation between the SNR of the compressed measurements and the SNR of the recovered signal plus noise is studied. However, the statistical properties of the energy and SNR of the compressed measurements are not further investigated there. It is well known that these statistical properties are important for investigating the performance of signal detection and signal parameter identification. Therefore, it is vital to derive the statistical properties of the energy and SNR of the compressed measurements for the performance analysis of signal processing.
Because a random measurement matrix is frequently utilized, the energy of the compressed signal and the energy of the compressed noise are random variables. Further, the resulting SNR of the compressed measurements is also a random variable. Consequently, we first discuss the mean, the variance, the probability density function (PDF), and the cumulative distribution function (CDF) of the energy of the compressed signal and the energy of the compressed noise. After that, we derive the statistical properties of the SNR of the compressed measurements, including the mean, the variance, the PDF, and the CDF. These results provide a foundation for the performance analysis of signal processing tasks.
2. Statistical Properties of Compressed Measurements and Their Quadratic Sum
For compressed sensing, the compressed measurements can be expressed as

(1) $y = \Phi s + n$,

where $y \in \mathbb{R}^M$, $\Phi \in \mathbb{R}^{M \times N}$, $s \in \mathbb{R}^N$, and $n \in \mathbb{R}^N$ denote the compressed measurements, the random measurement matrix, the signal vector with sparsity $K$, and the noise vector, respectively. It is assumed that the signal and the noise are filtered by a band-pass filter before compressed sensing, so the noise folding problem need not be considered. Mathematically, the condition $M \geq K\log_2(N/K)$ should be satisfied to recover the signal with high probability. Specifically, the entries of $y$ can be calculated as

(2) $y_k = \sum_{i=1}^{N} \phi_{ki} s(i) + \sum_{i=1}^{N} \phi_{ki} n(i), \quad 1 \leq k \leq M.$
Without loss of generality, we assume that the entries of the random measurement matrix are i.i.d. Gaussian random variables with mean zero and variance σϕ2, the entries of the noise vector are i.i.d. Gaussian random variables with mean zero and variance σn2, and the entries of the signal are i.i.d. random variables with mean zero and variance σs2. The entries of the random measurement matrix, the noise vector, and the signal vector are also statistically independent of each other. Because their means are zero, they are also uncorrelated and orthogonal.
Let $s_k^{cs} = \sum_{i=1}^{N} \phi_{ki} s(i)$ and $n_k^{cs} = \sum_{i=1}^{N} \phi_{ki} n(i)$ denote the $k$th entry of the compressed signal and the compressed noise, respectively. We first analyze the statistical properties of $n_k^{cs}$. According to the central limit theorem, $n_k^{cs}$ can be considered a Gaussian random variable, whose mean is

(3) $E[n_k^{cs}] = E\left[\sum_{i=1}^{N} \phi_{ki} n(i)\right] = \sum_{i=1}^{N} E[\phi_{ki} n(i)] = 0.$
Because the means of $\phi_{ki}$ and $n(i)$ are zero, the variance of $n_k^{cs}$ is calculated as

(4) $D[n_k^{cs}] = D\left[\sum_{i=1}^{N} \phi_{ki} n(i)\right] = \sum_{i=1}^{N} D[\phi_{ki} n(i)] = N D[\phi_{ki}] D[n(i)] = N \sigma_\phi^2 \sigma_n^2.$
After that, we compute the mean of $s_k^{cs}$:

(5) $E[s_k^{cs}] = E\left[\sum_{i=1}^{N} \phi_{ki} s(i)\right] = \sum_{i=1}^{N} E[\phi_{ki} s(i)] = 0.$
Similarly, the variance of $s_k^{cs}$ can also be calculated as

(6) $D[s_k^{cs}] = D\left[\sum_{i=1}^{N} \phi_{ki} s(i)\right] = \sum_{i=1}^{N} D[\phi_{ki} s(i)] = N D[\phi_{ki}] D[s(i)] = N \sigma_\phi^2 \sigma_s^2.$
Therefore, the entries of the compressed measurements $y$ are Gaussian random variables with mean zero and variance $N\sigma_\phi^2(\sigma_s^2 + \sigma_n^2)$.
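The entry statistics above can be checked with a minimal Monte Carlo sketch. The dimensions and variances below are illustrative choices, not values fixed by the analysis, and NumPy is assumed to be available.

```python
import numpy as np

# Monte Carlo sketch of the model y = Phi*s + n: the entries of y should
# have mean zero and variance N*sigma_phi^2*(sigma_s^2 + sigma_n^2).
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
N, M, trials = 512, 128, 300
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.5

entries = np.empty((trials, M))
for t in range(trials):
    Phi = rng.normal(0.0, np.sqrt(sigma_phi2), (M, N))  # measurement matrix
    s = rng.normal(0.0, np.sqrt(sigma_s2), N)           # signal vector
    n = rng.normal(0.0, np.sqrt(sigma_n2), N)           # noise vector
    entries[t] = Phi @ (s + n)                          # y = Phi*(s + n)

theory_var = N * sigma_phi2 * (sigma_s2 + sigma_n2)
emp_var = entries.var()
print(emp_var, theory_var)
```

With these example values the theoretical entry variance is 6.0, and the empirical variance converges to it as the number of trials grows.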
The quadratic sum of the compressed signal is denoted as

(7) $P_s = \langle \Phi s, \Phi s \rangle = \sum_{k=1}^{M} (s_k^{cs})^2.$
As a sum of $M$ squared i.i.d. zero-mean Gaussian variables, the quadratic sum of the compressed signal follows the central Gamma distribution with $M$ degrees of freedom. The corresponding mean and variance are calculated as

(8) $E_{p_s} = E[P_s] = M N \sigma_\phi^2 \sigma_s^2,$

(9) $\sigma_{p_s}^2 = D[P_s] = 2M \left(N \sigma_\phi^2 \sigma_s^2\right)^2.$
Its shape and scale parameters are $\kappa_{p_s} = (E_{p_s})^2/\sigma_{p_s}^2 = M/2$ and $\theta_{p_s} = \sigma_{p_s}^2/E_{p_s} = 2N\sigma_\phi^2\sigma_s^2$, respectively. Hence, $P_s$ can be represented as $P_s \sim \Gamma(\kappa_{p_s}, \theta_{p_s})$.
Next, we analyze the energy of the compressed noise, which is written as

(10) $P_n = \langle \Phi n, \Phi n \rangle = \sum_{k=1}^{M} (n_k^{cs})^2.$
By analogy, the quadratic sum of the compressed noise also follows the central Gamma distribution with $M$ degrees of freedom. The corresponding mean and variance are calculated as

(11) $E_{p_n} = E[P_n] = M N \sigma_\phi^2 \sigma_n^2,$

(12) $\sigma_{p_n}^2 = D[P_n] = 2M \left(N \sigma_\phi^2 \sigma_n^2\right)^2.$
Similarly, its shape and scale parameters are $\kappa_{p_n} = (E_{p_n})^2/\sigma_{p_n}^2 = M/2$ and $\theta_{p_n} = \sigma_{p_n}^2/E_{p_n} = 2N\sigma_\phi^2\sigma_n^2$, respectively. Thus, $P_n$ is represented as $P_n \sim \Gamma(\kappa_{p_n}, \theta_{p_n})$.
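The Gamma characterization of the energies can be verified numerically. The sketch below treats the entries of the compressed signal as i.i.d. zero-mean Gaussians, as the analysis does; all parameter values are illustrative assumptions.

```python
import numpy as np

# Checks the moments (8)-(9) and the shape/scale parameters of P_s by
# drawing its entries as i.i.d. N(0, N*sigma_phi^2*sigma_s^2) variables.
# Parameter values are example assumptions.
rng = np.random.default_rng(1)
N, M, trials = 256, 64, 200000
sigma_phi2, sigma_s2 = 1.0 / M, 2.0

entry_var = N * sigma_phi2 * sigma_s2
Ps = (rng.normal(0.0, np.sqrt(entry_var), (trials, M)) ** 2).sum(axis=1)

E_ps = M * entry_var                  # (8): M*N*sigma_phi^2*sigma_s^2
V_ps = 2.0 * M * entry_var ** 2       # (9): 2*M*(N*sigma_phi^2*sigma_s^2)^2
kappa_ps = E_ps ** 2 / V_ps           # shape parameter, equals M/2
theta_ps = V_ps / E_ps                # scale parameter, equals 2*entry_var
print(Ps.mean(), E_ps, Ps.var(), V_ps, kappa_ps, theta_ps)
```

The same check applies verbatim to $P_n$ with $\sigma_s^2$ replaced by $\sigma_n^2$.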
3. The Statistical Properties of SNR
The SNR of the compressed measurements is defined as

(13) $\mathrm{SNR}_{cs} = \dfrac{P_s}{P_n}.$
We now analyze the relation of $P_s$ and $P_n$. Firstly, we calculate the mean of the product of an entry of the compressed signal and an entry of the compressed noise:

(14) $E[s_k^{cs} n_r^{cs}] = E\left[\left(\sum_{i=1}^{N} \phi_{ki} s(i)\right)\left(\sum_{m=1}^{N} \phi_{rm} n(m)\right)\right].$
It can be seen that $N^2$ terms are obtained when (14) is expanded, and each term consists of an entry of the signal, an entry of the noise, and entries of the random measurement matrix, all with mean zero. Because these entries are independent, the resulting mean $E[s_k^{cs} n_r^{cs}] = 0$, which means that $s_k^{cs}$ and $n_r^{cs}$ are orthogonal. Considering (3) and (5), we can observe that $s_k^{cs}$ and $n_r^{cs}$ are uncorrelated. Because uncorrelatedness and independence are equivalent for Gaussian random variables, $s_k^{cs}$ and $n_r^{cs}$ are independent of each other. It follows directly that $P_s$ and $P_n$ are also independent. Based on these results, we calculate the mean of the SNR:

(15) $E[\mathrm{SNR}_{cs}] = E\left[\dfrac{P_s}{P_n}\right] = E[P_s]\, E\left[\dfrac{1}{P_n}\right].$
Let $X = P_n/(N\sigma_\phi^2\sigma_n^2)$; then $P_n = N\sigma_\phi^2\sigma_n^2 X$. Combining with (10) and (11), we can see that $X$ follows the chi-square distribution with $M$ degrees of freedom. Hence, for $M > 2$, the mean of the reciprocal of $X$ can be calculated as

(16) $E\left[\dfrac{1}{X}\right] = \dfrac{1}{2^{M/2}\Gamma(M/2)} \int_0^\infty x^{-1} e^{-x/2} x^{M/2-1}\,dx = \dfrac{2^{(M-2)/2}\Gamma(M/2-1)}{2^{M/2}\Gamma(M/2)} = \dfrac{1}{M-2}.$
According to the relation between $P_n$ and $X$, we have

(17) $E\left[\dfrac{1}{P_n}\right] = E\left[\dfrac{1}{N\sigma_\phi^2\sigma_n^2 X}\right] = \dfrac{1}{N\sigma_\phi^2\sigma_n^2} E\left[\dfrac{1}{X}\right] = \dfrac{1}{N\sigma_\phi^2\sigma_n^2 (M-2)}.$
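The inverse moment in (16) is easy to confirm numerically. In the sketch below, $M$ is an arbitrary example value (any $M > 2$ works).

```python
import numpy as np

# Numerical check of (16): for a chi-square variable X with M degrees of
# freedom (M > 2), E[1/X] = 1/(M-2). M here is an arbitrary example value.
rng = np.random.default_rng(2)
M, trials = 20, 500000
X = rng.chisquare(M, trials)
inv_mean = (1.0 / X).mean()
theory = 1.0 / (M - 2)
print(inv_mean, theory)
```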
Substituting (8) and (17) into (15) yields

(18) $E[\mathrm{SNR}_{cs}] = E[P_s]\, E\left[\dfrac{1}{P_n}\right] = \dfrac{M N \sigma_\phi^2 \sigma_s^2}{N\sigma_\phi^2\sigma_n^2 (M-2)} = \dfrac{\sigma_s^2}{\sigma_n^2} \cdot \dfrac{M}{M-2}.$
After that, we analyze the variance of the SNR, viewing $\mathrm{SNR}_{cs}$ as the product of the two independent random variables $P_s$ and $1/P_n$. By virtue of the property of the variance of a product of independent variables, we have

(19) $D[\mathrm{SNR}_{cs}] = D\left[P_s \cdot \dfrac{1}{P_n}\right] = D[P_s]\, D\left[\dfrac{1}{P_n}\right] + D[P_s]\, E^2\left[\dfrac{1}{P_n}\right] + E^2[P_s]\, D\left[\dfrac{1}{P_n}\right].$
Now, we calculate the variance of $1/P_n$. According to the definition of the variance, we can obtain

(20) $D\left[\dfrac{1}{P_n}\right] = D\left[\dfrac{1}{N\sigma_\phi^2\sigma_n^2 X}\right] = \dfrac{1}{(N\sigma_\phi^2\sigma_n^2)^2}\left(E\left[\dfrac{1}{X^2}\right] - E^2\left[\dfrac{1}{X}\right]\right).$
For $M > 4$, the term $E[1/X^2]$ can be calculated as

(21) $E\left[\dfrac{1}{X^2}\right] = \dfrac{1}{2^{M/2}\Gamma(M/2)} \int_0^\infty x^{-2} e^{-x/2} x^{M/2-1}\,dx = \dfrac{2^{(M-4)/2}\Gamma(M/2-2)}{2^{M/2}\Gamma(M/2)} = \dfrac{1}{(M-2)(M-4)}.$
By substituting (21) and (16) into (20), we can compute the variance of $1/P_n$:

(22) $D\left[\dfrac{1}{P_n}\right] = \dfrac{1}{(N\sigma_\phi^2\sigma_n^2)^2} \cdot \dfrac{2}{(M-2)^2(M-4)}.$
In terms of (8), (9), (17), and (22), (19) can be rewritten as

(23) $D[\mathrm{SNR}_{cs}] = \dfrac{\sigma_s^4}{\sigma_n^4} \cdot \dfrac{4M^2 - 4M}{(M-2)^2(M-4)}.$
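The moments (18) and (23) can be checked together by Monte Carlo. Following the independence argument above, $P_s$ and $P_n$ are drawn as independent scaled chi-squares with $M$ degrees of freedom; all parameter values are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of the SNR mean (18) and variance (23).
# P_s and P_n are independent scaled chi-squares with M degrees of
# freedom, matching the model used in the derivation above.
rng = np.random.default_rng(3)
N, M, trials = 256, 32, 400000
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.25

Ps = N * sigma_phi2 * sigma_s2 * rng.chisquare(M, trials)
Pn = N * sigma_phi2 * sigma_n2 * rng.chisquare(M, trials)
snr = Ps / Pn

mean_theory = (sigma_s2 / sigma_n2) * M / (M - 2)                # (18)
var_theory = (sigma_s2 / sigma_n2) ** 2 * (4 * M ** 2 - 4 * M) / (
    (M - 2) ** 2 * (M - 4))                                      # (23)
print(snr.mean(), mean_theory, snr.var(), var_theory)
```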
Most importantly, the probability density function (PDF) of the SNR should be discussed. Because $P_s \sim \Gamma(\kappa_{p_s}, \theta_{p_s})$ and $P_n \sim \Gamma(\kappa_{p_n}, \theta_{p_n})$ are independent, the PDF of the ratio of two independent Gamma-distributed random variables can be expressed as

(24) $f_{\mathrm{SNR}_{cs}}(x) = \left(\dfrac{\theta_{p_s}}{\theta_{p_n}}\right)^{\kappa_{p_n}} \dfrac{\Gamma(\kappa_{p_s}+\kappa_{p_n})}{\Gamma(\kappa_{p_s})\Gamma(\kappa_{p_n})} \cdot \dfrac{x^{\kappa_{p_s}-1}}{\left(x + \theta_{p_s}/\theta_{p_n}\right)^{\kappa_{p_s}+\kappa_{p_n}}}.$
The corresponding cumulative distribution function (CDF) can be expressed as

(25) $F_{\mathrm{SNR}_{cs}}(x) = I_{x/(x+\theta_{p_s}/\theta_{p_n})}(\kappa_{p_s}, \kappa_{p_n}),$

where $I_q(\cdot,\cdot)$ is the regularized incomplete beta function; that is, $I_q(a,b) = B(q;a,b)/B(a,b)$, $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$, and $B(q;a,b) = \int_0^q t^{a-1}(1-t)^{b-1}\,dt$. Accordingly, (25) can be rewritten as

(26) $F_{\mathrm{SNR}_{cs}}(x) = \dfrac{\Gamma(\kappa_{p_s}+\kappa_{p_n})}{\Gamma(\kappa_{p_s})\Gamma(\kappa_{p_n})} \left(\dfrac{\theta_{p_n}}{\theta_{p_s}}\right)^{\kappa_{p_s}} \dfrac{x^{\kappa_{p_s}}}{\kappa_{p_s}} \cdot {}_2F_1\!\left(\kappa_{p_s}+\kappa_{p_n}, \kappa_{p_s}; \kappa_{p_s}+1; -\dfrac{\theta_{p_n}}{\theta_{p_s}}x\right),$

where ${}_2F_1(\cdot,\cdot;\cdot;\cdot)$ is the Gauss hypergeometric function.
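The ratio PDF (24) can be validated numerically without special-function libraries: its integral over an interval should match the empirical probability mass there. The shape parameters below follow the analysis ($\kappa = M/2$); the scale values are example assumptions.

```python
import math
import numpy as np

# Checks the ratio PDF (24): integrate it over [0, 12] and compare with
# the empirical probability P(SNR <= 12). Scale values are illustrative.
rng = np.random.default_rng(4)
M, trials = 16, 400000
theta_ps, theta_pn = 4.0, 1.0          # illustrative scale parameters
kappa_ps = kappa_pn = M / 2.0          # shape parameters, both M/2

snr = (rng.gamma(kappa_ps, theta_ps, trials)
       / rng.gamma(kappa_pn, theta_pn, trials))

def pdf_gamma_ratio(x):
    """PDF (24) of the ratio of two independent Gamma random variables."""
    r = theta_ps / theta_pn
    coef = r ** kappa_pn * math.gamma(kappa_ps + kappa_pn) / (
        math.gamma(kappa_ps) * math.gamma(kappa_pn))
    return coef * x ** (kappa_ps - 1.0) / (x + r) ** (kappa_ps + kappa_pn)

xs = np.linspace(1e-9, 12.0, 4001)
vals = np.array([pdf_gamma_ratio(x) for x in xs])
mass_theory = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(xs)))
mass_emp = float((snr <= 12.0).mean())
print(mass_emp, mass_theory)
```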
We can observe that the expressions of the CDF and PDF are very complicated, so it is difficult to exploit them directly in practical scenarios. Generally, the number of compressed measurements $M$ is relatively large; thus the quadratic sums $P_s$ and $P_n$ can also be approximated as Gaussian random variables in terms of the central limit theorem [9]. Because of the independence of $P_s$ and $P_n$, the probability density function of the SNR, as the ratio of two independent Gaussian random variables, can be expressed as [10, 11]

(27) $f_{\mathrm{SNR}_{cs}}(x) = \dfrac{v(x)c(x)}{u^3(x)\sqrt{2\pi}\,\sigma_{p_s}\sigma_{p_n}}\left[2\Phi\!\left(\dfrac{v(x)}{u(x)}\right) - 1\right] + \dfrac{1}{u^2(x)\,\pi\,\sigma_{p_s}\sigma_{p_n}}\, e^{-(1/2)\left(E_{p_s}^2/\sigma_{p_s}^2 + E_{p_n}^2/\sigma_{p_n}^2\right)},$

where $u(x) = \sqrt{x^2/\sigma_{p_s}^2 + 1/\sigma_{p_n}^2}$, $v(x) = x E_{p_s}/\sigma_{p_s}^2 + E_{p_n}/\sigma_{p_n}^2$, $c(x) = e^{(1/2)\left(v^2(x)/u^2(x)\right) - (1/2)\left(E_{p_s}^2/\sigma_{p_s}^2 + E_{p_n}^2/\sigma_{p_n}^2\right)}$, and $\Phi(z) = \int_{-\infty}^{z} (1/\sqrt{2\pi})\, e^{-w^2/2}\,dw$.
The corresponding CDF is expressed as

(28) $F_{\mathrm{SNR}_{cs}}(x) = \Phi\!\left(\dfrac{E_{p_n} x - E_{p_s}}{\sqrt{\sigma_{p_n}^2 x^2 + \sigma_{p_s}^2}}\right),$

where $\Phi(\cdot)$ is the CDF of the standard Gaussian random variable.
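The Gaussian-approximation CDF (28) needs only the error function and can be compared directly with an empirical CDF of $P_s/P_n$. In this sketch, $P_s$ and $P_n$ are again drawn as independent scaled chi-squares; dimensions and variances are example assumptions.

```python
import math
import numpy as np

# Compares the Gaussian-approximation CDF (28) with the empirical CDF of
# SNR = Ps/Pn at two sample points. Parameter values are illustrative.
rng = np.random.default_rng(5)
N, M, trials = 512, 128, 200000
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.5

E_ps = M * N * sigma_phi2 * sigma_s2             # (8)
V_ps = 2 * M * (N * sigma_phi2 * sigma_s2) ** 2  # (9)
E_pn = M * N * sigma_phi2 * sigma_n2             # (11)
V_pn = 2 * M * (N * sigma_phi2 * sigma_n2) ** 2  # (12)

snr = (N * sigma_phi2 * sigma_s2 * rng.chisquare(M, trials)
       / (N * sigma_phi2 * sigma_n2 * rng.chisquare(M, trials)))

def cdf_gauss(x):
    """CDF (28): Phi((E_pn*x - E_ps) / sqrt(V_pn*x^2 + V_ps))."""
    z = (E_pn * x - E_ps) / math.sqrt(V_pn * x ** 2 + V_ps)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for x in (sigma_s2 / sigma_n2, 2.5):
    print(x, float((snr <= x).mean()), cdf_gauss(x))
```

At $x = \sigma_s^2/\sigma_n^2$ the numerator of (28) vanishes, so the approximate CDF equals exactly 0.5 there, matching the median of the ratio of two identically distributed chi-squares.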
It can be seen that the resulting CDF and PDF are related to the sparsity $K$ (through the condition $M \geq K\log_2(N/K)$), the signal dimension $N$, the variance of the signal, the variance of the noise, and the variance of the measurement matrix. Furthermore, the CDF and PDF under the Gaussian assumption are simpler than those of the Gamma distribution.
It should be noted that we do not place any restriction on the power range of the noise and the signal in the previous analysis, so the results can be applied for any SNR. Hence, it is reasonable not to consider the dynamic range.
4. Simulation Results
To verify the theoretical analysis, some simulations are performed. Firstly, the PDF of the energy of the compressed signal for different $M$ is shown in Figure 1. Random variables following the binomial distribution are employed as the signal, which is a common information model for practical communication systems. The mean and the variance of this distribution are zero and 1, respectively; that is, $\sigma_s^2 = 1$.
PDF of energy of compressed signal for M=50,100,200.
The simulation parameters are as follows: $N = 512$ and $M = 50, 100, 200$. To remove the effect of the measurement matrix on the energy of the compressed signal and the compressed noise, the variance of each entry of the measurement matrix $\Phi$ is set to $1/M$; that is, $D[\phi_{ki}] = \sigma_\phi^2 = 1/M$. Combining with the mean and the variance of the binomial distribution, (8) and (9) can be simplified as

(29) $E_{p_s} = N\sigma_s^2 = N, \quad \sigma_{p_s}^2 = D[P_s] = \dfrac{2(N\sigma_s^2)^2}{M} = \dfrac{2N^2}{M}.$
It can be seen from (29) that the mean is fixed when $M$ changes; in other words, the mean is independent of $M$, whereas the variance is inversely proportional to $M$. Hence, the simulation result is consistent with the theoretical analysis (29).
For the noise, (11) and (12) can be rewritten as

(30) $E_{p_n} = N\sigma_n^2, \quad \sigma_{p_n}^2 = D[P_n] = \dfrac{2(N\sigma_n^2)^2}{M} = \dfrac{2N^2\sigma_n^4}{M}.$
Figure 2 shows the impact of the number of compressed measurements on the statistical properties. It can be observed that the variation tendencies of the mean and the variance are the same as those of the compressed signal. It should be pointed out that, although the statistical properties change, the energy of the compressed noise approximates the energy of the original noise because of the RIP of the measurement matrix; the same holds for the signal and the compressed signal.
PDF of energy of compressed noise for M=0,50,100,200.
Finally, the SNR of the compressed measurements is demonstrated for different numbers of compressed measurements $M$ in Figure 3. It can be observed that the mean of the SNR is nearly the same for different $M$ because $M/(M-2) \approx 1$, while the variance of the SNR decreases as $M$ increases. Consequently, we conclude that the simulated results coincide with (18) and (23).
PDF of SNR of compressed measurements for M=0,50,100,200.
5. Conclusion
In the framework of compressed sensing, some statistical properties of the compressed signal and the compressed noise were calculated and analyzed, mainly the mean, the variance, the probability density function, and the cumulative distribution function. It has been illustrated that these statistical properties change when the signal and the noise are processed by compressed sensing. If the entries of the measurement matrix are normalized, the means of the energies of the compressed signal and the compressed noise remain unchanged, but their variances vary inversely with the number of compressed measurements $M$. Based on these results, the mean and the variance of the SNR were derived. Then, using the independence of the energies of the compressed signal and the compressed noise, we derived closed-form expressions of the probability density function and the cumulative distribution function for the cases of the Gamma and the Gaussian distribution.
Competing Interests
The authors declare that there is no conflict of interests regarding the publication of this article.
Acknowledgments
This work is supported by National Natural Science Foundation of China (NSFC) (61301101, 61671176).
References

[1] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
[2] M. L. Malloy and R. D. Nowak, "Near-optimal adaptive compressed sensing," IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 4001–4012, 2014.
[3] L. Wang, K. Lu, and P. Liu, "Compressed sensing of a remote sensing image based on the priors of the reference image," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 4, pp. 736–740, 2015.
[4] Y. Wang, C. Guo, X. Sun, and C. Feng, "Time-efficient wideband spectrum sensing based on compressive sampling," in Proceedings of the 81st IEEE Vehicular Technology Conference (VTC Spring '15), Glasgow, UK, May 2015, pp. 1–5.
[5] S. Hong, "Direct spectrum sensing from compressed measurements," in Proceedings of the IEEE Military Communications Conference (MILCOM '10), November 2010, pp. 1187–1192.
[6] M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal processing with compressive measurements," IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 2, pp. 445–460, 2010.
[7] E. Arias-Castro and Y. C. Eldar, "Noise folding in compressed sensing," IEEE Signal Processing Letters, vol. 18, no. 8, pp. 478–481, 2011.
[8] M. A. Davenport, J. N. Laska, J. R. Treichler, and R. G. Baraniuk, "The pros and cons of compressive sensing for wideband signal acquisition: noise folding versus dynamic range," IEEE Transactions on Signal Processing, vol. 60, no. 9, pp. 4628–4642, 2012.
[9] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series, and Products, 7th edition, Elsevier, 2007.
[10] D. V. Hinkley, "On the ratio of two correlated normal random variables," Biometrika, vol. 56, no. 3, pp. 635–639, 1969.
[11] A. Papoulis, Probability, Random Variables, and Stochastic Processes, 2nd edition, McGraw-Hill, New York, NY, USA, 1984.