Statistical Properties of SNR for Compressed Measurements

Some basic statistical properties of the compressed measurements are investigated. These properties are a foundation for analyzing the performance of signal detection and the applications of compressed sensing in communication signal processing. First, we discuss the statistical properties of the compressed signal, the compressed noise, and their corresponding energies. We then calculate the statistical characteristics of the SNR of the compressed measurements, including its mean and variance. Finally, the probability density function and cumulative distribution function of the SNR are derived for the cases of the Gamma distribution and the Gaussian distribution. Numerical simulation results confirm the theoretical analysis.


Introduction
Compressed sensing was proposed to recover the original signal from compressed measurements [1]. Its key technologies consist of sparse representation, the design of the measurement matrix, and the recovery algorithm. Sparse representation of the signal is the premise and basis of compressed sensing; the restricted isometry property (RIP) of the measurement matrix mathematically guarantees a unique solution of signal reconstruction [2]; and the recovery algorithm finds this unique solution through different methods. Therefore, to some extent, compressed sensing can be considered a sampling technology similar to Shannon's sampling theorem. However, the signal is sampled according to its sparsity rather than the bandwidth required by Shannon's sampling theorem. In other words, compressed sensing extracts only the useful information rather than the signal itself. Based on these advantages, compressed sensing is viewed as a promising technology for many research fields, such as remote sensing, image processing, and wireless communication [3].
Currently, wideband signals must be employed in wireless communication to satisfy the demand for high-rate data transmission, which brings many challenges for signal sampling devices, such as a high sampling rate and cost. To cope with these difficulties, compressed sensing has been employed in many signal processing tasks of wireless communication, for example, channel estimation and spectrum sensing [4].
In the beginning, the communication signal was processed in accordance with the standard procedure of compressed sensing [2,3]; that is, the signal is sampled by means of the measurement matrix to obtain the compressed measurements. The original signal is then reconstructed by the recovery algorithm. Finally, the recovered signal is further manipulated to accomplish different signal processing tasks. It has been proved that the recovery algorithm has high computational complexity [2]. Nevertheless, some signal processing tasks, especially inference problems, concentrate only on the decision results and the related parameters rather than the signal itself, such as signal detection, signal classification, and signal parameter identification. The recovered signal is meaningless for such inference problems. That is to say, reconstruction-based signal processing methods cannot fully exploit the merits of compressed sensing. Consequently, the nonreconstruction framework of signal processing was presented [5,6]. Under the nonreconstruction framework, the compressed measurements are directly employed to deal with the inference problem without resorting to a full-scale signal reconstruction.
It is widely recognized that the communication signal is inevitably corrupted by noise. Consequently, the performance of these signal processing tasks is closely related to the SNR of the compressed measurements, which involves the energies of the compressed noise and the compressed signal. In [7], noise folding in compressed sensing is considered, and the relation between the SNR of the compressed measurements and the compression ratio $N/M$ is derived, where $N$ is the dimension of the signal and $M$ is the number of compressed measurements. The impact of noise folding on wideband signal acquisition is then discussed in [8], where the relation between the SNR of the compressed measurements and the SNR of the recovered signal plus noise is studied. However, the statistical properties of the energy and SNR of the compressed measurements have not been further investigated. It is well known that these statistical properties are important for investigating the performance of signal detection and signal parameter identification. Therefore, it is vital to derive the statistical properties of the energy and SNR of the compressed measurements for the performance analysis of signal processing.
Because a random measurement matrix is frequently utilized, the energy of the compressed signal and the energy of the compressed noise are random variables. Further, the resulting SNR of the compressed measurements is also a random variable. Consequently, we first discuss the mean, the variance, the probability density function (PDF), and the cumulative distribution function (CDF) of the energy of the compressed signal and of the energy of the compressed noise. After that, we derive the statistical properties of the SNR of the compressed measurements, including the mean, the variance, the PDF, and the CDF. These results provide a foundation for the performance analysis of signal processing tasks.

Statistical Properties of Compressed Measurements and Their Quadratic Sum
For compressed sensing, the compressed measurements can be expressed as
$$\mathbf{y} = \boldsymbol{\Phi}\,(\mathbf{s} + \mathbf{n}), \tag{1}$$
where $\mathbf{y} \in \mathbb{R}^{M}$, $\boldsymbol{\Phi} \in \mathbb{R}^{M \times N}$, $\mathbf{s} \in \mathbb{R}^{N}$, and $\mathbf{n} \in \mathbb{R}^{N}$ denote the compressed measurements, the random measurement matrix, the signal vector with sparsity $K$, and the noise vector, respectively. It is assumed that the signal and the noise are filtered by a band-pass filter before compressed sensing; therefore, the noise folding problem need not be considered. Mathematically, the condition $M \geq K \log_{2}(N/K)$ should be satisfied to recover the signal with high probability. Specifically, the entries of $\mathbf{y}$ can be calculated as
$$y(i) = \sum_{j=1}^{N} \Phi_{ij}\,\bigl[s(j) + n(j)\bigr], \quad i = 1, \ldots, M. \tag{2}$$
Without loss of generality, we assume that the entries of the random measurement matrix are i.i.d. Gaussian random variables with mean zero and variance $\sigma_{\Phi}^{2}$, the entries of the noise vector are i.i.d. Gaussian random variables with mean zero and variance $\sigma_{n}^{2}$, and the entries of the signal are i.i.d. random variables with mean zero and variance $\sigma_{s}^{2}$. The entries of the random measurement matrix, the noise vector, and the signal vector are also statistically independent of each other. Because their means are zero, they are also uncorrelated and orthogonal.
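As an illustration, the measurement model above can be sketched in a few lines of Python (NumPy); the dimensions and variances below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch of the measurement model y = Phi (s + n).
# N, M, and the variances below are illustrative assumptions.
rng = np.random.default_rng(0)
N, M = 512, 128
sigma_phi = 1.0 / np.sqrt(M)   # entry variance sigma_phi^2 = 1/M (assumption)
sigma_s, sigma_n = 1.0, 0.1

Phi = rng.normal(0.0, sigma_phi, size=(M, N))  # random Gaussian measurement matrix
s = rng.normal(0.0, sigma_s, size=N)           # i.i.d. zero-mean signal model
n = rng.normal(0.0, sigma_n, size=N)           # band-limited noise model
y = Phi @ (s + n)                              # compressed measurements

# Each entry of y is approximately Gaussian with mean 0 and
# variance N * sigma_phi**2 * (sigma_s**2 + sigma_n**2).
print(y.shape, np.var(y))
```

With $\sigma_{\Phi}^{2} = 1/M$, the empirical variance of the entries of `y` concentrates around $N\sigma_{\Phi}^{2}(\sigma_{s}^{2}+\sigma_{n}^{2})$, matching the derivation that follows.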
Let $s_{\mathrm{cs},i} = \sum_{j=1}^{N} \Phi_{ij}\, s(j)$ and $n_{\mathrm{cs},i} = \sum_{j=1}^{N} \Phi_{ij}\, n(j)$ denote the $i$th entries of the compressed signal and the compressed noise, respectively. We first analyze the statistical properties of $s_{\mathrm{cs},i}$. According to the central limit theorem, $s_{\mathrm{cs},i}$ can be considered a Gaussian random variable, whose mean is
$$E[s_{\mathrm{cs},i}] = \sum_{j=1}^{N} E[\Phi_{ij}]\,E[s(j)] = 0. \tag{3}$$
Because the means of $\Phi_{ij}$ and $s(j)$ are zero, the variance of $s_{\mathrm{cs},i}$ is calculated as
$$\operatorname{Var}[s_{\mathrm{cs},i}] = \sum_{j=1}^{N} E[\Phi_{ij}^{2}]\,E[s^{2}(j)] = N\sigma_{\Phi}^{2}\sigma_{s}^{2}. \tag{4}$$
After that, we compute the mean of $n_{\mathrm{cs},i}$:
$$E[n_{\mathrm{cs},i}] = \sum_{j=1}^{N} E[\Phi_{ij}]\,E[n(j)] = 0. \tag{5}$$
Similarly, the variance of $n_{\mathrm{cs},i}$ can also be calculated as
$$\operatorname{Var}[n_{\mathrm{cs},i}] = N\sigma_{\Phi}^{2}\sigma_{n}^{2}. \tag{6}$$
Therefore, the entries of the compressed measurements $\mathbf{y}$ are Gaussian random variables with mean zero and variance $N\sigma_{\Phi}^{2}(\sigma_{s}^{2} + \sigma_{n}^{2})$. The quadratic sum of the compressed signal is denoted as
$$E_{s} = \sum_{i=1}^{M} s_{\mathrm{cs},i}^{2}. \tag{7}$$
This quadratic sum follows the central Gamma distribution (a scaled central chi-square with $M$ degrees of freedom). The corresponding mean and variance are calculated as
$$E[E_{s}] = MN\sigma_{\Phi}^{2}\sigma_{s}^{2}, \tag{8}$$
$$\operatorname{Var}[E_{s}] = 2M\left(N\sigma_{\Phi}^{2}\sigma_{s}^{2}\right)^{2}. \tag{9}$$
Its shape and scale parameters are
$$\alpha_{s} = \frac{\left(E[E_{s}]\right)^{2}}{\operatorname{Var}[E_{s}]} = \frac{M}{2}, \qquad \beta_{s} = \frac{\operatorname{Var}[E_{s}]}{E[E_{s}]} = 2N\sigma_{\Phi}^{2}\sigma_{s}^{2}. \tag{10}$$
Hence, $E_{s}$ can be represented as $E_{s} \sim \Gamma(\alpha_{s}, \beta_{s})$. Next, we analyze the energy of the compressed noise, which is written as $E_{n} = \sum_{i=1}^{M} n_{\mathrm{cs},i}^{2}$. By analogy, the quadratic sum of the compressed noise also follows the central Gamma distribution with $M$ degrees of freedom. The corresponding mean and variance are calculated as
$$E[E_{n}] = MN\sigma_{\Phi}^{2}\sigma_{n}^{2}, \tag{11}$$
$$\operatorname{Var}[E_{n}] = 2M\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}. \tag{12}$$
Similarly, its shape and scale parameters are $\alpha_{n} = \left(E[E_{n}]\right)^{2}/\operatorname{Var}[E_{n}] = M/2$ and $\beta_{n} = \operatorname{Var}[E_{n}]/E[E_{n}] = 2N\sigma_{\Phi}^{2}\sigma_{n}^{2}$, respectively. Thus, $E_{n}$ is represented as $E_{n} \sim \Gamma(\alpha_{n}, \beta_{n})$.
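The mean and variance of the compressed-signal energy derived above can be checked with a small Monte Carlo sketch. The dimensions are illustrative assumptions; note that for a Gaussian signal the Gamma model rests on the central limit theorem, so at finite $N$ the empirical variance slightly exceeds the theoretical value.

```python
import numpy as np

# Monte Carlo sketch of the compressed-signal energy E_s = sum_i s_cs,i^2,
# checking its mean M*N*sigma_phi^2*sigma_s^2 and variance
# 2*M*(N*sigma_phi^2*sigma_s^2)^2. Dimensions are illustrative assumptions.
# For a Gaussian signal the Gamma model is a CLT approximation, so the
# empirical variance slightly exceeds the theoretical value at finite N.
rng = np.random.default_rng(1)
N, M, trials = 128, 32, 5000
sigma_phi, sigma_s = 1.0 / np.sqrt(M), 1.0

E_s = np.empty(trials)
for t in range(trials):
    Phi = rng.normal(0.0, sigma_phi, size=(M, N))
    s = rng.normal(0.0, sigma_s, size=N)
    E_s[t] = np.sum((Phi @ s) ** 2)

v = N * sigma_phi**2 * sigma_s**2
print(E_s.mean(), M * v)        # empirical vs. theoretical mean
print(E_s.var(), 2 * M * v**2)  # empirical vs. theoretical variance
```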

The Statistical Properties of SNR
The SNR of the compressed measurements is defined as
$$\rho = \frac{E_{s}}{E_{n}}. \tag{13}$$
We now analyze the relation between $E_{s}$ and $E_{n}$. First, we calculate the mean of the product of the compressed signal and the compressed noise:
$$E[s_{\mathrm{cs},i}\, n_{\mathrm{cs},i}] = E\left[\sum_{j=1}^{N}\sum_{k=1}^{N} \Phi_{ij}\,\Phi_{ik}\, s(j)\, n(k)\right]. \tag{14}$$
It can be seen that $N^{2}$ terms are obtained when (14) is expanded, and each term consists of an entry of the signal, an entry of the noise, and entries of the random measurement matrix, all with mean zero. Because these entries are uncorrelated and independent, the resulting mean $E[s_{\mathrm{cs},i}\, n_{\mathrm{cs},i}] = 0$, which means that $s_{\mathrm{cs},i}$ and $n_{\mathrm{cs},i}$ are orthogonal. Considering (3) and (5), we can observe that $s_{\mathrm{cs},i}$ and $n_{\mathrm{cs},i}$ are uncorrelated. Because uncorrelatedness and independence are equivalent for Gaussian random variables, $s_{\mathrm{cs},i}$ and $n_{\mathrm{cs},i}$ are independent of each other. It follows directly that $E_{s}$ and $E_{n}$ are also independent. Based on these results, we calculate the mean of the SNR:
$$E[\rho] = E\left[E_{s}\cdot\frac{1}{E_{n}}\right] = E[E_{s}]\, E\left[\frac{1}{E_{n}}\right]. \tag{15}$$
Let $X = E_{n}/(N\sigma_{\Phi}^{2}\sigma_{n}^{2})$. Because $E_{n}$ is the quadratic sum of $M$ i.i.d. zero-mean Gaussian variables with variance $N\sigma_{\Phi}^{2}\sigma_{n}^{2}$, $X$ follows the chi-square distribution with $M$ degrees of freedom. Hence, the mean of the reciprocal of $X$ can be calculated as
$$E\left[\frac{1}{X}\right] = \frac{1}{M-2}. \tag{16}$$
According to the relation between $E_{n}$ and $X$, we have
$$E\left[\frac{1}{E_{n}}\right] = \frac{1}{N\sigma_{\Phi}^{2}\sigma_{n}^{2}}\,E\left[\frac{1}{X}\right] = \frac{1}{(M-2)\,N\sigma_{\Phi}^{2}\sigma_{n}^{2}}. \tag{17}$$
Substituting (8) and (17) into (15) yields
$$E[\rho] = \frac{M}{M-2}\cdot\frac{\sigma_{s}^{2}}{\sigma_{n}^{2}}. \tag{18}$$
After that, we analyze the variance of the SNR, which is viewed as the variance of the product of two independent random variables. By virtue of the properties of the variance, we have
$$\operatorname{Var}[\rho] = \operatorname{Var}[E_{s}]\operatorname{Var}\left[\frac{1}{E_{n}}\right] + \operatorname{Var}[E_{s}]\left(E\left[\frac{1}{E_{n}}\right]\right)^{2} + \left(E[E_{s}]\right)^{2}\operatorname{Var}\left[\frac{1}{E_{n}}\right]. \tag{19}$$
Now, we calculate the variance of $1/E_{n}$. According to the definition of the variance, we can obtain
$$\operatorname{Var}\left[\frac{1}{E_{n}}\right] = E\left[\left(\frac{1}{E_{n}}\right)^{2}\right] - \left(E\left[\frac{1}{E_{n}}\right]\right)^{2}. \tag{20}$$
The term $E[(1/X)^{2}]$ can be calculated as
$$E\left[\left(\frac{1}{X}\right)^{2}\right] = \frac{1}{(M-2)(M-4)}. \tag{21}$$
By substituting (21) and (16) into (20), we can compute the variance of $1/E_{n}$:
$$\operatorname{Var}\left[\frac{1}{E_{n}}\right] = \frac{1}{\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}}\left[\frac{1}{(M-2)(M-4)} - \frac{1}{(M-2)^{2}}\right] = \frac{2}{\left(N\sigma_{\Phi}^{2}\sigma_{n}^{2}\right)^{2}(M-2)^{2}(M-4)}. \tag{22}$$
In terms of (8), (9), (17), and (22), (19) can be rewritten as
$$\operatorname{Var}[\rho] = \frac{4M(M-1)}{(M-2)^{2}(M-4)}\left(\frac{\sigma_{s}^{2}}{\sigma_{n}^{2}}\right)^{2}. \tag{23}$$
Most importantly, the probability density function (PDF) of the SNR should be discussed. Because of the independence of $E_{s}$ and $1/E_{n}$, with $E_{s} \sim \Gamma(M/2,\, 2N\sigma_{\Phi}^{2}\sigma_{s}^{2})$ and $E_{n} \sim \Gamma(M/2,\, 2N\sigma_{\Phi}^{2}\sigma_{n}^{2})$, the PDF of the ratio of the two Gamma random variables can be expressed as
$$f_{\rho}(x) = \frac{1}{B(M/2, M/2)}\,\frac{\gamma^{M/2}\, x^{M/2-1}}{(x+\gamma)^{M}}, \quad x > 0, \tag{24}$$
where $\gamma = \sigma_{s}^{2}/\sigma_{n}^{2}$ and $B(\cdot,\cdot)$ is the Beta function. The corresponding cumulative distribution function (CDF) can be expressed as
$$F_{\rho}(x) = I_{x/(x+\gamma)}\left(\frac{M}{2}, \frac{M}{2}\right), \tag{25}$$
where $I_{x}(\cdot,\cdot)$ is the regularized incomplete Beta function; that is,
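Under the Gamma model derived above, with $E_{s}$ and $E_{n}$ independent, the mean and variance of the SNR can be verified by direct sampling. The dimensions and the ratio $\sigma_{s}^{2}/\sigma_{n}^{2}$ below are illustrative assumptions.

```python
import numpy as np

# Monte Carlo check of the SNR mean and variance under the Gamma model:
# E_s ~ Gamma(M/2, 2*N*sigma_phi^2*sigma_s^2) independent of
# E_n ~ Gamma(M/2, 2*N*sigma_phi^2*sigma_n^2).
# All parameter values are illustrative assumptions.
rng = np.random.default_rng(3)
N, M, trials = 512, 32, 200_000
sigma_phi2, sigma_s2, sigma_n2 = 1.0 / M, 1.0, 0.25

E_s = rng.gamma(M / 2, 2 * N * sigma_phi2 * sigma_s2, trials)
E_n = rng.gamma(M / 2, 2 * N * sigma_phi2 * sigma_n2, trials)
rho = E_s / E_n

mean_theory = M / (M - 2) * sigma_s2 / sigma_n2
var_theory = (sigma_s2 / sigma_n2) ** 2 * 4 * M * (M - 1) / ((M - 2) ** 2 * (M - 4))
print(rho.mean(), mean_theory)  # empirical vs. theoretical mean
print(rho.var(), var_theory)    # empirical vs. theoretical variance
```

Equivalently, $\rho$ is $\sigma_{s}^{2}/\sigma_{n}^{2}$ times an $F(M, M)$-distributed variable, which is where the factor $M/(M-2)$ in the mean and the variance expression originate.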
$I_{x}(a,b) = B(x; a,b)/B(a,b)$, $B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$, and $B(x; a,b) = \int_{0}^{x} t^{a-1}(1-t)^{b-1}\,dt$. Accordingly, (25) can be rewritten as
$$F_{\rho}(x) = \frac{1}{a\,B(a,a)}\left(\frac{x}{x+\gamma}\right)^{a} {}_{2}F_{1}\!\left(a,\, 1-a;\; a+1;\; \frac{x}{x+\gamma}\right), \quad a = \frac{M}{2}, \tag{26}$$
where ${}_{2}F_{1}(\cdot,\cdot;\cdot;\cdot)$ is the hypergeometric function.
We can observe that these expressions of the CDF and PDF are very complicated, so it is difficult to directly exploit them in practical scenarios. Generally, the number of compressed measurements $M$ is relatively large; thus, in terms of the central limit theorem [9], the quadratic sums $E_{s}$ and $E_{n}$ can also be viewed as Gaussian, with $E_{s} \sim \mathcal{N}(\mu_{s}, \delta_{s}^{2})$ and $E_{n} \sim \mathcal{N}(\mu_{n}, \delta_{n}^{2})$, where $\mu_{s} = MN\sigma_{\Phi}^{2}\sigma_{s}^{2}$, $\delta_{s}^{2} = 2M(N\sigma_{\Phi}^{2}\sigma_{s}^{2})^{2}$, $\mu_{n} = MN\sigma_{\Phi}^{2}\sigma_{n}^{2}$, and $\delta_{n}^{2} = 2M(N\sigma_{\Phi}^{2}\sigma_{n}^{2})^{2}$. Because of the independence of $E_{s}$ and $1/E_{n}$, the probability density function of the SNR can be expressed as [10,11]
$$f_{\rho}(x) = \frac{\mu_{n}\delta_{s}^{2} + x\,\mu_{s}\delta_{n}^{2}}{\sqrt{2\pi}\left(\delta_{s}^{2} + x^{2}\delta_{n}^{2}\right)^{3/2}}\exp\!\left[-\frac{\left(\mu_{n}x - \mu_{s}\right)^{2}}{2\left(\delta_{s}^{2} + x^{2}\delta_{n}^{2}\right)}\right]. \tag{27}$$
The corresponding CDF is expressed as
$$F_{\rho}(x) = \Phi\!\left(\frac{\mu_{n}x - \mu_{s}}{\sqrt{\delta_{s}^{2} + x^{2}\delta_{n}^{2}}}\right), \tag{28}$$
where $\Phi(\cdot)$ is the CDF of the standard Gaussian random variable. It can be seen that the resulting CDF and PDF are related to the sparsity $K$ [through $M \geq K\log_{2}(N/K)$], the dimension $N$ of the signal, the variance of the signal, the variance of the noise, and the variance of the measurement matrix. Furthermore, the CDF and PDF under the Gaussian assumption are simpler than those of the Gamma distribution.
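A short sketch can compare the Gaussian-approximation CDF with an empirical CDF drawn from the exact Gamma model; all parameter values below are illustrative assumptions.

```python
import math
import numpy as np

# Gaussian approximation of the SNR CDF for large M: treating E_s and E_n as
# independent Gaussians N(M*v, 2*M*v**2) with v = N*sigma_phi^2*sigma^2, the
# CDF of rho = E_s/E_n is approximately Phi((mu_n*x - mu_s)/sqrt(d_s + x^2*d_n)).
# All parameter values are illustrative assumptions.
N, M = 512, 100
v_s = N * (1.0 / M) * 1.0    # N * sigma_phi^2 * sigma_s^2
v_n = N * (1.0 / M) * 0.25   # N * sigma_phi^2 * sigma_n^2

def cdf_gauss(x):
    mu_s, d_s = M * v_s, 2 * M * v_s**2
    mu_n, d_n = M * v_n, 2 * M * v_n**2
    z = (mu_n * x - mu_s) / math.sqrt(d_s + x**2 * d_n)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Empirical CDF from the exact Gamma model, for comparison.
rng = np.random.default_rng(4)
rho = rng.gamma(M / 2, 2 * v_s, 100_000) / rng.gamma(M / 2, 2 * v_n, 100_000)
x = 4.0                                   # the point x = sigma_s^2/sigma_n^2
print(cdf_gauss(x), np.mean(rho <= x))    # both close to 0.5
```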
It should be explained that the previous analysis places no restrictions on the power ranges of the noise and the signal, so the results apply for any SNR. Hence, it is reasonable not to consider the dynamic range.

Simulation Results
To verify the theoretical analysis, some simulations are performed. First, the PDF of the energy of the compressed signal for different $M$ is shown in Figure 1. Random variables following the binomial distribution (equiprobable $\pm 1$ values) are exploited as the signal, which is a common information model for practical communication systems. The mean and the variance of this distribution are zero and 1, respectively; that is, $\sigma_{s}^{2} = 1$.
The simulation parameters are as follows: $N = 512$ and $M = 50, 100, 200$. To remove the effect of the measurement matrix on the energies of the compressed signal and the compressed noise, the variance of the entries of the measurement matrix $\boldsymbol{\Phi}$ is set to $1/M$; that is, $\sigma_{\Phi}^{2} = 1/M$. Combining this with the mean and the variance of the binomial distribution, (8) and (9) can be simplified as
$$E[E_{s}] = N = 512, \qquad \operatorname{Var}[E_{s}] = \frac{2N^{2}}{M}. \tag{29}$$
It can be seen that the mean is fixed when $M$ changes; in other words, the mean is independent of $M$. However, the variance is inversely proportional to $M$. Hence, the simulation result is consistent with the theoretical analysis (29). For the noise, (11) and (12) can be rewritten as
$$E[E_{n}] = N\sigma_{n}^{2}, \qquad \operatorname{Var}[E_{n}] = \frac{2N^{2}\sigma_{n}^{4}}{M}. \tag{30}$$
Figure 2 shows the impact of the number of compressed measurements on the statistical properties. It can be observed that the variation tendencies of the mean and the variance are the same as those of the compressed signal. It should be pointed out that, although the statistical properties change, the energy of the noise approximates the energy of the compressed noise because of the RIP of the measurement matrix, and the same holds for the signal and the compressed signal.
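The Figure 1 setup can be reproduced with the following sketch, using a zero-mean, unit-variance binary signal; the trial count is an arbitrary illustrative choice.

```python
import numpy as np

# Reproduces the Figure 1 setup: N = 512, a zero-mean unit-variance binary
# signal, and measurement-matrix entries with variance 1/M, so that
# E[E_s] = N = 512 for every M while Var[E_s] = 2*N^2/M shrinks as M grows.
# The trial count is an arbitrary illustrative choice.
rng = np.random.default_rng(5)
N, trials = 512, 1000

results = {}
for M in (50, 100, 200):
    E_s = np.empty(trials)
    for t in range(trials):
        Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))
        s = rng.choice([-1.0, 1.0], size=N)   # +/-1 signal: mean 0, variance 1
        E_s[t] = np.sum((Phi @ s) ** 2)
    results[M] = (E_s.mean(), E_s.var())
    print(M, results[M])
```

Because $\|\mathbf{s}\|^{2} = N$ exactly for a $\pm 1$ signal, the Gamma model is exact in this setup, and the printed means stay near 512 while the variances fall as $M$ grows.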
Finally, the SNR of the compressed measurements is demonstrated for different numbers of compressed measurements $M$ in Figure 3. It can be observed that the mean of the SNR is approximately the same for different $M$, because $M/(M-2) \approx 1$, and that the variance of the SNR decreases with increasing $M$. Consequently, we conclude that these simulated results coincide with (18) and (23).
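The trend reported for Figure 3 follows directly from evaluating (18) and (23) at the three values of $M$; the ratio $\sigma_{s}^{2}/\sigma_{n}^{2} = 4$ below is an illustrative assumption.

```python
# Numerical evaluation of the SNR mean (18) and variance (23) as M grows,
# with an assumed (illustrative) ratio sigma_s^2 / sigma_n^2 = 4.
snr0 = 4.0
stats = {}
for M in (50, 100, 200):
    mean = M / (M - 2) * snr0                                   # Eq. (18)
    var = 4 * M * (M - 1) / ((M - 2) ** 2 * (M - 4)) * snr0**2  # Eq. (23)
    stats[M] = (mean, var)
    print(M, round(mean, 4), round(var, 4))
```

The mean barely moves (4.17, 4.08, 4.04) while the variance roughly halves each time $M$ doubles.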

Conclusion
In the framework of compressed sensing, some statistical properties of the compressed signal and the compressed noise were calculated and analyzed, mainly the mean, the variance, the probability density function, and the cumulative distribution function. It has been illustrated that these statistical properties vary when the signal and the noise are processed by compressed sensing. If the entries of the measurement matrix are normalized, the means of the energies of the compressed signal and the compressed noise remain unchanged, but their variances vary inversely with the number of compressed measurements $M$. Based on these results, the mean and the variance of the SNR were obtained. Then, using the independence of the energies of the compressed signal and the compressed noise, we derived closed-form expressions of the probability density function and the cumulative distribution function for the cases of the Gamma and the Gaussian distributions.