Peeling Decoding of LDPC Codes with Applications in Compressed Sensing

We present a new approach for the analysis of iterative peeling decoding recovery algorithms in the context of Low-Density Parity-Check (LDPC) codes and compressed sensing. The iterative recovery algorithm is particularly interesting for its low measurement cost and low computational complexity. The asymptotic analysis tracks the evolution of the fraction of unrecovered signal elements in each iteration, which is similar to the well-known density evolution analysis of LDPC decoding algorithms. Our analysis shows that there exists a threshold on the density factor: below this threshold the recovery algorithm succeeds; otherwise it fails. Simulation results are also provided to verify the agreement between the proposed asymptotic analysis and the recovery algorithm. Compared with existing work on the peeling decoding algorithm, which focuses on the failure probability of the recovery algorithm, our approach gives an accurate evolution of the performance for different measurement-matrix parameters and is easy to implement. We also show that the peeling decoding algorithm performs better than other schemes based on LDPC codes.


Introduction
Compressed sensing (CS) is a novel signal sampling theory that exploits the sparsity of signals [1,2]. In the noiseless setting, the CS problem can be considered as estimating an original sparse signal x ∈ R^N from the linear measurement vector y ∈ R^M, M ≪ N; that is, y = Ax, where A is referred to as the measurement matrix; this measurement process is also referred to as encoding. Generally, without additional information, it is impossible to recover x from y when M ≪ N. The signal x ∈ R^N is called K-sparse if it has no more than K nonzero entries. For a K-sparse signal and an appropriate measurement matrix A, even when M ≪ N, y essentially contains enough information to recover the original signal x; this recovery process is also referred to as decoding.
Candès et al. in [1,2] used a Gaussian random matrix as the measurement matrix and ℓ1 norm minimization to recover the original signal. However, most elements of a Gaussian random matrix are nonzero, which requires a lot of storage space. Furthermore, such random matrices are often difficult or expensive to implement in hardware. Recently, many researchers have introduced sparse matrices from channel coding into the CS system [3-5]; compared with random measurement matrices, sparse matrices reduce the storage space and are easy to implement in hardware. As a special class of sparse matrices, the parity-check matrices of Low-Density Parity-Check (LDPC) codes can be used as good measurement matrices for CS under ℓ1 norm minimization [3]. In [3], the authors pointed out that ℓ1 norm minimization decoding of LDPC codes is very similar to ℓ1 norm minimization in CS. Inspired by the work of [3], the authors of [4,5] constructed a class of deterministic measurement matrices from finite geometry LDPC (FG-LDPC) codes. However, these works only focus on the construction of measurement matrices.
According to the class of measurement matrix, decoding algorithms can be classified into algorithms with dense matrices and algorithms with sparse matrices. Generally, decoding algorithms based on sparse matrices have lower complexity than those associated with dense matrices; as shown in our analysis in Section 3, decoding with sparse matrices is faster than decoding with dense matrices. Since random dense matrices satisfy the well-known restricted isometry property (RIP) with high probability, the standard decoding algorithms for these matrices are based on greedy or convex programming [1,2,6,7]. The RIP is a sufficient condition which guarantees unique and perfect recovery of sparse signals via ℓ1-minimization [2]. It has been shown that sparse matrices do not satisfy the RIP when M < Ω(K²) [8]. However, decoding algorithms with sparse matrices exploit insights from coding theory to assist in sparse recovery. In fact, through the joint design of sparse measurement matrices and decoding algorithms, both the measurement cost and the decoding complexity can be reduced simultaneously.
Regarding the complexity of recovery algorithms, ℓ1 norm minimization has a computational complexity of O(N³). To reduce this complexity, various low-complexity message-passing algorithms with sparse measurement matrices have been introduced for the reconstruction of sparse signals in CS [9-22]. In [9], the authors used a random sparse matrix as the measurement matrix and proposed an iterative algorithm of complexity O(K log(K) log(N)) that requires only M = O(K log(N)) measurements. It was essentially the same as the idea of verification decoding of packet-based LDPC codes in [23] and was rediscovered by Zhang in [11]. These verification-based (VB) recovery algorithms in the context of CS have been analyzed rigorously via density evolution in [10,11]. In [12], the VB algorithm was applied to wideband spectrum sensing, where the measurement matrix is designed based on a block sparse matrix. For nonnegative sparse signals, a new message-passing algorithm, called the Interval-Passing (IP) algorithm, was proposed in [13], which performs better than the VB algorithm in [11]. In [14], a combination of the IP algorithm and the VB algorithm was proposed for nonnegative sparse signals, which can perform better than either algorithm alone. Based on expander graphs as sparse measurement matrices, Jafarpour et al. in [15,16] proposed a different iterative algorithm with a complexity of O(N log(N/K)) using O(K log(N/K)) measurements in the noiseless case. It is worth noting that the above fast recovery algorithms can only handle the noiseless case.
For the noisy case, in [17], the authors modeled the signal with Gaussian mixture priors and proposed a belief propagation (BP) algorithm. The main disadvantage of this approach is that the decoding complexity grows exponentially. In a similar work [18], the authors modeled the signal with Jeffreys' priors and applied well-known least-squares algorithms to recover the signal. However, in [18], the sparsity K was assumed to be known.
To overcome the restrictions of the decoding algorithms discussed above, Pawar et al. in [19-22] designed a hybrid mix of LDPC codes and the Discrete Fourier Transform (DFT) framework (LDPC-DFT) and proposed a fast Short-and-Wide Iterative Fast Transform (SWIFT) peeling decoding recovery algorithm. It needs only O(K) measurements and O(K) iteration steps to achieve exact signal recovery in the noiseless case. In the presence of noise, both the measurement cost and the computational complexity for exact signal recovery are O(K log^1.3(N)). These frameworks have also been used in spectrum sensing in [24]; Hassanieh et al. in [24] presented a prime sampling technology that can capture GHz of spectrum in real time with low-speed analog-to-digital converters (ADCs); this prime sampling technology is essentially identical to the framework of [19-22,25]. The frameworks of [19-22] have also been used in compressed sensing phase retrieval [26].
In this study, we are interested in the sparse LDPC-DFT measurement matrices associated with the SWIFT recovery algorithm [19-22], which reduce both the measurement cost and the computational complexity simultaneously. The authors in [19] analyzed the successful recovery probability of the SWIFT algorithm by exploiting the connection between the CS system and a packet-communication system. Furthermore, in [22] the authors investigated the generalized family of LDPC-DFT measurement matrices in terms of measurement cost, computational complexity, and recovery performance via a rough density evolution, where a local neighborhood of every edge in the graph is assumed to be cycle-free (tree-like). However, they only focused on reducing the number of measurements M required for successful recovery, for given N and K.
In this paper, we investigate the performance of the SWIFT recovery algorithm in the asymptotic case (N → ∞) in order to estimate the performance for a given sparse measurement matrix and K in the finite-N case; our approach gives an accurate evolution of the performance for different measurement-matrix parameters and is easy to implement. The analysis shows that there exists a threshold on the density factor α = K/N: below this threshold, the recovery algorithm succeeds; otherwise it fails. It is shown that the threshold depends on the parameters of the LDPC matrix, which provides a basis for designing sparse measurement matrices.

A Class of Hybrid LDPC-DFT Measurement Matrix and Their Fast Recovery Algorithm
In this section, we consider a special class of hybrid mixes of the parity-check matrix of a binary LDPC code and the DFT matrix (referred to as the "LDPC-DFT measurement matrix") and their fast recovery algorithm. We first introduce the sparse parity-check matrix of LDPC codes (referred to as the "LDPC-based measurement matrix") and its bipartite graph.

LDPC-Based Measurement Matrix and Measurement Graph.
In channel coding, it is well known that an LDPC code can be defined as the null space of its sparse parity-check matrix; the codes are classified into binary and nonbinary codes. In this paper, we only consider binary codes. A sparse parity-check matrix can be represented by a bipartite graph. In a bipartite graph, two sets of nodes are defined for the code symbols and the parity-check equations; they are labeled "variable nodes" and "check nodes," respectively. There is an edge between variable node v and check node c if and only if v appears in parity-check equation c.
In CS, when the measurement matrix is a sparse parity-check matrix, the measurement also has its own bipartite graph (referred to as the "measurement graph"). We denote this measurement graph by G = (V ∪ C, E), where the set of variable nodes V represents the original signal, the check nodes C represent the measurement observations, and E is the set of edges. The number of edges incident to a node (in V or C) is called the node degree. In this paper, we consider the case in which every variable node (check node) has the same degree d_v (d_c), which is referred to as a regular degree (d_v, d_c). Additionally, we denote by M(v) the set of all check nodes incident to variable node v. Similarly, N(c) denotes the set of all variable nodes incident to check node c. Due to the mathematical connection between channel coding and CS, we can analyze the fast recovery algorithm over the measurement graph. Hereinafter, we refer to the recovery process as decoding.
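As an illustration of such a regular measurement graph, the following sketch samples a (d_v, d_c)-regular bipartite graph via the configuration model, i.e., by randomly matching variable-node and check-node edge sockets. The function name and NumPy usage are our own illustration, not from the paper, and the construction may occasionally create parallel edges.

```python
import numpy as np

def regular_measurement_graph(N, M, dv, dc, seed=0):
    """Sample a (dv, dc)-regular bipartite measurement graph by randomly
    matching edge sockets (configuration model). Requires N*dv == M*dc.
    May occasionally create parallel edges, which we tolerate here."""
    assert N * dv == M * dc, "socket counts must match"
    rng = np.random.default_rng(seed)
    var_sockets = np.repeat(np.arange(N), dv)   # dv sockets per variable node
    chk_sockets = np.repeat(np.arange(M), dc)   # dc sockets per check node
    rng.shuffle(chk_sockets)                    # random matching of sockets
    return list(zip(var_sockets, chk_sockets))  # edges as (variable, check)

# e.g. the N = 15, M = 9 setting of the example below admits (dv, dc) = (3, 5)
edges = regular_measurement_graph(N=15, M=9, dv=3, dc=5)
```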

Decoding Algorithm Based on Sparse Measurement Graph.
We first introduce the principle of the decoding algorithm based on a sparse measurement graph through an example. Consider a sparse signal x of length N = 15 with K = 4 nonzero elements, among them x[1], x[8], and x[13] = 7; let the number of measurements be M = 9. According to the definition of the measurement graph, we obtain the sparse measurement graph shown in Figure 1.
Figure 1 shows that the check nodes can be classified into three types of observation nodes.

Zero-Ton.
A check node that does not contain any nonzero elements of the signal, for example, nodes 1, 3, 4, and 9 on the right side of Figure 1.

Single-Ton.
A check node that contains exactly one nonzero element of the signal, for example, nodes 5, 6, and 7 on the right side of Figure 1.

Multiton.
A check node that contains more than one nonzero element of the signal, for example, nodes 2 and 8 on the right side of Figure 1.
Assume that there exists an "oracle" that can decide exactly which check nodes are zero-tons, single-tons, and multitons, and that can further decode the value and index of the variable node incident to each single-ton. In the above example, the "oracle" decides that nodes 5, 6, and 7 are single-tons and decodes the values and indices of variable nodes 1, 13, and 8, respectively. Then, the "oracle" can subtract these variable nodes' contributions from the other check nodes and form new single-tons; for example, in Figure 1, after check node 8 subtracts the contribution of variable node x[13], check node 8 becomes a new single-ton.
Therefore, with the information of the single-tons, the "oracle" repeats the following steps: (1) selecting all the edges in the sparse measurement graph with single-ton check nodes; (2) removing these edges and the corresponding variable and check nodes connected to them; (3) removing the other edges adjacent to the variable nodes removed in step (2); (4) subtracting the values of the variable nodes removed in step (2) from the check nodes incident to the edges removed in step (3).
When all the edges have been removed, the decoding is successful. This decoding algorithm is called "peeling decoding" for the traditional erasure channel [23,27].
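The oracle-assisted peeling steps (1)-(4) above can be sketched as follows. This is a minimal illustration assuming an idealized oracle that reads the true signal; the function and variable names are hypothetical, not the paper's.

```python
import numpy as np

def oracle_peeling(edges, x_true, M):
    """Oracle-assisted peeling: check values are plain sums of connected
    signal entries, and the oracle identifies single-tons exactly."""
    nbr = [set() for _ in range(M)]      # check -> connected variable nodes
    var_chk = {}                         # variable -> connected check nodes
    for v, c in edges:
        nbr[c].add(v)
        var_chk.setdefault(v, set()).add(c)
    y = [sum(x_true[v] for v in nbr[c]) for c in range(M)]
    # zero entries contribute nothing, so the oracle tracks only the support
    for c in range(M):
        nbr[c] = {v for v in nbr[c] if x_true[v] != 0}
    x_hat = np.zeros_like(x_true)
    progress = True
    while progress:                      # steps (1)-(4), repeated until stuck
        progress = False
        for c in range(M):
            if len(nbr[c]) == 1:         # single-ton found by the oracle
                v = next(iter(nbr[c]))
                x_hat[v] = y[c]
                for c2 in var_chk[v]:    # peel v off every check it touches
                    if v in nbr[c2]:
                        nbr[c2].discard(v)
                        y[c2] -= x_hat[v]
                progress = True
    return x_hat
```

Decoding succeeds when every check value has been peeled down to zero.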
Luby et al. in [23,27] analyzed the performance of the peeling decoding algorithm; it has been shown that, with a good variable degree distribution, the peeling decoding algorithm terminates successfully with probability at least 1 − O(N^(−3/4)). However, in the decoding algorithm above, the "oracle" must (1) decide the type of the check nodes and (2) exactly estimate the value and index of the variable node incident to each single-ton. To get rid of the oracle, Pawar and Ramchandran introduced the DFT matrix to decode the single-tons in [19]; they also proved that the corresponding SWIFT algorithm with M = 2K(1 + ε) measurements exactly recovers the original signal with probability at least 1 − O(K^(−3/4)) in the noiseless setting, where ε > 0 can be arbitrarily close to 0.

Measurement Matrix Based on LDPC-DFT.
Denote by S ∈ C^(2×N) the matrix formed by the first two rows of the N × N DFT matrix; that is, S = [1, 1, …, 1; 1, ω, …, ω^(N−1)] with ω = e^(−j2π/N). Since the matrix S can decide the type of the check nodes and decode the value as well as the index of the corresponding variable nodes, the matrix S is also referred to as the "detection matrix." We design the required measurement matrix via the following definition.
Definition 1. Let m = 2M be the number of measurement rows. Given an M × N sparse parity-check matrix H of an LDPC code, denote by h_i the i-th row of H (1 ≤ i ≤ M). Given a detection matrix S ∈ C^(2×N), the 2M × N measurement matrix A is designed as A = H ⊙ S, where each row h_i of H produces the two rows h_i ∘ S[1,:] and h_i ∘ S[2,:]; here ⊙ is the row-tensor product, ∘ the elementwise product, and ⊗ the standard Kronecker product.
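Under Definition 1, a minimal sketch of this construction might look as follows; the sign convention for ω and the helper name are our assumptions.

```python
import numpy as np

def ldpc_dft_matrix(H):
    """Row-tensor a binary M x N parity-check matrix H with the first two
    rows of the N x N DFT matrix, giving a 2M x N complex measurement matrix.
    The sign convention omega = exp(-2j*pi/N) is an assumption."""
    M, N = H.shape
    omega = np.exp(-2j * np.pi / N)
    S = np.vstack([np.ones(N), omega ** np.arange(N)])  # detection matrix, 2 x N
    # each row h_i of H spawns two measurement rows: h_i * S[0] and h_i * S[1]
    A = np.vstack([H[i] * S[r] for i in range(M) for r in range(2)])
    return A, S
```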
From the definition of the Kronecker product, the value of every check node (observation) is a 2-length vector; for example, the multiton check node shown in Figure 2 has observation vector y_c = x[4](1, ω^4)^T + x[7](1, ω^7)^T + x[15](1, ω^15)^T. We use the following criteria to decide the type of a check node and to estimate the value and index of the corresponding variable node: a zero-ton satisfies y_c = 0; a single-ton satisfies |y_c[1]| = |y_c[2]| ≠ 0, its index k is determined by the phase of the ratio y_c[2]/y_c[1] = ω^k, and the corresponding variable node x[k] is nonzero with value given by y_c[1]; all remaining check nodes are declared multitons.
For successful decoding, the VB algorithm requires that the nonzero elements of the signal follow a continuous distribution [10,11]. In contrast, the above criteria identify the check-node types with probability one whether or not the nonzero elements obey a continuous distribution. Hence, the LDPC-DFT-based measurement matrix is applicable to a wider range of signal types. Consider, then, the case where the nonzero elements of the original signal take values from a finite discrete set. Again, we only prove the zero-ton case. In the worst case, for i ≠ j, Pr(x[i] = −x[j]) can approach 1. However, y_c is a 2-length complex vector; y_c = 0 requires not only y_c[1] = 0 but also y_c[2] = 0. Due to the structure of the sparse measurement matrix, y_c = 0 holds if and only if x[i] = 0 for all i ∈ N(c). Hence, the decoder can identify a zero-ton check node via the definition of a zero-ton.
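The check-node classification can be illustrated by the following sketch of the standard ratio test (the code uses 0-based components y_c[0], y_c[1] for the paper's y_c[1], y_c[2]); the tolerance and function name are our assumptions.

```python
import numpy as np

def classify_check(y_c, N, tol=1e-9):
    """Classify a 2-dimensional observation y_c as 'zero-ton',
    'single-ton' (also returning index and value), or 'multiton'.
    For a single-ton at index k with value x_k: y_c = x_k * (1, omega**k),
    so the index is read off the phase of the ratio y_c[1]/y_c[0]."""
    omega = np.exp(-2j * np.pi / N)
    if abs(y_c[0]) < tol and abs(y_c[1]) < tol:
        return ("zero-ton", None, None)
    if abs(y_c[0]) > tol and abs(abs(y_c[1]) - abs(y_c[0])) < tol:
        k = int(round(np.angle(y_c[1] / y_c[0]) / (-2 * np.pi / N))) % N
        if abs(y_c[1] - y_c[0] * omega ** k) < tol:  # consistency check
            return ("single-ton", k, y_c[0])
    return ("multiton", None, None)
```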
In the analysis of the framework and in the simulations, we generate the original signal as follows. Let T be the set of nonzero elements of the original signal and let α be the density factor, that is, the probability that a signal element belongs to T. For a given α, every element of the original signal takes the value 0 with probability 1 − α, or a value from the continuous distribution f (or from a discrete set) with probability α. In this sense, the sparsity K and the density ratio α̂ = K/N are random variables. Furthermore, E[K] = αN and E[α̂] = α, where E[·] denotes the expected value.
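A minimal sketch of this signal model, with our own function name:

```python
import numpy as np

def random_sparse_signal(N, alpha, discrete=False, seed=0):
    """Each entry is nonzero with probability alpha (the density factor);
    nonzero values are standard Gaussian, or drawn from {+1, -1} if
    discrete=True. The sparsity K = (x != 0).sum() is then random."""
    rng = np.random.default_rng(seed)
    support = rng.random(N) < alpha
    x = np.zeros(N)
    if discrete:
        x[support] = rng.choice([-1.0, 1.0], size=support.sum())
    else:
        x[support] = rng.standard_normal(support.sum())
    return x
```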

Decoding Algorithm.
In the above subsections, we described an iterative decoding algorithm that decodes the sparse signal with the help of an "oracle" implemented via the LDPC-DFT measurement matrix and the single-ton criteria. In this subsection, the iterative decoding algorithm is presented using the steps described in Section 2.2 (see also the pseudocode in Algorithms 1 and 2).
Following the discussion in [19], it has been proven that the SWIFT decoding algorithm satisfies the criteria with probability at least 1 − O(K^(−3/4)) via the corresponding Theorem 2 in [27]. In this paper, we propose a general framework for the asymptotic analysis of the SWIFT algorithm. Using the asymptotic analysis, we can track the evolution of the density factor α, and the corresponding threshold can be determined for different (d_v, d_c).

Asymptotic Analysis of Decoding Algorithm
3.1. Analysis Model. In this paper, we only consider the regular (d_v, d_c) measurement bipartite graph; however, the results can be generalized to the irregular case.
Consider a sparse bipartite graph G* = (V* ∪ C, E*), where V* ⊂ V is the subset of variable nodes corresponding to nonzero signal elements, C is the set of M check nodes, and E* is the set of corresponding edges. Since the locations of the nonzero elements in the signal are random, the graph G* is left-regular, as shown in Figure 1. In the SWIFT decoding algorithm, the criteria employed by the peeling decoder depend on the degrees of the check nodes. To describe the asymptotic analysis, we classify the set of check nodes C into sets C_j of check nodes having degree j, 1 ≤ j ≤ d_c. Furthermore, the set T of nonzero elements of the signal x is partitioned into sets T_i of variable nodes having i edges connected to the set C_1, 0 ≤ i ≤ d_v.
Based on the above classification and the criteria of Section 2, in each iteration of the SWIFT algorithm a variable node can be decoded successfully if the following condition holds.

Theorem 3. In each iteration, a variable node v can be decoded successfully with probability almost 1 if and only if v ∈ ⋃_{i=1}^{d_v} T_i.
Sufficiency. When v ∈ T_1, it is trivial from the definition of a single-ton that the variable node v can be decoded successfully with probability almost 1. When v ∈ ⋃_{i=2}^{d_v} T_i, that is, the variable node v is connected to more than one single-ton, since the decoder can, with probability almost 1, successfully decode the corresponding variable node from each single-ton, the values obtained from these single-tons must be identical. Hence, this case reduces to the case v ∈ T_1.
Necessity. For the SWIFT algorithm, when a variable node v is decoded successfully, there must exist at least one single-ton check node connected to v; that is, v ∈ ⋃_{i=1}^{d_v} T_i.
We remodel the sparse measurement graph via the definitions of C_j and T_i together with Theorem 3, which allows us to analyze the evolution of C_j and T_i. Let p^(ℓ)_{C_j} denote the probability that a check node belongs to C_j and p^(ℓ)_{T_i} the probability that a variable node belongs to T_i, where ℓ is the iteration number. Furthermore, denote by p^(ℓ) the probability that a variable node belongs to T^(ℓ). Given p^(ℓ)_{C_j}, p^(ℓ)_{T_i}, and p^(ℓ) at iteration ℓ ≥ 1, our asymptotic analysis calculates p^(ℓ+1)_{C_j}, p^(ℓ+1)_{T_i}, and p^(ℓ+1) at iteration ℓ + 1. For a given initial density factor α, we can track the evolution of p^(ℓ) via this approach. If p^(ℓ) decreases monotonically to zero as the iteration number increases, we say the decoding algorithm recovers the original signal successfully; if p^(ℓ) remains bounded away from 0 for all iterations, we call this a decoding failure. In what follows, we provide a rigorous calculation of the above probabilities via combinatorial and probabilistic arguments.

Details of Asymptotic Analysis. Define S_T = ⋃_{i=1}^{d_v} T_i and its complement S_T^c = T_0.
From Theorem 3, a variable node belonging to S_T can be decoded successfully, while a variable node belonging to S_T^c is left intact. Thus, the probability p_v^(ℓ) that a variable node in T^(ℓ) is decoded successfully is p_v^(ℓ) = Σ_{i=1}^{d_v} p^(ℓ)_{T_i}. Hence, after the ℓ-th iteration, the probability of a variable node v remaining undecoded is p^(ℓ+1) = p^(ℓ)(1 − p_v^(ℓ)). Once a variable node is decoded successfully, as discussed for the SWIFT algorithm, the d_v edges incident to it are removed, which in turn reduces the degrees of the check nodes incident to these edges. Let q^(ℓ)_{j→k} denote the probability that the degree j of a check node c reduces to k, where k ≤ j. Based on Theorem 3, when a variable node v ∈ T_i is decoded successfully, i edges connected to the set C_1 and d_v − i edges connected to the set C_1^c are removed; for convenience, we use the law of total probability over i. For the asymptotic analysis, we assume that the signal length N → ∞ and that the designed sparse matrix is random. Hence, both the distribution of the i edges between v and the set C_1 and that of the d_v − i edges between v and the set C_1^c can be regarded as uniform. Let ε_{C_1} and ε_{C_1^c} be the probabilities that an edge is removed given that it is incident to a check node in C_1 and in C_1^c, respectively; the resulting transition probabilities q^(ℓ)_{j→k} are obtained for j = 2, …, d_c and k = 0, …, j.
Before calculating ε_{C_1} and ε_{C_1^c}, we introduce an important conditional probability. Let r^(ℓ) denote the probability that a check node incident to an edge e belongs to C_1, given that a variable node incident to the edge belongs to T^(ℓ); its diagram is shown in Figure 3. Then r^(ℓ) can be calculated by (9). From the definition of ε_{C_1}, we obtain (10); similarly, ε_{C_1^c} is obtained by (11). To sum up, we can calculate the probability p^(ℓ+1)_{C_j} of a check node belonging to C_j via (6)-(11).
Next, we discuss p^(ℓ+1)_{T_i}. When a variable node is decoded successfully and the corresponding edges are removed, the undecoded set T^(ℓ+1) must be reclassified into the sets T_k^(ℓ+1). Let t^(ℓ)_{i→k} denote the probability that a variable node v ∈ T_i^(ℓ) changes into v ∈ T_k^(ℓ+1); from Theorem 3, only i = 0 needs to be considered, since every variable node with i ≥ 1 is decoded. On the other hand, check nodes in the set C_1^c must also be reclassified into C_1. Let C_1^Δ denote the set of check nodes reclassified from C_1^c to C_1. As before, we assume that the signal length N → ∞ and that the designed sparse matrix is random, so the distribution of the edges incident to the sets C_1^c and C_1^Δ is uniform. The probability t^(ℓ)_{0→k} is given by (12), where δ^(ℓ) is the probability of an edge incident to the set C_1^c moving to the set C_1^Δ, as given by (13). Based on the law of total probability, p^(ℓ+1)_{T_k} can be expressed as (14), where 1 − p_v^(ℓ) is a normalization factor that makes the probabilities p^(ℓ+1)_{T_k} sum to one, as in (15). Thus, according to (12)-(15), the probability p^(ℓ+1)_{T_k} can be obtained. In what follows, we provide a precise summary of the above update rules.
(1) Initialization. Set the initial value p^(0) = α. In the random sparse bipartite graph, the probability that a check node belongs to the set C_j is given by the binomial distribution in (16). From Figure 3 and the probability r^(ℓ) in (9), the probability p^(0)_{T_i} is given by the binomial distribution in (17). (2) Calculating p^(ℓ+1)_{C_j}. Substituting (7)-(11) into (6), p^(ℓ+1)_{C_j} is calculated.
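The initialization step can be illustrated as follows; the Binomial(d_c, α) form for p^(0)_{C_j} is stated above, while the expression for q0 (the chance that an edge from a nonzero variable node sees a single-ton) is our simplified reading of the cycle-free assumption, not a formula from the paper.

```python
import numpy as np
from math import comb

def initial_check_degree_distribution(dc, alpha):
    """p_{C_j}^(0): probability that a check node has exactly j of its
    dc neighbours in the support, a Binomial(dc, alpha) mass function."""
    return np.array([comb(dc, j) * alpha ** j * (1 - alpha) ** (dc - j)
                     for j in range(dc + 1)])

p0_C = initial_check_degree_distribution(dc=32, alpha=0.02)
# q0: chance that an edge leaving a nonzero variable node sees a single-ton,
# i.e. none of the other dc - 1 neighbours is nonzero (cycle-free assumption)
q0 = (1 - 0.02) ** (32 - 1)
```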

Comparison with the VB Algorithm in the Asymptotic Regime.
In this subsection, we compare the analytical success thresholds of the SWIFT algorithm with those of the VB algorithm in the asymptotic regime. Table 1 lists the analytical success thresholds of the SWIFT and VB algorithms for different graphs.
As seen in Table 1, for every graph, the SWIFT algorithm performs better than the VB algorithm. Overall, the oversampling ratio r = M/K = d_v/(α^(0) d_c) improves when both d_v and d_c decrease. Furthermore, when the compression ratio is kept constant, that is, d_v/d_c is a constant, as for (3,6), (4,8), and (5,10) in Table 1, both the SWIFT and VB algorithms generally perform better as d_v decreases. Based on the relationship between the threshold and (d_v, d_c), a sparse measurement matrix with superior performance can be designed.

Simulation Results and Analysis
In this section, simulation results are provided to analyze the performance of the SWIFT decoding algorithm. We also evaluate the asymptotic analysis described in Section 3 with N → ∞. The comparison between the SWIFT decoding algorithm and the asymptotic analysis shows a good agreement between them.

Asymptotic and Finite-Length Results for SWIFT Algorithm.
Two scenarios are adopted to verify Theorem 2: (1) the nonzero elements obey a Gaussian distribution; (2) the nonzero elements take values from the discrete set {±1}. The reconstruction signal-to-noise ratio is defined as SNR(x) = 10 · log10(‖x‖²/‖x − x̂‖²), where x̂ is the reconstructed signal. We declare recovery successful if SNR(x) ≥ 100 dB; for each point, 1000 Monte Carlo trials are performed. In the asymptotic analysis, we declare recovery successful if p^(ℓ) < 10^(−8). To calculate the threshold of α, the search step is set to Δα = 10^(−4).
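The success criterion can be computed as in the following sketch (the function name is ours):

```python
import numpy as np

def reconstruction_snr(x, x_hat):
    """Reconstruction SNR in dB: 10*log10(||x||^2 / ||x - x_hat||^2)."""
    err = float(np.sum((x - x_hat) ** 2))
    if err == 0.0:
        return np.inf                    # exact recovery
    return 10.0 * np.log10(float(np.sum(x ** 2)) / err)

# the experiments declare success when SNR(x) >= 100 dB
x = np.array([1.0, 0.0, 2.0])
success = reconstruction_snr(x, x.copy()) >= 100
```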
In the first experiment, an (8, 32) regular sparse measurement matrix is designed. The length of the signal is set to N = 10000; a signal element belongs to the support set with probability α, and each nonzero signal element is drawn from the Gaussian distribution (or from the discrete set {±1}). The success recovery percentage of the SWIFT algorithm versus the initial density factor α is shown in Figure 4. It can be observed that the recovery performance decreases as the initial density factor α increases. The figure also shows that the performance for the two types of signals is almost identical. In addition, the threshold of the algorithm for the (8, 32) regular sparse measurement matrix obtained by the asymptotic analysis is portrayed by the dotted line. As shown in Figure 4, the threshold agrees well with the simulation curves.
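For completeness, a compact end-to-end sketch combining the random regular graph, the two DFT detection rows, and ratio-test peeling; everything here (names, tolerances, sign convention) is our illustrative assumption rather than the paper's exact SWIFT implementation.

```python
import numpy as np

def swift_like_recover(x, M, dv, dc, seed=0, tol=1e-6, max_iter=50):
    """End-to-end sketch: sample a random (dv, dc)-regular measurement
    graph, form 2-dimensional observations with the first two DFT rows,
    then repeatedly peel single-tons found by the ratio test."""
    N = len(x)
    assert N * dv == M * dc
    rng = np.random.default_rng(seed)
    chk = np.repeat(np.arange(M), dc)
    rng.shuffle(chk)
    var = np.repeat(np.arange(N), dv)
    nbr = [set() for _ in range(M)]      # check -> variables
    vchk = [set() for _ in range(N)]     # variable -> checks
    for v, c in zip(var, chk):
        nbr[c].add(v)
        vchk[v].add(c)
    omega = np.exp(-2j * np.pi / N)
    y = np.zeros((M, 2), dtype=complex)
    for c in range(M):
        for v in nbr[c]:
            y[c] += x[v] * np.array([1.0, omega ** v])
    x_hat = np.zeros(N)
    for _ in range(max_iter):
        peeled = False
        for c in range(M):
            a, b = y[c]
            # single-ton ratio test: equal magnitudes, phase gives the index
            if abs(a) > tol and abs(abs(b) - abs(a)) < tol:
                k = int(round(np.angle(b / a) / (-2 * np.pi / N))) % N
                if abs(b - a * omega ** k) < tol and x_hat[k] == 0:
                    x_hat[k] = a.real
                    for c2 in vchk[k]:   # subtract k's contribution everywhere
                        y[c2] -= a.real * np.array([1.0, omega ** k])
                    peeled = True
        if not peeled:
            break
    return x_hat
```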
In the second experiment, we verify that SWIFT requires only O(K) measurements and O(K) iteration steps. To examine the measurement cost and run-time, a random sparse matrix with regular degree d_v = 3 is adopted. Figure 5 illustrates the measurement cost and run-time as a function of the signal length N. As seen in the figure, the measurement and run-time costs remain almost constant, independently of N.
To investigate the level of agreement between the asymptotic analysis and the SWIFT algorithm, we provide the evolution of p^(ℓ) over the iterations ℓ for the (8, 32) regular sparse measurement matrix in Figure 6. In this experiment, two values of p^(0) are adopted: one above the threshold and one below it. As shown in Figure 6, there is also a good agreement between the simulation results and the asymptotic analysis for both cases of p^(0).
To further show the degree of agreement between the asymptotic analysis and the simulation results, we apply the SWIFT algorithm to (3, 20) regular sparse measurement matrices with different signal lengths; the comparison with the asymptotic threshold is shown in Table 2. As Table 2 shows, as N increases, the simulation results get closer to the asymptotic threshold.

Comparison of SWIFT Algorithm and VB Algorithm at Finite Length.
In this subsection, we compare the SWIFT algorithm with the VB algorithm in the noiseless setting. For the VB algorithm, two regular sparse matrices, (3, 15) and (3, 72), are designed with signal lengths N = 2500 and 12000, respectively. For a fair comparison, for the SWIFT algorithm two regular sparse matrices, (3, 30) and (3, 144), are generated with signal lengths N = 2500 and 12000, respectively. Each nonzero signal element is drawn from a standard Gaussian distribution. The success recovery percentage of the SWIFT and VB algorithms versus the sparsity K is shown in Figure 7. As shown in Figure 7, the SWIFT algorithm performs better than the VB algorithm, especially for large signal lengths, which coincides with the analysis in Section 3.3.

Conclusion
In this paper, we investigated a new class of LDPC-DFT measurement matrices and their recovery algorithm. These measurement matrices exploit the sparsity of the parity-check matrices of LDPC codes and the phases of the DFT matrix to recover the original signal. In addition, a general framework for the asymptotic analysis of this fast recovery algorithm was proposed. The analysis shows that there exists a threshold on the density factor α = K/N: below this threshold, the recovery algorithm succeeds; otherwise it fails. Moreover, the threshold depends on the parameters of the LDPC matrix, which provides a basis for designing the measurement matrices. We also observed that the SWIFT algorithm performs better than other schemes based on LDPC codes.

Figure 2: Multiton check node and its observation vector y_c.

Theorem 2. Whether the nonzero elements of the original signal obey a continuous distribution f or take values from a finite discrete set, the above criteria for zero-tons, single-tons, and multitons hold with probability one.
Proof. First, we prove the case of nonzero elements obeying the continuous distribution f. Here, we only prove the zero-ton type, which is valid for the other types with no major changes. If the nonzero elements of the original signal obey a continuous distribution, then for arbitrary i ≠ j we have Pr(x[i] = −x[j]) = 0. For the sparse measurement graph, consider y_c[1] = Σ_{i∈N(c)} x[i]; when y_c = 0, that is, Σ_{i∈N(c)} x[i] = 0, then by Pr(x[i] = −x[j]) = 0 we have Pr(x[i] = 0) = 1 for all i ∈ N(c); hence the decoder can identify the check node as a zero-ton via the definition of a zero-ton.

Figure 4: Success recovery percentage of the SWIFT algorithm versus the initial density factor α; the analysis threshold is drawn as a dotted line.

Figure 5: Measurement cost and average running time versus the signal length N.

Figure 7: Success recovery percentage of the SWIFT and VB algorithms versus the sparsity K.

Table 2: Asymptotic threshold and simulated success thresholds for the SWIFT algorithm.