A Sparsity Preestimated Adaptive Matching Pursuit Algorithm

In the matching pursuit algorithms of compressed sensing, the traditional reconstruction algorithms need to know the signal sparsity. The sparsity adaptive matching pursuit (SAMP) algorithm can adaptively approach the signal sparsity when the sparsity is unknown. However, the SAMP algorithm starts from zero and iterates many times with a fixed step size to approximate the true sparsity, which increases the runtime. To speed up reconstruction, a sparsity preestimated adaptive matching pursuit (SPAMP) algorithm is proposed in this paper. First, a preestimation strategy is used to estimate the sparsity; then the signal is reconstructed by the SAMP algorithm with the preestimated sparsity as the initial value of the iteration. The method starts the reconstruction from the preestimated sparsity, which reduces the number of iterations and greatly improves the running efficiency.


Introduction
The Nyquist sampling theorem specifies that, to avoid losing information when capturing a signal, one must sample at a rate at least twice the signal bandwidth. The traditional information collection and compression process is accompanied by a large amount of wasted data, which increases equipment cost. With the advent of the era of big data, it is difficult for hardware devices to meet the sampling requirements of some high-frequency signals. There is an urgent need for a new method to capture and represent signals at a rate significantly below the Nyquist rate. Compressed sensing (CS) [1,2], which departs from the Nyquist sampling theorem, has become a prominent signal processing theory in recent years. As a new signal processing theory, it has attracted extensive attention from researchers, has been applied in different fields, and has achieved gratifying results [3][4][5][6][7].
Most research on compressed sensing has focused on three directions: signal sparse representation, the measurement matrix, and the reconstruction algorithm. Signal sparse representation transforms a nonsparse signal into a sparse signal by some transformation, such as the discrete cosine transform, the discrete Fourier transform, or the wavelet transform. The compressed signal is obtained by multiplying the sparse signal with the measurement matrix; the dimension of the compressed signal must be smaller than that of the sparse signal. A large number of studies have shown that Gaussian matrices, Bernoulli matrices, and partial Hadamard matrices can be used as measurement matrices. The reconstruction algorithm recovers the sparse signal from the compressed signal and directly affects the quality of the reconstructed signal. In recent years, great achievements have been made in the field of reconstruction algorithms, but many reconstruction algorithms have their own limitations. Due to the expensive computing cost of convex optimization algorithms [8,9], they are difficult to realize. Greedy reconstruction algorithms mainly include orthogonal matching pursuit (OMP) [10,11], regularized OMP (ROMP) [12], compressive sampling matching pursuit (CoSaMP) [13], subspace pursuit (SP) [14], generalized OMP (gOMP) [15], stagewise OMP (StOMP) [16], and sparsity adaptive matching pursuit (SAMP) [17]. Compared with convex optimization algorithms, greedy algorithms have faster reconstruction speed. Among the greedy algorithms mentioned above, OMP, ROMP, CoSaMP, SP, and gOMP need to know the signal sparsity. If the sparsity is unknown, the signal can only be reconstructed by guessing the sparsity, and the sparsity directly affects the quality of signal reconstruction. StOMP and SAMP can reconstruct the original signal without knowing the signal sparsity.
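The measurement step described above can be illustrated with a short sketch. This is not the paper's code; the dimensions (N = 256, M = 128, K = 35) simply follow the experiments reported later, and all variable names are illustrative.

```python
import numpy as np

# Illustrative sketch: a K-sparse signal of length N is compressed to
# M < N measurements by a Gaussian measurement matrix.
rng = np.random.default_rng(0)
N, M, K = 256, 128, 35

x = np.zeros(N)                                  # K-sparse signal
support = rng.choice(N, size=K, replace=False)   # K random nonzero positions
x[support] = rng.standard_normal(K)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # Gaussian measurement matrix
y = Phi @ x                                      # compressed measurements, length M
```

Note that the compressed vector y has only M = 128 entries, half the length of x, which is the data reduction compressed sensing provides.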
The SAMP algorithm reconstructs the signal by setting a fixed step size and approximating the true signal sparsity through multiple iterations. Because it approximates the sparsity gradually from zero, it requires many iterations, extensive computation, and a long reconstruction time. Several improved SAMP algorithms were proposed in [18][19][20]. To speed up reconstruction, we developed a sparsity preestimated adaptive matching pursuit (SPAMP) algorithm. First, a preestimated sparsity close to the true sparsity is obtained. Then the SAMP algorithm is used to reconstruct the signal. The primary contributions of this paper are twofold: (1) We present a new sparsity preestimation method for estimating the sparsity quickly and accurately. The method uses the criteria of sparsity underestimation and overestimation to estimate the sparsity. (2) We develop an improved SAMP algorithm, termed SPAMP. In the first stage, the sparsity is estimated by the preestimation criteria. In the second stage, SAMP is used to reconstruct the signal. The experimental simulations show that the reconstruction performance of SPAMP is almost the same as that of the SAMP algorithm, while its running speed is obviously faster. The rest of this paper is organized as follows. In Section 2, compressed sensing theory is introduced. Section 3 gives the criteria of sparsity underestimation and overestimation. Section 4 explains the proposed algorithm. Section 5 illustrates the simulation results. Finally, we conclude the paper with a summary in Section 6.
Notations. Boldface uppercase symbols denote matrices, while boldface lowercase letters denote vectors. 〈·, ·〉 denotes the inner product of two vectors; ‖·‖_p is the ℓ_p norm of a vector; |·| is the amplitude of a complex quantity or the cardinality of a set; (·)^T is the transpose of a vector or a matrix; R denotes the field of real numbers; N(μ, σ²) denotes a Gaussian random variable with mean μ and variance σ².

Compressed Sensing
Let x be a real signal of length N, that is, x ∈ R^N. If x has only K nonzero elements, we say that x is K-sparse. A measurement vector y is computed using an M × N matrix Φ, that is,

y = Φx, (1)

where the measurement vector y ∈ R^M and the measurement matrix Φ = [ϕ_1, ϕ_2, . . . , ϕ_N] ∈ R^(M×N). The measurement matrix Φ must allow the reconstruction of the length-N signal x from the M measurements (the measurement vector y). Since M < N, the problem appears ill-conditioned. However, if it is known that the signal x has only K nonzero entries, the problem can be solved provided that M > K. The stable recovery procedure proposed in [21] shows that the measurement matrix Φ satisfies the restricted isometry property (RIP) of order k if there exists δ_k ∈ (0, 1) such that

(1 − δ_k)‖x‖_2² ≤ ‖Φx‖_2² ≤ (1 + δ_k)‖x‖_2² (2)

holds for all k-sparse x. The RIP ensures that each pair of columns of Φ is nearly orthogonal with high probability. Research shows that a measurement matrix Φ whose entries are sampled from N(0, σ²), σ² ≥ 1, is highly likely to satisfy the RIP [22,23]. Reconstructing the original signal from the measurements can be cast as the smallest ℓ_0-norm problem

min ‖x‖_0 subject to y = Φx. (3)

Unfortunately, solving (3) is an NP-hard problem, requiring an exhaustive enumeration of all C(N, K) possible locations of the nonzero entries in x. Considering the influence of noise, the optimization problem (3) further becomes

min ‖x‖_0 subject to ‖y − Φx‖_2 ≤ ε, (4)

where ε is a small constant. This approach obtains faster speed at the expense of reconstruction precision. Under the RIP condition, ℓ_0-norm optimization is equivalent to ℓ_1-norm optimization; that is, the ℓ_1 problem can be solved with standard convex optimization methods. That approach has good reconstruction precision but high time complexity.
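As a concrete example of the greedy alternative to problems (3) and (4), the following is a minimal sketch of OMP, the simplest of the greedy algorithms listed in the introduction. It assumes the sparsity K is known, which is exactly the limitation the SAMP/SPAMP line of work removes; it is a simplified illustration, not the paper's implementation.

```python
import numpy as np

def omp(Phi, y, K, tol=1e-6):
    """Minimal sketch of orthogonal matching pursuit (OMP) with known sparsity K."""
    M, N = Phi.shape
    r = y.astype(float).copy()
    support = []
    coef = np.array([])
    for _ in range(K):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ r)))
        if j not in support:
            support.append(j)
        # Re-fit all selected columns by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef
        if np.linalg.norm(r) < tol:
            break
    x_hat = np.zeros(N)
    x_hat[support] = coef
    return x_hat
```

For K well below M, OMP with a Gaussian measurement matrix typically recovers the support exactly, after which the least-squares fit makes the residual vanish.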

Criteria of Sparsity Preestimation
The method of sparsity underestimation for a specific atom set given in [24] provides a starting point. To estimate the sparsity quickly and accurately, sparsity underestimation and overestimation criteria are given in this section.
Notations. K_0 and K denote the preestimated sparsity and the true sparsity, respectively. u = Φ^T y denotes the vector of inner products between the columns of the measurement matrix Φ and the measurement vector y, and u_j (1 ≤ j ≤ N) denotes the j-th entry of u. The set Γ_0 consists of the K_0 indexes of the largest elements of |u_j|, so |Γ_0| = K_0. The true support set of the signal x is Γ, with |Γ| = K.
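The notation above can be made concrete with a few lines of code. This is only a sketch of the definitions (u = Φ^T y and the index set Γ_0); the dimensions are chosen to match the experiments later in the paper, and the variable names are illustrative.

```python
import numpy as np

# Sketch of the notation: u = Phi^T y collects the inner products of the
# columns of Phi with y, and Gamma_0 holds the indices of the K_0 largest
# magnitudes |u_j|.
rng = np.random.default_rng(0)
M, N, K0 = 128, 256, 40
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = rng.standard_normal(M)

u = Phi.T @ y                          # length-N vector of inner products
Gamma_0 = np.argsort(np.abs(u))[-K0:]  # indices of the K_0 largest |u_j|
```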

Criterion of Sparsity Underestimation
Proof. Suppose that the estimated sparsity K_0 is greater than or equal to the true sparsity K, that is, K_0 ≥ K. Based on the RIP, the singular values of Φ_Γ lie in [√(1 − δ), √(1 + δ)]. Since ‖Φ_Γ x‖_2² ≤ (1 + δ)‖x‖_2², combining (6)-(8) yields the stated bound, and the proof is completed. From the contrapositive of Proposition 1, we obtain the criterion of sparsity underestimation: if the bound is violated, then the estimated sparsity is less than the true sparsity, that is, K_0 < K. □

Criterion of Sparsity Overestimation
Proof. Suppose that the estimated sparsity K_0 is less than or equal to the true sparsity K, that is, K_0 ≤ K. Based on the definition of the RIP, the eigenvalues of Φ_Γ^T Φ_Γ lie in [1 − δ, 1 + δ]. Combining (10)-(12), we obtain the stated result; therefore the claimed bound holds, and the proof is completed.
From the contrapositive of Proposition 2, we obtain the criterion of sparsity overestimation.
In the following, the sparsity is estimated based on the criteria of sparsity underestimation and sparsity overestimation. This prevents the preestimated sparsity from being too small or too large, or even exceeding the true sparsity.

The Proposed Algorithm
In this section, the implementation of the sparsity preestimated adaptive matching pursuit (SPAMP) algorithm is given. Figure 1 shows the flowchart of the SPAMP algorithm. The implementation steps of Algorithm 1 are as follows.
The computational complexity of SAMP and SPAMP is O(k_1MN) and O(k_2MN), respectively, where M is the observation dimension, N is the length of the sparse signal, and k_1 and k_2 are the iteration counts of the SAMP and SPAMP algorithms. Due to the sparsity preestimation in the SPAMP algorithm, k_2 is less than k_1, so the computational complexity of SPAMP is lower than that of SAMP.

Simulation Experiments
To verify the effectiveness of the proposed algorithm, a series of simulation experiments is carried out. In the following experiments, a K-sparse random signal is adopted. The experimental environment is an i7-6700U CPU with a main frequency of 3.40 GHz and 16 GB of memory.

Parameter Selection Experiments.
In the first stage of the SPAMP algorithm, the sparsity is preestimated. The performance of the sparsity preestimation depends on two parameters: the weak matching parameter a and the estimation factor δ. The estimation factor δ determines the accuracy of the sparsity estimation. The weak matching parameter a is the parameter of the function K_i = K_0 a^i, which determines the speed of the sparsity estimation. Both parameters are obtained by simulation experiments.
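The schedule K_i = K_0 a^i can be sketched directly. With K_0 initialized to M (as in Algorithm 1) and a < 1, the candidate sparsity shrinks geometrically. The stopping bound K_D is an assumed name, taken from the lower bound of the preestimated sparsity in the initialization of Algorithm 1.

```python
def sparsity_schedule(M, a=0.5, K_D=1):
    """Sketch of the candidate-sparsity schedule K_i = K_0 * a^i with K_0 = M.
    The loop stops once K_i falls below the lower bound K_D (assumed name)."""
    schedule, i = [], 0
    K_i = M
    while K_i >= K_D:
        schedule.append(K_i)
        i += 1
        K_i = int(M * a ** i)
    return schedule
```

With M = 128 and a = 0.5 this halves the candidate each step, so only about log_2(M) candidate sparsities are tested, which is what makes the preestimation fast.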
A one-dimensional 35-sparse random signal x of length 256 is generated. The measurement matrix Φ is a 128 × 256 Gaussian matrix, and the measurement vector is computed as y = Φx. First, the estimation factor is fixed at δ = 0.1, and the weak matching parameter a is varied. The simulation curve of the influence of a on the sparsity estimation is shown in Figure 2.
It can be seen from Figure 2 that the simulation results are identical when the weak matching parameter a is in the range [0.3, 0.8]. This shows that the choice of a does not affect the result of the sparsity preestimation.
To verify the influence of the estimation factor on the sparsity preestimation, the weak matching parameter is fixed at a = 0.5, and the estimation factor δ is varied. The simulation curves of the influence of different estimation factors δ on the sparsity estimation are shown in Figure 3.

Figure 1: Flowchart of the proposed SPAMP algorithm.
Input: the M × N measurement matrix Φ, the M-dimensional measurement vector y, the weak matching parameter a, the estimation factor δ, and the step size S.
Output: the reconstructed signal x.
Initialization: residual r = y, initial value K_0 = M, iteration counter i = 0, lower bound K_D and upper bound K_U of the preestimated sparsity.
Stage 1: preestimate the sparsity.
Step 1: Compute the amplitudes of the inner products of the residual r with the columns of the measurement matrix Φ, that is, u = |Φ^T r| = |〈Φ, r〉|. The j-th entry of the vector u is denoted by u_j, j = 1, . . . , N.
Step 2: Initialize the preestimated sparsity by K_i = K_0 a^i.
Step 5: Compute u = |Φ^T r_t| = |〈Φ, r_t〉| and select the L indices corresponding to the largest absolute values in u to form the set S_t. Augment the candidate set Λ_t = F ∪ S_t.
Step 6: If the length of Λ_t is not less than M, return x = 0 (reconstruction fails); otherwise go to Step 7.
Step 7: Solve a least squares problem to obtain a new estimate x_t = (Φ_{Λ_t}^T Φ_{Λ_t})^{−1} Φ_{Λ_t}^T y.
Step 8: Select the L entries of x_t with the largest absolute values to form x_{tL}; the corresponding submatrix is Φ_{tL} and the corresponding index set is Λ_{tL}. Update the finalist set F = Λ_{tL}.
Step 9: Increment t and calculate the new residual r_t = y − Φ_{tL} x_{tL}.
Step 10: If ‖r_t‖_2 < 10^{−6}, go to Step 13; otherwise go to Step 11.
Step 11: If ‖r_t‖_2 ≥ ‖r_{t−1}‖_2, set L = L + S and go to Step 5; otherwise go to Step 12.
Step 12: Update Λ_t = F and go to Step 5.
Step 13: Return x based on the nonzero values of x_t at the positions in Λ_t.

ALGORITHM 1: Sparsity preestimated adaptive matching pursuit algorithm.

Journal of Electrical and Computer Engineering

As can be seen from Figure 3, different estimation factors have a great influence on the sparsity preestimation. As the estimation factor δ increases, the preestimated sparsity increases. When the estimation factor equals 0.2, the preestimated sparsity exceeds the true sparsity of the signal. An overestimated sparsity lowers the recovery quality of the second stage and increases the recovery time. To ensure that the preestimated sparsity does not exceed the true sparsity, the estimation factor is set to δ = 0.15. In the following simulation experiments, the weak matching parameter a is 0.5 and the estimation factor δ is 0.15.
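The second stage of Algorithm 1 (the SAMP-style loop of Steps 5-12) can be sketched as follows. This is a hedged, simplified illustration, not the paper's implementation: the preestimated sparsity K0 is taken as the starting candidate size, the weak selection details are omitted, and all names are illustrative.

```python
import numpy as np

def samp_stage2(Phi, y, K0, step=3, tol=1e-6, max_iter=200):
    """Simplified sketch of SAMP-style reconstruction starting from a
    preestimated sparsity K0 instead of from zero."""
    M, N = Phi.shape
    L = K0                          # current size of the finalist set
    F = np.array([], dtype=int)     # finalist index set
    r = y.astype(float).copy()
    for _ in range(max_iter):
        # Preliminary test: L columns most correlated with the residual.
        u = np.abs(Phi.T @ r)
        Sk = np.argsort(u)[-L:]
        candidate = np.union1d(F, Sk).astype(int)
        # Final test: least squares on the candidate set, keep the L largest.
        coef, *_ = np.linalg.lstsq(Phi[:, candidate], y, rcond=None)
        top = candidate[np.argsort(np.abs(coef))[-L:]]
        coef_top, *_ = np.linalg.lstsq(Phi[:, top], y, rcond=None)
        r_new = y - Phi[:, top] @ coef_top
        if np.linalg.norm(r_new) < tol:
            F, r = top, r_new
            break
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            L += step               # stage switch: enlarge the candidate size
        else:
            F, r = top, r_new       # same stage: refine the support
    x_hat = np.zeros(N)
    if F.size:
        coef_f, *_ = np.linalg.lstsq(Phi[:, F], y, rcond=None)
        x_hat[F] = coef_f
    return x_hat
```

Starting L at K0 rather than at the step size is exactly where the preestimation pays off: fewer stage switches are needed before L reaches the true sparsity.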

One-Dimensional Signal Experiments.
In this subsection, we first compare the effects of sparsity K and observation dimension M on the reconstruction probability and then compare the reconstruction time of each algorithm.
When M is fixed and K varies, the reconstruction probability is compared. The step sizes S of SAMP and SPAMP are 3, 5, and 7, respectively. The length of the random signal is 256, and the observation dimension ranges from 2 × K to 160. The measurement matrix Φ is an M × N Gaussian matrix, and the observation dimension M is 128 and 160, respectively. When the error between the reconstructed signal and the original signal is less than 10^{−6}, the signal is considered to be reconstructed successfully. The reconstruction probability is the ratio of the number of successful reconstructions to the number of experiments.
The simulation results are shown in Figures 4 and 5.
It can be seen from Figures 4 and 5 that, as the signal sparsity increases, the reconstruction probability of the traditional algorithms that require known sparsity decreases rapidly. When the sparsity exceeds 40, the traditional algorithms cannot reconstruct the signal accurately, while the SAMP and SPAMP algorithms can still reconstruct the signal with high probability. The reconstruction probability of the SPAMP algorithm is almost the same as that of the SAMP algorithm. Both have obvious advantages over the traditional algorithms, which need to know the sparsity.
When K is fixed and M varies, the reconstruction probability is compared. The step sizes S of SAMP and SPAMP are 3, 5, and 7, respectively. The measurement matrix Φ is an M × N Gaussian matrix, and the sparsity K is 30 and 40, respectively. The simulation curves are shown in Figures 6 and 7.
It can be seen from Figures 6 and 7 that, when the sparsity is fixed, the curves of SAMP and SPAMP rise faster with the increase of the observation dimension, while the curves of the traditional algorithms that need to know the sparsity rise more slowly.
This shows that the traditional algorithms place higher requirements on the sparsity of the signal, and their selection of sparse signals is more stringent: when the sparsity does not meet the requirements, the compressed signal cannot be reconstructed. The SAMP and SPAMP algorithms, by contrast, have relatively low requirements on sparsity.
When the signal length is 256 and the observation dimension is 128, the simulation curve of average runtime versus sparsity is shown in Figure 8.
We can conclude from Figure 8 that the SPAMP algorithm runs faster than the SAMP algorithm, although both take longer than the traditional algorithms, which need to know the sparsity in advance. In the SPAMP algorithm, the sparsity preestimation takes a certain amount of time, after which the SAMP algorithm is used to reconstruct the signal. The fact that SPAMP still runs faster than SAMP indicates that the preestimated sparsity effectively reduces the number of iterations of the SAMP algorithm.
For the reconstruction of one-dimensional signals, we can draw the following conclusions from the above experiments: (1) For the same step size, the reconstruction probability of the proposed SPAMP algorithm is almost the same as that of the SAMP algorithm, and its running speed is obviously faster. In the SPAMP algorithm, the sparsity is preestimated and the signal is then reconstructed by the SAMP algorithm with a fixed step size, so the reconstruction quality is not significantly improved; only the running speed is. (2) We note that the running speeds of the proposed SPAMP and SAMP algorithms are lower than those of the OMP, ROMP, SP, CoSaMP, and gOMP algorithms. The main reason is that SPAMP and SAMP reconstruct the signal gradually in a tentative way: if the step size is too large, the reconstruction probability is reduced; if it is too small, the reconstruction time increases. Meanwhile, the other algorithms know the signal sparsity in advance, so SPAMP and SAMP have no obvious advantage in reconstruction time.

Two-Dimensional Image Experiments.
To verify the effectiveness of the proposed SPAMP algorithm in two-dimensional image reconstruction, simulation experiments are carried out under different compression ratios.
A Lena image with 256 × 256 pixels is used in this experiment. Because the image is a nonsparse signal, a wavelet transform matrix is used to obtain the sparse signal. Peak signal-to-noise ratio (PSNR) and reconstruction time are used to evaluate the performance. The PSNR (dB) and reconstruction time (s) of the different algorithms are shown in Table 1. As can be seen from Table 1, as the compression ratio increases, the PSNR of each algorithm also increases. When the SPAMP and SAMP algorithms adopt the same step size and compression ratio, the PSNR of the proposed SPAMP algorithm is slightly higher than that of the SAMP algorithm. Meanwhile, we note that the SPAMP algorithm effectively reduces the reconstruction time. Experimental results show that the proposed SPAMP algorithm reduces the reconstruction time and slightly improves the reconstruction quality.
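The PSNR metric reported in Table 1 is standard and can be sketched in a few lines. The peak value of 255 assumes 8-bit grayscale images such as the Lena test image; this is a generic definition, not the paper's evaluation code.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two images; peak=255 assumes 8-bit grayscale."""
    mse = np.mean((np.asarray(original, dtype=float)
                   - np.asarray(reconstructed, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher PSNR means a smaller mean squared error between the original and reconstructed images, which is why the slightly higher PSNR of SPAMP in Table 1 corresponds to slightly better reconstruction quality.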
For the reconstruction of two-dimensional image signals, we can draw the following conclusions: (1) In PSNR, the proposed SPAMP algorithm is better than the OMP, SP, and ROMP algorithms and slightly better than the SAMP algorithm. In reconstruction time, SPAMP is obviously faster than SAMP. In the first stage of the SPAMP algorithm, the sparsity is preestimated, and in the second stage the signal is reconstructed by the SAMP algorithm. Since the step size of SPAMP is the same as that of SAMP, its reconstruction quality is limited by that of SAMP. (2) As the compression ratio increases, the PSNR and reconstruction time of all algorithms increase. Increasing the compression ratio, that is, increasing the observation dimension, inevitably improves the reconstruction quality and increases the reconstruction time.

Conclusion
An improved sparsity adaptive matching pursuit algorithm was proposed in this paper. In the first stage, the SPAMP algorithm preestimates the sparsity. In the second stage, the SAMP algorithm is adopted to reconstruct the signal.
Through the comparisons on one-dimensional signals and two-dimensional images, it can be seen that the SPAMP algorithm has almost the same reconstruction quality as the SAMP algorithm while greatly reducing the reconstruction time, and its reconstruction quality is better than those of the OMP, ROMP, and SP algorithms. Since the SPAMP algorithm can reconstruct the signal without knowing the sparsity, it is more practical than OMP, ROMP, and other algorithms.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.