An Adaptive Sparsity Estimation KSVD Dictionary Construction Method for Compressed Sensing of Transient Signal

Overcomplete dictionaries occupy an essential position in the sparse representation of signals. The dictionary construction method typically represented by K-singular value decomposition (KSVD) is widely used because of its concise and efficient features. In this paper, against the background of transient signal detection, an adaptive sparsity estimation KSVD (ASE-KSVD) dictionary training method is proposed to solve the redundant iteration problem caused by fixed sparsity in existing KSVD dictionary construction. The algorithm introduces an adaptive sparsity estimation strategy in the sparse coding stage, which adjusts the number of iterations required for dictionary training based on a pre-estimation of the sample signal characteristics. The aim is to decrease the number of solutions of the underdetermined system of equations, reduce the overfitting error under the finite sparsity condition, and improve overall training efficiency. We compare four similar algorithms under speech signal and actual shock wave sensor network data conditions, respectively. The results show that the proposed algorithm has obvious performance advantages and can be applied to real-life scenarios.


Introduction
For transient environmental field signal detection (such as explosion shock wave field testing, vibration detection, and acoustic field analysis), the deployment of distributed wireless sensor networks is the mainstream method [1,2]. Transient signals have sizeable sudden energy changes in the time domain, short rise times, and broad frequency coverage, often accompanying the release of destructive energy together with harmonics, noise, and other interference [3]. Wireless data acquisition systems require complete data sampling before feature extraction and identification. Limited by system bandwidth and energy consumption, the high-sampling-rate and high-concurrency measurement requirements can no longer be met by optimizing the network structure alone. Therefore, using the characteristics and propagation laws of the data itself to achieve source compression or downsampling is a current research hotspot [4,5]. Applying compressed sensing (CS) to wireless sensor networks is one way to achieve source downsampling. When the sparse characteristics of the target signal are known, the acquisition node only needs to transmit a small number of downsampled measurements for the receiver to accurately reconstruct the original signal [6]. CS includes three components: sparse representation, encoded measurement, and sparse reconstruction. Sparse representation is an essential part of CS [7,8], has been widely used in signal processing, and is the main research background of this paper. The common sparse representations used in CS are fixed and dynamically learned sparse bases [9]. Fixed sparse bases such as the discrete cosine transform (DCT) and discrete wavelet transform (DWT) are beneficial for extracting the signal's time-frequency domain features. However, limited by the broad spectral range of transient signals, the reconstruction error is significant at limited sparsity, and the number of measurements required for reconstruction increases dramatically with it [10].
On the other hand, a dynamically learned sparse basis is a data-driven dictionary constructed from target features, which is theoretically more conducive to the sparse representation of signals. The representative methods are MOD and KSVD and their series of derivative algorithms.
In recent years, various types of learning dictionaries combined with CS have been studied in many fields such as image classification and video denoising, with mature technology and excellent performance [11-15]. However, relatively little attention has been paid to 1D signals, and most of that work focuses on signal processing. Reference [16] demonstrated that KSVD could effectively reduce the storage space for the input signal in a speech synthesis system and perform feature classification at the same time. An underdetermined blind source separation algorithm based on an improved double sparse dictionary is proposed in [17] to reduce the computational effort of blind source separation. Reference [18] proposed a data compression algorithm with a segmented training dictionary of signal features based on the matching degree of atomic features, which significantly improves the data compression rate.
Through a summary of related work, we found that KSVD has two main problems in 1D signal processing. Firstly, it is difficult to build a universal data set because of the single characteristics of 1D signals and the apparent differences between various types. Secondly, 1D signal samples are usually long, leading to a large number of solution dimensions [10]. The target signal therefore often needs to be preprocessed in segments, which further reduces intra-sample correlation and increases the difficulty of dictionary training. In most cases, an a priori sparsity range must be supplied, and multiple iterations take longer to obtain results [19,20]. Therefore, adaptive improvement of KSVD for the target signal has become a hot research topic. Most current studies revolve around the sparse coding part of the dictionary, improving the solution process of the underdetermined system of equations to enhance training efficiency and reduce sparsity [21,22]. Introducing an adaptive estimation strategy for sample sparsity in the encoding process can effectively improve dictionary training efficiency. Reference [21] proposed a KSVD dictionary training method based on sparsity adaptive matching pursuit (SAMP-KSVD), and [23] proposed a KSVD introducing a stagewise orthogonal matching pursuit strategy (StOMP-KSVD). Both methods demonstrate the feasibility of this improved strategy. Since the CS reconstruction phase has the same mathematical model as KSVD in the sparse coding phase, many reconstruction algorithms based on adaptive sparsity strategies [22,24] have also been derived, all of which can be applied to the improvement of KSVD dictionary training methods. We summarize the advantages and disadvantages of related works and propose an ASE-KSVD algorithm that combines the estimation method of sparsity in greedy iteration [25-27].
The aim is to improve dictionary training efficiency and reduce the sparsity error while maintaining a lower sparsity. The characteristic of the ASE-KSVD algorithm is an adaptive sparsity estimation strategy introduced in the sparse encoding phase, which performs sparsity estimation on the target sample columns and adjusts the number of iterations required for dictionary training to eliminate redundant iterations during the stepwise computation. In this work, we first use the Hankel matrix to translate and segment the target signal [28] to solve the problem of high dimensionality when constructing a dictionary for a 1D signal. Secondly, the improved ASE-KSVD is applied to the CS framework and compared with dictionary construction algorithms of the same type. The remainder of this paper is organized as follows: in Section 2, the basic construction principle of KSVD and its application model in CS are described to introduce the theoretical framework. Section 3 mainly describes the improved ASE-KSVD dictionary construction algorithm based on adaptive sparsity estimation proposed in this paper and analyzes its improvement strategy and characteristics. In Section 4, we verify the feasibility of the algorithm in the CS framework using comparison experiments combining numerical simulations and measured shock wave signals. Finally, Section 5 concludes the paper with an outlook on future work.

KSVD Sparse Model.
The KSVD algorithm is a widely used dictionary learning method. Its main idea is to solve for the optimal linear representation of the original sample X ∈ R^{N×P} in a dictionary matrix D ∈ R^{N×L}. The KSVD algorithm includes two stages: a sparse coding stage and a dictionary update stage. Firstly, the dictionary D is initialized according to the training sample set X or a fixed sparse basis (such as DCT or DWT), and a sparse coding algorithm such as orthogonal matching pursuit (OMP) is used to obtain the sparse coefficients Θ under the current dictionary. Secondly, in the dictionary update phase, the KSVD algorithm updates the atoms column by column.
Taking the n-th column as an example, define d_n as the n-th column vector of the dictionary D and θ_T^n as the n-th row vector of the sparse matrix Θ; then equation (1) can be expanded as equation (2), where E_n = X − Σ_{j≠n} d_j θ_T^j is the residual of this iteration, and the optimization problem is to find the best rank-1 approximation of E_n. Applying the singular value decomposition to the residual gives E_n = UΣV^T: the first column of U is the newly generated d_n, and the first row of ΣV^T is used as the new θ_T^n. Updating the atom d_n completes a single iteration, and repeating these steps finally yields the optimal representation dictionary D. In this work, the dictionary is constructed to pursue a better linear representation of the signal, thus obtaining a lower sparsity to serve the subsequent CS downsampling and reconstruction process.
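The column-by-column update described above can be sketched as follows. This is a minimal NumPy illustration under our own naming and shape conventions, not the authors' implementation:

```python
import numpy as np

def ksvd_atom_update(X, D, Theta, n):
    """One KSVD atom update: replace atom n of D (and row n of Theta)
    with the best rank-1 approximation of the residual E_n, restricted
    to the samples that actually use atom n."""
    omega = np.flatnonzero(Theta[n, :])          # samples using atom n
    if omega.size == 0:
        return D, Theta                          # atom unused this round
    # Residual with atom n's contribution removed, on the using samples.
    E = X[:, omega] - D @ Theta[:, omega] + np.outer(D[:, n], Theta[n, omega])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, n] = U[:, 0]                            # first left singular vector
    Theta[n, omega] = s[0] * Vt[0, :]            # first row of Sigma * V^T
    return D, Theta
```

Because the update is the optimal rank-1 fit of the residual, the overall representation error cannot increase after any single atom update.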

CS Theory Model.
In the CS framework, an N-point 1D signal is mapped to the N-dimensional vector space as the target signal x ∈ R^{N×1}, which is downsampled as y = Φx = ΦΨθ = Aθ, where y ∈ R^{M×1} denotes the measurement vector, Φ ∈ R^{M×N} is the downsampled measurement matrix, and A = ΦΨ is the sensing matrix. Define M/N as the compressed measurement ratio (CR).
To ensure unique reconstruction of the original signal from the downsampled data, the original signal x needs to be sparse in the sparse basis Ψ, with x = Ψθ and ‖θ‖₀ = K. This condition is guaranteed by the construction of KSVD. In addition, the sensing matrix A should satisfy the restricted isometry property (RIP), (1 − δ_K)‖θ‖₂² ≤ ‖Aθ‖₂² ≤ (1 + δ_K)‖θ‖₂². From [29,30], it is known that the exact δ_K cannot be calculated in most cases; the degree of correlation between Φ and Ψ is instead measured by the correlation coefficient μ(Φ, Ψ) of equation (6), and the value of μ is negatively correlated with the probability of ΦΨ satisfying the RIP constraint.
Finally, under the RIP constraint, the problem of solving the underdetermined equations is transformed into the minimum l₁-norm optimization problem min ‖θ‖₁ s.t. y = Aθ shown in equation (7), which can be solved uniquely using greedy algorithms or convex optimization.
In the ideal reconstruction process, when the number of measurements M ≥ 0.28K log(N/K), the target sparse signal θ can be reconstructed without distortion [31]. The value of M is positively correlated with the value of K. Therefore, the focus of this paper is the accurate and fast optimal sparse representation of the target signal using KSVD. The lower the sparsity K, the lower the number of measurements M required for reconstruction, and the smaller the system bandwidth occupied by the final wireless network transmission.
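As a quick numeric illustration of the bound M ≥ 0.28K log(N/K), sketched under the assumption that log denotes the natural logarithm (the source does not state the base):

```python
import math

def min_measurements(N, K, c=0.28):
    """Smallest integer M satisfying M >= c * K * log(N / K).

    c = 0.28 is the constant quoted in the text; the natural logarithm
    is assumed here, since the base is not specified.
    """
    return math.ceil(c * K * math.log(N / K))

# The required M grows with K: a lower sparsity means fewer measurements
# and therefore less occupied transmission bandwidth.
```

For example, under this bound a 4096-point record with K = 100 needs at least 104 measurements, while K = 50 needs only 62.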

Improved KSVD Algorithm Based on Adaptive Sparsity Estimation
This section describes the steps and characteristics of the ASE-KSVD algorithm proposed in this paper. The main design idea is to introduce an adaptive sparsity estimation strategy in the KSVD sparse coding phase. An estimated atom set Γ₀ is obtained by a correlation matching test on the premise that the dictionary matrix satisfies the RIP constraint. The number of atoms in this set is the estimated sparsity for the sample, ‖Γ₀‖₀ = K₀. The algorithm removes manually entered sparsity parameters and avoids the redundant iterations caused by excessive sparsity across different sample features. For the problem of long signals, we use a variant of the Hankel matrix to segment and stitch the target signal to reduce the computational effort. Finally, we add a dictionary filtering process to avoid residual rebound during the iterative process, which ensures that the final output is the optimal dictionary with minimum residual.

Adaptive Sparsity Estimation Strategy.
The main idea of the adaptive sparsity estimation strategy is to perform a matching test between the sample and the current dictionary. Define g = |⟨D^T, X_l⟩|, l ∈ (1, 2, …, L), take the indexes of the first K₀ (1 ≤ K₀ ≤ N) maxima in g as the set Γ₀, ‖Γ₀‖₀ = K₀, and use correlation projection matching to determine whether the current K₀ is less than the true sparsity K of this sample. If the condition in Lemma 1 is satisfied, the current K₀ is used as the sparsity estimate K̂_l for the column; otherwise, the value of K₀ is incremented and the test recalculated.

Lemma 1. If D satisfies the RIP constraint with parameters (K, δ) and K₀ ≥ K, then ‖D_{Γ₀}^T X_l‖₂ ≥ ((1 − δ)/√(1 + δ)) ‖X_l‖₂.
Proof. Take the first K₀ maximal indexes in |g_i| (1 ≤ i ≤ N) to obtain the set Γ₀. When K₀ ≥ K, the atoms indexed by Γ₀ capture at least as much of the sample's energy as those of the true support Γ, so ‖D_{Γ₀}^T X_l‖₂ ≥ ‖D_Γ^T X_l‖₂. According to the definition of the RIP, the singular values of D_Γ lie in the interval [√(1 − δ), √(1 + δ)]. Combining this with X_l = D_Γ θ_Γ, it can be known that ‖D_Γ^T X_l‖₂ ≥ (1 − δ)‖θ_Γ‖₂ and ‖θ_Γ‖₂ ≥ ‖X_l‖₂/√(1 + δ). Substituting into equation (9), we obtain ‖D_{Γ₀}^T X_l‖₂ ≥ ((1 − δ)/√(1 + δ))‖X_l‖₂. By the contrapositive of this conclusion, when ‖D_{Γ₀}^T X_l‖₂ < ((1 − δ)/√(1 + δ))‖X_l‖₂, then K₀ < K. From this, the initial estimation method for K is obtained: K₀ is incremented until the inequality test passes. The effectiveness of this sparsity estimation strategy is verified in the following experiments, and the results match the theoretical expectations. By reasonably setting the value of δ, the sparsity of unknown samples can be effectively estimated.
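The estimation loop above can be sketched as follows. This is our own Python illustration, with the threshold form taken from the contrapositive in the proof; the function name and defaults are assumptions:

```python
import numpy as np

def estimate_sparsity(D, x, delta=0.05, K_max=None):
    """Adaptive sparsity pre-estimation sketch: starting from K0 = 1,
    test whether the energy captured by the K0 strongest atom
    correlations passes the RIP-based threshold; while it does not,
    K0 is still below the true sparsity, so increment and retry."""
    g = np.abs(D.T @ x)                         # atom/sample correlations
    order = np.argsort(g)[::-1]                 # strongest atoms first
    thresh = (1 - delta) / np.sqrt(1 + delta) * np.linalg.norm(x)
    K_max = K_max or min(D.shape)
    for K0 in range(1, K_max + 1):
        proj = np.linalg.norm(D[:, order[:K0]].T @ x)
        if proj >= thresh:
            return K0
    return K_max
```

With an orthonormal dictionary and an exactly K-sparse sample, the estimate lands at or just below K, matching the paper's observation that δ controls a slight over- or under-shoot.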
The ASE-KSVD algorithm is divided into four stages: parameter initialization, sample sparsity estimation, sparse coding, and dictionary update. Before the algorithm starts, the input variables include the sample set X_{N×L}, the initial dictionary D_{N×ds}, the sparsity estimation parameter δ, and the maximum number of iterations max_iter; each iteration parameter is then initialized.
Take the i-th iteration as an example. First, calculate the projection correlation |⟨D^T, X_l⟩| of X_l on D_i column by column and select the top K₀ most relevant columns to form the index set Γ₀. The matching test is performed by evaluating the inequality of equation (10) with stepwise increments of K₀ to obtain the sparsity estimate of X_l on D_i.
When the sparsity estimate K̂_l of the current column sample is obtained, it is passed as a parameter to OMP(D_i, X_l, K̂_l) to calculate the sparse coding estimate θ̂_l of that column sample X_l on the current D_i. This cycle continues until all L column sparse coding estimates are completed.
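The per-column coding step OMP(D_i, X_l, K̂_l) can be sketched as a textbook orthogonal matching pursuit (our illustration, not the authors' code):

```python
import numpy as np

def omp(D, x, K):
    """Orthogonal matching pursuit: a K-sparse code of x in dictionary D.
    Assumes K distinct atoms get selected (adequate for this sketch)."""
    residual = x.copy()
    support = []
    theta = np.zeros(D.shape[1])
    for _ in range(K):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected coefficients jointly by least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    theta[support] = coef
    return theta
```

The joint least-squares re-fit at every step is what distinguishes OMP from plain matching pursuit and keeps the residual orthogonal to all selected atoms.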
Then, in the dictionary update stage, the same column-by-column update as in KSVD is used. Take the n-th column in D_i as an example, n ∈ (1, 2, …, ds): remove this update column's contribution from the representation of the sample set and calculate the residual matrix E_n in equation (11). Because only nonzero values have an impact on the dictionary update process, take the nonzero-value position indexes of θ̂_T^n, add them to the set ω_n, add the corresponding row-vector entries to θ′_T^n, and then restrict E_n to the corresponding nonzero columns to obtain E′_n.
Next, perform the singular value decomposition E′_n = UΣV^T, take the first column U₁ of U as the n-th column of the dictionary D_i, and take Σ(1,1)V₁^T as the updated θ′_T^n, written back into θ̂ at the corresponding positions, so as to complete the current dictionary column update; cycle until all ds columns of the dictionary are updated.
Finally, calculate the dictionary residual E(i) and determine whether it is the minimum residual so far; when the number of iterations i reaches the maximum number of iterations max_iter, end the iteration. The optimal dictionary D_Min over all max_iter iterations is taken as the final output dictionary. The algorithm steps are summarized in Algorithm 1.
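The residual-tracking "dictionary filtering" just described can be sketched as an outer loop that remembers the best dictionary seen so far, guarding against residual rebound. This is our own illustration; `train_step` stands in for one full sparse-coding-plus-update pass and is an assumed interface:

```python
import numpy as np

def best_dictionary(train_step, D0, X, max_iter=50):
    """Run up to max_iter training passes, keeping the dictionary with
    the minimum residual ||X - D Theta||_F seen over all iterations.
    `train_step(D, X)` returns an updated (D, Theta) pair."""
    D, best_D, best_err = D0, D0, np.inf
    for _ in range(max_iter):
        D, Theta = train_step(D, X)
        err = np.linalg.norm(X - D @ Theta)
        if err < best_err:
            best_err, best_D = err, D.copy()
    return best_D, best_err
```

Even if later iterations regress, the returned dictionary is the one with the smallest residual encountered.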

Simulation Results and Analysis
We conducted simulation experiments to verify the proposed algorithm's performance. In Subsection 4.1, to ensure the generality of the data, a Gaussian sparse signal is used as simulation input to test the sparsity-adaptive strategy of the proposed algorithm and determine the optimal parameters. Sections 4.2 and 4.3 perform algorithm performance tests under two data conditions: a shock wave sensor network and speech signals. We set up classification-KSVD (CC-KSVD) [18], progressive decrease of residual-KSVD (PDR-KSVD) [32], SAMP-KSVD [21], StOMP-KSVD [23], and DCT as sparse basis generation algorithms for comparison. The experimental analysis is carried out in terms of the sparsity characteristics of the dictionary constructed by the improved algorithm and the CS reconstruction characteristics, respectively. The CC-KSVD algorithm builds on the KSVD algorithm by classifying the sample training set according to signal characteristics, training a learning dictionary that fits each class of samples separately, and cascading the various dictionaries into a redundant dictionary covering all signal characteristics. This method reduces the dictionary size, shortens dictionary training time, and is suitable for signals with noticeable feature differences; however, it requires a known sparsity, and the reconstruction quality is closely tied to the target signal classification method.
The PDR-KSVD algorithm adds a judgment condition for each class of samples when training the dictionary and bounds the number of training iterations according to the rate of change of the residuals during the dictionary update, thereby solving the problem of unknown target sparsity.
The SAMP-KSVD and StOMP-KSVD algorithms are derived and improved from CS reconstruction algorithms. Both improve the method for solving underdetermined systems of equations during KSVD training, and neither requires a known sparsity. The SAMP-KSVD algorithm uses a stepwise sparsity matching strategy, and StOMP-KSVD works with a threshold parameter to achieve a pre-estimation of sparsity.
All experimental simulations in this paper were done in MATLAB R2016b on a Windows 10 system (Intel(R) Core(TM) i5-5200U CPU @ 2.20 GHz, 4.0 GB RAM).

Analysis of Sparsity Adaptive Strategy.
In the performance analysis of the sparsity estimation strategy, we use Gaussian sparse signals to ensure the generality of the parameters across signals of the same length. Theoretically, any target signal can be expressed as a superposition of Gaussian sparse signals of equal length. The experiments compare the accuracy and stability of target sparsity estimation under different values of δ. Firstly, considering the case where the signal segmentation length is 128 points, a Gaussian sparse signal of the same length is set in this part of the experiment. Sparsity K is incremented from 1 to 25 in steps of 1, and the accuracy of the sparsity estimation is observed. Secondly, the sparsity is fixed at K = 10, and the data are generated randomly in groups of 20 to test the stability of the sparsity estimation. Each data set was averaged over 500 Monte Carlo experiments. The results in Figure 1(a) show that as the sparsity of the target signal increases, the estimates first grow linearly and then reach a steady state. The prerequisite of the weak matching strategy in sparsity estimation is that the RIP condition is satisfied [26,33]. In most cases, we require M ≈ (3-4)K. Therefore, with a fixed M, the smaller K/M is, the better the accuracy and linearity of the estimation. When the matching condition reaches the estimation upper limit, the segment length M can be adjusted to compensate; the two parameters show a positive correlation. Combining the results in Figures 1(a) and 1(b), when the input parameter δ is 0.03, 0.05, or 0.07, the adaptively estimated sparsity is closest to the signal's true sparsity.
However, when solving the underdetermined equation, the size of the estimated K affects the screening range of the support set in subsequent iterations. If the estimate is smaller than the actual value, the underestimation may enlarge the error. If K is larger than the actual value, an irrelevant small value may be computed for an originally zero point; the error cost of such a small spurious value is much less than that of a missed estimate. When δ = 0.05, the sparsity estimate is slightly larger than the actual value and has the best linearity over the estimated range.

Dictionary Performance Analysis under Shock Wave Sensor Network Data.

In this part of the simulation, we analyze the characteristics of the proposed ASE-KSVD algorithm in the CS framework in terms of the algorithm's efficiency, the sparsity effect on the test set signals, and the measurement matrix matching degree.

Experimental Conditions Setting.
This experiment uses a shock wave field test data set from Endevco 85XX series 15 psi range sensors to evaluate the performance of the improved algorithm's dictionary. We intercept 4096 sampling points of the target signal at a sampling rate of 2 MSa/s, totaling 120 records for analysis, of which 100 are used as the sample set and 20 as the test set.
Equation (15) shows that an improved Hankel matrix [20] is used to preprocess the signal by segmentation, dividing x_{4096×1} into X_{128×32}. The total size of the sample set X is 128 × 3200, which meets the training-sample requirement for the redundant dictionary.
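The segmentation step can be sketched as follows (our illustration; the paper's improved Hankel matrix of equation (15) may differ, e.g. by using overlapping shifts, which the `stride` parameter approximates):

```python
import numpy as np

def segment(x, n_rows=128, stride=128):
    """Cut a 1-D signal into n_rows-point columns taken every `stride`
    samples. stride == n_rows gives the non-overlapping 128x32 split of
    a 4096-point record; stride < n_rows gives Hankel-style overlap."""
    cols = [x[i:i + n_rows] for i in range(0, len(x) - n_rows + 1, stride)]
    return np.stack(cols, axis=1)
```

Stacking 100 such 128×32 blocks column-wise yields the 128×3200 sample set used below.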

Algorithm Efficiency Analysis.
In this part of the experiment, we verify that the sparsity-adaptive estimation strategy improves the efficiency of the dictionary sparse coding stage. The KSVD fixed sparsity is set to K = 10, both dictionary coding strategies use the OMP algorithm, the training sample set X_{128×3200} is the same, the maximum number of iterations is max_iter = 2000, and a total of 200 Monte Carlo experiments are averaged. In terms of dictionary training speed, the sample set is incremented from X_{128×400} to X_{128×3200} in steps of 100 columns to compare the dictionary training time of the KSVD and ASE-KSVD algorithms. The average number of iterations for each column over the whole iterative process is shown in Figure 2(a). Sparsity estimation is performed before iteration to determine the approximate sparsity of each column of samples and match the differences between sample sets. Compared with the fixed sparsity of KSVD, ASE-KSVD can effectively reduce the overall number of iterations without affecting the sparse coding process. Combined with the analysis of mean training time versus training set size in Figure 2(b), this also indirectly proves that eliminating redundant iterations can significantly improve dictionary training efficiency.

Analysis of Measurement Matrix Matching Degree.
In this section, the experiments focus on applying the sparse dictionaries constructed by ASE-KSVD in the CS framework. The correlation coefficient μ(Φ, Ψ) is used to measure the matching degree of the measurement matrix, and several commonly used types of measurement matrices are experimentally screened to determine the optimal combination for the current parameter characteristics. The dictionary constructed by the ASE-KSVD algorithm is used as the fixed sparse matrix for this experiment, and both the reconstruction error and the intercorrelation of the sensing matrix are taken into account. The compared downsampling measurement matrices are the Gaussian random matrix, Bernoulli matrix, sparse random matrix, Toeplitz matrix, partial Hadamard matrix, and circulant matrix. The number of measurements M is increased from 320 to 1920 in steps of 160. The OMP algorithm is used as the reconstruction algorithm to compare the reconstruction effect of different measurement matrices on the same signal. The correlation coefficient between the ASE-KSVD dictionary and each measurement matrix is calculated, and each set of data is averaged over 200 Monte Carlo experiments. As shown in Figure 3(a), each random matrix combined with the ASE-KSVD learning dictionary can reconstruct the original signal when the number of measurements is appropriate. The correlation coefficient between the sparse basis Ψ and the measurement matrix Φ reflects the final reconstruction performance. In the experimental comparison of Figure 3(b), the Gaussian random matrix has the lowest correlation coefficient and works best in the reconstruction comparison.
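One common form of the correlation coefficient μ(Φ, Ψ) is sketched below; the exact definition in the paper's equation (6) may differ by normalization, so this is an assumption:

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """sqrt(N) times the largest |<phi_i, psi_j>| between unit-normalized
    rows of Phi and columns of Psi. Values lie in [1, sqrt(N)];
    lower values make the RIP condition more likely to hold."""
    R = Phi / np.linalg.norm(Phi, axis=1, keepdims=True)   # normalize rows
    C = Psi / np.linalg.norm(Psi, axis=0, keepdims=True)   # normalize columns
    return np.sqrt(Phi.shape[1]) * np.max(np.abs(R @ C))
```

A Gaussian random Φ is, with high probability, weakly correlated with any fixed basis, which is consistent with its best showing in Figure 3(b).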

Sparse Performance Analysis of Shock Wave Signals.
In this section of experiments, the sparse performance of the dictionary trained by the ASE-KSVD algorithm is verified. Take the 20 sets of shock wave signals in the test set as the target signals, perform the sparse transformation on them, and define NMAE < 0.1 as an acceptable error. After the signals are sparsified, they are spliced and restored to the original length. After the coefficients are arranged in descending order, the inverse-transform error of each sparse basis at different sparsity levels is intercepted and compared among the DCT, KSVD, CC-KSVD, PDR-KSVD, SAMP-KSVD, and StOMP-KSVD algorithms. Each group of signals undergoes 100 repeated experiments, and the average value is calculated. The results in Table 1 show that on the test set signals, ASE-KSVD reduces the average sparsity by 42.64%, 23.91%, 8.58%, 15.26%, 4.74%, and 26.89% compared with DCT, KSVD, CC-KSVD, PDR-KSVD, SAMP-KSVD, and StOMP-KSVD, respectively, achieving the best effect among the compared algorithms. The results in Figure 4 show that the sparse error share of single-point sparsity decreases as the sparsity value increases. Compared with the comparison algorithms and the fixed sparse basis, the sparse coefficient energy in ASE-KSVD is more concentrated. Under the same normalized sparse error, it maintains a lower sparsity, which is more conducive to retaining the main information in the signal.
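The acceptance criterion can be made concrete as below; the normalization by the reference signal's peak-to-peak range is our assumption, since the paper does not spell out the denominator:

```python
import numpy as np

def nmae(x, x_hat):
    """Normalized mean absolute error: mean |x - x_hat| divided by the
    peak-to-peak range of the reference signal (one common convention)."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(np.mean(np.abs(x - x_hat)) / np.ptp(x))
```

A reconstruction is then accepted when `nmae(x, x_hat) < 0.1`.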

Performance Analysis of CS Framework Reconstruction.
Under the same test conditions, the DCT, KSVD, PDR-KSVD, CC-KSVD, SAMP-KSVD, and StOMP-KSVD algorithms were used to generate different sparse bases Ψ for comparison and validation. The measurement matrix is set to a Gaussian random matrix, the number of measurements M is increased from 320 to 1920 in steps of 160, and the downsampled data are reconstructed separately using the OMP algorithm.
As shown in Figure 5, with an increasing number of measurements, the reconstruction error of each algorithm shows a downward trend, and the time required for reconstruction shows an upward trend. Among them, ASE-KSVD works best for the sparse representation of the signal with the lowest sparsity; under the same error conditions, its reconstruction time and number of measurements are also the lowest. Compared with KSVD, CC-KSVD, PDR-KSVD, SAMP-KSVD, and StOMP-KSVD, the number of measurements M was reduced by 48.18%, 15.56%, 20.83%, 16.79%, and 25.97%, respectively, for the same degree of error (NMAE ≤ 0.1). Figure 6 shows the reconstruction results of four sets of signals randomly selected from the test set.

Dictionary Performance Analysis under Speech Signal Data.

The same dictionary parameter settings are used in this section to conduct experiments on the performance of the ASE-KSVD algorithm under speech signal data conditions. The purpose is to further verify the algorithm's generality.

Experimental Condition Setting.

All data are from Valentini-Botinhao et al.'s open speech signal data set [35], sampled at 48 kHz. We randomly selected 120 signals from a clean speech library consisting of 28 speakers and intercepted 4096 points containing valid information as the sample set signals. The data segmentation, test set assignment method, and parameter settings are consistent with the previous section.

Sparse Performance Analysis of Speech Signals.
The results in Table 2 show that on the speech test signals, ASE-KSVD reduces the average sparsity by 31.02%, 15.65%, 12.02%, 13.2%, 6.19%, and 20.91% compared with DCT, KSVD, CC-KSVD, PDR-KSVD, SAMP-KSVD, and StOMP-KSVD, respectively, achieving the best effect among the compared algorithms. Under the same degree of normalized mean absolute error (NMAE ≤ 0.1), the number of measurements M of the ASE-KSVD algorithm was reduced by 48.75%, 28.18%, 49.44%, 55.16%, and 61.73%, respectively. The performance of the ASE-KSVD algorithm is optimal among them. Figure 8 shows the reconstruction results of two randomly selected speech signals in the test set under 0.25× downsampling (M = 1024). These experimental results show that the ASE-KSVD algorithm proposed in this paper can process actual sensor network test data with excellent performance.

Conclusions
In this work, we propose a novel ASE-KSVD dictionary training method. The method adaptively adjusts the number of iterations required for dictionary training by introducing an adaptive sparsity estimation strategy in the sparse encoding phase of dictionary training. By analyzing the algorithm's characteristics and the comparative CS reconstruction performance under two different data set conditions, it is confirmed that the ASE-KSVD algorithm can effectively handle sparse signal representation and CS reconstruction problems. The algorithm significantly reduces dictionary training time and sparse error compared with fixed sparse basis algorithms, and its reconstruction performance in the CS framework is better than that of similar algorithms. The upper limit of the sparsity estimation strategy in the current ASE-KSVD algorithm is determined by the parameter δ. Under noise interference, the sparsity of the target signal is large; in this case, the sparsity estimate will be smaller than the actual value, thus increasing the reconstruction error. In subsequent research, we will further explore the matching relationship between the algorithm parameters and the signal segment length to improve the algorithm's generality and robustness.

Data Availability
Example code for the algorithms in this article can be obtained by contacting the corresponding author via email at hantl@cust.edu.cn. However, the raw data required to reproduce these findings cannot be shared at this time, as the data also form part of an ongoing study.

Conflicts of Interest
The authors declare that they have no conflicts of interest.