Signal Reconstruction Based on Probabilistic Dictionary Learning Combined with Group Sparse Representation Clustering

In order to make full use of nonlocal and local similarity and to improve the efficiency and adaptability of the NPB-DL algorithm, this paper proposes a signal reconstruction algorithm based on dictionary learning combined with structure-similarity clustering. A nonparametric Bayesian Dirichlet process is first introduced into the prior probability modeling of the clustering labels, and a Dirichlet prior distribution is then applied to the prior probability of the cluster labels so as to ensure the analyticity and conjugacy of the probability model. Experimental results show that the proposed algorithm is not only superior to the comparison algorithms in numerical evaluation indicators but also closer to the original image in terms of visual effects.


Introduction
Dictionary learning is an important model of image and communication signal processing [1]. It is a powerful tool for signal and image data classification, compression, denoising, repair, and even superresolution [2]. With the improvement of object detection technology, dictionary learning has become more and more important in the field of perception, image remote sensing, and meteorological and military reconnaissance [3].
In recent years, nonparametric Bayesian methods have attracted much attention in dictionary learning. Compared with traditional synthesis dictionary learning algorithms represented by the method of optimal directions (MOD) [4] and K-singular value decomposition (K-SVD) [1,4,5], nonparametric Bayesian dictionary learning algorithms have shown significant advantages.
This superiority is mainly reflected in three aspects: (1) the number of dictionary atoms, the signal sparsity, and the regularization parameter values can be automatically inferred during the dictionary learning process, so there is no need to preset initial values for these parameters, and improper initialization is avoided.
(2) There is a rigorous theory proving the convergence of the dictionary algorithm and the optimality of the solution, whereas K-SVD and MOD still lack such theoretical support.
(3) Dictionary learning is based on a probabilistic graphical model, which has a clear and open structure and is suitable for introducing various regularized probabilistic prior constraints [6].
In the field of signal reconstruction, many excellent algorithms have been proposed by scholars at home and abroad. Zhou et al. introduced nonparametric Bayesian image denoising and compressive reconstruction and proposed the beta process factor analysis (BPFA) algorithm [7], a successful attempt in the field of dictionary learning that has achieved good application results. However, the BPFA algorithm does not take into account the similarity and variability of the global structure of the image, so there is still room to improve the reconstruction characteristics of the learned dictionary. Subsequent improved algorithms mainly focus on improving the structural characteristics of nonparametric Bayesian dictionary learning, for example, Dirichlet process-beta process factor analysis and the dependent hierarchical beta process (DHBP) [8]. These algorithms introduce spatial correlation into nonparametric Bayesian modeling. The sparse representation of images under the learned dictionary can then reflect a certain similarity in adjacent space, which significantly improves the application performance of the dictionary in denoising, compressed sensing, etc., but the computational complexity is quite high and the running time is too long [9].
Nevertheless, existing NPB-DL algorithms generally model the data samples in the training set as independently distributed, and the data structure information contained in the correlation between samples is ignored [10], which restricts further improvement of the data fitting ability of the NPB-DL dictionary. With its excellent clustering/classification capability, NPB is naturally suitable for mining structural similarity in the sample data. The DP-BPFA and PSBP-BPFA algorithms are typical representatives of this line of work. The DP-BPFA algorithm performs spatial clustering of the data through the Dirichlet process, thus mining the nonlocal similarity of data structures. However, it ignores the local similarity of data structures and cannot guarantee the spatial smoothness of the clustering results.
The PSBP-BPFA algorithm introduces the spatial location information of the data into a Gaussian kernel function, builds a multiprobit regression model, and applies it to the stick-breaking construction of the prior probability of the clustering labels in the form of a cumulative probability density function. A local similarity constraint on the structure is therefore introduced into the spatial clustering, and the spatial smoothness of the clustering is taken into account to a certain extent. However, the probability model of the PSBP-BPFA algorithm has neither conjugacy nor an analytical closed form [11], so the variational Bayesian (VB) method cannot be used for efficient deterministic inference, while the traditional sampling strategy for inferring the regression coefficients incurs a large computational cost. In addition, the algorithm lacks a selection mechanism for the kernel width; adding another layer of probability modeling for the width would further increase the computational cost.
Since structural similarity and variability have an important influence on sparse signal representation, literature [12] proposed a multidictionary learning algorithm based on the Dirichlet process for multisource data, which exploits structural similarity to solve the sparseness problem well. However, multidictionary learning is suitable for situations where the data do not come from a single source; for clustering the internal structure of single-source data, its applicability and robustness are inadequate. Under a traditional structured dictionary, the sparse representation of the image exhibits blocking artifacts, which hinders the dictionary's ability to express the structural features of signals. Since the support sets of the sparse representations of similar image patches are usually similar, this prior is conducive to joint sparse reconstruction under the multiple-observation-vector model, thus reducing the number of observations and improving reconstruction performance. It can be seen from the above analysis that when dictionary learning is based on similarity clustering of image patches, introducing the block structure characteristics into the sparse representation as a regularization constraint of dictionary learning improves the reconstruction ability. Therefore, nonparametric Bayesian dictionary learning keeps its advantage over traditional structured dictionaries because of its adaptability to image structure [13].
In order to make full use of nonlocal and local similarity and to improve the efficiency and adaptability of the NPB-DL algorithm, this paper proposes a signal reconstruction algorithm based on dictionary learning combined with structure-similarity clustering. A nonparametric Bayesian Dirichlet process is first introduced into the prior probability modeling of the clustering labels; a Dirichlet prior distribution is then applied to the prior probability of the cluster labels, and an MRF is used to parameterize it so as to ensure the analyticity and conjugacy of the probability model. Experimental results show that the proposed algorithm is not only superior to the comparison algorithms in numerical evaluation indicators but also closer to the original image in terms of visual effects.

Nonparametric Bayesian for Dirichlet Process
The Bayesian method plays an important role in probability and statistics for data analysis, and it is also a reliable method for solving uncertain problems. It is usually used to model a certain distribution of the observed data and to obtain prior information on the sample parameters from the actual sample dataset. However, such prior assumptions cannot fully describe the characteristics of unknown samples. Therefore, the nonparametric Bayesian modeling method uses a parameter model with an infinite-dimensional parameter space to represent the prior knowledge [7, 13-15].
In order to draw inferences from the observed data, it is necessary to model the data generation mechanism. Usually, due to the lack of clear knowledge of the data generation mechanism, only very general assumptions can be made, leaving a large part of the mechanism unspecified. In other words, the distribution of the data is not determined by a finite number of parameters.
This kind of nonparametric model can prevent serious misspecification of the data generation mechanism, since fine-grained knowledge is spread over an infinite space and is not available for nonparametric problems. The commonly used nonparametric Bayesian modeling methods include the Dirichlet process, the beta process, and their hybrid models. This paper is mainly based on the nonparametric Bayesian model of the Dirichlet process to improve the representation ability of dictionary learning and enhance the accuracy of signal reconstruction [16,17].
The Dirichlet process can be regarded as the normalization of a completely random measure, and a unified framework is thereby established. This framework is useful for understanding the behavior of commonly used priors and for developing new models. In fact, even though completely random measures are complex probabilistic tools, their use in nonparametric Bayesian inference can lead to intuitive posterior structures. The DP can be defined as follows: given a measurable space (Θ, B), assume that G_0 is a probability measure on (Θ, B). A draw G from the Dirichlet process is a random probability measure on (Θ, B). Let (Ψ_1, . . . , Ψ_r) be any finite partition of Θ.
Then the random vector (G(Ψ_1), . . . , G(Ψ_r)) is distributed as

(G(Ψ_1), . . . , G(Ψ_r)) ∼ Dir(α_0 G_0(Ψ_1), . . . , α_0 G_0(Ψ_r)),        (1)

where α_0 is a positive weight coefficient. The concentration parameter α_0 controls the degree of dispersion of the distribution: the sampled atoms θ_i become more dispersed as α_0 increases and more concentrated as α_0 decreases. When α_0 tends to infinity, G approaches G_0; the positions of the sampled atoms are almost all distinct, and the weight of each position is close to 0. When α_0 approaches 0, G becomes very discrete; essentially only one atom appears in a draw from G, and its weight is close to 1. As shown in Figure 1, the atoms of the G distribution spread out more and more as the value of α_0 increases. The discreteness of G is also referred to as its concentration.
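To illustrate how the concentration parameter controls dispersion, the following sketch draws the mixing weights of a truncated Dirichlet process via the standard stick-breaking construction; the truncation level and random seed are illustrative choices, not values from this paper.

```python
import numpy as np

def stick_breaking(alpha0, truncation, rng):
    """Draw mixture weights from a truncated stick-breaking construction.

    Samples v_l ~ Beta(1, alpha0) and sets beta_l = v_l * prod_{j<l}(1 - v_j),
    the standard GEM(alpha0) weights of a Dirichlet process.
    """
    v = rng.beta(1.0, alpha0, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    weights = v * remaining
    return weights / weights.sum()  # renormalize the truncated weights

rng = np.random.default_rng(0)
# Small alpha0 tends to concentrate mass on a few atoms;
# large alpha0 tends to spread mass over many atoms.
w_small = stick_breaking(0.5, 50, rng)
w_large = stick_breaking(50.0, 50, rng)
```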

Probabilistic Modeling Based on Clustering Structure Similarity
All of the M × M pixel patches in the image are processed by column stacking to form a training set. If the unknown number of dictionary atoms d_k is truncated at order K, where K is a sufficiently large positive integer (usually 256 or 512), then the dictionary can be written D ∈ R^{P×K}. In other words, any data sample x_i can be expressed as [17]

x_i = D(z_i ⊙ s_i) + ε_i,        (2)

where s_i ∈ R^{K×1} is the weight vector, the binary vector z_i ∈ R^{K×1} is the sparse-pattern indicator vector, ⊙ denotes the element-wise product, and ε_i ∈ R^{P×1} is the residual. The sparse-pattern indicator vector z_i obeys a Bernoulli distribution parameterized by the atom-usage probabilities π_i ∈ R^{K×1}:

z_ik ∼ Bernoulli(π_ik),        (3)

where z_ik and π_ik are the k-th elements of z_i and π_i, respectively.
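The generative model above, a data sample formed as the dictionary times the element-wise product of a Bernoulli sparsity pattern and a weight vector, plus a residual, can be sketched as follows; all sizes, priors, and seeds here are illustrative assumptions, not the paper's inferred values.

```python
import numpy as np

def sample_patch(D, pi, noise_std, rng):
    """Generate one data sample x = D (z * s) + eps.

    D:   P x K dictionary; pi: K-vector of atom-usage probabilities;
    z:   binary sparsity-pattern indicator; s: weight vector; eps: residual.
    """
    K = D.shape[1]
    z = rng.random(K) < pi                   # z_k ~ Bernoulli(pi_k)
    s = rng.normal(0.0, 1.0, size=K)         # illustrative Gaussian weights
    eps = rng.normal(0.0, noise_std, size=D.shape[0])
    return D @ (z * s) + eps

rng = np.random.default_rng(1)
P, K = 81, 256                               # 9x9 patches, 256 atoms
D = rng.normal(size=(P, K)) / np.sqrt(P)     # random roughly unit-norm atoms
pi = np.full(K, 0.05)                        # sparse usage: ~13 active atoms
x = sample_patch(D, pi, 0.01, rng)
print(x.shape)  # (81,)
```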
Spatial clustering of all data samples in the training set through the NPB probability model makes it possible to exploit nonlocal similarity structures.
The initial number of clusters is truncated at a sufficiently large positive integer L, and samples belonging to the same cluster share the same atom-usage probabilities. This process can be expressed as a Dirichlet process:

G_i ∼ DP(α_0, G_0),        (4)

where G_i is the prior measure of the cluster of x_i, G_0 is the base measure, and α_0 is the scale parameter. According to the discreteness of the Dirichlet process, equation (4) can be further expressed as a discrete weighted combination of point measures:

G_i = Σ_{l=1}^{L} β_il δ_{π*_l}.        (5)

The atom-usage probability π*_l corresponds to a point measure and is obtained by sampling from the base measure G_0. The mixing weight β_il is a function of the scale parameter α_0 and represents the probability that sample x_i belongs to cluster l, in which case its atom-usage probability is π_i = π*_l. A beta distribution is used as the concrete form of the point measure,

π*_lk ∼ Beta(a_0, b_0),        (6)

and a cluster label c_i is assigned to each sample x_i, assumed to obey the multinomial distribution

c_i ∼ Mult(β_i1, β_i2, . . . , β_iL),        (7)

whose parameters β_i1, β_i2, . . . , β_iL are the mixing weights of the model and follow the Dirichlet distribution

(β_i1, β_i2, . . . , β_iL) ∼ Dir(λ_i1, λ_i2, . . . , λ_iL),        (8)

where the parameters λ_i1, λ_i2, . . . , λ_iL are all functions of α_0. On the basis of the Dirichlet mixture model of equations (4)-(8), the atom-usage probability corresponding to x_i can be expressed as

π_i = π*_{c_i}.        (9)

It can be seen that no matter how large the number of samples in the training set is, the number of distinct values of π_i is always limited. In fact, with adaptive inference of the final cluster number, the number of π*_l to be learned is significantly less than L.
The above analysis uses clustering to exploit the nonlocal similarity of the data structure.
In order to also take local similarity into account, this paper establishes an MRF on the prior probability distribution of the cluster labels to improve the spatial smoothness of the clustering. Specifically, taking the pixel patch corresponding to any data sample x_i as the center, the samples corresponding to its 3 × 3 neighborhood of pixel patches are stacked as the dataset Ω_i, and the cluster labels of the samples in this set are smoothed in the spatial domain to obtain the MRF factor G_il:

G_il = (1/|Ω_i|) Σ_{x_j ∈ Ω_i} δ(c_j = l),        (10)

where δ(·) is the indicator function. The prior parameters λ_i1, λ_i2, . . . , λ_iL of the mixing weights β_i1, β_i2, . . . , β_iL are then taken as the product of α_0 and G_i1, G_i2, . . . , G_iL, namely λ_il = α_0 G_il.
The larger the value of G_il, the more concentrated the clustering is within the sample's neighborhood, the larger the corresponding mixing weight β_il, and the higher the probability that the sample belongs to the dominant cluster of its neighborhood. This makes a data sample and most of its neighboring samples belong to the same cluster with high probability, thus reflecting the local similarity of the structure and improving the spatial smoothness of the clustering results. Compared with the PSBP algorithm, the first advantage of this probability model is its analyticity and conjugacy, which allow more efficient deterministic inference. In addition, because the MRF is constructed by smoothing the neighborhood cluster labels, the local similarity constraint on the clustering prior probability is generated by only a small number of samples in the neighborhood; there is no need to evaluate a kernel function for all samples one by one as in PSBP, which also saves computation. The atoms d_k, weights s_i, and residuals ε_i in equation (2) are likewise assigned conjugate prior distributions so that the whole model remains analytically tractable.
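A minimal sketch of the neighborhood-smoothing idea: for each sample, the MRF factor is taken as the fraction of cluster labels in its 3 × 3 neighborhood equal to each cluster, and the Dirichlet parameters scale this by α_0. The function name, label layout, and the small positive floor are assumptions of this sketch, not the paper's exact construction.

```python
import numpy as np

def mrf_factors(labels, L, alpha0):
    """Compute lambda_il = alpha0 * G_il, where G_il is the fraction of
    cluster labels in the 3x3 neighborhood of position (i, j) equal to l.

    labels: 2-D integer array of current cluster labels, one per patch.
    A small floor keeps every Dirichlet parameter strictly positive.
    """
    H, W = labels.shape
    lam = np.full((H, W, L), 1e-6)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - 1), min(H, i + 2)
            j0, j1 = max(0, j - 1), min(W, j + 2)
            nbhd = labels[i0:i1, j0:j1].ravel()
            counts = np.bincount(nbhd, minlength=L).astype(float)
            lam[i, j] += alpha0 * counts / counts.sum()
    return lam

labels = np.array([[0, 0, 1], [0, 1, 1], [0, 1, 1]])
lam = mrf_factors(labels, L=2, alpha0=2.0)
# Center position: 5 of its 9 neighborhood labels equal 1,
# so lam[1, 1, 1] > lam[1, 1, 0], favoring the locally dominant cluster.
```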

Group Sparse Representation for Dictionary Learning
In Section 3, we mainly discussed probabilistic modeling based on clustering structure similarity. As is well known, an image contains many similar, redundant components, and their sparse representations under a traditional structured dictionary are also similar, which is reflected in the similarity of their support sets [18]. Thus, a global similarity measure is introduced into the dictionary learning framework to improve the ability of the dictionary to express image structure features [19]. This paper adopts a clustering-by-structure-similarity method, a simple and effective unsupervised clustering method. With the data neither classified nor labeled, cluster centers are randomly selected according to the preset number of clusters. By calculating the similarity measure between each image patch and each cluster center, the cluster membership of each patch is determined. Then, the average of all image patches in each cluster is calculated as the new center. This process is executed iteratively until each cluster center no longer deviates appreciably from its value in the previous iteration or the predetermined upper limit on the number of iterations is reached. The purpose of signal reconstruction, however, is to recover the clear image x from the degraded image y. Therefore, the optimization problem is transformed into the following objective function:

α̂_G = arg min_{α_G} ‖y − D_G ∘ α_G‖_2^2 + λ‖α_G‖_0,        (11)

where α_G represents the concatenation of the sparse matrices α_{G_k} of all groups of similar patches in an image x; D_G is the concatenation of the sparse dictionaries D_{G_k} obtained by adaptive learning on each group of similar patches; D_G ∘ α_G represents the reconstructed image; and ∘ represents the abstract operation of multiplying the sparse basis by the sparse coefficients. The dictionary-learning objective function in equation (11) has two optimization variables: the dictionary and the coefficient matrix.
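The unsupervised clustering loop described above (random centers, nearest-center assignment by a similarity measure, center recomputation, iterate to convergence) is essentially k-means. A minimal sketch with illustrative data, names, and Euclidean distance as the similarity measure:

```python
import numpy as np

def cluster_patches(patches, n_clusters, n_iter, rng):
    """Plain k-means over vectorized patches: randomly chosen centers,
    nearest-center assignment, then center recomputation, iterated."""
    centers = patches[rng.choice(len(patches), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign each patch to its nearest center (squared Euclidean distance)
        d = ((patches[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its cluster
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = patches[labels == k].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(0)
# two well-separated synthetic "patch" populations of 16-dim vectors
patches = np.vstack([rng.normal(0, 1, (50, 16)), rng.normal(5, 1, (50, 16))])
labels, centers = cluster_patches(patches, n_clusters=2, n_iter=10, rng=rng)
```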
According to optimization theory, during the iterative computation the other variables are held fixed while one variable is updated in turn. Therefore, the same learning strategy is used here: first fix the dictionary and update the coefficient matrix; then fix the coefficient matrix and update the dictionary [20].
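The alternating strategy can be sketched as follows. This is not the paper's Bayesian solver; it substitutes a simple proximal-gradient (ISTA) step for the sparse-coding update and a least-squares fit for the dictionary update, purely to illustrate the fix-one-variable, update-the-other loop.

```python
import numpy as np

def alt_dictionary_learning(X, K, n_iter, lam, rng):
    """Minimal alternating-minimization sketch for X ~ D A with sparse A:
    fix D, take one ISTA step on A; fix A, refit D by least squares."""
    P, N = X.shape
    D = rng.normal(size=(P, K))
    D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
    A = np.zeros((K, N))
    for _ in range(n_iter):
        # sparse-coding step: one proximal-gradient (ISTA) update on A
        step = 1.0 / np.linalg.norm(D.T @ D, 2)
        G = A - step * D.T @ (D @ A - X)
        A = np.sign(G) * np.maximum(np.abs(G) - step * lam, 0.0)
        # dictionary step: least-squares fit, then renormalize atoms
        D = X @ np.linalg.pinv(A)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, A

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 200))              # illustrative training matrix
D, A = alt_dictionary_learning(X, K=32, n_iter=20, lam=0.1, rng=rng)
```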

Experimental Results and Analysis
This paper evaluates the effectiveness of the proposed reconstruction algorithm on function optimization and image denoising. Gray-scale images commonly used in image processing, such as Barbara, Lena, Boat, House, and Peppers, are used as experimental objects, and MATLAB is used for simulation analysis [21]. The experimental environment is a Pentium-IV processor at 2.4 GHz with 2 GB of memory, Windows 7, and the MATLAB 7.10 programming environment.

Parameter Setting.
In the dictionary learning stage, 105 pairs of 9 × 9 pixel image patches are randomly selected for dictionary learning, and the maximum overlap between patches is 4. Singular value decomposition is used to initialize the dictionary elements and parameter values. The initial hyperparameter values are c = d = e = 10^−6, c_0 = 2, and η_0 = 0.55. The dictionary size is 2048. The first 1500 samples are used for optimization, and the last 1500 samples approximate the dictionary and the posterior distribution of the parameters. In the experiment, the image patch size S × S is set to 9 × 9; the nonlocal search window size is set to 31 × 31; the number of similar blocks K is set to 16; the number of clusters C is set to 40; the number of iterations is set to 9; and the residual compensation coefficient is set to 0.03. In the reconstruction phase, the number of iterations is 40. In this paper, only gray images are processed on the basis of the nonparametric Bayesian model; since a color image is composed of three gray-image channels, its processing flow is the same. All images have the same resolution, 512 × 512. A sliding-window method divides each image into 200000 overlapping 8 pixel × 8 pixel patches, and the gray values of all image patches are processed by column stacking to form the dictionary dataset. According to the flow of the proposed algorithm, the number of atoms in the dictionary is set to 512.
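The sliding-window column-stacking step can be sketched as follows; the tiny 8 × 8 example image, patch size, and stride are illustrative (the paper uses 512 × 512 images and overlapping 8-pixel patches).

```python
import numpy as np

def extract_patches(image, patch, stride):
    """Slide a patch x patch window over the image and column-stack each
    patch, forming the training matrix with one column per patch."""
    H, W = image.shape
    cols = []
    for i in range(0, H - patch + 1, stride):
        for j in range(0, W - patch + 1, stride):
            cols.append(image[i:i + patch, j:j + patch].ravel())
    return np.array(cols).T  # shape: (patch * patch, n_patches)

img = np.arange(64, dtype=float).reshape(8, 8)
# 8x8 image, 4x4 patches, stride 2 -> 3x3 = 9 overlapping patches
Xtrain = extract_patches(img, patch=4, stride=2)
print(Xtrain.shape)  # (16, 9)
```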

Image Quality Assessment.
Objective quantitative evaluation is the quantitative analysis of reconstruction results using the algorithm model, and it has the characteristics of objectivity, standardization, accuracy, quantification, and simplicity [22]. The selected quantitative indices belong to the full-reference class of image quality evaluation methods: the pixel statistics, information entropy, structural information, and other parameters between the images processed by the reconstruction algorithms and the high-quality reference images are evaluated objectively [23]. In our experiments, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) are used to quantitatively describe the performance of the different comparison algorithms. In addition, the average running time is used to measure their efficiency. PSNR measures the difference between the evaluated image and the reference image and describes the reconstruction performance of the algorithm. Its mathematical expression is

PSNR = 10 log_10(255^2 / MSE),    MSE = (1/(MN)) Σ_{i=1}^{M} Σ_{j=1}^{N} (x(i, j) − x̂(i, j))^2,

where x is the reference image and x̂ is the evaluated image. SSIM is designed to measure the structural fidelity of the image; its value ranges from 0 to 1, and the larger the value, the better the image reconstruction. It is defined as

SSIM(x, x̂) = ((2 μ_x μ_x̂ + C_1)(2 σ_xx̂ + C_2)) / ((μ_x^2 + μ_x̂^2 + C_1)(σ_x^2 + σ_x̂^2 + C_2)),

where μ and σ^2 denote means and variances, σ_xx̂ is the covariance, and C_1 and C_2 are small stabilizing constants.
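The two indices can be computed as follows; `ssim_global` is a simplified single-window variant of SSIM (the standard index averages this statistic over local windows), included only to illustrate the formula.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, img, peak=255.0):
    """Global (single-window) SSIM with the usual stabilizing constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = ref.astype(float), img.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = clean + rng.normal(0, 5, size=clean.shape)
# For sigma = 5 noise, PSNR is about 10*log10(255^2 / 25) ~ 34 dB.
```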

Quantitative Analysis for Image Reconstruction.
To facilitate comparative analysis, the experiments cover function optimization and image denoising based on dictionary learning algorithms [24]. The comparison algorithms are BPFA [5], DP-BPFA [8], PSBP-BPFA [12], SSIM-BPFA [13], and K-SVD [6]. Before the comparison experiments, the clustering effect of the proposed algorithm during dictionary learning is illustrated on three images, Lena, Peppers, and Mandrill, as shown in Figure 2. The clustering is based on differences in the atom-usage probabilities, and image patches of the same cluster are displayed with the same pixel value, where L is the number of clusters inferred by the proposed algorithm. The numbers of atoms selected are, from left to right, 4, 8, and 12. It can be seen that more atoms do not always yield a better clustering effect: with 12 atoms, many artifacts appear in the clustering results, while with 8 atoms the effect is best. The experiment adds Gaussian white noise with standard deviation σ = 5, 15, 20, 25, and 35 to the natural images and uses the abovementioned comparison algorithms and the proposed algorithm to perform image reconstruction [25-32]. The reconstruction results are compared from both subjective and objective aspects. The subjective evaluation selects the House image, with large smooth areas, and the Barbara image, with rich texture information [26,28], and analyzes the performance of the reconstruction methods from the visual effect of the images; the objective evaluation uses PSNR and SSIM to measure reconstruction performance with objective indicators. Figure 3 shows the reconstruction effect of the different comparison algorithms on different noisy images.
The overall denoising PSNR of the six algorithms, from low to high, is: the BPFA algorithm, the K-SVD algorithm, the DP-BPFA algorithm, the PSBP-BPFA algorithm, the SSIM-BPFA algorithm, and the algorithm proposed in this paper. The gap between K-SVD and BPFA is very small; neither uses the structural features of the sample data. The gap between DP-BPFA and PSBP-BPFA is also relatively small; they use structural information in the data, but the texture structure information is not fully exploited.
The SSIM-BPFA algorithm introduces the block structure feature of the sparse representation on the basis of data clustering preprocessing, and its data fitting accuracy is significantly improved compared with the previous four algorithms [29].
From the overall reconstruction effect in Figure 3, it can be seen that the reconstructed image in Figure 3(c) has severe speckle noise; the reconstructed image in Figure 3(d) has severe scratches; Figure 3(e) is the denoised image of the PSBP-BPFA algorithm; Figure 3(f) has slight scratches at the edges of the image; the denoised image in Figure 3(g) has slight speckle noise; and the denoised image in Figure 3(h) shows the best result, especially in the smooth areas of the image. From this analysis, it can be seen that for the House image with a noise level of 35, the proposed reconstruction algorithm achieves a better reconstruction effect in the smooth areas of the image, and its detail retention ability has obvious advantages over the other methods.
To further analyze how well these methods preserve image details, Figure 4 shows enlarged views of partial details of the reconstruction results. Due to space constraints, only the most representative algorithms are compared. It can be seen from the Barbara detail map that DP-BPFA and PSBP-BPFA exhibit speckle noise [30,31]; the DP-BPFA detail image is severely scratched; the image texture in PSBP-BPFA is overly smoothed, losing considerable information; for the left eye in Barbara, the SSIM-BPFA image has slight scratches, but the structure is better preserved; the proposed algorithm gives the best reconstruction. The SSIM and PSNR results for the different standard algorithms are shown in Table 1. The experimental results show that the proposed algorithm obtains more reliable reconstruction results for natural images. Although the proposed algorithm achieves higher reconstruction probability and accuracy than the traditional algorithms, it essentially uses correlation as a similarity measure to guide the choice of atoms, which is inevitably restricted by the RIP. It is worth exploring other, possibly more effective, similarity measures, such as the Euclidean distance and the Chebyshev distance [32]. Compared with the SSIM-BPFA algorithm, the advantage of the proposed algorithm is that it is more adaptive and its clustering is better matched to the dictionary learning, which is reflected in its reconstruction advantages. Since the signal-to-noise ratio in denoising applications has no significant impact on the SSIM value or the algorithm running time, and in particular the SSIM values of several algorithms are close to 1, these two aspects are examined in the image reconstruction experiment, where the impact of the missing rate on them can be observed [33].
Figure 5 shows the experimental results of interpolation restoration of some images under 80% random pixel loss. Figure 5(b) is the damaged image after pixel loss. The remaining results are from DP-BPFA, PSBP-BPFA, SSIM-BPFA, and the proposed algorithm. The PSNR value of the proposed algorithm is generally higher than those of the other algorithms, and its visual effect is the best. Even as the image data loss rate varies from 10% to 90%, the average PSNR and SSIM of the comparison algorithms remain lower than those of the proposed algorithm, where the experimental results are averaged over 50 independent runs. Figure 5 shows that the PSNR of all algorithms decreases as the data missing rate increases. The K-SVD algorithm shows advantages over the BPFA and DP-BPFA algorithms at low missing rates, but its PSNR at high missing rates is the lowest among all algorithms, which shows that the proposed algorithm is more robust than K-SVD. Therefore, the proposed algorithm has better adaptability and an accuracy advantage in image reconstruction.

Conclusion
This paper proposes a signal reconstruction algorithm based on adaptive dictionary learning, which overcomes the shortcomings of traditional sparsity-based methods, which are too restrictive and insufficiently robust for natural images. In addition, it greatly alleviates the tendency of sparse models to lose the global structure under the reconstruction framework. An alternating optimization algorithm is designed to solve the nonparametric model, and experiments show that the proposed algorithm has good convergence and stability. Experimental results show that the proposed algorithm is not only superior to the comparison algorithms in numerical evaluation indicators but also closer to the original image in terms of visual effects. Although the proposed algorithm has made some progress in reconstruction accuracy, there is still room for improvement. For example, designing evolutionary operators with richer prior information or exploring more concise coefficient-solving methods would help further improve computational efficiency and accuracy.

Data Availability
The labeled dataset used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest
The authors declare no conflicts of interest.