Poissonian Image Deconvolution via Sparse and Redundant Representations and Framelet Regularization

Poissonian image deconvolution is a key issue in various applications, such as astronomical imaging, medical imaging, and electron microscope imaging. A large body of the literature on this subject concerns analysis-based methods, which apply various forward measurements to the image. Meanwhile, synthesis-based methods are another well-known class of methods, which seek a reconstruction of the image. In this paper, we propose an approach that combines analysis and synthesis methods. The method addresses the Poissonian image deconvolution problem by minimizing an energy functional composed of the data fidelity term, a sparse representation prior over a learned dictionary, and a framelet-based analysis prior constraint as the regularization term. The minimization problem can be efficiently solved by the split Bregman technique. Experiments demonstrate that our approach achieves better results than many state-of-the-art methods, in terms of both restoration accuracy and visual perception.


Introduction
Poissonian image deconvolution appears in various applications such as astronomical imaging [1], medical imaging [2], and electronic microscope imaging [3]. It aims to reconstruct a high quality image x from the degraded image y. Mathematically, the process of Poissonian image deconvolution can be generally modeled by y = P(Hx) (1), where H ∈ R^{n×n} denotes the blurring matrix generated by the point spread function (PSF), x ∈ R^n denotes the unknown image to be estimated, P denotes the Poisson noise process, y ∈ R^n denotes the blurred noisy image, and n is the length of the image vector, which is stacked by columns.
It is known that Poissonian image deconvolution is a typical ill-posed inverse problem. In general, the solution of (1) is not unique. Prior knowledge of the image, including analysis-based and synthesis-based priors, can be used to address this problem. For an overview of the two classes of priors, we refer to [4]. Analysis-based priors are frequently used as the regularization term in an energy functional min_x E(x) = F(x) + λR(x) (2), where F(x) is the data fidelity term and R(x) is the regularization constraint term. Two main analysis-based methods have been proposed to solve problem (2): total variation (TV) based methods [5,6] and wavelet frame-based methods. TV-based methods have shown good performance on blurred images owing to their discontinuous solutions and edge-preserving advantages. Many authors have combined the TV regularization term with variant methods to solve problem (2). For example, Sawatzky et al. combined an expectation-maximization algorithm with TV regularization [7] in positron emission tomography. Setzer et al. considered using a TV regularization term with split Bregman techniques [8]. TV-based methods can remove noise in flat regions effectively; however, they lose details in texture regions. Furthermore, the inherent problem of TV-based methods is that they favor a piecewise constant solution, which causes staircase effects in flat regions.

Mathematical Problems in Engineering
To overcome the disadvantages of TV-based methods, several authors have become interested in frame-based methods, which can be regarded as analysis regularization methods as well. It is assumed that the majority of natural images have sparse approximations under some redundant tight frame systems, including the wavelet transform, Gabor transform, curvelets, and framelets [9]. Chen and Cheng considered a hybrid variational model [10] that describes a cartoon part by TV and a detailed part by a sparse representation over a wavelet basis. Figueiredo and Bioucas-Dias proposed an alternating direction method [11] using a frame-based analysis regularizer. Fang et al. presented blind Poissonian image deconvolution with framelet regularization [12]. He also considered a framelet-based analysis prior and a combination of framelet and total variation analysis priors in [13].
Dictionary learning [14-20] based sparse representation has been extensively studied, and significant progress has been made in the development of modern dictionary learning for synthesis-based sparse representation of images. Natural images contain repeated patterns such as texture regions, flat regions, and smooth regions. Hence, they can be described as linear combinations of a few atoms from a dictionary. Dictionary learning for the synthesis model in image denoising was proposed by Elad and Aharon. They presented a method via sparse and redundant representations over learned dictionaries, called K-SVD [16]. This method was designed to remove additive noise. Huang et al. subsequently attempted to use the K-SVD method [17] for multiplicative noise removal. It was proven that the K-SVD method could perform well simply by transforming the multiplicative noise to additive noise in the logarithmic domain. Similarly, Poisson noise can be transformed to additive Gaussian noise with unit variance by the Anscombe root transformation (ART) [18] and then removed by the K-SVD method [15]. In fact, this variance stabilization has often been questioned for its poor numerical results. Therefore, the denoising algorithm should be specifically designed for Poisson noise. Ma et al. proposed dictionary learning for Poisson noise removal [21].
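The Anscombe root transformation mentioned above has the well-known form f(x) = 2√(x + 3/8). A minimal numerical sketch (our own illustration, not taken from the paper) shows that, for moderately large Poisson means, the transformed samples have standard deviation close to 1:

```python
import numpy as np

def anscombe(x):
    """Anscombe root transformation: maps Poisson counts to
    approximately Gaussian data with unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

rng = np.random.default_rng(0)
# Poisson samples with a moderately large mean; after the ART,
# the sample standard deviation is close to 1.
samples = rng.poisson(lam=50.0, size=200_000)
std_after = anscombe(samples).std()
```

As the paper notes, this stabilization degrades at low counts, which is one motivation for treating the Poisson statistics directly instead.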
There is still work to do on dictionary learning for Poisson noise removal, because the TV regularizer and dictionary learning (refer to [21]) do not match well. The TV regularizer can reduce the artifacts in smooth regions caused by the patch-based priors of dictionary learning; however, it smooths the fine details in texture regions. Motivated by analysis and synthesis based methods, we combine an analysis method with a synthesis method for Poissonian image deconvolution. We choose the B-spline framelet as the analysis regularization term, because the framelet transform has the ability of multiple-resolution analysis. Different framelet masks reflect different orders of difference operators, which can adaptively capture multiscale edge structures in images. Besides, a synthesis-based method is integrated into the proposed method using a sparse representation prior over a learned dictionary. Finally, the split Bregman method is used to address the optimization problem. The Bregman variables make the proposed method stable, and the regularization parameters do not need to be updated during the Bregman iteration process.
The rest of this paper is organized as follows. The K-SVD algorithm and the B-spline framelet are briefly reviewed in Section 2. The Poissonian image deconvolution model is proposed in Section 3. Numerical experiments are given to demonstrate the quality of the restored images in Section 4. Conclusions are given in Section 5.

Review of Prior Works
In this section, we briefly review the K-SVD algorithm and the B-spline framelet. The K-SVD algorithm is a classical sparse representation method, and it will be used in our proposed model. The framelet has drawn much attention for its multiscale characteristics.

K-SVD Algorithm.
We consider an image patch x of size √n × √n, stacked as a vector in R^n, and define a dictionary D ∈ R^{n×K}, where K > n.
Each image patch can be represented sparsely over the dictionary; that is, the solution of min_α ||α||_0 s.t. Dα = x (3) is very sparse. The notation ||α||_0 stands for the count of the nonzero entries in α. Considering an image patch x contaminated by additive zero-mean white Gaussian noise with standard deviation σ, the MAP estimator for denoising this image patch is built by solving min_α ||α||_0 s.t. ||Dα − x||_2^2 ≤ T (4), where T is a threshold determined by σ. If each patch R_ij X of an image X can be described by (4), the MAP estimator of the whole image X is described by {α̂_ij, X̂} = arg min_{α_ij, X} λ||X − Y||_2^2 + Σ_ij μ_ij||α_ij||_0 + Σ_ij ||Dα_ij − R_ij X||_2^2 (5), where Y is the noisy image and R_ij extracts the (i,j)th patch. The K-SVD algorithm minimizes (5) by alternating the following stages: (i) sparse coding stage: given D and X, compute the sparse representations α_ij patch by patch; (ii) dictionary update stage: for each column k = 1, 2, ..., K in D and the patches whose representations use that column (α_ij(k) ≠ 0), compute the representation error and update the dictionary column d_k by an SVD decomposition of the representation error; (iii) given all α_ij, update X: the simple quadratic term has a closed-form solution of the form X̂ = (λI + Σ_ij R_ij^T R_ij)^{-1}(λY + Σ_ij R_ij^T Dα_ij).
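The sparse coding stage is typically solved greedily by orthogonal matching pursuit (OMP). The following is a minimal sketch of OMP, not the paper's exact solver; the toy orthonormal dictionary and test signal are our own illustration:

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: select at most k atoms of the
    dictionary D (columns, assumed unit-norm) to approximate the signal x."""
    residual = x.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares refit of the coefficients on the current support.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    alpha[support] = coef
    return alpha

# Toy check with an orthonormal dictionary: a 2-sparse signal is recovered.
D = np.eye(4)
x = np.array([0.0, 3.0, 0.0, -2.0])
alpha = omp(D, x, k=2)
```

In K-SVD the stopping rule is usually an error threshold tied to the noise level rather than a fixed sparsity k; the fixed-k form above keeps the sketch short.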

Analysis Operator-B-Spline
Framelet. In this subsection, we briefly introduce the theory of frames and the B-spline framelet [22]. A countable subset {f_n} of L^2(R) is called a frame if A||f||^2 ≤ Σ_n |⟨f, f_n⟩|^2 ≤ B||f||^2 for all f ∈ L^2(R), where ⟨·,·⟩ is the inner product of L^2(R), f is a signal, and A and B are positive constants. If A = B, we call it a tight frame. In that case, f can be perfectly reconstructed by f = (1/A) Σ_n ⟨f, f_n⟩ f_n. Furthermore, a wavelet frame is constructed by the wavelet transform using sampling of the time and scale parameters. It is defined as X(Ψ) = {2^{j/2} ψ(2^j t − k) : ψ ∈ Ψ, j, k ∈ Z}. If the wavelet frame satisfies the tight frame condition, the functions ψ ∈ Ψ are called framelets, which belong to the wavelet tight frame X(Ψ). To construct a set of framelets, one starts from a refinement mask ĥ_0 of a refinable function φ (also called a scaling function) satisfying φ̂(2ξ) = ĥ_0(ξ)φ̂(ξ), where φ̂ is the Fourier transform of φ. A wavelet tight frame can be constructed by an appropriate set of framelets Ψ = {ψ_1, ψ_2, ..., ψ_r} that can be described in the Fourier domain as ψ̂_ℓ(2ξ) = ĥ_ℓ(ξ)φ̂(ξ), ℓ = 1, 2, ..., r. The B-spline tight framelet is constructed from the piecewise linear B-spline function φ and the refinement mask ĥ_0(ξ) = cos^2(ξ/2). The corresponding low-pass filter is h_0 = (1/4)[1, 2, 1]. Two framelets ψ_1, ψ_2 can be defined by the framelet masks ĥ_1(ξ) = −(√2/2) sin(ξ) and ĥ_2(ξ) = sin^2(ξ/2), and the corresponding high-pass filters are h_1 = (√2/4)[1, 0, −1] and h_2 = (1/4)[−1, 2, −1]. Subsequently, the framelet decomposition of an image x is easily obtained. The decomposition process, which can also be called the wavelet transform, is defined as u = Wx, where W is the transform matrix. In our paper, W is called the analysis operator associated with a 1-tight (Parseval) frame. We have W^T W = I in the discrete domain, and W and W^T denote the discrete wavelet tight frame decomposition and reconstruction, respectively. More details about the properties of frames and framelets can be found in [23].
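The Parseval property W^T W = I can be checked numerically. Below is a 1-D sketch (our own illustration) of the piecewise linear B-spline framelet filter bank with circular boundary handling: applying the analysis channels and then their adjoints reconstructs the signal exactly:

```python
import numpy as np

# Piecewise linear B-spline framelet filter bank (1-D): one low-pass and
# two high-pass filters, matching the filters h_0, h_1, h_2 in the text.
H = [np.array([1.0, 2.0, 1.0]) / 4.0,                 # low-pass
     np.array([1.0, 0.0, -1.0]) * np.sqrt(2) / 4.0,   # first high-pass
     np.array([-1.0, 2.0, -1.0]) / 4.0]               # second high-pass

def cconv(h, x):
    """Circular convolution: one analysis channel of W."""
    return sum(c * np.roll(x, k) for k, c in enumerate(h))

def cconv_adj(h, x):
    """Adjoint of cconv: one synthesis channel of W^T."""
    return sum(c * np.roll(x, -k) for k, c in enumerate(h))

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
coeffs = [cconv(h, x) for h in H]                        # u = W x
x_rec = sum(cconv_adj(h, c) for h, c in zip(H, coeffs))  # W^T u
# Tight (Parseval) frame: W^T W = I, so x is reconstructed exactly.
```

The identity holds because the squared magnitudes of the three filter responses sum to 1 at every frequency; a 2-D transform is obtained by applying the 1-D channels along rows and columns.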

Sparse and Redundant Representation on Dictionary Learning and Framelet
where λ, μ are the positive regularization parameters in the proposed energy (19). Among these parameters, μ is an important one: it can drastically reduce the workload of parameter setting, an effect discussed in the parameter setting analysis at the end of this section. ||·||_1 is the ℓ1-norm, which sums the absolute values of a vector. A sparse solution can be approximated by the ℓ1-norm regularizer. The ℓ1-norm has become popular in recent years: it is a robust measurement that gives a suitable penalty on flat regions and edges, it fully embodies sparsity, and, most of all, it is easy to compute. The notation ||Wx||_1 stands for the analysis-based regularizer. The framelet transform W is generated by the piecewise linear B-spline framelet filter masks for their simplicity and effectiveness in image deblurring.

Using the augmented Lagrangian method, the split Bregman iteration for (19) is defined accordingly, and the alternating split Bregman algorithm minimizes (21) alternately with respect to x and α. It is important to note that the penalty parameter 1/(2θ) should not be too large: if it becomes very large, the minimization subproblems become increasingly ill-conditioned, which leads to numerical problems [25].
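In such split Bregman schemes, the subproblem in the framelet coefficients decouples componentwise and is solved in closed form by soft thresholding. A minimal sketch (the name `shrink` is a common convention, not taken from the paper):

```python
import numpy as np

def shrink(v, t):
    """Soft thresholding: the closed-form minimizer of
    t*|d| + 0.5*(d - v)**2, applied componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
d = shrink(v, 1.0)  # entries with |v| <= 1 are set to 0
```

Because this step is exact and cheap, the ℓ1 term adds essentially no cost per iteration.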
If we use the alternating split Bregman algorithm directly, a subtle problem arises. Letting d_1 = Hx and d_2 = Wx, we obtain a minimization problem in the variables x, α, d_1, d_2. The x-subproblem is quadratic, so a linear equation for x follows from its closed form; in that equation the term involving x dominates while the other elements are small enough to be ignored, so the estimate x̂ is approximately of the same form. In [16], it was proved that dictionary learning with K-SVD can deal with additive noise; however, it cannot deal with Poisson noise directly. Here, we add the special constraint d_3 = x to address this difficulty. The special constraint comes from the idea of combining the split Bregman algorithm with an alternating minimization algorithm; refer to [26]. It is shown that this special alternating optimization simplifies the complex problem. Moreover, it resolves the limitation that the K-SVD algorithm applies only to additive noise denoising, or to multiplicative noise denoising after the multiplicative noise is transformed to additive noise in the logarithmic domain [17].
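With the splitting d_1 = Hx, the subproblem coupling d_1 to the Poisson data term, min_d ⟨1, d − y log d⟩ + (μ/2)||d − v||_2^2, is separable per pixel, and its minimizer is the positive root of a quadratic. A sketch under that standard splitting (the function name and the test values are our own):

```python
import numpy as np

def poisson_prox(v, y, mu):
    """Positive root of mu*d**2 + (1 - mu*v)*d - y = 0, which minimizes
    (d - y*log(d)) + (mu/2)*(d - v)**2 for each pixel (d > 0)."""
    b = mu * v - 1.0
    return (b + np.sqrt(b * b + 4.0 * mu * y)) / (2.0 * mu)

v = np.array([0.5, 2.0, 10.0])   # current estimate Hx + Bregman variable
y = np.array([1.0, 3.0, 8.0])    # observed counts
d = poisson_prox(v, y, mu=2.0)
# Stationarity check: the gradient 1 - y/d + mu*(d - v) vanishes at d.
grad = 1.0 - y / d + 2.0 * (d - v)
```

Since y ≥ 0, the discriminant exceeds b², so the returned root is always strictly positive and the logarithm in the data term stays well defined.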

Proposed Algorithm
According to its closed form, we can obtain the linear equation for x^{(k+1)}. The convolution with H can be performed efficiently in the FFT domain.
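Because H is a (periodic) convolution, the quadratic x-subproblem diagonalizes under the 2-D FFT. The sketch below (our own illustration with a toy kernel and generic right-hand sides r1, r2) solves a system of the form (H^T H + μI)x = H^T r1 + μ r2:

```python
import numpy as np

def solve_quadratic_fft(psf_freq, r1, r2, mu):
    """Solve (H^T H + mu*I) x = H^T r1 + mu*r2 in the Fourier domain,
    where H is circular convolution with transfer function psf_freq."""
    num = np.conj(psf_freq) * np.fft.fft2(r1) + mu * np.fft.fft2(r2)
    den = np.abs(psf_freq) ** 2 + mu
    return np.real(np.fft.ifft2(num / den))

rng = np.random.default_rng(2)
x_true = rng.random((16, 16))
psf = np.zeros((16, 16))
psf[:3, :3] = 1.0 / 9.0                     # toy 3x3 uniform blur
psf_freq = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(psf_freq * np.fft.fft2(x_true)))
# Sanity check: with consistent right-hand sides r1 = H*x_true and
# r2 = x_true, the solver returns x_true exactly.
x_hat = solve_quadratic_fft(psf_freq, blurred, x_true, mu=1.0)
```

In the actual iteration, r1 and r2 would be the current Bregman-shifted auxiliary variables rather than the ground truth; the FFT makes this step O(n log n) per iteration.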
(5) Given α_ij, D, x^{(k+1)}, and b_3^{(k)}, the last subproblem, with respect to d_3, is also a linear equation problem; similar to (33), we can obtain d_3^{(k+1)}. The Bregman auxiliary variables b_1, b_2, b_3 are then updated in the usual split Bregman fashion. Besides, dictionary learning is an important procedure in the sparse and redundant representation. We update the dictionary D at the last step. D is updated in an outer loop to save computational time, as suggested in [21]. Therefore, the iterations are divided into an inner loop for the main process and an outer loop only for dictionary updating.
Taking into account the above equations, the proposed Poissonian image deconvolution algorithm is summarized as shown in Algorithm 1.
One of the stopping criteria, calculated at each iteration, is the "normalized step difference energy" (NSDE): NSDE = ||x^{(k)} − x^{(k−1)}||_2^2 / ||x^{(k)}||_2^2, where x^{(k)} and x^{(k−1)} denote the image vectors at the kth and (k−1)th iterations, respectively. The other stopping criterion is k ≤ T_in, where T_in is the maximum number of inner iterations. If either stopping criterion is satisfied, the inner loop is terminated. In our paper, we set NSDE ≤ 0.00001, T_in = 40, and the number of outer iterations to 10. Besides, some conditions should be satisfied to obtain a sound deconvolution result. For example, the restored image x should be normalized, and it should be nonnegative during the iterations, which we enforce by setting x = max(x, 0).
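The stopping rule can be sketched as follows (the exact normalization in the paper's NSDE formula is garbled in this copy; the common relative form below is an assumption):

```python
import numpy as np

def nsde(x_curr, x_prev):
    """Normalized step difference energy between consecutive iterates:
    squared step size relative to the current iterate's energy."""
    return np.sum((x_curr - x_prev) ** 2) / np.sum(x_curr ** 2)

# Toy iterates that have nearly converged: a 0.1% relative change.
x_prev = np.ones(100)
x_curr = np.ones(100) * 1.001
val = nsde(x_curr, x_prev)
stop = (val <= 1e-5)   # inner loop terminates when this holds
```

In the full algorithm this test is combined with the inner iteration cap, whichever is hit first.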
The proposed method involves many penalty parameters, and parameter choice plays an important role in the efficiency and stability of the algorithm. There are some guidelines for determining these parameters. (1) In much of the literature, the three penalty parameters can be set equal to each other; to avoid numerical problems, they should not be too small. (2) In practice, the larger the value of μ, the better; in our paper, we set μ ∈ [400, 10000]. The parameter group λ is important in the framelet regularization process and belongs to the empirical range [3, 10]. The last parameter group belongs to the empirical range [1, 5].

Experimental Results
In this section, in order to evaluate the performance of the proposed Poissonian image deconvolution method, we test and compare it with the framelet regularization based method for Poissonian image deconvolution using Bregman iteration (PIDSB-FA) [13] and the dictionary learning and total variation regularization (LD-TV) [21] method. The peak signal-to-noise ratio (PSNR), defined as PSNR = 10 log_10(I_max^2 / MSE) dB, where I_max is the peak intensity and MSE is the mean squared error between the restored and original images, is frequently used to measure restored image quality. The test images are degraded by a blur kernel and contaminated by Poisson noise. Gaussian and uniform blur kernels are used to test the deblurring performance: a 9 × 9 Gaussian blur kernel with standard deviation 1 and a 5 × 5 uniform blur kernel. Three Poisson noise levels are considered, I_max = 600, 1000, 2000, where I_max denotes the peak intensity of the Poisson noise; a lower peak intensity corresponds to stronger Poisson noise. In general, the simulated degradation has three steps. First, the original image is scaled by I_max / v_max, where v_max denotes the maximal value of the image. Subsequently, the normalized image is convolved with the blur kernel. Finally, Poisson noise is applied to the blurred image.
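The three-step degradation and the PSNR metric can be sketched as follows (our own illustration: the circular-shift blur stands in for the paper's kernels, and the random test image replaces the benchmark images):

```python
import numpy as np

def degrade(image, peak, rng):
    """Three-step simulation: scale to the given peak intensity,
    blur (here a 5x5 uniform kernel via circular shifts), then
    draw each pixel from a Poisson distribution."""
    scaled = image * (peak / image.max())
    blurred = sum(np.roll(np.roll(scaled, i, axis=0), j, axis=1)
                  for i in range(-2, 3) for j in range(-2, 3)) / 25.0
    return rng.poisson(blurred).astype(float)

def psnr(estimate, reference):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((estimate - reference) ** 2)
    return 10.0 * np.log10(reference.max() ** 2 / mse)

rng = np.random.default_rng(3)
clean = rng.random((64, 64)) + 0.5   # strictly positive toy image
noisy = degrade(clean, peak=600, rng=rng)
```

Note that the noise is signal dependent: brighter regions of the blurred image receive proportionally larger fluctuations, which is why the peak intensity controls the effective noise level.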
The ranges of the parameters are given at the end of Section 3; here we give the detailed parameter settings. The first parameter pair is set to 10 for all blur kernels and peak intensities of Poisson noise. We set λ = 3 for the Gaussian blur kernel, with μ = 3000, 8000, 10000 for Poisson noise with I_max = 600, 1000, 2000, respectively. For the uniform blur kernel we take μ = 10000 and the last parameter equal to 1 at all peak intensities of Poisson noise.
In all experiments, we set the parameters of K-SVD [16] as follows: the number of iterations is 30; the error threshold in OMP is ε = Cσ with C = 1.15 and σ = 5; the size of the dictionary is 16 × 256; and the size of the image patches is 4 × 4. A large patch size produces redundant information, while a small patch size leads to the loss of useful information. Besides, the larger the image patch size, the longer the running time [27]. The final dictionary learned at the last iteration of the proposed method for Example 2 is shown in Figure 3.
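The patch operators R_ij used throughout extract overlapping patches, and the term Σ R_ij^T R_ij in the closed-form updates simply counts how many patches cover each pixel. A sketch of extract-and-average put-back with the 4 × 4 patches used in the experiments (denoising would modify the patches between the two steps; here they are passed through unchanged):

```python
import numpy as np

def extract_patches(img, p):
    """All overlapping p x p patches (the R_ij operators), row-major order."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p]
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def put_back(patches, shape, p):
    """Adjoint accumulation sum of R_ij^T, followed by the per-pixel
    coverage-count normalization (sum R_ij^T R_ij)^{-1}."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    H, W = shape
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            acc[i:i + p, j:j + p] += patches[k]
            cnt[i:i + p, j:j + p] += 1.0
            k += 1
    return acc / cnt

rng = np.random.default_rng(4)
img = rng.random((12, 12))
# With unmodified patches, extract-then-average returns the image unchanged.
rebuilt = put_back(extract_patches(img, 4), img.shape, 4)
```

The averaging is what blends overlapping patch estimates into a single image, which is also the source of the smooth-region artifacts the framelet prior is meant to suppress.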
To illustrate the influence of the penalty parameter on the performance of the proposed method, we record the PSNR value of the restored image as the parameter varies (Figure 2). The plot verifies that the penalty parameter 1/(2θ) should not be too large. Besides, the method achieves good performance over a wide range of this parameter.
Restoration of the brain image is shown in Figure 4. The brain image is blurred by a uniform kernel of size 5 × 5 and contaminated by Poisson noise with I_max = 600. Restoration of the house image is shown in Figure 5. The house image is blurred by a Gaussian kernel of size 9 × 9 and contaminated by Poisson noise with I_max = 2000. The image restored by the PIDSB-FA method contains many details, both useful and spurious. LD-TV reduces the spurious details; however, it also smooths the useful ones. The proposed method gives sound restored results, overcoming the problems exhibited by the PIDSB-FA and LD-TV methods.
The PSNR values of the degraded images and the images restored by the three methods are compared in Table 1. The PSNR improvement obtained by PIDSB-FA is about the same as that of LD-TV. The experimental results show that both the analysis operator using the piecewise linear B-spline framelet filters and the synthesis operator using dictionary learning are good operators for image restoration. The proposed method combines the two operators, and it can be seen from Table 1 that it achieves the highest PSNR on both medical and natural images. Evidently, it retains the advantages of both operators.
Example 2. The Lena image is blurred by a customized blur kernel ([0 1 1 1 1 0 0; 1 1 0 0 0 1 0; 0 0 0 0 1 1 0; 0 0 1 1 0 0 0; 0 1 1 0 0 1 1; 0 1 0 0 1 1 0; 0 1 1 1 1 0 0]/22) and then contaminated by Poisson noise with I_max = 2000. The parameter settings for the proposed method are μ = 10000, λ = 3, and the last parameter equal to 1. Restoration of the Lena image by the three methods is shown in Figure 6. Lena is a widely used test image for its rich repeated patterns, such as texture regions, and its strong sparse representation, with large flat and smooth regions. The image restored by the PIDSB-FA method preserves the rich details in texture regions, such as the decorations of the hat, but also introduces artifacts into flat regions, such as Lena's cheek. The image restored by LD-TV is smoother in flat regions; however, it also loses some fine details in texture regions. Meanwhile, the proposed method performs well at both highlighting the texture regions and smoothing the flat regions. The proposed method also achieves higher PSNR values than the other two methods.
The above experiments show that the proposed method gives better restorations of various images under different blur kernels and Poisson noise with multiple peak intensities. Stability is also a chief concern for the proposed method. Figure 7 shows how the NSDE and the PSNR value of the proposed method change with the iterations on the brain image with the Gaussian blur kernel and Poisson noise with I_max = 2000. The NSDE plot indicates convergence, and the PSNR value of the restored image increases progressively until it becomes stable.

Conclusions
In this study, we proposed and implemented a Poissonian deconvolution method via sparse and redundant representations and framelet regularization. The proposed method includes three terms: the Poisson data term, the synthesis term over the learned dictionary, and the framelet-based analysis regularization term.

Figure 3: Dictionary learned by the proposed method for Example 2.

Figure 6: Restoration of the Lena image. (a) Blurred by the customized kernel and then corrupted by Poisson noise (PSNR = 26.8674 dB). (b) Restored image by the PIDSB-FA method (PSNR = 30.62 dB). (c) Restored image by the LD-TV method (PSNR = 30.66 dB). (d) Restored image by the proposed method (PSNR = 31.74 dB).

Figure 7: NSDE and PSNR values versus the number of iterations of the proposed method.
3.1. Proposed Model. Inspired by K-SVD dictionary learning and the split Bregman method, we propose a minimization model based on the sparse representation of the image over a certain unknown dictionary together with an analysis prior. The proposed model can be described by min_{x, α_ij} ⟨1, Hx − y log(Hx)⟩ + λ||Wx||_1 + Σ_ij μ_ij||α_ij||_0 + Σ_ij ||Dα_ij − R_ij x||_2^2 (19). Supposing that α_ij, D, d_1, d_2 are known, the solution with respect to x can be obtained from the quadratic subproblem min_x Σ_ij ||Dα_ij − R_ij x||_2^2 together with the coupling terms. 3.2. Algorithm. The difficulties in solving for the variables are the nonquadratic data term and the nonseparable ℓ1-norm term. An alternating optimization algorithm can overcome these difficulties by introducing the auxiliary variables d_1 = Hx, d_2 = Wx, d_3 = x. The minimization problem becomes min_{x, α, d_1, d_2, d_3} ⟨1, d_1 − y log d_1⟩ + λ||d_2||_1 + Σ_ij μ_ij||α_ij||_0 + Σ_ij ||Dα_ij − R_ij d_3||_2^2 s.t. d_1 = Hx, d_2 = Wx, d_3 = x.