TV+TV^{2} Regularization with Nonconvex Sparseness-Inducing Penalty for Image Restoration

To restore high-quality images, we propose a compound regularization method that combines a new higher-order extension of total variation (TV+TV^{2}) with a nonconvex sparseness-inducing penalty. Considering the presence of varying directional features in images, we employ the shearlet transform to preserve the abundant geometrical information of the image. The nonconvex sparseness-inducing penalty increases robustness to noise and image nonsparsity. In what follows, we present the numerical solution of the proposed model by employing the split Bregman iteration together with generalized shrinkage operators.

Image restoration is one of the most classical problems in image processing. During formation, transmission, or recording, images may be degraded by noise and blur. Thus, an important problem in image processing is the restoration of the original true image from an observed image. Mathematically, this amounts to estimating the true image from the degraded observation.

The recovery of the true image from such degraded data is typically an ill-posed inverse problem, which motivates the use of regularization.

The choice of the regularization function is crucial. Among higher-order approaches, TV+TV^{2} regularization can compete with total generalized variation (TGV) in restoration quality, while TV+TV^{2} remains simple and efficiently solvable numerically.

On the other hand, nonconvex sparseness measures have attracted attention: a penalty that more closely approximates the counting measure of nonzero coefficients promotes sparsity more aggressively than the convex alternatives.

To restore a high-quality image, the regularization approach minimizes an energy that balances a data-fidelity term against one or more regularization terms.

In this paper, we consider a new compound regularization: a linear combination of TV+TV^{2} regularization and a nonconvex sparseness-inducing penalty on the shearlet coefficients. The TV+TV^{2} regularization effectively attenuates the staircase effect of TV regularization, while the nonconvex sparseness-inducing penalty increases robustness to noise and image nonsparsity.

In this section, we briefly review TV+TV^{2} regularization and the shearlet transform to make this paper self-contained.

In the image restoration model, TV regularization preserves edges well but is known to produce staircase artifacts in smooth regions.

To reduce the staircase effect of TV regularization, Chambolle and Lions proposed the inf-convolution regularization, which combines TV with a second-order term, namely the total variation of the gradient; this can be viewed as an early form of TV+TV^{2} regularization.

Following the idea of the inf-convolution regularization, Chan et al. proposed a related combined regularization.

Another attempt to combine first- and second-order regularization also originates from Chan et al.

It is particularly worth mentioning the total generalized variation (TGV) regularization, which has become a popular higher-order model.

The TGV regularization generalizes the inf-convolution regularization in the sense that it coincides with the inf-convolution regularization for special choices of its parameters.

More recently, the TV+TV^{2} regularization was proposed, which penalizes a weighted combination of the total variation and the total variation of the gradient. The idea of TV+TV^{2} regularization is to choose the weights so that the first-order term preserves edges while the second-order term smooths homogeneous regions and suppresses staircasing.

It has been shown that TV+TV^{2} regularization offers solutions whose quality is not far off from those produced by TGV regularization. Moreover, the computational effort needed for its numerical solution is not much more than that needed for solving TV regularization. This motivates our use of TV+TV^{2} regularization.

In addition, it is well known that TV regularization is not optimal for the restoration of textured regions. In recent years, directional representation systems were proposed to cope with curved singularities in images. In particular, curvelets and shearlets provide an optimally sparse approximation in the class of piecewise smooth functions with C^{2} singularity curves.

The classical wavelet transform is unable to provide additional information about the geometry of the set of singularities of an image, because the transform is isotropic. As a result, it has a very limited ability to resolve edges and other distributed discontinuities, which are usually very important in images. The shearlet transform is a directional representation system that provides richer geometrical information for multidimensional functions such as images.

Let the shearlet system be generated from a single function by parabolic scaling, shearing, and translation.

The continuous shearlet transform of a function is then given by its inner products with the elements of this system.

The shearlet transform is invertible if the generating function satisfies an admissibility condition.

To obtain a discrete shearlet transform, we consider a digital image as a function sampled on a regular grid, with the scaling, shear, and translation parameters discretized accordingly.

We select the shearlet transform in this paper because of its directional sensitivity and the availability of efficient implementations. It is a highly effective sparsifying transform for piecewise smooth images containing rich geometric information such as edges, corners, and spikes, combining the power of multiscale methods with the ability to extract the geometry of an image.

In this subsection, we present our new model, which combines TV+TV^{2} regularization with a nonconvex sparseness-inducing penalty based on the shearlet transform:
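The model equation itself is elided in the source; under the assumption that K denotes the degradation operator, SH the discrete shearlet transform, and mu, alpha, beta, gamma positive weights (all names hypothetical), a model of this type can be sketched as:

```latex
\min_{u}\ \frac{\mu}{2}\,\|Ku - f\|_2^2
  \;+\; \alpha\,\|\nabla u\|_{1}
  \;+\; \beta\,\|\nabla^{2} u\|_{1}
  \;+\; \gamma\,\|SH\,u\|_{p}^{p},
  \qquad 0 < p < 1,
```

where the first two regularizers are the discrete TV and TV^{2} terms and the last is the nonconvex sparseness-inducing penalty on the shearlet coefficients.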

To facilitate the discussion of our algorithm, without loss of generality, we represent a gray image as a matrix of pixel intensities.

Analogously, we define the second-order discrete differential operators.

One can easily check that the adjoints of these discrete operators are again difference operators.

For a given discrete image, the gradient and second-order derivatives are obtained by stacking these difference operators.

By choosing periodic boundary conditions, the action of each discrete differential operator can be regarded as a circular convolution with u, which allows the use of the fast Fourier transform on which our algorithm relies.
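As a concrete illustration (function names here are mine, not the paper's), the periodic forward difference and its circular-convolution realisation via the FFT agree exactly:

```python
import numpy as np

def dx_plus(u):
    """Forward difference in x with periodic boundary:
    (D_x u)[i, j] = u[i, (j+1) mod n] - u[i, j]."""
    return np.roll(u, -1, axis=1) - u

def dx_via_fft(u):
    """The same operator realised as a circular convolution via the FFT.
    The Fourier symbol of D_x is exp(2*pi*1j*k/n) - 1."""
    n = u.shape[1]
    symbol = np.exp(2j * np.pi * np.arange(n) / n) - 1.0
    return np.real(np.fft.ifft(np.fft.fft(u, axis=1) * symbol, axis=1))
```

Because the operator is diagonal in the Fourier basis, linear systems involving it can later be solved pointwise in the frequency domain.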

In the following, we use what is frequently called the split Bregman iteration to solve the proposed model.

Here, we define auxiliary splitting variables for the gradient, the second-order derivatives, and the shearlet coefficients.

Note that the resulting subproblems are decoupled and can be handled separately.

It is difficult to solve the minimization problem jointly in all variables, so we minimize alternately with respect to each variable in turn.
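To make the alternating structure concrete, here is a sketch of the standard split Bregman device with hypothetical symbols (the paper's own equations are elided): each regularized quantity receives an auxiliary variable and a Bregman variable,

```latex
% splitting variables: d_1 \approx \nabla u,\ d_2 \approx \nabla^2 u,\ w \approx SH\,u
\min_{u,\,d_1,\,d_2,\,w}\ \frac{\mu}{2}\|Ku-f\|_2^2
  + \alpha\|d_1\|_1 + \beta\|d_2\|_1 + \gamma\|w\|_p^p
  + \frac{\lambda_1}{2}\|d_1-\nabla u-b_1\|_2^2
  + \frac{\lambda_2}{2}\|d_2-\nabla^2 u-b_2\|_2^2
  + \frac{\lambda_3}{2}\|w-SH\,u-b_3\|_2^2,
```

after which each Bregman variable is updated as b_i <- b_i + (A_i u - d_i), where A_i denotes the corresponding linear operator.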

We now describe how we solve each of the four subproblems.

Subproblem 1 can be solved through its optimality condition, which reads as follows:

The linear system arising from this condition involves only operators that are diagonalized by the discrete Fourier transform under periodic boundary conditions.

Now by solving the above optimality condition, we get
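To illustrate how such an optimality condition is solved under periodic boundary conditions, the following sketch solves the simpler prototype min_u mu/2*||u - f||^2 + lam/2*||Du - v||^2 exactly in the Fourier domain; all names and weights are illustrative, not the paper's exact subproblem:

```python
import numpy as np

def solve_quadratic_subproblem(f, v1, v2, mu, lam):
    """Solve (mu*I + lam*D^T D) u = mu*f + lam*D^T v via the FFT,
    i.e. the optimality condition of
        min_u  mu/2 * ||u - f||^2 + lam/2 * ||D u - v||^2
    with D the periodic forward-difference gradient, v = (v1, v2)."""
    ny, nx = f.shape
    ex = (np.exp(2j * np.pi * np.arange(nx) / nx) - 1.0)[None, :]  # symbol of D_x
    ey = (np.exp(2j * np.pi * np.arange(ny) / ny) - 1.0)[:, None]  # symbol of D_y
    denom = mu + lam * (np.abs(ex) ** 2 + np.abs(ey) ** 2)
    # D^T has Fourier symbol conj(e), so the right-hand side is diagonal too
    rhs = mu * np.fft.fft2(f) + lam * (np.conj(ex) * np.fft.fft2(v1)
                                       + np.conj(ey) * np.fft.fft2(v2))
    return np.real(np.fft.ifft2(rhs / denom))
```

The full model adds further diagonal terms to the denominator (and, for a tight-frame shearlet transform, the shearlet term also contributes a constant), but the pointwise division structure is the same.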

Subproblems 2 and 3 are similar, and their solutions can be expressed explicitly in the form of a shrinkage (soft-thresholding) operation.
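For reference, the standard shrinkage operator, which is the closed-form minimizer of these l1-penalized quadratic subproblems, can be written as:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding: sign(x) * max(|x| - t, 0),
    the closed-form minimizer of  t*|d| + 0.5*(d - x)^2."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

It acts componentwise, so it applies unchanged to the stacked first- and second-order difference variables.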

Now we present the solution of the nonconvex subproblem, which takes the form of a generalized shrinkage.

Next, we show that there is a function whose proximal mapping coincides exactly with this generalized shrinkage, so that the update is well founded.

By letting the nonconvex parameter tend to one, the generalized shrinkage reduces to the classical soft-thresholding.

A related treatment of the nonconvex subproblem is given by Ma et al.

For fixed shearlet coefficients, the remaining variables are updated as described above.

For fixed values of the other variables, the shearlet coefficients are updated by the generalized shrinkage.
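The paper's exact nonconvex shrinkage rule is elided in the source; a widely used stand-in is Chartrand-style p-shrinkage, sketched here. Treat the formula as an assumption, not the paper's verified update:

```python
import numpy as np

def p_shrink(x, t, p):
    """Generalized p-shrinkage in the spirit of Chartrand's nonconvex shrinkage:
        sign(x) * max(|x| - t**(2-p) * |x|**(p-1), 0).
    For p = 1 this reduces to ordinary soft-thresholding; for p < 1 large
    coefficients are shrunk less, which is the point of the nonconvex penalty."""
    mag = np.abs(x)
    with np.errstate(divide='ignore'):
        shrunk = np.maximum(mag - t ** (2.0 - p) * mag ** (p - 1.0), 0.0)
    # zero coefficients stay zero (guard against 0**(p-1) blowing up)
    return np.sign(x) * np.where(mag > 0, shrunk, 0.0)
```

Compared with soft-thresholding at the same threshold, large coefficients (edges, textures) lose less energy, which is consistent with the robustness claim in the text.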

The resulting algorithm is summarized as follows.

Initialisation: set the initial image estimate to the observed data and all auxiliary and Bregman variables to zero. Each iteration then performs, in turn, the Fourier-domain image update, the shrinkage updates for the first- and second-order difference variables, the generalized shrinkage update for the shearlet coefficients, and the Bregman variable updates, until a stopping criterion is met.
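The split Bregman skeleton can be made concrete in the plain anisotropic TV denoising special case (no TV^{2} or shearlet term); the following runnable sketch follows the Goldstein-Osher pattern, with parameter names and values that are illustrative rather than the paper's:

```python
import numpy as np

def tv_denoise_split_bregman(f, alpha=0.15, mu=1.0, lam=0.3, n_iter=50):
    """Sketch of anisotropic TV denoising by split Bregman iteration.
    The full model in the text adds TV^2 and a nonconvex shearlet penalty
    on top of this scaffold."""
    ny, nx = f.shape
    ex = (np.exp(2j * np.pi * np.arange(nx) / nx) - 1.0)[None, :]  # symbol of D_x
    ey = (np.exp(2j * np.pi * np.arange(ny) / ny) - 1.0)[:, None]  # symbol of D_y
    denom = mu + lam * (np.abs(ex) ** 2 + np.abs(ey) ** 2)

    Dx = lambda a: np.roll(a, -1, axis=1) - a   # periodic forward differences
    Dy = lambda a: np.roll(a, -1, axis=0) - a
    shrink = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_iter):
        # u-subproblem: quadratic, solved exactly in the Fourier domain
        rhs = mu * np.fft.fft2(f) + lam * (np.conj(ex) * np.fft.fft2(dx - bx)
                                           + np.conj(ey) * np.fft.fft2(dy - by))
        u = np.real(np.fft.ifft2(rhs / denom))
        # d-subproblems: componentwise soft-thresholding
        dx = shrink(Dx(u) + bx, alpha / lam)
        dy = shrink(Dy(u) + by, alpha / lam)
        # Bregman variable updates
        bx = bx + Dx(u) - dx
        by = by + Dy(u) - dy
    return u
```

The proposed algorithm extends this loop with the second-order difference variables and the nonconvex shearlet-coefficient update.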

In this section, we perform image restoration experiments using our proposed method (TV+TV^{2} with the nonconvex shearlet penalty) and compare it with the TV method and the TV+TV^{2} method. All computations of the present paper are carried out in Matlab 7.0. The results are obtained by running the Matlab codes on an Intel(R) Core(TM) i3 CPU (3.20 GHz) computer with 1.92 GB of RAM.

The quality of the restoration results with different methods is compared quantitatively using the peak signal-to-noise ratio (PSNR), the mean-squared error (MSE), and the structural similarity index (SSIM), defined as follows:
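The formulas themselves are elided in the source; standard MSE and PSNR can be computed as below. The paper's reported PSNR values suggest a nonstandard normalization, so treat the peak value here as an assumption; SSIM is more involved and is available in, e.g., scikit-image:

```python
import numpy as np

def mse(u, u_ref):
    """Mean-squared error between a restored image and the reference."""
    u = np.asarray(u, dtype=float)
    u_ref = np.asarray(u_ref, dtype=float)
    return np.mean((u - u_ref) ** 2)

def psnr(u, u_ref, peak=1.0):
    """Standard peak signal-to-noise ratio in dB, for images scaled to [0, peak]."""
    return 10.0 * np.log10(peak ** 2 / mse(u, u_ref))
```

Higher PSNR and SSIM, and lower MSE, indicate better restoration quality.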

In our denoising implementation, the linear operator is taken to be the identity, and we compare with the TV restoration model and the TV+TV^{2} restoration model.

We test our method on a cropped Lena image, comparing it with the TV regularization method and the TV+TV^{2} regularization method, with the parameters of each method tuned for its best performance.

Comparison of Gaussian noise removal. First column, from top to bottom: original image; noisy image (PSNR = 58.1704, MSE = 0.0998, and SSIM = 0.4210); results of the TV method (PSNR = 63.3717, MSE = 0.0521, and SSIM = 0.8224); results of the TV+TV^{2} method (PSNR = 63.9644, MSE = 0.0298, and SSIM = 0.8512); and results of our proposed method.

Denoising performance comparisons on the Lena image with different methods.

Iterations versus PSNR (dB)

Iterations versus SSIM

In our deblurring implementation, the linear operator is a blurring operator, and we again compare with the TV model and the TV+TV^{2} model. As in the previous experiments, we keep the same general parameter settings.

For visual appreciation, the corresponding restoration results are reported in the figures. The TV+TV^{2} solution produces smoother results but loses some details of the image. The result of our method achieves a better visual effect and better quantitative indices than the other methods.

Comparison of the blurry and noisy image restoration. First column, from top to bottom: original image; degraded image (PSNR = 59.0771, ISNR = 0.0, MSE = 0.0804, and SSIM = 0.4112); results of the TV method (PSNR = 62.2991, MSE = 0.0401, ISNR = 3.2220, and SSIM = 0.6907); results of the TV+TV^{2} method (PSNR = 62.8713, MSE = 0.0344, ISNR = 3.7942, and SSIM = 0.8149); and results of our proposed method.

Deblurring performance comparisons on the Goldhill image with different methods.

Iterations versus ISNR (dB)

Iterations versus SSIM

Now we show the performance of the proposed method on incomplete spectral data. Here, the linear operator is a partial DCT sampling operator, and we compare with the TV method and the TV+TV^{2} method.

The corresponding reconstruction results are reported in the figures. For the Shepp-Logan phantom, the results of the compared methods look very similar, and it is difficult to observe any significant visual differences. However, the proposed method demonstrates a distinct advantage in the reconstruction of the Brain image.

Comparison of Shepp-Logan phantom image reconstruction from 9.36% DCT data. First column, from top to bottom: original image; back projection image (PSNR = 56.9194, MSE = 0.1322, and SSIM = 0.2790); results of the TV method (PSNR = 65.1352, MSE = 0.0274, and SSIM = 0.9768); results of the TV+TV^{2} method (PSNR = 65.3658, MSE = 0.0189, and SSIM = 0.9820); and results of our proposed method (PSNR = 66.6062, MSE = 0.0142, and SSIM = 0.9896). Second column: close-up image in the red box. Third column: 3D close-up intensity profile in the red box. Fourth column: SSIM map.

Comparison of reconstruction of the Brain image from 9.36% DCT data. First column, from top to bottom: original image; back projection image (PSNR = 58.0251, MSE = 0.1025, and SSIM = 0.3919); results of the TV method (PSNR = 59.8861, MSE = 0.0668, and SSIM = 0.5376); results of the TV+TV^{2} method (PSNR = 62.5794, MSE = 0.0395, and SSIM = 0.7519); and results of our proposed method.

Reconstruction performance (Iterations versus MSE) comparisons on two test images.

Shepp-Logan phantom image

Brain image

We presented a compound regularization method that combines TV+TV^{2} regularization with a nonconvex sparseness-inducing shearlet penalty. The TV+TV^{2} regularization combines convex functions of the total variation and the total variation of the first derivatives; it leads to a significant reduction of the staircase effects commonly seen in traditional TV-based image processing algorithms. The introduction of the nonconvex penalty increases robustness to noise and image nonsparsity.

Future research will investigate theoretical rules for selecting the regularization parameters and the nonconvex parameter.

The authors declare that there is no conflict of interests regarding the publication of this paper.

The authors would like to thank the National Natural Science Foundation of China (no. 61271452), the Key Project of Chinese Ministry of Education (no. 2011155), and the Chongqing Natural Science Foundation (no. 2011jjA40029) for supporting this work. They thank Rick Chartrand and Weihong Guo for providing them with very useful suggestions that improved the presentation of the paper.