Mathematical Problems in Engineering, Volume 2017, Article ID 9370984. DOI: 10.1155/2017/9370984. Hindawi.

Research Article

Multiplicative Noise Removal Based on the Linear Alternating Direction Method for a Hybrid Variational Model

Yan Hao,1 Jianlou Xu,1 Fengyun Zhang,2 and Xiaobo Zhang3

1 School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
2 School of Information Engineering, Shandong Youth University of Political Science, Jinan 250103, China
3 Institute of Graphics and Image Processing, Xianyang Normal University, Xianyang 712000, China

Academic Editor: Pasquale Memmolo

Received 6 January 2017; Accepted 7 March 2017; Published 15 May 2017

Copyright © 2017 Yan Hao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

To preserve edges, multiplicative noise removal models based on total variation (TV) regularization have been widely studied, but they suffer from the staircase effect. In this paper, to preserve edges while reducing the staircase effect, we develop a hybrid variational model for multiplicative noise removal based on the variable splitting method. The new model has a strictly convex objective function that combines the TV regularization with a modified regularization term. We use the linear alternating direction method to find the minimizer and give a convergence proof for the proposed algorithm. Experimental results verify that the proposed model obtains better results for removing multiplicative noise than recent methods.

Funding: National Natural Science Foundation of China (U1504603, 61301229, 61401383); Key Scientific Research Project of Colleges and Universities in Henan Province (15A110020).
1. Introduction

Image denoising is one of the fundamental problems in image processing and computer vision. A recorded image may be distorted by unexpected random noise; for example, Synthetic Aperture Radar (SAR) [1, 2] and ultrasound images are often strongly corrupted by multiplicative noise, so removing it is very important. In this paper, we focus on the following multiplicative noise model:

(1) f_0 = f\varphi,

where f_0 > 0 is the observed image and f > 0 is the original image. The multiplicative noise \varphi follows a Gamma law with mean one, with probability density function

(2) g(\varphi) = \frac{M^M}{\Gamma(M)}\,\varphi^{M-1}\exp(-M\varphi),

where M is the number of looks and \Gamma(\cdot) is the Gamma function.
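To make the degradation model (1)-(2) concrete, the following NumPy sketch simulates Gamma speckle on a clean image; the function name and the constant test image are our own illustrative choices, not code from the paper.

```python
import numpy as np

def gamma_speckle(f, M, seed=None):
    # Corrupt a clean image f > 0 with multiplicative Gamma noise of mean one,
    # following (1)-(2): f0 = f * phi with phi ~ Gamma(shape=M, scale=1/M).
    # Larger M (more looks) means weaker noise, since Var(phi) = 1/M.
    rng = np.random.default_rng(seed)
    phi = rng.gamma(shape=M, scale=1.0 / M, size=f.shape)
    return f * phi

# Example: the noise has mean 1, so the corrupted image keeps the clean
# image's mean but gains speckle with relative standard deviation 1/sqrt(M).
clean = np.full((256, 256), 128.0)
noisy = gamma_speckle(clean, M=5, seed=0)
```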

There are many research works in the field of denoising. For example, Rosa-Zurera et al. and Sveinsson and Benediktsson used wavelet methods to reduce speckle noise. Ramamoorthy et al. gave an efficient method for speckle reduction in ultrasound liver images. To remove multiplicative noise, Aubert and Aujol used the maximum a posteriori estimator to establish a nonconvex variational model (the AA model):

(3) \min_f \|f\|_{TV} + \lambda \int_\Omega \left( \log f + \frac{f_0}{f} \right) dx,

where \|f\|_{TV} = \int_\Omega |\nabla f|\,dx is the total variation (TV) regularization term and \lambda > 0 is a regularization parameter.

Because the AA model is nonconvex, it does not have a global minimal solution. Later, Shi and Osher used the logarithm transformation to derive a globally convex model (the SO model):

(4) \min_\xi \|\xi\|_{TV} + \lambda \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx,

where \xi = \log f, the second term is the data fidelity term, and \lambda > 0 is a regularization parameter. Using a relaxed inverse scale space flow method, the authors obtained an excellent denoising effect, although it took a long time. To improve the speed, Huang et al. introduced a new variational model through variable splitting. The corresponding minimization problem is

(5) \min_{z,\xi} \|\xi\|_{TV} + \lambda \int_\Omega \left( f_0 \exp(-z) + z \right) dx + \mu \|z - \xi\|_2^2.

Equation (5) was solved by an alternating minimization method, whose two subproblems can be solved by Newton's method and the dual method, respectively. The alternating iterative algorithm ensures that the solution of the model is unique, and the iterative sequence converges to the optimal solution. Similarly to the SO model, Bioucas-Dias and Figueiredo proposed a new speckle reduction scheme combining operator splitting with the augmented Lagrangian method. Taking the I-divergence as the data-fitting term together with the TV regularizer, Steidl and Teuber introduced a variational restoration model. Inspired by the connection between the augmented Lagrangian algorithm and the primal-dual hybrid gradient algorithm, Chen et al. developed an improved primal-dual algorithm for multiplicative noise removal. Durand et al. proposed a hybrid method composed of l1 data-fitting for the curvelet frame coefficients and the total variation. Combining the weighted TV with the data term in (4), a nonconvex sparse regularization variational model was proposed. Using the constrained TV norm, Hao et al. put forward a dual method and its acceleration. For other methods for multiplicative noise removal, we refer the reader to the references.
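The key effect of the logarithm transformation behind (4) can be checked numerically: in the log domain the multiplicative degradation becomes additive and signal-independent. The snippet below is illustrative only; the image size and noise level are our assumptions.

```python
import numpy as np

# With xi = log f, the multiplicative degradation f0 = f * phi becomes the
# additive one log f0 = xi + log phi, so additive-noise machinery (TV,
# inverse scale space) can be applied in the log domain.
rng = np.random.default_rng(0)
f = rng.uniform(1.0, 256.0, size=(64, 64))                  # clean image, f > 0
phi = rng.gamma(shape=5.0, scale=1.0 / 5.0, size=f.shape)   # Gamma noise, mean 1
f0 = f * phi

xi = np.log(f)
residual = np.log(f0) - xi   # equals log(phi): additive, independent of f
```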

However, it is well known that the TV model suffers from the so-called staircase effect, and several methods have been introduced to reduce it. Inspired by these methods, we split the second-order TV regularization term into two low-order regularization terms using variable splitting. To solve the proposed model, we design a linear alternating direction method to find the minimizer of the objective function. Our experimental results show that the proposed method performs well for multiplicative noise removal.

The outline of this paper is as follows. In Section 2, we introduce a hybrid variational model and develop an alternating direction algorithm. In Section 3, we give the convergence analysis of the proposed algorithm. In Section 4, we present numerical experiments demonstrating the proposed algorithm. Finally, conclusions are given in Section 5.

2. The Proposed Model and Algorithm

2.1. The Proposed Model

In , the authors proposed a high-order variational model which successfully alleviates the staircase effect for additive Gaussian noise. Inspired by the splitting idea , we introduce an auxiliary variable in the regularization term  and divide the second-order derivative term into two low-order terms. The aim is both to lower the order of the derivatives of the image and to alleviate the staircase effect; meanwhile, the proposed model contains a TV term which preserves edges. That is, we propose the following hybrid variational model based on variable splitting for multiplicative noise removal:

(6) \min_{v,\xi} E(v,\xi) = \int_\Omega |\nabla v|\,dx + \frac{\lambda}{2} \int_\Omega |\nabla\xi - v|^2\,dx + \alpha \int_\Omega |\nabla\xi|\,dx + \beta \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx,

where \xi = \log f and v is a vector field approximating \nabla\xi. \lambda > 0, \alpha > 0, and \beta > 0 are regularization parameters.
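A discrete evaluation of the objective in (6) makes the four terms concrete. The following NumPy sketch is our own discretization (forward differences, isotropic Frobenius norm for |∇v|); the paper does not specify these choices.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def energy(vx, vy, xi, f0, lam, alpha, beta):
    # Discrete value of E(v, xi) in (6). vx, vy are the two components of
    # the vector field v; xi is the log image.
    jxx, jxy = grad(vx)                         # Jacobian rows of v
    jyx, jyy = grad(vy)
    tv_v = np.sum(np.sqrt(jxx**2 + jxy**2 + jyx**2 + jyy**2))
    gx, gy = grad(xi)
    fit = 0.5 * lam * np.sum((gx - vx)**2 + (gy - vy)**2)
    tv_xi = alpha * np.sum(np.sqrt(gx**2 + gy**2))
    fid = beta * np.sum(f0 * np.exp(-xi) + xi)
    return tv_v + fit + tv_xi + fid
```

For instance, with v = 0, ξ = 0, and f_0 ≡ 1, only the fidelity term survives and the energy equals β times the number of pixels.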

The proposed model has the following advantages. Firstly, it is strictly convex and has a unique minimal solution, unlike the models in [4, 12, 15]. Secondly, when λ tends to infinity, v tends to ∇ξ, and the first two terms turn into the second-order TV proposed in . In that work, it was shown that the second-order TV removes additive noise and alleviates the staircase effect better than TV in smooth regions. Hence the proposed model can remove heavy noise and preserve the structures of the restored image through the parameters λ and α. Thirdly,  also studied a high-order variational model for multiplicative noise removal; however, it requires the restored image to belong to a more complex function space (see ).

2.2. The Proposed Algorithm

To solve the proposed model (6), we use the following alternating direction method, which splits the model into two minimization subproblems:

(7) v^{k+1} = \arg\min_v \int_\Omega |\nabla v|\,dx + \frac{\lambda}{2} \int_\Omega |\nabla\xi^k - v|^2\,dx,

(8) \xi^{k+1} = \arg\min_\xi \frac{\lambda}{2} \int_\Omega |\nabla\xi - v^{k+1}|^2\,dx + \alpha \int_\Omega |\nabla\xi|\,dx + \beta \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx.

From (7) and (8), it is interesting to see that the iteration is similar to the ideas in , which contain two steps: first obtain a smooth vector field, and then recover the image using the smoothed vector field from the first step. In that work, because two low-order variational models are used to restore the image, the second step preserves edges and details better than ; meanwhile, it has been shown to reduce the staircase effect well. The difference between  and the proposed iterative method is that in our algorithm the vector field and the reconstructed image are intertwined, while theirs are decoupled, so the proposed iterative method removes noise and preserves edges better.

For (7), the vector field v can be solved by Chambolle's dual method , which gives

(9) v^{k+1} = \nabla\xi^k - \frac{1}{\lambda} \operatorname{div} q,

where \operatorname{div} q is the divergence of the matrix field q = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}, \nabla\xi^k = (\xi_x^k, \xi_y^k)^T, \operatorname{div} q = (\operatorname{div}\tilde p_1, \operatorname{div}\tilde p_2)^T, \tilde p_1 = (q_{11}, q_{12}), \tilde p_2 = (q_{21}, q_{22}), and \xi_x^k, \xi_y^k are computed by first-order forward and backward differences, respectively. \tilde p_1 and \tilde p_2 can be found by the fixed point iteration: let \tilde p_1^{k,0} = \tilde p_2^{k,0} = 0 and l = 1, and iterate

(10) \tilde p_1^{k,l} = \frac{\tilde p_1^{k,l-1} + \tau \nabla(\operatorname{div}\tilde p_1^{k,l-1} - \lambda\xi_x^k)}{1 + \tau\,|\nabla(\operatorname{div}\tilde p_1^{k,l-1} - \lambda\xi_x^k)|}, \qquad \tilde p_2^{k,l} = \frac{\tilde p_2^{k,l-1} + \tau \nabla(\operatorname{div}\tilde p_2^{k,l-1} - \lambda\xi_y^k)}{1 + \tau\,|\nabla(\operatorname{div}\tilde p_2^{k,l-1} - \lambda\xi_y^k)|},

where \tau is the time step and the two superscripts k and l denote the outer loop and inner loop, respectively.
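Since (10) applies the same fixed-point iteration to each component of the vector field, it suffices to implement the scalar dual iteration once. The sketch below follows Chambolle's scheme for min_w |w|_TV + (λ/2)||w − g||²; the discrete grad/div operators and the step size τ = 1/8 are standard choices, not taken verbatim from the paper.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Backward differences: the negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_tv(g, lam, tau=0.125, iters=50):
    # Fixed-point dual iteration (10) for min_w |w|_TV + (lam/2)||w - g||^2;
    # the solution is recovered as w = g - (1/lam) div p.
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(iters):
        ux, uy = grad(div(px, py) - lam * g)
        denom = 1.0 + tau * np.sqrt(ux ** 2 + uy ** 2)
        px = (px + tau * ux) / denom
        py = (py + tau * uy) / denom
    return g - div(px, py) / lam
```

In the v-subproblem, g plays the role of ξ_x^k (respectively ξ_y^k). A constant input is a fixed point (the dual variable stays zero), and the iteration preserves the mean of g while smoothing it.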

For (8), we use the split Bregman method , which is often used to solve optimization problems with an L1 regularization term. To apply the split Bregman method to (8), we introduce an auxiliary variable \eta with \eta = \nabla\xi and add a quadratic penalty, which leads to the unconstrained problem

(11) \min_{\eta,\xi} \frac{\lambda}{2} \int_\Omega |\eta - v^{k+1}|^2\,dx + \frac{\mu}{2} \|\eta - \nabla\xi\|_2^2 + \alpha\|\eta\|_1 + \beta \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx,

where \mu > 0 is a penalty parameter. The split Bregman algorithm reads

(12)
\xi^{k+1} = \arg\min_\xi \beta \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx + \frac{\mu}{2} \|\eta^k - \nabla\xi + b^k\|_2^2,

\eta^{k+1} = \arg\min_\eta \frac{\lambda}{2} \int_\Omega |\eta - v^{k+1}|^2\,dx + \frac{\mu}{2} \|\eta - \nabla\xi^{k+1} + b^k\|_2^2 + \alpha\|\eta\|_1
           = \arg\min_\eta \alpha\|\eta\|_1 + \frac{\lambda+\mu}{2} \left\| \eta - \frac{1}{\lambda+\mu} \left( \lambda v^{k+1} + \mu(\nabla\xi^{k+1} - b^k) \right) \right\|_2^2
           = \frac{1}{\lambda+\mu} T\!\left( \lambda v^{k+1} + \mu(\nabla\xi^{k+1} - b^k),\ \alpha \right),

b^{k+1} = b^k + \eta^{k+1} - \nabla\xi^{k+1},

where T denotes the thresholding (shrinkage) operator defined by

(13) T(x,\gamma) = \frac{x}{|x|} \max(|x| - \gamma, 0), \quad \gamma > 0.
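The η-update in (12) is a closed-form shrinkage. A minimal sketch, written componentwise (the anisotropic variant; the isotropic form of (13) replaces |x| by the vector magnitude), with variable names v, grad_xi, b standing for v^{k+1}, ∇ξ^{k+1}, b^k:

```python
import numpy as np

def soft_threshold(x, gamma):
    # The shrinkage operator (13): T(x, gamma) = x/|x| * max(|x| - gamma, 0),
    # with the convention T(0, gamma) = 0.
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def eta_update(v, grad_xi, b, lam, mu, alpha):
    # The eta-step of (12): combine the two quadratic terms and shrink.
    return soft_threshold(lam * v + mu * (grad_xi - b), alpha) / (lam + mu)
```

With α = 0 the update reduces to the weighted average (λv + μ(∇ξ − b))/(λ + μ), which makes the role of the threshold α visible.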

The first equation in (12) has no closed-form solution. To solve it easily, inspired by the linearization idea in , we expand it in a second-order Taylor formula at \xi^k and replace its Hessian matrix by \frac{1}{\delta} I, where \delta > 0 is a constant; the first equation in (12) then simplifies to

(14) \xi^{k+1} = \arg\min_\xi \left\langle \beta\left(1 - f_0 \exp(-\xi^k)\right) + \mu \operatorname{div}\!\left(\eta^k + b^k - \nabla\xi^k\right),\ \xi - \xi^k \right\rangle + \frac{1}{2\delta} \|\xi - \xi^k\|_2^2.

We can easily obtain the closed-form solution

(15) \xi^{k+1} = \xi^k - \delta \left[ \beta\left(1 - f_0 \exp(-\xi^k)\right) - \mu \operatorname{div}\!\left(\nabla\xi^k - \eta^k - b^k\right) \right].
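The update (15) is a plain explicit gradient step. The sketch below uses our own standard forward-difference gradient and its adjoint divergence (the paper does not fix a discretization); eta and b are passed as (x, y) component pairs.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Backward differences: the negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def xi_step(xi, f0, eta, b, beta, mu, delta):
    # One linearized update (15): explicit gradient step with step size delta
    # (the factor 1/delta stands in for the Hessian of the subproblem).
    gx, gy = grad(xi)
    grad_fid = beta * (1.0 - f0 * np.exp(-xi))                 # data-term gradient
    grad_pen = -mu * div(gx - eta[0] - b[0], gy - eta[1] - b[1])  # penalty gradient
    return xi - delta * (grad_fid + grad_pen)
```

As a sanity check, a stationary point of the subproblem (e.g., ξ = log f_0 constant with η = b = 0) is left unchanged by the step.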

In the end, the complete algorithm for solving the proposed model (6) can be summarized in the following steps.

Algorithm 1.

The linear alternating direction method for solving the proposed model (6)

Initialization. v^0 = 0, \xi^0 = \log f_0, \eta^0 = 0, and b^0 = 0.

Step  1. Compute vk+1 by (9).

Step  2. Compute ξk+1 by (15).

Step  3. Compute ηk+1 by the second formula of (12).

Step  4. Compute bk+1 by the third formula of (12).

Step 5. If the stopping condition is not satisfied, set k ← k + 1 and return to Step 1; otherwise output f^{k+1} = \exp(\xi^{k+1}).
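For concreteness, Algorithm 1 can be sketched end to end in NumPy as below. This is a reading aid, not the authors' MATLAB implementation: the discrete operators, the isotropic shrinkage, the boundary handling, and the default parameter values (taken from the ranges reported in Section 4) are our assumptions.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Backward differences: the negative adjoint of grad.
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def rof_dual(g, lam, tau=0.125, iters=10):
    # Fixed-point iteration (10) for min_w |w|_TV + (lam/2)||w - g||^2.
    px = np.zeros_like(g); py = np.zeros_like(g)
    for _ in range(iters):
        ux, uy = grad(div(px, py) - lam * g)
        denom = 1.0 + tau * np.sqrt(ux ** 2 + uy ** 2)
        px = (px + tau * ux) / denom
        py = (py + tau * uy) / denom
    return g - div(px, py) / lam

def denoise(f0, lam=1.0, alpha=2.0, beta=5.0, mu=1.0, delta=0.05,
            inner=10, outer=100):
    xi = np.log(f0)                                  # xi^0 = log f0
    ex = np.zeros_like(xi); ey = np.zeros_like(xi)   # eta^0 = 0
    bx = np.zeros_like(xi); by = np.zeros_like(xi)   # b^0 = 0
    for _ in range(outer):
        gx, gy = grad(xi)
        # Step 1: v^{k+1} by (9)-(10), componentwise ROF on the gradient.
        vx = rof_dual(gx, lam, iters=inner)
        vy = rof_dual(gy, lam, iters=inner)
        # Step 2: linearized xi-update (15).
        xi = xi - delta * (beta * (1.0 - f0 * np.exp(-xi))
                           - mu * div(gx - ex - bx, gy - ey - by))
        # Step 3: isotropic shrinkage (12) for eta.
        ngx, ngy = grad(xi)
        wx = lam * vx + mu * (ngx - bx)
        wy = lam * vy + mu * (ngy - by)
        mag = np.sqrt(wx ** 2 + wy ** 2)
        scale = np.maximum(mag - alpha, 0.0) / (np.maximum(mag, 1e-12) * (lam + mu))
        ex, ey = scale * wx, scale * wy
        # Step 4: Bregman update.
        bx = bx + ex - ngx
        by = by + ey - ngy
    return np.exp(xi)                                # Step 5: f = exp(xi)
```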

3. Convergence Analysis of the Proposed Algorithm

From the iterative schemes of the proposed model, we can obtain the following results.

Theorem 2.

Equation (6) is a strictly convex model, and it has a unique solution.

Proof.

For (6), the first and third terms are clearly convex, so we only need to prove that the remaining two terms make the functional strictly convex. Let

(16) J_1(v,\xi) = \int_\Omega |\nabla\xi - v|^2\,dx, \quad J_2(v,\xi) = \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx, \quad J(v,\xi) = \frac{\lambda}{2} J_1(v,\xi) + \beta J_2(v,\xi).

Set g(t) = f_0 \exp(-t) + t; then g''(t) = f_0 \exp(-t) > 0, so g is strictly convex in t, and, for any \gamma \in (0,1), (v_1,\xi_1), and (v_2,\xi_2) with \xi_1 \neq \xi_2, we have

(17) J_2\!\left( \gamma(v_1,\xi_1) + (1-\gamma)(v_2,\xi_2) \right) < \gamma J_2(v_1,\xi_1) + (1-\gamma) J_2(v_2,\xi_2).

For J_1, writing a = \nabla\xi_1 - v_1 and b = \nabla\xi_2 - v_2, the identity |\gamma a + (1-\gamma)b|^2 = \gamma|a|^2 + (1-\gamma)|b|^2 - \gamma(1-\gamma)|a-b|^2 gives, for any \gamma \in (0,1),

(18) J_1\!\left( \gamma v_1 + (1-\gamma)v_2,\ \gamma\xi_1 + (1-\gamma)\xi_2 \right) = \int_\Omega |\gamma a + (1-\gamma)b|^2\,dx \le \gamma J_1(v_1,\xi_1) + (1-\gamma) J_1(v_2,\xi_2),

with strict inequality whenever a \neq b; in particular, when \xi_1 = \xi_2 and v_1 \neq v_2, (18) is strict. Combining (17) and (18), for any distinct (v_1,\xi_1) and (v_2,\xi_2) we have

(19) J\!\left( \gamma(v_1,\xi_1) + (1-\gamma)(v_2,\xi_2) \right) = \frac{\lambda}{2} J_1\!\left( \gamma v_1 + (1-\gamma)v_2,\ \gamma\xi_1 + (1-\gamma)\xi_2 \right) + \beta J_2\!\left( \gamma v_1 + (1-\gamma)v_2,\ \gamma\xi_1 + (1-\gamma)\xi_2 \right) < \gamma J(v_1,\xi_1) + (1-\gamma) J(v_2,\xi_2).

Together with the convexity of the first and third terms, the proposed model (6) is strictly convex, and hence it has a unique solution.

Theorem 3.

Let \{(v^k, \xi^k)\} be generated by (7) and (8). Then \lim_{k\to\infty} \|\xi^{k+1} - \xi^k\| = 0 and \lim_{k\to\infty} \|v^{k+1} - v^k\| = 0.

Proof.

Set

(20) E_1(v) = \int_\Omega |\nabla v|\,dx, \quad E_2(v,\xi) = \frac{\lambda}{2} \int_\Omega |\nabla\xi - v|^2\,dx, \quad E_3(\xi) = \alpha \int_\Omega |\nabla\xi|\,dx + \beta \int_\Omega \left( f_0 \exp(-\xi) + \xi \right) dx,

and then

(21) E(v,\xi) = E_1(v) + E_2(v,\xi) + E_3(\xi).

So

(22) E(v^k,\xi^k) - E(v^k,\xi^{k+1}) = E_2(v^k,\xi^k) - E_2(v^k,\xi^{k+1}) + E_3(\xi^k) - E_3(\xi^{k+1}).

Because E_2 is convex, we have E_2(v^k,\xi^k) - E_2(v^k,\xi^{k+1}) \ge (\xi^k - \xi^{k+1})^T \frac{\partial E_2}{\partial\xi}(\xi^{k+1}). By the second-order Taylor expansion, we obtain

(23) E_3(\xi^k) - E_3(\xi^{k+1}) = (\xi^k - \xi^{k+1})^T \frac{\partial E_3}{\partial\xi}(\xi^{k+1}) + \frac{1}{2} (\xi^k - \xi^{k+1})^T \frac{\partial^2 E_3}{\partial\xi^2} (\xi^k - \xi^{k+1}).

So we have

(24) E(v^k,\xi^k) - E(v^k,\xi^{k+1}) \ge (\xi^k - \xi^{k+1})^T \left( \frac{\partial E_2}{\partial\xi} + \frac{\partial E_3}{\partial\xi} \right)\!(\xi^{k+1}) + \frac{1}{2} (\xi^k - \xi^{k+1})^T \frac{\partial^2 E_3}{\partial\xi^2} (\xi^k - \xi^{k+1}).

In addition, \frac{\partial E}{\partial\xi} = \frac{\partial E_2}{\partial\xi} + \frac{\partial E_3}{\partial\xi}, \xi^{k+1} is the minimizer of E(v^k, \cdot), and E(v^{k+1},\xi^{k+1}) \le E(v^k,\xi^{k+1}), so we get

(25) \frac{\partial E}{\partial\xi}(\xi^{k+1}) = \frac{\partial E_2}{\partial\xi}(\xi^{k+1}) + \frac{\partial E_3}{\partial\xi}(\xi^{k+1}) = 0, \qquad E(v^k,\xi^k) - E(v^{k+1},\xi^{k+1}) \ge E(v^k,\xi^k) - E(v^k,\xi^{k+1}).

Since E_3 is strictly convex and coercive, its Hessian \frac{\partial^2 E_3}{\partial\xi^2} can be bounded from below by a matrix \tau I (\tau > 0), and we obtain

(26) E(v^k,\xi^k) - E(v^{k+1},\xi^{k+1}) \ge E(v^k,\xi^k) - E(v^k,\xi^{k+1}) \ge \frac{\tau}{2} \|\xi^k - \xi^{k+1}\|^2.

Summing the above inequality from k = 0 to \infty and noting that E is bounded below, we have \lim_{k\to\infty} \|\xi^k - \xi^{k+1}\| = 0. Similarly, \lim_{k\to\infty} \|v^{k+1} - v^k\| = 0.

4. Experimental Results

In this section, numerical results are presented to demonstrate the performance of the proposed algorithm. All simulations are performed in MATLAB 7.8 on an Intel Core 3.10-GHz PC. The restoration results are compared with those obtained by the AA model , the Huang et al. model , and the Han et al. model , applied to the same noisy images. To simplify parameter tuning, we normalize all images to size 256 × 256 and normalize the gray intensity range of each image to [1, 256] so that the logarithm can be taken. In the tests, each pixel of an original image is degraded by Gamma noise with mean one, where the noise level is controlled by the value of M in (2); we test all of the following images with three noise levels M ∈ {3, 5, 9}. The Peak Signal-to-Noise Ratio (PSNR) is used to measure the quality of the restored image:

(27) \mathrm{PSNR} = 10 \log_{10} \frac{N \max(f)^2}{\|f - f^*\|_2^2},

where f and f^* denote the restored image and the noise-free image, respectively, and N is the number of pixels.
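Definition (27) can be implemented in a few lines. One ambiguity to note: we read "max f" as the maximum of the reference image; other papers instead fix the peak to the range maximum (e.g., 255), which shifts all PSNR values by the same constant.

```python
import numpy as np

def psnr(restored, clean):
    # PSNR as in (27): 10 log10( N * max(clean)^2 / ||restored - clean||_2^2 ),
    # with N the number of pixels.
    n = clean.size
    err = np.sum((restored - clean) ** 2)
    return 10.0 * np.log10(n * clean.max() ** 2 / err)
```

For example, a uniform error of 1 gray level on an image with peak 100 gives 10 log10(100² / 1) = 40 dB, independent of the image size.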

For a fair comparison, some parameters are those recommended by the respective authors, and others are adjusted so that every compared method achieves its best result. The stopping condition is that each algorithm reaches its best PSNR value or the maximum iteration number 300. The parameter λ and the discrete time step in the AA model are chosen as λ = 0.01 and Δt = 0.5. The parameters in Huang et al.  and Han et al.  are the recommended ones, respectively. In the proposed algorithm, we first fix δ = 0.05 and μ = 1 and then search for the other parameters. We find that with λ ∈ [0.1, 10], α ∈ [1, 3], and β ∈ [2, 8] the proposed model obtains good results on the following test images. From (6), we can see that as λ grows the vector field v tends to the gradient of the log image, and the first two terms approach the second-order term, which removes heavy noise well; similarly, when α is very large, the gradient of the log image is driven toward zero, meaning the restored image becomes very smooth. We therefore take β = 5 and tune the two parameters (λ, α) according to the image and the noise level; in general, for heavier noise we take larger values of (λ, α).

The PSNR values and the running times of the different algorithms are listed in Table 1, with the best PSNR values shown in bold. By inspection of Table 1, we find that the proposed algorithm achieves the highest PSNR for most denoising results; even in the unsuccessful cases, our algorithm yields PSNR comparable to the best values obtained by Han et al. . Regarding running time, different algorithms need different times on different images, so it is hard to judge which algorithm is fastest. For example, although the AA model uses the gradient descent method, its computation is very fast thanks to the large time step. Reference  utilizes a fast algorithm to solve its model, yet it took longer than the other algorithms; this may be related to the recommended parameters, which are chosen to obtain the best PSNR.

The denoising results for different algorithms corresponding to three noise levels.

Image | M | AA model  | Huang et al.  | Han et al.  | The new model
      |   | PSNR Time | PSNR Time     | PSNR Time   | PSNR Time
Boat 3 21.28 12.23 21.94 24.74 21.92 1.61 22.48 6.83
5 22.28 8.06 23.06 13.94 23.15 2.05 23.32 3.78
9 23.57 5.54 24.45 6.21 24.56 6.38 24.68 4.37

Cameraman 3 21.40 11.93 22.34 19.39 23.01 13.68 23.01 16.59
5 22.44 8.25 23.57 9.87 24.04 1.98 24.22 17.26
9 23.89 25.96 25.19 9.91 25.70 13.09 25.64 17.00

House 3 22.73 14.22 22.73 27.64 23.22 27.84 23.89 16.59
5 24.21 10.43 24.54 24.54 24.70 27.01 25.47 17.02
9 25.74 7.05 26.01 25.07 26.25 21.90 26.70 16.86

Lena 3 21.42 11.80 22.79 22.09 23.03 1.82 23.22 3.18
5 22.93 8.02 24.21 11.21 24.41 1.03 24.53 3.79
9 24.33 6.87 25.88 10.61 25.94 4.64 26.03 2.72

Pepper 3 21.40 11.63 23.02 28.83 22.85 1.3 23.69 2.44
5 22.94 8.45 24.56 14.07 24.43 1.55 24.89 2.81
9 24.75 5.89 26.19 14.09 25.92 1.58 26.31 2.14

Toys 3 25.05 9.61 26.03 26.88 26.34 1.69 26.80 2.39
5 26.40 6.70 27.45 16.15 27.60 1.11 27.71 3.20
9 27.87 4.84 29.03 23.24 29.09 1.53 29.22 3.60

Fields 3 24.54 15.17 24.01 24.86 24.32 4.75 24.92 6.75
5 25.49 10.79 25.94 24.51 25.67 6.23 26.33 3.06
9 26.58 7.14 27.02 24.81 26.83 2.82 27.43 2.90

Nimes 3 23.90 4.40 24.59 26.00 24.79 1.22 24.84 1.21
5 25.02 2.60 25.44 1.45 25.82 1.55 25.85 1.01
9 26.57 1.63 26.93 1.60 27.12 2.18 27.40 1.01

Synthetic image 3 27.64 6.61 28.21 25.32 28.87 23.47 28.92 16.57
5 29.12 4.92 29.80 27.34 30.40 28.65 30.52 17.00
9 31.03 3.18 31.71 16.50 32.36 16.75 32.44 17.28

To further evaluate the performance of the different algorithms, we compute the respective average values over all test images for each of the three noise levels in Table 2. From Table 2, we observe that the PSNR of the proposed algorithm exceeds that of Han et al.  by about 0.3 dB on average, Huang's model  by about 0.4 dB, and the AA model  by about 1.3 dB. The average time our algorithm spends is less than that of the other three methods when the noise levels are 3 and 9; when the noise level is 5, its running time is almost the same as AA's. Consequently, we believe that the proposed model performs better on average than the other three models.

The respective average value for the above different noise levels.

M | AA model  | Huang et al.  | Han et al.  | The new model
  | PSNR Time | PSNR Time     | PSNR Time   | PSNR Time
3 23.26 10.84 23.96 25.08 24.26 8.60 24.64 8.06
5 24.54 7.58 25.40 15.90 25.58 7.91 25.87 7.66
9 26.04 7.57 26.93 14.67 27.09 7.87 27.32 7.54

To make a visual comparison of the restored images, we give the restored results of three test images (Boat, House, and Nimes) at three different noise levels in Figure 1. Figure 1(a) shows the clean test images and Figure 1(b) the corresponding noisy images; from left to right, the noise level is indicated by M = 3, 5, and 9, respectively. The respective denoising results are in Figure 2: Figure 2(a) shows the results of the AA method , Figure 2(b) those of Huang's method , Figure 2(c) those of Han's method , and Figure 2(d) those of our method. From Figure 2, we can see that the proposed model obtains better visual quality than the other three methods.

Three clean and three noisy images. (a) Clean images; (b) noisy images. From the left to the right, the noise level is indicated by M = 3, 5, and 9, respectively.

Denoising results of different methods. From (a) to (d), the results are obtained by the AA model , the Huang et al. model , the Han et al. model , and the proposed model, respectively.

To illustrate the advantage of the TV term in (6), we take the Lena and House images with noise level M = 9 as examples. To drop the TV term from (6), we set α = 0, and the other parameters are then varied in [0.01, 20] to obtain the best results. The experimental results are shown in Figure 3. Figures 3(a) and 3(b) are the results for α = 0; Figures 3(c) and 3(d) are the results for α ≠ 0; Figures 3(e) and 3(f) are the PSNR curves for the Lena and House images. From Figures 3(e) and 3(f), we can see that the proposed algorithm obtains better results when α ≠ 0.

The experimental results for α = 0 and α ≠ 0. (a) and (b): results for α = 0; (c) and (d): results for α ≠ 0; (e): evolution curve of PSNR with the iteration number for the Lena image; (f): evolution curve of PSNR with the iteration number for the House image.

Finally, we take the House image as an example to compare the new method with the recent model  at noise level M = 9. Reference  adopts the relaxed alternating direction method and a primal-dual method. The PSNR and error curves are shown in Figure 4. From Figure 4, we can see that the two methods reach almost the same PSNR as the iteration number increases; however, the new method reaches the best PSNR faster than . In addition, the new model is strictly convex and a convergence proof is given, whereas  does not provide a convergence proof.

The curves for PSNR and error. (a) The evolution curves of PSNR with the iteration times; (b) the evolution curves of error with the iteration times.

5. Conclusions

In this paper, we have studied a hybrid variational model for the multiplicative noise removal problem. The proposed model is strictly convex and has a unique solution. To solve it, a linear alternating minimization algorithm is employed, which splits the original problem into two subproblems; these are solved by the dual method and the split Bregman method, respectively. Using a second-order Taylor expansion, we obtain a closed-form update. In addition, we give a convergence analysis of the proposed algorithm. Experimental results show that the proposed method is more effective than the other three compared methods.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (nos. U1504603, 61301229, and 61401383) and the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15A110020).

References

1. M. Rosa-Zurera, A. M. Cobreces-Alvarez, J. C. Nieto-Borge, M. P. Jarabo-Amores, and D. Mata-Moya, "Wavelet denoising with edge detection for speckle reduction in SAR images," in Proceedings of the 15th European Signal Processing Conference, 2007, pp. 1098–1102.
2. J. Sveinsson and J. Benediktsson, "Speckle reduction and enhancement of SAR images in the wavelet domain," in Proceedings of the International Geoscience and Remote Sensing Symposium, vol. 1, 1996, pp. 63–66.
3. S. Ramamoorthy, R. Subramanian, and D. Gandhi, "An efficient method for speckle reduction in Ultrasound liver images for e-Health applications," Lecture Notes in Computer Science, vol. 8337, pp. 311–321, 2014.
4. G. Aubert and J.-F. Aujol, "A variational approach to removing multiplicative noise," SIAM Journal on Applied Mathematics, vol. 68, no. 4, pp. 925–946, 2008.
5. J. Shi and S. Osher, "A nonlinear inverse scale space method for a convex multiplicative noise model," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 294–321, 2008.
6. Y. Huang, M. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal," SIAM Journal on Imaging Sciences, vol. 2, no. 1, pp. 20–40, 2009.
7. A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
8. J. M. Bioucas-Dias and M. A. Figueiredo, "Multiplicative noise removal using variable splitting and constrained optimization," IEEE Transactions on Image Processing, vol. 19, no. 7, pp. 1720–1730, 2010.
9. G. Steidl and T. Teuber, "Removing multiplicative noise by Douglas-Rachford splitting methods," Journal of Mathematical Imaging and Vision, vol. 36, no. 2, pp. 168–184, 2010.
10. D.-Q. Chen, X.-P. Du, and Y. Zhou, "Primal-dual algorithm based on Gauss-Seidel scheme with application to multiplicative noise removal," Journal of Computational and Applied Mathematics, vol. 292, pp. 609–622, 2016.
11. S. Durand, J. Fadili, and M. Nikolova, "Multiplicative noise removal using L1 fidelity on frame coefficients," Journal of Mathematical Imaging and Vision, vol. 36, no. 3, pp. 201–226, 2010.
12. Y. Han, X.-C. Feng, G. Baciu, and W.-W. Wang, "Nonconvex sparse regularizer based speckle noise removal," Pattern Recognition, vol. 46, no. 3, pp. 989–1001, 2013.
13. Y. Hao and J. Xu, "An effective dual method for multiplicative noise removal," Journal of Visual Communication and Image Representation, vol. 25, no. 2, pp. 306–312, 2014.
14. W. Feng, H. Lei, and Y. Gao, "Speckle reduction via higher order total variation approach," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1831–1843, 2014.
15. Y. Hao, J. Xu, S. Li, and X. Zhang, "A variational model based on split Bregman method for multiplicative noise removal," AEU—International Journal of Electronics and Communications, vol. 69, no. 9, pp. 1291–1296, 2015.
16. H. Woo and S. Yun, "Alternating minimization algorithm for speckle reduction with a shifting technique," IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1701–1714, 2012.
17. D.-Q. Chen and L.-Z. Cheng, "Fast linearized alternating direction minimization algorithm with adaptive parameter selection for multiplicative noise removal," Journal of Computational and Applied Mathematics, vol. 257, pp. 29–45, 2014.
18. D.-Q. Chen and Y. Zhou, "Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint," Journal of Scientific Computing, vol. 60, no. 3, pp. 483–504, 2014.
19. H. Ren, L. Qin, and X. Zhu, "Speckle reduction and cartoon-texture decomposition of ophthalmic optical coherence tomography images by variational image decomposition," Optik, vol. 127, no. 19, pp. 7809–7821, 2016.
20. Y. Wu and X. Feng, "Speckle noise reduction via nonconvex high total variation approach," Mathematical Problems in Engineering, vol. 2015, Article ID 627417, 2015.
21. V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, B. Javidi, and P. Ferraro, "Quasi noise-free digital holography," Light: Science & Applications, vol. 5, e16142, 2016.
22. V. Bianco, P. Memmolo, M. Paturzo, and P. Ferraro, "On-speckle suppression in IR digital holography," Optics Letters, vol. 41, pp. 5226–5229, 2016.
23. M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," IEEE Transactions on Image Processing, vol. 12, no. 12, pp. 1579–1589, 2003.
24. M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting," IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1345–1357, 2004.
25. J. Xu, Y. Hao, and H. Song, "A modified LOT model for image denoising," Multimedia Tools and Applications, vol. 76, no. 6, pp. 8131–8144, 2017.
26. T. Goldstein and S. Osher, "The split Bregman method for L1 regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
27. C. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," SIAM Journal on Imaging Sciences, vol. 3, no. 3, pp. 300–339, 2010.
28. C. Chen and G. Xu, "A new linearized split Bregman iterative algorithm for image reconstruction in sparse-view X-ray computed tomography," Computers and Mathematics with Applications, vol. 71, pp. 1537–1559, 2016.
29. J. Xu, X. Feng, and Y. Hao, "A coupled variational model for image denoising using a duality strategy and split Bregman," Multidimensional Systems and Signal Processing, vol. 25, no. 1, pp. 83–94, 2014.
30. S. Setzer, G. Steidl, and T. Teuber, "Deblurring Poissonian images by split Bregman techniques," Journal of Visual Communication and Image Representation, vol. 21, no. 3, pp. 193–199, 2010.
31. H. Woo and S. Yun, "Proximal linearized alternating direction method for multiplicative denoising," SIAM Journal on Scientific Computing, vol. 35, no. 2, pp. B336–B358, 2013.