To preserve edges, multiplicative noise removal models based on total variation regularization have been widely studied, but they suffer from the staircase effect. In this paper, to preserve edges and reduce the staircase effect, we develop a hybrid variational model based on the variable splitting method for multiplicative noise removal; the new model has a strictly convex objective function containing the total variation regularization and a modified regularization term. We use a linear alternating direction method to find the minimizer and also give a convergence proof of the proposed algorithm. Experimental results verify that the proposed model obtains better results for removing multiplicative noise than recent methods.
1. Introduction
Image denoising is one of the fundamental problems in image processing and computer vision. A recorded image may be distorted by unexpected random noise; for example, Synthetic Aperture Radar (SAR) [1, 2] and Ultrasound images [3] are often strongly corrupted by multiplicative noise, so removing it is an important task. In this paper, we focus on the multiplicative noise removal problem

(1) $f_0 = f\varphi$,

where $f_0 > 0$ is the observed image and $f > 0$ is the original image. $\varphi$ is the multiplicative noise, which follows a Gamma law with mean one; its probability density function is

(2) $g(\varphi) = \dfrac{M^M}{\Gamma(M)}\,\varphi^{M-1}\exp(-M\varphi)$,

where $M$ is the number of looks and $\Gamma(\cdot)$ is the Gamma function.
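As a concrete illustration of the degradation model (1)–(2), the following Python/NumPy sketch (an assumption of this edit, not code from the paper) simulates a speckled observation: a Gamma field with shape $M$ and scale $1/M$ has mean one, so the speckle preserves the mean intensity while its variance scales like $1/M$.

```python
import numpy as np

def add_gamma_speckle(f, M, seed=None):
    """Degrade a clean image f > 0 as in Eq. (1): f0 = f * phi, where
    phi ~ Gamma(shape=M, scale=1/M) has mean one (Eq. (2)), so the
    speckle preserves the mean while its variance scales like 1/M."""
    rng = np.random.default_rng(seed)
    phi = rng.gamma(shape=M, scale=1.0 / M, size=f.shape)
    return f * phi

# A flat 256x256 test image at gray level 100: the speckled observation
# keeps the mean near 100 but is heavily corrupted for small M.
f = np.full((256, 256), 100.0)
f0 = add_gamma_speckle(f, M=3, seed=0)
```

Smaller $M$ (fewer looks) gives stronger speckle, matching the noise levels $M \in \{3, 5, 9\}$ used in the experiments of Section 4.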
There are many research works in the field of denoising. For example, Rosa-Zurera et al. [1] and Sveinsson and Benediktsson [2] used wavelet methods to reduce speckle noise, and Ramamoorthy et al. [3] gave an efficient method for speckle reduction in Ultrasound liver images. To remove multiplicative noise, Aubert and Aujol [4] used the maximum a posteriori estimator to establish a nonconvex variational model (the AA model)

(3) $\min_f \|f\|_{TV} + \lambda \int_\Omega \left( \log f + \dfrac{f_0}{f} \right) dx$,

where $\|f\|_{TV} = \int_\Omega |\nabla f|\,dx$ is the total variation (TV) regularization term and $\lambda > 0$ is a regularization parameter.
Because the AA model is nonconvex, it need not have a global minimizer. Later, Shi and Osher [5] used the logarithm transformation to derive a globally convex model (the SO model)

(4) $\min_\xi \|\xi\|_{TV} + \lambda \int_\Omega \left( f_0\exp(-\xi) + \xi \right) dx$,

where $\xi = \log f$, the second term is the data fidelity term, and $\lambda > 0$ is a regularization parameter. Using a relaxed inverse scale space flow, the authors obtained an excellent denoising effect, although it took a long time. To improve the speed, Huang et al. [6] introduced a new variational model through variable splitting, with the minimization problem

(5) $\min_{z,\xi} \|\xi\|_{TV} + \lambda \int_\Omega \left( f_0\exp(-z) + z \right) dx + \mu \|z - \xi\|_2^2$.

Equation (5) was solved by an alternating minimization method, whose two subproblems can be solved by Newton's method and the dual method [7], respectively. The alternating iterative algorithm ensures that the solution of the model is unique, and the iterative sequence converges to the optimal solution. Similarly to the SO model, Bioucas-Dias and Figueiredo [8] proposed a new speckle reduction scheme combining operator splitting with the augmented Lagrangian method. Taking the I-divergence as the data-fitting term together with the TV regularizer, Steidl and Teuber [9] introduced a variational restoration model. Inspired by the connection between the augmented Lagrangian algorithm and the primal-dual hybrid gradient algorithm, Chen et al. [10] developed an improved primal-dual algorithm for multiplicative noise removal. Durand et al. [11] proposed a hybrid method combining $l_1$ data-fitting on curvelet frame coefficients with the total variation. Combining the weighted TV with the data term in (4), a nonconvex sparse regularization variational model was proposed in [12]. Using the constrained TV norm, Hao et al. [13] put forward a dual method and its acceleration. For other methods for multiplicative noise removal, refer to [14–22].
However, it is well known that the TV model suffers from the so-called staircase effect. To reduce the staircase effect, several methods have been introduced [23–25]. Inspired by the method in [6], we separate the second-order TV regularization term into two low-order regularization terms by variable splitting. To solve the proposed model, we design a linear alternating direction method to find the minimizer of the objective function. Our experimental results show that the proposed method performs well for multiplicative noise removal.
The outline of this paper is as follows. In Section 2, we introduce a hybrid variational model and develop an alternating direction algorithm. In Section 3, we give the convergence analysis of the proposed algorithm. In Section 4, we present numerical experiments that demonstrate the proposed algorithm. Finally, conclusions are given in Section 5.
2. The Proposed Model and Algorithm
2.1. The Proposed Model
In [23], the authors proposed a high order variational model, which successfully alleviates the staircase effect for additive Gaussian noise. Inspired by the splitting idea of [6], we introduce an auxiliary variable in the regularization term of [23] and divide the second-order derivative term into two low-order terms. The aim is not only to lower the differential order of the subproblems but also to alleviate the staircase effect; meanwhile, the proposed model contains a TV term which preserves edges. That is to say, we propose a hybrid variational model based on variable splitting for multiplicative noise removal:

(6) $\min_{v,\xi} E(v,\xi) = \int_\Omega |\nabla v|\,dx + \dfrac{\lambda}{2}\int_\Omega |\nabla\xi - v|^2\,dx + \alpha\int_\Omega |\nabla\xi|\,dx + \beta\int_\Omega \left( f_0\exp(-\xi) + \xi \right) dx$,

where $\xi = \log f$ and $v$ is an auxiliary vector field coupled to $\nabla\xi$. $\lambda > 0$, $\alpha > 0$, and $\beta > 0$ are regularization parameters.
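For readers who want to experiment, a minimal discrete evaluation of the energy in (6) might look as follows (Python/NumPy). The forward-difference discretization and the componentwise measurement of the TV of the vector field $v$ are our own assumptions, since the paper does not fix a discretization.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary (last row/column zero)."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def energy(v, xi, f0, lam, alpha, beta):
    """Discrete version of the hybrid energy E(v, xi) of Eq. (6);
    v = (v1, v2) is the auxiliary field, xi = log f."""
    v1, v2 = v
    xix, xiy = grad(xi)
    g1x, g1y = grad(v1)
    g2x, g2y = grad(v2)
    tv_v = np.sum(np.sqrt(g1x**2 + g1y**2) + np.sqrt(g2x**2 + g2y**2))
    coupling = 0.5 * lam * np.sum((xix - v1)**2 + (xiy - v2)**2)
    tv_xi = alpha * np.sum(np.sqrt(xix**2 + xiy**2))
    fidelity = beta * np.sum(f0 * np.exp(-xi) + xi)
    return tv_v + coupling + tv_xi + fidelity
```

A quick sanity check: for a constant image with $\xi = \log f_0$ and $v = \nabla\xi = 0$, every term except the data fidelity vanishes.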
The proposed model has the following advantages. Firstly, it is strictly convex and has a unique minimizer, unlike the models in [4, 12, 15]. Secondly, as $\lambda$ tends to infinity, $v$ tends to $\nabla\xi$ and the first two terms turn into the second-order TV proposed in [23], where it was shown that the second-order TV removes additive noise and alleviates the staircase effect better than TV in smooth regions; thus the proposed model can remove large noise and preserve the structures of the restored image through the parameters $\lambda$ and $\alpha$. Thirdly, [20] also studied a high order variational model for multiplicative noise removal; however, it requires the restored images to belong to a more complex function space (refer to [20]).
2.2. The Proposed Algorithm
To solve the proposed model (6), we use the following alternating direction method, which splits the problem into two minimization subproblems:

(7) $v^{k+1} = \arg\min_v \int_\Omega |\nabla v|\,dx + \dfrac{\lambda}{2}\int_\Omega |\nabla\xi^k - v|^2\,dx$,

(8) $\xi^{k+1} = \arg\min_\xi \dfrac{\lambda}{2}\int_\Omega |\nabla\xi - v^{k+1}|^2\,dx + \alpha\int_\Omega |\nabla\xi|\,dx + \beta\int_\Omega \left( f_0\exp(-\xi) + \xi \right) dx$.
From (7) and (8), it is interesting to see that the iteration is similar in spirit to the ideas in [24], which also contain two steps: the first obtains a smooth vector field, and the second recovers the image from the smoothed vector field. Because two low-order variational models are used to restore the image, the two-step method of [24] preserves edges and details better than [23] and has been shown to reduce the staircase effect well. The difference between [24] and the proposed iterative method is that here the vector field and the reconstructed image are intertwined within one iteration, while in [24] they are computed in two disjoint stages; hence the proposed method can remove noise and preserve edges better.
For (7), the vector field $v$ can be solved by Chambolle's dual method [7]:

(9) $v^{k+1} = \nabla\xi^k - \dfrac{1}{\lambda}\operatorname{div} q$,

where $q = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}$, $\nabla\xi^k = (\xi_x^k, \xi_y^k)^T$, $\operatorname{div} q = (\operatorname{div}\tilde p_1, \operatorname{div}\tilde p_2)^T$ with $\tilde p_1 = (q_{11}, q_{12})$ and $\tilde p_2 = (q_{21}, q_{22})$, and $\xi_x^k, \xi_y^k$ are the first-order forward and backward differences, respectively. $\tilde p_1, \tilde p_2$ can be computed by a fixed point iteration: let $\tilde p_1^{k,0} = \tilde p_2^{k,0} = 0$ and $l = 1$, and iterate

(10) $\tilde p_1^{k,l} = \dfrac{\tilde p_1^{k,l-1} + \tau\nabla\big(\operatorname{div}\tilde p_1^{k,l-1} - \lambda\xi_x^k\big)}{1 + \tau\big|\nabla\big(\operatorname{div}\tilde p_1^{k,l-1} - \lambda\xi_x^k\big)\big|}, \qquad \tilde p_2^{k,l} = \dfrac{\tilde p_2^{k,l-1} + \tau\nabla\big(\operatorname{div}\tilde p_2^{k,l-1} - \lambda\xi_y^k\big)}{1 + \tau\big|\nabla\big(\operatorname{div}\tilde p_2^{k,l-1} - \lambda\xi_y^k\big)\big|}$,

where $\tau$ is the time step and the superscripts $k$ and $l$ denote the outer and inner loops.
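The fixed-point scheme (10) can be sketched in Python as below, following Chambolle's projection iteration [7] applied to one gradient component $g$ ($= \xi_x^k$ or $\xi_y^k$). The step size $\tau = 1/8$, the iteration count, and the discrete gradient/divergence pair (built as adjoint operators) are our own choices, since the paper does not specify them.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(p1, p2):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(p1)
    d[0, :] += p1[0, :]; d[1:-1, :] += p1[1:-1, :] - p1[:-2, :]; d[-1, :] -= p1[-2, :]
    d[:, 0] += p2[:, 0]; d[:, 1:-1] += p2[:, 1:-1] - p2[:, :-2]; d[:, -1] -= p2[:, -2]
    return d

def fixed_point(g, lam, tau=0.125, iters=50):
    """Iteration (10) for one gradient component g (= xi_x^k or xi_y^k)."""
    p1 = np.zeros_like(g); p2 = np.zeros_like(g)
    for _ in range(iters):
        gx, gy = grad(div(p1, p2) - lam * g)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        p1 = (p1 + tau * gx) / denom
        p2 = (p2 + tau * gy) / denom
    return p1, p2

def v_component(g, lam, **kw):
    """One component of Eq. (9): v = g - div(p) / lam."""
    p1, p2 = fixed_point(g, lam, **kw)
    return g - div(p1, p2) / lam
```

The normalization by the denominator keeps the dual field inside the unit ball, $|\tilde p| \le 1$, which is what makes the iteration a projection scheme.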
For (8), we use the split Bregman method [26–30], which is often used to solve optimization problems with an $L_1$ regularization term. We introduce an auxiliary variable $\eta$ with $\eta = \nabla\xi$ and add a quadratic penalty, leading to the unconstrained problem

(11) $\min_{\eta,\xi} \dfrac{\lambda}{2}\int_\Omega |\eta - v^{k+1}|^2\,dx + \dfrac{\mu}{2}\|\eta - \nabla\xi\|_2^2 + \alpha\|\eta\|_1 + \beta\int_\Omega \left( f_0\exp(-\xi) + \xi \right) dx$,

where $\mu > 0$ is a penalty parameter. The split Bregman algorithm is

(12)
$\xi^{k+1} = \arg\min_\xi \beta\int_\Omega \left( f_0\exp(-\xi) + \xi \right) dx + \dfrac{\mu}{2}\|\eta^k - \nabla\xi + b^k\|_2^2$,
$\eta^{k+1} = \arg\min_\eta \dfrac{\lambda}{2}\int_\Omega |\eta - v^{k+1}|^2\,dx + \dfrac{\mu}{2}\|\eta - \nabla\xi^{k+1} + b^k\|_2^2 + \alpha\|\eta\|_1 = \arg\min_\eta \alpha\|\eta\|_1 + \dfrac{\lambda+\mu}{2}\left\|\eta - \dfrac{1}{\lambda+\mu}\left(\lambda v^{k+1} + \mu(\nabla\xi^{k+1} - b^k)\right)\right\|_2^2 = \dfrac{1}{\lambda+\mu}\,T\!\left(\lambda v^{k+1} + \mu(\nabla\xi^{k+1} - b^k),\ \alpha\right)$,
$b^{k+1} = b^k + \eta^{k+1} - \nabla\xi^{k+1}$,

where $T$ denotes the thresholding operator

(13) $T(x,\gamma) = \dfrac{x}{|x|}\max(|x| - \gamma, 0), \quad \gamma > 0$.
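The thresholding operator (13), applied pointwise to a two-component field as in the $\eta$-update, is the standard vector soft-shrinkage. A Python sketch (the small guard against division by zero is our addition):

```python
import numpy as np

def soft_threshold(x1, x2, gamma):
    """Vector soft-shrinkage T(x, gamma) of Eq. (13), applied pointwise
    to a two-component field x = (x1, x2)."""
    mag = np.maximum(np.sqrt(x1 ** 2 + x2 ** 2), 1e-12)
    scale = np.maximum(mag - gamma, 0.0) / mag
    return scale * x1, scale * x2
```

For a point with magnitude 5 and threshold 1, the output magnitude is 4 in the same direction; any point with magnitude below the threshold is set to zero, which is what promotes sparsity in $\eta$.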
The first equation in (12) has no closed-form solution. To solve it easily, inspired by the linearization idea in [31], we expand it in a second-order Taylor formula at $\xi^k$ and replace its Hessian matrix with $\frac{1}{\delta}I$, where $\delta > 0$ is a constant; the first equation in (12) then simplifies to

(14) $\xi^{k+1} = \arg\min_\xi \left\langle \beta\big(1 - f_0\exp(-\xi^k)\big) + \mu\operatorname{div}\big(\eta^k + b^k - \nabla\xi^k\big),\ \xi - \xi^k \right\rangle + \dfrac{1}{2\delta}\|\xi - \xi^k\|_2^2$.

We can easily obtain the closed-form solution

(15) $\xi^{k+1} = \xi^k - \delta\left[\beta\big(1 - f_0\exp(-\xi^k)\big) - \mu\operatorname{div}\big(\nabla\xi^k - \eta^k - b^k\big)\right]$.
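Update (15) is a single explicit gradient step. A Python sketch (with our own forward-difference gradient and its adjoint divergence, which the paper does not specify) together with a simple sanity check: if $f_0 = \exp(\xi^k)$, $\eta^k = \nabla\xi^k$, and $b^k = 0$, both bracketed terms vanish and the step leaves $\xi$ unchanged.

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(p1, p2):
    """Backward-difference divergence, the negative adjoint of grad."""
    d = np.zeros_like(p1)
    d[0, :] += p1[0, :]; d[1:-1, :] += p1[1:-1, :] - p1[:-2, :]; d[-1, :] -= p1[-2, :]
    d[:, 0] += p2[:, 0]; d[:, 1:-1] += p2[:, 1:-1] - p2[:, :-2]; d[:, -1] -= p2[:, -2]
    return d

def update_xi(xi, eta, b, f0, beta, mu, delta):
    """One linearized step, Eq. (15):
    xi <- xi - delta*[beta(1 - f0 e^{-xi}) - mu div(grad(xi) - eta - b)]."""
    gx, gy = grad(xi)
    e1, e2 = eta
    b1, b2 = b
    lap_term = div(gx - e1 - b1, gy - e2 - b2)
    return xi - delta * (beta * (1.0 - f0 * np.exp(-xi)) - mu * lap_term)
```

The step size $\delta$ plays the role of the inverse Hessian bound; too large a $\delta$ makes the explicit step unstable, which is consistent with the small value $\delta = 0.05$ used in Section 4.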
In the end, the complete algorithm for solving the proposed model (6) can be summarized in the following steps.
Algorithm 1.
The linear alternating direction method for solving the proposed model (6)
Initialization: $v^0 = 0$, $\xi^0 = \log f_0$, $\eta^0 = 0$, and $b^0 = 0$.
Step 1. Compute $v^{k+1}$ by (9).
Step 2. Compute $\xi^{k+1}$ by (15).
Step 3. Compute $\eta^{k+1}$ by the second formula of (12).
Step 4. Compute $b^{k+1}$ by the third formula of (12).
Step 5. If the stopping condition is satisfied, output $f^{k+1} = \exp(\xi^{k+1})$; otherwise, return to Step 1.
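Putting the five steps together, a compact Python sketch of Algorithm 1 might read as follows. Everything here (discrete operators, inner-iteration counts, and the default parameter values, which lie inside the ranges reported in Section 4) is our assumption for demonstration, not the paper's MATLAB implementation.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy = np.zeros_like(u); gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(p1, p2):
    d = np.zeros_like(p1)
    d[0, :] += p1[0, :]; d[1:-1, :] += p1[1:-1, :] - p1[:-2, :]; d[-1, :] -= p1[-2, :]
    d[:, 0] += p2[:, 0]; d[:, 1:-1] += p2[:, 1:-1] - p2[:, :-2]; d[:, -1] -= p2[:, -2]
    return d

def dual_div(g, lam, tau=0.125, iters=10):
    """Inner fixed-point iteration (10); returns div(p) for one component g."""
    p1 = np.zeros_like(g); p2 = np.zeros_like(g)
    for _ in range(iters):
        gx, gy = grad(div(p1, p2) - lam * g)
        denom = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        p1 = (p1 + tau * gx) / denom
        p2 = (p2 + tau * gy) / denom
    return div(p1, p2)

def shrink(x1, x2, gamma):
    """Vector soft-thresholding T of Eq. (13), applied pointwise."""
    mag = np.maximum(np.sqrt(x1 ** 2 + x2 ** 2), 1e-12)
    s = np.maximum(mag - gamma, 0.0) / mag
    return s * x1, s * x2

def denoise(f0, lam=1.0, alpha=2.0, beta=5.0, mu=1.0, delta=0.05, iters=100):
    """Sketch of Algorithm 1 for model (6); parameter values are illustrative."""
    xi = np.log(f0)                                   # xi^0 = log f0
    e1 = np.zeros_like(xi); e2 = np.zeros_like(xi)    # eta^0 = 0
    b1 = np.zeros_like(xi); b2 = np.zeros_like(xi)    # b^0 = 0
    for _ in range(iters):
        xx, xy = grad(xi)
        # Step 1: v-update, Eq. (9), one dual solve per component
        v1 = xx - dual_div(xx, lam) / lam
        v2 = xy - dual_div(xy, lam) / lam
        # Step 2: linearized xi-update, Eq. (15)
        xi = xi - delta * (beta * (1.0 - f0 * np.exp(-xi))
                           - mu * div(xx - e1 - b1, xy - e2 - b2))
        xx, xy = grad(xi)
        # Step 3: eta-update by shrinkage, second formula of (12)
        z1 = lam * v1 + mu * (xx - b1)
        z2 = lam * v2 + mu * (xy - b2)
        s1, s2 = shrink(z1, z2, alpha)
        e1, e2 = s1 / (lam + mu), s2 / (lam + mu)
        # Step 4: Bregman update, third formula of (12)
        b1 = b1 + e1 - xx
        b2 = b2 + e2 - xy
    return np.exp(xi)                                 # Step 5: f = exp(xi)
```

In practice one would stop on a relative-change criterion or a PSNR plateau, as the paper does (best PSNR or 300 iterations); the fixed iteration count above keeps the sketch short.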
3. Convergence Analysis of the Proposed Algorithm
From the iterative schemes of the proposed model, we can obtain the following results.
Theorem 2.
Model (6) is strictly convex, and it has a unique solution.
Proof.
For model (6), the first and third terms are clearly convex, so we only need to prove that the second and last terms are strictly convex. Let

(16) $J_1(v,\xi) = \int_\Omega |\nabla\xi - v|^2\,dx, \quad J_2(v,\xi) = \int_\Omega \big(f_0\exp(-\xi) + \xi\big)\,dx, \quad J(v,\xi) = \dfrac{\lambda}{2}J_1(v,\xi) + \beta J_2(v,\xi)$.

First, let $g(t) = f_0\exp(-t) + t$; then $g''(t) = f_0\exp(-t) > 0$, so $g$ is strictly convex in $t$, and for any $\gamma \in (0,1)$, $(v_1,\xi_1)$, and $(v_2,\xi_2)$ with $\xi_1 \neq \xi_2$ we have

(17) $J_2\big(\gamma(v_1,\xi_1) + (1-\gamma)(v_2,\xi_2)\big) < \gamma J_2(v_1,\xi_1) + (1-\gamma)J_2(v_2,\xi_2)$.

Moreover, writing $a = \nabla\xi_1 - v_1$ and $b = \nabla\xi_2 - v_2$, for any $\gamma \in (0,1)$ we get

(18) $J_1\big(\gamma(v_1,\xi_1) + (1-\gamma)(v_2,\xi_2)\big) = \int_\Omega |\gamma a + (1-\gamma)b|^2\,dx = \gamma J_1(v_1,\xi_1) + (1-\gamma)J_1(v_2,\xi_2) - \gamma(1-\gamma)\int_\Omega |a - b|^2\,dx \le \gamma J_1(v_1,\xi_1) + (1-\gamma)J_1(v_2,\xi_2)$.

From (17) and (18),

(19) $J\big(\gamma(v_1,\xi_1) + (1-\gamma)(v_2,\xi_2)\big) = \dfrac{\lambda}{2}J_1\big(\gamma v_1 + (1-\gamma)v_2,\ \gamma\xi_1 + (1-\gamma)\xi_2\big) + \beta J_2\big(\gamma v_1 + (1-\gamma)v_2,\ \gamma\xi_1 + (1-\gamma)\xi_2\big) < \gamma J(v_1,\xi_1) + (1-\gamma)J(v_2,\xi_2)$.

That is, the proposed model (6) is strictly convex, and so it has a unique solution.
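The strict convexity of the pointwise data term $g(t) = f_0 e^{-t} + t$ used in the proof can also be checked numerically. The sketch below (with illustrative values $f_0 = 2$, $t_1 = 0$, $t_2 = 2$, $\gamma = 0.3$ chosen by us) verifies the inequality behind (17) and that the minimizer sits at $t^* = \log f_0$, where $g'(t) = 1 - f_0 e^{-t}$ vanishes.

```python
import numpy as np

def g(t, f0=2.0):
    """Pointwise data term from the proof: g(t) = f0*exp(-t) + t."""
    return f0 * np.exp(-t) + t

# Strict convexity: g(gamma*t1 + (1-gamma)*t2) < gamma*g(t1) + (1-gamma)*g(t2).
t1, t2, gamma = 0.0, 2.0, 0.3
lhs = g(gamma * t1 + (1 - gamma) * t2)
rhs = gamma * g(t1) + (1 - gamma) * g(t2)

# The unique minimizer is t* = log(f0), where g'(t) = 1 - f0*exp(-t) = 0.
ts = np.linspace(-2.0, 4.0, 10001)
t_star = ts[np.argmin(g(ts))]
```

This is exactly why the log transformation $\xi = \log f$ is so convenient: the fidelity term becomes strictly convex with its minimum at $\xi = \log f_0$.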
Theorem 3.
Let $\{\xi^k\}$ and $\{v^k\}$ be generated by (7) and (8); then $\lim_{k\to\infty}\|\xi^{k+1} - \xi^k\| = 0$ and $\lim_{k\to\infty}\|v^{k+1} - v^k\| = 0$.
Proof.
Set

(20) $E_1(v) = \int_\Omega |\nabla v|\,dx, \quad E_2(v,\xi) = \dfrac{\lambda}{2}\int_\Omega |\nabla\xi - v|^2\,dx, \quad E_3(\xi) = \alpha\int_\Omega |\nabla\xi|\,dx + \beta\int_\Omega \big(f_0\exp(-\xi) + \xi\big)\,dx$,

so that

(21) $E(v,\xi) = E_1(v) + E_2(v,\xi) + E_3(\xi)$.

Then

(22) $E(v^k,\xi^k) - E(v^k,\xi^{k+1}) = E_2(v^k,\xi^k) - E_2(v^k,\xi^{k+1}) + E_3(\xi^k) - E_3(\xi^{k+1})$.

Because $E_2$ is convex, $E_2(v^k,\xi^k) - E_2(v^k,\xi^{k+1}) \ge (\xi^k - \xi^{k+1})^T \dfrac{\partial E_2}{\partial\xi}$. By the second-order Taylor expansion, we obtain

(23) $E_3(\xi^k) - E_3(\xi^{k+1}) = (\xi^k - \xi^{k+1})^T \dfrac{\partial E_3}{\partial\xi}(\xi^{k+1}) + \dfrac{1}{2}(\xi^k - \xi^{k+1})^T \dfrac{\partial^2 E_3}{\partial\xi^2}(\xi^k - \xi^{k+1})$.

So we have

(24) $E(v^k,\xi^k) - E(v^k,\xi^{k+1}) \ge (\xi^k - \xi^{k+1})^T \left(\dfrac{\partial E_2}{\partial\xi} + \dfrac{\partial E_3}{\partial\xi}\right) + \dfrac{1}{2}(\xi^k - \xi^{k+1})^T \dfrac{\partial^2 E_3}{\partial\xi^2}(\xi^k - \xi^{k+1})$.

In addition, $\dfrac{\partial E}{\partial\xi} = \dfrac{\partial E_2}{\partial\xi} + \dfrac{\partial E_3}{\partial\xi}$, $\xi^{k+1}$ is the minimizer of $E(v^k,\xi)$, and $E(v^{k+1},\xi^{k+1}) \le E(v^k,\xi^{k+1})$, so we get

(25) $\dfrac{\partial E}{\partial\xi} = \dfrac{\partial E_2}{\partial\xi} + \dfrac{\partial E_3}{\partial\xi} = 0, \qquad E(v^k,\xi^k) - E(v^{k+1},\xi^{k+1}) \ge E(v^k,\xi^k) - E(v^k,\xi^{k+1})$.

Since $E_3$ is strictly convex and coercive, we can bound the Hessian $\dfrac{\partial^2 E_3}{\partial\xi^2}$ from below by a matrix $\tau I$ ($\tau > 0$) and obtain

(26) $E(v^k,\xi^k) - E(v^{k+1},\xi^{k+1}) \ge E(v^k,\xi^k) - E(v^k,\xi^{k+1}) \ge \dfrac{\tau}{2}\|\xi^k - \xi^{k+1}\|^2$.

Summing this inequality from $k = 0$ to $\infty$ and noting that the energy is bounded below, we have $\lim_{k\to\infty}\|\xi^k - \xi^{k+1}\| = 0$. Similarly, $\lim_{k\to\infty}\|v^{k+1} - v^k\| = 0$.
4. Experimental Results
In this section, numerical results are presented to demonstrate the performance of the proposed algorithm. All simulations are performed in MATLAB 7.8 on an Intel Core 3.10 GHz PC. The restoration results are compared with those obtained by the AA model [4], the model of Huang et al. [6], and the model of Han et al. [12], all applied to the same noisy images. To simplify parameter tuning, we normalize the size of all images to 256×256 and the gray intensity range of each image to [1, 256] so that the logarithm of the image is well defined. In the tests, each pixel of an original image is degraded by Gamma noise with mean one; the noise level is controlled by the value of $M$ in (2), and we test all of the following images with three noise levels $M \in \{3, 5, 9\}$. The Peak Signal-to-Noise Ratio (PSNR) is used to measure the quality of a restored image:

(27) $\mathrm{PSNR} = 10\log_{10}\dfrac{N\max(f)^2}{\|f^* - f\|_2^2}$,

where $f^*$ and $f$ denote the restored image and the noise-free image, respectively, and $N$ is the number of pixels of the image.
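The PSNR of (27) can be implemented directly. A quick check: if the restored image differs from the clean one by a constant 1 on a 100-pixel image with peak value 100, then $\|f^* - f\|_2^2 = N$ and the PSNR is $10\log_{10}(\max(f)^2) = 40$ dB.

```python
import numpy as np

def psnr(restored, clean):
    """PSNR as in Eq. (27): 10*log10(N * max(f)^2 / ||f* - f||_2^2),
    with N the number of pixels of the image."""
    n = clean.size
    err = np.sum((restored - clean) ** 2)
    return 10.0 * np.log10(n * clean.max() ** 2 / err)
```

Note that this variant normalizes by the peak of the clean image, as (27) does, rather than by a fixed dynamic range such as 255; the two conventions differ by a constant offset when the peak is not 255.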
For a fair comparison, some parameters are those recommended by the original authors, and the others are tuned so that every compared method achieves its best results. The stopping condition is that an algorithm reaches its best PSNR value or the maximum iteration number 300. The parameter $\lambda$ and the discrete time step in AA are chosen as $\lambda = 0.01$ and $\Delta t = 0.5$. The parameters in Huang et al. [6] and Han et al. [12] are those recommended in the respective papers. In the proposed algorithm, we first fix $\delta = 0.05$ and $\mu = 1$ and then search the other parameters. We find that when $\lambda \in [0.1, 10]$, $\alpha \in [1, 3]$, and $\beta \in [2, 8]$, the proposed model obtains good results for the following test images. From (6), we can see that as $\lambda$ grows, the vector field $v$ tends to the gradient of the log image and the first two terms turn into the second-order term, which removes large noise well; similarly, when $\alpha$ is very large, the gradient of the log image is driven toward zero, so the restored image becomes very smooth. We therefore take $\beta = 5$ and tune the remaining two parameters $(\lambda, \alpha)$ according to the image and the noise level; in general, when the noise level is high, we take larger $(\lambda, \alpha)$ values.
The PSNR values and running times of the different algorithms are listed in Table 1, with the best PSNR values shown in bold. By inspection of Table 1, we find that the proposed algorithm achieves the highest PSNR for most denoising results; even in the unsuccessful cases, it yields PSNR values comparable to the best values obtained by Han et al. [12]. Regarding running time, different algorithms need different times on different images, so it is hard to judge which algorithm is uniformly fast or slow. For example, although the AA model uses the gradient descent method, its computation is very fast owing to the large time step. Reference [6] utilizes a fast algorithm to solve its model but took longer than the other algorithms; the reason may be related to the recommended parameters, which are chosen to obtain better PSNR.
Table 1: The denoising results for different algorithms corresponding to three noise levels (PSNR in dB, time in seconds).

Image        M    AA model [4]       [6]                [12]               The new model
                  PSNR     Time      PSNR     Time      PSNR     Time      PSNR     Time
Boat         3    21.28    12.23     21.94    24.74     21.92    1.61      22.48    6.83
             5    22.28    8.06      23.06    13.94     23.15    2.05      23.32    3.78
             9    23.57    5.54      24.45    6.21      24.56    6.38      24.68    4.37
Cameraman    3    21.40    11.93     22.34    19.39     23.01    13.68     23.01    16.59
             5    22.44    8.25      23.57    9.87      24.04    1.98      24.22    17.26
             9    23.89    25.96     25.19    9.91      25.70    13.09     25.64    17.00
House        3    22.73    14.22     22.73    27.64     23.22    27.84     23.89    16.59
             5    24.21    10.43     24.54    24.54     24.70    27.01     25.47    17.02
             9    25.74    7.05      26.01    25.07     26.25    21.90     26.70    16.86
Lena         3    21.42    11.80     22.79    22.09     23.03    1.82      23.22    3.18
             5    22.93    8.02      24.21    11.21     24.41    1.03      24.53    3.79
             9    24.33    6.87      25.88    10.61     25.94    4.64      26.03    2.72
Pepper       3    21.40    11.63     23.02    28.83     22.85    1.30      23.69    2.44
             5    22.94    8.45      24.56    14.07     24.43    1.55      24.89    2.81
             9    24.75    5.89      26.19    14.09     25.92    1.58      26.31    2.14
Toys         3    25.05    9.61      26.03    26.88     26.34    1.69      26.80    2.39
             5    26.40    6.70      27.45    16.15     27.60    1.11      27.71    3.20
             9    27.87    4.84      29.03    23.24     29.09    1.53      29.22    3.60
Fields       3    24.54    15.17     24.01    24.86     24.32    4.75      24.92    6.75
             5    25.49    10.79     25.94    24.51     25.67    6.23      26.33    3.06
             9    26.58    7.14      27.02    24.81     26.83    2.82      27.43    2.90
Nimes        3    23.90    4.40      24.59    26.00     24.79    1.22      24.84    1.21
             5    25.02    2.60      25.44    1.45      25.82    1.55      25.85    1.01
             9    26.57    1.63      26.93    1.60      27.12    2.18      27.40    1.01
Synthetic    3    27.64    6.61      28.21    25.32     28.87    23.47     28.92    16.57
             5    29.12    4.92      29.80    27.34     30.40    28.65     30.52    17.00
             9    31.03    3.18      31.71    16.50     32.36    16.75     32.44    17.28
In order to evaluate the overall performance of the different algorithms, we compute the respective average values over all images for the three noise levels in Table 2. From Table 2, we observe that the PSNR values of the proposed algorithm exceed those of Han et al. [12] by about 0.3 dB, those of Huang's model [6] by about 0.4 dB, and those of the AA model [4] by about 1.3 dB on average. The average time our algorithm spends is less than that of the other three methods when the noise level is 3 or 9; when the noise level is 5, its running time is almost the same as AA. Consequently, we believe that the proposed model performs better on average than the other three models.
Table 2: The respective average values over all test images for the different noise levels (PSNR in dB, time in seconds).

M    AA model [4]       [6]                [12]               The new model
     PSNR     Time      PSNR     Time      PSNR     Time      PSNR     Time
3    23.26    10.84     23.96    25.08     24.26    8.60      24.64    8.06
5    24.54    7.58      25.40    15.90     25.58    7.91      25.87    7.66
9    26.04    7.57      26.93    14.67     27.09    7.87      27.32    7.54
To make a visual comparison of the restored images, we also give the restored results of three test images with three different noise levels in Figure 1. Figure 1(a) shows the clean test images: Boat, House, and Nimes, respectively. Figure 1(b) shows the corresponding noisy images; from left to right, the noise level is M = 3, 5, and 9, respectively. The respective denoising results are in Figure 2: Figure 2(a) uses the AA method [4], Figure 2(b) uses the method of Huang et al. [6], Figure 2(c) uses the method of Han et al. [12], and Figure 2(d) uses our method. From Figure 2, we can see that the proposed model obtains better visual quality than the other three methods.
Three clean and three noisy images. (a) Clean images; (b) noisy images. From the left to the right, the noise level is indicated by M = 3, 5, and 9, respectively.
Denoising results of different methods. From (a) to (d), the results are obtained by the AA model [4] and the Huang et al. [6] and Han et al. [12] and the proposed model, respectively.
To illustrate the advantage of the TV term in (6), we take the Lena and House images with noise level M = 9 as examples. When (6) does not contain the TV term, we take α = 0 and vary the other parameters in [0.01, 20] to obtain the best results. The experimental results are in Figure 3. Figures 3(a) and 3(b) are the results for α = 0; Figures 3(c) and 3(d) are the results for α ≠ 0; Figures 3(e) and 3(f) are the PSNR curves for the Lena and House images. From Figures 3(e) and 3(f), we can see that the proposed algorithm obtains better results when α ≠ 0.
The experimental results for α = 0 and α ≠ 0. (a) and (b) are the results for α = 0; (c) and (d) are the results for α ≠ 0; (e) is the evolution curve of PSNR with the iteration number for the Lena image; (f) is the evolution curve of PSNR with the iteration number for the House image.
Finally, we take the House image as an example to compare the new method with the recent model [15] for noise level M = 9. Reference [15] adopts a relaxed alternating direction method and a primal-dual method. The PSNR and error curves are shown in Figure 4. From Figure 4, we can see that the two methods obtain almost the same PSNR as the iteration number increases; however, the new method reaches the best PSNR faster than [15]. In addition, the new model is strictly convex and its convergence proof is given, while [15] does not give a convergence proof.
The curves for PSNR and error. (a) The evolution curves of PSNR with the iteration times; (b) the evolution curves of error with the iteration times.
5. Conclusions
In this paper, we have studied a hybrid variational model for the multiplicative noise removal problem. The proposed model is strictly convex and has a unique solution. To solve the model, a linear alternating minimization algorithm is employed, which turns the original problem into two subproblems; we adopt the dual method and the split Bregman method to solve them, respectively, and use a second-order Taylor expansion to obtain a closed-form update. In addition, we give a convergence analysis of the proposed algorithm. Experimental results show that the proposed method is more effective than the other three methods.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by the National Natural Science Foundation of China (nos. U1504603, 61301229, and 61401383) and the Key Scientific Research Project of Colleges and Universities in Henan Province (no. 15A110020).
References
[1] M. Rosa-Zurera, A. M. Cobreces-Alvarez, J. C. Nieto-Borge, M. P. Jarabo-Amores, and D. Mata-Moya, "Wavelet denoising with edge detection for speckle reduction in SAR images," in Proceedings of the 15th European Signal Processing Conference, 2007, pp. 1098–1102.
[2] J. Sveinsson and J. Benediktsson, "Speckle reduction and enhancement of SAR images in the wavelet domain," in Proceedings of the International Geoscience and Remote Sensing Symposium, vol. 1, 1996, pp. 63–66.
[3] S. Ramamoorthy, R. Subramanian, and D. Gandhi, "An efficient method for speckle reduction in Ultrasound liver images for e-Health applications."
[4] G. Aubert and J.-F. Aujol, "A variational approach to removing multiplicative noise."
[5] J. Shi and S. Osher, "A nonlinear inverse scale space method for a convex multiplicative noise model."
[6] Y. Huang, M. Ng, and Y. Wen, "A new total variation method for multiplicative noise removal."
[7] A. Chambolle, "An algorithm for total variation minimization and applications."
[8] J. M. Bioucas-Dias and M. A. Figueiredo, "Multiplicative noise removal using variable splitting and constrained optimization."
[9] G. Steidl and T. Teuber, "Removing multiplicative noise by Douglas-Rachford splitting methods."
[10] D.-Q. Chen, X.-P. Du, and Y. Zhou, "Primal-dual algorithm based on Gauss-Seidel scheme with application to multiplicative noise removal."
[11] S. Durand, J. Fadili, and M. Nikolova, "Multiplicative noise removal using l1 fidelity on frame coefficients."
[12] Y. Han, X.-C. Feng, G. Baciu, and W.-W. Wang, "Nonconvex sparse regularizer based speckle noise removal."
[13] Y. Hao and J. Xu, "An effective dual method for multiplicative noise removal."
[14] W. Feng, H. Lei, and Y. Gao, "Speckle reduction via higher order total variation approach."
[15] Y. Hao, J. Xu, S. Li, and X. Zhang, "A variational model based on split Bregman method for multiplicative noise removal."
[16] H. Woo and S. Yun, "Alternating minimization algorithm for speckle reduction with a shifting technique."
[17] D.-Q. Chen and L.-Z. Cheng, "Fast linearized alternating direction minimization algorithm with adaptive parameter selection for multiplicative noise removal."
[18] D.-Q. Chen and Y. Zhou, "Multiplicative denoising based on linearized alternating direction method using discrepancy function constraint."
[19] H. Ren, L. Qin, and X. Zhu, "Speckle reduction and cartoon-texture decomposition of ophthalmic optical coherence tomography images by variational image decomposition."
[20] Y. Wu and X. Feng, "Speckle noise reduction via nonconvex high total variation approach."
[21] V. Bianco, P. Memmolo, M. Paturzo, A. Finizio, B. Javidi, and P. Ferraro, "Quasi noise-free digital holography."
[22] V. Bianco, P. Memmolo, M. Paturzo, and P. Ferraro, "On-speckle suppression in IR digital holography."
[23] M. Lysaker, A. Lundervold, and X. C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time."
[24] M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting."
[25] J. Xu, Y. Hao, and H. Song, "A modified LOT model for image denoising."
[26] T. Goldstein and S. Osher, "The split Bregman method for L1 regularized problems."
[27] C. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models."
[28] C. Chen and G. Xu, "A new linearized split Bregman iterative algorithm for image reconstruction in sparse-view X-ray computed tomography."
[29] J. Xu, X. Feng, and Y. Hao, "A coupled variational model for image denoising using a duality strategy and split Bregman."
[30] S. Setzer, G. Steidl, and T. Teuber, "Deblurring Poissonian images by split Bregman techniques."
[31] H. Woo and S. Yun, "Proximal linearized alternating direction method for multiplicative denoising."