Journal of Applied Mathematics, Volume 2013, Article ID 238561, doi:10.1155/2013/238561, Hindawi Publishing Corporation

Research Article

Split Bregman Iteration Algorithm for Image Deblurring Using Fourth-Order Total Bounded Variation Regularization Model

Yi Xu, Ting-Zhu Huang, Jun Liu, and Xiao-Guang Lv
School of Mathematical Sciences / Institute of Computational Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China

Academic Editor: Ke Chen

Received 28 December 2012; Accepted 7 April 2013; Published 24 April 2013

Copyright © 2013 Yi Xu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a fourth-order total bounded variation regularization model that effectively reduces undesirable staircase effects. Based on this model, we introduce an improved split Bregman iteration algorithm to compute the optimal solution, and we establish the convergence of the algorithm. Numerical experiments show that the proposed model yields better visual quality than the second-order total bounded variation model proposed by Liu and Huang (2010).

1. Introduction

Image restoration is one of the earliest and most classical linear inverse problems. In this class of problems, a noisy indirect observation $f$ of an original image $u$ is modeled as

(1) $f = Au + n$,

where $A$ is a bounded linear operator representing the convolution and $n$ denotes additive noise.

Equation (1) is a typical ill-posed inverse problem; that is, a small change in $f$ may lead to a huge deviation in the solution $u$. Hence, to maintain numerical stability, regularization methods, which add a regularization term to the energy functional, have been developed. The original scheme introduced by Tikhonov and Arsenin is

(2) $\min_u \int_\Omega |Lu|^2\,dx + \frac{\lambda}{2}\|Au - f\|_2^2$,

where the nonnegative functional $\int_\Omega |Lu|^2\,dx$, which regularizes the solution by enforcing certain prior constraints on the original image, is the regularization (penalty) term, and $\|Au - f\|_2^2$, which measures the violation of the relation between $u$ and its observation $f$, is the fidelity term. The scalar $\lambda$ is called the regularization parameter; it balances fidelity against penalty. With this Tikhonov regularization method, we can compute stable approximations to the original solution. However, this smoothness penalty does not preserve edges, sparsity patterns, or texture well, because of its isotropic smoothing properties. To overcome this shortcoming, Rudin et al. proposed the total variation (TV) based regularization scheme (the ROF model)

(3) $\min_u \int_\Omega |Lu| + \frac{\lambda}{2}\|Au - f\|_2^2$,

where $\Omega \subset \mathbb{R}^2$ is a bounded subset with Lipschitz boundary, $u \in L^1(\Omega)$, and $Lu$ denotes the distributional derivative of $u$. Many computational methods for solving (3) have appeared in recent years. For the moment we may think of the nonsmooth penalty term $\int_\Omega |Lu|$ as the $W^{1,1}(\Omega)$ seminorm; more precisely,

(4) $\int_\Omega |Lu| = \sup\left\{ \int_\Omega u\,\mathrm{div}\,\varphi\,dx : \varphi \in C_c^1(\Omega, \mathbb{R}^d),\ |\varphi| \le 1 \right\}$.

As a result, the bounded variation (BV) norm is $\|u\|_{BV} = \|u\|_{L^1} + \int_\Omega |Lu|$, and the Banach space $BV(\Omega)$ is essentially an extension of $W^{1,1}(\Omega)$. This model has been extremely successful in a wide variety of image restoration problems, such as image denoising [5, 12], signal processing [13, 14], image deblurring, image decomposition, and texture extraction. However, it also tends to produce staircase effects and artificial edges that do not exist in the true image.

In [17, 18], the authors focus on the full norm $\|u\|_{BV} + \rho\|u\|_2^2$ as the regularization term; compared with TV regularization alone, it is preferable owing to its ability to preserve edges in the original image during the reconstruction process. The original problem is then specified as the variational model

(5) $\min \int_\Omega |Lu| + \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2$ over $0 \le u \in K(\Omega) \cap X(\Omega)$,

where $\alpha \ge 0$, $\beta > 0$, $K$ is a closed, convex subset of $L^2(\Omega)$, and $X = L^2(\Omega) \cap BV(\Omega)$. The space $X$ endowed with the norm $\|u\|_X = \|u\|_{L^2} + \|u\|_{BV}$ is a Banach space. Compared with model (3), model (5) employs an additional quadratic regularization term, which has two advantages. First, for $\alpha > 0$, it provides a coercive term on the subspace of constant functions, which lies in the kernel of the operator $L$ (in this model $L$ represents the gradient operator). Second, the quadratic regularization term makes it possible to distinguish the structure of the stability results from that of the nonquadratic $BV$ term.

As mentioned above, this technique preserves edges well, but the images restored by this model are often piecewise constant. To suppress the staircase effect, jumps must be penalized more strongly; this can be achieved by taking the second derivative into account. In this paper we therefore present an improved model in which the second-order diffusive term of model (5) is replaced by a fourth-order diffusive term; the new model substantially reduces the staircase effect while preserving sharp jump discontinuities (edges). The proposed model reads

(6) $\min \int_\Omega |\nabla^2 u| + \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2$ over $0 \le u \in L^2(\Omega) \cap BV^2(\Omega)$,

where the Frobenius norm of the Hessian matrix $\nabla^2 u$ is

(7) $|\nabla^2 u| = \left(u_{xx}^2 + u_{yy}^2 + u_{xy}^2 + u_{yx}^2\right)^{1/2}$,

and $BV^2$ is defined as follows.
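As a concrete illustration, the pointwise Frobenius norm (7) can be evaluated on a discrete image with second-order finite differences. The sketch below is a hypothetical helper (not part of the paper's method), assuming periodic boundary conditions and a symmetric centered stencil for the mixed derivative, so that $u_{xy} = u_{yx}$:

```python
import numpy as np

def hessian_frobenius_norm(u):
    """Pointwise Frobenius norm |∇²u| of a 2-D image (7), discretized
    with second differences under periodic boundary conditions."""
    # second differences u_xx (along columns) and u_yy (along rows)
    uxx = np.roll(u, -1, 1) - 2.0 * u + np.roll(u, 1, 1)
    uyy = np.roll(u, -1, 0) - 2.0 * u + np.roll(u, 1, 0)
    # centered mixed derivative; u_xy = u_yx for this symmetric stencil
    uxy = 0.25 * (np.roll(np.roll(u, -1, 0), -1, 1)
                  - np.roll(np.roll(u, -1, 0), 1, 1)
                  - np.roll(np.roll(u, 1, 0), -1, 1)
                  + np.roll(np.roll(u, 1, 0), 1, 1))
    # u_xy² + u_yx² = 2 u_xy² since the two mixed terms coincide here
    return np.sqrt(uxx**2 + uyy**2 + 2.0 * uxy**2)
```

With this discretization, the fourth-order total variation in (6) is simply the sum of this array over all pixels.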

Definition 1.

Let $\Omega \subset \mathbb{R}^2$ be an open subset with Lipschitz boundary. $BV^2(\Omega)$ is the subspace of functions $u \in L^1(\Omega)$ such that

(8) $\int_\Omega |\nabla^2 u| = \sup\left\{ \int_\Omega u\,\mathrm{div}^2\psi\,dx : \psi \in C_c^2(\Omega, \mathbb{R}^{2\times 2}),\ |\psi| \le 1 \right\} < \infty$,

where

(9) $\mathrm{div}^2\psi = \sum_{h,k=1}^{2} \partial_k \partial_h \psi^{hk}$, $\quad |\psi(x)| = \sqrt{\sum_{h,k=1}^{2} \left(\psi^{hk}\right)^2}$.

Here $C_c^2(\Omega)$ stands for the set of functions in $C^2(\Omega)$ with compact support in $\Omega$.

The proofs of existence, uniqueness, convergence, and stability for our proposed model (6) can be found in the literature.

The rest of this paper is organized as follows. In Section 2, we give a detailed description of the Bregman iteration method. In Section 3, we analyze the extended split Bregman iteration method for the proposed model. In Section 4, the convergence analysis is presented. Numerical experiments demonstrating the effectiveness of our model are provided in Section 5. Finally, concluding remarks are given in Section 6.

2. Bregman-Related Algorithms

Bregman iteration is a concept that originated in functional analysis for finding extrema of convex functionals, and it was first introduced and studied for image processing by Osher et al. We now present the general formulation of the Bregman iteration technique.

2.1. Bregman Iteration

Goldstein and Osher considered generalized constrained minimization problems of the form

(10) $\min_u J(u)$ subject to $H(u) = 0$,

where $J$ and $H$, defined on $\mathbb{R}^n$, are convex functions. The associated unconstrained problem is

(11) $\min_u J(u) + \lambda H(u)$,

where $\lambda$ is a positive parameter that should be chosen extremely large.

Definition 2.

The Bregman distance of a convex function $J$ between $u$ and $v$ is

(12) $D_J^p(u, v) = J(u) - J(v) - \langle p, u - v \rangle$, $\quad p \in \partial J(v)$,

where $\langle \cdot, \cdot \rangle$ denotes the duality product and $p$ lies in the subdifferential of $J$ at $v$,

(13) $\partial J(v) = \left\{ p \in BV(\Omega)^* : J(u) \ge J(v) + \langle p, u - v \rangle \ \ \forall u \right\}$.

Assuming that $H$ is differentiable, problem (11) can be solved iteratively by

(14) $u^{k+1} = \arg\min_u D_J^{p^k}(u, u^k) + \lambda H(u)$, $\quad p^{k+1} = p^k - \lambda \nabla H(u^{k+1})$.

The convergence analysis of this Bregman iterative scheme has been provided in the literature. The computational performance of Bregman iteration relies on how fast the subproblem

(15) $\arg\min_u D_J^{p^k}(u, u^k) + \lambda H(u)$

can be solved. Let $H(u, f) = \frac{1}{2}\|Au - f\|_2^2$, where $A$ is a linear operator. As shown in [25, 27], iteration (14) can be reformulated in the simplified form

(16) $u^{k+1} = \arg\min_u J(u) + \lambda H(u, f^k)$, $\quad f^{k+1} = f^k + (f - Au^{k+1})$, $\quad f^0 = f$.

This Bregman iteration, proposed by Osher et al. for TV-based image denoising, has two advantages. The first is that the method converges very quickly, especially for problems where $J(u)$ contains an $L^1$-regularization term. The second is that the parameter $\lambda$ in (14) remains constant, so for the purpose of fast convergence we can choose the $\lambda$ that minimizes the condition number of the subproblem. Owing to the high efficiency and robustness of the Bregman iteration method, it has been widely used for image reconstruction.
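To make the simplified iteration (16) concrete, the toy sketch below applies it with $A = I$ and $J(u) = \|u\|_1$, so that each subproblem has a closed-form soft-threshold solution; the residual-feedback update $f^{k+1} = f^k + (f - u^{k+1})$ is exactly the second line of (16). This is an illustrative sketch of the scheme, not the paper's deblurring setting:

```python
import numpy as np

def soft_threshold(x, tau):
    """Closed-form solution of argmin_u ||u||_1 + (1/(2*tau)) ||u - x||_2^2."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def bregman_iteration(f, lam, n_iter=50):
    """Simplified Bregman iteration (16) with A = I and J = ||.||_1.

    Each subproblem u^{k+1} = argmin ||u||_1 + (lam/2) ||u - f^k||_2^2
    is a soft-threshold; f^k accumulates the residual f - u^{k+1}
    ("adding back the noise"), which drives u toward the constraint u = f.
    """
    fk = f.copy()
    u = np.zeros_like(f)
    for _ in range(n_iter):
        u = soft_threshold(fk, 1.0 / lam)
        fk = fk + (f - u)
    return u
```

Because the constraint is enforced exactly in the limit, the iterates converge to $f$ itself in this toy setting, illustrating why $\lambda$ need not be taken large in (14).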

2.2. Split Bregman

Goldstein and Osher proposed the split Bregman iteration, based on an earlier splitting formulation, to attack the general $L^1$-regularized optimization problem (11). In a recent paper, this method was used to solve general variational models for image restoration. The crucial point of the split Bregman method is that it separates the $L^1$ and $L^2$ portions of (11). The authors converted (11) into the constrained optimization problem

(17) $\min_u \|d\|_1 + \lambda H(u)$ such that $d = \phi(u)$,

where $H(u)$ and $\phi(u)$ are convex functions, and then transformed it into the unconstrained problem

(18) $\min_u \|d\|_1 + \lambda H(u) + \frac{\gamma}{2}\|d - \phi(u)\|_2^2$.

This problem is similar to (11), so the simplified Bregman iterative algorithm can be applied to (18):

(19) $(u^{k+1}, d^{k+1}) = \arg\min_{u,d} \|d\|_1 + \lambda H(u) + \frac{\gamma}{2}\|d - \phi(u) - b^k\|_2^2$, $\quad b^{k+1} = b^k + \left(\phi(u^{k+1}) - d^{k+1}\right)$.

This is called the two-phase split Bregman iterative algorithm.

Now consider the subproblem

(20) $(u^{k+1}, d^{k+1}) = \arg\min_{u,d} \|d\|_1 + \lambda H(u) + \frac{\gamma}{2}\|d - \phi(u) - b^k\|_2^2$.

Owing to the "splitting" of the $L^1$ and $L^2$ components of this functional, the minimization can be performed by iteratively minimizing with respect to $u$ and $d$ separately:

Step 1: $u^{k+1} = \arg\min_u \lambda H(u) + \frac{\gamma}{2}\|d^k - \phi(u) - b^k\|_2^2$,

Step 2: $d^{k+1} = \arg\min_d \|d\|_1 + \frac{\gamma}{2}\|d - \phi(u^{k+1}) - b^k\|_2^2$.

The speed of the split Bregman method relies on how quickly these two steps can be solved. A wide variety of optimization techniques can be used to solve Step 1, for instance, the Fourier transform method and the conjugate gradient method. Step 2 can be handled with the extremely fast shrinkage formula, namely,

(21) $d_j^{k+1} = \mathrm{shrink}\left(\phi(u)_j + b_j^k, \frac{1}{\gamma}\right)$,

where

(22) $\mathrm{shrink}(x, \tau) = \frac{x}{|x|} \cdot \max(|x| - \tau, 0)$.
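A minimal elementwise implementation of the shrinkage formula (22) might look as follows; for real scalars, $x/|x|$ is just $\mathrm{sign}(x)$ (with the usual convention that the result is 0 at $x = 0$), and for vector-valued $d$ the same formula is applied to the magnitude:

```python
import numpy as np

def shrink(x, tau):
    """Soft-shrinkage (22): sign(x) * max(|x| - tau, 0), applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```

The formula shrinks every component toward zero by $\tau$ and sets components smaller than $\tau$ exactly to zero, which is what makes Step 2 essentially free.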

3. The Proposed Algorithm

As argued in the literature, $L^1$ estimation procedures are more appropriate for image restoration, and the TV norm is essentially the $L^1$ norm of derivatives. However, the fourth-order total variation term $\int_\Omega |\nabla^2 u|$ in model (6) is a continuous expression; using a discretization analogous to that of the total variation, we obtain the discrete fourth-order total variation (letting $D$ denote the operator $\nabla^2$):

(23) $\int_\Omega |Du| = \|Du\|_1$.

Thus the proposed model (6) can be rewritten as

(24) $\min_u \|Du\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2$.

We then apply the split Bregman iteration to problem (24), which yields the constrained problem

(25) $\min_u \|d\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2$ s.t. $d = Du$.

Problem (25) is clearly equivalent to the following unconstrained problem:

(26) $\min_u \|d\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2 + \frac{\lambda}{2}\|d - Du\|_2^2$.

Concretely, the extended split Bregman iteration for solving (26) reads

(27) $u^{k+1} = \arg\min_u \frac{\alpha}{2}\|u\|_2^2 + \frac{\beta}{2}\|Au - f\|_2^2 + \frac{\lambda}{2}\|Du - d^k + b^k\|_2^2$, $\quad d^{k+1} = \arg\min_d \|d\|_1 + \frac{\lambda}{2}\|d - Du^{k+1} - b^k\|_2^2$,

with the update formula for $b^k$

(28) $b^{k+1} = b^k + \left(Du^{k+1} - d^{k+1}\right)$.

More precisely, our algorithm is stated as follows.

Algorithm 3.

We have the following steps.

Step 1. Set $u^0 = 0$, $d^0 = b^0 = 0$, and $k = 0$.

Step 2. Update $u^{k+1}$ from

(29) $u^{k+1} = \left(\alpha I + \beta A^T A + \lambda D^T D\right)^{-1} \left(\beta A^T f + \lambda D^T(d^k - b^k)\right)$.

Step 3. Update $d^{k+1}$ from

(30) $d^{k+1} = \mathrm{shrink}\left(Du^{k+1} + b^k, \frac{1}{\lambda}\right)$.

Step 4. Update $b^{k+1}$ from

(31) $b^{k+1} = b^k + \left(Du^{k+1} - d^{k+1}\right)$.

Step 5. If the stopping criterion holds, output $u^{k+1}$; otherwise, set $k := k + 1$ and go to Step 2.
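Under periodic boundary conditions, both $A^T A$ and $D^T D$ are diagonalized by the 2-D FFT, so the linear solve in (29) can be performed exactly in the Fourier domain. The sketch below is a simplified, hypothetical rendering of the algorithm in which $D$ is taken to be the discrete Laplacian, a scalar stand-in for the full four-component Hessian operator; the `psf2otf` helper is an assumption modeled on common practice, not taken from the paper:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad a point-spread function to `shape`, center it at the
    origin, and return its DFT (transfer function under periodic BC)."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def shrink(x, tau):
    """Soft-shrinkage formula (22), elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def psbi(f, psf, alpha, beta, lam, n_iter=50):
    """Sketch of the split Bregman loop (Steps 1-5), with D simplified
    to the periodic discrete Laplacian so (29) is solved via the FFT."""
    A = psf2otf(psf, f.shape)
    lap = np.zeros(f.shape)                 # 5-point Laplacian stencil
    lap[0, 0] = -4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = 1.0
    D = np.fft.fft2(lap)
    denom = alpha + beta * np.abs(A) ** 2 + lam * np.abs(D) ** 2
    F = np.fft.fft2(f)
    d = np.zeros_like(f)
    b = np.zeros_like(f)
    u = f.copy()
    for _ in range(n_iter):
        # Step 2: exact Fourier-domain solve of (29)
        rhs = beta * np.conj(A) * F + lam * np.conj(D) * np.fft.fft2(d - b)
        u = np.real(np.fft.ifft2(rhs / denom))
        Du = np.real(np.fft.ifft2(D * np.fft.fft2(u)))
        d = shrink(Du + b, 1.0 / lam)       # Step 3: shrinkage (30)
        b = b + Du - d                      # Step 4: Bregman update (31)
    return u
```

The per-iteration cost is a handful of FFTs, which is what makes the split Bregman approach fast in practice.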

4. Convergence Analysis

In this section, we give a rigorous convergence proof for our iterative algorithm. Our analysis is similar to that in [33, 34], where the authors analyzed the unconstrained split Bregman iteration in detail.

We note that the subproblems in (27) are convex, so the first-order optimality conditions for $u^{k+1}$ and $d^{k+1}$ read

(32) $0 = \alpha u^{k+1} + \beta A^T(Au^{k+1} - f) + \lambda D^T(Du^{k+1} - d^k + b^k)$,
$\quad 0 = p^{k+1} + \lambda(d^{k+1} - Du^{k+1} - b^k)$,
$\quad b^{k+1} = b^k + (Du^{k+1} - d^{k+1})$,

where $p^k \in \partial\|d^k\|_1$. Condition (32) will be used to analyze the convergence of our algorithm.

Theorem 4.

Assume that problem (26) has a unique solution $u^*$ and that $\alpha > 0$, $\beta > 0$, and $\lambda > 0$. Then the sequence $\{u^k\}$ generated by the extended split Bregman iteration (27)-(28) satisfies

(33) $\lim_{k\to\infty}\left(\frac{\alpha}{2}\|u^k\|_2^2 + \frac{\beta}{2}\|Au^k - f\|_2^2 + \|Du^k\|_1\right) = \frac{\alpha}{2}\|u^*\|_2^2 + \frac{\beta}{2}\|Au^* - f\|_2^2 + \|Du^*\|_1$, $\quad \lim_{k\to\infty}\|u^k - u^*\|_2 = 0$.

Proof.

It has been shown that problem (26) has a unique solution $u^*$, so the first-order optimality condition holds:

(34) $0 = D^T p^* + \alpha u^* + \beta A^T(Au^* - f)$,

where $p^* \in \partial\|d^*\|_1$ and $d^* = Du^*$. Letting $b^* = p^*/\lambda$, condition (34) can be rewritten as

(35) $0 = \alpha u^* + \beta A^T(Au^* - f) + \lambda D^T(Du^* - d^* + b^*)$, $\quad 0 = p^* + \lambda(d^* - Du^* - b^*)$, $\quad b^* = b^* + (Du^* - d^*)$.

This shows that $(u^*, d^*, b^*)$ is a fixed point of (27)-(28).

Let $u^k$, $d^k$, and $b^k$ denote the sequences generated by algorithm (27)-(28), and let $u_e^k = u^k - u^*$, $d_e^k = d^k - d^*$, and $b_e^k = b^k - b^*$ denote the corresponding errors. Subtracting each equation in (35) from the corresponding equation in (32), we obtain

(36) $0 = \alpha u_e^{k+1} + \beta A^T A u_e^{k+1} + \lambda D^T(Du_e^{k+1} - d_e^k + b_e^k)$,
$\quad 0 = (p^{k+1} - p^*) + \lambda(d_e^{k+1} - Du_e^{k+1} - b_e^k)$,
$\quad b_e^{k+1} = b_e^k + (Du_e^{k+1} - d_e^{k+1})$,

where $p^{k+1} \in \partial\|d^{k+1}\|_1$. Taking the inner product of the first and second equations in (36) with $u_e^{k+1}$ and $d_e^{k+1}$, respectively, and squaring the third equation, we get

(37) $0 = \alpha\|u_e^{k+1}\|_2^2 + \beta\|Au_e^{k+1}\|_2^2 + \lambda\|Du_e^{k+1}\|_2^2 + \lambda\langle D^T b_e^k - D^T d_e^k, u_e^{k+1}\rangle$,
$\quad 0 = \langle p^{k+1} - p^*, d_e^{k+1}\rangle + \lambda\|d_e^{k+1}\|_2^2 - \lambda\langle Du_e^{k+1} + b_e^k, d_e^{k+1}\rangle$,
$\quad \|b_e^{k+1}\|_2^2 - \|b_e^k\|_2^2 = 2\langle b_e^k, Du_e^{k+1} - d_e^{k+1}\rangle + \|Du_e^{k+1} - d_e^{k+1}\|_2^2$.

The third equation of (37) can be rewritten as

(38) $\frac{\lambda}{2}\left(\|b_e^k\|_2^2 - \|b_e^{k+1}\|_2^2\right) = -\frac{\lambda}{2}\|Du_e^{k+1} - d_e^{k+1}\|_2^2 - \lambda\langle b_e^k, Du_e^{k+1} - d_e^{k+1}\rangle$.

Combining the first two equations of (37) with (38), we have

(39) $\frac{\lambda}{2}\left(\|b_e^k\|_2^2 - \|b_e^{k+1}\|_2^2\right) + \frac{\lambda}{2}\left(\|d_e^k\|_2^2 - \|d_e^{k+1}\|_2^2\right) = \frac{\lambda}{2}\|Du_e^{k+1} - d_e^k\|_2^2 + \langle p^{k+1} - p^*, d^{k+1} - d^*\rangle + \alpha\|u_e^{k+1}\|_2^2 + \beta\|Au_e^{k+1}\|_2^2$.

Summing this equation from $k = 0$ to $N$, we get

(40) $\frac{\lambda}{2}\left(\|b_e^0\|_2^2 - \|b_e^{N+1}\|_2^2\right) + \frac{\lambda}{2}\left(\|d_e^0\|_2^2 - \|d_e^{N+1}\|_2^2\right) = \frac{\lambda}{2}\sum_{k=0}^{N}\|Du_e^{k+1} - d_e^k\|_2^2 + \sum_{k=0}^{N}\langle p^{k+1} - p^*, d^{k+1} - d^*\rangle + \alpha\sum_{k=0}^{N}\|u_e^{k+1}\|_2^2 + \beta\sum_{k=0}^{N}\|Au_e^{k+1}\|_2^2$.

Because $p^{k+1} \in \partial\|d^{k+1}\|_1$, $p^* \in \partial\|d^*\|_1$, and $\|\cdot\|_1$ is convex, we have

(41) $\langle p^{k+1} - p^*, d^{k+1} - d^*\rangle \ge 0 \quad \forall k$.

Since all terms involved in (40) are nonnegative, it follows that

(42) $\frac{\lambda}{2}\left(\|b_e^0\|_2^2 + \|d_e^0\|_2^2\right) \ge \frac{\lambda}{2}\sum_{k=0}^{N}\|Du_e^{k+1} - d_e^k\|_2^2 + \alpha\sum_{k=0}^{N}\|u_e^{k+1}\|_2^2 + \beta\sum_{k=0}^{N}\|Au_e^{k+1}\|_2^2 + \sum_{k=0}^{N}\langle p^{k+1} - p^*, d^{k+1} - d^*\rangle$.

Under the assumption $\alpha > 0$, $\beta > 0$, and $\lambda > 0$, letting $N \to \infty$ in (42) yields

(43) $\sum_{k=0}^{\infty}\|u_e^{k+1}\|_2^2 < \infty$, $\quad \sum_{k=0}^{\infty}\|Au_e^{k+1}\|_2^2 < \infty$, $\quad \sum_{k=0}^{\infty}\|Du_e^{k+1} - d_e^k\|_2^2 < \infty$, $\quad \sum_{k=0}^{\infty}\langle p^{k+1} - p^*, d^{k+1} - d^*\rangle < \infty$,

and hence

(44) $\lim_{k\to\infty}\|u^{k+1} - u^*\|_2 = 0$,
(45) $\lim_{k\to\infty}\|(Au^{k+1} - f) - (Au^* - f)\|_2 = 0$,
(46) $\lim_{k\to\infty}\|Du^{k+1} - d^k\|_2 = 0$,
(47) $\lim_{k\to\infty}\langle p^k - p^*, d^k - d^*\rangle = 0$.

The Bregman distance satisfies

(48) $D_J^p(u, v) + D_J^q(v, u) = \langle q - p, u - v\rangle$, $\quad p \in \partial J(v)$, $q \in \partial J(u)$.

Combining this identity with (47), we get

(49) $\lim_{k\to\infty}\left(\|d^k\|_1 - \|d^*\|_1 - \langle p^*, d^k - d^*\rangle\right) = 0$.

Together with (46) and the continuity of $\|\cdot\|_1$, this implies

(50) $\lim_{k\to\infty}\left(\|Du^k\|_1 - \|Du^*\|_1 - \langle p^*, Du^k - Du^*\rangle\right) = 0$, $\quad p^* \in \partial\|Du^*\|_1$.

By similar arguments, we have

(51) $\lim_{k\to\infty}\left(\|Au^k - f\|_2^2 - \|Au^* - f\|_2^2 - 2\langle A^T(Au^* - f), u^k - u^*\rangle\right) = 0$, $\quad \lim_{k\to\infty}\left(\|u^k\|_2^2 - \|u^*\|_2^2\right) = \lim_{k\to\infty} 2\langle u^*, u^k - u^*\rangle$.

Combining (50)-(51), we obtain

(52) $\lim_{k\to\infty}\left[\left(\|Du^k\|_1 + \frac{\alpha}{2}\|u^k\|_2^2 + \frac{\beta}{2}\|Au^k - f\|_2^2\right) - \left(\|Du^*\|_1 + \frac{\alpha}{2}\|u^*\|_2^2 + \frac{\beta}{2}\|Au^* - f\|_2^2\right) - \langle D^T p^* + \alpha u^* + \beta A^T(Au^* - f), u^k - u^*\rangle\right] = 0$.

Finally, from (34), (44), and (52), the main results follow:

(53) $\lim_{k\to\infty}\left(\|Du^k\|_1 + \frac{\alpha}{2}\|u^k\|_2^2 + \frac{\beta}{2}\|Au^k - f\|_2^2\right) = \|Du^*\|_1 + \frac{\alpha}{2}\|u^*\|_2^2 + \frac{\beta}{2}\|Au^* - f\|_2^2$, $\quad \lim_{k\to\infty}\|u^k - u^*\|_2 = 0$.

This proves Theorem 4.

5. Numerical Experiments

In this section, a number of experiments are performed to demonstrate the effectiveness and efficiency of our proposed split Bregman iteration (PSBI) algorithm for the fourth-order diffusive model (6), compared with the split Bregman iteration (SBI) for the second-order diffusive model of Liu and Huang. All experiments are run in the MATLAB 7.10 environment on a desktop with a Windows 7 operating system, a 3.00 GHz Intel Pentium(R) D CPU, and 1.00 GB of memory.

The performance of all algorithms is measured by the improved signal-to-noise ratio (ISNR) and the mean squared error (MSE), defined as

(54) $\mathrm{ISNR} = 10\log_{10}\left(\frac{\|f - u_0\|^2}{\|u - u_0\|^2}\right)$, $\quad \mathrm{MSE} = \frac{\|u - u_0\|^2}{mn}$,

where $f$, $u_0$, and $u$ denote the degraded, original, and recovered images, respectively, and $m \times n$ is the image size. The higher the ISNR or the lower the MSE, the better the quality of the deblurred image. Moreover, the stopping criterion for all algorithms is that the relative difference between consecutive iterates of the deblurred image satisfies

(55) $\frac{\|u^{k+1} - u^k\|_2}{\|u^{k+1}\|_2} \le 5 \times 10^{-4}$.
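The two quality measures in (54) can be computed directly from the image arrays; the small helpers below are illustrative (array inputs are assumed):

```python
import numpy as np

def isnr(f, u0, u):
    """Improved signal-to-noise ratio (54), in dB.
    f: degraded image, u0: original image, u: restored image."""
    return 10.0 * np.log10(np.sum((f - u0) ** 2) / np.sum((u - u0) ** 2))

def mse(u, u0):
    """Mean squared error (54) for an m x n image."""
    return np.sum((u - u0) ** 2) / u.size
```

An ISNR above zero means the restoration is closer to the original than the degraded input; MSE instead measures absolute per-pixel error.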

Three classical grayscale images with a resolution of 256×256 pixels (Figure 1) are used for synthetic degradations in our experiments. The blur kernels used are Gaussian blur (size 7×7 pixels, variance 3), linear motion blur (length 10 pixels, direction 30 degrees), out-of-focus blur (size 10×10 pixels, defocus radius 4), and uniform blur (size 7×7 pixels). Periodic boundary conditions are used in the following experiments.
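For reproducibility, a synthetic Gaussian degradation under periodic boundary conditions can be generated along the following lines. This is a sketch; in particular, treating the stated "variance of 3" as the Gaussian's standard deviation parameter `sigma` (as in MATLAB's `fspecial` convention) is an assumption:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=3.0):
    """size x size Gaussian PSF, normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur_periodic(u, psf):
    """Apply the PSF under periodic boundary conditions via the FFT:
    circular convolution is pointwise multiplication in Fourier space."""
    pad = np.zeros_like(u)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(pad)))
```

The same `blur_periodic` routine then serves as the forward operator $A$ in (1) for any of the kernels above.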

Images used for synthetic degradations: (a) “Cameraman,” (b) “Boat,” (c) “Lena.”

In the first experiment, the original image "Cameraman," which has a complex background, is blurred by Gaussian blur, out-of-focus blur, linear motion blur, and uniform blur, respectively. The blurred images are shown in Figures 2(a)-2(d), and Figures 2(e)-2(h) show the deblurred results of the split Bregman iteration method for all the blur cases, with the parameter values for the SBI method chosen the same as those given by its authors. For an explicit comparison, we show the images restored by our PSBI method in Figures 2(i)-2(l); the selected parameter values are α = 1e+5, β = 1e+12, and λ = 2.1e+6, respectively. Comparing the "sky" in these deblurred images, we can see that the PSBI method recovers more details (and removes the staircase effect) compared with the SBI method. However, owing to the fourth-order diffusive term, our proposed method costs a little more time than the SBI method. Table 1 gives the CPU times, ISNR values, and MSE values of the SBI and PSBI methods for all the blur kernels mentioned above, which shows that our algorithm works well for images with a complex background.

Computational cost, ISNR, and MSE for different deblurring cases.

Blur type CPU time (s) ISNR (dB) MSE
SBI PSBI Blurred SBI PSBI Blurred SBI PSBI
Motion 3.1512 4.8672 8.6206 21.4132 31.2089 443.8974 36.8159 2.9369
Out-of-focus 2.9952 4.3992 8.0647 18.7751 28.8994 501.1467 50.5383 4.9883
Uniform 3.0264 4.8828 8.5342 17.4389 25.1493 449.3335 68.5647 11.7966
Gaussian 3.1200 7.3164 9.1581 16.5027 21.9641 394.0131 84.6321 24.4465

Deblurring of “Cameraman." (a)–(d) Gaussian blurred image, linear motion blurred image, out-of-focus blurred image and uniform blurred image, respectively. (e)–(h) Images deblurred by SBI. (i)–(l) Images deblurred by PSBI.

In the next test, we use Gaussian blur and out-of-focus blur to degrade the "Boat" image and then run the two algorithms many times to obtain the best results. For the PSBI method, the selected parameters and iteration counts are α = 1.1e+5, β = 1e+12, λ = 2e+6, and ite = 46 for Gaussian blur and α = 1e+5, β = 1e+12, λ = 2.15e+6, and ite = 25 for out-of-focus blur, respectively. Figure 3 shows the recovered results of the two methods, and for better illustration we provide a close-up of an image region in Figure 4. We can see from Figure 4 that the images restored by PSBI are better than those deblurred by SBI; see, for example, the "letters" on the stern of the "Boat." Meanwhile, Table 2 shows that the ISNR values obtained by PSBI are higher than those obtained by SBI while the MSE values are lower. It is thus clear that our method can effectively reduce the "staircase" effect that always appears in the second-order diffusive model.

Image deblurring using SBI and PSBI for “Boat.”

Blur type CPU time (s) ISNR (dB) MSE
SBI PSBI Blurred SBI PSBI Blurred SBI PSBI
Out-of-focus 2.9796 4.6488 7.2471 17.8352 28.6621 381.1651 40.9274 3.4444
Gaussian 3.2448 7.3008 8.3851 16.0370 22.4141 297.3680 61.4972 14.4226

Comparison with SBI method. First column: out-of-focus blurred image and Gaussian blurred image. Second column: images deblurred by SBI. Third column: images deblurred by PSBI.

Close-ups of selected section of Figure 3. First column: out-of-focus blurred image and Gaussian blurred image. Second column: images deblurred by SBI. Third column: images deblurred by PSBI.

Figures 5 and 6 show the comparison between the PSBI and SBI algorithms for the image "Lena." First we blur "Lena" by linear motion blur and uniform blur and then select the deblurred results that look best after carefully tuning the parameters; for linear motion blur, the parameter values and iteration count are α = 1e+5, β = 1.15e+12, λ = 2.2e+6, and ite = 28, and for uniform blur, they are α = 1e+5, β = 1e+12, λ = 2.1e+6, and ite = 25. It can be seen from Figures 5 and 6 that many staircase artifacts are clearly visible in the smooth regions of the images restored by the SBI model while they seldom appear in our results, and the upper-left corner of "Lena" deblurred by PSBI is visually better than that obtained by the SBI model. Table 3 shows that the images restored by our model have higher ISNR values and lower MSE values than those of the second-order model.

Image deblurring using SBI and PSBI for “Lena.”

Blur type CPU time (s) ISNR (dB) MSE
SBI PSBI Blurred SBI PSBI Blurred SBI PSBI
Motion 3.1824 4.4460 8.5843 17.6570 27.3921 300.0612 46.4993 5.1319
Uniform 2.9952 4.8672 8.7268 15.7435 25.7129 289.5325 71.4055 7.2947

Comparison with SBI method. First column: motion blurred image and uniform blurred image. Second column: images deblurred by SBI. Third column: images deblurred by PSBI.

Close-ups of selected section of Figure 5. First column: motion blurred image and uniform blurred image. Second column: images deblurred by SBI. Third column: images deblurred by PSBI.

6. Conclusion

In this paper, we propose a fourth-order total bounded variation regularization based image deblurring model and exploit the split Bregman iteration method to solve it. The convergence analysis of our algorithm is provided. Numerical experiments show that our algorithm works well for images with either complex or simple backgrounds, and in our synthetic experiments the fourth-order diffusive model yields better results than the second-order diffusive model. We believe that the proposed model can be extended to further applications in image processing and computer vision.

Acknowledgments

The authors would like to thank the referees very much for their valuable comments and suggestions.

References

[1] C. R. Vogel, Computational Methods for Inverse Problems, Frontiers in Applied Mathematics 23, SIAM, Philadelphia, Pa, USA, 2002.
[2] M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Institute of Physics Publishing, Bristol, UK, 1998.
[3] H. Andrews and B. Hunt, Digital Image Restoration, Prentice Hall, Englewood Cliffs, NJ, USA, 1977.
[4] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, V. H. Winston & Sons, Washington, DC, USA, 1977.
[5] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1-4, pp. 259-268, 1992.
[6] Y. Li and F. Santosa, "An affine scaling algorithm for minimizing total variation in image enhancement," Cornell University, Ithaca, NY, USA, 1994.
[7] R. Acar and C. R. Vogel, "Analysis of bounded variation penalty methods for ill-posed problems," Inverse Problems, vol. 10, no. 6, pp. 1217-1229, 1994.
[8] A. Chambolle and P.-L. Lions, "Image recovery via total variation minimization and related problems," Numerische Mathematik, vol. 76, no. 2, pp. 167-188, 1997.
[9] T. F. Chan and P. Mulet, "On the convergence of the lagged diffusivity fixed point method in total variation image restoration," SIAM Journal on Numerical Analysis, vol. 36, no. 2, pp. 354-367, 1999.
[10] C. R. Vogel and M. E. Oman, "Iterative methods for total variation denoising," SIAM Journal on Scientific Computing, vol. 17, no. 1, pp. 227-238, 1996.
[11] J. Zhang, K. Chen, and B. Yu, "An iterative Lagrange multiplier method for constrained total-variation-based image denoising," SIAM Journal on Numerical Analysis, vol. 50, no. 3, pp. 983-1003, 2012.
[12] C. R. Vogel and M. E. Oman, "Iterative methods for total variation denoising," SIAM Journal on Scientific Computing, vol. 17, no. 1, pp. 227-238, 1996.
[13] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203-4215, 2005.
[14] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289-1306, 2006.
[15] T. Chan and J. Shen, Theory and Computation of Variational Image Deblurring, IMS Lecture Notes, 2007.
[16] T. F. Chan, S. Esedoglu, and M. Nikolova, "Algorithms for finding global minimizers of image segmentation and denoising models," SIAM Journal on Applied Mathematics, vol. 66, no. 5, pp. 1632-1648, 2006.
[17] X. Liu and L. Huang, "Split Bregman iteration algorithm for total bounded variation regularization based image deblurring," Journal of Mathematical Analysis and Applications, vol. 372, no. 2, pp. 486-495, 2010.
[18] G. Chavent and K. Kunisch, "Regularization of linear least squares problems by total bounded variation," ESAIM: Control, Optimisation and Calculus of Variations, vol. 2, pp. 359-376, 1997.
[19] T. Chan, A. Marquina, and P. Mulet, "High-order total variation-based image restoration," SIAM Journal on Scientific Computing, vol. 22, no. 2, pp. 503-516, 2000.
[20] T. F. Chan, S. Esedoglu, and F. E. Park, "A fourth order dual method for staircase reduction in texture extraction and image restoration problems," CAM Report, UCLA, 2005.
[21] Y.-L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal," IEEE Transactions on Image Processing, vol. 9, no. 10, pp. 1723-1730, 2000.
[22] H.-Z. Chen, J.-P. Song, and X.-C. Tai, "A dual algorithm for minimization of the LLT model," Advances in Computational Mathematics, vol. 31, no. 1-3, pp. 115-130, 2009.
[23] Z.-F. Pang and Y.-F. Yang, "Semismooth Newton method for minimization of the LLT model," Inverse Problems and Imaging, vol. 3, no. 4, pp. 677-691, 2009.
[24] L. M. Bregman, "A relaxation method of finding a common point of convex sets and its application to the solution of problems in convex programming," USSR Computational Mathematics and Mathematical Physics, vol. 7, no. 3, pp. 200-217, 1967.
[25] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, "An iterative regularization method for total variation-based image restoration," Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460-489, 2005.
[26] T. Goldstein and S. Osher, "The split Bregman method for L1 regularized problems," SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323-343, 2009.
[27] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing," SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143-168, 2008.
[28] J. Darbon and S. Osher, Fast Discrete Optimization for Sparse Approximations and Deconvolutions, UCLA, 2007.
[29] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248-272, 2008.
[30] Y. Wang, W. Yin, and Y. Zhang, "A fast algorithm for image deblurring with total variation regularization," CAAM Technical Report TR07-10, Rice University, Houston, Tex, USA, 2007.
[31] J.-F. Cai, B. Dong, S. Osher, and Z. Shen, "Image restoration: total variation, wavelet frames, and beyond," Journal of the American Mathematical Society, vol. 25, no. 4, pp. 1033-1089, 2012.
[32] L. Moisan, "How to discretize the total variation of an image," Proceedings in Applied Mathematics and Mechanics, vol. 7, no. 1, pp. 1041907-1041908, 2007.
[33] J.-F. Cai, S. Osher, and Z. Shen, "Split Bregman methods and frame based image restoration," Multiscale Modeling & Simulation, vol. 8, no. 2, pp. 337-369, 2009.
[34] S. Setzer, "Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage," Lecture Notes in Computer Science, vol. 5567, pp. 464-476, 2009.