In this paper, a new model for image restoration under Poisson noise based on a high-order total bounded variation is proposed. Existence and uniqueness of its solution are proved. To find the global optimal solution of our strongly convex model, a split Bregman algorithm is introduced. Furthermore, a rigorous convergence theory of the proposed algorithm is established. Experimental results are provided to demonstrate the effectiveness and efficiency of the proposed method over the classic total bounded variation-based model.
1. Introduction
Being signal-dependent, Poisson noise is inevitable in various applications such as positron emission tomography [1], electronic microscopy [2], and astronomical imaging [3]. In particular, the degree of Poisson noise depends on the peak value of the pixel intensities, in the sense that a smaller peak value yields a higher intensity of Poisson noise. Moreover, the Poisson noise magnitude in a region of an image increases with the pixel intensity of that region. Since Poisson noise differs from Gaussian noise, restoration models and algorithms designed for Gaussian white noise are not effective at removing it.
A traditional method for Poisson denoising was proposed by Richardson and Lucy (RL) [4, 5]. However, the RL algorithm converges slowly, and noise remains in the restored images. A standard approach to removing the noise is to incorporate a regularization term, such as Tikhonov regularization [6], total variation (TV) regularization [7], or high-order regularization [8]. In particular, the TV-based Poisson denoising model [9] can be described as
(1) $\min_u \|\nabla u\|_1 + \beta\int_\Omega (u - f\log u)\,dx$,
where $\Omega$ is a bounded open domain in $\mathbb{R}^2$ and $\beta>0$ is a regularization parameter. For model (1), a number of fast and efficient numerical methods have been proposed, for instance, the dual algorithm [10], the augmented Lagrangian method [11, 12], the split Bregman algorithm [13], the multilevel algorithm [14], and a semi-implicit gradient descent method [15]. These algorithms have also been successfully applied to restore images corrupted by Gaussian, impulse, and multiplicative noise [16–21]. In [11, 13], the following TV-based Poisson restoration model has also been studied:
(2) $\min_u \|\nabla u\|_1 + \beta\int_\Omega (Ku - f\log Ku)\,dx$.
To solve this model, the authors of [11] proposed an augmented Lagrangian method called PIDAL. However, it needs to solve a TV denoising subproblem and thus requires an inner iteration loop. Furthermore, the nonnegativity of the sequence $u^k$ cannot be guaranteed. To address these issues, Setzer et al. [13] proposed a split Bregman algorithm called PIDSplit+. Subsequently, Wen et al. [22] proposed primal-dual algorithms for model (2). Their algorithms require less memory and do not need to solve any linear systems, and are therefore fast. However, no related convergence theory for this Poisson restoration model was given in their paper.
Replacing the TV regularization term in (2) by $\|\nabla u\|_1 + (\alpha/2)\|u\|_2^2$ [23], Liu and Huang [24] proposed the following total bounded variation-based Poisson restoration model:
(3) $\min_{u\in BV(\Omega),\,\log Ku\in L^1(\Omega)} \|\nabla u\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \beta\int_\Omega (Ku - f\log Ku)\,dx$,
where $\alpha\ge 0$ and $\beta>0$ are parameters, and $K$ is a bounded linear operator representing the convolution kernel. Owing to the strong convexity of $(\alpha/2)\|u\|_2^2$, uniqueness of the solution of this model can be guaranteed. Furthermore, the convergence of the proposed algorithm was proved in [24].
The TV-based restoration model performs very well at preserving edges while removing noise. However, it often causes staircase effects in flat regions. To overcome these effects, some high-order models [25–28] have been introduced in image restoration. For restoring blurred images corrupted by Poisson noise, hybrid regularizations combining TV with high-order variation have also been considered [29, 30]. In this paper, we propose the following model, obtained by replacing the TV term in (3) with the second-order term $\|\nabla^2 u\|_1$:
(4) $\min_{u\in BV^2(\Omega),\,\log Ku\in L^1(\Omega)} E(u) = \|\nabla^2 u\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \beta\int_\Omega (Ku - f\log Ku)\,dx$,
where $BV^2(\Omega)$ is a Banach space; its definition and related properties were introduced in [17]. For this model, we prove existence and uniqueness of the solution. We use a split Bregman algorithm, with newly introduced auxiliary variables, to solve the resulting minimization problem, and we prove its convergence. The experimental results show that our model (4) avoids undesirable staircase effects. Compared with the split Bregman algorithm for the total bounded variation-based Poisson restoration model (3), our method is also faster.
The organization of this paper is as follows. In Section 2, we show existence and uniqueness of the solution of our proposed model (4). Section 3 presents a split Bregman algorithm for solving this model. We also study the convergence of the proposed algorithm in this section. In Section 4, experiments are carried out to show the efficiency of our proposed algorithm. We end the paper with some conclusions in Section 5.
2. Existence and Uniqueness
In this section, we prove the existence and uniqueness of the solution for proposed model (4).
Theorem 1.
Model (4) has a minimizer in BV2(Ω). Furthermore, if K is injective, the minimizer is unique.
Proof.
Let $G(q)=\int_\Omega h(q)\,dx$ with $h(q)=q-f\log q$ and $q=Ku$. For the given $f>0$, $h''(q)=f/q^2>0$, so $h(q)$ is strictly convex and attains its minimum at $q=f$. Hence $G(q)=G(Ku)\ge \int_\Omega (f-f\log f)\,dx$, so $E(u)$ is bounded below and we can choose a minimizing sequence $\{u_n\}_{n=1}^\infty$ for $E(u)$. By Hölder's and Jensen's inequalities, we have
(5) $G(q_n)\ge \|q_n\|_1 - \|f\|_\infty \log \|q_n\|_1$,
and thus $\|q_n\|_1$ is bounded. Meanwhile, since $K\in\mathcal{L}(BV^2(\Omega))$ and $\|\nabla^2 u_n\|_1$ is bounded, $\{u_n\}_{n=1}^\infty$ is a bounded sequence in $BV^2(\Omega)$. By compactness, there is $u^*\in BV^2(\Omega)$ such that a subsequence $\{u_{n_k}\}$ converges to $u^*$ a.e. By lower semicontinuity, $\|\nabla^2 u^*\|_1 \le \liminf_{k\to\infty}\|\nabla^2 u_{n_k}\|_1$ and $\|u^*\|_2 \le \liminf_{k\to\infty}\|u_{n_k}\|_2$. Further, by Fatou's lemma, we get
(6) $\inf_u\Big\{\|\nabla^2 u\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \beta\int_\Omega (Ku - f\log Ku)\,dx\Big\} \ge \liminf_{k\to\infty}\Big\{\|\nabla^2 u_{n_k}\|_1 + \frac{\alpha}{2}\|u_{n_k}\|_2^2 + \beta\int_\Omega (Ku_{n_k} - f\log Ku_{n_k})\,dx\Big\} \ge \|\nabla^2 u^*\|_1 + \frac{\alpha}{2}\|u^*\|_2^2 + \beta\int_\Omega (Ku^* - f\log Ku^*)\,dx$.
Thus $u^*$ is a minimizer of model (4).
Obviously, if $K$ is injective, the high-order total variation term and the fidelity term in (4) are convex. Since $(\alpha/2)\|u\|_2^2$ is strongly convex for $\alpha>0$, $E(u)$ is strongly convex. Therefore, its minimizer is unique.
3. The Split Bregman Algorithm
We describe the split Bregman algorithm (SBA) for solving the proposed model (4), together with convergence analysis. The SBA is an efficient method, which was initially introduced by Goldstein and Osher to solve the general L1-regularized problems [31]. It has been successfully applied to minimization problems that involve nondifferentiable and high-order functionals [19, 24, 32].
3.1. Algorithm Description
It is straightforward that the proposed model (4) is equivalent to the following constrained optimization problem:
(7) $\min_{u,p,q} \|p\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \beta\int_\Omega (q - f\log q)\,dx$, s.t. $p=\nabla^2 u$, $q=Ku$.
To solve this problem, we first relax the constraints by quadratic penalties:
(8) $\min_{u,p,q} \|p\|_1 + \frac{\alpha}{2}\|u\|_2^2 + \beta\int_\Omega (q-f\log q)\,dx + \frac{\gamma_1}{2}\|p-\nabla^2 u\|_2^2 + \frac{\gamma_2}{2}\|q-Ku\|_2^2$,
where $\gamma_1,\gamma_2>0$ are two penalty parameters. Consequently, introducing the Bregman variables $b_1$ and $b_2$, the SBA for solving (4) can be described as follows:
(9)
$u^{k+1} = \arg\min_u \frac{\alpha}{2}\|u\|_2^2 + \frac{\gamma_1}{2}\|p^k-\nabla^2 u-b_1^k\|_2^2 + \frac{\gamma_2}{2}\|q^k-Ku-b_2^k\|_2^2$,
$p^{k+1} = \arg\min_p \|p\|_1 + \frac{\gamma_1}{2}\|p-\nabla^2 u^{k+1}-b_1^k\|_2^2$,
$q^{k+1} = \arg\min_q \beta\int_\Omega (q-f\log q)\,dx + \frac{\gamma_2}{2}\|q-Ku^{k+1}-b_2^k\|_2^2$,
$b_1^{k+1} = b_1^k + \nabla^2 u^{k+1} - p^{k+1}$,
$b_2^{k+1} = b_2^k + Ku^{k+1} - q^{k+1}$.
For the $u$-subproblem, the corresponding Euler-Lagrange equation is
(10) $\alpha u - \gamma_1 (\nabla^2)^T (p^k - \nabla^2 u - b_1^k) - \gamma_2 K^T (q^k - Ku - b_2^k) = 0$.
Assuming periodic boundary conditions, (10) can be solved efficiently by the FFT, i.e.,
(11) $u^{k+1} = \mathcal{F}^{-1}\left[ \dfrac{\mathcal{F}\big((\nabla^2)^T(p^k-b_1^k)\big) + (\gamma_2/\gamma_1)\,\mathcal{F}\big(K^T(q^k-b_2^k)\big)}{\alpha/\gamma_1 + \mathcal{F}\big((\nabla^2)^T\nabla^2\big) + (\gamma_2/\gamma_1)\,\mathcal{F}\big(K^T K\big)} \right]$,
where $(\nabla^2)^T$ and $K^T$ denote the adjoints of $\nabla^2$ and $K$, respectively, and $\mathcal{F}(v)$ denotes the Fourier transform of $v$.
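As a concrete illustration, the Fourier-domain solve in (11) can be sketched in NumPy. This is only a sketch under simplifying assumptions: periodic boundary conditions, the operator $\nabla^2$ replaced by the discrete Laplacian (the paper's second-order operator may use a different stencil), and `psf_otf`/`lap_otf` denoting hypothetical precomputed transfer functions of $K$ and of the Laplacian.

```python
import numpy as np

def solve_u_fft(p, q, b1, b2, psf_otf, lap_otf, alpha, g1, g2):
    """One u-update in the spirit of (11): a pointwise division in the
    Fourier domain. The adjoints (nabla^2)^T and K^T become complex
    conjugates of the transfer functions. All arguments are 2-D arrays
    of the image size (psf_otf, lap_otf may be complex)."""
    num = np.conj(lap_otf) * np.fft.fft2(p - b1) \
        + (g2 / g1) * np.conj(psf_otf) * np.fft.fft2(q - b2)
    den = alpha / g1 + np.abs(lap_otf) ** 2 + (g2 / g1) * np.abs(psf_otf) ** 2
    return np.real(np.fft.ifft2(num / den))
```

With a trivial kernel (identity blur, zero Laplacian response) the update reduces to the pointwise formula $u = \gamma_2 c/(\alpha+\gamma_2)$, which is a quick sanity check on the implementation.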
Both the $p$- and $q$-subproblems admit closed-form solutions. In particular, the solution of the $p$-subproblem is given by the shrinkage operator,
(12) $p^{k+1}_{i,j} = \operatorname{shrink}\Big((\nabla^2 u^{k+1} + b_1^k)_{i,j}, \frac{1}{\gamma_1}\Big) = \max\Big(\big|(\nabla^2 u^{k+1} + b_1^k)_{i,j}\big| - \frac{1}{\gamma_1}, 0\Big)\, \dfrac{(\nabla^2 u^{k+1} + b_1^k)_{i,j}}{\big|(\nabla^2 u^{k+1} + b_1^k)_{i,j}\big|}$,
and the solution of the $q$-subproblem is obtained from the quadratic formula, i.e.,
(13) $q^{k+1} = \dfrac{Ku^{k+1} + b_2^k - \beta/\gamma_2}{2} + \sqrt{\Big(\dfrac{Ku^{k+1} + b_2^k - \beta/\gamma_2}{2}\Big)^2 + \dfrac{\beta f}{\gamma_2}}$.
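The two closed-form updates (12) and (13) translate directly into code. The following NumPy sketch is illustrative, and the function names are our own:

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding as in (12): shrink(x, t) = max(|x| - t, 0) * x/|x|."""
    mag = np.abs(x)
    return np.maximum(mag - t, 0.0) * np.where(mag > 0, x / np.maximum(mag, 1e-12), 0.0)

def update_q(Ku, b2, f, beta, g2):
    """Closed-form q-update (13): the positive root of the quadratic
    g2*q^2 + (beta - g2*(Ku + b2))*q - beta*f = 0."""
    c = (Ku + b2 - beta / g2) / 2.0
    return c + np.sqrt(c ** 2 + beta * f / g2)
```

A quick check: the returned `q` satisfies the optimality condition $\beta(1 - f/q) + \gamma_2(q - (Ku^{k+1}+b_2^k)) = 0$ of the $q$-subproblem.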
We summarize the overall algorithm in Algorithm 2.
Algorithm 2 (SBA for solving (4)).
Step 1. Input $\alpha,\beta,\gamma_1,\gamma_2$. Initialize $u^0=f$ and $p^0, q^0, b_1^0, b_2^0$. Set $k:=0$.
Step 2. Compute uk+1 by (11).
Step 3. Compute pk+1 by (12).
Step 4. Compute qk+1 by (13).
Step 5. Update (b1k+1,b2k+1) by the last two equations of (9).
Step 6. Stop if the stopping criterion is satisfied. Otherwise, let k≔k+1 and go to Step 2.
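Putting the steps together, a minimal sketch of Algorithm 2 might look as follows. It is restricted to the pure-denoising case $K = I$, with $\nabla^2$ simplified to the periodic discrete Laplacian, and the default parameter values are placeholders rather than the paper's tuned settings.

```python
import numpy as np

def sba_poisson_denoise(f, alpha=1e-3, beta=30.0, g1=0.05, g2=0.5,
                        iters=100, tol=1e-4):
    """Illustrative sketch of Algorithm 2 for K = I (denoising only)."""
    M, N = f.shape
    # Transfer function of the periodic 5-point Laplacian (stands in for nabla^2).
    ky, kx = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    lap_otf = -4 + 2 * np.cos(2 * np.pi * ky / M) + 2 * np.cos(2 * np.pi * kx / N)
    u, p, q = f.copy(), np.zeros_like(f), f.copy()
    b1, b2 = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        u_old = u
        # u-subproblem: FFT solve of (11) with K = I.
        num = np.conj(lap_otf) * np.fft.fft2(p - b1) + (g2 / g1) * np.fft.fft2(q - b2)
        den = alpha / g1 + np.abs(lap_otf) ** 2 + g2 / g1
        u = np.real(np.fft.ifft2(num / den))
        Lu = np.real(np.fft.ifft2(lap_otf * np.fft.fft2(u)))
        # p-subproblem: soft shrinkage (12).
        x = Lu + b1
        p = np.maximum(np.abs(x) - 1 / g1, 0.0) * np.sign(x)
        # q-subproblem: positive root of the quadratic (13).
        c = (u + b2 - beta / g2) / 2.0
        q = c + np.sqrt(c ** 2 + beta * f / g2)
        # Bregman updates.
        b1 += Lu - p
        b2 += u - q
        if np.linalg.norm(u - u_old) / max(np.linalg.norm(u), 1e-12) < tol:
            break
    return u
```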
3.2. Convergence Analysis
Since our model (4) is strongly convex, it has a unique solution, as guaranteed by Theorem 1. Below we give a rigorous proof that the proposed algorithm (Algorithm 2) converges to this unique solution. The following analysis stems from [24, 33], where the unconstrained split Bregman iteration was analyzed in detail.
Theorem 3.
Assume that $K$ is injective and $\alpha,\beta,\gamma_1,\gamma_2>0$. Then the sequence $\{u^k\}$ generated by Algorithm 2 converges to the unique solution $u^*$ of model (4).
Proof.
Let $\{u^k,p^k,q^k,b_1^k,b_2^k\}$ be the sequence generated by Algorithm 2. The first-order optimality conditions at $u^{k+1},p^{k+1},q^{k+1}$ give
(14)
$0 = \alpha u^{k+1} - \gamma_1(\nabla^2)^T(p^k - \nabla^2 u^{k+1} - b_1^k) - \gamma_2 K^T(q^k - Ku^{k+1} - b_2^k)$,
$0 = g^{k+1} + \gamma_1(p^{k+1} - \nabla^2 u^{k+1} - b_1^k)$,
$0 = \beta\big(1 - \frac{f}{q^{k+1}}\big) + \gamma_2(q^{k+1} - Ku^{k+1} - b_2^k)$,
$b_1^{k+1} = b_1^k + \nabla^2 u^{k+1} - p^{k+1}$,
$b_2^{k+1} = b_2^k + Ku^{k+1} - q^{k+1}$,
where $g^{k+1}\in\partial\|p^{k+1}\|_1$. On the other hand, the Karush-Kuhn-Tucker (KKT) condition for (4) reads
(15) $0 = (\nabla^2)^T g^* + \alpha u^* + \beta K^T h^*$,
where $g^*\in\partial\|p^*\|_1$ with $p^*=\nabla^2 u^*$, and $h^* = 1 - f/q^*$ with $q^*=Ku^*$. We further set $b_1^* = g^*/\gamma_1$ and $b_2^* = (\beta/\gamma_2)h^*$ so as to obtain the following set of conditions:
(16)
$0 = \alpha u^* - \gamma_1(\nabla^2)^T(p^* - \nabla^2 u^* - b_1^*) - \gamma_2 K^T(q^* - Ku^* - b_2^*)$,
$0 = g^* + \gamma_1(p^* - \nabla^2 u^* - b_1^*)$,
$0 = \beta\big(1 - \frac{f}{q^*}\big) + \gamma_2(q^* - Ku^* - b_2^*)$,
$b_1^* = b_1^* + \nabla^2 u^* - p^*$,
$b_2^* = b_2^* + Ku^* - q^*$.
Comparing (14) and (16), we conclude that $\{u^*,p^*,q^*,b_1^*,b_2^*\}$ is a fixed point of (9). We then introduce the error variables $u_e^k = u^k - u^*$, $p_e^k = p^k - p^*$, $q_e^k = q^k - q^*$, $b_{1e}^k = b_1^k - b_1^*$, and $b_{2e}^k = b_2^k - b_2^*$. Subtracting each equation in (16) from the corresponding one in (14) gives
(17)
$0 = \alpha u_e^{k+1} - \gamma_1(\nabla^2)^T(p_e^k - \nabla^2 u_e^{k+1} - b_{1e}^k) - \gamma_2 K^T(q_e^k - Ku_e^{k+1} - b_{2e}^k)$,
$0 = g^{k+1} - g^* + \gamma_1(p_e^{k+1} - \nabla^2 u_e^{k+1} - b_{1e}^k)$,
$0 = \dfrac{\beta f\, q_e^{k+1}}{q^* q^{k+1}} + \gamma_2(q_e^{k+1} - Ku_e^{k+1} - b_{2e}^k)$,
$b_{1e}^{k+1} = b_{1e}^k + \nabla^2 u_e^{k+1} - p_e^{k+1}$,
$b_{2e}^{k+1} = b_{2e}^k + Ku_e^{k+1} - q_e^{k+1}$.
Then, taking the inner product of the first three equations of (17) with $u_e^{k+1}$, $p_e^{k+1}$, and $q_e^{k+1}$, respectively, and squaring both sides of the last two equations, we obtain
(18) $0 = \alpha\|u_e^{k+1}\|_2^2 + \gamma_1\langle (\nabla^2)^T\nabla^2 u_e^{k+1}, u_e^{k+1}\rangle - \gamma_1\langle (\nabla^2)^T(p_e^k - b_{1e}^k), u_e^{k+1}\rangle + \gamma_2\langle K^T K u_e^{k+1}, u_e^{k+1}\rangle - \gamma_2\langle K^T(q_e^k - b_{2e}^k), u_e^{k+1}\rangle$,
(19)
$0 = \langle g^{k+1} - g^*, p_e^{k+1}\rangle + \gamma_1\|p_e^{k+1}\|_2^2 - \gamma_1\langle \nabla^2 u_e^{k+1} + b_{1e}^k, p_e^{k+1}\rangle$,
$0 = \beta\int_\Omega \dfrac{f (q_e^{k+1})^2}{q^* q^{k+1}}\,dx + \gamma_2\|q_e^{k+1}\|_2^2 - \gamma_2\langle Ku_e^{k+1} + b_{2e}^k, q_e^{k+1}\rangle$,
$\|b_{1e}^{k+1}\|_2^2 = \|b_{1e}^k\|_2^2 + \|\nabla^2 u_e^{k+1} - p_e^{k+1}\|_2^2 + 2\langle b_{1e}^k, \nabla^2 u_e^{k+1} - p_e^{k+1}\rangle$,
$\|b_{2e}^{k+1}\|_2^2 = \|b_{2e}^k\|_2^2 + \|Ku_e^{k+1} - q_e^{k+1}\|_2^2 + 2\langle b_{2e}^k, Ku_e^{k+1} - q_e^{k+1}\rangle$.
Obviously, the last two equations of (19) can be rewritten as
(20)
$\frac{\gamma_1}{2}\big(\|b_{1e}^k\|_2^2 - \|b_{1e}^{k+1}\|_2^2\big) = -\frac{\gamma_1}{2}\|\nabla^2 u_e^{k+1} - p_e^{k+1}\|_2^2 - \gamma_1\langle b_{1e}^k, \nabla^2 u_e^{k+1} - p_e^{k+1}\rangle$,
$\frac{\gamma_2}{2}\big(\|b_{2e}^k\|_2^2 - \|b_{2e}^{k+1}\|_2^2\big) = -\frac{\gamma_2}{2}\|Ku_e^{k+1} - q_e^{k+1}\|_2^2 - \gamma_2\langle b_{2e}^k, Ku_e^{k+1} - q_e^{k+1}\rangle$.
By combining (20) with the first three equations in (18)–(19), we have
(21) $\frac{\gamma_1}{2}\big(\|b_{1e}^k\|_2^2 - \|b_{1e}^{k+1}\|_2^2 + \|p_e^k\|_2^2 - \|p_e^{k+1}\|_2^2\big) + \frac{\gamma_2}{2}\big(\|b_{2e}^k\|_2^2 - \|b_{2e}^{k+1}\|_2^2 + \|q_e^k\|_2^2 - \|q_e^{k+1}\|_2^2\big) = \alpha\|u_e^{k+1}\|_2^2 + \langle g^{k+1}-g^*, p_e^{k+1}\rangle + \beta\int_\Omega \frac{f (q_e^{k+1})^2}{q^* q^{k+1}}\,dx + \frac{\gamma_1}{2}\|\nabla^2 u_e^{k+1} - p_e^k\|_2^2 + \frac{\gamma_2}{2}\|Ku_e^{k+1} - q_e^k\|_2^2$.
Summing (21) from $k=0$ to $k=N$ yields
(22) $\frac{\gamma_1}{2}\big(\|b_{1e}^0\|_2^2 - \|b_{1e}^{N+1}\|_2^2 + \|p_e^0\|_2^2 - \|p_e^{N+1}\|_2^2\big) + \frac{\gamma_2}{2}\big(\|b_{2e}^0\|_2^2 - \|b_{2e}^{N+1}\|_2^2 + \|q_e^0\|_2^2 - \|q_e^{N+1}\|_2^2\big) = \alpha\sum_{k=0}^N\|u_e^{k+1}\|_2^2 + \sum_{k=0}^N\langle g^{k+1}-g^*, p_e^{k+1}\rangle + \sum_{k=0}^N \beta\int_\Omega \frac{f(q_e^{k+1})^2}{q^* q^{k+1}}\,dx + \frac{\gamma_1}{2}\sum_{k=0}^N\|\nabla^2 u_e^{k+1} - p_e^k\|_2^2 + \frac{\gamma_2}{2}\sum_{k=0}^N\|Ku_e^{k+1} - q_e^k\|_2^2$.
Therefore, it is straightforward to have
(23) $\frac{\gamma_1}{2}\big(\|b_{1e}^0\|_2^2 + \|p_e^0\|_2^2\big) + \frac{\gamma_2}{2}\big(\|b_{2e}^0\|_2^2 + \|q_e^0\|_2^2\big) \ge \alpha\sum_{k=0}^N\|u_e^{k+1}\|_2^2 + \sum_{k=0}^N\langle g^{k+1}-g^*, p_e^{k+1}\rangle + \sum_{k=0}^N \beta\int_\Omega \frac{f(q_e^{k+1})^2}{q^* q^{k+1}}\,dx + \frac{\gamma_1}{2}\sum_{k=0}^N\|\nabla^2 u_e^{k+1} - p_e^k\|_2^2 + \frac{\gamma_2}{2}\sum_{k=0}^N\|Ku_e^{k+1} - q_e^k\|_2^2$.
Since $\|\cdot\|_1$ is convex, $g^{k+1}\in\partial\|p^{k+1}\|_1$, and $g^*\in\partial\|p^*\|_1$, we have
(24) $\langle g^{k+1}-g^*, p^{k+1}-p^*\rangle \ge 0, \quad \forall k$.
Note also that $\{q^k\}_{k=1}^\infty$ is a bounded sequence in $L^1(\Omega)$. Therefore, all terms on the right-hand side of (23) are nonnegative. Letting $N\to\infty$, inequality (23) yields
(25) $\sum_{k=0}^{+\infty}\|u_e^{k+1}\|_2^2 < +\infty$, $\sum_{k=0}^{+\infty}\langle g^{k+1}-g^*, p_e^{k+1}\rangle < +\infty$, $\sum_{k=0}^{+\infty}\|q_e^{k+1}\|_2^2 < +\infty$, $\sum_{k=0}^{+\infty}\|\nabla^2 u_e^{k+1}-p_e^k\|_2^2 < +\infty$, $\sum_{k=0}^{+\infty}\|Ku_e^{k+1}-q_e^k\|_2^2 < +\infty$,
which implies that
(26) $\lim_{k\to+\infty}\|u^k-u^*\|_2 = 0$, $\lim_{k\to+\infty}\langle g^k-g^*, p^k-p^*\rangle = 0$, $\lim_{k\to+\infty}\|q^k-q^*\|_2 = 0$, $\lim_{k\to+\infty}\|\nabla^2 u^{k+1}-p^k\|_2 = 0$, $\lim_{k\to+\infty}\|Ku^{k+1}-q^k\|_2 = 0$.
For any convex function $J$, the Bregman distance satisfies
(27) $B_J^p(u,v) + B_J^q(v,u) = \langle q-p, u-v\rangle, \quad \forall p\in\partial J(v),\ \forall q\in\partial J(u)$.
This, together with the second equality of (26) and the nonnegativity of the Bregman distance, implies that
(28) $\lim_{k\to+\infty}\big(\|p^k\|_1 - \|p^*\|_1 - \langle g^*, p^k-p^*\rangle\big) = 0$.
Recalling that $p^*=\nabla^2 u^*$ and using the fourth equality of (26), we obtain
(29) $\lim_{k\to+\infty}\big(\|\nabla^2 u^k\|_1 - \|\nabla^2 u^*\|_1 - \langle g^*, \nabla^2 u^k - \nabla^2 u^*\rangle\big) = 0$.
In the same way, we also have
(30) $\lim_{k\to+\infty}\big(\|u^k\|_2^2 - \|u^*\|_2^2 - 2\langle u^*, u^k-u^*\rangle\big) = 0$, $\lim_{k\to+\infty}\big(G(Ku^k) - G(Ku^*) - \langle h^*, Ku^k - Ku^*\rangle\big) = 0$.
Combining these with (29) yields
(31) $\lim_{k\to+\infty}\Big(\|\nabla^2 u^k\|_1 + \frac{\alpha}{2}\|u^k\|_2^2 + \beta G(Ku^k) - \|\nabla^2 u^*\|_1 - \frac{\alpha}{2}\|u^*\|_2^2 - \beta G(Ku^*) - \big\langle (\nabla^2)^T g^* + \alpha u^* + \beta K^T h^*, u^k-u^*\big\rangle\Big) = 0$.
Finally, by this and equality (15), we get
(32) $\lim_{k\to+\infty}\Big(\|\nabla^2 u^k\|_1 + \frac{\alpha}{2}\|u^k\|_2^2 + \beta G(Ku^k)\Big) = \|\nabla^2 u^*\|_1 + \frac{\alpha}{2}\|u^*\|_2^2 + \beta G(Ku^*)$.
4. Numerical Experiments
We present experimental results to show the performance of our proposed method (4), referred to as BLLTSB, in comparison with the total bounded variation-based Poisson restoration model (3), referred to as BTVSB. All experiments were conducted in MATLAB on a PC with a 3.4 GHz CPU and 16 GB RAM. To assess the restoration performance quantitatively, we use the peak signal-to-noise ratio (PSNR), defined by
(33) $\mathrm{PSNR} = 10\log_{10}\dfrac{255^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}(u_{i,j}-I_{i,j})^2}$,
where $I_{i,j}$ and $u_{i,j}$ are the pixel values of the original and restored images, respectively. We also use the structural similarity index (SSIM) [34] for quantitative comparison. The stopping criterion in all experiments is
(34) $\dfrac{\|u^{k+1}-u^k\|_2}{\|u^{k+1}\|_2} < 10^{-4}$,
or the iteration count reaching 500.
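The PSNR of (33) is straightforward to compute; a minimal sketch for 8-bit images (peak value 255):

```python
import numpy as np

def psnr(u, ref):
    """PSNR as in (33): 10*log10(255^2 / MSE) for 8-bit-range images."""
    mse = np.mean((np.asarray(u, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

For instance, an image offset from the reference by exactly one gray level has MSE 1 and hence PSNR $20\log_{10}255 \approx 48.13$ dB.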
We choose Lena (256×256), Train (512×357), and Barbara (512×512) as the test images, as shown in Figure 1, and two types of blurring kernels, Gaussian blur and motion blur (generated by the MATLAB commands fspecial('gaussian', [5,5], 3) and fspecial('motion', 10, 45), respectively). The blurry and noisy images are simulated as follows. The original image I is first convolved with the blur kernel K and then corrupted by Poisson noise (generated by the MATLAB function poissrnd). Since Poisson noise is data-dependent, the noise level of the observed images depends on the pixel intensity. To test different noise levels, we use poissrnd(B/σ)∗σ, where B is the blurred image and σ is set to 1 or 5; a larger σ corresponds to stronger noise.
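The degradation pipeline above can be sketched in NumPy, mirroring the MATLAB poissrnd(B/σ)∗σ construction. Here `psf_otf` is an assumed precomputed transfer function of the blur kernel, and periodic convolution is used for simplicity:

```python
import numpy as np

def simulate_observation(I, psf_otf, sigma=1.0, rng=None):
    """Blur the clean image I, then apply scaled Poisson noise:
    poissrnd(B/sigma)*sigma, so larger sigma means stronger noise."""
    rng = np.random.default_rng() if rng is None else rng
    B = np.real(np.fft.ifft2(psf_otf * np.fft.fft2(I)))  # K * I (periodic)
    return rng.poisson(np.maximum(B, 0.0) / sigma) * sigma
```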
Test images. From left to right: Lena, Train, and Barbara.
The choice of parameters significantly affects the convergence rate and restoration quality of both methods. In our experiments, we tune the parameters α, β, γ1, γ2 of the BTVSB as follows: starting from a reasonable initial guess for α, γ1, γ2, we try different values of β and keep the one that yields the highest PSNR; we then repeat this procedure for each of the other parameters in turn. The same strategy is adopted for the parameters of our BLLTSB. During parameter tuning, the algorithms are terminated once the stopping criterion is satisfied.
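One sweep of this one-parameter-at-a-time strategy can be sketched as follows; `run` and its arguments are hypothetical placeholders for a routine that restores an image with the given parameters and returns the restored image together with its PSNR.

```python
import numpy as np

def tune_one_parameter(run, candidates, fixed):
    """Try each candidate value for a single parameter (here called
    'beta' for illustration), keeping the other parameters fixed, and
    return the value with the highest PSNR."""
    best_val, best_psnr = None, -np.inf
    for v in candidates:
        _, score = run(beta=v, **fixed)
        if score > best_psnr:
            best_val, best_psnr = v, score
    return best_val, best_psnr
```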
Experiment 1.
In this example, we use the Lena image as the test image. We first restore the image corrupted by Gaussian blur and Poisson noise with σ=1. For the BTVSB, we set α=10^{-3}, β=61, γ1=4.5, γ2=0.012. For our BLLTSB, we use α=10^{-3}, β=98, γ1=0.05, γ2=0.8. We then add Poisson noise with σ=5 to the blurred image. For the BTVSB and the BLLTSB, we choose α=10^{-3}, β=17, γ1=2.8, γ2=0.005 and α=10^{-3}, β=24, γ1=0.06, γ2=0.2, respectively. The numerical results are listed in Table 1, and the corresponding visual results are shown in Figure 2.
Table 1: Numerical results with different blurring kernels and noise levels.

| Image   | Blur     | σ | BTVSB PSNR | BTVSB SSIM | BTVSB CPU(s) | BLLTSB PSNR | BLLTSB SSIM | BLLTSB CPU(s) |
|---------|----------|---|------------|------------|--------------|-------------|-------------|---------------|
| Lena    | Gaussian | 1 | 28.91      | 0.88       | 10.13        | 28.69       | 0.88        | 1.86          |
| Lena    | Gaussian | 5 | 26.93      | 0.83       | 9.14         | 26.69       | 0.83        | 2.30          |
| Train   | Gaussian | 1 | 25.91      | 0.85       | 33.14        | 25.66       | 0.84        | 12.59         |
| Train   | Gaussian | 5 | 24.59      | 0.80       | 38.94        | 24.31       | 0.79        | 20.27         |
| Barbara | Motion   | 1 | 23.96      | 0.76       | 59.64        | 23.84       | 0.75        | 11.84         |
(a1) Blurry and noisy Lena image with σ=1; (a2), (a3) restored Lena images by BTVSB and BLLTSB, respectively; (a4), (a5) the plots of PSNR and relative error, respectively; (b1)-(b5) the corresponding case with σ=5.
Experiment 2.
In our second example, the Train image is used as the test image. As in the previous example, the original image is degraded by the same Gaussian blur and Poisson noise with σ=1. The parameters are set to α=5·10^{-4}, β=72, γ1=1, γ2=0.06 for the BTVSB and α=5·10^{-4}, β=135, γ1=0.03, γ2=0.4 for the BLLTSB. We then consider stronger Poisson noise with σ=5. The parameters for the BTVSB are α=10^{-4}, β=18, γ1=1.4, γ2=0.007, and for our proposed BLLTSB we choose α=10^{-4}, β=25, γ1=0.09, γ2=0.1. Numerical results are also listed in Table 1. The visual results are presented in Figure 3.
(a1) Blurry and noisy Train image with σ=1; (a2), (a3) restored Train images by BTVSB and BLLTSB, respectively; (a4), (a5) the plots of PSNR and relative error, respectively; (b1)-(b5) the corresponding case with σ=5.
Since the proposed model (4) involves a higher-order derivative than the previous model (3), the BLLTSB spends more time per iteration than the BTVSB. However, the BLLTSB needs fewer iterations under the same stopping criterion, so it is faster overall, as shown in Table 1. From the plots of PSNR and relative error versus CPU time in Figures 2 and 3, we can also conclude that our BLLTSB is faster than the BTVSB. In terms of PSNR and SSIM, the restoration results of the two methods are similar. However, Figures 2 and 3 show that the images restored by the BTVSB suffer from staircase effects, and the higher the noise level, the more obvious these effects become. This is most easily observed on the face of Lena and the head of the train.
To further show the advantages of our BLLTSB, we consider restoring an image degraded by motion blur in the following experiment.
Experiment 3.
In our third example, we test the Barbara image. Unlike the previous two examples, the original image is corrupted by motion blur and then by Poisson noise with σ=1. Here, the parameters for the BTVSB and our BLLTSB are set to α=10^{-3}, β=60, γ1=7, γ2=0.008 and α=10^{-3}, β=75, γ1=0.08, γ2=0.1, respectively. The results of this experiment are given in Table 1 and Figure 4.
(a1) Blurry and noisy Barbara image with σ=1; (a2), (a3) restored Barbara images by BTVSB and BLLTSB, respectively; (a4), (a5) the plots of PSNR and relative error versus CPU time, respectively.
As shown in Figures 4(a4) and 4(a5), and also in Table 1, our BLLTSB is faster than the BTVSB. Moreover, Figures 4(a2) and 4(a3) show that our model successfully overcomes the staircase effects and produces a relatively natural result.
5. Conclusion
In this paper, we have proposed a new model based on high-order total bounded variation for restoring blurred images corrupted by Poisson noise. We proved existence and uniqueness of its solution. Furthermore, we proposed a split Bregman algorithm to solve this model and proved its convergence. Compared with the previous method for the total bounded variation-based Poissonian image restoration model, experimental results show that our model overcomes the staircase effects and that the proposed algorithm is fast.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
We would like to thank Dr. Zhi-Feng Pang of Henan University for his useful suggestion. This work was supported by the Science and Technology Project of Jiangxi Provincial Department of Education (GJJ171015), the China Scholarship Council (201708360066), the NNSF of China Grants 61865012 and 11701260, and the Natural Science Foundation of Jiangxi Province (20151BAB207010).
References

[1] Y. Vardi, L. A. Shepp, L. Kaufman, "A statistical model for positron emission tomography," 1985. doi:10.1080/01621459.1985.10477119.
[2] N. Dey, L. Blanc-Féraud, C. Zimmer, "3D microscopy deconvolution using Richardson-Lucy algorithm with total variation regularization," 2004.
[3] H. Lantéri, C. Theys, "Restoration of astrophysical images: the case of Poisson data with additive Gaussian noise," pp. 2500–2513, 2005.
[4] L. B. Lucy, "An iterative technique for the rectification of observed distributions," 79, 745, 1974. doi:10.1086/111605.
[5] W. H. Richardson, "Bayesian-based iterative method of image restoration," 62(1), 55–59, 1972. doi:10.1364/josa.62.000055.
[6] A. N. Tikhonov, A. Goncharsky, V. V. Stepanov, A. G. Yagola, Kluwer Academic, Dordrecht, The Netherlands, 2013.
[7] L. I. Rudin, S. Osher, E. Fatemi, "Nonlinear total variation based noise removal algorithms," 60(1–4), 259–268, 1992. doi:10.1016/0167-2789(92)90242-F.
[8] T. Chan, A. Marquina, P. Mulet, "High-order total variation-based image restoration," 22(2), 503–516, 2000. doi:10.1137/S1064827598344169.
[9] T. Le, R. Chartrand, T. J. Asaki, "A variational approach to reconstructing images corrupted by Poisson noise," 27(3), 257–263, 2007. doi:10.1007/s10851-007-0652-y.
[10] A. Sawatzky, C. Brune, J. Müller, M. Burger, "Total variation processing of images with Poisson statistics," Lecture Notes in Computer Science 5702, Springer, Berlin, Heidelberg, pp. 533–540, 2009. doi:10.1007/978-3-642-03767-2_65.
[11] M. A. T. Figueiredo, J. M. Bioucas-Dias, "Deconvolution of Poissonian images using variable splitting and augmented Lagrangian optimization," Proceedings of the IEEE/SP 15th Workshop on Statistical Signal Processing, pp. 733–736, 2009.
[12] M. A. Figueiredo, J. M. Bioucas-Dias, "Restoration of Poissonian images using alternating direction optimization," 19(12), 3133–3145, 2010. doi:10.1109/TIP.2010.2053941.
[13] S. Setzer, G. Steidl, T. Teuber, "Deblurring Poissonian images by split Bregman techniques," 21(3), 193–199, 2010. doi:10.1016/j.jvcir.2009.10.006.
[14] R. H. Chan, K. Chen, "Multilevel algorithm for a Poisson noise removal model with total-variation regularization," 84(8), 1183–1198, 2007. doi:10.1080/00207160701450390.
[15] W. Wang, C. He, "A fast and effective algorithm for a Poisson denoising model with total variation," 24(3), 269–273, 2017. doi:10.1109/LSP.2017.2654480.
[16] J. M. Bioucas-Dias, M. A. Figueiredo, "Multiplicative noise removal using variable splitting and constrained optimization," 19(7), 1720–1730, 2010. doi:10.1109/TIP.2010.2045029.
[17] H.-Z. Chen, J.-P. Song, X.-C. Tai, "A dual algorithm for minimization of the LLT model," 31(1–3), 115–130, 2009. doi:10.1007/s10444-008-9097-0.
[18] F. F. Dong, H. L. Zhang, D. X. Kong, "Nonlocal total variation models for multiplicative noise removal using split Bregman iteration," 55(3–4), 939–954, 2012. doi:10.1016/j.mcm.2011.09.021.
[19] C. Wu, X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," 3, 300–339, 2010. doi:10.1137/090767558.
[20] C. Wu, J. Zhang, X.-C. Tai, "Augmented Lagrangian method for total variation restoration with non-quadratic fidelity," 5(1), 237–261, 2011. doi:10.3934/ipi.2011.5.237.
[21] S. Zhang, J. Li, H.-C. Li, C. Deng, A. Plaza, "Spectral-spatial weighted sparse regression for hyperspectral image unmixing," 56(6), 3265–3276, 2018. doi:10.1109/TGRS.2018.2797200.
[22] Y. Wen, R. H. Chan, T. Zeng, "Primal-dual algorithms for total variation based image restoration under Poisson noise," 59(1), 141–160, 2016. doi:10.1007/s11425-015-5079-0.
[23] G. Chavent, K. Kunisch, "Regularization of linear least squares problems by total bounded variation," 2, 359–376, 1997. doi:10.1051/cocv:1997113.
[24] X. Liu, L. Huang, "Total bounded variation-based Poissonian images recovery by split Bregman iteration," 35(5), 520–529, 2012. doi:10.1002/mma.1588.
[25] M. Lysaker, A. Lundervold, X.-C. Tai, "Noise removal using fourth-order partial differential equation with applications to medical magnetic resonance images in space and time," 12(12), 1579–1589, 2003. doi:10.1109/TIP.2003.819229.
[26] J. Zhang, R. Chen, C. Deng, S. Wang, "Fast linearized augmented Lagrangian method for Euler's elastica model," 10(1), 98–115, 2017. doi:10.4208/nmtma.2017.m1611.
[27] J. Zhang, Y.-F. Yang, "Nonlinear multigrid method for solving the LLT model," 219(10), 4964–4976, 2013. doi:10.1016/j.amc.2012.11.060.
[28] W. Zhu, T. Chan, "Image denoising using mean curvature of image surface," 5(1), 1–32, 2012. doi:10.1137/110822268.
[29] L. Jiang, J. Huang, X.-G. Lv, J. Liu, "Restoring Poissonian images by a combined first-order and second-order variation approach," 2013, Article ID 274573, 2013. doi:10.1155/2013/274573.
[30] L. Jiang, J. Huang, X.-G. Lv, J. Liu, "Alternating direction method for the high-order total variation-based Poisson noise removal problem," 69(3), 495–516, 2015. doi:10.1007/s11075-014-9908-y.
[31] T. Goldstein, S. Osher, "The split Bregman method for L1-regularized problems," 2(2), 323–343, 2009. doi:10.1137/080725891.
[32] X. C. Tai, C. Wu, "Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model," LNCS 5567, pp. 502–513, 2009.
[33] J. Cai, S. Osher, Z. Shen, "Split Bregman methods and frame based image restoration," 8(2), 337–369, 2009. doi:10.1137/090753504.
[34] Z. Wang, A. C. Bovik, H. R. Sheikh, E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," 13(4), 600–612, 2004. doi:10.1109/TIP.2003.819861.