We consider simultaneously estimating the restored image and a spatially dependent regularization parameter, letting the two estimates mutually benefit from each other. Based on this idea, we refresh two well-known image denoising models: the LLT model proposed by Lysaker et al. (2003) and the hybrid model proposed by Li et al. (2007). The resulting models better preserve image regions containing textures and fine details while still sufficiently smoothing homogeneous features. To solve the proposed models efficiently, we use an alternating minimization scheme that resolves the original nonconvex problem into two strictly convex ones. Preliminary convergence properties are also presented. Numerical experiments are reported to demonstrate the effectiveness of the proposed models and the efficiency of our numerical scheme.
1. Introduction
Image denoising is a fundamental problem in image processing and computer vision. In many real-world applications, it forms a significant preliminary step for subsequent image processing operations, such as object recognition, medical image analysis, surveillance, and many more. During acquisition and transmission, images are often corrupted by Gaussian noise. In this problem, the degradation process is modeled as
(1)f=uexact+η,
where f, uexact, and η represent the observed image, the original image, and the additive white Gaussian noise, respectively.
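In the discrete setting adopted later in the paper, the degradation model (1) simply adds an array of Gaussian samples to the clean image. A minimal NumPy sketch (the piecewise-constant test scene and the value σ = 0.1 are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple 64x64 piecewise-constant scene standing in for u_exact.
u_exact = np.zeros((64, 64))
u_exact[16:48, 16:48] = 1.0

sigma = 0.1                                    # assumed noise standard deviation
eta = sigma * rng.standard_normal(u_exact.shape)
f = u_exact + eta                              # observed image, as in (1)
```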
The objective of image denoising is to compute a good estimate of uexact from f. To obtain a reasonable approximated solution from (1), the regularization method, generally used as a numerical technique for stabilizing inverse problems, has been increasingly applied to image denoising over the past decades. A large class of regularization based denoising methods unifies under the following framework:
(2)minuE(u)=F(u)+αR(u),
where u denotes the restored image to be estimated. In the right-hand side of (2), F(u) represents the fidelity term, which measures the closeness of the estimate to the data. The functional R(u) is called the regularization term pushing u to exhibit some a priori expected features. Parameter α in (2) is known as the regularization parameter, which balances the tradeoff between the two terms.
One key topic in regularization methods is the choice of the regularizer. Among many regularization based denoising models, the total variation regularization, proposed by Rudin, Osher, and Fatemi (ROF) [1], has won tremendous success due to its edge-preserving property. In the ROF model, the restoration result u is generated by solving the following minimization problem:
(3)minuE(u)=∫Ω(u-f)2dx+αRROF(u),
where the TV-term RROF(u)=∫Ω|Du| denotes the total variation of u (see [2] for more details) and Ω denotes the image domain. A remarkable aspect of the ROF model is that the TV-term does not penalize the discontinuities in u; see, for example, [3]. This property allows us to restore the edges of the original image. However, the main disadvantage of the ROF model is the so-called staircase effect (smooth regions are transformed into piecewise constant regions), a phenomenon long observed in the literature [4–6]. As a consequence, the restoration result is unsatisfactory to the eye due to the loss of texture details and the generation of artificial edges that do not exist in the true image. To overcome this effect, models based on high-order PDEs have been proposed in the literature; see, for example, [7–11]. For instance, Lysaker, Lundervold, and Tai [8] proposed a fourth-order PDE model (termed as the LLT model) with the form
(4)minuE(u)=∫Ω(u-f)2dx+αRLLT(u),
where RLLT(u)=∫Ω|∇2u|dx is a fourth-order filter. From a theoretical point of view [12], it has been shown that fourth-order PDEs are superior to second-order PDEs in some aspects, including avoiding the staircase effect. However, this type of filters usually blurs the edges of the original image and suffers from the speckle effect in homogeneous regions. Since the ROF model and the LLT model are of both merits and drawbacks, it may be desirable to promote solutions that simultaneously exhibit properties that are enforced by both regularizers. This is the basic idea of [13] proposed by Lysaker and Tai, in which they studied an iterative algorithm based on combining the results generated by (3) and (4). But their method needs to solve two separate PDEs and their combination is not quite intuitive. Li et al. [14] proposed a hybrid model in the following form:
(5)minuE(u)=∫Ω(u-f)2dx+α(∫Ω(1-g)|Du|+∫Ωg|∇2u|dx),
where the function g is chosen as an edge detector 1/(1+γ+k|∇Gσ*f|2) to control the relative weights of the two regularizers (see [14] for details). By their selection of g, the second-order filter plays the dominant role where |∇Gσ*f| is large (regions including sharp features), whereas the fourth-order filter plays the dominant role where |∇Gσ*f| is small (homogeneous regions). Therefore, (5) reaps the benefits of both regularizers. More related works on the combination of the ROF model and the LLT model can be found in [15–17].
Another crucial issue in the regularization process is the suitable selection of the regularization parameter. In regularization models, the regularization parameter controls the relative weights of the data fidelity and regularization terms. However, due to the inhomogeneous distribution of cartoon, textures, and small details in an image (in terms of variance), a global constant parameter may not suit features of such different scales. According to the cartoon pyramid model studied in [18], as the regularization parameter goes from small to large values, the corresponding solutions generated by the ROF model range from undersmoothed (textures are preserved, while noise remains almost unchanged in homogeneous regions) to oversmoothed (noise is reduced well, but significant details are lost). This suggests that a spatially dependent regularization parameter, which imposes a group of constraints adapted to different regions of the image instead of a single global constraint, is desirable for obtaining higher quality results. To this end, denoising methods with a spatially dependent regularization parameter have been extensively studied; see, for example, [18–22]. In these works, the automated selection strategies for the regularization parameter are based on local variance measures [18, 20–22] and the local statistical characteristics of the noise [21, 22].
Note that, in models (3), (4), and (5), the selection of the regularization parameter never accommodates the information of the current restored image. This triggers us to seek a new approach for regularization parameter estimation. Our main motivation is to simultaneously estimate both the restored image and the regularization parameter which mutually benefit from each other during the denoising procedure. We propose a general model with the following form:
(6)minu,gE(u,g)=F(u)+F(g)+R(u,g),
where g denotes a spatially dependent regularization parameter, F(u) and F(g) are, respectively, fidelity terms for the two variables, and the bivariate function R(u,g) represents a regularizer. The advantages of (6) are as follows. First, instead of a global constant, a spatially dependent regularization parameter adapts more flexibly to image features of different scales. Second, since the two variables are estimated simultaneously, the regularization parameter is updated more reasonably by exploiting the more accurate restored image instead of the noise-corrupted observation. We remark that (6) is a general-purpose model which can incorporate various classical methods. Specifically, we focus on the fourth-order filter in (4) and the hybrid regularizer in (5). The refreshed models are named Models 1 and 2, respectively. Model 1 can suppress the speckle effect caused by the LLT model while overregularizing textures less. Model 2 has the advantage of better restoring textures and homogeneous regions while preserving edges. To overcome the nonconvexity of our models, we utilize an alternating minimization scheme that resolves the original nonconvex problem into two strictly convex ones. Thus, our models can be asymptotically solved.
The outline of the rest of the paper is as follows. In the next section, we give notations and discretizations. In Section 3, we introduce our models with some discussions. The following Section 4 presents the numerical scheme for solving the proposed models. In Section 5, numerical experiments are given to demonstrate the performance of the proposed models. Finally, we conclude the paper in Section 6.
2. Notations and Discretizations
From now on, we will restrict our attention to the discrete setting. We first introduce some notations. Without loss of generality, we assume that all the images in this paper are grayscale and have a square domain. Then we represent an image u as an n×n matrix, where ui,j represents the intensity value of u at pixel (i,j), for i,j=1,…,n. For the sake of simplicity, we assume that the image is periodically extended, so that the FFT can be adopted in our algorithm. It should be pointed out that adapting to other boundary conditions is not difficult in principle. In the rest of this paper, we let ∥·∥2, ∥·∥F, and ∘ denote the 2-norm, the Frobenius norm, and the Hadamard product, respectively. Let X denote the Euclidean space ℝn×n. The usual inner product and Euclidean norm of X are denoted as 〈·,·〉X and ∥·∥X, respectively. Denote by Y the space X×X equipped with the inner product 〈·,·〉Y leading canonically to the norm ∥·∥Y; that is, for p=(p1,p2)∈Y and q=(q1,q2)∈Y,
(7)〈p,q〉Y=〈p1,q1〉X+〈p2,q2〉X,∥p∥Y=√〈p,p〉Y.
Moreover, for y=(y1,y2)∈Y, |y| denotes the n×n matrix whose element |y|i,j is equal to ∥yi,j∥2 with yi,j=(yi,j1,yi,j2). We denote the space Y×Y as Z. The definitions of the inner product 〈·,·〉Z, the norm ∥·∥Z, and |z| are analogous to those of Y.
Now we introduce a discretized version of some necessary operators. For u∈X, we introduce forward and backward difference operators as follows:
(8)(D°x+u)i,j = ui,j+1-ui,j for 1≤j≤n-1, and ui,1-ui,n for j=n;
(D°y+u)i,j = ui+1,j-ui,j for 1≤i≤n-1, and u1,j-un,j for i=n;
(D°x-u)i,j = ui,j-ui,j-1 for 2≤j≤n, and ui,1-ui,n for j=1;
(D°y-u)i,j = ui,j-ui-1,j for 2≤i≤n, and u1,j-un,j for i=1.
Second-order difference operators can be expressed by using a recursive application of first-order difference operators; that is, the operator D°xx-+ is defined by
(9)(D°xx-+u)i,j:=(D°x-(D°x+u))i,j.
Other second order difference operators used in this paper such as D°xx+-, D°xy++, D°yx++, D°xy--, D°yx--, D°yy-+, and D°yy+- can be similarly defined. Based on the definitions above, we can introduce the discrete gradient operator. For u∈X, ∇u is a vector of Y, which is given by
(10)∇u=(D°x+u,D°y+u).
The discrete total variation of u is then given by
(11)RROF(u)=∑1≤i,j≤n|∇u|i,j.
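With periodic boundary conditions, the forward differences (8) and the discrete total variation (11) translate directly to NumPy via np.roll; a minimal sketch assuming the image is a 2D array:

```python
import numpy as np

def grad(u):
    """Discrete gradient (10) built from the periodic forward differences (8)."""
    dx = np.roll(u, -1, axis=1) - u   # (D_x^+ u)_{i,j} = u_{i,j+1} - u_{i,j}
    dy = np.roll(u, -1, axis=0) - u   # (D_y^+ u)_{i,j} = u_{i+1,j} - u_{i,j}
    return dx, dy

def tv(u):
    """Discrete total variation (11): sum of pointwise gradient magnitudes."""
    dx, dy = grad(u)
    return np.sum(np.sqrt(dx ** 2 + dy ** 2))
```

The roll with wrap-around implements exactly the periodic extension assumed in Section 2, so the j = n and i = n cases of (8) need no special handling.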
To discretize the fourth-order filter in the LLT model (4), we have to introduce the discrete Hessian operator. Here we adopt the definition in [23]. The discrete Hessian operator is a mapping H:X→Z, and for u∈X, Hu is defined by
(12)Hu=(D°xx-+uD°xy++uD°yx++uD°yy-+u).
Then, RLLT(u) can be discretized as
(13)RLLT(u)=∑1≤i,j≤n|Hu|i,j.
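The four entries of the discrete Hessian (12) compose the first-order differences as in (9); a sketch (the operator ordering in the mixed differences, D_x applied after D_y versus the reverse, is our reading of the notation):

```python
import numpy as np

def dxp(u): return np.roll(u, -1, axis=1) - u   # D_x^+
def dxm(u): return u - np.roll(u, 1, axis=1)    # D_x^-
def dyp(u): return np.roll(u, -1, axis=0) - u   # D_y^+
def dym(u): return u - np.roll(u, 1, axis=0)    # D_y^-

def hessian(u):
    """Discrete Hessian (12): (D_xx^{-+}u, D_xy^{++}u, D_yx^{++}u, D_yy^{-+}u)."""
    return dxm(dxp(u)), dxp(dyp(u)), dyp(dxp(u)), dym(dyp(u))

def r_llt(u):
    """Discretized LLT regularizer (13): sum of pointwise Hessian magnitudes."""
    return np.sum(np.sqrt(sum(h ** 2 for h in hessian(u))))
```

Since the periodic shift operators commute, the two mixed entries coincide and the discrete Hessian is symmetric at every pixel.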
We also introduce two important operators: div:Y→X and H*:Z→X, that is, the adjoint operator of -∇ and H, respectively. By analogy with the continuous setting, for u∈X, y∈Y, and z∈Z, we want them to satisfy
(14)〈divy,u〉X=〈y,-∇u〉Y,〈H*z,u〉X=〈z,Hu〉Z.
Then they are formulated as follows:
(15)divy=D°x-y1+D°y-y2,H*z=D°xx+-z11+D°yx--z12+D°xy--z21+D°yy+-z22.
Finally, the composite operators Δ=div·∇ and H*H are also used.
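With periodic boundaries the adjoint relations (14) hold exactly, not just up to discretization error; a quick numerical check of the divergence identity on random data (a sketch, not code from the paper):

```python
import numpy as np

def dxp(u): return np.roll(u, -1, axis=1) - u   # D_x^+
def dxm(u): return u - np.roll(u, 1, axis=1)    # D_x^-
def dyp(u): return np.roll(u, -1, axis=0) - u   # D_y^+
def dym(u): return u - np.roll(u, 1, axis=0)    # D_y^-

rng = np.random.default_rng(1)
n = 16
u = rng.standard_normal((n, n))
y1 = rng.standard_normal((n, n))
y2 = rng.standard_normal((n, n))

div_y = dxm(y1) + dym(y2)                       # (15): div y = D_x^- y1 + D_y^- y2
lhs = np.sum(div_y * u)                         # <div y, u>_X
rhs = np.sum(y1 * (-dxp(u))) + np.sum(y2 * (-dyp(u)))  # <y, -grad u>_Y
```

The same style of check applies to the second relation in (14) with H and H*.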
3. The Proposed Models and Discussion
3.1. The Proposed Models
Arising from model (6) and exploiting the benefits of the LLT model (4) and the hybrid model (5), we consider and study the following models.
Model 1.
One has
(16)minu,g∈XE1(u,g)=12∥u-f∥X2+α∥g-Mα·1∥X2+∑1≤i,j≤ngi,j2(K(|Hu|))i,j.
Model 2.
One has
(17)minu,g1,g2∈XE2(u,g1,g2)=12∥u-f∥X2+α(∥g1-Mα·1∥X2+∥g2-Mα·1∥X2)+∑1≤i,j≤n(g1)i,j2(K(|∇u|))i,j+(g2)i,j2(K(|Hu|))i,j.
In the above two models, α and M are positive parameters, 1 represents the matrix whose elements are equal to 1, and K:X→X denotes a discrete mean filter. More precisely, we choose an odd integer r and define an r-by-r window centered at pixel (i,j) (with a periodic extension at the boundary); that is,
(18)Ωi,jr={(s,t): min(|s-i|,n-|s-i|)≤(r-1)/2, min(|t-j|,n-|t-j|)≤(r-1)/2},
where s,t=1,…,n. Then for u∈X, K(u) is given by
(19)(K(u))i,j=1r2∑(s,t)∈Ωi,jrus,t.
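The mean filter K of (18)-(19) can be implemented by accumulating periodic shifts of the image over the r-by-r window; a sketch:

```python
import numpy as np

def mean_filter(u, r):
    """Discrete mean filter (19) over the periodic r-by-r window (18); r odd."""
    h = (r - 1) // 2
    out = np.zeros_like(u, dtype=float)
    for s in range(-h, h + 1):
        for t in range(-h, h + 1):
            out += np.roll(np.roll(u, s, axis=0), t, axis=1)
    return out / r ** 2
```

For r = 1 the double loop reduces to the single unshifted term, so K coincides with the identity operator, as noted above.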
It is immediately clear that if r=1, K coincides with the identity operator. We start our investigations of the proposed models by the following lemma.
Lemma 1.
For any u,v∈X, one has
(20)∑1≤i,j≤nui,j(K(v))i,j=∑1≤i,j≤n(K(u))i,jvi,j.
Proof.
We define the following function k:ℝ2→ℝ:
(21)k(x,y) = 1/r2 if min(|x|,n-|x|)≤(r-1)/2 and min(|y|,n-|y|)≤(r-1)/2, and 0 otherwise.
From the definition of k, it is immediate to see that k(x,y)=k(-x,-y). Thus, we have
(22)∑1≤i,j≤nui,j(K(v))i,j=∑1≤i,j≤nui,j(∑1≤s,t≤nk(i-s,j-t)vs,t)=∑1≤s,t≤nvs,t(∑1≤i,j≤nk(s-i,t-j)ui,j)=∑1≤s,t≤n(K(u))s,tvs,t=∑1≤i,j≤n(K(u))i,jvi,j.
According to Lemma 1, we can rewrite the regularizers in our models in another equivalent form. In (16), for example, we rewrite
(23)∑1≤i,j≤ngi,j2(K(|Hu|))i,j=∑1≤i,j≤n(K(g∘g))i,j|Hu|i,j.
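Identity (23), and hence Lemma 1, is easy to confirm numerically; this sketch re-declares the periodic mean filter and compares both sides on random data, with a nonnegative array `w` standing in for |Hu|:

```python
import numpy as np

def mean_filter(u, r):
    """Periodic r-by-r mean filter K of (18)-(19); r odd."""
    h = (r - 1) // 2
    out = np.zeros_like(u, dtype=float)
    for s in range(-h, h + 1):
        for t in range(-h, h + 1):
            out += np.roll(np.roll(u, s, axis=0), t, axis=1)
    return out / r ** 2

rng = np.random.default_rng(3)
g = rng.standard_normal((12, 12))
w = rng.random((12, 12))                        # plays the role of |Hu| >= 0

lhs = np.sum(g ** 2 * mean_filter(w, 3))        # left side of (23)
rhs = np.sum(mean_filter(g * g, 3) * w)         # right side of (23)
```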
Next we study the existence result for the solution of our models.
Theorem 2.
Assume that α>0, M>0, and f∈X. Then both Models 1 and 2 have a global minimizer.
Proof.
First, under our assumption, both models are bounded from below, which implies that the minimization problems are well posed.
It is immediate to see that the functions E1:X×X→ℝ and E2:X×X×X→ℝ are proper, coercive, and continuous. Then, the existence of a global minimizer is deduced by Weierstrass’ theorem [24].
We end this subsection with the following proposition which describes the relationship between the variables in a solution of our models.
Proposition 3.
(i) Assume that (u*,g*)∈X×X is a (local) minimizer of Model 1. Then (u*,g*) satisfies
(24)u*=argminuE1(u,g*),g*=argmingE1(u*,g).
(ii) Assume that (u*,g1*,g2*)∈X×X×X is a (local) minimizer of Model 2. Then (u*,g1*,g2*) satisfies
(25)u*=argminuE2(u,g1*,g2*),(g1*,g2*)=argming1,g2E2(u*,g1,g2).
Proof.
(i) Under our assumption, there exists some ϵ>0 such that
(26)E1(u*,g*)≤E1(u,g)∀u,g∈Xwith∥u-u*∥X≤ϵ,∥g-g*∥X≤ϵ.
Then, we have
(27)E1(u*,g*)≤E1(u,g*)∀u∈Xwith∥u-u*∥X≤ϵ,
which implies that u* is a local minimizer of the following minimization problem:
(28)minu∈XE1(u,g*)=12∥u-f∥X2+∑1≤i,j≤n(K(g*∘g*))i,j|Hu|i,j.
According to the nonnegativity of (K(g*∘g*))i,j, we can deduce that the objective function of (28) is strictly convex. Then, u* is the unique global minimizer of (28). Similarly, we can obtain that g* is the unique global minimizer of the following strictly convex minimization problem:
(29)ming∈XE1(u*,g)=α∥g-Mα·1∥X2+∑1≤i,j≤ngi,j2(K(|Hu*|))i,j.
Hence we complete the proof of (i).
(ii) Using the same technique, (ii) can be similarly proved, and we omit the details.
3.2. Discussion on the Proposed Models
In this subsection we discuss the strengths of the proposed models by investigating the numerical behavior of the solution. Proposition 3 implies that there is an interaction between the restored image and the regularization parameter in a solution of our models, which can achieve joint optimality. In other words, both variables benefit from each other. Assume that (u*,g*) and (u*,g1*,g2*) are solutions of Models 1 and 2, respectively. According to Proposition 3, g*, g1*, and g2* can be calculated directly from u*. Since the minimization subproblems with respect to g* and g2* are actually the same, we just consider g1* and g2* for the sake of brevity. They are given by
(30)(g1*)i,j=Mα+(K(|∇u*|))i,j,(g2*)i,j=Mα+(K(|Hu*|))i,j.
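The closed forms (30) make the parameter maps cheap to evaluate once the filtered gradient and Hessian magnitudes are available; a self-contained sketch using the periodic operators of Section 2 (function names are ours):

```python
import numpy as np

def dxp(u): return np.roll(u, -1, axis=1) - u
def dxm(u): return u - np.roll(u, 1, axis=1)
def dyp(u): return np.roll(u, -1, axis=0) - u
def dym(u): return u - np.roll(u, 1, axis=0)

def grad_mag(u):
    """Pointwise magnitude |grad u| from (10)."""
    return np.sqrt(dxp(u) ** 2 + dyp(u) ** 2)

def hess_mag(u):
    """Pointwise magnitude |Hu| from (12)."""
    h = (dxm(dxp(u)), dxp(dyp(u)), dyp(dxp(u)), dym(dyp(u)))
    return np.sqrt(sum(c ** 2 for c in h))

def mean_filter(u, r):
    """Periodic r-by-r mean filter K of (18)-(19); r odd."""
    half = (r - 1) // 2
    out = np.zeros_like(u, dtype=float)
    for s in range(-half, half + 1):
        for t in range(-half, half + 1):
            out += np.roll(np.roll(u, s, axis=0), t, axis=1)
    return out / r ** 2

def g_maps(u, alpha, M, r):
    """Parameter maps (30): small where K(|grad u|) or K(|Hu|) is large."""
    g1 = M / (alpha + mean_filter(grad_mag(u), r))
    g2 = M / (alpha + mean_filter(hess_mag(u), r))
    return g1, g2
```

On a constant image both filtered magnitudes vanish, so g1* = g2* = M/α everywhere, consistent with the large values observed in flat regions.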
We are interested in the numerical behavior of g1* and g2* corresponding to regions of u* with different scales (e.g., texture, flat, and ramp regions). Figure 1(a) shows a 256×256 synthetic image. The corresponding g1* and g2* are shown in Figures 1(b) and 1(c), respectively, where u* is set to be the original image. From Figure 1, the numerical behavior of g1* and g2* is summarized as follows.
For texture regions of u*, g1* and g2* are both small (darker regions indicate smaller value).
For flat regions of u*, g1* and g2* are both large.
For ramp regions of u*, g1* is inversely proportional to the gradient of u*, whereas g2* is large.
The numerical behavior of g1* and g2*; here α=0.05, M=0.05, and r=7: (a) the original image, (b) g1*, and (c) g2*.
Below we pay attention to the restored image u*. As we have mentioned in the Introduction, the values of the regularization parameter control the relative weights of the fidelity and regularization terms. More precisely, small values lead to little regularization, whereas large values result in overregularization. Based on the behavior analysis above, in Model 1, textures of u* are not compromised due to the corresponding small values of g*, and the speckle effect caused by the LLT model is removed from flat and ramp regions of u* due to the corresponding large values of g*. Therefore, Model 1 can produce higher quality restoration results than the LLT model. Model 2 inherits the advantages of Model 1 and exhibits some new ones. Edges of u* are well preserved by total variation regularization. At the same time, the staircase effect is suppressed in ramp regions due to the corresponding small values of g1* (the fourth-order filter plays the dominant role). Therefore, Model 2 incorporates the strengths of both regularizers while avoiding their drawbacks. Finally, we remark that, compared with the hybrid model (5), Model 2 is more reasonable. Observe that (5) utilizes a fixed weighting function to combine the two regularizers. However, the weighting function is computed from the noisy image, which leads to inaccuracy. In contrast, the regularization parameter in Model 2 depends on the restored image, which is less affected by noise.
4. Algorithms
In this section, we formulate the numerical scheme for solving our models. Our basic idea is to use the alternating minimization scheme which is described in Algorithms 1 and 2.
Algorithm 1: The alternating minimization method for solving Model 1.
Next we would like to show the convergence of our algorithms.
Theorem 4.
(i) Let {(uk,gk)} be the sequence derived from Algorithm 1. Then {(uk,gk)} converges to (u^,g^)∈X×X (up to a subsequence), and for any (u,g)∈X×X, one has
(31)E1(u^,g^)≤E1(u,g^),E1(u^,g^)≤E1(u^,g).
(ii) Let {(uk,g1k,g2k)} be the sequence derived from Algorithm 2. Then {(uk,g1k,g2k)} converges to (u^,g^1,g^2)∈X×X×X (up to a subsequence), and for any (u,g1,g2)∈X×X×X, one has
(32)E2(u^,g^1,g^2)≤E2(u,g^1,g^2),E2(u^,g^1,g^2)≤E2(u^,g1,g2).
Proof.
(i) First, we can easily deduce the following inequality from Algorithm 1:
(33)E1(uk+1,gk+1)≤E1(uk+1,gk)≤E1(uk,gk).
Then the sequence {E1(uk,gk)} is bounded, and there exists a constant N≥0 such that
(34)E1(uk,gk)≤N,∀k∈ℕ.
The above inequality reads as
(35)12∥uk-f∥X2+α∥gk-Mα·1∥X2+∑1≤i,j≤n(gk)i,j2(K(|Huk|))i,j≤N,
which implies that {(uk,gk)} is uniformly bounded in X×X. Then we can find a subsequence {(unk,gnk)}⊂{(uk,gk)} and (u^,g^)∈X×X such that they satisfy
(36)(unk,gnk)→X×X(u^,g^).
On the other hand, for any u∈X, we have
(37)E1(un(k+1),gn(k+1))≤E1(unk+1,gnk+1)≤E1(unk+1,gnk)≤E1(u,gnk),
where the first inequality follows from the monotonicity (33), since n(k+1)≥nk+1.
Recalling that the function E1:X×X→ℝ is continuous, we obtain
(38)E1(u^,g^)≤E1(u,g^),
by letting k tend to infinity. Similarly, for any g∈X, we have
(39)E1(u^,g^)≤E1(u^,g).
Hence we complete the proof of (i).
(ii) Using the same technique, (ii) can be similarly proved, and we omit the details.
It is obvious that (b) in Algorithm 1 and (b) in Algorithm 2 can be easily solved. To solve (a) in Algorithm 1 and (a) in Algorithm 2, we use the alternating direction method (ADM) [25], which is closely related to the augmented Lagrangian method [23], the Douglas-Rachford splitting algorithm [26], and the split Bregman method [27]. For the sake of brevity, we only present the ADM procedure for solving (a) in Algorithm 2. The ADM procedure for solving (a) in Algorithm 1 can be analogously derived. We rewrite (a) in Algorithm 2 by introducing auxiliary variables p and q as follows:
(40)minu∈X,p∈Y,q∈Z 12∥u-f∥X2+∑1≤i,j≤n(K(g1k-1∘g1k-1))i,j|p|i,j+∑1≤i,j≤n(K(g2k-1∘g2k-1))i,j|q|i,j subject to p=∇u, q=Hu.
Then the problem fits the framework of the alternating direction method. By using the Lagrangian multipliers λ1∈Y and λ2∈Z, the augmented Lagrangian function of (40) is given by
(41)L(u,p,q,λ1,λ2)=12∥u-f∥X2+∑1≤i,j≤n(K(g1k-1∘g1k-1))i,j|p|i,j+∑1≤i,j≤n(K(g2k-1∘g2k-1))i,j|q|i,j+〈λ1,p-∇u〉Y+〈λ2,q-Hu〉Z+β2(∥p-∇u∥Y2+∥q-Hu∥Z2),
where β>0 is the penalty parameter for the linear constraints. Then the minimization of (40) is achieved by an iterative process: in each iteration, we minimize the augmented Lagrangian function (41) with respect to u, p, and q, given the other two updated in previous iteration, and then update λ1 and λ2. The computational procedure is presented in Algorithm 3.
Algorithm 3 (excerpt). Step 3. Given uk,l, pk,l, and qk,l, update λ1k,l and λ2k,l by
λ1k,l=λ1k,l-1+β(pk,l-∇uk,l), (d)
λ2k,l=λ2k,l-1+β(qk,l-Huk,l), (e)
respectively.
end while
Now we make some remarks on the ADM procedure. First, we observe that every step in Algorithm 3 has a closed-form solution, and then the alternating direction method can be efficiently implemented. Moreover, for a convex objective function with linear constraints, the convergence of the alternating direction method is guaranteed; see, for example, [28, 29]. Second, to obtain an exact solution of (a) in Algorithm 2, one needs to let L (the maximum iteration number of the ADM procedure) tend to infinity; that is, uk=uk,∞. Since the regularization parameters are not optimal, in practice, it is not necessary to compute uk exactly. For the sake of computational efficiency, only several ADM iterations are implemented. Third, aiming at faster convergence of the ADM procedure, we initialize each iteration with the auxiliary variables and the Lagrangian multipliers updated in the previous iteration. Numerical examples will be given in Section 5.2 to demonstrate the efficiency of our numerical scheme.
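The closed-form solutions mentioned above come from the fact that the p- and q-subproblems of (41) decouple pixel by pixel into a weighted vector soft-thresholding (shrinkage). A sketch of the two-component case, where the threshold array `tau` plays the role of K(g1∘g1)/β and `v` the role of ∇u − λ1/β (this naming is ours, not the paper's):

```python
import numpy as np

def shrink2(v1, v2, tau):
    """Pointwise solution of min_p tau*|p| + 0.5*|p - v|^2 over 2-vectors p.

    tau is an array of nonnegative thresholds; the small constant guards
    against division by zero where the input magnitude vanishes."""
    mag = np.sqrt(v1 ** 2 + v2 ** 2)
    scale = np.maximum(mag - tau, 0.0) / np.maximum(mag, 1e-12)
    return scale * v1, scale * v2
```

The q-subproblem is solved by the analogous four-component shrinkage on the Hessian variable, and the u-subproblem is a linear system diagonalized by the FFT under the periodic boundary condition.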
5. Numerical Experiments
In this section, we provide numerical results to illustrate the effectiveness of the proposed models. In Section 5.1, we compare the performance of the proposed models with the ROF model (3), the LLT model (4), and the hybrid model (5). In Section 5.2, we give some criteria for choosing the parameters in our algorithms. All the experiments are performed under Windows 7 and MATLAB R2010a (Version 7.10.0.499) running on a desktop with an Intel Core i3-2130 CPU at 3.40 GHz and 4 GB memory.
5.1. Comparative Results
Four test images shown in Figure 2 are considered for the comparative experiment. The intensity range of the original images is scaled to [0,1]. The degraded images are corrupted by white Gaussian noise with noise level 0.1 and are shown in Figures 3(a), 4(a), 5(a), and 6(a). The quality of the restored images is assessed using the relative error (ReErr) and the signal-to-noise ratio (SNR), defined as
(42)ReErr=∥u-uc∥F/∥u∥F, SNR=10log10(∥u-u¯∥F2/∥u-uc∥F2),
where u, u¯, and uc are the original image, the mean intensity value of u, and the restored image, respectively.
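The two quality measures in (42) translate directly to NumPy, where np.linalg.norm of a 2D array gives the Frobenius norm; a sketch, with the squared-norm ratio inside the logarithm as in the common SNR convention:

```python
import numpy as np

def re_err(u, uc):
    """Relative error of the restored image uc against the original u."""
    return np.linalg.norm(u - uc) / np.linalg.norm(u)

def snr(u, uc):
    """Signal-to-noise ratio in dB; u_bar is the mean intensity of u."""
    u_bar = u.mean()
    return 10.0 * np.log10(np.linalg.norm(u - u_bar) ** 2
                           / np.linalg.norm(u - uc) ** 2)
```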
Now we present implementation details of the comparative experiment. In our algorithms, parameters M and L are fixed and set to 0.07 and 10, respectively. The values of parameters α and r are reported in the corresponding figures. For solving models (3), (4), and (5), we use the alternating direction method (ADM) [25]. For a fair comparison, all of the parameters in these three models are optimized to achieve the best restoration with respect to the ReErr and SNR values. In all the ADM procedures, we set the penalty parameter β to 1 as a default value for most digital images with intensity range in [0,1]. We terminate all the algorithms with the following stopping criterion:
(43)∥ui-1-ui∥F∥ui-1∥F≤10-4.
In Figures 3, 4, 5, and 6, we display the resulting images restored by different models. The corresponding relative errors, SNR values (dB), and computational time (in seconds) are listed in Table 1. From these results, we find that our models restore better images than the ROF model (3), the LLT model (4), and the hybrid model (5), both visually and quantitatively. In the images restored by the ROF model, we observe that sharp edges are preserved, but the staircase effect is clearly present (e.g., the homogeneous regions in Figures 5 and 6). For the restored results of the LLT model, the staircase effect is suppressed, but some speckle effect is introduced in homogeneous regions (e.g., the background of Figures 3, 4, 5, and 6). We also note that both models (3) and (4) compromise details and textures due to using the global constant regularization parameter. For the images restored by the hybrid model, visual artifacts are almost suppressed, but details are not recovered well (e.g., the ramp region in Figure 3). This is due to the inaccurate estimation of their weighting function from the observed image. Our methods, on the other hand, restore the homogeneous regions significantly better while preserving more details. To better understand the behavior of our methods, some zoomed-in local results and contour plots are shown in Figures 7, 8, and 9. We can find that the homogeneous regions are almost smooth (e.g., the background of Figure 3, the flat and ramp regions in Figure 7, and the face of Barbara in Figure 9). The contour plots given in Figure 8 also visually illustrate the above observation. At the same time, details and textures are preserved clearly without being overregularized (e.g., the ramp region in Figure 3, the textures in Figure 7, and the textures on the scarf in Figure 9). Furthermore, we find that Model 2 performs better than Model 1 with respect to preserving edges (e.g., the contours around the lips and nose of Lena in Figure 8). 
In Figure 10, we show the final values of the regularization parameters in our models compared with the weighting functions in the hybrid model. It is clear that Figures 10(a) and 10(e) are noisy and inaccurate, whereas Figures 10(b)–10(d) and 10(f)–10(h) are less affected by noise. Moreover, we observe that the values of the regularization parameters are small in detail and texture regions, and they are large in homogeneous regions. Therefore, our methods are superior with respect to removing noise while preserving details.
Summarized results of the comparative experiment.

                 Figure 2(a)   Figure 2(b)   Figure 2(c)   Figure 2(d)
ReErr
  Noisy image    0.1064        0.1847        0.1913        0.1643
  ROF model      0.0522        0.1279        0.0784        0.0925
  LLT model      0.0633        0.1273        0.0807        0.0894
  Hybrid model   0.0509        0.1224        0.0758        0.0879
  Model 1        0.0361        0.1151        0.0751        0.0842
  Model 2        0.0298        0.1132        0.0729        0.0827
SNR (dB)
  Noisy image    9.00          6.43          5.49          6.42
  ROF model      15.19         9.62          13.24         11.41
  LLT model      13.51         9.66          12.99         11.71
  Hybrid model   15.41         10.00         13.53         11.85
  Model 1        18.39         10.53         13.61         12.23
  Model 2        20.06         10.68         13.87         12.38
Time (s)
  ROF model      0.28          1.22          1.26          1.05
  LLT model      1.33          4.63          1.73          3.06
  Hybrid model   2.23          8.44          4.48          5.46
  Model 1        2.87          6.36          5.18          4.65
  Model 2        2.50          9.63          5.93          6.08
5.2. Parameter Study
In this section, we present some criteria for choosing the parameters which are necessary to start up our algorithms. In our experiments, the parameter settings are the same as those in Section 5.1, except for the parameters under test.
The window size r controls the smoothness of the spatially dependent regularization parameter. We test our algorithms for different values of r varying from 1 to 19. Figure 11 shows the plots of the SNR values of the restored images. We see from the plots that a small value of r (leading to a sharp regularization parameter) is suitable for images with sharp features (e.g., r=1 for Figure 2(a) and r=3 for Figure 2(c)), whereas a moderate value of r (leading to a relatively smooth regularization parameter) is needed for images containing textures (e.g., r=7 for Figures 2(b) and 2(d)). For large values of r (r≥9), we find that the changes in the SNR values are not significant. However, if r becomes too large, the regularization parameter is oversmoothed, which compromises image details.
Original images: (a) synthetic image 1 (128×128), (b) synthetic image 2 (256×256), (c) “Lena” (256×256), and (d) “Barbara” (256×256).
Results of different models for the synthetic image 1: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.1 and r=1), and (f) our Model 2 (α=0.18 and r=1).
Results of different models for the synthetic image 2: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.21 and r=7), and (f) our Model 2 (α=0.35 and r=7).
Results of different models for the image “Lena”: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.26 and r=3), and (f) our Model 2 (α=0.36 and r=3).
Results of different models for the image “Barbara”: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.28 and r=7), and (f) Model 2 (α=0.39 and r=7).
Zoomed-in local results corresponding to Figures 4(b)–4(f): (a) the original image, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.
Zoomed-in local results corresponding to Figures 6(b)–6(f): (a) the original image, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.
Contour plots corresponding to Figures 5(b)–5(f): (a) the ideal contour plot, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.
Final values of the regularization parameters in the proposed models and weighting functions in the hybrid model corresponding to Figures 4(d)–4(f) and 6(d)–6(f). (a) g in the hybrid model, (b) g* in Model 1, (c) g1* in Model 2, (d) g2* in Model 2, (e) g in the hybrid model, (f) g* in Model 1, (g) g1* in Model 2, and (h) g2* in Model 2.
SNR for results restored by our models with different window size r: (a) Model 1 and (b) Model 2.
The parameters M and α jointly determine the values of the spatially dependent regularization parameter. The difficulty in tuning them is that they depend not only on the noise level but also on the images. For this reason, trial and error is used for the two parameters. More precisely, we simplify the tuning process by fixing M and searching for the best α for each image.
We also study the behavior of the proposed numerical scheme with respect to the setting of L and our initialization. For the setting of Figure 2(c), Table 2 summarizes the results of our models for different values of L with our initialization and with zero initialization (the auxiliary variables and the Lagrangian multipliers initialized to zero). We see from Table 2 that with zero initialization, only a sufficiently large L (e.g., L=30 and 40) provides good restoration results, at the cost of increasing computational time. Our initialization, on the other hand, produces results as effective as those of zero initialization with less computation time. In addition, there is no considerable difference between the relative errors and the SNR values for different values of L. Since a too small or too large L yields comparatively high computational cost, a moderate L (e.g., L=10) is recommended in our algorithms.
Summarized results of the proposed models for different values of L with our initialization and zero initialization.

                        Model 1                     Model 2
  L           ReErr     SNR     Time      ReErr     SNR     Time
Our initialization
  3           0.0754    13.57   4.66      0.0731    13.85   7.10
  6           0.0752    13.60   4.21      0.0730    13.86   5.71
  10          0.0751    13.61   5.51      0.0729    13.87   6.19
  15          0.0752    13.60   6.27      0.0729    13.86   7.74
Zero initialization
  10          0.0780    13.28   6.27      0.0768    13.42   5.87
  20          0.0758    13.54   15.93     0.0740    13.74   14.21
  30          0.0753    13.58   29.83     0.0733    13.83   23.87
  40          0.0753    13.59   49.03     0.0730    13.85   33.28
6. Concluding Remarks
We have proposed two regularization models for image denoising in which the restored image and the spatially dependent regularization parameter are estimated simultaneously. By construction, the two variables mutually benefit from each other during the denoising process, so that joint optimality can be achieved. The numerical behavior of the regularization parameter and the strengths of our models have been discussed in detail. The proposed models, which are nonconvex, can be solved asymptotically by the proposed alternating minimization scheme. Finally, the numerical results indicate that our methods outperform several popular methods with respect to both noise removal and detail preservation.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This research is supported by 973 Program (2013CB329404), NSFC (61170311), Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and Sichuan Province Sci. and Tech. Research Project (2012GZX0080).
References
1. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms."
2. E. Giusti.
3. D. Strong and T. Chan, "Edge-preserving and scale-dependent properties of total variation regularization."
4. W. Ring, "Structural properties of solutions to total variation regularization problems."
5. J. Weickert.
6. A. Buades, B. Coll, and J. M. Morel, "The staircasing effect in neighborhood filters and its solution."
7. T. Chan, A. Marquina, and P. Mulet, "High-order total variation-based image restoration."
8. M. Lysaker, A. Lundervold, and X.-C. Tai, "Noise removal using fourth-order partial differential equations with applications to medical magnetic resonance images in space and time."
9. Y.-L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal."
10. W. Hinterberger and O. Scherzer, "Variational methods on the space of functions of bounded Hessian for convexification and denoising."
11. M. Lysaker, S. Osher, and X.-C. Tai, "Noise removal using smoothed normals and surface fitting."
12. J. B. Greer and A. L. Bertozzi, "Traveling wave solutions of fourth order PDEs for image processing."
13. M. Lysaker and X.-C. Tai, "Iterative image restoration combining total variation minimization and a second-order functional."
14. F. Li, C.-M. Shen, J.-S. Fan, and C.-L. Shen, "Image restoration combining a total variational filter and a fourth-order filter."
15. F.-C. Yang, K. Chen, and B. Yu, "Efficient homotopy solution and a convex combination of ROF and LLT models for image restoration."
16. Q.-S. Chang, X.-C. Tai, and L. Xing, "A compound algorithm of denoising using second-order and fourth-order partial differential equations."
17. Z.-F. Pang and Y.-F. Yang, "A projected gradient algorithm based on the augmented Lagrangian strategy for image restoration and texture extraction."
18. G. Gilboa, N. Sochen, and Y. Y. Zeevi, "Variational denoising of partly textured images by spatially varying constraints."
19. M. Bertalmio, V. Caselles, B. Rougé, and A. Solé, "TV based image restoration with local constraints."
20. A. Almansa, C. Ballester, V. Caselles, and G. Haro, "A TV based restoration model with local constraints."
21. Y.-Q. Dong, M. Hintermüller, and M. M. Rincon-Camacho, "Automated regularization parameter selection in multi-scale total variation models for image restoration."
22. K. Bredies, Y.-Q. Dong, and M. Hintermüller, "Spatially dependent regularization parameter selection in total generalized variation models for image restoration."
23. C.-L. Wu and X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models."
24. D. P. Bertsekas, A. Nedic, and A. E. Ozdaglar.
25. M. K. Ng, P. Weiss, and X. Yuan, "Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods."
26. J. Eckstein and D. P. Bertsekas, "On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators."
27. T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems."
28. M. Ng, F. Wang, and X. Yuan, "Inexact alternating direction methods for image recovery."
29. B. He and X. Yuan, "On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method."