Abstract and Applied Analysis, Volume 2013, Article ID 729151, Hindawi Publishing Corporation, doi:10.1155/2013/729151. ISSN 1085-3375, 1687-0409.

Research Article: New Regularization Models for Image Denoising with a Spatially Dependent Regularization Parameter

Tian-Hui Ma, Ting-Zhu Huang, and Xi-Le Zhao, School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China. Academic Editor: Peilin Shi.

Received 31 July 2013; Accepted 2 October 2013; Published 28 November 2013.

Copyright © 2013 Tian-Hui Ma et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We consider simultaneously estimating the restored image and a spatially dependent regularization parameter, which mutually benefit from each other. Based on this idea, we revisit two well-known image denoising models: the LLT model proposed by Lysaker et al. (2003) and the hybrid model proposed by Li et al. (2007). The resulting models better preserve image regions containing textures and fine details while still sufficiently smoothing homogeneous features. To solve the proposed models efficiently, we apply an alternating minimization scheme that decomposes the original nonconvex problem into two strictly convex ones. Preliminary convergence properties are also presented. Numerical experiments demonstrate the effectiveness of the proposed models and the efficiency of our numerical scheme.

1. Introduction

Image denoising is a fundamental problem in image processing and computer vision. In many real-world applications, it forms a significant preliminary step for subsequent image processing operations, such as object recognition, medical image analysis, surveillance, and many more. During acquisition and transmission, images are often corrupted by Gaussian noise. In this case, the degradation process is modeled as

(1) f = u_exact + η,

where f, u_exact, and η represent the observed image, the original image, and the additive white Gaussian noise, respectively.
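As a minimal illustration, the degradation model (1) can be simulated as follows (a NumPy sketch; the image size and noise level below are arbitrary choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(u_exact, sigma):
    """Simulate the degradation model f = u_exact + eta, where eta is
    additive white Gaussian noise with standard deviation sigma."""
    eta = sigma * rng.standard_normal(u_exact.shape)
    return u_exact + eta

# Example: corrupt a flat gray image at noise level 0.1
u_exact = 0.5 * np.ones((64, 64))
f = degrade(u_exact, 0.1)
```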

The objective of image denoising is to compute a good estimate of u_exact from f. To obtain a reasonable approximate solution of (1), the regularization method, generally used as a numerical technique for stabilizing inverse problems, has been increasingly applied to image denoising over the past decades. A large class of regularization-based denoising methods fits the following framework:

(2) min_u E(u) = F(u) + αR(u),

where u denotes the restored image to be estimated. On the right-hand side of (2), F(u) represents the fidelity term, which measures the closeness of the estimate to the data. The functional R(u), called the regularization term, pushes u to exhibit some a priori expected features. The parameter α in (2) is known as the regularization parameter, which balances the tradeoff between the two terms.

One key topic in regularization methods is the choice of the regularizer. Among the many regularization-based denoising models, total variation regularization, proposed by Rudin, Osher, and Fatemi (ROF) , has won tremendous success due to its edge-preserving property. In the ROF model, the restoration result u is generated by solving the minimization problem

(3) min_u E(u) = ∫_Ω (u − f)² dx + α R_ROF(u),

where the TV term R_ROF(u) = ∫_Ω |Du| denotes the total variation of u (see  for more details) and Ω denotes the image domain. A remarkable aspect of the ROF model is that the TV term does not penalize discontinuities in u; see, for example, . This property allows us to restore the edges of the original image. However, the main disadvantage of the ROF model is the so-called staircase effect (smooth regions are transformed into piecewise constant regions), a phenomenon long observed in the literature . As a consequence, the restoration result is unsatisfactory to the eye due to the loss of texture details and the generation of artificial edges that do not exist in the true image. To overcome this effect, models based on high-order PDEs have been proposed in the literature; see, for example, . For instance, Lysaker, Lundervold, and Tai  proposed a fourth-order PDE model (termed the LLT model) of the form

(4) min_u E(u) = ∫_Ω (u − f)² dx + α R_LLT(u),

where R_LLT(u) = ∫_Ω |∇²u| dx is a fourth-order filter. From a theoretical point of view , fourth-order PDEs are superior to second-order PDEs in some aspects, including avoiding the staircase effect. However, this type of filter usually blurs the edges of the original image and suffers from a speckle effect in homogeneous regions. Since the ROF model and the LLT model each have their own merits and drawbacks, it may be desirable to promote solutions that simultaneously exhibit the properties enforced by both regularizers.
This is the basic idea of , proposed by Lysaker and Tai, who studied an iterative algorithm based on combining the results generated by (3) and (4). However, their method needs to solve two separate PDEs, and the combination is not quite intuitive. Li et al.  proposed a hybrid model of the form

(5) min_u E(u) = ∫_Ω (u − f)² dx + α(∫_Ω (1 − g)|Du| + ∫_Ω g|∇²u| dx),

where the function g is chosen as an edge detector 1/(1 + γ + k|∇G_σ ∗ f|²) to control the relative weights of the two regularizers (see  for details). With this choice of g, the second-order filter plays the dominant role where |∇G_σ ∗ f| is large (regions containing sharp features), whereas the fourth-order filter plays the dominant role where |∇G_σ ∗ f| is small (homogeneous regions). Therefore, (5) reaps the benefits of both regularizers. More related work on combining the ROF model and the LLT model can be found in .

Another crucial issue in the regularization process is the suitable selection of the regularization parameter. In regularization models, the regularization parameter controls the relative weights of the data fidelity and regularization terms. However, due to the inhomogeneous distribution of cartoon, textures, and small details in an image (in terms of variance), a single global parameter may not be suitable for features of different scales. According to the cartoon pyramid model studied in , as the regularization parameter goes from small to large values, the corresponding solutions generated by the ROF model range from undersmoothed (textures are preserved, while noise remains almost unchanged in homogeneous regions) to oversmoothed (noise is reduced well, but significant details are lost). This suggests that a spatially dependent regularization parameter, which imposes a group of constraints adapted to different regions of the image rather than a single global constraint, is desirable for obtaining higher quality results. To this end, denoising methods with a spatially dependent regularization parameter have been extensively studied; see, for example, . In these works, the automated selection strategies for the regularization parameter are based on local variance measures [18, 20–22] and the local statistical characteristics of the noise [21, 22].

Note that, in models (3), (4), and (5), the selection of the regularization parameter never exploits the information in the current restored image. This motivates us to seek a new approach to regularization parameter estimation. Our main idea is to estimate the restored image and the regularization parameter simultaneously, so that they mutually benefit from each other during the denoising procedure. We propose a general model of the form

(6) min_{u,g} E(u, g) = F(u) + F(g) + R(u, g),

where g denotes a spatially dependent regularization parameter, F(u) and F(g) are fidelity terms for the two variables, and the bivariate functional R(u, g) represents a regularizer. The advantages of (6) are as follows. First, a spatially dependent regularization parameter adapts to image features of different scales more flexibly than a global constant one. Second, since the two variables are estimated simultaneously, the regularization parameter is updated more reliably by exploiting the more accurate restored image instead of the noisy observed image. We remark that (6) is a general-purpose model that can incorporate various classical methods. Specifically, we focus on the fourth-order filter in (4) and the hybrid regularizer in (5); the resulting models are named Models 1 and 2, respectively. Model 1 suppresses the speckle effect caused by the LLT model while overregularizing textures less. Model 2 better restores textures and homogeneous regions while preserving edges. To overcome the nonconvexity of our models, we use an alternating minimization scheme that decomposes the original nonconvex problem into two strictly convex ones. Thus, our models can be solved asymptotically.

The outline of the rest of the paper is as follows. In the next section, we give notations and discretizations. In Section 3, we introduce our models with some discussions. The following Section 4 presents the numerical scheme for solving the proposed models. In Section 5, numerical experiments are given to demonstrate the performance of the proposed models. Finally, we conclude the paper in Section 6.

2. Notations and Discretizations

From now on, we restrict our attention to the discrete setting. We first introduce some notation. Without loss of generality, we assume that all the images in this paper are grayscale and have a square domain. We then represent an image u as an n×n matrix, where u_{i,j} represents the intensity value of u at pixel (i, j), for i, j = 1, …, n. For the sake of simplicity, we assume that the image is periodically extended, so that the FFT can be adopted in our algorithm. It should be pointed out that adaptation to other boundary conditions is not difficult in principle. In the rest of this paper, we let ‖·‖₂, ‖·‖_F, and ∘ denote the 2-norm, the Frobenius norm, and the Hadamard product, respectively. Let X denote the Euclidean space ℝ^{n×n}. The usual inner product and Euclidean norm on X are denoted by ⟨·,·⟩_X and ‖·‖_X, respectively. Denote by Y the space X×X equipped with the inner product ⟨·,·⟩_Y leading canonically to the norm ‖·‖_Y; that is, for p = (p¹, p²) ∈ Y and q = (q¹, q²) ∈ Y,

(7) ⟨p, q⟩_Y = ⟨p¹, q¹⟩_X + ⟨p², q²⟩_X,  ‖p‖_Y = √⟨p, p⟩_Y.

Moreover, for y = (y¹, y²) ∈ Y, |y| denotes the n×n matrix whose element |y|_{i,j} equals ‖y_{i,j}‖₂ with y_{i,j} = (y¹_{i,j}, y²_{i,j}). We denote the space Y×Y by Z. The definitions of the inner product ⟨·,·⟩_Z, the norm ‖·‖_Z, and |z| are analogous to those for Y.

Now we introduce discretized versions of the necessary operators. For u ∈ X, the forward and backward difference operators are defined by

(8) (D_x⁺u)_{i,j} = u_{i,j+1} − u_{i,j} if 1 ≤ j ≤ n−1, and u_{i,1} − u_{i,n} if j = n;
    (D_y⁺u)_{i,j} = u_{i+1,j} − u_{i,j} if 1 ≤ i ≤ n−1, and u_{1,j} − u_{n,j} if i = n;
    (D_x⁻u)_{i,j} = u_{i,1} − u_{i,n} if j = 1, and u_{i,j} − u_{i,j−1} if 2 ≤ j ≤ n;
    (D_y⁻u)_{i,j} = u_{1,j} − u_{n,j} if i = 1, and u_{i,j} − u_{i−1,j} if 2 ≤ i ≤ n.

Second-order difference operators are obtained by recursive application of the first-order ones; for instance,

(9) (D_xx⁻⁺u)_{i,j} := (D_x⁻(D_x⁺u))_{i,j}.

The other second-order difference operators used in this paper, such as D_xx⁺⁻, D_xy⁺⁺, D_yx⁺⁺, D_xy⁻⁻, D_yx⁻⁻, D_yy⁻⁺, and D_yy⁺⁻, are defined similarly. Based on the definitions above, we introduce the discrete gradient operator: for u ∈ X, ∇u is an element of Y given by

(10) ∇u = (D_x⁺u, D_y⁺u).

The discrete total variation of u is then

(11) R_ROF(u) = ∑_{1≤i,j≤n} |∇u|_{i,j}.

To discretize the fourth-order filter in the LLT model (4), we need the discrete Hessian operator, for which we adopt the definition in . The discrete Hessian is a mapping H: X → Z, and for u ∈ X, Hu is defined by

(12) Hu = (D_xx⁻⁺u, D_xy⁺⁺u; D_yx⁺⁺u, D_yy⁻⁺u).

Then R_LLT(u) is discretized as

(13) R_LLT(u) = ∑_{1≤i,j≤n} |Hu|_{i,j}.

We also introduce two important operators, div: Y → X and H*: Z → X, the adjoints of −∇ and H, respectively. By analogy with the continuous setting, for u ∈ X, y ∈ Y, and z ∈ Z, we require

(14) ⟨div y, u⟩_X = ⟨y, −∇u⟩_Y,  ⟨H*z, u⟩_X = ⟨z, Hu⟩_Z.

They are given explicitly by

(15) div y = D_x⁻y¹ + D_y⁻y²,  H*z = D_xx⁺⁻z¹¹ + D_yx⁻⁻z¹² + D_xy⁻⁻z²¹ + D_yy⁺⁻z²².

Finally, the composite operators Δ = div·∇ and H*H are also used.
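The periodic difference operators, the gradient (10), and the divergence (15) translate directly to NumPy; `np.roll` realizes the periodic boundary extension. The following is an illustrative sketch (the helper names are ours, not the authors' code); the final assertion checks the adjoint relation (14) numerically:

```python
import numpy as np

# Periodic forward/backward differences (axis 1 = x/column, axis 0 = y/row)
def dxp(u): return np.roll(u, -1, 1) - u   # D_x^+
def dyp(u): return np.roll(u, -1, 0) - u   # D_y^+
def dxm(u): return u - np.roll(u, 1, 1)    # D_x^-
def dym(u): return u - np.roll(u, 1, 0)    # D_y^-

def grad(u):
    """Discrete gradient (10): X -> Y."""
    return np.stack([dxp(u), dyp(u)])

def div(y):
    """Divergence (15): Y -> X, the adjoint of -grad by (14)."""
    return dxm(y[0]) + dym(y[1])

# Numerical check of the adjoint relation (14): <div y, u> = <y, -grad u>
rng = np.random.default_rng(1)
u = rng.standard_normal((8, 8))
y = rng.standard_normal((2, 8, 8))
assert np.isclose(np.sum(div(y) * u), np.sum(y * (-grad(u))))
```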

3. The Proposed Models and Discussion

3.1. The Proposed Models

Arising from model (6) and exploiting the benefits of the LLT model (4) and the hybrid model (5), we consider and study the following models.

Model 1.

One has

(16) min_{u,g∈X} E₁(u, g) = (1/2)‖u − f‖_X² + α‖g − (M/α)·1‖_X² + ∑_{1≤i,j≤n} g_{i,j}² (K(|Hu|))_{i,j}.

Model 2.

One has

(17) min_{u,g₁,g₂∈X} E₂(u, g₁, g₂) = (1/2)‖u − f‖_X² + α(‖g₁ − (M/α)·1‖_X² + ‖g₂ − (M/α)·1‖_X²) + ∑_{1≤i,j≤n} [(g₁)_{i,j}² (K(|∇u|))_{i,j} + (g₂)_{i,j}² (K(|Hu|))_{i,j}].

In the above two models, α and M are positive parameters, 1 represents the matrix whose elements are all equal to 1, and K: X → X denotes a discrete mean filter. More precisely, we choose an odd integer r and define an r-by-r window centered at pixel (i, j) (with periodic extension at the boundary); that is,

(18) Ω_{i,j}^r = {(s, t): min(|s − i|, n − |s − i|) ≤ (r − 1)/2, min(|t − j|, n − |t − j|) ≤ (r − 1)/2},

where s, t = 1, …, n. Then, for u ∈ X, K(u) is given by

(19) (K(u))_{i,j} = (1/r²) ∑_{(s,t)∈Ω_{i,j}^r} u_{s,t}.

It is immediately clear that if r = 1, K coincides with the identity operator. We start our investigation of the proposed models with the following lemma.

Lemma 1.

For any u, v ∈ X, one has

(20) ∑_{1≤i,j≤n} u_{i,j}(K(v))_{i,j} = ∑_{1≤i,j≤n} (K(u))_{i,j} v_{i,j}.

Proof.

We define the function k: ℤ² → ℝ by

(21) k(x, y) = 1/r² if min(|x|, n − |x|) ≤ (r − 1)/2 and min(|y|, n − |y|) ≤ (r − 1)/2, and k(x, y) = 0 otherwise.

From the definition of k, it is immediate that k(x, y) = k(−x, −y). Thus, we have

(22) ∑_{1≤i,j≤n} u_{i,j}(K(v))_{i,j} = ∑_{1≤i,j≤n} u_{i,j} (∑_{1≤s,t≤n} k(i − s, j − t) v_{s,t}) = ∑_{1≤s,t≤n} v_{s,t} (∑_{1≤i,j≤n} k(s − i, t − j) u_{i,j}) = ∑_{1≤s,t≤n} (K(u))_{s,t} v_{s,t} = ∑_{1≤i,j≤n} (K(u))_{i,j} v_{i,j}.
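The mean filter K of (18)-(19) can be sketched in a few lines of NumPy (an illustrative implementation, not the authors' code; `np.roll` supplies the periodic extension), which also lets one verify Lemma 1 numerically:

```python
import numpy as np

def mean_filter(u, r):
    """Discrete mean filter K of (18)-(19): average over an r-by-r
    window (r odd) with periodic extension at the boundary."""
    h = (r - 1) // 2
    out = np.zeros_like(u, dtype=float)
    for s in range(-h, h + 1):
        for t in range(-h, h + 1):
            out += np.roll(np.roll(u, s, 0), t, 1)
    return out / r**2

# Lemma 1 numerically: K is self-adjoint, <u, K(v)> = <K(u), v>
rng = np.random.default_rng(2)
u = rng.standard_normal((16, 16))
v = rng.standard_normal((16, 16))
assert np.isclose(np.sum(u * mean_filter(v, 7)),
                  np.sum(mean_filter(u, 7) * v))
```

For r = 1 the filter reduces to the identity, as noted in the text.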

According to Lemma 1, we can rewrite the regularizers in our models in an equivalent form. In (16), for example, we have

(23) ∑_{1≤i,j≤n} g_{i,j}² (K(|Hu|))_{i,j} = ∑_{1≤i,j≤n} (K(g∘g))_{i,j} |Hu|_{i,j}.

Next we study the existence of solutions of our models.

Theorem 2.

Assume that α > 0, M > 0, and f ∈ X. Then both Models 1 and 2 have a global minimizer.

Proof.

First, under our assumptions, both models are bounded from below, which implies that the minimization problems are well posed.

It is immediate to see that the functions E₁: X×X → ℝ and E₂: X×X×X → ℝ are proper, coercive, and continuous. Then, the existence of a global minimizer follows from Weierstrass' theorem .

We end this subsection with the following proposition, which describes the relationship between the variables in a solution of our models.

Proposition 3.

(i) Assume that (u*, g*) ∈ X×X is a (local) minimizer of Model 1. Then (u*, g*) satisfies

(24) u* = argmin_u E₁(u, g*),  g* = argmin_g E₁(u*, g).

(ii) Assume that (u*, g₁*, g₂*) ∈ X×X×X is a (local) minimizer of Model 2. Then (u*, g₁*, g₂*) satisfies

(25) u* = argmin_u E₂(u, g₁*, g₂*),  (g₁*, g₂*) = argmin_{g₁,g₂} E₂(u*, g₁, g₂).

Proof.

(i) Under our assumption, there exists some ε > 0 such that

(26) E₁(u*, g*) ≤ E₁(u, g) for all u, g ∈ X with ‖u − u*‖_X ≤ ε and ‖g − g*‖_X ≤ ε.

Then we have

(27) E₁(u*, g*) ≤ E₁(u, g*) for all u ∈ X with ‖u − u*‖_X ≤ ε,

which implies that u* is a local minimizer of the minimization problem

(28) min_{u∈X} E₁(u, g*) = (1/2)‖u − f‖_X² + ∑_{1≤i,j≤n} (K(g*∘g*))_{i,j} |Hu|_{i,j}.

Since the entries (K(g*∘g*))_{i,j} are nonnegative, the objective function of (28) is strictly convex, and hence u* is the unique global minimizer of (28). Similarly, g* is the unique global minimizer of the strictly convex minimization problem

(29) min_{g∈X} E₁(u*, g) = α‖g − (M/α)·1‖_X² + ∑_{1≤i,j≤n} g_{i,j}² (K(|Hu*|))_{i,j}.

This completes the proof of (i).

(ii) Using the same technique, (ii) can be proved similarly; we omit the details.

3.2. Discussion on the Proposed Models

In this subsection, we discuss the strengths of the proposed models by investigating the numerical behavior of their solutions. Proposition 3 implies that there is an interaction between the restored image and the regularization parameter in a solution of our models, which achieves joint optimality; in other words, the two variables benefit from each other. Assume that (u*, g*) and (u*, g₁*, g₂*) are solutions of Models 1 and 2, respectively. According to Proposition 3, g*, g₁*, and g₂* can be calculated directly from u*. Since the minimization subproblems with respect to g* and g₂* are actually the same, we consider only g₁* and g₂* for brevity. They are given by

(30) (g₁*)_{i,j} = M / (α + (K(|∇u*|))_{i,j}),  (g₂*)_{i,j} = M / (α + (K(|Hu*|))_{i,j}).

We are interested in the numerical behavior of g₁* and g₂* in regions of u* with different scales (e.g., texture, flat, and ramp regions). Figure 1(a) shows a 256×256 synthetic image. The corresponding g₁* and g₂* are shown in Figures 1(b) and 1(c), respectively, where u* is set to be the original image. From Figure 1, the numerical behavior of g₁* and g₂* is summarized as follows.

For texture regions of u*, g1* and g2* are both small (darker regions indicate smaller value).

For flat regions of u*, g1* and g2* are both large.

For ramp regions of u*, g1* is inversely proportional to the gradient of u*, whereas g2* is large.

The numerical behavior of g1* and g2*; here α=0.05, M=0.05, and r=7: (a) the original image, (b) g1*, and (c) g2*.

Below we turn to the restored image u*. As mentioned in the Introduction, the values of the regularization parameter control the relative weights of the fidelity and regularization terms: small values lead to little regularization, whereas large values result in overregularization. Based on the behavior analysis above, in Model 1, textures of u* are not compromised due to the corresponding small values of g*, and the speckle effect caused by the LLT model is removed from flat and ramp regions of u* due to the corresponding large values of g*. Therefore, Model 1 can produce higher quality restoration results than the LLT model. Model 2 inherits the advantages of Model 1 and exhibits new ones. Edges of u* are well preserved by the total variation regularization. At the same time, the staircase effect is suppressed in ramp regions due to the corresponding small values of g₁* (there the fourth-order filter plays the dominant role). Therefore, Model 2 incorporates the strengths of both regularizers while avoiding their drawbacks. Finally, we remark that, compared with the hybrid model (5), Model 2 is more reasonable. Observe that (5) uses a fixed weighting function to combine the two regularizers; since this weighting function is computed from the noisy image, it is inaccurate. The regularization parameter in Model 2, on the other hand, depends on the restored image, which is much less affected by noise.
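The closed-form updates (30) are cheap to evaluate. The following NumPy sketch (helper names such as `grad_mag` and `g_update` are our own, and the test image is a synthetic vertical edge, not one of the paper's figures) illustrates the behavior described above: g is small near sharp features (little regularization) and approaches M/α in flat regions:

```python
import numpy as np

def dxp(u): return np.roll(u, -1, 1) - u
def dyp(u): return np.roll(u, -1, 0) - u

def grad_mag(u):
    """|grad u|: pointwise norm of the periodic forward gradient."""
    return np.hypot(dxp(u), dyp(u))

def mean_filter(u, r):
    """Mean filter K over an r-by-r periodic window, as in (18)-(19)."""
    h = (r - 1) // 2
    return sum(np.roll(np.roll(u, s, 0), t, 1)
               for s in range(-h, h + 1) for t in range(-h, h + 1)) / r**2

def g_update(feature, alpha, M, r):
    """Closed-form parameter update (30): g = M / (alpha + K(feature)),
    with feature = |grad u| for g1 or |Hu| for g2."""
    return M / (alpha + mean_filter(feature, r))

# A synthetic vertical edge: g1 is small near the edge and
# equals M/alpha = 1 in flat regions away from it.
u = np.zeros((32, 32)); u[:, 16:] = 1.0
g1 = g_update(grad_mag(u), alpha=0.05, M=0.05, r=7)
```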

4. Algorithms

In this section, we formulate the numerical scheme for solving our models. Our basic idea is to use the alternating minimization scheme described in Algorithms 1 and 2.

Algorithm 1: The alternating minimization method for solving Model 1.

Input:  f, α, M, and r.

Output:  uk.

Initialization: g⁰ = 0.

while  not converged do

Step 1. Given g^{k−1}, compute u^k by solving

min_{u∈X} E₁(u, g^{k−1}) = (1/2)‖u − f‖_X² + ∑_{1≤i,j≤n} (K(g^{k−1}∘g^{k−1}))_{i,j} |Hu|_{i,j}.   (a)

Step 2. Given u^k, compute g^k by solving

min_{g∈X} E₁(u^k, g) = α‖g − (M/α)·1‖_X² + ∑_{1≤i,j≤n} g_{i,j}² (K(|Hu^k|))_{i,j}.   (b)

end while

Algorithm 2: The alternating minimization method for solving Model 2.

Input:  f, α, M, and r.

Output:  uk.

Initialization: g₁⁰ = 0 and g₂⁰ = 0.

while not converged do

Step 1. Given g₁^{k−1} and g₂^{k−1}, compute u^k by solving

min_{u∈X} E₂(u, g₁^{k−1}, g₂^{k−1}) = (1/2)‖u − f‖_X² + ∑_{1≤i,j≤n} (K(g₁^{k−1}∘g₁^{k−1}))_{i,j} |∇u|_{i,j} + ∑_{1≤i,j≤n} (K(g₂^{k−1}∘g₂^{k−1}))_{i,j} |Hu|_{i,j}.   (a)

Step 2. Given u^k, compute g₁^k and g₂^k by solving

min_{g₁,g₂∈X} E₂(u^k, g₁, g₂) = α(‖g₁ − (M/α)·1‖_X² + ‖g₂ − (M/α)·1‖_X²) + ∑_{1≤i,j≤n} [(g₁)_{i,j}² (K(|∇u^k|))_{i,j} + (g₂)_{i,j}² (K(|Hu^k|))_{i,j}].   (b)

end while

Next we would like to show the convergence of our algorithms.

Theorem 4.

(i) Let {(u^k, g^k)} be the sequence generated by Algorithm 1. Then {(u^k, g^k)} converges (up to a subsequence) to some (û, ĝ) ∈ X×X, and for any (u, g) ∈ X×X, one has

(31) E₁(û, ĝ) ≤ E₁(u, ĝ),  E₁(û, ĝ) ≤ E₁(û, g).

(ii) Let {(u^k, g₁^k, g₂^k)} be the sequence generated by Algorithm 2. Then {(u^k, g₁^k, g₂^k)} converges (up to a subsequence) to some (û, ĝ₁, ĝ₂) ∈ X×X×X, and for any (u, g₁, g₂) ∈ X×X×X, one has

(32) E₂(û, ĝ₁, ĝ₂) ≤ E₂(u, ĝ₁, ĝ₂),  E₂(û, ĝ₁, ĝ₂) ≤ E₂(û, g₁, g₂).

Proof.

(i) First, from Algorithm 1 we can easily deduce the inequality

(33) E₁(u^{k+1}, g^{k+1}) ≤ E₁(u^{k+1}, g^k) ≤ E₁(u^k, g^k).

Then the sequence {E₁(u^k, g^k)} is bounded, and there exists a constant N ≥ 0 such that

(34) E₁(u^k, g^k) ≤ N for all k.

The above inequality reads as

(35) (1/2)‖u^k − f‖_X² + α‖g^k − (M/α)·1‖_X² + ∑_{1≤i,j≤n} (g^k)_{i,j}² (K(|Hu^k|))_{i,j} ≤ N,

which implies that {(u^k, g^k)} is uniformly bounded in X×X. Hence we can find a subsequence {(u^{n_k}, g^{n_k})} ⊂ {(u^k, g^k)} and (û, ĝ) ∈ X×X such that

(36) (u^{n_k}, g^{n_k}) → (û, ĝ) in X×X.

On the other hand, for any u ∈ X, we have

(37) E₁(u^{n_{k+1}}, g^{n_{k+1}}) ≤ E₁(u^{n_k+1}, g^{n_k+1}) ≤ E₁(u^{n_k+1}, g^{n_k}) ≤ E₁(u, g^{n_k}).

Recalling that the function E₁: X×X → ℝ is continuous, we obtain, by letting k tend to infinity,

(38) E₁(û, ĝ) ≤ E₁(u, ĝ).

Similarly, for any g ∈ X, we have

(39) E₁(û, ĝ) ≤ E₁(û, g).

This completes the proof of (i).

(ii) Using the same technique, (ii) can be proved similarly; we omit the details.

It is obvious that (b) in Algorithm 1 and (b) in Algorithm 2 can be easily solved. To solve (a) in Algorithm 1 and (a) in Algorithm 2, we use the alternating direction method (ADM) , which is closely related to the augmented Lagrangian method , the Douglas-Rachford splitting algorithm , and the split Bregman method . For the sake of brevity, we only present the ADM procedure for solving (a) in Algorithm 2; the procedure for (a) in Algorithm 1 can be derived analogously. We rewrite (a) in Algorithm 2 by introducing auxiliary variables p and q as follows:

(40) min_{u∈X, p∈Y, q∈Z} (1/2)‖u − f‖_X² + ∑_{1≤i,j≤n} (K(g₁^{k−1}∘g₁^{k−1}))_{i,j} |p|_{i,j} + ∑_{1≤i,j≤n} (K(g₂^{k−1}∘g₂^{k−1}))_{i,j} |q|_{i,j} subject to p = ∇u, q = Hu.

This problem fits the framework of the alternating direction method. Introducing the Lagrange multipliers λ₁ ∈ Y and λ₂ ∈ Z, the augmented Lagrangian function of (40) is

(41) L(u, p, q, λ₁, λ₂) = (1/2)‖u − f‖_X² + ∑_{1≤i,j≤n} (K(g₁^{k−1}∘g₁^{k−1}))_{i,j} |p|_{i,j} + ∑_{1≤i,j≤n} (K(g₂^{k−1}∘g₂^{k−1}))_{i,j} |q|_{i,j} + ⟨λ₁, p − ∇u⟩_Y + ⟨λ₂, q − Hu⟩_Z + (β/2)(‖p − ∇u‖_Y² + ‖q − Hu‖_Z²),

where β > 0 is the penalty parameter for the linear constraints. The minimization of (40) is then achieved by an iterative process: in each iteration, we minimize the augmented Lagrangian function (41) with respect to u, p, and q in turn, with the other two fixed at their most recent values, and then update λ₁ and λ₂. The computational procedure is presented in Algorithm 3.

Algorithm 3: The alternating direction method for solving (41).

Input:  f, g1k-1, g2k-1, r, β, and L.

Output:  uk, pk, qk, λ1k, and λ2k.

Initialization: p^{k,0} = p^{k−1}, q^{k,0} = q^{k−1}, λ₁^{k,0} = λ₁^{k−1}, and λ₂^{k,0} = λ₂^{k−1}.

while not converged and l<L  do

Step 1. Given p^{k,l−1}, q^{k,l−1}, λ₁^{k,l−1}, and λ₂^{k,l−1}, compute u^{k,l} by

u^{k,l} = F⁻¹( F(f − div(λ₁^{k,l−1} + βp^{k,l−1}) + H*(λ₂^{k,l−1} + βq^{k,l−1})) / (1 − βF(Δ) + βF(H*H)) ),   (a)

where F and F⁻¹ denote the discrete Fourier transform and the inverse discrete Fourier transform, respectively, and the Fourier transforms of the operators Δ and H*H are understood as the transforms of their corresponding convolution kernels.

Step 2. Given u^{k,l}, λ₁^{k,l−1}, and λ₂^{k,l−1}, update p^{k,l} and q^{k,l} by the two-dimensional shrinkage

p_{i,j}^{k,l} = max{ |∇u^{k,l} − λ₁^{k,l−1}/β|_{i,j} − (K(g₁^{k−1}∘g₁^{k−1}))_{i,j}/β, 0 } · (∇u^{k,l} − λ₁^{k,l−1}/β)_{i,j} / |∇u^{k,l} − λ₁^{k,l−1}/β|_{i,j},   (b)

q_{i,j}^{k,l} = max{ |Hu^{k,l} − λ₂^{k,l−1}/β|_{i,j} − (K(g₂^{k−1}∘g₂^{k−1}))_{i,j}/β, 0 } · (Hu^{k,l} − λ₂^{k,l−1}/β)_{i,j} / |Hu^{k,l} − λ₂^{k,l−1}/β|_{i,j},   (c)

respectively, where 0·(0/0) = 0 is assumed.

Step 3. Given u^{k,l}, p^{k,l}, and q^{k,l}, update λ₁^{k,l} and λ₂^{k,l} by

λ₁^{k,l} = λ₁^{k,l−1} + β(p^{k,l} − ∇u^{k,l}),   (d)

λ₂^{k,l} = λ₂^{k,l−1} + β(q^{k,l} − Hu^{k,l}),   (e)

respectively.

end while
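Step 2 of Algorithm 3 is a pointwise two-dimensional shrinkage with a spatially varying threshold. An illustrative NumPy sketch (the function name `shrink2d` is ours; the first array axis indexes the vector components at each pixel, and the 0·(0/0) = 0 convention is handled explicitly):

```python
import numpy as np

def shrink2d(y, w):
    """Pointwise shrinkage: scale the vector y[:, i, j] toward zero by
    the spatially varying threshold w[i, j]; 0*(0/0) is taken as 0."""
    mag = np.sqrt(np.sum(y**2, axis=0))            # pointwise |y|
    scale = np.maximum(mag - w, 0.0) / np.where(mag > 0.0, mag, 1.0)
    return scale * y

# A pixel with |y| = 5 and threshold 1 is shrunk to magnitude 4,
# with the direction of y preserved.
y = np.array([[[3.0]], [[4.0]]])
p = shrink2d(y, np.array([[1.0]]))
```

In Algorithm 3 this would be applied to ∇u − λ₁/β and Hu − λ₂/β with thresholds K(g∘g)/β.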

Now we make some remarks on the ADM procedure. First, every step in Algorithm 3 has a closed-form solution, so the alternating direction method can be implemented efficiently. Moreover, for a convex objective function with linear constraints, the convergence of the alternating direction method is guaranteed; see, for example, [28, 29]. Second, to obtain an exact solution of (a) in Algorithm 2, one needs to let L (the maximum iteration number of the ADM procedure) tend to infinity, that is, u^k = u^{k,∞}. Since the regularization parameters are not yet optimal, in practice it is not necessary to compute u^k exactly; for computational efficiency, only several ADM iterations are performed. Third, aiming at faster convergence of the ADM procedure, we initialize each iteration with the auxiliary variables and the Lagrange multipliers updated in the previous iteration. Numerical examples are given in Section 5.2 to demonstrate the efficiency of our numerical scheme.
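Because all the operators involved are periodic convolutions, the u-subproblem (a) of Algorithm 3 diagonalizes under the FFT. A self-contained NumPy sketch (our own helper names; the denominator is built from the impulse responses of Δ and H*H, matching the convolution-kernel interpretation in the remark above):

```python
import numpy as np

def dxp(u): return np.roll(u, -1, 1) - u        # forward differences
def dyp(u): return np.roll(u, -1, 0) - u
def dxm(u): return u - np.roll(u, 1, 1)         # backward differences
def dym(u): return u - np.roll(u, 1, 0)

def div(y): return dxm(y[0]) + dym(y[1])        # divergence (15)
def lap(u): return dxm(dxp(u)) + dym(dyp(u))    # Laplacian = div(grad u)

def hess(u):       # discrete Hessian (12)
    return np.stack([dxm(dxp(u)), dyp(dxp(u)), dxp(dyp(u)), dym(dyp(u))])

def hess_adj(z):   # adjoint H* (15); the mixed operators commute
    return (dxm(dxp(z[0])) + dxm(dym(z[1]))
            + dym(dxm(z[2])) + dym(dyp(z[3])))

def solve_u(f, p, q, lam1, lam2, beta):
    """FFT solve of (I - beta*Lap + beta*H*H) u = f - div(lam1 + beta*p)
    + H*(lam2 + beta*q), i.e. step (a) of Algorithm 3."""
    delta = np.zeros_like(f); delta[0, 0] = 1.0   # impulse -> kernels
    denom = (1.0 - beta * np.fft.fft2(lap(delta))
             + beta * np.fft.fft2(hess_adj(hess(delta))))
    rhs = f - div(lam1 + beta * p) + hess_adj(lam2 + beta * q)
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

The denominator is real and bounded below by 1, so the division is well conditioned for any β > 0.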

5. Numerical Experiments

In this section, we provide numerical results to illustrate the effectiveness of the proposed models. In Section 5.1, we compare the performance of the proposed models with the ROF model (3), the LLT model (4), and the hybrid model (5). In Section 5.2, we give some criteria for choosing the parameters in our algorithms. All the experiments are performed under Windows 7 and MATLAB R2010a (Version 7.10.0.499) on a desktop with an Intel Core i3-2130 CPU at 3.40 GHz and 4 GB of memory.

5.1. Comparative Results

Four test images, shown in Figure 2, are considered for the comparative experiment. The intensity range of the original images is scaled to [0, 1]. The degraded images are corrupted by white Gaussian noise with noise level 0.1 and are shown in Figures 3(a), 4(a), 5(a), and 6(a). The quality of the restored images is assessed by the relative error (ReErr) and the signal-to-noise ratio (SNR), defined as

(42) ReErr = ‖u − u_c‖_F / ‖u‖_F,  SNR = 10 log₁₀(‖u − ū‖_F² / ‖u − u_c‖_F²),

where u, ū, and u_c are the original image, the mean intensity value of u, and the restored image, respectively.

Now we present implementation details of the comparative experiment. In our algorithms, the parameters M and L are fixed at 0.07 and 10, respectively. The values of the parameters α and r are reported in the corresponding figures. For solving models (3), (4), and (5), we use the alternating direction method (ADM) . For a fair comparison, all of the parameters in these three models are optimized to achieve the best restoration with respect to the ReErr and SNR values. In all the ADM procedures, we set the penalty parameter β to 1, a default value suitable for most digital images with intensity range [0, 1]. We terminate all the algorithms by the stopping criterion

(43) ‖u^{i−1} − u^i‖_F / ‖u^{i−1}‖_F ≤ 10⁻⁴.
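The quality measures in (42) translate directly to NumPy (an illustrative sketch; the squared Frobenius norms inside the logarithm follow the usual dB convention assumed here):

```python
import numpy as np

def re_err(u_restored, u_true):
    """Relative error ||u - u_c||_F / ||u||_F from (42)."""
    return np.linalg.norm(u_restored - u_true) / np.linalg.norm(u_true)

def snr(u_restored, u_true):
    """SNR in dB: 10*log10(||u - mean(u)||_F^2 / ||u - u_c||_F^2)."""
    return 10.0 * np.log10(np.sum((u_true - u_true.mean()) ** 2)
                           / np.sum((u_true - u_restored) ** 2))
```

For example, a restoration whose error energy equals the original image's variance energy scores exactly 0 dB.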

In Figures 3, 4, 5, and 6, we display the resulting images restored by different models. The corresponding relative errors, SNR values (dB), and computational time (in seconds) are listed in Table 1. From these results, we find that our models restore better images than the ROF model (3), the LLT model (4), and the hybrid model (5), both visually and quantitatively. In the images restored by the ROF model, we observe that sharp edges are preserved, but the staircase effect is clearly present (e.g., the homogeneous regions in Figures 5 and 6). For the restored results of the LLT model, the staircase effect is suppressed, but some speckle effect is introduced in homogeneous regions (e.g., the background of Figures 3, 4, 5, and 6). We also note that both models (3) and (4) compromise details and textures due to using the global constant regularization parameter. For the images restored by the hybrid model, visual artifacts are almost suppressed, but details are not recovered well (e.g., the ramp region in Figure 3). This is due to the inaccurate estimation of their weighting function from the observed image. Our methods, on the other hand, restore the homogeneous regions significantly better while preserving more details. To better understand the behavior of our methods, some zoomed-in local results and contour plots are shown in Figures 7, 8, and 9. We can find that the homogeneous regions are almost smooth (e.g., the background of Figure 3, the flat and ramp regions in Figure 7, and the face of Barbara in Figure 9). The contour plots given in Figure 8 also visually illustrate the above observation. At the same time, details and textures are preserved clearly without being overregularized (e.g., the ramp region in Figure 3, the textures in Figure 7, and the textures on the scarf in Figure 9). Furthermore, we find that Model 2 performs better than Model 1 with respect to preserving edges (e.g., the contours around the lips and nose of Lena in Figure 8). 
In Figure 10, we show the final values of the regularization parameters in our models compared with the weighting functions in the hybrid model. It is clear that Figures 10(a) and 10(e) are noisy and inaccurate, whereas Figures 10(b)–10(d) and 10(f)–10(h) are less affected by noise. Moreover, we observe that the values of the regularization parameters are small in detail and texture regions and large in homogeneous regions. Therefore, our methods are superior with respect to removing noise while preserving details.

Summarized results of the comparative experiment.

              Figure 2(a)  Figure 2(b)  Figure 2(c)  Figure 2(d)
ReErr
Noisy image      0.1064       0.1847       0.1913       0.1643
ROF model        0.0522       0.1279       0.0784       0.0925
LLT model        0.0633       0.1273       0.0807       0.0894
Hybrid model     0.0509       0.1224       0.0758       0.0879
Model 1          0.0361       0.1151       0.0751       0.0842
Model 2          0.0298       0.1132       0.0729       0.0827

SNR (dB)
Noisy image       9.00         6.43         5.49         6.42
ROF model        15.19         9.62        13.24        11.41
LLT model        13.51         9.66        12.99        11.71
Hybrid model     15.41        10.00        13.53        11.85
Model 1          18.39        10.53        13.61        12.23
Model 2          20.06        10.68        13.87        12.38

Time (s)
ROF model         0.28         1.22         1.26         1.05
LLT model         1.33         4.63         1.73         3.06
Hybrid model      2.23         8.44         4.48         5.46
Model 1           2.87         6.36         5.18         4.65
Model 2           2.50         9.63         5.93         6.08
5.2. Parameter Study

In this section, we present some criteria for choosing the parameters needed to start our algorithms. In these experiments, the parameter settings are the same as those in Section 5.1, except for the parameters under test.

The window size r controls the smoothness of the spatially dependent regularization parameter. We test our algorithms for values of r from 1 to 19. Figure 11 shows the plots of the SNR values of the restored images. We see from the plots that a small value of r (which yields a sharper regularization parameter) is suitable for images with sharp features (e.g., r = 1 for Figure 2(a) and r = 3 for Figure 2(c)), whereas a moderate value of r (which yields a relatively smooth regularization parameter) is needed for images containing textures (e.g., r = 7 for Figures 2(b) and 2(d)). For large values of r (r ≥ 9), the changes in the SNR values are not significant. However, if r becomes too large, the regularization parameter is oversmoothed, which compromises image details.

Original images: (a) synthetic image 1 (128×128), (b) synthetic image 2 (256×256), (c) “Lena” (256×256), and (d) “Barbara” (256×256).

Results of different models for the synthetic image 1: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.1 and r=1), and (f) our Model 2 (α=0.18 and r=1).

Results of different models for the synthetic image 2: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.21 and r=7), and (f) our Model 2 (α=0.35 and r=7).

Results of different models for the image “Lena”: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.26 and r=3), and (f) our Model 2 (α=0.36 and r=3).

Results of different models for the image “Barbara”: (a) the noisy image, the restored images by (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1 (α=0.28 and r=7), and (f) Model 2 (α=0.39 and r=7).

Zoomed-in local results corresponding to Figures 4(b)–4(f): (a) the original image, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.

Zoomed-in local results corresponding to Figures 6(b)–6(f): (a) the original image, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.

Contour plots corresponding to Figures 5(b)–5(f): (a) the ideal contour plot, results of (b) the ROF model, (c) the LLT model, (d) the hybrid model, (e) our Model 1, and (f) our Model 2.

Final values of the regularization parameters in the proposed models and weighting functions in the hybrid model corresponding to Figures 4(d)–4(f) and 6(d)–6(f). (a) g in the hybrid model, (b) g* in Model 1, (c) g1* in Model 2, (d) g2* in Model 2, (e) g in the hybrid model, (f) g* in Model 1, (g) g1* in Model 2, and (h) g2* in Model 2.

SNR for results restored by our models with different window size r: (a) Model 1 and (b) Model 2.

The parameters M and α jointly determine the magnitude of the spatially dependent regularization parameter. The difficulty in tuning these parameters is that they depend not only on the noise level but also on the image itself. For this reason, the two parameters are tuned by trial and error. More precisely, we simplify the tuning process by fixing M and searching for the best α for each image.
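The fix-M-and-search-α strategy can be sketched as a one-dimensional grid search that maximizes the SNR of the restoration. The `denoise` function below is a stand-in placeholder (a simple neighbor-averaging blend), not Model 1 or Model 2; only the search pattern is the point.

```python
import numpy as np

def snr(u, u_exact):
    """Signal-to-noise ratio (dB) of a restoration u against the truth."""
    return 10 * np.log10(np.sum(u_exact**2) / np.sum((u - u_exact)**2))

def denoise(f, alpha):
    """Stand-in denoiser: blends the noisy image with its 4-neighbour
    average, alpha playing the role of the regularization strength
    (M is treated as fixed inside this placeholder)."""
    avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                  + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    return (f + alpha * avg) / (1 + alpha)

# synthetic test image and its noisy observation
rng = np.random.default_rng(0)
u_exact = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
f = u_exact + 0.1 * rng.standard_normal(u_exact.shape)

# grid search over alpha only, M held fixed
alphas = np.linspace(0.05, 1.0, 20)
best_alpha = max(alphas, key=lambda a: snr(denoise(f, a), u_exact))
print(snr(denoise(f, best_alpha), u_exact) > snr(f, u_exact))
```

In practice the search is repeated per image, since the best α depends on both the noise level and the image content.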

We also study the behavior of the proposed numerical scheme with respect to the choice of L and the initialization. For the setting of Figure 2(c), Table 2 summarizes the results of our models for different values of L under our initialization and under zero initialization (i.e., the auxiliary variables and the Lagrangian multipliers are initialized to zero). We see from Table 2 that with zero initialization, only a sufficiently large L (e.g., L=30 or 40) provides a good restoration, at increased computational cost. Our initialization, on the other hand, achieves comparable restoration quality with far less computation time. In addition, under our initialization there is no considerable difference between the relative errors and the SNR values across the tested values of L, whereas a too small or too large L yields comparatively high computational cost; hence a moderate L (e.g., L=10) is recommended in our algorithms.
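The benefit of warm starting can be illustrated in miniature. The quadratic subproblem, step size, and slowly drifting right-hand side below are all hypothetical; the sketch only shows why carrying the variables across outer iterations (our initialization) needs fewer inner steps L than resetting them to zero each time.

```python
import numpy as np

def inner_solve(A, b, x0, L, step):
    """L gradient steps on the subproblem 0.5*x^T A x - b^T x from x0."""
    x = x0.copy()
    for _ in range(L):
        x -= step * (A @ x - b)
    return x

A = np.diag(np.linspace(1.0, 4.0, 10))   # toy SPD subproblem matrix
step = 0.25                               # stable step for eigenvalues <= 4

def outer(L, warm):
    """20 outer iterations whose subproblem target drifts slowly,
    mimicking an alternating scheme; returns the final residual."""
    x = np.zeros(10)
    for k in range(20):
        b = np.linspace(1, 2, 10) * (1 + 0.01 * k)
        x0 = x if warm else np.zeros(10)   # warm start vs zero start
        x = inner_solve(A, b, x0, L, step)
    return np.linalg.norm(A @ x - b)       # residual of the last subproblem

# with the same small L, warm starting tracks the solution far better
print(outer(5, warm=True) < outer(5, warm=False))
```

This mirrors the behavior in Table 2: zero initialization only catches up when L is large, paying for it in computation time.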

Summarized results of the proposed models for different values of L with our initialization and zero initialization.

Initialization        L    Model 1                    Model 2
                           ReErr    SNR     Time      ReErr    SNR     Time
Our initialization    3    0.0754   13.57   4.66      0.0731   13.85   7.10
                      6    0.0752   13.60   4.21      0.0730   13.86   5.71
                      10   0.0751   13.61   5.51      0.0729   13.87   6.19
                      15   0.0752   13.60   6.27      0.0729   13.86   7.74
Zero initialization   10   0.0780   13.28   6.27      0.0768   13.42   5.87
                      20   0.0758   13.54   15.93     0.0740   13.74   14.21
                      30   0.0753   13.58   29.83     0.0733   13.83   23.87
                      40   0.0753   13.59   49.03     0.0730   13.85   33.28
6. Concluding Remarks

We have proposed two regularization models for image denoising in which the restored image and the spatially dependent regularization parameter are estimated simultaneously. By the construction of our models, the two variables mutually benefit from each other during the denoising process, so that joint optimality can be achieved. The numerical behavior of the regularization parameter and the strengths of our models have been discussed in detail. The proposed models, though nonconvex, can be solved asymptotically by the proposed alternating minimization scheme. Finally, the numerical results indicate that our methods outperform several popular methods with respect to both noise removal and detail preservation.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research was supported by the 973 Program (2013CB329404), NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Sci. and Tech. Research Project (2012GZX0080).
