Abstract and Applied Analysis, Volume 2013, Article ID 213536, doi:10.1155/2013/213536

Research Article

An Efficient Variational Method for Image Restoration

Jun Liu,¹ Ting-Zhu Huang,¹ Xiao-Guang Lv,² and Si Wang¹

¹ School of Mathematical Sciences/Institute of Computational Science, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
² School of Science, Huaihai Institute of Technology, Lianyungang, Jiangsu 222005, China

Received 29 July 2013; Accepted 14 October 2013; Published 28 November 2013

Academic Editor: Peilin Shi

Copyright © 2013 Jun Liu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Image restoration is one of the most fundamental issues in imaging science. Total variation regularization is widely used in image restoration problems for its capability to preserve edges. In this paper, we consider a constrained minimization problem with double total variation regularization terms. To solve this problem, we employ the split Bregman iteration method and the Chambolle’s algorithm. The convergence property of the algorithm is established. The numerical results demonstrate the effectiveness of the proposed method in terms of peak signal-to-noise ratio (PSNR) and the structure similarity index (SSIM).

1. Introduction

Image restoration is a fundamental problem in the literature of image processing. It plays an important role in various areas such as remote sensing, astronomy, medical imaging, and microscopy [1, 2]. Many image restoration tasks can be posed as linear inverse problems of the form (1) g = Hf + n, where H ∈ ℝ^{n²×n²} is a blurring matrix constructed from a discretized point spread function (PSF), f ∈ ℝ^{n²} is an original n×n gray-scale image, g ∈ ℝ^{n²} is a degraded observation, and n ∈ ℝ^{n²} is additive noise. We remark that the matrix H has a special structure that can be exploited in computations when special boundary conditions such as periodic and Dirichlet boundary conditions are imposed [3, 4]. In this work, the PSF is assumed to be known. In fact, if the PSF is unknown, there are a variety of techniques available for estimating it [5, 6].
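To make the degradation model concrete, note that under periodic boundary conditions the action of H is a circular convolution with the PSF, so Hf can be evaluated with FFTs. The following NumPy sketch is illustrative only (the function name, image size, and PSF are our own choices, not from the paper):

```python
import numpy as np

def blur_periodic(f, psf):
    """Apply the blurring operator H under periodic boundary conditions:
    a circular convolution of the image f with the point spread function."""
    pad = np.zeros_like(f)
    k = psf.shape[0]
    pad[:k, :k] = psf
    # Center the PSF at the origin so the convolution does not shift the image.
    pad = np.roll(pad, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(f)))

rng = np.random.default_rng(0)
f = rng.random((64, 64))                    # stand-in for the true image
psf = np.ones((9, 9)) / 81.0                # 9x9 uniform (average) blur
g = blur_periodic(f, psf) + 1e-3 * rng.standard_normal(f.shape)  # g = Hf + n
```

Blurring with a kernel whose weights sum to one leaves a constant image unchanged, which is a quick sanity check on the implementation.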

Image restoration problems are frequently ill conditioned; thus, the straightforward solution of (1) typically does not yield a meaningful approximation [7, 8]. In order to avoid this difficulty, one typical method is to replace the linear system (1) by a nearby system that is less sensitive to the error n in g and to consider the computed solution of the latter system. This replacement is known as regularization.

One of the most popular regularization approaches is Tikhonov regularization [9], which seeks to minimize a penalized least squares problem of the form (2) min_f {‖g − Hf‖₂² + α‖Rf‖₂²}, where the first term measures the data fidelity of the solution f and the regularization term ‖Rf‖₂² enforces smoothness of the solution. The positive regularization parameter α balances the tradeoff between fidelity and noise sensitivity. The regularization operator R is a carefully chosen matrix, often the identity matrix or a discrete approximation of the first or second order derivative operator.

In application to image processing, however, Tikhonov regularization can produce poor solutions (with overly smoothed edges) when the desired solution comes from an image with edges; that is, it overly penalizes discontinuities in the solutions [10]. In this regard, Rudin et al. [11] proposed total variation (TV) regularization, which has the ability to preserve edges well and remove noise at the same time. The resulting model (commonly referred to as the Rudin-Osher-Fatemi (ROF) model) has proven successful in a wide range of applications in image processing [12]. We should note that there are many other edge-preserving restoration techniques in the literature, such as the anisotropic diffusion method of Perona and Malik for denoising [13] and the edge-preserving method based on morphological wavelets and dyadic trees proposed by Xiang and Ramadge [14]. In this paper, we focus on the minimization problem with TV regularization (3) min_f {‖g − Hf‖₂² + α‖f‖_TV}, where ‖·‖_TV denotes the discrete TV norm. To define the discrete TV norm, we first introduce the discrete gradient ∇f: (4) (∇f)_{i,j} = ((∇f)^x_{i,j}, (∇f)^y_{i,j}) with (5) (∇f)^x_{i,j} = f_{i+1,j} − f_{i,j} if i < n, and 0 if i = n; (∇f)^y_{i,j} = f_{i,j+1} − f_{i,j} if j < n, and 0 if j = n, for i, j = 1, …, n, where f_{i,j} represents the value of pixel (i,j) in the image (the ((j−1)n+i)th entry of the vector f). Then the discrete TV norm of f is defined as follows: (6) ‖f‖_TV := Σ_{1≤i,j≤n} |(∇f)_{i,j}| = Σ_{1≤i,j≤n} √(((∇f)^x_{i,j})² + ((∇f)^y_{i,j})²), where |y| = √(y₁² + y₂²) for every y = (y₁, y₂) ∈ ℝ².
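The discrete gradient (5) and the isotropic TV norm (6) translate directly into a few lines of NumPy (an illustrative sketch; the helper names are ours):

```python
import numpy as np

def grad(f):
    """Discrete gradient (5): forward differences, zero at the last row/column."""
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    fx[:-1, :] = f[1:, :] - f[:-1, :]    # (grad f)^x, zero for i = n
    fy[:, :-1] = f[:, 1:] - f[:, :-1]    # (grad f)^y, zero for j = n
    return fx, fy

def tv_norm(f):
    """Isotropic discrete TV norm (6): sum of pointwise gradient magnitudes."""
    fx, fy = grad(f)
    return float(np.sum(np.sqrt(fx**2 + fy**2)))
```

A constant image has zero TV, while a sharp step contributes its jump height once per pixel along the edge, which is why the TV norm penalizes oscillation but not clean discontinuities.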

In recent years, a great many algorithms have been developed for total variation based image restoration and proved to be effective for reducing blur and noise while preserving edges. In the original TV regularization paper [11], the authors proposed a time marching scheme to solve the associated Euler-Lagrange equation of (3). The drawback of this method is that it is very slow due to stability constraints. Later, Vogel and Oman [16] proposed a lagged diffusivity fixed point method to solve the same Euler-Lagrange equation of (3). They proved that this method is globally convergent and asymptotically faster than the explicit time marching scheme [17]. In [18], Chan et al. applied Newton's method to solve the nonlinear primal-dual system of (3). Chambolle [15] considered a dual formulation of the TV denoising problem and proposed a semi-implicit gradient descent algorithm to solve the resulting constrained optimization problem. This method is globally convergent with a suitable step size. Recently, Wang et al. [19] proposed a fast total variation deconvolution method which uses a splitting technique and constructs an iterative procedure that alternately solves a pair of easy subproblems associated with an increasing sequence of penalty parameter values. In [20], Goldstein and Osher proposed the novel split Bregman iterative algorithm to deal with the artificial constraints; their method has several advantages such as a fast convergence rate and stability.

More recently, Huang et al. [2] proposed a minimization problem of the form (7) min_{f,u} 𝒥(f,u) ≔ ‖Hf − g‖₂² + α₁‖f − u‖₂² + α₂‖u‖_TV, where α₁ and α₂ are positive regularization parameters. The authors employed an alternating minimization algorithm to solve (7). Their numerical results on image restoration show the efficiency of the method. The idea of this method is similar to the one proposed in [19]; both use the penalty method by introducing an auxiliary variable.

In [2], the minimization problem (7) is solved in two steps: a deblurring step carried out by Tikhonov regularization and a subsequent denoising step. Although the noise can be removed to a certain extent in the second step, some details may already be lost before the denoising step because of the Tikhonov regularization used in the deblurring step; as we know, Tikhonov regularization penalizes edges.

In [21], Chavent and Kunisch considered a total bounded variation regularization minimization problem given by (8) min_f ‖Hf − g‖₂² + α‖f‖₂² + β‖f‖_TV, where α and β are both positive parameters. The authors proved that the solution of (8) is unique. In [22], Hintermüller and Kunisch applied the semismooth Newton method to solve the Fenchel predual of (8). Numerical results for image denoising and zooming/resizing showed the efficiency of their approach. In [23], Liu and Huang introduced an extended split Bregman iteration to solve the minimization problem (8). Numerical simulations illustrated the excellent reconstruction performance of their method.

Note that the unconstrained problem (3) is equivalent to the following constrained minimization problem: (9) min_{f,u} 𝒥(f,u) ≔ ‖Hf − g‖₂² + α₂‖f‖_TV + α₃‖u‖_TV s.t. f = u, where α₂, α₃ > 0. Then, using the penalty method, we obtain the proposed minimization problem (10) min_{f,u} 𝒥(f,u) ≔ ‖Hf − g‖₂² + α₁‖f − u‖₂² + α₂‖f‖_TV + α₃‖u‖_TV, where α₁, α₂, and α₃ are positive regularization parameters. We note that the problem (10) is the same as the problem (7) if we set the parameter α₂ = 0. The minimization problem (10) can be rewritten as follows: (11) min_u { min_f {‖Hf − g‖₂² + α₁‖f − u‖₂² + α₂‖f‖_TV} + α₃‖u‖_TV }. The method for solving the minimization problem (11) will be discussed in Section 2. Our numerical results will show that the proposed method yields state-of-the-art results in terms of both SSIM and PSNR.

This paper is outlined as follows. In the next section, we first give a brief introduction of the split Bregman method, and then we propose an iterative algorithm for solving (10). The convergence property of the proposed method is given in Section 3. In Section 4, we present numerical experiments to show the efficiency of the proposed method. Finally, the concluding remarks can be found in Section 5.

2. Alternating Minimization Iterative Scheme

In this section, we derive an algorithm to solve the minimization problem (11). Before we discuss the alternating iterative algorithm for solving (11), we would like to give a brief introduction of the split Bregman iteration .

2.1. Split Bregman Iteration

2.1.1. Bregman Iteration

Bregman iteration is a concept that originated in functional analysis for finding minimizers of convex functionals [24]. Osher et al. [25] first applied the Bregman iteration to the ROF model for the denoising problem in image processing. The basic idea of the Bregman iteration is to transform a constrained optimization problem into an unconstrained one. The objective functional in the transformed unconstrained problem is defined by means of the Bregman distance of a convex functional. Suppose the unconstrained problem is formulated as (12) min_f J(f) + (λ/2)F(f,g), where J(f) is a convex function and F(f,g) is convex and differentiable. The Bregman distance of a convex function J at the point v is defined as the following (nonnegative) quantity: (13) D_J^p(f,v) ≔ J(f) − J(v) − ⟨p, f − v⟩, where p ∈ ∂J(v); that is, p is one of the subgradients of J at v. The authors in [25] employed the Bregman iteration method to solve the unconstrained problem (12). Assume F(f,g) = ‖Hf − g‖₂²; then the Bregman iteration method alternately iterates the following scheme: (14) f^{k+1} = argmin_f D_J^{p^k}(f, f^k) + (λ/2)‖Hf − g‖₂² = argmin_f J(f) − ⟨p^k, f − f^k⟩ + (λ/2)‖Hf − g‖₂², p^{k+1} = p^k − λH^T(Hf^{k+1} − g). As shown in [25, 26], when H is linear, the iteration (14) can be reformulated into the simplified method (15) f^{k+1} = argmin_f J(f) + (λ/2)‖Hf − b^k‖₂², (16) b^{k+1} = b^k + (g − Hf^{k+1}). This Bregman iteration technique has mainly two advantages over traditional penalty function/continuation methods. One is that it converges very quickly when applied to certain types of objective functions, especially for problems where J(f) contains an l1-regularization term. The other advantage is that the value of λ in (15) remains constant. See [20, 25, 26] for further details.
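The simplified iteration (15)-(16) can be sketched for the toy case H = I and J(f) = ‖f‖₁, where each subproblem has an exact soft-threshold solution. This is an illustrative example of ours, not the paper's deblurring setup:

```python
import numpy as np

def shrink(x, t):
    """Soft-threshold: the exact minimizer of t*|d| + (1/2)(d - x)^2, pointwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bregman_l1(g, lam, n_iter=50):
    """Simplified Bregman iteration (15)-(16) for J(f) = ||f||_1 with H = I:
    each subproblem is a soft-threshold, and the residual g - f is
    "added back" through b, so the iterates recover g exactly."""
    b = g.copy()
    f = np.zeros_like(g)
    for _ in range(n_iter):
        f = shrink(b, 1.0 / lam)   # f^{k+1} = argmin ||f||_1 + lam/2 ||f - b^k||^2
        b = b + (g - f)            # b^{k+1} = b^k + (g - f^{k+1})
    return f
```

This illustrates the first advantage mentioned above: unlike a quadratic penalty with fixed λ, which always biases the l1 minimizer toward zero, the Bregman updates drive the constraint residual to zero with λ held constant.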

2.1.2. Split Bregman Iteration

In [20], the authors considered the problem (17) min_{f,d} ‖d‖₁ + E(f) such that d = ϕ(f), where E(f) and ϕ(f) are convex and differentiable. To solve the problem (17), they convert the constrained problem into an unconstrained one: (18) min_{f,d} ‖d‖₁ + E(f) + (λ/2)‖d − ϕ(f)‖₂².

Let J(f,d) = ‖d‖₁ + E(f) and F(f,d) = ‖d − ϕ(f)‖₂²; the aforementioned Bregman iteration (14) can be similarly applied to (18). They thus obtained the following elegant two-phase iterative algorithm (the split Bregman iteration scheme).

Split Bregman iteration: (19) (f^{k+1}, d^{k+1}) = argmin_{f,d} ‖d‖₁ + E(f) + (λ/2)‖d − ϕ(f) − b^k‖₂², (20) b^{k+1} = b^k + (ϕ(f^{k+1}) − d^{k+1}). The split Bregman iteration has a stable convergence property, and it is extremely fast and very simple to program. For more details on split Bregman and its applications, see [20, 23, 27, 28].

2.2. Proposed Alternating Iteration Scheme

In this section, we propose an alternating minimization algorithm to solve the problem (10). Given an initial u⁰, we compute (21) 𝒫_h(u^{k−1}) ≔ f^k = argmin_f {‖Hf − g‖₂² + α₁‖f − u^{k−1}‖₂² + α₂‖f‖_TV}, 𝒫_tv(f^k) ≔ u^k = argmin_u {α₁‖u − f^k‖₂² + α₃‖u‖_TV}, for k = 1, 2, …. As a matter of convenience, we express the relationship between u^k and u^{k−1} as (22) u^k = 𝒫_tv(𝒫_h(u^{k−1})); we write u^k = 𝒯(u^{k−1}) for simplicity, where 𝒯(·) = 𝒫_tv(𝒫_h(·)).

Considering f, the minimization problem (10) reduces to a minimization problem with respect to f: (23) min_f {‖Hf − g‖₂² + α₁‖f − u‖₂² + α₂‖f‖_TV}. Although there are many methods to solve (23), we focus on the split Bregman iteration here. Let E(f) = (1/α₂)‖Hf − g‖₂² + (α₁/α₂)‖f − u‖₂², J(f,d) = ‖d‖₁ + E(f), and F(f,d) = ‖d − ∇f‖₂². According to (19) and (20), we have (24) (f̃^{i+1}, d^{i+1}) = argmin_{f,d} ‖d‖₁ + (1/α₂)‖Hf − g‖₂² + (α₁/α₂)‖f − u‖₂² + (λ/2)‖d − ∇f − b^i‖₂², b^{i+1} = b^i + (∇f̃^{i+1} − d^{i+1}). Clearly, the minimization with respect to f̃^{i+1} and d^{i+1} in (24) is decoupled, and thus they can be solved separately. We solve the subminimization problem (24) efficiently by iteratively minimizing the following subproblems with respect to f and d separately: (25) f̃^{i+1} = argmin_f ‖Hf − g‖₂² + α₁‖f − u‖₂² + (α₂λ/2)‖d^i − ∇f − b^i‖₂², (26) d^{i+1} = argmin_d ‖d‖₁ + (λ/2)‖d − ∇f̃^{i+1} − b^i‖₂². The minimizer f̃^{i+1} is given by the normal equations (27) (H^T H + (α₂λ/2)∇^T∇ + α₁I) f̃^{i+1} = H^T g + α₁u + (α₂λ/2)∇^T(d^i − b^i), where I is the identity matrix and ∇^T = −div represents the adjoint of ∇. When an appropriate boundary condition is given, the normal equations (27) can be solved by fast algorithms. In this paper, we impose the periodic boundary condition; then the matrices H^T H and ∇^T∇ are all block circulant [4, 8] and can be diagonalized by the two-dimensional discrete Fourier transform ℱ. By applying the convolution theorem, we obtain (28) f̃^{i+1} = ℱ^{−1}{ [2ℱ(H)* ∘ ℱ(g) + 2α₁ℱ(u) + α₂λ(ℱ(∇x)* ∘ ℱ(v^i_x)) + α₂λ(ℱ(∇y)* ∘ ℱ(v^i_y))] / [2ℱ(H)* ∘ ℱ(H) + 2α₁ + α₂λ(ℱ(∇x)* ∘ ℱ(∇x)) + α₂λ(ℱ(∇y)* ∘ ℱ(∇y))] }, where "*" denotes complex conjugation, "∘" denotes component-wise multiplication, v^i = d^i − b^i = (d^i_x − b^i_x, d^i_y − b^i_y) = (v^i_x, v^i_y), ∇ = (∇x, ∇y), and the division is component-wise as well.
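A minimal NumPy sketch of the Fourier-domain solve of (27) under periodic boundary conditions follows. The function and variable names are ours, and the PSF is assumed pre-embedded in an image-sized array centered at the origin:

```python
import numpy as np

def solve_f_subproblem(g, u, psf_pad, dx, dy, bx, by, a1, a2, lam):
    """Solve the normal equations (27) via 2-D FFTs (periodic boundary
    conditions). psf_pad is the PSF embedded in an image-sized array and
    centered at the origin; dx, dy, bx, by are the split Bregman variables."""
    n = g.shape[0]
    F, Fi = np.fft.fft2, np.fft.ifft2
    H = F(psf_pad)
    # Symbols of the forward-difference operators (circular convolutions):
    # convolving with kx computes f[i+1] - f[i] along the first axis.
    kx = np.zeros((n, n)); kx[0, 0] = -1.0; kx[-1, 0] = 1.0
    ky = np.zeros((n, n)); ky[0, 0] = -1.0; ky[0, -1] = 1.0
    Dx, Dy = F(kx), F(ky)
    c = a2 * lam / 2.0
    num = (np.conj(H) * F(g) + a1 * F(u)
           + c * (np.conj(Dx) * F(dx - bx) + np.conj(Dy) * F(dy - by)))
    den = np.abs(H)**2 + a1 + c * (np.abs(Dx)**2 + np.abs(Dy)**2)
    return np.real(Fi(num / den))
```

With α₂ = 0 the TV coupling drops out and the solve collapses to the closed-form blend (g + α₁u)/(1 + α₁) when H = I, which is a convenient correctness check.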

The minimizer d^{i+1} of (26) can be determined by the following shrinkage formula: (29) d^{i+1} = max{|∇f̃^{i+1} + b^i| − 1/λ, 0} · (∇f̃^{i+1} + b^i)/|∇f̃^{i+1} + b^i|, with the convention 0·(0/0) = 0.
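The shrinkage step (29) acts pointwise on the two gradient components; a small NumPy sketch (the function name is ours, and the small constant in the denominator implements the stated 0·(0/0) = 0 convention):

```python
import numpy as np

def isotropic_shrink(vx, vy, t):
    """Isotropic shrinkage (29): d = max(|v| - t, 0) * v / |v|, applied
    pointwise to v = grad(f) + b with threshold t = 1/lam."""
    mag = np.sqrt(vx**2 + vy**2)
    scale = np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
    return scale * vx, scale * vy
```

Because the threshold is applied to the joint magnitude rather than to each component, the direction of the gradient vector is preserved while its length shrinks, which is what makes the scheme isotropic.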

To sum up, we get Algorithm 1 for solving the subproblem (23).

Algorithm 1: The split Bregman scheme for the subproblem (23).

(1) initialization: λ, α₁, α₂, u, f̃⁰ = 0, d⁰ = b⁰ = 0, i = 0;

(2) iteration:

compute f̃^{i+1} by solving the normal equations (27) via formula (28)

d^{i+1} = max{|∇f̃^{i+1} + b^i| − 1/λ, 0} · (∇f̃^{i+1} + b^i)/|∇f̃^{i+1} + b^i|, with the convention 0·(0/0) = 0

b^{i+1} = b^i + (∇f̃^{i+1} − d^{i+1})

stop or set i=i+1

Considering u, the outer minimization problem can be interpreted as a TV denoising of the recovered f generated by the previous step. The minimization problem is as follows: (30) 𝒫_tv(f) = argmin_u {α₁‖u − f‖₂² + α₃‖u‖_TV}. There are several efficient methods for solving this problem, such as the primal-dual method [18, 29], the lagged diffusivity fixed point iteration proposed by Vogel and Oman [16], the semismooth Newton method [30], and Chambolle's dual algorithm [15]. For its simplicity, we adopt Chambolle's dual algorithm for the TV denoising problem (30). The idea given by Chambolle is to replace the optimization of the image u by the optimization of a vector field p that is related to u by u = f − (α₃/(2α₁)) div p. For a noisy image f, the vector field is the one that minimizes (31) min_p ‖f − (α₃/(2α₁)) div p‖², s.t. |p_{i,j}| ≤ 1, i, j = 1, …, n, where (32) p_{i,j} = [p^x_{i,j}, p^y_{i,j}]^T is the dual variable at the (i,j)th pixel location, p is the concatenation of all p_{i,j}, and the discrete divergence of p is given by (33) (div p)_{i,j} ≔ p^x_{i,j} − p^x_{i−1,j} + p^y_{i,j} − p^y_{i,j−1} with p^x_{0,j} = p^y_{i,0} = 0. The vector div p is the concatenation of all (div p)_{i,j}. For simplicity, we denote β = α₃/(2α₁). The iterative scheme proposed by Chambolle for computing the optimal solution p is as follows: (34) p^{l+1}_{i,j} = (p^l_{i,j} + τ(∇(div p^l − f/β))_{i,j}) / (1 + τ|(∇(div p^l − f/β))_{i,j}|), 1 ≤ i, j ≤ n, where τ is the step size and p^l_{i,j} is the lth iterate of the method; see [15] for more details. After the minimizer p* of the constrained optimization problem (31) is determined, the denoised image can be computed by (35) u* = f − β div p*.
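Chambolle's iteration (34), the discrete divergence (33), and the recovery step (35) can be sketched as follows (illustrative NumPy of ours; τ = 0.248 ≤ 1/8 is a common step-size choice for which the iteration is known to converge):

```python
import numpy as np

def div_p(px, py):
    """Discrete divergence (33) with the conventions p^x_{0,j} = p^y_{i,0} = 0."""
    dpx = px.copy(); dpx[1:, :] -= px[:-1, :]
    dpy = py.copy(); dpy[:, 1:] -= py[:, :-1]
    return dpx + dpy

def chambolle_denoise(f, beta, tau=0.248, n_iter=100):
    """Chambolle's dual iteration (34) for the denoising step (30),
    returning u* = f - beta * div(p*) as in (35)."""
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        w = div_p(px, py) - f / beta
        gx = np.zeros_like(f); gy = np.zeros_like(f)
        gx[:-1, :] = w[1:, :] - w[:-1, :]      # forward differences of w
        gy[:, :-1] = w[:, 1:] - w[:, :-1]
        mag = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1.0 + tau * mag)
        py = (py + tau * gy) / (1.0 + tau * mag)
    return f - beta * div_p(px, py)
```

The update (34) keeps each p_{i,j} inside the unit disk automatically, since dividing by 1 + τ|·| plays the role of the projection in the constrained problem (31).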

In summary, we obtain Algorithm 2 by using alternating minimization scheme to solve the minimization problem (10).

Algorithm 2: Alternating iteration scheme for solving (10).

(1) initialization: u⁰, p⁰_{i,j} = 0, α₁, α₂, α₃, λ, k = 0;

(2) iteration

compute fk using Algorithm 1 for fixed uk-1

compute u^k according to Chambolle's method (34) and (35) for fixed f^k

stop or set k=k+1

3. Convergence Analysis

In this section, we make use of a theorem proposed in [31] to establish the convergence of the proposed algorithm. The theorem is as follows.

Theorem 1 (see [31]).

Let 𝒯: ℝ^{n²} → ℝ^{n²} be a β-averaged nonexpansive operator whose set of fixed points is nonempty. Then for any x⁰, the sequence x^k = 𝒯^k x⁰ converges to a fixed point in ℝ^{n²}.

Definition 2.

An operator 𝒯 is called nonexpansive if, for all x and y ∈ ℝ^{n²}, (36) ‖𝒯x − 𝒯y‖ ≤ ‖x − y‖. Given a nonexpansive operator 𝒩, let 𝒯 = (1 − α)ℐ + α𝒩 for some α ∈ (0,1); then the operator 𝒯 is said to be α-averaged.

Definition 3.

An operator 𝒯 is called ν-inverse strongly monotone (ism) if there is ν > 0 such that (37) ⟨𝒯x − 𝒯y, x − y⟩ ≥ ν‖𝒯x − 𝒯y‖². Let 𝒞 = ℐ − 𝒯 be the complement of the operator 𝒯; then we easily get the following identity: (38) ‖x − y‖² − ‖𝒯x − 𝒯y‖² = 2⟨𝒞x − 𝒞y, x − y⟩ − ‖𝒞x − 𝒞y‖². An operator is called firmly nonexpansive if it is 1-ism.
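Identity (38) is simply the expansion of a squared norm with 𝒞 = ℐ − 𝒯; for completeness:

```latex
\begin{aligned}
\|\mathcal{T}x - \mathcal{T}y\|^2
  &= \|(x - y) - (\mathcal{C}x - \mathcal{C}y)\|^2 \\
  &= \|x - y\|^2
     - 2\langle \mathcal{C}x - \mathcal{C}y,\; x - y\rangle
     + \|\mathcal{C}x - \mathcal{C}y\|^2 ,
\end{aligned}
```

and moving ‖𝒯x − 𝒯y‖² to the left side gives ‖x − y‖² − ‖𝒯x − 𝒯y‖² = 2⟨𝒞x − 𝒞y, x − y⟩ − ‖𝒞x − 𝒞y‖².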

Lemma 4 (see [31]).

An operator 𝒩 is nonexpansive if and only if its complement 𝒞 = ℐ − 𝒩 is 1/2-ism. If 𝒞 is ν-ism and γ > 0, then the operator γ𝒞 is (ν/γ)-ism.

Lemma 5.

An operator 𝒫 is β-averaged nonexpansive if and only if its complement 𝒞 = ℐ − 𝒫 is (1/(2β))-ism.

Proof.

Firstly, suppose 𝒫 is β-averaged; from Definition 2, there exists a nonexpansive operator 𝒩 such that 𝒫 = (1 − β)ℐ + β𝒩, and then 𝒞 = ℐ − 𝒫 = β(ℐ − 𝒩). Since 𝒩 is nonexpansive, from Lemma 4 we have that ℐ − 𝒩 is 1/2-ism and 𝒞 = β(ℐ − 𝒩) is (1/(2β))-ism.

Now assume that 𝒞 is (1/(2β))-ism. We write 𝒫 = (1 − β)ℐ + β𝒩 with 𝒩 = ℐ − (1/β)𝒞. From Lemma 4, ℐ − 𝒩 = (1/β)𝒞 is 1/2-ism, so 𝒩 is nonexpansive; then from Definition 2, the operator 𝒫 is β-averaged.

Lemma 6 (see [32]).

Let φ be convex and lower semicontinuous and α > 0. Suppose x̂ is defined as follows: (39) x̂ = argmin_x ‖y − x‖² + αφ(x). Define 𝒮 such that x̂ = 𝒮(y) for every y. Then 𝒮 and ℐ − 𝒮 are firmly nonexpansive.

Theorem 7.

Let α₁ and α₂ be positive numbers. Suppose f̂ is defined as follows: (40) f̂ = argmin_f ‖Hf − g‖₂² + α₁‖f − u‖₂² + α₂‖f‖_TV. Define 𝒫_h such that f̂ = 𝒫_h(u) for every u. Then 𝒫_h is 1/2-averaged nonexpansive.

Proof.

Let ϕ(f) = ‖f‖_TV. For every u, the minimum in (40) is achieved at a unique point 𝒫_h(u), which is characterized by the inclusion (41) H^T g + α₁u − (H^T H + α₁I)𝒫_h(u) ∈ (α₂/2)∂ϕ(𝒫_h(u)). From the property of the subdifferential of ϕ, for all u, w ∈ ℝ^{n²} we have the following inequalities: (42) (2/α₂)⟨𝒫_h(w) − 𝒫_h(u), H^T g + α₁u − (H^T H + α₁I)𝒫_h(u)⟩ + ϕ(𝒫_h(u)) ≤ ϕ(𝒫_h(w)), (43) (2/α₂)⟨𝒫_h(u) − 𝒫_h(w), H^T g + α₁w − (H^T H + α₁I)𝒫_h(w)⟩ + ϕ(𝒫_h(w)) ≤ ϕ(𝒫_h(u)). Adding these two inequalities, we obtain (44) [𝒫_h(w) − 𝒫_h(u)]^T((1/α₁)H^T H + I)[𝒫_h(w) − 𝒫_h(u)] ≤ [𝒫_h(w) − 𝒫_h(u)]^T(w − u). It is obvious that we have the following inequality: (45) ‖𝒫_h(w) − 𝒫_h(u)‖² ≤ [𝒫_h(w) − 𝒫_h(u)]^T((1/α₁)H^T H + I)[𝒫_h(w) − 𝒫_h(u)]. From (44) and (45), we obtain (46) ‖𝒫_h(w) − 𝒫_h(u)‖² ≤ [𝒫_h(w) − 𝒫_h(u)]^T(w − u). Then from Definition 3, the operator 𝒫_h is firmly nonexpansive. Analogously, we can easily obtain that the operator ℐ − 𝒫_h is also firmly nonexpansive. Therefore, it follows from Lemma 5 that 𝒫_h is 1/2-averaged nonexpansive.

Corollary 8.

The operator 𝒯 = 𝒫_tv(𝒫_h(·)) is 3/4-averaged nonexpansive.

Proof.

From Lemmas 5 and 6 and Theorem 7, we know that 𝒫_tv and 𝒫_h are both 1/2-averaged nonexpansive operators, so there exist nonexpansive operators 𝒩_h and 𝒩_tv such that (47) 𝒫_h = (1/2)(ℐ + 𝒩_h), 𝒫_tv = (1/2)(ℐ + 𝒩_tv). Thus we have (48) 𝒯 = 𝒫_tv(𝒫_h(·)) = (1/4)(ℐ + 𝒩_h + 𝒩_tv + 𝒩_tv𝒩_h). Set 𝒩 = (1/3)(𝒩_h + 𝒩_tv + 𝒩_tv𝒩_h); then for any x and y, (49) ‖𝒩x − 𝒩y‖ ≤ (1/3)(‖𝒩_h x − 𝒩_h y‖ + ‖𝒩_tv x − 𝒩_tv y‖ + ‖𝒩_tv𝒩_h(x) − 𝒩_tv𝒩_h(y)‖) ≤ (1/3)(3‖x − y‖) = ‖x − y‖. Consequently, 𝒩 is nonexpansive, and we rewrite 𝒯 as (50) 𝒯 = (1 − 3/4)ℐ + (3/4)𝒩. It follows from Definition 2 that the operator 𝒯 = 𝒫_tv(𝒫_h(·)) is 3/4-averaged.

According to Theorem 1 and Corollary 8, we conclude that for any initial guess u⁰ ∈ ℝ^{n²}, the sequence {u^k} generated by (22) converges to the minimizer of 𝒥 in (10).

4. Numerical Experiments

In this section, we present several numerical experiments to illustrate the behavior of the proposed method for image restoration problems. The quality of the restoration results by different methods is compared quantitatively using the PSNR and SSIM. PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. The higher the PSNR value, the higher the image quality. The SSIM is a well-known quality metric used to measure the similarity between two images. The method was developed by Wang et al. [33] and is based on three specific statistical measures that are much closer to how the human eye perceives differences between images. The higher the SSIM value, the better the restoration. We also use the blurred signal-to-noise ratio (BSNR) to describe how much noise is added to the blurry image. Suppose f, g, f̃, and n are the original image, the blurred and noisy image, the restored image, and the noise, respectively. The PSNR, SSIM, and BSNR are defined as follows [2, 34]: (51) PSNR = 10 log₁₀(n²·MaxI² / ‖f − f̃‖²), where MaxI is the maximum possible pixel value of the image (i.e., when the pixels are represented using 8 bits per sample, this is 255); (52) SSIM = l(f,f̃)·c(f,f̃)·s(f,f̃), where (53) l(f,f̃) = (2μ_f μ_f̃ + C₁)/(μ_f² + μ_f̃² + C₁), c(f,f̃) = (2σ_f σ_f̃ + C₂)/(σ_f² + σ_f̃² + C₂), s(f,f̃) = (σ_{ff̃} + C₃)/(σ_f σ_f̃ + C₃). The first term in (52) is the luminance comparison function, which measures the closeness of the two images' mean luminances (μ_f and μ_f̃). The unique maximum of this factor equals 1 if and only if μ_f = μ_f̃. The second term c(f,f̃) is the contrast comparison function, which measures the closeness of the contrast of the two images. Here the contrast is measured by the standard deviations σ_f and σ_f̃. This term achieves its maximum value 1 if and only if σ_f = σ_f̃. The third term is the structure comparison function, which measures the correlation coefficient between the two images f and f̃. Note that σ_{ff̃} is the covariance between f and f̃.
The SSIM index takes values in [0, 1]. A value of 0 means no correlation between the images, and 1 means that f = f̃. The positive constants C₁, C₂, and C₃ can be thought of as stabilizing constants for near-zero denominator values. In the following experiments, we also use the SSIM index map to reveal areas of high/low similarity between two images; the whiter the SSIM index map, the closer the two images. We refer the reader to [33, 34] for further details on SSIM and the SSIM index map.

The BSNR is given by (54) BSNR = 20 log₁₀(‖g‖/‖n‖).
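The three metrics (51), (52)-(53), and (54) are straightforward to compute. The sketch below evaluates SSIM from global image statistics (the usual SSIM index averages the same expression over local windows); the constants C₁ = (0.01·MaxI)², C₂ = (0.03·MaxI)², and C₃ = C₂/2 follow a common convention and are our assumptions here:

```python
import numpy as np

def psnr(f, f_rest, max_i=255.0):
    """PSNR (51): 10*log10(n^2 * MaxI^2 / ||f - f_rest||^2) = 10*log10(MaxI^2/MSE)."""
    mse = np.mean((f - f_rest) ** 2)
    return 10.0 * np.log10(max_i**2 / mse)

def bsnr(g, noise):
    """BSNR (54): 20*log10(||g|| / ||noise||)."""
    return 20.0 * np.log10(np.linalg.norm(g) / np.linalg.norm(noise))

def ssim_global(f, f_rest, max_i=255.0):
    """SSIM (52)-(53) computed from global image statistics."""
    c1, c2 = (0.01 * max_i) ** 2, (0.03 * max_i) ** 2
    c3 = c2 / 2.0
    mu1, mu2 = f.mean(), f_rest.mean()
    s1, s2 = f.std(), f_rest.std()
    cov = ((f - mu1) * (f_rest - mu2)).mean()
    l = (2 * mu1 * mu2 + c1) / (mu1**2 + mu2**2 + c1)     # luminance term
    c = (2 * s1 * s2 + c2) / (s1**2 + s2**2 + c2)         # contrast term
    s = (cov + c3) / (s1 * s2 + c3)                       # structure term
    return l * c * s
```

As expected from the definitions, comparing an image with itself gives SSIM = 1, and a uniform error of one gray level against an 8-bit range gives a PSNR slightly above 48 dB.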

In the following experiments, we compare our proposed method (called FNDTV in what follows) with FastTV [2]. For FastTV, based on the suggestions in [2], we fixed its parameter α₁ = 0.003 for BSNR = 40 dB and α₁ = 0.006 for BSNR = 30 dB, and we determined the value of α₂ for which the restored images achieve the best performance. For our method, we likewise determined the values of the regularization parameters that give the best performance.

The stopping criterion of the proposed method is that the relative difference between successive iterates of the restored image should satisfy the following inequality: (55) ‖f^{k+1} − f^k‖/‖f^{k+1}‖ ≤ tol, where f^k is the restored image at the kth iteration of the proposed method. We set tol = 1×10⁻⁴ in all tests for both methods.
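The stopping rule (55) is a generic relative-change test that wraps the outer loop of Algorithm 2; a small driver sketch (the `step` callable standing in for one outer iteration is hypothetical):

```python
import numpy as np

def run_until_converged(step, f0, tol=1e-4, max_iter=500):
    """Iterate f^{k+1} = step(f^k) until the relative change (55) drops
    below tol, or until max_iter iterations have been performed."""
    f = f0
    for _ in range(max_iter):
        f_new = step(f)
        rel = np.linalg.norm(f_new - f) / max(np.linalg.norm(f_new), 1e-12)
        f = f_new
        if rel <= tol:
            break
    return f
```

The guard in the denominator avoids division by zero when an iterate happens to vanish; any contractive `step` will terminate under this test.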

Four test images, “Cameraman,” “Bridge,” “Lena,” and “Resolution Chart,” which are commonly used in the literature, are shown in Figure 1. We test several kinds of blurring kernels, including average, motion, Gaussian, and out-of-focus. These different blurring kernels can be generated by the Matlab built-in function fspecial. In all tests, we add Gaussian white noise at different BSNR levels to the blurred images.

Original images. (a) “Cameraman,” (b) “Resolution Chart,” (c) “Bridge,” (d) “Lena.”

All experiments are carried out on Windows XP 32-bit and Matlab v7.10 running on a desktop equipped with an Intel Core2 Duo CPU 2.93 GHz and 2 GB of RAM.

4.1. Average Blur Example

In this example, we consider the well-known “Cameraman” image (256×256), which is shown in Figure 1(a). The image is blurred by a 9×9 uniform (box average) kernel and contaminated by Gaussian noise with BSNR = 40 dB. The blurred and noisy image is shown in Figure 2(a).

Results of different methods when restoring blurred and noisy image “Cameraman” degraded by average 9×9 uniform blur and a noise with BSNR = 40 dB: (a) blurred and noisy image “Cameraman”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

Figures 2(b) and 2(c) show the images restored by FastTV and FNDTV. We can see that the visual quality of the reconstruction by FNDTV is slightly better than the outcome of FastTV. From Table 1, it is not difficult to see that both the PSNR and SSIM of the image recovered by FNDTV are higher than those obtained by FastTV.

Numerical results for the experiments in terms of PSNR (dB) and SSIM.

Problem Method PSNR SSIM
Cameraman, 9×9 uniform, BSNR = 40 FastTV 29.47 0.8781
FNDTV 29.60 0.8830

Resolution Chart, motion with length 7, BSNR = 30 FastTV 33.76 0.9811
FNDTV 34.38 0.9853

Bridge, Gaussian with radius 3, σ = 3, BSNR = 40 FastTV 25.29 0.7860
FNDTV 25.42 0.7882

Lena, out-of-focus with radius 3, BSNR = 30 FastTV 26.43 0.7639
FNDTV 26.56 0.7661

We also show the SSIM index maps of the images restored by the two methods in Figures 2(e)-2(f); the maps deliver more information about the quality degradation of the restored images. As can be seen, the SSIM map of the image restored by the proposed method is slightly whiter than the SSIM map by FastTV.

4.2. Motion Blur Example

Motion blur is considered in this example. The observed image is the so-called “Resolution Chart”; it is degraded by a motion blur with length 7 and contaminated by white Gaussian noise with BSNR = 30 dB. The degraded image is shown in Figure 3(a). The restored images by FastTV and FNDTV are presented in Figures 3(b) and 3(c). We also report the PSNR and SSIM values of these methods in Table 1. We see that both the PSNR and SSIM values of the image restored by FNDTV are higher than those of FastTV. In addition, the SSIM map obtained by the proposed method is slightly whiter than the map by FastTV.

Results of different methods when restoring blurred and noisy image “Resolution Chart” degraded by motion blur with length 7 and a noise with BSNR = 30 dB: (a) blurred and noisy image “Resolution Chart”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

4.3. Gaussian Blur Example

The original image “Bridge” is shown in Figure 1(c). The blurred and noisy image is degraded by a Gaussian blur with radius 3 and standard deviation σ=3 and then contaminated by Gaussian noise with BSNR = 40 dB. Figure 4(a) shows the blurred and noisy observation. The restored images obtained by FastTV and FNDTV are shown in Figures 4(b) and 4(c). The numerical results of two different methods in terms of PSNR and SSIM are given in Table 1.

Results of different methods when restoring blurred and noisy image “Bridge” degraded by Gaussian blur with radius 3 and standard deviation σ = 3 and a noise with BSNR = 40 dB: (a) blurred and noisy image “Bridge”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

From Table 1, it is not difficult to see that the PSNR and SSIM of the image restored by FNDTV are higher than those obtained by FastTV.

4.4. Out-of-Focus Blur Example

This example consists in restoring the image “Lena” degraded by an out-of-focus blur with radius 3 and contaminated by white Gaussian noise with BSNR = 30 dB. The image “Lena” is a good test image because it has a nice mixture of detail, flat regions, shading, and texture, and it has been widely used in the literature to test image restoration algorithms. Figure 5(a) shows the blurred and noisy image. The restored results of both methods are shown in Figures 5(b) and 5(c), and Table 1 lists the PSNR and SSIM values. From the table, we observe that both the PSNR and SSIM values of the image restored by the proposed method are better than those obtained by FastTV.

Results of different methods when restoring blurred and noisy image “Lena” degraded by out-of-focus blur with radius 3 and a noise with BSNR = 30 dB: (a) blurred and noisy image “Lena”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

5. Conclusion

In this paper, we have presented a new efficient algorithm for image restoration based on total variation regularization. We give a convergence proof for the algorithm, and the numerical results show that the proposed method is competitive with the state-of-the-art method FastTV. In addition, an important feature is that the proposed method suppresses noise very well while preserving the details of the restored image. We will consider extending the proposed method to color or other multichannel image restoration in the future.

Conflict of Interests

The authors do not have a direct financial relationship with the trademarks mentioned in this paper that might lead to a conflict of interests for any of them.

Acknowledgments

The work of Jun Liu, Ting-Zhu Huang, and Si Wang is supported by the 973 Program (2013CB329404), NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Science and Technology Research Project (2012GZX0080). The work of Xiao-Guang Lv is supported by the Natural Science Foundation of Jiangsu Province (BK20131209).

References

[1] M. R. Banham and A. K. Katsaggelos, “Digital image restoration,” IEEE Signal Processing Magazine, vol. 14, pp. 24–41, 1997.
[2] Y. M. Huang, M. K. Ng, and Y.-W. Wen, “A fast total variation minimization method for image restoration,” Multiscale Modeling & Simulation, vol. 7, no. 2, pp. 774–795, 2008.
[3] J. G. Nagy, M. K. Ng, and L. Perrone, “Kronecker product approximations for image restoration with reflexive boundary conditions,” SIAM Journal on Matrix Analysis and Applications, vol. 25, no. 3, pp. 829–841, 2003.
[4] J. Huang, T.-Z. Huang, X.-L. Zhao, and Z.-B. Xu, “Image restoration with shifting reflective boundary conditions,” Science China Information Sciences, vol. 56, no. 6, pp. 1–15, 2013.
[5] K. T. Lay and A. K. Katsaggelos, “Identification and restoration based on the expectation-maximization algorithm,” Optical Engineering, vol. 29, pp. 436–445, 1990.
[6] P. C. Hansen, J. G. Nagy, and D. P. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering, SIAM, Philadelphia, Pa, USA, 2006.
[7] P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems, SIAM, Philadelphia, Pa, USA, 1998.
[8] X.-G. Lv, T.-Z. Huang, Z.-B. Xu, and X.-L. Zhao, “Kronecker product approximations for image restoration with whole-sample symmetric boundary conditions,” Information Sciences, vol. 186, pp. 150–163, 2012.
[9] A. Tikhonov and V. Arsenin, Solution of Ill-Posed Problems, Winston, Washington, DC, USA, 1977.
[10] V. Agarwal, A. V. Gribok, and M. A. Abidi, “Image restoration using L1 norm penalty function,” Inverse Problems in Science and Engineering, vol. 15, no. 8, pp. 785–809, 2007.
[11] L. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D, vol. 60, pp. 259–268, 1992.
[12] T. Chan, S. Esedoglu, F. Park, and A. Yip, “Total variation image restoration: overview and recent developments,” in Handbook of Mathematical Models in Computer Vision, pp. 17–31, Springer, New York, NY, USA, 2006.
[13] P. Perona and J. Malik, “Scale-space and edge detection using anisotropic diffusion,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, pp. 629–639, 1990.
[14] Z. J. Xiang and P. J. Ramadge, “Edge-preserving image regularization based on morphological wavelets and dyadic trees,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1548–1560, 2012.
[15] A. Chambolle, “An algorithm for total variation minimization and applications,” Journal of Mathematical Imaging and Vision, vol. 20, no. 1-2, pp. 89–97, 2004.
[16] C. R. Vogel and M. E. Oman, “Iterative methods for total variation denoising,” SIAM Journal on Scientific Computing, vol. 17, no. 1, pp. 227–238, 1996.
[17] T. F. Chan and P. Mulet, “On the convergence of the lagged diffusivity fixed point method in total variation image restoration,” SIAM Journal on Numerical Analysis, vol. 36, no. 2, pp. 354–367, 1999.
[18] T. F. Chan, G. H. Golub, and P. Mulet, “A nonlinear primal-dual method for total variation-based image restoration,” SIAM Journal on Scientific Computing, vol. 20, no. 6, pp. 1964–1977, 1999.
[19] Y. Wang, J. Yang, W. Yin, and Y. Zhang, “A new alternating minimization algorithm for total variation image reconstruction,” SIAM Journal on Imaging Sciences, vol. 1, no. 3, pp. 248–272, 2008.
[20] T. Goldstein and S. Osher, “The split Bregman method for L1-regularized problems,” SIAM Journal on Imaging Sciences, vol. 2, no. 2, pp. 323–343, 2009.
[21] G. Chavent and K. Kunisch, “Regularization of linear least squares problems by total bounded variation,” ESAIM: Control, Optimisation and Calculus of Variations, vol. 2, pp. 359–376, 1997.
[22] M. Hintermüller and K. Kunisch, “Total bounded variation regularization as a bilaterally constrained optimization problem,” SIAM Journal on Applied Mathematics, vol. 64, no. 4, pp. 1311–1333, 2004.
[23] X. Liu and L. Huang, “Split Bregman iteration algorithm for total bounded variation regularization based image deblurring,” Journal of Mathematical Analysis and Applications, vol. 372, no. 2, pp. 486–495, 2010.
[24] L. Bregman, “The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex optimization,” USSR Computational Mathematics and Mathematical Physics, vol. 7, pp. 200–217, 1967.
[25] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, “An iterative regularization method for total variation-based image restoration,” Multiscale Modeling & Simulation, vol. 4, no. 2, pp. 460–489, 2005.
[26] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, “Bregman iterative algorithms for l1-minimization with applications to compressed sensing,” SIAM Journal on Imaging Sciences, vol. 1, no. 1, pp. 143–168, 2008.
[27] J.-F. Cai, S. Osher, and Z. Shen, “Split Bregman methods and frame based image restoration,” Multiscale Modeling & Simulation, vol. 8, no. 2, pp. 337–369, 2009.
[28] W. H. Li, Q. L. Li, W. Gong, and S. Tang, “Total variation blind deconvolution employing split Bregman iteration,” Journal of Visual Communication and Image Representation, vol. 23, pp. 409–417, 2012.
[29] M. Zhu and T. F. Chan, “An efficient primal-dual hybrid gradient algorithm for total variation image restoration,” CAM Report 08-34, Mathematics Department, UCLA, 2008.
[30] M. K. Ng, L. Qi, Y.-F. Yang, and Y.-M. Huang, “On semismooth Newton's methods for total variation minimization,” Journal of Mathematical Imaging and Vision, vol. 27, no. 3, pp. 265–276, 2007.
[31] C. Byrne, “A unified treatment of some iterative algorithms in signal processing and image reconstruction,” Inverse Problems, vol. 20, no. 1, pp. 103–120, 2004.
[32] P. L. Combettes and V. R. Wajs.
Signal recovery by proximal forward-backward splitting SIAM Multiscale Modeling & Simulation 2005 4 4 1168 1200 10.1137/050626090 MR2203849 ZBL1179.94031 Wang Z. Bovik A. C. Sheikh H. R. Simoncelli E. P. Image quality assessment: from error visibility to structural similarity IEEE Transactions on Image Processing 2004 13 4 600 612 Horé A. Ziou D. Image quality metrics: PSNR vs. SSIM Proceedings of the IEEE International Conference on Pattern Recognition 2010 Istanbul, Turkey 2366 2369