Image restoration is one of the most fundamental issues in imaging science. Total variation regularization is widely used in image restoration problems for its capability to preserve edges. In this paper, we consider a constrained minimization problem with double total variation regularization terms. To solve this problem, we employ the split Bregman iteration method and Chambolle's algorithm. The convergence property of the algorithm is established. The numerical results demonstrate the effectiveness of the proposed method in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
1. Introduction
Image restoration is a fundamental problem in the literature of image processing. It plays an important role in various areas such as remote sensing, astronomy, medical imaging, and microscopy [1, 2]. Many image restoration tasks can be posed as linear inverse problems of the form:
(1)g=Hf+n,
where H \in \mathbb{R}^{n^2 \times n^2} is a blurring matrix constructed from a discretized point spread function (PSF), f \in \mathbb{R}^{n^2} is an original n \times n gray-scale image, g \in \mathbb{R}^{n^2} is a degraded observation, and n \in \mathbb{R}^{n^2} is additive noise. We remark that the matrix H has a special structure that can be exploited in computations when special boundary conditions, such as periodic and Dirichlet boundary conditions, are imposed [3, 4]. In this work, the PSF is assumed to be known. In fact, if the PSF is unknown, there are a variety of techniques available for estimating it [5, 6].
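The degradation model (1) can be simulated directly. Below is a minimal NumPy sketch under periodic boundary conditions, where blurring by H reduces to a pointwise product in the Fourier domain; the helper names `psf_to_otf` and `degrade` are ours, not from the paper.

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Embed a small PSF in a zero array of the image's shape, centre it
    at the origin (with periodic wrap-around), and take its 2-D DFT."""
    big = np.zeros(shape)
    big[:psf.shape[0], :psf.shape[1]] = psf
    big = np.roll(big, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(big)

def degrade(f, psf, noise_std, rng=None):
    """Simulate g = H f + n: periodic blur plus white Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    otf = psf_to_otf(psf, f.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * otf))
    return blurred + noise_std * rng.standard_normal(f.shape)
```

With a normalized averaging kernel and zero noise, a constant image is left unchanged, which is a quick sanity check of the periodic convolution.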
Image restoration problems are frequently ill-conditioned; thus, the straightforward solution of (1) typically does not yield a meaningful approximation [7, 8]. To avoid this difficulty, one typical approach is to replace the linear system (1) by a nearby system that is less sensitive to the error n in g and to compute the solution of the latter system. This replacement is known as regularization.
One of the most popular regularization approaches is Tikhonov regularization [9] which seeks to minimize a penalized least squares problem of the form:
(2) \min_f \{ \|g - Hf\|_2^2 + \alpha \|Rf\|_2^2 \},
where the first term is the data fidelity of the solution f, and the regularization term ∥Rf∥22 restricts smoothness of the solution. The positive regularization parameter α plays the role of balancing the tradeoff between the fidelity and noise sensitivity. The regularization operator R is a carefully chosen matrix, often the identity matrix or a discrete approximation of the first or second order derivative operator.
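For the common choice R = I, the Tikhonov minimizer of (2) has a closed form via the normal equations. A small dense-matrix sketch (for illustration only; in practice H is too large to form explicitly and FFT-based solvers are used):

```python
import numpy as np

def tikhonov(H, g, alpha):
    """Minimise ||g - H f||_2^2 + alpha ||f||_2^2  (R = I) by solving
    the normal equations (H^T H + alpha I) f = H^T g."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ g)
```

For H = I the solution is simply g / (1 + alpha), which shows how alpha shrinks the data toward zero.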
In application to image processing, however, Tikhonov regularization can produce poor solutions (with overly smoothed edges) when the desired solution comes from an image with edges; that is, it overly penalizes discontinuities in the solutions [10]. In this regard, Rudin et al. [11] proposed total variation (TV) regularization, which has the ability to preserve edges well and remove noise at the same time. The resulting model (commonly referred to as the Rudin-Osher-Fatemi (ROF) model) has been proven to be successful in a wide range of applications in image processing [12]. We should note that there are many other edge-preserving restoration techniques in the literature, such as the anisotropic diffusion methods of Perona and Malik for denoising [13] and the morphological-wavelet and dyadic-tree-based edge-preserving method proposed by Xiang and Ramadge [14]. In this paper, we focus on the minimization problem with TV regularization
(3) \min_f \{ \|g - Hf\|_2^2 + \alpha \|f\|_{TV} \},
where ∥·∥TV denotes the discrete TV norm. To define the discrete TV norm, we first introduce the discrete gradient ∇f [15]:
(4) (\nabla f)_{i,j} = ((\nabla f)^x_{i,j}, (\nabla f)^y_{i,j})
with
(5) (\nabla f)^x_{i,j} = \begin{cases} f_{i+1,j} - f_{i,j}, & \text{if } i < n, \\ 0, & \text{if } i = n, \end{cases} \qquad (\nabla f)^y_{i,j} = \begin{cases} f_{i,j+1} - f_{i,j}, & \text{if } j < n, \\ 0, & \text{if } j = n, \end{cases}
for i,j=1,…,n, and fi,j represents the value of pixel (i,j) in the image (the (j-1)n+ith entry of the vector f). Then the discrete TV norm of f is defined as follows:
(6) \|f\|_{TV} := \sum_{1 \le i,j \le n} |(\nabla f)_{i,j}| = \sum_{1 \le i,j \le n} \sqrt{((\nabla f)^x_{i,j})^2 + ((\nabla f)^y_{i,j})^2},
where |y| = \sqrt{y_1^2 + y_2^2} for every y = (y_1, y_2) \in \mathbb{R}^2.
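The discrete TV norm (6) with the one-sided differences of (5) is straightforward to compute. A short NumPy sketch (the axis convention, rows for i and columns for j, is our choice):

```python
import numpy as np

def tv_norm(f):
    """Isotropic discrete TV norm of an n-by-n image, using the forward
    differences of (5), which are zero at the last row/column."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[1:, :] - f[:-1, :]   # (grad f)^x, zero when i = n
    gy[:, :-1] = f[:, 1:] - f[:, :-1]   # (grad f)^y, zero when j = n
    return np.sum(np.sqrt(gx**2 + gy**2))
```

A constant image has zero TV, while a linear ramp accumulates unit gradients at every interior pixel.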
In recent years, a great many algorithms have been developed for total variation based image restoration and proved to be effective for reducing blur and noise while preserving edges. In the original TV regularization paper [11], the authors proposed a time marching scheme to solve the associated Euler-Lagrange equation of (3). The drawback of this method is that it is very slow due to stability constraints. Later, Vogel and Oman [16] proposed a lagged diffusivity fixed point method to solve the same Euler-Lagrange equation of (3). They proved that this method has a global convergence property and is asymptotically faster than the explicit time marching scheme [17]. In [18], Chan et al. applied Newton's method to solve the nonlinear primal-dual system of (3). Chambolle [15] considered a dual formulation of the TV denoising problem and proposed a semi-implicit gradient descent algorithm to solve the resulting constrained optimization problem. This method is globally convergent with a suitable step size. Recently, Wang et al. [19] proposed a fast total variation deconvolution method which uses a splitting technique and constructs an iterative procedure that alternately solves a pair of easy subproblems associated with an increasing sequence of penalty parameter values. In [20], Goldstein and Osher proposed the novel split Bregman iterative algorithm to deal with the artificial constraints; their method has several advantages, such as a fast convergence rate and stability.
More recently, Huang et al. [2] proposed a minimization problem of the form
(7) \min_{f,u} \mathcal{J}(f,u) \equiv \min_{f,u} \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \alpha_2 \|u\|_{TV},
where α1 and α2 are positive regularization parameters. The authors employed an alternating minimization algorithm to solve the system (7). The numerical results on image restoration show the efficiency of their method. The idea of this method is similar to the one proposed in [19]. Both of them use the penalty method by introducing an auxiliary variable.
In [2], the minimization problem (7) is solved in two steps: a deblurring step based on Tikhonov regularization, followed by a denoising step. Although the noise can be removed to a certain extent in the second step, some details may already be lost before the denoising step, because the Tikhonov regularization used in the first (deblurring) step penalizes edges.
In [21], Chavent and Kunisch considered a total bounded variation regularization minimization problem given by
(8) \min_f \|Hf - g\|_2^2 + \alpha \|f\|_2^2 + \beta \|f\|_{TV},
where \alpha, \beta are both positive parameters. The authors proved that the solution of system (8) is unique. In [22], Hintermüller and Kunisch applied the semismooth Newton method to solve the Fenchel predual of (8). Numerical results for image denoising and zooming/resizing showed the efficiency of their approach. In [23], Liu and Huang introduced an extended split Bregman iteration to solve the minimization problem (8). Numerical simulations illustrated the excellent reconstruction performance of their method.
Note that the unconstrained problem (3) is equivalent to the following constrained minimization problem:
(9) \min_{f,u} \mathcal{J}(f,u) \equiv \min_{f,u} \|Hf - g\|_2^2 + \alpha_2 \|f\|_{TV} + \alpha_3 \|u\|_{TV} \quad \text{s.t. } f = u,
where \alpha_2, \alpha_3 > 0. Then, using the penalty method, we obtain the proposed minimization problem as follows:
(10) \min_{f,u} \mathcal{J}(f,u) \equiv \min_{f,u} \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \alpha_2 \|f\|_{TV} + \alpha_3 \|u\|_{TV},
where \alpha_1, \alpha_2, and \alpha_3 are positive regularization parameters. We note that the problem (10) reduces to the problem (7) if we set the parameter \alpha_2 = 0. The minimization problem (10) can be rewritten as follows:
(11) \min_{f,u} \mathcal{J}(f,u) \equiv \min_u \Big\{ \min_f \{ \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \alpha_2 \|f\|_{TV} \} + \alpha_3 \|u\|_{TV} \Big\}.
The method for solving the minimization problem (11) will be discussed in Section 2. Our numerical results will show that the proposed method yields state-of-the-art results both in terms of SSIM and PSNR.
This paper is outlined as follows. In the next section, we first give a brief introduction of the split Bregman method, and then we propose an iterative algorithm for solving (10). The convergence property of the proposed method is given in Section 3. In Section 4, we present numerical experiments to show the efficiency of the proposed method. Finally, the concluding remarks can be found in Section 5.
2. Alternating Minimization Iterative Scheme
In this section, we derive an algorithm to solve the minimization problem (11). Before we discuss the alternating iterative algorithm for solving (11), we would like to give a brief introduction of the split Bregman iteration [20].
Bregman iteration is a concept that originated in functional analysis for finding minimizers of convex functionals [24]. Osher et al. [25] first applied the Bregman iteration to the ROF model for the denoising problem in image processing. The basic idea of the Bregman iteration is to transform a constrained optimization problem into an unconstrained problem. The objective functional in the transformed unconstrained problem is defined by means of the Bregman distance of a convex functional. Suppose the unconstrained problem is formulated as
(12) \min_f J(f) + \frac{\lambda}{2} F(f, g),
where J(f) is a convex function and F(f,g) is convex and differentiable. The Bregman distance of a convex function J(f) at the point v is defined as the following (nonnegative) quantity:
(13) D_J^p(f, v) \equiv J(f) - J(v) - \langle p, f - v \rangle,
where p \in \partial J(v); that is, p is one of the subgradients of J at v. Then the authors in [25] employed the Bregman iteration method to solve the unconstrained problem (12). Assume F(f, g) = \|Hf - g\|_2^2; then the Bregman iteration method alternately iterates the following scheme:
(14) f^{k+1} = \arg\min_f D_J^{p^k}(f, f^k) + \frac{\lambda}{2} \|Hf - g\|_2^2 = \arg\min_f J(f) - \langle p^k, f - f^k \rangle + \frac{\lambda}{2} \|Hf - g\|_2^2,
p^{k+1} = p^k - \lambda H^T (Hf^{k+1} - g).
As shown in [25, 26], when H is linear, the iteration (14) can be reformulated into the simplified method
(15) f^{k+1} = \arg\min_f J(f) + \frac{\lambda}{2} \|Hf - b^k\|_2^2,
(16) b^{k+1} = b^k + (g - Hf^{k+1}).
This Bregman iteration technique has mainly two advantages over traditional penalty function/continuation methods. One is that it converges very quickly when applied to certain types of objective functions, especially for problems where J(f) contains an l1-regularization term. The other advantage is that the value of \lambda in (14) remains constant. See [20, 25, 26] for further details.
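The simplified iteration (15)-(16) can be illustrated on a toy instance. The sketch below takes H = I and J the l1-norm (our choice for illustration, not an example from the paper), so that step (15) reduces to soft-thresholding; the add-back step (16) then drives the iterates to an exact fit of the data:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: the argmin of ||f||_1 + (1/(2t))||f - x||^2."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bregman_l1(g, lam, iters=50):
    """Simplified Bregman iteration (15)-(16) for J = ||.||_1, H = I.
    Each step solves f = argmin ||f||_1 + (lam/2)||f - b||^2
    (soft-thresholding with threshold 1/lam), then adds the residual
    back: b <- b + (g - f)."""
    b = g.copy()
    f = soft(b, 1.0 / lam)
    for _ in range(iters):
        f = soft(b, 1.0 / lam)
        b = b + (g - f)
    return f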
2.1. Split Bregman Iteration
In [20], the authors considered the problem
(17) \min_{f,d} \|d\|_1 + E(f) \quad \text{such that } d = \phi(f),
where E(f) and \phi(f) are convex and differentiable. To solve the problem (17), they convert the constrained problem into an unconstrained one:
(18) \min_{f,d} \|d\|_1 + E(f) + \frac{\lambda}{2} \|d - \phi(f)\|_2^2.
Let J(f,d) = \|d\|_1 + E(f) and F(f,d) = \|d - \phi(f)\|_2^2; the aforementioned Bregman iteration (14) can be applied to (18) in the same way. They then obtained the following elegant two-phase iterative algorithm (the split Bregman iteration scheme).
Split Bregman iteration:
(19) (f^{k+1}, d^{k+1}) = \arg\min_{f,d} \|d\|_1 + E(f) + \frac{\lambda}{2} \|d - \phi(f) - b^k\|_2^2,
(20) b^{k+1} = b^k + (\phi(f^{k+1}) - d^{k+1}).
The split Bregman iteration has a stable convergence property, and it is extremely fast and very simple to program. For more details on split Bregman and its applications, see [20, 23, 27, 28].
2.2. Proposed Alternating Iteration Scheme
In this section, we propose an alternating minimization algorithm to solve the problem (10). Given an initial u0, we get
(21) \mathcal{P}_h(u^{k-1}) := f^k = \arg\min_f \{ \|Hf - g\|_2^2 + \alpha_1 \|f - u^{k-1}\|_2^2 + \alpha_2 \|f\|_{TV} \},
\mathcal{P}_{tv}(f^k) := u^k = \arg\min_u \{ \alpha_1 \|u - f^k\|_2^2 + \alpha_3 \|u\|_{TV} \},
for k = 1, 2, \dots. For convenience, we express the relationship between u^k and u^{k-1} as
(22) u^k = \mathcal{P}_{tv}(\mathcal{P}_h(u^{k-1})),
and we write u^k = \mathcal{T}(u^{k-1}) for short, where \mathcal{T}(\cdot) = \mathcal{P}_{tv}(\mathcal{P}_h(\cdot)).
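The outer alternating scheme (21)-(22) is just a loop composing the two subproblem solvers. A purely schematic sketch, where `solve_deblur` and `solve_denoise` are hypothetical placeholders standing in for \mathcal{P}_h and \mathcal{P}_{tv}:

```python
def alternate(u0, solve_deblur, solve_denoise, iters=10):
    """Outer alternating scheme: u^k = P_tv(P_h(u^{k-1})).
    solve_deblur and solve_denoise are placeholder callables for the
    two subproblem solvers (e.g. split Bregman and Chambolle's method)."""
    u = u0
    for _ in range(iters):
        f = solve_deblur(u)   # P_h: TV-regularised deblurring step
        u = solve_denoise(f)  # P_tv: TV denoising step
    return u, f
```

Any pair of solvers with the right signatures can be plugged in; the loop itself carries no knowledge of the subproblems.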
Considering f, the minimization problem (10) reduces to a minimization problem with respect to f:
(23) \min_f \{ \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \alpha_2 \|f\|_{TV} \}.
Although there are many methods to solve (23), we focus on the split Bregman iteration here. Let E(f) = \frac{1}{\alpha_2} \|Hf - g\|_2^2 + \frac{\alpha_1}{\alpha_2} \|f - u\|_2^2, J(f,d) = \|d\|_1 + E(f), and F(f,d) = \|d - \nabla f\|_2^2. According to (19) and (20), we have
(24) (\tilde f^{i+1}, d^{i+1}) = \arg\min_{f,d} \|d\|_1 + \frac{1}{\alpha_2} \|Hf - g\|_2^2 + \frac{\alpha_1}{\alpha_2} \|f - u\|_2^2 + \frac{\lambda}{2} \|d - \nabla f - b^i\|_2^2,
b^{i+1} = b^i + (\nabla \tilde f^{i+1} - d^{i+1}).
Clearly, the minimization with respect to \tilde f^{i+1} and d^{i+1} in (24) is decoupled, and thus they can be solved separately. We solve the subminimization problem (24) efficiently by iteratively minimizing the following subproblems with respect to f and d separately:
(25) \tilde f^{i+1} = \arg\min_f \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \frac{\alpha_2 \lambda}{2} \|d^i - \nabla f - b^i\|_2^2,
(26) d^{i+1} = \arg\min_d \|d\|_1 + \frac{\lambda}{2} \|d - \nabla \tilde f^{i+1} - b^i\|_2^2.
The minimizer \tilde f^{i+1} is given by the normal equations:
(27) \left( H^T H + \frac{\alpha_2 \lambda}{2} \nabla^T \nabla + \alpha_1 I \right) \tilde f^{i+1} = H^T g + \alpha_1 u + \frac{\alpha_2 \lambda}{2} \nabla^T (d^i - b^i),
where I is the identity matrix and \nabla^T = -\operatorname{div} represents the adjoint of \nabla. When an appropriate boundary condition is given, the normal equation (27) can be solved by fast algorithms. In this paper, we impose the periodic boundary condition; then the matrices H^T H and \nabla^T \nabla are both block circulant [4, 8], which can be diagonalized by the two-dimensional discrete Fourier transform \mathcal{F} [19]. By applying the convolution theorem, we obtain
(28) \tilde f^{i+1} = \mathcal{F}^{-1} \left\{ \frac{2\mathcal{F}(H)^* \circ \mathcal{F}(g) + 2\alpha_1 \mathcal{F}(u) + \alpha_2 \lambda \left( \mathcal{F}(\nabla_x)^* \circ \mathcal{F}(v_x^i) + \mathcal{F}(\nabla_y)^* \circ \mathcal{F}(v_y^i) \right)}{2\mathcal{F}(H)^* \circ \mathcal{F}(H) + 2\alpha_1 + \alpha_2 \lambda \left( \mathcal{F}(\nabla_x)^* \circ \mathcal{F}(\nabla_x) + \mathcal{F}(\nabla_y)^* \circ \mathcal{F}(\nabla_y) \right)} \right\},
where "*" denotes complex conjugation, "\circ" denotes componentwise multiplication, v^i = d^i - b^i = (d_x^i - b_x^i, d_y^i - b_y^i) = (v_x^i, v_y^i), \nabla = (\nabla_x, \nabla_y), and the division is componentwise as well.
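The Fourier-domain solve (28) translates into a few lines of NumPy. In the sketch below, `otf_H` is the 2-D DFT of the (centred) blur kernel, and the periodic forward-difference kernels used for \nabla_x, \nabla_y are our assumption; the function name is ours as well:

```python
import numpy as np

def solve_f_subproblem(otf_H, g, u, v_x, v_y, a1, a2, lam):
    """Solve the normal equation (27) in the Fourier domain under
    periodic boundary conditions, following (28); v = d - b."""
    n1, n2 = g.shape
    # Periodic forward-difference kernels for the gradient operators.
    dx = np.zeros((n1, n2)); dx[0, 0] = -1.0; dx[-1, 0] = 1.0
    dy = np.zeros((n1, n2)); dy[0, 0] = -1.0; dy[0, -1] = 1.0
    Dx, Dy = np.fft.fft2(dx), np.fft.fft2(dy)
    num = (2 * np.conj(otf_H) * np.fft.fft2(g) + 2 * a1 * np.fft.fft2(u)
           + a2 * lam * (np.conj(Dx) * np.fft.fft2(v_x)
                         + np.conj(Dy) * np.fft.fft2(v_y)))
    den = (2 * np.abs(otf_H) ** 2 + 2 * a1
           + a2 * lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2))
    return np.real(np.fft.ifft2(num / den))
```

A sanity check: with H = I (otf of ones) and a2 = 0, the formula collapses to the minimizer of \|f - g\|^2 + a1\|f - u\|^2, namely (g + a1 u)/(1 + a1).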
The minimizer d^{i+1} of (26) can be determined by the following shrinkage formula:
(29) d^{i+1} = \max\left\{ \|\nabla \tilde f^{i+1} + b^i\| - \frac{1}{\lambda}, 0 \right\} \frac{\nabla \tilde f^{i+1} + b^i}{\|\nabla \tilde f^{i+1} + b^i\|}, \quad \text{with the convention } 0 \cdot (0/0) = 0.
To sum up, we get Algorithm 1 for solving the subproblem (23).

Algorithm 1: Split Bregman iteration for solving the subproblem (23).
compute \tilde f^{i+1} according to (28)
compute d^{i+1} by the shrinkage formula (29)
b^{i+1} = b^i + (\nabla \tilde f^{i+1} - d^{i+1})
stop or set i = i + 1
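The shrinkage step (29) acts pixelwise on the two gradient components. A minimal NumPy sketch (the function name and the threshold argument `t`, which plays the role of 1/\lambda, are ours):

```python
import numpy as np

def shrink_iso(zx, zy, t):
    """Isotropic shrinkage: scale the vector (zx, zy) at each pixel
    towards zero by t, where z = grad(f) + b and t = 1/lam."""
    norm = np.sqrt(zx**2 + zy**2)
    # np.where avoids division by zero, implementing 0 * (0/0) = 0.
    scale = np.maximum(norm - t, 0.0) / np.where(norm > 0, norm, 1.0)
    return scale * zx, scale * zy
```

A vector of length 5 shrunk by t = 2.5 keeps its direction and halves its magnitude, while the zero vector stays at zero.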
Considering u, the outer minimization problem can be interpreted as a TV minimization scheme that denoises the recovered f generated by the previous step. The minimization problem is as follows:
(30) \mathcal{P}_{tv}(f) = \arg\min_u \{ \alpha_1 \|u - f\|_2^2 + \alpha_3 \|u\|_{TV} \}.
There are several efficient methods for solving this problem, such as the primal-dual method [18, 29], the lagged diffusivity fixed point iteration proposed by Vogel and Oman [16], the semismooth Newton method [30], and Chambolle's dual algorithm [15]. For its simplicity, we adopt Chambolle's dual algorithm here for the TV denoising problem (30). The idea given by Chambolle is to replace the optimization of the image u by the optimization of a vector field p that is related to u by u = f - (\alpha_3 / 2\alpha_1) \operatorname{div} p. For a noisy image f, the vector field is the one that minimizes
(31) \min_p \left\| f - \frac{\alpha_3}{2\alpha_1} \operatorname{div} p \right\|_2^2, \quad \text{s.t. } \|p_{i,j}\| \le 1, \ \forall i,j = 1, \dots, n,
where
(32) p_{i,j} = (p_{i,j}^x, p_{i,j}^y)^T
is the dual variable at the (i,j)th pixel location, p is the concatenation of all pi,j, and the discrete divergence of p is given by
(33) (\operatorname{div} p)_{i,j} \equiv p_{i,j}^x - p_{i-1,j}^x + p_{i,j}^y - p_{i,j-1}^y
with p_{0,j}^x = p_{i,0}^y = 0. The vector \operatorname{div} p is the concatenation of all (\operatorname{div} p)_{i,j}. For simplicity, we denote \beta = \alpha_3 / (2\alpha_1). The iterative scheme proposed by Chambolle for computing the optimal solution p is as follows:
(34) p_{i,j}^{l+1} = \frac{p_{i,j}^l + \tau \left( \nabla (\operatorname{div} p^l - f/\beta) \right)_{i,j}}{1 + \tau \left| \left( \nabla (\operatorname{div} p^l - f/\beta) \right)_{i,j} \right|}, \quad \forall 1 \le i,j \le n,
where τ is the step size and pi,jl is the lth iterate of the iterative method for minimizer; see [15] for more details. After the minimizer p* of the constrained optimization problem in (31) is determined, the denoised image can be computed by
(35)u*=f-βdiv p*.
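The full denoising step, i.e. the fixed-point iteration (34) followed by the recovery (35), fits in a few lines. A NumPy sketch with the discrete gradient of (5) and the divergence of (33); the step size default 0.248 follows the common choice \tau \le 1/8 for this scheme:

```python
import numpy as np

def grad(u):
    """Forward differences of (5), zero at the last row/column."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence (33), with p^x_{0,j} = p^y_{i,0} = 0."""
    dx = px.copy(); dx[1:, :] -= px[:-1, :]
    dy = py.copy(); dy[:, 1:] -= py[:, :-1]
    return dx + dy

def chambolle_denoise(f, beta, tau=0.248, iters=100):
    """Chambolle's dual iteration (34), then u* = f - beta * div p* (35)."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / beta)
        mag = np.sqrt(gx**2 + gy**2)
        px = (px + tau * gx) / (1 + tau * mag)
        py = (py + tau * gy) / (1 + tau * mag)
    return f - beta * div(px, py)
```

A constant image is a fixed point: its gradient vanishes, so p stays zero and the image is returned unchanged.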
In summary, we obtain Algorithm 2 by using alternating minimization scheme to solve the minimization problem (10).
Algorithm 2: Alternating iteration scheme for solving (10).
(1) initialization: u^0, p_{i,j}^0, \alpha_1, \alpha_2, \alpha_3, \lambda, k = 0;
(2) iteration
compute fk using Algorithm 1 for fixed uk-1
compute u^k according to Chambolle's method (31) and (34) for fixed f^k
stop or set k=k+1
3. Convergence Analysis
In this section, we make use of a theorem proposed in [31] to establish the convergence property of the proposed algorithm. The theorem is stated as follows.
Theorem 1 (see [31]).
Let \mathcal{T}: \mathbb{R}^{n^2} \to \mathbb{R}^{n^2} be a \beta-averaged nonexpansive operator whose set of fixed points is nonempty. Then, for any x^0, the sequence x^k = \mathcal{T}^k x^0 converges weakly to a fixed point in \mathbb{R}^{n^2}.
Definition 2.
An operator 𝒯 is called nonexpansive if, for all x and y∈ℝn2,
(36)∥𝒯x-𝒯y∥≤∥x-y∥.
Given a nonexpansive operator \mathcal{N}, let \mathcal{T} = (1-\alpha)\mathcal{I} + \alpha\mathcal{N} for some \alpha \in (0,1); then the operator \mathcal{T} is said to be \alpha-averaged.
Definition 3.
An operator 𝒯 is called ν-inverse strongly monotone (ism) if there is ν>0, such that
(37) \langle \mathcal{T}x - \mathcal{T}y, x - y \rangle \ge \nu \|\mathcal{T}x - \mathcal{T}y\|^2.
Let \mathcal{E} = \mathcal{I} - \mathcal{T} be the complement of the operator \mathcal{T}; then we can easily obtain the following identity:
(38) \|x - y\|^2 - \|\mathcal{T}x - \mathcal{T}y\|^2 = 2\langle \mathcal{E}x - \mathcal{E}y, x - y \rangle - \|\mathcal{E}x - \mathcal{E}y\|^2.
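The identity (38) can be checked directly by expanding the norm of \mathcal{T}x - \mathcal{T}y with \mathcal{T} = \mathcal{I} - \mathcal{E}:

```latex
\|\mathcal{T}x - \mathcal{T}y\|^2
  = \|(x - y) - (\mathcal{E}x - \mathcal{E}y)\|^2
  = \|x - y\|^2
    - 2\langle \mathcal{E}x - \mathcal{E}y,\, x - y \rangle
    + \|\mathcal{E}x - \mathcal{E}y\|^2,
```

and rearranging the terms gives (38).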
An operator ℱ is called firmly nonexpansive if it is 1-ism.
Lemma 4 (see [31]).
An operator 𝒩 is nonexpansive if and only if its complement ℰ=ℐ-𝒩 is 1/2-ism. If ℰ is ν-ism and γ>0, then the operator γℰ is (ν/γ)-ism.
Lemma 5.
An operator \mathcal{P} is \beta-averaged nonexpansive if and only if its complement \mathcal{E} = \mathcal{I} - \mathcal{P} is (1/2\beta)-ism.
Proof.
Firstly, suppose \mathcal{P} is \beta-averaged; by Definition 2, there exists a nonexpansive operator \mathcal{N} such that \mathcal{P} = (1-\beta)\mathcal{I} + \beta\mathcal{N}, and then \mathcal{E} = \mathcal{I} - \mathcal{P} = \beta(\mathcal{I} - \mathcal{N}). Since \mathcal{N} is nonexpansive, from Lemma 4, \mathcal{I} - \mathcal{N} is 1/2-ism and \mathcal{E} = \beta(\mathcal{I} - \mathcal{N}) is (1/2\beta)-ism.
Now assume that \mathcal{E} is (1/2\beta)-ism. We write \mathcal{P} = (1-\beta)\mathcal{I} + \beta\mathcal{N} with \mathcal{N} = \mathcal{I} - (1/\beta)\mathcal{E}. From Lemma 4, \mathcal{I} - \mathcal{N} = (1/\beta)\mathcal{E} is 1/2-ism, so \mathcal{N} is nonexpansive; then, from Definition 2, the operator \mathcal{P} is \beta-averaged.
Lemma 6 (see [32]).
Let \varphi be convex and semicontinuous and \alpha > 0. Suppose \hat x is defined as follows:
(39) \hat x = \arg\min_x \|y - x\|^2 + \alpha \varphi(x).
Define 𝒮 such that x^=𝒮(y) for every y. Then 𝒮 and ℐ-𝒮 are firmly nonexpansive.
Theorem 7.
Let \alpha_1 and \alpha_2 be positive numbers. Suppose \hat f is defined as follows:
(40) \hat f = \arg\min_f \|Hf - g\|_2^2 + \alpha_1 \|f - u\|_2^2 + \alpha_2 \|f\|_{TV}.
Define \mathcal{P}_h such that \hat f = \mathcal{P}_h(u) for every u. Then \mathcal{P}_h is 1/2-averaged nonexpansive.
Proof.
Let \phi(f) = \|f\|_{TV}. For every u, the minimum in (40) is achieved at a unique point \mathcal{P}_h(u), which is characterized by the inclusion
(41) H^T g + \alpha_1 u - (H^T H + \alpha_1 I)\mathcal{P}_h(u) \in \frac{\alpha_2}{2} \partial\phi(\mathcal{P}_h(u)).
From the property of the subdifferential of \phi, for all u, w \in \mathbb{R}^{n^2}, we have the following inequalities:
(42) \frac{2}{\alpha_2} \langle \mathcal{P}_h(w) - \mathcal{P}_h(u), H^T g + \alpha_1 u - (H^T H + \alpha_1 I)\mathcal{P}_h(u) \rangle + \phi(\mathcal{P}_h(u)) \le \phi(\mathcal{P}_h(w)),
(43) \frac{2}{\alpha_2} \langle \mathcal{P}_h(u) - \mathcal{P}_h(w), H^T g + \alpha_1 w - (H^T H + \alpha_1 I)\mathcal{P}_h(w) \rangle + \phi(\mathcal{P}_h(w)) \le \phi(\mathcal{P}_h(u)).
Adding these two inequalities, we obtain
(44) [\mathcal{P}_h(w) - \mathcal{P}_h(u)]^T \left( \frac{1}{\alpha_1} H^T H + I \right) [\mathcal{P}_h(w) - \mathcal{P}_h(u)] \le [\mathcal{P}_h(w) - \mathcal{P}_h(u)]^T (w - u).
It is obvious that we have the following inequality:
(45) \|\mathcal{P}_h(w) - \mathcal{P}_h(u)\|^2 \le [\mathcal{P}_h(w) - \mathcal{P}_h(u)]^T \left( \frac{1}{\alpha_1} H^T H + I \right) [\mathcal{P}_h(w) - \mathcal{P}_h(u)].
From (44) and (45), we obtain
(46) \|\mathcal{P}_h(w) - \mathcal{P}_h(u)\|^2 \le [\mathcal{P}_h(w) - \mathcal{P}_h(u)]^T (w - u).
Then, from Definition 3, the operator \mathcal{P}_h is firmly nonexpansive. Analogously, the operator \mathcal{I} - \mathcal{P}_h is also firmly nonexpansive. Therefore, it follows from Lemma 5 that the operators \mathcal{P}_h and \mathcal{I} - \mathcal{P}_h are 1/2-averaged nonexpansive.
Corollary 8.
The operator \mathcal{T} = \mathcal{P}_{tv}(\mathcal{P}_h) is 3/4-averaged nonexpansive.
Proof.
From Lemmas 5 and 6, and Theorem 7, we know that 𝒫tv and 𝒫h are both 1/2-averaged nonexpansive operators, and there exist nonexpansive operators 𝒩h and 𝒩tv such that
(47)𝒫h=12(ℐ+𝒩h),𝒫tv=12(ℐ+𝒩tv).
Thus we have
(48)𝒯=𝒫tv(𝒫h)=14(ℐ+𝒩h+𝒩tv+𝒩tv𝒩h).
Set \mathcal{N} = \frac{1}{3}(\mathcal{N}_h + \mathcal{N}_{tv} + \mathcal{N}_{tv}\mathcal{N}_h); then, for any x and y,
(49) \|\mathcal{N}x - \mathcal{N}y\| \le \frac{1}{3}\left( \|\mathcal{N}_{tv}x - \mathcal{N}_{tv}y\| + \|\mathcal{N}_h x - \mathcal{N}_h y\| + \|\mathcal{N}_{tv}\mathcal{N}_h(x) - \mathcal{N}_{tv}\mathcal{N}_h(y)\| \right) \le \frac{1}{3}\left( 2\|x - y\| + \|\mathcal{N}_h(x) - \mathcal{N}_h(y)\| \right) \le \frac{1}{3}(3\|x - y\|) = \|x - y\|.
Consequently, \mathcal{N} is nonexpansive, and we rewrite \mathcal{T} as
(50)𝒯=(1-34)ℐ+34𝒩.
It follows from Definition 2 that the operator 𝒯=𝒫tv(𝒫h) is 3/4-averaged.
According to Theorem 1 and Corollary 8, we conclude that, for any initial guess u^0 \in \mathbb{R}^{n^2}, the sequence \{u^k\} generated by (22) converges to the minimizer of \mathcal{J} in (10).
4. Numerical Experiments
In this section, we present several numerical experiments to illustrate the behavior of the proposed method for image restoration problems. The quality of the restoration results by different methods is compared quantitatively by using the PSNR and SSIM. PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. The higher the PSNR value, the higher the image quality. The SSIM is a well-known quality metric used to measure the similarity between two images. The method was developed by Wang et al. [33] and is based on three specific statistical measures that are much closer to how the human eye perceives differences between images. The higher the SSIM value, the better the restoration. We also use the blurred signal-to-noise ratio (BSNR) to describe how much noise is added to the blurry image. Suppose f, g, \tilde f, and n are the original image, the blurred and noisy image, the restored image, and the noise, respectively. The PSNR, SSIM, and BSNR are defined as follows [2, 34]:
(51) \mathrm{PSNR} = 10 \log_{10} \frac{n^2 \mathrm{Max}_I^2}{\|f - \tilde f\|_2^2},
where MaxI is the maximum possible pixel value of the image (i.e., when the pixels are represented by using 8 bits per sample, this is 255):
(52)SSIM=l(f,f~)c(f,f~)s(f,f~),
where
(53) l(f, \tilde f) = \frac{2\mu_f \mu_{\tilde f} + C_1}{\mu_f^2 + \mu_{\tilde f}^2 + C_1}, \quad c(f, \tilde f) = \frac{2\sigma_f \sigma_{\tilde f} + C_2}{\sigma_f^2 + \sigma_{\tilde f}^2 + C_2}, \quad s(f, \tilde f) = \frac{\sigma_{f\tilde f} + C_3}{\sigma_f \sigma_{\tilde f} + C_3}.
The first term in (52) is the luminance comparison function, which measures the closeness of the two images' mean luminance (\mu_f and \mu_{\tilde f}). The unique maximum of this factor equals 1 if and only if \mu_f = \mu_{\tilde f}. The second term c(f, \tilde f) is the contrast comparison function, which measures the closeness of the contrast of the two images. Here the contrast is measured by the standard deviations \sigma_f and \sigma_{\tilde f}. This term achieves its maximum value 1 if and only if \sigma_f = \sigma_{\tilde f}. The third term is the structure comparison function, which measures the correlation coefficient between the two images f and \tilde f. Note that \sigma_{f\tilde f} is the covariance between f and \tilde f. The positive values of the SSIM index are in [0, 1]. A value of 0 means no correlation between the images, and 1 means that f = \tilde f. The positive constants C_1, C_2, and C_3 can be thought of as stabilizing constants for near-zero denominator values. In the following experiments, we will also use the SSIM index map to reveal areas of high/low similarity between two images; the whiter the SSIM index map, the closer the two images. We refer the reader to [33, 34] for further details on SSIM and the SSIM index map.
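A single-window version of (52)-(53), computed over the whole image, is easy to sketch. Note that this is a simplification of [33], which evaluates the statistics in local sliding windows; the constants below are the common defaults (K1 = 0.01, K2 = 0.03, L = 255), and we take C3 = C2/2 so that the contrast and structure factors collapse into one term:

```python
import numpy as np

def ssim_global(f, ft, c1=6.5025, c2=58.5225):
    """Whole-image SSIM: luminance term times the combined
    contrast-structure term (C3 = C2/2 assumed)."""
    mu_f, mu_t = f.mean(), ft.mean()
    var_f, var_t = f.var(), ft.var()
    cov = ((f - mu_f) * (ft - mu_t)).mean()
    lum = (2 * mu_f * mu_t + c1) / (mu_f**2 + mu_t**2 + c1)
    cs = (2 * cov + c2) / (var_f + var_t + c2)
    return lum * cs
```

Identical images give SSIM exactly 1, matching the maximum described above.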
The BSNR is given by
(54) \mathrm{BSNR} = 20 \log_{10} \frac{\|g\|}{\|n\|}.
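The two ratio metrics (51) and (54) are one-liners; a NumPy sketch (function names are ours):

```python
import numpy as np

def psnr(f, f_rest, max_i=255.0):
    """PSNR of (51) for an n-by-n image: 10 log10(n^2 MaxI^2 / ||f - f~||^2)."""
    err = np.sum((f - f_rest) ** 2)
    return 10 * np.log10(f.size * max_i**2 / err)

def bsnr(g, noise):
    """BSNR of (54): 20 log10(||g|| / ||n||)."""
    return 20 * np.log10(np.linalg.norm(g) / np.linalg.norm(noise))
```

For an 8-bit image whose restoration is off by exactly one gray level at every pixel, the PSNR reduces to 20 log10(255) ≈ 48.13 dB.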
In the following experiments, we compare our proposed method (called FNDTV hereafter) with FastTV [2]. For FastTV, based on the suggestions in [2], we fixed its parameter \alpha_1 = 0.003 for BSNR = 40 dB and \alpha_1 = 0.006 for BSNR = 30 dB, and we determined the best value of \alpha_2 such that the restored images achieve the best performance. For our method, we also determined the best values of the regularization parameters to give the best performance.
The stopping criterion of the proposed method is that the relative difference between the successive iteration of the restored image should satisfy the following inequality:
(55)∥fk+1-fk∥∥fk+1∥≤tol,
where fk is the restored image at the kth iteration of the proposed method. We set tol=1×10-4 in all tests for both methods.
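The stopping test (55) amounts to a single relative-difference check per iteration; a minimal sketch (the function name is ours):

```python
import numpy as np

def converged(f_new, f_old, tol=1e-4):
    """Relative-difference stopping test (55):
    ||f^{k+1} - f^k|| / ||f^{k+1}|| <= tol."""
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new) <= tol
```

Identical successive iterates trivially satisfy the test, while a unit relative change does not.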
Four test images, "Cameraman," "Bridge," "Lena," and "Resolution Chart," which are commonly used in the literature, are shown in Figure 1. We test several kinds of blurring kernels, including average, motion, Gaussian, and out-of-focus. These different blurring kernels can be generated by the Matlab built-in function fspecial. In all tests, we add Gaussian white noise of different BSNR to the blurred images.
All experiments are carried out on Windows XP 32-bit and Matlab v7.10 running on a desktop equipped with an Intel Core2 Duo CPU 2.93 GHz and 2 GB of RAM.
4.1. Average Blur Example
In this example, we consider the well-known "Cameraman" image (256×256), which is shown in Figure 1(a). The image is blurred by a 9×9 box average kernel and contaminated by BSNR = 40 dB Gaussian noise. The blurred and noisy image is shown in Figure 2(a).
Results of different methods when restoring blurred and noisy image “Cameraman” degraded by average 9×9 uniform blur and a noise with BSNR = 40 dB: (a) blurred and noisy image “Cameraman”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.
Figures 2(b) and 2(c) show the restored images by FastTV and FNDTV. We can see that the visual quality of the reconstruction by FNDTV is slightly better than the outcome of FastTV. From Table 1, it is not difficult to see that both the PSNR and SSIM of the recovered image by FNDTV are higher than those obtained by FastTV.
Table 1: Numerical results for the experiments in terms of PSNR (dB) and SSIM.

Problem                                              Method   PSNR    SSIM
Cameraman, 9×9 uniform, BSNR = 40                    FastTV   29.47   0.8781
                                                     FNDTV    29.60   0.8830
Resolution Chart, motion with length 7, BSNR = 30    FastTV   33.76   0.9811
                                                     FNDTV    34.38   0.9853
Bridge, Gaussian with radius 7, σ = 3, BSNR = 40     FastTV   25.29   0.7860
                                                     FNDTV    25.42   0.7882
Lena, out-of-focus with radius 7, BSNR = 30          FastTV   26.43   0.7639
                                                     FNDTV    26.56   0.7661
We also show the SSIM index maps of the restored images recovered by the two methods in Figures 2(e)–2(f); the maps can deliver more information about the quality degradation of the restored images. The SSIM map of the restored image by the proposed method is slightly whiter than the SSIM map by FastTV.
4.2. Motion Blur Example
Motion blur is considered in this example. The observed image is the so-called "Resolution Chart" [11]; it is degraded by a motion blur with length 7 and contaminated by white Gaussian noise with BSNR = 30 dB. The degraded image is shown in Figure 3(a). The restored images by FastTV and FNDTV are presented in Figures 3(b) and 3(c). We also report the PSNR and SSIM values for these methods in Table 1. We see that both the PSNR and SSIM values of the restored image by the FNDTV method are higher than those by FastTV. In addition, the SSIM map obtained by the proposed method is slightly whiter than the map by FastTV.
Results of different methods when restoring blurred and noisy image “Resolution Chart” degraded by motion blur with length 7 and a noise with BSNR = 30 dB: (a) blurred and noisy image “Resolution Chart”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.
4.3. Gaussian Blur Example
The original image “Bridge” is shown in Figure 1(c). The blurred and noisy image is degraded by a Gaussian blur with radius 3 and standard deviation σ=3 and then contaminated by Gaussian noise with BSNR = 40 dB. Figure 4(a) shows the blurred and noisy observation. The restored images obtained by FastTV and FNDTV are shown in Figures 4(b) and 4(c). The numerical results of two different methods in terms of PSNR and SSIM are given in Table 1.
Results of different methods when restoring blurred and noisy image “Bridge” degraded by Gaussian blur with radius 3 and standard deviation σ = 3 and a noise with BSNR = 40 dB: (a) blurred and noisy image “Bridge”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.
From Table 1, it is not difficult to see that the PSNR and SSIM of the restored image by FNDTV are higher than those obtained by FastTV.
4.4. Out-of-Focus Blur Example
This example consists of restoring the image "Lena" degraded by an out-of-focus blur with radius 3 and contaminated by BSNR = 30 dB white Gaussian noise. The image "Lena" is a good test image because it has a nice mixture of detail, flat regions, shading areas, and texture and has been widely used in the literature to test image restoration algorithms [19]. Figure 5(a) shows the blurred and noisy image. The restored results by both methods are shown in Figures 5(b) and 5(c), and Table 1 lists the PSNR and SSIM values. From the table, we observe that both the PSNR and SSIM values of the restored image by the proposed method are better than those obtained by FastTV.
Results of different methods when restoring blurred and noisy image “Lena” degraded by out-of-focus blur with radius 3 and a noise with BSNR = 30 dB: (a) blurred and noisy image “Lena”; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.
5. Conclusion
In this paper, we have presented a new efficient algorithm for image restoration based on total variation regularization. We give a convergence proof for the algorithm, and the numerical results show that the proposed method is competitive with the state-of-the-art method FastTV. In addition, an important feature is that the proposed method suppresses noise very well while preserving details of the restored image. We will consider extending the proposed method to color or other multichannel image restoration in the future.
Conflict of Interests
None of the authors has a direct financial relation with the trademarks mentioned in this paper that might lead to a conflict of interests.
Acknowledgments
The work of Jun Liu, Ting-Zhu Huang, and Si Wang is supported by the 973 Program (2013CB329404), NSFC (61170311), the Chinese Universities Specialized Research Fund for the Doctoral Program (20110185110020), and the Sichuan Province Science and Technology Research Project (2012GZX0080). The work of Xiao-Guang Lv is supported by the Natural Science Foundation of Jiangsu Province (BK20131209).
References
[1] M. R. Banham and A. K. Katsaggelos, "Digital image restoration."
[2] Y. M. Huang, M. K. Ng, and Y.-W. Wen, "A fast total variation minimization method for image restoration."
[3] J. G. Nagy, M. K. Ng, and L. Perrone, "Kronecker product approximations for image restoration with reflexive boundary conditions."
[4] J. Huang, T.-Z. Huang, X.-L. Zhao, and Z.-B. Xu, "Image restoration with shifting reflective boundary conditions."
[5] K. T. Lay and A. K. Katsaggelos, "Identification and restoration based on the expectation-maximization algorithm."
[6] P. C. Hansen, J. G. Nagy, and D. P. O'Leary.
[7] P. C. Hansen.
[8] X.-G. Lv, T.-Z. Huang, Z.-B. Xu, and X.-L. Zhao, "Kronecker product approximations for image restoration with whole-sample symmetric boundary conditions."
[9] A. Tikhonov and V. Arsenin.
[10] V. Agarwal, A. V. Gribok, and M. A. Abidi, "Image restoration using L1 norm penalty function."
[11] L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms."
[12] T. Chan, S. Esedoglu, F. Park, and A. Yip, "Total variation image restoration: overview and recent developments."
[13] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion."
[14] Z. J. Xiang and P. J. Ramadge, "Edge-preserving image regularization based on morphological wavelets and dyadic trees."
[15] A. Chambolle, "An algorithm for total variation minimization and applications."
[16] C. R. Vogel and M. E. Oman, "Iterative methods for total variation denoising."
[17] T. F. Chan and P. Mulet, "On the convergence of the lagged diffusivity fixed point method in total variation image restoration."
[18] T. F. Chan, G. H. Golub, and P. Mulet, "A nonlinear primal-dual method for total variation-based image restoration."
[19] Y. Wang, J. Yang, W. Yin, and Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction."
[20] T. Goldstein and S. Osher, "The split Bregman method for L1-regularized problems."
[21] G. Chavent and K. Kunisch, "Regularization of linear least squares problems by total bounded variation."
[22] M. Hintermüller and K. Kunisch, "Total bounded variation regularization as a bilaterally constrained optimization problem."
[23] X. Liu and L. Huang, "Split Bregman iteration algorithm for total bounded variation regularization based image deblurring."
[24] L. Bregman, "The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex optimization."
[25] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin, "An iterative regularization method for total variation-based image restoration."
[26] W. T. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for l1-minimization with applications to compressed sensing."
[27] J.-F. Cai, S. Osher, and Z. Shen, "Split Bregman methods and frame based image restoration."
[28] W. H. Li, Q. L. Li, W. Gong, and S. Tang, "Total variation blind deconvolution employing split Bregman iteration."
[29] M. Zhu and T. F. Chan, "An efficient primal-dual hybrid gradient algorithm for total variation image restoration."
[30] M. K. Ng, L. Qi, Y.-F. Yang, and Y.-M. Huang, "On semismooth Newton's methods for total variation minimization."
[31] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction."
[32] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting."
[33] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity."
[34] A. Horé and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in Proceedings of the IEEE International Conference on Pattern Recognition, Istanbul, Turkey, 2010, pp. 2366–2369.