An Efficient Variational Method for Image Restoration



Introduction
Image restoration is a fundamental problem in the literature of image processing. It plays an important role in various areas such as remote sensing, astronomy, medical imaging, and microscopy [1, 2]. Many image restoration tasks can be posed as linear inverse problems of the form

g = Hf + η, (1)

where H ∈ R^{n²×n²} is a blurring matrix constructed from a discretized point spread function (PSF), f ∈ R^{n²} is an original n × n gray-scale image, g ∈ R^{n²} is a degraded observation, and η ∈ R^{n²} is additive noise. We remark that the matrix H has special structure that can be exploited in computations when special boundary conditions, such as periodic and Dirichlet boundary conditions, are imposed [3, 4]. In this work, the PSF is assumed to be known. In fact, if the PSF is unknown, there are a variety of techniques available for estimating it [5, 6].
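As an illustration, under periodic boundary conditions the degradation model (1) amounts to circular convolution of the image with the PSF plus additive noise, which can be applied efficiently in the Fourier domain. A minimal NumPy sketch (the function and parameter names are ours, not the paper's):

```python
import numpy as np

def degrade(f, psf, noise_std, seed=0):
    """Simulate g = H f + eta under periodic boundary conditions:
    circular convolution with the PSF plus additive Gaussian noise."""
    # Embed the PSF in an image-sized array and shift its center to (0, 0)
    # so that the FFT yields the transfer function of the blur.
    big = np.zeros_like(f)
    big[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        big = np.roll(big, -(s // 2), axis=ax)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(big)))
    rng = np.random.default_rng(seed)
    return blurred + noise_std * rng.standard_normal(f.shape)
```

For example, a 3 × 3 box-average PSF is `np.ones((3, 3)) / 9`; a unit-impulse PSF leaves the image unchanged when the noise level is zero.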
Image restoration problems are frequently ill-conditioned; thus, the straightforward solution of (1) typically does not yield a meaningful approximation [7, 8]. To avoid this difficulty, one typical approach is to replace the linear system (1) by a nearby system that is less sensitive to the noise η in g and to consider the computed solution of the latter system. This replacement is known as regularization.
One of the most popular regularization approaches is Tikhonov regularization [9], which seeks to minimize a penalized least squares problem of the form

min_f ‖Hf − g‖²₂ + α‖Lf‖²₂, (2)

where the first term measures the data fidelity of the solution f, and the regularization term ‖Lf‖²₂ enforces smoothness of the solution. The positive regularization parameter α balances the tradeoff between fidelity and noise sensitivity. The regularization operator L is a carefully chosen matrix, often the identity matrix or a discrete approximation of a first- or second-order derivative operator.
In application to image processing, however, Tikhonov regularization can produce poor solutions (with overly smoothed edges) when the desired solution comes from an image with edges; that is, it overly penalizes discontinuities in the solution [10]. In this regard, Rudin et al. [11] proposed total variation (TV) regularization, which preserves edges well while removing noise at the same time. The resulting model (commonly referred to as the Rudin-Osher-Fatemi (ROF) model) has proven successful in a wide range of applications in image processing [12]. We should note that there are many other edge-preserving restoration techniques in the literature, such as the anisotropic diffusion method of Perona and Malik for denoising [13], morphological wavelets, and the dyadic tree-based edge-preserving method proposed by Xiang and Ramadge [14]. In this paper, we focus on the minimization problem with TV regularization

min_f ‖Hf − g‖²₂ + α‖f‖_TV, (3)

where ‖ ⋅ ‖_TV denotes the discrete TV norm. To define the discrete TV norm, we first introduce the discrete gradient ∇f [15]:

(∇f)_{i,j} = ((∇f)¹_{i,j}, (∇f)²_{i,j}),

with

(∇f)¹_{i,j} = f_{i+1,j} − f_{i,j} if i < n, and 0 if i = n,
(∇f)²_{i,j} = f_{i,j+1} − f_{i,j} if j < n, and 0 if j = n,

for i, j = 1, . . ., n, where f_{i,j} represents the value of pixel (i, j) in the image (the ((j − 1)n + i)th entry of the vector f). Then the discrete TV norm of f is defined as

‖f‖_TV = Σ_{1≤i,j≤n} |(∇f)_{i,j}|,

where |y| = √(y₁² + y₂²) for every y = (y₁, y₂) ∈ R². In recent years, a great many algorithms have been developed for total variation based image restoration and have proved effective for reducing blur and noise while preserving edges. In the original TV regularization paper [11], the authors proposed a time marching scheme to solve the associated Euler-Lagrange equation of (3). The drawback of this method is that it is very slow due to stability constraints. Later, Vogel and Oman [16] proposed a lagged diffusivity fixed point method to solve the same Euler-Lagrange equation of (3). They proved that this method has a global convergence property and is asymptotically faster than the explicit time marching scheme [17]. In [18], Chan et al.
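For concreteness, the discrete gradient and the isotropic TV norm defined above can be coded directly. This sketch follows the forward-difference convention with zero differences at the last row/column (function names are ours):

```python
import numpy as np

def discrete_gradient(f):
    """Forward-difference discrete gradient; the differences are set to
    zero at the last row/column, matching the convention of [15]."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[1:, :] - f[:-1, :]   # (grad f)^1_{i,j}
    gy[:, :-1] = f[:, 1:] - f[:, :-1]   # (grad f)^2_{i,j}
    return gx, gy

def tv_norm(f):
    # Isotropic discrete TV norm: sum over pixels of |(grad f)_{i,j}|.
    gx, gy = discrete_gradient(f)
    return np.sum(np.sqrt(gx ** 2 + gy ** 2))
```

A constant image has TV norm zero, and a single vertical edge of height 1 spanning k rows has TV norm k.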
applied Newton's method to solve the nonlinear primal-dual system of (3). Chambolle [15] considered a dual formulation of the TV denoising problem and proposed a semi-implicit gradient descent algorithm to solve the resulting constrained optimization problem. This method is globally convergent for a suitable step size. Recently, Wang et al. [19] proposed a fast total variation deconvolution method that uses a splitting technique and constructs an iterative procedure that alternately solves a pair of easy subproblems associated with an increasing sequence of penalty parameter values. In [20], Goldstein and Osher proposed the novel split Bregman iterative algorithm to deal with the artificial constraints; their method has several advantages, such as a fast convergence rate and stability.
In the model (7) of [2], α₁ and α₂ are positive regularization parameters. The authors employed an alternating minimization algorithm to solve (7), and their numerical results on image restoration show the efficiency of the method. The idea of this method is similar to the one proposed in [19]: both use the penalty method by introducing an auxiliary variable.
In [2], the minimization problem (7) is solved in two steps: one is a deblurring step carried out by Tikhonov regularization, and the other is a denoising step. Although noise can be removed to a certain extent in the second step, some details may be lost before the denoising step because of the Tikhonov regularization used in the first (deblurring) step; as noted above, Tikhonov regularization penalizes edges.
In [21], a model of the form (8) involving two positive parameters was proposed, and the authors proved that the solution of (8) is unique. In [22], Hintermüller and Kunisch applied the semismooth Newton method to solve the Fenchel predual of (8). Numerical results for image denoising and zooming/resizing showed the efficiency of their approach. In [23], Liu and Huang introduced an extended split Bregman iteration to solve the minimization problem (8). Numerical simulations illustrated the excellent reconstruction performance of their method.
where λ₂, λ₃ > 0. Then, using the penalty method, we obtain the proposed minimization problem (11). The method for solving the minimization problem (11) will be discussed in Section 2. Our numerical results will show that the proposed method yields state-of-the-art results in terms of both SSIM and PSNR. This paper is outlined as follows. In the next section, we first give a brief introduction to the split Bregman method, and then we propose an iterative algorithm for solving (10). The convergence property of the proposed method is given in Section 3. In Section 4, we present numerical experiments to show the efficiency of the proposed method. Finally, concluding remarks can be found in Section 5.

Alternating Minimization Iterative Scheme
In this section, we derive an algorithm to solve the minimization problem (11). Before we discuss the alternating iterative algorithm for solving (11), we would like to give a brief introduction to the split Bregman iteration [20].

Bregman Iteration.
Bregman iteration is a concept that originated in functional analysis for finding minimizers of convex functionals [24]. Osher et al. [25] first applied the Bregman iteration to the ROF model for the denoising problem in image processing. The basic idea of the Bregman iteration is to transform a constrained optimization problem into an unconstrained one. The objective functional in the transformed unconstrained problem is defined by means of the Bregman distance of a convex functional. Suppose the unconstrained problem is formulated as

min_u J(u) + H(u, g), (12)

where J(u) is a convex function and H(u, g) is convex and differentiable with respect to u. The Bregman distance of the convex function J at the point v is defined as the following (nonnegative) quantity:

D_J^p(u, v) = J(u) − J(v) − ⟨p, u − v⟩,

where p ∈ ∂J(v); that is, p is one of the subgradients of J at v. The authors in [25] then employed the Bregman iteration method to solve the unconstrained problem (12). Assume H(u, g) = (λ/2)‖Au − g‖²₂; the Bregman iteration method then alternately iterates the scheme

u^{k+1} = argmin_u D_J^{p^k}(u, u^k) + H(u, g),
p^{k+1} = p^k − ∇_u H(u^{k+1}, g). (13)

As shown in [25, 26], when A is linear, the iteration (13) can be reformulated into the simplified method

u^{k+1} = argmin_u J(u) + (λ/2)‖Au − g^k‖²₂,
g^{k+1} = g^k + g − Au^{k+1}. (14)

This Bregman iteration technique has two main advantages over traditional penalty function/continuation methods. One is that it converges very quickly when applied to certain types of objective functions, especially for problems where J(u) contains an ℓ₁-regularization term. The other advantage is that the value of λ in (14) remains constant. See [20, 25, 26] for further details.
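As a toy illustration of the simplified scheme (14), take J(u) = ‖u‖₁ and A = I: the u-update is then a closed-form soft-thresholding, and adding the residual back into g^k drives u toward satisfying the constraint u = g exactly. This is a minimal sketch of the mechanism, not the paper's algorithm:

```python
import numpy as np

def shrink(x, tau):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def bregman_l1(g, lam=2.0, iters=50):
    """Bregman iteration for min ||u||_1 s.t. u = g (A = I for simplicity).

    Each step solves u = argmin ||u||_1 + lam/2 ||u - g_k||^2 in closed
    form via shrinkage, then adds the residual back: g_{k+1} = g_k + (g - u).
    """
    gk = g.copy()
    u = np.zeros_like(g)
    for _ in range(iters):
        u = shrink(gk, 1.0 / lam)
        gk = gk + (g - u)
    return u
```

Note that a plain penalty method with fixed λ would bias u toward zero, whereas the Bregman "add the residual back" step recovers g exactly with λ held constant, illustrating the second advantage mentioned above.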

Split Bregman Iteration.
The split Bregman iteration has a stable convergence property, and it is extremely fast and very simple to program. For more details on split Bregman and its applications, see [20, 23, 27, 28].
Considering the subproblem of the minimization problem (10), the minimization with respect to f^{k+1} and w^{k+1} in (23) is clearly decoupled, and thus they can be solved separately. We solve the subminimization problem (23) efficiently by iteratively minimizing the following subproblems with respect to f and w separately. The minimizer f^{k+1} is given by the normal equations (26), where I is the identity matrix and ∇^T = −div represents the adjoint of ∇. When an appropriate boundary condition is given, the normal equations (26) can be solved by fast algorithms. In this paper, we impose the periodic boundary condition, so that the matrices H^T H and ∇^T∇ are all block circulant with circulant blocks [4, 8], which can be diagonalized by the two-dimensional discrete Fourier transform F [19]. Applying the convolution theorem, we obtain (27), where "∗" denotes complex conjugacy, "∘" denotes component-wise multiplication, and the division is component-wise as well.
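Since the displayed normal equations (26)-(27) did not survive extraction, here is a hedged sketch of the standard FFT solve for a system of the generic form (HᵀH + λ∇ᵀ∇)f = Hᵀg under periodic boundary conditions (the paper's right-hand side also involves the auxiliary variable w; with w = 0 it reduces to the form below). All names are ours:

```python
import numpy as np

def psf2otf(psf, shape):
    # Embed the PSF in a zero array, circularly shift its center to (0, 0),
    # and take the 2-D FFT to get the transfer function (periodic BC).
    big = np.zeros(shape)
    big[:psf.shape[0], :psf.shape[1]] = psf
    for ax, s in enumerate(psf.shape):
        big = np.roll(big, -(s // 2), axis=ax)
    return np.fft.fft2(big)

def solve_normal_eq(g, psf, lam):
    """Solve (H^T H + lam * grad^T grad) f = H^T g in O(n^2 log n) via FFT;
    under periodic BC all operators are BCCB and diagonalized by the DFT."""
    H = psf2otf(psf, g.shape)
    # Transfer functions of the forward differences in x and y.
    Dx = psf2otf(np.array([[1.0, -1.0]]), g.shape)
    Dy = psf2otf(np.array([[1.0], [-1.0]]), g.shape)
    num = np.conj(H) * np.fft.fft2(g)          # F(H)^* o F(g)
    den = np.abs(H) ** 2 + lam * (np.abs(Dx) ** 2 + np.abs(Dy) ** 2)
    return np.real(np.fft.ifft2(num / den))    # component-wise division
```

The denominator is the eigenvalue array of HᵀH + λ∇ᵀ∇, so the whole solve costs two FFTs plus element-wise arithmetic.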
The minimizer w^{k+1} of (25) can be determined by a closed-form shrinkage (soft-thresholding) formula. To sum up, we obtain Algorithm 1 for solving the subproblem (22).
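The isotropic shrinkage operator that solves such w-subproblems in closed form can be written as follows; this is a generic sketch, and the actual threshold in the paper depends on its penalty parameters:

```python
import numpy as np

def shrink2d(zx, zy, tau):
    """Isotropic soft-shrinkage applied pixel-wise to a 2-D vector field.

    For each pixel, z <- z / |z| * max(|z| - tau, 0), where |z| is the
    Euclidean norm of (zx, zy); pixels with |z| = 0 are mapped to 0.
    """
    mag = np.sqrt(zx ** 2 + zy ** 2)
    scale = np.maximum(mag - tau, 0.0) / np.where(mag > 0, mag, 1.0)
    return scale * zx, scale * zy
```

In the split Bregman setting, `(zx, zy)` would be the two components of the gradient of the current deblurred iterate (plus the Bregman variable), and `tau` the reciprocal of the penalty parameter.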
Considering u, the outer minimization problem can be interpreted as a TV minimization scheme that denoises the recovered image f generated by the previous step. There are several efficient methods for solving this problem, such as the primal-dual method [18, 29], the lagged diffusivity fixed point iteration proposed by Vogel and Oman [16], and Chambolle's projection method [15], which we adopt here.

Algorithm 2 (alternating iteration scheme for solving (10)):
(1) Initialization: f^0, b^0, w^0, u^0, λ₁, λ₂, tol; k = 0.
(2) Iteration:
    compute f^k using Algorithm 1 for fixed u^{k−1};
    compute u^k according to the Chambolle method (31) and (34) for fixed f^k;
    stop or set k = k + 1.
The idea given by Chambolle is to replace the optimization over the image u by an optimization over a vector field p that is related to u by u = f − (λ₃/2λ₁) div p, where p_{i,j} is the dual variable at the (i, j)th pixel location and p is the concatenation of all the p_{i,j}. The discrete divergence of p is defined with the convention p_{0,j} = p_{i,0} = 0, and the vector div p is the concatenation of all (div p)_{i,j}. For simplicity, we denote λ = λ₃/2λ₁. The iterative scheme proposed by Chambolle computes the optimal solution p by updating the iterates p^m_{i,j} with a step size τ; see [15] for more details. After the minimizer p* of the constrained optimization problem in (31) is determined, the denoised image can be computed by (34). In summary, we obtain Algorithm 2 by using the alternating minimization scheme to solve the minimization problem (10).
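Chambolle's projection algorithm [15] referenced above can be sketched as follows for the ROF problem min_u ½‖u − f‖² + λ‖u‖_TV. The step size bound τ ≤ 1/8 is the one proved in [15]; τ = 0.248 (< 1/4) is commonly used in practice. Function names are ours:

```python
import numpy as np

def grad(u):
    # Forward differences with zero differences at the last row/column.
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad above.
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_denoise(f, lam, tau=0.248, iters=100):
    """Chambolle's projection algorithm for min_u ||u - f||^2 / 2 + lam * TV(u).

    Iterates the dual field p via a semi-implicit update and returns
    u = f - lam * div(p).
    """
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * mag)
        py = (py + tau * gy) / (1.0 + tau * mag)
    return f - lam * div(px, py)
```

Each dual update keeps |p_{i,j}| ≤ 1 automatically, which is why no explicit projection step is needed.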

Convergence Analysis
In this section, we make use of a theorem proposed in [31] to establish the convergence property of the proposed algorithm. The theorem is as follows.
Theorem 1 (see [31]). Let T : R^{n²} → R^{n²} be a κ-averaged nonexpansive operator whose set of fixed points is nonempty. Then, for any x^0, the sequence x^k = T^k x^0 converges weakly to a fixed point in R^{n²}.

Definition 2. Given a nonexpansive operator N, let T = (1 − κ)I + κN for some κ ∈ (0, 1); the operator T is then said to be κ-averaged.
Let E = I − T be the complement of the operator T; then we can easily get the following identity:

‖Tx − Ty‖² = ‖x − y‖² − 2⟨Ex − Ey, x − y⟩ + ‖Ex − Ey‖².

Recall that an operator F is called ν-inverse strongly monotone (ν-ism) if ⟨Fx − Fy, x − y⟩ ≥ ν‖Fx − Fy‖² for all x, y. An operator F is called firmly nonexpansive if it is 1-ism.
Lemma 4 (see [31]). An operator N is nonexpansive if and only if its complement I − N is (1/2)-ism.

Lemma 5. An operator P is κ-averaged nonexpansive if and only if its complement E = I − P is (1/(2κ))-ism.
Proof. First, suppose P is κ-averaged; by Definition 2, there exists a nonexpansive operator N such that P = (1 − κ)I + κN, and then E = I − P = κ(I − N). Since N is nonexpansive, Lemma 4 gives that I − N is (1/2)-ism, and hence E = κ(I − N) is (1/(2κ))-ism. Conversely, assume that E is (1/(2κ))-ism, and write P = (1 − κ)I + κN with N = I − (1/κ)E. Since I − N = (1/κ)E is (1/2)-ism, Lemma 4 gives that N is nonexpansive, and then, by Definition 2, the operator P is κ-averaged.

Lemma 6 (see [32]). Let φ be convex and lower semicontinuous and θ > 0. Suppose x is defined by the proximal map

x = argmin_z { φ(z) + (1/(2θ))‖z − y‖² }.

Define S such that x = S(y) for every y. Then S and I − S are firmly nonexpansive.

Proof. From Lemmas 5 and 6 and Theorem 7, we know that P_tv and P_h are both (1/2)-averaged nonexpansive operators, and there exist nonexpansive operators N_h and N_tv such that

P_h = (1/2)(I + N_h),    P_tv = (1/2)(I + N_tv).

Thus we have

P_tv P_h = (1/4)(I + N_h + N_tv + N_tv N_h).

Set N = (1/3)(N_h + N_tv + N_tv N_h); then, for any x and y,

‖Nx − Ny‖ ≤ (1/3)(‖N_h x − N_h y‖ + ‖N_tv x − N_tv y‖ + ‖N_tv N_h x − N_tv N_h y‖) ≤ ‖x − y‖.

Consequently, N is nonexpansive, and we can rewrite P_tv P_h as

P_tv P_h = (1/4)I + (3/4)N.

It follows from Definition 2 that the operator T = P_tv P_h is (3/4)-averaged.
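The composition argument above can be checked numerically for linear operators: if P_h = ½(I + N_h) and P_tv = ½(I + N_tv) with N_h, N_tv nonexpansive, then P_tv P_h = ¼ I + ¾ N with N = ⅓(N_h + N_tv + N_tv N_h). A small sketch, where the rotation and scaling matrices are our own test operators, not from the paper:

```python
import numpy as np

# Two nonexpansive linear operators on R^2: a rotation and a mild contraction.
theta = 0.7
N_h = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])   # rotation, operator norm 1
N_tv = 0.9 * np.eye(2)                              # contraction, norm 0.9

I = np.eye(2)
P_h = 0.5 * (I + N_h)    # 1/2-averaged
P_tv = 0.5 * (I + N_tv)  # 1/2-averaged

# Composition and its claimed 3/4-averaged representation.
T = P_tv @ P_h
N = (N_h + N_tv + N_tv @ N_h) / 3.0
T_avg = 0.25 * I + 0.75 * N
```

Here `T` and `T_avg` coincide, and the operator norm of `N` stays at most 1, confirming that the composition of the two ½-averaged operators is ¾-averaged.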
According to Theorem 1 and Corollary 8, we conclude that, for any initial guess f^0 ∈ R^{n²}, the sequence {f^k} generated by (20) converges to the minimizer of J in (10).

Numerical Experiments
In this section, we present several numerical experiments to illustrate the behavior of the proposed method for image restoration problems. The quality of the restoration results obtained by the different methods is compared quantitatively using the PSNR and SSIM. PSNR is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation; the higher the PSNR value, the higher the image quality. The SSIM is a well-known quality metric used to measure the similarity between two images. It was developed by Wang et al. [33] and is based on three specific statistical measures that are much closer to how the human eye perceives differences between images; the higher the SSIM value, the better the restoration. We also use the blurred signal-to-noise ratio (BSNR) to describe how much noise is added to the blurry image. Suppose f, g, f̂, and η are the original image, the blurred and noisy image, the restored image, and the noise, respectively. The PSNR, SSIM, and BSNR are defined as follows [2, 34], where Max_f is the maximum possible pixel value of the image (i.e., when the pixels are represented using 8 bits per sample, this is 255). The first term in (52) is the luminance comparison function, which measures the closeness of the two images' mean luminances (μ_f and μ_f̂); the unique maximum of this factor equals 1 if and only if μ_f = μ_f̂. The second term c(f, f̂) is the contrast comparison function, which measures the closeness of the contrast of the two images; here the contrast is measured by the standard deviations σ_f and σ_f̂, and this term achieves its maximum value 1 if and only if σ_f = σ_f̂. The third term is the structure comparison function, which measures the correlation coefficient between the two images f and f̂; note that σ_{f f̂} is the covariance between f and f̂. The positive values of the SSIM index lie in [0, 1]: a value of 0 means no correlation between the images, and 1 means that f = f̂. The positive constants C₁, C₂, and C₃ can be
thought of as stabilizing constants for near-zero denominator values. In the following experiments, we will also use the SSIM index map to reveal areas of high or low similarity between two images; the whiter the SSIM index map, the closer the two images. We refer the reader to [33, 34] for further details on SSIM and the SSIM index map. The BSNR is given by

BSNR = 20 log₁₀ (‖g − ḡ‖₂ / ‖η‖₂),

where ḡ denotes the mean intensity of the blurred and noisy image g.
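The PSNR and BSNR described above can be computed as follows; this is a sketch using the standard definitions (SSIM is omitted because its windowed implementation is considerably longer), with function names of our own choosing:

```python
import numpy as np

def psnr(f, f_hat, max_val=255.0):
    """PSNR in dB: 10 * log10(max_val^2 / MSE) between original and restored."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def bsnr(blurred, noise):
    """BSNR in dB: ratio of the mean-removed blurred-signal energy
    to the noise energy."""
    centered = blurred - blurred.mean()
    return 20.0 * np.log10(np.linalg.norm(centered) / np.linalg.norm(noise))
```

For 8-bit images `max_val` is 255; a restoration identical to the original gives an infinite PSNR, so in practice values in the 20-40 dB range are typical.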
In the following experiments, we compare our proposed method (referred to as FNDTV below) with FastTV [2]. For FastTV, based on the suggestions in [2], we fix its parameter α₁ = 0.003 for BSNR = 40 dB and α₁ = 0.006 for BSNR = 30 dB, and we determine the value of α₂ that yields the restored image with the best performance. For our method, we likewise determine the values of the regularization parameters that give the best performance.
The stopping criterion of the proposed method is that the relative difference between successive iterates of the restored image satisfies the inequality

‖f^k − f^{k−1}‖₂ / ‖f^k‖₂ ≤ tol,

where f^k is the computed image at the kth iteration of the proposed method. We set tol = 1 × 10⁻⁴ in all tests for both methods. Four test images, "Cameraman," "Bridge," "Lena," and "Resolution Chart," which are commonly used in the literature, are shown in Figure 1. We test several kinds of blurring kernels, including average, motion, Gaussian, and out-of-focus blur. These blurring kernels can be generated by the Matlab built-in function fspecial. In all tests, we add Gaussian white noise at different BSNR levels to the blurred images.
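This stopping rule can be wrapped around any iterative solver; a generic sketch, where the `step` callback stands in for one outer iteration of the method (the names are ours):

```python
import numpy as np

def relative_change(f_new, f_old):
    # Relative difference between successive iterates, the stopping test above.
    return np.linalg.norm(f_new - f_old) / np.linalg.norm(f_new)

def run_until_converged(step, f0, tol=1e-4, max_iter=500):
    """Apply `step` repeatedly until the relative change of the iterates
    falls below `tol` or `max_iter` is reached; returns (iterate, count)."""
    f = f0
    for k in range(1, max_iter + 1):
        f_new = step(f)
        if relative_change(f_new, f) <= tol:
            return f_new, k
        f = f_new
    return f, max_iter
```

With tol = 1e-4, a linearly converging iteration typically stops after a few dozen outer steps.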
All experiments are carried out on Windows XP 32-bit and Matlab v7.10 running on a desktop equipped with an Intel Core2 Duo CPU 2.93 GHz and 2 GB of RAM.

Average Blur Example.
In this example, we consider the well-known "Cameraman" image (256 × 256), which is shown in Figure 1(a). The image is blurred by a 3 × 3 box average kernel. We also show the SSIM index maps of the restored images recovered by the two methods in Figures 2(e)-2(f); the maps deliver more information about the quality degradation of the restored images. The SSIM map of the restored image by the proposed method is slightly whiter than the SSIM map by FastTV.

Motion Blur Example.
Motion blur is considered in this example. The observed image is the so-called "Resolution Chart" [11]; it is degraded by motion blur with length 7 and contaminated by white Gaussian noise with BSNR = 30 dB. The degraded image is shown in Figure 3(a). The restored images by FastTV and FNDTV are presented in Figures 3(b) and 3(c). We also report the PSNR and SSIM values for these methods in Table 1. We see that both the PSNR and SSIM values of the restored image by the FNDTV method are higher than those of FastTV. In addition, the SSIM map obtained by the proposed method is slightly whiter than the map by FastTV.

Gaussian Blur Example.
From Table 1, it is not difficult to see that the PSNR and SSIM of the restored image by FNDTV are higher than those obtained by FastTV.

Out-of-Focus Blur Example.
This example consists of restoring the image "Lena" degraded by an out-of-focus blur with radius 3 and contaminated by white Gaussian noise with BSNR = 30 dB. The image "Lena" is a good test image because it has a nice mixture of detail, flat regions, shading areas, and texture, and it has been widely used in the literature to test image restoration algorithms [19].

Conclusion
In this paper, we have presented a new efficient algorithm for image restoration based on total variation regularization. We have given a convergence proof for the algorithm, and the numerical results show that the proposed method is competitive with the state-of-the-art method FastTV. In addition, an important feature is that the proposed method suppresses noise very well while preserving the details of the restored image. We will consider extending the proposed method to color and other multichannel image restoration in future work.

Figure 3 :
Figure 3: Results of different methods when restoring blurred and noisy image "Resolution Chart" degraded by motion blur with length 7 and a noise with BSNR = 30 dB: (a) blurred and noisy image "Resolution Chart"; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

Figure 4 :
Figure 4: Results of different methods when restoring blurred and noisy image "Bridge" degraded by Gaussian blur with radius 3 and standard deviation σ = 3 and a noise with BSNR = 40 dB: (a) blurred and noisy image "Bridge"; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.
The original image "Bridge" is shown in Figure 1(c). The blurred and noisy image is degraded by a Gaussian blur with radius 3 and standard deviation σ = 3 and then contaminated by Gaussian noise with BSNR = 40 dB.
Figure 4(a) shows the blurred and noisy observation. The restored images obtained by FastTV and FNDTV are shown in Figures 4(b) and 4(c).

Figure 5 :
Figure 5: Results of different methods when restoring blurred and noisy image "Lena" degraded by out-of-focus blur with radius 3 and a noise with BSNR = 30 dB: (a) blurred and noisy image "Lena"; (b) restored image by FastTV; (c) restored image by FNDTV; (d) SSIM index map of the corrupted image; (e) SSIM index map of the recovered image by FastTV; (f) SSIM index map of the recovered image by FNDTV.

Table 1 :
Numerical results for the experiments in terms of PSNR (dB) and SSIM. We can see that the visual quality of the reconstruction by FNDTV is slightly better than the outcome of FastTV. From Table 1, it is not difficult to see that both the PSNR and SSIM of the recovered image by FNDTV are higher than those obtained by FastTV.
Table 1 lists the PSNR and SSIM values. From the table, we observe that both the PSNR and SSIM values of the restored image by the proposed method are better than those obtained by FastTV.