An Efficient Blind Image Deblurring Using a Smoothing Function

This paper introduces an efficient blind image deblurring method based on a deconvolution-based, iterative concept. Our method does not require specific conditions on images, so it can be widely applied to generic images. Kernel estimation is performed first and is then used to estimate a latent image in each iteration. The final deblurred image is obtained from the deconvolution of the blurred image with the final estimated kernel. However, image deblurring is an ill-posed problem due to the nonuniqueness of its solutions. Therefore, we propose a smoothing function, unlike previous approaches that applied piecewise functions when estimating a latent image. In our approach, we employ L2 regularization on an intensity and gradient prior to converge to a solution of the deblurring problem. Moreover, our work is based on the quadratic splitting method, which guarantees that each subproblem has a closed-form solution. Various experiments on synthesized and real-world images confirm that our approach outperforms several existing methods, especially on images corrupted by noise. Moreover, our method gives more reasonable and more natural deblurred images than other methods.


Introduction
Image deblurring is the process of recovering a sharp latent image from a blurred image, which is caused by camera shake or object motion. It has attracted wide attention in the image processing and computer vision fields, and a number of algorithms have been proposed to address the problem. The most common approach is to treat the blurred image as a noisy convolution of the latent image with a blur kernel. The blurred image is then usually modelled as

y = x * k + n, (1)

where y is a blurred image, x is the corresponding latent image, k and n denote the blur kernel and white noise, respectively, and * represents the convolution operator. One of the popular families of approaches is based on the deconvolution concept, of which there are two types: nonblind and blind deconvolution. The main difference between the two is that, in nonblind deconvolution, the blur kernel must be known a priori. Blur kernel estimation is an essential step in obtaining a high-quality sharp image. Most approaches use statistical priors on natural images and a selection of salient edges for blur kernel estimation [1][2][3][4][5][6]. Moreover, several researchers estimate a high-quality blur kernel directly from the motion-blurred input image by studying the problem in the frequency domain [7][8][9].
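For illustration, the blur model in equation (1) can be simulated directly. The following sketch assumes circular boundary conditions and uses the FFT for the convolution; the function and variable names are illustrative, not part of the paper's formulation:

```python
import numpy as np

def blur(x, k, noise_sigma=0.0, seed=0):
    """Simulate y = x * k + n: circular convolution of x with kernel k
    via the FFT, plus white Gaussian noise of standard deviation noise_sigma."""
    K = np.fft.fft2(k, s=x.shape)            # kernel zero-padded to the image size
    y = np.real(np.fft.ifft2(np.fft.fft2(x) * K))
    if noise_sigma > 0:
        y = y + np.random.default_rng(seed).normal(0.0, noise_sigma, x.shape)
    return y

# A 5x5 box kernel (sums to one) applied to a random "sharp" image.
x = np.random.default_rng(1).random((64, 64))
k = np.ones((5, 5)) / 25.0
y = blur(x, k, noise_sigma=0.01)
```

Because the kernel sums to one, blurring a constant image leaves it unchanged, which is a quick sanity check for the model.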
In blind deconvolution, the goal is to estimate both the latent image and the blur kernel. Blind deconvolution is an ill-posed problem since it has many pairs of solutions. To make the blind deblurring problem well-posed, most blind image deblurring methods formulate it as a minimization problem, in which the blur kernel and the latent image are usually solved in an alternating fashion. The algorithm converges quickly if an appropriately useful initialization is chosen; however, various assumptions or regularizations are required. Pan et al. [10] presented an L0-regularized intensity and gradient prior for deblurring text images. This method is suitable for text images with a clean background but less useful for cluttered backgrounds. Liu et al. [11] combined L0 regularization of both first- and second-order image gradients in order to regularize the final estimation results. Cho et al. [12] developed a method that incorporates text-specific properties for deblurring.
This prior performs well on two-tone images; however, it is still less effective on generic images.
Many researchers have proposed image priors and the corresponding optimization techniques to resolve the ill-posed problem for generic images. Pan et al. [13] extended their work to generic images by adapting the dark channel first introduced by He et al. [14]. It is based on the observation that, in most natural scene patches, at least one of the color channels possesses some pixels with intensities very close to zero. This algorithm enforces the sparsity of the dark channel of latent images for kernel estimation and generates better results than other approaches. However, the dark channel may not help intermediate latent image estimation if there are no dark pixels in the image but, instead, a large number of bright pixels.
Yan et al. [15] proposed a novel natural image prior named the Extreme Channels Prior (ECP), which takes advantage of both the Dark Channel Prior (DCP) and the Bright Channel Prior (BCP) to restore natural images. This method is more robust and performs favourably against state-of-the-art image deblurring methods on both synthesized and natural images. However, it has limitations and is less likely to support kernel estimation: the kernel estimation process is complicated, and the performance depends mainly on the intensity values of the images. Several researchers have instead employed gradient priors to improve the accuracy of kernel estimation. Shan et al. [16] adopted a sparse image prior and introduced a unified probabilistic model that fits the gradient distribution of natural images to solve the kernel estimation problem. The authors of [17,18] proposed a hyper-Laplacian prior fitting the distribution of natural image gradients to avoid local solutions. Fergus et al. [1] presented an algorithm using a zero-mean mixture of Gaussians to learn an image gradient prior fitting the distribution of natural image gradients. Zhong et al. [19] proposed an adaptive total generalized variation (TGV) regularized model to remove blur while maintaining fine image details.
All of the algorithms in [3][4][5][6][7][8][9][10][11][12][13][14][15] add auxiliary terms to their objective functions. We observe that the closed forms of these auxiliary terms are piecewise functions containing parameters, and these parameter values are constantly updated by scaling, leading to a loss of detail in the images. Moreover, most of these algorithms require specific conditions on the images, so they cannot be widely applied to generic images.
In this paper, we present an efficient optimization algorithm based on the quadratic splitting method. The splitting method guarantees that each subproblem has a closed-form solution and ensures fast convergence. Additionally, we introduce a novel smoothing function for updating pixel values and a sigmoid function for scaling parameters. Our smoothing function, based on an L2-regularized intensity and gradient prior, yields significantly better results and reduces the loss of detail in the images. Moreover, our approach is also suitable for color images and does not require any specific conditions on the image. This paper is organized as follows. Our L2-regularization method is described in Section 2. Section 3 describes our blind image deblurring method. In Section 4, we present experimental results. Finally, conclusions are drawn in Section 5.

Our L2-Regularization Method
We first formulate blind image deblurring as a minimization problem and introduce a new, efficient regularization term for convergence to a solution of the deblurring problem. As mentioned above, equation (1) is solvable; that is, E(y) = {(x, k) | y = x * k + n} is not empty. We seek an x, with a corresponding kernel k, whose norm is minimal. However, we first need to confirm that such a solution always exists in the set E(y). Given a latent image x and a blur kernel k, the blurred image y can be rewritten as KX, where K is the Toeplitz (convolution) matrix formed from k and X is the latent image x arranged as a column vector. So we have y = KX. To show that y = KX has a solution with minimum norm, we need the following proposition.

Proposition 1. Let W and Z be Hilbert spaces, let G ∈ L(W, Z) be surjective, and let G* ∈ L(Z, W) be its adjoint operator. Then GG* is invertible and, for every y ∈ Z, z = G*(GG*)^(−1)y is the minimum-norm solution of Gz = y.

Now, the matrix K may be viewed as a linear operator K: R^m → R^n; therefore, K ∈ L(R^m, R^n). Since y = KX is solvable for all y, K is surjective. Hence, by Proposition 1, X = K*(KK*)^(−1)y is a solution of y = Kz. Let us verify directly that this solution has minimum norm. Consider any W ∈ R^m such that KW = y. Note that

⟨X, W − X⟩ = ⟨K*(KK*)^(−1)y, W − X⟩ = ⟨(KK*)^(−1)y, K(W − X)⟩ = ⟨(KK*)^(−1)y, y − y⟩ = 0.

Hence,

‖W‖² = ‖X + (W − X)‖² = ‖X‖² + ‖W − X‖² ≥ ‖X‖².

This means that X has the minimum norm. Therefore, we can conclude that, for every blurred image, there always exists a corresponding latent image x, with a kernel k, whose norm is minimal. Based on this fact, we believe that seeking such a minimum-norm latent image is one of the alternative ways to solve the problem. Thus, we define P(x) = α‖x‖₂² + ‖∇x‖₂² as the regularization term in our approach.
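The minimum-norm construction above can be checked numerically. The sketch below uses a random full-row-rank matrix as a stand-in for the (surjective) operator K; the shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((20, 50))   # wide, full row rank -> surjective
y = rng.standard_normal(20)

# X = K^T (K K^T)^(-1) y, the minimum-norm solution from Proposition 1.
X = K.T @ np.linalg.solve(K @ K.T, y)
assert np.allclose(K @ X, y)        # X solves y = KX

# Any other solution W = X + (null-space component) has a larger norm.
W = X + (np.eye(50) - np.linalg.pinv(K) @ K) @ rng.standard_normal(50)
assert np.allclose(K @ W, y)
assert np.linalg.norm(X) <= np.linalg.norm(W)

# numpy's lstsq returns exactly this minimum-norm solution.
assert np.allclose(X, np.linalg.lstsq(K, y, rcond=None)[0])
```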

Our Image Deblurring Method
Our image deblurring method is an iterative approach based on a deconvolution concept. In each iteration, kernel estimation is performed, and the estimated kernel is then used to estimate a latent image; the estimated latent image then serves as the input to the next kernel estimation. The final deblurred image is obtained by deconvolving the blurred image with the final estimated kernel. A system overview of our proposed method is shown in Figure 1.
To estimate the kernel and the latent image, equation (1) is formed as an optimization problem with a regularization term P(x) for image deblurring, expressed as follows:

min_{x,k} ‖x * k − y‖₂² + γ‖k‖₂² + λP(x), (4)

where γ and λ are weights. A solution obtained directly from intensity values is often inaccurate [2,4,10], so we set the regularization term in both an intensity space and a gradient space with P(x) = α‖x‖₂² + ‖∇x‖₂². We use an efficient alternating minimization method [2,5,10,13] to solve equation (4), which splits it into two subproblems:

x = argmin_x ‖x * k − y‖₂² + λP(x), (5)

k = argmin_k ‖x * k − y‖₂² + γ‖k‖₂². (6)

The details of the two subproblems are described in the following sections.
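The alternating scheme between the two subproblems has the following skeleton. The `solve_x` and `solve_k` bodies here are placeholders (an identity and a box kernel, only so the sketch runs); the real closed-form steps are derived in Sections 3.1 and 3.2:

```python
import numpy as np

def solve_x(y, k):
    # Placeholder for the latent-image subproblem, equation (5).
    return y

def solve_k(y, x):
    # Placeholder for the kernel subproblem, equation (6).
    return np.ones((5, 5)) / 25.0

def deblur(y, num_iters=5):
    """Alternate between the two subproblems, feeding each estimate
    back into the other."""
    k = np.ones((5, 5)) / 25.0      # initial kernel guess
    x = y.copy()
    for _ in range(num_iters):
        x = solve_x(y, k)           # update the latent image
        k = solve_k(y, x)           # update the blur kernel
    return x, k

x_hat, k_hat = deblur(np.zeros((32, 32)))
```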

Estimating the Latent Image x with the Blur Kernel k.
We introduce an efficient alternating minimization method to solve equation (5). Like [10,13,20], we introduce auxiliary variables u and g = (g_h, g_v)^T corresponding to x and ∇x, respectively. The objective function can then be rewritten as

min_{x,u,g} ‖x * k − y‖₂² + β‖x − u‖₂² + μ‖∇x − g‖₂² + λ(α‖u‖₂² + ‖g‖₂²), (7)

where the constants β and μ are weights based on the latent image value and the constants λ and α are weight values.
To solve equation (7), the values of u and g are initialized to zero in the first iteration. Then, the solution for x is obtained in each iteration by solving

min_x ‖x * k − y‖₂² + β‖x − u‖₂² + μ‖∇x − g‖₂². (8)

This is a least-squares minimization problem; performing FFTs, the closed-form solution is given by

x = F^(−1)( (conj(F(k))F(y) + βF(u) + μ conj(F(∇))F(g)) / (conj(F(k))F(k) + β + μ conj(F(∇))F(∇)) ), (9)

where F(·) and F^(−1)(·) denote the fast Fourier transform (FFT) and the inverse FFT, respectively, and conj(F(·)) is the complex conjugate. ∇ = ∇_h + ∇_v, where ∇_h and ∇_v are the horizontal and vertical differential operators, respectively. To update the variable x, we first need to compute u and g, which can be done separately by

u = argmin_u β‖x − u‖₂² + λα‖u‖₂², (10)

g = argmin_g μ‖∇x − g‖₂² + λ‖g‖₂². (11)

Note that equations (10) and (11) are pixel-wise minimization problems, and most importantly, they are smooth.
Thus, the solutions for u and g are obtained by simple differentiation:

u = βx / (β + λα), (12)

g = μ∇x / (μ + λ). (13)

It is worthwhile to note that, in each iteration, the weight values β and μ are always updated. In most algorithms [3][4][5][6][7][8][9][10][11][12][13][14][15], the weights β and μ enter u and g in the form of piecewise functions and are always updated by scaling. This may force the values of some pixels to zero, which leads to a loss of detail in the latent images. In fact, we believe that each pixel should not be scaled by the same value; the scaling should depend on the pixel itself. So, in our approach, we employ the sigmoid functions β(x) = λσe^(x−a) and μ(x) = λe^(x−b) for updating the values of β and μ, respectively, where a and b are defined as the centers of the transition areas. Due to the smoothness of equations (12) and (13), the update not only produces fewer artifacts but also guarantees that no details are lost in the intermediate latent images.
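The closed-form updates of this subproblem can be sketched as follows, assuming circular boundary conditions and scalar weights (the full method makes β and μ per-pixel via the sigmoid functions above); the `psf2otf` helper and the parameter values are illustrative:

```python
import numpy as np

def psf2otf(psf, shape):
    """Zero-pad the PSF to the image size, shift its centre to the origin,
    and take the FFT (the standard psf2otf construction)."""
    h = np.zeros(shape)
    h[:psf.shape[0], :psf.shape[1]] = psf
    h = np.roll(h, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(h)

def update_u(x, beta, lam, alpha):
    # Equation (12): u = beta * x / (beta + lam * alpha).
    return beta * x / (beta + lam * alpha)

def update_g(dx, mu, lam):
    # Equation (13), applied to each gradient component dx.
    return mu * dx / (mu + lam)

def update_x(y, k, u, gh, gv, beta, mu):
    """Equation (9): FFT closed form of
    min_x ||x*k - y||^2 + beta*||x - u||^2 + mu*||grad(x) - g||^2."""
    Fk = psf2otf(k, y.shape)
    Fdh = psf2otf(np.array([[1.0, -1.0]]), y.shape)    # horizontal difference
    Fdv = psf2otf(np.array([[1.0], [-1.0]]), y.shape)  # vertical difference
    num = (np.conj(Fk) * np.fft.fft2(y)
           + beta * np.fft.fft2(u)
           + mu * (np.conj(Fdh) * np.fft.fft2(gh)
                   + np.conj(Fdv) * np.fft.fft2(gv)))
    den = np.abs(Fk) ** 2 + beta + mu * (np.abs(Fdh) ** 2 + np.abs(Fdv) ** 2)
    return np.real(np.fft.ifft2(num / den))
```

As a sanity check, with a delta kernel and with u and g set to the image and its gradients, the update returns the image unchanged.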

Estimating the Blur Kernel k with the Latent Image x.
To estimate k, we employ the latent image x and the blurred image y in the gradient space, as shown in equation (14):

min_k ‖∇x * k − ∇y‖₂² + γ‖k‖₂², (14)

and then apply FFTs to solve this least-squares minimization problem. The closed form of the kernel estimate with the latent image x and the blurred image y is

k = F^(−1)( conj(F(∇x))F(∇y) / (conj(F(∇x))F(∇x) + γ) ). (15)

The proposed method, like the state-of-the-art methods, uses a coarse-to-fine processing strategy to handle blur kernel estimation; the algorithm is typically implemented efficiently using an image pyramid [2,10]. A summary of the main steps for latent image and blur kernel estimation on one pyramid level is presented in Algorithm 1.
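The kernel step can be sketched as below, again assuming circular boundaries; the crop/clip/normalize post-processing at the end is common practice rather than part of the closed form, and the weight γ and kernel size are illustrative:

```python
import numpy as np

def solve_k(x, y, gamma=1e-8, ksize=15):
    """Equation (15): closed form in the gradient space for
    min_k ||grad(x)*k - grad(y)||^2 + gamma*||k||^2."""
    num = np.zeros(x.shape, dtype=complex)
    den = np.zeros(x.shape)
    for axis in (0, 1):   # vertical and horizontal circular differences
        Fgx = np.fft.fft2(np.roll(x, -1, axis) - x)
        Fgy = np.fft.fft2(np.roll(y, -1, axis) - y)
        num += np.conj(Fgx) * Fgy
        den += np.abs(Fgx) ** 2
    k = np.real(np.fft.ifft2(num / (den + gamma)))
    # Keep the kernel support, clip negatives, and normalize to sum one.
    k = np.maximum(k[:ksize, :ksize], 0.0)
    return k / max(k.sum(), 1e-12)
```

Given a pair (x, y) related by a known circular blur, this step recovers the kernel up to a small regularization error.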

Final Image Restoration.
In the final step, we restore the final deblurred image using nonblind deconvolution, given the final kernel from equation (15). The final latent image is restored by solving the latent image subproblem (equation (5)) with the final kernel, twice, using different sigmoid parameters. We then compute a difference map between these two estimated images. Finally, we subtract the difference map from the restored image to suppress ringing artifacts.
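The ringing-suppression step can be sketched as follows; `nonblind` stands for any nonblind deconvolution routine (for instance, the x-update of Section 3.1 with the final kernel), and the two parameter dictionaries represent the two sigmoid settings (all names here are illustrative):

```python
import numpy as np

def restore_final(y, k, nonblind, params_a, params_b):
    """Run the nonblind restoration twice with different sigmoid
    parameters, compute the difference map, and subtract it to
    suppress ringing artifacts."""
    xa = nonblind(y, k, **params_a)
    xb = nonblind(y, k, **params_b)
    diff = np.abs(xa - xb)          # ringing shows up where the two disagree
    return np.clip(xa - diff, 0.0, 1.0)
```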

Experimental Results
To evaluate the performance of our proposed method, we tested our approach on the real-image dataset [21], synthesized images, and specific-domain images [14]. Moreover, we also evaluated the robustness of our approach to Gaussian noise. In all experiments, we set λ = σ = 0.001, μ_max = 1e3, and β_max = 2³.

Evaluation on the Real-Image Datasets.
First, we tested our approach on the image deblurring dataset [21], containing 48 images (4 images with 12 blur kernels). This dataset is used as a benchmark for measuring the performance of image deblurring methods. The proposed method is compared with the state-of-the-art methods of Cho and Lee [2], Fergus et al. [1], Hirsch et al. [22], Krishnan et al. [23], Whyte et al. [24], Xu and Jia [3], and Pan et al. [13]. To compare performance, we use the peak signal-to-noise ratio (PSNR), computed by comparing each restored image with the 199 ground truth images; PSNR = 100 corresponds to a perfect deblurring. The PSNR of our proposed method and those of the compared methods on the dataset [21] are illustrated in Figure 2. We found that the PSNRs of our approach, Xu and Jia [3], and Pan et al. [13] are not much different on image 2; this may be because, in image 2, the balance of dark and bright pixels is roughly the same. However, our approach gives a higher average PSNR than the other approaches. Figure 3 shows sample images deblurred by the best three algorithms; our method gives clearer details and more natural results than the other approaches.
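For reference, the PSNR used throughout the evaluation is computed as below; the cap of 100 for a perfect (zero-error) restoration matches the convention stated above, and the peak value of 1.0 is an assumption for images scaled to [0, 1]:

```python
import numpy as np

def psnr(restored, reference, peak=1.0, cap=100.0):
    """Peak signal-to-noise ratio in dB; zero MSE is reported as `cap`."""
    mse = np.mean((restored - reference) ** 2)
    if mse == 0:
        return cap
    return 10.0 * np.log10(peak ** 2 / mse)
```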

Evaluation on the Synthesized Images.
We then evaluated our approach on synthesized images blurred with a known kernel. The comparison of results on the first synthesized blurred image is shown in Figure 4. It can be seen that the restored image from our approach is better and more reasonable than those of the other methods. The results of the methods in [3,13] retain more blur than the original image. This may be because the original image contains many bright pixels, so the two methods could not control the regularization term, which leads to an unreasonable solution.

Input: blurred image y; initialize k with the result from the coarser level
for i ← 1 to 5 do
  x ← y; β ← 2λα
  repeat
    solve for u using equation (12)
    μ ← 2λ
    repeat
      solve for g using equation (13)
      solve for x using equation (9)
      μ ← 2μ
    until μ > μ_max
    β ← 2β
  until β > β_max
  solve for k using equation (15)
  λ ← max(λ/1.1, 1e−4)
end
Output: the latent image x and the blur kernel k
ALGORITHM 1: Latent image and blur kernel estimation algorithm.
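The per-level loop of Algorithm 1 can be transcribed directly. The four `solve_*` callables stand for the closed forms of equations (12), (13), (9), and (15); they are passed in as parameters so the skeleton stays self-contained, and the default weights mirror the experimental settings:

```python
import numpy as np

def estimate_level(y, k, solve_u, solve_g, solve_x, solve_k,
                   lam=1e-3, alpha=1.0, mu_max=1e3, beta_max=8.0):
    """One pyramid level of Algorithm 1: nested loops doubling the
    weights beta and mu, then a kernel update and a lambda decay."""
    for _ in range(5):
        x = y.copy()
        beta = 2.0 * lam * alpha
        while beta <= beta_max:
            u = solve_u(x, beta)
            mu = 2.0 * lam
            while mu <= mu_max:
                g = solve_g(x, mu)
                x = solve_x(y, k, u, g, beta, mu)
                mu *= 2.0
            beta *= 2.0
        k = solve_k(x, y)
        lam = max(lam / 1.1, 1e-4)
    return x, k
```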
Moreover, we demonstrated the performance of the proposed method on synthesized blurred images corrupted by different realizations of white Gaussian noise at 5-30 dB. Samples of the synthesized noisy blurred images and the corresponding deblurred images are shown in Figure 5.
The PSNR values for the synthesized images corrupted by various white Gaussian noise levels are shown in Figure 6. Our method achieved the highest PSNR value and clearly outperforms the other two methods when the test image contains a large amount of noise.

Figure 3: Samples of images restored by the best three algorithms on [21]. (a) Blurred image [21]. (b) Xu and Jia [3]. (c) Pan et al. [13].

Next, we evaluated the proposed method on specific-domain images, such as face and text images, comparing it with advanced deblurring methods in each domain. We first evaluated our approach on face images against the method of Pan et al. [13], which uses a dark channel for image deblurring, and against Xu and Jia [3]. As shown in Figures 7 and 8, our method gives more detail and is more natural than the other methods.
Next, we evaluated the proposed method on selected real and challenging text images for qualitative evaluation, as shown in Figures 9 and 10. We compared our method with the methods of Pan et al. [10] and Yan et al. [25], which are specifically designed for deblurring text images. As shown in Figures 9 and 10, our method gives more reasonable and sharper results, with less loss of detail, than both methods.
Finally, we evaluated the robustness of our approach on a blurred image with Gaussian noise. The comparison of results is shown in Figure 11. The other methods still leave blur in their deblurring results, while ours is more reasonable and more natural. The method of [3] produces more artifacts than the other methods, and the one in [13] gives an unreasonable result since the noise changed the minimum intensity value of the image patches.

Conclusions
We have proposed a novel method for image deblurring, specifically for blind deconvolution, in which L2 regularization is used to form the deblurring task as an optimization problem. This method does not require specific conditions on images, so it can be widely applied to generic images. It applies smooth functions when estimating the latent image in each iteration and keeps more details in the restored image than existing approaches. To evaluate the performance of our approach, we performed experiments on both synthesized and real-world images. Our approach outperforms previous approaches, especially on images that contain Gaussian noise. Moreover, our restored images give more detail and are more natural than the restored images produced by other methods.

Conflicts of Interest
The authors declare that they have no conflicts of interest.