We propose a new framework for multiplicative noise removal. To improve denoising performance, we add a regularization term for the texture component to the denoising model, yielding a multiscale multiplicative noise removal model. The proposed model is jointly convex and can therefore be solved by standard optimization algorithms. We apply the Douglas-Rachford splitting method to the proposed model. The algorithm makes full use of several important proximity operators, which either have closed-form expressions or can be executed in a single iteration. In particular, we derive the proximity operator of the squared H^{-1} norm, which reduces to a Fourier-domain filter. In the simulation experiments, we first analyze and select the required parameters and then test the designed algorithm with these parameters on several images. Finally, we compare the denoising performance of the proposed model with existing models, using the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) to evaluate noise suppression. Experimental results demonstrate that the designed algorithm solves the model effectively and that the images recovered by the proposed model have higher SNRs/PSNRs and better visual quality.
1. Introduction
Image denoising is a basic and important task in image processing. The most thoroughly studied denoising model is the additive one, in which the noise is assumed to obey a Gaussian distribution, that is,

(1) g = f + n,  n ~ N(0, σ²).

However, the noise in many applications does not conform to the additive model; it may corrupt an image in other ways. In this paper, we are concerned with the denoising problem under the assumption that the original image has been corrupted by multiplicative noise. Examples include the intensity inhomogeneity in magnetic resonance imaging (MRI) and the speckle noise in ultrasonic and synthetic aperture radar (SAR) images. The degradation model for multiplicative noise can be expressed as

(2) g = f · n,

where · denotes componentwise multiplication. To model actual problems, we assume that the multiplicative noise obeys some random distribution; for example, the noise in SAR images is assumed to obey a Gamma distribution and that in ultrasonic images a Rayleigh distribution.
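As a quick illustration, the degradation model (2) with unit-mean Gamma noise can be simulated in a few lines. This is a pure-Python sketch; the helper name `corrupt_multiplicative` and the constant test image are hypothetical, not part of the paper.

```python
import random

def corrupt_multiplicative(f, k=10, seed=0):
    """Corrupt a (flattened) image f with unit-mean Gamma noise: g = f * n.

    n ~ Gamma(shape=k, scale=1/k), so E[n] = 1 and Var[n] = 1/k,
    matching the SAR speckle assumption discussed in the text.
    """
    rng = random.Random(seed)
    n = [rng.gammavariate(k, 1.0 / k) for _ in f]
    return [fi * ni for fi, ni in zip(f, n)], n

# The noise is unit-mean, so the noisy image fluctuates around the clean one.
f = [100.0] * 20000
g, n = corrupt_multiplicative(f, k=10)
```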
1.1. Multiplicative Noise Removal Models
There are many multiplicative noise removal models. One of the most classic, based on TV regularization, is the RLO model proposed by Rudin et al. [1]:

(3) min_f ∫_Ω |∇f| dx  s.t.  ∫_Ω (g(x)/f(x)) dx = 1,  ∫_Ω (g(x)/f(x) − 1)² dx = σ²,

where the mean of the noise is assumed to be 1 and its variance to be σ². Since (3) is a nonconvex problem, it is difficult to solve.
The second classic model, the AA model [2], was given by Aubert and Aujol under the hypothesis of Gamma-distributed noise:

(4) min_f ∫_Ω (log f(x) + g(x)/f(x)) dx + λ‖∇f‖_1.

The objective function in (4) is also nonconvex. Nevertheless, Aubert and Aujol showed the existence of minimizers of the objective function and employed a gradient method to solve (4) numerically. Several improved works based on (4) have been developed in recent years.
Recently, Shi and Osher [3] applied a logarithm transformation to the noisy observation, log g = log f + log n, and derived a TV minimization model for multiplicative noise removal. Huang et al. also proposed a log-domain denoising model using the transformation f = exp(z) in (4):

(5) min_z ∫_Ω (z(x) + g(x)e^{−z(x)}) dx + λ‖z‖_TV,

which is known as the EXP model [4]. The objective function in (5) is strictly convex in z; the authors proposed an alternating minimization algorithm to solve the model and proved its convergence. Nevertheless, model (5) is convex in z rather than in f, the variable in the original image domain. Meanwhile, Durand et al. [5] proposed a method composed of several stages. They also used log-image data and applied a reasonable suboptimal hard thresholding to its curvelet transform; they then applied a variational method minimizing a hybrid criterion composed of an ℓ1 data-fidelity term on the thresholded curvelet coefficients and a TV regularization term in the log-image domain. The restored image is obtained by exponentiating the minimizer, weighted so that the mean of the original image is preserved. Their restored images combine the advantages of shrinkage and variational methods. Besides the above approaches, dictionary learning and nonlocal means methods have also been proposed and developed for multiplicative denoising [6–9].
In [10], Zhao et al. developed a convex optimization model for multiplicative noise removal. The main idea is to rewrite the multiplicative noise equation so that the image variable and the noise variable are decoupled. That is, rewrite problem (2) as

(6) g = N f,

where N = diag(n) is the diagonal matrix whose main diagonal entries are [n]_i. According to (2), when there is no noise in the observed image, n = e, the vector of all ones. When there is multiplicative noise in the observed image, we expect [n]_i ≠ 0 for all i, and moreover that these entries are positive. Hence N is invertible, and (6) is equivalent to

(7) G w = f,

where G = diag(g) is the diagonal matrix formed from the vector g and [w]_i = 1/[n]_i. The convex model proposed to solve (2) is

(8) min_{w,f} (1/2)‖w − μe‖₂² + α₁‖Gw − f‖₁ + α₂‖f‖_TV.
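The reformulation (6)–(7) is a purely algebraic identity and can be checked numerically. A minimal sketch, with hypothetical pixel and noise values (since G and N are diagonal, everything is componentwise):

```python
# Toy check of the reformulation (6)-(7): with N = diag(n) and w_i = 1/n_i,
# the noisy equation g = N f is equivalent to G w = f with G = diag(g).
f = [4.0, 9.0, 16.0]                     # clean pixels (hypothetical)
n = [0.5, 2.0, 1.25]                     # multiplicative noise samples, all nonzero
g = [fi * ni for fi, ni in zip(f, n)]    # observation g = f . n
w = [1.0 / ni for ni in n]               # decoupled noise variable w = 1/n

Gw = [gi * wi for gi, wi in zip(g, w)]   # G w, with G = diag(g)
```

Each component satisfies g_i w_i = f_i n_i (1/n_i) = f_i, which is exactly (7).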
In (8), the first term measures the variance of w, the second is the data term, and the third is the TV regularization term. If the first term were absent and w could be assigned arbitrarily, then minimizing

(9) min_{w,f} α₁‖Gw − f‖₁ + α₂‖f‖_TV

could lead to the trivial solution f ≡ C, w ≡ C G^{−1}e, where C is a constant. The variance term (1/2)‖w − μe‖₂² therefore guarantees a proper minimizer of (8) [10]. Furthermore, it is worth noting that the image variable and the noise variable are decoupled and the model is jointly convex in (f, w), which makes it easy to solve by optimization methods.
1.2. The Contribution
We first analyze the performance of (8). Observe that, if we fix w and set I = Gw, the minimization problem (8) is just a TV-L1 problem:

(10) min_f α₁‖I − f‖₁ + α₂‖f‖_TV.

In [11], it was pointed out that minimizing (10) extracts the large-scale component and leaves out the small-scale contents of the image I. In other words, we can extract the cartoon component at different scales by selecting the parameter λ = α₂/α₁. We verify this conclusion with the following experiment. The example is shown in Figure 1(a): an image I with four squares of size 3×3, 5×5, 20×20, and 80×80 pixels, respectively. The results are as follows: when λ=0.01, all four squares are extracted, as shown in Figure 1(b); when λ=0.05, three squares are extracted while the 3×3 one is lost, as shown in Figure 1(c); when λ=0.2, the two big squares remain and the other two are lost, as shown in Figure 1(d); when λ=2, only the biggest square (80×80) remains, as shown in Figure 1(e). These results are consistent with the theoretical analysis in [11]: the bigger λ is, the larger the scale of the extracted features. As a result, we have sound reasons to believe that the minimizer f of model (10) is essentially the cartoon component of the restored image we expect.
Numerical experiment results of (10) with different λ: (a) is the original image I; (b)–(e) are the minimizers f with λ = 0.01, 0.05, 0.2, and 2, respectively.
The above analysis shows that, if we want to recover more content from an image I, we should choose a small λ, which brings in more small-scale structures and details. However, for a noisy image, the small-scale component may contain a large amount of noise, which degrades the denoising quality. For this reason, we improve the denoising model by adding prior information on the texture component and design a denoising model based on cartoon-texture decomposition. We then design the numerical algorithm based on the Douglas-Rachford splitting algorithm and three useful proximity operators. Before conducting the experiments, we analyze the selection of the corresponding parameters. Finally, we verify the performance of the proposed model by comparing its SNR/PSNR indexes with those of existing models.
The rest of the paper is organized as follows. We propose our new model in the next section. In Section 3, the numerical method is presented: the Douglas-Rachford (DR) splitting algorithm and the proximity operators used in this paper are first reviewed, and the algorithm is then designed and described in detail. In Section 4, denoising experiments on two types of multiplicative noise are implemented, and the experimental results and performance analyses are presented. Finally, we conclude our work in Section 5.
2. The Proposed Model
Based on the analysis in Section 1.2, the recovered image obtained by solving (8) mainly contains the cartoon component of what we expect, while most of the significant texture components are lost. To improve the denoising performance, we modify model (8) by adding texture information, replacing the regularization term ‖f‖_TV with ‖u‖_TV, where u denotes the cartoon component of the restored image f. Taking into account the prior information on the texture v = f − u, we select ‖v‖²_{H^{−1}} as the regularization term for v, since minimizing the squared H^{−1} norm extracts the texture component very well. In summary, the new model is

(11) (w*, u*, v*) = argmin_{w,u,v} (1/2)‖w − μe‖₂² + α₁‖Gw − (u + v)‖₁ + α₂‖u‖_TV + ρ‖v‖²_{H^{−1}},

and the restored image is f* = u* + v*. In (11), α₁, α₂, and ρ are positive regularization parameters that control the balance among the terms of the objective function.
The proposed model has the following advantages. First, the term ρ‖v‖²_{H^{−1}} in the regularization part helps to recover more image information and improves the quality of the restored image. Second, α₂/α₁ and ρ can be adjusted according to the noise level, which gives the model a multiscale property. Third, when ρ → +∞, then v → 0, and (11) degrades to model (8); that is, (11) is a modified and improved version of (8). Finally, the proposed model keeps the structure of (8), is jointly convex in (w, u, v), and can be solved by standard optimization methods. In the following section, we use an alternating iteration method and the Douglas-Rachford method to solve it, combined with the application of several proximity operators.
3. The Numerical Method
3.1. Douglas-Rachford Algorithm and Some Proximity Operators
Before proposing the numerical scheme of (11), we will review some basic algorithms and operators. The first one is the Douglas-Rachford splitting algorithm.
Definition 1.
Let Φ(x) and Ψ(x) be proper, l.s.c., convex functions such that

(12) ri(dom Ψ) ∩ ri(dom Φ) ≠ ∅,

and Ψ(x) + Φ(x) → +∞ as ‖x‖ → +∞; then the problem

(13) min_{x∈R^N} Ψ(x) + Φ(x)

admits at least one solution and, for any γ ∈ (0, +∞), its solutions can be computed by Algorithm 2.
Algorithm 2 (Douglas-Rachford algorithm).
Consider the following:
Fix ϵ ∈ (0, 1), γ > 0, y⁰ ∈ R^N
For n = 0, 1, …, do
  x^n = prox_{γΦ}(y^n)
  λ_n ∈ [ϵ, 2 − ϵ]
  y^{n+1} = y^n + λ_n(prox_{γΨ}(2x^n − y^n) − x^n)
end for
where the operator proxΦ will be defined in the following.
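The iteration of Algorithm 2 can be exercised on a toy problem whose exact minimizer is known. The sketch below (a hypothetical 1-D instance, not the denoising model itself) takes Ψ = |·|, whose proximity operator is soft thresholding, and Φ(x) = (1/2)(x − b)², whose proximity operator is a simple averaging:

```python
import math

def soft(z, t):
    """Soft thresholding: the proximity operator of t*|.| at z."""
    return math.copysign(abs(z) - t, z) if abs(z) > t else 0.0

def douglas_rachford(b, gamma=1.0, lam=1.0, iters=60):
    """Douglas-Rachford iteration for the 1-D toy problem
        min_x |x| + (1/2)(x - b)^2,
    whose exact solution is soft(b, 1).  Here Psi = |.| (prox = soft
    thresholding) and Phi(x) = (1/2)(x - b)^2 with
    prox_{gamma Phi}(y) = (y + gamma*b)/(1 + gamma)."""
    y = 0.0
    for _ in range(iters):
        x = (y + gamma * b) / (1.0 + gamma)           # x_n = prox_{gamma Phi}(y_n)
        y = y + lam * (soft(2.0 * x - y, gamma) - x)  # y_{n+1}
    return x

x_star = douglas_rachford(3.0)   # exact minimizer: soft(3, 1) = 2
```

For this toy problem the iterates contract geometrically toward the exact minimizer for any γ > 0 and λ_n ∈ (0, 2).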
Secondly, we review the definition of proximity operator and present several special examples that will be used in the numerical scheme.
Definition 3.
Let Φ be a proper, l.s.c., convex function; for every x ∈ R^N, the minimization problem

(14) min_u Φ(u) + (1/2)‖u − x‖₂²

admits a unique solution, denoted prox_Φ(x); the operator prox_Φ : R^N → R^N thus defined is called the proximity operator of Φ.
In the following, we list three examples of proximity operator which will be used in this paper:
(1) Φ(x) = Σᵢ λᵢ|xᵢ| with λ = (λ₁, λ₂, …, λ_N)ᵀ ∈ R^N, λᵢ ≥ 0. Set y = prox_{γΦ}(x); then y can be expressed componentwise in closed form:

(15) yᵢ = sign(xᵢ)(|xᵢ| − λᵢγ) if |xᵢ| ≥ λᵢγ, and yᵢ = 0 otherwise, i = 1, 2, …, N,

which is known as the soft thresholding operator; we denote it by y = ST_{λγ}(x).
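The componentwise formula (15) translates directly into code. A minimal sketch (the function name and test vector are illustrative):

```python
def soft_threshold(x, lam, gamma=1.0):
    """Componentwise soft thresholding ST_{lam*gamma}(x) from (15);
    lam is a vector of per-component thresholds lambda_i."""
    out = []
    for xi, li in zip(x, lam):
        t = li * gamma
        out.append((1.0 if xi > 0 else -1.0) * (abs(xi) - t) if abs(xi) >= t else 0.0)
    return out

# Components whose magnitude is below their threshold are set to zero;
# the rest are shrunk toward zero by the threshold amount.
y = soft_threshold([3.0, -0.5, 1.0], [1.0, 1.0, 2.0])
```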
(2) Φ(x) = ‖x‖_TV. Computing prox_{γΦ}(x) is equivalent to solving the TV-L2 problem

(16) min_u (1/2)‖u − x‖₂² + γ‖u‖_TV.

There are many strategies to solve it. One of the most classical is the semi-implicit gradient descent algorithm proposed by Chambolle in [12]; another is the Forward-Backward method developed in [5]. Both methods need an inner loop in the numerical scheme. In this paper, we adopt a variable splitting method and express the problem in discrete form:

(16′) min_u (1/2)‖u − x‖₂² + γ‖∇u‖₁,

where ∇ is the gradient operator given by

(17) ∇ = (∇₁, ∇₂), (∇u)ᵢ = ((∇₁u)ᵢ, (∇₂u)ᵢ), i = 1, 2, …, N, ‖∇u‖₁ = Σᵢ₌₁^N √((∇₁u)ᵢ² + (∇₂u)ᵢ²).

We introduce an auxiliary variable p ∈ R^{N×2} to replace ∇u:

(18) pᵢ = (pᵢ,₁, pᵢ,₂) = (∇u)ᵢ = ((∇₁u)ᵢ, (∇₂u)ᵢ), i = 1, 2, …, N;

then we obtain the following approximation of (16′):

(19) min_{u,p} (1/2)‖u − x‖₂² + γ‖p‖₁ + (β/2)‖p − ∇u‖₂²

with a sufficiently large penalty parameter β. We solve (19) by the alternating minimization scheme given in Algorithm 4.
Algorithm 4 (proximity of TV).
Consider the following:
(1) Fixing u, compute p componentwise:
  min_{pᵢ} γ‖pᵢ‖₂ + (β/2)‖pᵢ − (∇u)ᵢ‖₂²
  ⇒ pᵢ = max(‖(∇u)ᵢ‖₂ − γ/β, 0) · (∇u)ᵢ/‖(∇u)ᵢ‖₂, i = 1, …, N
(2) Fixing p, compute u:
  min_u (1/2)‖u − x‖₂² + (β/2)‖p − ∇u‖₂²
  ⇒ (∇ᵀ∇ + (1/β)I)u = ∇ᵀp + (1/β)x
  ⇒ u = (∇ᵀ∇ + (1/β)I)^{−1}(∇ᵀp + (1/β)x).
It is worth mentioning that the above alternating minimization scheme is embedded in the whole algorithm with just a single iteration, so the algorithm has no inner loop.
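Step (1) of Algorithm 4 is an isotropic (vectorial) shrinkage of the per-pixel gradient. A per-pixel sketch (the function name and test values are illustrative):

```python
import math

def shrink_gradient(grad_u, gamma, beta):
    """Step (1) of Algorithm 4 at one pixel: isotropic shrinkage
    p_i = max(|grad u_i| - gamma/beta, 0) * grad u_i / |grad u_i|,
    where grad_u = ((grad_1 u)_i, (grad_2 u)_i)."""
    g1, g2 = grad_u
    norm = math.hypot(g1, g2)
    if norm == 0.0:
        return (0.0, 0.0)
    scale = max(norm - gamma / beta, 0.0) / norm
    return (g1 * scale, g2 * scale)

# |grad u| = 5 and threshold gamma/beta = 5, so the gradient is fully shrunk.
p = shrink_gradient((3.0, 4.0), gamma=5.0, beta=1.0)
```

Gradients whose magnitude falls below γ/β are zeroed, which is what produces the piecewise-constant (cartoon) behavior of TV.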
(3) Φ(x) = ‖x‖²_{H^{−1}}. We deduce prox_{γΦ}(x) in the continuous setting for clarity of derivation; this does not affect the numerical implementation. The proximity operator of Φ is

(20) prox_{γΦ}(x) = argmin_v (1/2)‖v − x‖₂² + γ‖v‖²_{H^{−1}}.

Denote the objective function in (20) by

(21) H(v) = (1/2)‖v − x‖₂² + γ‖v‖²_{H^{−1}};

minimizing H(v) is equivalent to minimizing Ĥ(v̂) in the Fourier domain:

(22) Ĥ(v̂) = (1/2)‖v̂ − x̂‖₂² + γ∫ |v̂(ξ)|²/|2πξ|² dξ.

Minimizing Ĥ(v̂) over v̂ yields the unique solution v̂ = L̂x̂, where

(23) L̂(ξ) = |2πξ|²/(|2πξ|² + 2γ).

Taking the inverse Fourier transform, we have

(24) v* = prox_{γΦ}(x) = L ∗ x,

where ∗ denotes convolution. Equation (24) shows that the proximity operator of the H^{−1} norm is just a Fourier-domain filter.
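The Fourier-domain filter can be sketched on a 1-D signal with a naive DFT. This is an illustrative assumption-laden sketch (the function name, 1-D setting, and discrete frequency grid are my choices; a real implementation would use a 2-D FFT). The filter factor |2πξ|²/(|2πξ|² + 2γ) follows from differentiating the squared norm in (22); note that it vanishes at ξ = 0, so the operator removes the mean, which is sensible for a zero-mean texture component:

```python
import cmath
import math

def prox_hminus1(x, gamma):
    """Sketch of the prox of gamma*||.||_{H^-1}^2 via Fourier filtering:
    multiply each DFT coefficient by |2 pi xi|^2 / (|2 pi xi|^2 + 2 gamma).
    Naive O(N^2) DFT on a 1-D signal, for illustration only."""
    N = len(x)
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N)]
    V = []
    for k in range(N):
        xi = (k if k <= N // 2 else k - N) / N        # discrete frequency
        w2 = (2.0 * math.pi * xi) ** 2
        V.append(X[k] * w2 / (w2 + 2.0 * gamma))      # filter; zero at xi = 0
    # Inverse DFT; the filter is symmetric in frequency, so the result is real.
    return [sum(V[k] * cmath.exp(2j * math.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

v = prox_hminus1([1.0, 5.0, 2.0, 4.0, 3.0, 5.0, 1.0, 3.0], gamma=1.0)
```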
3.2. Solving the Proposed Model Numerically
We solve (11) by alternately updating w, u, and v in turn. The framework is given in Algorithm 1.
Algorithm 1
Initialization: u¹ = g, v¹ = 0, f¹ = u¹ + v¹
(1) for k = 1, 2, …, do
(2)   w^{k+1} = argmin_w (1/2)‖w − μe‖₂² + α₁‖Gw − f^k‖₁
(3)   u^{k+1} = argmin_u α₁‖Gw^{k+1} − (u + v^k)‖₁ + α₂‖u‖_TV
(4)   v^{k+1} = argmin_v α₁‖Gw^{k+1} − (u^{k+1} + v)‖₁ + ρ‖v‖²_{H^{−1}}
(5)   f^{k+1} = u^{k+1} + v^{k+1}
(6)   if stopping criterion is satisfied, then
(7)     stop the loop and output f^{k+1}
(8) end for
In Algorithm 1, updating w amounts to computing the proximity operator of ‖·‖₁, which is just the soft thresholding operator and can be realized by (15). Updating u amounts to solving a TV-L1 problem, for which many methods exist; to improve efficiency, we realize this update using the Douglas-Rachford splitting algorithm, combining the computation of prox_{‖·‖₁} and prox_{‖·‖_TV}. Updating v amounts to solving an L1-H^{−1} problem, which is also handled by the Douglas-Rachford splitting algorithm, combining soft thresholding and Fourier-domain filtering. A detailed description of the whole process is given in Algorithm 2.
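The closed-form w-update can be sanity-checked against brute force. Since G is diagonal, the step decouples per pixel; the sketch below (hypothetical scalar test values) derives w_i = f_i/g_i + ST_{α₁|g_i|}(μ − f_i/g_i) by completing the square and compares it with a grid search over the original objective:

```python
def w_update(g, f, mu, alpha1):
    """Componentwise w-update of Algorithm 1, step (2):
    w_i = f_i/g_i + ST_{alpha1*|g_i|}(mu - f_i/g_i)."""
    def st(a, t):
        return (1.0 if a > 0 else -1.0) * (abs(a) - t) if abs(a) >= t else 0.0
    return [fi / gi + st(mu - fi / gi, alpha1 * abs(gi))
            for gi, fi in zip(g, f)]

# Brute-force check on one component: minimize
#   (1/2)(w - mu)^2 + alpha1*|g*w - f|  over a fine grid.
g, f, mu, a1 = [2.0], [3.0], 1.1, 0.1
w_closed = w_update(g, f, mu, a1)[0]
w_grid = min((i / 10000.0 for i in range(0, 30000)),
             key=lambda w: 0.5 * (w - mu) ** 2 + a1 * abs(g[0] * w - f[0]))
```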
Algorithm 2
Initialization: u¹ = g, v¹ = 0, f¹ = u¹ + v¹, y¹ = u¹, z¹ = 0
(1) for k = 1, 2, …, do
(2)   update w by soft thresholding:
      w^{k+1} = G^{−1}f^k + ST_{α₁|g|}(μe − G^{−1}f^k)
(3)   update u:
      y^{k+1} = y^k + λ₁(prox_{γ₁α₁‖· − (Gw^{k+1} − v^k)‖₁}(2u^k − y^k) − u^k)
      u^{k+1} = prox_{γ₁α₂‖·‖_TV}(y^{k+1})
(4)   update v:
      z^{k+1} = z^k + μ₁(prox_{γ₂α₁‖· − (Gw^{k+1} − u^{k+1})‖₁}(2v^k − z^k) − v^k)
      v^{k+1} = prox_{γ₂ρ‖·‖²_{H^{−1}}}(z^{k+1})
(5)   f^{k+1} = u^{k+1} + v^{k+1}
(6)   if stopping criterion is satisfied, then
(7)     stop the loop and output f^{k+1} = u^{k+1} + v^{k+1}
(8) end for
There are some issues that need to be illustrated.
Remark 5.
For step (3) in Algorithm 1, when we update u as u^{k+1} = prox_{γ₁α₂‖·‖_TV}(y^{k+1}), we realize it by Algorithm 4 with a single iteration, which avoids an inner loop and improves the efficiency of the algorithm.
Remark 6.
In this paper, the stopping criterion is

(25) ‖f^{k+1} − f^k‖₂ / ‖f^k‖₂ ≤ 10^{−4}.
4. Implementation Details and Experimental Results
This section is mainly devoted to numerical simulation of image restoration in the presence of multiplicative noise. We test the performance of our model under the corruption of two types of multiplicative noise: Gamma and Rayleigh.
Gamma Noise. The probability density function of Gamma noise n is given by

(26) p(n) = n^{k−1} e^{−n/θ} / (Γ(k)θ^k),

where θ > 0, k > 1. The mean and variance of n are

(27) E(n) = kθ, V(n) = kθ².

As multiplicative noise for an image, the mean of n is set to 1, so θ = 1/k and the variance of n is kθ² = 1/k.
In model (8), w = (1/n_i)_{i=1}^N, and its mean is estimated as [10]

(28) μ = E(1/n) = 1/(θ(k − 1)) = k/(k − 1).
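The estimate (28) is easy to verify by Monte Carlo sampling. A pure-Python sketch (helper name and sample count are arbitrary choices):

```python
import random

def estimate_mu(k, samples=200000, seed=1):
    """Monte-Carlo check of (28): for unit-mean Gamma noise with shape k
    (scale theta = 1/k), E[1/n] = 1/(theta*(k-1)) = k/(k-1)."""
    rng = random.Random(seed)
    theta = 1.0 / k
    return sum(1.0 / rng.gammavariate(k, theta) for _ in range(samples)) / samples

mu_hat = estimate_mu(10)   # theoretical value: 10/9 ~ 1.1111
```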
Rayleigh Noise. The probability density function of Rayleigh noise n is given by

(29) p(n) = (n/σ²) exp(−n²/2σ²),

where σ is a positive parameter. The mean of n equals √(π/2)σ, the variance of n equals (2 − π/2)σ², and the mean of the inverse of Rayleigh noise can be estimated as [10]

(30) μ = E(1/n) = √(π/2)/σ.
We run experiments on three images: Cameraman (256×256), Lena (512×512), and an aerial image of a city (512×512); see Figure 2.
The original images: (a) Cameraman, (b) Lena, and (c) Aerial image of some city.
In this paper, we use the signal-to-noise ratio (SNR) and the peak signal-to-noise ratio (PSNR) as evaluation indicators, measured between the clean and restored images. Let I_new denote the image restored from the noisy image I_noisy, and let σ²_{I₀} denote the average variance of the clean image I₀. The SNR indicator is defined by

(31) SNR = 10 log₁₀( d σ²_{I₀} / ‖I_new − I₀‖₂² ),

and the PSNR by

(32) PSNR = 10 log₁₀( d ‖I₀‖²_∞ / ‖I_new − I₀‖₂² ),

where d denotes the number of pixels of the image.
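The two indicators can be sketched as follows, assuming d in (31)–(32) counts the pixels, so that d·σ²_{I₀} equals the total variance Σ(I₀ − mean)² (the function names and the tiny 4-pixel test images are hypothetical):

```python
import math

def snr(restored, clean):
    """SNR per (31): 10 log10( sum (I0 - mean)^2 / sum (Inew - I0)^2 )."""
    m = sum(clean) / len(clean)
    sig = sum((c - m) ** 2 for c in clean)
    err = sum((r - c) ** 2 for r, c in zip(restored, clean))
    return 10.0 * math.log10(sig / err)

def psnr(restored, clean):
    """PSNR per (32), with d the pixel count and the max-norm of I0 as peak."""
    d = len(clean)
    peak = max(abs(c) for c in clean)
    err = sum((r - c) ** 2 for r, c in zip(restored, clean))
    return 10.0 * math.log10(d * peak ** 2 / err)

s = snr([0.0, 0.0, 4.0, 2.0], [0.0, 0.0, 4.0, 4.0])    # 10 log10(16/4) ~ 6.02 dB
p = psnr([0.0, 0.0, 4.0, 2.0], [0.0, 0.0, 4.0, 4.0])   # 10 log10(4*16/4) ~ 12.04 dB
```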
4.1. Parameter Selection
The parameters α₁, α₂, ρ, λ₁, μ₁, γ₁, and γ₂ must be given to start the new algorithm. α₁, α₂, and ρ are regularization parameters that balance the terms of model (11); in particular, λ = α₂/α₁ is an important indicator of the trade-off between the data term and the regularization term. λ₁ and μ₁ are the two relaxation parameters of the Douglas-Rachford splitting iterations in Algorithm 2 and are empirically set to 0.5. γ₁ and γ₂ are the parameters of the proximity operators of ‖·‖_TV and ‖·‖²_{H^{−1}}, respectively, and are empirically set to 10.
According to the experimental analysis in [10], the ratio λ = α₂/α₁ in (8) is almost a constant depending on the type of noise and was empirically estimated to be 5/6 and 5/8 for Gamma and Rayleigh noise, respectively. From a number of experiments, we find that the best value of α₂/α₁ lies in the range [0.6, 0.85], the exact choice depending on the noise level and the image. The values of α₂/α₁ used are given in Table 1.
PSNR and SNR results for the Gamma noise removal.
Image      k   λ     ρ   Method       PSNR     SNR
Cameraman  15  —     —   Noisy image  17.3027   5.1705
Cameraman  15  0.18  —   AA           24.9257  12.7825
Cameraman  15  0.8   —   Convex       25.5732  13.4701
Cameraman  15  0.8   20  Ours         25.7435  13.5874
Lena       15  —     —   Noisy image  17.0867   2.9056
Lena       15  0.18  —   AA           27.5628  11.5464
Lena       15  0.6   —   Convex       27.4350  11.0819
Lena       15  0.6   20  Ours         27.7890  11.6649
Aerial     15  —     —   Noisy image  24.2567   4.5260
Aerial     15  0.18  —   AA           26.8085   6.0724
Aerial     15  0.85  —   Convex       28.1713   6.2617
Aerial     15  0.85  20  Ours         28.5635   7.5391
Cameraman  10  —     —   Noisy image  15.5091   3.3768
Cameraman  10  0.18  —   AA           24.1401  11.9376
Cameraman  10  0.8   —   Convex       24.3942  11.5807
Cameraman  10  0.8   50  Ours         24.7390  11.7593
Lena       10  —     —   Noisy image  15.2946   1.1407
Lena       10  0.18  —   AA           26.6863  12.3616
Lena       10  0.6   —   Convex       26.7026  12.4127
Lena       10  0.6   50  Ours         27.0777  12.3698
Aerial     10  —     —   Noisy image  22.5160   2.7469
Aerial     10  0.18  —   AA           25.4918   5.7500
Aerial     10  0.85  —   Convex       27.3225   6.1550
Aerial     10  0.85  50  Ours         27.7026   7.2175
Cameraman  4   —     —   Noisy image  11.4822  −0.6501
Cameraman  4   0.18  —   AA           22.2638   9.8301
Cameraman  4   0.6   —   Convex       22.4161  10.6513
Cameraman  4   0.6   80  Ours         22.6186  10.4654
Lena       4   —     —   Noisy image  11.3537  −2.8313
Lena       4   0.18  —   AA           24.4908   9.7435
Lena       4   0.6   —   Convex       24.4912   9.0932
Lena       4   0.6   80  Ours         24.5983  10.2984
Aerial     4   —     —   Noisy image  18.5211  −1.2087
Aerial     4   0.18  —   AA           24.4714   4.6684
Aerial     4   0.85  —   Convex       24.5657   4.7386
Aerial     4   0.85  80  Ours         25.0209   5.0016
The parameter ρ is key to the performance of model (11). Since a proper value of ρ is critical to recovering a satisfactory image, it is necessary to discuss its impact on the denoising results. Here, we show its influence on image decomposition using the image Cameraman. Fixing the other parameters, we decompose the image into cartoon and texture components for different ρ; the results are shown in Figure 3. It is clear from Figure 3 that different choices of ρ produce completely different results. If ρ is too large (ρ ≥ 100), there is very little content in the texture v, and many details remain in u. As ρ decreases, more and more features move into the texture component v. However, when ρ is too small (e.g., ρ ≤ 0.1), the block effect in the cartoon component u is very pronounced, and some major features end up in the texture component v. Moreover, during denoising, too small a value of ρ can push many details or small-scale parts into v, including noise we do not want. For this reason, we experimentally select ρ in the interval [20, 80] according to the noise level; the exact choices can be found in Table 1.
The influence of ρ on decomposition. (a) is the image f. (b–e) are the cartoon components u for ρ = 100, 10, 1, and 0.1, respectively. (f–i) are the corresponding texture components v.
4.2. Experimental Results under the Assumption of Gamma Noise
Under the assumption of Gamma noise corruption, we test the denoising performance of the AA model, the convex model, and the proposed model on the images of Figure 2 and compare the SNRs/PSNRs of the three methods. The AA model is formulated as

(33) min_f ∫_Ω (log f(x) + g(x)/f(x)) dx + λ‖∇f‖₁,

which is solved by the gradient descent iteration

(34) (f^{i+1} − f^i)/Δt = div(∇f^i/|∇f^i|) + λ(g − f^i)/(f^i)², i = 1, 2, …,

where the step size Δt is set to 0.01 for all experiments and the regularization parameter λ is adjusted according to the image and noise level; the exact values can be found in Table 1. The convex model is solved by the ADMM method; see [10].
In the tests, the original images are corrupted by Gamma noise with k = 4, 10, and 15, respectively. Table 1 lists the related parameter values and the SNRs/PSNRs obtained by averaging over ten noise realizations under the same experimental setting. It is clear that the proposed model performs better in terms of both PSNR and SNR than the AA model and the convex model.
We further show the denoised results of the three methods in Figures 4–6. When k = 15 and k = 10, as shown in Figures 4 and 5, the images restored by the new model possess clearer content and better visual quality than those of the AA model and the convex model. However, when k = 4, that is, when the noise level is higher, Figure 6 shows that the new model has little advantage over the convex model in visual quality. We also observe from Table 1 that, when k = 4, the SNRs/PSNRs of our model and the convex model are very close. The main reason is that the larger the noise variance, the more seriously the texture components are destroyed and treated as noise, and the larger the value of ρ that must be chosen. As described in the third property of the new model in Section 2, v then contains little content, and the denoising results of the new model approximate those of the convex model.
Experiment results under the corruption of Gamma noise, k = 15. For each test image, from left to right: noisy, AA, convex, and our results.
Experiment results under the corruption of Gamma noise, k = 10. For each test image, from left to right: noisy, AA, convex, and our results.
Experiment results under the corruption of Gamma noise, k = 4. For each test image, from left to right: noisy, AA, convex, and our results.
4.3. Experimental Results under the Assumption of Rayleigh Noise
The Rayleigh noise is generated by σ√(−2 log(1 − b)), where b is a uniformly distributed random variable generated by "rand" in MATLAB. The mean of the Rayleigh noise is set to 1, so its variance is 0.2732.
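The same inverse-transform sampler can be written in pure Python to check the stated moments: fixing the mean at 1 forces σ = √(2/π), which gives variance (2 − π/2)σ² = 4/π − 1 ≈ 0.2732. The helper name and sample count below are arbitrary choices:

```python
import math
import random

def rayleigh_noise(samples=200000, seed=2):
    """Generate unit-mean Rayleigh noise by inverse-transform sampling
    n = sigma * sqrt(-2 log(1 - b)) with b ~ U(0, 1).  Unit mean forces
    sigma = sqrt(2/pi), so the variance is (2 - pi/2)*sigma^2 ~ 0.2732."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 / math.pi)
    return [sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
            for _ in range(samples)]

n = rayleigh_noise()
mean = sum(n) / len(n)
var = sum((x - mean) ** 2 for x in n) / len(n)
```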
Since the AA model is derived under the assumption of Gamma noise, we test only the denoising performance of the convex model and the proposed model. Table 2 shows the SNR/PSNR results of the two models. We notice that the proposed model attains higher SNRs and PSNRs than the convex model. The experimental results are shown in Figure 7. The images restored by the new model possess clearer content and better visual quality.
PSNR and SNR results for the Rayleigh noise removal.
Image      Method       PSNR     SNR
Cameraman  Noisy image  11.1192  −1.013
Cameraman  Convex       22.4597  10.3275
Cameraman  Ours         22.6852  10.5529
Lena       Noisy image  10.9307  −3.2543
Lena       Convex       24.6532  10.4683
Lena       Ours         25.2379  11.0529
Aerial     Noisy image  18.1219  −1.6079
Aerial     Convex       25.0408   5.3110
Aerial     Ours         25.0688   5.3390
Experiment results under the corruption of Rayleigh noise with mean 1 and variance 0.2732. For each test image, from left to right: original, noisy, convex, and our results.
4.4. Further Analysis of the Restored Images
In this section, we further analyze the cartoon part u and the texture part v of the image f restored by model (11). Taking the Gamma noise case as an example, we exhibit the experimental results for k = 10.
It can be observed from Figure 8 that the texture parts v are rich in content, which enhances the image quality both in the SNR/PSNR indexes and in visual quality. Thus the proposed model can also be regarded as a cartoon-texture decomposition method under multiplicative noise corruption, which deserves further research.
Experiment results of the proposed model. From (a–l), (a, e, i) are the noisy images corrupted by Gamma noise (k = 10), (b, f, j) are the restored cartoon parts, (c, g, k) are the restored texture parts, and (d, h, l) are the whole recovered images.
5. Conclusions
A novel multiplicative noise removal model based on cartoon-texture decomposition has been proposed in this paper. The main advantages of our work are twofold. First, we incorporate a priori information about the texture component into the denoising model, which makes the model more flexible and effective. Second, we solve the new convex model efficiently using operator splitting algorithms and Fourier-domain filtering. The experimental results demonstrate that the proposed model removes noise better than the two other models, and the recovered images have higher SNRs/PSNRs and better visual quality.
Competing Interests
The authors declare that there are no competing interests regarding the publication of this paper.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants nos. 61472303, 61271294, 61362129, and 61379030 and by the Fundamental Research Funds for the Central Universities (no. NSIY21).
[1] L. Rudin, P.-L. Lions, and S. Osher, "Multiplicative denoising and deblurring: theory and algorithms."
[2] G. Aubert and J.-F. Aujol, "A variational approach to removing multiplicative noise."
[3] J. Shi and S. Osher, "A nonlinear inverse scale space method for a convex multiplicative noise model."
[4] Y.-M. Huang, M. K. Ng, and Y.-W. Wen, "A new total variation method for multiplicative noise removal."
[5] S. Durand, J. Fadili, and M. Nikolova, "Multiplicative noise removal using L1 fidelity on frame coefficients."
[6] Y. Hao, X. Feng, and J. Xu, "Multiplicative noise removal via sparse and redundant representations over learned dictionaries and total variation."
[7] Y.-M. Huang, L. Moisan, M. K. Ng, and T. Zeng, "Multiplicative noise removal via a learned dictionary."
[8] T. Teuber and A. Lang, "Nonlocal filters for removing multiplicative noise," in A. M. Bruckstein, B. M. ter Haar Romeny, A. M. Bronstein, and M. M. Bronstein (Eds.).
[9] F. Dong, H. Zhang, and D.-X. Kong, "Nonlocal total variation models for multiplicative noise removal using split Bregman iteration."
[10] X.-L. Zhao, F. Wang, and M. K. Ng, "A new convex optimization model for multiplicative noise and blur removal."
[11] W. Yin, D. Goldfarb, and S. Osher, "The total variation regularized model for multiscale decomposition."
[12] A. Chambolle, "An algorithm for total variation minimization and applications."