A Convex Adaptive Total Variation Model Based on the Gray Level Indicator for Multiplicative Noise Removal



1. Introduction
Multiplicative noise occurs when one deals with active imaging systems, such as laser images, microscope images, and SAR images. Given a noisy image f : Ω → R, where Ω is a bounded open subset of R², we assume

f = uη, (1)

where u is the true image and η is the noise. In what follows we always assume that f > 0 and u > 0. Some prior information about the mean and the variance of the multiplicative noise is as follows:

(1/|Ω|) ∫_Ω (f/u) dx = 1, (2)

(1/|Ω|) ∫_Ω ((f/u) − 1)² dx = σ², (3)

where |Ω| = ∫_Ω 1 dx. Our purpose is to remove the noise while preserving the maximum information about u. The goal of this paper is to propose a globally strictly convex functional well adapted to removing multiplicative noise:

min_{u ∈ BV(Ω), u > 0} { ∫_Ω α(x)|Du| + λ ∫_Ω (u + f log(1/u)) dx }, (4)

where f is the noisy image and ∫_Ω α(x)|Du| stands for the adaptive total variation of u. In the new model we introduce a control factor α(x), which controls the speed of diffusion in different regions: at low gray levels (α(x) → 0) the diffusion is slow; at high gray levels (α(x) → 1) it is fast. The second term of the functional, the fidelity term, is globally convex, which implies the constraint (2). Various adaptive filters for multiplicative noise removal have been proposed. Early variational methods for multiplicative noise reduction dealt with Gaussian multiplicative noise [1]. In practice, speckle noise [2] is more widespread, for example in synthetic aperture radar (SAR) imagery [3, 4]. Aubert and Aujol [5] then proposed a new variational method for Gamma multiplicative noise reduction. Via the logarithm transform, a large variety of methods rely on converting the multiplicative noise into additive noise.
1.1. Gaussian Noise. In the additive noise case, the most classical assumption is that the noise is white and Gaussian, so one natural multiplicative case is white Gaussian noise. Using the framework in [6], Rudin et al. [1] consider the following optimization problem for Gaussian multiplicative noise reduction:

min_u { J(u) + H(u, f) }, (5)

where J(u) = ∫_Ω |∇u| dx stands for the total variation of u, and H(u, f) is a fidelity term consisting of two integrals with two Lagrange multipliers:

H(u, f) = λ₁ ∫_Ω (f/u) dx + λ₂ ∫_Ω ((f/u) − 1)² dx. (6)

In order to make sure that the two constraints (2) and (3) are always satisfied during the evolution, the authors evolve the gradient-descent equation of (5), with λ₁ and λ₂ updated by the gradient projection method. If the values of λ₁ and λ₂ are found dynamically by the gradient projection method, the functional is not always convex, while if λ₁, λ₂ > 0 are fixed, the corresponding minimization problem leads to a sequence of constant functions approaching +∞.
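The two moment constraints (2) and (3) are easy to check empirically. Below is a minimal, self-contained Python sketch (the 1-D signal, sigma, and sample size are all hypothetical choices) that builds a multiplicative white Gaussian noise image f = u·η and verifies discrete analogues of (2) and (3):

```python
import math
import random

random.seed(0)

# Hypothetical 1-D "image" u > 0 and multiplicative white Gaussian
# noise eta with mean 1 and standard deviation sigma, so that f = u*eta.
sigma = 0.1
n = 200_000
u = [1.0 + 0.5 * math.sin(i / 30.0) for i in range(n)]   # true image, u > 0
eta = [random.gauss(1.0, sigma) for _ in range(n)]       # white Gaussian noise
f = [ui * ei for ui, ei in zip(u, eta)]                  # noisy image f = u*eta

# Discrete analogues of the constraints (2) and (3):
#   (1/|Omega|) * sum(f/u)         ~ 1        (mean of the noise)
#   (1/|Omega|) * sum((f/u - 1)^2) ~ sigma^2  (variance of the noise)
mean_ratio = sum(fi / ui for fi, ui in zip(f, u)) / n
var_ratio = sum((fi / ui - 1.0) ** 2 for fi, ui in zip(f, u)) / n
print(mean_ratio, var_ratio)
```

The ratio f/u recovers the noise exactly, so its sample mean and variance approach 1 and σ² as the number of pixels grows.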
1.2. Gamma Noise. Generally, speckle noise is treated as Gamma noise with mean equal to one. The probability density function g(η) of the noise η takes the following form:

g(η) = (L^L / Γ(L)) η^{L−1} e^{−Lη} 1_{η ≥ 0}. (8)

Gamma noise is more complex than Gaussian noise [2]. Based on maximum a posteriori (MAP) estimation of u given f, Aubert and Aujol [5] assume that the noise η follows a Gamma probability distribution with mean equal to one and that u follows a Gibbs prior, and then derive a functional formulating the following minimization problem (called the AA model):

min_{u ∈ S(Ω)} { ∫_Ω |Du| + λ ∫_Ω (log u + f/u) dx }, (9)

where S(Ω) = {u > 0, u ∈ BV(Ω)}. The fidelity term H(u, f) = ∫_Ω (log u + (f/u)) dx is strictly convex for u ∈ (0, 2f). The authors prove the existence of a minimizer u ∈ S(Ω) of the minimization problem and derive existence and uniqueness results for the solution of the associated evolution equation.

1.3. The Model Based on the Logarithm Transform. The simplest idea is to take the logarithm of both sides of (1), which essentially converts the multiplicative problem into an additive one. If the distribution of the noise η takes the form (8), then the expectation of the log-noise log η is ψ(L) − log L and its variance is ψ′(L), where ψ is the polygamma (digamma) function [7]. In [8], Shi and Osher use relaxed inverse scale space (RISS) flows to deal with various noises and provide iterative TV regularization. First, using the log-data log f, they propose a total variation model in the variable w = log u. The corresponding RISS flow involves regularization parameters λ₁ > 0 and λ₂ > 0 and an auxiliary variable v. Next, they further develop an alternating minimization algorithm for the model (16) by incorporating another modified TV regularization from [12]. However, the mathematical analysis of the variational problem (16) is not given in [11]. By the exponential transformation u = e^w, Jin and Yang [13] change the fidelity term log u + f/u of the AA model into w + f e^{−w} and then study the following denoising model:

min_w { ∫_Ω |Dw| + λ ∫_Ω (w + f e^{−w}) dx }. (17)

Notice that the fidelity term H(w, f) = ∫_Ω (w + f e^{−w}) dx is globally strictly convex. Based on this, they prove the uniqueness of the
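The log-noise statistics quoted above can be checked by simulation. The following sketch (the shape parameter L and sample size are arbitrary choices) draws Gamma noise with mean one and compares the sample mean of log η against ψ(L) − log L, the known expectation for a Gamma variable with shape L and scale 1/L; ψ is approximated here by a central difference of log Γ:

```python
import math
import random

random.seed(1)

L = 4.0       # hypothetical shape parameter of the Gamma noise
n = 200_000
# Gamma noise with mean one: shape L, scale 1/L (mean = L * (1/L) = 1).
eta = [random.gammavariate(L, 1.0 / L) for _ in range(n)]

# digamma psi(L) via a central difference of log Gamma
h = 1e-5
psi = (math.lgamma(L + h) - math.lgamma(L - h)) / (2.0 * h)

mean_eta = sum(eta) / n
mean_log = sum(math.log(e) for e in eta) / n
expected_log = psi - math.log(L)   # E[log eta] for Gamma(L, 1/L)
print(mean_eta, mean_log, expected_log)
```

Note that E[log η] is negative even though E[η] = 1, which is exactly the bias the log transform introduces and a known caveat of log-domain methods.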
solutions to the variational problem (17) and show the existence and uniqueness of the weak solution for the evolution equation corresponding to (17):

∂w/∂t = div(∇w/|∇w|) − λ(1 − f e^{−w}). (18)

The paper is organized as follows. In Section 2, inspired by [1, 5, 14] and based on the gray level indicator α(x), we derive a convex adaptive total variation model (4) for multiplicative noise removal. In Section 3, we prove the existence and uniqueness of the solution of the minimization problem (4). In Section 4, we study the evolution problem associated with the minimization problem (4). Specifically, we define the weak solution of the evolution problem, derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as t → ∞. In Section 5, we provide two numerical algorithms and experimental results to illustrate the effectiveness of our algorithms in image denoising. We also compare the new algorithms with existing ones.

2. A New Variational Model for Multiplicative Noise Removal
The goal of this section is to propose a new variational model for multiplicative noise removal. First, we propose a new fidelity term with global convexity, which always satisfies the constraint (2) during the evolution. Then, by analyzing the properties of the noise, we propose an adaptive total variation based on a gray level indicator α(x).

2.1. A Global Convex Fidelity Term.
Based on the idea in [15], we can obtain the following fidelity term:

H(u, f) = u + f log(1/u). (19)

Note that H_u(u, f) = 1 − (f/u). Let us consider the Euler-Lagrange equation of a functional built on this fidelity:

0 = F(∇u, ∇²u) − λ(1 − f/u), in Ω, (20)

∂u/∂n = 0, on ∂Ω, (21)

where ∇u and ∇²u stand, respectively, for the gradient and the Hessian matrix of u with respect to the space variable x, and F(∇u, ∇²u) is the divergence operator arising from the functional. By integrating (20) in space, using integration by parts and the boundary condition (21) in the sense of distributions, we have

∫_Ω (1 − f/u) dx = 0, (22)

which implies the constraint (2). Moreover, using the idea in [6], the parameter λ can be calculated as

λ = (1/(σ²|Ω|)) ∫_Ω F(∇u, ∇²u)(1 − f/u) dx. (23)

Remark 1. (1) It is easy to check that the function u ↦ u + f log(1/u) attains its minimum value f + f log(1/f) over u > 0 at u = f. (2) Let us denote

h(u) = u + f log(1/u). (24)

We have h′(u) = 1 − (f/u) = (u − f)/u and h″(u) = f/u² > 0, which shows that the new fidelity term H(u, f) is globally strictly convex.
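Remark 1 can be illustrated numerically. A small sketch (the datum f and the search grid are arbitrary) confirming that h(u) = u + f log(1/u) is minimized at u = f and is strictly convex:

```python
import math

f = 2.5  # any positive datum

def h(u):
    # fidelity integrand h(u) = u + f*log(1/u), defined for u > 0
    return u + f * math.log(1.0 / u)

# h'(u) = 1 - f/u vanishes at u = f and h''(u) = f/u**2 > 0, so u = f
# is the unique minimizer; check this on a grid, plus a midpoint
# convexity check: h(1) + h(4) > 2*h(2.5).
grid = [0.1 + 0.01 * k for k in range(1000)]
u_min = min(grid, key=h)
midpoint_gap = h(1.0) + h(4.0) - 2.0 * h(2.5)   # > 0 by strict convexity
print(u_min, midpoint_gap)
```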

2.2. The Adaptive Total Variation Model.
Assume u is a piecewise constant function, that is, u = Σ_{i=1}^N g_i 1_{Ω_i}, where Ω_i ∩ Ω_j = ∅ for i ≠ j, ⋃_{i=1}^N Ω_i = Ω, and g_i is the gray level. Moreover, we assume that the samples of the noise η at the pixels x are mutually independent and identically distributed (i.i.d.). For x ∈ Ω_i, f(x) = g_i η(x), and therefore Var[f] = g_i² Var[η] = g_i² σ², where Var[f] and Var[η] are the variances of the noisy image f and of the noise at the pixel x, respectively. Notice that the variance of the noise is the constant σ² at each pixel, but the variance of the noisy image f is influenced by the gray level: the higher the gray level, the more pronounced the influence of the noise. In particular, f = u when u = 0, so f is noise free in this case. This fact is illustrated in Figure 1: although the noise is independent and identically distributed (see Figure 1(b)), the noisy image shows different features at pixels where the gray levels differ (see Figure 1(d)).
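The gray-level dependence of Var[f] is easy to reproduce. The sketch below (the two gray levels, σ, and Gaussian noise are hypothetical choices) simulates f = g·η on two regions of constant gray level and checks Var[f] ≈ g²σ²:

```python
import random

random.seed(2)

sigma = 0.2                   # per-pixel noise standard deviation
g_low, g_high = 10.0, 100.0   # two hypothetical gray levels
n = 100_000

def noisy_region(g):
    # f = g * eta on a region of constant gray level g (Gaussian eta, mean 1)
    return [g * random.gauss(1.0, sigma) for _ in range(n)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Var[f] = g^2 * sigma^2: the same i.i.d. noise is far more visible
# at the high gray level than at the low one.
v_low = variance(noisy_region(g_low))
v_high = variance(noisy_region(g_high))
print(v_low, v_high)
```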
In [14], Chan and Strong propose the following adaptive total variation model:

min ∫_Ω g(x)|∇u| dx, (25)

where the weight function g(x) controls the speed of diffusion in different regions. Following this idea, we propose a gray level indicator α(x) with the following properties: α(s) is monotonically increasing, α(0) = 0, α(s) ≥ 0, and α(s) → 1 as s → sup_{x∈Ω} u. Accordingly, we propose the gray level indicator

α(x) = (1 − 1/(1 + k|G_σ ∗ f(x)|²)) · (1 + kM)/(kM), (26)

or an alternative indicator (27) with the same qualitative properties, where M = sup_{x∈Ω} |(G_σ ∗ f)(x)|², G_σ(x) = (1/(4πσ²)) exp{−|x|²/(4σ²)}, and k > 0, σ > 0 are parameters. With this choice, α(x) is a positive-valued continuous function; α(x) is much smaller at low gray levels (α(x) → 0) than at high gray levels (α(x) → 1), so small features at low gray levels are much less smoothed and are therefore preserved. As noted above, the region of the noisy image at high gray levels is degraded more, while the region at low gray levels is degraded less (see Figure 1(d)). Hence, by (26), at high gray levels α(x) → 1 and the new model smooths the region more; at low gray levels α(x) → 0 and it smooths the region less.
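The indicator can be prototyped in a few lines. The sketch below is illustrative only: the 1-D signal, kernel width, and k are assumptions, and it further assumes the normalizer M is the supremum of |G_σ ∗ f|², which makes α range exactly over [0, 1]. It checks that α stays in [0, 1] and equals 1 at the brightest smoothed pixel:

```python
import math

k = 0.05
# toy nonnegative 1-D "image" with gray levels in [0, 100]
f = [abs(math.sin(i / 15.0)) * 100.0 for i in range(200)]

def gauss_smooth(xs, rho=2.0, radius=6):
    # simple truncated Gaussian kernel standing in for G_sigma * f
    w = [math.exp(-(j * j) / (4.0 * rho * rho)) for j in range(-radius, radius + 1)]
    s = sum(w)
    return [
        sum(w[j + radius] * xs[min(max(i + j, 0), len(xs) - 1)]
            for j in range(-radius, radius + 1)) / s
        for i in range(len(xs))
    ]

g = gauss_smooth(f)
M = max(v * v for v in g)   # assumed: M = sup |G_sigma * f|^2
# gray level indicator: 0 at dark pixels, exactly 1 where |G*f|^2 = M
alpha = [(1.0 - 1.0 / (1.0 + k * v * v)) * (1.0 + k * M) / (k * M) for v in g]
print(min(alpha), max(alpha))
```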
The previous analysis leads us to propose the following convex adaptive total variation model for multiplicative noise removal:

min_{u ∈ BV(Ω), u > 0} { ∫_Ω α(x)|Du| + λ ∫_Ω (u + f log(1/u)) dx }. (28)

The gradient-descent evolution of the Euler-Lagrange equation for (28) is as follows:

∂u/∂t = div(α(x) ∇u/|∇u|) − λ(1 − f/u), in Ω × (0, T), (29)

∂u/∂n = 0, on ∂Ω × (0, T), (30)

u(0, x) = f, x ∈ Ω. (31)
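A minimal explicit time-stepping sketch of the evolution (29)-(31) in 1-D, under several simplifying assumptions: |∇u| is regularized by a small ε, the indicator α is frozen to a constant, boundaries are reflecting, and all parameter values are arbitrary. It checks that the discrete energy of (28) decreases along the flow and that u stays positive:

```python
import math

# 1-D explicit scheme for u_t = (alpha * u_x / |u_x|)_x - lam*(1 - f/u),
# u(x, 0) = f(x); |u_x| is regularized by eps, boundaries are reflecting.
n = 100
f = [10.0 + (5.0 if 40 <= i < 60 else 0.0) + 0.5 * math.sin(i) for i in range(n)]
alpha = [0.5] * n                 # frozen constant indicator (an assumption)
lam, eps, dt, steps = 0.1, 0.1, 0.1, 200

def energy(u):
    # discrete analogue of the objective (28)
    tv = sum(alpha[i] * abs(u[i + 1] - u[i]) for i in range(n - 1))
    fid = sum(ui + f[i] * math.log(1.0 / ui) for i, ui in enumerate(u))
    return tv + lam * fid

u = f[:]                          # initial condition u(x, 0) = f(x)
e0 = energy(u)
for _ in range(steps):
    ux = [u[min(i + 1, n - 1)] - u[i] for i in range(n)]          # forward diff
    flux = [alpha[i] * ux[i] / math.sqrt(ux[i] ** 2 + eps) for i in range(n)]
    div = [flux[i] - flux[i - 1] if i > 0 else flux[0] for i in range(n)]
    u = [u[i] + dt * (div[i] - lam * (1.0 - f[i] / u[i])) for i in range(n)]
e1 = energy(u)
print(e0, e1, min(u))
```

The step size dt must respect the stability limit set by the regularized diffusivity α/√ε; the values above are chosen to satisfy it.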

3. The Minimization Problem (28)
In this section, we study the existence and uniqueness of the solution to the minimization problem (28), and then we establish a comparison principle for problem (28).
Lemma 5. Assume that u ∈ BV(Ω). Then there exists a sequence {u_n} ⊂ C^∞(Ω) such that u_n → u in L¹(Ω) and ∫_Ω α|∇u_n| dx → ∫_Ω α|Du| as n → ∞.

By a minor modification of the proof of Lemma 1 in Section 4.3 of [17], we have the following lemma.

Lemma 6. Let u ∈ BV(Ω), and let u_{a,b} be the cut-off function defined by u_{a,b}(x) = min{max{u(x), a}, b}. Then u_{a,b} ∈ BV(Ω), and ∫_Ω α|Du_{a,b}| ≤ ∫_Ω α|Du|.
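The cut-off inequality of Lemma 6 has an elementary discrete analogue: clamping a value to [a, b] is 1-Lipschitz, so it can only decrease the discrete total variation. A quick sketch (random 1-D data; the bounds a = −1, b = 2 are arbitrary):

```python
import random

random.seed(3)

def tv(xs):
    # discrete total variation of a 1-D signal
    return sum(abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1))

def cutoff(xs, a, b):
    # discrete cut-off u_{a,b} = min(max(u, a), b); clamping is 1-Lipschitz,
    # so |cutoff(x) - cutoff(y)| <= |x - y| for every pair of samples
    return [min(max(x, a), b) for x in xs]

u = [random.uniform(-5.0, 5.0) for _ in range(1000)]
v = cutoff(u, -1.0, 2.0)
print(tv(v), tv(u))
```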

Existence and Uniqueness of the Problem (28)
In this subsection, we show that problem (28) has at least one solution in BV(Ω).
Step 2. Let us prove that there exists u ∈ BV(Ω) attaining the infimum. The proof in Step 1 implies in particular that {u_n} is bounded in L¹(Ω). Since a ≤ u_n ≤ b and h(s) is bounded for s ∈ [a, b], h(u_n) is bounded. Moreover, by the definition of {u_n}, there exists a constant C such that ∫_Ω α|Du_n| ≤ C. Then the sequence {u_n}_{n=1}^∞ is bounded in BV(Ω). Consequently, there exist a function u ∈ BV(Ω) and a subsequence of {u_n}_{n=1}^∞, still denoted by itself, such that, as n → ∞, u_n ⇀ u weakly* in BV(Ω) and u_n → u in L¹(Ω). Utilizing Lemma 4 and Fatou's lemma, we get that u is a solution of problem (28).
Finally, from Remark 1(2), h is strictly convex for u > 0, and then the uniqueness of the minimizer follows from the strict convexity of the energy functional in (28).

Comparison Principle.
In this subsection, we state a comparison principle for problem (28).
By Theorem 7, there exist solutions u₁ and u₂ for the data f₁ and f₂, respectively. Since u_i is a minimizer with data f_i, for i = 1, 2, the corresponding minimality inequalities hold. Adding these two inequalities and using a minor modification of the facts in [18, 19], we can deduce the desired inequality. Since f₁ < f₂, the set {u₁ > u₂} has zero Lebesgue measure; that is, u₁ ≤ u₂ a.e. in Ω.

4. The Associated Evolution Equations (29)-(31)
In this section, using an approach drawn from the theory in [16, 20], we define the weak solution of the evolution problem (29)-(31), derive estimates for the solution of an approximating problem, prove existence and uniqueness of the weak solution, and discuss the behavior of the weak solution as t → ∞.
Let us introduce the energy functional associated with problems (61)-(63). Combining the proof of Theorem 3.4 in [16] with the proof of Theorem 7, we also have the following lemma.
Lemma 10. Let f ∈ L^∞(Ω) with inf_{x∈Ω} f > 0. Then there is a unique solution to the problem, satisfying the corresponding estimates. Based on this fact, we have the following existence and uniqueness result for problems (61)-(63).
Since the p-Laplacian is a maximal monotone operator, using the Galerkin method and the Lebesgue convergence theorem, from standard results for parabolic equations [22] we get that problems (72)-(74) admit a unique weak solution. Next, let us verify that the truncation [⋅] in problems (72)-(74) can be omitted. Multiplying (72) by the negative part of the corresponding comparison function and integrating over Ω, we obtain a bound valid for all t ∈ [0, T], and hence the truncation is inactive for t ∈ [0, T]; then (70) follows.

Existence and Uniqueness of the Problems (29)-(31).
In this subsection, we prove the main theorem on the existence and uniqueness of the solution to problems (29)-(31).

Proof
Step 1. First, we fix ε > 0 and pass to the limit p → 1.
Step 2. It now remains to pass to the limit ε → 0 in (102) to complete the proof of existence of the solution to (29)-(31).

4.4. Long-Time Behavior.
Finally, we show the asymptotic limit of the solution u(⋅, t) as t → ∞.
Proof. Take a suitable test function. Since {u(⋅, t)} is uniformly bounded in L^∞(Ω), we obtain the bound (116). Dividing (116) by t and then taking the limit along t_n → ∞, we get that the limit function ũ satisfies the minimality inequality for any admissible competitor, which implies that ũ is the minimizer of problem (28).

5. Numerical Methods and Experimental Results
We present in this section some numerical examples illustrating the capability of our model, and we compare it with the known AA model. In the next two subsections, two numerical discrete schemes, the p-total variation (p-TV) scheme and the p-Laplace approximate (p-LA) scheme, are proposed. Here the MATLAB function "conv2" is used to compute the two-dimensional discrete convolution G_σ ∗ f of the image matrix. Through the above lines, we obtain u¹_{i,j} from u⁰_{i,j}. The program stops when the prescribed stopping criterion is met.

p-Laplace Approximate Scheme.
From the proof of Theorem 12, we know that the term ∫_Ω α|Du| can be approximated by the term (1/p) ∫_Ω α|∇u|^p dx as p → 1. Based on this, we can use the numerical algorithms for problems (61)-(63) to obtain a solution to problem (28) as p → 1. As in [24], the numerical discrete scheme for problems (61)-(63) is given accordingly.
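The approximation of the total variation by (1/p) Σ|du|^p underlying the p-LA scheme can be sanity-checked in 1-D (toy signal, unweighted for simplicity): the approximation error shrinks monotonically as p decreases toward 1.

```python
import math

# As p -> 1, (1/p) * sum|du|^p approaches the discrete total variation
# sum|du|; this is the approximation behind the p-LA scheme.
u = [math.sin(i / 10.0) for i in range(100)]
du = [u[i + 1] - u[i] for i in range(len(u) - 1)]
tv = sum(abs(d) for d in du)

approxs = [sum(abs(d) ** p for d in du) / p for p in (2.0, 1.5, 1.1, 1.01)]
print(tv, approxs)
```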

Figure 1: The relation between the influence of the noise and the gray levels. (a) 1D signal u; (b) 1D speckle noise with mean equal to 1 and parameter 5; (c) u degraded by the speckle noise; (d) comparison between u and the noisy signal f.