Mathematical Problems in Engineering, Hindawi Publishing Corporation, Volume 2010, Article ID 764639.

Research Article

A Decomposition and Noise Removal Method Combining Diffusion Equation and Wave Atoms for Textured Images

Wallace Correa de Oliveira Casaca and Maurílio Boaventura, DCCE/IBILCE, UNESP-São Paulo State University, Rua Cristóvão Colombo 2265, 15054-000 São José do Rio Preto, SP, Brazil. Academic Editor: Panos Liatsis.

Received 29 June 2009; Revised 18 December 2009; Accepted 6 February 2010; Published 23 March 2010.

Copyright © 2010. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a new method aimed at denoising textured images. The method combines a balanced nonlinear partial differential equation driven by optimal parameters, mathematical morphology operators, weighting techniques, and recent tools from harmonic analysis. Furthermore, the new scheme decomposes the observed image into three well-defined components: structure/cartoon, texture, and noise-background. Experimental results show the improved performance of our method on the texture-preserving denoising problem.

1. Introduction

A major topic in the image processing community is feature extraction. Much current research aims at decomposing an image into several auxiliary images, each representing a specific set of characteristics such as edges, contours, structure, noise, or texture. In this context, an important application is the denoising problem, where many models assume that a noisy image f is the sum of two components, f = u + v, so that u contains the structure and the objects of the image (cartoon) while v contains the oscillatory characteristics, that is, texture and noise.

In most cases, when an image has only a few textured regions, the term v is discarded and characterized simply by v = f − u; that is, models in this category assume that the component v consists only of noise, as in nonlinear models based on adaptive smoothing, anisotropic diffusion, variational methods, and inverse scale space flow. On the other hand, there are models that explicitly represent both terms, the structure u and the oscillatory part v, so that f = u + v. This simultaneous decomposition was initially proposed by Meyer, who studied a generalized space modeling zero-mean oscillatory patterns such as noise and texture; it has since been studied and refined. In both cases, if the observed image f has noise and a high texture concentration, then the term u will not faithfully represent the recovered (noise-free) image, since much of the fine detail of the image (texture) is oscillatory and will consequently be absorbed into the noise component v. In all those models, texture and noise are treated equally, which makes it difficult to identify the oscillatory component v. For real images, for example, it is practically impossible to obtain satisfactory results using an f = u + v decomposition model, owing to the complex structure and irregular detail in this category of images.

To overcome these problems, a third line of research combines the techniques mentioned above with recent tools from harmonic analysis. New hybrid models have been proposed to integrate the main advantages of different noise-minimizing methods. Nevertheless, as most of these models are directly based on curvelets or wave atoms, they tend to make the reconstructed image somewhat opaque, besides reproducing pseudo-Gibbs phenomena, that is, spurious oscillations near discontinuities.

To address this problem, in this paper we propose a methodology capable of restoring a noisy image having a high concentration of textures and fine details. The proposed model not only preserves well-localized oscillatory characteristics of the image but also maintains more delicate features such as intrinsic contours and edges. Moreover, motivated by [9, 13], we propose a representation of a given image f in three terms, f = u + ṽ + w, with u representing the structure or cartoon of the image, ṽ only the texture and intrinsic contours, and w the noise and background. Another great advantage of our method is that it was constructed precisely to satisfy this decomposition.

The proposed scheme combines the ideas described in [2, 25], that is, a nonlinear PDE (partial differential equation) balanced with an automatic selector of best parameters, with recent work on wave atoms [18, 19, 26], with mathematical morphology operators such as the top-hat transform described in , and with the ideas introduced here to synthesize each component of the new three-term decomposition.

The remainder of the paper is outlined as follows: in Section 2 we briefly describe our method since we use a combination of different approaches in order to treat textured images with noise. We motivate and present our denoising framework in Section 3. Our method is then validated experimentally and statistically in Section 4. We finally summarize and conclude our work in Section 5.

2. The Proposed Method

2.1. Description of the Problem

Let f be the observed image with noise and h the original image (without noise), represented by the functions f: Ω → ℝ and h: Ω → ℝ, where Ω is a rectangular region of ℝ². Here, we assume that both f and h can be periodically extended to ℝ², so that f, h ∈ L²(ℝ²). We suppose that the noise is additive, that is, f(x) = h(x) + n(x), x ∈ Ω (2.1), where n represents Gaussian noise with mean 0 and variance σ_n².

Furthermore, we assumed that the original image h is composed of a structure and an oscillatory pattern component such as texture and irregular details.

The objective here is to minimize the noise level of the input image f; that is, the impact of the noise n(x) should be minimal in the output image, making it visibly closer to the original image h. The texture and intrinsic contours embedded in the image must be maintained and highlighted, so that the output images u, ṽ, and w satisfy h ≈ u + ṽ, n ≈ w and represent the following features of f:

u: structure, skeleton, or cartoon,

ṽ: texture, intrinsic contours, and irregular details,

w: noise and background.

2.2. The Proposed Scheme

Let f be the observed image contaminated with noise as in (2.1). The proposed algorithm can be described as an integrated system with six essential stages. The first (step 1), the fifth (step 5), and the last (step 6) stages produce output images, while the remaining stages (steps 2, 3, and 4) generate support images that help synthesize the output components produced in the aforementioned steps. All these steps are described as follows.

Step 1 (classical image decomposition). The image f is decomposed into two components, u and v, so that f = u + v, where u contains the skeleton or cartoon of f and v contains the oscillating elements of the image such as noise, irregular details, and texture.

Step 2 (denoising the texture-noise component). An iterative procedure is applied to remove the noise present in the component v, producing an auxiliary component v1 that contains intrinsic contours, parts of the texture, and parts of the edges of v.

Step 3 (oriented-texture support component). An auxiliary image v2 is produced containing the warped oscillatory patterns of v, such as regions characterized by texture or by more subtle oscillating details.

Step 4 (fuzzy representation of edges and textures). A specific morphological filter is applied to v2 to obtain a fuzzy edge-texture representation ωv2: Ω → [0,1] of the image v2.

Step 5 (output of the texture-only component). The component v1, obtained in step 2, is combined with the fuzzy representation ωv2, obtained in step 4, producing an image composed only of texture, intrinsic contours, and fine irregular details, but not noise. This image is denoted ṽ.

Step 6 (output of the restored image and residual component). The component u, obtained in step 1, is composed with the component ṽ generated in the previous step. In addition, the image w, characterized by noise and background (residue), is obtained, thus completing a three-term decomposition.
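The data flow among the six stages can be sketched in code. The sketch below is illustrative only: every helper is a simplistic stand-in (a moving average instead of the nonlinear PDE, a crude threshold instead of wave atom shrinkage, min-max scaling instead of the top-hat pipeline), and none of these helpers are the authors' implementation. Only the wiring of the components u, v, v1, v2, ωv2, ṽ, and w follows the text.

```python
# Data-flow sketch of the six-stage pipeline. All helpers are hypothetical
# stand-ins, NOT the PDE / wave-atom / top-hat operators of the paper.

def smooth(img):
    # Stand-in for the nonlinear PDE smoothing of step 1 (3-tap average).
    n = len(img)
    return [(img[max(i - 1, 0)] + img[i] + img[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]

def oriented_texture(v):
    # Stand-in for step 3 (wave atom shrinkage): keep large oscillations.
    return [x if abs(x) >= 0.1 else 0.0 for x in v]

def fuzzy_map(v2):
    # Stand-in for step 4 (top-hat + normalization to [0, 1]).
    lo, hi = min(v2), max(v2)
    return [1.0 if hi == lo else (x - lo) / (hi - lo) for x in v2]

def decompose(f):
    u = smooth(f)                                  # step 1: cartoon
    v = [fi - ui for fi, ui in zip(f, u)]          # step 1: oscillatory part
    v1 = smooth(v)                                 # step 2: denoised v
    v2 = oriented_texture(v)                       # step 3: oriented texture
    w_map = fuzzy_map(v2)                          # step 4: fuzzy weights
    v_t = [a * b for a, b in zip(w_map, v1)]       # step 5: texture only
    h = [ui + vi for ui, vi in zip(u, v_t)]        # step 6: restored image
    w = [fi - hi for fi, hi in zip(f, h)]          # step 6: noise/background
    return u, v_t, w

f = [0.0, 1.0, 0.2, 0.9, 0.1, 1.0]   # toy 1D "image"
u, v_t, w = decompose(f)
# By construction, f = u + v_t + w pixel by pixel.
```

Real images are 2D arrays; the 1D lists here serve only to keep the skeleton short.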

Figure 1 shows all the steps previously described, while Figures 2, 3, 5, 6, 7, and 8 show details of each of those steps, respectively. In the next section we describe each of these steps.

Illustrative diagram of the proposed algorithm.

Details of step 1. (a) Input image f with noise, (b) structure/cartoon u, and (c) oscillatory patterns v (noise/texture).

Details of step 2. (a) Input image v and (b) intrinsic contours and parts of the texture, here represented by v1. Note in v1 that although there are texture losses close to the neck and background, face intrinsic contours are preserved, such as eyes, mouth, and hair.

3. Description of the Proposed Method

3.1. Classical Image Decomposition

In the first step of the proposed method, the idea is to decompose the initial image f into two components u and v, with f = u + v, as previously described. For this purpose, we use the nonlinear anisotropic PDE proposed by Barcelos et al. , with the selector of best parameters presented in . It is a nonlinear parabolic PDE formulated from a combination of the classical model of Perona and Malik  with the models of Alvarez et al.  and Nordström . Its distinguishing feature is that it eliminates noise through a smoothing process that preserves image edges and contours using automatic detectors. This nonlinear model is presented in detail in the next section.

In this step, the algorithm smooths the observed image f by applying the PDE, obtaining as a result the cartoon component u. Then, the simple operation f − u determines the component having texture and noise, that is, v := f − u. As the objective is to obtain u and v well characterized as structure and texture/noise, respectively, the smoothing process is intensified, and the diffusion velocity can be controlled through the base parameters of the PDE.

An alternative for implementing this step of the algorithm is to use any model capable of good image smoothing, such as [1, 3, 5-8]. Another good alternative is to use a simultaneous cartoon-texture decomposition model, as found in [9-14, 16].

The justification for using the nonlinear PDE proposed in  is that, from the computational point of view, it is more practical: besides giving results similar to others in the literature, it is also needed in another step of our method.

The numerical algorithm used to implement this step of our scheme follows from .

3.2. Denoising Texture-Noise Component

The purpose of this step is to remove noise from the component v (obtained in the previous step) while minimizing the loss of edges, intrinsic contours, textures, and fine details. Again, we use the nonlinear model , but now the applied equation is aided by the best-parameter selector proposed in . The adopted model is based on the following nonlinear parabolic equation:

\[
\frac{\partial v(t)}{\partial t} = g\,|\nabla v(t)|\,\mathrm{div}\!\left(\frac{\nabla v(t)}{|\nabla v(t)|}\right) - \lambda(1-g)\,(v(t)-v), \qquad
v(x)(0) = v(x), \qquad
\left.\frac{\partial v(x)(t)}{\partial \vec{n}}\right|_{\partial\Omega\times\mathbb{R}^{+}} = 0, \quad x \in \Omega,\ t \in \mathbb{R}^{+},
\tag{3.1}
\]

where v represents the initial image having texture and noise, v(t) is its version at scale t, g = g(|∇G_σ * v(t)|) is a nonnegative and nonincreasing function called the diffusivity term, G_σ * v(t) denotes the convolution of the signal v(t) with the Gaussian function G_σ, and λ is a weighting parameter. Here, the symbol |·| denotes the Euclidean norm, while the constant σ denotes the noise standard deviation of the image v.

Generally, the diffusivity term g, besides being nonincreasing and nonnegative, satisfies g(0) = 1, g(s) → 0 as s → ∞, and g(|s|) ∈ [0,1]. In this paper we choose g based on the Perona and Malik  diffusivity term, together with the ideas of Alvarez et al.  and Barcelos et al. :

\[
g(s) = g(|\nabla G_\sigma * v(t)|) = \frac{1}{1 + k\,|\nabla G_\sigma * v(t)|^{2}}
\tag{3.3}
\]

with

\[
G_\sigma = G_\sigma(x,\hat{t}) = \frac{1}{2\sigma\pi\hat{t}}\,\exp\!\left(-\frac{|x|^{2}}{2\sigma\hat{t}}\right),
\tag{3.4}
\]

where k = k(σ) ≥ 0 is a σ-dependent constant and t̂ is a scalar variable related to the scale space produced by the Gaussian (3.4). The best choice for the scale t̂ is described below.

To automate the computation of (3.3) and avoid ad hoc choices for k, we adopt it according to the ideas in , that is,

\[
k(\sigma) =
\begin{cases}
a_1 \exp(a_2\,\sigma), & \sigma \in [0, 201],\\[2pt]
b_1 \exp(b_2\,\sigma), & \sigma \in (201, 350],
\end{cases}
\tag{3.5}
\]

where a_1, a_2, b_1, and b_2 are constants as in . Such a choice brings great advantages, such as eliminating an input parameter of the model and yielding a good choice for the diffusivity term (3.3). In (3.3), k works as an edge selection parameter: for a fixed image, with a high value of k false edges may be identified, while with a small value of k only prominent edges will be selected.
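The diffusivity (3.3) and the selector (3.5) are straightforward to code. In the sketch below, only the functional forms come from the text; the constants a1, a2, b1, b2 are placeholder values (the paper takes the actual values from [2]).

```python
import math

def k_sigma(sigma, a1=1.0, a2=0.01, b1=1.0, b2=0.02):
    # Piecewise-exponential edge-selection parameter (3.5).
    # a1, a2, b1, b2 are hypothetical values; the paper fixes them as in [2].
    if 0.0 <= sigma <= 201.0:
        return a1 * math.exp(a2 * sigma)
    if sigma <= 350.0:
        return b1 * math.exp(b2 * sigma)
    raise ValueError("sigma outside the calibrated range [0, 350]")

def g(grad_norm, k):
    # Perona-Malik-type diffusivity (3.3): ~1 in flat regions, ~0 on edges.
    return 1.0 / (1.0 + k * grad_norm ** 2)

k = k_sigma(25.0)
```

Note how the two regimes discussed in the text fall out directly: g(0, k) = 1 (full diffusion in homogeneous regions) and g decays toward 0 as the smoothed gradient grows (diffusion suppressed on contours).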

Equation (3.1) can be seen as a balance between smoothing and "staying close to the component v". This balance is managed by the diffusivity term g, which is used as an edge detector and also to control the diffusion velocity. In homogeneous regions of the image, |∇G_σ * v(t)| is small, which implies g ≈ 1. Then (1 − g) ≈ 0, and the reaction term (v(t) − v) acts in a practically insignificant way in (3.1). Consequently, the diffusion process carried out by the first term of (3.1) is intense; that is, the smoothing is incisive in these regions. In contrast, in contour regions where |∇G_σ * v(t)| is large, we have g ≈ 0 and (1 − g) ≈ 1, implying that the reaction term (v(t) − v), proposed by Nordström , strongly retains the initial features of the image v under analysis.

The first great advantage of using the nonlinear model (3.1) instead of the classical models in the literature (see [1, 3, 6-8]) is that it applies a balanced diffusion controlled by a sensitive contour detector; since the image v can simultaneously contain texture, irregular details, and noise, only regions without contours are subject to the diffusion of (3.1). Thus, a large part of the texture, edges, and intrinsic contours is kept in this process. It is true, in contrast, that part of the irregular details (warped texture and fine details) is smoothed in the process. Nevertheless, this deficiency is bypassed by our algorithm in the step that synthesizes the oriented-texture support component, described in the next section.

The second great advantage is that only two parameters in the numerical solution of the model given in  must be determined: the constant k and the best scale t̂, both of which can be computed automatically.

In , the authors linked the scale t̂ of the Gaussian kernel G_σ with the noise standard deviation σ_noi of the initial image v = v(x)(t = 0), which resulted in an estimate for the optimal time T to stop the evolutionary process (3.1), given by

\[
T = \frac{\sigma_{noi}^{2}}{a},
\tag{3.6}
\]

where σ_noi is the noise standard deviation of the image v and a = σ, with σ being the constant present in the Gaussian kernel (3.4).

Encouraged by the authors of , we take the optimal smoothing time t̂ as in (3.6), that is, t̂ = T. Given the optimal stopping time T, it is possible to obtain automatically (up to the temporal step Δt) the number of iterations of the model and also the best scale t̂ for the Gaussian function (3.4). Being able to obtain the input parameters of the model (3.1) automatically makes it quite efficient and practical, considerably minimizing user intervention at this step of the algorithm. On the other hand, the optimal time T is directly related to the noise standard deviation σ_noi. For synthetic images it is possible to calculate σ_noi, but for real images it usually is not. In the latter case, we estimate σ_noi based on the visual quality of the image.

To implement (3.1) numerically, the idea is to construct an iterative process whose stopping criterion is based on the optimal time T in (3.6), as presented in . In this case, T is used in the Gaussian function (3.4), as previously described. Also, the number of iterations N is calculated from T as

\[
N = \frac{T}{\Delta t}.
\tag{3.7}
\]
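Under the relation a = σ stated with (3.6), the stopping rule reduces to two lines. The sketch below only illustrates formulas (3.6) and (3.7); the numeric values are arbitrary examples, not parameters from the paper's experiments.

```python
import math

def stopping(sigma_noise, sigma_kernel, dt):
    # Optimal stopping time (3.6), with a = sigma (the Gaussian kernel
    # constant), and iteration count N = T / dt from (3.7), rounded up.
    T = sigma_noise ** 2 / sigma_kernel
    N = math.ceil(T / dt)
    return T, N

T, N = stopping(sigma_noise=5.0, sigma_kernel=5.0, dt=0.1)
# Here T = 25 / 5 = 5 and N = 5 / 0.1 = 50 iterations.
```

For real images, where sigma_noise is unknown, the text's recipe applies: estimate it from the visual quality of v and feed the estimate to the same rule.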

3.3. Oriented Texture Support Component

This is one of the most important steps of the proposed method, because it is here that we extract the oriented texture and most of the oscillatory details of the image. To do this, we use a recent variant of wavelets presented in [18, 19] for texture analysis, known as wave atoms.

Wave atoms are a variant of 2D wavelet packets obeying the important parabolic scaling relation wavelength ≈ (diameter)², which improves the sparse representation of certain oscillatory patterns when compared to more traditional expansions such as wavelets, Gabor atoms, or curvelets. More precisely, warped oscillatory functions (oriented textures) have a significantly sparser expansion in wave atoms than in other representations in the literature.

Compared to other transforms, wave atoms have two great advantages: the ability to adapt to arbitrary local directions of a pattern, and the ability to sparsely represent anisotropic patterns aligned with the axes. Wave atom elements have high directional sensitivity and anisotropy, which makes them ideal wherever the intention is to identify regions characterized by oscillatory patterns such as texture, as is the case here.

In the following, based on , we give a brief explanation of the mathematical background of wave atoms. For more details, see also [19, 28].

Consider wave atoms φ_μ, with subscript μ = (j, m, n) = (j, m_1, m_2, n_1, n_2). These five quantities are integers and index a point (x_μ, ω_μ) in phase space, with

\[
x_\mu = 2^{-j} n, \qquad \omega_\mu = \pi 2^{j} m, \qquad c_1 2^{j} \le \max_{i=1,2} |m_i| \le c_2 2^{j},
\]

where c_1 and c_2 are two positive constants. According to , the elements of a frame of wave packets φ_μ are called wave atoms when

\[
|\hat{\varphi}_\mu(\omega)| \le C_M\, 2^{-j} \left(1 + 2^{-j}\,|\omega - \omega_\mu|\right)^{-M} + C_M\, 2^{-j} \left(1 + 2^{-j}\,|\omega + \omega_\mu|\right)^{-M},
\]
\[
|\varphi_\mu(x)| \le C_M\, 2^{j} \left(1 + 2^{j}\,|x - x_\mu|\right)^{-M}, \qquad \text{for all } M > 0.
\]

To construct wave atoms for our problem, we first consider a 1D family of wave packets ψ_{m,n}^{j}(x), j ≥ 0, m ≥ 0, n ∈ ℤ, subject to the same conditions above. Let ϕ be a continuous real function supported in [−7π/6, 5π/6] such that, for |ω| ≤ π/3,

\[
\phi\!\left(\frac{\pi}{2} - \omega\right)^{2} + \phi\!\left(\frac{\pi}{2} + \omega\right)^{2} = 1, \qquad
\phi\!\left(-\frac{\pi}{2} - 2\omega\right) = \phi\!\left(\frac{\pi}{2} + \omega\right).
\]

Considering

\[
\psi_m^{0}(t) = 2\,\mathrm{Re}\!\left\{ \exp\!\left(i\pi\left(m + \tfrac{1}{2}\right)t\right)\, \nu\!\left((-1)^{m}\left(t - \tfrac{1}{2}\right)\right) \right\},
\qquad
\nu(t) = \frac{1}{2\pi} \int \phi(\omega)\, \exp(i\omega t)\, d\omega,
\]

the Fourier transform of ψ_m^0 is given by

\[
\hat{\psi}_m^{0}(\omega) = \exp\!\left(-\frac{i\omega}{2}\right)
\left[ \exp(i\alpha_m)\,\phi\!\left(\epsilon_m\!\left(\omega - \pi\left(m + \tfrac{1}{2}\right)\right)\right)
+ \exp(-i\alpha_m)\,\phi\!\left(\epsilon_{m+1}\!\left(\omega + \pi\left(m + \tfrac{1}{2}\right)\right)\right) \right],
\]

where ε_m = (−1)^m and α_m = (π/2)(m + 1/2). In this case, ϕ must be such that Σ_{m=0}^{∞} |ψ̂_m^0|² = 1.

Thus, we can write the functions that make up the basis as

\[
\psi_{m,n}^{j}(x) = \psi_m^{j}(x - 2^{-j} n) = 2^{j/2}\, \psi_m^{0}(2^{j} x - n),
\]

whose coefficients can be obtained by

\[
c_{j,m,n} = \int \psi_{m,n}^{j}(x)\, v(x)\, dx = \frac{1}{2\pi} \int \exp\!\left(i 2^{-j} n \omega\right)\, \overline{\hat{\psi}_m^{j}(\omega)}\, \hat{v}(\omega)\, d\omega.
\]

Here, v represents the observed signal.

According to , the extension to the two-dimensional case (2D version) can be computed by

\[
\varphi_\mu^{+}(x_1, x_2) := \psi_{m_1,n_1}^{j}(x_1)\, \psi_{m_2,n_2}^{j}(x_2), \qquad
\varphi_\mu^{-}(x_1, x_2) := H\psi_{m_1,n_1}^{j}(x_1)\, H\psi_{m_2,n_2}^{j}(x_2),
\]

where μ = (j, m, n) and the second equation is based on the Hilbert transform H of these wavelet packets. Therefore, the combinations φ_μ^(1) = (φ_μ^+ + φ_μ^−)/2 and φ_μ^(2) = (φ_μ^+ − φ_μ^−)/2 make up a wave atom frame.

For the denoising problem, it is advisable to use wave atom shrinkage, which is formulated in most cases by

\[
v_c = \sum_\mu \theta\!\left(c_\mu^{(1)}(v)\right) \varphi_\mu^{(1)} + \theta\!\left(c_\mu^{(2)}(v)\right) \varphi_\mu^{(2)},
\]

where θ = θ_γ is a thresholding function, here taken as the hard threshold

\[
\theta_\gamma(x) =
\begin{cases}
x, & |x| \ge \gamma,\\[2pt]
0, & |x| < \gamma,
\end{cases}
\tag{3.10}
\]

where γ is the threshold value.

In this work we use wave atom shrinkage to extract oriented texture from the image component v. We take a transform based on wave atoms as follows:

\[
v_2 := \tilde{T}_\gamma v = (WA)^{-1}\, \theta_\gamma\, (WA)(v),
\]

where WA denotes the wave atom transform (we use the 2D version), (WA)^{-1} its inverse (see [18, 19]), θ_γ is the threshold function (3.10) mentioned earlier, and v_2 is the output component. Here, the nonlinear operator T̃ keeps and highlights important characteristics of the examined image v such as warped texture, oscillating details, and irregular patterns.

The implementation of our wave atom shrinkage transform consists of three steps. First, we apply the wave atom transform WA. Next, we remove insignificant wave atom coefficients through the thresholding (3.10). Finally, we apply the inverse transform (WA)^{-1} to reconstruct the signal from the remaining coefficients. For a computational discretization of the wave atoms present in the proposed model, we used the WaveAtomLab package, which can be found at http://www.waveatom.org/.
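The analyze-threshold-synthesize pattern above is transform-agnostic. Since the wave atom transform itself requires the toolbox mentioned above, the sketch below substitutes a one-level orthonormal Haar transform as a stand-in for WA, purely to demonstrate the shrinkage pipeline with the hard threshold (3.10); it is not the paper's transform.

```python
def haar(v):
    # One-level orthonormal Haar analysis (stand-in for the WA transform).
    s = 2 ** 0.5
    avg = [(v[2*i] + v[2*i + 1]) / s for i in range(len(v) // 2)]
    dif = [(v[2*i] - v[2*i + 1]) / s for i in range(len(v) // 2)]
    return avg + dif

def ihaar(c):
    # Inverse of the one-level Haar analysis above.
    h, s, out = len(c) // 2, 2 ** 0.5, []
    for a, d in zip(c[:h], c[h:]):
        out += [(a + d) / s, (a - d) / s]
    return out

def theta(x, gamma):
    # Hard threshold (3.10): keep coefficients with |x| >= gamma.
    return x if abs(x) >= gamma else 0.0

def shrink(v, gamma):
    # v2 := T^{-1} . theta_gamma . T (v), with T = Haar in this sketch.
    return ihaar([theta(c, gamma) for c in haar(v)])

v = [1.0, 1.1, -0.9, -1.0]      # even-length toy signal
v2 = shrink(v, gamma=0.2)       # small fluctuations are suppressed
```

With gamma = 0, no coefficient is removed and the signal is reconstructed exactly; a positive gamma suppresses the small detail coefficients, which is exactly the mechanism used on the wave atom coefficients of v.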

3.3.1. Wave Atoms × Other Systems

The main advantage of methods based on wavelet variants is space-frequency localization and a multiscale view of surface features. However, it is known that traditional wavelets are not good at analyzing surfaces with "scratches" or textures, because wavelets ignore the geometric properties of edges and textures, which leads to strong oscillations along these "scratches".

In contrast, curvelet transforms, such as [29, 30], are multiscale geometric transforms that constitute an optimal sparse representation of objects with singularities along C² curves. Nevertheless, they do not work as efficiently when the objective is to represent oscillating textures; that is, they are not efficient at characterizing surfaces with warped textures, such as fingerprints and photographs, among other types of images.

Curvelets are good for representing edges while wave atoms are good for representing oscillatory patterns and textures. Wave atom texture-shape elements not only capture the coherence along the oscillations like curvelets but also take into consideration patterns across the oscillations (see Figures 4(a) and 4(b)). Since the objective is to characterize surfaces with oriented textures, we have a great advantage in applying wave atom transforms for this purpose.

Elements of the curvelets and wave atoms. (a) A digital curvelet  and (b) a digital wave atom .

Details of step 3. (a) Input image v and (b) oriented texture and irregular details v2. Note that contrary to what happened in Figure 3(b), all oriented texture is highlighted in v2; however, intrinsic contours (details of face) are lost or left with no definition.

Details of step 4. (a) Input image v2 and (b) highlight of oriented texture of v2 with background correction (in the scale [0,1]), here denoted by ωv2.

Details of step 5. (a) Input image v1, which is characterized by intrinsic contours, (b) fuzzy representation ωv2 of warped texture, and (c) combination of features of v1 with ωv2, resulting in the final component ṽ defined by oscillatory patterns of observed image f, except noise.

Details of final step 6. (a) Input image u, (b) texture ṽ, (c) recovered image h̃, obtained by the addition of u to ṽ, and (d) residual w containing noise and small details of observed image f. Here, the reconstruction process of each component validates the decomposition of three terms f=h̃+w=u+ṽ+w.

3.4. Fuzzy Representation of Edges and Textures

In this step, the goal is to produce a fuzzy representation (in [0,1]) of the features, contours, and principally of the oriented (nonintrinsic) texture of the support image v2 generated in the previous step. For this purpose, we first apply a morphological filter to simultaneously remove the background and heterogeneous regions of v2, seeking to highlight the oscillatory characteristics of that image. Next, the algorithm normalizes the image; that is, it maps the preprocessed image to the interval [0,1]. The main idea here is to use a morphological filter, as presented hereafter.

3.4.1. Mathematical Morphology

For the treatment of the image v2, we used an approach based on morphological filters, which have proven to be a powerful tool for analyzing the structure of an image as well as for investigating the geometry of the objects that constitute it. As the objective here is to maintain the oscillatory features of the component v2, a good alternative is to combine the analyzed image with its version obtained by an opening or closing operator. Such combinations create a class of transformations called top-hats.

According to , the opening (white) top-hat transform WTH of an image I is defined by WTH(I) = I − γ(I) (3.12), while the closing (black) top-hat transform BTH is given by BTH(I) = ρ(I) − I (3.13), where γ and ρ are opening and closing operators, respectively. For details, see .
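With flat structuring elements, grayscale opening and closing are compositions of sliding-window minima and maxima, so both top-hats fit in a few lines. The 1D sketch below is illustrative only; the paper's experiments use 2D disk and ball structuring elements.

```python
def erode(sig, r):
    # Grayscale erosion with a flat window of radius r (local minimum).
    n = len(sig)
    return [min(sig[max(i - r, 0):min(i + r + 1, n)]) for i in range(n)]

def dilate(sig, r):
    # Grayscale dilation with the same window (local maximum).
    n = len(sig)
    return [max(sig[max(i - r, 0):min(i + r + 1, n)]) for i in range(n)]

def white_tophat(sig, r):
    # WTH(I) = I - opening(I): keeps bright details thinner than the window.
    opened = dilate(erode(sig, r), r)
    return [a - b for a, b in zip(sig, opened)]

def black_tophat(sig, r):
    # BTH(I) = closing(I) - I: keeps dark details thinner than the window.
    closed = erode(dilate(sig, r), r)
    return [a - b for a, b in zip(closed, sig)]

sig = [0, 0, 5, 0, 0, 0, 0]     # one bright spike on a flat background
wth = white_tophat(sig, r=1)    # the opening removes the spike, so the
                                # white top-hat isolates it
```

This is the behavior exploited in the next paragraph: the top-hat isolates fine-scale detail (texture) while flattening the background, after which the result can be normalized to [0,1].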

In this step of the algorithm, the objective is to emphasize the texture and simultaneously remove heterogeneous parts of the image. For that, we opted for a top-hat transform. Precisely, we applied transformation (3.13) to the image v2, obtained in the previous step, to highlight its oscillatory features while correcting its background. The resulting image will have its oscillatory details characterized by gray shades very close to black, while the interior of objects and the background will be characterized by gray shades close to white.

To finish this step, we map the preprocessed image to the gray-shade interval [0,1]. The output image obtained in this process is denoted here by ωv2. The component ωv2 is a fuzzy representation of contours and warped texture, which serves as a guide to generate the final component ṽ, defined by the oriented texture, intrinsic contours, and irregular details of the initial image.

3.5. Output of Component Having Only Texture

This step aims to synthesize the final component ṽ, representing all oscillatory features of the image except noise. This component must be composed of the oriented (nonintrinsic) texture, intrinsic contours, edges, and irregular details.

To do this, the idea is to combine the auxiliary components generated in steps 2 and 4, that is, the component v1, defined by intrinsic contours and parts of the texture, and the component ωv2, characterized by the fuzzy representation of the oriented texture of v. Motivated by the ideas of [1, 2, 6], we introduce an efficient weighting technique between ωv2 and v1, so that characteristics absent from each component can be compensated by the other. More precisely, the texture contained in ωv2 is superimposed on the regions where this information is absent from the support image v1. Therefore, the second component of our three-term decomposition, ṽ, is given by

\[
\tilde{v}(x) := \omega_{v_2}(x) \cdot v_1(x), \qquad x \in \Omega,
\]

where the above product is computed pixel by pixel.
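The weighting is a plain pixel-by-pixel (Hadamard) product, and because the weights lie in [0,1] it can only attenuate v1, never amplify it. A minimal sketch, with made-up values:

```python
def weight(omega, v1):
    # v~(x) = omega_{v2}(x) * v1(x): fuzzy weights in [0, 1] gate v1.
    return [w * x for w, x in zip(omega, v1)]

omega = [1.0, 0.2, 0.0]   # 1 = pixel fully kept, 0 = fully suppressed
v1    = [0.8, 0.5, 0.9]   # denoised texture-contour component
vt = weight(omega, v1)    # -> [0.8, 0.1, 0.0]
```

Since every weight satisfies 0 ≤ ω ≤ 1, each output pixel is bounded by the corresponding pixel of v1, matching the range-preservation remark below.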

The idea proposed here is very similar to that of the diffusivity term g studied in step 2, which balances and attributes weights to each pixel according to its classification in the image.

This weighting of pixels does not change the range of the input image v1, since ωv2 belongs to the initial range [0,1].

Because the closing top-hat transform (3.13) was applied, pixels that represent texture in ωv2 will be closer to zero, while those that represent background and homogeneous regions will be close to one. On the other hand, the component v1 contains no pixels with high variations, since the noise was previously eliminated. Moreover, the component v1 preserves the edges and intrinsic contours of the observed image f, which does not happen with the term ωv2. Therefore, the main advantages of each component can be exploited: the oriented texture of ωv2 and the intrinsic contours of v1.

3.6. Output of Restored Image and Residual Component

The last step of the proposed scheme consists of obtaining the recovered image h̃ and the image w composed of noise and background. Moreover, this step finishes the decomposition of the initial image f into the three components previously described: u (structure), ṽ (texture), and w (noise and background).

As the component u represents the structure/cartoon of the observed image f and ṽ its warped texture, edges, and intrinsic contours (but not noise), according to the classic Meyer model  it is sufficient to add the two to obtain the restored image. The great advantage is that the noise was removed by the previous steps, both for u and for ṽ. Then, in this case,

\[
\tilde{h}(x) := u(x) + \tilde{v}(x), \qquad x \in \Omega,
\]

where the sum is done pixel by pixel.

The noise is characterized by calculating the residue between the restored image h̃ and the observed image f, that is,

\[
w(x) := f(x) - \tilde{h}(x), \qquad x \in \Omega,
\]

where the difference is calculated pixel by pixel. This operation defines not only the noise added to f but also small fragments of the background of the image.
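The two pixelwise operations of this final step close the decomposition exactly, since w is defined as what is left over. A minimal sketch with made-up pixel values:

```python
def recompose(u, vt, f):
    # h~ = u + v~ (restored image); w = f - h~ (noise and background).
    h = [a + b for a, b in zip(u, vt)]
    w = [a - b for a, b in zip(f, h)]
    return h, w

u  = [10.0, 20.0, 30.0]   # cartoon component
vt = [ 1.0, -2.0,  0.5]   # texture component
f  = [12.0, 18.0, 31.0]   # observed image
h, w = recompose(u, vt, f)
# By definition of w, the identity f = u + v~ + w holds pixel by pixel.
```

This identity is exactly the three-term decomposition f = u + ṽ + w claimed in the text; w absorbs whatever the two denoised components do not explain.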

Finally, besides generating the reconstructed image h̃, the algorithm also satisfies the three-term decomposition f = u + ṽ + w, or the classical two-component decomposition f = h̃ + w (see Figure 8).

4. Experimental Results

We now present some experiments with our scheme, using grayscale images defined on the standard interval [0,255]. All tested images are matrices of dimension 256×256. For synthetic images, it is possible to calculate the noise standard deviation σ_noi; for real images, σ_noi must be estimated in some other way. Thus, in step 2 we choose σ_noi based on the visual quality of v. On the other hand, σ_noi can be provided by the user in step 1, since the goal there is to smooth the image f.

To validate our approach against the tested methods, we used the statistical measure PSNR (peak signal-to-noise ratio), measured in dB.
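For images on [0,255], the PSNR used throughout this section is the standard 10·log10(255²/MSE); a sketch with flattened pixel lists:

```python
import math

def psnr(orig, rec, peak=255.0):
    # Peak signal-to-noise ratio in dB between an original and a
    # recovered image (both given as flat lists of pixel values).
    mse = sum((a - b) ** 2 for a, b in zip(orig, rec)) / len(orig)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```

A uniform error of 10 gray levels, for instance, gives MSE = 100 and a PSNR of about 28.13 dB; higher values mean the recovered image is closer to the original.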

In the first step of the algorithm, we use (3.1) supported by (3.3) and (3.5), where (motivated by ) we adopt λ = 1 and the temporal step Δt = 0.1 in all examples considered. The number of iterations N and the noise standard deviation σ_noi must be given, because they vary with each experiment. In the second step, we again use the equation described above and, once again, adopt λ = 1. Here, the number of iterations N is determined by (3.7), while Δt and σ_noi (for real images) remain input parameters. In the third step, the only input parameter is the threshold value γ, while in the fourth step we adopt two types of structuring element in the top-hat transform: disk or ball. In the fifth and sixth steps there are no parameters to be determined or provided. For the images in Figure 3, N = 50 and σ_noi = 25, while in Figure 4, Δt = 0.1. In Figure 5 we take γ = 0.23, and in Figure 6 a disk of radius 4.

In Section 4.1 we emphasize the three-term decomposition technique previously mentioned. In Section 4.2 we evaluate the performance of the proposed scheme in comparison with other recent models in the literature.

4.1. Restoring and Decomposition Using the Proposed Scheme

In the following, we show two experiments on images with different levels of complexity: a highly detailed real image and a fingerprint image.

Our first experiment concerns the real Barbara image. Here, the image contaminated with noise (SNR = 9.1) contains important features to be preserved, such as the textures on the legs, in the region near the neck, and in the background, and the intrinsic contours of the face. Figure 9(a) presents the noisy image f. Figure 9(c) shows the component u, characterized by the structure/cartoon of f, obtained in the first step of our algorithm (σ_noi = 15 and N = 30). Taking Δt = 0.15 and σ_noi = 5 in the second step, adopting γ = 0.06 in the third step, and choosing a ball of radius 12 and height 10 in the fourth step, the algorithm generates Figure 9(d), ṽ, containing the restored oscillatory details, except noise. Both intrinsic contours and oriented texture of f are present in ṽ; that is, there was no significant loss of any type of oscillatory feature. In both Figure 9(c) and Figure 9(d), the components remain well defined in the visual perception sense of classical decomposition models such as [9, 13]. In Figure 9(b) we present the recovered image h̃, and in Figure 9(e) the component w, containing the noise and the background. Note that our algorithm removes noise efficiently while preserving texture and contours, besides producing, from a visual point of view, a well-defined three-term decomposition.

Decomposition into three components. (a) Observed image f, (b) image recovered using proposed method, (c) structure/cartoon u, (d) texture and intrinsic details ṽ, and (e) noise and residual parts w. Here, f=u+ṽ+w.
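The white top-hat transform used in the fourth step (image minus its morphological opening) can be sketched in plain numpy. The version below uses a flat disk structuring element for simplicity; the ball element of the experiment above additionally carries a height profile inside the min/max operations, which is omitted here.

```python
import numpy as np

def disk_footprint(r):
    """Boolean disk of radius r, used as a flat structuring element."""
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return x**2 + y**2 <= r**2

def white_tophat(img, footprint):
    """Flat grayscale white top-hat: img minus its opening (erosion then
    dilation). Keeps bright details smaller than the structuring element."""
    r = footprint.shape[0] // 2

    def _morph(a, op, init):
        p = np.pad(a.astype(float), r, mode="edge")
        out = np.full(a.shape, init)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if footprint[dy + r, dx + r]:
                    win = p[r + dy:r + dy + a.shape[0],
                            r + dx:r + dx + a.shape[1]]
                    out = op(out, win)
        return out

    eroded = _morph(img, np.minimum, np.inf)      # local minimum
    opened = _morph(eroded, np.maximum, -np.inf)  # then local maximum
    return img - opened
```

A bright spike narrower than the disk is erased by the opening, so the top-hat returns it intact; a flat region yields zero, which is the behavior the fourth step relies on to isolate fine oscillatory details.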

In the second experiment we take a synthetic fingerprint image with a considerable noise level (SNR=4.5). Figure 10(a) shows the noisy version. Figures 10(b), 10(c), and 10(d) show the three components of the evaluated decomposition: structure/cartoon u, texture ṽ, and noise-background w, respectively, while Figures 11(a) and 11(b) show the original image and the restored image obtained with our scheme (step 1: σnoi=25, N=200; step 2: Δt=0.06; step 3: γ=0.16; step 4: B = ball with radius 10 and height 3), respectively. In terms of visual quality, the restored image in Figure 11(b) is fairly close to the original image in Figure 11(a). Furthermore, the residual image w=f-(u+ṽ), shown in Figure 10(d), contains only noise and parts of the background of the observed image f, without retaining any trace of the fingerprint texture.

Decomposition into three components. (a) Noisy image f, (b) structure/cartoon u, (c) texture and intrinsic details ṽ, and (d) noise and background w. All three components satisfy f=u+ṽ+w.

Restoring the image f. (a) Original image h (without noise) and (b) recovered image h̃.

4.2. Comparison to Some Existing Methods

To attest to the good performance of the proposed method, we compared it to recent models in the literature. The parameters adopted in each of the tested models were chosen according to the best visual quality obtained from each of them, in addition to the computation of the PSNR between the original image and the compared image. Classical models that remove noise but do not treat texture were not considered.
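For reference, the PSNR figures quoted in this section can be computed as follows. The peak value of 255 for 8-bit images is an assumption, since the paper does not state its normalization.

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape.
    Higher is better; identical images give infinity."""
    mse = np.mean((np.asarray(original, dtype=float) -
                   np.asarray(restored, dtype=float)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak**2 / mse)
```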

Figure 12(a) shows a noisy part (SNR=9.97, PSNR=18.58) of Barbara's image. Figures 12(b) and 12(c) were obtained using the curvelet transform [29, 30] (PSNR=19.04) and the wave atom transform [18, 19] (PSNR=20.89), respectively. Here, both techniques were supported by hard thresholding. In the image obtained with curvelets, the texture was not appropriately recovered; moreover, pseudo-Gibbs phenomena were present. In contrast, the wave atom transform correctly restored the texture but produced a blurred image. Figure 12(d) shows the image restored by the model based on an adaptive fidelity term  (with C=0, PSNR=20.95), while Figure 12(e) shows the version recovered by the model described in  (step size 0.0005, 11 iterations, PSNR=21.06), which combines nonlinear anisotropic diffusion with curvelet shrinkage. Although the model based on the adaptive fidelity term recovered the texture, some important image details, such as the face and hand, were excessively smoothed. The diffusion-based curvelet shrinkage, in turn, did not produce any kind of intensive smoothing but retained part of the noise in the reconstructed image, despite its higher PSNR. Figure 12(f) (PSNR=22.12) is the image restored by the proposed scheme using the same parameters mentioned at the beginning of this section. In this case, both the texture and the image details (intrinsic contours) are recovered, and the noise level is satisfactorily reduced without excessive smoothing. Furthermore, the PSNR of the proposed method is the highest among the techniques considered. Figures 13(a), 13(b), 13(c), 13(d), and 13(e) show the residues, with respect to Figure 12(a), obtained from Figures 12(b), 12(c), 12(d), 12(e), and 12(f), respectively. As mentioned before, we can see that the curvelet transform (Figure 13(a)) removed part of the texture. In contrast, the wave atom transform (Figure 13(b)) maintained the texture but extracted some features from the image. In the residue of Figure 13(c), generated by the model , there is no texture; nevertheless, all intrinsic contours of the image remain annexed to the residue. This does not happen in the component of Figure 13(d) extracted by the model , since the texture and details of the image appear in the residue in much smaller proportions, although a considerable part of the texture was still removed in the process. Finally, the component of Figure 13(e) extracted by our algorithm shows that only noise and few relevant details were left annexed to the residue, attesting to the efficiency of the proposed method.
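Both transform-domain methods above rely on hard thresholding of the transform coefficients. The function below is a generic sketch, independent of the particular transform: coefficients below the threshold are zeroed while the survivors are kept unchanged, unlike soft thresholding, which also shrinks the survivors toward zero.

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard thresholding: zero out coefficients with magnitude below t,
    keep the remaining coefficients exactly as they are."""
    c = np.asarray(coeffs, dtype=float)
    return np.where(np.abs(c) >= t, c, 0.0)
```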

Comparison to models existing in literature. (a) Noisy image, (b) denoising by curvelets, (c) by wave atoms, (d) by adaptive fidelity term, (e) by nonlinear diffusion combined with curvelet, and (f) by proposed scheme.

Components removed using each of the methods. (a) Residual by curvelets, (b) by wave atoms, (c) by adaptive fidelity term, (d) by nonlinear diffusion combined with curvelet, and (e) by proposed scheme.

5. Conclusion

In this work we gather important mathematical techniques for image processing and combine them to generate an efficient algorithm for decomposition and noise removal. The new method is aimed at treating images contaminated with noise and having a high texture concentration, intrinsic contours, and irregular patterns. The scheme combines elementary techniques, such as classical morphological operators, with more sophisticated harmonic analysis models, such as wave atoms. Moreover, in some of its processing steps, the scheme automatically selects the best parameters. Based on the proposed method, we introduce an efficient decomposition scheme that separates the observed image into three well-defined components, as shown previously. One of the advantages of this type of decomposition is that it makes possible, from the degraded image, the individual treatment of each component, which opens up a range of image processing applications such as image segmentation and digital inpainting. Experimental tests show the efficiency of the new method, even when compared to recent harmonic analysis techniques and to models based on nonlinear diffusion for the processing of textured images.

Acknowledgments

The authors thank the São Paulo State Research Foundation (FAPESP) and the Brazilian Commission for Higher Education (CAPES) for financial support. They also thank Alagacone Sri Ranga and the anonymous referees for suggestions on the improvement of the paper.

References

1. L. Alvarez, P. L. Lions, and J. M. Morel, "Image selective smoothing and edge detection by nonlinear diffusion. II," SIAM Journal on Numerical Analysis, vol. 29, no. 3, pp. 845–866, 1992.
2. C. A. Z. Barcelos, M. Boaventura, and E. Silva Jr., "A well-balanced flow equation for noise removal and edge detection," IEEE Transactions on Image Processing, vol. 12, pp. 751–763, 2003.
3. M. Burger, G. Gilboa, S. Osher, and J. Xu, "Nonlinear inverse scale space methods," Communications in Mathematical Sciences, vol. 4, no. 1, pp. 179–212, 2006.
4. M. Burger, S. Osher, J. Xu, and G. Gilboa, "Nonlinear inverse scale space methods for image restoration," Lecture Notes in Computer Science, vol. 3752, pp. 85–96, 2005.
5. T. F. Chan, S. Osher, and J. Shen, "The digital TV filter and nonlinear denoising," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 231–241, 2001.
6. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629–639, 1990.
7. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, vol. 60, no. 1–4, pp. 259–268, 1992.
8. K. N. Nordström, "Biased anisotropic diffusion: a unified regularization and diffusion approach to edge detection," Image and Vision Computing, vol. 8, no. 4, pp. 318–327, 1990.
9. Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, vol. 22 of University Lecture Series, American Mathematical Society, Providence, RI, USA, 2002.
10. J. B. Garnett, T. M. Le, Y. Meyer, and L. A. Vese, "Image decompositions using bounded variation and generalized homogeneous Besov spaces," Applied and Computational Harmonic Analysis, vol. 23, no. 1, pp. 25–56, 2007.
11. S. E. Levine, M. Ramsey, T. Misner, and S. Schwab, "An adaptive variational model for image decomposition," in Proceedings of the 5th International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR '05), vol. 3757 of Lecture Notes in Computer Science, pp. 382–397, St. Augustine, Fla, USA, November 2005.
12. L. Lieu, Contribution to problems in image restoration, decomposition, and segmentation by variational methods and partial differential equations, Ph.D. thesis, UCLA, Los Angeles, Calif, USA, 2006.
13. L. A. Vese and S. J. Osher, "Modeling textures with total variation minimization and oscillating patterns in image processing," Journal of Scientific Computing, vol. 19, no. 1–3, pp. 553–572, 2003.
14. W. Yin, D. Goldfarb, and S. Osher, "Image cartoon-texture decomposition and feature selection using the total variation regularized L1 functional," in Variational, Geometric, and Level Set Methods in Computer Vision, vol. 3752 of Lecture Notes in Computer Science, pp. 73–84, Springer, New York, NY, USA, 2005.
15. T. F. Chan, S. Esedoglu, and F. E. Park, "Image decomposition combining staircase reduction and texture extraction," Journal of Visual Communication and Image Representation, vol. 18, no. 6, pp. 464–486, 2007.
16. J. F. Aujol and A. Chambolle, "Dual norms and image decomposition models," International Journal of Computer Vision, vol. 63, no. 1, pp. 85–104, 2005.
17. J. L. Starck, E. J. Candès, and D. L. Donoho, "The curvelet transform for image denoising," IEEE Transactions on Image Processing, vol. 11, no. 6, pp. 670–684, 2002.
18. L. Demanet and L. Ying, "Wave atoms and sparsity of oscillatory patterns," Applied and Computational Harmonic Analysis, vol. 23, no. 3, pp. 368–387, 2007.
19. L. Demanet and L. Ying, "Curvelets and wave atoms for mirror-extended images," in Wavelets XII, vol. 6701 of Proceedings of SPIE, San Diego, Calif, USA, August 2007.
20. J. Ma and G. Plonka, "Combined curvelet shrinkage and nonlinear anisotropic diffusion," IEEE Transactions on Image Processing, vol. 16, no. 9, pp. 2198–2206, 2007.
21. W. C. O. Casaca and M. Boaventura, "A regularized nonlinear diffusion approach for texture image denoising," in Proceedings of the 22nd Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI '09), IEEE Computer Society, Rio de Janeiro, Brazil, October 2009.
22. G. Gilboa, N. Sochen, and Y. Y. Zeevi, "Variational denoising of partly textured images by spatially varying constraints," IEEE Transactions on Image Processing, vol. 15, no. 8, pp. 2281–2289, 2006.
23. J. Ma, "Image assimilation by geometric wavelet based reaction-diffusion equation," in Wavelets XII, vol. 6701 of Proceedings of SPIE, San Diego, Calif, USA, August 2007.
24. G. Plonka and J. Ma, "Nonlinear regularized reaction-diffusion filters for denoising of images with textures," IEEE Transactions on Image Processing, vol. 17, no. 8, pp. 1283–1294, 2008.
25. C. A. Z. Barcelos, M. Boaventura, and E. Silva Jr., "Edge detection and noise removal by use of a partial differential equation with automatic selection of parameters," Computational & Applied Mathematics, vol. 24, no. 1, pp. 131–150, 2005.
26. L. Demanet, Curvelets, wave atoms and wave equations, Ph.D. thesis, California Institute of Technology, Pasadena, Calif, USA, 2006.
27. P. Soille, Morphological Image Analysis: Principles and Applications, Springer, Berlin, Germany, 2nd edition, 2003.
28. L. F. Villemoes, "Wavelet packets with uniform time-frequency localization," Comptes Rendus Mathématique, vol. 335, no. 10, pp. 793–796, 2002.
29. E. Candès, L. Demanet, D. Donoho, and L. Ying, "Fast discrete curvelet transforms," Multiscale Modeling & Simulation, vol. 5, no. 3, pp. 861–899, 2006.
30. E. J. Candès and D. L. Donoho, "New tight frames of curvelets and optimal representations of objects with piecewise C2 singularities," Communications on Pure and Applied Mathematics, vol. 57, no. 2, pp. 219–266, 2004.