Mathematical Problems in Engineering, Hindawi Publishing Corporation, Article ID 518747, doi:10.1155/2013/518747

Research Article

Nonlocal Variational Model for Saliency Detection

Meng Li (1), Yi Zhan (2), and Lidan Zhang (3)

1 Department of Mathematics & KLDAIP, Chongqing University of Arts and Sciences, Yongchuan, Chongqing 402160, China
2 College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing 400067, China
3 College of Mathematics and Physics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China

Received 4 June 2013; Accepted 25 August 2013; Published 23 September 2013

Academic Editor: Baocang Ding

Copyright © 2013 Meng Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present a nonlocal variational model for saliency detection from still images, in which various features for visual attention can be detected by minimizing the energy functional. The associated Euler-Lagrange equation is a nonlocal p-Laplacian-type diffusion equation with two reaction terms, so the detection process is a nonlinear diffusion. The main advantage of our method is that it provides flexible and intuitive control over the detection procedure through the temporal evolution of the Euler-Lagrange equation. Experimental results on various images show that our model makes background details diminish gradually while the rich subtle details of the foreground are preserved very well.

1. Introduction

Saliency is an important and basic visual feature for describing image content. It can be a particular location, object, or set of pixels that stands out relative to its neighbors and thus captures people's attention. Saliency detection technologies, which locate the most important areas of natural scenes, are very useful in image and video processing tasks such as image retrieval [1], video compression [2], and video analysis [3]. However, saliency detection remains a difficult task because it requires, to some extent, a semantic understanding of the image. A further difficulty arises from the fact that most natural images contain varied texture and color information. A large number of good algorithms and methodologies have been developed for this task. Saliency detection methods can be roughly categorized as biologically based [4, 5], purely computational [6-12], or combinations of the two ideas [13-15].

Itti et al. [4] devise their method based on the biologically plausible architecture proposed by Koch and Ullman [16], in which multiple low-level visual features, such as intensity, color, orientation, texture, and motion, are extracted from images at multiple scales and used to compute saliency. They determine center-surround contrast using a difference-of-Gaussians approach. Inspired by Itti's method, Frintrop et al. [5] present a method in which center-surround differences are computed with square filters.

Unlike the biological methods, the purely computational models [6-12] are not explicitly based on biological vision principles. Ma and Zhang [6] and Achanta et al. [7, 8] measure saliency using center-surround feature distances. Hou and Zhang [9] devise a saliency detection model based on a concept called the spectral residual (SR). Liu et al. [10] obtain saliency maps of images through machine learning. The model in [11] computes saliency maps by applying the inverse Fourier transform to a constant amplitude spectrum combined with the original phase spectrum of the image. Feng et al. [12] define multiscale contrast features as a linear combination of contrasts in a Gaussian image pyramid.

The third category of methods is partly biological and partly computational, that is, a combination of the two ideas. For instance, Harel et al. [13] create feature maps using Itti's method but perform normalization with a graph-based approach. In [14], Bruce and Tsotsos present a saliency computation method modeled on the visual cortex, based on the premise that localized saliency computation serves to maximize the information sampled from one's environment. Fang et al. [15] propose a saliency detection model based on human visual sensitivity and the amplitude spectrum of the quaternion Fourier transform.

These methods [4-15] build elegant saliency maps based on biological theories and/or computational frameworks. However, some key characteristics of the object are still neglected by these models. For example, the saliency maps generated by the methods of [4-6, 9, 13] have low resolution. Moreover, the outputs of [4, 5, 13] have ill-defined boundaries, and the methods of [6, 9] produce higher saliency values at object edges rather than over the whole object. The methods of [7, 8, 15] produce saliency maps of the same size as the input image. Although [7, 8] achieve higher precision than [4-6, 9, 13], they cannot suppress background information well. Additionally, the method of [15] has difficulty extracting subtle details (e.g., the texture within the salient region), which are very important for visual perception and are a primary visual cue for pattern recognition. Moreover, the study of the human attention mechanism is not yet mature. Therefore, for high-level applications such as image retrieval and browsing, we should exploit mechanisms that produce accurate saliency.

In this paper, we focus on the problem of saliency detection in a variational framework. The main advantage of variational methods for image processing is that they can be easily formulated under an energy minimization framework and allow the inclusion of constraints that ensure image regularity while preserving important features. Over the past decades, many researchers have devoted their work to the development of variational models and have proposed many good algorithms for important topics in image analysis and computer vision, including anisotropic diffusion for image denoising [17], p-Laplacian evolution for image analysis [18], nonlocal p-Laplacian evolution for image interpolation [19], active contour models for image segmentation [20], and the complex Ginzburg-Landau equation for detecting codimension-two objects [21] and for image inpainting [22]. To our knowledge, however, very few saliency detection methods take advantage of the variational framework.

Inspired by the nonlocal p-Laplacian [19, 23] and the complex Ginzburg-Landau model [21, 22], we propose a nonlocal p-Laplacian regularized variational model for saliency detection. Our work is a purely computational model for saliency extraction from still images. The proposed energy functional consists of a diffusion-based regularization, a phase-transition term, and a reaction term for fidelity. In the energy functional, the nonlocal p-Laplacian is introduced to penalize intermediate values of image intensity, and the phase transition makes the background vanish while preserving visually prominent features. Our approach offers the following technical features. First, we formulate saliency detection as a phase transition over the image domain and then develop a variational framework for saliency selection; various visual features can be detected by minimizing the energy functional in this framework. Second, a dynamical formulation follows naturally from the definition of the energy functional. The associated Euler-Lagrange equation is a nonlocal p-Laplacian-type diffusion equation with a nonlinear reaction term for saliency extraction and a linear reaction term for fidelity. It controls the flow of information from the original image to the saliency map, so the process of saliency extraction is a nonlinear diffusion. This makes our method quite different from existing models for saliency detection. Third, our method employs nonlocal p-Laplacian regularization, which restricts the features of the resulting image. Compared with the classical p-Laplacian regularization, the direction of edge curves indicated by the nonlocal p-Laplacian is more accurate than the direction indicated by the gradient in the p-Laplacian equation [18]. Owing to the accuracy of our model, our saliency maps can be treated as salient objects directly.
Experimental results on various images show that our model makes background details diminish gradually, while the rich subtle details of the foreground are preserved very well.

The remainder of this paper is organized as follows. In Section 2, we review the Ginzburg-Landau model and nonlocal evolution equations. The proposed model is introduced in Section 3. Section 4 presents the numerical method, followed by experiments and results in Section 5. The paper is summarized in Section 6.

2. Background

2.1. The Ginzburg-Landau Models

The Ginzburg-Landau equation was originally developed by Ginzburg and Landau [24] to phenomenologically describe phase transitions in superconductors near their critical temperature. The equation has proven useful in many areas of physics and chemistry [25]. A substantial mathematical theory on the subject can be found in the literature [26]. Moreover, Ginzburg-Landau equations have already been used for image processing [21, 22, 27, 28]. Most of these works rely on the simplified energy
(1) E_ε(u) = (1/2) ∫_Ω ( |∇u|² + (1/(2ε²)) (1 - |u|²)² ) dx
or on the associated flow governed by the evolution equation
(2) u_t = Δu + (1/ε²) (1 - |u|²) u,
where ε is a small nonzero constant and u is a complex-valued function indicating the local state of the material: if |u| ≈ 1, the material is in the superconducting phase; if |u| ≈ 0, it is in the normal phase. A rigorous mathematical theory of the Ginzburg-Landau functional shows that there exists a phase transition between these two states [26]. Minimization of the functional (1) develops homogeneous areas separated by phase-transition regions. In image processing, homogeneous areas correspond to domains of constant grey-value intensity, and phase transitions correspond to features.
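The phase-separating behavior of the flow (2) can be sketched numerically in one dimension. The following NumPy snippet is our own illustration (the step sizes, ε, and the initial field are arbitrary choices, not from the paper): a complex field started in the normal phase is driven toward the superconducting state |u| ≈ 1.

```python
import numpy as np

def ginzburg_landau_step(u, dx, dt, eps):
    """One explicit Euler step of u_t = Δu + (1/ε²)(1 - |u|²)u in 1D,
    with Neumann boundaries imposed by edge replication."""
    up = np.pad(u, 1, mode="edge")                    # replicate edge values
    lap = (up[:-2] - 2.0 * u + up[2:]) / dx ** 2      # 3-point Laplacian
    return u + dt * (lap + (1.0 - np.abs(u) ** 2) * u / eps ** 2)

# A field started with modulus 0.5 is driven toward |u| ≈ 1 by the reaction term.
u = 0.5 * np.exp(1j * np.linspace(0.0, np.pi, 64))
for _ in range(2000):
    u = ginzburg_landau_step(u, dx=1.0, dt=0.01, eps=0.5)
```

The Laplacian pulls neighboring values together while the reaction term pushes the modulus toward 1; this is the phase-separation mechanism exploited by the saliency model later in the paper.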

2.2. Nonlocal Evolution Equations

Recently, nonlocal evolution equations have been widely used to model diffusion processes in many areas. Let us briefly introduce the nonlocal problems considered in this work. A nonlocal evolution equation corresponding to the Laplacian equation is
(3) u_t(x, t) = (J * u - u)(x, t) = ∫_{R^N} J(x - y) (u(y, t) - u(x, t)) dy.
The kernel J is a nonnegative, bounded, continuous radial function with compact support, supp(J) ⊂ B(0, d). Equation (3) is called a nonlocal diffusion equation, since the diffusion of the density at a point x and time t depends not only on u(x, t) but also on all values of u in a neighborhood of x through the convolution term J * u. This equation shares many properties with the classical heat equation u_t = Δu, and the evolution can be thought of as nonlocal isotropic diffusion.
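The heat-equation-like character of (3) can be checked on a discrete grid. The sketch below (our own illustration; the periodic grid, kernel width, and time step are arbitrary) evolves a density spike under one explicit step of u_t = J*u - u and exhibits two heat-equation properties: mass conservation and smoothing.

```python
import numpy as np

def nonlocal_diffusion_step(u, J, dt):
    """One explicit step of u_t = J*u - u on a periodic 1D grid;
    J is a discrete nonnegative kernel with ΣJ = 1 (offset 0 at index 0)."""
    conv = np.real(np.fft.ifft(np.fft.fft(u) * np.fft.fft(J)))  # circular J*u
    return u + dt * (conv - u)

n = 128
offsets = np.minimum(np.arange(n), n - np.arange(n))  # circular distance to 0
J = np.where(offsets < 5, 1.0, 0.0)                   # compactly supported kernel
J /= J.sum()                                          # normalize so that ΣJ = 1

u = np.zeros(n)
u[60:68] = 1.0                                        # initial density spike
mass0 = u.sum()
for _ in range(200):
    u = nonlocal_diffusion_step(u, J, dt=0.1)
```

Since ΣJ = 1, each step replaces a fraction of u by its local J-average, so the total mass Σu is conserved exactly while the maximum decays.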

For the p-Laplacian equation u_t = div(|∇u|^{p-2} ∇u), a nonlocal counterpart was studied mathematically in [23]:
(4) u_t(x, t) = ∫_Ω J(x - y) |u(y, t) - u(x, t)|^{p-2} (u(y, t) - u(x, t)) dy
with Neumann boundary conditions. It was proved that, when the convolution kernel J is rescaled in a suitable way, the solution of (4) converges to the solution of the classical p-Laplacian equation for p > 1 and to the total variation flow for p = 1 [23]. The energy functional corresponding to (4) is
(5) E_p(u) = (1/p) ∫_Ω ∫_Ω J(x - y) |u(y, t) - u(x, t)|^p dy dx.
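The energy (5) can be evaluated directly on a discrete grid. A small sketch (dense O(n²) form; the Gaussian-type weight and grid are illustrative choices of ours, not the paper's kernel) shows that constants cost nothing while sharp transitions are penalized:

```python
import numpy as np

def nonlocal_p_energy(u, coords, J, p):
    """Discrete form of E_p(u) = (1/p) ∫∫ J(x-y)|u(y)-u(x)|^p dy dx,
    summed over all grid-point pairs."""
    diff = np.abs(u[None, :] - u[:, None]) ** p           # |u(y) - u(x)|^p
    w = J(np.abs(coords[None, :] - coords[:, None]))      # radial weight J(x - y)
    return np.sum(w * diff) / p

coords = np.linspace(0.0, 1.0, 50)
J = lambda r: np.exp(-r ** 2 / 0.01)        # illustrative smooth radial weight
flat = np.ones(50)                          # a constant function
step = np.where(coords > 0.5, 1.0, -1.0)    # a sharp transition
```

Evaluating the energy on `flat` gives exactly zero, while `step` pays a positive cost concentrated near the jump; larger p penalizes large pairwise differences more strongly.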

3. The Proposed Model for Saliency Detection

In this section, we propose a variational model (a nonlocal p-Laplacian regularized variational model) whose (local) minima extract salient objects from the image background.

3.1. Nonlocal p-Laplacian Regularized Variational Model

Let Ω ⊂ R² be an image domain. For a given image I : Ω → R, we construct a complex-valued image u0 from I as follows. We first rescale the intensity image I(x) into the interval [-1, 1] by the formula υ0 = (-1)^k (2I(x)/255 - 1) (k = 0 or k = 1) and set ω0 = (1 - υ0²)^{1/2}; then I(x) is identified with the real part υ0 of the complex image u0 = υ0 + ω0 i, so that |u0| = 1 for all x ∈ Ω. In order to extract salient objects from a still image, we propose the energy functional
(6) E(u) = E_p(u) + ψ(u) + (λ/2) ∫_Ω |u - u0|² dx
with
(7) ψ(u) = (1/(2ε²)) ∫_Ω (1 - |u|)² dx,
where p > 2, λ > 0, ε is a small constant, u is a complex-valued function, and E_p(u) is the energy functional (5). Note that ψ(u) is slightly different from the second term of the Ginzburg-Landau energy (1).
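The construction of u0 takes only a few lines. This sketch uses our own helper name (`complex_init` is hypothetical) and reproduces the formulas υ0 = (-1)^k (2I/255 - 1) and ω0 = (1 - υ0²)^{1/2}:

```python
import numpy as np

def complex_init(I, k=0):
    """Build u0 = υ0 + i·ω0 from an intensity image I in [0, 255], so |u0| = 1:
    υ0 = (-1)^k (2I/255 - 1), ω0 = sqrt(1 - υ0²)."""
    v0 = ((-1.0) ** k) * (2.0 * np.asarray(I, dtype=float) / 255.0 - 1.0)
    w0 = np.sqrt(np.clip(1.0 - v0 ** 2, 0.0, None))  # clip guards rounding error
    return v0 + 1j * w0
```

By construction every pixel of u0 lies on the unit circle, so the initial image is entirely in the "superconducting" state of the Ginzburg-Landau picture; k merely flips the sign of the real part.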

In the following, we explain each term of the proposed energy functional (6).

The functional E_p(u) in (6) serves to penalize the spatial inhomogeneity of u(x). As is known, certain penalties on intermediate densities are equivalent to restrictions on the microstructural configuration [29]. So, physically, the nonlocal p-Laplacian acts as a regularizer restricting the features of the resulting images.

The potential ψ(u) in (6) clearly has a minimum at |u| = 1. Thus the minimization of the functional (6) develops homogeneous areas separated by phase-transition regions, which makes |u| ≈ 1 almost everywhere after sufficient diffusion, except in the regions of visually prominent features.

The third term is a fidelity term which forces u(x) to be a close approximation of the original function u0.

3.2. Behavioral Analysis of Our Model

In the calculus of variations, a standard method to minimize the functional E(u) is to find the steady-state solution of the gradient descent flow
(8) u_t = -∂E(u)/∂u,
where ∂E(u)/∂u is the Gâteaux derivative of E(u). Equation (8) is an evolution equation for a time-dependent function with spatial variable x in the domain Ω and artificial time t ≥ 0, and the evolution starts from a given initial function u(x, 0) = u0(x). A dynamical formulation thus follows naturally from the definition of the energy functional (6):
(9) u_t = P_p^J(u) + (1/ε²) |u|^{-1} (1 - |u|) u - λ (u - u0)
with the initial condition u(x, 0) = u0(x) and the Neumann boundary condition ∂u/∂n = 0 on ∂Ω (where n is the outward unit normal to ∂Ω), where
(10) P_p^J(u)(x) = ∫_Ω J(x - y) |u(y) - u(x)|^{p-2} (u(y) - u(x)) dy.
The kernel J : Ω → R in (10) is a nonnegative, bounded, continuous radial function with supp(J) ⊂ B(0, d) that satisfies the following properties:

(i) J(-z) = J(z);

(ii) J(z1) ≥ J(z2) if |z1| < |z2|, and lim_{|z|→∞} J(z) = 0;

(iii) ∫_Ω J(z) dz = 1.
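A discrete version of the operator P_p^J in (10) can be sketched as follows (our own dense O(n²) illustration on a 1D grid; a row-normalized bump-type weight matrix stands in for the kernel J, with row normalization playing the role of property (iii)):

```python
import numpy as np

def nonlocal_p_laplacian(u, J, p):
    """P_p^J(u)(x) = Σ_y J[x, y] |u(y) - u(x)|^{p-2} (u(y) - u(x)),
    with a pairwise weight matrix J."""
    diff = u[None, :] - u[:, None]      # row x, column y: u(y) - u(x)
    return np.sum(J * np.abs(diff) ** (p - 2) * diff, axis=1)

# Row-normalized bump-type weights on a 1D grid (discrete stand-in for J)
n, d = 32, 4.0
z = np.abs(np.arange(n)[None, :] - np.arange(n)[:, None]).astype(float)
J = np.where(z < d, np.exp(1.0 / np.where(z < d, z ** 2 - d ** 2, -1.0)), 0.0)
J /= J.sum(axis=1, keepdims=True)       # each row sums to 1

spike = np.zeros(n, dtype=complex)
spike[16] = 1.0
out = nonlocal_p_laplacian(spike, J, p=3)
```

The operator vanishes on constants, and on the spike it pulls the peak down and its neighbors up, that is, it diffuses while weighting each pairwise difference by |u(y) - u(x)|^{p-2}.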

Equation (9) is a nonlocal p-Laplacian-type diffusion equation with nonlinear reaction terms. Here we explain the nonlocal p-Laplacian further. The operator P_p^J(u) in (9) acts as a regularizer restricting the features of the output images. First, P_p^J(u) shares many properties with the classical p-Laplacian regularization. In saliency detection, a reasonable balance between penalizing irregularities (often due to noise) and preserving intrinsic image features can be achieved by choosing different values of p. Second, P_p^J(u) improves on the classical p-Laplacian regularization based on the local gradient, because the nonlocal diffusion at a point x and time t depends on all values of u in a larger neighborhood of x. The evolution at artificial time t given by (9) can be viewed as an anisotropic energy dissipation process. The direction of anisotropic diffusion is indicated by the weights |u(y, t) - u(x, t)|^{p-2} over a larger neighborhood, which approximates the direction of edge curves more accurately than the direction indicated by the gradient.

We conclude this subsection by discussing the dynamical behavior of (9). The temporal evolution of the dynamical formulation (9) makes the energy (6) decrease monotonically in time. We hypothesize that regions showing less activity during the temporal evolution carry rich information and are most likely to attract human attention. Therefore, irrelevant information is suppressed gradually, while visual features are preserved until the end. This achieves control over the flow of information from the original image to the saliency map.

4. Numerical Algorithms

In this section, we briefly present the numerical algorithm and procedure used to solve the evolution equation (9). Here u is a complex-valued function; writing u = (υ, ω), we obtain from (9) the coupled Euler-Lagrange equations
(11)
υ_t = ∫_Ω J(x - y) ((υ(y) - υ(x))² + (ω(y) - ω(x))²)^{(p-2)/2} (υ(y) - υ(x)) dy + (1/ε²) (υ² + ω²)^{-1/2} (1 - (υ² + ω²)^{1/2}) υ - λ (υ - υ0),
ω_t = ∫_Ω J(x - y) ((υ(y) - υ(x))² + (ω(y) - ω(x))²)^{(p-2)/2} (ω(y) - ω(x)) dy + (1/ε²) (υ² + ω²)^{-1/2} (1 - (υ² + ω²)^{1/2}) ω - λ (ω - ω0),
with the initial conditions υ(x, 0) = υ0(x) and ω(x, 0) = ω0(x).

Equation (9) can be implemented via a simple explicit finite difference scheme. Let h and Δt be the space and time steps, respectively, and let (i, j) = (ih, jh) denote the grid points. Let u_{i,j}^n = u(i, j, nΔt) with n ≥ 0. Discretizing the time variable with the explicit Euler method, (9) becomes
(12) (u_{i,j}^{n+1} - u_{i,j}^n)/Δt = Σ_{(k,l)∈Ω} J((k,l) - (i,j)) |u_{k,l}^n - u_{i,j}^n|^{p-2} (u_{k,l}^n - u_{i,j}^n) + (1/ε²) u_{i,j}^n |u_{i,j}^n|^{-1} (1 - |u_{i,j}^n|) - λ (u_{i,j}^n - (u0)_{i,j}).
The iteration formulas for the two components are
(13)
(υ_{i,j}^{n+1} - υ_{i,j}^n)/Δt = Σ_{(k,l)∈Ω} J((k,l) - (i,j)) ((υ_{k,l}^n - υ_{i,j}^n)² + (ω_{k,l}^n - ω_{i,j}^n)²)^{(p-2)/2} (υ_{k,l}^n - υ_{i,j}^n) + (1/ε²) υ_{i,j}^n ((υ_{i,j}^n)² + (ω_{i,j}^n)²)^{-1/2} (1 - ((υ_{i,j}^n)² + (ω_{i,j}^n)²)^{1/2}) - λ (υ_{i,j}^n - (υ0)_{i,j}),
(ω_{i,j}^{n+1} - ω_{i,j}^n)/Δt = Σ_{(k,l)∈Ω} J((k,l) - (i,j)) ((υ_{k,l}^{n+1} - υ_{i,j}^{n+1})² + (ω_{k,l}^n - ω_{i,j}^n)²)^{(p-2)/2} (ω_{k,l}^n - ω_{i,j}^n) + (1/ε²) ω_{i,j}^n ((υ_{i,j}^{n+1})² + (ω_{i,j}^n)²)^{-1/2} (1 - ((υ_{i,j}^{n+1})² + (ω_{i,j}^n)²)^{1/2}) - λ (ω_{i,j}^n - (ω0)_{i,j}),
where the updated values υ^{n+1} are used in the ω update as soon as they are available.
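The scheme (12) can be sketched compactly in NumPy. This is our own simplified illustration: both components are updated jointly in complex arithmetic with explicit Euler, rather than with the Gauss-Seidel-style split of (13), and the dense weight matrix J is supplied by the caller. It also exhibits the expected fixed point: a constant image with |u0| = 1 is left (numerically) unchanged.

```python
import numpy as np

def evolve(u0, J, p=3, eps=0.5, lam=0.1, dt=0.03, steps=200):
    """Explicit Euler iteration of scheme (12) on a flattened complex image u0:
    nonlocal p-Laplacian + phase-transition reaction - fidelity term."""
    u = u0.copy()
    for _ in range(steps):
        diff = u[None, :] - u[:, None]                   # u(y) - u(x), all pairs
        plap = np.sum(J * np.abs(diff) ** (p - 2) * diff, axis=1)
        mag = np.abs(u) + 1e-12                          # guard against |u| = 0
        reaction = (1.0 - mag) * u / (mag * eps ** 2)    # (1/ε²)|u|⁻¹(1 - |u|)u
        u = u + dt * (plap + reaction - lam * (u - u0))  # fidelity pulls toward u0
    return u

# A constant image with |u0| = 1 should be a steady state of the flow.
J_uniform = np.full((8, 8), 1.0 / 8.0)
u_const = np.full(8, 0.6 + 0.8j)
u_final = evolve(u_const, J_uniform)
```

For a constant field all pairwise differences vanish, the modulus is already 1, and the fidelity term is zero, so each of the three terms in the update is (numerically) zero.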

Remark 1.

In all numerical experiments, we choose the kernel function
(14) J(x) = C exp(1/(|x|² - d²)) if |x| < d, J(x) = 0 if |x| ≥ d.
The constant C is selected so that ∫_Ω J(x) dx = 1.
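A discrete, normalized sampling of the kernel (14) might look as follows (a sketch; the helper name and the grid spacing h are our choices). The final normalization step plays the role of the constant C:

```python
import numpy as np

def bump_kernel(d, h=1.0):
    """Sample J(x) = C·exp(1/(|x|² - d²)) on |x| < d (zero outside) over a
    (2r+1)×(2r+1) grid with spacing h; C normalizes the samples to sum 1."""
    r = int(np.ceil(d / h))
    grid = np.mgrid[-r:r + 1, -r:r + 1].astype(float) * h
    rho2 = grid[0] ** 2 + grid[1] ** 2               # |x|² at each grid point
    J = np.zeros_like(rho2)
    inside = rho2 < d ** 2
    J[inside] = np.exp(1.0 / (rho2[inside] - d ** 2))
    return J / J.sum()                               # normalization supplies C
```

The resulting mask is radially symmetric, peaks at the center, and vanishes identically outside the disc of radius d, matching properties (i) and (ii) of the kernel.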

Remark 2.

For a color image, let r, g, and b be the red, green, and blue channels of the input image, respectively. The intensity channel I used in our model is defined by
(15) I = (r + g + b)/3.

5. Experiments and Results

The proposed nonlocal p-Laplacian regularized variational model has been applied to detect saliency in varied image scenes. In our experiments, the function υ0 is initialized to υ0 = (-1)^k (2I(x)/255 - 1) (k = 0 or k = 1) with ω0 = (1 - υ0²)^{1/2}. If the salient region is brighter than the background, k = 1; if it is darker, k = 0. We use the same parameter setting for all experiments: p = 3, ε = 0.5, λ = 0.1, and time step Δt = 0.03. Our saliency maps are displayed as Ĩ = 255(υ + 1)/2, and the initial state Ĩ(x, 0) = Ĩ0(x) corresponds to the intensity image I(x). The reason for using Ĩ as the saliency map is that Ĩ is the rescaling of υ, which evolves anisotropically from the initial data υ0 under the evolution equations (13).
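The initialization and display formulas invert each other. A small sketch (helper names are ours) checks that with k = 0 the initial display Ĩ(x, 0) reproduces I exactly, while k = 1 yields the photographic negative 255 - I:

```python
import numpy as np

def init_real_part(I, k):
    """υ0 = (-1)^k (2I/255 - 1) for an intensity image I with values in [0, 255]."""
    return ((-1.0) ** k) * (2.0 * np.asarray(I, dtype=float) / 255.0 - 1.0)

def display(v):
    """Ĩ = 255(υ + 1)/2 maps υ in [-1, 1] back to grey levels in [0, 255]."""
    return 255.0 * (v + 1.0) / 2.0

I = np.array([0.0, 100.0, 255.0])
round_trip = display(init_real_part(I, k=0))   # identical to I when k = 0
```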

Figure 1 demonstrates the effect of the proposed method on various images containing objects with rich subtle details and/or complex backgrounds. We compare our saliency maps with four state-of-the-art methods: Hou and Zhang [9], Harel et al. [13], Achanta et al. [8], and Fang et al. [15], referred to here as SR, GB, IG, and AS, respectively. The code of the AS model is available at http://qtfm.sourceforge.net/, and the results of the SR, GB, and IG models are taken from http://ivrg.epfl.ch/supplementary_material/RK_CVPR09/index.html. From Figure 1, we can see that our saliency maps have well-defined borders, highlight whole-object features, and suppress the background better than the other methods, even in the presence of complex backgrounds. In addition, our saliency maps are more accurate than those of the previous approaches. Column 6 shows that subtle foreground details are preserved very well for these test images; for example, the textures of petals, the downy head of a dandelion, and the hairs of a dog are maintained clearly. For the SR and GB methods, Columns 2 and 3 show that the information retained from the original image contains very few details and represents a very blurry version of the original. For the IG method, Column 4 shows that the high frequencies of the original image are retained in the saliency map, but some background details remain clearly visible. For the AS method, Column 5 shows that background information is suppressed better, but some subtle details of the salient region are smoothed out, and the saliency maps suffer from "staircase" effects on smooth-textured salient objects, for example, the egg in Figure 1. Owing to the accuracy of our model, our saliency maps can be used as salient objects directly, whereas, to segment a salient object, the other methods must binarize the saliency map so that ones (white pixels) correspond to salient-object pixels and zeros (black pixels) correspond to the background [8].

Figure 1: Visual comparison of saliency maps for images with complex backgrounds. Column 1: original images. Column 2: the SR model [9]. Column 3: the GB model [13]. Column 4: the IG model [8]. Column 5: the AS model [15]. Column 6: our model.

In order to perform an objective comparison of the quality of the saliency maps, we adopt the precision, recall, and F-measure used by Achanta et al. [8] and Fang et al. [15] to evaluate these methods. The quantitative evaluation is based on 1000 images from the experimental setting of Achanta et al. [8]; this database includes the original images and their corresponding ground-truth saliency maps. The evaluation measures how much the saliency map produced by an algorithm overlaps with the ground truth. For a ground-truth saliency map G with pixel values g_x and a detected saliency map S with pixel values S_x, precision = Σ_x g_x S_x / Σ_x S_x and recall = Σ_x g_x S_x / Σ_x g_x, and, for a nonnegative α, F_α = ((1 + α) · precision · recall)/(α · precision + recall). We set α = 0.3 in this experiment, as in [8, 15], for fair comparison. The comparison results are shown in Figure 2. It is clear that the overall performance of the proposed model on the 1000 images is better than that of the other methods in terms of all three measures.
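The evaluation protocol above can be sketched in a few lines of NumPy (our own helper; G and S are flattened ground-truth and detected maps):

```python
import numpy as np

def saliency_scores(G, S, alpha=0.3):
    """precision = Σ gₓSₓ / Σ Sₓ, recall = Σ gₓSₓ / Σ gₓ,
    F_α = (1+α)·P·R / (α·P + R), following the protocol in the text."""
    G = np.asarray(G, dtype=float).ravel()
    S = np.asarray(S, dtype=float).ravel()
    overlap = np.sum(G * S)                 # mass of S falling inside G
    precision = overlap / np.sum(S)
    recall = overlap / np.sum(G)
    f_alpha = (1.0 + alpha) * precision * recall / (alpha * precision + recall)
    return precision, recall, f_alpha
```

With α = 0.3, F_α weights precision more heavily than recall; when precision equals recall, F_α reduces to that common value.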

Figure 2: Overall mean scores of precision, recall, and F-measure for five different algorithms over 1000 images.

6. Conclusion

In this paper, we have developed a variational model for saliency detection based on phase-transition theory from mechanics and materials science. The dynamics of the system, that is, the temporal evolution derived from the energy functional, yields attention information, and the process of saliency extraction is an interface diffusion. Compared with existing saliency detection models, our method provides flexible and intuitive control over the detection procedure. Experimental results show that the proposed method effectively extracts features that are important for human visual perception.

Acknowledgments

This work was supported by the NSF of China (nos. 61202349 and 61271452), the Natural Science Foundation Project of CQ CSTC (no. cstc2013jcyjA40058), the Education Committee Project Research Foundation of Chongqing (nos. KJ120709 and KJ131209), the Research Foundation of Chongqing University of Arts and Sciences (no. R2012SC20), and the Innovation Foundation of Chongqing (no. KJTD201321).

References

[1] H. Fu, Z. Chi, and D. Feng, "Attention-driven image interpretation with application to image retrieval," Pattern Recognition, vol. 39, no. 9, pp. 1604-1621, 2006.
[2] C. Guo and L. Zhang, "A novel multiresolution spatiotemporal saliency detection model and its applications in image and video compression," IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 185-198, 2010.
[3] K. Rapantzikos, N. Tsapatsoulis, Y. Avrithis, and S. Kollias, "Bottom-up spatiotemporal visual attention model for video analysis," IET Image Processing, vol. 1, no. 2, pp. 237-248, 2007.
[4] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254-1259, 1998.
[5] S. Frintrop, M. Klodt, and E. Rome, "A real-time visual attention system using integral images," in Proceedings of the International Conference on Computer Vision Systems, 2007.
[6] Y. F. Ma and H. J. Zhang, "Contrast-based image attention analysis by using fuzzy growing," in Proceedings of the 11th ACM International Conference on Multimedia (MM '03), pp. 374-381, 2003.
[7] R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, "Salient region detection and segmentation," in Proceedings of the International Conference on Computer Vision Systems, pp. 66-75, 2008.
[8] R. Achanta, S. Hemami, F. Estrada, and S. Süsstrunk, "Frequency-tuned salient region detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1597-1604, 2009.
[9] X. Hou and L. Zhang, "Saliency detection: a spectral residual approach," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), 2007.
[10] T. Liu, J. Sun, N. N. Zheng, X. Tang, and H. Y. Shum, "Learning to detect a salient object," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '07), 2007.
[11] C. Guo, Q. Ma, and L. Zhang, "Spatio-temporal saliency detection using phase spectrum of quaternion Fourier transform," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '08), 2008.
[12] S. Feng, D. Xu, and X. Yang, "Attention-driven salient edge(s) and region(s) extraction with application to CBIR," Signal Processing, vol. 90, no. 1, pp. 1-15, 2010.
[13] J. Harel, C. Koch, and P. Perona, "Graph-based visual saliency," in Advances in Neural Information Processing Systems, vol. 19, pp. 545-552, 2007.
[14] N. D. B. Bruce and J. K. Tsotsos, "Saliency, attention and visual search: an information theoretic approach," Journal of Vision, vol. 9, no. 3, article 5, 2009.
[15] Y. Fang, W. Lin, B. S. Lee, C. T. Lau, Z. Chen, and C. W. Lin, "Bottom-up saliency detection model based on human visual sensitivity and amplitude spectrum," IEEE Transactions on Multimedia, vol. 14, no. 1, pp. 187-198, 2012.
[16] C. Koch and S. Ullman, "Shifts in selective visual attention: towards the underlying neural circuitry," Human Neurobiology, vol. 4, no. 4, pp. 219-227, 1985.
[17] H. C. Li, P. Z. Fan, and M. K. Khan, "Context-adaptive anisotropic diffusion for image denoising," Electronics Letters, vol. 48, no. 14, pp. 827-829, 2012.
[18] A. Kuijper, "Image analysis using p-Laplacian and geometrical PDEs," Proceedings in Applied Mathematics and Mechanics, vol. 7, no. 1, pp. 1011201-1011202, 2007.
[19] Y. Zhan, "The nonlocal p-Laplacian evolution for image interpolation," Mathematical Problems in Engineering, vol. 2011, Article ID 837426, 11 pages, 2011.
[20] Q. Xin, C. Mu, and M. Li, "The Lee-Seo model with regularization term for bimodal image segmentation," Mathematics and Computers in Simulation, vol. 81, no. 12, pp. 2608-2616, 2011.
[21] G. Aubert, J. F. Aujol, and L. Blanc-Féraud, "Detecting codimension-two objects in an image with Ginzburg-Landau models," International Journal of Computer Vision, vol. 65, no. 1-2, pp. 29-42, 2005.
[22] H. Grossauer and O. Scherzer, "Using the complex Ginzburg-Landau equation for digital inpainting in 2D and 3D," Lecture Notes in Computer Science, vol. 2695, pp. 225-236, 2003.
[23] F. Andreu, J. M. Mazón, J. D. Rossi, and J. Toledo, "A nonlocal p-Laplacian evolution equation with Neumann boundary conditions," Journal de Mathématiques Pures et Appliquées, vol. 90, no. 2, pp. 201-227, 2008.
[24] V. L. Ginzburg and L. D. Landau, "On the theory of superconductivity," Zhurnal Eksperimentalnoi i Teoreticheskoi Fiziki, vol. 20, pp. 1064-1082, 1950.
[25] M. Ipsen and P. G. Sørensen, "Finite wavelength instabilities in a slow mode coupled complex Ginzburg-Landau equation," Physical Review Letters, vol. 84, no. 11, pp. 2389-2392, 2000.
[26] L. Ambrosio and N. Dancer, Calculus of Variations and Partial Differential Equations: Topics on Geometrical Evolution Problems and Degree Theory, Springer, Berlin, Germany, 2000.
[27] F. Li, C. Shen, and L. Pi, "A new diffusion-based variational model for image denoising and segmentation," Journal of Mathematical Imaging and Vision, vol. 26, no. 1-2, pp. 115-125, 2006.
[28] Y. Zhai, D. Zhang, J. Sun, and B. Wu, "A novel variational model for image segmentation," Journal of Computational and Applied Mathematics, vol. 235, no. 8, pp. 2234-2241, 2011.
[29] M. P. Bendsoe, "Variable-topology optimization: status and challenges," in Proceedings of the European Conference on Computational Mechanics, Munich, Germany, 1999.