Mathematical Problems in Engineering, Volume 2013, Article ID 367086. Research Article.

Implicit Active Contour Model with Local and Global Intensity Fitting Energies

Xiaozeng Xu (1, 2) and Chuanjiang He (1). Academic Editor: Oluwole Daniel Makinde. (1) College of Mathematics and Statistics, Chongqing University, Chongqing 401331, China. (2) School of Mathematics and Statistics, Chongqing University of Technology, Chongqing 400054, China.

Received 19 December 2012; Accepted 28 March 2013; Published 14 May 2013. Copyright © 2013 Xiaozeng Xu and Chuanjiang He. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose a new active contour model that integrates a local intensity fitting (LIF) energy with an auxiliary global intensity fitting (GIF) energy. The LIF energy is responsible for attracting the contour toward object boundaries and is dominant near them, while the GIF energy incorporates global image information to improve the robustness to initialization of the contours. The proposed model not only provides desirable segmentation results in the presence of intensity inhomogeneity but also allows for more flexible initialization of the contour than the RSF and LIF models, and we give a theoretical proof that it computes a unique steady state regardless of the initialization; that is, the convergence of the zero-level line is irrespective of the initial function. Hence we obtain the same zero-level line in the steady state whenever the initial function is chosen as a bounded function. In particular, the proposed model is capable of detecting multiple objects and objects with interior holes or blurred edges.

1. Introduction

Implicit active contour models have been extensively studied and successfully used in image segmentation. The basic idea is to evolve a contour under some constraints to extract the desired object. According to the nature of the constraints, existing active contour models can be categorised into two classes: edge-based models and region-based models. Each class has its own pros and cons; the choice between them depends on the characteristics of the images at hand. In this study, we focus on region-based models and consider images with intensity inhomogeneity.

Unlike edge-based models, which typically use an edge indicator depending on the image gradient to perform contour extraction, region-based models usually rely on global and/or local statistics inside regions, rather than gradients on edges, to find a partition of the image domain. They generally perform better in the presence of weak or discontinuous boundaries than edge-based models. Early popular region-based models tend to rely on intensity-homogeneous (roughly constant or smooth) statistics in each of the regions to be segmented. Therefore, they either lack the ability to deal with intensity inhomogeneity, like the PC (piecewise constant) model [8], or are computationally expensive, like the PS (piecewise smooth) model.

To handle intensity inhomogeneity efficiently, some localized region-based models have been proposed recently. For example, Li et al. [11, 12] proposed a region-scalable fitting (RSF) active contour model (originally termed the local binary fitting (LBF) model [11]). The RSF model draws upon intensity information in spatially varying local regions controlled by a scale parameter, so it is able to deal with intensity inhomogeneity efficiently. More recently, Zhang et al. [15] proposed an active contour model driven by a local image fitting energy, which can also handle intensity inhomogeneity efficiently. The experimental results in [15] show that this model is more efficient than the RSF model while yielding similar results. However, both models easily get stuck in local minima for most contour initializations, so user intervention is needed to define the initial contours carefully.

In this study, based on the PC model [8] and the RSF model [12], we propose a new active contour model that integrates a local intensity fitting (LIF) energy with an auxiliary global intensity fitting (GIF) energy. The LIF energy is responsible for attracting the contour toward object boundaries and is dominant near them, while the GIF energy incorporates global image information to improve the robustness to initialization of the contours. The proposed model can efficiently handle intensity inhomogeneity while allowing for more flexible initialization and maintaining subpixel accuracy.

The remainder of this paper is organized as follows. Section 2 briefly reviews the PC model [8] and the RSF model [12]. Section 3 introduces the proposed model. Section 4 presents experimental results on a set of synthetic and real images. Section 5 concludes the paper.

2. Related Works

2.1. Piecewise Constant Model by Chan and Vese

In [8], Chan and Vese (CV) proposed a region-based active contour model that relies on intensity-homogeneous (roughly constant) statistics in each of the regions to be segmented. In the CV model, a contour C is represented implicitly by the zero-level set of a Lipschitz function ϕ: Ω → ℝ, called a level set function. In what follows, we let the level set function ϕ take positive values inside and negative values outside the contour C.

Let I: Ω ⊂ ℝ² → ℝ be an input image and Hε the regularized Heaviside function; the energy functional of the CV model is defined as

(1) E^CV(c1, c2, ϕ) = λ1 ∫Ω |I − c1|² Hε(ϕ) dx + λ2 ∫Ω |I − c2|² (1 − Hε(ϕ)) dx + ν ∫Ω |∇Hε(ϕ)| dx,

where λ1, λ2 > 0 and ν > 0 are constants. The regularized version of H(z) is chosen as

(2) Hε(z) = (1/2)(1 + (2/π) arctan(z/ε)).

c1 and c2 are the global averages of the image intensities in the regions {x: ϕ(x) > 0} and {x: ϕ(x) < 0}, respectively; that is,

(3) c1(ϕ) = ∫Ω I(x)Hε(ϕ(x)) dx / ∫Ω Hε(ϕ(x)) dx, c2(ϕ) = ∫Ω I(x)(1 − Hε(ϕ(x))) dx / ∫Ω (1 − Hε(ϕ(x))) dx.
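As a concrete illustration of (2)-(3), the regularized Heaviside and the global fitting values can be computed in a few lines of NumPy (a minimal sketch; the function names and the numerical details are ours, not the paper's):

```python
import numpy as np

def heaviside_eps(z, eps=1.5):
    """Regularized Heaviside H_eps(z) = (1/2)(1 + (2/pi) arctan(z/eps)), Eq. (2)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(z / eps))

def global_averages(I, phi, eps=1.5):
    """Global fitting values c1, c2 of the CV model, Eq. (3)."""
    H = heaviside_eps(phi, eps)
    c1 = (I * H).sum() / H.sum()              # average inside  {phi > 0}
    c2 = (I * (1 - H)).sum() / (1 - H).sum()  # average outside {phi < 0}
    return c1, c2
```

With a level set function whose magnitude is large compared with ε, the weights are close to binary indicators and c1, c2 approach the true region means.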

The solution of the CV model in fact leads to a piecewise constant approximation of the original image I(x):

(4) IG(x) = c1Hε(ϕ(x)) + c2(1 − Hε(ϕ(x))),

where c1 and c2 are the averages of the image intensities in {x: ϕ(x) > 0} and {x: ϕ(x) < 0}, respectively. These constants may be far from the original image data if the intensities inside or outside the contour C = {x: ϕ(x) = 0} are not homogeneous. As a result, the CV model generally fails to segment images with intensity inhomogeneity.

2.2. Region-Scalable Fitting Model

To improve the performance of the global CV model [8] and the PS model on images with intensity inhomogeneity, Li et al. [11, 12] proposed a novel region-based active contour model. They introduced a kernel function and defined the following energy functional:

(5) E^RSF(f1, f2, ϕ) = λ1 ∫Ω [∫Ω Kσ(x − y)|I(y) − f1(x)|² Hε(ϕ(y)) dy] dx + λ2 ∫Ω [∫Ω Kσ(x − y)|I(y) − f2(x)|² (1 − Hε(ϕ(y))) dy] dx + ν ∫Ω |∇Hε(ϕ(x))| dx + μ ∫Ω (1/2)(|∇ϕ(x)| − 1)² dx,

where Kσ is a Gaussian kernel with standard deviation σ, and f1(x) and f2(x) are two smooth functions that approximate the local image intensities inside and outside the contour, respectively. They are computed by

(6) f1(x) = ∫Ω Kσ(x − y)I(y)Hε(ϕ(y)) dy / ∫Ω Kσ(x − y)Hε(ϕ(y)) dy, f2(x) = ∫Ω Kσ(x − y)I(y)(1 − Hε(ϕ(y))) dy / ∫Ω Kσ(x − y)(1 − Hε(ϕ(y))) dy.
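The convolutions in (6) can be evaluated efficiently with a Gaussian filter. The following sketch (our own naming, using SciPy's `gaussian_filter` and a small constant to avoid division by zero) computes f1 and f2:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_averages(I, phi, sigma=3.0, eps=1.5):
    """Local fitting functions f1, f2 of the RSF model, Eq. (6),
    computed as ratios of Gaussian convolutions."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    KI_in  = gaussian_filter(I * H, sigma)        # K_sigma * (I H)
    K_in   = gaussian_filter(H, sigma)            # K_sigma * H
    KI_out = gaussian_filter(I * (1 - H), sigma)  # K_sigma * (I (1 - H))
    K_out  = gaussian_filter(1 - H, sigma)        # K_sigma * (1 - H)
    f1 = KI_in / (K_in + 1e-10)
    f2 = KI_out / (K_out + 1e-10)
    return f1, f2
```

By linearity of convolution, a constant image yields f1 = f2 = that constant everywhere, which is a useful sanity check.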

The solution of the RSF model leads to a piecewise smooth approximation of the original image I(x):

(7) IL(x) = f1(x)Hε(ϕ(x)) + f2(x)(1 − Hε(ϕ(x))),

where the smooth functions f1(x) and f2(x) are computed by (6). f1(x) and f2(x) are the averages of local intensities on the two sides of the contour, in contrast to the constants c1 and c2 of the CV model, which are global averages of the image intensities on the two sides of the contour. Therefore, the RSF model can deal with intensity inhomogeneity efficiently. However, it is sensitive to contour initialization (initial location, size, and shape).

3. The Proposed Model

3.1. Description of Our Model

Given a constant ω (0 ≤ ω ≤ 1), from (4) and (7) we define the following energy functional:

(8) E(ϕ, c1, c2, f1, f2) = ωE^G(ϕ) + (1 − ω)E^L(ϕ) = ω((1/2)∫Ω |I(x) − IG(x)|² dx) + (1 − ω)((1/2)∫Ω |I(x) − IL(x)|² dx),

where

(9) IG(x) = c1Hε(ϕ(x)) + c2(1 − Hε(ϕ(x))), IL(x) = f1(x)Hε(ϕ(x)) + f2(x)(1 − Hε(ϕ(x))).

Keeping ϕ fixed and minimizing the functional E(ϕ, c1, c2, f1, f2) with respect to c1, c2, f1, and f2, we have

(10) c1 = ∫Ω I(x)Hε(ϕ(x)) dx / ∫Ω Hε(ϕ(x)) dx, c2 = ∫Ω I(x)(1 − Hε(ϕ(x))) dx / ∫Ω (1 − Hε(ϕ(x))) dx, f1(x) = ∫Ω Kσ(x − y)I(y)Hε(ϕ(y)) dy / ∫Ω Kσ(x − y)Hε(ϕ(y)) dy, f2(x) = ∫Ω Kσ(x − y)I(y)(1 − Hε(ϕ(y))) dy / ∫Ω Kσ(x − y)(1 − Hε(ϕ(y))) dy.

Keeping c1, c2, f1, and f2 fixed and minimizing the functional E(ϕ, c1, c2, f1, f2) with respect to ϕ, we obtain the corresponding gradient descent flow equation:

(11) ∂ϕ/∂t = δε(ϕ)[ω(I − IG)(c1 − c2) + (1 − ω)(I − IL)(f1 − f2)] = δε(ϕ)[ωFG(ϕ) + (1 − ω)FL(ϕ)] = F(ϕ),

where

(12) δε(z) = Hε′(z) = (1/π) · ε/(ε² + z²).
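A minimal sketch of the force in (11)-(12) (function names and defaults are ours; ε = 1.5 and ω = 0.1 follow the parameter choices reported in Section 4):

```python
import numpy as np

def dirac_eps(z, eps=1.5):
    """Regularized Dirac delta, the derivative of H_eps, Eq. (12)."""
    return (1.0 / np.pi) * eps / (eps**2 + z**2)

def force(I, phi, c1, c2, f1, f2, omega=0.1, eps=1.5):
    """Right-hand side F(phi) of the gradient descent flow, Eq. (11)."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    IG = c1 * H + c2 * (1 - H)   # global fitted image, Eq. (4)
    IL = f1 * H + f2 * (1 - H)   # local fitted image,  Eq. (7)
    FG = (I - IG) * (c1 - c2)
    FL = (I - IL) * (f1 - f2)
    return dirac_eps(phi, eps) * (omega * FG + (1 - omega) * FL)
```

Note that the force is positive where the image is brighter than the fitted images (for c1 > c2), pushing ϕ up there, and negative elsewhere.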

Like the CV and RSF models, our model is implemented using an alternating procedure: for each iteration with the corresponding level set function ϕ^n, we first compute the fitting values ci(ϕ^n) and fi(ϕ^n) and then obtain ϕ^{n+1} by minimizing E(ϕ, c1(ϕ^n), c2(ϕ^n), f1(ϕ^n), f2(ϕ^n)) with respect to ϕ. This process is repeated until the zero-level set of ϕ^{n+1} is exactly on the object boundary.

In the following, we first discuss the properties of FG(ϕ) and FL(ϕ) and then analyze the behavior of (11).

3.2. Properties of FG(ϕ) and FL(ϕ)

For the sake of simplicity, we state and prove the properties of FG(ϕ) and FL(ϕ) only for an image consisting of two distinct gray levels:

(13) I(x) = { g1, x ∈ ω; g2, x ∈ Ω∖ω },

where g1, g2 ≥ 0 with g1 ≠ g2, and ω and Ω∖ω represent the object of interest and the background, respectively.

Theorem 1.

Let I(x) be an image given by (13). Then one has

(14) FG(ϕ) = { (Mn − Nm)(g1 − g2)² [(M − N)(N − n)Hε(ϕ) + N((M + n) − (m + N))(1 − Hε(ϕ))] / (N²(M − N)²), in ω; −(Mn − Nm)(g1 − g2)² [n(M − N)Hε(ϕ) + N(m − n)(1 − Hε(ϕ))] / (N²(M − N)²), in Ω∖ω },

and so,

(15) sign(FG(ϕ)) = { +sign(Mn − Nm), in ω; −sign(Mn − Nm), in Ω∖ω },

where

(16) M = |Ω|, m = |ω|, N = |{ϕ > 0}|, n = |ω ∩ {ϕ > 0}|,

in which |Ω| is the area of the region Ω, and similarly for the others.

Remarks. (i) Due to the discrete nature of images, |Ω| is in fact the number of pixels in the image I(x), and similarly for the others.

(ii) The cases n = m, n = N, and n = 0 correspond to zero-level lines of ϕ(x) that encircle the object ω, lie inside the object, and lie within the background Ω∖ω, respectively. The cases 0 < n < min(N, m) and N = n = m correspond to zero-level lines that are partially inside the object and exactly on the object edge, respectively.

(iii) The significance of (15) is that the function FG(ϕ) has opposite signs in ω (object) and Ω∖ω (background).
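This adaptive sign-changing property is easy to check numerically. The following sketch (toy pixel counts of our own choosing, with a sharp binary Heaviside) verifies the factorization of c1 − c2 through Mn − Nm (Eq. (A.2)) that underlies (15):

```python
# Toy pixel counts: M = |Omega|, m = |omega|, N = |{phi > 0}|, n = |omega ∩ {phi > 0}|
M, m, N, n = 100, 30, 40, 20
g1, g2 = 10.0, 2.0   # the two gray levels of Eq. (13)

# Region averages for a sharp (binary) Heaviside, as in Eq. (A.1):
c1 = (g1 * n + g2 * (N - n)) / N
c2 = (g1 * (m - n) + g2 * ((M - N) - (m - n))) / (M - N)

# Eq. (A.2): the difference factors through Mn - Nm, which drives sign(FG)
diff = (g1 - g2) * (M * n - N * m) / (N * (M - N))
```

Here Mn − Nm = 800 > 0, so c1 > c2 and the global force is positive on the object and negative on the background, as (15) predicts.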

The proof of Theorem 1 is provided in Appendix A. The following result will be used in the proof of Theorem 7, which guarantees that the evolution by (11) converges to a unique stable state after finite time.

Corollary 2.

Let I(x) be an image given by (13). Then one has the following.

(i) If Mn − Nm ≥ 0, then

(17) FG(ϕ) ≥ 4(Mn − Nm)(g1 − g2)²/M⁴, in ω; FG(ϕ) ≤ −4(Mn − Nm)(g1 − g2)²/M⁴, in Ω∖ω.

(ii) If Mn − Nm ≤ 0, then

(18) FG(ϕ) ≤ 4(Mn − Nm)(g1 − g2)²/M⁴, in ω; FG(ϕ) ≥ −4(Mn − Nm)(g1 − g2)²/M⁴, in Ω∖ω.

The proof of Corollary 2 is given in Appendix A.

We call the property in Theorem 1 an adaptive sign-changing property of FG(ϕ). Such a property also holds for FL(ϕ), as the following theorem shows.

Theorem 3.

Let I(x) be an image given by (13). Then one has

(19) FL(ϕ) = { (Pq − Qp)(g1 − g2)² [(P − Q)(Q − q)Hε(ϕ) + Q((P + q) − (p + Q))(1 − Hε(ϕ))] / (Q²(P − Q)²), in ω; −(Pq − Qp)(g1 − g2)² [q(P − Q)Hε(ϕ) + Q(p − q)(1 − Hε(ϕ))] / (Q²(P − Q)²), in Ω∖ω },

and so,

(20) sign(FL(ϕ)) = { +sign(Pq − Qp), in ω; −sign(Pq − Qp), in Ω∖ω },

where

(21) P = ∫Ω Kσ(x − ξ) dξ, p = ∫ω Kσ(x − ξ) dξ, Q = ∫_{ϕ>0} Kσ(x − ξ) dξ, q = ∫_{ω∩{ϕ>0}} Kσ(x − ξ) dξ.

This theorem shows that the local force FL(ϕ) has exactly the opposite sign in ω (object) and in Ω∖ω (background).

The following result together with Corollary 2 will be used in the proof of Theorem 7 mentioned later.

Corollary 4.

Under the assumption of Theorem 3, one has the following.

(i) If Pq − Qp ≥ 0, then

(22) FL(ϕ) ≥ 4S²(Pq − Qp)(g1 − g2)²/P⁴, in ω; FL(ϕ) ≤ −4S²(Pq − Qp)(g1 − g2)²/P⁴, in Ω∖ω.

(ii) If Pq − Qp ≤ 0, then

(23) FL(ϕ) ≤ 4S²(Pq − Qp)(g1 − g2)²/P⁴, in ω; FL(ϕ) ≥ −4S²(Pq − Qp)(g1 − g2)²/P⁴, in Ω∖ω,

where

(24) S = min{Kσ(x − ξ): x, ξ ∈ Ω} > 0.

The proofs of Theorem 3 and Corollary 4 are similar to those of Theorem 1 and Corollary 2, respectively; see Appendix B for details.

3.3. Behavior of Our Model

In this section, we analyze the behavior of our model (11) for image segmentation. We will show that the zero-level line of the evolution function, starting from a bounded initial function, finally comes to a unique steady state that separates the object from the background.

Due to the discrete nature of images, the continuous equation (11) can be discretized in space with step 1 (the implied pixel spacing) and in time with step Δt as follows:

(25) ϕ_{i,j}^{k+1} = ϕ_{i,j}^k + Δt · F(ϕ_{i,j}^k), 0 ≤ i ≤ p − 1, 0 ≤ j ≤ q − 1,

where p × q is the image size, I_{i,j} = I(i, j), and ϕ_{i,j}^k = ϕ(kΔt, i, j) with k ≥ 0 and ϕ_{i,j}^0 = ϕ0(i, j).

Corollary 5.

Let Mn0 − N0m = |Ω||ω ∩ {ϕ0 > 0}| − |{ϕ0 > 0}||ω| ≠ 0.

(i) If Mn0 − N0m > 0, then

(26) FG(ϕ^k) ≥ 4(Mn0 − N0m)(g1 − g2)²/M⁴ = A > 0, in ω (k ≥ 1, k ∈ ℤ⁺); FG(ϕ^k) ≤ −4(Mn0 − N0m)(g1 − g2)²/M⁴ = −A < 0, in Ω∖ω (k ≥ 1, k ∈ ℤ⁺).

(ii) If Mn0 − N0m < 0, then

(27) FG(ϕ^k) ≤ 4(Mn0 − N0m)(g1 − g2)²/M⁴ = A < 0, in ω (k ≥ 1, k ∈ ℤ⁺); FG(ϕ^k) ≥ −4(Mn0 − N0m)(g1 − g2)²/M⁴ = −A > 0, in Ω∖ω (k ≥ 1, k ∈ ℤ⁺),

where

(28) A = 4(Mn0 − N0m)(g1 − g2)²/M⁴.

We provide the proof of Corollary 5 in Appendix C. A similar analysis proves the following corollary.

Corollary 6.

Let Pq0 − Q0p ≠ 0.

(i) If Pq0 − Q0p > 0, then

(29) FL(ϕ^k) ≥ 4S²(Pq0 − Q0p)(g1 − g2)²/P⁴ = B > 0, in ω (k ≥ 1, k ∈ ℤ⁺); FL(ϕ^k) ≤ −4S²(Pq0 − Q0p)(g1 − g2)²/P⁴ = −B < 0, in Ω∖ω (k ≥ 1, k ∈ ℤ⁺).

(ii) If Pq0 − Q0p < 0, then

(30) FL(ϕ^k) ≤ 4S²(Pq0 − Q0p)(g1 − g2)²/P⁴ = B < 0, in ω (k ≥ 1, k ∈ ℤ⁺); FL(ϕ^k) ≥ −4S²(Pq0 − Q0p)(g1 − g2)²/P⁴ = −B > 0, in Ω∖ω (k ≥ 1, k ∈ ℤ⁺),

where

(31) B = 4S²(Pq0 − Q0p)(g1 − g2)²/P⁴.

Theorem 7.

If ϕ0(x) is a bounded function with (Mn0 − N0m)(Pq0 − Q0p) > 0, then there exists a positive integer K such that, for all k ≥ K,

(32) ϕ_{i,j}^{k+1} > 0 in ω and ϕ_{i,j}^{k+1} < 0 in Ω∖ω, if Mn0 − N0m > 0 and Pq0 − Q0p > 0;

(33) ϕ_{i,j}^{k+1} < 0 in ω and ϕ_{i,j}^{k+1} > 0 in Ω∖ω, if Mn0 − N0m < 0 and Pq0 − Q0p < 0.

We provide the proof of Theorem 7 in Appendix D.

The significance of Theorem 7 is that if we choose ϕ0(x) such that (Mn0 − N0m)(Pq0 − Q0p) > 0, then the zero-level line of ϕ(x), starting from such an initial function ϕ0(x), finally comes to a unique steady state that separates the object from the background.

Remark. Mathematically, there do exist ϕ0(x) such that (Mn0 − N0m)(Pq0 − Q0p) ≤ 0; in practice, however, we can always guarantee that

(34) (Mn0 − N0m)(Pq0 − Q0p) > 0.

3.4. Discussion of Initial Function

Theorem 7 guarantees that the proposed model computes a unique steady state regardless of the initialization; that is, the convergence of the zero-level line {ϕ=0} is irrespective of the initial function. This means that we can obtain the same zero-level line in the steady state if we choose the initial function as a bounded function ϕ0 with (Mn0-N0m)(Pq0-Q0p)>0.

In applications, the initial function ϕ0 can be defined via a simple curve (a closed curve or a line segment) in the image domain. For example, we can choose the initial function as the signed distance to a circle, which is widely used in level-set-based image segmentation models. For the proposed model, however, we prefer to define the initial function ϕ0(x) as a piecewise constant or constant function, as follows.

If the curve C is a closed curve (e.g., a circle or square), then ϕ0(x) is defined by

(35) ϕ0(x) = { +ρ, x inside C; −ρ, x outside C },

where ρ ≠ 0 is a constant.

If the curve C is a line segment that partitions the image domain Ω into two disjoint regions Ω1 and Ω2 (e.g., the left and right halves of Ω) with Ω = Ω1 ∪ Ω2 and Ω1 ∩ Ω2 = ∅, then ϕ0(x) is defined by

(36) ϕ0(x) = { +ρ, x ∈ Ω1; −ρ, x ∈ Ω2 },

where ρ ≠ 0 is a constant.

We can also define a zero function as follows: (37)ϕ0(x)=0.
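The three initializations (35)-(37) can be sketched as follows (function names and the array conventions are ours):

```python
import numpy as np

def init_circle(shape, center, radius, rho=1.0):
    """Piecewise constant initialization, Eq. (35): +rho inside C, -rho outside."""
    y, x = np.indices(shape)
    inside = (x - center[1])**2 + (y - center[0])**2 < radius**2
    return np.where(inside, rho, -rho)

def init_halfplane(shape, rho=1.0):
    """Piecewise constant initialization, Eq. (36), from a vertical line segment
    splitting the domain into left (+rho) and right (-rho) halves."""
    phi = -rho * np.ones(shape)
    phi[:, : shape[1] // 2] = rho
    return phi

def init_zero(shape):
    """Zero initialization, Eq. (37)."""
    return np.zeros(shape)
```

Any nonzero ρ works because, by Theorem 7, the steady state does not depend on the particular bounded initial function.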

Next, we prove that, with ϕ0(x) = 0, ϕ(x) becomes a sign-changing function and satisfies the condition (Mn1 − N1m)(Pq1 − Q1p) > 0 after the first iteration.

Theorem 8.

If the initial function is ϕ0(x) = 0 in Ω and

(38) c1^0 = f1^0(x) = 2 · mean_{x∈Ω} I(x), c2^0 = f2^0(x) = 0,

then after the first iteration one has

(39) sign(F(0)) = { −1, x ∈ Ω1; +1, x ∈ Ω2 },

where Ω1 ∩ Ω2 = ∅, Ω1 ≠ ∅, and Ω2 ≠ ∅ are the regions defined in (E.4).

We provide the proof of Theorem 8 in Appendix E.

By (25) and Theorem 8, we have

(40) ϕ1(x) = 2Δt · δε(0) · (mean_{x∈Ω} I(x)) · (I − mean_{x∈Ω} I(x)) { < 0, x ∈ Ω1; > 0, x ∈ Ω2 }.

Therefore, ϕ1(x) becomes a sign-changing function. Then, for the two-level image (13) (say g1 > g2, so that {ϕ1 > 0} = ω), we have

(41) (Mn1 − N1m)(Pq1 − Q1p) = (m(M − N1))(p(P − Q1)) = mp(M − N1)(P − Q1) > 0.
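A small numeric demonstration of this fact (the toy image and parameter values are ours): one update step from ϕ0 = 0 with the fitting values (38) yields a ϕ1 whose sign matches that of I minus its mean.

```python
import numpy as np

eps, dt = 1.5, 0.025
I = np.array([[1.0, 1.0, 9.0],
              [1.0, 9.0, 9.0]])   # toy two-level image
phi = np.zeros_like(I)            # phi^0 = 0, Eq. (37)
mean = I.mean()
c1 = f1 = 2.0 * mean              # initialization of Theorem 8, Eq. (38)
c2 = f2 = 0.0

H = 0.5 * (1 + (2 / np.pi) * np.arctan(phi / eps))  # = 1/2 at phi = 0
delta = (1 / np.pi) * eps / (eps**2 + phi**2)
IG = c1 * H + c2 * (1 - H)                          # = mean of I
# With c1 = f1 and c2 = f2, the global and local terms of Eq. (11) coincide,
# so F(0) is independent of omega:
F = delta * ((I - IG) * (c1 - c2))
phi1 = phi + dt * F               # one step of Eq. (25)
```

The resulting ϕ1 is negative exactly where I is below its mean and positive where it is above, as Theorem 8 states.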

4. Implementation and Experimental Results

4.1. Implementation

In traditional PDE-based methods, a diffusion term is usually included in the evolution equation to regularize the evolving function, which increases the computational cost. Recently, Zhang et al. [16] proposed a novel scheme to regularize the evolving function, namely, Gaussian filtering the evolving function after each iteration. We adopt this scheme to regularize the evolving function ϕ at each iteration; that is, ϕ^n = Gρ * ϕ^n, where ρ controls the regularization strength. This procedure has some advantages over the traditional regularization term, such as computational efficiency and a better smoothing effect; see [16] for more explanations.

The main procedures of the proposed algorithm can be summarized as follows.

Step 1. Initialize the evolving function ϕ0(x).

Step 2. Compute ci(ϕ) and fi(ϕ) (i = 1, 2) using (10).

Step 3. Evolve the function ϕ according to (11).

Step 4. Regularize the function ϕ with a Gaussian filter; that is, ϕ^n = Gρ * ϕ^n.

Step 5. Check whether the evolution has converged. If not, return to Step 2.

4.2. Experimental Results

In this section, we show experimentally that the proposed model not only can provide desirable segmentation results in the presence of intensity inhomogeneity but also allows for more flexible initialization of the contour compared to the RSF and LIF models.

In our numerical experiments, we choose the following parameters for our model: ε = 1.5 for the regularized Dirac function, Δt = 0.025 (time step), h = 1 (space step), ω = 0.1, and n = 5. The MATLAB source code of the LIF model was downloaded from http://www4.comp.polyu.edu.hk/~cslzhang/code/LIF.zip, and that of the RSF model from http://www.engr.uconn.edu/~cmli/code/RSF_v0_v0.1.rar. All experiments were run in MATLAB 2010b on a PC with a dual-core 2.7 GHz processor.

We utilize two region overlap metrics to evaluate the performances of the three models quantitatively: the ratio of segmentation error (RSE) and the dice similarity coefficient (DSC) [14, 18, 19]. If S1 and S2 represent a given baseline foreground region (e.g., the true object) and the foreground region found by the model, respectively, then the two metrics are defined as follows:

(42) RSE = (N(S1∖S2) + N(S2∖S1)) / N(Ω), DSC = 2N(S1 ∩ S2) / (N(S1) + N(S2)),

where N(·) indicates the number of pixels in the enclosed region and Ω is the image domain. The closer the DSC value is to 1, the better the segmentation. Since N(S1∖S2) + N(S2∖S1) is the number of pixels misclassified by the model, a lower RSE means that fewer pixels are misclassified; that is, the image is segmented more accurately. Thus, a perfect segmentation gives DSC = 1 and RSE = 0.
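These two metrics can be computed directly from binary masks (a short sketch; names are ours):

```python
import numpy as np

def rse_dsc(truth, seg):
    """Ratio of segmentation error and Dice similarity coefficient, Eq. (42)."""
    truth, seg = truth.astype(bool), seg.astype(bool)
    mis = (truth & ~seg).sum() + (seg & ~truth).sum()  # misclassified pixels
    rse = mis / truth.size                             # N(.) over the whole domain
    dsc = 2 * (truth & seg).sum() / (truth.sum() + seg.sum())
    return rse, dsc
```

For identical masks this returns (0, 1), the perfect-segmentation values.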

In the first example (Figures 1–3), we mainly verify the computation of a unique steady state of the zero-level line of ϕ, starting from three types of representative initial functions: a signed distance function, the piecewise constant functions (35)-(36), and the zero function (37). The top row of Figure 1 shows the evolution of an active contour (i.e., the zero-level line {ϕ = 0}), with ϕ initialized to a signed distance function, the piecewise constant functions (35)-(36) with ρ = 1, and the zero function, respectively, while the bottom row shows the corresponding evolution of ϕ. With all of these initializations, the zero-level line of ϕ converges to the same steady state.

Segmentations of our model for two real images with ϕ initialized as different functions. Columns 1 to 4: signed distance function, piecewise constant functions (35)-(36) with ρ = 1, and zero function.

Applications of our model to four images (slug, cell, ventriculus sinister MR, and real plane images). The curve evolution process from the initial contour (in the first column) to the final contour (in the fourth column) is shown in every row for the corresponding image.

Comparisons of three models. Rows 1–3: RSF model, LIF model, and our model with ϕ0 = 0. For the RSF and LIF models, initial and final contours are shown in green and red, respectively. Columns 1–4 correspond to the images (a), (b), (c), and (d), respectively.

In Figure 2, we show that our model has the capability of detecting multiple objects or objects with interior holes or blurred edges, starting only from a zero function. The contour (zero-level line) evolution processes are shown in the second to the fourth columns.

We choose the segmentation results of the RSF and LIF models as baseline foreground regions and then compute the RSE and DSC values for the corresponding images. The RSE and DSC values for the four real images are given in Table 1, from which we can see that the proposed algorithm works as well as the RSF and LIF models for images with intensity inhomogeneity.

RSE and DSC for the RSF and LIF models and our model.

               RSF model            LIF model
               RSE (%)   DSC (%)    RSE (%)   DSC (%)
Figure 3(a)    0.27      97.17      2.99      98.91
Figure 3(b)    2.71      96.37      0.12      99.13
Figure 3(c)    0.49      87.95      0.45      88.06
Figure 3(d)    0.16      97.66      0.76      98.91

The experimental results shown in Figure 4 validate that our method can also achieve subpixel segmentation accuracy, as the LIF model does. As can be seen from Figures 4(b) and 4(d), both models achieve subpixel segmentation accuracy at the finger boundaries. The final contour accurately reflects the true hand shape.

Segmentation results of both models for a hand phantom. Upper row: LIF model. Lower row: our model with ϕ0=0. Column 2: zoomed view of the narrow parts in hand fingers.

5. Conclusion

In this study, we have proposed a new active contour model integrating a local intensity fitting (LIF) energy with an auxiliary global intensity fitting (GIF) energy. The LIF energy is responsible for attracting the contour toward object boundaries and is dominant near them, while the GIF energy incorporates global image information to improve the robustness to initialization of the contours. The proposed model can efficiently handle intensity inhomogeneity while allowing for more flexible initialization of the contour and maintaining subpixel accuracy, and it is capable of detecting multiple objects and objects with interior holes or blurred edges.

Appendices

A. Proofs of Theorem 1 and Corollary 2

Proof of Theorem 1.

Clearly, {ϕ > 0} ≠ ∅ and {ϕ < 0} ≠ ∅, owing to the sign-changing property of ϕ(x). Thus, by (3), we have

(A.1) c1(ϕ) = (g1n + g2(N − n))/N = ((g1 − g2)n + g2N)/N, c2(ϕ) = (g1(m − n) + g2[(M − N) − (m − n)])/(M − N) = ((g1 − g2)(m − n) + g2(M − N))/(M − N),

and so,

(A.2) c1(ϕ) − c2(ϕ) = ((M − N)[(g1 − g2)n + g2N] − N[(g1 − g2)(m − n) + g2(M − N)]) / (N(M − N)) = (g1 − g2)(Mn − Nm)/(N(M − N)).

Therefore, in ω, we get

(A.3) FG(ϕ) = (c1(ϕ) − c2(ϕ))((g1 − c1(ϕ))Hε(ϕ) + (g1 − c2(ϕ))(1 − Hε(ϕ))) = [(g1 − g2)(Mn − Nm)/(N(M − N))] · (g1 − g2)[(N − n)/N · Hε(ϕ) + ((M + n) − (m + N))/(M − N) · (1 − Hε(ϕ))] = (Mn − Nm)(g1 − g2)²/(N²(M − N)²) · [(M − N)(N − n)Hε(ϕ) + N((M + n) − (m + N))(1 − Hε(ϕ))],

and in Ω∖ω, we have

(A.4) FG(ϕ) = (c1(ϕ) − c2(ϕ))((g2 − c1(ϕ))Hε(ϕ) + (g2 − c2(ϕ))(1 − Hε(ϕ))) = −(Mn − Nm)(g1 − g2)²/(N²(M − N)²) · [n(M − N)Hε(ϕ) + N(m − n)(1 − Hε(ϕ))].

This completes the proof of (14). The last assertion (15) follows clearly from the facts

(A.5) (M − N)(N − n)Hε(ϕ) + N((M + n) − (m + N))(1 − Hε(ϕ)) ≥ Hε(ϕ) + (1 − Hε(ϕ)) = 1,

(A.6) n(M − N)Hε(ϕ) + N(m − n)(1 − Hε(ϕ)) ≥ Hε(ϕ) + (1 − Hε(ϕ)) = 1,

which hold because the integer coefficients of Hε(ϕ) and 1 − Hε(ϕ) are at least 1. The proof of Theorem 1 is completed.

Now, we give the proof of Corollary 2. (i) Since

(A.7) M² = (N + (M − N))² = N² + (M − N)² + 2N(M − N) ≥ 4N(M − N),

we have N²(M − N)² ≤ M⁴/16, and hence, by (14), (A.5), and (A.7),

(A.8) FG(ϕ) = (Mn − Nm)(g1 − g2)² [(M − N)(N − n)Hε(ϕ) + N((M + n) − (m + N))(1 − Hε(ϕ))] / (N²(M − N)²) ≥ 16(Mn − Nm)(g1 − g2)²/M⁴ ≥ 4(Mn − Nm)(g1 − g2)²/M⁴, in ω,

and, by (14) and (A.6),

(A.9) FG(ϕ) = −(Mn − Nm)(g1 − g2)² [n(M − N)Hε(ϕ) + N(m − n)(1 − Hε(ϕ))] / (N²(M − N)²) ≤ −4(Mn − Nm)(g1 − g2)²/M⁴, in Ω∖ω.

This completes the proof of (17); the proof of (18) is similar. The proof is completed.

B. Proofs of Theorem 3 and Corollary 4

Proof of Theorem 3.

By (6), we have

(B.1) f1(x) = ∫_{ϕ>0} Kσ(x − ξ)I(ξ) dξ / ∫_{ϕ>0} Kσ(x − ξ) dξ = (g1 ∫_{ω∩{ϕ>0}} Kσ(x − ξ) dξ + g2 ∫_{{ϕ>0}∖ω} Kσ(x − ξ) dξ) / ∫_{ϕ>0} Kσ(x − ξ) dξ = (g1q + g2(Q − q))/Q,

(B.2) f2(x) = ∫_{ϕ≤0} Kσ(x − ξ)I(ξ) dξ / ∫_{ϕ≤0} Kσ(x − ξ) dξ = (g1(p − q) + g2[(P − Q) − (p − q)])/(P − Q),

and so,

(B.3) f1(x) − f2(x) = (g1 − g2)(Pq − Qp)/(Q(P − Q)).

Therefore, in ω, we get

(B.4) FL(ϕ) = (f1(x) − f2(x))((g1 − f1(x))Hε(ϕ) + (g1 − f2(x))(1 − Hε(ϕ))) = (Pq − Qp)(g1 − g2)² [(P − Q)(Q − q)Hε(ϕ) + Q((P + q) − (p + Q))(1 − Hε(ϕ))] / (Q²(P − Q)²),

and in Ω∖ω, we have

(B.5) FL(ϕ) = (f1(x) − f2(x))((g2 − f1(x))Hε(ϕ) + (g2 − f2(x))(1 − Hε(ϕ))) = −(Pq − Qp)(g1 − g2)² [q(P − Q)Hε(ϕ) + Q(p − q)(1 − Hε(ϕ))] / (Q²(P − Q)²).

This completes the proof of (19). The last assertion (20) follows clearly from the facts

(B.6) (P − Q)(Q − q)Hε(ϕ) + Q((P + q) − (p + Q))(1 − Hε(ϕ)) ≥ S²[(M − N)(N − n)Hε(ϕ) + N((M + n) − (m + N))(1 − Hε(ϕ))] ≥ S²,

(B.7) q(P − Q)Hε(ϕ) + Q(p − q)(1 − Hε(ϕ)) ≥ S²[n(M − N)Hε(ϕ) + N(m − n)(1 − Hε(ϕ))] ≥ S²,

which hold because every integral of Kσ over a region R ⊆ Ω satisfies ∫_R Kσ(x − ξ) dξ ≥ S|R|, where S = min{Kσ(x − ξ): x, ξ ∈ Ω} > 0.

Now, we give the proof of Corollary 4. (i) Since

(B.8) P² = (Q + (P − Q))² ≥ 4Q(P − Q),

we have Q²(P − Q)² ≤ P⁴/16, so by (19) and (B.6),

(B.9) FL(ϕ) = (Pq − Qp)(g1 − g2)² [(P − Q)(Q − q)Hε(ϕ) + Q((P + q) − (p + Q))(1 − Hε(ϕ))] / (Q²(P − Q)²) ≥ 4S²(Pq − Qp)(g1 − g2)²/P⁴, in ω,

and by (19) and (B.7),

(B.10) FL(ϕ) = −(Pq − Qp)(g1 − g2)² [q(P − Q)Hε(ϕ) + Q(p − q)(1 − Hε(ϕ))] / (Q²(P − Q)²) ≤ −4S²(Pq − Qp)(g1 − g2)²/P⁴, in Ω∖ω.

This completes the proof of (22); the proof of (23) is similar. The proof is completed.

C. Proof of Corollary 5

Proof.

(i) By Corollary 2(i), we have

(C.1) FG(ϕ0) ≥ 4(Mn0 − N0m)(g1 − g2)²/M⁴ = A > 0, in ω; FG(ϕ0) ≤ −4(Mn0 − N0m)(g1 − g2)²/M⁴ = −A < 0, in Ω∖ω.

At the first iteration, by (25) and (C.1), we have

(C.2) ϕ_{i,j}^1 = ϕ_{i,j}^0 + Δt · δε(ϕ^0)FG(ϕ_{i,j}^0) ≥ ϕ_{i,j}^0 + Δt · δε(ϕ^0)A, in ω; ϕ_{i,j}^1 ≤ ϕ_{i,j}^0 − Δt · δε(ϕ^0)A, in Ω∖ω,

or equivalently,

(C.3) ϕ1(x) ≥ ϕ0(x) + Δt · δε(ϕ0)A > ϕ0(x), in ω; ϕ1(x) ≤ ϕ0(x) − Δt · δε(ϕ0)A < ϕ0(x), in Ω∖ω.

Hence ω ∩ {ϕ0 > 0} ⊆ ω ∩ {ϕ1 > 0} and (Ω∖ω) ∩ {ϕ1 > 0} ⊆ (Ω∖ω) ∩ {ϕ0 > 0}, which implies, using |Ω| = |Ω∖ω| + |ω| and |{ϕ > 0}| = |ω ∩ {ϕ > 0}| + |(Ω∖ω) ∩ {ϕ > 0}|, that

(C.4) Mn1 − N1m = |Ω||ω ∩ {ϕ1 > 0}| − |{ϕ1 > 0}||ω| = |Ω∖ω||ω ∩ {ϕ1 > 0}| − |(Ω∖ω) ∩ {ϕ1 > 0}||ω| ≥ |Ω∖ω||ω ∩ {ϕ0 > 0}| − |(Ω∖ω) ∩ {ϕ0 > 0}||ω| = Mn0 − N0m > 0.

Repeating the same steps used to obtain (C.1)–(C.4), we obtain

(C.5) FG(ϕ1) ≥ 4(Mn1 − N1m)(g1 − g2)²/M⁴ ≥ 4(Mn0 − N0m)(g1 − g2)²/M⁴ = A > 0, in ω; FG(ϕ1) ≤ −A < 0, in Ω∖ω; ϕ2(x) ≥ ϕ1(x) in ω, ϕ2(x) ≤ ϕ1(x) in Ω∖ω; and Mn2 − N2m ≥ Mn1 − N1m ≥ Mn0 − N0m > 0.

By induction, (26) holds for every k ≥ 1. The case Mn0 − N0m < 0 is proved similarly.

D. Proof of Theorem 7

Proof.

We prove (32); the proof of (33) is similar. By (25), Corollary 5(i), and Corollary 6(i), we get

(D.1) ϕ_{i,j}^{k+1} = ϕ_{i,j}^k + Δt · F(ϕ_{i,j}^k) = ϕ_{i,j}^k + Δt · δε(ϕ^k)(ωFG(ϕ_{i,j}^k) + (1 − ω)FL(ϕ_{i,j}^k)) ≥ ϕ_{i,j}^k + (ωA + (1 − ω)B) · δε(ϕ^k) · Δt, in ω (k ≥ 1, k ∈ ℤ⁺); ϕ_{i,j}^{k+1} ≤ ϕ_{i,j}^k − (ωA + (1 − ω)B) · δε(ϕ^k) · Δt, in Ω∖ω (k ≥ 1, k ∈ ℤ⁺),

which implies that

(D.2) ϕ_{i,j}^{k+1} ≥ ϕ_{i,j}^1 + (ωA + (1 − ω)B) · Σ_{i=1}^{k} δε(ϕ^i) · Δt, in ω; ϕ_{i,j}^{k+1} ≤ ϕ_{i,j}^1 − (ωA + (1 − ω)B) · Σ_{i=1}^{k} δε(ϕ^i) · Δt, in Ω∖ω.

Because ϕ0(x) is bounded in the domain Ω, and δε(ϕ^i) is bounded below by a positive constant as long as ϕ^i stays bounded, the sums in (D.2) grow without bound; hence there clearly exists a positive integer K such that

(D.3) ϕ_{i,j}^{k+1} > 0, in ω; ϕ_{i,j}^{k+1} < 0, in Ω∖ω, for k ≥ K.

The proof is completed.

E. Proof of Theorem 8

Proof.

From (11) and (38), noting that Hε(0) = 1/2, so that IG = IL = mean_{x∈Ω} I(x), we have

(E.1) F(0) = δε(0)[ω(I − IG)(c1^0 − c2^0) + (1 − ω)(I − IL)(f1^0 − f2^0)] = δε(0)[ω(I − mean_{x∈Ω} I(x)) · 2 mean_{x∈Ω} I(x) + (1 − ω)(I − mean_{x∈Ω} I(x)) · 2 mean_{x∈Ω} I(x)] = 2δε(0) · (mean_{x∈Ω} I(x)) · (I − mean_{x∈Ω} I(x)).

Clearly, for a nonconstant image,

(E.2) min_{x∈Ω} I(x) < mean_{x∈Ω} I(x) < max_{x∈Ω} I(x),

which, since mean_{x∈Ω} I(x) > 0 for a nonnegative nonconstant image, implies that

(E.3) sign(F(0)) = { −1, x ∈ Ω1; +1, x ∈ Ω2 },

where

(E.4) Ω1 = {x: I(x) < mean_{x∈Ω} I(x)}, Ω2 = {x: I(x) > mean_{x∈Ω} I(x)}.

The proof of Theorem 8 is completed.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve this paper. This work was supported by the Fundamental Research Funds for the Central Universities (Grant no. CDJXS10100006).

References

[1] V. Caselles, F. Catté, T. Coll, and F. Dibos, "A geometric model for active contours in image processing," Numerische Mathematik, vol. 66, no. 1, pp. 1–31, 1993.
[2] T. Chan, M. Moelich, and B. Sandberg, "Some recent developments in variational image segmentation," UCLA CAM Report 06-52, 2006.
[3] L. He, Z. Peng, B. Everding, X. Wang, C. Y. Han, K. L. Weiss, and W. G. Wee, "A comparative study of deformable contour methods on medical image segmentation," Image and Vision Computing, vol. 26, no. 2, pp. 141–163, 2008.
[4] C. Li, C. Xu, C. Gui, and M. D. Fox, "Level set evolution without re-initialization: a new variational formulation," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '05), San Diego, Calif, USA, June 2005, pp. 430–436.
[5] C. Li, C. Xu, C. Gui, and M. D. Fox, "Distance regularized level set evolution and its application to image segmentation," IEEE Transactions on Image Processing, vol. 19, no. 12, pp. 3243–3254, 2010.
[6] B. Zhou and C. L. Mu, "Level set evolution for boundary extraction based on a p-Laplace equation," Applied Mathematical Modelling, vol. 34, no. 12, pp. 3910–3916, 2010.
[7] Y. Wang and C. He, "Adaptive level set evolution starting with a constant function," Applied Mathematical Modelling, vol. 36, no. 7, pp. 3217–3228, 2012.
[8] T. F. Chan and L. A. Vese, "Active contours without edges," IEEE Transactions on Image Processing, vol. 10, no. 2, pp. 266–277, 2001.
[9] A. Tsai, A. Yezzi, and A. S. Willsky, "Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification," IEEE Transactions on Image Processing, vol. 10, no. 8, pp. 1169–1186, 2001.
[10] T. Brox and D. Cremers, "On the statistical interpretation of the piecewise smooth Mumford-Shah functional," in Proceedings of the International Conference on Scale Space and Variational Methods in Computer Vision (SSVM '07), Ischia, Italy, 2007, pp. 203–213.
[11] C. Li, C. Y. Kao, J. C. Gore, and Z. Ding, "Implicit active contours driven by local binary fitting energy," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '07), Washington, DC, USA, June 2007, pp. 1–7.
[12] C. Li, C. Y. Kao, J. C. Gore, and Z. Ding, "Minimization of region-scalable fitting energy for image segmentation," IEEE Transactions on Image Processing, vol. 17, no. 10, pp. 1940–1949, 2008.
[13] S. Lankton and A. Tannenbaum, "Localizing region-based active contours," IEEE Transactions on Image Processing, vol. 17, no. 11, pp. 2029–2039, 2008.
[14] L. Wang, C. Li, Q. Sun, D. Xia, and C. Y. Kao, "Active contours driven by local and global intensity fitting energy with application to brain MR image segmentation," Computerized Medical Imaging and Graphics, vol. 33, no. 7, pp. 520–531, 2009.
[15] K. Zhang, H. Song, and L. Zhang, "Active contours driven by local image fitting energy," Pattern Recognition, vol. 43, no. 4, pp. 1199–1206, 2010.
[16] K. Zhang, L. Zhang, H. Song, and W. Zhou, "Active contours with selective local or global segmentation: a new formulation and level set method," Image and Vision Computing, vol. 28, no. 4, pp. 668–676, 2010.
[17] Y. Yuan and C. He, "Variational level set methods for image segmentation based on both L2 and Sobolev gradients," Nonlinear Analysis: Real World Applications, vol. 13, no. 2, pp. 959–966, 2012.
[18] B. Liu, H. D. Cheng, J. Huang, J. Tian, X. Tang, and J. Liu, "Probability density difference-based active contour for ultrasound image segmentation," Pattern Recognition, vol. 43, no. 6, pp. 2028–2042, 2010.
[19] D. W. Shattuck, S. R. Sandor-Leahy, K. A. Schaper, D. A. Rottenberg, and R. M. Leahy, "Magnetic resonance image tissue classification using a partial volume model," NeuroImage, vol. 13, no. 5, pp. 856–876, 2001.