Computational and Mathematical Methods in Medicine (ISSN 1748-670X print, 1748-6718 online), Hindawi Publishing Corporation, Volume 2015, Article ID 178102, doi:10.1155/2015/178102

Research Article

A Multiscale Constraints Method for Localization of 3D Facial Feature Points

Hong-an Li,1,2 Yongxin Zhang,3 Zhanli Li,1,2 and Huilin Li1; Academic Editor: Edite Figueiras

1 College of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an 710054, China
2 School of Mechanical Engineering, Xi'an University of Science and Technology, Xi'an 710054, China
3 School of Information Technology, Luoyang Normal University, Luoyang 471022, China

Received 30 March 2015; Revised 7 June 2015; Accepted 23 June 2015; Published 11 October 2015

Copyright © 2015 Hong-an Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Locating facial feature points is an important task, given the widespread application of 3D human face models in medicine. In this paper, we propose a 3D facial feature point localization method that combines relative angle histograms with multiscale constraints. First, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using a clustering algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the localization accuracy of this algorithm is better than that of the method based on relative angle histograms alone.

1. Introduction

With the development of 3D information acquisition technology, research on 3D facial features has gained increasingly wide attention. Automatic localization of facial feature points is a research hotspot in medical computer vision and a precondition for face recognition, face animation, face tracking, and 3D face reconstruction. At present, 3D facial feature point localization algorithms have not been researched in depth, although 2D facial feature point localization algorithms have matured. Wang et al. applied the Jet Bunch algorithm of 2D facial feature point localization to the localization of 3D facial feature points; Xu et al. proposed a hierarchical filtering algorithm that can locate the nasal tip feature point. Although these methods help detect other key facial feature points, they are sensitive to noise, lack rotation and translation invariance, and generalize poorly, so the localization results are unsatisfactory for all feature points except the nasal tip. Feng et al. proposed a feature point localization algorithm based on relative angle histograms: in a 3D point distribution model, the relative angle histograms of all points are calculated first, and the feature points are then located according to the similarity of the histograms. The algorithm is efficient and stable, but it only narrows the feature points down to a small region, so the result is inaccurate. Li and Da suggested a 3D facial feature point localization algorithm that combines a priori knowledge with differential characteristics, but the result is not satisfactory because of the uniqueness and complex diversity of face models.

To solve these problems, we put forward a 3D facial feature point localization method based on relative angle histograms and multiscale constraints. The cluster point set of the facial features is created first, and the multiscale integral characteristics are then used to locate the feature points accurately. As a result, the accuracy of feature point localization is improved.

2. Model Preprocessing

2.1. Coordinate System Transformation

CT (computed tomography) images are acquired from living subjects in a hospital, and their contour lines are extracted using a method that combines an improved snake algorithm with the ray method. To obtain a 3D face sample constructed as a single-layer triangular mesh, the 3D surface is reconstructed using the Ganapathy algorithm to connect neighboring contour lines. Because the head posture of each subject differs during CT acquisition, the reconstructed single-layer 3D models inevitably lie in different coordinate systems. For the convenience of defining the feature points, all models are transformed into a uniform coordinate system.

A face model (FM) is usually a triangular mesh composed of vertexes (V) and triangular facets (F), so it can be expressed as FM = (V, F). Let v_i be the i-th point of the FM, 1 ≤ i ≤ N, with coordinates v_i = (x_i, y_i, z_i)^T. The center of the FM is computed as O = (1/N) Σ_{i=1}^{N} v_i, and the point set V is then translated into the coordinate system whose origin is O.

The point set V is taken as a 3D random variable, and a positive definite covariance matrix C can be obtained from the transformed V. The three largest eigenvalues λ_1, λ_2, λ_3 (λ_1 ≥ λ_2 ≥ λ_3 ≥ 0) of C are calculated, and the corresponding eigenvectors form E = [e_1, e_2, e_3], where E is an orthogonal matrix satisfying E^T E = I, with I the identity matrix.

According to the eigenvalue equation C e_i = λ_i e_i, E and the eigenvalues can be computed. Define U as a linear transform of V, U = E^T (V − O); U is decorrelated, since Cov(U) = E^T C E = Λ, and the covariance between components U_i and U_j is

(1) Cov(U_i, U_j) = e_i^T C e_j = λ_i if i = j, and 0 if i ≠ j.

U_i (1 ≤ i ≤ 3) is taken as the i-th principal component of the variable V, and we then construct the coordinate system shown in Figure 1, whose origin is O and whose coordinate axes are U_1, U_2, U_3.

The coordinate system of a model.
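The transformation of Section 2.1 can be sketched in a few lines of NumPy: center the point set at its centroid O, diagonalize the covariance matrix C, and express the points in the eigenvector basis E. This is an illustrative implementation under our own naming, not the authors' code; the elongated point cloud is synthetic:

```python
import numpy as np

def align_to_principal_axes(V):
    """Align a point set V (N x 3) to its principal-axis coordinate system.

    Returns U = E^T (V - O): coordinates relative to the centroid O, with
    axes given by the eigenvectors of the covariance matrix C, ordered by
    decreasing eigenvalue (lambda1 >= lambda2 >= lambda3).
    """
    O = V.mean(axis=0)                 # centroid O = (1/N) sum v_i
    C = np.cov((V - O).T)              # 3x3 covariance matrix
    eigvals, E = np.linalg.eigh(C)     # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]  # reorder columns to descending
    E = E[:, order]
    return (V - O) @ E                 # row-wise form of U = E^T (V - O)

# synthetic point cloud, elongated along the x axis
rng = np.random.default_rng(1)
V = rng.normal(size=(1000, 3)) * np.array([5.0, 1.0, 0.2])
U = align_to_principal_axes(V)
```

After the transform, U is centered at the origin, its components are uncorrelated, and the variances decrease along the axes U_1, U_2, U_3, as equation (1) requires.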

2.2. Feature Point Definition

The feature points of a 3D face must have semantic meaning and lie at recognizable key positions. Within one sample, a feature point is distinct from its neighboring points in geometric characteristics, while across different samples the characteristics of corresponding feature points are similar. To meet the experimental requirements, a total of 39 feature points covering the front half of the face are defined according to anthropology, anatomy, and the feature points defined by the MPEG-4 expert group [7, 8]. The positions of the feature points are shown in Figure 2.

Feature points of the single layer face model.

3. Feature Point Cluster Based on the Relative Angle Histograms

3.1. Relative Angle Histograms

After the face model is unified into the common coordinate system, the next step is to calculate the relative angle histogram of each point in the face model. Following Feng et al., for any point v_i of the face model we measure the angle between the vector v_i v_j (1 ≤ j ≤ N, j ≠ i) and the axis U_1; after a transformation, the angle's value range is [0, 2π].

The relative angles of a point describe its spatial relationship to every other point in the face model; they possess rotation, translation, and scaling invariance, as well as good robustness to noise. Each point of the model has N − 1 relative angles, which form a relative angle distribution. These distributions have statistical characteristics, and the distribution curves describe in detail the global characteristics of every point in the model.

For any point v_i in the model, calculate the j-th relative angle Ang_{i,j} (1 ≤ j ≤ N, j ≠ i). The value range [0, 2π] is divided into 40 bins, and the number of angles Ang_{i,j} falling into each bin is counted; the statistical result is the relative angle histogram of point v_i. Figure 3 shows the relative angle distributions of the forehead center and right mouth corner feature points, as well as the distributions superimposed across different samples. It is obvious that different feature points have different distributions, while the same feature point in different samples has similar distributions. Similar feature points are therefore clustered into one group using the clustering method, which sets up the cluster set of the feature points.

The relative angle distribution of the feature points. (a) The feature points of the forehead center: the left is the relative angle distribution of forehead center feature points and the right is the relative angle distribution of superimposed forehead center feature points. (b) The feature points of the right mouth corner: the left is the relative angle distribution of right mouth corner feature points and the right is the relative angle distribution of superimposed right mouth corner feature points.
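The histogram computation can be sketched as follows, under two stated assumptions: we take U_1 as a fixed reference axis, and, since the raw angle between two vectors lies in [0, π] while the paper's transform to [0, 2π] is not detailed, this sketch simply bins the raw angle into 40 bins:

```python
import numpy as np

def relative_angle_histogram(V, i, axis=np.array([1.0, 0.0, 0.0]), bins=40):
    """40-bin histogram of the angles between vectors v_i -> v_j (j != i)
    and a reference axis (U_1), normalized so the bins sum to 1.

    Assumption: raw angles in [0, pi] are binned directly; the paper's
    mapping to [0, 2*pi] is omitted in this sketch.
    """
    d = np.delete(V - V[i], i, axis=0)              # the N-1 difference vectors
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    cosang = np.clip(d @ axis / np.linalg.norm(axis), -1.0, 1.0)
    ang = np.arccos(cosang)                          # angles in [0, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi))
    return hist / hist.sum()                         # normalize for comparison

rng = np.random.default_rng(0)
V = rng.normal(size=(500, 3))                        # synthetic point cloud
h = relative_angle_histogram(V, 0)
```

Normalizing the histogram makes it directly comparable between models with different vertex counts, which matters in the clustering step of Section 3.2.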

3.2. Feature Point Cluster

There are 39 feature points in a standard face model a, whose relative angle histograms are denoted by h_i(a), i = 1, 2, ..., 39. The unlocated face sample is denoted by b; its number of vertexes is L, and the relative angle histogram of each vertex is h_j(b), j = 1, 2, ..., L. Let N_i denote the cluster size for the i-th feature point. The steps to determine the i-th feature point cluster set of model b are as follows.

Step 1.

According to the prior knowledge, representative points are selected as the initial cluster center CC_i of each group.

Step 2.

Calculate the similarity between CC_i and the L points of model b, and sort the points in decreasing order of similarity.

Step 3.

Select the N_i points with the greatest similarity as the cluster set p_c of the i-th feature point in model b.
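Steps 1-3 above can be sketched as follows. The paper does not specify the similarity measure, so histogram intersection is used here purely as an example, and the histograms are synthetic:

```python
import numpy as np

def cluster_set(cc_hist, model_hists, n_i):
    """Steps 2-3: rank the L vertices of model b by the similarity between
    their relative angle histograms and the cluster center CC_i, and return
    the indices of the N_i most similar vertices.

    Assumption: histogram intersection as the (unspecified) similarity.
    """
    sim = np.minimum(model_hists, cc_hist).sum(axis=1)  # intersection per vertex
    order = np.argsort(sim)[::-1]                        # decreasing similarity
    return order[:n_i]

rng = np.random.default_rng(2)
H = rng.dirichlet(np.ones(40), size=200)  # L = 200 normalized 40-bin histograms
cc = H[17]                                # pretend vertex 17 is the center CC_i
pc = cluster_set(cc, H, n_i=10)           # cluster set of the i-th feature point
```

Since the histograms are normalized, the intersection of a histogram with itself is maximal, so the center's own vertex always heads the returned cluster set.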

Figure 4 shows the cluster point sets of the nasal tip and the forehead center in a model whose feature points are to be located. Experiments show that at conspicuous feature locations of the face, such as the nasal tip and the mouth corners, a small cluster set already contains the optimum feature point, while at smoother locations, such as the forehead center, more cluster points are needed. The next task is to select the precise feature points from the cluster point set.

The result of the point cluster.

The nasal tip

The forehead center

4. Multiscale Integral Feature Extraction

4.1. Volume Integral Features

The volume integral features are defined as multiscale feature values that measure the concave-convex level of the model surface; they weaken the influence of noise and are more robust than differential values.

The volume integral invariant of a surface point p is defined as a functional integral over the local sphere region of this point:

(2) V_r(p) = ∫_{B_r(p)} χ_D(x) dx,

where B_r(p) is the ball with center p and radius r, as shown in Figure 5, and χ_D(x) is the indicator function of the region D enclosed by the surface: χ_D(x) = 1 when x is inside D and χ_D(x) = 0 when x is outside D. The geometric meaning of V_r(p) is the part of the ball volume lying on the inner side of the surface.

2D representation of the volume integral invariant.

Expanding (2) as a polynomial and taking the limit r → 0 gives the functional relationship between the volume integral invariant and the mean curvature k_H:

(3) V_r(p) = (2π/3) r^3 − (π k_H / 4) r^4 + O(r^5).

From (3) we know that the value of the volume invariant depends on the radius of the integration ball: the longer the radius, the bigger the volume invariant. Even if there is noise inside the integration region, its impact on the volume invariant is small, so the volume invariant is a robust feature. In practice, the geometric feature of the surface at point p is defined as charV_r(p), the ratio of the volume invariant to the volume of the ball:

(4) charV_r(p) = V_r(p) / |B_r(p)|.

According to the calculated value of charV_r(p), the local shape of the surface at p is classified as convex, approximately planar, or concave: at scale r, when charV_r(p) < 1/2 the surface at p is convex; when charV_r(p) ≈ 1/2 it is approximately planar; and when charV_r(p) > 1/2 it is concave.
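A minimal way to build intuition for charV_r(p) is a Monte Carlo estimate on an analytic surface z = f(x, y): sample points uniformly in the ball B_r(p) and count the fraction falling inside the region D below the surface. This is only a sketch (the paper evaluates the invariant on triangular meshes); the surfaces and radii below are invented for illustration:

```python
import numpy as np

def char_vr(f, p, r, n_samples=200_000, rng=None):
    """Monte Carlo estimate of charV_r(p) = V_r(p) / |B_r(p)|: the fraction
    of the ball B_r(p) lying below the surface z = f(x, y)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # rejection-sample points uniformly inside the ball B_r(p)
    pts = rng.uniform(-r, r, size=(n_samples, 3))
    pts = pts[np.linalg.norm(pts, axis=1) <= r] + p
    inside = pts[:, 2] < f(pts[:, 0], pts[:, 1])   # indicator chi_D
    return inside.mean()

# flat plane z = 0: charV should be about 1/2 (approximately planar)
flat = char_vr(lambda x, y: 0.0 * x, np.array([0.0, 0.0, 0.0]), 0.5)

# peak of the dome z = 1 - x^2 - y^2: charV should be below 1/2 (convex)
dome = char_vr(lambda x, y: 1 - x**2 - y**2, np.array([0.0, 0.0, 1.0]), 0.5)
```

The two estimates reproduce the classification rule of (4): the plane gives charV ≈ 1/2, while the convex dome peak gives charV < 1/2.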

4.2. Multiscale Feature Extraction

Concavity and convexity are relative concepts, and the concave-convex level changes with the scale. As shown in Figure 6, at a large scale the curve in the box is convex, while at a small scale, when the local part is amplified, it appears smooth.

Multiscale theory.

Multiscale geometric feature extraction means extracting the same feature at multiple scales. As shown in Figure 7, the geometric features of the point p on the surface are extracted at three scales: r_1, r_2, and r_3. A large scale reduces the impact of noise, while a small scale evaluates the integral invariant more precisely, so large and small scales should be combined in the feature extraction process. Adopting the multiscale feature extraction method therefore both describes the local features precisely and reduces the influence of noise.

Multiscale feature extraction.

4.3. Precise Localization of Feature Points

For each point in the cluster set p_c and for the corresponding standard feature point p, we calculate the integral features at three scales: 1, 2, and 3 times the average edge length of the triangular mesh. A distance S is defined on the integral features as the square root of the summed squared differences over the three scales:

(5) S = sqrt( Σ_{i=1}^{3} (charV_{r_i}(p) − charV_{r_i}(p_c))^2 ).

S is computed between every point in p_c and the standard feature point under the three scales, and the point of p_c with the minimum S is chosen as the feature point of the model under test.
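The matching rule of (5) can be sketched as follows; the three-scale feature vectors below are hypothetical numbers, and the function names are ours, not the paper's:

```python
import numpy as np

def multiscale_distance(feat_p, feat_pc):
    """Equation (5): S = sqrt(sum_i (charV_{r_i}(p) - charV_{r_i}(p_c))^2)
    over the three scales r_1, r_2, r_3."""
    return np.sqrt(((feat_p - feat_pc) ** 2).sum(axis=-1))

def locate_feature(feat_standard, feats_cluster):
    """Pick the cluster point whose 3-scale integral feature vector is
    closest (minimum S) to the standard feature point's vector."""
    S = multiscale_distance(feat_standard, feats_cluster)
    return int(np.argmin(S))

# hypothetical charV values at scales (1, 2, 3) x average edge length
standard = np.array([0.38, 0.41, 0.45])   # standard feature point
cluster = np.array([[0.55, 0.52, 0.50],
                    [0.39, 0.42, 0.44],   # closest to the standard vector
                    [0.30, 0.35, 0.60]])
best = locate_feature(standard, cluster)  # -> 1
```

Because `multiscale_distance` broadcasts over the cluster rows, the same call handles one candidate or the whole cluster set at once.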

5. Experimental Comparison and Analysis

We designed a 3D face feature point localization system (FaceLS) based on the proposed algorithm. Forty sets of 3D point distribution face model data are selected randomly from a monolayer face sample library belonging to Northwest University and are taken as the face samples whose feature points are to be located. The algorithm is tested and compared with the localization method of Feng et al. All programs run under Windows 7 on a 2.8 GHz Core i5 processor with 2 GB RAM, using Matlab 2010.

The feature point localization is evaluated as follows: given a distance threshold ε, let (X, Y, Z) be the accurate position of a feature point in the model under test and (x, y, z) the located position; the Euclidean distance d between the two points is

(6) d = sqrt( (x − X)^2 + (y − Y)^2 + (z − Z)^2 ).

The steps of our method are as follows.

Step 1.

According to the prior knowledge, representative points are selected as the initial cluster center CC_i of each group.

Step 2.

Calculate the similarity between CC_i and the L points of model b, and sort the points in decreasing order of similarity.

Step 3.

Select the N_i points with the greatest similarity as the cluster set p_c of the i-th feature point in model b.

Step 4.

For each point in the cluster set p_c and for the corresponding standard feature point p, calculate the integral features at three scales: 1, 2, and 3 times the average edge length of the triangular mesh. Compute the distance S of (5) between every point in p_c and the standard feature point under the three scales, and choose the point of p_c with the minimum S as the feature point of the model.

Step 5.

Given a distance threshold ε, judge whether the located point is correct with (6): if d ≤ ε, the feature point localization is considered correct; otherwise it is counted as an error.
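Step 5 reduces to a threshold test on the Euclidean distance of (6); a minimal sketch follows, where the coordinates and the threshold eps = 2.0 are invented for illustration:

```python
import numpy as np

def localization_correct(located, ground_truth, eps):
    """Step 5 / equation (6): a located point (x, y, z) counts as correct
    when its Euclidean distance d to the ground-truth position (X, Y, Z)
    is at most the threshold eps."""
    d = np.linalg.norm(np.asarray(located) - np.asarray(ground_truth))
    return d <= eps

# example: d = sqrt(0.25 + 0 + 0.25) is about 0.707, within the threshold
ok = localization_correct((1.0, 2.0, 3.0), (1.5, 2.0, 2.5), eps=2.0)
```

Averaging this boolean over all tested feature points and models yields the accuracy rates reported in Table 1.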

Figure 8 shows the results of the two methods, ours and the relative angle histogram localization method of Feng et al., on the same model awaiting feature localization (only 28 feature points are shown; some feature points coincide in the front view). Figure 9 shows, for the two methods, the accuracy rate of the 39 feature points over a range of distance thresholds; the red dotted line indicates the relative angle histogram method of Feng et al., and the blue solid line our method. The localization result of our method is more precise. Table 1 shows that the mean accuracy rates of our method are higher than those of Feng et al.

The mean accuracy rates of two feature point localization methods.

Method        Mean accuracy rate (%)
              Nose    Left eye    Right eye    Mouth
Feng et al.   62.0    78.3        77.6         63.8
Our method    89.2    84.5        85.6         71.3

Result comparison of two feature point localization methods.

Our method

Relative angle histogram method

Comparison of the accuracy rate for two feature point localization methods.

The feature point cluster set is first determined by the clustering algorithm and the relative angle histogram algorithm, and the feature points are then located accurately by the stable multiscale integral features, so the localization accuracy is greatly improved. The localization method based on relative angle histograms alone has limitations: it narrows the feature points to a smaller range by comparing the similarity of relative angle histograms, which works for conspicuous feature points but loses accuracy for inconspicuous ones.

6. Conclusions

In this work we propose a method that combines relative angle histograms with multiscale constraints for localizing feature points on a 3D face. The feature point cluster set is determined through the clustering algorithm and the relative angle histograms, and the feature points are then located accurately by the multiscale integral features, which avoids many of the wrong matches caused by the limited precision and the similarity of local geometric characteristics. Experimental results show that the method performs well and the localization accuracy rate is improved. However, the accuracy rate remains unsatisfactory for feature points on smooth parts of the face, which will be the direction of our next research.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grant no. U1261114), the Key Technologies R&D Program of Henan Province (no. 142102210637), and the Xi’an University of Science and Technology Cultivation Foundation Project (2014032). The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.

References

[1] Y. Wang, C.-S. Chua, and Y.-K. Ho, "Facial feature detection and face recognition from 2D and 3D images," Pattern Recognition Letters, vol. 23, no. 10, pp. 1191–1202, 2002.
[2] C. Xu, T. Tan, Y. Wang, and L. Quan, "Combining local features for robust nose location in 3D facial data," Pattern Recognition Letters, vol. 27, no. 13, pp. 1487–1494, 2006.
[3] J. Feng, H. H. S. Ip, L. Y. Lai, and A. Linney, "Robust point correspondence matching and similarity measuring for 3D models by relative angle-context distributions," Image and Vision Computing, vol. 26, no. 6, pp. 761–775, 2008.
[4] X. L. Li and F. P. Da, "3D face recognition based on profile and rigid regions," Journal of Image and Graphics, vol. 15, no. 2, pp. 266–273, 2010.
[5] H.-A. Li, J. Zhang, B. Kang, and Y. He, "A 3D surface reconstruction algorithm based on medical tomographic images," Journal of Computational Information Systems, vol. 9, no. 19, pp. 7873–7880, 2013.
[6] J.-M. Oliva, M. Perrin, and S. Coquillart, "3D reconstruction of complex polyhedral shapes from contours using a simplified generalized Voronoï diagram," Computer Graphics Forum, vol. 15, no. 3, pp. C397–C408, 1996.
[7] C. Cao, Y. Weng, S. Zhou, Y. Tong, and K. Zhou, "FaceWarehouse: a 3D facial expression database for visual computing," IEEE Transactions on Visualization and Computer Graphics, vol. 20, no. 3, pp. 413–425, 2014.
[8] H.-A. Li, J. Zhang, L. Zhang, and B. Kang, "A new control curve method for image deformation," TELKOMNIKA: Telecommunication Computing Electronics and Control, vol. 12, no. 1, pp. 135–142, 2014.
[9] J. Guo, J. Feng, H. S. Ye, and G. Zhou, "Retrieval angle clustering histogram and clustering for 3D model retrieval," Journal of Image and Graphics, vol. 15, no. 11, pp. 1644–1649, 2010.
[10] H. Pottmann, J. Wallner, Y.-L. Yang, Y.-K. Lai, and S.-M. Hu, "Principal curvatures from the integral invariant viewpoint," Computer Aided Geometric Design, vol. 24, no. 8-9, pp. 428–442, 2007.
[11] Y.-L. Yang, Y.-K. Lai, S.-M. Hu, and H. Pottmann, "Robust principal curvatures on multiple scales," in Proceedings of the 4th Eurographics Symposium on Geometry Processing (SGP '06), pp. 223–226, 2006.
[12] M. Liu, B. C. Vemuri, S.-I. Amari, and F. Nielsen, "Shape retrieval using hierarchical total Bregman soft clustering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 12, pp. 2407–2419, 2012.
[13] K. Xu, Y.-H. Ai, and X.-Y. Wu, "Application of multi-scale feature extraction to surface defect classification of hot-rolled steels," International Journal of Minerals, Metallurgy and Materials, vol. 20, no. 1, pp. 37–41, 2013.