Fingerprint Classification Combining Curvelet Transform and Gray-Level Cooccurrence Matrix



Introduction
As a type of human biometric, the fingerprint has been widely used for personal recognition in forensic and civilian applications because of its uniqueness, immutability, and low cost. Automatic recognition of people based on fingerprints requires matching an input fingerprint against a large number of fingerprints in a database. However, since the database can be huge (e.g., the FBI database contains more than 70 million fingerprints), such a task can be very expensive in terms of computational burden and time. In order to reduce the search time and computational burden, fingerprints in the database are classified into several prespecified types or subclasses. When an input fingerprint is received, a coarse-level matching is applied to determine which subclass the input belongs to; at a finer level, it is then compared to samples within that subset of the database for recognition. While such a scheme is obviously more efficient, the first step, that is, fingerprint classification, must be accurate and reliable and hence has attracted extensive research in recent years [1][2][3][4][5][6][7][8][9][10][11][12][13].
Fingerprints are classified based on their shapes, and in the literature it is common to use five classes, as shown in Figures 1(a)-1(e): whorl (W), right loop (R), left loop (L), arch (A), and tented arch (T). Although these five classes appear very different to a human observer, automatically classifying a fingerprint by machine is in fact a very challenging pattern recognition problem, due to the small interclass variability, the large intraclass variability, and the difficulty of handling poor-quality fingerprints. Fingerprint classification is carried out by analysis and comparison of features. Over the past decade various approaches have been proposed based on different types of features, such as singularities [1][2][3], orientation field [4][5][6], and statistical and frequency features [7][8][9][10]. The methods based on singularities [1][2][3] accomplish fingerprint classification according to the number and relative positions of the core and delta points. The approaches using the orientation field [4][5][6] partition the fingerprint orientation field into "homogeneous" orientation regions, and the relational graphs of these regions are used to classify the fingerprint. Gabor filters [9] can also be used to extract fingerprint features for classification: the input image is decomposed into four component images by four Gabor filters, the standard deviation of each component image in each sector generates the feature vector, and a K-nearest neighbor classifier performs the classification.
In 2001, Tico et al. [10] proposed using the wavelet transform to obtain features for fingerprint recognition. In [10], a wavelet decomposition over 6 octaves of each fingerprint image was performed, and the normalized l2-norm of each wavelet subimage was computed to extract a feature vector of length 18. The experimental database contained 168 fingerprint images collected from 21 fingers (8 images per finger). The algorithm achieved an accuracy of 100% when the wavelet basis Symmlet 6 or Symmlet 9 was employed. The work in [10] shows that wavelet features are suitable for matching complex patterns of oriented texture such as fingerprints. However, wavelets are characterized by isotropic scaling (e.g., the standard orthogonal wavelet transform contains wavelets in the primary vertical, primary horizontal, and primary diagonal directions only), and hence their ability to resolve directional features is limited. Consequently, wavelets are not able to detect curved singularities effectively.
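The subband-energy feature of [10] can be sketched in a few lines. The following is a minimal numpy-only illustration: it uses a plain Haar decomposition rather than the Symmlet bases reported in [10] (Haar keeps the sketch dependency-free), and all function and variable names are ours, not the paper's.

```python
import numpy as np

def haar_level(a):
    """One 2D Haar analysis step: returns (approx, (H, V, D)) subimages."""
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # even-sized crop
    s = (a[0::2] + a[1::2]) / 2.0        # row averages
    d = (a[0::2] - a[1::2]) / 2.0        # row differences
    cA = (s[:, 0::2] + s[:, 1::2]) / 2.0
    cH = (s[:, 0::2] - s[:, 1::2]) / 2.0
    cV = (d[:, 0::2] + d[:, 1::2]) / 2.0
    cD = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return cA, (cH, cV, cD)

def wavelet_energy_features(image, levels=6):
    """Normalized l2-norm of each detail subimage: 3 per level -> 18 features."""
    feats, cA = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        cA, details = haar_level(cA)
        feats += [np.linalg.norm(s) / np.sqrt(s.size) for s in details]
    return np.array(feats)

img = np.random.rand(256, 256)
f = wavelet_energy_features(img)
assert f.shape == (18,)          # 6 octaves x 3 orientations, as in [10]
```

Six levels over three detail orientations reproduce the 18-component vector described above; only the wavelet basis differs from [10].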
Inspired by the success of wavelets, a number of other multiresolution analysis tools have been proposed with the aim of better representing edges and other singularities along curves. These tools include the contourlet, ridgelet, and Curvelet. In recent years, researchers have used Curvelets for fingerprint image enhancement [11,12] and for fingerprint recognition [13]. Compared to the limited directional features of wavelets, Curvelets are more powerful, as they describe a signal by a group of coefficient matrices at multiple scales and multiple directions. Furthermore, as the scale becomes finer, the angular resolution becomes much finer as well.
In 2008, Mandal and Wu [13] proposed using the Curvelet transform for fingerprint recognition and achieved an accuracy of 100%. However, the performance of their algorithm was only tested on a small database of 120 fingerprint images (containing only 15 individuals). Furthermore, in order to ensure accuracy, the technique requires manual detection of the core of the fingerprint image. Also, before extracting the Curvelet features, the technique needs a complex image enhancement process, which includes estimation of the local ridge orientation, estimation of the local ridge frequency across the fingerprint, filtering of the image, and binarization (conversion of a gray-scale fingerprint image to a binary image).
In this paper, we present a novel fingerprint classification algorithm. Firstly, we use the fast discrete Curvelet transform via wrapping (FDCT WARPING) to decompose the original image into five scales of Curvelet coefficients and construct a Curvelet filter, based on the relationship between Curvelet coefficients at adjacent scales, to smooth the discontinuities of ridges and remove the noise in the fingerprint image. Secondly, we calculate four gray-level cooccurrence matrices (GLCMs) of the Curvelet coefficients at the coarsest scale and derive 16 texture features from these 4 GLCMs. Furthermore, we construct 49 direction features from the Curvelet coefficients at the other four scales. Finally, these combined Curvelet-GLCM features form the feature set fed to a K-nearest neighbor classifier.
In the following sections, we present the details of our fingerprint classification algorithm. Section 2 presents our noise filtration scheme and our feature extraction scheme. Section 3 presents experimental results obtained on the NIST-4 database. Finally, Section 4 draws the conclusions and outlines open problems.

Fingerprint Alignments.
To account for the translation and rotation between template images and probe images, this paper adopts the algorithm in [14] to accomplish fingerprint image registration. The algorithm uses the reference points of the central area to calculate the translation and rotation parameters, which makes it more general when there are no cores in the fingerprint images.

Fast Discrete Curvelet Transform (FDCT).
Curvelets were proposed by Candès and Donoho [15], constituting a family of frames designed to represent edges and other singularities along curves. Conceptually, the Curvelet transform is a multiscale pyramid with many orientations and positions at each length scale and needle-shaped elements at fine scales. This pyramid is nonstandard, however. Indeed, Curvelets have useful geometric features that set them apart from wavelets and the like. For instance, Curvelets exhibit highly anisotropic behavior, having both variable length and variable width. At fine scales, the anisotropy increases with decreasing scale, in keeping with the parabolic scaling law.
In 2006, Candès et al. proposed two fast discrete Curvelet transforms (FDCT) [16]. The first is based on unequally spaced fast Fourier transforms (USFFT) [16], and the other is based on the wrapping of specially selected Fourier samples (FDCT WARPING) [16]. Curvelets by wrapping are used in this work because this is the fastest Curvelet transform currently available [16].
The implementation of FDCT WARPING is as follows.
Step 1. Apply the 2D FFT (fast Fourier transform) to the original image f(n1, n2), obtaining the Fourier samples f^[n1, n2].

Step 2. Resample f^[n1, n2] at each pair of scale j and direction l in the frequency domain, yielding the sheared sampling f^[n1, n2 − n1 tan(theta_l)], where theta_l is the angle of direction l.

Step 3. Multiply the sheared samples by the frequency window of scale j and direction l and wrap the product around the origin, yielding the localized samples f~_{j,l}[n1, n2].

Step 4. Apply the inverse 2D FFT to each f~_{j,l}, hence collecting the discrete coefficients C_{j,l}. The number of scales can be calculated by (4).
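The four steps can be illustrated with a drastically simplified numpy sketch that keeps only the scale separation: the real FDCT WARPING further splits each frequency annulus into angular wedges and wraps each wedge around the origin before the inverse FFT. All names below are illustrative, not part of the paper or of CurveLab.

```python
import numpy as np

def radial_band_decompose(img, n_scales=3):
    """Toy frequency-domain pyramid in the spirit of FDCT by wrapping:
    2D FFT, multiplication by concentric annular masks (one per scale;
    the real transform subdivides each annulus into angular wedges),
    then an inverse 2D FFT per band."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2): h - h // 2, -(w // 2): w - w // 2]
    r = np.hypot(yy / (h / 2), xx / (w / 2))   # normalized radius
    # dyadic band edges from coarse to fine, last band open-ended
    edges = [0.0] + [2.0 ** (k - n_scales) for k in range(1, n_scales)] + [np.inf]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)            # disjoint masks covering the plane
        bands.append(np.fft.ifft2(np.fft.ifftshift(F * mask)))
    return bands

img = np.random.rand(64, 64)
bands = radial_band_decompose(img, n_scales=3)
recon = np.sum(bands, axis=0).real
assert np.allclose(recon, img, atol=1e-8)      # the bands tile frequency exactly
```

Because the indicator masks partition the frequency plane, summing the bands reconstructs the image, mirroring the perfect-reconstruction property of the actual transform.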

Fingerprint Image FDCT and Noise Filtration
where min(M1, M2) returns the minimum of M1 and M2, and ceil(⋅) rounds the input up to the nearest integer greater than or equal to the input.
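The display equation for (4) did not survive in our source; for reference, the default scale count used by the CurveLab implementation has the same min/ceil shape. The formula below is that CurveLab-style assumption, not a verbatim reproduction of (4).

```python
import math

def curvelet_num_scales(m1, m2):
    """CurveLab-style default scale count (assumed form of (4)):
    ceil of log2 of the smaller image dimension, minus 3."""
    return math.ceil(math.log2(min(m1, m2))) - 3

assert curvelet_num_scales(256, 256) == 5    # a 256 x 256 input gives 5 scales
assert curvelet_num_scales(1024, 1024) == 7
```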
Note that if FDCT WARPING uses Curvelets for the coefficients at the finest scale n_scales, the number of orientations at scale n_scales is determined by (2). On the other hand, when wavelets are chosen for the coefficients at the finest scale, there is only one angle at that scale. In this paper, we adopt wavelets for the coefficients at the finest scale, so there is only one angle at scale n_scales.
In this paper, according to (4), the number of scales is n_scales = 5. According to (5), the number of angles is 32 at scale 4, 32 at scale 3, and 16 at scale 2; equivalently, the number of angular panels per quadrant is 8 at scale 4, 8 at scale 3, and 4 at scale 2. After decomposition, the original image is divided into three levels: coarse, detail, and fine. The low-frequency coefficients are assigned to the coarse level, the middle-frequency coefficients to the detail level, and the high-frequency coefficients to the fine level. Following FDCT WARPING [16], the scale j runs from the finest to the coarsest scale, and the angle l starts at the top-left corner and increases clockwise.
The acquisition of Curvelet coefficients is as follows.
In this paper, floor(⋅) rounds the input down to the nearest integer less than or equal to the input.
(1) Construct the right and left windows along the horizontal direction, denoted by the row vectors W_right1 and W_left1, respectively.
(2) Construct the right and left windows along the vertical direction, denoted by the row vectors W_right2 and W_left2, respectively; W_right2 and W_left2 are then normalized.
(3) Construct the two sub-low-pass filters, denoted by the row vectors W_lowpasssub1 and W_lowpasssub2, respectively, whose (odd) lengths are of the form 2 × floor(⋅) + 1 in each dimension.
(4) Construct a low-pass filter at scale 5, denoted by the matrix F_lowpass5 built from these windows, where (⋅)^T is the transpose of the input vector or matrix.
(5) Construct a high-pass filter at scale 5, denoted by the matrix F_hipass5, which has the same size as F_lowpass5. The Fourier samples are filtered by F_hipass5, generating the filtered high-pass signal at scale 5, which has the same size, over the frequency range supported by the scale-5 filter.

Step 3. Acquire the Curvelet coefficients at scale 4 and angles 1 to 32, C_{4,l}, l = 1, 2, ..., 32.
Suppose that the filter at scale 4 has the corresponding frequency range.
(1) Construct a low-pass filter at scale 4 and angle 1 in the same way as at scale 5.
(2) Construct a high-pass filter at scale 4 and angle 1 in the same way as at scale 5.
(3) The filtered low-pass signal at scale 5 is filtered by the scale-4 low-pass filter, generating the filtered low-pass signal at scale 4.
(4) The filtered low-pass signal at scale 5 is filtered by the scale-4 high-pass filter, generating the filtered high-pass signal at scale 4, which has the same size as the filtered low-pass signal at scale 5.
(5) Determine the discrete locating window of the wedge wave at scale 4 and angle 1.
The Curvelet coefficients at scale 4 are divided into four quadrants. The quadrant label is denoted by quad, with quad = 1, 2, 3, 4, and each quadrant contains 8 angles. In the first quadrant (quad = 1), the angle ranges from 1 to 8; in the second quadrant (quad = 2), from 9 to 16; in the third quadrant (quad = 3), from 17 to 24; and in the fourth quadrant (quad = 4), from 25 to 32.
Here N_quad4 denotes the number of angles in each quadrant at scale 4; in this paper, N_quad4 = 8.
The first wedge-wave endpoint along the vertical orientation, the length L_wedge of the first wedge wave, the width W_wedge of the wedge wave, and the slope k_slope of the first wedge wave are determined next. The left line vector is the row vector of size 1 × L_wedge whose ith entry is round(2 − v_end(1) + k_slope × (i − 1)), i = 1, 2, ..., L_wedge.
The discrete locating window of the wedge wave, D_data, is filtered, yielding the matrix D_datatran of size L_wedge × W_wedge. The matrix D_datatran is then rotated, yielding the matrix D_data2, where rot90(A, k) rotates the matrix A counterclockwise by k × 90 degrees.
Note that the discrete locating window of the wedge wave at scale 3 can be calculated analogously, where L_wedge3 and W_wedge3 are the length and width of the discrete locating window at scale 3 and v_col3 is the condition vector at scale 3.
Similarly, the discrete locating window of the wedge wave at scale 2 can be calculated, where v_col2 is the condition vector at scale 2.
The detailed structure of the Curvelet coefficients obtained by FDCT WARPING is shown in Table 1.

Fingerprint Image Noise Filtration Technique.
Noise always arises during fingerprint image acquisition. It may blur the image and cause many discontinuities in the ridges (or valleys), thus hindering accurate feature extraction and recognition. Denoising fingerprint images is therefore necessary and important.
The relationship between Curvelet coefficients at different scales is similar to that of wavelet coefficients; that is, there exists a strong correlation between them.
From Table 1, there are 16 and 32 orientations at scale 2 and scale 3, respectively, so each Curvelet coefficient matrix at scale 2 corresponds to two adjacent matrices at scale 3. The ridges in a fingerprint image correspond to Curvelet coefficients with large magnitude at scale 2. When a Curvelet coefficient matrix at scale 2 is decomposed into two Curvelet coefficient matrices at two adjacent orientations at scale 3, the corresponding two matrices at scale 3 also have large magnitude, whereas the magnitude of the Curvelet coefficients corresponding to noise dies out swiftly from scale 2 to scale 3. Therefore, we use the direct spatial correlation of Curvelet coefficients at scale 2 and scale 3 to accurately distinguish ridges from noise. For scales 4 and 5, we adopt a hard-thresholding method to filter the noise. Finally, we reconstruct the image from all the Curvelet coefficients using the technique of [17], completing the fingerprint image filtration.
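The interscale correlation idea can be sketched as follows. The coefficient matrices are assumed to have been resampled to a common grid, and the product-of-magnitudes correlation with a median-based threshold is an illustrative stand-in, not the paper's exact rule.

```python
import numpy as np

def correlation_denoise(c2, c3a, c3b, t=1.0):
    """Keep a scale-2 coefficient as ridge only if it is also strong in the
    two adjacent orientation bands at scale 3; otherwise zero it as noise.
    The correlation measure and threshold here are illustrative choices."""
    corr = np.abs(c2) * np.maximum(np.abs(c3a), np.abs(c3b))
    keep = corr > t * np.median(corr)
    return np.where(keep, c2, 0.0)

c2  = np.array([[5.0, 0.1], [4.0, 0.2]])    # scale-2 band (toy values)
c3a = np.array([[3.0, 0.05], [2.0, 0.1]])   # adjacent scale-3 band
c3b = np.array([[2.5, 0.02], [2.2, 0.1]])   # other adjacent scale-3 band
d = correlation_denoise(c2, c3a, c3b)
assert d[0, 0] == 5.0 and d[0, 1] == 0.0    # strong ridge kept, weak noise zeroed
```

Ridge-like coefficients survive because they stay large across scales, while noise, which decays from scale 2 to scale 3, falls below the correlation threshold.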
The proposed noise filtration technique has the following steps.

Mathematical Problems in Engineering
Step 1. Noise filtration of the Curvelet coefficient matrices at scale 2 and scale 3. This section details the major steps of the proposed noise filtration algorithm for the Curvelet coefficient matrices generated at scales 2 and 3.
(5) When filtering the Curvelet coefficient matrix C_{2,l}, if any of the filtered Curvelet coefficient matrices equals Z, then C_{2,l} is considered noise and set to Z, where Z is the all-zero matrix. (6) Repeat (1) to (5) 16 times (the number of orientations of the Curvelet coefficients at scale 2).
Step 2. Noise filtration of the Curvelet coefficient matrices at scale 4 and scale 5.
The Curvelet coefficient matrices generated at scale 4 are filtered by hard thresholding: coefficients whose modulus falls below a threshold are set to zero. Here |⋅| is the complex modulus, thresh is the threshold (in this paper, thresh = 1.5 at scale 4), and (M, N) is the size of the matrix C_{4,l}. The Curvelet coefficient matrices generated at scale 5 are filtered in the same way; note that at scale 5, thresh = 2.
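The hard-thresholding step can be sketched as below. The robust median noise estimate is our assumption, since the paper's size-dependent threshold formula did not survive extraction; only the thresh values (1.5 and 2) come from the text.

```python
import numpy as np

def hard_threshold(C, thresh):
    """Zero every coefficient whose modulus falls below thresh times a
    noise-scale estimate (the robust median estimator is an assumption)."""
    sigma = np.median(np.abs(C)) / 0.6745   # robust noise scale
    out = C.copy()
    out[np.abs(out) < thresh * sigma] = 0
    return out

C4 = np.array([10.0, 0.5, -0.4, 0.6, -12.0])   # toy scale-4 coefficients
f4 = hard_threshold(C4, thresh=1.5)            # thresh = 1.5 at scale 4
assert list(f4) == [10.0, 0.0, 0.0, 0.0, -12.0]
```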
Step 3. After the coefficients at scales 2, 3, 4, and 5 are filtered, we reconstruct the image using the technique of [16], completing the image noise filtration.
The threshold in (45) can be obtained from the statistics of the difference of correlation coefficients in adjacent directions at the same scale; T = 100 was finally selected after extensive experiments. Figure 1 shows the noise filtration results of the five types of fingerprint image under the proposed algorithm.
From Figures 1(a)-1(e) (row B), we can see that many discontinuities of ridges in the original images are smoothed after filtering and the ridge directions are well preserved, which lays a good foundation for accurate feature extraction and recognition. We show the Curvelet coefficients at different scales of the five types of filtered image in Figure 2.
As Figures 2(a) to 2(e) show, there are strong orientations in the Curvelet coefficient images. The white parts in the images represent partial edges of the fingerprint ridges in different orientations and correspond to the significant Curvelet coefficients. The low-frequency (coarse-scale) coefficients are stored at the center of the display. The Cartesian concentric coronae show the coefficients at different scales; the outer coronae correspond to higher frequencies. There are four strips associated with each corona, corresponding to the four cardinal points; these are further subdivided into angular panels. Each panel represents the coefficients at a specified scale and along the orientation suggested by the position of the panel.
2.4. Fingerprint Feature Extraction. Haralick et al. [18] first proposed the gray-level cooccurrence matrix (GLCM) for texture description in the 1970s. It remains popular today and is widely used in various texture classification tasks [19][20][21][22][23] because of its good statistical performance. The GLCM is a second-order statistical method that describes the spatial interrelationships of the gray tones in an image.
The GLCM contains elements that count the number of pixel pairs separated by a certain distance at a certain angular direction. Typically, the GLCM is calculated in a small window that scans the whole image, and a texture feature is associated with each pixel.
In our studies, the GLCM is computed with two parameters: the distance d between the pixel pair and their angular relationship θ. We set d = 1, and θ is quantized into four directions (0°, 45°, 90°, and 135°). For an image I, the GLCM entry is a count over pixel pairs, where {⋅} = 1 if the argument is true and {⋅} = 0 otherwise. The signs ± and ∓ in (10) mean that each pixel pair is counted twice, once forward and once backward, in order to make the GLCM diagonally symmetric. For each direction, the offsets d0 and d1 are shown in Table 2.
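A minimal implementation consistent with this description (d = 1, four directions, symmetric forward-and-backward counting) might look as follows; the offset table and all names are ours.

```python
import numpy as np

def glcm(img, offset, levels):
    """Diagonally symmetric GLCM: each pixel pair separated by `offset`
    is counted once forward and once backward, as in the text."""
    dy, dx = offset
    g = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                g[img[y, x], img[y2, x2]] += 1
                g[img[y2, x2], img[y, x]] += 1   # backward count
    return g

# d = 1 offsets (dy, dx) for the four quantized directions
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
g0 = glcm(img, OFFSETS[0], levels=4)
assert (g0 == g0.T).all()      # diagonally symmetric
assert g0.sum() == 12          # 6 horizontal pairs, each counted twice
```

Scalar texture features (e.g., the classical Haralick statistics) are then computed from each of the four matrices; the paper does not list which four features per GLCM it uses.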
The procedures of feature extraction are as follows.
Step 3. Calculate the averaged l1-norm of the Curvelet coefficients in 16 directions at the third scale C_{3,l} and acquire 16 texture features according to (54).

Step 4. Calculate the averaged l1-norm of the Curvelet coefficients in 16 directions at the fourth scale C_{4,l} and acquire 16 texture features according to (54).

Step 5. Calculate the averaged l1-norm of the Curvelet coefficients at the fifth scale C_{5,1} and acquire 1 texture feature.
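The direction features of these steps can be sketched as below. With 16 orientations at scale 2, 32 at scales 3 and 4, one band at scale 5, and only even-indexed directions used, one obtains 8 + 16 + 16 + 1 = 41 direction features, which together with the 16 GLCM features is consistent with the 57-component vector stated in the text; the band counts and names here are illustrative.

```python
import numpy as np

def direction_features(bands_by_scale, step=2):
    """Averaged l1-norm per orientation band; only every other
    (even-indexed) direction is used when a scale has several bands,
    while a single-band scale contributes one feature."""
    feats = []
    for scale in sorted(bands_by_scale):
        bands = bands_by_scale[scale]
        use = bands if len(bands) == 1 else bands[::step]
        feats += [np.abs(b).mean() for b in use]   # averaged l1-norm
    return np.array(feats)

# hypothetical band counts: 16 at scale 2, 32 at scales 3 and 4, 1 at scale 5
rng = np.random.default_rng(0)
bands = {s: [rng.normal(size=(8, 8)) for _ in range(n)]
         for s, n in [(2, 16), (3, 32), (4, 32), (5, 1)]}
f = direction_features(bands)
assert f.shape == (8 + 16 + 16 + 1,)   # 41 direction features
```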
Note that in Steps 2, 3, and 4, we only calculate the averaged l1-norm of the Curvelet coefficients in even directions, to preserve classification accuracy while reducing recognition time. In total, a feature vector containing 57 components can be extracted for each image.

Datasets. The NIST special fingerprint database 4 (NIST-4) is one of the most important benchmarks for fingerprint classification; most published results on fingerprint classification are based on this database. For comparison with other approaches, we also evaluate our fingerprint classification algorithm on this database for the five-class fingerprint classification problem. Since the fingerprint classes A (arch) and T (tented arch) have a substantial overlap, it is very difficult to separate these two classes; therefore, we also report our results for the four-class classification problem, where classes A and T are merged into one class. NIST-4 contains 4000 fingerprints of size 480 × 512 pixels, taken from 2000 fingers. Each finger has two impressions (f and s). The first fingerprint instances are numbered from f0001 to f2000 and the second from s0001 to s2000. All fingerprints in this database are used in our experiment. We form our training set with the first 2,000 fingerprints from 1,000 fingers (f0001 to f1000 and s0001 to s1000), and the test set contains the remaining 2,000 fingerprints (f1001 to f2000 and s1001 to s2000).
To eliminate the large differences between feature vectors, each feature vector v_i is normalized by v_i'(j) = (v_i(j) − v_min(j)) / (v_max(j) − v_min(j)), where v_i(j) represents the jth element of the vector v_i, v_min(j) denotes the minimum of the jth element over all the row vectors, and v_max(j) denotes the maximum of the jth element over all the row vectors. To simplify the training procedure, we use only the first label of a fingerprint to train our system. For testing, however, we use all the labels of a fingerprint and consider the output of our classifier to be correct if it matches any one of the labels. This is in line with the common practice of other researchers in comparing classification results on the NIST-4 database. Classification accuracy does not always increase with increasing k in the k-nearest neighbor classifier; there exists an optimal value of k for finite training sample sizes. Following the method in [24], in our experiments 10 nearest neighbors (k = 10) are considered. The classification results of our proposed approach are shown in Table 3. The diagonal entries in Table 3 show the number of test patterns from each class that are correctly classified.
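The normalization and the k-nearest-neighbor decision can be sketched as follows. Plain Euclidean distance and a majority vote are assumptions on our part; the paper does not state its distance metric or tie-breaking rule.

```python
import numpy as np

def minmax_normalize(X):
    """Column-wise min-max scaling: v'(j) = (v(j) - vmin(j)) / (vmax(j) - vmin(j))."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)   # guard constant columns

def knn_predict(Xtr, ytr, x, k=10):
    """Majority vote among the k nearest training vectors (k = 10 as in the paper)."""
    idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    vals, counts = np.unique(ytr[idx], return_counts=True)
    return vals[np.argmax(counts)]

X = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
assert np.allclose(minmax_normalize(X), [[0, 0], [0.5, 0.5], [1, 1]])

Xtr = np.vstack([np.zeros((10, 2)), np.ones((10, 2))])   # two toy classes
ytr = np.array([0] * 10 + [1] * 10)
assert knn_predict(Xtr, ytr, np.array([0.9, 0.9]), k=10) == 1
```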
From Table 3, we can conclude that the proposed algorithm achieves an accuracy of 94.6 percent for the five-class classification task.For the four-class classification task (where classes A and T were collapsed into one class), an accuracy of 96.8 percent is achieved.
Experiment. To evaluate the performance of the proposed algorithm, we compared it to wavelet-based, GLCM-based, and Curvelet-based approaches. We use the wavelet transform to decompose the gray images into five scales of wavelet coefficients using the wavelet bases "Symmlet 4, 5, 6, 8, and 9" and calculate the averaged l1-norm of the wavelet coefficients at each scale; finally, a WT feature vector of dimension 16 is acquired. The reason for using "Symmlet 4, 5, 6, 8, and 9" is that in the work of Tico et al. [10] the best results were obtained with these five wavelet bases. In Table 4, we show the comparison results for the five-class classification.
From Table 4, we can conclude that our algorithm achieves higher accuracy for class W by reducing the misclassification of W as L or R. It also achieves higher accuracy for class R by reducing the misclassification of R as A, and higher accuracy for class A by reducing the misclassification of A as T. In addition, the Curvelet-based approach outperforms the wavelet-based and GLCM-based ones, because the CT can capture the direction of the fingerprint ridges better than the WT and the GLCM. Furthermore, the proposed algorithm provides much more information on the ridge direction by combining the good statistical performance of the GLCM with the directional sensitivity of the CT.
Most misclassifications of the proposed approach are caused by heavy noise in poor-quality fingerprints, where it is very difficult to extract the Curvelet coefficients correctly.

Conclusion
In this paper, we present an efficient fingerprint classification algorithm that uses the CT and the GLCM to model the feature set of a fingerprint. There are two main contributions. Firstly, we construct a Curvelet filter that can smooth the discontinuities of ridges and remove the noise in the original image; as a result, the ridge directions are well preserved. Secondly, combining the effectiveness of the CT and the GLCM, we construct a 53-dimensional feature vector as classifier input that can compactly represent the curve singularities and the statistics of the fingerprint image. We have tested our algorithm on the NIST-4 database, and very good performance has been achieved (94.6 percent for the five-class classification problem and 96.8 percent for the four-class classification problem, with 1.8 percent rejection). This good performance can be ascribed to the high information content of the Curvelet features and to the combination of the GLCM and the CT.
Our system takes about 1.47 seconds on an AMD E-350 PC to classify one fingerprint, which needs to be improved. Since the image decomposition (filtering) steps account for 82 percent of the total computation time, special-purpose hardware for the Curvelet transform could significantly decrease the overall classification time.
After the Curvelet transform, several groups of Curvelet coefficients are generated at different scales and angles. The Curvelet coefficients at scale j and angle l are represented by a matrix C_{j,l}, where the scale j runs from the finest to the coarsest scale and the angle l starts at the top-left corner and increases clockwise. Suppose that f(n1, n2), 1 ≤ n1 ≤ M1, 1 ≤ n2 ≤ M2, denotes the original image and f^[n1, n2] denotes its 2D discrete Fourier transform, where M1 × M2 is the size of the original image.

Table 3:
Fingerprint classification results on NIST-4.

Experiment Results and Analysis. The performance of a fingerprint classification algorithm is often measured in terms of accuracy, computed as the ratio between the number of correctly classified fingerprints and the total number of fingerprints in the test set. Each image is labeled with one or more of the five classes (W, R, L, A, and T).
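The accuracy figure can be computed directly from a confusion table like Table 3: the diagonal (correctly classified) total divided by the grand total. The matrix below is a hypothetical example, not the paper's data.

```python
import numpy as np

def accuracy_from_confusion(M):
    """Accuracy = correctly classified (diagonal sum) / total test fingerprints."""
    M = np.asarray(M)
    return np.trace(M) / M.sum()

# hypothetical 2 x 2 confusion matrix, not the values from Table 3
assert accuracy_from_confusion([[45, 5], [5, 45]]) == 0.9
```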

Table 4:
Comparison of accuracy on NIST-4 for the five-class classification.