Implementing Fusion Technique Using Biorthogonal Dwt to Increase the Number of Minutiae in Fingerprint Images

Biometric devices identify persons based on the minutiae extracted from fingerprint images, so image quality is very important in this process. Usually, fingerprint images have low quality and in many cases they are obtained in various positions. This paper focuses on increasing the number of detected minutiae by fusing two fingerprint images obtained in different positions. Fusion is performed in the wavelet domain using biorthogonal wavelets, which have advantages over orthogonal wavelets. Terminations and bifurcations are extracted from the original and fused images using the licensed software Papillon 9.02 and manually by an expert. Implementing the biorthogonal wavelet transform in the image fusion process yields an increased number of minutiae compared to the original images. Different biorthogonal wavelets are tested and various results are obtained. Finding the appropriate wavelet is important in the fusion process, since it has a direct impact on the number of minutiae extracted. Based on the number of minutiae and the MSE results, the appropriate wavelet for the fusion process is determined.


Introduction
One way to identify persons is by using fingerprint images. Image quality has a direct impact on the extracted minutiae, whose number and position determine person identification. Usually, only a few fingerprint images of the same finger are available, and in most cases they have low quality, making identification difficult. Increasing the quality of fingerprint images therefore becomes important. Image fusion is widely used in image enhancement: images are first aligned and then fused in the wavelet domain. Biorthogonal wavelets have advantages over orthogonal wavelets, especially their reversibility [1].
The process of image fusion consists of superimposing two or more images of the same object, taken in various positions from the same source or different ones and pre-aligned to each other. The process is realized in the wavelet domain by modifying the respective coefficients [2]. The basic schema of image fusion is shown in Figure 1.
Images taken from different views often contain a large amount of complementary and redundant information. Multiple fingerprint images are integrated to increase information and reduce the uncertainty and redundancy of the fused image by processing the coefficients in the wavelet domain. Specific algorithms combine the relevant information from two or more images into a single, more valuable image [2, 3]. The fusion process thus consists of two steps: the images are first aligned in the spatial domain, and then fused in the wavelet domain.
In most cases, only a few fingerprints can be found, usually in various positions and with low quality, for example the traces left by a thief, which are often the only way to identify him. This paper proposes combining these low-quality images to obtain a fingerprint in which the number of minutiae is greater than in any original image. The fusion process is realized in the wavelet domain. Biorthogonal wavelets show advantages over other wavelets in terms of reconstruction.
Section 2 describes the wavelet transformation procedure and its biorthogonal implementation. Section 3 treats image registration, and Section 4 analyzes the fusion process using wavelet coefficients. Section 5 gives the methodology applied here. Experimental results are shown in Section 6 and conclusions in Section 7.

Wavelet Transform
The wavelet transform processes signals at different scales. Wavelets are oscillations with zero average, as shown in equation (1). Every scale component has its own frequency range, resulting in component resolution. By combining with the known part of a signal, wavelets are used to obtain its unknown part. The scaled and shifted versions of the mother wavelet are multiplied and integrated with the respective portions of the signal, yielding the wavelet coefficients, as shown in equation (2). The wavelet transformation formula is presented in equation (3).
In formula (3), y(t) is the input signal and Ψ*((t − τ)/a) is the conjugated, shifted and scaled version of the mother wavelet Ψ(t). The Discrete Wavelet Transform (DWT) represents an image at multiple resolutions using wavelet sub-bands. DWT achieves high energy compaction in the lower sub-bands, making it widely used in image applications. In DWT, an image is represented as a sum of weighted, shifted and scaled wavelets. It is decomposed using a pair of waveforms: one for the low frequencies of the image, represented by the scaling function, and the other for the high frequencies, represented by the wavelet function. Multiresolution decomposition of the image consists in dividing it into two sub-bands, a low-pass and a high-pass sub-band, and iterating the division on the low-pass sub-band. The low-pass digital filter H and the high-pass digital filter G are derived from the scaling and wavelet functions, respectively [3]. The transfer functions of non-recursive four-tap filters are presented in equations (4) and (5) [4].
In DWT wavelets are discretely sampled. Figure 2 illustrates the DWT with 3 levels of decomposition.
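The iterated low-pass split of Figure 2 can be sketched in plain Python. The Haar filter pair is used here purely for brevity (it is the shortest wavelet filter, and coincides with bior1.1); the paper's longer biorthogonal filters follow the same analysis structure.

```python
# Sketch: multi-level 1D DWT using the Haar filter pair.
import math

def haar_step(signal):
    """Split an even-length signal into approximation (low-pass)
    and detail (high-pass) coefficients."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def dwt_multilevel(signal, levels):
    """Iterate the split on the low-pass band, as in Figure 2."""
    bands = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_step(approx)
        bands.append(detail)
    bands.append(approx)   # coarsest approximation band last
    return bands

bands = dwt_multilevel([4, 2, 6, 8, 5, 3, 7, 1], levels=3)
# N detail bands + 1 approximation band = N + 1 bands in 1D
print(len(bands))  # 4
```

Because the Haar transform is orthonormal, the total energy of the coefficients equals that of the input signal, which makes the decomposition easy to sanity-check.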
2.1. Biorthogonal Wavelet Family. One of the advantages of the biorthogonal wavelet family over other wavelet families is its linear phase, an important feature in the reconstruction of images from the transformation coefficients. It uses one wavelet for decomposition and another for synthesis. Figure 3 presents the decomposition wavelet and the reconstruction wavelet for the biorthogonal transformation bior2.2. The biorthogonal wavelet family generalizes the classical orthogonal wavelets. The dual scaling function is presented in equation (6) and the dual wavelet function in equation (7).
The relation between dual scaling coefficients is shown in equation (8) and (9).
Biorthogonal wavelets provide linearity as well as accurate and symmetrical reconstruction of signals and images. They have more degrees of freedom than orthogonal wavelets, allowing two multiresolution analyses: the dual scaling and wavelet functions ϕ̃j,k(x) and Ψ̃j,k(x) yield the bases for the dual spaces Ṽj and W̃j, respectively. The most important advantage of the biorthogonal wavelet transform is perfect reconstruction. An orthogonal wavelet provides an orthogonal matrix with a unitary transformation, whereas a biorthogonal wavelet provides an invertible matrix. The low-pass and high-pass filters in a biorthogonal transform have different lengths; moreover, the low-pass filter exhibits symmetry while the high-pass filter may be symmetric or antisymmetric [6].
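The perfect-reconstruction property rests on the biorthogonality condition between the synthesis low-pass filter h and the analysis low-pass filter h̃, namely Σn h[n]·h̃[n − 2k] = δ(k). A minimal numeric check, using the standard CDF 5/3 taps (the bior2.2 low-pass pair) scaled by √2:

```python
# Numeric check of biorthogonality: sum_n h[n] * h_dual[n - 2k] = delta(k)
# for the CDF 5/3 (bior2.2) low-pass filter pair.
import math

s = 1.0 / math.sqrt(2.0)
h_dual = [-0.25 * s, 0.5 * s, 1.5 * s, 0.5 * s, -0.25 * s]  # analysis low-pass, center index 2
h      = [0.5 * s, 1.0 * s, 0.5 * s]                        # synthesis low-pass, center index 1

def inner(h, hd, shift):
    """sum_n h[n] * hd[n - 2*shift], with the filter centers aligned."""
    total = 0.0
    for n in range(-10, 11):
        a = h[n + 1] if 0 <= n + 1 < len(h) else 0.0
        b = hd[n - 2 * shift + 2] if 0 <= n - 2 * shift + 2 < len(hd) else 0.0
        total += a * b
    return total

print(round(inner(h, h_dual, 0), 10))  # 1.0  (delta at k = 0)
print(round(inner(h, h_dual, 1), 10))  # 0.0  (vanishes for k != 0)
```

The analogous condition with the high-pass pair, plus the cross terms vanishing, is exactly what makes the two filter banks invert each other.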

Image Registration
Image registration is the process of spatially aligning two images obtained from the same device or from different ones. Image registration methods use feature points or feature lines. Feature points are identified by unique neighborhoods in an image; a feature line has a position and an orientation [7]. To spatially align two images, one is geometrically transformed according to the other. Transformation here means rotation and translation.
3.1. Image Orientation. Fingerprint images are usually obtained in different orientations, yet each has a preferred orientation. Reliable determination of this preferred orientation enables both images to be brought to the same orientation, facilitating the correspondence between them. The orientation of an image depends on the geometric appearance of the pattern inside it. The fingerprint images obtained usually do not overlap sufficiently, which means that they have different orientations. Image orientation is determined by aggregating local orientations, while local orientations are determined by the intensity gradients in small neighborhoods. Geometric orientations are more reliable than intensity gradients [7, 8]. Noise significantly influences the evaluation of image orientation, with the greatest impact on geometric orientations [7, 9]. To determine the orientation of an image reliably, the estimate must be independent of absolute intensities, and the digital domain should be treated as a continuous one to reduce geometric noise.
For the sequence of points P along a contour, P = {p_i = (x_i, y_i): i = 0, ⋯, n − 1}, the approximating curve of these points is p(u) = [x(u), y(u)], where x(u_i) ≈ x_i and y(u_i) ≈ y_i, for i = 0, ⋯, n − 1.
The rational Gaussian (RaG) curve formulation is used to create the curve from the set of pixels, as shown in formula (10).

Journal of Sensors
Equation (11) is for the open curve.
Equation (12) is for the closed curve.
To calculate the orientation of the line at a point, the tangent direction must be defined as in equation (13).
ϕ(u) is the gradient direction at point p_i on the contour. The calculation of geometric gradients enables the determination of image orientation from the set of points. A histogram is computed from the geometric gradients. The orientation line passes through the center of the image, with orientation defined by the highest peak of the histogram [7, 10].
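The histogram-peak step can be sketched as follows; the tangent directions below are invented for illustration.

```python
# Sketch: accumulate contour-tangent (gradient) directions into a
# 180-bin histogram and take the highest peak as the image orientation.
def orientation_histogram(angles_deg, bins=180):
    hist = [0] * bins
    for a in angles_deg:
        hist[int(a) % bins] += 1
    return hist

def dominant_orientation(angles_deg):
    hist = orientation_histogram(angles_deg)
    # index of the highest histogram bin = dominant direction in degrees
    return max(range(len(hist)), key=lambda b: hist[b])

# Tangent directions clustered around 45 degrees dominate.
angles = [44, 45, 45, 46, 45, 90, 10, 45]
print(dominant_orientation(angles))  # 45
```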

3.2. Determining the Rotation between Two Images. The rotation angle between the two images is calculated in two steps: first, the difference between the orientation lines of the images is determined, and then the difference at the sub-degree level is calculated [7].
One histogram is shifted cyclically relative to the other, and at each shift the Euclidean distance is determined. The shift that gives the smallest Euclidean distance is taken as the rotation angle between the two images.
Denoting by ϕ the rotation angle, H1(ϕ) and H2(ϕ) the respective histograms, and m the number of bins in a histogram (usually 180), the Euclidean distance between the two images is calculated as in equation (14).
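Equation (14) can be sketched as a cyclic sliding comparison of the two orientation histograms; the shift minimizing the Euclidean distance is the rotation estimate. The histograms below are synthetic examples.

```python
# Sketch of equation (14): slide one orientation histogram cyclically
# over the other and keep the shift with the smallest Euclidean distance.
import math

def cyclic_distance(h1, h2, shift):
    m = len(h1)
    return math.sqrt(sum((h1[i] - h2[(i + shift) % m]) ** 2 for i in range(m)))

def rotation_between(h1, h2):
    m = len(h1)
    return min(range(m), key=lambda s: cyclic_distance(h1, h2, s))

# h2 is h1 rotated by 30 bins; the minimum-distance shift recovers 30.
h1 = [0] * 180
h1[10] = 5; h1[40] = 3
h2 = [0] * 180
h2[(10 + 30) % 180] = 5; h2[(40 + 30) % 180] = 3
print(rotation_between(h1, h2))  # 30
```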
Algorithm 1: Determining the angle of rotation between two feature-point sets by clustering.
1. Define the vector H with 360 elements, all initialized to 0; set t = 0; t_m is the number of all possible lines in the reference image.
2. Randomly select a pair of points P_r1 and P_r2 in the reference image and their homologous points P_t1 and P_t2 in the test image.
3. Determine the angle of rotation θ of the line P_r1 P_r2 needed to give it the same orientation as the line P_t1 P_t2.
6. Increment t.
7. If t > t_m go to step 8, otherwise go to step 2.
8. Return the maximum value of H[θ].

Algorithm 2: Determining the translational difference between two point sets.
2. Select any point from the reference image with coordinates (x_r, y_r) and a random point from the test image with coordinates (x_t, y_t).
3. Determine the values t_x and t_y by solving the equations.
4. If |t_x| < d and |t_y| < d, then ++H(t_x + d, t_y + d).
5. If ++t < t_m go to step 2.
6. Denoting by (t_xmax, t_ymax) the histogram peak coordinates, return t_x = t_xmax − d and t_y = t_ymax − d.

The minimum value of D gives the rotation angle between the two images. The angle of rotation should be as precise as possible; even small misalignments of the images will lead to inaccurate registration, especially in large images.
To determine the difference between the two images at the sub-degree level of the angle ϕ, a quadratic curve is fitted through [(ϕ − 1), D(ϕ − 1)], [ϕ, D(ϕ)], [(ϕ + 1), D(ϕ + 1)]. The minimum of the curve gives the difference in rotation at the sub-degree level [7].
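For three samples of the distance curve, the quadratic minimum has the closed form Δ = (D(ϕ − 1) − D(ϕ + 1)) / (2·(D(ϕ − 1) − 2D(ϕ) + D(ϕ + 1))), giving the sub-degree offset relative to ϕ. A short sketch:

```python
# Sketch of the sub-degree refinement: fit a parabola through the
# distances at phi-1, phi, phi+1 and return the offset of its minimum.
def subdegree_offset(d_prev, d_mid, d_next):
    denom = d_prev - 2.0 * d_mid + d_next
    if denom == 0.0:          # flat curve: no refinement possible
        return 0.0
    return 0.5 * (d_prev - d_next) / denom

# Samples of D(x) = (x - 0.25)^2 at x = -1, 0, 1; true minimum at 0.25.
print(subdegree_offset(1.5625, 0.0625, 0.5625))  # 0.25
```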

3.3. Detection of Feature Points. To register the two images, the geometric transformation pattern must be defined. A suitable registration model is defined by specific points in each image, and the correspondences between them are determined. The coordinates of the homologous points define the transformation model. In this paper, methods in which feature points are rotation invariant are used; this is critical in finding homologous points in images [7, 11].

3.4. Homologous Points. To register images, the transformation parameters must be defined; they are determined based on homologous points. First, the feature points are detected, and then the correspondence between them is determined [12]. Feature-point descriptors are matched to find the homologous points. Incorrect homologous points are removed using RANSAC [13].
Denoting by P_ri, i = 1, …, m, and P_ti, i = 1, …, n, the feature points in the reference and test images, respectively, homologous points can be found only within these sets. Denoting by (x_r, y_r) the coordinates of feature points in the reference image and (x_t, y_t) those in the test image, the relation between them for a rigid transformation is shown in equations (15) and (16).
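The rigid model of equations (15)-(16) maps a reference point to its test-image counterpart by a rotation θ followed by a shift (t_x, t_y); a minimal sketch with made-up parameters:

```python
# Sketch of the rigid transformation in equations (15)-(16):
# rotate a reference point by theta, then translate by (t_x, t_y).
import math

def rigid(point, theta_deg, tx, ty):
    x, y = point
    c = math.cos(math.radians(theta_deg))
    s = math.sin(math.radians(theta_deg))
    return (x * c - y * s + tx, x * s + y * c + ty)

# Rotate (1, 0) by 90 degrees and shift by (5, 2).
xt, yt = rigid((1.0, 0.0), 90.0, 5.0, 2.0)
print(round(xt, 6), round(yt, 6))  # 5.0 3.0
```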
First the rotation is determined and then the translation.
For each random pair of homologous points in the reference and test images, the rotation angle between the respective lines is found. A histogram of the rotation angles is constructed, whose peak determines the angle of rotation between the two images [7].
Once the angle of rotation θ is determined, the displacement parameters t_x and t_y must be specified, using equations (17) and (18).
Choosing a point from each point set and assuming that they correspond, the displacement parameters t_x and t_y are determined from equations (17) and (18). Corresponding points produce the same offset parameters, while non-corresponding points produce offset parameters scattered over the range of values. A 2D histogram of the values is constructed, and after a certain number of iterations a peak emerges in it.
Denoting by d = max{number of rows, number of columns} and D = 2d + 1, the 2D histogram has dimensions D × D. Denote by t_m the maximum number of points in the two point sets. The translational difference between the two point sets is determined according to Algorithm 2.
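Algorithm 2 can be sketched as the following 2D voting scheme. For clarity, this variant votes exhaustively over all point pairs rather than sampling them randomly; the point sets are synthetic.

```python
# Sketch of Algorithm 2: vote candidate offsets (t_x, t_y) from point
# pairs into a (2d+1) x (2d+1) histogram; the peak bin gives the translation.
def estimate_translation(ref_points, test_points, d):
    D = 2 * d + 1
    hist = [[0] * D for _ in range(D)]
    for (xr, yr) in ref_points:
        for (xt, yt) in test_points:
            tx, ty = xt - xr, yt - yr
            if abs(tx) < d and abs(ty) < d:      # step 4: vote only in range
                hist[tx + d][ty + d] += 1
    # step 6: locate the histogram peak and undo the +d offset
    best = max(
        ((i, j) for i in range(D) for j in range(D)),
        key=lambda ij: hist[ij[0]][ij[1]],
    )
    return best[0] - d, best[1] - d

ref = [(10, 10), (20, 35), (40, 15), (55, 60)]
test = [(x + 7, y - 3) for (x, y) in ref]       # true offset (7, -3)
print(estimate_translation(ref, test, d=64))    # (7, -3)
```

Corresponding pairs all vote for the true offset, while the cross pairs scatter single votes, so the peak is robust even with some outliers.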

Wavelet Fusion of Images
The wavelet transform is a time-frequency analysis method which selects the appropriate frequency band adaptively based on the characteristics of the signal [14].
The wavelet transform analyzes the signal in the time-frequency domain. At each level of transformation, the low sub-band LL and the high sub-bands LH, HL, HH are obtained [14]. Iteratively, the sub-band LL is decomposed further. A wavelet transform with N decomposition levels converts the image into 3N + 1 frequency sub-bands. The low-frequency and high-frequency filters in the wavelet transform are named after the scaling function and the wavelet function, respectively. In the discrete wavelet transform (DWT), the signal is converted into scaling coefficients and wavelet coefficients, usually denoted by H and G, respectively [15]. The low, horizontal, vertical and diagonal coefficients, denoted C_{j+1}, D^h_{j+1}, D^v_{j+1} and D^d_{j+1}, respectively, are given in formula (19).
where j represents the layer of decomposition [16].
To reconstruct the image from wavelet coefficients, formula (20) is used.
Wavelet fusion processes the wavelet coefficients. It consists of two steps: first, the images are spatially aligned; second, the images undergo the DWT and their respective low and high coefficients are combined according to fusion rules. The inverse DWT (IDWT) is applied to the new coefficients, and the fused image is obtained, as illustrated in the schema of Figure 4. The experimental fingerprint images are taken from [17].
Different wavelet types can be used in the image fusion process. The biorthogonal wavelet family uses different wavelets for decomposition and synthesis, achieving perfect reconstruction of the signal from the coefficients. The low and high coefficients are processed according to the fusion rules described in Section 5.

Methodology
Information from images of the same object obtained from different sources or in different positions is often complementary and redundant. The complementary information from two or more images of the same object is combined by the fusion process. This process can be applied only if the images are geometrically registered with each other: they must be geometrically transformed in the spatial domain so that their geometric positions match completely [18]. The registration is realized based on putative points. The steps for matching fingerprint images to one another are as follows.
In each image, features are extracted together with their respective descriptors. Corresponding image features are found using these descriptors. Putative points are determined from the correspondences, and one image is geometrically translated and rotated according to the other.
Fusion performed in the spatial domain is of limited quality, yielding artifacts in the fused image, mainly around contours [19].
The aligned images then undergo the fusion process in the wavelet domain. The wavelet transform is applied to both aligned fingerprint images, resulting in low- and high-frequency components [20]. The biorthogonal wavelet family exhibits good image-reconstruction characteristics compared to orthogonal wavelet families.
The fusion process is realized by "fusing" the wavelet coefficients [21]. The low-pass coefficients of the fused image are obtained by averaging the low-pass coefficients of the two images, while the high-frequency coefficients are obtained by taking the respective maximum values of the high-frequency coefficients from each of the images transformed into the wavelet domain. The schema for obtaining the low and high coefficients in the wavelet fusion process is shown in Figure 5.
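The fusion rule just described — averaging the approximation coefficients and taking the element-wise maximum of the detail coefficients — can be sketched on toy coefficient matrices:

```python
# Sketch of the fusion rule: average the low-pass (approximation)
# coefficients; take the element-wise maximum of the high-pass
# (detail) coefficients of the two aligned images.
def fuse_low(c1, c2):
    return [[(a + b) / 2.0 for a, b in zip(r1, r2)] for r1, r2 in zip(c1, c2)]

def fuse_high(c1, c2):
    # A common variant keeps the coefficient of larger absolute value;
    # here we follow the plain maximum as stated in the text.
    return [[max(a, b) for a, b in zip(r1, r2)] for r1, r2 in zip(c1, c2)]

low1, low2 = [[2.0, 4.0]], [[4.0, 8.0]]
high1, high2 = [[1.0, -3.0]], [[-2.0, 5.0]]
print(fuse_low(low1, low2))    # [[3.0, 6.0]]
print(fuse_high(high1, high2)) # [[1.0, 5.0]]
```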
The inverse DWT is applied to the new coefficients, resulting in the fused image, which contains more information than each of the original fingerprint images.
A minutiae extraction algorithm is applied to extract minutiae from the original fingerprint images as well as from the fused one. Terminations and bifurcations are counted among the extracted minutiae [22].
To analyze the amount of information in the fused image and compare it with that of the original images, minutiae are extracted from both. Minutiae extraction is enabled by licensed and free software, but it is also performed manually by an expert [22, 23]. At the end of the extraction process, terminations and bifurcations are determined.
The choice of wavelets is very important in the wavelet transform. The wavelet selection depends on the information required to be extracted from the image.
The main characteristics offered by wavelets are compact support, symmetry, orthogonality and regularity of smoothness. Filter banks are responsible for the frequency selectivity of sub-band coding systems. Ideal filters have infinite duration, while finite filters introduce aliasing in the frequency separation. Spatial localization requires filter support as small as possible, which is determined by the order of the filter. High-order filters result in edge blurring, better frequency localization, and more vanishing moments. Low-order filters provide better localization in the spatial domain and are more suitable for images with high spectral activity [24]. Therefore, in image fusion a balance must be struck between the filter order and the frequency localization. The optimal solution depends on the spectral content of the image.
In the biorthogonal wavelet family, the orders of the reconstruction and decomposition filters range from bior1.1 to bior6.8. Different biorthogonal wavelets yield different results in the fusion process. Finding the right minutiae is crucial for accurate person identification. Different wavelets result in different numbers of minutiae extracted by the software, so the choice of wavelet has a direct impact on the number of extracted minutiae.
MSE is a parameter that measures the quality of an image [21] and is widely used in this field. The focus of the paper is increasing the quantity of information, expressed in the number of minutiae extracted from the fingerprint image. This corresponds to an increase in the difference between the original and fused fingerprint images. MSE is an important parameter for evaluating the difference between two images: a larger MSE value implies a larger difference and, here, a higher amount of information [22]. Since this information is carried by the minutiae, the fused image will have more minutiae than the original one.
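The MSE between two equal-size grayscale images is a straightforward pixel-wise computation:

```python
# MSE between two equal-size grayscale images, used here to quantify
# how much the fused image differs from the original one.
def mse(img_a, img_b):
    total, count = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += (a - b) ** 2
            count += 1
    return total / count

a = [[10, 12], [14, 16]]
b = [[11, 12], [12, 16]]
print(mse(a, b))  # (1 + 0 + 4 + 0) / 4 = 1.25
```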

Experimental Results
There are various public databases with fingerprint images; in this paper, images are taken from FVC2004 [17, 25].
In this database, fingerprint images are taken in different positions and with different kinds of damage. The images 200_L0_1.bmp and 200_L0_3.bmp are considered for experimentation. The fingerprint images are in different positions and at different angles. The second image is spatially transformed and oriented according to the first image; the result is shown in Figure 6. This orientation is based on the putative points of both images, as illustrated in Figure 7.
The two fingerprint images, the original and the oriented one, are merged to obtain a fused image. The fusion process is performed in wavelet domain, manipulating the low pass and high pass transformation coefficients.
The amount of information is estimated by the number of true minutiae extracted by the software. There are different software packages, with different extraction algorithms implemented in different programming languages. Free software yields a relatively high number of false minutiae, due to image noise, the degree of fingerprint damage and imprecise minutiae extraction algorithms. Increasing the accuracy is treated in many papers [12, 26-28]. Licensed software is much more accurate than the free ones; its results are comparable with those obtained manually by an expert [29, 30].
In the fused images, the increased number of false minutiae is due to the artifacts caused by the fusion process, as Figure 8 illustrates.
The wavelet decomposition level is very important, with a high impact on image processing. Low decomposition levels have less effect on noise reduction, while high decomposition levels may compromise image information [31-33]. Based on the literature review and the experimental results, the biorthogonal wavelet family with decomposition level 3 is used [33]. The low-pass and high-pass coefficients are then fused: the low-frequency coefficients are averaged, and the maximum value of the corresponding high-frequency coefficients is taken. This method merges the information carried by each image and at the same time reduces redundancy. The inverse DWT is then applied to the modified coefficients to obtain the fused image. Figure 8 illustrates results from a free extraction software [23]; bifurcations are shown as green points and terminations as red ones.
Due to noise, damage, artifacts caused by image processing and the inaccuracy of free software, a large number of minutiae are extracted, as shown in Figure 9. Analysis of the results shows that many of them are false minutiae, i.e., not true terminations or bifurcations. In practice it is known that terminations and bifurcations cannot be too close to each other; this is reflected in the extraction software by setting the minimum distance between spurious minutiae to 10.
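The spurious-minutiae filter mentioned above — discarding minutiae that fall closer than a minimum distance to an already accepted one — can be sketched as a simple greedy pass (the points and threshold below are illustrative):

```python
# Sketch: greedily keep minutiae, dropping any point closer than
# min_dist to an already-kept minutia (threshold 10, as in the text).
import math

def filter_close(minutiae, min_dist=10.0):
    kept = []
    for p in minutiae:
        if all(math.dist(p, q) >= min_dist for q in kept):
            kept.append(p)
    return kept

pts = [(0, 0), (3, 4), (30, 40)]   # (0,0) and (3,4) are only 5 apart
print(filter_close(pts))  # [(0, 0), (30, 40)]
```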
Applying the free minutiae extraction software [23] to the original images 200_L0_0.bmp and 200_L0_1.bmp and to the fused one, and removing minutiae outside the region of interest (ROI), the results shown in Table 1 are obtained.
According to the results obtained from the free software [23], the number of terminations in the fused image is increased compared to the original images, while the number of bifurcations is decreased. Detailed visual analysis highlights a high number of false bifurcations incorrectly extracted by the free software, as seen in the zoomed parts shown in Figures 10 and 11.
On each of them minutiae extraction algorithm is applied, from which terminations and bifurcations are obtained.
Results are shown in Table 2.
Analyzing the measurement results presented in Table 2, it is noticed that the number of minutiae (terminations and bifurcations) in the images where wavelet fusion is used has increased compared to the original image. This increase is shown in the graphical representations in Figures 13-15. Figure 13 shows the number of minutiae obtained with Papillon 9.02 and manually by an expert, applied to the original and fused images; the number of minutiae obtained from the fused images is higher than that of the original image. Figure 14 shows the number of terminations obtained from the original image and the fused ones; the number of terminations has increased in the images where biorthogonal wavelet fusion is applied. Figure 15 shows the number of bifurcations extracted from the original image and the fused ones; the number of bifurcations has likewise increased when biorthogonal wavelet fusion is applied.
Based on the analysis of the obtained results, the number of true minutiae in the fused image, determined by the licensed software and manually by an expert, is approximately double that of the original images.
Experimental results show that different biorthogonal wavelets result in different fused images and, consequently, in different numbers of extracted minutiae [34, 35]. The results are presented in Table 3.
In DWT, the correct selection of the wavelet is important. The experimental results show that different wavelets yield different MSE results, which also affects the number of true minutiae extracted. The measurement results are presented in Table 4.
Moreover, a correlation is observed between the values obtained with the automatic method and the manual one.
A higher MSE value means that the original (noisy) image and the fused one differ more from each other. Based on the results of the manual method, as the most accurate, and the highest MSE value, it is concluded that the most suitable wavelet for fingerprint images is bior1.1.

Conclusions
Increasing the number of minutiae extracted from fingerprint images increases the identification accuracy of a biometric system. In many cases, fingerprint images of unidentified persons are found in several different positions but with poor quality. The aim of the paper is to increase the number of minutiae extracted from fused fingerprint images. The fusion process is implemented in the wavelet domain using the biorthogonal DWT. The algorithm is applied to fingerprint images taken from the public database FVC2004. Minutiae extraction is performed with the licensed Papillon 9.02 software and manually by an expert. The results show that the number of minutiae in the fused image is increased. Correlating the minutiae extraction results with the MSE values yields the optimal wavelet from the biorthogonal family. For fingerprint images from the FVC2004 public database, the optimal biorthogonal wavelet for the fusion process is bior1.1.

Data Availability
Fingerprint images are taken from FVC2004 database. The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.