An entropy-histogram approach for image similarity and face recognition



Introduction
Facial recognition technology will change society in many ways. Face recognition systems are widely available nowadays and have been used in many commercial and law-enforcement applications [1]: some airport systems deploy them, and ATM and electronic-payment providers have started to test facial recognition in real settings. This availability of efficient face recognition algorithms means they can be used in real-time security applications where there is nowhere to hide.
People recognize each other by the spectacular diversity of facial features, and this ability is essential to the formation of complex societies. The face can send emotional signals, either voluntarily or involuntarily, and current biometric technology reads faces as efficiently as humans do. Holy places use face recognition to track the presence of worshipers; retailers use it to monitor thieves or to apprehend suspects. Face recognition technology helps verify the identity of a ride-hailing driver, check tourists' permits to enter tourist sites, and let people pay with a smile [2].
Face recognition falls within the broader problem of image similarity: a face can be recognized by finding the similarity and dissimilarity between an image stored in a database and a current image of the same person [3, 4]. There are different approaches to face recognition, notably statistical and information-theoretic ones. In this paper, we focus on an information-theoretic approach to designing a similarity measure capable of testing similarity for the purpose of face recognition.
Measuring image similarity is a significant task in real-world applications across several fields, such as optical character recognition (OCR), identity authentication, human-computer interfacing, surveillance, and other pattern recognition tasks [5].
A simple way to measure the similarity between two digital images is the mean squared error (MSE). Its advantage is that it is easy to calculate; at the same time, it is not accurate enough for pattern recognition.
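As a point of reference, MSE can be computed in a few lines. This is an illustrative sketch in Python with NumPy (the paper's own experiments were run in MATLAB):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equal-sized grayscale images."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.mean((a - b) ** 2))

x = np.array([[0, 255], [255, 0]])
y = np.array([[0, 255], [255, 255]])
print(mse(x, x))  # identical images -> 0.0
print(mse(x, y))  # one differing pixel -> 255**2 / 4 = 16256.25
```

Its simplicity is also its weakness: a single large pixel difference dominates the score regardless of structural content, which is why MSE fails as a pattern-recognition measure.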
An information-theoretic approach can be applied in image processing if we treat an image as a two-dimensional random variable, which allows information-theoretic constructs (such as the joint histogram) to define similarity and recognition measures between images [6]. Two useful measures of information are the Shannon and Renyi entropies. In this paper, we present information-theoretic image similarity-recognition measures and show their superior performance versus the similarity measures SSIM, FSIM, and FSM and the recognition measures ZMSIM and ZESIM.
This paper develops the information-theoretic approach and is organized as follows: Section 2 describes related work on high-performance measures; Section 3 presents the design of novel information-theoretic measures based on entropy combined with the joint histogram; Section 4 shows simulation results and performance analysis; and Section 5 presents conclusions and future work.

Related Works
Several works have addressed face recognition and image similarity measures by employing information theory and entropy concepts, tackling many of the challenges that such systems face when working in real time. Wang et al. developed SSIM and illustrated its performance with examples [7]. Zhang et al. (2011) presented the feature similarity index measure (FSIM) for image-quality assessment, using phase congruency as the primary feature and gradient magnitude as the secondary feature to compute the index; experimental results were reported on six benchmark image-quality assessment databases. Later we demonstrate that the entropy-based metrics perform much better than FSIM [8].
Feature extraction using the discrete cosine transform (DCT) with illumination normalization in the logarithm domain was proposed by Arindam Kar et al. in 2013 [9]; in a second step, they applied entropy measures to the DCT coefficients. Finally, they applied kernel entropy component analysis with an extension of arc-cosine kernels to the extracted DCT coefficients; their system was tested on four databases: FERET, AR, FRAV2D, and AT&T.
In 2013, Darshana Mistry et al. used entropy, joint entropy, and the joint histogram to find the similarity between two digital images, testing these measures on a database of brain images [10]. In 2014, Lee et al. suggested a method for face recognition combining Shannon entropy and fuzzy logic [11]: entropy is used to calculate the element ratio between two face images, and fuzzy logic is used to calculate the entropy membership.
In 2015, Yulong Wang et al. introduced MEEAR (Minimum Error Entropy-based Atomic Representation), a framework for facial recognition. MEEAR is based on the minimum error entropy (MEE) model, making it more robust under noisy conditions [12]. MEEAR can be used to develop new classifiers and provides a distinctive representation vector by minimizing the atomic-norm-regularized Renyi entropy of the reconstruction error.
An image similarity index based on an entropy function and group theory was proposed by Y. G. Suarez et al. in 2015 [13]. This index considers an algebraic group structure on images, in which image subtraction is provided by the group's inner law.
In 2016, Q. R. Zhang et al. proposed the Improved Relative Entropy (IRE) method for face recognition. IRE is based on Shannon entropy and is more accurate than the Linear Discriminant Analysis (LDA) and Locality Preserving Projections (LPP) methods. Experimental results on the CMU PIE and YALE B databases showed the superior performance of IRE versus LDA and LPP [14].
A system for emotion recognition based on facial expression was proposed by Y. D. Zhang et al. in 2016 [15]. Seven facial expressions are considered: sad, happy, angry, surprised, disgusted, neutral, and fearful. Biorthogonal wavelet entropy is used to extract the features, and a fuzzy multiclass support vector machine serves as the classifier.
To improve kernel entropy component analysis (KECA), X. Ruan et al. in 2017 [16] proceeded in three stages: first, face features are extracted using Gabor wavelets; second, nonlinear dimension-reduction algorithms are applied; third, a k-nearest-neighbor classifier performs the final classification on the fusion of differently weighted multiresolution images of a human face.
FRIQA (Full-Reference Image-Quality Assessment) is an algorithm proposed by Y. Ren et al. in 2017 [17]. FRIQA first analyzes the local entropy of the images, then calculates the similarity of local entropy between the reference image and its distorted version; finally, the quality of the distorted version is computed from the local-entropy similarity.
In a recent development, the authors of [18] introduced the state-of-the-art FSM, which combines the SSIM and FSIM methods and uses the Canny edge detector. The performance of FSM was tested under Gaussian noise over a wide range of PSNR values, using the FEI and AT&T databases. Experimental results show that FSM outperforms the SSIM and FSIM approaches in similarity measurement and recognition of human faces.

A Brief on Efficient Similarity and Recognition Measures
A "similarity measure" is a distance between two sets of data points under a given norm. If we have a dataset and a function that gives a large distance between this set and the members of a database, except possibly one member, then we have a similarity algorithm that can detect similarity between the given data and the members of the database.
In this paper, two information-theoretic measures are designed based on entropies combined with the joint histogram of two images, and their performance is compared with well-known similarity and recognition measures. All face-image recognition methods depend on extracting certain features of the images; similarity exposes these features through statistical or informational correlation. Several approaches are used to find the similarity between two images, some of them for face and facial-expression recognition. For the sake of performance comparison, we briefly describe the well-known similarity and recognition measures as follows.

Structural Similarity Index Measure (SSIM).
The structural similarity index measure (SSIM) is one of the most popular metrics for measuring the similarity between two images. Zhou Wang et al. proposed it in 2004 [7], and it has been widely used in digital image-processing systems and image-quality assessment. The technique is based on statistical measurements: it extracts statistical image features such as the mean (\mu) and standard deviation (\sigma) to define a distance function measuring the similarity between a training image and a test image. The measure is given by

$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},$$

where \mathrm{SSIM}(x, y) is the structural similarity between the test image x and the training image y, \mu_x and \sigma_x^2 are the mean and variance of the pixels in image x, \mu_y and \sigma_y^2 are the mean and variance of the pixels in image y, and \sigma_{xy} is their covariance. The quantities C_1 and C_2 are constants, C_1 = (K_1 L)^2 and C_2 = (K_2 L)^2, where K_1 and K_2 are small constants and L = 255 is the maximum pixel value.
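The SSIM formula can be sketched as follows. This is a simplified single-window (global) variant for illustration; the published SSIM of Wang et al. [7] averages the same statistic over local windows:

```python
import numpy as np

def ssim_global(x, y, K1=0.01, K2=0.03, L=255):
    """Single-window (global) SSIM sketch following Wang et al. [7].
    The published SSIM computes this statistic over local windows and averages."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2)
    return num / den

a = np.random.default_rng(0).integers(0, 256, (8, 8))
print(ssim_global(a, a))        # identical images -> 1.0
print(ssim_global(a, 255 - a))  # inverted image -> much lower (negative covariance)
```

For identical images the numerator and denominator coincide, so the measure attains its maximum value of 1.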

Feature Similarity Index Measure (FSIM).
In 2011, Zhang et al. [8] presented a feature-based similarity index for image-quality assessment (FSIM), which has become a very common measure of image similarity. Phase congruency (PC) is used as the primary feature and gradient magnitude (G) as the secondary feature. The similarity between two images is calculated as

$$\mathrm{FSIM} = \frac{\sum_{x \in \Omega} S_L(x)\, PC_m(x)}{\sum_{x \in \Omega} PC_m(x)},$$

where \Omega is the whole image spatial domain, PC_m(x) = \max(PC_1(x), PC_2(x)), and S_L(x) is the combined similarity obtained from the phase-congruency similarity S_{PC}(x) and the gradient similarity S_G(x):

$$S_L(x) = [S_{PC}(x)]^{\alpha} [S_G(x)]^{\beta}, \qquad S_{PC}(x) = \frac{2\, PC_1(x)\, PC_2(x) + T_1}{PC_1^2(x) + PC_2^2(x) + T_1}, \qquad S_G(x) = \frac{2\, G_1(x)\, G_2(x) + T_2}{G_1^2(x) + G_2^2(x) + T_2},$$

where \alpha and \beta are parameters adjusting the relative importance of the phase-congruency (PC) and gradient-magnitude (GM) features. The phase congruency at position x is

$$PC(x) = \frac{E(x)}{\varepsilon + \sum_n A_n(x)},$$

where \varepsilon is a small positive constant, E(x) = \sqrt{F^2(x) + H^2(x)} with F(x) = \sum_n e_n(x) and H(x) = \sum_n o_n(x), and e_n(x) = I(x) * M_n^e and o_n(x) = I(x) * M_n^o, noting that M_n^e and M_n^o are even- and odd-symmetric filters on scale n and "*" denotes convolution. The function I(x) is a 1D signal obtained after arranging pixels in different orientations, and the local amplitude on scale n is A_n(x) = \sqrt{e_n^2(x) + o_n^2(x)}.
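The pooling stage of FSIM can be illustrated with the sketch below. It assumes the phase-congruency and gradient-magnitude maps have already been computed (here they are toy random maps, not real features) and only demonstrates how the similarity maps are combined and pooled:

```python
import numpy as np

def fsim_pool(pc1, pc2, g1, g2, T1=0.85, T2=160.0, alpha=1.0, beta=1.0):
    """FSIM pooling stage (after Zhang et al. [8]): combine the phase-congruency
    and gradient similarity maps, weighted by the pixel-wise maximum phase
    congruency. Computing the PC and G maps themselves is omitted here."""
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)
    s_g = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)
    s_l = (s_pc ** alpha) * (s_g ** beta)
    pc_m = np.maximum(pc1, pc2)
    return float((s_l * pc_m).sum() / pc_m.sum())

rng = np.random.default_rng(1)
pc = rng.uniform(0.01, 1.0, (16, 16))  # toy phase-congruency map
g = rng.uniform(0.0, 100.0, (16, 16))  # toy gradient-magnitude map
print(fsim_pool(pc, pc, g, g))  # identical feature maps -> 1.0
```

Because each per-pixel similarity is bounded by 1, the pooled score is a weighted average in [0, 1], reaching 1 only when both feature maps agree everywhere.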

FSM: A Feature-Based Rational Measure.
In 2017, N. A. Shnain et al. [18] proposed a new structure for an image similarity measure: a rational function of measures with different statistical properties. FSM combines the best features of the well-known SSIM and FSIM approaches, trading off between their performance on similar and dissimilar images. Canny edge detection is used in FSM as a distinctive structural feature: after processing with Canny's edge filter, two binary images E_x and E_y are obtained from the original images x and y. FSM is given in [18] as a rational combination F(x, y) of \Phi and S, where \Phi stands for the feature similarity index measure (FSIM) and S stands for the structural similarity measure (SSIM), with constants set to the values 5, 3, 7, and 0.01, and with a correlative function f of the image means. This correlative function is applied not to the original images themselves but to their edge-detected versions, as f(E_x, E_y).

Zernike-Moments Approach for Image Recognition.
Zernike moments provide an efficient, rotation-invariant, and noise-resistant approach for image and face recognition, including the complicated effects of facial expressions [19].
Zernike moments are rotation-independent, as they are defined in polar coordinates (\rho, \theta), with the help of the Zernike radial polynomials R_{nm}(\rho), defined as [20]

$$R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\, \rho^{\,n-2s},$$

where n and m are integers satisfying the conditions that n - |m| is even and |m| \le n. In the two-dimensional radial domain, Zernike moments are defined as

$$A_{nm} = \frac{n+1}{\pi} \iint_{\rho \le 1} f(\rho, \theta)\, \left[R_{nm}(\rho)\, e^{jm\theta}\right]^{*} \rho\, d\rho\, d\theta,$$

where * indicates complex conjugation. In order to use these moments for image recognition, we approximate them in the discrete Cartesian coordinate system: we perform a linear transformation between the inside of the unit circle \{(x, y) : x^2 + y^2 \le 1\} and the image coordinates \{(i, j) : i, j = 0, 1, \ldots, N-1\}. We extract face features as various Zernike moments (which we call here the Zernike domain) and then define a similarity measure after imposing a distance measure in this domain; in this work we consider the Euclidean and Minkowski distance metrics. The features of an image x are represented by a vector of selected Zernike moments, Z_x, and the distance measures are applied to the feature vectors Z_x and Z_y of two images in the Zernike domain. They are defined as follows.
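The radial polynomial R_{nm} can be evaluated directly from its factorial form; the following sketch (not from the paper) checks it against two known values:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Zernike radial polynomial R_nm(rho); requires n - |m| even and |m| <= n."""
    m = abs(m)
    if (n - m) % 2 != 0 or m > n:
        raise ValueError("need n - |m| even and |m| <= n")
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

print(zernike_radial(2, 0, 0.5))  # R_20(rho) = 2*rho^2 - 1 -> -0.5
print(zernike_radial(4, 0, 1.0))  # R_nm(1) = 1 for all valid (n, m) -> 1.0
```

These polynomials, multiplied by the angular factor e^{jm\theta}, form the orthogonal basis against which the image is projected to obtain the moments A_{nm}.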

Minkowski Metric
The Minkowski metric of order p between the feature vectors Z_x and Z_y is

$$d_p(Z_x, Z_y) = \left(\sum_i |Z_x(i) - Z_y(i)|^p\right)^{1/p},$$

of which the Euclidean metric is the special case p = 2; in this work we use p = 3 for the Minkowski metric. When these two metrics are applied to test similarity between the Zernike-domain image features Z_x and Z_y (for two images x and y), we call the resulting Zernike-based similarity measures Zernike-Euclidean Similarity (ZESIM) and Zernike-Minkowski Similarity (ZMSIM). In this way, we establish a comparison with an efficient, rotation-invariant method for face recognition based on Euclidean and Minkowski distances in the Zernike domain; the Zernike feature vector is selected as in (15). The comparison shows that the proposed measures outperform the ZESIM and ZMSIM recognition approaches, even though Zernike measures are powerful enough to handle facial-expression recognition as well as face recognition.
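The two distance metrics can be sketched as one function, with p selecting the metric; the feature vectors below are toy values, not actual Zernike moments:

```python
import numpy as np

def minkowski(zx, zy, p=3):
    """Minkowski distance of order p between feature vectors;
    p = 2 gives the Euclidean metric (ZESIM), p = 3 the one used by ZMSIM."""
    d = np.abs(np.asarray(zx, dtype=np.float64) - np.asarray(zy, dtype=np.float64))
    return float((d ** p).sum() ** (1.0 / p))

zx = [1.0, 2.0, 3.0]  # hypothetical Zernike feature vector Z_x
zy = [2.0, 2.0, 4.0]  # hypothetical Zernike feature vector Z_y
print(minkowski(zx, zy, p=2))  # sqrt(1 + 1) ~= 1.4142
print(minkowski(zx, zy, p=3))  # (1 + 1)**(1/3) ~= 1.2599
```

A smaller distance in the Zernike domain means the two images are more similar under the corresponding measure.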

ISSIM: A Functionally Relative 2D Histogram-Based Similarity Measure.
In [21], an efficient information-theoretic measure (called ISSIM) has been proposed. This measure uses a functionally normalized error function based on the 2D joint histogram between the two images x and y under test, where H_x and H_y represent elements of the joint histogram and a small positive constant avoids division by zero. Normalization is done relative to the function h_x (the histogram of the reference image) and the maximum pixel value L = 255; additional (scalar) normalization steps ensure that the measure stays inside [0, 1].
The Proposed Measures.
Researchers have proposed several similarity and recognition metrics in the image-processing field; each has its weaknesses and strengths.
The most disturbing problem in image similarity for face recognition is the confusingly high similarity that a given measure may report between the reference image and other, unrelated images in the database.
In this paper, we propose novel information-theoretic similarity-recognition measures for image similarity and face recognition. The proposed measures reduce confusion in face recognition by giving a very small similarity between unrelated images. Information theory has already been applied to pattern recognition [22]; here we apply it to design two similarity-recognition measures useful for face recognition and image similarity. Both apply the concept of entropy (Shannon and Renyi) to the image joint histogram, treated as a probability distribution. The new measures are named the Renyi Similarity Measure (RSM) and the Shannon Similarity Measure (SHS), according to the entropy used. Performance tests have been run against the popular metrics SSIM, FSIM, ZMSIM, ZESIM, and FSM; additional tests include comparisons with the information-theoretic ISSIM.

Shannon and Renyi Entropies.
Entropy is the expected value of information. Entropy has several applications in statistical mechanics, coding theory, statistics, and related areas, and emerging fields such as image similarity have also used it [23]. The most significant entropy in applications is the Shannon entropy:

$$H(X) = -\sum_{i=1}^{n} p(x_i) \log_2 p(x_i),$$

where H is the entropy, X = \{x_1, x_2, \ldots, x_n\} is a discrete random variable, and p(x_i) \in [0, 1] is the probability of event x_i. Here the probabilistic events are the elements of the 2D joint histogram between two images (the test and reference images).
The Renyi entropy is another significant measure of information:

$$H_\alpha(X) = \frac{1}{1-\alpha} \log_2 \left(\sum_{i=1}^{n} p_i^{\alpha}\right),$$

where \alpha \ge 0, \alpha \ne 1, X is a discrete random variable, and the p_i are the corresponding probabilities for i = 1, \ldots, n. This entropy is a mathematical generalization of the Shannon entropy.
The main difference between the Shannon and Renyi entropies is the placement of the logarithm in the entropic equations; the parameter \alpha makes the Renyi entropy a flexible measure, enabling several measurements of dissimilarity [24]. Applied to a joint histogram, this entropy gives high performance for face recognition.
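Both entropies can be computed directly from a probability vector; the sketch below (illustrative Python, not from the paper) verifies that they agree on a uniform distribution:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(X) = -sum p_i log2 p_i (with 0 log 0 taken as 0)."""
    p = np.asarray(p, dtype=np.float64)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha(X) = (1/(1-alpha)) log2 sum p_i^alpha."""
    if alpha < 0 or alpha == 1:
        raise ValueError("need alpha >= 0 and alpha != 1")
    p = np.asarray(p, dtype=np.float64)
    p = p[p > 0]
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

u = [0.25, 0.25, 0.25, 0.25]
print(shannon_entropy(u))     # uniform over 4 outcomes -> 2.0 bits
print(renyi_entropy(u, 2.0))  # also 2.0 bits: the entropies agree on uniform p
print(renyi_entropy([0.5, 0.5, 0.0], 0.5))  # zero-probability events drop out
```

On a uniform distribution every Renyi order collapses to log2 of the number of outcomes, which is also the Shannon value; the two diverge on non-uniform distributions, where \alpha tunes the sensitivity to rare versus common events.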

A Joint Entropic-Histogram Similarity and Recognition Measure.
In a huge database of digital images, such as a face database, very different images may have identical histograms. This is a problem when comparing images using the histogram as a distinctive feature. To solve it, Pass et al. [25] proposed an alternative to the classical histogram, called the joint histogram, which includes additional information without losing the powerful features of the histogram. The joint histogram is based mainly on selecting a set of local pixel features to construct a multidimensional histogram.
A 2D joint histogram entry H_{xy}(i, j) for two images x and y represents the probability that pixel intensity value i from image x co-occurs with pixel intensity value j from image y. The normalized joint histogram for two images x and y of size M \times N is defined here as

$$H_{xy}(i, j) = \frac{1}{MN}\, \#\{(m, n) : x(m, n) = i \text{ and } y(m, n) = j\},$$

where both i and j range from 0 to L = 255. We then apply entropy to measure the information held in the joint histogram, which represents the joint probability of pixel co-occurrence. First, the Shannon entropy is applied to obtain the Shannon-Histogram Similarity Measure (SHS), computed on h = H(:), where the colon operator (as defined in MATLAB) reshapes the 2D joint histogram H into a one-dimensional vector h of (L+1) \cdot (L+1) elements.
Applying the Renyi entropy in the same way gives the Renyi-Histogram Similarity Measure (RSM), where \alpha \ge 0 and \alpha \ne 1. Using other entropies could be even more helpful; however, this is beyond the scope of this paper and will be investigated in future work.
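The joint-histogram-plus-entropy pipeline above can be sketched as follows. The final scalar normalization that turns this entropy into the SHS/RSM similarity value is not reproduced in the extracted text, so the sketch stops at the Shannon entropy of the flattened joint histogram:

```python
import numpy as np

def joint_histogram(x, y, levels=256):
    """Normalized 2D joint histogram H(i, j): probability that intensity i
    in image x co-occurs with intensity j in image y at the same pixel."""
    h = np.zeros((levels, levels), dtype=np.float64)
    for i, j in zip(x.ravel(), y.ravel()):
        h[i, j] += 1.0
    return h / x.size

def joint_entropy(x, y, levels=256):
    """Shannon entropy of the flattened joint histogram h = H(:); the core
    quantity behind SHS (the paper's final scalar normalization is omitted)."""
    h = joint_histogram(x, y, levels).ravel()
    h = h[h > 0]
    return float(-(h * np.log2(h)).sum())

rng = np.random.default_rng(2)
a = rng.integers(0, 256, (32, 32))
noisy = np.clip(a + rng.integers(-5, 6, a.shape), 0, 255)
print(joint_entropy(a, a))      # identical images: mass on the diagonal only
print(joint_entropy(a, noisy))  # noise spreads mass off the diagonal -> higher entropy
```

For identical images all probability mass lies on the diagonal H(i, i), so the joint entropy reduces to the entropy of a single image; any dissimilarity spreads mass off the diagonal and raises the entropy, which is what the SHS and RSM measures exploit.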

Motivation.
One of the most difficult challenges in measuring image similarity for face recognition is the high level of uncertainty about the similarity between the reference image and a test image from the same database, particularly when the image has low resolution or is distorted by illumination or background changes.
Differences in facial expression and head pose also give rise to this uncertainty. Official government security systems do not rely entirely on face recognition, because such systems still suffer from challenges such as varying facial expressions, illumination, and shape changes with age. Nevertheless, a face recognition system can strongly support current routine security systems.
In this work, we contribute to reducing these challenges in image similarity, especially for the purpose of face recognition. We propose new image similarity measures, built using an information-theoretic approach, that prove to be very accurate in finding similarity between face images with more confidence than existing image similarity and recognition measures. Our method is motivated by the problem of finding image similarity in large databases, where reduced confidence may open the door to considerable confusion.
The aim of this work is to provide metrics for finding similarity between images for the purpose of face recognition; they can also be used for non-face images. High performance and accuracy are the main features of the proposed measures as compared with existing ones. Although other measures can also find the similarity between images (even for face recognition), the proposed measures offer high confidence by giving a near-zero value for different images, whereas other measures report a nontrivial amount of similarity when comparing different images.

Experimental Results and Performance
We implemented the proposed measures in MATLAB and tested their performance against the other measures as follows.

Test Environment: Image Databases.
In this work, we used the well-known AT&T and FEI face databases [26, 27] as the test environment. The AT&T database, shown in Figure 1, has 40 persons, each with 10 different poses (including facial expressions); hence the total number of AT&T face images used in this test is 400. The FEI database, shown in Figure 2, has 200 persons, each with 14 different poses (including facial expressions); a 700-image subset of it is used in this test.
We divided the AT&T and FEI databases into two subgroups: a testing group and a training group. From the training group, we choose a random face image to serve as the reference image; we then select, from the testing group, a different facial expression and pose of the same person as a challenging image to test the recognition and similarity performance of the measures.
In addition, several publicly available image databases are used in the image-similarity community, including TID2008 and the image and video-communication (IVC) database; both are used here for algorithm validation and comparison. TID2008, shown in Figure 3, contains 25 reference images and 1,700 distorted images (25 reference images × 17 types of distortion × 4 levels of distortion) [28]. The IVC database, shown in Figure 4, has 10 original images and 235 distorted images generated by four different processes: JPEG, JPEG2000, LAR coding, and blurring [29].
For each reference image in the TID2008 and IVC databases, we use six complex distorted versions as image poses to test and compare the measures, showing that the proposed SHS and RSM outperform the existing measures in both recognition and similarity tests.
Note that although we obtained good results using these standard databases, better results could be obtained using the Viola-Jones face detection algorithm [30], local analysis [31, 32], or hybrid analysis [33], since a face detection algorithm emphasizes the relevant face features while ignoring artefacts.
The criterion for good performance is the amount of confusion in deciding whether an image belongs to a database or not. This confusion is measured by the difference in similarity produced (by a specific measure) between the reference image and the database images, with a focus on the best match and the second-best match. If a measure gives little difference in similarity between the best and second-best matches, the confusion is high and the performance is low.

Results and Discussion.
To evaluate the performance of the proposed SHS and RSM against SSIM, FSIM, ZESIM, ZMSIM, and the state-of-the-art FSM, we describe the experimental procedure in detail.
In this paper, we use four challenging datasets: AT&T, FEI, IVC, and TID2008. AT&T and FEI are used to test the performance of all the measures in terms of face recognition, while TID2008 and IVC are used to test their performance in terms of image similarity, as shown in the figures below.
Figure 5 shows the test image, person number 17 in the AT&T database, with all poses; note that we chose pose 10 as the reference image. Figure 6 shows the result of applying the proposed measures (SHS and RSM) and the existing measures to recognize the person indicated in Figure 5; the similarity differences between best and second-best match are 0.9579, 0.9374, 0.6595, 0.2869, 0.5568, 0.4942, and 0.7662 for the respective measures.
Figure 7 shows the test image, person number 17 in the AT&T database, with all poses; note that we chose pose 3 as the reference image. Figure 8 shows the result of applying the proposed and existing measures to recognize the person chosen in Figure 7; the similarity differences between best and second-best match are 0.9554, 0.9355, 0.5860, 0.2549, 0.4124, 0.3260, and 0.7513 for the respective measures. Note that the measures have also been tested using different facial expressions with different illumination and different head poses in the same databases; such cases represent the current challenges for any face recognition or image similarity measure. The results show more confidence in our proposed measures.
In Figure 9 we chose person number 20 in the AT&T database as the test image, with pose number 10 as the reference image. Figure 10 shows the result of applying the SHS and RSM measures against SSIM, FSIM, ZESIM, ZMSIM, and the recent FSM for the person indicated in Figure 9; the similarity differences between best and second-best match are 0.957, 0.9265, 0.7056, 0.0580, 0.4328, 0.3339, and 0.7513 for the respective measures.
The FEI database represents the hardest face recognition challenges because it contains different facial expressions under different illumination (with a white homogeneous background) and different head poses (spanning about 180 degrees).
Figure 11 shows the test image, person number 17 in the FEI database, with all poses; note that we chose pose number 8 as the reference image. Figure 12 shows the result of applying the proposed and existing measures to recognize the person chosen in Figure 11; the similarity differences between best and second-best match are 0.9138, 0.8036, 0.1975, 0.1865, 0.2976, 0.2899, and 0.6280 for the respective measures.
Figure 13 shows person number 28 with all poses in the FEI database as the test image; note that we chose pose number 6 as the reference image. Figure 14 shows the result of applying the proposed SHS and RSM against the existing measures to recognize the person chosen in Figure 13; the similarity differences between best and second-best match are 0.9437, 0.8686, 0.3085, 0.2508, 0.6784, 0.6503, and 0.7334 for the respective measures.
As face recognition can be pose-dependent, we averaged the similarity confidence measure over every pose in the AT&T dataset; the global average is obtained as the mean of all these subaverages. Let \Delta_{pq} denote the similarity confidence when the image of person p with pose q is the reference image while recognizing person p among the 40 people; the global confidence average is then taken over all persons and poses. Table 1 shows the performance of the proposed SHS and RSM versus the other methods. Note that the average similarity differences on the other databases gave nearly similar results. It is clear that the only near match to the proposed measures is the recently proposed FSM.
When preparing a database for this approach (e.g., in security applications), important factors such as lighting, expression, and viewpoint should be taken into consideration, and the reference image should reflect the same factors. It is clear that the proposed joint entropic-histogram measures give more confident decisions in face recognition and image similarity, whereas the other measures, although they identify the proper person correctly, give low confidence in their decisions.
Using a database of distorted images is a real challenge to both the proposed and the existing similarity and recognition measures. In this work, we tested SHS and RSM on distorted images for both image similarity and image recognition; the figures below show that the proposed methods remain superior to the others.
Figures 15, 16, 17, and 18 each contain three parts: (a) the original reference image from the database, (b) the distorted version of the reference image, and (c) the performance of our proposed image similarity-recognition measures compared with the existing measures. The proposed RSM and SHS demonstrate better performance in terms of recognition and similarity confidence. Although the other measures correctly select the proper image with maximum similarity, they give low confidence in their decision because of many cases of distrust (large similarities with wrong images). This is a serious problem when such measures are employed in security recognition tasks; SHS and RSM give more confidence in deciding the proper image from a database.
The difference between the peak values of each measure is a new feature showing the high performance of the proposed measures (SHS and RSM). A larger distance between the highest match and the second-best match means better performance; conversely, a smaller distance means the measure is confused in deciding the best match, giving a nontrivial similarity between different images. This recognition-confidence feature can be very useful in security systems with big databases.
Receiver Operating Characteristics (ROC). An ROC graph is used for performance evaluation of classifiers; it is a two-dimensional graph in which the true positive rate (tpr, also called hit rate or recall) is plotted versus the false positive rate (fpr, also called false alarm rate), defined as follows [34]:

$$\mathrm{tpr} = \frac{TP}{TP + FN}, \qquad \mathrm{fpr} = \frac{FP}{FP + TN},$$

where TP, FP, FN, and TN are the numbers of true positives, false positives, false negatives, and true negatives. Tables 2 and 3 show the fpr and tpr using the AT&T database (40 persons), while Figure 19 shows an ROC graph with 8 classifiers (similarity measures), including the recent information-theoretic ISSIM [33]. The difference \Delta between the best match and the second-best match is used as a confidence measure in this experiment to confirm that a face image belongs to the database. The confidence thresholds are given by the vector \tau = [0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 0.95]. To confirm that a face image does not belong to the database, the measure 1 - \Delta is used with the same thresholds. Note that when all images are unrelated to a specific test image (i.e., the image does not belong to the database), we expect low values of \Delta, since only similarity features (including correlative, structural, and information-theoretic features) can push \Delta up.
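The tpr/fpr definitions translate directly into code; the counts below are hypothetical, for illustration only:

```python
def roc_point(tp, fp, fn, tn):
    """One ROC point from confusion-matrix counts:
    tpr = TP / (TP + FN), fpr = FP / (FP + TN), as in [34]."""
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical counts for one confidence threshold (illustrative only).
tpr, fpr = roc_point(tp=30, fp=5, fn=10, tn=55)
print(tpr)  # 30 / 40 = 0.75
print(fpr)  # 5 / 60 ~= 0.0833
```

Sweeping the confidence threshold \tau produces one (fpr, tpr) point per threshold; a measure whose curve stays above the y = x diagonal makes more correct than incorrect decisions.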

Conclusions
This paper presented an efficient approach to face recognition and image similarity. The approach is based on an information-theoretic similarity measure derived from the entropy of a 1D version of the 2D joint histogram between two images. Two entropies have been used, Shannon and Renyi, giving rise to two measures: the Shannon-Histogram Similarity (SHS) and the Renyi Similarity Measure (RSM). The performance of RSM and SHS was tested against efficient existing similarity metrics, feature-based similarity (FSIM) and structural similarity (SSIM), as well as Zernike-moment recognition approaches, specifically Zernike-Euclidean Similarity (ZESIM) and Zernike-Minkowski Similarity (ZMSIM), and the state-of-the-art FSM. A comparison with the recent information-theoretic ISSIM has also been considered. Experimental results showed superior performance for the proposed measures in terms of correct decisions with minimal confusion in face recognition and image similarity, using the AT&T and FEI face databases and the TID2008 and IVC image databases. Confusion in recognition is introduced as a performance factor, measured as the difference between the similarity produced by the best match and that produced by the second-best match.
In this work, global face analysis has been applied, where the whole image is treated at once. Although good results were obtained using standard databases, difficulties may arise in practice. The Viola-Jones face detection algorithm and local analysis of face images have played a significant role in improving face recognition. The authors intend to pursue this point in future work and extend their previous studies on

Figure 19: ROC graphs for 8 similarity measures with 6 different confidence thresholds (bold for the proposed measures). The graphs of the proposed measures always lie above the y = x line (toward the top-left corner, meaning more benefits than disadvantages).

Figure 1: Various face poses for a single person from the AT&T face database.

Figure 2: Various face poses for a single person from the FEI face database.

Figure 3: Eight TID2008 reference images used for the test and comparison of image similarity measures.

Figure 4: Ten IVC reference images used for the test and comparison of image similarity measures.

Figure 5: Poses for person no. 17 in the AT&T database. Reference pose is number 10, as indicated.

Figure 7: Poses for person no. 17 in the AT&T database. Reference pose is number 3, as indicated.

Figure 9: Poses for person no. 20 in the AT&T database. Reference pose is number 10, as indicated.

Figure 11: Poses for person no. 17 in the FEI database. Reference pose is number 8, as indicated.

Figure 15: Performance of recognition measures using an original image and a distorted version of it. (a) The reference image. (b) Its distorted version. (c) Performance of SSIM, FSIM, FSM, ZESIM, ZMSIM, SHS, and RSM using the TID2008 database.

Figure 16: Performance of recognition measures using an original image and a distorted version of it. (a) The reference image. (b) Its distorted version. (c) Performance of SSIM, FSIM, FSM, ZESIM, ZMSIM, SHS, and RSM using the TID2008 database.

Figure 17: Performance of recognition measures using an original image and a distorted version of it. (a) The reference image. (b) Its distorted version. (c) Performance of SSIM, FSIM, FSM, ZESIM, ZMSIM, SHS, and RSM using the IVC database.

Figure 18: Performance of recognition measures using an original image and a distorted version of it. (a) The reference image. (b) Its distorted version. (c) Performance of SSIM, FSIM, FSM, ZESIM, ZMSIM, SHS, and RSM using the IVC database.
Criterion. Performance of the proposed measures has been tested against other efficient similarity and recognition metrics: SSIM, FSIM, FSM, ZESIM, and ZMSIM.

Table 1: The global average similarity difference between best match and second-best match over all persons.

Table 2: True positive rate (tpr) according to the threshold vector of confidence using the AT&T database, with pose 10 (of each of the forty persons) as the reference image (included in the database).