A Robust Illumination Normalization Method Based on Mean Estimation for Face Recognition

An illumination normalization method for face recognition is developed, since it is difficult to control lighting conditions efficiently in practical applications. Considering that the irradiation light varies little within a certain area, a mean estimation method is used to simulate the illumination component of a face image. The illumination component is removed by subtracting the mean estimate from the original image. In order to highlight facial texture features and suppress the impact of adjacent domains, the ratio of the quotient image to its modulus mean value is computed. The exponent of this ratio closely approximates a relative reflection component. Since the gray value of the facial organs is less than that of the facial skin, postprocessing is applied to the images in order to highlight the facial texture for face recognition. Experiments show that the performance of the proposed method is superior to that of the state-of-the-art methods.


Introduction
Face recognition is one of the most active research topics due to its wide applications [1, 2]. It can be used in many areas, including security access control, surveillance monitoring, and intelligent human-machine interfaces. The accuracy of face recognition is still not ideal at present. Among the numerous adverse factors, appearance variation caused by illumination is one of the major problems that remain unsettled. Many approaches, such as the illumination cones method [3] and the 9D linear subspace method [4], have been proposed to solve illumination problems and improve face recognition. The main drawback of these approaches is the need for knowledge about the light source or for a large volume of training data. To overcome this demerit, region-based image preprocessing methods were proposed in [5-7]. However, these methods introduce some noise by making the global illumination discontinuous.
Some illumination normalization methods that require no training images and have low computational complexity [8, 9] have been proposed to deal with varied lighting, such as multiscale retinex (MSR) [10], the wavelet-based normalization technique (WA) [11], and the DCT-based normalization technique (DCT) [12]. However, the facial features they extract are poor and have messy histograms. In order to highlight facial texture features, further methods have been proposed, including adaptive nonlocal means (ANL) [13], DoG filtering (DOG) [14], the steerable filter (SF) [15], and wavelet denoising (WD) [16]. With these methods, the overall gray values after processing differ to varying degrees, and part of the facial feature information is removed. Retina modeling (RET) [17] is an improved method based on DOG [14]. Gradientfaces (GRF) [18] uses the image gradient orientation to extract an illumination-invariant feature. Weberfaces (WEB) [19] extracts an illumination-invariant feature by computing the relative gradient.
Aiming to overcome some limits of illumination processing for face recognition, a new illumination normalization method is proposed. Considering that the irradiation light varies little within a certain area, a mean estimation method is used to simulate the illumination component of a face image. The illumination component is removed by subtracting the mean estimate from the original image. In order to standardize the overall gray level of different facial images, a ratio matrix of the quotient image and its modulus mean value is obtained. The exponent of this ratio closely approximates a relative reflection component. Since the gray value of the facial organs is less than that of the facial skin, postprocessing is applied to the images to highlight the facial texture for face recognition. The first contribution of the developed approach is that it is more robust in handling illumination variation for face recognition than the state-of-the-art methods. The second contribution is that the proposal can extract some distinctive facial texture features for face recognition. The third contribution is that it performs fast illumination normalization on ordinary hardware for a cluttered face image with challenging illumination variations, without any prior hypothesis about the image contents. The fourth contribution is that it reduces the storage space required for the images. A comparative study with several state-of-the-art methods indicates the superior performance of the proposal.
The rest of the paper is organized as follows. In Section 2, the mean estimation illumination normalization method is described. Experimental results and comparisons are shown in Section 3. Conclusions are given in Section 4.

Mean Illumination Estimation
Smoothing techniques are often used to estimate the illumination of a face image [13, 14]. According to the illumination-reflection model, the intensity of a face image f(x, y) can be described as

f(x, y) = r(x, y) · i(x, y),  (1)

where r(x, y) is the reflection component and i(x, y) is the illumination component. Since r(x, y) depends only on the surface material of the object, it is the intrinsic representation of a face image. Suppose that i(x, y) changes little within a small area when the light source is remote. In order to separate the two components, a logarithm transformation is applied to (1):

g(x, y) = ln f(x, y) = ln r(x, y) + ln i(x, y).  (2)

The mean estimate of g(x, y) is obtained as follows:

ḡ(x, y) = (1/n²) Σ_{(s,t)∈S_{n×n}} g(s, t),  (3)

where S_{n×n} is the local neighborhood of the pixel (x, y) for image mean convolution processing, (s, t) are the locations of the pixels in the neighborhood of (x, y), and n is the side length of S_{n×n}, which will be discussed later. The quotient image is computed from (2) and (3) to eliminate i(x, y) as follows:

q(x, y) = g(x, y) − ḡ(x, y) = ln (r(x, y)/r̄(x, y)) + ε,  (4)

where

ε = ln (i(x, y)/ī(x, y)),  (5)

with r̄ and ī denoting the local means of r and i. Since the illumination changes little within the neighborhood, ε is a very small value which can be omitted. Equation (4) can be rewritten as

q(x, y) ≈ ln (r(x, y)/r̄(x, y)) = R(x, y),  (6)

where R(x, y) represents the ratio between the current point's reflectance and the n × n average reflectance.
It indicates the difference of materials between the point (x, y) and its surroundings. When they are both the same kind of substance, the value of R(x, y) is 0.
The reflectance of facial skin is usually greater than that of the facial features. Consider

k = (1/(M · N)) Σ_{(x,y)∈S_{M×N}} |q(x, y)|,  (7)

where M is the number of rows in the image, N is the number of columns in the image, and S_{M×N} refers to the entire image range. k represents the average gray-value ratio of the facial skin and the facial features. The difference of the global gray level among processed images is reduced by

q̂(x, y) = exp(α · q(x, y)/k),  (8)

where α is a scaling tuning parameter which will be discussed later.
To suppress the facial background noise and highlight the facial texture features, the following postprocessing is done:

r̃(x, y) = [q̂(x, y) − q̂_min],  (9)

where r̃(x, y) is the final image used for face recognition, q̂_min is the minimum value of q̂(x, y), and [·] denotes rounding to the nearest integer. r̃(x, y) refers to the relative reflection component of the facial texture and skin; it minimizes the impact of illumination variations.
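The pipeline above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: the +1 offset before the logarithm, the edge-replicated border handling of the box mean, the assumption that n is odd, and the final rescaling of the shifted result to [0, 255] before rounding are all implementation assumptions not fixed by the text.

```python
import numpy as np

def local_mean(g, n):
    """n x n box mean via an integral image (n assumed odd; borders edge-replicated)."""
    pad = n // 2
    gp = np.pad(g, pad, mode="edge")
    c = np.pad(np.cumsum(np.cumsum(gp, axis=0), axis=1), ((1, 0), (1, 0)))
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / (n * n)

def mean_estimation_normalize(img, n=11, alpha=2.2):
    """Sketch of the mean-estimation normalization described above."""
    g = np.log(img.astype(np.float64) + 1.0)   # logarithm transform; +1 avoids log(0)
    q = g - local_mean(g, n)                   # quotient image, ~ ln(r / r_mean)
    k = np.mean(np.abs(q))                     # modulus mean over the whole image
    q_hat = np.exp(alpha * q / k)              # relative reflection component
    out = q_hat - q_hat.min()                  # shift by the minimum value
    out = np.rint(255.0 * out / out.max())     # rescale and round (assumed step)
    return out.astype(np.uint8)
```

The default parameters n = 11 and alpha = 2.2 follow the values selected experimentally in Section 3.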

Experimental Results
To evaluate the performance of the proposed method, some challenging face databases with different illuminations are selected: Yale B [20] and Extended Yale B [21], CMU-PIE [22], and CAS-PEAL-R1 [23]. Besides, several state-of-the-art methods, including RET [17], GRF [18], and WEB [19], are selected for comparison with the proposed method. Since PCA [24] and LDA [25] are highly sensitive to illumination variation, they are used to test the illumination normalization under the same conditions. In order to build a fair comparison, all the investigated methods are implemented with the parameters recommended by their authors.

3.1. Parameter Analysis. In order to obtain reasonable values for n and α, the Yale B face database is used for face recognition. In Yale B, the frontally lit face image (light-source direction 0°) of each subject is selected as the training sample; the others are taken as the testing set. PCA is used for face recognition. Some experimental results are shown in Figure 1, where α is set from 0.2 to 5 at intervals of 0.2 and n is set from 5 to 20 at intervals of 1. From Figure 1, the face recognition performance is best when n is set to 11 and α to 2.2.
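The parameter selection above is an exhaustive grid search and can be sketched as follows. Here `recognition_rate` is a hypothetical callback, not part of the paper: it would normalize Yale B with the given parameter pair and return the PCA recognition rate.

```python
import numpy as np

def select_parameters(recognition_rate):
    """Exhaustive search over the grid described above:
    alpha in 0.2..5.0 (step 0.2) and n in 5..20 (step 1).
    recognition_rate(alpha, n) is an assumed evaluation hook."""
    alphas = np.arange(0.2, 5.0 + 1e-9, 0.2)
    best_rate, best_alpha, best_n = -np.inf, None, None
    for alpha in alphas:
        for n in range(5, 21):
            rate = recognition_rate(alpha, n)
            if rate > best_rate:
                best_rate, best_alpha, best_n = rate, alpha, n
    return best_alpha, best_n
```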
The value of α is not changed, because the facial reflectance of different people varies little. The value of n is changed according to the variation of the face width:

n = [n₀ · wid / wid₀],  (10)

where wid is the width of the face, wid₀ is the face width in the Yale B face database and is set to 168, and n₀ is 11. The facial texture is clearly highlighted when the proposal is adopted for illumination normalization in Figure 2. In order to quantitatively evaluate the performance of the investigated approaches, some experimental results on Yale B and Extended Yale B are shown in Figures 3 and 4, respectively. The face recognition results change acutely under some illuminations for RET [17], GRF [18], and WEB [19] in Figures 3 and 4. To further demonstrate the different performance of the investigated methods, some statistical results of face recognition are given in Tables 1 and 2, respectively.
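The face-width scaling rule above can be sketched as a one-liner. The rounding to the nearest integer and the lower bound are assumptions; the text fixes only the proportionality with wid₀ = 168 and n₀ = 11.

```python
def neighborhood_size(wid, wid0=168, n0=11):
    """Scale the n x n averaging window with the detected face width.
    Rounding and the floor of 1 are assumed details; the paper fixes
    only the proportion n = n0 * wid / wid0."""
    return max(1, round(n0 * wid / wid0))
```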

Comparisons on Yale B and Extended Yale B
One can note that the proposed method achieves the best face recognition performance and the best stability in these experiments, and it is more robust than the other methods under the Yale B and Extended Yale B illuminations.

Comparisons on CMU-PIE.
The CMU-PIE face database includes 68 subjects under 21 illuminations; the number of subjects is larger than in Yale B + Extended Yale B. Some results for the same person under different illuminations are shown in Figure 5. The facial texture is also highlighted when the proposal is adopted for illumination normalization, as can be seen from Figure 5.
In order to quantitatively evaluate the performance of the investigated approaches, the face images from the CMU-PIE face database are numbered from 1 to 21 according to the different illuminations. One illumination of each subject is selected randomly as the training sample; the others are taken as the testing set. Some experimental results are shown in Figure 6.
Face recognition changes obviously under some illumination conditions for RET [17], GRF [18], and WEB [19] in Figure 6, while the recognition rate is 100% when the proposal is used to normalize the illumination. Some statistical results of face recognition are given in Table 3 to further demonstrate the different performance of the investigated methods.

Comparisons on CAS-PEAL-R1.
The illumination of the CAS-PEAL-R1 face database is completely different from the databases mentioned above, and the number of illumination conditions is not the same for each subject. Besides, face recognition is more difficult since the images of the same person present different poses and scale changes. Moreover, there exists illumination interference in this database; some examples of illumination interference are given in Figure 7.
If we take Figures 7(a) and 7(b) as the training set and Figures 7(c) and 7(d) as the test set, the closest match in illumination condition is between Figures 7(a) and 7(d); the same situation occurs between Figures 7(b) and 7(c). This indicates that illumination interference easily leads to face recognition errors. In order to build a fair comparison among the investigated approaches, 233 subjects under 10 different illuminations are chosen, cropped, and resized to 168 × 192. One illumination of each subject is selected randomly as the training sample, and the others are taken as the testing set for face recognition. Some results of illumination normalization for the same person under different illuminations are shown in Figure 8.
One can find that the facial texture is highlighted if the proposal is adopted for illumination normalization in Figure 8.
Some quantitative results of face recognition among the investigated approaches are shown in Figure 9.
Face recognition changes acutely under some illumination conditions for RET [17], GRF [18], and WEB [19] in Figure 9, while the recognition rate changes gently for the proposal. Some statistical results of face recognition are given in Table 4 to further illustrate the performance differences among the investigated methods.
Compared with other methods, the proposed method has superior performance for face recognition in the above experiments.

Running Time and Storage Space.
The above experiments show that, compared with the state-of-the-art methods, the proposed method has superior performance for illumination normalization. In order to test its real-time performance, all face images of Yale B and Extended Yale B, with a size of 192 × 168, are used for testing. All the experiments are run in MATLAB 7.10.0 on a PC with a PIV 3.3 GHz CPU and 3.41 GB RAM. In order to make a fair comparison, the dataset is processed 10 times repeatedly. The average running times of the different approaches are given in Table 5, which shows that the real-time performance of the proposed method is better than that of RET [17] and WEB [19].
To test the size of the required storage space, the above face images are saved in TIFF format with the LZW [26] compression algorithm. The storage space required for each subject, comprising the images formed under 64 different illumination conditions, is shown in Table 6. The storage space required by the proposed method is the smallest.
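For instance, using Pillow (an assumed tooling choice; the paper's experiments used MATLAB), an 8-bit result image can be written as an LZW-compressed TIFF like this:

```python
import numpy as np
from PIL import Image

def save_lzw_tiff(arr, path):
    """Save an 8-bit grayscale face image as a TIFF with LZW compression,
    the lossless format used in the storage comparison above."""
    Image.fromarray(np.asarray(arr, dtype=np.uint8)).save(
        path, format="TIFF", compression="tiff_lzw"
    )
```

Because LZW is lossless, the image read back from disk is identical to the saved one; the storage comparison therefore measures only how compressible each method's output is.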
From the above experiments, one can see that the performance of the proposed method in illumination normalization for face recognition is better than that of the state-of-the-art methods.

Conclusions
An illumination normalization method for face recognition is proposed. The proposed method can extract distinct facial features for face recognition from face images under challenging lighting conditions. The facial texture features are highlighted and some adverse impacts of adjacent domains are suppressed, which helps to improve face recognition. Experimental results show that the proposed method performs much better than the state-of-the-art methods. In the future, we will investigate further feature extraction to improve the performance of illumination normalization for face recognition.

Figure 1 :
Figure 1: Face recognition results based on the Yale B face database with different n and α. (a) 3D face recognition rate, (b) average face recognition rate with different α, and (c) face recognition rate with different n as α is 2.2.

Figure 2 :
Figure 2: Comparison of the investigated methods on Yale B and Extended Yale B. (a) Original images, (b) some results of RET [17], (c) some results of GRF [18], (d) some results of WEB [19], and (e) some results of the proposal.

Figure 3 :
Figure 3: The experimental results on Yale B. (a) Some results of PCA and (b) some results of LDA.

Figure 4 :
Figure 4: The experimental results on Yale B + Extended Yale B. (a) Some results of PCA and (b) some results of LDA.

Figure 5 :
Figure 5: Comparisons among the investigated methods on CMU-PIE. (a) Original images, (b) some results of RET [17], (c) some results of GRF [18], (d) some results of WEB [19], and (e) some results of the proposal.

Figure 6 :
Figure 6: The experimental results on CMU-PIE. (a) Some results of PCA and (b) some results of LDA.

Figure 9 :
Figure 9: The experimental results on CAS-PEAL-R1. (a) Some results of PCA and (b) some results of LDA.

Table 1 :
The statistical results of recognition rates on Yale B.
The Yale B face database includes 10 subjects under 64 illumination conditions; the Yale B + Extended Yale B face database includes 38 subjects under 64 illumination conditions. The size of the images is 168 × 192. The face images are numbered from 1 to 64 according to the illumination conditions. One illumination condition of each subject is selected randomly as the training sample; the others are taken as the testing set. Some experimental results after illumination normalization are shown in Figure 2.

Table 2 :
The statistical results of recognition rates on Yale B+ Extended Yale B.

Table 3 :
The statistical results of recognition rates on CMU-PIE.

Table 4 :
The statistical results of recognition rates on CAS-PEAL-R1.

Table 5 :
The average running times of the investigated approaches.

Table 6 :
The size of required storage space per subject in different methods.