Robust Face Recognition via Block Sparse Bayesian Learning

Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function, and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model based algorithms. Experimental results on the Extended Yale B, the AR and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.


Introduction
Owing to the rapid development of network and computer technologies, face recognition (FR) plays an important role in many applications, such as video surveillance, human-machine interfaces, digital entertainment and so on. Many FR methods have been developed over the past two decades [1,2,3,4,5]. Basically, FR is a typical classification problem.
In a typical FR system, besides face detection and face alignment, there are two main stages. One is feature extraction, which obtains a set of relevant information from a face image for further classification. Because of the large size of face images, it is desirable to extract features of lower dimension that still facilitate recognition. Many feature extraction methods have been proposed, such as PCA [3,6], LPP [5] and LDA.

Related work

Face recognition via sparse representation

We first describe the basic SRC method [11] for face recognition. Given training faces of all K subjects, a dictionary matrix is formed as

Φ = [Φ_1, Φ_2, · · · , Φ_K] ∈ R^{m×n}    (1)

where Φ_i = [v_{i,1}, v_{i,2}, . . . , v_{i,n_i}] ∈ R^{m×n_i}, and v_{i,j} is the j-th (vectorized) face of the i-th subject. Then, a vectorized test face y ∈ R^{m×1} is represented under the dictionary matrix as

y = Φx = v_{1,1} x_{1,1} + · · · + v_{1,n_1} x_{1,n_1} + · · ·
    + v_{i,1} x_{i,1} + v_{i,2} x_{i,2} + · · · + v_{i,n_i} x_{i,n_i} + · · ·
    + v_{K,1} x_{K,1} + · · · + v_{K,n_K} x_{K,n_K}    (2)

where x = [x_{1,1}, · · · , x_{i,1}, · · · , x_{i,n_i}, · · · , x_{K,n_K}]^T is the representation coefficient vector. In the basic SRC method, if the test face y belongs to a subject in the training set, say the i-th subject, then under a sparsity constraint on x only the coefficients x_{i,1}, x_{i,2}, · · · , x_{i,n_i} are significantly nonzero, while the other coefficients, i.e. x_{j,k} (j ≠ i, ∀k), are zero or close to zero.
Mathematically, the above idea can be described as the following sparse representation problem:

x_0 = arg min_x ‖x‖_0   subject to   y = Φx    (3)

where ‖x‖_0 counts the number of nonzero elements in the vector x. Once the solution x_0 has been obtained, the class label of y is found by

identity(y) = arg min_j ‖y − Φ δ_j(x_0)‖_2    (4)

where δ_j(x) : R^n → R^n is the characteristic function which keeps the elements of x associated with the j-th class and sets all other elements of x to zero. However, finding the solution to (3) is NP-hard [22]. Recent theories in compressed sensing [23,24] show that if the true solution is sparse enough, under some mild conditions it can be found by solving the following convex-relaxation problem:

x_1 = arg min_x ‖x‖_1   subject to   y = Φx    (5)

Further, to deal with small dense model noise, problem (5) can be changed to

x_1 = arg min_x ‖x‖_1   subject to   ‖y − Φx‖_2 ≤ ǫ    (6)

where ǫ is a noise-tolerance constant. Many ℓ1-minimization algorithms can be used to find the solution to (5) or (6), such as LASSO [25] and Basis Pursuit Denoising [26].
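The SRC pipeline of problem (6) and rule (4) can be sketched in a few lines of numpy. The sketch below is illustrative, not the authors' code: it solves the Lagrangian (penalized) form of (6) with a simple ISTA loop, and `ista_l1` / `src_classify` are hypothetical helper names.

```python
import numpy as np

def ista_l1(Phi, y, lam=0.01, n_iter=500):
    """Solve min_x 0.5*||y - Phi x||^2 + lam*||x||_1 via ISTA (proximal gradient)."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L    # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(Phi, y, labels):
    """Rule (4): assign y to the class with the smallest residual ||y - Phi d_j(x)||."""
    x = ista_l1(Phi, y)
    classes = np.unique(labels)
    residuals = []
    for j in classes:
        xj = np.where(labels == j, x, 0.0)   # delta_j(x): keep class-j coefficients
        residuals.append(np.linalg.norm(y - Phi @ xj))
    return classes[int(np.argmin(residuals))]
```

Here `labels` holds one class label per dictionary column, so `np.where(labels == j, x, 0.0)` plays the role of the characteristic function δ_j.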
In a practical face recognition problem, the coefficient vector x_1 (or x_0) is not only sparse but also block sparse. To see this, we can rewrite the sparse representation problem (2) as

y = Φx = [Φ_1, Φ_2, · · · , Φ_K] [x_1^T, x_2^T, · · · , x_K^T]^T = Σ_{j=1}^{K} Φ_j x_j    (7)

where x_j ∈ R^{n_j×1} is the coefficient vector associated with the j-th class, and x = [x_1^T, · · · , x_K^T]^T. When a test face y belongs to the j-th class, ideally only the elements in x_j are significantly nonzero; in other words, only the block x_j has a significantly nonzero norm. Clearly, this is a canonical block sparse model [19,27]. Many algorithms for the block sparse model can be used here. For example, in [20] it is suggested to use the following algorithm:

x = arg min_x Σ_{j=1}^{K} ‖x_j‖_2   subject to   ‖y − Φx‖_2 ≤ ǫ    (8)

This is a natural extension of the basic ℓ1-minimization algorithms: it imposes the ℓ2 norm on the elements within each block and the ℓ1 norm over the blocks. It has been shown that exploiting the block structure can largely improve the estimation quality of x_0 [27,28,29]. However, one should note that when the test face belongs to the j-th class, not only is the representation coefficient block x_j a nonzero block, but its elements are also correlated in amplitude. The correlation arises because the faces of the j-th class in the training set are all correlated with the test face, and thus the elements in x_j are mutually dependent. It has been shown that exploiting the correlation within blocks can further improve the estimation quality of x_0 beyond what exploiting the block structure alone achieves [21,30].
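The ℓ2/ℓ1 program (8) can be sketched with a proximal gradient loop whose proximal step is a block-wise soft threshold. This is a minimal illustration of the penalized form of (8), assuming known block boundaries; `group_ista` is a hypothetical name, not the algorithm of [20].

```python
import numpy as np

def group_ista(Phi, y, blocks, lam=0.05, n_iter=500):
    """Proximal gradient for min_x 0.5*||y - Phi x||^2 + lam * sum_j ||x_j||_2.
    `blocks` is a list of index arrays, one per class block."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = x - Phi.T @ (Phi @ x - y) / L        # gradient step
        for idx in blocks:                       # block-wise soft threshold:
            nrm = np.linalg.norm(z[idx])         # shrink each block's l2 norm,
            scale = 0.0 if nrm == 0 else max(0.0, 1.0 - lam / (L * nrm))
            z[idx] = scale * z[idx]              # zeroing out weak blocks entirely
        x = z
    return x
```

Unlike element-wise soft thresholding, the block-wise version either shrinks a whole block or zeroes it out, which is exactly the sparsity-over-blocks behavior that (8) encourages.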
Therefore, in this study we propose to use block sparse Bayesian learning (BSBL) [21] to estimate x 0 by exploiting the block structure and the correlation within blocks. In the next section we first briefly introduce sparse Bayesian learning (SBL), and then introduce BSBL.

SBL and BSBL
SBL [31] was initially proposed as a machine learning method, but it was later shown to be a powerful method for sparse representation, sparse signal recovery and compressed sensing.
Advantages of SBL

Compared to LASSO-type algorithms (such as the original LASSO algorithm, Basis Pursuit Denoising, Group Lasso, Group Basis Pursuit, and other algorithms based on ℓ1-minimization), SBL has the following advantages [32,33].
1. Its recovery performance is robust to the characteristics of the matrix Φ, while that of most other algorithms is not. For example, it has been shown that when the columns of Φ are highly coherent, SBL still maintains good performance, while algorithms such as LASSO degrade seriously [34]. This advantage is very attractive for sparse representation and related applications, since in these applications the matrix Φ is not a random matrix and its columns are highly coherent.
2. SBL has a number of desirable advantages over many popular algorithms in terms of local and global convergence. It can be shown that SBL provides a sparser solution than LASSO-type algorithms. In particular, in noiseless situations and under certain conditions, the global minimum of the SBL cost function is unique and corresponds to the true sparsest solution, while the global minimum of the cost function of LASSO-type algorithms is not necessarily the true sparsest solution [35]. These advantages imply that SBL is a better choice for feature selection via sparse representation [36].
3. Recent works on SBL [21,37] provide robust learning rules for automatically estimating the value of its regularizer (related to the noise variance) such that SBL algorithms achieve good performance. In contrast, LASSO-type algorithms generally require users to choose the value of such a regularizer, which is often done by cross-validation. Cross-validation, however, takes a long time on large-scale datasets, which is inconvenient and even infeasible in some scenarios.
Introduction to BSBL

BSBL [21] is an extension of the basic SBL framework which exploits a block structure and intra-block correlation in the coefficient vector x. It is based on the assumption that x can be partitioned into K non-overlapping blocks:

x = [x_1^T, x_2^T, · · · , x_K^T]^T

Among these blocks, only a few are nonzero. Each block x_i ∈ R^{d_i×1} is assumed to satisfy a parameterized multivariate Gaussian distribution

p(x_i; γ_i, B_i) ∼ N(0, γ_i B_i)

with the unknown parameters γ_i and B_i. Here γ_i is a nonnegative parameter controlling the block-sparsity of x. When γ_i = 0, the i-th block becomes zero. During the learning procedure most γ_i tend to zero, due to the mechanism of automatic relevance determination [31]; thus sparsity at the block level is encouraged. B_i ∈ R^{d_i×d_i} is a positive definite, symmetric matrix capturing the intra-block correlation of the i-th block. Under the assumption that the blocks are mutually uncorrelated, the prior of x is

p(x; {γ_i, B_i}) ∼ N(0, Σ_0),   Σ_0 = diag(γ_1 B_1, · · · , γ_K B_K)

To avoid overfitting, constraints are imposed on all the B_i and their estimates are further regularized. The model noise n ≜ y − Φx is assumed to satisfy p(n; λ) ∼ N(0, λI), where λ is a positive scalar to be estimated. Based on the above probability models, one can obtain a closed-form posterior. Therefore, the estimate of x can be obtained by Maximum-A-Posteriori (MAP) estimation, provided all the parameters λ, {γ_i, B_i} are known. To estimate these parameters, one can use the Type II maximum likelihood method [38,31]. This is equivalent to minimizing the following cost function:

L(Θ) = log |λI + Φ Σ_0 Φ^T| + y^T (λI + Φ Σ_0 Φ^T)^{−1} y

where Θ denotes all the parameters, i.e., Θ = {λ, {γ_i, B_i}_i}. There are several optimization methods to minimize this cost function, such as the expectation-maximization method, the bound-optimization method, the duality method and so on. This framework is called the BSBL framework.
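Given fixed hyperparameters λ and {γ_i, B_i}, the MAP estimate of x is the posterior mean of the Gaussian model above. The following numpy sketch computes that posterior mean only; it does not perform the Type II learning of the hyperparameters, and `bsbl_posterior_mean` is an illustrative name rather than part of any BSBL implementation.

```python
import numpy as np
from scipy.linalg import block_diag

def bsbl_posterior_mean(Phi, y, gammas, Bs, lam):
    """Posterior mean E[x | y] = Sigma0 Phi^T (lam I + Phi Sigma0 Phi^T)^{-1} y,
    where Sigma0 = diag(gamma_1 B_1, ..., gamma_K B_K) is the block-diagonal prior
    covariance and lam is the noise variance."""
    Sigma0 = block_diag(*[g * B for g, B in zip(gammas, Bs)])
    m = Phi.shape[0]
    S = lam * np.eye(m) + Phi @ Sigma0 @ Phi.T   # marginal covariance of y
    return Sigma0 @ Phi.T @ np.linalg.solve(S, y)
```

Note how the block-sparsity mechanism shows up directly: any block whose γ_i is zero contributes zero rows to Σ_0, so its posterior-mean coefficients are exactly zero.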
BSBL not only has the advantages of the basic SBL framework listed above, but also has two further advantages:
1. BSBL provides great flexibility to model and exploit correlation structure in signals, such as intra-block correlation [21,30]. By exploiting such correlation structure, recovery performance is significantly improved.
2. BSBL has the unique ability to find less-sparse [39] and even non-sparse [30] true solutions with very small errors. This is attractive for practical use, since in practice the true solutions may not be very sparse, and existing sparse signal recovery algorithms generally fail in this case.
Therefore, BSBL is promising for pattern recognition. In the following we use BSBL for face recognition. Among the available BSBL algorithms, we choose the bound-optimization based one [21], denoted by BSBL-BO.

Face recognition via BSBL
As stated earlier, we use BSBL-BO to estimate x_0, denoted by x_BSBL, and then use rule (4) to assign a test face y to a class.
In practice, a test face y may contain outliers, i.e., y = y_0 + ǫ, where y_0 is the outlier-free face image and ǫ is a vector each entry of which is an outlier. Generally, the number of outliers is small, and thus ǫ is sparse. Addressing the outlier issue is important for a practical face recognition system. In [11], an augmented sparse model was used to deal with this issue. We now extend this method to our block sparse model, and use BSBL-BO to estimate the solution. In particular, we adopt the following augmented block sparse model:

y = [Φ, I] [x^T, ǫ^T]^T + n = Φ̃ x̃ + n    (12)

where n is a vector modeling dense Gaussian noise, Φ̃ ≜ [Φ, I] and x̃ ≜ [x^T, ǫ^T]^T. Here I is an identity matrix of dimension m × m. Clearly, x̃ is also a block sparse vector: its first K blocks are the blocks of x, and its last m elements form m blocks of size 1. Thus, (12) is still a block sparse model, and can be solved by BSBL-BO. Once BSBL-BO obtains the solution, denoted by x̃_BSBL, its first K blocks (denoted by x_BSBL) and its last m elements (denoted by ǫ̂) are used to assign y to a class according to

identity(y) = arg min_j ‖y − Φ δ_j(x_BSBL) − ǫ̂‖_2    (13)

We now take the Extended Yale B database [40] as an example to show how our method works. As in SRC [11], we randomly select half of the total 2414 faces (i.e., 1207 faces) as the training set and the rest as the testing set. Each face is downsampled from 192 × 168 to 24 × 21 = 504 pixels. The training set contains 38 subjects, each with about 32 faces. Therefore, in our model K = 38, and n_1 ≈ · · · ≈ n_K ≈ 32. The matrix Φ has size 504 × 1207, and thus the matrix Φ̃ has size 504 × 1711. The procedure is illustrated in Fig. 1. Fig. 1(a) shows that a test face (belonging to Subject 4) can be represented as a linear combination of a few training faces. Most of the coefficients estimated by BSBL-BO (i.e., x_BSBL) are zero or near zero, and only those associated with the test subject are significantly nonzero. Fig. 1(b) shows the residuals ‖y − Φ δ_j(x_BSBL)‖_2 for j = 1, · · · , 38.
The residual at j = 4 is 0.0008, while the residuals at j ≠ 4 are all close to 1, which makes it easy to assign the test face to Subject 4. See Section 5.1.1 for more details.

Experimental results
To demonstrate the superior performance of BSBL, we performed experiments on three widely used face databases: Extended Yale B [40], AR [41] and CMU PIE [42] face databases. The face images of these three databases were captured under varying lighting, pose or facial expression. The AR database also has occluded face images for the test of robustness of face recognition algorithms. Section 5.1 shows experimental results on face images without occlusion, and Section 5.2 shows experimental results on face images with three kinds of occlusion.

Face recognition without occlusion
For the experiments on face images without occlusion, we used downsampling, Eigenfaces [3,6], and Laplacianfaces [5] to reduce the dimensionality of the original faces. We compared our method with three classical methods: Nearest Neighbor (NN) [8], Nearest Subspace (NS) [9], and Support Vector Machine (SVM) [10]. We also compared our method with recently proposed sparse-representation based classification methods, including the basic sparse-representation classifier (SRC) [11] and the block-sparse recovery algorithm via convex optimization (BSCO) [20]. For NS, the subspace dimension was fixed to 9. For BSCO, we used the P′ℓ2/ℓ1 algorithm [20], which has been shown to be the best among all the structured sparsity-based classifiers proposed in that work.

Fig. 2. Sample face images of 2 subjects from the Extended Yale B database. 1st row: ten sample face images of the first subject. 2nd row: ten sample face images of the third subject.

Extended Yale B database
The Extended Yale B database [40] consists of 2414 frontal-face images of 38 subjects (each subject has about 64 images). In the experiment, we used the cropped 192 × 168 face images, which were captured under various lighting conditions [43]. Two subjects are shown in Fig. 2 for illustration (for each subject, only 10 face images are shown). We randomly selected half of the face images of each subject as the training set and the rest as the testing set. We used downsampling, Eigenfaces, and Laplacianfaces to extract features from the face images. The dimensions of the extracted features were 30, 56, 120 and 504 respectively.
Experimental results are shown in Fig. 3, where we can see that our method uniformly outperformed the other algorithms regardless of the features used. In particular, our method performed best when using Laplacianfaces, and its superiority was clearest when the feature dimension was small. For example, when the feature dimension was 56 and Laplacianfaces were used, our method achieved the highest rate of 98.9%, while NN, NS, SVM, SRC and BSCO achieved rates of 83.5%, 90.4%, 85.0%, 91.7% and 79.4%, respectively. High performance on low-dimensional features is attractive for recognition, since a lower feature dimension generally implies a correspondingly lower computational load.

AR database
The AR database [41] consists of more than 4000 frontal-face images of 126 subjects. Each subject has 26 images taken in two separate sessions, as shown in Fig. 4. This database includes more facial expressions and facial disguises. We chose 100 subjects (50 male and 50 female) for this experiment. For each subject, seven face images with different illumination and facial expressions (i.e., the first 7 images of each subject) in Session 1 were selected for training, and the first 7 images of each subject in Session 2 for testing. All the images were converted to grayscale and resized to 165 × 120. Downsampled faces, Eigenfaces and Laplacianfaces were applied with dimensions 30, 54, 130 and 540. Experimental results are shown in Fig. 5.
From Fig. 5(a), we can see that our algorithm significantly outperformed other classifiers when using downsampled features. However, our method did not achieve the highest rate when using Eigenfaces and Laplacianfaces. This might be due to the small block size in this experiment (n 1 = n 2 = · · · = n 100 = 7). Although our method did not uniformly outperform other algorithms when using different face features, the recognition rate achieved by our method using downsampled faces (96.7%) was not exceeded by other algorithms using any face features.

CMU PIE database
For this experiment we used a subset of the CMU PIE database [42] containing face images of 68 subjects (24 images for each subject). The first subject in this subset is shown in Fig. 6; the images vary in pose, illumination and expression. All the images were cropped and resized to 64 × 64. For each subject, we randomly selected 10 images for training, and the rest (14 images per subject) for testing. Downsampled faces, Eigenfaces and Laplacianfaces were applied with four dimensions, i.e., 36, 64, 144 and 256. Experimental results are shown in Fig. 7. From Fig. 7(a), we can see that sparse-representation-based classifiers usually outperformed classical ones on this dataset. Among the sparse-representation-based classifiers, BSBL and BSCO achieved higher recognition rates than SRC. Between the two, BSBL slightly outperformed BSCO with downsampled faces and Laplacianfaces, while BSCO outperformed BSBL with Eigenfaces. Specifically, BSBL achieved its highest recognition rates of 95.80% with downsampled faces and 94.12% with Laplacianfaces, while BSCO achieved 98.42% with Eigenfaces, the highest rate in this experiment. Nevertheless, BSBL outperformed BSCO in 8 out of 12 combinations of dimensions and features.

Face recognition with occlusion
For the experiments on face images with occlusion, we used downsampling to reduce the size of face images and compared our method with NN [8], SRC [11] and BSCO [20].

Face recognition with pixel corruption
We tested face recognition with pixel corruption on 3 subsets of the Extended Yale B database: 719 face images with normal-to-moderate lighting conditions from Subsets 1 and 2 for training, and 455 face images with more extreme lighting conditions from Subset 3 for testing. For each test image, we first replaced a certain percentage (0%-50%) of its original pixels by uniformly distributed gray values in [0, 255]. Both the gray values and the locations were random and hence unknown to the algorithms. We then downsampled all the images to the sizes of 6 × 5, 8 × 7, 12 × 10 and 24 × 21 respectively. Two corrupted face images are shown in Fig. 8(a)-(b).
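The corruption protocol above is simple to reproduce; a minimal numpy sketch (the helper name `corrupt_pixels` is ours, not from the paper):

```python
import numpy as np

def corrupt_pixels(img, fraction, rng):
    """Replace a random `fraction` of pixels with uniformly distributed gray
    values in [0, 255]; both values and locations are random."""
    out = img.copy()
    k = int(round(fraction * out.size))
    idx = rng.choice(out.size, size=k, replace=False)  # random, unknown locations
    out.flat[idx] = rng.integers(0, 256, size=k)       # uniform gray values
    return out
```

Because the replacement values are drawn uniformly, a replaced pixel can occasionally receive its original value, so the number of visibly changed pixels is at most (not exactly) the requested fraction.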
Results are shown in Table 1. It can be seen that in almost all combinations of dimension and corruption level, BSBL achieved the highest recognition rate when compared with NN and SRC, and the performance gap between our algorithm and the compared algorithms was very large. For example, when the dimension was 12 × 10 and 50% of the pixels were corrupted, BSBL achieved a recognition rate of 67.25%, while SRC only achieved 46.37%. Meanwhile, BSBL outperformed BSCO in 17 out of 24 dimension-and-corruption situations. Fig. 9(a) shows the recognition rates of the four algorithms at different pixel corruption levels.

Face recognition with block occlusion
In this experiment, we used the same training and testing images as in the previous pixel corruption experiment. For each test image, we replaced a randomly located square block with an unrelated image (the baboon image used in SRC [11]), which occluded 0%-50% of the original testing image. We then downsampled all the images to the sizes of 6 × 5, 8 × 7, 12 × 10 and 24 × 21 respectively. Two occluded face images are shown in Fig. 8(c)-(d). Table 2 shows the recognition rates of NN, SRC, BSCO and BSBL for different dimensions and percentages of occlusion. Again, BSBL outperformed the compared algorithms in most cases. For example, when the occlusion percentage ranged from 10% to 50% and the face dimension was 12 × 10, BSBL achieved an 8.35%-13.19% higher recognition rate than BSCO, as shown in Fig. 9(b).

Face recognition with real face disguise
We used a subset of the AR database to test the performance of our method on face recognition with disguise. We chose 799 images of various facial expressions without occlusion (i.e., the first 4 face images in each session, except a corrupted image named 'W-027-14.bmp') for training. We formed two separate testing sets of 200 images each. The images in the first set were of the neutral expression with sunglasses (the 8th image in each session), which cover roughly 20% of the face, while those in the second set were of the neutral expression with scarves (the 11th image in each session), which cover roughly 40% of the face. All the images were resized to 9 × 6, 13 × 10, 27 × 20 and 42 × 30. Results are shown in Table 3. In the case of the neutral expression with sunglasses, both SRC and NN achieved higher recognition rates than BSCO and BSBL. However, in the case of the neutral expression with scarves, BSBL outperformed NN, SRC and BSCO significantly. In total, BSBL achieved the highest recognition rates of 72.50% and 74.5% with the dimensions of 27 × 20 and 42 × 30 respectively for the two testing sets, while SRC achieved the highest recognition rates of 28.25% and 44.00% with the other two dimensions.

Conclusions
Classification via sparse representation is a popular methodology in face recognition and other classification tasks. Recently it was found that using block-sparse representation, instead of the basic sparse representation, can yield better classification performance. In this paper, by introducing a recently proposed block sparse Bayesian learning (BSBL) algorithm, we showed that BSBL is a better framework than the basic block-sparse representation framework, owing to its various advantages over the latter. Experiments on common face databases confirmed that BSBL is a promising sparse-representation-based classifier.