The two-dimensional principal component analysis algorithm (2DPCA) is performed in batch mode and cannot meet the real-time requirements of video streams. To overcome these limitations, the incremental learning scheme of the candid covariance-free incremental PCA (CCIPCA) is introduced into the existing 2DPCA, and an incremental 2DPCA (I2DPCA) is first presented to incrementally compute the principal components of a sequence of samples directly on the 2D image matrices without estimating the covariance matrices. The I2DPCA can therefore improve the feature extraction speed and reduce the required memory. However, the variations in the column direction, generally neglected, are also useful for high-accuracy object recognition. Thus, another incremental sequential row-column 2DPCA algorithm (IRC2DPCA), based on the proposed I2DPCA, is also proposed. The IRC2DPCA compresses the image matrices in both the row and column directions, so the feature matrices it extracts have fewer dimensions than those of the I2DPCA. Substantial experimental results show that, compared with the other three algorithms, the IRC2DPCA can improve the convergence rates and the recognition rates, compress the dimensions of the feature matrices, and reduce the feature extraction time and the classification time.
1. Introduction
Object recognition has been an active research field in computer vision, classifying objects from images or sequentially arriving video frames; it is widely used in pattern recognition for human faces [1, 2] and authentication [3–5]. Since the dimension of the original image space captured by a camera is high, dimension-reduction and feature extraction algorithms are usually employed before classification and recognition to improve the recognition speed. Such techniques have been widely applied in fault diagnosis schemes for planetary gearboxes [6], early fault diagnosis of rolling bearings [7], and thermal nondestructive testing imagery [8].
Principal component analysis (PCA) is a classical feature extraction and dimension-reduction algorithm. Usually, the eigenvectors corresponding to the largest eigenvalues are selected as the projection axes, and the high-dimensional data are projected onto lower-dimensional subspaces to reduce the dimensions. The PCA, combined with infrared thermography, has been used in various applications [9–11]. For the PCA-based algorithms, the 2D image matrices must be transformed into 1D column vectors to construct the covariance matrices. Concatenating the entries of the 2D matrices into 1D vectors often leads to high-dimensional sample spaces, in which it is difficult to evaluate the eigenvectors of the covariance matrices accurately. Furthermore, directly computing the eigenvectors of the covariance matrix is very time-consuming [12]. Yang et al. [12] proposed the two-dimensional principal component analysis algorithm (2DPCA), which extracts the eigenvectors directly from the image matrices without the image-to-vector transformation. It preserves the spatial structure of the images, reduces the dimensions of the covariance matrix, and solves the small-sample-size problem of the PCA [13, 14]. Subsequently, several extended 2DPCA algorithms were proposed [15–17]. All these extensions ignore the longitudinal features of the images, so the image information contained in the feature matrices is not comprehensive enough. To overcome this weakness of the 2DPCA, Yang et al. [18] proposed the sequential row-column 2DPCA algorithm (RC2DPCA). It completely preserves the feature information of the images through twofold feature extraction in the row and column directions and obtains feature matrices with fewer dimensions.
The aforementioned PCA, 2DPCA, and RC2DPCA are performed in batch mode, which means that all training sample data have to be available before the principal components can be estimated. If additional training data are to be incorporated into an existing projection matrix, the matrix has to be retrained with the whole training set. For these batch algorithms, the extraction time and the required memory increase rapidly with the amount of training data; eventually, the feature extraction speed cannot keep up with the arrival rate of new training data [19]. The incremental PCA (IPCA) does not need all training data before feature extraction; the feature information is updated gradually with the sequential input of the samples. Compared with the batch algorithms, the IPCA reduces the required memory because it does not need to store the historical training data. In addition, the IPCA makes full use of the historical training results for subsequent training, so the appended training time can be greatly reduced. The first category of IPCA algorithms [20–23] achieves incremental learning with the eigenspace method: the coefficients of the images are updated using the updated eigenspaces without having to retain the original images. Since the new samples are added one by one and the least significant principal components are discarded to preserve the dimensions of the subspaces, this method suffers from an unpredictable approximation error. The second category estimates the principal components without directly computing the covariance matrix [24, 25]. The candid covariance-free IPCA (CCIPCA) algorithm developed by Weng et al. [24] can quickly estimate the principal components of high-dimensional image data with good convergence performance.
The candid covariance-free incremental principal component thermography (CCIPCT) [8, 26] applied the CCIPCA to thermography and provides lower computational complexity than the principal component thermography (PCT). Li et al. [27] proposed an extension of the CCIPCA to incrementally update the feature extractors for camera identification. However, the problem of high-dimensional image vectors remains, since the eigenvectors are statistically determined by the covariance matrix, no matter which method is adopted for obtaining that matrix.
In this paper, an incremental 2DPCA algorithm (I2DPCA) is developed to overcome the limitations of the IPCA and the batch 2DPCA. The proposed I2DPCA incrementally computes the principal components directly on the 2D image matrices without estimating the covariance matrix, which reduces the extraction time and the required memory. However, the I2DPCA compresses the image matrices only in the row direction and therefore requires more coefficients for the image representation, leading to a slow classification speed and a large memory requirement for large-scale databases. Therefore, another incremental RC2DPCA algorithm (IRC2DPCA), based on the I2DPCA, is proposed to reduce the dimension of the feature matrices by twofold feature extraction in the row and column directions; both the extraction time and the classification time are reduced. Finally, using the block database and the ORL face database as the experimental samples, the convergence rates of the principal components, the classification rates, the computation time, and the required memory are compared and analyzed.
2. The Proposed I2DPCA Algorithm
2.1. The 2DPCA Algorithm
The 2DPCA, proposed by Yang et al. [12], can directly compute the covariance matrix for the image matrices without the image-to-vector transformation. Consider a training set of $n$ image matrices $X'(1),X'(2),\dots,X'(n)$, each of size $p\times q$. The covariance matrix in the row direction can be constructed directly as

$$S_r=\frac{1}{n}\sum_{i=1}^{n}\left(X'(i)-\bar{X}(n)\right)^{T}\left(X'(i)-\bar{X}(n)\right),\tag{1}$$

where $\bar{X}(n)=\frac{1}{n}\sum_{i=1}^{n}X'(i)$ is the mean of all the training image matrices. Note that $S_r$ is a $q\times q$ matrix. Since $S_r$ is positive semidefinite, there exists an orthogonal matrix $W'_r$ satisfying $S_r=W'_r\Sigma W'^{T}_r$. The eigenvectors $w'_{r1},w'_{r2},\dots,w'_{rk_r}$ corresponding to the first $k_r$ largest eigenvalues of $S_r$ are selected to construct the projection matrix

$$W'_r=\left[w'_{r1},w'_{r2},\dots,w'_{rk_r}\right].\tag{2}$$
Projecting $X'(i)$ onto $W'_r$, we obtain a $p\times k_r$ feature matrix

$$Y'(i)=X'(i)W'_r.\tag{3}$$
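As a concrete reference, the batch 2DPCA of (1)–(3) can be sketched in a few lines of NumPy; the function name and the random test data below are illustrative, not from the paper:

```python
import numpy as np

def batch_2dpca(images, k_r):
    """Batch 2DPCA sketch: build the row-direction covariance S_r of
    Eq. (1) from a stack of 2D images, keep the eigenvectors of the
    k_r largest eigenvalues (Eq. (2)), and project every image (Eq. (3))."""
    X = np.asarray(images, dtype=float)         # shape (n, p, q)
    D = X - X.mean(axis=0)                      # centred samples
    # Eq. (1): S_r = (1/n) * sum_i D_i^T D_i, a q x q matrix
    S_r = np.einsum('npa,npb->ab', D, D) / len(X)
    eigvals, eigvecs = np.linalg.eigh(S_r)      # ascending eigenvalues
    W_r = eigvecs[:, ::-1][:, :k_r]             # first k_r largest
    Y = X @ W_r                                 # p x k_r feature matrices
    return W_r, Y
```

`np.linalg.eigh` is appropriate here because $S_r$ is symmetric positive semidefinite, so its eigenvectors come back orthonormal.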
The 2DPCA can compute the covariance matrix directly, without transforming the image matrices into vectors. However, the 2DPCA is a batch algorithm: all the image sample matrices must be acquired and saved before training, and the existing covariance matrix has to be retrained with all the training data whenever a new image is appended. As the training data grow, the extraction speed cannot keep up with the arrival rate of new training data. Hence, the 2DPCA cannot be applied to the training of continuous high-dimensional sample data or real-time video stream data.
2.2. The Proposed I2DPCA Algorithm
To overcome the limitations of batch learning and meet the real-time requirement, we propose an I2DPCA algorithm based on the CCIPCA [24]. Suppose the 2D training image matrices are acquired sequentially, $X'(1),X'(2),\dots$, possibly infinitely, and each image matrix $X'(i)$, $i=1,2,\dots$, is of size $p\times q$. Assume the input samples satisfy a zero-mean Gaussian distribution [24]; the samples must therefore be centralized in advance, $X(i)=X'(i)-\bar{X}(n)$. The covariance matrix $S_r$ in the row direction of the $n$ samples can be constructed as

$$S_r=\frac{1}{n}\sum_{i=1}^{n}X^{T}(i)X(i),\tag{4}$$

where $S_r$ is $q\times q$. Different from the 2DPCA, $S_r$ serves only as an intermediate step and is never computed explicitly. An eigenvector $\alpha_r$ of the matrix $S_r$ satisfies

$$v_r=\lambda_r\alpha_r=S_r\alpha_r,\tag{5}$$

where $\lambda_r$ is the corresponding eigenvalue. Replacing $S_r$ by (4) and replacing $\alpha_r$ in (5) with its estimate $\alpha_r(i)$ at each step $i$, we can rewrite (5) as

$$v_r(n)=\frac{1}{n}\sum_{i=1}^{n}X^{T}(i)X(i)\,\alpha_r(i),\tag{6}$$

where $v_r(n)$ is the $n$th-step estimate of $v_r$. Considering $\lambda_r=\lVert v_r\rVert$ and $\alpha_r=v_r/\lVert v_r\rVert$, we can approximate $\alpha_r(i)$ by $v_r(i-1)/\lVert v_r(i-1)\rVert$, which gives the incremental form

$$v_r(n)=\frac{1}{n}\sum_{i=1}^{n}X^{T}(i)X(i)\frac{v_r(i-1)}{\lVert v_r(i-1)\rVert}.\tag{7}$$
For the incremental estimation, (7) can be written in the recursive form

$$v_r(n)=\frac{n-1-l}{n}v_r(n-1)+\frac{1+l}{n}X^{T}(n)X(n)\frac{v_r(n-1)}{\lVert v_r(n-1)\rVert},\tag{8}$$

where $l$ is a positive parameter called the amnesic parameter. When a new image matrix is appended, the mean matrix needs to be updated; its incremental recursive form is

$$\bar{X}(n)=\frac{n-1}{n}\bar{X}(n-1)+\frac{1}{n}X'(n).\tag{9}$$

Equation (8) estimates only the first dominant eigenvector. To compute the $(i+1)$th eigenvector, the 2D image matrix is deflated by subtracting its projection on the $i$th estimated eigenvector $v_{ri}(n)$:

$$X_{i+1}(n)=X_i(n)-X_i(n)\frac{v_{ri}(n)}{\lVert v_{ri}(n)\rVert}\frac{v_{ri}^{T}(n)}{\lVert v_{ri}(n)\rVert},\tag{10}$$

where $X_1(n)=X(n)$. The residual $X_{i+1}(n)$ is used as the input of the next iteration step. In this way, the first $k_r$ dominant eigenvectors are estimated, and the projection matrix is obtained as

$$W_r=\left[w_{r1},w_{r2},\dots,w_{rk_r}\right].\tag{11}$$
Projecting $X'(i)$ onto $W_r$, we obtain the $p\times k_r$ feature matrix

$$Z'(i)=X'(i)W_r.\tag{12}$$
The I2DPCA does not need to obtain all the sample data at once before training. Instead, a recursive update of the eigenvector can be performed when a new image sample is input. Therefore, the algorithm does not save all the image samples, and it can reduce the required memory. Without directly computing the covariance matrix, the I2DPCA can estimate the eigenvectors of the training image matrices in a recursive form. However, the I2DPCA only works in the row direction of the image matrices, ignoring the information in the column direction. Furthermore, the dimension of the feature matrix, extracted by the I2DPCA, is still high (p is large). More time and more memory are needed in the classification process.
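A minimal sketch of one recursive update step, combining (8) and (10), is given below. It assumes the eigenvector estimates in `V` have already been initialized to nonzero vectors (e.g. random vectors or rows of the first sample); the function name is illustrative:

```python
import numpy as np

def i2dpca_update(V, X, n, l=2.0):
    """One I2DPCA step: update the list V of k_r eigenvector estimates
    (each a length-q vector) with a new centred sample matrix X (p x q).
    n is the 1-based sample index, l the amnesic parameter."""
    Xi = X.copy()
    for j in range(len(V)):
        u = V[j] / np.linalg.norm(V[j])
        # Eq. (8): amnesic average of the old estimate and the new
        # sample's contribution X^T X u (never forming X^T X itself)
        V[j] = ((n - 1 - l) / n) * V[j] + ((1 + l) / n) * (Xi.T @ (Xi @ u))
        # Eq. (10): deflate the sample against the updated direction so
        # the next eigenvector is estimated in the complementary space
        u = V[j] / np.linalg.norm(V[j])
        Xi = Xi - np.outer(Xi @ u, u)
    return V
```

Calling this once per arriving centred sample keeps only `V` and the current sample in memory, which is where the memory saving of the I2DPCA comes from.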
3. The Proposed IRC2DPCA Algorithm
The I2DPCA realizes incremental learning and reduces the feature extraction time and the required memory. However, the classification time and the required memory are still large because of the high dimension of the feature matrices extracted by the I2DPCA. Moreover, the I2DPCA considers only the transverse (row-direction) features of the image sample matrices and neglects the longitudinal (column-direction) features, so the extracted feature information is not comprehensive enough. Therefore, based on the I2DPCA, the feature matrices already extracted in the row direction are further compressed in the column direction, so that the final feature matrices contain sufficient image information in both the row and column directions.
The $p\times k_r$ feature matrices $Z'(i)$, $i=1,2,\dots$, extracted by the I2DPCA are used as the training matrices of the IRC2DPCA. Let $Z(i)=Z'(i)-\bar{Z}(n)$ denote the centralized matrices. Then the $p\times p$ covariance matrix $S_c$, different from $S_r$ of the I2DPCA, can be expressed as

$$S_c=\frac{1}{n}\sum_{i=1}^{n}Z(i)Z^{T}(i).\tag{13}$$
The incremental form with the amnesic parameter can be expressed as

$$v_c(n)=\frac{n-1-l}{n}v_c(n-1)+\frac{1+l}{n}Z(n)Z^{T}(n)\frac{v_c(n-1)}{\lVert v_c(n-1)\rVert}.\tag{14}$$
Similarly, the mean matrix can be updated by the recursive form

$$\bar{Z}(n)=\frac{n-1}{n}\bar{Z}(n-1)+\frac{1}{n}Z'(n).\tag{15}$$
We can compute the higher-order eigenvectors associated with $S_c$ by generating observations only in the complementary space, as in (10). In this way, the first $k_c$ dominant eigenvectors are estimated, and the projection matrix in the column direction can be constructed as

$$W_c=\left[w_{c1},w_{c2},\dots,w_{ck_c}\right].\tag{16}$$
The feature matrix obtained by the twofold feature extraction in the sequential row and column directions is

$$R=W_c^{T}Z'(i)=W_c^{T}X'(i)W_r.\tag{17}$$
The resulting feature matrix R is a kc×kr matrix, much smaller than the I2DPCA feature matrix Z′(i). The IRC2DPCA algorithm can be described as follows.
Step 1 (feature extraction in the row direction).
Using formula (8) of the I2DPCA, the first dominant estimator is updated sequentially as the original image matrices $X'(i)$ arrive. The first $k_r$ higher-order estimators are computed by formula (10) as the order increases. The first $k_r$ eigenvectors for the first $n$ samples are obtained from $\alpha_r=v_r/\lVert v_r\rVert$ to construct the matrix $W_r$, and the feature matrices $Z'(i)$ are then obtained by formula (12).
Step 2 (feature extraction in the column direction).
Using the results $Z'(i)$ of the I2DPCA as the training matrices, the IRC2DPCA computes the first $k_c$ dominant estimators by formula (14). Similarly, the second projection matrix $W_c$ is obtained by formula (16).
Step 3 (the final feature matrix).
All the original image matrices $X'(i)$ are projected onto $W_r$ and $W_c$, and the final feature matrices $R$ are obtained by formula (17).
The incremental learning of the IRC2DPCA is suitable for learning from continuous high-dimensional sample data or real-time video stream data. At the same time, the algorithm allows the information of the image matrices in the row and column directions to intermingle without losing their separate identities. In addition, the dimension of the feature matrices is $k_c\times k_r$, much smaller than that of the I2DPCA, which effectively reduces the dimension of the feature matrices and the classification time.
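The three steps above can be sketched as follows. This is an illustrative implementation, with `incremental_eigs` a hypothetical helper, and for brevity it centres with the final mean instead of the running means of (9) and (15):

```python
import numpy as np

def incremental_eigs(stream, k, mode, l=2.0, seed=0):
    """CCIPCA-style estimation of the first k eigenvectors from a stream
    of centred matrices, covariance-free: mode='row' applies X^T X as in
    Eq. (8), mode='col' applies Z Z^T as in Eq. (14)."""
    rng = np.random.default_rng(seed)
    V = None
    for n, X in enumerate(stream, start=1):
        if V is None:
            d = X.shape[1] if mode == 'row' else X.shape[0]
            V = [rng.normal(size=d) for _ in range(k)]  # nonzero init
        Xi = X.copy()
        for j in range(k):
            u = V[j] / np.linalg.norm(V[j])
            y = Xi.T @ (Xi @ u) if mode == 'row' else Xi @ (Xi.T @ u)
            V[j] = ((n - 1 - l) / n) * V[j] + ((1 + l) / n) * y
            # deflation (Eq. (10)): remove the estimated direction
            u = V[j] / np.linalg.norm(V[j])
            if mode == 'row':
                Xi = Xi - np.outer(Xi @ u, u)
            else:
                Xi = Xi - np.outer(u, u @ Xi)
    return np.column_stack([v / np.linalg.norm(v) for v in V])

def irc2dpca(images, k_r, k_c, l=2.0):
    """Steps 1-3 of the IRC2DPCA on a collected set of images (sketch)."""
    X = np.asarray(images, dtype=float)
    Xc = X - X.mean(axis=0)
    W_r = incremental_eigs(Xc, k_r, 'row', l)    # Step 1: row direction
    Z = X @ W_r                                   # Eq. (12)
    Zc = Z - Z.mean(axis=0)
    W_c = incremental_eigs(Zc, k_c, 'col', l)     # Step 2: column direction
    R = np.einsum('pc,npk->nck', W_c, Z)          # Eq. (17): W_c^T X' W_r
    return W_r, W_c, R
```

The final feature matrices `R` have shape $k_c\times k_r$ per sample, matching the dimension reduction described above.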
4. Experiments and Analysis
Two databases are used to evaluate the performance of the proposed algorithms. The block database contains four kinds of blocks (a green cylinder, a green triangular prism, a red cube, and a yellow cuboid), each with 500 different samples. The size of each image is 120×120 pixels, and all images are converted to grayscale before the experiments. Figure 1 shows 10 images of each kind of block. The ORL face database contains images of 40 persons, each providing 10 different images; the size of each image is 112×92 pixels. Figure 2 shows 40 images from the ORL database. All programs run on a computer with an E5-2620 v4 CPU (2.10 GHz main frequency) and 32 GB of memory; the operating system is 64-bit Windows 10, and the programming environment is VS2010 + OpenCV 2.4.10.
Figure 1: 40 sample images in the block database.
Figure 2: 40 sample images in the ORL database.
4.1. Experiments on the Block Database
The 2DPCA [12] and the RC2DPCA [18] are batch algorithms, for which convergence is not an issue. With the amnesic parameter set to 2, convergence experiments are performed on the I2DPCA and the IRC2DPCA, respectively. The correlation between the estimated eigenvectors $w'_{ri}$, $w'_{ci}$ and the accurate ones $w_{ri}$, $w_{ci}$ computed by the batch methods is measured by their inner product $w'_{ri}\cdot w_{ri}$ or $w'_{ci}\cdot w_{ci}$: the larger the inner product, the more correlated the estimated and the accurate eigenvectors.
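The convergence measure just described amounts to an absolute inner product of unit vectors; a minimal sketch (function name illustrative):

```python
import numpy as np

def convergence_rate(w_est, w_batch):
    """Correlation between an estimated eigenvector and the 'accurate'
    batch eigenvector: the inner product of the unit vectors, with the
    sign ignored since eigenvectors are defined up to sign.
    1.0 means perfect agreement, 0.0 means orthogonal."""
    w1 = w_est / np.linalg.norm(w_est)
    w2 = w_batch / np.linalg.norm(w_batch)
    return abs(float(w1 @ w2))

print(convergence_rate(np.array([2.0, 0.0]), np.array([1.0, 0.0])))  # 1.0
print(convergence_rate(np.array([0.0, 3.0]), np.array([1.0, 0.0])))  # 0.0
```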
In the experiment, the 500 red cube samples are input in turn. The convergence rates of the I2DPCA and the IRC2DPCA are shown in Figures 3 and 4, respectively. The first five eigenvectors of the IRC2DPCA show an obvious convergent tendency, while the later three converge poorly; the convergence rates of the first five eigenvectors of the IRC2DPCA reach 90.6% after all samples have been input incrementally. For the I2DPCA, only the first three eigenvectors show obvious convergence, and the convergence rates of its first five reach only 77.7%. Hence, the estimated eigenvectors of the IRC2DPCA are closer to the accurate ones than those of the I2DPCA.
Figure 3: The convergence rates of the I2DPCA with the red cubes. (a) The first four eigenvectors and (b) the later four ones.
Figure 4: The convergence rates of the IRC2DPCA with the red cubes. (a) The first four eigenvectors and (b) the later four ones.
The convergence rate is influenced by the sample-to-dimension ratio $n/d$ [24]: the higher the ratio, generally, the easier the convergence. In this experiment, the original sample images are 120×120 pixels. The image dimension for the I2DPCA is 120×120 = 14400, much higher than that of the IRC2DPCA column stage (120×8 = 960). Thus, when the sample number reaches 500, the sample-to-dimension ratio of the IRC2DPCA is 0.521, while that of the I2DPCA is only 0.035. With the same number of samples, the sample-to-dimension ratio of the IRC2DPCA is larger than that of the I2DPCA, so the convergence rate of the IRC2DPCA is higher.
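The two ratios can be checked directly with the matrix sizes quoted above (a trivial arithmetic sketch):

```python
# Sample-to-dimension ratios n/d for n = 500 streamed samples, using
# 120x120 input matrices versus 120x8 row-compressed matrices.
n = 500
d_full = 120 * 120          # 14400 entries per original image matrix
d_compressed = 120 * 8      # 960 entries per row-compressed matrix
ratio_i2dpca = round(n / d_full, 3)          # 0.035
ratio_irc2dpca = round(n / d_compressed, 3)  # 0.521
print(ratio_i2dpca, ratio_irc2dpca)
```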
To improve the convergence of the algorithms, 500 red cube samples are input repeatedly; the convergence rates of the I2DPCA and the IRC2DPCA are shown in Figures 5 and 6, respectively. We can see that the first eight eigenvectors of the IRC2DPCA show a more obvious tendency to converge than those of the I2DPCA. To achieve a higher convergence rate, the sample number should be at least 1000. If the video stream is input, the video should last for at least 37s.
Figure 5: The convergence rates of the I2DPCA with the repeated input. (a) The first four eigenvectors and (b) the later four ones.
Figure 6: The convergence rates of the IRC2DPCA with the repeated input. (a) The first four eigenvectors and (b) the later four ones.
Classification experiments are performed on the block database with the k-nearest neighbour (KNN) algorithm as the classifier. Repeated experiments show that, for each of the four algorithms, the feature-matrix dimension yielding the highest classification rate fluctuates only slightly around a certain value; therefore, a specific feature dimension is selected for each algorithm in the experiment. L samples are randomly selected from each kind of block for training, and the remaining ones are used for testing. To ensure adequate training samples, L ≥ 20 is required.
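The KNN classifier on 2D feature matrices can be sketched as follows. The distance metric (Euclidean distance on the flattened matrices) is an assumption, since the paper does not state it:

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=1):
    """Nearest-neighbour classification of 2D feature matrices: each
    matrix is flattened and assigned the majority label of its k
    closest training samples in Euclidean distance."""
    Xtr = np.asarray(train_feats, float).reshape(len(train_feats), -1)
    Xte = np.asarray(test_feats, float).reshape(len(test_feats), -1)
    labels = np.asarray(train_labels)
    preds = []
    for x in Xte:
        d = np.linalg.norm(Xtr - x, axis=1)          # distances to all
        nearest = labels[np.argsort(d)[:k]]          # k nearest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])        # majority vote
    return np.array(preds)
```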
The highest classification rates and the corresponding feature-matrix dimensions are given in Table 1. The classification rates of the four algorithms maintain an upward trend as the number of samples increases, and the IRC2DPCA is superior to the other three algorithms in classification rate. When its highest classification rate of 99.8% is obtained, the feature-matrix dimension of the IRC2DPCA is only 8×8 = 64, far smaller than that of the I2DPCA, which is 120×4 = 480.
Table 1: Comparison of the classification rates on the block database.

| L | 2DPCA [12] (120×4) | I2DPCA (120×4) | RC2DPCA [18] (8×8) | IRC2DPCA (8×8) |
|-----|-------|-------|-------|-------|
| 25  | 0.927 | 0.927 | 0.899 | 0.901 |
| 50  | 0.929 | 0.929 | 0.947 | 0.938 |
| 75  | 0.935 | 0.927 | 0.948 | 0.939 |
| 100 | 0.937 | 0.930 | 0.965 | 0.940 |
| 125 | 0.955 | 0.954 | 0.965 | 0.947 |
| 150 | 0.956 | 0.960 | 0.955 | 0.960 |
| 175 | 0.953 | 0.961 | 0.958 | 0.963 |
| 200 | 0.954 | 0.961 | 0.961 | 0.964 |
| 225 | 0.994 | 0.994 | 0.995 | 0.995 |
| 250 | 0.994 | 0.994 | 0.999 | 0.998 |
The IRC2DPCA is also superior to the I2DPCA in classification time. Table 2 shows that the average classification time per image of the I2DPCA is 0.0015 s, while that of the IRC2DPCA is only 0.0002 s, mainly because the IRC2DPCA has a feature matrix with far fewer dimensions. However, the average feature extraction time per image of the IRC2DPCA is 0.0516 s, longer than the 0.0267 s of the I2DPCA, because the IRC2DPCA performs feature extraction in both the row and column directions.
Table 2: Comparison of the running time (s) on the block database. For each algorithm, the two columns give the feature extraction time and the classification time.

| L | 2DPCA [12] extr. | 2DPCA [12] class. | I2DPCA extr. | I2DPCA class. | RC2DPCA [18] extr. | RC2DPCA [18] class. | IRC2DPCA extr. | IRC2DPCA class. |
|-----|--------|-------|--------|-------|--------|-------|--------|-------|
| 25  | 1.511  | 1.870 | 2.648  | 1.862 | 2.198  | 0.302 | 5.974  | 0.317 |
| 50  | 4.350  | 2.010 | 6.418  | 1.953 | 5.615  | 0.354 | 11.416 | 0.341 |
| 75  | 8.951  | 2.574 | 8.342  | 2.582 | 10.856 | 0.399 | 17.532 | 0.343 |
| 100 | 15.391 | 2.374 | 10.921 | 2.371 | 16.602 | 0.408 | 24.384 | 0.377 |
| 125 | 22.740 | 2.386 | 14.921 | 2.479 | 23.951 | 0.374 | 28.263 | 0.409 |
| 150 | 32.838 | 2.500 | 15.425 | 2.498 | 33.435 | 0.427 | 33.930 | 0.420 |
| 175 | 43.921 | 2.515 | 18.526 | 2.514 | 43.558 | 0.399 | 39.185 | 0.399 |
| 200 | 55.803 | 2.708 | 21.606 | 2.686 | 57.873 | 0.432 | 44.589 | 0.432 |
| 225 | 71.662 | 2.715 | 23.958 | 2.713 | 69.667 | 0.432 | 49.155 | 0.431 |
| 250 | 82.509 | 2.690 | 23.998 | 2.694 | 88.56  | 0.427 | 54.198 | 0.427 |
Table 3 shows the memory required for feature extraction and classification. The memory of the 2DPCA and the RC2DPCA increases with the number of samples because both are batch algorithms, while the required memory of the I2DPCA and the IRC2DPCA remains basically unchanged. For large sample sets, the memory advantage of the two incremental algorithms becomes even more significant.
Table 3: Comparison of the running memory (B) on the block database. For each algorithm, the two columns give the feature extraction memory and the classification memory.

| L | 2DPCA [12] extr. | 2DPCA [12] class. | I2DPCA extr. | I2DPCA class. | RC2DPCA [18] extr. | RC2DPCA [18] class. | IRC2DPCA extr. | IRC2DPCA class. |
|-----|--------|-------|------|-------|--------|------|------|------|
| 25  | 14110  | 24457 | 8511 | 24649 | 14307  | 8351 | 8478 | 8318 |
| 50  | 24868  | 24076 | 8470 | 24084 | 26243  | 8339 | 8511 | 8335 |
| 75  | 38498  | 23470 | 8511 | 27308 | 38191  | 8335 | 8527 | 8314 |
| 100 | 49258  | 22876 | 8482 | 22884 | 50130  | 8306 | 8468 | 8310 |
| 125 | 60977  | 22310 | 8458 | 22306 | 62054  | 8310 | 8474 | 8351 |
| 150 | 72658  | 21934 | 8478 | 21958 | 73973  | 8339 | 8503 | 8314 |
| 175 | 84533  | 21188 | 8503 | 21385 | 86007  | 8347 | 8491 | 8318 |
| 200 | 96251  | 20774 | 8449 | 20738 | 98009  | 8343 | 8449 | 8335 |
| 225 | 107929 | 20189 | 8482 | 20189 | 109858 | 8331 | 8511 | 8351 |
| 250 | 119623 | 19599 | 8511 | 19591 | 121937 | 8331 | 8486 | 8302 |
4.2. Experiments on the ORL Database
To verify the generality of the proposed algorithms, a series of experiments is performed on the ORL face database. In the experiments, the 400 face images are input ten times. The convergence rates of the I2DPCA and the IRC2DPCA are shown in Figures 7 and 8, respectively. As on the block database, the IRC2DPCA has better convergence performance than the I2DPCA on the ORL database.
Figure 7: The convergence rates of the I2DPCA with the repeated input. (a) The first four eigenvectors and (b) the later four ones.
Figure 8: The convergence rates of the IRC2DPCA with the repeated input. (a) The first four eigenvectors and (b) the later four ones.
To further compare the performance of the I2DPCA and the IRC2DPCA, the two algorithms are used to reconstruct the face images. By adding up the first d subimages, we obtain an approximate reconstruction of the original image. Figure 9 shows eight reconstructed images of the first face image in Figure 2; the reconstructions become clearer as the number of subimages increases. Because the feature dimension of the IRC2DPCA is much smaller than that of the I2DPCA, its image reconstruction quality is poorer.
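The reconstruction by summing subimages can be sketched for the row-direction case: the $j$th subimage is $(Xw_j)w_j^T$, and summing all $q$ of them recovers $X$ exactly when $W_r$ is a complete orthonormal basis (function name illustrative):

```python
import numpy as np

def reconstruct(X, W_r, d):
    """Approximate reconstruction of a p x q image X from the first d
    row-direction components: each subimage is the rank-one projection
    (X w_j) w_j^T, and the first d subimages are summed."""
    acc = np.zeros_like(X, dtype=float)
    for j in range(d):
        w = W_r[:, j]
        acc += np.outer(X @ w, w)   # j-th subimage
    return acc
```

With fewer components the reconstruction is only approximate, which is why the low-dimensional IRC2DPCA features reconstruct less sharply than the I2DPCA features.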
Figure 9: The reconstructed images. (a) The I2DPCA and (b) the IRC2DPCA.
From the classification rates in Table 4, the IRC2DPCA performs better in classification rate; when its highest classification rate reaches 97.5%, the feature-matrix dimension of the IRC2DPCA is far smaller than that of the I2DPCA. For the running time, Table 5 shows that the IRC2DPCA has a smaller average classification time (0.0002 s) than the I2DPCA (0.0015 s), so the classification time of the IRC2DPCA also has a clear advantage on the ORL database. As Table 6 shows, the memory advantages of the two incremental algorithms are again obvious as the number of samples increases.
Table 4: Comparison of the classification rates on the ORL database.

| L | 2DPCA [12] (112×4) | I2DPCA (112×4) | RC2DPCA [18] (8×8) | IRC2DPCA (8×8) |
|---|-------|-------|-------|-------|
| 1 | 0.894 | 0.842 | 0.881 | 0.867 |
| 2 | 0.947 | 0.909 | 0.925 | 0.922 |
| 3 | 0.961 | 0.918 | 0.939 | 0.936 |
| 4 | 0.979 | 0.950 | 0.967 | 0.963 |
| 5 | 0.980 | 0.955 | 0.970 | 0.975 |
Table 5: Comparison of the running time (s) on the ORL database. For each algorithm, the two columns give the feature extraction time and the classification time.

| L | 2DPCA [12] extr. | 2DPCA [12] class. | I2DPCA extr. | I2DPCA class. | RC2DPCA [18] extr. | RC2DPCA [18] class. | IRC2DPCA extr. | IRC2DPCA class. |
|---|-------|-------|-------|-------|-------|-------|-------|-------|
| 1 | 0.604 | 0.434 | 0.759 | 0.416 | 0.749 | 0.071 | 1.762 | 0.071 |
| 2 | 0.984 | 0.420 | 1.352 | 0.416 | 1.158 | 0.071 | 3.050 | 0.065 |
| 3 | 1.473 | 0.450 | 1.843 | 0.429 | 1.770 | 0.080 | 4.137 | 0.081 |
| 4 | 2.261 | 0.433 | 2.360 | 0.419 | 2.551 | 0.086 | 5.800 | 0.079 |
| 5 | 2.842 | 0.395 | 2.915 | 0.424 | 3.340 | 0.086 | 7.333 | 0.081 |
Table 6: Comparison of the running memory (B) on the ORL database. For each algorithm, the two columns give the feature extraction memory and the classification memory.

| L | 2DPCA [12] extr. | 2DPCA [12] class. | I2DPCA extr. | I2DPCA class. | RC2DPCA [18] extr. | RC2DPCA [18] class. | IRC2DPCA extr. | IRC2DPCA class. |
|---|-------|------|------|------|-------|------|------|------|
| 1 | 8413  | 8298 | 8441 | 8355 | 8474  | 8298 | 8413 | 8206 |
| 2 | 9060  | 8351 | 8421 | 8339 | 9252  | 8302 | 8437 | 8306 |
| 3 | 12427 | 8339 | 8396 | 8343 | 12658 | 8323 | 8404 | 8306 |
| 4 | 15810 | 8294 | 8404 | 8323 | 16142 | 8206 | 8421 | 8206 |
| 5 | 19169 | 8318 | 8404 | 8327 | 19603 | 8227 | 8429 | 8231 |
5. Conclusions
In this paper, a new feature extraction algorithm called the I2DPCA has been developed for image representation and object recognition. The I2DPCA estimates the dominant eigenvectors from incrementally arriving 2D image data without transforming the 2D image matrices into high-dimensional vectors and without directly computing the covariance matrix, which reduces the complexity of the algorithm and the feature extraction time. Then another incremental algorithm, the IRC2DPCA, is proposed to intermingle effectively the information in the row and column directions with fewer dimensions; it effectively reduces the dimension of the feature matrices and the classification time.
The experimental results show that (1) as the training data grow, the advantage of the incremental algorithms in feature extraction time and required memory becomes increasingly obvious; (2) compared with the I2DPCA, the IRC2DPCA has a higher convergence rate and a faster convergence speed; (3) the IRC2DPCA achieves a higher classification rate with a lower-dimensional feature matrix, and its highest classification rate reaches 99.8%; (4) although the IRC2DPCA does not perform as well in image reconstruction because its feature-matrix dimension is far smaller than that of the I2DPCA, its classification time is shorter. The proposed algorithms have broad prospects in equipment fault identification and robot vision tracking; their application to the recognition of industrial weld surface defects is being tested by other members of our project. The IRC2DPCA algorithm provides an effective solution for the real-time learning of high-dimensional sample data or video stream data.
Nomenclature

PCA: Principal component analysis
2DPCA: Two-dimensional principal component analysis
RC2DPCA: Sequential row-column 2DPCA
IPCA: Incremental PCA
CCIPCA: Candid covariance-free IPCA
CCIPCT: Candid covariance-free incremental principal component thermography
PCT: Principal component thermography
I2DPCA: Incremental 2DPCA
IRC2DPCA: Incremental RC2DPCA
KNN: k-nearest neighbour
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work is supported by National Key R & D Plan of China (Grant No. 2017YFB1303304), Tianjin Science and Technology Planed Project of China (Grant No. 17ZXZNGX00110), and Tianjin Natural Science Foundation Key Project of China (Grant No. 16JCZDJC30400).
References

[1] S. Soltanpour, B. Boufama, and Q. M. J. Wu, "A survey of local feature methods for 3D face recognition," 2017, vol. 72, pp. 391–406. doi:10.1016/j.patcog.2017.08.003
[2] W. Sun, H. Zhao, and Z. Jin, "A complementary facial representation extracting method based on deep learning," 2018, vol. 306, pp. 246–259. doi:10.1016/j.neucom.2018.04.063
[3] K. Ahi, A. Rivera, A. Mazadi, and M. Anwar, "Fabrication of robust nano-signatures for identification of authentic electronic components and counterfeit avoidance," 2017, vol. 26, no. 3.
[4] K. Ahi, S. Shahbazmohamadi, and N. Asadizanjani, "Quality control and authentication of packaged integrated circuits using enhanced-spatial-resolution terahertz time-domain spectroscopy and imaging," 2018, vol. 104, pp. 274–284. doi:10.1016/j.optlaseng.2017.07.007
[5] K. Ahi, A. Rivera, and M. Anwar, "Encrypted electron beam lithography nano-signatures for authentication," 2017, vol. 26, no. 3, 1740017. doi:10.1142/S0129156417400171
[6] Y. Li, G. Li, Y. Yang, X. Liang, and M. Xu, "A fault diagnosis scheme for planetary gearboxes using adaptive multi-scale morphology filter and modified hierarchical permutation entropy," 2018, vol. 105, pp. 319–337. doi:10.1016/j.ymssp.2017.12.008
[7] Y. Li, Y. Yang, X. Wang, B. Liu, and X. Liang, "Early fault diagnosis of rolling bearings based on hierarchical symbol dynamic entropy and binary tree support vector machine," 2018, vol. 428, pp. 72–86. doi:10.1016/j.jsv.2018.04.036
[8] B. Yousefi, S. Sfarra, C. Ibarra Castanedo, and X. P. V. Maldague, "Comparative analysis on thermal non-destructive testing imagery applying Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT)," 2017, vol. 85, pp. 163–169. doi:10.1016/j.infrared.2017.06.008
[9] Y. Yao, S. Sfarra, S. Lagüela, C. Ibarra-Castanedo, J.-Y. Wu, X. P. V. Maldague, and D. Ambrosini, "Active thermography testing and data analysis for the state of conservation of panel paintings," 2018, vol. 126, pp. 143–151. doi:10.1016/j.ijthermalsci.2017.12.036
[10] S. Sfarra, E. Cheilakou, P. Theodorakeas, D. Paoletti, and M. Koui, "S.S. Annunziata Church (L'Aquila, Italy) unveiled by non- and micro-destructive testing techniques," 2017, vol. 123, no. 3.
[11] H. Zhang, S. Sfarra, A. Osman, K. Szielasko, C. Stumm, M. Genest, and X. Maldague, "An infrared-induced terahertz imaging modality for foreign object detection in a lightweight honeycomb composite structure," 2018, vol. 14, no. 12. doi:10.1109/TII.2018.2832244
[12] J. Yang, D. Zhang, and A. F. Frangi, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," 2004, vol. 26, no. 1, pp. 131–137. doi:10.1109/TPAMI.2004.1261097
[13] R. Senthilkumar and R. K. Gnanamurthy, "A comparative study of 2D PCA face recognition method with other statistically based face recognition methods," 2016, vol. 97, no. 3, pp. 425–430. doi:10.1007/s40031-015-0212-6
[14] L. Wang, X. Wang, X. Zhang, and J. Feng, "The equivalence of two-dimensional PCA to line-based PCA," 2005, vol. 26, no. 1, pp. 57–60. doi:10.1016/j.patrec.2004.08.016
[15] Q. Gao, L. Ma, Y. Liu, X. Gao, and F. Nie, "Angle 2DPCA: a new formulation for 2DPCA," 2017, vol. 48, no. 5, pp. 1672–1678. doi:10.1109/TCYB.2017.2712740
[16] Q. Wang and Q. Gao, "Robust 2DPCA and its application," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Las Vegas, NV, USA, June 2016, pp. 1152–1158. doi:10.1109/CVPRW.2016.147
[17] S.-X. Zhang, Y.-P. Cai, and W.-J. Mu, "The application of s-transformation and M-2DPCA in I.C. engine fault diagnosis," in 5th International Conference on Computer-Aided Design, Manufacturing, Modeling and Simulation (CDMMS), Busan, South Korea, April 2017, pp. 198–211.
[18] W. Yang, C. Sun, and K. Ricanek, "Sequential row-column 2DPCA for face recognition," 2012, vol. 21, no. 7, pp. 1729–1735. doi:10.1007/s00521-011-0676-5
[19] H. Zhao, P. C. Yuen, and J. T. Kwok, "A novel incremental principal component analysis and its application for face recognition," 2006, vol. 36, no. 4, pp. 873–886. doi:10.1109/TSMCB.2006.870645
[20] D. Skočaj and A. Leonardis, "Weighted and robust incremental method for subspace learning," in Proceedings of the 9th IEEE International Conference on Computer Vision (ICCV '03), Nice, France, October 2003, vol. 2, pp. 1494–1501. doi:10.1109/ICCV.2003.1238667
[21] M. Artac, M. Jogan, and A. Leonardis, "Incremental PCA for on-line visual learning and recognition," in Proceedings of the 16th International Conference on Pattern Recognition, Quebec City, Canada, 2002, pp. 781–784. doi:10.1109/ICPR.2002.1048133
[22] Y. Li, "On incremental and robust subspace learning," 2004, vol. 37, no. 7, pp. 1509–1518. doi:10.1016/j.patcog.2003.11.010
[23] X.-M. Tong, Y.-N. Zhang, and T. Yang, "Robust object tracking based on adaptive and incremental subspace learning," 2011, vol. 37, no. 12, pp. 1483–1494.
[24] J. Weng, Y. Zhang, and W. Hwang, "Candid covariance-free incremental principal component analysis," 2003, vol. 25, no. 8, pp. 1034–1040. doi:10.1109/TPAMI.2003.1217609
[25] J. Weng, Y. Zhang, and W. Hwang, "A fast algorithm for incremental principal component analysis," Lecture Notes in Computer Science, vol. 2690, Springer, Berlin, Heidelberg, 2003, pp. 876–881. doi:10.1007/978-3-540-45080-1_122
[26] P. Bison, D. Burleigh, B. Yousefi, S. Sfarra, C. Ibarra Castanedo, and X. P. Maldague, "Thermal NDT applying Candid Covariance-Free Incremental Principal Component Thermography (CCIPCT)," in Proceedings of SPIE Commercial + Scientific Sensing and Imaging, Anaheim, CA, USA, 2017, 102141I. doi:10.1117/12.2263118
[27] Y. Shi, Y. Zhao, and N. M. Deng, "Robust object tracking based on structural local sparse representation and incremental subspace learning," 2013, vols. 765–767, pp. 2388–2392. doi:10.4028/www.scientific.net/AMR.765-767.2388