As a method of representing a test sample with only a few training samples drawn from an overcomplete dictionary, sparse representation classification (SRC) has recently attracted much attention in synthetic aperture radar (SAR) automatic target recognition (ATR). In this paper, we develop a novel SAR vehicle recognition method based on sparse representation classification along with aspect information (SRCA), in which the correlation between the vehicle's aspect angle and the sparse representation vector is exploited. The procedure can be summarized as follows. First, the sparse representation vector of a test sample is solved by a sparse representation algorithm with a principal component analysis (PCA) feature-based dictionary. Then, the coefficient vector is projected onto a sparser one within a certain range of the vehicle's aspect angle. Finally, the vehicle is assigned to the category that minimizes the reconstruction error with the new sparse representation vector. Extensive experiments are conducted on the moving and stationary target acquisition and recognition (MSTAR) dataset, and the results demonstrate that the proposed method performs robustly under variations of depression angle and target configuration, as well as incomplete observation.
In recent years, sparse representation has attracted much attention in the fields of signal representation, compressed sensing, and classification. The sparse representation classification (SRC) algorithm, proposed by Wright et al. [
In particular, with the development of SAR imaging techniques including high resolution and multipolarization, much effort has been devoted to SAR ATR. The moving and stationary target acquisition and recognition (MSTAR) dataset [
By representing the test sample as a linear combination of training samples, sparse representation classification, which can be considered a generalization of LVQ, determines the class of the test sample from the resulting sparsest coefficients. The sparse coefficients carry discriminatory information about the samples in a low-dimensional subspace and are robust to noise and occlusion as well as incomplete observation [
In this section, we give a brief review of sparse representation and the classification strategy, that is, how to represent a test sample as a linear combination of training samples from a dictionary [
Suppose that there are k classes of targets and that the i-th class provides n_i training samples, each stacked as a column vector v_{i,j} ∈ R^m.

Accordingly, all training samples for the i-th class form the matrix A_i = [v_{i,1}, v_{i,2}, ..., v_{i,n_i}] ∈ R^{m×n_i}, and the dictionary over all k classes is the concatenation A = [A_1, A_2, ..., A_k] ∈ R^{m×n}, with n = n_1 + n_2 + ... + n_k.

With a sufficiently large number of samples for each class, the coefficient vector of a test sample y ∈ R^m ideally has nonzero entries only at the atoms of its own class. The sparsest solution of y = Ax is therefore sought, in practice via the l1-relaxation x̂ = arg min ||x||_1 subject to ||Ax − y||_2 ≤ ε, where ε is the noise tolerance.

With the sparsest coefficient vector x̂ in hand, the test sample is reconstructed from the coefficients associated with each class separately.

For each class i, let δ_i(x̂) denote the vector that keeps the entries of x̂ associated with class i and zeros out the rest. The residual error is then r_i(y) = ||y − Aδ_i(x̂)||_2, and the test sample is assigned to the class with the minimum residual, identity(y) = arg min_i r_i(y).
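The two-stage decision rule — sparse coding over the stacked dictionary, then per-class residual comparison — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the ISTA solver, the regularization weight, and the toy dictionary are all assumed choices.

```python
import numpy as np

def ista_l1(A, y, lam=0.01, n_iter=500):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

def src_classify(A, labels, y, lam=0.01):
    """Plain SRC: sparse-code y over A, keep each class's coefficients in turn,
    and return the class whose reconstruction residual is smallest."""
    x = ista_l1(A, y, lam)
    residuals = {c: np.linalg.norm(y - A @ np.where(labels == c, x, 0.0))
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get), x
```

Here `np.where(labels == c, x, 0.0)` zeroes all coefficients outside class c before reconstructing, so each class is scored only by its own atoms.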
In Figure
Sparse representation examples for 3 different test targets: (a) SN_9563, (b) SN_C71, and (c) SN_132. From top to bottom: input test target, the corresponding sparse representation coefficient vector, and the reconstruction residual error. Each class is denoted by a distinct color and marker: red circles for SN_9563, blue squares for SN_C71, and green diamonds for SN_132, as shown in the legend.
In SAR images, even the same target presents different appearances as its aspect angle varies. In this section, aspect information is exploited for the classification of vehicles in SAR images. Based on an analysis of the correlation between the test image and training images at various aspects, the sparse representation vector is mapped onto a local aspect range, and the SRC-along-with-aspect algorithm is proposed.
The correlation between two images reflects their similarity: a higher correlation coefficient means the two target images are more likely to come from the same class. Based on the correlation coefficient, the template matching method has been widely adopted in SAR ATR [
Given two images, the correlation coefficient is calculated as follows [
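Assuming the standard definition — subtract each image's mean, then normalize the cross term by the energies of the centered images — the coefficient can be computed in a few lines (a sketch; the function name is ours):

```python
import numpy as np

def correlation_coefficient(img1, img2):
    """Correlation coefficient of two equally sized images:
    rho = sum((I1-mu1)*(I2-mu2)) / sqrt(sum((I1-mu1)^2) * sum((I2-mu2)^2))."""
    a = img1 - img1.mean()
    b = img2 - img2.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

Identical images give exactly 1, sign-inverted images give -1, and unrelated images fall near 0.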
Figure
Correlation coefficients of the input targets in Figure
The vehicles in SAR images are aspect sensitive, and a test sample is more likely to be represented by the training samples whose aspect angles are close to its own. This conclusion is preliminarily validated by the correlation coefficients in Figure
The sparse representation vector and residual error of a test sample from SN_C71. (a) The sparse representation vector. (b) Residual error calculated with the complete sparse representation vector. (c) Residual error calculated with the sparse coefficients within a certain aspect range. The ground-truth aspect angle of the test sample is 307.01°, and the rectangle indicates the neighboring range around the aspect of the test sample.
Motivated by the above observations and analysis, we propose the SRC method along with aspect angle. For each class
It should be noted that aspect information could also be introduced into SRC in other ways, such as constructing the dictionary from training samples within a certain aspect range or appending the aspect angle as an extra row of the dictionary. However, the first option requires a large number of training samples at each aspect to keep the dictionary overcomplete, and the second is hampered by the mismatch in dimension and scale between the aspect angle and the other entries of the dictionary atoms. Therefore, we instead map the sparse coefficient vector onto a local aspect range and calculate the residual error with the tailored sparse vector. The effectiveness of the proposed method will be further validated in Section
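The projection step that distinguishes SRCA from plain SRC — zeroing every coefficient whose training-sample aspect falls outside a window around the estimated aspect of the test sample — can be sketched as below. The symmetric ±half_window circular window is our reading of the "certain range of aspect"; the paper's exact window handling may differ.

```python
import numpy as np

def angular_diff(a, b):
    """Smallest absolute difference between angles in degrees (circular)."""
    d = np.abs(np.asarray(a, dtype=float) - b) % 360.0
    return np.minimum(d, 360.0 - d)

def restrict_to_aspect(x, train_aspects, test_aspect, half_window):
    """Map the sparse vector x onto a local aspect range: keep only coefficients
    whose training aspect is within +/- half_window of the test aspect."""
    keep = angular_diff(train_aspects, test_aspect) <= half_window
    return np.where(keep, x, 0.0)
```

The per-class residuals are then recomputed with the tailored vector exactly as in plain SRC.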
The proposed SAR vehicle classification method consists of three modules:
Procedure diagram of the proposed SRCA method.
In the first module, the aspect angle of the vehicle in SAR image is estimated through image processing techniques. Firstly, the target area is separated from the background with segmentation methods [
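The estimation details are left to the cited methods, but as a rough stand-in for this module, one common heuristic is to take the orientation of the principal axis of the segmented target pixels. The sketch below is an assumed illustration, not the paper's estimator; it takes an already segmented binary mask and leaves the usual 180° front-back ambiguity unresolved.

```python
import numpy as np

def estimate_aspect(mask):
    """Coarse target orientation (degrees in [0, 180)) from a binary target
    mask, via the principal axis of the target-pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.vstack([xs - xs.mean(), ys - ys.mean()])  # centered coordinates
    evals, evecs = np.linalg.eigh(np.cov(pts))
    major = evecs[:, np.argmax(evals)]                 # direction of largest spread
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0
```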
In this section, we evaluate the performance of the proposed method on the MSTAR public database, a standard dataset for evaluating SAR ATR algorithms. It was collected in 1995 and 1996 by the Sandia National Laboratory X-band (9.6 GHz) HH-polarization SAR sensor at a resolution of 0.3 m × 0.3 m. One subset of the MSTAR data consists of three classes of vehicles, namely the BMP2, BTR70, and T72, with several configuration variants for each class. The vehicles are imaged in spotlight mode at 15° and 17° depression angles over 360° of aspect angle. The capacity of the subset is listed in Table
Capacity of the subset of MSTAR.
| Depression angle | BMP2 SN_9563 | BMP2 SN_9566 | BMP2 SN_C21 | BTR70 SN_C71 | T72 SN_132 | T72 SN_812 | T72 SN_S7 |
|---|---|---|---|---|---|---|---|
| 17° (train)1 | 233 | (232) | (233) | 233 | 232 | (231) | (228) |
| 15° (test) | 195 | 196 | 196 | 196 | 196 | 195 | 191 |

Note: 1the samples corresponding to the numbers in brackets are not used in training or testing, unless otherwise noted.
In the sequel, we carry out several experiments. First, we evaluate the performance of the proposed method under different adopted ranges of aspect angle and different feature dimensionalities. We then examine the robustness of the proposed method with respect to variations of depression angle and target configuration. Finally, we evaluate the proposed algorithm under the condition of incomplete observation.
In this experiment, we use the first serial-number target from each class, that is, SN_9563 for BMP2, SN_C71 for BTR70, and SN_132 for T72, for algorithm evaluation and comparison. The training samples are captured at a depression angle of 17° and the testing samples at a depression angle of 15°.
In our first experiment, we evaluate the recognition accuracy of the proposed SRCA method under different adopted ranges of aspect angle and different feature dimensions. The performance curves in Figure
Recognition performance of different algorithms under (a) different adopted range of aspect angle with
In the following experiment, we compare the performance of different algorithms when feature dimension changes. The corresponding results are summarized in Table
Recognition accuracy (%) on MSTAR with different feature dimensions (
| Dims. | 20 | 40 | 60 | 80 | 100 | 120 | Avg. |
|---|---|---|---|---|---|---|---|
| Linear SVM | 80.56 | 90.79 | 90.62 | 90.79 | 90.96 | 91.14 | 89.14 |
| KSVM | 82.43 | 92.33 | 92.67 | 92.84 | 93.01 | 92.16 | 90.91 |
| SRC | 92.33 | 98.81 | 98.47 | 98.64 | 97.96 | 97.78 | 97.33 |
| SRCA | 93.01 | 99.66 | 99.83 | 99.66 | 99.66 | 99.83 | 98.61 |
For real-world tasks, invariance to depression angle is crucial to the successful application of a recognition algorithm. In this subsection, we evaluate the invariance to depression angle for the four algorithms. There are two depression angles for the first three classes of MSTAR, namely 17° and 15°. In the previous experiment, we took the samples captured at a depression angle of 17° for training and those captured at 15° for testing. In this experiment, we exchange the training and testing samples. As can be seen from Table
Depression angle invariance results (%) for different algorithms (
| Datasets | Linear SVM | KSVM | SRC | SRCA |
|---|---|---|---|---|
| Train 17°, test 15° | 90.62 | 92.67 | 98.47 | 99.83 |
| Train 15°, test 17° | 89.40 | 91.98 | 97.85 | — |
In this subsection, we examine the invariance of different algorithms under different configurations, which is a desirable property of an algorithm for SAR ATR applications. As shown in Table
Configuration invariance results (%) for different algorithms (
| Algorithms | Input | Invariant: BMP2 | BTR70 | T72 | Mixed: BMP2 | BTR70 | T72 | Variant: BMP2 | BTR70 | T72 |
|---|---|---|---|---|---|---|---|---|---|---|
| Linear SVM | BMP2 | 85.13 | 8.21 | 6.66 | 75.13 | 10.05 | 14.82 | 71.17 | 10.20 | 18.62 |
| | BTR70 | 2.55 | 92.43 | 1.02 | 2.55 | 94.90 | 2.55 | 1.53 | 96.94 | 1.53 |
| | T72 | 7.65 | 2.04 | 90.31 | 16.49 | 10.31 | 73.20 | 20.98 | 11.66 | 67.36 |
| | Avg. | 90.62 | | | 77.14 | | | 74.85 | | |
| KSVM | BMP2 | 88.21 | 7.18 | 4.61 | 76.32 | 10.73 | 12.95 | 74.74 | 9.95 | 15.30 |
| | BTR70 | 2.55 | 96.94 | 0.51 | 2.04 | 95.92 | 2.04 | 1.53 | 97.45 | 1.02 |
| | T72 | 6.12 | 1.02 | 92.86 | 14.60 | 5.84 | 79.55 | 19.95 | 5.18 | 74.87 |
| | Avg. | 92.67 | | | 80.51 | | | 79.36 | | |
| SRC | BMP2 | 97.44 | 0 | 2.56 | 90.97 | 2.04 | 6.98 | 86.48 | 3.83 | 9.69 |
| | BTR70 | 1.53 | 97.96 | 0.51 | 0.51 | 98.98 | 0.51 | 0 | 100 | 0 |
| | T72 | 0 | 0 | 100 | 6.87 | 4.12 | 89.01 | 10.36 | 4.15 | 85.49 |
| | Avg. | 98.47 | | | — | | | — | | |
| SRCA | BMP2 | 100 | 0 | 0 | 93.70 | 2.38 | 3.92 | 91.83 | 4.34 | 3.83 |
| | BTR70 | 0.51 | 99.49 | 0 | 0 | 100 | 0 | 0 | 100 | 0 |
| | T72 | 0 | 0 | 100 | 10.14 | 5.84 | 84.02 | 15.54 | 8.03 | 76.42 |
| | Avg. | 99.83 | | | 90.48 | | | 87.37 | | |
In real-world tasks, targets are not observed under all conditions, such as every aspect angle, radar frequency, and grazing angle. Incomplete observation poses challenges to recognition algorithms. We evaluate the robustness of the proposed SRCA method under the condition of incomplete observation. In this experiment, the training samples captured at a depression angle of 17° are randomly selected at a certain percentage to construct the training set, and the samples captured at a depression angle of 15° are tested. The performances of the different methods are compared in Figure
Performance comparison of the algorithms under incomplete observation.
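The sampling step of the incomplete-observation setup — keeping a random fraction of the 17° training chips — amounts to the following sketch; the helper name and fixed seed are illustrative choices, not the authors' protocol:

```python
import numpy as np

def subsample_training(n_train, fraction, seed=0):
    """Randomly keep a given fraction of the training-sample indices,
    without replacement, to simulate incomplete observation."""
    rng = np.random.default_rng(seed)
    k = int(round(fraction * n_train))
    return rng.choice(n_train, size=k, replace=False)
```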
In this paper, we propose a SAR vehicle recognition method based on sparse representation classification along with aspect angle. The method projects the sparse coefficient vector onto a subspace within a certain range of aspect angle around the estimated aspect angle of the test sample and then determines the class label according to the reconstruction residuals. The rationale is that vehicles in SAR images are sensitive to aspect angle and are much more likely to be represented by training samples with similar aspect angles. The proposed SRCA method is compared with the linear SVM, KSVM, and SRC methods through extensive experiments on the MSTAR database. The results validate that the proposed SRCA method is robust to variations of depression angle and target configuration, as well as to incomplete observation of training samples. Despite the effectiveness of the proposed method, several directions remain for future work, including learning a more compact dictionary from the training data and finding fast and effective solutions of the sparse representation vector.
The authors declare that there is no conflict of interest regarding the publication of this paper.
This work is partially supported by the National Natural Science Foundation of China under Grant no. 61372163.