A target recognition method for synthetic aperture radar (SAR) images based on complex bidimensional empirical mode decomposition (C-BEMD) is proposed. C-BEMD decomposes the original SAR image into multilevel complex bidimensional intrinsic mode functions (BIMFs), which reflect the two-dimensional time-frequency characteristics of the target. In the classification stage, the multilevel BIMFs are represented using the multitask sparse representation. Finally, the target category of the test sample is determined according to the reconstruction errors with respect to the different training classes. In the experiments, a standard operating condition (SOC) and several extended operating conditions (EOCs) are designed based on the MSTAR dataset to test the proposed method. The results confirm its effectiveness and robustness.
1. Introduction
Synthetic aperture radar (SAR) image processing has potential value in both military and civil fields [1]. SAR target recognition determines the target category through the analysis of target characteristics in images, which can be performed in template-based [2, 3] and model-based [4–7] ways. Feature extraction is one of the key steps in SAR target recognition, as it realizes the extraction and representation of target characteristics. Commonly used SAR image features include geometric, transformation, and electromagnetic ones. In [8–17], regions (both target and shadow) and contours were used for SAR target recognition to describe the shape distribution. Transformation features fall into two categories. One type is produced by mathematical projection algorithms [18–22], typically principal component analysis (PCA) [18] and nonnegative matrix factorization (NMF) [20]. The other type uses image decomposition through signal processing algorithms, such as the monogenic signal [23] and bidimensional empirical mode decomposition (BEMD) [24]. In [25], a visual saliency model was employed for discriminative feature learning. Electromagnetic features describe the backscattering characteristics of the target in the SAR imaging process, such as polarization [26, 27] and scattering centers [28–31]. The classifier is the other key step of SAR target recognition; its classification mechanism is designed to classify the extracted features. A large number of classifiers have been used and verified in SAR target recognition, including support vector machines (SVM) [32, 33] and sparse representation-based classification (SRC) [34–36].
In recent years, with the maturity of deep learning theory and algorithms [37–39], a large number of SAR target recognition methods have been developed based on deep learning models, among which the most representative is the convolutional neural network (CNN) [40–46].
The results of feature extraction, as the input of the classifier, largely determine the classification accuracy. Therefore, designing new SAR image feature extraction algorithms is of great significance for target recognition. This paper proposes a SAR target recognition method based on complex BEMD (C-BEMD) [47, 48]. C-BEMD extends the traditional EMD [49, 50] and BEMD [24, 49] to the complex domain and can be directly used to process and analyze complex images. In [24], the authors applied BEMD to SAR image decomposition and target recognition and verified its effectiveness. However, SAR images are complex-valued, carrying both amplitude and phase information, and using image intensities alone discards the discrimination contained in the phase distribution. In this sense, C-BEMD can more effectively reflect the two-dimensional time-frequency characteristics of the target, thereby providing more sufficient information for the subsequent classification. For the bidimensional intrinsic mode functions (BIMFs) obtained by C-BEMD, this paper adopts the multitask sparse representation for decision-making in the classification stage. The multitask sparse representation is a general extension of the traditional single-task one that considers and exploits the relationships between several tasks. For the BIMFs decomposed by C-BEMD, their inner correlations can be exploited by the multitask sparse representation, thereby improving the overall reconstruction accuracy. Finally, the target category of the test sample is determined according to the total reconstruction errors of all the BIMFs with respect to the individual training classes. In the experiments, a variety of operating conditions are set up based on the MSTAR dataset, and the results verify the effectiveness and robustness of this method.
2. Basics of C-BEMD
Huang et al. first developed EMD to adaptively analyze nonstationary signals [49]. Unlike traditional signal decomposition methods, e.g., wavelet analysis, EMD does not impose prior assumptions such as linearity or stationarity on the data. In past research, EMD has been numerically validated to be more capable of describing patterns in nonstationary and nonlinear signals. As a natural generalization of EMD to 2D space, BEMD describes an image using several BIMFs [24, 49]: the original image is decomposed into high- and low-frequency components with some residues, so the generated BIMFs can reflect both the global and detailed information of the decomposed image. However, traditional BEMD is designed for real signals and cannot process complex signals or images. Yeh extended BEMD to the complex domain, enabling it to directly decompose complex matrices [47, 48]. According to [47, 48], the implementation of the C-BEMD algorithm can be summarized in the following steps.
Step 1.
Construct a two-dimensional band-pass filter as
$$B=\begin{bmatrix}H_{m\times n} & O_{m\times(N-n)}\\ O_{(M-m)\times n} & O_{(M-m)\times(N-n)}\end{bmatrix},\tag{1}$$
where $H_{m\times n}$ is the $m\times n$ pass block (a matrix of ones), $O_{p\times q}$ is a $p\times q$ zero matrix, and $M\times N$ is the size of the image. The values of $m$ and $n$ are determined as
$$m=\frac{M+1}{2},\qquad n=\frac{N+1}{2}.\tag{2}$$
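As a concrete illustration, the filter of equations (1) and (2) amounts to a binary mask over one quadrant of the matrix. A minimal sketch, assuming $H_{m\times n}$ is an all-ones block and taking $m$, $n$ as integer halves (the function name is ours):

```python
import numpy as np

def bandpass_filter(M, N):
    """Build the band-pass filter B of equation (1): an M x N mask whose
    top-left m x n block is ones (the pass region H) and whose remaining
    blocks are zeros, with m and n as in equation (2)."""
    m, n = (M + 1) // 2, (N + 1) // 2   # integer halves of the image size
    B = np.zeros((M, N))
    B[:m, :n] = 1.0                     # H_{m x n}: the pass block
    return B
```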
Step 2.
Construct four analytic signals as
$$\begin{aligned}
G_1(e^{ju},e^{jv})&=A(e^{ju},e^{jv})F(e^{ju},e^{jv}),\\
G_2(e^{ju},e^{jv})&=A(e^{ju},e^{jv})F(e^{-ju},e^{jv}),\\
G_3(e^{ju},e^{jv})&=A(e^{ju},e^{jv})F(e^{ju},e^{-jv}),\\
G_4(e^{ju},e^{jv})&=A(e^{ju},e^{jv})F(e^{-ju},e^{-jv}),
\end{aligned}\tag{3}$$
where $F(e^{ju},e^{jv})$ denotes the two-dimensional Fourier transform of the input image $f(x,y)$ and $A(e^{ju},e^{jv})$ is the frequency response of the filter $B$ from Step 1.
Step 3.
Take the two-dimensional inverse Fourier transforms of $G_1(e^{ju},e^{jv})$ and $G_4(e^{ju},e^{jv})$ and extract their real parts as $\bar{g}_1(x,y)$ and $\bar{g}_4(x,y)$. Take the two-dimensional inverse Fourier transforms of $G_2(e^{ju},e^{jv})$ and $G_3(e^{ju},e^{jv})$ and extract their imaginary parts as $\bar{g}_2(x,y)$ and $\bar{g}_3(x,y)$.
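Steps 2 and 3 can be sketched with the discrete Fourier transform, where replacing $e^{ju}$ by $e^{-ju}$ corresponds to reversing the DFT samples along that frequency axis while keeping the zero-frequency bin in place. The mask `A` is passed in precomputed; function and variable names are ours:

```python
import numpy as np

def analytic_components(f, A):
    """Sketch of Steps 2-3: form the four spectra G1..G4 of equation (3)
    by pairing the frequency mask A with sign-flipped copies of the image
    spectrum F, then keep the real parts of g1, g4 and the imaginary
    parts of g2, g3 after the inverse transform."""
    F = np.fft.fft2(f)

    def flip(S, axis):
        # F(e^{-ju}, .) <-> reverse the DFT samples along that axis,
        # keeping the zero-frequency bin in place
        return np.roll(np.flip(S, axis=axis), 1, axis=axis)

    G1 = A * F
    G2 = A * flip(F, 0)
    G3 = A * flip(F, 1)
    G4 = A * flip(flip(F, 0), 1)
    g1 = np.real(np.fft.ifft2(G1))
    g2 = np.imag(np.fft.ifft2(G2))
    g3 = np.imag(np.fft.ifft2(G3))
    g4 = np.real(np.fft.ifft2(G4))
    return g1, g2, g3, g4
```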
Step 4.
Apply BEMD to decompose $\bar{g}_1$, $\bar{g}_2$, $\bar{g}_3$, and $\bar{g}_4$, respectively. The obtained BIMFs are denoted as $\bar{g}_q^k$, $q=1,2,3,4$, with $N_q$ decomposition levels for the $q$th component.
Step 5.
Apply the operator $\mathrm{FQO}\{\cdot\}$ to each $\bar{g}_q^k$ to obtain the complex BIMFs as
$$g_q^k=\mathrm{FQO}\{\bar{g}_q^k\}(x,y).\tag{4}$$
In equation (4), the $\mathrm{FQO}$ operator is defined as
$$\mathrm{FQO}\{f\}(x,y)=\frac{1}{4}\Big[f(x,y)-f_1(x,y)+j\big(f_x(x,y)+f_y(x,y)\big)\Big],\tag{5}$$
where $f_1$, $f_x$, and $f_y$ are the companion components of $f$ defined in [47].
The detailed derivation and implementation of C-BEMD can be found in [47, 48].
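Equation (5) itself is a pointwise combination. A literal sketch, with the companion components $f_1$, $f_x$, and $f_y$ supplied as precomputed inputs:

```python
import numpy as np

def fqo(f, f1, fx, fy):
    """Equation (5): combine f with its companion components f1, fx, fy
    into one complex-valued BIMF.  The components themselves are assumed
    to be computed beforehand as described in [47]."""
    return 0.25 * ((f - f1) + 1j * (fx + fy))
```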
In this paper, C-BEMD is applied to the decomposition of complex SAR images, and the two-dimensional time-frequency characteristics of the targets are described through the multilevel BIMFs. Figure 1 shows the decomposition of a SAR image from the MSTAR dataset (Figure 1(a)), with the amplitude parts of the first three BIMFs shown in Figures 1(b)–1(d), respectively. The decomposition results effectively describe the characteristics associated with the target while complementing the original image with more detailed information. Therefore, this paper jointly uses the original image and the BIMFs decomposed by C-BEMD for the subsequent classification.
Illustration of the real parts of the decomposed BIMFs by C-BEMD: (a) original image, (b) 1st BIMF, (c) 2nd BIMF, and (d) 3rd BIMF.
3. Classification of Multilevel BIMFs for Target Recognition
3.1. Multitask Sparse Representation
The multitask sparse representation can be considered a united and compact form of several related sparse representation tasks [51, 52]. With the constraint of inner correlations, it can produce more precise and robust solutions than individually solved tasks. As reported in [53–57], the multitask sparse representation has been successfully applied to SAR target recognition to classify multiple views, features, resolutions, etc. This study employs it for the classification of the multilevel BIMFs generated by C-BEMD. Assume there are $M$ BIMFs, denoted as $y_1,\dots,y_M$, from the same test sample $y$; they are represented based on sparse representation as
$$\min_{A}\; g(A)=\sum_{l=1}^{M}\left\|y_l-D_l a_l\right\|_2^2,\tag{6}$$
where $D_l$ is the dictionary of the $l$th BIMF and $A=[a_1,\dots,a_M]$ stores the coefficient vectors of all the BIMFs.
Equation (6) minimizes the total reconstruction error but neglects the correlations between the tasks. As the decompositions come from the same image, the different levels of BIMFs are in fact correlated. The core of the multitask sparse representation is therefore a constraint on the coefficient matrix, and the optimization problem becomes
$$\min_{A}\; g(A)+\eta\left\|A\right\|_{0,2},\tag{7}$$
where $\|A\|_{0,2}$ is the $\ell_0/\ell_2$ norm of the coefficient matrix (the number of nonzero rows of $A$) and $\eta$ is a nonnegative constant acting as the regularization parameter.
As validated in related research [53–57], the coefficient vectors of the different components solved by equation (7) tend to share similar sparsity patterns originating from their inner correlations. Such a modification effectively improves the reconstruction precision, especially for pattern recognition problems. With the estimated coefficient matrix $\hat{A}=[\hat{a}_1,\dots,\hat{a}_M]$, the reconstruction errors of the test sample with respect to the individual training classes can be obtained, and the target category is determined as
$$\mathrm{Category}(y)=\arg\min_{i=1,\dots,C}\sum_{l=1}^{M}\left\|y_l-D_i^l\hat{a}_i^l\right\|_2^2,\tag{8}$$
where $D_i^l$ is the subdictionary of the $l$th BIMF for the $i$th class and $\hat{a}_i^l$ denotes the corresponding coefficients.
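Problems of the form (7) are commonly attacked greedily. The sketch below uses simultaneous orthogonal matching pursuit to enforce a shared support across the $M$ tasks (a standard greedy solver for such joint problems, not necessarily the solver used by the authors) and then applies the decision rule of equation (8); all names are ours:

```python
import numpy as np

def somp(Ds, ys, sparsity):
    """Simultaneous OMP: at every iteration pick the single atom index
    that best explains the residuals of ALL M tasks at once (the row
    sparsity enforced by the l0/l2 norm), then refit each task by least
    squares on the chosen support.  Ds[l] is the dictionary and ys[l]
    the observation of task l."""
    support = []
    residuals = [y.copy() for y in ys]
    for _ in range(sparsity):
        # score each atom by its total correlation with the M residuals
        scores = sum(np.abs(D.T @ r) for D, r in zip(Ds, residuals))
        support.append(int(np.argmax(scores)))
        for l, (D, y) in enumerate(zip(Ds, ys)):
            sub = D[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
            residuals[l] = y - sub @ coef
    return support

def classify(Ds, ys, labels, sparsity=5):
    """Decision rule of equation (8): total reconstruction error of the
    M BIMFs over the selected atoms of each class; smallest error wins."""
    support = somp(Ds, ys, sparsity)
    errors = {}
    for c in set(labels):
        idx = [i for i in support if labels[i] == c]
        err = 0.0
        for D, y in zip(Ds, ys):
            if idx:
                sub = D[:, idx]
                coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
                err += np.sum((y - sub @ coef) ** 2)
            else:
                err += np.sum(y ** 2)  # no atom of this class selected
        errors[c] = err
    return min(errors, key=errors.get)
```

In practice the columns of each $D_l$ would be the vectorized training BIMFs of all classes, so `labels` maps every atom to its class.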
3.2. Target Recognition
Figure 2 shows the basic flow of the proposed method for SAR target recognition. The training samples are first decomposed by C-BEMD to obtain the multilevel BIMFs, and a global dictionary is constructed for each of them accordingly. For the test sample, the same C-BEMD is used to decompose the corresponding levels of BIMFs. Then, the BIMFs of the test sample are jointly represented with the support of the constructed dictionaries. Finally, the target category of the test sample is determined according to the reconstruction errors from equation (8).
Flowchart of SAR target recognition based on multitask representation of BIMFs by C-BEMD.
In the actual implementation, the BIMFs obtained by C-BEMD are complex, with both amplitude and phase parts, so the two parts are extracted and used separately, as is the original SAR image. As shown in Figure 2, the $K$ BIMF channels for the dictionaries come from $K/2$ decomposition levels of C-BEMD: the former $K/2$ represent the amplitudes and the latter $K/2$ the phases. Specifically, in this paper the first three BIMFs are used together with the original SAR image, as shown in Figure 1. Both the global and local information of SAR targets can be characterized by these components. Therefore, the proposed method makes full use of the two-dimensional time-frequency characteristics of complex SAR images to improve the final recognition performance.
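The channel construction described above can be sketched as follows (a sketch; vectorizing each channel into dictionary columns is omitted):

```python
import numpy as np

def split_channels(complex_bimfs):
    """Form the K channels of Figure 2 from K/2 complex BIMFs: the
    amplitude parts first, followed by the phase parts."""
    amplitudes = [np.abs(b) for b in complex_bimfs]
    phases = [np.angle(b) for b in complex_bimfs]
    return amplitudes + phases
```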
4. Experiments
4.1. Preparation
The proposed method is tested on the MSTAR dataset. The dataset contains SAR images of the 10 types of targets shown in Figure 3, covering 0°–360° azimuth angles and typical depression angles such as 15°, 17°, 30°, and 45°. Due to its abundant samples, the MSTAR dataset has long been the benchmark for verifying SAR target recognition methods. Following existing research, this study uses the MSTAR dataset to set up typical operating conditions for the experiments, including the standard operating condition (SOC) and the extended operating conditions (EOCs) of configuration differences, depression angle differences, and noise interference.
Appearances of ten targets in the MSTAR dataset. (a) BMP2. (b) BTR70. (c) T72. (d) T62. (e) BRDM2. (f) BTR60. (g) ZSU23/4. (h) D7. (i) ZIL131. (j) 2S1.
Table 1 shows the training and test samples for the 10-class classification task under SOC, which come from 17° and 15° depression angles, respectively. All targets are from the same configurations, with only a small difference in depression angle, so the test and training samples tend to share high similarities and the recognition problem is relatively simple. Table 2 sets the training and test samples under configuration differences, involving 4 types of targets, among which the training and test samples of BMP2 and T72 come from completely different configurations. Table 3 shows the training and test samples from different depression angles: the training samples are from a 17° depression angle, while the test ones are from 30° and 45°, respectively. In addition, on the basis of the experimental setting in Table 1, noise is added to the test samples to generate test sets with different signal-to-noise ratios (SNR) [29], so that the proposed method can be evaluated under noise interference. Figure 4 shows some noisy SAR images at different SNRs, where the influence of the noise on the target appearance can be observed.
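The noisy test sets can be generated by scaling white Gaussian noise against the image's signal power to hit a target SNR; a minimal sketch of one plausible construction (the exact scheme of [29] may differ):

```python
import numpy as np

def add_noise(image, snr_db, rng=None):
    """Add white Gaussian noise to a (magnitude) SAR image so that the
    ratio of signal power to noise power matches the requested SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), image.shape)
    return image + noise
```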
Experimental setup under SOC.

| Set | Depr. | BMP2 | BTR70 | T72 | T62 | BRDM2 | BTR60 | ZSU23/4 | D7 | ZIL131 | 2S1 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Training | 17° | 232 (SN_9563) | 232 | 231 (SN_132) | 298 | 297 | 255 | 298 | 298 | 298 | 298 |
| Test | 15° | 194 (SN_9563) | 195 | 195 (SN_132) | 272 | 273 | 194 | 273 | 273 | 273 | 273 |
Experimental setup under configuration differences.

| Set | Depr. | BMP2 | BRDM2 | BTR70 | T72 |
|---|---|---|---|---|---|
| Training | 17° | 232 (SN_9563) | 297 | 232 | 231 (SN_132) |
| Test | 15°, 17° | 427 (SN_9566), 428 (SN_C21) | 0 | 0 | 425 (SN_812), 572 (SN_A04), 572 (SN_A05), 572 (SN_A07), 566 (SN_A10) |
Experimental setup under depression angle differences.

| Set | Depr. | 2S1 | BRDM2 | ZSU23/4 |
|---|---|---|---|---|
| Training | 17° | 298 | 297 | 298 |
| Test | 30° | 287 | 286 | 287 |
| Test | 45° | 302 | 302 | 302 |
SAR images at different SNRs: (a) original; (b) 10 dB; (c) 5 dB; (d) 0 dB; (e) −5 dB; (f) −10 dB.
Six reference methods are selected from existing research for comparison with the proposed one under the same conditions. The first comes from [24], which employed BEMD for SAR image feature extraction. The second used the visual saliency model for feature extraction [25], denoted as the VSM method. The third and fourth are CNN-based, using residual networks (Res-Net) [42] and deep features [46], respectively. The last two are based on multitask sparse representations and classify multiple features (extracted by PCA, kernel PCA, and NMF) [55] and multiresolution representations [56]; they are abbreviated as "multifeature" and "multiresolution," respectively. The following experiments are conducted sequentially under SOC and the three EOCs, and all methods are compared with quantitative results to reach effective conclusions.
4.2. SOC
Relying on the experimental setup in Table 1, the proposed method is tested under SOC. Figure 5 shows the classification confusion matrix of the 10 types of targets; the horizontal and vertical coordinates represent the true and predicted labels of the test samples, respectively, so the diagonal elements are the classification accuracies of the individual targets. This study defines the recognition rate as $P_{cr}=N_c/N_T$, where $N_c$ and $N_T$ denote the numbers of correctly classified and total test samples, respectively. The average recognition rates on the 10 types of targets achieved by the various methods are shown in Table 4. All methods achieve high recognition performance under SOC, but the proposed method outperforms the six reference methods with an average recognition rate of 99.34%. Under SOC, the training samples share high similarities with the test samples and effectively cover the situations in the test set, which contributes to the good performance of the CNN-based methods, i.e., Res-Net and deep feature. Compared with the BEMD method, this paper uses C-BEMD to effectively explore the time-frequency characteristics of the original complex SAR image and obtain more effective feature descriptions, so the final performance of the proposed method is better than that of the BEMD method. The multifeature and multiresolution methods use the multitask sparse representation in the classification stage, the same as the proposed one; the higher $P_{cr}$ of the proposed method shows that the BIMFs decomposed by C-BEMD have higher discriminability than the multiple features or resolutions.
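The recognition rate defined above is simply the normalized trace of a confusion matrix, assuming the matrix stores sample counts rather than percentages:

```python
import numpy as np

def recognition_rate(confusion_counts):
    """P_cr = N_c / N_T: correctly classified samples (the diagonal of a
    count-valued confusion matrix) divided by all test samples."""
    c = np.asarray(confusion_counts, dtype=float)
    return c.trace() / c.sum()
```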
Confusion matrix of the proposed method under SOC.
Recognition performance of different methods under SOC.

| Method | Proposed | BEMD | VSM | Res-Net | Deep feature | Multifeature | Multiresolution |
|---|---|---|---|---|---|---|---|
| Pcr (%) | 99.34 | 99.02 | 99.06 | 99.16 | 99.21 | 99.12 | 99.14 |
4.3. Configuration Differences
Relying on the experimental setup in Table 2, the proposed method is tested under configuration differences. Table 5 lists the classification results for each configuration of BMP2 and T72 with the corresponding $P_{cr}$; the average recognition rate over all configurations reaches 98.52%. The recognition performance of the various methods is shown in Table 6. Compared with the SOC case, the $P_{cr}$s of the different methods decrease to some extent because of the configuration differences. In particular, the Res-Net and deep feature methods show significant drops, as the training samples inadequately cover the situations in the test set. In contrast with the traditional BEMD method, the $P_{cr}$ of the proposed method improves, which proves that C-BEMD can more effectively extract the complex-domain features of SAR images and thereby improve overall recognition performance. The better performance than the multifeature and multiresolution methods again validates the higher discriminability of the BIMFs decomposed by C-BEMD.
Results of the proposed method under configuration differences.

| Target | Configuration | BMP2 | BRDM2 | BTR70 | T72 | Pcr (%) |
|---|---|---|---|---|---|---|
| BMP2 | SN_9566 | 422 | 2 | 1 | 2 | 98.83 |
| BMP2 | SN_C21 | 425 | 1 | 1 | 1 | 99.30 |
| T72 | SN_812 | 3 | 1 | 2 | 419 | 98.59 |
| T72 | SN_A04 | 1 | 2 | 4 | 565 | 98.78 |
| T72 | SN_A05 | 3 | 5 | 3 | 561 | 98.08 |
| T72 | SN_A07 | 2 | 6 | 1 | 563 | 98.43 |
| T72 | SN_A10 | 3 | 2 | 6 | 555 | 98.06 |
| Average | | | | | | 98.52 |
Recognition performance of different methods under configuration differences.

| Method | Proposed | BEMD | VSM | Res-Net | Deep feature | Multifeature | Multiresolution |
|---|---|---|---|---|---|---|---|
| Pcr (%) | 98.52 | 98.02 | 98.08 | 98.08 | 98.06 | 97.92 | 98.14 |
4.4. Depression Angle Differences
Relying on the experimental setup in Table 3, the proposed method is tested under depression angle differences. All methods perform the classification at 30° and 45° depression angles, respectively, and the results are summarized in Figure 6. At a 30° depression angle, the average recognition rates of the various methods remain above 93%, indicating that the image differences caused by the depression angle change are relatively small. At a 45° depression angle, however, the performance of all methods drops significantly, as the image differences become much more pronounced. The proposed method maintains the highest $P_{cr}$ in both cases, which proves its robustness to depression angle differences. Compared with the BEMD method, the performance of the proposed method is greatly improved, illustrating the effectiveness of C-BEMD for SAR image feature extraction. With significant differences between the training and test samples, the Res-Net and deep feature methods experience the largest degradations at the 45° depression angle among all methods, because the trained networks can hardly discern test samples with low similarity to the training ones.
Recognition performance of different methods at 30° and 45° depression angles.
4.5. Noise Interference
Relying on the constructed noisy test sets at multiple SNRs, the proposed method is evaluated under noise interference. Figure 7 plots the average recognition rates achieved by the different methods as a function of SNR. The proposed method achieves the highest $P_{cr}$ at each noise level, which verifies its robustness to noise interference. The C-BEMD used in this paper analyzes and extracts features from SAR images in the complex domain and obtains features that are robust against noise; as can be seen from its implementation steps, the decomposition itself performs a degree of denoising. In the classification process, the discriminability of the different BIMFs is combined and fused through the multitask sparse representation. Therefore, the proposed method maintains a high level of performance under noise interference. Among the six reference methods, the BEMD method outperforms the remaining ones, further validating the noise robustness of the decomposed BIMFs. The two CNN-based methods achieve the lowest $P_{cr}$s, especially at low SNRs, because networks trained on SAR images at high SNRs adapt poorly to test sets with heavy noise.
Recognition performance of different methods under noise interferences.
5. Conclusion
This paper applies C-BEMD to SAR image feature extraction and target recognition. C-BEMD extends traditional BEMD to the complex domain and can directly process complex matrices. In this paper, C-BEMD is used to extract multilevel complex BIMFs from complex SAR images, which effectively reflect the time-frequency characteristics of SAR targets. In the classification stage, the multitask sparse representation characterizes the extracted BIMFs, and the target category of the test sample is determined according to the reconstruction errors. Based on the MSTAR dataset, the proposed method is tested under SOC and typical EOCs including configuration differences, depression angle differences, and noise interference. The experimental results show that the proposed method achieves the highest average recognition rate (99.34%) under SOC, and its robustness under the three EOCs also exceeds that of the reference methods.
Data Availability
The MSTAR dataset used to support the findings of this work is available online at http://www.sdms.afrl.af.mil/datasets/mstar/.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
[1] K. El-Darymli, E. W. Gill, P. McGuire, D. Power, and C. Moloney, "Automatic target recognition in synthetic aperture radar imagery: a state-of-the-art review," vol. 4, pp. 6014–6058, 2016.
[2] L. M. Novak, G. J. Owirka, W. S. Brower, and A. L. Weaver, "The automatic target-recognition system in SAIP," vol. 10, no. 2, pp. 187–202, 1997.
[3] L. M. Novak, G. J. Owirka, and W. S. Brower, "Performance of 10- and 20-target MSE classifiers," vol. 36, no. 4, pp. 1279–1289, 2000.
[4] S. M. Verbout, W. W. Irving, and A. S. Hanes, "Improving a template-based classifier in a SAR automatic target recognition system by using 3-D target information," vol. 6, no. 1, pp. 53–76, 1993.
[5] J. R. Diemunsch and J. Wissinger, "Moving and stationary target acquisition and recognition (MSTAR) model-based automatic target recognition: search technology for a robust ATR," in Proceedings of the 5th SPIE Algorithms for Synthetic Aperture Radar Imagery, Orlando, FL, USA, May 1998, pp. 481–492.
[6] T. D. Ross, J. J. Bradley, L. J. Hudson, and M. P. O'Connor, "SAR ATR: so what's the problem? An MSTAR perspective," in Proceedings of the 6th SPIE Algorithms for Synthetic Aperture Radar Imagery, Orlando, FL, USA, August 1999, pp. 662–672.
[7] E. Keydel, S. Lee, and J. Moore, "MSTAR extended operating conditions: a tutorial," in Proceedings of the SPIE, San Diego, CA, USA, August 1996, pp. 228–242.
[8] M. Amoon and G. Rezai-rad, "Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features," vol. 8, no. 2, pp. 77–85, 2014.
[9] X. Zhang, Z. Liu, S. Liu, D. Li, Y. Jia, and P. Huang, "Sparse coding of 2D-slice Zernike moments for SAR ATR," vol. 38, no. 2, pp. 412–431, 2017.
[10] C. Clemente, L. Pallotta, D. Gaglione, A. De Maio, and J. J. Soraghan, "Automatic target recognition of military vehicles with Krawtchouk moments," vol. 53, no. 1, pp. 493–500, 2017.
[11] B. Ding, G. Wen, and C. Ma, "Target recognition in synthetic aperture radar images using binary morphological operations," vol. 10, no. 4, 046006, 2016.
[12] S. Cui, F. Miao, and Z. Jin, "Target recognition of synthetic aperture radar images based on matching and similarity evaluation between binary regions," vol. 7, pp. 154398–154413, 2019.
[13] C. Shan, B. Huang, and M. Li, "Binary morphological filtering of dominant scattering area residues for SAR target recognition," vol. 2018, 9680465, 2018.
[14] S. Papson and R. M. Narayanan, "Classification via the shadow region in SAR imagery," vol. 48, no. 2, pp. 969–980, 2012.
[15] G. C. Anagnostopoulos, "SVM-based target recognition from synthetic aperture radar images using target region outline descriptors," vol. 71, pp. e2934–e2939, 2009.
[16] J. Tan, X. Fan, and S. Wang, "Target recognition of SAR images by partially matching of target outlines," vol. 33, no. 7, pp. 865–881, 2019.
[17] X. Zhu, Z. Huang, and Z. Zhang, "Automatic target recognition of synthetic aperture radar images via Gaussian mixture modeling of target outlines," vol. 194, 162922, 2019.
[18] A. K. Mishra, "Validation of PCA and LDA for SAR ATR," in Proceedings of the IEEE TENCON, Hyderabad, India, November 2008, pp. 1–6.
[19] A. K. Mishra and T. Motaung, "Application of linear and nonlinear PCA to SAR ATR," in Proceedings of the IEEE 25th International Conference Radioelektronika, Pardubice, Czech Republic, April 2015, pp. 349–354.
[20] Z. Cui, J. Feng, Z. Cao, H. Ren, and J. Yang, "Target recognition in synthetic aperture radar images via non-negative matrix factorisation," vol. 9, no. 9, pp. 1376–1385, 2015.
[21] Y. Huang, J. Pei, J. Yang, B. Wang, and X. Liu, "Neighborhood geometric center scaling embedding for SAR ATR," vol. 50, no. 1, pp. 180–192, 2014.
[22] M. Yu, G. Dong, and H. Fan, "SAR target recognition via local sparse representation of multi-manifold regularized low-rank approximation," vol. 10, no. 2, 211, 2018.
[23] G. Dong, G. Kuang, N. Wang, L. Zhao, and J. Lu, "SAR target recognition via joint sparse representation of monogenic signal," vol. 8, no. 7, pp. 3316–3328, 2015.
[24] M. Chang, X. You, and Z. Cao, "Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition," vol. 7, pp. 135720–135731, 2019.
[25] M. Amrani, F. Jiang, Y. Xu, S. Liu, and S. Zhang, "SAR-oriented visual saliency model and directed acyclic graph support vector metric based target classification," vol. 11, no. 10, pp. 3794–3810, 2018.
[26] L. M. Novak, S. D. Halversen, G. Owirka, and M. Hiett, "Effects of polarization and resolution on SAR ATR," vol. 33, no. 1, pp. 102–116, 1997.
[27] W. Yang, J. Lu, and Z. Cao, "A new algorithm of target classification based on maximum and minimum polarizations," in Proceedings of the CIE International Conference on Radar, Shanghai, China, October 2006, pp. 1–4.
[28] L. C. Potter and R. L. Moses, "Attributed scattering centers for SAR ATR," vol. 6, no. 1, pp. 79–91, 1997.
[29] B. Ding, G. Wen, J. Zhong, C. Ma, and X. Yang, "A robust similarity measure for attributed scattering center sets with application to SAR ATR," vol. 219, pp. 130–143, 2017.
[30] B. Ding, G. Wen, X. Huang, C. Ma, and X. Yang, "Target recognition in synthetic aperture radar images via matching of attributed scattering centers," vol. 10, no. 7, pp. 3334–3347, 2017.
[31] X. Zhang, "Noise-robust target recognition of SAR images based on attribute scattering center matching," vol. 10, no. 2, pp. 186–194, 2019.
[32] Q. Zhao and J. C. Principe, "Support vector machines for SAR automatic target recognition," vol. 37, no. 2, pp. 643–654, 2001.
[33] H. Liu and S. Li, "Decision fusion of sparse representation and support vector machine for SAR image target recognition," vol. 113, pp. 97–104, 2013.
[34] J. J. Thiagarajan, K. N. Ramamurthy, and P. Knee, "Sparse representations for automatic target classification in SAR images," in Proceedings of the 4th International Symposium on Communication, Control and Signal Processing, Limassol, Cyprus, March 2010, pp. 1–4.
[35] H. Song, K. Ji, Y. Zhang, X. Xing, and H. Zou, "Sparse representation-based SAR image target classification on the 10-class MSTAR data set," vol. 6, no. 1, 26, 2016.
[36] C. Ning, W. Liu, G. Zhang, and X. Wang, "Synthetic aperture radar target recognition using weighted multi-task kernel sparse representation," vol. 7, pp. 181202–181212, 2019.
[37] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the NIPS, Lake Tahoe, NV, USA, December 2012, pp. 1096–1105.
[38] C. Szegedy, W. Liu, and Y. Jia, "Going deeper with convolutions," in Proceedings of the CVPR, Boston, MA, USA, June 2015, pp. 1–9.
[39] X. X. Zhu, D. Tuia, and L. Mou, "Deep learning in remote sensing: a comprehensive review and list of resources," vol. 5, no. 4, pp. 8–36, 2017.
[40] D. E. Morgan, "Deep convolutional neural networks for ATR from SAR imagery," in Proceedings of the SPIE, Szczecin, Poland, July 2015, pp. 1–13.
[41] S. Chen, H. Wang, and F. Xu, "Target classification using the deep convolutional networks for SAR images," vol. 54, no. 6, pp. 1685–1697, 2016.
[42] H. Furukawa, "Deep learning for target classification from SAR imagery: data augmentation and translation invariance," vol. 117, no. 182, pp. 13–17, 2017.
[43] J. Zhao, Z. Zhang, W. Yu, and T.-K. Truong, "A cascade coupled convolutional neural network guided visual attention method for ship detection from SAR images," vol. 6, pp. 50693–50708, 2018.
[44] L. Wang, X. Bai, and F. Zhou, "SAR ATR of ground vehicles based on ESENet," vol. 11, no. 11, 1316, 2019.
[45] P. Zhao, K. Liu, H. Zou, and X. Zhen, "Multi-stream convolutional neural network for SAR automatic target recognition," vol. 10, no. 9, 1473, 2018.
[46] M. Amrani and F. Jiang, "Deep feature extraction and combination for synthetic aperture radar target classification," vol. 11, no. 4, 042616, 2017.
[47] M.-H. Yeh, "The complex bidimensional empirical mode decomposition," vol. 92, no. 2, pp. 523–541, 2012.
[48] M.-H. Yeh, "Multi-focus color image fusion based on bivariate and complex bidimensional empirical mode decompositions," in Proceedings of the IEEE International Conference on Signal Processing, Communication and Computing (ICSPCC), Hong Kong, China, August 2012, pp. 358–363.
[49] N. E. Huang, Z. Shen, and S. Long, "The empirical mode decomposition and the Hilbert spectrum for nonlinear and nonstationary time series analysis," in Proceedings of the Royal Society of London, London, UK, October 1998.
[50] Y. Qin, L. Qiao, Q. Wang, X. Ren, and C. Zhu, "Bidimensional empirical mode decomposition method for image processing in sensing system," vol. 68, pp. 215–224, 2018.
[51] J. A. Tropp, A. C. Gilbert, and M. J. Strauss, "Algorithms for simultaneous sparse approximation. Part II: convex relaxation," vol. 86, no. 3, pp. 589–602, 2006.
[52] S. Ji, D. Dunson, and L. Carin, "Multitask compressive sensing," vol. 57, no. 1, pp. 92–106, 2009.
[53] H. Zhang, N. M. Nasrabadi, Y. Zhang, and T. S. Huang, "Multi-view automatic target recognition using joint sparse representation," vol. 48, no. 3, pp. 2481–2497, 2012.
[54] B. Ding and G. Wen, "Exploiting multi-view SAR images for robust target recognition," vol. 9, no. 11, 1150, 2017.
[55] S. Liu and J. Yang, "Target recognition in synthetic aperture radar images via joint multifeature decision fusion," vol. 12, no. 1, 016012, 2018.
[56] Z. Zhang, "Joint classification of multiresolution representations with discrimination analysis for SAR ATR," vol. 27, no. 4, 043030, 2018.
[57] L. Zhu, "Selection of multi-level deep features via Spearman rank correlation for synthetic aperture radar target recognition using decision fusion," vol. 8, pp. 133914–133927, 2020.