Space Precession Target Classification Based on Radar High-Resolution Range Profiles

Precession is a common micromotion form of space targets, introducing additional micro-Doppler (m-D) modulation into the radar echo. Effective classification of space targets is of great significance for further micromotion parameter extraction and identification. Feature extraction is a key step in the classification process, largely influencing the final classification performance. This paper presents two methods for classifying different types of space precession targets from high-resolution range profiles (HRRPs). We first establish the precession model of space targets, analyze their scattering characteristics, and then compute electromagnetic data for the cone target, cone-cylinder target, and cone-cylinder-flare target. Experimental results demonstrate that a support vector machine (SVM) using histograms of oriented gradient (HOG) features achieves good results, whereas a deep convolutional neural network (DCNN) obtains higher classification accuracy. The DCNN combines the feature extractor and the classifier into a single model, automatically mining the high-level signatures of HRRPs through training. In addition, the efficiency of the two classification processes is compared on the same dataset.


Introduction
In recent years, space activities have expanded with the growing emphasis on space utilization by countries around the world [1]. The number of spacecraft, satellites, ballistic missiles, and debris objects [2] has increased dramatically, and the importance of space target classification is becoming ever more significant. A number of researchers have explored this problem [3][4][5][6][7][8], but it remains a difficult and challenging task. Various classification methods based on different features have been proposed for specific application scenarios and detection measures. The main features that have been widely studied and applied for space target classification include radar high-resolution range profiles (HRRPs) [4,7,8], micromotion features [9,10], ISAR images [6], RCS features [11], and polarization features [12]. Space targets usually exhibit several complex micromotion forms, such as spinning, rotation, tumbling, and precession, and the micromotion parameters generally differ between targets. Micro-Doppler (m-D) analysis of the echo signal can be used for micromotion parameter inversion, providing important feature information for target classification.
In general, traditional m-D analysis first extracts handcrafted features from raw radar data or time-frequency distributions and then feeds these features into a classifier. Commonly used handcrafted features include the Doppler frequency shift, period duration [9], instantaneous frequency, and other predefined features. For example, Du et al. [10] defined a 3-dimensional feature vector extracted from spectrograms and achieved 96% classification accuracy over three classes (a single walking person, two people walking, and a moving wheeled vehicle). Berndt [13] extracted the rotation-rate harmonic and the locations of modulating scatterers from HRRPs, providing effective information for the classification and identification of airborne targets. Beom-Seok et al. [14] extracted eight statistical and geometrical features from waveforms decomposed by empirical-mode decomposition (EMD), showing encouraging accuracy for mini-UAV classification. Such handcrafted features have achieved good results in micromotion target classification. However, the universality of the theoretical parameter models behind handcrafted features remains open to discussion, since a certain amount of prior knowledge is required when designing them. In addition, the strong task specificity of these features slows down database updates and upgrades.
This paper studies the use of HRRPs to classify space targets. Inspired by the rapid recent development of image classification technology, we treat HRRP sequences of space targets as image samples and attempt to obtain high-level essential features from these images. The first method extracts a commonly used predefined feature, the histogram of oriented gradients (HOG), and inputs it into a multiclass support vector machine (SVM). The second method draws on the deep learning theory proposed by Hinton and Salakhutdinov [15] in 2006. The essence of deep learning is to map data through a neural network with multiple hidden layers and thereby obtain deep information about the nature of the data. Deep learning combines the feature extractor and the classifier into one framework, so features can be learned directly from data, which greatly reduces the workload of designing and extracting handcrafted features. As an important deep learning structure, the deep convolutional neural network (DCNN) plays an important role in automatic feature learning and nonlinear feature extraction. For the classification of three types of space targets, we designed a 14-layer DCNN with 3 convolutional layers. To determine the optimal structure, we trained several DCNNs while varying hyperparameters, including the number of convolutional layers, the number of filters, and the filter size.
The rest of this paper is organized as follows. Section 2 establishes the precession model of space targets, comprising the motion model and the radar echo model. In Section 3, the scattering characteristics of three types of space targets are analyzed. Two classification methods, using the SVM and the DCNN, respectively, are presented in Section 4. Finally, we draw a conclusion in Section 5.

Precession Model of Space Targets
This section takes the cone target as an example to analyze the geometric relationship between the radar and a precessing target. The parametric representation of the target echo under a wideband radar system is then derived, and the method for obtaining the simulated target echo signal used in the subsequent experiments is given.
2.1. Geometry and Motion Model. As shown in Figure 1, Q − UVW is the radar observation coordinate system, O − xyz is the body coordinate system with z the target spin axis, and O − XYZ is the reference coordinate system parallel to Q − UVW, where O is the target centroid. ω_c and ω_s are the coning and spinning angular velocity vectors, respectively. The position vector at time t from the radar station to the mth scattering center on the target is R_t = R_0 + ΔR_mt, where R_0 is the translational component and ΔR_mt is the micromotion component [16]. The radial distance of the mth scattering center is then

$$R_m(t)=\mathbf{n}^T\mathbf{R}_t=\mathbf{n}^T\mathbf{R}_0+\mathbf{n}^T\mathbf{G}_c\mathbf{G}_s\mathbf{r}_m,\qquad \mathbf{G}_c=e^{\hat{e}_c t},\quad \mathbf{G}_s=e^{\hat{e}_s t}\tag{1}$$

where n ≈ R_0/‖R_0‖ is the unit vector along the radar line of sight (LOS); E is the 3 × 3 identity matrix appearing in the Rodrigues expansion of the rotation matrices; G_s and G_c are the rotation matrices generated by spinning and coning, respectively; r_m is the position vector of the mth scattering center in O − XYZ at the initial time; ê_c and ê_s are the skew-symmetric matrices of ω_c and ω_s, respectively; and ω_c = (ω_cX, ω_cY, ω_cZ)^T and ω_s = (ω_sX, ω_sY, ω_sZ)^T. ê_c and ê_s can be expressed as

$$\hat{e}_c=\begin{pmatrix}0&-\omega_{cZ}&\omega_{cY}\\\omega_{cZ}&0&-\omega_{cX}\\-\omega_{cY}&\omega_{cX}&0\end{pmatrix},\qquad \hat{e}_s=\begin{pmatrix}0&-\omega_{sZ}&\omega_{sY}\\\omega_{sZ}&0&-\omega_{sX}\\-\omega_{sY}&\omega_{sX}&0\end{pmatrix}\tag{2}$$

From (1) we can see that the microrange of the precession target is a superposition of multiple sinusoidal components. The scattering center at the top of the cone is only affected by coning, whereas the other scattering centers are also modulated by spinning. The motions of these scattering centers generally do not follow a pure sinusoidal pattern and depend on the angle of view, the spinning frequency, and the coning frequency.
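As a minimal numerical sketch of this model (function and variable names are illustrative; the rotation matrices are built with the Rodrigues formula, consistent with the skew-symmetric matrices defined in (2)):

```python
import numpy as np

def skew(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector, as in (2)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def rotation(w, t):
    """Rodrigues rotation matrix exp(w_hat * t) for angular-velocity vector w."""
    wn = np.linalg.norm(w)
    if wn == 0.0:
        return np.eye(3)
    K = skew(w / wn)                      # unit-axis skew matrix
    a = wn * t                            # rotation angle at time t
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)

def microrange(n, w_c, w_s, r_m, t):
    """Radial micromotion of one scattering center: n . (G_c G_s r_m),
    i.e. the micromotion term of (1) without the translational part."""
    G_c = rotation(w_c, t)                # coning rotation
    G_s = rotation(w_s, t)                # spinning rotation
    return float(n @ (G_c @ G_s @ r_m))
```

For a scattering center on the spin axis (such as the cone tip), spinning leaves the radial distance unchanged, matching the observation above that the tip is modulated only by coning.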
The coordinate system that considers only precession is established in Figure 2. O − x′y′z′ is the precession coordinate system, and Oz′ is the precession axis. θ denotes the precession angle; γ denotes the angle between the radar LOS and Oz′, defined as the attitude angle; and φ denotes the initial phase angle. β is the angle between the LOS and the symmetry axis Oz, defined as the pitch angle. According to their structure, space targets can be divided into rotationally symmetric targets and asymmetric targets that contain rotationally symmetric structures. The reflected echo of a rotationally symmetric structure is not modulated by spinning and depends only on β. From the geometric relationship [16], the time-varying β can be written as

$$\cos\beta(t_s)=\cos\gamma\cos\theta+\sin\gamma\sin\theta\cos(\omega_c t_s+\varphi)\tag{3}$$

where t_s denotes the slow time.
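As a minimal sketch, assuming the standard closed-form relation for this geometry (γ the angle between the LOS and the precession axis, θ the precession angle, φ the initial phase), the time-varying pitch angle can be evaluated numerically:

```python
import numpy as np

def pitch_angle(t_s, theta, gamma, omega_c, phi):
    """Time-varying pitch angle beta(t_s), in radians, for a precessing
    rotationally symmetric target. Assumes the common closed form
    cos(beta) = cos(gamma)cos(theta) + sin(gamma)sin(theta)cos(w_c*t + phi)."""
    c = (np.cos(gamma) * np.cos(theta)
         + np.sin(gamma) * np.sin(theta) * np.cos(omega_c * t_s + phi))
    return np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding at +/-1
```

As expected, β oscillates between |γ − θ| and γ + θ at the coning frequency, and reduces to the constant attitude angle γ when the precession angle θ is zero.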

Wideband Radar Echo Modulation Model.
For wideband radars with high range resolution, the range profile of each scattering center on the target can be observed. The linear frequency modulated (LFM) signal is the most commonly used waveform for ground-based space target detection radars.
The LFM pulse duration is generally on the order of microseconds, while the micro-Doppler modulation frequency of space targets is typically a few hertz and the target size a few meters. The maximum migration of the scattering centers within one pulse is therefore less than half a resolution cell, so the influence of micromotion on a single LFM pulse is negligible. Here we use the "stop-go" model to analyze the echo of the LFM signal with carrier frequency f_c, which can be expressed as

$$s(t_q,t_s)=\sum_m\sigma_m\,\mathrm{rect}\!\left(\frac{t_q-\tau_m}{\tau}\right)\exp\!\left\{j2\pi\left[f_c\left(t-\tau_m\right)+\frac{\mu}{2}\left(t_q-\tau_m\right)^2\right]\right\}\tag{4}$$

where τ is the pulse width, μ = B/τ is the modulation rate, t_q = t − t_s is the fast time and t_s is the slow time, τ_m = 2R_m(t_s)/c is the time delay, and σ_m is the scattering coefficient of the mth scattering center; B is the bandwidth and c is the speed of light. After Fourier transform and elimination of the residual terms, the HRRP of the target is obtained from the linearly demodulated echo as

$$S(r,t_s)=\sum_m\sigma_m\,\mathrm{sinc}\!\left[\frac{2B}{c}\big(r-\Delta R_m(t_s)\big)\right]\tag{5}$$

where rect(·) is the rectangular window function, sinc(·) is the sinc function, and ΔR_m(t_s) is the microrange after pulse compression. The peak of the HRRP is located at

$$r=r_m(t_s)\tag{6}$$

where r_m(t_s) is the microrange of the mth scattering center.
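The pulse-compressed profile can be sketched numerically via the equivalent frequency-domain formulation: sample each scatterer's response over the signal band and apply an IFFT, which yields sinc-shaped peaks at the scatterers' ranges with resolution c/2B. Parameter values here are illustrative, not the paper's:

```python
import numpy as np

def hrrp(ranges, sigmas, B=2e9, fc=10e9, nfreq=512):
    """One HRRP from point scatterers, via the frequency-domain equivalent
    of pulse compression. Peaks land at each scatterer's range; the range
    cell is c/(2B) and ranges must lie inside the unambiguous window."""
    c = 3e8
    f = fc + np.linspace(0.0, B, nfreq, endpoint=False)
    E = np.zeros(nfreq, dtype=complex)
    for r, s in zip(ranges, sigmas):
        E += s * np.exp(-1j * 4 * np.pi * f * r / c)  # two-way phase history
    profile = np.abs(np.fft.ifft(E))
    dr = c / (2 * B)                                  # range-cell size
    axis = np.arange(nfreq) * dr
    return axis, profile
```

With B = 2 GHz the range cell is 7.5 cm, so a scatterer at 1.5 m produces a peak in range bin 20, consistent with the peak location in (6).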
The speed of space targets can reach several kilometers per second; however, the required compensation accuracy for the translational motion is relatively low after pulse compression. For space target detection systems, other sensors can provide rough target indication for ground-based high-resolution radars, and the echoes can be compensated using the estimated speed. Therefore, the influence of the targets' high-speed translational motion on the range profiles is not considered.
In general, it is necessary to obtain dynamic echo data of the target before simulating the wideband radar signal. Since a single HRRP is related only to the pitch angle, the quasistatic method can be used to simulate the dynamic echo data of the space target via the following steps.
Step 1. Obtain the static HRRP sequences of the target in the whole attitude range.
Step 2. Build the motion model of the target to get the pitch angle β(t_s) relative to the radar at slow time t_s.
Step 3. Linearly interpolate the static HRRP sequences over the whole attitude range according to β(t_s) to get the range profile S(r, t_s) at the corresponding moment.
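The three steps above can be sketched as a linear interpolation over a table of static HRRPs (array shapes and names are assumptions):

```python
import numpy as np

def quasistatic_profiles(static_hrrps, static_angles, beta_ts):
    """Quasistatic simulation, Steps 1-3: given static HRRPs sampled over
    pitch angle (Step 1) and the pitch-angle history beta_ts from the
    motion model (Step 2), linearly interpolate each range cell over
    angle (Step 3).
    static_hrrps: (n_angles, n_range); static_angles: sorted, in degrees;
    beta_ts: (n_pulses,) pitch angles in degrees.
    Returns a (n_pulses, n_range) time-range profile."""
    n_pulses, n_range = len(beta_ts), static_hrrps.shape[1]
    out = np.empty((n_pulses, n_range))
    for j in range(n_range):  # interpolate each range cell independently
        out[:, j] = np.interp(beta_ts, static_angles, static_hrrps[:, j])
    return out
```

Stacking the interpolated profiles along slow time directly gives the time-range profiles used as classification samples later in the paper.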

Scattering Characteristic Analysis
There are many types of space targets, but their shapes are simple compared with targets such as airplanes and ships. The sphere-cone structure is widely used in space targets for aerodynamic reasons. After the electromagnetic model of a target is obtained, its static scattering characteristics at any frequency and angle can be calculated by the physical optics (PO) method. In this paper, three typical rotationally symmetric space targets are studied, as shown in Figure 3: the cone target, the cone-cylinder target, and the cone-cylinder-flare target, denoted T1, T2, and T3, respectively, for convenience. The height of the three target models is set to 3 m, and the radius of the conic node is 7.5 cm.
We use the X band and horizontal polarization in the experiment. The frequency ranges from 10 GHz to 11.98 GHz with a step of 30 MHz, and the pitch angle ranges from 0° to 180° with a step of 0.2°. Three-dimensional RCS plots of the three types of targets are shown in Figure 4. As is apparent in Figure 4(a), the cone target contains three equivalent scattering centers, corresponding to the point on the spherical cap and the two points on the bottom edge. The RCS of the cone target increases sharply around 79° and 180° due to specular scattering from the conical surface and specular reflection from the bottom surface, respectively. The cone-cylinder target contains four equivalent scattering centers, as seen in Figure 4(b); its RCS increases sharply around 76.5°, 90°, and 180° due to specular scattering from the conical surface and specular reflections from the cylindrical surface and the bottom surface, respectively. Similarly, Figure 4(c) shows five equivalent scattering centers and peaks at 69°, 74.6°, 90°, and 180°. It should be noted that some scattering centers are always visible while others are visible only within a certain range of angles, which is called the occlusion effect. This effect invalidates some micromotion feature extraction methods based on the continuity of scattering centers and has itself become a research hotspot [17].
The electromagnetic data of each target comprise 901 angles and 67 frequencies according to the simulation parameters. When constructing samples, we first set the observation time and the precession parameters, such as the radar LOS, the precession angle, and the precession frequency, and then calculate the time-varying pitch angle according to (3). The final HRRP sequences are generated using the quasistatic method given in Section 2. For uniformity, we call these sequences time-range profiles. Figure 5 shows the pitch angle in each frame for the parameter settings listed in Table 1, and time-range profiles of the three targets using the same parameters are shown in Figure 6. There are some subtle differences between the three images, mainly due to differences in the number, position, scattering intensity, and occlusion of the scattering centers on each target. Overall, however, the images are easily confused because the targets share the same precession parameters, even though their shapes differ. This indicates that a powerful tool is needed to classify the time-range profiles, since manual features may fail to capture these differences.

Classification of Space Precession Targets
Samples of space targets are often more difficult to obtain than those of natural objects, so the number of available samples is quite limited. Two classification methods are designed here: one is an SVM based on statistical learning theory, which is suitable for pattern classification with small samples; the other is a DCNN based on deep learning theory, which represents the state of the art in natural image classification. All samples are generated by uniform sampling. Each target adopts the same parameter settings, and the sampling interval and range of each parameter are shown in Table 2. In total, 2700 samples of 434 × 343 pixels are obtained, with 900 samples per class. All samples are resized to 200 × 200 to reduce dimensionality. The available data set for each class is randomly divided into 70% for training and 30% for testing. Figure 7 displays some of the resized time-range profiles in the training set.
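A minimal sketch of this dataset preparation, assuming a simple separable linear resize (the paper does not specify the resampling method) and a random 70/30 index split:

```python
import numpy as np

def resize_bilinear(img, out_h=200, out_w=200):
    """Separable linear interpolation of a 2-D array onto an
    out_h x out_w grid; a stand-in for any standard image resize."""
    h, w = img.shape
    rows = np.linspace(0, h - 1, out_h)
    cols = np.linspace(0, w - 1, out_w)
    # interpolate along rows first, then along columns
    tmp = np.stack([np.interp(rows, np.arange(h), img[:, j])
                    for j in range(w)], axis=1)
    return np.stack([np.interp(cols, np.arange(w), tmp[i])
                     for i in range(out_h)], axis=0)

def split_70_30(n_samples, seed=0):
    """Random 70/30 train/test split of sample indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(round(0.7 * n_samples))
    return idx[:cut], idx[cut:]
```

For the 900 samples per class used here, the split yields 630 training and 270 test indices per class.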
4.1. Classification Using SVM. We use HOG features and a multiclass SVM classifier to classify the time-range profiles of space targets. The essence of the HOG descriptor is to represent image features using statistics of the image gradient: it constructs features by accumulating a histogram of gradient directions over local regions of the image. The resulting features describe an image through the distribution of local gradients or edge orientations without requiring precise knowledge of the corresponding gradient or edge positions. HOG features were originally used for human detection [18] owing to their good invariance to scale, translation, and geometric changes.
First, the image is preprocessed by gamma normalization and grayscale conversion, and the gradient of each pixel in the horizontal and vertical directions is calculated. Next, the image is divided into cells of size C × C. To obtain the gradient orientation histogram of a cell, each pixel in the cell casts a weighted vote for an orientation bin. To achieve better invariance to background changes, the cells are then grouped into larger blocks containing B × B cells, over which the histograms are normalized. The HOG feature vector of the entire image is obtained by concatenating the HOG features of all blocks.
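A simplified sketch of this pipeline: a bare-bones HOG (without the gamma normalization or interpolated voting of the full descriptor in [18]) followed by a linear multiclass SVM from scikit-learn. The toy striped images merely stand in for time-range profiles; the default C = 20, B = 2 match the parameters chosen below:

```python
import numpy as np
from sklearn.svm import LinearSVC

def hog_features(img, C=20, B=2, n_bins=9):
    """Simplified HOG: per-cell orientation histograms over C x C cells,
    L2-normalized within overlapping B x B blocks, then concatenated."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180        # unsigned orientation
    ch, cw = img.shape[0] // C, img.shape[1] // C
    hist = np.zeros((ch, cw, n_bins))
    for i in range(ch):
        for j in range(cw):
            m = mag[i*C:(i+1)*C, j*C:(j+1)*C].ravel()
            a = ang[i*C:(i+1)*C, j*C:(j+1)*C].ravel()
            b = np.minimum((a * n_bins / 180).astype(int), n_bins - 1)
            np.add.at(hist[i, j], b, m)               # weighted vote per bin
    blocks = []
    for i in range(ch - B + 1):                       # overlapping blocks
        for j in range(cw - B + 1):
            v = hist[i:i+B, j:j+B].ravel()
            blocks.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(blocks)

# Toy usage: three "classes" of synthetic 200 x 200 stripe images with
# different spatial periods stand in for the three target types.
rng = np.random.default_rng(0)
X, y = [], []
for label, period in enumerate((10, 20, 40)):
    for _ in range(12):
        img = (np.sin(2 * np.pi * np.arange(200) / period)[None, :]
               * np.ones((200, 1)))
        X.append(hog_features(img + 0.1 * rng.normal(size=(200, 200))))
        y.append(label)
X, y = np.array(X), np.array(y)
clf = LinearSVC(max_iter=5000).fit(X, y)   # one-vs-rest multiclass SVM
```

For a 200 × 200 image with C = 20 and B = 2 this yields 9 × 9 blocks of 36 values each, i.e. a 2916-dimensional feature vector.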
As the two main parameters, the cell size C and the block size B determine the form of the HOG feature. B governs sensitivity to local illumination changes: a smaller B helps capture the significance of local pixels and suppresses the effect of background changes on the HOG features, and we set B to 2 based on experience. On the other hand, increasing C helps capture large-scale spatial information, but small-scale detail may be lost, so choosing the right C is critical because it greatly affects the subsequent classification performance. Figure 8 shows the HOG feature vector extracted from one of the training samples when C is 25 × 25. Table 3 indicates that the smaller C is, the longer the HOG feature and the more time it consumes, whereas the classification accuracy does not always improve. The accuracy reaches its highest value, 93.09%, when C is 20 × 20; when C is further reduced, the accuracy first decreases and then remains almost unchanged. To analyze the misclassifications, the confusion matrix for the 20 × 20 cell size is shown in Table 4, from which it can be seen that classes T2 and T3 are relatively hard to discriminate, largely because of their structural similarity.

4.2. Classification Using DCNN. In recent years, DCNNs have been used to process micro-Doppler signatures, mainly in the fields of human detection [19], activity classification [20], and hand gesture recognition [21]. These works have generally chosen the time-frequency diagram, i.e., the spectrogram, as the study object. Both the spectrogram and the time-range profile contain the micro-Doppler information of the target, but the latter carries more range-structure information observed by wideband radar. Since DCNNs are good at automatically extracting deep features from images, the time-range profile has advantages over the spectrogram for micromotion target recognition.
A typical layer of DCNN mainly consists of a convolutional layer, an activation layer, and a pooling layer.
The convolutional layer contains multiple convolutional filters acting as a feature extractor; it convolves the image by sliding these filters along the input vertically and horizontally. The activation layer performs a nonlinear transformation on the input data to better discriminate between classes. The pooling layer learns nothing itself but reduces overfitting and the number of connections to the following layers. The proposed network comprises fourteen layers, including three convolutional layers with filter sizes 10 × 10, 5 × 5, and 2 × 2, respectively; two max-pooling layers; and one fully connected layer followed by a softmax layer. All three convolutional layers use the rectified linear unit (ReLU) as the activation function, each followed by a cross-channel normalization layer that replaces each element with a value normalized over a certain number of adjacent channels. Dropout with 50% probability was applied to the fully connected layer. The overall DCNN architecture is shown in Figure 9. The network was trained from scratch with the stochastic gradient descent with momentum (SGDM) optimizer, a constant learning rate of 0.0001, and a batch size of 10. We used an NVIDIA Quadro K2100M GPU with 2 GB of memory; training for all 20 epochs took about 1200 s. Before settling on this architecture, we studied the effect of hyperparameters on classification performance by varying the numbers of convolutional and fully connected layers. Simply increasing the number of convolutional layers slightly reduced the classification accuracy, and a similar trend appeared when altering the fully connected layers; worse, architectures with more than four fully connected layers failed to converge. The filter sizes and filter numbers were chosen by random search.
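A sketch of this architecture in PyTorch (the paper's implementation is not necessarily PyTorch; the filter counts, strides, and LRN window size are assumptions, while the filter sizes, two pooling layers, dropout rate, and the 64 filters in the third convolutional layer follow the text):

```python
import torch
import torch.nn as nn

# Sketch of the 14-layer DCNN described above, for one-channel
# 200 x 200 time-range profiles and 3 output classes.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=10, stride=2),   # conv1: 10 x 10 filters
    nn.ReLU(),
    nn.LocalResponseNorm(5),                      # cross-channel normalization
    nn.MaxPool2d(2),                              # pool1
    nn.Conv2d(16, 32, kernel_size=5),             # conv2: 5 x 5 filters
    nn.ReLU(),
    nn.LocalResponseNorm(5),
    nn.MaxPool2d(2),                              # pool2
    nn.Conv2d(32, 64, kernel_size=2),             # conv3: 2 x 2 filters
    nn.ReLU(),
    nn.Flatten(),
    nn.Dropout(0.5),                              # 50% dropout
    nn.Linear(64 * 21 * 21, 3),                   # fully connected, 3 classes
    nn.Softmax(dim=1),
)
out = model(torch.zeros(1, 1, 200, 200))          # one resized profile
```

For training one would use SGD with momentum at the paper's learning rate of 0.0001 and batch size 10; in practice the final Softmax is usually dropped in favor of CrossEntropyLoss, which applies it internally.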
We used the same training and testing data sets as in the SVM experiment. The training process plotted in Figure 10 shows the accuracy obtained at each epoch, and the final classification result of the DCNN is shown in Table 5. Overall, the DCNN achieved an accuracy of 97.41%, outperforming the SVM; however, it still tends to confuse T2 and T3. We show the 64 learned convolutional filters of the third convolutional layer in Figure 11. The visualization suggests that the DCNN has learned to use the line features of the signature, which correspond to the microrange curves of the scattering centers in the time-range profiles, to distinguish among the three classes. Although the visualization of the network features reveals hierarchical structure, it is hard to extract physical insight from it in our setting. These results are consistent with those obtained by the SVM.

Conclusion
In this paper, an SVM using HOG features and a 14-layer DCNN have been applied to time-range profiles containing micro-Doppler characteristics for the classification of space precession targets. In a three-class scenario on electromagnetic computation data, the DCNN achieved 97.41% accuracy, outperforming the SVM at 93.09% accuracy but consuming much more time. In the first experiment, we extracted HOG features of the time-range profiles before applying an SVM, whereas no explicit domain knowledge for feature extraction was used in the second experiment; the profiles themselves served as input to the DCNN. The visualization of the convolutional layers indicates a potential role for deep learning in several stages of radar signal processing. In the future, we will study the classification performance of the DCNN under different signal-to-noise ratios and sample sizes and focus on reducing the training time.

Figure 1: Geometric relationship between the radar and the precession target.

Figure 2: Geometry of the precession cone target.

Figure 7: Random display of the time-range profiles in the training set.

Figure 8: Visualization of the HOG feature. The length of the HOG feature vector depends on the image size and the parameter values.

Figure 10: Test accuracy of the proposed DCNN.

Figure 11: Visualization of learned features in the last convolutional layer.

Table 3: SVM classification results using different cell sizes.

Table 4: Confusion matrix using the SVM with a 20 × 20 cell size.

Table 5: Classification results of the DCNN.