Localization and tracking of three-dimensional targets, a core problem in military applications, has received increasing attention. This paper proposes a closed-loop system for detecting 3D maneuvering targets that comprises data acquisition, direction of arrival (DOA) estimation, triangulation localization, and trajectory prediction. The system first uses several L-shaped sensor arrays to sample the signals of maneuvering targets. Then the 2D ESPRIT algorithm and a maximum likelihood algorithm are introduced to estimate the positions of the spatial targets. Thirdly, an autoregressive (AR) particle filter (PF) algorithm is applied to predict the locations at the next moment. Finally, the localization process is guided by the predicted positions, forming a positive feedback closed loop. Experimental results show that this system enhances the robustness and accuracy of localization and tracking for three-dimensional maneuvering targets.
1. Introduction
Localization and tracking play a critical role in military applications such as radar warning and battlefield monitoring. The core requirement of each application is accurate localization. The most popular localization tool is the Global Positioning System (GPS) [1]. The location accuracy of GPS can sometimes be better than 0.1 m, but this sensor is usually used for self-localization. Multimedia sensors are another major class of localization tools, including CCD sensors [2, 3], stereo cameras [4], and infrared sensors. However, multimedia sensors require extremely high image resolution, fast network throughput, and distinguishable features, which limits their widespread use. RFID [5], wireless sensor networks [6], and hybrid sensor networks [7] are generally used for short-range target localization because their wireless transmission is easily affected by the environment and the distance.
The sensor array [8–12] is chosen in this paper to localize targets for the following reasons: a sensor array can localize multiple targets simultaneously; it can cancel common noise and interference, which increases the signal-to-noise ratio of the received data; and a localization and tracking system using sensor arrays is little affected by the environment, which improves accuracy, reliability, and robustness. Sensor arrays are widely used for localization. For example, Tidd used an 8-element uniform circular sensor array to precisely localize a far-field target with the multiple signal classification algorithm [9]. Fallon and Godsill [10] and Talantzis [11] used microphone arrays to locate acoustic sources. Lin compared the DOA estimation performance of different array geometries, such as cross arrays, circular arrays, and L-shaped arrays. The Cramér–Rao lower bound of the uniform L-shaped array was nearly the lowest, second only to the circular array, while the direction-finding resolution and orientation consistency of the L-shaped array were much better than those of the circular array [12].
After localization, a tracking process is usually carried out to enhance the performance of the localization and tracking system. Tidd used the Kalman filter to process the DOA and speed of the target, offering strategic, fast, and low-cost advantages [9]. Fallon and Godsill designed a track-before-detect framework to obtain the probability distribution of the particle filter, which could guide the resampling process to create better particles at a comparable computational load [10]. This framework could increase the stability of the tracking system. Talantzis used the particle filter to predict the target trajectory [11]. The predicted position was used to assess the accuracy of the localization by the microphone array and then replaced inaccurate localizations, sometimes serving as the true location when targets could not be detected.
This paper proposes a closed-loop localization and tracking system for three-dimensional targets. Firstly, several uniform L-shaped arrays are chosen to sample the spatial target signals, and the DOA of the 3D target is estimated with a two-dimensional estimation of signal parameters via rotational invariance techniques (2D ESPRIT) algorithm. Then the positions of the targets are calculated by a novel maximum likelihood (ML) algorithm from the estimated DOA and the locations of the L-shaped sensor arrays. Thirdly, the autoregressive particle filter algorithm is proposed to predict the target trajectory at the next moment. Finally, the system uses the predicted position to guide the localization process. This system combines the localization and tracking algorithms to form a positive feedback closed loop, which increases the stability, robustness, and accuracy of the localization and tracking system.
The outline of this paper is as follows. In Section 2, the signal-reception model based on the sensor array is developed. The proposed closed-loop localization and tracking system is described in Section 3. In Section 4, a series of experiments is conducted to validate that the feedback system enhances the performance of the localization and tracking system. Finally, conclusions and perspectives are presented in Section 5.
2. Signal-Reception Model of Sensor Array
We consider a uniform L-shaped sensor array to localize and track the 3D maneuvering target. Take the sensor at the intersection as the origin and the two arms of the L-shaped array as the axes to build a coordinate system (shown in Figure 1). Each arm of the array has M elements, and all sensors in the array are identical. The spacing between adjacent elements is dx along the x-axis and dy along the y-axis.
Figure 1: A uniform L-shaped sensor array receiving a far-field narrowband signal.
The L-shaped sensor array collects the narrowband signals emitted from a far-field target located at (x,y,z). The wavelength of the signal is λ, and (φ,θ) is the DOA of the spatial target, with elevation angle φ∈[0°,90°] and azimuth angle θ∈(−180°,180°].
The array manifold vector, called the steering vector, of the uniform L-shaped array is given as
(1)
a_x(φ,θ) = [1, u(φ,θ), …, u^{M−1}(φ,θ)]^T,
a_y(φ,θ) = [1, v(φ,θ), …, v^{M−1}(φ,θ)]^T,
with
u^k(φ,θ) = exp(j 2π d_x k cosφ cosθ / λ),
v^k(φ,θ) = exp(j 2π d_y k cosφ sinθ / λ),
where (·)T, j, x, y denote matrix transposition, imaginary unit, x-axis, and y-axis, respectively. The snapshot signal vector X(t)=[x1(t),x2(t),…,xM(t)]T, Y(t)=[y1(t),y2(t),…,yM(t)]T on x- and y-axis sampled by the L-shaped array is written as
(2)X(t)=AxS(t)+Nx(t),Y(t)=AyS(t)+Ny(t),
where S(t)=[s1(t),s2(t),…,sn(t)]^T is the complex envelope vector of the incoming signals, n is the number of detectable targets, and Nx(t) and Ny(t) are Gaussian white noise vectors with mean 0 and variance σ² determined by the signal-to-noise ratio (SNR). Ax and Ay are the array manifold matrices
(3)Ax=[ax(φ1,θ1),…,ax(φn,θn)],Ay=[ay(φ1,θ1),…,ay(φn,θn)].
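As a concrete illustration of the model in (1)–(3), the following Python sketch (the paper's own experiments use MATLAB) generates snapshot matrices X(t) and Y(t) for a set of targets. The function names and the unit-power complex envelopes are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def steering_vectors(phi, theta, M, d, lam):
    """Steering vectors a_x, a_y of Eq. (1); phi, theta in radians."""
    k = np.arange(M)
    ax = np.exp(1j * 2 * np.pi * d * k * np.cos(phi) * np.cos(theta) / lam)
    ay = np.exp(1j * 2 * np.pi * d * k * np.cos(phi) * np.sin(theta) / lam)
    return ax, ay

def simulate_snapshots(doas, M=8, d=0.5, lam=1.0, snapshots=64, snr_db=20, rng=None):
    """Generate X(t), Y(t) per Eq. (2) for targets given as [(phi, theta), ...]."""
    rng = np.random.default_rng(rng)
    n = len(doas)
    # Array manifold matrices Ax, Ay of Eq. (3).
    Ax = np.column_stack([steering_vectors(p, t, M, d, lam)[0] for p, t in doas])
    Ay = np.column_stack([steering_vectors(p, t, M, d, lam)[1] for p, t in doas])
    S = np.exp(1j * 2 * np.pi * rng.random((n, snapshots)))   # unit-power envelopes
    sigma = 10.0 ** (-snr_db / 20.0)                          # noise std for the given SNR
    def noise():
        return sigma * (rng.standard_normal((M, snapshots))
                        + 1j * rng.standard_normal((M, snapshots))) / np.sqrt(2.0)
    return Ax @ S + noise(), Ay @ S + noise()
```

The defaults mirror the simulation settings of Section 4 (M=8, dx=dy=0.5 m, λ=1 m, 64 snapshots, SNR=20 dB).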
3. Closed-Loop Localization and Tracking System for the 3D Target
With the data received by the L-shaped sensor arrays, this paper uses the 2D ESPRIT and ML algorithms to obtain the targets' positions, then predicts the trajectories by integrating an AR model into the PF algorithm, and finally guides the targets' localization process with the predicted positions. The whole system forms a closed feedback loop and improves the localization and tracking accuracy.
3.1. 2D ESPRIT Algorithm
The 2D ESPRIT algorithm [11] is an extension of the ESPRIT algorithm. From the snapshot data X(t), Y(t) received by the arrays, the spatial covariance matrices are calculated. Then the signal subspace is constructed using a singular value decomposition (SVD) of the covariance matrix. From the eigenvalues obtained in the final eigendecomposition, u(φi,θi) and v(φi,θi) defined in (1) are recovered, and the elevation and azimuth angles (φi,θi) are computed from them.
The 2D ESPRIT algorithm can be described as follows.
Step 1.
Decompose X(t) and Y(t) into four submatrices of the same size, X1(t), X2(t), Y1(t), and Y2(t):
(4)X1(t)=X(1:M-1,:)(t),X2(t)=X(2:M,:)(t),Y1(t)=Y(1:M-1,:)(t),Y2(t)=Y(2:M,:)(t).
Define Ax1=Ax(1:M−1,:), Ax2=Ax(2:M,:), Ay1=Ay(1:M−1,:), Ay2=Ay(2:M,:), so
(5)
Ax2 = Ax1Φx, Ay2 = Ay1Φy,
X1(t) = Ax1S(t) + Nx1(t), X2(t) = Ax2S(t) + Nx2(t),
Y1(t) = Ay1S(t) + Ny1(t), Y2(t) = Ay2S(t) + Ny2(t),
where Φx and Φy describe the translational invariance of the sensor array. They are defined as
(6)Φx=diag(u(φ1,θ1),…,u(φn,θn)),Φy=diag(v(φ1,θ1),…,v(φn,θn)).
Step 2.
Calculate the cross-covariance matrices of these four submatrices:
(7)
R_{X1Y1} = E[X1(t)Y1(t)^H] = Ax1 Rs Ay1^H + F_{Nx,Ny},
R_{X2Y1} = E[X2(t)Y1(t)^H] = Ax1 Φx Rs Ay1^H + F_{Nx,Ny},
R_{X2Y2} = E[X2(t)Y2(t)^H] = Ax1 Φx Rs Φy^H Ay1^H + F_{Nx,Ny},
where Rs = E[S(t)S^H(t)] is the signal autocorrelation matrix and (·)^H denotes the conjugate transpose. F_{Nx,Ny} contains the noise contribution. The noise is usually much weaker than the received signals and can be ignored, so we set F_{Nx,Ny} = 0.
Step 3.
Construct a special matrix C as in (8), and then apply the SVD to C to obtain the signal subspace Es:
(8) C = [R_{X1Y1}; R_{X2Y1}; R_{X2Y2}], [U, S, V] = svd(C),
(9) Es = [E0; E1; E2] = [A; AΦx; AΦxΦy^H] T = U(:, 1:n).
There are two unknown quantities, A and T. A is the transfer matrix produced from Ax1 and Ay1, and T is an invertible intermediate matrix. E0, E1, and E2 are the blocks of the signal subspace Es.
Step 4.
Let Ψx = T^{−1}ΦxT and Ψy = T^{−1}Φy^H T; then E1 = E0Ψx and E2 = E1Ψy, and therefore
(10)Ψx=(E0HE0)-1E0HE1,Ψy=(E1HE1)-1E1HE2.
Step 5.
Apply an eigendecomposition to the matrices Ψx and Ψy; the resulting eigenvalues equal the diagonal elements of Φx and Φy^H:
(11) [Vx, Φx] = eig(Ψx), [Vy, Φy^H] = eig(Ψy).
Step 6.
Both Φx and Φy^H carry DOA information. Because they are computed separately, the eigenvalues belonging to the same target are not automatically paired, so postprocessing is needed. Define a new matrix G = Vy^H Vx and adjust the correspondence between Φx and Φy^H according to the maximum value in each row of G.
Step 7.
Compute the DOA of the targets:
(12)
θk = arctan( angle(v(φk,θk)) / angle(u(φk,θk)) ),
φk = arccos( λ·sqrt( angle²(u(φk,θk)) + angle²(v(φk,θk)) ) / (2πd) ),
where angle(·) denotes the phase of a complex number and d = dx = dy.
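Steps 1–7 can be sketched in Python as follows (the paper's experiments use MATLAB). This is a minimal illustration, not the authors' implementation: it assumes d/λ = 0.5 as in Section 4, estimates Ψx and Ψy by least squares per Eq. (10), and pairs the eigenvalues with G = Vy^H Vx per Step 6.

```python
import numpy as np

def esprit_2d(X, Y, n, d=0.5, lam=1.0):
    """Estimate (phi_k, theta_k), k=1..n, from L-shaped array snapshots X, Y (M x T)."""
    M, T = X.shape
    X1, X2, Y1, Y2 = X[:-1], X[1:], Y[:-1], Y[1:]        # Step 1: shifted subarrays
    R11 = X1 @ Y1.conj().T / T                            # Step 2: Eq. (7) covariances
    R21 = X2 @ Y1.conj().T / T
    R22 = X2 @ Y2.conj().T / T
    C = np.vstack([R11, R21, R22])                        # Step 3: Eq. (8)
    Es = np.linalg.svd(C)[0][:, :n]                       # signal subspace, Eq. (9)
    m = M - 1
    E0, E1, E2 = Es[:m], Es[m:2 * m], Es[2 * m:]
    Psi_x = np.linalg.lstsq(E0, E1, rcond=None)[0]        # Step 4: Eq. (10)
    Psi_y = np.linalg.lstsq(E1, E2, rcond=None)[0]
    wx, Vx = np.linalg.eig(Psi_x)                         # Step 5: Eq. (11)
    wy, Vy = np.linalg.eig(Psi_y)
    pair = np.argmax(np.abs(Vy.conj().T @ Vx), axis=1)    # Step 6: pair via G = Vy^H Vx
    u = wx[pair]
    v = np.conj(wy)                                       # Psi_y eigenvalues estimate conj(v)
    au, av = np.angle(u), np.angle(v)
    theta = np.arctan2(av, au)                            # Step 7: Eq. (12)
    phi = np.arccos(np.clip(lam * np.sqrt(au**2 + av**2) / (2 * np.pi * d), -1.0, 1.0))
    return phi, theta
```

Note that np.arctan2 returns azimuths in the full (−180°, 180°] range only up to the phase ambiguity discussed in Section 4.1; the sign still has to be disambiguated there.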
3.2. Localization through the Maximum Likelihood Algorithm
Using the estimated DOA and the positions of the sensor arrays, a maximum likelihood algorithm is proposed to perform triangulation localization. As shown in Figure 2(a), there are three identical uniform L-shaped arrays located at (0,0,0), (a,b,0), and (c,d,0), satisfying a≠c≠0 and b≠d≠0.
Figure 2: Localization by three uniform L-shaped sensor arrays. (a) Three L-shaped arrays locate the 3D target; (b) projection onto the Oyz plane.
The triangulation problem can be transformed into finding the intersection S of three known rays. Theoretically there is a unique answer, but due to DOA estimation errors the three rays will not intersect (see Figure 2(b)). Therefore, this paper uses a novel ML algorithm that searches a local region for the point nearest to the three rays.
Definition 1.
The angle between a point and a ray is the angle, no greater than 180°, formed by the ray and the line connecting the point to the ray's starting point.
Theorem 2.
In a 2D plane, the point nearest to three rays lies inside the triangle formed by their pairwise intersections.
For example, the angle between the point S and the ray L2P3 is ∠SL2P3. The point S nearest to the three rays (L2P3, L1P3, and L3P1) lies inside the triangle P1P2P3.
This theorem can be extended to 3D space: the point nearest to three rays lies in the intersection of the regions determined by the projection triangles formed by the three rays on the Oxy, Oxz, and Oyz planes.
The maximum likelihood algorithm is described as follows.
Firstly, project the three rays onto the Oxy, Oxz, and Oyz planes and obtain the intersection points of the projected rays on each plane; from these, derive the value range on each axis. Taking the Oyz plane as an example, if the vertices of the intersection triangle are P1(y1,z1), P2(y2,z2), and P3(y3,z3), then the range of y is YOyz ∈ [min(y1,y2,y3), max(y1,y2,y3)]. The range of z on the Oyz plane is obtained in the same way.
Secondly, intersect the ranges of the x-, y-, and z-axes from the three planes to create a spatial search region for the maximum likelihood point. Taking the y-axis as an example, the intersection is [max(min(YOxy), min(YOyz)), min(max(YOxy), max(YOyz))]. Then divide this 3D region into NX×NY×NZ cells with a self-adaptive interval. Assuming the angle between a point and a ray follows a normal distribution, calculate for every grid point the total likelihood of its angles to the three rays:
(13) P(x,y,z) = (1/√(2π)) [ exp(−θ1²/(2π²)) + exp(−θ2²/(2π²)) + exp(−θ3²/(2π²)) ],
where θi is the angle between the point (x,y,z) and the ith ray.
Finally, search for the maximum of P(x,y,z); the maximizing point (x,y,z) is taken as the target position.
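The grid search over the angle likelihood of Eq. (13) can be sketched as follows (Python; the function names are ours, and for brevity the search box is passed in directly rather than derived from the projection-triangle construction described above):

```python
import numpy as np

def ray_angle(p, origin, direction):
    """Angle (<= pi) between the ray and the line from the ray's origin to p (Definition 1)."""
    w = p - origin
    c = np.dot(w, direction) / (np.linalg.norm(w) * np.linalg.norm(direction))
    return np.arccos(np.clip(c, -1.0, 1.0))

def ml_locate(origins, directions, bounds, n=21):
    """Grid-search the box bounds = ((xmin,xmax),(ymin,ymax),(zmin,zmax)) for the
    point maximizing the Gaussian angle likelihood of Eq. (13)."""
    axes = [np.linspace(lo, hi, n) for lo, hi in bounds]
    best, best_p = -np.inf, None
    for x in axes[0]:
        for y in axes[1]:
            for z in axes[2]:
                p = np.array([x, y, z])
                score = sum(np.exp(-ray_angle(p, o, d) ** 2 / (2 * np.pi ** 2))
                            for o, d in zip(origins, directions))
                if score > best:
                    best, best_p = score, p
    return best_p
```

In practice the grid resolution n would be chosen adaptively, as the paper's NX×NY×NZ self-adaptive partition suggests.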
3.3. Autoregressive Particle Filter Algorithm
After localization, this paper introduces an autoregressive particle filter (ARPF), which combines AR models [13] with the particle filter [14] to predict the state of the targets at the next moment. This algorithm overcomes the difficulty of designing the unknown state equations, noise matrices, and initial state in the particle filter and also provides a way to improve the accuracy of the AR model.
The p-rank AR model (AR(p) in short) used for prediction can be denoted as
(14) y_t = ε_t − φ_1·y_{t−1} − ⋯ − φ_p·y_{t−p},  t = p+1, p+2, …, T,
where y_t is the predicted position, y_{t−1},…,y_{t−p} are the historical states of the same target, φ1,φ2,…,φp are the autoregressive model coefficients, and ε_t is Gaussian white noise with mean 0 and variance ξ².
The ARPF algorithm is carried out as follows. (1) Form the initial particles. (2) Use the AR model, instead of a motion equation, to calculate the positions of the particles at the next moment. (3) Obtain the prediction as the weighted sum of the particles. (4) Update the probability of each particle and perform importance resampling to form the new particle cluster. (5) Return to (2) until the target leaves the detection area.
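To make steps (1)–(5) concrete, here is a minimal one-dimensional sketch in Python (one instance per coordinate axis would run in practice; the paper's experiments use MATLAB). The function names, the AR order p=2, the noise levels, and the least-squares coefficient fit are our illustrative assumptions; fit_ar returns coefficients in the form y_t ≈ c1·y_{t−1} + c2·y_{t−2}, which matches Eq. (14) up to the sign convention of the coefficients.

```python
import numpy as np

def fit_ar(y, p=2):
    """Least-squares AR(p) coefficients c with y_t ~ c[0] y_{t-1} + ... + c[p-1] y_{t-p}."""
    y = np.asarray(y, dtype=float)
    A = np.column_stack([y[p - 1 - k : len(y) - 1 - k] for k in range(p)])
    return np.linalg.lstsq(A, y[p:], rcond=None)[0]

def arpf_step(particles, history, measurement, proc_std=0.5, meas_std=2.0, rng=None):
    """One ARPF cycle, steps (2)-(4): AR propagation of the particles in place of a
    motion equation, weighting against the new ML measurement, weighted-sum
    estimate, and importance resampling."""
    rng = np.random.default_rng(rng)
    c = fit_ar(history, p=2)
    # Step (2): each particle plays the role of y_{t-1} in the AR(2) recursion.
    pred = (c[0] * particles + c[1] * history[-2]
            + proc_std * rng.standard_normal(len(particles)))
    # Step (4): particle weights from the likelihood of the new measurement.
    w = np.exp(-(pred - measurement) ** 2 / (2 * meas_std ** 2))
    w /= w.sum()
    estimate = float(np.sum(w * pred))                        # Step (3): weighted sum
    resampled = pred[rng.choice(len(pred), len(pred), p=w)]   # importance resampling
    return estimate, resampled
```

Refitting the AR coefficients at every step from the accumulated position history is what lets the filter avoid a hand-designed state equation.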
3.4. Localization and Tracking System
The localization and tracking system presented here contains the whole process of signal collection (model building), DOA estimation, and tracking and prediction. The pseudocode of the system is described in Algorithm 1.
Algorithm 1
While (the target is in the detecting area of L-shaped sensor arrays)
If (it is the first moment the arrays detect the target)
(1) Use all the sensor arrays to sample the signal of the target.
(2) Calculate the position of the target by maximum likelihood algorithm.
(3) Select three sensor arrays which are nearest to the targets.
Else
(1) Use the selected three sensor arrays to sample the signal.
(2) Get the location by ML algorithm.
End
If (the number of measured positions for the target < 4)
The value obtained by the ML algorithm is regarded as the final position.
Else
(1) Use ARPF algorithm to predict the position of the target at the next moment.
(2) If (the number >4)
Use the predicted position to guide targets’ localization process which is discussed in Section 4.
End
End
Time++;
End
This closed-loop system integrates the localization and tracking algorithms described above into a whole, forming a positive feedback loop that increases the robustness and accuracy of target localization and tracking.
4. Simulation
The paper chooses the American F22 fighter as the simulation target. The parameters of the F22 can be found at http://www.aerospaceweb.org/aircraft/fighter/f22/. Random trajectories of the F22 are created with the GPSoft navigation and positioning toolbox in MATLAB, as shown in Figure 3. There are two flight trajectories: F22-1 flies during [195, 399] s and F22-2 during [171, 397] s. The planes emit sine complex-envelope signals omnidirectionally as they move along their trails.
Figure 3: Three L-shaped arrays sensing 3D maneuvering targets.
We choose three identical uniform L-shaped sensor arrays to localize and track the 3D targets. The arrays are fixed at (0,0,0) m, (60000, −12000, 0) m, and (40000, 24000, 0) m. The array parameters from Section 2 are M=8, dx=dy=0.5 m, λ=1 m, and SNR=20 dB. The number of snapshots is 64, and the localization frequency is 1 Hz.
The experiment adopts Windows XP SP3 as the software platform, and AMD Athlon(tm) II X2 250 3.01 GHz and 2 GB memory as the hardware platform. The programming environment is MATLAB R2010b.
4.1. DOA Estimation
At a certain time, F22-1 is located at (52266, −5342, 8561) m and F22-2 at (96898, 4164, 9191) m; see the rhombuses in Figure 3. With the signal vectors X(t) and Y(t) sampled by the sensor arrays, the DOAs of the two targets are estimated using the 2D ESPRIT algorithm. The results are shown in Table 1, where the first item in brackets is the elevation and the second is the azimuth.
Table 1: DOA estimation with the 2D ESPRIT algorithm.

                      Array L1         Array L2          Array L3
Real angle/°
  F22-1               (9.25, −5.84)    (39.99, 139.28)   (15.07, −67.31)
  F22-2               (5.41, 2.46)     (12.85, 23.66)    (8.67, −19.22)
Estimated angle/°
                      (5.14, 2.49)     (40.02, 139.28)   (14.99, 112.65)
                      (9.69, 174.12)   (12.63, 23.69)    (8.58, 160.79)
By running tens of Monte Carlo simulations and averaging the DOA estimates, it can be found that the average angle estimation error is less than 0.2°. Most of the time the 2D ESPRIT algorithm works well, but when the incident DOAs are very close to each other, the errors become somewhat larger; the algorithm needs to be improved for this situation.
Besides, there are two other problems. First, the DOA estimates of the same target do not appear in the same row. Second, the estimated azimuths all lie in [0, 180]°; when the incident azimuth is negative, the estimate equals the true angle plus 180°.
Therefore, before localization by the maximum likelihood algorithm, the DOA estimates from the same target must be grouped together and the azimuth must be disambiguated. When no other prior information is available, the positions of the targets at the previous moment are used to confirm the azimuth and to group the out-of-order DOA estimates.
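A minimal sketch of this azimuth disambiguation might look as follows (Python; resolve_azimuth and its argument layout are hypothetical, not the paper's code). The estimate reported in [0, 180]° is kept as-is or shifted by 180°, whichever is closer to the azimuth implied by the target's previous position relative to the array:

```python
import numpy as np

def resolve_azimuth(theta_est_deg, prev_pos, array_pos):
    """Choose between theta and theta - 180 deg using the previous target position.
    prev_pos and array_pos are (x, y, z) tuples in the common coordinate frame."""
    dx = prev_pos[0] - array_pos[0]
    dy = prev_pos[1] - array_pos[1]
    theta_ref = np.degrees(np.arctan2(dy, dx))      # azimuth implied by last position
    candidates = (theta_est_deg, theta_est_deg - 180.0)
    # Pick the candidate with the smaller circular distance to the reference azimuth.
    return min(candidates, key=lambda t: abs((t - theta_ref + 180.0) % 360.0 - 180.0))
```

Grouping the estimates by target can be done the same way, assigning each estimated angle pair to the target whose predicted DOA it matches best.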
4.2. Triangulation Location
With the estimated DOA and the location of sensor arrays, the maximum likelihood algorithm is employed to calculate the positions of the targets. The localization results and errors are displayed in Figure 4.
Figure 4: Performance of the maximum likelihood algorithm. (a) Triangulation localization with the ML algorithm; (b) MSE of localization with the ML algorithm.
We can see in Figure 4 that most localization mean square errors (MSEs) of F22-1 are less than 300 m and the localization deviation of F22-2 is less than 1500 m with the maximum likelihood algorithm. Because F22-2 is farther from the sensor arrays, its MSE is larger than that of F22-1.
For far-field target localization in this application, relative errors below 3% are acceptable, so the ML method meets the localization requirements. There are also some outliers with exceptionally large localization MSE. These outliers mostly appear when the target is far from the sensor arrays, so when redundant sensor arrays are available, the three arrays nearest to the target are selected for localization.
4.3. Trajectory Prediction
After localization by the 2D ESPRIT and ML algorithms, the autoregressive particle filter is applied to form a positive feedback closed loop, which enhances the accuracy and stability of the localization. The forecast MSE is shown in Figure 5.
Figure 5: MSE of the ARPF algorithm.
Comparing the prediction MSE of the ARPF with the distance between the locations at the current and previous moments shows that the prediction MSE is smaller most of the time. This indicates that the prediction is well suited to guide the azimuth confirmation and DOA grouping process mentioned above. There are also moments when the prediction MSE is less than the localization MSE, so when the localization result lies far from the trajectory, that is, when an outlier appears, it is better to use the prediction instead of the measurement as the location. For example, the localization MSE of the F22 by the ML algorithm at 358 s is 4477 m while the forecast MSE is 1035 m; in this case the predicted position is used in place of the measurement.
The performance of the closed loop is shown in Table 2.
Table 2: Performance of the closed positive feedback loop.

                                        F22-1 MSE/m   F22-2 MSE/m   Average time/s
Localization by 2D ESPRIT and ML         98.55         290.38        0.033
Using the closed-loop system             96.97         276.14        0.047
Table 2 shows that the 2D ESPRIT and ML algorithms are suitable for locating the target because the localization MSE is acceptable. The closed-loop system slightly improves the localization accuracy. The time consumption increases a little, but the system is still fast enough to locate and track the target.
5. Conclusion
This paper proposes a localization and tracking system for three-dimensional targets that combines DOA estimation, triangulation localization, and trajectory prediction. Firstly, we establish a signal-reception model for a far-field target. Then the 2D ESPRIT algorithm is carried out to estimate the DOA of the 3D target. After that, a novel maximum likelihood algorithm is introduced to solve the triangulation localization problem. Next, the autoregressive particle filter is implemented to predict the target trajectory. Finally, the localization process is guided by the predictions. The experiments show that localization by the 2D ESPRIT and ML algorithms is effective for 3D targets, and that the ARPF algorithm can predict the position at the next moment and guide the localization process. The localization and prediction processes form a positive feedback loop that raises the accuracy and robustness of the localization and tracking system.
Acknowledgments
The authors would like to acknowledge the support from the National Grant Fundamental Research 973 Program of China (Grant no. 2006CB303000), the National Natural Science Foundation of China (Grants no. 60673176, 60373014, 50175056, and 60970103), and the Ph.D. Programs Foundation of the Ministry of Education of China (Grant no. 20090002110016).
References
[1] N. Abdelkrim, N. Aouf, A. Tsourdos, and B. White, "Robust nonlinear filtering for INS/GPS UAV localization," in Proceedings of the 16th Mediterranean Conference on Control and Automation (MED '08), Ajaccio, France, June 2008, pp. 695–702. doi:10.1109/MED.2008.4602149
[2] S. Sohn, B. Lee, J. Kim, and C. Kee, "Vision-based real-time target localization for single-antenna GPS-guided UAV," IEEE Transactions on Aerospace and Electronic Systems, vol. 44, no. 4, pp. 1391–1401, 2008. doi:10.1109/TAES.2008.4667717
[3] C. Liu, X. Wang, and Z. Xing, "Fast face detection method based on real-time prediction and learning classification," vol. 33, no. 1, pp. 36–41, 2012.
[4] A. Nemra and N. Aouf, "Adaptive decentralised cooperative vision based simultaneous localization and mapping for multiple UAV," in Proceedings of the 19th Mediterranean Conference on Control and Automation, Corfu, Greece, 2011, pp. 546–551.
[5] A. S. Paul and E. A. Wan, "RSSI-based indoor localization and tracking using sigma-point Kalman smoothers," IEEE Journal of Selected Topics in Signal Processing, vol. 3, no. 5, pp. 860–873, 2009. doi:10.1109/JSTSP.2009.2032309
[6] S. Wang, X. Wang, Y. Wang, and X. Sun, "Distributed lightweight target tracking for wireless sensor networks," in Proceedings of the 6th IEEE International Conference on Mobile Adhoc and Sensor Systems (MASS '09), Macau, China, October 2009, pp. 870–875. doi:10.1109/MOBHOC.2009.5337044
[7] A. Hurst, A. Wickenheiser, and E. Garcia, "Localization and perching maneuver tracking for a morphing UAV," in Proceedings of the IEEE/ION Position, Location and Navigation Symposium (PLANS '08), Monterey, Calif, USA, May 2008, pp. 1238–1245. doi:10.1109/PLANS.2008.4570044
[8] X. Zhang and X. Wang, "Sensor arrays based predicted spatial multi-signals classification method for target localization and tracking," vol. 33, no. 5, pp. 970–975, 2012.
[9] W. Tidd, R. J. Weber, Y. Huang, and Y. Zhao, "A RF source localization and tracking system," in Proceedings of the IEEE Military Communications Conference (MILCOM '10), San Jose, Calif, USA, November 2010, pp. 858–863. doi:10.1109/MILCOM.2010.5680188
[10] M. F. Fallon and S. Godsill, "Acoustic source localization and tracking using track before detect," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 6, pp. 1228–1242, 2010. doi:10.1109/TASL.2009.2031781
[11] F. Talantzis, "An acoustic source localization and tracking framework using particle filtering and information theory," IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1806–1817, 2010. doi:10.1109/TASL.2010.2052248
[12] L. M. Lin, Harbin Institute of Technology, Harbin, China, 2008.
[13] W. Liu and M. Farooq, "ARMA model based scheme for maneuvering target tracking," in Proceedings of the 37th Midwest Symposium on Circuits and Systems, August 1994, pp. 1408–1411.
[14] R. van der Merwe and A. Doucet, "The unscented particle filter," Tech. Rep. CUED/F-INFENG/TR 380, 2000.