Multitarget Direct Localization Using Block Sparse Bayesian Learning in Distributed MIMO Radar

The target localization in distributed multiple-input multiple-output (MIMO) radar is a problem of great interest. The problem becomes more complicated in the multitarget case, where each measurement must be associated with the correct target. Sparse representation has been demonstrated to be a powerful framework for direct position determination (DPD) algorithms, which avoid the association process. In this paper, we explore a novel sparsity-based DPD method to locate multiple targets using distributed MIMO radar. Since the sparse representation coefficients exhibit block sparsity, we use a block sparse Bayesian learning (BSBL) method to estimate the locations of multiple targets, which has many advantages over existing algorithms based on block sparse models. Experimental results illustrate that DPD using BSBL achieves better localization accuracy and higher robustness against block coherence and compressed sensing (CS) than popular algorithms in most cases, especially for densely spaced targets.


Introduction
Multiple-input multiple-output (MIMO) radar has received considerable attention over the past few years [1][2][3][4][5][6][7]. MIMO radar is typically used in two antenna configurations, namely, colocated [1, 2] and distributed [3, 4]. Colocated MIMO radar with closely spaced antennas exploits waveform diversity and increased degrees of freedom (DOF) to obtain better angular resolution due to the virtual aperture [1]. The proximity of the antenna arrays allows considering the same target response for each transmitter-receiver pair [8]. Unlike colocated MIMO radar, distributed MIMO radar exploits angular diversity by capturing information from different aspect angles of the target with widely spaced antennas [3] and supports accurate target location and velocity estimation [9]. In distributed MIMO radar, targets display different radar cross-sections (RCS) in different transmit-receive channels, and thus better detection performance is ensured by averaging the target scintillations from different angles [3]. In this paper, we are concerned with solving the localization problem for multiple stationary targets using distributed MIMO radar.
Location estimation is an important problem for MIMO radar systems due to its great potential to enable different kinds of localization applications. The traditional approach to the localization problem consists of a two-step procedure. Signal parameters such as direction of arrival (DOA), time of arrival (TOA), and time difference of arrival (TDOA) are first estimated at several receivers independently, and then the coordinates of the targets are calculated by exploiting the explicit geometric relationship. The authors in [10, 11] studied target localization with MIMO radar systems by utilizing bistatic TOA for multilateration, and the Cramér-Rao bound (CRB) for the target localization accuracy was derived. It has been shown that localization by coherent MIMO radar provides significantly better performance than noncoherent processing, where the phase information is ignored. Coherent processing, however, entails the challenge of ensuring phase synchronization across multisite systems [12], and the impact of static phase errors at the transmitters and receivers on the CRB has been analyzed in detail [13, 14]. It has also been demonstrated [15] that even noncoherent MIMO radar provides significant performance improvement over a monostatic phased array radar, with high range and azimuth resolutions. Although most publications on localization algorithms concentrate on the two-step method, it is suboptimal in general [16].

International Journal of Antennas and Propagation
The problem becomes more complicated and challenging in the multiple-dense-targets scenario using the method given in [11], where parameters such as TOAs must be assigned to the correct targets. This assignment is called "data association" [17] and is an important problem, especially in multiple-target applications. A multiple-hypothesis- (MH-) based algorithm for multitarget localization was proposed to estimate the number and states of the targets [18].
On the contrary, the direct position determination (DPD) method suggested by Weiss in [16] and Bar-Shalom and Weiss in [19] does not need intermediate parameters such as DOAs or TOAs. The position estimates of interest are obtained directly by minimizing a cost function using a grid-search method, which improves the estimation accuracy with respect to the two-step method. A maximum likelihood (ML) based DPD method dealing with one moving target was developed in [20]. Moreover, the DPD method can provide superior localization capability in multitarget scenarios since the data association step is avoided. Despite these advantages, the DPD method did not receive enough attention due to its intensive computational load. Recently, a sparse-representation DPD framework has been exploited for the target/source localization problem. In fact, since the number of unknown targets in the radar scene is small, the localization problem can be modeled with an ideally sparse vector. Sparse modeling for distributed MIMO radar was first presented in [21], where the location estimates are obtained by searching for the block sparse solution of an underdetermined model using the block matching pursuit (BMP) method. In [22], the multisource localization problem using TDOA measurements is formulated as a sparse recovery problem, and data association and multisource localization are solved jointly. The block sparse Bayesian learning (BSBL) method in [23] motivates us to consider its application to the multitarget localization problem in distributed MIMO radar. By exploiting the intrablock correlation, BSBL can achieve superior performance over other algorithms for off-grid DOA estimation [24]. Simulation results show that the BSBL method significantly outperforms competitive algorithms in different experiments.
In this paper, motivated by [21], we propose to apply the BSBL algorithm [23] to the multitarget direct localization problem by employing block sparse modeling, and we demonstrate the superiority of BSBL for multitarget localization through extensive numerical experiments from many aspects. Specifically, we demonstrate the robustness of BSBL against compressed sampling and its capability of dealing with dense-target localization. The effect of parameter estimation based on the off-grid model is also shown.
The remainder of the paper is organized as follows. We introduce the signal model for a distributed MIMO radar and formulate the block sparse representation of the signal in Section 2. In Section 3, we review existing sparse recovery algorithms for this problem. Then, the sparsity-aware multitarget localization using BSBL is presented in Section 4. The comparison of performance based on Monte Carlo simulations is shown in Section 5. Finally, concluding remarks and future work are addressed in Section 6.
Notations used in this paper are as follows. Boldface letters are reserved for vectors and matrices. ‖·‖_1 and ‖·‖_2 denote the ℓ1 norm and ℓ2 norm, respectively. |A| and Tr(A) are the determinant and trace of a matrix A, respectively. diag{A_1, . . . , A_N} denotes a block matrix with principal diagonal blocks A_1, . . . , A_N in turn. γ ⪰ 0 means that each element of the vector γ is nonnegative. 1_N denotes the N × 1 vector of all ones, and I_N denotes the N × N identity matrix.

Signal Model
The mth transmitter emits a linear frequency modulated pulse s_m(t) = rect(t/T_p) exp(j2πf_m t + jπμt²), where rect(t/T_p) denotes the window function, j = √−1, μ = B/T_p is the chirp rate, B represents the bandwidth, T_p denotes the pulse duration, and f_m is the carrier frequency of the mth transmitter. Further, we assume that the cross correlations between these waveforms are close to zero for different delays, namely, ∫ s_m(t) s*_{m′}(t − τ) dt ≈ 0 for m ≠ m′, where (·)* denotes the conjugate operator. Let ξ_mnk denote the complex RCS value corresponding to the kth target between the mth transmitter and the nth receiver, so that each target is modeled as a collection of M_t M_r reflection coefficients. In this work, we are interested in the Rician target model [25], which describes one dominant scatterer together with a number of small scatterers, and target returns are assumed to be deterministic and unknown.
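As a concrete illustration of the waveform model and the near-orthogonality assumption, the following sketch generates two LFM pulses on separated carriers and checks their cross correlation at zero delay. The sampling rate, bandwidth, and carrier offset are illustrative choices, not values from the paper:

```python
import numpy as np

fs = 1e6          # sampling rate (Hz), illustrative
T_p = 1e-3        # pulse duration T_p
B = 1e5           # bandwidth B
mu = B / T_p      # chirp rate mu = B / T_p
t = np.arange(0, T_p, 1 / fs)

def lfm(f_c):
    """LFM pulse s_m(t) = exp(j*2*pi*f_c*t + j*pi*mu*t**2) over one pulse."""
    return np.exp(1j * 2 * np.pi * f_c * t + 1j * np.pi * mu * t**2)

s1, s2 = lfm(0.0), lfm(2e5)   # two transmitters on well-separated carriers

auto = np.abs(np.vdot(s1, s1))
cross = np.abs(np.vdot(s1, s2))
print(cross / auto)           # small -> approximately orthogonal waveforms
```

The small cross-to-auto correlation ratio is what justifies separating the transmit channels with a matched-filter bank later in the model.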
For coherent processing, taking account of the phase errors, the bandpass signal arriving at the nth receiver is r_n(t) = Σ_{m=1}^{M_t} Σ_{k=1}^{K} ξ_mnk s_m(t − τ_mnk) exp(jφ_m + jϕ_n) + n_n(t), where τ_mnk is the time delay corresponding to the kth target in the (m, n)th transmit-receive pair and c is the speed of propagation of the wave in the medium. φ_m and ϕ_n denote the phase errors induced by the mth transmitter and the nth receiver, respectively. The noise n_n(t) is assumed to be complex Gaussian with power spectral density (PSD) σ_n² and independent for different n.
The received signal at each receiver can be decomposed by a bank of M_t matched filters. Then we take L samples within a range bin of width T_r centered at t_0 in the (m, n)th transmit-receive pair as y_mn(l) = Σ_{k=1}^{K} α_mnk sinc(B(lT_s + t_s − τ_mnk)) + n_mn(l), where l and T_s denote the sample index and sampling interval, respectively, t_s is the sampling start time of the corresponding range gate, and n_mn(l) is the noise component at the output of the matched filter. Note that the unknown phase errors are absorbed into the unknown reflection coefficient as α_mnk ≜ ξ_mnk exp(jφ_m + jϕ_n). Moreover, the waveform term s_m no longer appears in this expression, as the matched filter integrates it out into a sinc-shaped response. This model is more practical than that in [21] because it accounts for the effect of sampling deviation from the locations of the peaks.
We discretize the planar area into a grid of uniform cells, where each target is located in one of the cells. If there are K targets in the area and the grid of N cells is fine enough that each cell is occupied by at most one target, then the distribution of the targets in the plane is sparse; that is, out of N cells only K ≪ N contain targets. This implies the spatial sparsity model depicted in Figure 1. Denoting the signal attributed to the target located at cell i at sample index l as w_i(l) and concatenating the signals corresponding to all cells, the signal vector coming from the whole 2D plane can be formed as w(l) = [w_1^T(l), . . . , w_N^T(l)]^T ∈ C^(M_t M_r N × 1), where [·]^T stands for the transpose operator and w_i(l) ∈ C^(M_t M_r × 1) stacks the contributions of cell i over all transmit-receive pairs. There are M_t M_r reflection coefficients corresponding to the particular cell where a target is located, and there are only K ≪ N targets. We characterize sparsity with such structure as block sparsity. Figure 1 illustrates the block sparsity model exhibited by the representation of signals coming from all over the grid. The targets occupy only two cells, marked as 1 and 2. Hence, the spatial representation of the target reflection coefficients is sparse. We can see that the support of w exhibits the block sparsity structure, as there are only two blocks of nonzero elements corresponding to the two targets. The size of each block is the number of transmit-receive pairs.
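A toy construction of such a block sparse coefficient vector may help fix the structure in mind. The dimensions below are illustrative; only the block layout (N cells, blocks of size M_t·M_r, K occupied cells) follows the text:

```python
import numpy as np

rng = np.random.default_rng(0)
M_t, M_r, N, K = 2, 2, 169, 2
d = M_t * M_r                      # block size = number of tx-rx pairs
w = np.zeros(N * d, dtype=complex)

# Place K targets in randomly chosen, distinct cells; each occupied cell
# gets a full block of d complex reflection coefficients.
occupied = rng.choice(N, size=K, replace=False)
for cell in occupied:
    w[cell * d:(cell + 1) * d] = (rng.standard_normal(d)
                                  + 1j * rng.standard_normal(d))

nonzero_blocks = [i for i in range(N) if np.any(w[i * d:(i + 1) * d] != 0)]
print(len(nonzero_blocks))         # K nonzero blocks -> block sparsity
```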
For coherent processing, the dictionary element for the ith grid cell at sample index l in the (m, n)th pair is the corresponding noise-free matched-filter response evaluated at the delay τ_mn,i associated with cell i. The dictionary Ψ(l) ∈ C^(M_t M_r × M_t M_r N) is partitioned accordingly into N blocks, Ψ(l) = [Ψ_1(l), . . . , Ψ_N(l)], where Ψ_i(l) ∈ C^(M_t M_r × M_t M_r) is the block associated with cell i. Further, we arrange y_mn(l) (m = 1, . . . , M_t; n = 1, . . . , M_r) and n_mn(l) (m = 1, . . . , M_t; n = 1, . . . , M_r) into M_t M_r-dimensional column vectors y(l) and n(l), respectively. Therefore, we can express the received vector at sample index l as y(l) = Ψ(l)w(l) + n(l), where w(l) is a block sparse vector with only K nonzero blocks, each block containing M_t M_r entries. We have thus expressed the observed data at l using a sparse representation.
It is further assumed that the target reflection coefficients remain constant across the range bin. To make the model more concise, we stack {y(l)}_{l=1}^{L} into y = Ψw + n, where n ∈ C^(M_t M_r L × 1), y ∈ C^(M_t M_r L × 1), and w ∈ C^(M_t M_r N × 1). Note that, in the above expression for the measurement vector, Ψ ∈ C^(M_t M_r L × M_t M_r N) is known and only w depends on the actual targets present in the illuminated area. The nonzero entries of w represent the target RCS values, and the corresponding indices determine the positions. We assume that the number of targets K is unknown. The problem of target localization is therefore turned into a sparse vector recovery problem. Recovery methods for block sparse signals are addressed in the next section.
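A quick dimension check of the stacked model makes the underdetermined nature explicit. A random matrix stands in for the true dictionary Ψ here; the dimensions M_t = M_r = 2, L = 24, N = 169 match the experiment section:

```python
import numpy as np

rng = np.random.default_rng(1)
M_t, M_r, L, N = 2, 2, 24, 169
rows, cols = M_t * M_r * L, M_t * M_r * N      # 96 x 676: underdetermined
Psi = (rng.standard_normal((rows, cols))
       + 1j * rng.standard_normal((rows, cols))) / np.sqrt(rows)

w = np.zeros(cols, dtype=complex)
w[40:44] = 1.0                                  # one occupied cell -> one block of 4
noise = 0.01 * (rng.standard_normal(rows) + 1j * rng.standard_normal(rows))
y = Psi @ w + noise                             # stacked measurement y = Psi w + n
print(Psi.shape, y.shape)                       # (96, 676) (96,)
```

Since 96 ≪ 676, recovering w from y is only possible through the block sparsity prior discussed next.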

Existing Sparse Support Recovery
In the previous section, we expressed the signal received across the M_r receive antennas over L snapshots using a sparse representation. In order to find the locations of the targets, we need to recover the sparse vector w from the measurements y. Since the number of measurements is much smaller than the number of unknowns determined by the grid size, inverting the model y = Ψw + n is an ill-posed problem. In addition, w has a block/group structure. The exact sparsity of the signal w, denoted by ‖w‖_0, that is, the ℓ0 norm of w, is equal to the number of nonzero elements in w and is employed to invert the model. The signal vector can then be obtained by solving the optimization problem min_w ‖w‖_0 subject to ‖y − Ψw‖_2 ≤ ε, where ε is a regularization factor proportional to the noise level. This optimization problem requires a combinatorial search and is widely known to be NP-hard. To simplify it, some convex relaxation is often made. The most extensively used one is the ℓ1-norm relaxation min_w ‖w‖_1 subject to ‖y − Ψw‖_2 ≤ ε. Since the ℓ0 problem is nonconvex, matching pursuit (MP) and orthogonal MP are often preferred; these two methods use a greedy strategy that iteratively selects basis vectors. After the ℓ1-norm relaxation, many methods, such as basis pursuit (BP) denoising, the least absolute shrinkage and selection operator (LASSO), and gradient projection for sparse reconstruction, can be used to find the solution. These algorithms recover sparse vectors but do not exploit knowledge of the block sparsity. It is known that exploiting the block partition can further improve recovery performance. Recently, the block-MP (BMP) algorithm, which exploits knowledge of the block sparsity, has been proposed [26].
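A minimal sketch of the greedy block-selection idea behind BMP/Group-MP is shown below. This is a simplified orthogonalized variant: a random complex dictionary stands in for Ψ, and the block size, support, and dimensions are illustrative, not the paper's:

```python
import numpy as np

def block_omp(Psi, y, d, K):
    """Greedy block pursuit sketch: select K blocks of size d, then
    least-squares refit on the selected columns."""
    N = Psi.shape[1] // d
    support = []
    residual = y.copy()
    for _ in range(K):
        # Score every block by the energy of its matched-filter output.
        scores = [np.linalg.norm(Psi[:, i * d:(i + 1) * d].conj().T @ residual)
                  for i in range(N)]
        support.append(int(np.argmax(scores)))
        cols = np.concatenate([np.arange(i * d, (i + 1) * d)
                               for i in sorted(support)])
        x, *_ = np.linalg.lstsq(Psi[:, cols], y, rcond=None)
        residual = y - Psi[:, cols] @ x
    w_hat = np.zeros(Psi.shape[1], dtype=complex)
    w_hat[cols] = x
    return w_hat, sorted(support)

# Small noiseless demo: random dictionary, two active blocks of size 4.
rng = np.random.default_rng(2)
d, N, K, m = 4, 30, 2, 200
Psi = rng.standard_normal((m, N * d)) + 1j * rng.standard_normal((m, N * d))
w = np.zeros(N * d, dtype=complex)
for i in (5, 17):
    w[i * d:(i + 1) * d] = rng.standard_normal(d) + 1j * rng.standard_normal(d)
w_hat, supp = block_omp(Psi, Psi @ w, d, K)
print(supp)   # the two occupied blocks
```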
Nevertheless, the BMP algorithm is effective only in noiseless scenarios. In practice, measurements are inevitably contaminated with noise and underlying uncertainties. Besides, the performance of sparsity-based estimation approaches is determined by the correlations between columns of the dictionary matrix Ψ and by the distance between adjacent grid points. High dictionary coherence can potentially disrupt the BMP and Group-Lasso algorithms [27]. More importantly, one should note that when a target occupies the ith grid cell, not only is the reflection coefficient block w_i nonzero, but its elements are also correlated in amplitude. The correlation arises because the coefficients of the ith cell belong to the same target, and thus the elements of w_i are mutually dependent. It has been shown that exploiting the correlation within blocks can further improve the estimation quality of ŵ [23].
Therefore, in this paper we propose to use BSBL [23] to estimate ŵ by exploiting the block structure and the correlation within blocks. In the next section we briefly introduce BSBL and its algorithm.

Block SBL Based Target Localization
This section briefly describes the BSBL framework and corresponding algorithm.

BSBL Framework.
BSBL is an extension of the basic SBL framework which exploits a block structure and intrablock correlation in the coefficient vector w. It is based on the assumption that w can be partitioned into g nonoverlapping blocks as w = [w_1^T, . . . , w_g^T]^T. For the sparse model in this paper, g = N and each block has d = M_t M_r entries. Each block w_i ∈ C^(d×1) is assumed to satisfy a parameterized multivariate Gaussian distribution p(w_i; γ_i, Q_i) = CN(0, γ_i Q_i) with unknown parameters γ_i and Q_i. Here γ_i is a nonnegative parameter controlling the block sparsity of w. When γ_i = 0, the ith block becomes zero. During the learning procedure most γ_i tend to zero due to the mechanism of automatic relevance determination; thus sparsity at the block level is encouraged. Q_i ∈ C^(d×d) is a positive definite symmetric matrix capturing the intrablock correlation of the ith block. Under the assumption that blocks are mutually uncorrelated, the prior of w is p(w; {γ_i, Q_i}) = CN(0, Σ_0) with Σ_0 = diag{γ_1 Q_1, . . . , γ_g Q_g}. Assume the noise vector n satisfies p(n; λ) ∼ CN(0, λI), where λ is a positive scalar to be estimated. The posterior of w is then p(w | y; λ, {γ_i, Q_i}) = CN(μ_w, Σ_w) with μ_w = Σ_0 Ψ^H (λI + ΨΣ_0Ψ^H)^(−1) y and Σ_w = Σ_0 − Σ_0 Ψ^H (λI + ΨΣ_0Ψ^H)^(−1) ΨΣ_0. Therefore, the estimate of w can be obtained directly by maximum a posteriori (MAP) estimation, provided all the parameters λ, {γ_i, Q_i}_{i=1}^{g} are available. These parameters can be estimated by a Type II maximum likelihood procedure [28], which is equivalent to minimizing the cost function L(Θ) = log|λI + ΨΣ_0Ψ^H| + y^H (λI + ΨΣ_0Ψ^H)^(−1) y, where Θ ≜ {λ, {γ_i, Q_i}_{i=1}^{g}} denotes all the parameters. This framework is called the BSBL framework. The algorithm derived from it includes three learning rules, namely, those for γ_i, Q_i, and λ. The correlation matrix Q_i is modeled as a Toeplitz matrix. Several optimization methods can be used to minimize the cost function, such as the expectation-maximization (EM) method, the bound-optimization (BO) method, and the duality method.
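The posterior-mean (MAP) step above can be sketched numerically, treating the hyperparameters λ, γ_i, and Q_i as known for illustration; in the actual BSBL algorithm they are learned by the Type II procedure, and all dimensions here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
g, d, m = 8, 4, 24                    # g blocks of size d, m measurements
Psi = rng.standard_normal((m, g * d))
gamma = np.zeros(g)
gamma[2] = gamma[6] = 1.0             # two active blocks; the rest pruned
Q = np.eye(d)                         # intrablock correlation (identity here)
Sigma0 = np.kron(np.diag(gamma), Q)   # prior covariance diag{gamma_i * Q_i}

# Draw w from the prior (gamma_i in {0, 1} and Q = I, so Sigma0 is its own
# square root) and simulate measurements y = Psi w + n with noise level lam.
w_true = Sigma0 @ rng.standard_normal(g * d)
lam = 1e-4
y = Psi @ w_true + np.sqrt(lam) * rng.standard_normal(m)

# Posterior mean mu_w = Sigma0 Psi^T (lam I + Psi Sigma0 Psi^T)^{-1} y.
S = lam * np.eye(m) + Psi @ Sigma0 @ Psi.T
mu_w = Sigma0 @ Psi.T @ np.linalg.solve(S, y)

print(np.allclose(mu_w[:d], 0))   # blocks with gamma_i = 0 stay exactly zero
```

Note how zeroed γ_i pin entire blocks of the posterior mean to zero, which is exactly the block-level sparsification mechanism described above.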

Advantages of BSBL.
Compared to Lasso-type algorithms (such as Group-Lasso, based on ℓ1-minimization) and greedy algorithms (such as Group-MP, based on ℓ0-minimization), BSBL has the following advantages.
(1) BSBL provides great flexibility to model and exploit the intrablock correlation structure in signals. By exploiting the correlation structure, recovery performance is significantly improved [29].
(2) BSBL has the unique ability to find less-sparse and nonsparse true solutions with very small errors [30]. This is attractive for practical use, since in practice the true solutions may not be very sparse, and existing sparse signal recovery algorithms generally fail in this case.
(3) Its recovery performance is robust to the characteristics of the dictionary Ψ, while other algorithms are not. This advantage is very attractive for sparse representation and other applications, since in some applications there is a trade-off between the resolution (grid size) and the block coherence measure [21]. When the grid points come closer, the resolution improves but the blocks within Ψ become highly coherent.
Therefore, BSBL is promising for multitarget localization. In the following we choose the BSBL-ℓ1 algorithm [23], which transforms the BSBL cost function from the γ space to the w space by treating λ and Q_i as regularizers. Since it takes only a few iterations and each iteration is a standard Group-Lasso type problem, it is much faster and more suitable for large-scale datasets than the BSBL-EM and BSBL-BO algorithms [23].


Experiments
To demonstrate the superior performance of BSBL, this section tests sparse-recovery-based multitarget localization algorithms through a wide range of numerical experiments. Three algorithms are used: the Group-Lasso method for solving the ℓ1-relaxed problem, the Group/Block-MP method for solving the ℓ0 problem [21], and the BSBL-ℓ1 method.
We use the same radar configuration as in [21]. Consider a 2 × 2 MIMO radar system in a common Cartesian coordinate system, with the transmitter and receiver positions taken from [21]; the phase errors are assumed to be 0. We choose the range gate parameter as 6 and the number of snapshots as L = 24 for the simulation results; therefore, y has 96 entries. We divide the planar area into 13 × 13 grid points, so the total number of possible target states is N = 169 and the 676-dimensional sparse vector w has only 12 nonzero entries corresponding to the targets. The kth target reflection coefficients follow a Rician distribution with pdf p(ξ; ν_k, σ_0) = (ξ/σ_0²) exp(−(ξ² + ν_k²)/(2σ_0²)) I_0(ξν_k/σ_0²), where the fixed-amplitude parts of the three targets in all transmit-receive paths are ν_1 = 5 × 10⁴, ν_2 = 3 × 10⁴, and ν_3 = 10⁴, and the power of the Rayleigh part is σ_0 = 0.05 for all three targets. Our definition of the signal-to-noise ratio (SNR) is SNR[dB] = 10 log_10(‖Ψw‖²_2 / σ_n²), and the noise is generated independently from a Gaussian distribution. For each grid point in the target state space, we combine the energies of the reconstructed signal over the different transmit-receive paths into a single projection value, forming a new vector e over the grid. In the following, each experiment was repeated for 100 trials. In [21] the metric Δ = ‖e*‖_2/‖ē*‖_2 is given to analyze the performance, where e* contains the values that e carries at the correct support indices and ē* takes 0 at the correct support indices and the same values as e at every other index. The authors in [21] claimed that Δ > 1 guarantees exact estimation of the positions. Since Δ can only represent the mean accuracy over the 100 trials, we define the success rate as a new localization accuracy performance index: the percentage of successful trials among the 100 trials, where a successful trial is one with Δ > 1.
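The metric Δ and the success rate defined above can be sketched as follows. The block/support bookkeeping is a plausible reading of the text, and the numbers are illustrative toys:

```python
import numpy as np

def delta_metric(w_hat, true_support, d):
    """Ratio of recovered energy on the correct blocks vs everywhere else:
    Delta > 1 means most energy sits on the true target cells."""
    on = np.concatenate([w_hat[i * d:(i + 1) * d] for i in true_support])
    mask = np.ones(w_hat.size, dtype=bool)
    for i in true_support:
        mask[i * d:(i + 1) * d] = False
    off = w_hat[mask]
    return np.linalg.norm(on) / max(np.linalg.norm(off), 1e-12)

def success_rate(deltas):
    """Fraction of Monte Carlo trials with Delta > 1."""
    return float(np.mean([t > 1 for t in deltas]))

# Toy check: an estimate concentrated on the true block gives Delta > 1.
w_hat = np.array([0.0, 0.0, 3.0, 3.0, 0.1, 0.1, 0.0, 0.0])
print(delta_metric(w_hat, [1], d=2))       # about 30 -> a successful trial
print(success_rate([1.5, 0.8, 2.0, 1.2]))  # 0.75
```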

Comparison with Different Sparse Recovery Algorithms.
We start by comparing the BSBL-ℓ1 method with two classical methods, the Group-MP and Group-Lasso methods. We also examine the benefit of exploiting intrablock correlation with the BSBL-ℓ1 algorithm. The normalized mean square error (NMSE) is used as a performance index, defined by ‖ŵ − w‖²_2/‖w‖²_2. Figure 2 depicts the NMSE results using the different recovery algorithms. As a benchmark, the "pentagram" curve shows the least-squares estimate of w given its true support. The BSBL algorithm exhibits significant performance gains over the non-BSBL algorithms. Figure 3 shows the reconstructed reflection coefficients of the three targets at an SNR of 10 dB using the different algorithms. From Figure 3 we can see that all three algorithms are capable of estimating the positions of the targets, but the performance of Group-Lasso and Group-MP is visibly poorer than that of the BSBL-ℓ1 method. In order to quantitatively analyze the performance of the three algorithms, we plot the metric Δ and the success rate versus SNR in Figure 4.
BSBL-ℓ1 is applied with and without correlation exploitation. In the first case, it adaptively learns and exploits the intrablock correlation. In the second case, it ignores the correlation, that is, fixes Q_i = I (∀i). As can be seen, the BSBL-ℓ1 algorithm exhibits significant performance gains over the non-BSBL algorithms. In Figure 4(a), the value of Δ remains above 1 down to lower SNR for the BSBL-ℓ1 algorithm compared with the Group-MP and Group-Lasso methods. In Figure 4(b), it is worth noting that when SNR ≥ 5 dB, BSBL-ℓ1 exactly recovers the block sparse signals with a high success rate (≥ 92%). We also see that exploiting the intrablock correlation greatly improves the performance of BSBL-ℓ1 in terms of both metrics. Figure 5 demonstrates that BSBL-ℓ1 is capable of exactly recovering less-sparse signals even for dense-target localization, in contrast to the other algorithms. Figure 6 compares the location estimation performance in terms of the two metrics for the different algorithms when the targets are located densely. As shown, the advantage of BSBL-ℓ1 over the other recovery algorithms is manifested in a larger Δ and a higher success rate. BSBL-ℓ1 is accordingly suitable for the dense-target localization problem.
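For reference, the NMSE index used in this comparison is straightforward to compute; the vectors below are toy values for illustration:

```python
import numpy as np

# NMSE performance index as defined above: ||w_hat - w||^2 / ||w||^2.
def nmse(w_hat, w):
    return np.linalg.norm(w_hat - w) ** 2 / np.linalg.norm(w) ** 2

w = np.array([1.0, 0.0, 2.0])
print(nmse(np.array([1.1, 0.0, 1.9]), w))   # about 0.004
```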

Robustness against Block Coherence Measure.
The following experiments are devoted to evaluating the robustness of BSBL to the block coherence measure [26], which increases as the grid distance reduces [21]. Figure 7(b) shows the reconstructed reflection coefficients with a smaller grid distance than that in Figure 7(a). Accordingly, Figure 7(d) illustrates that the resolution is improved compared with Figure 7(c). We plot the performance versus SNR with the reduced grid distance in Figure 8. As expected, we note from Figure 8 that when the blocks of Ψ are highly coherent, BSBL exploiting intrablock correlation still maintains good performance compared with Figure 4, while the other algorithms suffer seriously degraded performance in both metrics.
Table 1 gives a computational time comparison of the two algorithms on a computer with a dual-core 2.5 GHz CPU, 2.0 GiB RAM, and Windows 7, at SNR = 10 dB. It shows that BSBL-ℓ1 needs extra time to obtain its better estimation performance compared with the Group-MP algorithm. We also note that as the grid distance decreases, the computational time of both algorithms increases due to the larger dictionary matrix, while the NMSE of BSBL shows little change and the NMSE of Group-MP degrades significantly, which is caused by the highly coherent dictionary.

Robustness against Compressed Sensing.
In this experiment, we consider the compressed sensing (CS) technique. The percentage of samples used is given as L_CS/(M_t M_r L), where L_CS ≪ M_t M_r L is the number of CS samples. For reconstruction of w, we use the Group-MP and BSBL-ℓ1 methods.
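The compressed sampling step can be sketched as keeping a random subset of the measurement rows. Random-row selection is an assumption for illustration; the paper does not specify the CS operator:

```python
import numpy as np

rng = np.random.default_rng(4)
rows, cols = 96, 676                   # M_t*M_r*L x M_t*M_r*N from the setup
Psi = rng.standard_normal((rows, cols))
y = rng.standard_normal(rows)          # placeholder measurement vector

percentage = 0.5                       # fraction of samples retained
L_cs = int(percentage * rows)          # L_CS = 48 retained samples
keep = rng.choice(rows, size=L_cs, replace=False)
Psi_cs, y_cs = Psi[keep], y[keep]      # subsampled model for recovery
print(Psi_cs.shape, y_cs.shape)        # (48, 676) (48,)
```

Recovery then proceeds exactly as before, but from the reduced system (Psi_cs, y_cs).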
As is clear from Figure 9, on the one hand, the performance degrades for both algorithms as the percentage of samples is reduced; on the other hand, the performance of the Group-MP method drops much more significantly than that of BSBL-ℓ1. In other words, the position estimates reconstructed using BSBL-ℓ1 match the true values better even when only 50% of the samples are used.

Impact of Off-Grid Mismatch Errors.
Finally, we consider the impact of off-grid mismatch errors. In the above experiments, we assumed that all the targets are located exactly on the selected grid, whose cell size is 10 m. Here, the three targets are relocated off the grid, and the position estimates are obtained from the three peaks of the projection. To deal with this problem, one could employ a smaller grid size so that the targets lie on the grid, or approximate the error using a linearization method [24].

Conclusions
Multitarget direct localization using distributed MIMO radar systems was discussed in this paper. Previous works generally focused on two-step procedures with exact data association. In this paper, we introduced the block sparse Bayesian learning algorithm for multitarget direct localization by employing a new sparse model within a range bin. The success rate was defined to analyze the performance of the radar system. We experimentally demonstrated that the BSBL algorithm significantly outperforms competing algorithms by exploiting the intrablock correlation in the signals, especially when the targets are located densely and the blocks of the dictionary are highly coherent. Finally, the CS technique was applied to block sparse recovery; the results showed that BSBL is more robust than the other algorithms when few samples are used.
In future work, we will consider the off-grid target localization problem, where the targets are no longer constrained to the sampling grid. Further, we will consider target localization for distributed MIMO radar in the presence of phase synchronization mismatch.

Figure 1:
Figure 1: The spatial sparsity of the targets inside the area is illustrated through discretization of the area into a grid of N cells. The targets occupy only two cells, marked as 1 and 2. Hence, the spatial representation of the target reflection coefficients is sparse. We can see that the support of w exhibits the block sparsity structure, as there are only two blocks of nonzero elements corresponding to the two targets. The size of each block is the number of transmit-receive pairs.
Dense Targets Scenario.
In this subsection, we investigate the ability to robustly find less-sparse solutions with small errors in the case of dense targets. The three targets are relocated at p_1 = [120, 300] m, p_2 = [100, 300] m, and p_3 = [110, 280] m, and the RCS values remain unchanged.

Figure 7:
Figure 7(b) shows the reconstructed reflection coefficients with a smaller grid distance than that in Figure 7(a). Accordingly, Figure 7(d) illustrates that the resolution is improved compared with Figure 7(c).

Table 1 :
Comparison of computational time and NMSE.