Efficient Approximation of the Labeled Multi-Bernoulli Filter for Online Multitarget Tracking

Online tracking of a time-varying number of targets is a challenging problem due to measurement noise, target birth and death, and association uncertainty, especially when the number of targets is large. In this paper, we propose an efficient approximation of the Labeled Multi-Bernoulli (LMB) filter to perform online multitarget state estimation and track maintenance efficiently. On the basis of the original LMB filter, we propose a target posterior approximation technique that uses a single weighted Gaussian component to represent each individual target. Moreover, we present the Gaussian mixture implementation of the proposed efficient approximation of the LMB filter under linear Gaussian assumptions on the target dynamic model and measurement model. Numerical results verify that the proposed efficient approximation of the LMB filter achieves accurate tracking performance and runs several times faster than the original LMB filter.


Introduction
Online tracking of a time-varying number of targets is a challenging problem in the presence of measurement noise, target birth or death, and association uncertainty [1]. Recently, the random finite set (RFS) based Bayesian framework has been shown to be a unified and elegant approach for multisensor multiobject estimation [2], as an alternative to the traditional Joint Probabilistic Data Association (JPDA) [3] and Multiple Hypothesis Tracking (MHT) [4] methods in the tracking literature [5]. The Probability Hypothesis Density (PHD) filter [6], the Cardinalized PHD (CPHD) filter [7], and the multi-Bernoulli filter [8] were established for multitarget state estimation by avoiding data association, but they are incapable of track maintenance.
With the help of the labeled RFS, the δ-Generalized Labeled Multi-Bernoulli (δ-GLMB) filter has recently been proposed to handle multitarget state estimation and track maintenance simultaneously [9, 10]. Subsequently, the Marginalized δ-GLMB (Mδ-GLMB) filter [11] and the Labeled Multi-Bernoulli (LMB) filter [12] were proposed to perform multitarget tracking more efficiently, under different approximations and update paradigms. However, the efficiency of multitarget tracking methods remains challenging for online applications, especially in multisensor scenarios [13, 14]. Recently, [15] proposed an efficient implementation of the GLMB filter that uses Gibbs sampling to solve the ranked assignment problem stochastically, with solution complexity quadratic in the number of hypothesized labels and linear in the number of measurements. Nevertheless, this implementation cannot enjoy the parallelizability of the LMB filter, so the number of hypothesized labels and the number of measurements become very large when tracking a large number of targets.
In this paper, we further study the efficiency of the multitarget tracking problem. We propose an efficient approximation of the LMB filter for online multitarget tracking, in which a single weighted Gaussian component is used to approximate the posterior state of each target. We present the Gaussian mixture implementation of the proposed filter for linear Gaussian target dynamic and observation models. The paper is organised as follows. Section 2 presents a short review of RFS based Bayesian filtering, the labeled RFS, and the δ-GLMB filter. Section 3 illustrates the target posterior approximation approach and explains its efficiency. Section 4 provides the Gaussian mixture implementation of the approximation of the LMB filter in detail. Section 5 shows numerical results that verify the effectiveness and efficiency of the proposed approximation.

Background
This section presents the background for the remainder of the paper. We first review RFS based Bayesian filtering in Section 2.1. Then, we present typical types of the labeled RFS in Section 2.2. Section 2.3 reviews the LMB filter, which lays out the fundamentals for our proposed approximation.

Random Finite Sets Based Bayesian Filtering.
An RFS is a random variable whose number of elements and the value of each element are both random. For the multitarget tracking problem, the RFS has been shown to be a more natural and powerful description of the multitarget state than the conventional vector representation [16]. Let $X_k$ and $Z_k$ denote the state set and observation set with time-varying cardinalities $N(k)$ and $M(k)$, respectively:

$$X_k = \{x_{k,1}, \ldots, x_{k,N(k)}\}, \qquad Z_k = \{z_{k,1}, \ldots, z_{k,M(k)}\}. \tag{1}$$

In a typical target tracking scenario, at every time step the multitarget state RFS $X_k$ is composed of two parts: an RFS $S_k$ of surviving targets and an RFS $\Gamma_k$ of spontaneous birth targets. Hence, at time step $k$, $X_k = S_k \cup \Gamma_k$. The observation RFS $Z_k$ is likewise composed of two parts, target-oriented measurements $\Theta_k$ and clutter $K_k$, which gives $Z_k = \Theta_k \cup K_k$. By modeling the multitarget state RFS $X_k$ and the observation RFS $Z_k$ with different types of probability distributions, such as binomial, Poisson, and multi-Bernoulli, the RFS based Bayesian framework for optimal estimation is given as follows:

$$\pi_{k|k-1}\left(X_k \mid Z_{1:k-1}\right) = \int f\left(X_k \mid X_{k-1}\right) \pi_{k-1}\left(X_{k-1} \mid Z_{1:k-1}\right) \delta X_{k-1}, \tag{3}$$

$$\pi_k\left(X_k \mid Z_{1:k}\right) = \frac{g\left(Z_k \mid X_k\right) \pi_{k|k-1}\left(X_k \mid Z_{1:k-1}\right)}{\int g\left(Z_k \mid X\right) \pi_{k|k-1}\left(X \mid Z_{1:k-1}\right) \delta X}, \tag{4}$$

which represent the prediction and update steps of the Bayesian recursion, respectively. Notice that the key to solving the RFS based Bayesian recursion is the set integrals in (3) and (4), which are defined in finite set statistics [2]. Under different assumptions on the RFS type, the PHD filter [6], the CPHD filter [7], and the multi-Bernoulli filter [8] have been derived from (3) and (4) to perform multitarget state estimation without data association.
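To make the prediction and update steps of (3) and (4) concrete at the single-target level, the following minimal Python sketch runs the same recursion for a discrete-state target: predict with a transition matrix, weight by a measurement likelihood, and normalize. All numerical values here are illustrative assumptions, not part of the paper's models.

```python
# Single-target, discrete-state analogue of the Bayesian recursion:
# prediction pushes the prior through the transition model; the update
# reweights by the likelihood and normalizes (the denominator of (4)).

def bayes_predict(prior, transition):
    """prior[i]: P(x = i); transition[i][j]: P(x_+ = j | x = i)."""
    n = len(prior)
    return [sum(prior[i] * transition[i][j] for i in range(n))
            for j in range(n)]

def bayes_update(predicted, likelihood):
    """likelihood[i]: g(z | x = i). Returns the normalized posterior."""
    unnorm = [p * l for p, l in zip(predicted, likelihood)]
    s = sum(unnorm)
    return [u / s for u in unnorm]

prior = [0.5, 0.5]
transition = [[0.9, 0.1], [0.1, 0.9]]        # target tends to stay put
predicted = bayes_predict(prior, transition)
posterior = bayes_update(predicted, [0.8, 0.2])  # measurement favors state 0
```

The set-valued recursion of the RFS framework generalizes exactly this pattern from a single state variable to a random finite set of states.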

Labeled Random Finite Set.
In a labeled RFS, a unique label is assigned to each element of the multitarget state RFS in order to perform track maintenance in a multitarget tracking scenario. Assume that $\mathbb{L}$ is a countable label space; then the target state vector $x$ is augmented with a label $\ell \in \mathbb{L}$. Hence, the labeled target state is $\mathbf{x} = (x, \ell)$ for each $x \in \mathbb{X}$. There are several types of labeled RFS whose densities are conjugate with the standard multiobject likelihood function and closed under the multiobject Chapman-Kolmogorov equation for the standard multiobject dynamic model, such as the GLMB family and the LMB family. In other words, the Bayesian recursion can be carried out with these labeled RFSs. Here, we first introduce some notation and then present the GLMB family and the LMB family as two types of labeled RFS. Let $\mathcal{L} : \mathbb{X} \times \mathbb{L} \to \mathbb{L}$ denote the projection $\mathcal{L}((x, \ell)) = \ell$, and let $\Delta(\mathbf{X}) = \delta_{|\mathbf{X}|}(|\mathcal{L}(\mathbf{X})|)$ denote the distinct label indicator function.
A GLMB is a labeled RFS on $\mathbb{X} \times \mathbb{L}$ with the following distribution:

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X}) \sum_{c \in \mathbb{C}} w^{(c)}(\mathcal{L}(\mathbf{X})) \left[p^{(c)}\right]^{\mathbf{X}},$$

in which $\mathbb{C}$ is a discrete index set, and $w^{(c)}(L)$ and $p^{(c)}(x, \ell)$ satisfy

$$\sum_{L \subseteq \mathbb{L}} \sum_{c \in \mathbb{C}} w^{(c)}(L) = 1, \qquad \int p^{(c)}(x, \ell)\, dx = 1.$$

A GLMB RFS comprising a single component for each unique label set can be simplified into an LMB RFS: the index set $\mathbb{C}$ becomes a singleton and the index $c$ is omitted. An LMB is a labeled RFS on $\mathbb{X} \times \mathbb{L}$ with the following distribution:

$$\boldsymbol{\pi}(\mathbf{X}) = \Delta(\mathbf{X})\, w(\mathcal{L}(\mathbf{X}))\, [p]^{\mathbf{X}},$$

where $w(L)$ and $p(x, \ell)$ satisfy

$$w(L) = \prod_{i \in \mathbb{L}} \left(1 - r^{(i)}\right) \prod_{\ell \in L} \frac{1_{\mathbb{L}}(\ell)\, r^{(\ell)}}{1 - r^{(\ell)}}, \qquad p(x, \ell) = p^{(\ell)}(x),$$

in which $\{(r^{(\ell)}, p^{(\ell)})\}_{\ell \in \mathbb{L}}$ is a given finite parameter set, with $r^{(\ell)}$ representing the existence probability of target $\ell$ and $p^{(\ell)}(x)$ denoting the probability density of the kinematic state of target $\ell$ given its existence [12]. Remark that the LMB family is a special case of the GLMB family with only one term for each label set.
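The LMB label-set weight $w(L)$ has a simple product form: each label contributes $r^{(\ell)}$ if it is in $L$ and $1 - r^{(\ell)}$ otherwise, so the weights over all label subsets sum to one. A short Python sketch (labels and probabilities here are made up for illustration):

```python
from itertools import combinations

def lmb_label_weight(r, labels):
    """w(L) for an LMB with existence probabilities r = {label: r_l}:
    product of r_l for labels in L and (1 - r_l) for labels not in L."""
    w = 1.0
    for lbl, r_l in r.items():
        w *= r_l if lbl in labels else (1.0 - r_l)
    return w

r = {"t1": 0.9, "t2": 0.5}           # illustrative existence probabilities
subsets = [frozenset(c) for k in range(len(r) + 1)
           for c in combinations(r, k)]
total = sum(lmb_label_weight(r, s) for s in subsets)  # should be 1
```

This normalization is exactly the constraint on $w(L)$ stated above, specialized to the LMB form.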

The LMB Filter.
The LMB filter is derived from the multitarget Bayesian recursion by assuming that the multitarget state is an LMB RFS. In the following, we briefly recall the prediction and update steps of the LMB filter proposed in [12].
Prediction. Suppose that the multitarget posterior density is an LMB RFS on the space $\mathbb{X} \times \mathbb{L}$ with parameter set $\boldsymbol{\pi} = \{(r^{(\ell)}, p^{(\ell)})\}_{\ell \in \mathbb{L}}$. Besides, the multitarget birth model is also an LMB RFS, on the space $\mathbb{X} \times \mathbb{B}$ ($\mathbb{L} \cap \mathbb{B} = \emptyset$) with parameter set $\boldsymbol{\pi}_B = \{(r_B^{(\ell)}, p_B^{(\ell)})\}_{\ell \in \mathbb{B}}$. Then, the predicted multitarget density is also an LMB RFS, on the space $\mathbb{X} \times \mathbb{L}_+$ with $\mathbb{L}_+ = \mathbb{L} \cup \mathbb{B}$, and its parameter set is given as follows:

$$\boldsymbol{\pi}_+ = \left\{\left(r_{+,S}^{(\ell)}, p_{+,S}^{(\ell)}\right)\right\}_{\ell \in \mathbb{L}} \cup \left\{\left(r_B^{(\ell)}, p_B^{(\ell)}\right)\right\}_{\ell \in \mathbb{B}},$$

where

$$r_{+,S}^{(\ell)} = \eta_S(\ell)\, r^{(\ell)}, \qquad p_{+,S}^{(\ell)}(x) = \frac{\left\langle p_S(\cdot, \ell)\, f(x \mid \cdot, \ell),\, p^{(\ell)}(\cdot) \right\rangle}{\eta_S(\ell)}, \qquad \eta_S(\ell) = \left\langle p_S(\cdot, \ell),\, p^{(\ell)}(\cdot) \right\rangle.$$

Here $p_S(x, \ell)$ is the state dependent survival probability function, $\langle \cdot, \cdot \rangle$ is the inner product of two real-valued functions, that is, $\langle f, g \rangle = \int f(x)\, g(x)\, dx$, $\eta_S(\ell)$ is the survival probability of target $\ell$, and $f(x_+ \mid x, \ell)$ is the single-target state transition model.
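Under the state-independent survival probability and linear Gaussian dynamics assumed later in Section 4, the prediction above reduces to a closed form: $\eta_S(\ell) = p_S$, so $r_{+,S}^{(\ell)} = p_S\, r^{(\ell)}$, and the Gaussian density is simply pushed through the linear model. A minimal one-dimensional Python sketch under those assumptions (all numerical parameters are illustrative):

```python
def lmb_predict_track(r, m, P, p_s, F, Q, birth=()):
    """Scalar sketch of the LMB prediction: the surviving track gets
    existence probability p_s * r (since eta_S = p_s for state-independent
    survival) and the Gaussian (m, P) is predicted through x_+ = F x + w,
    w ~ N(0, Q). Birth tracks are appended unchanged."""
    surv = (p_s * r, F * m, F * P * F + Q)
    return [surv] + list(birth)

tracks = lmb_predict_track(r=0.8, m=2.0, P=1.0, p_s=0.99, F=1.0, Q=0.5,
                           birth=[(0.05, 0.0, 10.0)])  # one birth track
```

The general prediction with state-dependent $p_S(x, \ell)$ requires evaluating the inner products above instead of this shortcut.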
Update. Suppose that the multitarget predicted density is an LMB RFS on the space $\mathbb{X} \times \mathbb{L}_+$ with parameter set $\boldsymbol{\pi}_+ = \{(r_+^{(\ell)}, p_+^{(\ell)})\}_{\ell \in \mathbb{L}_+}$. Then, the posterior multitarget density is approximated by an LMB RFS on the space $\mathbb{X} \times \mathbb{L}_+$ whose parameter set is given as follows:

$$r^{(\ell)} = \sum_{(I_+, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_{I_+}} w^{(I_+, \theta)}(Z)\, 1_{I_+}(\ell), \qquad p^{(\ell)}(x) = \frac{1}{r^{(\ell)}} \sum_{(I_+, \theta) \in \mathcal{F}(\mathbb{L}_+) \times \Theta_{I_+}} w^{(I_+, \theta)}(Z)\, 1_{I_+}(\ell)\, p^{(\theta)}(x, \ell \mid Z).$$

Here, $\mathcal{F}(\mathbb{L}_+)$ denotes the collection of finite subsets of $\mathbb{L}_+$, and $\Theta_{I_+}$ is the space of association mappings $\theta : I_+ \to \{0, 1, \ldots, |Z|\}$ such that $\theta(i) = \theta(i') > 0$ implies $i = i'$. The hypothesis weights $w^{(I_+, \theta)}(Z)$ and the updated densities $p^{(\theta)}(x, \ell \mid Z)$ are computed from the state dependent detection probability function $p_D(x, \ell)$, the single-target likelihood function $g(z \mid x, \ell)$, and the intensity $\kappa(\cdot)$ of the Poisson clutter; we refer the reader to [12] for the full expressions.

Target Posterior Approximation
3.1. Gaussian Approximation. The LMB filter recursion described in Section 2.3 approximates the Bayes multitarget tracking filter by preserving only the unlabeled intensity of the multitarget posterior density [15]. Owing to this simplification, the LMB filter can be parallelized by grouping and gating [12] and runs much faster than the δ-GLMB filter, especially when tracking a large number of targets. However, in every update step, the number of components representing each target's posterior state, that is, the number of Gaussian components or particles, grows linearly with the number of measurements because of the association mappings in $\Theta_{I_+}$. As a result, the computational cost of the LMB filter would grow exponentially over time if no component pruning were applied for each target. Therefore, we propose to use a single weighted Gaussian component to approximate the posterior state of each target in order to perform the recursion more efficiently with only a small sacrifice in tracking performance.
Let $p(\cdot)$ denote the posterior distribution of a single target state; we assume that at every time step $p(\cdot)$ can be approximated by a single Gaussian distribution $\mathcal{N}(\cdot)$, that is, $p(\cdot) \approx \mathcal{N}(\cdot)$. Figure 1 shows the procedure of using a single Gaussian distribution $\mathcal{N}(\cdot)$ to approximate an arbitrary distribution $p(\cdot)$. The Expectation Maximization (EM) algorithm is a classical approach for fitting a Gaussian Mixture Model (GMM) to a discrete distribution, and more advanced alternatives exist [17, 18].
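When the distribution is already available as a GMM, the single Gaussian that best preserves its first two moments is obtained by simple moment matching. A one-dimensional Python sketch of this collapse (the mixture values below are illustrative):

```python
def moment_match(components):
    """Collapse a 1-D Gaussian mixture [(w, mean, var), ...] into the
    single Gaussian preserving the mixture's overall mean and variance."""
    w_tot = sum(w for w, _, _ in components)
    mean = sum(w * m for w, m, _ in components) / w_tot
    # law of total variance: within-component + between-component spread
    var = sum(w * (v + (m - mean) ** 2) for w, m, v in components) / w_tot
    return w_tot, mean, var

w, m, v = moment_match([(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)])
```

Note that a symmetric bimodal mixture collapses to a zero-mean Gaussian whose variance absorbs the separation of the modes, which is precisely why the approximation is reserved for unimodal target posteriors.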

Gaussian Components Reduction.
The aim of Gaussian component reduction is to preserve the statistical moments of a GMM while using fewer Gaussian components. We adopt the method described in [19], which iteratively merges two Gaussian components chosen by a Kullback-Leibler Discrimination (KLD) based selection. The KLD based dissimilarity between the $i$th and $j$th Gaussian components is given as follows:

$$B(i, j) = \frac{1}{2} \left[ \left(w_i + w_j\right) \ln \det P_{\text{new}} - w_i \ln \det P_i - w_j \ln \det P_j \right],$$

in which $w_{\text{new}}$, $m_{\text{new}}$, and $P_{\text{new}}$, respectively, represent the weight, mean, and covariance of the merged Gaussian component. The procedure of the KLD based Gaussian component reduction is shown in Algorithm 1. In particular, $w_{\text{new}} = 1$ when $\sum_{i=1}^{n} w_i = 1$. The underlying rationale of our assumption is that the posterior state of a target at a given time should not carry multiple distinct interpretations; thus, a unimodal Gaussian distribution suffices to represent a single target's posterior state instead of a multimodal GMM. In this case, the predicted multitarget density is a union of weighted Gaussian components, each representing an individual target. Therefore, the update of the LMB filter is greatly accelerated, since the computation of each association hypothesis $\theta$ is cheap. Moreover, the computational cost of the LMB filter with target posterior approximation no longer grows exponentially over time, even without pruning the component number of each track. In this paper, we call the LMB filter with target posterior approximation the Efficient LMB filter, referred to as the ELMB filter for short.
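A one-dimensional Python sketch of this reduction loop follows: it scores every pair with a KLD-style dissimilarity of the form above (scalar case, so the determinant is just the variance), merges the cheapest pair with the moment-preserving formulas of Algorithm 1, and repeats until a single component remains. The mixture values are illustrative assumptions.

```python
import math

def merge_pair(c1, c2):
    """Moment-preserving merge of two weighted 1-D Gaussians (w, mean, var)."""
    w = c1[0] + c2[0]
    m = (c1[0] * c1[1] + c2[0] * c2[1]) / w
    v = (c1[0] * (c1[2] + (c1[1] - m) ** 2)
         + c2[0] * (c2[2] + (c2[1] - m) ** 2)) / w
    return (w, m, v)

def merge_cost(c1, c2):
    """KLD-style upper bound on the error caused by merging (scalar case)."""
    w, _, v = merge_pair(c1, c2)
    return 0.5 * (w * math.log(v)
                  - c1[0] * math.log(c1[2]) - c2[0] * math.log(c2[2]))

def reduce_to_one(components):
    """Iteratively merge the cheapest pair until one component remains."""
    comps = list(components)
    while len(comps) > 1:
        i, j = min(((a, b) for a in range(len(comps))
                    for b in range(a + 1, len(comps))),
                   key=lambda ab: merge_cost(comps[ab[0]], comps[ab[1]]))
        merged = merge_pair(comps[i], comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps[0]

w, m, v = reduce_to_one([(0.6, 0.0, 1.0), (0.3, 0.5, 1.0), (0.1, 5.0, 2.0)])
```

Because each merge preserves the local moments, the final Gaussian matches the overall mean and variance of the original mixture regardless of merge order; the KLD selection only controls how much shape information is lost along the way.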
Notice that our contribution is the efficient approximation approach described in Section 3, and this approximation can also be applied to other RFS based Bayesian filters. In the remainder of the paper, we use the phrase "the ELMB filter" to denote the original LMB filter equipped with our proposed efficient approximation, for brevity.

Gaussian Mixture Implementation
In this section, we present the Gaussian mixture implementation of the ELMB filter in detail. Assume that each target posterior state is a weighted Gaussian component, that each target $\ell$ follows a linear Gaussian dynamical model, and that the sensor has a linear Gaussian measurement model, respectively given as

$$f\left(x_+ \mid x, \ell\right) = \mathcal{N}\left(x_+; F^{(\ell)} x, Q^{(\ell)}\right), \qquad g(z \mid x, \ell) = \mathcal{N}\left(z; H^{(\ell)} x, R^{(\ell)}\right).$$
We also assume that the survival and detection probabilities are state independent (we drop the dependence on $x$ in the following), that is, $p_S(x, \ell) = p_S$ and $p_D(x, \ell) = p_D$. The ELMB filter follows exactly the paradigm of the original LMB filter, which is composed of four phases: prediction, grouping, parallel update, and state estimation. Here, we refer the reader to Section IV of [12] for the detailed description and equations, and present the ELMB filter in the following.
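Under these linear Gaussian assumptions, updating one Gaussian track against one measurement is a single Kalman step, and the (unnormalized) weight of that association hypothesis is proportional to $p_D$ times the predicted measurement likelihood. A scalar Python sketch (the clutter term and weight normalization across hypotheses are omitted; all values are illustrative):

```python
import math

def kalman_update_1d(m, P, z, H, R, p_d):
    """Update a scalar Gaussian track (m, P) with one measurement z under
    z = H x + r, r ~ N(0, R). Returns the posterior Gaussian and the
    unnormalized hypothesis weight p_d * N(z; H m, S)."""
    S = H * P * H + R                 # innovation covariance
    K = P * H / S                     # Kalman gain
    m_post = m + K * (z - H * m)
    P_post = (1 - K * H) * P
    lik = math.exp(-0.5 * (z - H * m) ** 2 / S) / math.sqrt(2 * math.pi * S)
    return m_post, P_post, p_d * lik

m, P, w = kalman_update_1d(m=0.0, P=4.0, z=1.0, H=1.0, R=1.0, p_d=0.98)
```

In the ELMB filter, this cheap per-hypothesis update is what makes the parallel update fast: each hypothesis touches exactly one Gaussian per track.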

Grouping.
Partition the predicted LMB parameters into mutually exclusive subsets, and then assign measurements to these subsets according to the Mahalanobis distance (MHD) between the predicted measurement $\hat{z}^{(\ell)} = H^{(\ell)} \hat{x}^{(\ell)}$ of target $\ell$ and each received measurement $z \in Z$. For any $d_{\text{MHD}}(\hat{z}^{(\ell)}, z) \leq \sqrt{\gamma}$, target $\ell$ and measurement $z$ are grouped together, where $\gamma$ is the gating distance threshold computed from the inverse Chi-squared cumulative distribution function corresponding to the desired gate size for gating measurements to tracks. Let $\mathbb{L}_+ = \bigcup_{j=1}^{g} \mathbb{L}_+^{(j)}$ be a valid partition of the label space associated with the target state space $\mathbb{X}$, with $\mathbb{L}_+^{(j)} \cap \mathbb{L}_+^{(j')} = \emptyset$ for any $j \neq j'$. Let $Z = Z^{(0)} \cup \bigcup_{j=1}^{g} Z^{(j)}$ be the corresponding partition of the measurements, where $Z^{(0)}$ is the subset of measurements not associated with any target and $Z^{(j)}$ is associated with $\mathbb{L}_+^{(j)}$. Let $\mathbb{G} = \bigcup_{j=1}^{g} \mathbb{G}^{(j)}$ denote the grouped partition, where $\mathbb{G}^{(j)} = (\mathbb{L}_+^{(j)}, Z^{(j)})$ is the $j$th group.
The aforementioned grouping problem is in essence the problem of finding disjoint set unions, which can be solved by a union-find data structure via the two basic operations Find and Union [20]. Given $m$ Finds and $n$ Unions, the computational complexity of the union-find structure is $O(m\,\alpha(n))$, where $\alpha$ is the inverse Ackermann function and $\alpha(n)$ is less than 5 for all practical values of $n$.
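A minimal Python union-find sketch of this grouping step: whenever two tracks gate the same measurement, a Union call merges their groups, and the distinct roots after all unions are the parallel-update groups. The three-track scenario at the bottom is an illustrative assumption.

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by size."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra            # attach smaller tree under larger
        self.size[ra] += self.size[rb]

# Tracks 0-2; tracks 0 and 1 gate a common measurement, track 2 does not.
uf = UnionFind(3)
uf.union(0, 1)
groups = {uf.find(i) for i in range(3)}  # two disjoint groups remain
```

Each resulting group carries its own label set and measurement subset and can then be updated independently, which is the basis of the parallel update below.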
Remark that the grouping and gating approach is critical for the upcoming parallel update scheme, which is the merit of the LMB filter over the δ-GLMB filter (and the implementation in [15]) when tracking an enormous number of targets. This is because the number of association hypotheses in each group is much smaller than when all targets and measurements are considered as one group.

Parallel Update.
The parallel update of the ELMB filter has three steps: generating hypotheses in δ-GLMB form, updating each hypothesis with the measurement set, and merging the posterior distributions with the same labels using the proposed target posterior approximation described in Section 3. For the $j$th group $\mathbb{G}^{(j)} = (\mathbb{L}_+^{(j)}, Z^{(j)})$, suppose that each target is a weighted Gaussian component and the multiobject prior is $\boldsymbol{\pi}^{(j)} = \{(r_+^{(j,\ell)}, \mathcal{N}_+^{(j,\ell)}(\cdot; x^{(j,\ell)}, P^{(j,\ell)}))\}_{\ell \in \mathbb{L}_+^{(j)}}$; the multitarget prior is then expressed in δ-GLMB form. Each generated label hypothesis $I_+$ represents one of the possible subsets of the label set $\mathbb{L}_+^{(j)}$, and the overall number of possible subsets is $2^{|\mathbb{L}_+^{(j)}|}$. A shortest paths algorithm is needed to generate only the $K$ most significant hypotheses when $\mathbb{L}_+^{(j)}$ is large [10]. For each hypothesis $I_+$ in group $\mathbb{G}^{(j)}$, the multitarget updated posterior given the measurement set $Z^{(j)}$ is obtained in δ-GLMB form, where $\Theta_{I_+}$ represents the space of association mappings $\theta : I_+ \to \{0, 1, \ldots, |Z^{(j)}|\}$ such that $\theta(i) = \theta(i') > 0$ implies $i = i'$, and the gating probability introduced by grouping enters the hypothesis weights. Substituting the linear Gaussian models of this section and applying Lemma 2 of [1] yields closed-form expressions for the hypothesis weights and updated densities. Here, $\theta(\ell) = 0$ indicates that target $\ell$ is not associated with any measurement $z \in Z^{(j)}$, that is, target $\ell$ is misdetected, and $\theta(\ell) = m > 0$ indicates that target $\ell$ is associated with the $m$th measurement $z_m$. The update process generates a large number of hypotheses due to the combinatorial nature of the association mapping $\theta$. Thus, a ranked assignment algorithm is necessary for larger $\mathbb{L}_+^{(j)}$ in order to update only the $K$ most significant hypotheses [10]. Remark that the Gibbs sampler proposed in [15] (see Section III.C of [15]) can be applied here to solve the ranked assignment problem and further boost the efficiency. At last, the multitarget posterior in δ-GLMB form is approximated back into LMB form. It is clear that the posterior distribution $p^{(j,\ell)}(x)$ of each target $\ell \in \mathbb{L}_+^{(j)}$ in the $j$th group is then a GMM, with each Gaussian component representing part of the target state. We then adopt the proposed target posterior approximation approach to collapse $p^{(j,\ell)}(x)$ of each target into a single Gaussian component via the KLD based Gaussian component reduction given in Algorithm 1. This target posterior approximation is the main difference between the ELMB filter and the original LMB filter.
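The combinatorial size of $\Theta_{I_+}$ can be made concrete by enumerating the valid maps for a tiny group: each track picks a measurement index in $1..|Z^{(j)}|$ or $0$ for misdetection, with positive indices used at most once. A short Python sketch (group sizes below are illustrative; real groups would rely on ranked assignment or Gibbs sampling instead of full enumeration):

```python
def association_maps(n_tracks, n_meas):
    """Enumerate all maps theta assigning each track an index in
    1..n_meas or 0 (misdetection), with every positive index used at
    most once -- the space Theta of the parallel update."""
    def extend(partial, used):
        if len(partial) == n_tracks:
            yield tuple(partial)
            return
        for j in range(n_meas + 1):
            if j == 0 or j not in used:
                yield from extend(partial + [j],
                                  used | ({j} if j else set()))
    return list(extend([], set()))

maps = association_maps(n_tracks=2, n_meas=2)  # 7 valid hypotheses
```

Even for 2 tracks and 2 measurements there are 7 hypotheses, which is why the per-hypothesis cost reduction of the single-Gaussian approximation matters so much at scale.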

State Extraction.
In the ELMB filter, target state extraction is performed in exactly the same way as in the LMB filter. Suppose that the multitarget posterior distribution at time $k$ is $\boldsymbol{\pi}_k = \{(r_k^{(\ell)}, \mathcal{N}_k^{(\ell)}(\cdot; x_k^{(\ell)}, P_k^{(\ell)}))\}_{\ell \in \mathbb{L}_k}$, and the history of existence probabilities of track $\ell$ is $r_{0:k}^{(\ell)}$. Multitarget state estimation then amounts to evaluating the existence probability of each track $\ell \in \mathbb{L}_k$: given an upper threshold $\eta_U$ and a lower threshold $\eta_L$ ($\eta_L < \eta_U$), tracks whose existence probability exceeds $\eta_U$ are reported as the multitarget state estimate at time $k$. Besides, tracks with existence probability smaller than $\eta_P$ ($\eta_P \ll \eta_L$) are eliminated in order to prune tracks that have already terminated.
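The extraction rule is a pair of thresholds on the existence probability. A minimal Python sketch (threshold values, labels, and the two-level scheme shown here are illustrative assumptions in the spirit of the text):

```python
def extract_states(tracks, upper, lower):
    """tracks: {label: (existence_prob, state)}. Report tracks above
    `upper` as confirmed estimates; keep tracks above `lower` alive;
    drop the rest as dead tracks (pruning)."""
    confirmed = {lbl: st for lbl, (r, st) in tracks.items() if r >= upper}
    kept = {lbl: rs for lbl, rs in tracks.items() if rs[0] >= lower}
    return confirmed, kept

tracks = {"t1": (0.95, 1.0), "t2": (0.30, 2.0), "t3": (0.001, 3.0)}
confirmed, kept = extract_states(tracks, upper=0.5, lower=0.01)
```

Only confirmed tracks contribute to the reported multitarget state; tentative tracks survive internally until their existence probability decays below the pruning threshold.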
4.5. Discussion. Remark that the ELMB filter remarkably accelerates the update process in two ways: first, the update of every hypothesis $\theta$ evaluates only a single weighted Gaussian component against each measurement $z \in Z^{(j)}$; second, the target posterior approximation guarantees that each target posterior state is always a single weighted Gaussian, so the overall number of multitarget posterior components is always equal to the number of targets, in contrast to the original LMB filter, where it grows exponentially over time. However, pruning is still necessary to eliminate tracks with very low existence probability, which are already dead tracks.
Another intriguing fact is that, in its Gaussian implementation, the proposed ELMB filter reduces to the JPDA filter [3] when there are no target deaths and no new births. In the JPDA filter, an association from labeled targets to the measurement set is called an event, which is equivalent to a mapping $\theta \in \Theta_{I_+}$ in the ELMB filter. Each association hypothesis is then assigned a proper weight by enumerating these events, in both the JPDA and the ELMB filter. Finally, in the JPDA filter, each target's updated state is extracted by summing over all possible association hypotheses, which matches the ELMB filter's approximation of the multitarget posterior from δ-GLMB form into LMB form. We do not repeat the filter equations here, to avoid tedious repetition; the reader can omit the birth term in the prediction, substitute $p_S = 1$ into the given ELMB filter equations, and compare them with the JPDA filter equations in Section 3 of [3].

Numerical Studies
In this section, we present numerical results for a two-dimensional coordinate tracking scenario, in which an unknown and time-varying number of targets is observed with position measurements in a cluttered environment. The tracking area is [−1000 m, 1000 m] × [−1000 m, 1000 m], and the filter runs for 100 steps with sampling period $T = 1$ s. Each target moves according to the nearly constant velocity model

$$x_{k+1} = F x_k + w_k, \qquad F = I_2 \otimes \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix},$$

where the target state is $x_k = [p_{x,k}, v_{x,k}, p_{y,k}, v_{y,k}]^T$, $I_2$ is the $2 \times 2$ identity matrix, and $\otimes$ denotes the Kronecker product. The process noise $w_k$ is zero mean Gaussian with standard deviation $\sigma_v = 5$ m/s in both $v_x$ and $v_y$. Targets can appear or disappear in the scene at any time, and the survival probability is $p_S = 0.99$ for each existing target. Newborn targets can appear spontaneously at predefined places with known birth parameters. The birth rate for each candidate location is $r_B = 0.05$ with zero mean velocity, and the birth covariance is $P_B = \operatorname{diag}([10\ \text{m}, 10\ \text{m/s}, 10\ \text{m}, 10\ \text{m/s}]^2)$, where $\operatorname{diag}(\cdot)$ denotes a diagonal matrix. The detection probability of the sensor is $p_D = 0.98$. The measurement noise is also zero mean Gaussian, with standard deviation $\sigma_z = 10$ m. Clutter is uniformly distributed over the tracking area, and the clutter rate $\lambda_c$ per scan is known.
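The Kronecker-product construction of the transition matrix can be sketched directly in Python with nested lists (no external dependencies; the state ordering $[p_x, v_x, p_y, v_y]$ is the one used above):

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

T = 1.0                                 # sampling period (s)
I2 = [[1.0, 0.0], [0.0, 1.0]]
F_cv = [[1.0, T], [0.0, 1.0]]           # per-axis constant-velocity block
F = kron(I2, F_cv)                      # 4x4 block-diagonal transition matrix
```

The result is block diagonal: position and velocity of each axis evolve independently through the same constant-velocity block.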
We set up two tracking scenarios to evaluate the performance of the ELMB filter against the original LMB filter. There are 12 predefined tracks in each scene, and we run 1000 Monte Carlo trials of the simulation to obtain stable performance statistics for the ELMB and LMB filters. Figures 2 and 3, respectively, show one simulation result of the ELMB filter in each scene, in which the solid lines represent ground truth trajectories and the colored dots are estimates from the ELMB filter. The results show that the ELMB filter is capable of multitarget tracking with track maintenance. Tracking videos for each scene are provided as Supplementary Material available online at https://doi.org/10.1155/2017/8742897 for visualization.
To compare the performance of the ELMB filter and the LMB filter, the Optimal Subpattern Assignment (OSPA) metric, which combines a location error and a cardinality error, is adopted for tracking performance evaluation [21]. Here, we use the $\ell_2$ norm and a cut-off value of 300. Figures 4 and 5, respectively for Scene 1 and Scene 2, show the OSPA distance over 1000 Monte Carlo runs for both the ELMB filter and the LMB filter. It is evident that the ELMB filter achieves almost the same performance as the LMB filter, with only very slight additional error in the two tracking scenarios. Moreover, in Scene 2 at frame 80, the proposed ELMB filter alleviates the mislabeling issue of the original LMB filter when 3 targets are close to each other. This is because the single Gaussian approximation in the ELMB filter starts from the most probable Gaussian component and lets the other components contribute only small innovations to it. As a result, the ELMB filter maintains only the most probable Gaussian component, cross-validated over time, and leaves out the negligible components that would cause false associations and mislabeling at target crossings. This characteristic is critical in handling target crossings in multitarget tracking. Thus, the ELMB filter can be viewed as a greedy association method over time, with only a small sacrifice in tracking performance.
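For small target sets, the OSPA distance can be computed by brute force over assignments, which makes its definition concrete: unmatched targets are charged the cut-off $c$, matched ones their (capped) distance, and the result is a $p$th-order average. A Python sketch (the example point sets are made up; real evaluations use an assignment solver rather than permutation search):

```python
import math
from itertools import permutations

def ospa(X, Y, c=300.0, p=2):
    """Brute-force OSPA distance between two finite sets of 2-D points,
    combining location and cardinality errors (feasible for small sets)."""
    if len(X) > len(Y):
        X, Y = Y, X                    # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0
    def d(a, b):
        return min(c, math.dist(a, b))   # cut-off-capped base distance
    best = min(sum(d(X[i], perm[i]) ** p for i in range(m))
               for perm in permutations(Y, m))
    return ((best + (n - m) * c ** p) / n) ** (1.0 / p)

err = ospa([(0.0, 0.0)], [(3.0, 4.0), (100.0, 100.0)], c=300.0, p=2)
```

Here one estimate matches a truth at distance 5 and the extra point is charged the full cut-off, so the error sits between pure location error and the cut-off value.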
To illustrate the efficiency of the ELMB filter over the LMB filter, Figures 6 and 7, respectively for Scene 1 and Scene 2, show the mean computation time of the ELMB filter and the LMB filter over 1000 Monte Carlo runs. The ELMB filter generally runs much faster (about 3 times) than the LMB filter in both scenes. Even in a tracking scenario with dense clutter, the computational cost of the ELMB filter remains acceptable for online tracking of multiple targets. To summarize, the ELMB filter performs almost as well as the LMB filter for multitarget tracking while being much more efficient. Besides, the ELMB filter also alleviates the mislabeling issue caused by target crossings, in a greedy association manner.

Conclusion
In this paper, we propose an efficient approximation of the Labeled Multi-Bernoulli filter that boosts the efficiency of multitarget tracking by reducing the number of target posterior components. We propose a target posterior approximation approach that represents each individual target with a single weighted Gaussian, so that the number of posterior hypotheses is remarkably reduced. The performance comparison with the original LMB filter in numerical simulations shows that our proposed approximation achieves good tracking performance and runs several times faster than the LMB filter. Our future work will consider using the Gibbs sampler of [15] to further speed up the proposed filter for tracking very large numbers of targets, exploiting the linear complexity of the Gibbs sampler and the parallelizability of our approximation.
Algorithm 1 (KLD based Gaussian component reduction). Sort the Gaussian components in decreasing order of weight, so that the first component has the largest weight. While more than one component remains, find the pair of components $(i, j)$ with the minimum KLD based dissimilarity, meaning that they are closest to each other, so it is reasonable to merge them into one new Gaussian component $\{w_{\text{new}}, \mathcal{N}(\cdot; m_{\text{new}}, P_{\text{new}})\}$. The merging of the $i$th and $j$th Gaussian components is given as follows:

$$w_{\text{new}} = w_i + w_j, \qquad m_{\text{new}} = \frac{w_i m_i + w_j m_j}{w_i + w_j}, \qquad P_{\text{new}} = \frac{w_i P_i + w_j P_j}{w_i + w_j} + \frac{w_i w_j}{\left(w_i + w_j\right)^2} \left(m_i - m_j\right)\left(m_i - m_j\right)^T. \tag{14}$$