Mathematical Problems in Engineering, Hindawi Publishing Corporation, vol. 2014, Article ID 653259, doi:10.1155/2014/653259

Research Article

Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

Xiaoxi Yan, School of Electrical and Information Engineering, Jiangsu University, Zhenjiang 212013, China

Academic Editor: Suiyang Khoo

Received 23 April 2014; Accepted 30 June 2014; Published 15 July 2014

Copyright © 2014 Xiaoxi Yan. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulas of the mixture weights are derived by means of a Lagrange multiplier and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component by the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical threshold-based pruning algorithm.

1. Introduction

The objective of multitarget tracking is to estimate the number of targets and their states from a sequence of noisy and cluttered measurement sets. The tracked target is generally simplified as a point. Most existing point target tracking algorithms are based on data association, in which the correspondence between measurements and targets has to be established. The simplest data association algorithm is the nearest-neighbour algorithm, in which the measurement closest in statistical distance to the predicted state is used to update the target state estimate. Probabilistic data association is another typical algorithm, in which all the measurements close to the predicted state are used to update the target state estimate. Joint probabilistic data association generalizes probabilistic data association to multiple target tracking; the association probabilities of all the targets and measurements are described by validation matrices [5, 6]. Multitarget tracking algorithms based on data association take an individual-target view, converting the multitarget tracking problem into multiple single target tracking problems. In multitarget tracking, however, both the measurements and the estimates are obtained in set form, so multitarget tracking is naturally a class of set-valued estimation problems. The probability hypothesis density (PHD) filter, derived by Mahler from random finite set statistics, is an elegant and tractable approximate solution to the multitarget tracking problem [7, 8]. Another interpretation of the PHD, from a bin-occupancy view, is presented in [9]. By now, there have been two implementations of the PHD filter, the Gaussian mixture implementation [10, 11] and the sequential Monte Carlo implementation [12–14], which are suitable for linear Gaussian dynamics and for nonlinear non-Gaussian dynamics, respectively. The convergence of the Gaussian mixture implementation is discussed in [17] and that of the sequential Monte Carlo implementation in [15, 18, 19].
The cardinalized PHD (CPHD) filter, which propagates both the PHD and the distribution of the target number, was developed to improve the performance of the PHD filter [20]. Generally, the CPHD filter is computationally less tractable than the PHD filter. There exist a Gaussian mixture implementation of the CPHD filter under multitarget linear Gaussian assumptions [21] and a sequential Monte Carlo implementation. As promising and unified methodologies, the PHD and CPHD filters have been widely applied in many fields, such as maneuvering target tracking [23, 24], sonar tracking [25, 26], and visual tracking [27–29]. As sensor resolution is greatly improved, target tracking should be formulated as extended object tracking [30]. An extended object PHD filter was also derived by Mahler [31], and several implementations of it have appeared [32–36]. The convergence of the Gaussian mixture implementation of the extended object PHD filter is discussed in [37]. When the Gaussian mixture model is applied to set-valued multitarget tracking, Gaussian mixture reduction is an important topic [10, 38]. Earlier work on Gaussian mixture reduction for target tracking was done in [39, 40]. Several criteria have been used for Gaussian mixture reduction, such as maximum similarity [41], Euclidean distance, and the Kullback-Leibler divergence [45]. This paper concentrates on the Gaussian mixture reduction of the Gaussian mixture implementation of the PHD filter.

The Gaussian mixture implementation of the PHD filter approximates the PHD by a sum of weighted Gaussian components under multitarget linear Gaussian assumptions [10]. In the Gaussian mixture PHD filter, the PHD is represented by a large number of weighted Gaussian components that are propagated over time. Since the integral of the PHD over the state space is the expected target number, the sum of the weights of the Gaussian components equals the expected target number. The output of the Gaussian mixture PHD filter is the set of weighted Gaussian components. However, the Gaussian mixture PHD filter suffers from a computational problem: the number of Gaussian components grows as time progresses, since the component number increases at both the prediction step and the update step. In fact, the component number increases without bound, so the Gaussian mixture PHD filter is infeasible without a component pruning operation. The goal of this paper is to prune the Gaussian components to make the Gaussian mixture PHD filter feasible. An iterative mixture component pruning algorithm is proposed for the Gaussian mixture PHD filter, in which mixture components are pruned by setting their weights to zero during the iteration procedure.

The remaining parts of this paper are organized as follows. Section 2 describes the component increasing problem in the Gaussian mixture PHD filter. The iterative mixture component pruning algorithm is derived in Section 3. Section 4 is devoted to the simulation study, and the conclusion is given in Section 5.

2. Problem Description

The predictor and corrector of the PHD filter [7, 8] are

(1) \( v_{k|k-1}(x) = \int p_{S,k}(\zeta) f_{k|k-1}(x \mid \zeta)\, v_{k-1}(\zeta)\, d\zeta + \int \beta_{k|k-1}(x \mid \zeta)\, v_{k-1}(\zeta)\, d\zeta + \gamma_k(x), \)

(2) \( v_k(x) = [1 - p_{D,k}(x)]\, v_{k|k-1}(x) + \sum_{z \in Z_k} \frac{\varphi_{z,k}(x)\, v_{k|k-1}(x)}{\kappa_k(z) + \int \varphi_{z,k}(\xi)\, v_{k|k-1}(\xi)\, d\xi}, \)

respectively, where \( v(\cdot) \) is the PHD, \( \gamma_k(x) \) is the birth PHD at time step \( k \), \( \beta_{k|k-1}(\cdot \mid \zeta) \) is the PHD spawned from \( \zeta \) at time step \( k-1 \), \( \kappa_k(z) \) is the clutter PHD, \( p_{S,k}(\zeta) \) is the survival probability, \( p_{D,k}(x) \) is the detection probability, \( \varphi_{z,k}(x) = p_{D,k}(x)\, g_k(z \mid x) \), \( g_k(z \mid x) \) is the single target likelihood, and \( Z_k \) is the set of measurements at time step \( k \).

Under the linear Gaussian assumptions, the Gaussian mixture PHD filter is derived in [10]. The main steps of the Gaussian mixture PHD filter are summarized as follows. If the PHD at time step \( k-1 \) is a Gaussian mixture

(3) \( v_{k-1}(x) = \sum_{i=1}^{J_{k-1}} w_{k-1}^{(i)} N(x; m_{k-1}^{(i)}, P_{k-1}^{(i)}), \)

where \( w \) is the mixture weight, \( N(\cdot) \) is the Gaussian density, \( m \) is the mean, \( P \) is the covariance, and \( J \) is the component number, then the predicted PHD for time step \( k \) is given by

(4) \( v_{k|k-1}(x) = v_{S,k|k-1}(x) + v_{\beta,k|k-1}(x) + \gamma_k(x), \)

where \( \gamma_k \) is the birth PHD

(5) \( \gamma_k(x) = \sum_{i=1}^{J_{\gamma,k}} w_{\gamma,k}^{(i)} N(x; m_{\gamma,k}^{(i)}, P_{\gamma,k}^{(i)}), \)

\( v_{S,k|k-1} \) is the survival PHD

(6) \( v_{S,k|k-1}(x) = p_{S,k} \sum_{j=1}^{J_{k-1}} w_{k-1}^{(j)} N(x; m_{S,k|k-1}^{(j)}, P_{S,k|k-1}^{(j)}), \)

with predicted mean

(7) \( m_{S,k|k-1}^{(j)} = F_{k-1} m_{k-1}^{(j)} \)

and predicted covariance

(8) \( P_{S,k|k-1}^{(j)} = Q_{k-1} + F_{k-1} P_{k-1}^{(j)} F_{k-1}^T, \)

and \( v_{\beta,k|k-1} \) is the spawned PHD

(9) \( v_{\beta,k|k-1}(x) = \sum_{j=1}^{J_{k-1}} \sum_{l=1}^{J_{\beta,k}} w_{k-1}^{(j)} w_{\beta,k}^{(l)} N(x; m_{\beta}^{(j,l)}, P_{\beta}^{(j,l)}), \)

with spawned mean

(10) \( m_{\beta}^{(j,l)} = F_{\beta,k-1}^{(l)} m_{k-1}^{(j)} + d_{\beta,k-1}^{(l)} \)

and spawned covariance

(11) \( P_{\beta}^{(j,l)} = Q_{\beta,k-1}^{(l)} + F_{\beta,k-1}^{(l)} P_{k-1}^{(j)} (F_{\beta,k-1}^{(l)})^T. \)
If formula (4) is rewritten in the compact Gaussian mixture form

(12) \( v_{k|k-1}(x) = \sum_{i=1}^{J_{k|k-1}} w_{k|k-1}^{(i)} N(x; m_{k|k-1}^{(i)}, P_{k|k-1}^{(i)}), \)

then the posterior PHD at time step \( k \) is

(13) \( v_k(x) = (1 - p_{D,k})\, v_{k|k-1}(x) + \sum_{z \in Z_k} v_{D,k}(x; z), \)

where \( v_{D,k} \) is the detected PHD

(14) \( v_{D,k}(x; z) = \sum_{j=1}^{J_{k|k-1}} w_k^{(j)}(z)\, N(x; m_{k|k}^{(j)}(z), P_{k|k}^{(j)}), \)

\( w_k^{(j)} \) is the updated weight

(15) \( w_k^{(j)}(z) = \frac{p_{D,k}\, w_{k|k-1}^{(j)} q_k^{(j)}(z)}{\kappa_k(z) + p_{D,k} \sum_{l=1}^{J_{k|k-1}} w_{k|k-1}^{(l)} q_k^{(l)}(z)}, \)

with \( q_k^{(j)}(z) = N(z; H_k m_{k|k-1}^{(j)}, H_k P_{k|k-1}^{(j)} H_k^T + R_k) \) the predicted measurement likelihood, \( m_{k|k}^{(j)}(z) \) is the updated mean

(16) \( m_{k|k}^{(j)}(z) = m_{k|k-1}^{(j)} + K_k^{(j)}(z - H_k m_{k|k-1}^{(j)}), \)

\( P_{k|k}^{(j)} \) is the updated covariance

(17) \( P_{k|k}^{(j)} = [I - K_k^{(j)} H_k] P_{k|k-1}^{(j)}, \)

and \( K_k^{(j)} \) is the gain

(18) \( K_k^{(j)} = P_{k|k-1}^{(j)} H_k^T (H_k P_{k|k-1}^{(j)} H_k^T + R_k)^{-1}. \)
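As a concrete illustration of the single-component update of formulas (16)-(18), the following minimal sketch (not the paper's code; the matrix shapes are assumptions for a position-only measurement) performs one Kalman-style update:

```python
import numpy as np

def update_component(m, P, z, H, R):
    """Kalman-style update of one predicted Gaussian component,
    following formulas (16)-(18): returns the updated mean and covariance."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # gain, formula (18)
    m_upd = m + K @ (z - H @ m)                # updated mean, formula (16)
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P   # updated covariance, formula (17)
    return m_upd, P_upd
```

For example, a two-dimensional state (position, velocity) with a position-only measurement uses `H = [[1, 0]]`; the updated mean moves halfway toward the measurement when the prior and measurement variances are equal.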

It can be seen from formula (4) that the component number increases from \( J_{k-1} \) to \( J_{k|k-1} = J_{k-1}(1 + J_{\beta,k}) + J_{\gamma,k} \) at the prediction step. It is obvious from formula (13) that the component number increases from \( J_{k|k-1} \) to \( J_k = J_{k|k-1}(1 + |Z_k|) \) at the update step. Hence, the number of Gaussian components \( J_k \) representing the PHD \( v_k \) at time step \( k \) in the Gaussian mixture PHD filter is

(19) \( J_k = (J_{k-1}(1 + J_{\beta,k}) + J_{\gamma,k})(1 + |Z_k|), \)

where \( J_{k-1} \) is the number of components of the PHD \( v_{k-1} \) at time step \( k-1 \). By formula (19), the component number grows as \( O(J_{k-1}|Z_k|) \); in particular, the update step alone contributes \( (J_{k-1}(1 + J_{\beta,k}) + J_{\gamma,k})|Z_k| \) new components. Indeed, the number of Gaussian components increases without bound, so the computation of the Gaussian mixture PHD filter becomes intractable after several time steps. Therefore, it is necessary to reduce the number of components to keep the Gaussian mixture PHD filter feasible. The goal of this paper is to prune the Gaussian mixture components to reduce the component number in the Gaussian mixture PHD filter.
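To make the growth of formula (19) concrete, the following counting sketch (with illustrative values for \( J_{\beta,k} \), \( J_{\gamma,k} \), and \( |Z_k| \) that are assumptions, not the paper's settings) tracks the component number over a few scans:

```python
def predicted_count(J_prev, J_beta, J_gamma):
    # Prediction step of formula (4): survival, spawned, and birth components.
    return J_prev * (1 + J_beta) + J_gamma

def updated_count(J_pred, n_meas):
    # Update step of formula (13): missed-detection terms plus one term per measurement.
    return J_pred * (1 + n_meas)

J = 1
history = []
for _ in range(3):  # three scans with J_beta = 1, J_gamma = 1, |Z_k| = 10
    J = updated_count(predicted_count(J, 1, 1), 10)
    history.append(J)
```

Starting from a single component, three scans already produce tens of thousands of components, which is why pruning is indispensable.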

3. Iterative Pruning Algorithm

For simplicity, the time index \( k \) is omitted and \( M = J_{k|k} \) denotes the component number. Let \( w_S \) be the sum of the weights of the Gaussian components:

(20) \( w_S = \sum_{j=1}^{M} w_j. \)

In the iterative pruning algorithm, the weights of the Gaussian components are first normalized as \( \{w_1/w_S, \ldots, w_M/w_S\} \), so that

(21) \( \sum_{j=1}^{M} w_j = 1. \)

Let \( \theta_j = \{m^{(j)}, P^{(j)}\} \) denote the parameters of the \( j \)th Gaussian component, where \( m^{(j)} \) and \( P^{(j)} \) are the mean and covariance, respectively. Then the whole parameter set of the \( M \) Gaussian components is \( \theta = \{w_1, \ldots, w_M, \theta_1, \ldots, \theta_M\} \).

The entropy distribution of the mixture weights is adopted as the prior of \( \theta \):

(22) \( p(\theta) \propto \exp(-H(w_1, \ldots, w_M)), \)

where \( H(w_1, \ldots, w_M) = -\sum_{j=1}^{M} w_j \log w_j \) is the entropy measure [46, 47]. The goal of this choice of prior distribution, which depends only on the mixture weights, is to reduce the mixture components by adjusting the mixture weights during the iteration procedure. The log-likelihood of the measurements \( Z = \{z_1, \ldots, z_n\} \) given the mixture parameters is

(23) \( \log p(Z \mid \theta) = \sum_{i=1}^{n} \log \sum_{j=1}^{M} w_j g(z_i \mid \theta_j), \)

where \( g(z \mid \theta_j) \) is the single target likelihood of the \( j \)th component. The MAP estimate of \( \theta \) is

(24) \( \hat{\theta} = \arg\max_{\theta} \{ \log p(Z \mid \theta) + \log p(\theta) \}. \)

For the mixture weight \( w_j \), the MAP estimate can be computed by setting the derivative of the log-posterior to zero:

(25) \( \frac{\partial}{\partial w_j} ( \log p(Z \mid \theta) + \log p(\theta) ) = 0. \)

Under the constraint (21), the MAP estimate of \( w_j \) maximizes the Lagrangian:

(26) \( \frac{\partial}{\partial w_j} \Big( \log p(Z \mid \theta) + \log p(\theta) + \lambda \Big( \sum_{j=1}^{M} w_j - 1 \Big) \Big) = 0, \)

where \( \lambda \) is the Lagrange multiplier. Substituting formulas (22) and (23) into formula (26) gives

(27) \( \frac{1}{w_j} \sum_{i=1}^{n} \omega_j(z_i) + \log w_j + \lambda + 1 = 0, \)

where \( \omega_j(z) \) represents the membership of measurement \( z \) in the \( j \)th mixture component:

(28) \( \omega_j(z) = \frac{w_j g(z \mid \theta_j)}{\sum_{l=1}^{M} w_l g(z \mid \theta_l)}. \)
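For scalar measurements and components, the membership of formula (28) can be sketched as follows (illustrative code under a one-dimensional Gaussian assumption, not the paper's implementation):

```python
import math

def gaussian_pdf(z, mean, var):
    """Scalar Gaussian density N(z; mean, var)."""
    return math.exp(-0.5 * (z - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def responsibilities(z, weights, means, variances):
    """Membership omega_j(z) of measurement z in each component, formula (28)."""
    likes = [w * gaussian_pdf(z, m, v)
             for w, m, v in zip(weights, means, variances)]
    total = sum(likes)
    return [l / total for l in likes]
```

By construction the memberships of one measurement sum to one over the components, and a measurement equidistant from two equally weighted components is split evenly.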

Formula (27) is a transcendental equation in \( w_j \). We solve it using the Lambert W function [48], the inverse mapping satisfying \( W(y) e^{W(y)} = y \), and therefore \( \log W(y) + W(y) = \log y \). The Lambert W function is in general multivalued. For real \( y \): if \( y < -1/e \), \( W(y) \) takes complex values; if \( -1/e \le y < 0 \), \( W(y) \) has two possible real values; and if \( y > 0 \), \( W(y) \) has one real value. Thus, for the Lambert W function,

(29) \( -W(y) - \log W(y) + \log y = 0. \)

Setting \( y = e^x \), formula (29) can be rewritten as

(30) \( -W(e^x) - \log W(e^x) + x = 0. \)

In formula (27), define

(31) \( \omega_j = \sum_{i=1}^{n} \omega_j(z_i). \)

Substituting \( W(e^x) = -\omega_j / w_j \), that is, \( w_j = -\omega_j / W(e^x) \), into formula (30) yields

(32) \( \frac{\omega_j}{w_j} + \log w_j + x - \log(-\omega_j) = 0. \)

Comparing formula (32) with formula (27), (32) reduces to (27) by setting \( x = 1 + \lambda + \log(-\omega_j) \):

(33) \( \frac{\omega_j}{w_j} + \log w_j + 1 + \lambda = 0. \)

Consequently,

(34) \( w_j = -\frac{\omega_j}{W(e^{1 + \lambda + \log(-\omega_j)})} = -\frac{\omega_j}{W(-\omega_j e^{1+\lambda})}. \)

Formulas (27) and (34) constitute an iterative procedure for the MAP estimates of \( \{w_1, \ldots, w_M\} \): (1) given \( \lambda \), compute \( \{w_1, \ldots, w_M\} \) by formula (34); (2) normalize \( \{w_1, \ldots, w_M\} \); (3) given the normalized \( \{w_1, \ldots, w_M\} \), compute \( \lambda \) from formula (27). The iteration stops when the relative change of the log-posterior falls below a given threshold.
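One way to realize the weight update of formula (34) is with a principal-branch Lambert W solved by Newton iteration. The sketch below is illustrative (a library routine such as `scipy.special.lambertw` could be used instead); the pruning rule when the Lambert W argument leaves the real domain is an assumption consistent with Section 3:

```python
import math

def lambert_w(y, tol=1e-12, max_iter=200):
    """Principal branch W0 of the Lambert W function for real y >= -1/e,
    found by Newton iteration on f(w) = w * exp(w) - y."""
    if y < -1.0 / math.e:
        return None  # argument outside the real domain of W0
    w = 0.0 if y > -0.25 else -0.5  # crude initial guess
    for _ in range(max_iter):
        ew = math.exp(w)
        step = (w * ew - y) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            return w
    return w

def update_weight(omega_j, lam):
    """Weight update of formula (34); a component whose Lambert W argument
    falls below -1/e has no real-valued weight and is pruned (weight zero)."""
    arg = -omega_j * math.exp(1.0 + lam)
    W = lambert_w(arg)
    if W is None or W == 0.0:
        return 0.0
    return -omega_j / W
```

A returned positive weight satisfies the stationarity condition (33) exactly, which can be checked by substituting it back.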

At the normalization step of the iteration procedure, if a mixture weight becomes negative, the corresponding component is removed from the mixture by setting its weight to zero. A removed mixture component is no longer considered when the log-posterior is computed in the following iterations. The weights of the surviving mixture components are normalized at the end of this step.

The entropy distribution of the mixture weights takes effect during the iterative procedure. The weights of components that contribute negligibly to the PHD become smaller and smaller from iteration to iteration, since the entropy prior drives the parameter estimates toward low entropy. This low-entropy tendency also promotes competition among mixture components with similar parameters, which can then be merged into one mixture component with a larger weight.

For a mixture component with nonzero weight \( w_j \), the mean \( m^{(j)} \) and covariance \( P^{(j)} \) are updated by

(35) \( m^{(j)} = (\omega_j)^{-1} \sum_{i=1}^{n} \omega_j(z_i)\, z_i, \)

(36) \( P^{(j)} = (\omega_j)^{-1} \sum_{i=1}^{n} \omega_j(z_i)\, (z_i - m^{(j)})(z_i - m^{(j)})^T. \)
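In the scalar case, formulas (35)-(36) reduce to a responsibility-weighted mean and variance, which can be sketched as (illustrative code, not the paper's implementation):

```python
def reestimate(zs, resp):
    """Responsibility-weighted mean and variance of one component:
    scalar versions of formulas (35) and (36)."""
    omega = sum(resp)  # omega_j of formula (31)
    mean = sum(z * r for z, r in zip(zs, resp)) / omega
    var = sum((z - mean) ** 2 * r for z, r in zip(zs, resp)) / omega
    return mean, var
```

With equal responsibilities the update degenerates to the ordinary sample mean and (population) variance of the assigned measurements.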

The main steps of iterative mixture component pruning algorithm are summarized in Algorithm 1.

Algorithm 1: Iterative pruning algorithm.

(1)  normalize w 1 , , w M by formula (20).

(2)    t = 0 .

(3)    t = t + 1 .

(4)  for   i = 1 , , n   do

(5)         for   j = 1 , , M   do

(6)         compute ω j ( z i ) by formula (28).

(7)         end for

(8)  end for

(9)  for   j = 1 , , M   do

(10)   compute ω j by formula (31).

(11) end for

(12) for   j = 1 , , M   do

(13)   compute w j by formula (34).

(14) end for

(15) for   j = 1 , , M   do

(16)       if   w j < 0   then

(17)       for   l = j , , M - 1   do

(18)        w l = w l + 1 .

(19)        m ( l ) = m ( l + 1 ) .

(20)        P ( l ) = P ( l + 1 ) .

(21)       end for

(22)        j = j - 1 .

(23)        M = M - 1 .

(24)       else

(25)       compute m ( j ) by formula (35).

(26)       compute P ( j ) by formula (36).

(27)       end if

(28) end for

(29) normalize w 1 , , w M ;

(30) compute λ by formula (27);

(31) if   log p ( θ ( t ) | Z ) - log p ( θ ( t - 1 ) | Z ) > ε · log p ( θ ( t - 1 ) | Z )   then

(32)   goto step  3;

(33) end if.

4. Simulation Study

A two-dimensional scenario with an unknown and time-varying number of targets is considered to test the proposed iterative mixture component pruning algorithm. The surveillance region is [−1000, 1000] × [−1000, 1000] (in meters). The target state consists of position and velocity, while the target measurement is the position. Each target moves according to the dynamics

(37) \( x_k = \begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_{k-1} + \begin{bmatrix} T^2/2 & 0 \\ 0 & T^2/2 \\ T & 0 \\ 0 & T \end{bmatrix} \begin{bmatrix} v_{1,k} \\ v_{2,k} \end{bmatrix}, \)

where \( x_k = [x_{1,k}, x_{2,k}, x_{3,k}, x_{4,k}]^T \) is the target state, \( [x_{1,k}, x_{2,k}]^T \) is the target position, and \( [x_{3,k}, x_{4,k}]^T \) is the target velocity at time step \( k \). The process noise is zero-mean Gaussian white noise with standard deviations \( \sigma_{v_1} = \sigma_{v_2} = 5\ \mathrm{m/s^2} \). The survival probability is \( p_{S,k} = 0.99 \). The number of targets is unknown and varies over the scans. New targets appear spontaneously according to a Poisson point process with PHD function \( \gamma_k = 0.2\, N(\cdot; \bar{x}, Q) \), where

(38) \( \bar{x} = [-400, -400, 0, 0]^T, \quad Q = \mathrm{diag}([100, 100, 25, 25]^T). \)

\( N(\cdot; \bar{x}, Q) \) is the Gaussian density with mean \( \bar{x} \) and covariance \( Q \). The spawned PHD is \( \beta_{k|k-1}(x \mid \zeta) = 0.05\, N(x; \zeta, Q_\beta) \), where \( Q_\beta = \mathrm{diag}([100, 100, 400, 400]^T) \).
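The transition and noise-gain matrices of formula (37) can be constructed as follows (a sketch; treating the sampling period `T` and the noise standard deviation `sigma_v` as parameters is an assumption):

```python
import numpy as np

def cv_model(T, sigma_v=5.0):
    """Constant-velocity model of formula (37): transition matrix F,
    noise gain G, and process noise covariance Q = sigma_v^2 * G G^T."""
    F = np.array([[1.0, 0.0, T,   0.0],
                  [0.0, 1.0, 0.0, T],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    G = np.array([[T * T / 2.0, 0.0],
                  [0.0, T * T / 2.0],
                  [T, 0.0],
                  [0.0, T]])
    Q = sigma_v ** 2 * (G @ G.T)
    return F, G, Q
```

Applying `F` to a state advances each position coordinate by its velocity times `T` while leaving the velocities unchanged.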

Each target is detected with probability \( p_{D,k} = 0.98 \). The target-originated measurement model is

(39) \( y_k = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix} x_k + \begin{bmatrix} w_{1,k} \\ w_{2,k} \end{bmatrix}, \)

where the measurement noise is zero-mean Gaussian white noise with standard deviations \( \sigma_{w_1} = \sigma_{w_2} = 10\ \mathrm{m} \). Clutter is modelled as a Poisson random finite set with intensity

(40) \( \kappa_k(z_k) = \lambda_c \cdot c_k(z_k), \)

where \( \lambda_c \) is the average number of clutter measurements per scan and \( c_k(z) \) is the probability density over the surveillance region. Here \( c_k(z) \) is a uniform density and \( \lambda_c \) is set to 50.

After the mixture reduction, the means of the Gaussian mixture components with weights greater than 0.5 are chosen as the estimates of the multitarget states.

The tracking results in one Monte Carlo trial are presented in Figures 1 and 2. It can be seen from Figures 1 and 2 that the Gaussian mixture PHD filter with the proposed iterative mixture component pruning algorithm is able to detect the spontaneous and spawned targets and estimate the multiple target states.

True traces and estimates of X coordinates.

True traces and estimates of Y coordinates.

The mixture components with weights larger than 0.0005 at the 86th time step, before the pruning operation, in the above Monte Carlo trial are presented in Figure 3. The mixture components with weights larger than 0.01 after the pruning operation are presented in Figure 4. It is obvious that mixture components with similar parameters describing the same PHD peak can be merged into one mixture component.

Components before pruning operation.

Components after pruning operation.

The typical threshold-based mixture component pruning algorithm in [10] is adopted as the comparison algorithm, with weight pruning threshold \( 10^{-5} \), mixture component merging threshold 4, and a maximum allowable number of 100 mixture components. We evaluate the tracking performance of the proposed algorithm against the typical algorithm by the Wasserstein distance [49], defined as

(41) \( d_p(\hat{X}, X) = \Big( \min_{C} \sum_{i=1}^{|\hat{X}|} \sum_{j=1}^{|X|} C_{ij} \| \hat{x}_i - x_j \|_p^p \Big)^{1/p}, \)

where \( \hat{X} \) is the estimated multitarget state set and \( X \) is the true multitarget state set. The minimum is taken over the set of all transportation matrices \( C \) (a transportation matrix \( C \) is one whose entries \( C_{ij} \) satisfy \( C_{ij} \ge 0 \), \( \sum_{j=1}^{|X|} C_{ij} = 1/|\hat{X}| \), and \( \sum_{i=1}^{|\hat{X}|} C_{ij} = 1/|X| \)). The distance is not defined if either \( X \) or \( \hat{X} \) is empty. Figure 5 shows the mean Wasserstein distances of the two algorithms over 100 simulation trials; process noise, measurement noise, and clutter are independently generated in each trial. It can be seen from Figure 5 that the proposed iterative mixture component pruning algorithm is superior to the typical algorithm at most time steps. The proposed algorithm is worse than the typical algorithm when a spawned target is generated or when two or more targets are close to each other: two PHD peaks of close targets may be regarded as one peak by the proposed algorithm, as a result of the low-entropy tendency of the entropy distribution, so some targets are not detected.
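For the special case \( |\hat{X}| = |X| \), the optimal transportation matrix in formula (41) is a permutation matrix scaled by \( 1/n \), so the distance can be evaluated by brute force over permutations. The sketch below assumes scalar states for simplicity (vector states would replace `abs` with a norm) and is illustrative only:

```python
import itertools

def wasserstein_equal(X_hat, X, p=2):
    """Wasserstein distance of formula (41) when |X_hat| == |X|:
    minimize over assignments (permutations), then average and take the p-th root."""
    n = len(X)
    assert len(X_hat) == n and n > 0
    best = min(
        sum(abs(xh - X[i]) ** p for xh, i in zip(X_hat, perm))
        for perm in itertools.permutations(range(n))
    )
    return (best / n) ** (1.0 / p)
```

The brute-force search is exponential in the set size; for larger sets an assignment solver (e.g., the Hungarian algorithm) would be used instead.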

The averaged Wasserstein distances.

Figure 6 shows the target number estimates of the two algorithms. It is obvious that the target number estimates of the proposed algorithm are closer to the ground truth than those of the typical algorithm at most time steps.

Estimates of target numbers.

Figure 7 shows the mean component numbers of the two algorithms after the component pruning operations over 100 simulation trials. The component numbers of the proposed algorithm are smaller than those of the typical algorithm.

The averaged component numbers.

The case of a low signal-to-noise ratio (SNR) is also considered for further comparison of the two algorithms; \( \lambda_c \) is set to 80 in this case. The corresponding Wasserstein distances, target number estimates, and component numbers are presented in Figures 8, 9, and 10. It can be seen that the proposed iterative mixture component pruning algorithm is also superior to the typical threshold-based mixture component pruning algorithm in the low SNR case.

The averaged Wasserstein distances under low SNR.

Estimates of target numbers under low SNR.

The averaged component numbers under low SNR.

5. Conclusion

An iterative mixture component pruning algorithm has been proposed for the Gaussian mixture PHD filter. The entropy distribution of the mixture weights is used as the prior distribution of the mixture parameters, and the update formula of the mixture weights is derived by means of a Lagrange multiplier and the Lambert W function. When a mixture weight becomes negative during the iteration procedure, the corresponding mixture component is pruned by setting the weight to zero. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical threshold-based mixture component pruning algorithm at most time steps.

Conflict of Interests

The author declares that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant no. 61304261), the Senior Professionals Scientific Research Foundation of Jiangsu University (Grant no. 12JDG076), and a Project Funded by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).

References

1. G. W. Pulford, "Taxonomy of multiple target tracking methods," IEE Proceedings—Radar, Sonar and Navigation, vol. 152, no. 5, pp. 291–304, 2005.
2. S. Blackman and R. F. Popoli, Design and Analysis of Modern Tracking Systems, Artech House, Boston, Mass, USA, 1999.
3. Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, Yaakov Bar-Shalom, Storrs, Conn, USA, 1995.
4. Y. Bar-Shalom, "Tracking in a cluttered environment with probabilistic data association," Automatica, vol. 11, no. 5, pp. 451–460, 1975.
5. Y. Bar-Shalom, "Tracking methods in a multitarget environment," IEEE Transactions on Automatic Control, vol. 23, no. 4, pp. 618–626, 1978.
6. K. C. Chang and Y. Bar-Shalom, "Joint probabilistic data association for multitarget tracking with possibly unresolved measurements and maneuvers," IEEE Transactions on Automatic Control, vol. 29, no. 7, pp. 585–594, 1984.
7. R. P. S. Mahler, "Multi-target Bayes filtering via first-order multi-target moments," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152–1178, 2003.
8. R. P. S. Mahler, Statistical Multisource-Multitarget Information Fusion, Artech House, Norwood, Mass, USA, 2007.
9. O. Erdinc, P. Willett, and Y. Bar-Shalom, "The bin-occupancy filter and its connection to the PHD filters," IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4232–4246, 2009.
10. B.-N. Vo and W.-K. Ma, "The Gaussian mixture probability hypothesis density filter," IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4091–4104, 2006.
11. S. A. Pasha, B.-N. Vo, and H. D. Tuan, "A Gaussian mixture PHD filter for jump Markov system models," IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 3, pp. 919–936, 2009.
12. H. Sidenbladh, "Multi-target particle filtering for the probability hypothesis density," in Proceedings of the 6th International Conference on Information Fusion, vol. 2, pp. 800–806, Cairns, Australia, July 2003.
13. B.-N. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo implementation of the PHD filter for multi-target tracking," in Proceedings of the International Conference on Information Fusion, pp. 792–799, Cairns, Australia, 2003.
14. T. Zajic and R. P. S. Mahler, "A particle-systems implementation of the PHD multi-target tracking filter," in Signal Processing, Sensor Fusion, and Target Recognition XII, I. Kadar, Ed., vol. 5096 of Proceedings of SPIE, pp. 291–299, 2003.
15. B.-N. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo methods for multi-target filtering with random finite sets," IEEE Transactions on Aerospace and Electronic Systems, vol. 41, no. 4, pp. 1224–1245, 2005.
16. N. Whiteley, S. Singh, and S. Godsill, "Auxiliary particle implementation of probability hypothesis density filter," IEEE Transactions on Aerospace and Electronic Systems, vol. 46, pp. 1437–1454, 2010.
17. D. E. Clark and B.-N. Vo, "Convergence analysis of the Gaussian mixture PHD filter," IEEE Transactions on Signal Processing, vol. 55, no. 4, pp. 1204–1212, 2007.
18. D. E. Clark and J. Bell, "Convergence results for the particle PHD filter," IEEE Transactions on Signal Processing, vol. 54, no. 7, pp. 2252–2261, 2006.
19. A. M. Johansen, S. S. Singh, A. Doucet, and B.-N. Vo, "Convergence of the SMC implementation of the PHD filter," Methodology and Computing in Applied Probability, vol. 8, no. 2, pp. 265–291, 2006.
20. R. P. S. Mahler, "PHD filters of higher order in target number," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 4, pp. 1523–1543, 2007.
21. B.-T. Vo, B.-N. Vo, and A. Cantoni, "Analytic implementations of the cardinalized probability hypothesis density filter," IEEE Transactions on Signal Processing, vol. 55, no. 7, part 2, pp. 3553–3567, 2007.
22. B.-T. Vo, B.-N. Vo, and A. Cantoni, "The cardinality balanced multi-target multi-Bernoulli filter and its implementations," IEEE Transactions on Signal Processing, vol. 57, no. 2, pp. 409–423, 2009.
23. K. Punithakumar, T. Kirubarajan, and A. Sinha, "Multiple-model probability hypothesis density filter for tracking maneuvering targets," IEEE Transactions on Aerospace and Electronic Systems, vol. 44, no. 1, pp. 87–98, 2008.
24. D. Dunne and T. Kirubarajan, "Multiple model multi-Bernoulli filters for manoeuvering targets," IEEE Transactions on Aerospace and Electronic Systems, vol. 49, no. 4, pp. 2627–2692, 2013.
25. D. E. Clark and J. Bell, "Bayesian multiple target tracking in forward scan sonar images using the PHD filter," IEE Proceedings—Radar, Sonar and Navigation, vol. 152, no. 5, pp. 327–334, 2005.
26. D. E. Clark, I. T. Ruiz, and Y. Petillot, "Particle PHD filter multiple target tracking in sonar image," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 1, pp. 409–416, 2007.
27. E. Maggio, M. Taj, and A. Cavallaro, "Efficient multitarget visual tracking using random finite sets," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 8, pp. 1016–1027, 2008.
28. Y. D. Wang, J. K. Wu, and A. A. Kassim, "Data-driven probability hypothesis density filter for visual tracking," IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 8, pp. 1085–1095, 2008.
29. E. Maggio and A. Cavallaro, "Learning scene context for multiple object tracking," IEEE Transactions on Image Processing, vol. 18, no. 8, pp. 1873–1884, 2009.
30. K. Gilholm and D. Salmond, "Spatial distribution model for tracking extended objects," IEE Proceedings—Radar, Sonar and Navigation, vol. 152, no. 5, pp. 364–371, 2005.
31. R. P. S. Mahler, "PHD filters for nonstandard targets, I: extended targets," in Proceedings of the 12th International Conference on Information Fusion, pp. 915–921, IEEE, 2009.
32. K. Granstrom, C. Lundquist, and U. Orguner, "A Gaussian mixture PHD filter for extended target tracking," in Proceedings of the International Conference on Information Fusion, Edinburgh, UK, 2010.
33. K. Granstrom, C. Lundquist, and U. Orguner, "Extended target tracking using a Gaussian-mixture PHD filter," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 4, pp. 3268–3286, 2012.
34. K. Granstrom and U. Orguner, "A PHD filter for tracking multiple extended targets using random matrices," IEEE Transactions on Signal Processing, vol. 60, no. 11, pp. 5657–5671, 2012.
35. C. Lundquist, K. Granstrom, and U. Orguner, "An extended target CPHD filter and a gamma Gaussian inverse Wishart implementation," IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 3, pp. 472–483, 2013.
36. K. Granstrom and U. Orguner, "On spawning and combination of extended/group targets modeled with random matrices," IEEE Transactions on Signal Processing, vol. 61, no. 3, pp. 678–692, 2013.
37. F. Lian, C. Han, J. Liu, and H. Chen, "Convergence results for the Gaussian mixture implementation of the extended-target PHD filter and its extended Kalman filtering approximation," Journal of Applied Mathematics, vol. 2012, Article ID 141727, 20 pages, 2012.
38. B.-N. Vo, B.-T. Vo, and R. P. S. Mahler, "Closed-form solutions to forward-backward smoothing," IEEE Transactions on Signal Processing, vol. 60, no. 1, pp. 2–17, 2012.
39. D. J. Salmond, "Mixture reduction algorithms for target tracking in clutter," in Signal and Data Processing of Small Targets, vol. 1305 of Proceedings of SPIE, pp. 434–445, 1990.
40. D. J. Salmond, "Mixture reduction algorithms for point and extended object tracking in clutter," IEEE Transactions on Aerospace and Electronic Systems, vol. 45, no. 2, pp. 667–686, 2009.
41. J. E. Harmse, "Reduction of Gaussian mixture models by maximum similarity," Journal of Nonparametric Statistics, vol. 22, no. 5-6, pp. 703–709, 2010.
42. M. F. Huber and U. D. Hanebeck, "Progressive Gaussian mixture reduction," in Proceedings of the 11th International Conference on Information Fusion, pp. 1–8, Cologne, Germany, 2008.
43. O. C. Schrempf, O. Feiermann, and U. D. Hanebeck, "Optimal mixture approximation of the product of mixtures," in Proceedings of the 8th International Conference on Information Fusion, pp. 85–92, Philadelphia, Pa, USA, 2005.
44. D. Schieferdecker and M. F. Huber, "Gaussian mixture reduction via clustering," in Proceedings of the 12th International Conference on Information Fusion (FUSION '09), pp. 1536–1543, Seattle, Wash, USA, July 2009.
45. A. R. Runnalls, "Kullback-Leibler approach to Gaussian mixture reduction," IEEE Transactions on Aerospace and Electronic Systems, vol. 43, no. 3, pp. 989–999, 2007.
46. A. Caticha and R. Preuss, "Maximum entropy and Bayesian data analysis: entropic prior distributions," Physical Review E, vol. 70, no. 4, Article ID 046127, 2004.
47. M. Brand, "Structure learning in conditional probability models via an entropic prior and parameter extinction," Neural Computation, vol. 11, no. 5, pp. 1155–1182, 1999.
48. R. M. Corless, G. H. Gonnet, D. E. G. Hare, and D. E. Knuth, "On the Lambert W function," Advances in Computational Mathematics, vol. 5, no. 1, pp. 329–359, 1996.
49. J. Hoffman and R. P. S. Mahler, "Multitarget miss distance via optimal assignment," IEEE Transactions on Systems, Man, and Cybernetics—Part A, vol. 34, no. 3, pp. 327–336, 2004.