
Slide 1: Random Set / Point Process in Multi-Target Tracking
Ba-Ngu Vo, EEE Department, University of Melbourne, Australia
SAMSI, RTP, NC, USA, 8 September 2008
Collaborators (in no particular order): Mahler R., Singh S., Doucet A., Ma W.K., Panta K., Clark D., Vo B.T., Cantoni A., Pasha A., Tuan H.D., Baddeley A., Zuyev S., Schuhmacher D.

Slide 2: Outline
- The Bayes (single-target) filter
- Multi-target tracking
- System representation
- Random finite sets & Bayesian multi-target filtering
- Tractable multi-target filters: Probability Hypothesis Density (PHD) filter, Cardinalized PHD filter, Multi-Bernoulli filter
- Conclusions

Slide 3: The Bayes (Single-Target) Filter (System Model)
The state vector x_k describes target motion in the state space; z_k is the measurement in the observation space.
- Markov transition density: f_{k|k-1}(x_k | x_{k-1})
- Measurement likelihood: g_k(z_k | x_k)
Objective: given the measurement history (z_1, ..., z_k), compute the posterior (filtering) pdf of the state, p_k(x_k | z_{1:k}).

Slide 4: The Bayes (Single-Target) Filter (Recursion)
prediction:
p_{k|k-1}(x_k | z_{1:k-1}) = INT f_{k|k-1}(x_k | x_{k-1}) p_{k-1}(x_{k-1} | z_{1:k-1}) dx_{k-1}
data update:
p_k(x_k | z_{1:k}) = K^{-1} g_k(z_k | x_k) p_{k|k-1}(x_k | z_{1:k-1}), with K the normalizing constant.

Slide 5: The Bayes (Single-Target) Filter (Special Cases)
- Kalman filter: the densities are Gaussian, N(.; m_{k-1}, P_{k-1}) -> N(.; m_{k|k-1}, P_{k|k-1}) -> N(.; m_k, P_k).
- Particle filter: the densities are weighted particle sets, {w_{k-1}^{(i)}, x_{k-1}^{(i)}}_{i=1}^{N} -> {w_{k|k-1}^{(i)}, x_{k|k-1}^{(i)}}_{i=1}^{N} -> {w_k^{(i)}, x_k^{(i)}}_{i=1}^{N}.
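In the linear Gaussian case the recursion reduces to the closed-form Kalman filter. A scalar sketch; the model parameters (F, Q, H, R) are toy assumptions, not tied to any example in the deck:

```python
# One scalar Kalman filter cycle: Gaussian prediction, then Gaussian update.
def kalman_step(m, P, z, F=1.0, Q=0.1, H=1.0, R=0.5):
    # prediction: N(.; m_{k|k-1}, P_{k|k-1})
    m_pred = F * m
    P_pred = F * P * F + Q
    # data update: N(.; m_k, P_k)
    S = H * P_pred * H + R               # innovation covariance
    K = P_pred * H / S                   # Kalman gain
    m_new = m_pred + K * (z - H * m_pred)
    P_new = (1.0 - K * H) * P_pred
    return m_new, P_new

m_post, P_post = kalman_step(0.0, 1.0, z=1.0)
```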

Slide 6: Multi-Target Tracking

Slide 7: Multi-Target Tracking
Targets move in the state space and produce observations in the observation space; e.g. the multi-target state X_{k-1} contains 5 targets while X_k contains 3.
Objective: jointly estimate the number of targets and their states.
Challenges:
- Random number of targets and measurements
- Detection uncertainty, clutter, association uncertainty

Slide 8: System Representation
How can we mathematically represent the multi-target state, e.g. 2 targets?
Usual practice: stack the individual states into one large vector.
Problem: an estimate can be correct while the vector estimation error is large, because the ordering of targets in the stacked vector is arbitrary.
Remedy: represent the multi-target state as a finite set.

Slide 9: System Representation
What are the estimation errors?
- True state: 2 targets; estimated state: no target.
- True state: 2 targets; estimated state: 1 target.

Slide 10: System Representation
The error between an estimate and the true state (the miss-distance) is fundamental in estimation, filtering, and control.
- Well understood for a single target: Euclidean distance, MSE, etc.
- In the multi-target case it depends on the state representation.
For the multi-target state:
- the vector representation does not admit a multi-target miss-distance;
- the finite-set representation does: the miss-distance becomes a distance between two finite sets.
In fact the required distance is a distance between sets, not vectors.

Slide 11: System Representation
The number of measurements and their values are random variables, and the ordering of the measurements is not relevant. The multi-target measurement is therefore also represented by a finite set.

Slide 12: RFS & Bayesian Multi-Target Filtering
Targets form a target set X; observations form an observed set Z.
Reconceptualize multi-target tracking as a generalized single-target problem [Mahler 94].
Bayesian approach: model the state and observation as random finite sets [Mahler 94], and propagate
p_{k-1}(X_{k-1} | Z_{1:k-1}) -> p_{k|k-1}(X_k | Z_{1:k-1}) (prediction) -> p_k(X_k | Z_{1:k}) (data update).
This requires suitable notions of density and integration on the space of finite sets.

Slide 13: RFS & Bayesian Multi-Target Filtering
A random finite set (random point pattern) on the state space E can equivalently be viewed as a point process, i.e. a random counting measure N(S) = |X n S| for regions S of E.

Slide 14: RFS & Bayesian Multi-Target Filtering
Let F(E) denote the collection of finite subsets of the state space E.
- Point process theory (Choquet 1968): probability distribution P(T) = Prob(Sigma in T) for T subset of F(E), with probability density p: F(E) -> [0, inf) such that P(T) = INT_T p(X) mu(dX) (a conventional integral).
- Mahler's Finite Set Statistics (1994): belief distribution beta(T) = Prob(Sigma subset of T) for T subset of E, with belief density f: F(E) -> [0, inf) such that beta(T) = INT_T f(X) deltaX (a set integral).
The two formulations are related [Vo et al. 2005].

Slide 15: Multi-Target Motion Model
Evolution of each element x of a given multi-target state X_{k-1}: survival and motion, death, spawning, and spontaneous creation.
X_k = S_{k|k-1}(X_{k-1}) u B_{k|k-1}(X_{k-1}) u Gamma_k
(surviving targets, spawned targets, spontaneous births), summarized by the multi-target transition density f_{k|k-1}(X_k | X_{k-1}).

Slide 16: Multi-Target Observation Model
Observation process for each element x of a given multi-target state X_k: each target is either detected (generating a measurement z with likelihood g(z|x)) or missed, and the sensor also reports clutter.
Z_k = Theta_k(X_k) u K_k(X_k)
(target-generated measurements and clutter), summarized by the multi-target likelihood g_k(Z_k | X_k).

Slide 17: Multi-Target Bayes Filter
p_{k-1}(X_{k-1} | Z_{1:k-1}) -> p_{k|k-1}(X_k | Z_{1:k-1}) (prediction) -> p_k(X_k | Z_{1:k}) (data update)
- Computationally intractable in general; no closed-form solution.
- Particle (SMC) implementations exist [Vo, Singh & Doucet 03, 05; Sidenbladh 03; Vihola 05; Ma et al. 06] but are restricted to a very small number of targets.

Slide 18: Particle Multi-Target Bayes Filter (Algorithm)
for i = 1:N            % initialise
    sample; compute weight
end; normalise weights
for k = 1:k_max
    for i = 1:N        % update
        sample; update weight
    end
    normalise weights; resample; MCMC step
end
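The sample / weight / normalise / resample cycle above is the standard SIR pattern. A single-target bootstrap sketch of that cycle (not the multi-target algorithm itself; the near-static dynamics and Gaussian likelihood are toy assumptions):

```python
import math
import random

def sir_step(particles, weights, propagate, likelihood):
    """One sample / weight / normalise / resample cycle (bootstrap proposal)."""
    # Sample: move each particle through the dynamics
    particles = [propagate(x) for x in particles]
    # Update: reweight by the measurement likelihood, then normalise
    weights = [w * likelihood(x) for w, x in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: multinomial resampling back to uniform weights
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
N = 500
particles = [random.gauss(0.0, 2.0) for _ in range(N)]
weights = [1.0 / N] * N
# Toy model: near-static dynamics, Gaussian likelihood centred on z = 3
particles, weights = sir_step(
    particles, weights,
    propagate=lambda x: x + random.gauss(0.0, 0.2),
    likelihood=lambda x: math.exp(-0.5 * (3.0 - x) ** 2))
estimate = sum(w * x for w, x in zip(weights, particles))
```

With a N(0, 2^2) prior and a unit-variance likelihood at z = 3, the exact posterior mean is 2.4, so the particle estimate should land near there.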

Slide 19: The PHD Filter
The multi-target Bayes filter is very expensive. Analogy:
- Single-object: the state of the system is a random vector; a first-moment filter (e.g. the alpha-beta filter) propagates only the mean.
- Multi-object: the state of the system is a random set; the corresponding first-moment filter is the PHD filter.

Slide 20: The Probability Hypothesis Density
The PHD (intensity function) v of an RFS Sigma on the state space:
- v(x_0) is the density of the expected number of objects at x_0;
- INT_S v(x) dx = expected number of objects in S = mean of N(S), the random counting measure at S.
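To make "density of expected number" concrete, a toy 1-D Gaussian-mixture intensity can be integrated numerically; the mixture weights (1.2 and 0.8, i.e. about two targets worth of total mass) are invented for illustration:

```python
import math

def gaussian(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

# Toy 1-D intensity: mass 1.2 around x = 0 and mass 0.8 around x = 5.
def v(x):
    return 1.2 * gaussian(x, 0.0, 1.0) + 0.8 * gaussian(x, 5.0, 1.0)

dx = 0.01
# Expected number of objects in S = [-3, 3]: Riemann sum of v over S
n_in_S = sum(v(-3 + i * dx) * dx for i in range(600))
# Expected number over (effectively) the whole line = sum of mixture weights
n_total = sum(v(-20 + i * dx) * dx for i in range(4000))
```

Note that v is not a probability density: its total integral is 2.0, the expected number of objects, not 1.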

Slide 21: The PHD Filter
The PHD filter propagates the intensity instead of the full multi-target posterior:
- multi-object Bayes filter: p_{k-1}(X_{k-1}|Z_{1:k-1}) -> p_{k|k-1}(X_k|Z_{1:k-1}) -> p_k(X_k|Z_{1:k})
- PHD filter: v_{k-1}(x_{k-1}|Z_{1:k-1}) -> v_{k|k-1}(x_k|Z_{1:k-1}) (PHD prediction) -> v_k(x_k|Z_{1:k}) (PHD update)
Avoids data association!

Slide 22: PHD Prediction
v_{k|k-1}(x_k | Z_{1:k-1}) = INT phi_{k|k-1}(x_k, x) v_{k-1}(x | Z_{1:k-1}) dx + gamma_k(x_k)
where
- gamma_k = intensity of Gamma_k, the term for spontaneous object births;
- phi_{k|k-1}(x_k, x) = e_{k|k-1}(x) f_{k|k-1}(x_k | x) + beta_{k|k-1}(x_k | x), with e_{k|k-1} the probability of object survival, f_{k|k-1} the Markov transition density, and beta_{k|k-1} the intensity of B_k, the term for objects spawned by existing objects.
Predicted expected number of objects: N_{k|k-1} = INT v_{k|k-1}(x | Z_{1:k-1}) dx.

Slide 23: PHD Update
v_k(x_k | Z_{1:k}) = [ 1 - p_{D,k}(x_k) + SUM_{z in Z_k} p_{D,k}(x_k) g_k(z | x_k) / (kappa_k(z) + D_k(z)) ] v_{k|k-1}(x_k | Z_{1:k-1})
D_k(z) = INT p_{D,k}(x) g_k(z | x) v_{k|k-1}(x | Z_{1:k-1}) dx
where kappa_k is the intensity of false alarms, p_{D,k} the probability of detection, and g_k the sensor likelihood function.
Expected number of objects: N_k = INT v_k(x | Z_{1:k}) dx.
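The update equation can be exercised on a coarse state grid, where the integral D_k(z) becomes a sum over grid cells. A minimal sketch; the two-cell grid, single measurement, and all numeric values are invented:

```python
# PHD update on a state grid: each predicted intensity value is rescaled by
# the missed-detection term plus a sum of detection terms over measurements.
def phd_update(v_pred, pD, g, clutter, Z):
    n = len(v_pred)
    # D_k(z) = sum_x pD(x) g(z|x) v_pred(x)   (grid sum stands in for the integral)
    D = {z: sum(pD[x] * g[x][z] * v_pred[x] for x in range(n)) for z in Z}
    return [
        (1 - pD[x] + sum(pD[x] * g[x][z] / (clutter[z] + D[z]) for z in Z)) * v_pred[x]
        for x in range(n)
    ]

# Toy example: 2 grid cells, one measurement (index 0) favouring cell 0.
v_pred = [0.9, 0.9]
pD = [0.9, 0.9]
g = [{0: 0.8}, {0: 0.1}]       # g[x][z]: likelihood of measurement z from cell x
clutter = {0: 0.1}             # kappa_k(z)
v_post = phd_update(v_pred, pD, g, clutter, [0])
N_post = sum(v_post)           # expected number of targets after the update
```

The single measurement pulls the intensity mass toward cell 0, and the total mass drops toward one target.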

Slide 24: Particle PHD Filter
Sequential MC implementation of the PHD filter [Vo, Singh & Doucet 03, 05], [Sidenbladh 03], [Mahler & Zajic 03]: propagate a particle approximation of v_{k-1} to a particle approximation of v_k.
- The PHD (intensity function) v_k is not a probability density.
- The PHD propagation equation is not a standard Bayesian recursion.
- The particles must be clustered to obtain multi-target estimates.

Slide 25: Particle PHD Filter (Algorithm)
initialise
for k = 1:k_max
    for i = 1:J_k                  % birth particles
        sample; compute weight
    end
    for i = J_k+1 : J_k+L_{k-1}    % surviving particles
        sample; compute weight
    end
    for i = 1 : J_k+L_{k-1}        % update
        update weight
    end
    redistribute total mass among L_k resampled particles
end
Convergence: [Vo, Singh & Doucet 05], [Clark & Bell 06], [Johansen et al. 06]

Slide 26: Gaussian Mixture PHD Filter
A closed-form solution to the PHD recursion exists for the linear Gaussian multi-target model: a Gaussian mixture prior intensity yields Gaussian mixture posterior intensities at all subsequent times.
GM-PHD filter [Vo & Ma 05, 06]: propagate {w_{k-1}^{(i)}, m_{k-1}^{(i)}, P_{k-1}^{(i)}}_{i=1}^{J_{k-1}} -> {w_{k|k-1}^{(i)}, m_{k|k-1}^{(i)}, P_{k|k-1}^{(i)}}_{i=1}^{J_{k|k-1}} -> {w_k^{(i)}, m_k^{(i)}, P_k^{(i)}}_{i=1}^{J_k}.
Extensions: Extended & Unscented Kalman PHD filter [Vo & Ma 06]; Jump Markov PHD filter [Pasha et al. 06]; track continuity [Clark et al. 06].

Slide 27: Cardinalized PHD Filter
Drawback of the PHD filter: high variance of the cardinality estimate.
CPHD filter [Mahler 06, 07]: relax the Poisson assumption to allow an arbitrary cardinality distribution, and jointly propagate the intensity function and the cardinality distribution (via its probability generating function):
- intensity: v_{k-1}(x|Z_{1:k-1}) -> v_{k|k-1}(x|Z_{1:k-1}) -> v_k(x|Z_{1:k})
- cardinality: p_{k-1}(n|Z_{1:k-1}) -> p_{k|k-1}(n|Z_{1:k-1}) -> p_k(n|Z_{1:k})
The PHD update step becomes more complex (higher computational cost).

Slide 28: Gaussian Mixture CPHD Filter
A closed-form solution to the CPHD recursion exists for the linear Gaussian multi-target model [Vo et al. 06, 07]: a Gaussian mixture prior intensity yields Gaussian mixture posterior intensities at all subsequent times, propagated jointly with the cardinality distributions {p_k(n)}_{n=0}^{inf}.
The particle PHD filter can also be extended to the CPHD filter (particle CPHD filter [Vo 08]).

Slide 29: CPHD Filter Demonstration
[Plot: 1000-MC-trial average, GM-CPHD filter vs GM-PHD filter.]

Slide 30: CPHD Filter Demonstration
[Plot: 1000-MC-trial average. Comparison with JPDA: linear dynamics, sigma_v = 5, sigma = 10, 4 targets.]

Slide 31: CPHD Filter Demonstration
[Sonar images.]

Slide 32: MeMBer Filter
(Multi-target Multi-Bernoulli) MeMBer filter [Mahler 07]: approximate the predicted/posterior RFSs by multi-Bernoulli RFSs and propagate the parameter set {(r_k^{(i)}, p_k^{(i)})}_{i=1}^{M_k} (existence probabilities and track densities) through prediction and update, in place of the full multi-object Bayes recursion.
- Valid for low clutter rate and high probability of detection.
- Mahler's update is biased in cardinality; the Cardinality-Balanced MeMBer filter [Vo et al. 07] is unbiased.

Slide 33: Cardinality-Balanced MeMBer Filter (Prediction)
The predicted multi-Bernoulli parameter set is the union of surviving components and birth components:
{(r_{P,k|k-1}^{(i)}, p_{P,k|k-1}^{(i)})}_{i=1}^{M_{k-1}} u {(r_{Gamma,k}^{(i)}, p_{Gamma,k}^{(i)})}_{i=1}^{M_{Gamma,k}}
with
r_{P,k|k-1}^{(i)} = r_{k-1}^{(i)} <p_{k-1}^{(i)}, p_{S,k}>
p_{P,k|k-1}^{(i)} = <f_{k|k-1}(.|.), p_{k-1}^{(i)} p_{S,k}> / <p_{k-1}^{(i)}, p_{S,k}>
where the second term is the component set for object births. [Vo et al. 07]
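The existence-probability part of this prediction is easy to sketch with a particle representation of each track density, where the inner product <p, p_S> becomes a weighted sum over particles. All numbers below are illustrative only:

```python
# Multi-Bernoulli prediction of existence probabilities: each surviving
# component's r is scaled by its expected survival probability <p, pS>,
# and birth components are appended.
def memb_predict(components, births, pS):
    # components: list of (r, track) pairs; track = list of (weight, state)
    predicted = []
    for r, track in components:
        survival = sum(w * pS(x) for w, x in track)   # <p^{(i)}, p_{S,k}>
        # (the track density would also be pushed through the dynamics here)
        predicted.append((r * survival, track))
    return predicted + births

comps = [(0.9, [(0.5, 0.0), (0.5, 1.0)])]             # one track, two particles
births = [(0.1, [(1.0, 5.0)])]                        # one birth component
pred = memb_predict(comps, births, pS=lambda x: 0.99)
expected_n = sum(r for r, _ in pred)                  # predicted mean cardinality
```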

Slide 34: Cardinality-Balanced MeMBer Filter (Update)
The update maps {(r_{k|k-1}^{(i)}, p_{k|k-1}^{(i)})}_{i=1}^{M_{k|k-1}} to legacy (missed-detection) components {(r_{L,k}^{(i)}, p_{L,k}^{(i)})}_{i=1}^{M_{k|k-1}} plus one measurement-updated component (r_{U,k}(z), p_{U,k}(.; z)) per z in Z_k:
r_{L,k}^{(i)} = r_{k|k-1}^{(i)} (1 - <p_{k|k-1}^{(i)}, p_{D,k}>) / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>)
p_{L,k}^{(i)} = p_{k|k-1}^{(i)} (1 - p_{D,k}) / (1 - <p_{k|k-1}^{(i)}, p_{D,k}>)
r_{U,k}(z) = [ SUM_i r_{k|k-1}^{(i)} (1 - r_{k|k-1}^{(i)}) <p_{k|k-1}^{(i)}, p_{D,k} g_k(z|.)> / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>)^2 ] / [ kappa_k(z) + SUM_i r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k} g_k(z|.)> / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>) ]
p_{U,k}(.; z) proportional to SUM_i [ r_{k|k-1}^{(i)} / (1 - r_{k|k-1}^{(i)}) ] p_{k|k-1}^{(i)} p_{D,k} g_k(z|.)
[Vo et al. 07]

Slide 35: Cardinality-Balanced MeMBer Filter
Closed-form (Gaussian mixture) solution and particle implementation [Vo et al. 07]: each track density p^{(i)} is represented either by a Gaussian mixture {w^{(i,j)}, m^{(i,j)}, P^{(i,j)}}_{j=1}^{J} or by weighted particles {w^{(i,j)}, x^{(i,j)}}_{j=1}^{J}.
More useful than PHD filters in highly non-linear problems.

Slide 36: Performance Comparison (Example)
- Up to 10 targets on the scene, with births and deaths.
- 4-D states: x-y position and velocity; linear Gaussian observations of x-y position.
- Dynamics: constant-velocity model, sigma_v = 5 m/s^2, survival probability p_{S,k} = 0.99.
- Observations: additive Gaussian noise, sigma = 10 m, detection probability p_{D,k} = 0.98, uniform Poisson clutter lambda_c = 2.5e-6 m^-2.

Slide 37: Gaussian Implementation
[Plot: 1000-MC-trial average, Cardinality-Balanced recursion vs Mahler's MeMBer recursion.]

Slide 38: Gaussian Implementation
[Plot: 1000-MC-trial average. The CPHD filter has better performance.]

Slide 39: Particle Implementation
[Plot: 1000-MC-trial average. The CB-MeMBer filter has better performance.]

Slide 40: Concluding Remarks
Random finite set framework:
- Rigorous formulation of Bayesian multi-target filtering
- Leads to efficient algorithms
Future research directions:
- Track-before-detect
- Performance measures for multi-object systems
- Numerical techniques for estimation of trajectories
Thank you! For more info please see

Slide 41: References
- D. Stoyan, D. Kendall, and J. Mecke, Stochastic Geometry and its Applications, John Wiley & Sons, 1995.
- D. Daley and D. Vere-Jones, An Introduction to the Theory of Point Processes, Springer-Verlag.
- I. Goodman, R. Mahler, and H. Nguyen, Mathematics of Data Fusion, Kluwer Academic Publishers.
- R. Mahler, An Introduction to Multisource-Multitarget Statistics and Applications, Lockheed Martin Technical Monograph.
- R. Mahler, "Multi-target Bayes filtering via first-order multi-target moments," IEEE Trans. AES, vol. 39, no. 4, pp. 1152-1178.
- B. Vo, S. Singh, and A. Doucet, "Sequential Monte Carlo methods for multi-target filtering with random finite sets," IEEE Trans. AES, vol. 41, no. 4, pp. 1224-1245, 2005.
- B. Vo and W. K. Ma, "The Gaussian mixture PHD filter," IEEE Trans. Signal Processing, vol. 54, no. 11.
- R. Mahler, "A theory of PHD filters of higher order in target number," in I. Kadar (ed.), Signal Processing, Sensor Fusion, and Target Recognition XV, SPIE Defense & Security Symposium, Orlando, April 17-22, 2006.
- B. T. Vo, B. Vo, and A. Cantoni, "Analytic implementations of the Cardinalized Probability Hypothesis Density filter," IEEE Trans. SP, vol. 55, no. 7, part 2.
- D. Clark and J. Bell, "Convergence of the particle-PHD filter," IEEE Trans. SP.
- A. Johansen, S. Singh, A. Doucet, and B. Vo, "Convergence of the SMC implementation of the PHD filter," Methodology and Computing in Applied Probability.
- A. Pasha, B. Vo, H. D. Tuan, and W. K. Ma, "Closed-form solution to the PHD recursion for jump Markov linear models," FUSION.
- D. Clark, K. Panta, and B. Vo, "Tracking multiple targets with the GM-PHD filter," FUSION.
- B. T. Vo, B. Vo, and A. Cantoni, "On multi-Bernoulli approximation of the multi-target Bayes filter," ICIF, Xi'an.
See also:

Slide 42: Representation of Multi-Target State
Optimal Sub-Pattern Assignment (OSPA) metric [Schuhmacher et al. 08], for comparing a set X of m points with a set Y of n points (m <= n):
- Fill X with n - m dummy points located at a distance greater than c from any point in Y.
- Calculate the p-th order Wasserstein distance between the resulting sets.
Efficiently computed using the Hungarian algorithm.
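For small sets the optimal assignment can be found by enumerating permutations instead of running the Hungarian algorithm, which makes the definition transparent. A brute-force sketch for sets of scalars (the cutoff c and order p values are arbitrary examples):

```python
import itertools

def ospa(X, Y, c=10.0, p=2):
    """Brute-force OSPA distance between finite sets of scalars (small sets only)."""
    if len(X) > len(Y):
        X, Y = Y, X                       # ensure |X| = m <= n = |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0                        # both sets empty
    # Best assignment of the m points of X to m points of Y, distances cut off at c
    best = min(
        sum(min(abs(x - y), c) ** p for x, y in zip(X, perm))
        for perm in itertools.permutations(Y, m))
    # The n - m unassigned points pay the cutoff c (the "dummy point" penalty)
    return ((best + (n - m) * c ** p) / n) ** (1.0 / p)

d = ospa([0.0, 1.0], [0.1, 1.1, 30.0], c=10.0, p=2)
```

Here the two close matches cost almost nothing, while the unmatched third point contributes the full cutoff penalty, so d is dominated by the cardinality error.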

Slide 43: Gaussian Mixture PHD Prediction
Gaussian mixture posterior intensity at time k-1:
v_{k-1}(x) = SUM_{i=1}^{J_{k-1}} w_{k-1}^{(i)} N(x; m_{k-1}^{(i)}, P_{k-1}^{(i)})
Gaussian mixture predicted intensity to time k:
v_{k|k-1}(x) = SUM_{i=1}^{J_{k-1}} p_{S,k} w_{k-1}^{(i)} N(x; m_{S,k|k-1}^{(i)}, P_{S,k|k-1}^{(i)}) + SUM_{i=1}^{J_{k-1}} SUM_{l=1}^{J_{beta,k}} w_{k-1}^{(i)} w_{beta,k}^{(l)} N(x; m_{beta,k|k-1}^{(i,l)}, P_{beta,k|k-1}^{(i,l)}) + gamma_k(x)
with
m_{S,k|k-1}^{(i)} = F_{k-1} m_{k-1}^{(i)},  P_{S,k|k-1}^{(i)} = F_{k-1} P_{k-1}^{(i)} F_{k-1}^T + Q_{k-1}
m_{beta,k|k-1}^{(i,l)} = F_{beta,k-1}^{(l)} m_{k-1}^{(i)} + d_{beta,k-1}^{(l)},  P_{beta,k|k-1}^{(i,l)} = F_{beta,k-1}^{(l)} P_{k-1}^{(i)} (F_{beta,k-1}^{(l)})^T + Q_{beta,k-1}^{(l)}
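The survival part of this prediction is just a Kalman prediction applied component-wise, with the weights scaled by p_{S,k}. A scalar sketch that omits the spawn and birth terms; F, Q, and p_S are toy values:

```python
# Survival part of the GM-PHD prediction for a scalar linear Gaussian model:
# each component (w, m, P) maps to (pS * w, F * m, F * P * F + Q).
# Spawn and birth components would be appended to the returned list.
def gmphd_predict(mixture, F=1.0, Q=0.2, pS=0.99):
    return [(pS * w, F * m, F * P * F + Q) for (w, m, P) in mixture]

prior = [(0.7, 0.0, 1.0), (0.5, 4.0, 2.0)]
pred = gmphd_predict(prior)
mass = sum(w for w, _, _ in pred)    # predicted expected number (no births here)
```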

Slide 44: Gaussian Mixture PHD Update
Gaussian mixture predicted intensity to time k:
v_{k|k-1}(x) = SUM_{i=1}^{J_{k|k-1}} w_{k|k-1}^{(i)} N(x; m_{k|k-1}^{(i)}, P_{k|k-1}^{(i)})
Gaussian mixture updated intensity at time k:
v_k(x) = (1 - p_{D,k}) v_{k|k-1}(x) + SUM_{z in Z_k} SUM_{i=1}^{J_{k|k-1}} w_k^{(i)}(z) N(x; m_{k|k}^{(i)}(z), P_{k|k}^{(i)})
w_k^{(i)}(z) = p_{D,k} w_{k|k-1}^{(i)} q_k^{(i)}(z) / ( kappa_k(z) + p_{D,k} SUM_{j=1}^{J_{k|k-1}} w_{k|k-1}^{(j)} q_k^{(j)}(z) )
q_k^{(i)}(z) = N(z; H_k m_{k|k-1}^{(i)}, H_k P_{k|k-1}^{(i)} H_k^T + R_k)
m_{k|k}^{(i)}(z) = m_{k|k-1}^{(i)} + K_k^{(i)} (z - H_k m_{k|k-1}^{(i)})
P_{k|k}^{(i)} = (I - K_k^{(i)} H_k) P_{k|k-1}^{(i)}
K_k^{(i)} = P_{k|k-1}^{(i)} H_k^T (H_k P_{k|k-1}^{(i)} H_k^T + R_k)^{-1}
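These equations can be exercised with a scalar model and H = 1, where each Kalman quantity reduces to arithmetic. A sketch with toy values for p_D, R, and the clutter intensity kappa:

```python
import math

def norm_pdf(x, m, var):
    return math.exp(-0.5 * (x - m) ** 2 / var) / math.sqrt(2 * math.pi * var)

# GM-PHD update, scalar model with H = 1: a missed-detection copy of the
# mixture plus one reweighted Kalman-updated component per
# (measurement, prior component) pair.
def gmphd_update(mixture, Z, pD=0.9, R=0.5, kappa=0.05):
    updated = [((1 - pD) * w, m, P) for (w, m, P) in mixture]
    for z in Z:
        q = [norm_pdf(z, m, P + R) for (_, m, P) in mixture]   # q_k^{(i)}(z)
        denom = kappa + pD * sum(w * qi for (w, _, _), qi in zip(mixture, q))
        for (w, m, P), qi in zip(mixture, q):
            K = P / (P + R)                                    # Kalman gain
            updated.append((pD * w * qi / denom, m + K * (z - m), (1 - K) * P))
    return updated

post = gmphd_update([(1.0, 0.0, 1.0)], Z=[0.2])
N_post = sum(w for w, _, _ in post)    # updated expected number of targets
```

With one prior component and one nearby measurement, most of the unit mass survives the update: a small missed-detection component plus a dominant detection component.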

Slide 45: Cardinalized PHD Prediction
Predicted intensity:
v_{k|k-1}(x_k) = INT p_{S,k}(x) f_{k|k-1}(x_k | x) v_{k-1}(x) dx + gamma_k(x_k)
(intensity from the previous time-step, weighted by the probability of survival p_{S,k} and pushed through the Markov transition density f_{k|k-1}, plus the intensity gamma_k of spontaneous object births).
Predicted cardinality:
p_{k|k-1}(n) = SUM_{j=0}^{n} p_{Gamma,k}(n - j) Pi_{k|k-1}[v_{k-1}, p_{k-1}](j)
(probability of n - j spontaneous births times the probability of j surviving targets), where
Pi_{k|k-1}[v, p](j) = SUM_{l=j}^{inf} C_j^l <p_{S,k}, v>^j <1 - p_{S,k}, v>^{l-j} <1, v>^{-l} p(l).

Slide 46: Cardinalized PHD Update
Updated intensity:
v_k(x_k) = [ (1 - p_{D,k}(x_k)) <Psi_k^{(1)}[v_{k|k-1}, Z_k], p_{k|k-1}> / <Psi_k^{(0)}[v_{k|k-1}, Z_k], p_{k|k-1}> + SUM_{z in Z_k} psi_{k,z}(x_k) <Psi_k^{(1)}[v_{k|k-1}, Z_k \ {z}], p_{k|k-1}> / <Psi_k^{(0)}[v_{k|k-1}, Z_k], p_{k|k-1}> ] v_{k|k-1}(x_k)
Updated cardinality distribution:
p_k(n) = Psi_k^{(0)}[v_{k|k-1}, Z_k](n) p_{k|k-1}(n) / <Psi_k^{(0)}[v_{k|k-1}, Z_k], p_{k|k-1}>
where, for u = 0, 1,
Psi_k^{(u)}[v, Z](n) = SUM_{j=0}^{min(|Z|,n)} (|Z| - j)! p_{K,k}(|Z| - j) P_{j+u}^n ( <1 - p_{D,k}, v> / <1, v> )^{n-(j+u)} esf_j({ <v, psi_{k,z}> : z in Z })
with psi_{k,z}(x) = p_{D,k}(x) g_k(z|x) / kappa_k(z) (probability of detection times likelihood function, divided by the clutter intensity), p_{K,k} the clutter cardinality distribution, P_{j+u}^n = n!/(n - j - u)!, and esf_j the elementary symmetric function of order j:
esf_j(Z) = SUM_{S subset of Z, |S| = j} PROD_{zeta in S} zeta.
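The elementary symmetric functions esf_j that appear in the CPHD update need not be computed by enumerating subsets: the standard Vieta-style polynomial recursion produces all orders at once. A small sketch:

```python
# Elementary symmetric functions esf_j(Z) = sum over j-subsets S of Z of the
# product of the elements of S, computed for all j simultaneously by growing
# the polynomial prod_v (1 + v t) one value at a time.
def esf(values):
    coeffs = [1.0]                     # coeffs[j] = esf_j of the values seen so far
    for v in values:
        coeffs = ([1.0]
                  + [coeffs[j] + v * coeffs[j - 1] for j in range(1, len(coeffs))]
                  + [v * coeffs[-1]])
    return coeffs

e = esf([1.0, 2.0, 3.0])   # esf_0 .. esf_3 of {1, 2, 3}
```

For {1, 2, 3} this gives 1, 1+2+3 = 6, 1*2 + 1*3 + 2*3 = 11, and 1*2*3 = 6, in O(|Z|^2) time instead of the exponential subset sum.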

Slide 47: Mahler's MeMBer Filter
(Multi-target Multi-Bernoulli) MeMBer filter [Mahler 07]: approximate the predicted/posterior RFSs by multi-Bernoulli RFSs and propagate {(r_k^{(i)}, p_k^{(i)})}_{i=1}^{M_k} through prediction and update, in place of the full multi-object Bayes recursion.
- Valid for low clutter rate and high probability of detection.
- Biased in cardinality (except when the probability of detection equals 1).

Slide 48: Cardinality-Balanced MeMBer Filter (Update, compact form)
Define the auxiliary predicted intensities
v~_{k|k-1} = SUM_i r_{k|k-1}^{(i)} p_{k|k-1}^{(i)} / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>)
v_{k|k-1}^{(1)} = SUM_i r_{k|k-1}^{(i)} (1 - r_{k|k-1}^{(i)}) p_{k|k-1}^{(i)} / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>)^2
v~*_{k|k-1} = SUM_i r_{k|k-1}^{(i)} p_{k|k-1}^{(i)} / (1 - r_{k|k-1}^{(i)})
Then the legacy components are
r_{L,k}^{(i)} = r_{k|k-1}^{(i)} (1 - <p_{k|k-1}^{(i)}, p_{D,k}>) / (1 - r_{k|k-1}^{(i)} <p_{k|k-1}^{(i)}, p_{D,k}>),  p_{L,k}^{(i)} = p_{k|k-1}^{(i)} (1 - p_{D,k}) / (1 - <p_{k|k-1}^{(i)}, p_{D,k}>)
and the measurement-updated components, one per z in Z_k, are
r_{U,k}(z) = <v_{k|k-1}^{(1)}, p_{D,k} g_k(z|.)> / ( kappa_k(z) + <v~_{k|k-1}, p_{D,k} g_k(z|.)> ),  p_{U,k}(.; z) = v~*_{k|k-1} p_{D,k} g_k(z|.) / <v~*_{k|k-1}, p_{D,k} g_k(z|.)>.
[Vo et al. 07]

Slide 49: Extensions of the PHD Filter
Linear jump Markov PHD filter [Pasha et al. 06].

Slide 50: Extensions of the PHD Filter (Example)
4-D linear jump Markov target dynamics with 3 models; 4 targets; birth rate 3 x 0.05; death probability 0.01; clutter rate 40.

Slide 51: What is a Random Finite Set (RFS)?
- The number of points is random; the points themselves are random and have no ordering.
- Loosely, an RFS is a finite-set-valued random variable.
- Also known as a (simple finite) point process or random point pattern.
Examples: pine saplings in a Finnish forest [Kelomaki & Penttinen]; childhood leukaemia and lymphoma in Northumberland [Cuzick & Edwards].
