
1 Artificial Learning Approaches for Multi-target Tracking Jesse McCrosky Nikki Hu

2 Introduction
Work for SPIE paper and for Lockheed Martin: Racecar problem
Multiple independent targets
Signal measure and PHD methods
Rho optimization

3 Overview
Model
–Signal SDE
–Noise
Filtering
–Signal measure
–PHD
Rho Optimization

4 Model
High-speed, radio-reflective Canada Geese
Observations are Doppler-shifted radio frequencies:

5 Doppler
We actually track the radio signals, but their motion is determined by the birds' flight

6 Signal Model
Model radio frequencies, not birds
Domain is [0, 2π) (one dimensional)
Current SDE:
–where D_t is in {-1, 1} and switches as a Poisson process
–wraps around the domain
But this is not a good model
Doug Blount is working on a better SDE
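
The SDE itself appears only as an image on the slide. Below is a minimal simulation sketch of one plausible reading of the description: a frequency whose drift direction D_t in {-1, 1} flips at Poisson times, wrapped onto [0, 2π). The drift speed, switching rate, and the small diffusion term are illustrative assumptions, not the actual model.

```python
import numpy as np

def simulate_signal(T=10.0, dt=0.01, speed=0.5, switch_rate=1.0, sigma=0.05, seed=0):
    """One plausible reading of the slide's signal model (parameters are assumptions):
    a frequency on [0, 2*pi) drifting with direction D_t in {-1, +1} that flips at
    Poisson(switch_rate) times, plus a small diffusion term, wrapped around the domain."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = rng.uniform(0.0, 2 * np.pi)
    d = rng.choice([-1.0, 1.0])
    for k in range(1, n):
        # direction flips with probability ~ switch_rate * dt (Poisson switching)
        if rng.random() < switch_rate * dt:
            d = -d
        x[k] = (x[k - 1] + speed * d * dt
                + sigma * np.sqrt(dt) * rng.standard_normal()) % (2 * np.pi)
    return x

if __name__ == "__main__":
    print(simulate_signal()[:5])
```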

7 Observation Model
Amplitude over discrete time and frequency
Radio stations, Doppler reflections
Multiplicative lognormal noise
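
A hedged sketch of how one frame of such an observation might be generated: a Gaussian-shaped amplitude peak per target frequency on a discrete frequency grid, multiplied by lognormal noise. The peak shape, background level, and all parameter values are illustrative assumptions, not the presenters' model.

```python
import numpy as np

def observe(signal_freqs, n_bins=256, peak_width=3.0, peak_amp=5.0,
            noise_sigma=0.5, seed=None):
    """Illustrative observation sketch: amplitude over discrete frequency bins on
    [0, 2*pi), a Gaussian-shaped peak at each target frequency, corrupted by
    multiplicative lognormal noise (all parameters are assumptions)."""
    rng = np.random.default_rng(seed)
    bins = np.linspace(0.0, 2 * np.pi, n_bins, endpoint=False)
    amp = np.ones(n_bins)  # flat background level
    for f in signal_freqs:
        # circular distance from each bin centre to the target frequency
        d = np.angle(np.exp(1j * (bins - f)))
        amp += peak_amp * np.exp(-0.5 * (d * n_bins / (2 * np.pi) / peak_width) ** 2)
    # multiplicative lognormal noise
    return amp * rng.lognormal(mean=0.0, sigma=noise_sigma, size=n_bins)
```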

8 Problems
Analysis of the noise is limited by the need to “filter” out the signal
–Thresholding
Noise appears to be correlated in frequency
–Not in the current model
Signal has varying width and brightness
–Not in the current model

9 Signal measure filtering
Using a version of SERP modified for multiple independent targets
Three approaches:
–Shared path filtering
–Fully independent target filtering
–Pseudo-model selection filtering

10 Shared path filtering
Each particle refers to some number of paths from a pool
–Two particles may refer to the same path
Paths, not particles, are evolved independently
Particle evolution is therefore not truly independent

11 Shared Path Filtering
Upon resampling, one particle will refer to the other particle's paths
Previous paths may become orphaned
–The filter can degenerate to very few active paths
The solution is path splitting (see the sketch after slide 12)
–Replace an orphaned path with an independent copy of a very active path

12 Shared Path Filtering
The advantage is fewer paths to store and evolve
The disadvantage is the overhead of path splitting
Racecar paths are small and quick to evolve, so the disadvantages outweigh the advantages
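
A minimal, hypothetical sketch of the shared-path bookkeeping described on slides 10-12. The class name, data layout, and the donor-selection rule are assumptions made for illustration, not the presenters' implementation.

```python
import copy

class SharedPathFilter:
    """Sketch of shared-path filtering: each particle stores indices into a common
    pool of paths; after resampling, orphaned pool slots are recycled by "path
    splitting" -- an independent copy of a heavily shared path takes over the slot
    and one of the sharing particles is pointed at the copy."""

    def __init__(self, paths, particles):
        self.paths = list(paths)                        # pool of target paths
        self.particles = [list(p) for p in particles]   # path indices per particle

    def refcounts(self):
        counts = [0] * len(self.paths)
        for part in self.particles:
            for idx in part:
                counts[idx] += 1
        return counts

    def split_orphans(self):
        counts = self.refcounts()
        orphans = [i for i, c in enumerate(counts) if c == 0]
        for slot in orphans:
            donor = max(range(len(counts)), key=lambda i: counts[i])
            if counts[donor] < 2:
                break  # nothing is shared heavily enough to be worth splitting
            # copy the donor path into the orphaned slot...
            self.paths[slot] = copy.deepcopy(self.paths[donor])
            # ...and point one particle that used the donor at the fresh copy
            for part in self.particles:
                if donor in part:
                    part[part.index(donor)] = slot
                    break
            counts[donor] -= 1
            counts[slot] += 1
```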

13 Fully independent target filtering
Each particle has its own set of paths
–Enough for the maximum possible number of targets in the signal
Each particle has a number indicating how many paths are active
Evolving a particle evolves the appropriate paths
Resampling produces independent copies of the paths and the same number of active paths
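
A hypothetical sketch of what such a particle might look like for the one-dimensional frequency domain. The fixed-size path block, the crude stand-in motion, and all parameter names are assumptions.

```python
import copy
import numpy as np

class IndependentParticle:
    """Sketch of a particle in the fully independent scheme: a fixed-size block of
    paths (one slot per possible target) plus a count of how many are active."""

    def __init__(self, max_targets, n_active, rng):
        # one frequency path per possible target, initialised uniformly on [0, 2*pi)
        self.paths = rng.uniform(0.0, 2 * np.pi, size=max_targets)
        self.n_active = n_active  # how many of the slots hold real targets

    def evolve(self, dt, speed, rng):
        # evolve only the active paths (a crude stand-in for the signal SDE)
        d = rng.choice([-1.0, 1.0], size=self.n_active)
        self.paths[:self.n_active] = (
            self.paths[:self.n_active] + speed * d * dt) % (2 * np.pi)

    def clone(self):
        # resampling produces an independent copy: paths and active count
        return copy.deepcopy(self)
```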

14 Initial distribution
Works better with a non-uniform initial distribution of the number of targets
–More targets = more particles
How many more?
–Linear, exponential, etc.?
Should weights be used to compensate?

15 Pseudo-model selection filtering
Run an instance of SERP for each possible number of targets
Resample between instances with a very high rho
Not tried yet

16 Results
Available hardware can only handle ~100,000 particles
Can only reliably filter 2 targets
Problem “looks” easy
Investigating a possible error in the resampling code

17 Simulation

18 Speed issues
Two major issues:
–MSE calculation
–Reweighting
Fixing these may allow more particles to be used
Iteration profile:
–resampling: 210 ms
–reweighting: 7750 ms
–evolution: 100 ms
–drawing: 2030 ms
–total: 10070 ms

19 MSE Calculation
Required for Rho Optimization
An unknown number of targets requires a more complex error calculation
Using the equation from last year's SPIE paper, which involves:
–a distance function on the target domain
–the number of targets in X, a particle or signal
Note this requires estimated locations for each particle, not just a distribution
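
The paper's equation itself is not reproduced in the transcript. As a stand-in only, here is one plausible multi-target error of that general shape: an optimal-assignment sum of circular distances plus a penalty for mismatched target counts. This is an illustrative assumption, not the SPIE paper's formula.

```python
import numpy as np
from itertools import permutations

def circ_dist(a, b):
    """Distance on the circular target domain [0, 2*pi)."""
    d = np.abs(np.asarray(a) - np.asarray(b)) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def multi_target_error(estimates, truth, miss_penalty=np.pi):
    """Illustrative multi-target error (NOT the SPIE paper's equation):
    best-assignment sum of circular distances between estimated and true
    locations, plus a fixed penalty per difference in target count.
    Brute force over permutations -- fine for the 2-3 targets discussed here."""
    est = np.atleast_1d(np.asarray(estimates, dtype=float))
    tru = np.atleast_1d(np.asarray(truth, dtype=float))
    small, large = (est, tru) if len(est) <= len(tru) else (tru, est)
    if len(small) == 0:
        return miss_penalty * len(large)
    best = min(
        circ_dist(small, np.array(perm[:len(small)])).sum()
        for perm in permutations(large)
    )
    return best + miss_penalty * (len(large) - len(small))
```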

20 MSE Calculation
Is there an alternate way?
–Consider only particles with the correct number of targets?
Only needed to tell the rho optimizer how well it is doing

21 Update function
The update weighting function for multiplicative lognormal noise is fairly expensive
Is there a more efficient approximation that could be used?
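
For reference, a hedged sketch of what such a weight computation could look like under the stated noise model (observation = predicted amplitude times a lognormal factor). The exact weighting function used in the filter is not shown, so the parameter names and the per-cell independence are assumptions.

```python
import numpy as np

def lognormal_logweight(obs, predicted_amp, sigma):
    """Sketch of an update weight under multiplicative lognormal noise:
    obs = predicted_amp * exp(sigma * Z), Z ~ N(0, 1), so log(obs) is Gaussian
    with mean log(predicted_amp). Returns the log-likelihood summed over cells.
    (The production filter's exact weighting function may differ.)"""
    obs = np.asarray(obs, dtype=float)
    mu = np.log(np.asarray(predicted_amp, dtype=float))
    z = (np.log(obs) - mu) / sigma
    # lognormal log-density: -log(obs * sigma * sqrt(2*pi)) - z^2 / 2, summed over cells
    return np.sum(-np.log(obs * sigma * np.sqrt(2.0 * np.pi)) - 0.5 * z * z)
```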

22 PHD Filter
Contents:
–Brief introduction
–Number of targets estimation
–State estimation
–Implementation
–Questions

23 1. Brief Introduction
A new method for tracking and identifying multiple targets
Consider a single-sensor, single-target problem; the recursive equations for the posterior distribution can be written as:
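
The equations themselves appear only as images on the slides. For reference, the textbook single-sensor, single-target Bayes recursion (which the slide presumably shows) is:

```latex
% Textbook single-target Bayes filter recursion (slide image not reproduced)
f_{k|k-1}(x \mid Z^{k-1}) = \int f_{k|k-1}(x \mid x')\, f_{k-1|k-1}(x' \mid Z^{k-1})\, dx'

f_{k|k}(x \mid Z^{k}) =
  \frac{f_k(z_k \mid x)\, f_{k|k-1}(x \mid Z^{k-1})}
       {\int f_k(z_k \mid x')\, f_{k|k-1}(x' \mid Z^{k-1})\, dx'}
```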

24 Where …
The generalization of the above equations to the multiple-sensor, multiple-target system can be reformulated as:

25 Where:
–X is a multi-target state
–… is a multi-target posterior density
–… is a multi-target likelihood function
The first-order moment can't be defined directly. In order to compute it, we have to use some function h that maps the state set X into a vector space. Then it is computed indirectly as:
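
The multi-target recursion itself is not reproduced in the transcript; the standard FISST form it presumably follows, written with set integrals over multi-target states, is:

```latex
% Standard multi-target Bayes recursion (FISST form; slide image not reproduced).
% X is a multi-target state (a finite set), Z_k the measurement set at time k.
f_{k|k-1}(X \mid Z^{(k-1)}) =
  \int f_{k|k-1}(X \mid X')\, f_{k-1|k-1}(X' \mid Z^{(k-1)})\, \delta X'

f_{k|k}(X \mid Z^{(k)}) =
  \frac{f_k(Z_k \mid X)\, f_{k|k-1}(X \mid Z^{(k-1)})}
       {\int f_k(Z_k \mid X')\, f_{k|k-1}(X' \mid Z^{(k-1)})\, \delta X'}
```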

26 One of the possible choices for the function h is …
This makes the first-order moment, denoted by …, the probability hypothesis density (PHD)
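
The specific choice of h is shown only as an image; the standard construction that yields the PHD as the first-order moment (which this slide presumably uses) is:

```latex
% Standard construction of the PHD as a first-order moment
% (slide images not reproduced; textbook form)
h_x(X) = \sum_{w \in X} \delta_w(x),
\qquad
D_{k|k}(x) = E\Big[ \sum_{w \in X_k} \delta_w(x) \;\Big|\; Z^{(k)} \Big]
```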

27 It has the property that its integral over a region S is the expected number of targets contained in S
If the SNR and the SCR are high enough, and the multi-target system has a zero covariance, then the PHD is a good approximation to the unnormalized multi-target posterior density, and an explicit recursive equation can be derived for it
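
In symbols, the property just stated reads (standard form; the slide's own expression is not reproduced):

```latex
\int_S D_{k|k}(x)\, dx = \text{expected number of targets in } S
```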

28 2. Number of Targets Estimation
In our current implementation, we consider the number of targets to be constant and unknown. An explicit recursive equation can be derived for the PHD as:

29 Where:
–… is the average number of false alarms
–… is the distribution of false alarms
–… is the probability of detection
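
The recursion appears only as an image on the slides. For orientation, the standard PHD observation update (to which these symbols presumably correspond) has the form below; whether the presenters use exactly this variant is an assumption.

```latex
% Standard PHD observation update (textbook form; the slide's own equation is
% not reproduced). Here p_D is the detection probability, lambda the average
% number of false alarms, c(y) the false-alarm distribution, and
% L_y(x) = f(y | x) the single-target likelihood.
D_{k|k}(x) =
\left[\, 1 - p_D(x) + \sum_{y \in Z_k}
   \frac{p_D(x)\, L_y(x)}
        {\lambda\, c(y) + \int p_D(x')\, L_y(x')\, D_{k|k-1}(x')\, dx'} \right]
D_{k|k-1}(x)
```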

30 3. State Estimation
Using the results of the previous section, i.e. the PHD is given. Then, how can this information be used to find the state of each target?
The approach is based on the following idea. Suppose that the PHD is approximated by a parametric mixture, where … are the unknown means, … are the covariance matrices, and M is the expected number of targets.

31 The likelihood of the parameters given the data is …
It is reasonable to expect that maximization of this likelihood, using the set of particles representing the PHD as the data, should give values of the parameters that provide a good approximation for the PHD. The means should then be good estimators for the states of the targets.
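
A hedged sketch of this state-extraction idea: fit a mixture of M Gaussians to the particle cloud representing the PHD and use the fitted means as state estimates. The slides do not spell out the fitting procedure; the use of EM via scikit-learn, a Gaussian mixture specifically, and side-stepping particle weights by resampling are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_states(particles, weights, n_targets, seed=0):
    """Fit a mixture of n_targets Gaussians to the weighted PHD particle cloud
    and return the fitted means as target-state estimates (illustrative only)."""
    rng = np.random.default_rng(seed)
    arr = np.asarray(particles, dtype=float)
    arr = arr.reshape(len(arr), -1)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # EM in scikit-learn does not take sample weights, so resample first
    idx = rng.choice(len(arr), size=len(arr), p=w)
    gmm = GaussianMixture(n_components=n_targets, random_state=seed)
    gmm.fit(arr[idx])
    return gmm.means_.ravel()
```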

32 4. Implementation Issues
Equations (1.1) and (1.2) contain an integral whose computation isn't easy. To deal with this problem, a sequential Monte Carlo method can be used, and this method leads to the Interactive Particle Filter.
At each time step k > 1:

33 Motion update
–Setting …
–Moving each particle according to the motion of the targets
Observation update
–Compute I(y)
–The total mass is …

34 Computing weights
Resampling the particles
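
Putting slides 32-34 together, a hedged sketch of one motion/observation/resampling cycle of a particle PHD filter. The helper functions (motion, likelihood), the clutter model, and the parameter values are assumptions rather than the presenters' implementation.

```python
import numpy as np

def phd_particle_step(particles, weights, measurements, likelihood, motion,
                      p_d=0.95, clutter_rate=2.0, clutter_pdf=None, rng=None):
    """One illustrative cycle of a particle PHD filter: motion update,
    observation update with the standard PHD corrector, and resampling that
    preserves the total mass (the expected number of targets)."""
    rng = rng or np.random.default_rng()
    particles = np.asarray(particles, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if clutter_pdf is None:
        clutter_pdf = lambda y: 1.0 / (2 * np.pi)  # uniform clutter on [0, 2*pi)

    # Motion update: move each particle according to the target motion model
    particles = motion(particles, rng)

    # Observation update: standard PHD corrector weights
    new_w = (1.0 - p_d) * weights  # missed-detection term
    for y in measurements:
        l = likelihood(y, particles)  # single-target likelihood per particle
        denom = clutter_rate * clutter_pdf(y) + np.sum(p_d * l * weights)
        new_w += p_d * l * weights / denom

    # Total mass approximates the expected number of targets
    mass = new_w.sum()

    # Resample, then spread the total mass evenly over the particle set
    idx = rng.choice(len(particles), size=len(particles), p=new_w / mass)
    particles = particles[idx]
    weights = np.full(len(particles), mass / len(particles))
    return particles, weights, mass
```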

35 5. Questions
1. Is … a copy of a RacecarTarget or a RacecarSignal?
2. Did I compute … right?

36 Rho Optimization
Develop a policy that maps states to actions
–State is an aggregate of the variance of the particle system and the distribution of particle weights
–Actions are rho values to use for the next iteration
The optimal policy will minimize some cost function

37 Cost Function
For a single iteration, use the cost function:
where a, b, and c are the variance, MSE, and computation time, and c_1, c_2, c_3 are constant weights
But the rho value may have consequences later on
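
The formula itself is not reproduced on the slide; from the description it is presumably a weighted sum of the three quantities:

```latex
% Plausible form of the single-iteration cost described above
% (a = variance, b = MSE, c = computation time; c_1, c_2, c_3 constant weights)
C = c_1\, a + c_2\, b + c_3\, c
```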

38 Time Horizon Cost
To consider future costs, use a discounted cost:
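
The slide's formula is not reproduced; a plausible form of such a discounted time-horizon cost, with a discount factor and the single-iteration cost defined on the previous slide, is:

```latex
% Plausible discounted time-horizon cost: gamma in (0, 1) is a discount factor
% and C_k the single-iteration cost at step k (an assumption, not the slide's formula)
J = \sum_{k=0}^{\infty} \gamma^{k}\, C_{k}
```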

39 Conclusion
Lots of progress made, lots to be done!

