
Inferring High-Level Behavior from Low-Level Sensors. Don Patterson, Lin Liao, Dieter Fox, Henry Kautz. Published in UbiComp 2003. ICS 280.




1 Inferring High-Level Behavior from Low-Level Sensors Don Patterson, Lin Liao, Dieter Fox, Henry Kautz Published in UbiComp 2003 ICS 280

2 Main References
- Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data (Liao L., Fox D., Hightower J., Kautz H., Schulz D.), in International Conference on Intelligent Robots and Systems (IROS), 2003.
- Inferring High-Level Behavior from Low-Level Sensors (Patterson D., Liao L., Fox D., Kautz H.), in UbiComp 2003.
- Learning and Inferring Transportation Routines (Liao L., Fox D., Kautz H.), in AAAI 2004.

3 Outline
- Motivation
- Problem Definition
- Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering)
- Learning
- Results
- Conclusions

4 Motivation
ACTIVITY COMPASS: software that indirectly monitors your activity and offers proactive advice to help you accomplish inferred plans.
- Healthcare monitoring
- Automated planning
- Context-aware computing support

5 Research Goal
- To bridge the gap between sensor data and symbolic reasoning.
- To allow sensor data to help interpret symbolic knowledge.
- To allow symbolic knowledge to aid sensor interpretation.

6 Executive Summary
- GPS data collection: 3 months of one user's daily life.
- Inference engine: infers location and transportation "mode" online, in real time.
- Learning: transportation patterns.
- Results: better predictions and a conceptual understanding of routines.

7 Outline
- Motivation
- Problem Definition
- Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering)
- Learning
- Results
- Conclusions

8 Tracking on a Graph
Tracking a person's location and mode of transportation using street maps and GPS sensor data. Formally, the world is modeled as a graph G = (V, E), where:
- V is a set of vertices (intersections)
- E is a set of directed edges (roads/footpaths)
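A minimal sketch of this world model, assuming a plain-dictionary representation (the vertex names, edge lengths, and helper function are illustrative, not from the paper's implementation):

```python
# Street map as a directed graph G = (V, E): intersections as vertices,
# roads/footpaths as directed edges. All names and lengths are made up.
V = {"A", "B", "C"}
E = {
    ("A", "B"): 120.0,   # road from A to B, 120 m
    ("B", "C"): 80.0,
    ("B", "A"): 120.0,   # opposite direction is a separate directed edge
}

def outgoing_edges(v):
    """Edges leaving intersection v -- the candidates at an edge transition."""
    return [e for e in E if e[0] == v]

print(outgoing_edges("B"))
```

Directed edges matter here because a one-way street, or a bus route that runs in only one direction, is representable only if (u, v) and (v, u) are distinct edges.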

9 Example

10 Outline Motivation Problem Definition Modeling and Inference Dynamic Bayesian Networks Particle Filtering Learning Results Conclusions

11 State Space
- Location L = ⟨L_s, L_p⟩: which street the user is on (L_s), and the position on that street (L_p)
- Velocity V
- GPS offset error O = ⟨O_x, O_y⟩
- Transportation mode M ∈ {BUS, CAR, FOOT}
The full state is X = ⟨L_s, L_p, V, O_x, O_y, M⟩.
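The state vector X can be sketched as a small record type; the field names below are illustrative, since the slide only defines the components abstractly:

```python
# A minimal stand-in for the state X = <L_s, L_p, V, O_x, O_y, M>.
from dataclasses import dataclass

MODES = ("BUS", "CAR", "FOOT")

@dataclass
class State:
    edge: tuple          # L_s: which street (graph edge) the user is on
    position: float      # L_p: position along that edge, in meters
    velocity: float      # V: current speed, in m/s
    offset: tuple        # (O_x, O_y): GPS offset error
    mode: str            # M, one of MODES

x = State(edge=("A", "B"), position=35.0, velocity=1.4,
          offset=(0.0, 0.0), mode="FOOT")
```

Each particle in the filter described later carries one such instantiation of these variables.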

12 GPS as a Sensor
GPS is not a trivial location sensor to use. It has inherent inaccuracies:
- Atmospheric effects
- Satellite geometry
- Multi-path propagation errors
- Signal blockages
Using the data is even harder:
- Resolution > 15 m
- Coordinate mismatches

13 Dynamic Bayesian Networks
An extension of a Markov model: a statistical model that handles
- sensor error
- enormous but structured state spaces
It is probabilistic and temporal, and provides a single framework to manage all levels of abstraction.

14 Model (I)

15 Model (II)

16 Model (III)

17 Dependencies

18 Inference We want to compute the posterior density p(x_t | z_{1:t}): the distribution over the current state given all GPS observations up to time t.

19 Inference: Particle Filtering
A technique for solving DBNs with approximate, stochastic (Monte Carlo) solutions. In our case, a particle is an instantiation of the random variables describing:
- the transportation mode m_t
- the location l_t (in practice, the edge e_t)
- the velocity v_t

20 Particle Filtering
Step 1 (SAMPLING): draw n samples x_{t-1} from the previous set S_{t-1} and generate n new samples x_t according to the dynamics p(x_t | x_{t-1}), i.e. the motion model.
Step 2 (IMPORTANCE SAMPLING): assign each sample x_t an importance weight proportional to the likelihood of the observation z_t: w_t ∝ p(z_t | x_t).
Step 3 (RE-SAMPLING): draw n samples with replacement according to the distribution defined by the importance weights w_t.
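The three steps above can be sketched generically; the motion and likelihood functions below are toy placeholders, not the paper's actual models:

```python
# Generic sample-weight-resample (SIR) step for a particle filter.
import random

def sir_step(particles, motion_sample, likelihood, z):
    # Step 1: propagate each particle through the dynamics p(x_t | x_{t-1})
    propagated = [motion_sample(x) for x in particles]
    # Step 2: weight each sample by the observation likelihood p(z_t | x_t)
    weights = [likelihood(z, x) for x in propagated]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Step 3: resample with replacement according to the weights
    return random.choices(propagated, weights=weights, k=len(propagated))

# Toy usage: 1-D position particles with drifting dynamics
random.seed(0)
particles = [0.0] * 100
motion = lambda x: x + random.gauss(1.0, 0.5)   # drift right by ~1 per step
like = lambda z, x: 1.0 / (1e-6 + abs(z - x))   # peaked near the observation
for z in [1.0, 2.0, 3.0]:
    particles = sir_step(particles, motion, like, z)
```

After three observations the particle cloud concentrates near the last observation, which is the behavior that lets the filter track a moving state.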

21 Motion Model p(x_t | x_{t-1})
Advancing particles along the graph G:
1. Sample the transportation mode m_t from the distribution p(m_t | m_{t-1}, e_{t-1}).
2. Sample the velocity v_t from the density p(v_t | m_t), a mixture of Gaussian densities.
3. Sample the location using the current velocity: draw the traveled distance d at random from a Gaussian density centered at v_t. If the distance implies an edge transition, select the next edge e_t with probability p(e_t | e_{t-1}, m_{t-1}); otherwise stay on the same edge, e_t = e_{t-1}.
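One draw from this motion model might look like the sketch below. The transition tables and velocity parameters are invented placeholders (the paper learns them from data), and a single Gaussian stands in for the velocity mixture:

```python
import random

# Placeholder parameters -- illustrative only, not learned values.
P_MODE = {("FOOT", ("A", "B")): {"FOOT": 0.9, "BUS": 0.05, "CAR": 0.05}}
VEL = {"FOOT": (1.4, 0.4), "CAR": (10.0, 4.0), "BUS": (7.0, 3.0)}  # (mean, std) m/s
EDGE_LEN = {("A", "B"): 120.0, ("B", "C"): 80.0}
P_EDGE = {(("A", "B"), "FOOT"): {("B", "C"): 1.0}}  # p(e_t | e_{t-1}, m_{t-1})

def motion_step(edge, pos, prev_mode, dt=1.0):
    # 1. sample mode m_t ~ p(m_t | m_{t-1}, e_{t-1})
    dist = P_MODE.get((prev_mode, edge), {prev_mode: 1.0})
    mode = random.choices(list(dist), weights=list(dist.values()))[0]
    # 2. sample velocity v_t ~ p(v_t | m_t)
    mu, sigma = VEL[mode]
    v = max(0.0, random.gauss(mu, sigma))
    # 3. advance by a noisy distance around v_t * dt; if we run off the end
    #    of the edge, transition using p(e_t | e_{t-1}, m_{t-1})
    pos += max(0.0, random.gauss(v * dt, 1.0))
    if pos > EDGE_LEN[edge]:
        pos -= EDGE_LEN[edge]
        nxt = P_EDGE.get((edge, prev_mode),
                         {e: 1.0 for e in EDGE_LEN if e[0] == edge[1]})
        edge = random.choices(list(nxt), weights=list(nxt.values()))[0]
    return edge, pos, mode

random.seed(7)
e, p, m = motion_step(("A", "B"), 115.0, "FOOT")
```

Note how the mode choice feeds the velocity density, which in turn drives the distance traveled: a particle in BUS mode will tend to cross edges far faster than one on FOOT.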

22 Animation (short video clip played here)

23 Outline
- Motivation
- Problem Definition
- Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering)
- Learning
- Results
- Conclusions

24 Learning
We want to learn the components of the motion model from history:
- p(e_t | e_{t-1}, m_{t-1}): the edge transition probability on the graph, conditioned on the mode of transportation just prior to transitioning to the new edge.
- p(m_t | m_{t-1}, e_{t-1}): the transportation-mode transition probability, which depends on the previous mode m_{t-1} and the person's location, described by the edge e_{t-1}.
We use a Monte Carlo version of the EM algorithm.

25 Learning
Each iteration performs both a forward and a backward (in time) particle filtering pass. During each pass, the algorithm counts the number of particles transitioning between the different edges and modes. To obtain probabilities for the different transitions, the counts from the forward and backward passes are normalized and then multiplied at the corresponding time slices.

26 Implementation Details (I)
- α_t(e_t, m_t): the number of particles on edge e_t in mode m_t at time t in the forward pass of particle filtering
- β_t(e_t, m_t): the number of particles on edge e_t in mode m_t at time t in the backward pass of particle filtering
- ξ_{t-1}(e_t, e_{t-1}, m_{t-1}): the probability of transitioning from edge e_{t-1} to e_t at time t-1 in mode m_{t-1}
- ψ_{t-1}(m_t, m_{t-1}, e_{t-1}): the probability of transitioning from mode m_{t-1} to m_t on edge e_{t-1} at time t-1

27 Implementation Details (II) After we have ξ_{t-1} and ψ_{t-1} for all t from 2 to T, we can update the parameters as:
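The update equations on this slide were images and did not survive transcription. A standard EM-style reconstruction, consistent with the ξ and ψ quantities defined on the previous slide (normalize the summed expected transition counts), would be:

```latex
p(e_t = e' \mid e_{t-1} = e,\, m_{t-1} = m)
  = \frac{\sum_{t=2}^{T} \xi_{t-1}(e', e, m)}
         {\sum_{e''} \sum_{t=2}^{T} \xi_{t-1}(e'', e, m)}

p(m_t = m' \mid m_{t-1} = m,\, e_{t-1} = e)
  = \frac{\sum_{t=2}^{T} \psi_{t-1}(m', m, e)}
         {\sum_{m''} \sum_{t=2}^{T} \psi_{t-1}(m'', m, e)}
```

Each new parameter is the fraction of expected transitions out of a given (edge, mode) context that went to a particular successor, summed over all time slices.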

28 Implementation Details (III)

29 E-step
1. Generate n uniformly distributed samples.
2. Perform forward particle filtering:
   a) Sampling: generate n new samples from the existing ones using the current parameter estimates p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}).
   b) Re-weight each sample, re-sample, then count and save α_t(e_t, m_t).
   c) Move to the next time slice (t = t + 1).
3. Perform backward particle filtering:
   a) Sampling: generate n new samples from the existing ones using the backward parameter estimates p(e_{t-1} | e_t, m_t) and p(m_{t-1} | m_t, e_t).
   b) Re-weight each sample, re-sample, then count and save β_t(e_t, m_t).
   c) Move to the previous time slice (t = t - 1).

30 M-step
1. Compute ξ_{t-1}(e_t, e_{t-1}, m_{t-1}) and ψ_{t-1}(m_t, m_{t-1}, e_{t-1}) using (5) and (6), then normalize.
2. Update p(e_t | e_{t-1}, m_{t-1}) and p(m_t | m_{t-1}, e_{t-1}) using (7) and (8).
LOOP: repeat the E-step and M-step with the updated parameters until the model converges.

31 Outline
- Motivation
- Problem Definition
- Modeling and Inference (Dynamic Bayesian Networks, Particle Filtering)
- Learning
- Results
- Conclusions

32 Dataset
- Single user, 3 months of daily life
- GPS position and velocity data collected at 2- and 10-second sample intervals
- Evaluation data: 29 "trips", about 12 hours of logs
- All continuous outdoor data
- Divided chronologically into 3 cross-validation groups

33 Goals
- Mode estimation and prediction: learn a motion model able to estimate and predict the mode of transportation at any given instant.
- Location prediction: learn a motion model able to predict the person's location into the future.

34 Results – Mode Estimation

Model                        Mode prediction accuracy
Decision tree (supervised)   55%
Prior w/o bus info           60%
Prior w/ bus info            78%
Learned                      84%

35 Results – Mode Prediction
We evaluate the ability to predict transitions between transportation modes. The table shows the accuracy of predicting a qualitative change in transportation mode within 60 seconds of the actual transition (e.g., correctly predicting that the person gets off the bus).
- PRECISION: the percentage of predicted transitions that actually occur.
- RECALL: the percentage of real transitions that were correctly predicted.
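The two metrics can be sketched as below; for illustration, matching is simplified to exact agreement on time slices rather than the 60-second window used in the evaluation:

```python
# Precision and recall over predicted vs. actual mode-transition events.
def precision_recall(predicted, actual):
    """predicted, actual: sets of time slices at which a transition occurs."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall

# Toy example: 4 predicted transitions, 3 real ones, 2 in common.
p, r = precision_recall({10, 25, 40, 90}, {25, 40, 60})
print(p, r)  # 0.5 precision, 2/3 recall
```

The gap between the two matters here: a model can achieve high recall simply by predicting transitions constantly, which is why the decision tree's 2% precision at 83% recall is much weaker than it first appears.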

36 Results – Mode Prediction

Model                Precision   Recall
Decision tree        2%          83%
Prior w/o bus info   6%          63%
Prior w/ bus info    10%         80%
Learned              40%         80%

37 Results – Location Prediction

38

39 Conclusions
- We developed a single integrated, probabilistic framework to reason about transportation plans, one that successfully manages systematic GPS error.
- We integrate sensor data with high-level concepts such as bus stops.
- We developed an unsupervised learning technique that greatly improves performance.
- Our results show high predictive accuracy and interesting conceptual conclusions.

40 Possible Future Work
- Craig's "cookie" framework may provide the low-level sensor information.
- Try to formalize Craig's problem in the context of dynamic probabilistic systems.

