Week Aug-24 – Aug-29: Introduction to Spatial Computing (CSE 5ISC). Some slides adapted from the book Computing with Spatial Trajectories, Yu Zheng and Xiaofang Zhou, Springer.

GPS Tracking [Figure: NOx emission hot spots (g NOx/m) after bus stops]

GPS Track Processing

GPS Tracking • A raw GPS track is a sequence of time-stamped location points {p_1, p_2, ..., p_n}. • Positioning technologies: Global Positioning System (GPS), network-based positioning (e.g., using cellular or Wi-Fi access points), and dead reckoning (for estimation).
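One common way to hold such a track in code is as a time-ordered list of (x, y, t) fixes. The sketch below is illustrative only; the type and field names are our assumptions, not part of the course material.

```python
from typing import NamedTuple, List

class TrackPoint(NamedTuple):
    """One positioning fix, regardless of which technology produced it."""
    x: float  # longitude or projected easting
    y: float  # latitude or projected northing
    t: float  # timestamp, e.g., seconds since epoch

# A raw GPS track is simply a time-ordered list of such points.
Trajectory = List[TrackPoint]

# Illustrative sample data (made-up coordinates).
example_track: Trajectory = [
    TrackPoint(145.048, -37.720, 0.0),
    TrackPoint(145.049, -37.721, 5.0),
    TrackPoint(145.051, -37.722, 10.0),
]
```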

Trajectory Preprocessing • Problems to solve with trajectories: • Lots of trajectories → lots of data. • Noise and errors complicate analysis and inference. Example: errors caused by inaccuracy when switching between GPS, Wi-Fi, and cell-phone-tower positioning. • Employ data reduction and filtering techniques: • Specialized data compression for trajectories. • Principled filtering techniques.

GPS Track Processing: Data Reduction

Trajectory Preprocessing - Compression

Performance Metrics for Trajectory Compression • Trajectory data reduction techniques aim to reduce trajectory size without compromising much precision. • Performance metrics: • Processing time. • Compression rate. • Error measure: the distance between a location on the original trajectory and the corresponding estimated location on the approximated trajectory measures the error introduced by data reduction. Examples are the perpendicular Euclidean distance and the time-synchronized Euclidean distance.

Common Error Measures: Perpendicular Euclidean Distance

Common Error Measures: Time Synchronized Distance
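Both error measures can be computed directly from point coordinates. Below is a minimal Python sketch, assuming points are (x, y, t) tuples in a planar (projected) coordinate system; the function names are ours, not from the slides.

```python
import math

def perpendicular_distance(p, a, b):
    """Perpendicular Euclidean distance from point p to the line through a and b
    (only the x and y components of the tuples are used)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    base = math.hypot(dx, dy)
    if base == 0.0:  # degenerate segment: a and b coincide
        return math.hypot(p[0] - a[0], p[1] - a[1])
    # Parallelogram area divided by base length gives the perpendicular height.
    return abs(dx * (a[1] - p[1]) - dy * (a[0] - p[0])) / base

def time_synchronized_distance(p, a, b):
    """Distance between p = (x, y, t) and the position obtained by linearly
    interpolating along segment a -> b at p's timestamp."""
    ratio = 0.0 if b[2] == a[2] else (p[2] - a[2]) / (b[2] - a[2])
    sx = a[0] + ratio * (b[0] - a[0])
    sy = a[1] + ratio * (b[1] - a[1])
    return math.hypot(p[0] - sx, p[1] - sy)
```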

Trajectory Data Reduction Techniques What if we just sample every i-th point from the trajectory and call it a compressed trajectory?

Trajectory Data Reduction Techniques • Batched compression: • Collect the full set of location points, then compress the data set before transmission to the location server. • Applications: content-sharing sites such as Everytrail and Bikely. • Techniques include the Douglas-Peucker algorithm, top-down time-ratio (TD-TR), and Bellman's algorithm.

Batched Compression: Douglas-Peucker Algorithm • Preserves directional trends in the approximated trajectory, using the perpendicular Euclidean distance as the error measure. 1. Replace the original trajectory by an approximating line segment between its endpoints. 2. If the replacement does not meet the specified error requirement, recursively partition the problem into two subproblems, selecting the location point contributing the most error as the split point. 3. Continue until the error between the approximated trajectory and the original trajectory is below the specified error threshold.

Batched Compression: Douglas-Peucker (DP) Algorithm • Split at the point with the most error. • Repeat until all errors are below the given threshold.
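A compact recursive sketch of the Douglas-Peucker idea, assuming the perpendicular-distance error measure from the earlier sketch and (x, y, ...) point tuples; an illustration under those assumptions, not the course's reference implementation.

```python
import math

def _perp_error(p, a, b):
    """Perpendicular distance from p to the line through a and b (x, y only)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    base = math.hypot(dx, dy)
    if base == 0.0:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    return abs(dx * (a[1] - p[1]) - dy * (a[0] - p[0])) / base

def douglas_peucker(points, threshold):
    """Keep the endpoints; recursively split at the point with the largest
    perpendicular error until every remaining error is below the threshold."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    errors = [_perp_error(p, a, b) for p in points[1:-1]]
    worst = max(range(len(errors)), key=errors.__getitem__) + 1  # index into points
    if errors[worst - 1] <= threshold:
        return [a, b]                      # the single segment a-b is good enough
    left = douglas_peucker(points[:worst + 1], threshold)   # includes split point
    right = douglas_peucker(points[worst:], threshold)
    return left[:-1] + right               # avoid duplicating the split point
```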

Batched Compression: Other Algorithms • DP uses the perpendicular Euclidean distance as the error measure. It is also heuristic-based, i.e., there is no guarantee that the selected split points are the best choice. • TD-TR uses the time-synchronized Euclidean distance as the error measure, taking into account both the geometric and temporal properties of object movements. • Bellman's algorithm employs dynamic programming to ensure that the approximated trajectory is optimal, but its computational cost is high. • More details in Bellman's work.

Trajectory Data Reduction Techniques • On-line data reduction: • Selective on-line updates of locations based on specified precision requirements. • Applications: traffic monitoring and fleet management. • Techniques include reservoir sampling, sliding window, and open window. Consider the scenario: you are managing a truck fleet and want to collect GPS data from the trucks. However, the trucks have limited memory and the internet connection is not reliable. How would you proceed?

Online Data Reduction Techniques: Reservoir Sampling • Goal: maintain an approximated trajectory of size R (i.e., R items). • Maintain a reservoir of size R. • Save the first R samples in the reservoir. • When the k-th location point is acquired (k > R): • Generate a random number j between 1 and k. • If j ≤ R, replace the j-th item in the reservoir with the new point. • The reservoir algorithm always maintains a uniform random sample of the evolving trajectory without knowing the eventual trajectory size (see the reservoir-sampling literature for a proof). Any comments on its solution quality?
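A minimal sketch of reservoir sampling over a stream of location points (standard Algorithm R); variable names are illustrative.

```python
import random

def reservoir_sample(stream, R):
    """Maintain a uniform random sample of at most R points from a location
    stream whose eventual length is unknown."""
    reservoir = []
    for k, point in enumerate(stream, start=1):
        if k <= R:
            reservoir.append(point)        # the first R points fill the reservoir
        else:
            j = random.randint(1, k)       # uniform integer in 1..k
            if j <= R:
                reservoir[j - 1] = point   # replace the j-th stored point
    return reservoir
```

On solution quality: the sample is uniform over all points seen so far, but it ignores the trajectory's shape, so sharp turns and stops can easily be missed; this is one answer to the question above.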

Online Data Reduction Techniques: Sliding Window • Fit the location points in a growing sliding window with a single line segment, and keep growing the window until the approximation error exceeds the error bound. 1. Initialize the first location point of the trajectory as the anchor point p_a and start growing the sliding window. 2. When a new location point p_i is added to the sliding window, the line segment p_a p_i is used to fit all the location points within the window. 3. As long as the distance errors against the line segment p_a p_i are smaller than the user-specified error threshold, the sliding window continues to grow. Otherwise, the line segment p_a p_{i-1} is included as part of the approximated trajectory and p_{i-1} is set as the new anchor point. 4. The algorithm continues until all the location points in the original trajectory have been visited.

Illustration of Sliding Window • While the sliding window grows from {p_0} to {p_0, p_1, p_2, p_3}, all the errors between the fitting line segments and the original trajectory stay within the specified error threshold. • When p_4 is included, the error for p_2 exceeds the threshold, so p_0 p_3 is included in the approximate trajectory and p_3 is set as the anchor to continue. [Figure: the error at p_2 exceeds the desired threshold.]
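A sketch of the sliding-window procedure, assuming (x, y, t) tuples and the time-synchronized error measure (any error function could be plugged in); as in the illustration above, when the error bound is violated at p_i, the segment closes at p_{i-1}, which becomes the new anchor.

```python
import math

def _sync_error(p, a, b):
    """Time-synchronized distance between p = (x, y, t) and segment a -> b."""
    r = 0.0 if b[2] == a[2] else (p[2] - a[2]) / (b[2] - a[2])
    sx, sy = a[0] + r * (b[0] - a[0]), a[1] + r * (b[1] - a[1])
    return math.hypot(p[0] - sx, p[1] - sy)

def sliding_window(points, threshold, error=_sync_error):
    """Grow a window from the anchor; when some point's error against the
    fitting segment exceeds the threshold, close the segment at the previous
    point and restart the window from there."""
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    anchor, i = 0, 2
    while i < len(points):
        window_ok = all(error(points[k], points[anchor], points[i]) <= threshold
                        for k in range(anchor + 1, i))
        if window_ok:
            i += 1                         # keep growing the window
        else:
            kept.append(points[i - 1])     # close segment anchor -> p_{i-1}
            anchor, i = i - 1, i + 1       # p_{i-1} becomes the new anchor
    kept.append(points[-1])
    return kept
```

The opening-window variant on the next slide differs only in how it closes the window: it picks the highest-error point inside the window as both the closing point and the new anchor.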

Illustration of Opening Window • Different from the sliding window, the opening-window approach chooses the location point with the highest error in the window as both the closing point of the approximating line segment and the new anchor point. • When p_4 is included, the error for p_2 exceeds the threshold, so p_0 p_2 is included in the approximate trajectory and p_2 is set as the anchor to continue.

Reduction based on Speed and Direction

GPS Track Processing: Filtering Techniques

Filtering Techniques: Mean Filter

Filtering Techniques: Median Filter
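The mean and median filters named by these two slides can be sketched in a few lines; this assumes a causal window over the most recent fixes and (x, y, t) tuples (the window size and names are our choices).

```python
import statistics

def mean_filter(points, window=5):
    """Replace each point's coordinates with the mean of the last `window` fixes."""
    smoothed = []
    for i, (_, _, t) in enumerate(points):
        recent = points[max(0, i - window + 1): i + 1]
        smoothed.append((sum(p[0] for p in recent) / len(recent),
                         sum(p[1] for p in recent) / len(recent),
                         t))
    return smoothed

def median_filter(points, window=5):
    """Same idea with the median, which is more robust to isolated GPS outliers."""
    smoothed = []
    for i, (_, _, t) in enumerate(points):
        recent = points[max(0, i - window + 1): i + 1]
        smoothed.append((statistics.median(p[0] for p in recent),
                         statistics.median(p[1] for p in recent),
                         t))
    return smoothed
```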

Filtering Techniques: Kalman Filtering • Can use a physics-based model of the trajectory. • The trajectory estimate from the Kalman filter is essentially a trade-off between the measurements and the motion model (dictated by physics). • Based on the hidden Markov model framework (with a continuous, Gaussian state).

Gentle Introduction to Hidden Markov Models • X_t and E_t are random variables. • X_t models the hidden phenomenon being tracked, e.g., the actual location of an object. • E_t is the observed state (the evidence). • Based on the observations E_t we estimate X_t. • We are given the transition probabilities P(X_t | X_{t-1}) and the sensor probabilities P(E_t | X_t).

Gentle Introduction to HMM: Markov Chains

Gentle Introduction to HMM

Filtering in Hidden Markov Models

One Step Prediction

Filtering in Hidden Markov Models: Updating Using Sensor Values
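In standard notation (the equations on these slides appear as figures in the original deck), the filtering recursion alternates a prediction step through the transition model with an update step through the sensor model:

```latex
% One-step prediction through the transition model
P(X_{t+1} \mid e_{1:t}) \;=\; \sum_{x_t} P(X_{t+1} \mid x_t)\, P(x_t \mid e_{1:t})
% Update with the new evidence e_{t+1} (\alpha is a normalizing constant)
P(X_{t+1} \mid e_{1:t+1}) \;=\; \alpha\, P(e_{t+1} \mid X_{t+1})\, P(X_{t+1} \mid e_{1:t})
```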

Kalman Filters [Figure: state-space model with location and velocity as hidden variables, observed through sensors]

Gaussian Distributions • Univariate (x is a 1-d variable): $N(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$ • Multivariate (x is an n-dimensional vector): $N(x; \mu, \Sigma) = \frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(x-\mu)^{\top} \Sigma^{-1} (x-\mu)\right)$, where $\Sigma$ is the covariance matrix.

Filtering in Kalman Filters: the 1-d Case • One-step prediction: if the current belief is Gaussian, the predicted posterior is also Gaussian. • Updating with sensor values: filter the prediction using the evidence; if the predicted posterior is Gaussian, the filtered posterior is also Gaussian. • So the result remains Gaussian at every step.
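For concreteness, here is the 1-d case under the simplest linear-Gaussian assumptions: a random-walk motion model $x_{t+1} = x_t + w$ with $w \sim N(0, \sigma_x^2)$ and a sensor model $z_t = x_t + v$ with $v \sim N(0, \sigma_z^2)$ (these symbols are ours; the slides' figures may use different notation).

```latex
% One-step prediction: still Gaussian, with the variance grown by the motion noise
P(x_{t+1} \mid z_{1:t}) = N\!\left(x_{t+1};\; \mu_t,\; \sigma_t^2 + \sigma_x^2\right)
% Update with the new measurement z_{t+1}: the posterior is again Gaussian
\mu_{t+1} = \frac{(\sigma_t^2 + \sigma_x^2)\, z_{t+1} + \sigma_z^2\, \mu_t}
                 {\sigma_t^2 + \sigma_x^2 + \sigma_z^2},
\qquad
\sigma_{t+1}^2 = \frac{(\sigma_t^2 + \sigma_x^2)\, \sigma_z^2}
                      {\sigma_t^2 + \sigma_x^2 + \sigma_z^2}
```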

Filtering in Kalman Filters: the Multivariate Case
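In the multivariate case, with a linear-Gaussian transition $x_{t+1} = F x_t + w$, $w \sim N(0, Q)$, and observation $z_t = H x_t + v$, $v \sim N(0, R)$, the standard predict/update equations are (notation assumed, not taken from the slides):

```latex
% Predict
\hat{x}_{t+1 \mid t} = F\,\hat{x}_t, \qquad
P_{t+1 \mid t} = F\,P_t\,F^{\top} + Q
% Update with measurement z_{t+1}
K_{t+1} = P_{t+1 \mid t}\,H^{\top} \left(H\,P_{t+1 \mid t}\,H^{\top} + R\right)^{-1}
\hat{x}_{t+1} = \hat{x}_{t+1 \mid t} + K_{t+1}\left(z_{t+1} - H\,\hat{x}_{t+1 \mid t}\right)
P_{t+1} = \left(I - K_{t+1} H\right) P_{t+1 \mid t}
```

The mean update makes explicit the "trade-off between measurements and motion model" mentioned earlier: the Kalman gain K weights the measurement residual against the model's prediction.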