Using Probabilistic Methods for Localization in Wireless Networks Presented by Adam Kariv May, 2005.



Agenda Introduction Theory Our Algorithm Preliminary Results

Goal Find the exact location of a wireless, mobile, network device.

Possible Applications Smart Buildings Route incoming calls to the nearest phone- extension. Print documents to the nearest printer. Download slides of the currently presented lecture. Location-based Targeted Advertisement Receive discount information of the store you're standing next to.

Our Solution Concept A mobile station may use the received strengths of network signals to passively find its own location.

Network Elements Base Stations - Stationary network elements, usually used to connect the wireless network to external networks. Mobile Stations - User agents, whose location is dynamic.  This is the location we aim to find!

Assumptions Each mobile station is in the reception range of several base stations. Mobile stations can easily list all base-stations in reception range. Mobile stations know the strength of the received signal from each base-station in their reception range.

Examples for Wireless Networks Wireless Local Area Networks. GSM Cellular Networks.

Drawbacks (1) Our localization scheme is nearly network-independent. Better results may be obtained by: utilizing data available from the network (e.g. cell id), designing a network to be "localization-aware", performing actual localization on the network side, which has access to more data and better resources, or obtaining location using a different technology.

Drawbacks (2) Can't localize when too few base stations are in reception range. May not be a problem in WLAN Could be problematic in cellular networks Probably will be a problem in WiMax.

Agenda Introduction Theory Our Algorithm Preliminary Results

Reference Measurement (1) Preparation: Measure in selected reference points the exact received signal strength from each base station. Store measurement for each location in the signal strength database.

Reference Measurement (2) In order to localize: Measure the exact signal strength from each base station. Find the best match for the current measurement in the signal-strength database. Accuracy is proportional to reference-point density.
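The database lookup above can be sketched as a nearest-neighbor search in signal-strength space; the reference points, base-station names, and dB values below are illustrative placeholders, not values from the talk.

```python
import math

# Hypothetical signal-strength database: reference point -> {base_station: dBm}.
DB = {
    "lobby":   {"bs1": -40.0, "bs2": -70.0, "bs3": -60.0},
    "hallway": {"bs1": -55.0, "bs2": -50.0, "bs3": -65.0},
    "office":  {"bs1": -75.0, "bs2": -45.0, "bs3": -50.0},
}

def best_match(sample, db):
    """Return the reference point whose stored fingerprint is closest
    (Euclidean distance in dB) to the current measurement."""
    def dist(fingerprint):
        return math.sqrt(sum((sample[bs] - fingerprint[bs]) ** 2
                             for bs in fingerprint))
    return min(db, key=lambda loc: dist(db[loc]))

# A measurement close to the stored "lobby" fingerprint matches "lobby".
loc = best_match({"bs1": -42.0, "bs2": -68.0, "bs3": -61.0}, DB)
```

Noise in the measurement moves the sample point around in this space, which is exactly the jitter problem mentioned below.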

Problems with Reference Measurement (1) Tedious preparation phase. Low tolerance to changes in the number or location of base stations. To achieve better accuracy, we must have more reference points  an even longer preparation phase.

Problems with Reference Measurement (2) Noise in signal strength measurements during localization may cause jitter in resulting location. Doesn't take into account prior knowledge we may have of the physical environment.

Using a physical model To avoid the preparation phase, we could deduce the signal strength using a radio propagation model. The model can predict the signal strength at each reference point. The number of possible reference points is unlimited  this method allows us to improve accuracy without increasing overhead. But - is this feasible?

Problems with physical model Many "real-world" phenomena are hard to model: reflections, signal decay when passing through obstacles. We also have many unknowns, such as: the floor plan of the building (exact location and material of obstacles, walls, windows, furniture…), the exact locations of the base-stations, the transmission power of the base-stations, the sensitivity and amplification of the receiving mobile-station.  Achieving an accurate physical model is very difficult.

Simple physical model (1) [as shown by Wallbaum & Wasch, 2004] Assumptions: reflections are disregarded; the floor plan of the building is fairly well known; base-station locations are known; base-station and mobile-station properties are known. To compute the received strength at point X of a signal transmitted from point Y: count the number of obstacles of each kind on the straight line from X to Y, then use the following function:

Simple physical model (2) Object classes and their parameters:
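The function and parameter table appear only as an image in the slides; a common form of such a model, sketched here under assumed parameter values (log-distance path loss plus a fixed per-obstacle-class penalty), is:

```python
import math

# Illustrative attenuation per obstacle class (dB); the actual values
# come from the slide's table and are not reproduced here.
ATTENUATION_DB = {"concrete_wall": 10.0, "plaster_wall": 4.0, "window": 2.0}

def predicted_rssi(tx_power_dbm, distance_m, obstacles,
                   path_loss_exp=3.0, ref_loss_db=40.0):
    """Predicted received strength: log-distance path loss plus a fixed
    penalty for each obstacle crossed on the straight line from
    transmitter to receiver. All parameters are assumed, not measured."""
    path_loss = ref_loss_db + 10 * path_loss_exp * math.log10(max(distance_m, 1.0))
    wall_loss = sum(ATTENUATION_DB[kind] * count
                    for kind, count in obstacles.items())
    return tx_power_dbm - path_loss - wall_loss

# 20 m away, through two concrete walls and one window:
rssi = predicted_rssi(15.0, 20.0, {"concrete_wall": 2, "window": 1})
```

Filling the signal-strength database then amounts to evaluating this function at every (reference point, base station) pair.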

Hidden Markov Model Hidden Markov Model - HMM Defines a set of random variables: one “hidden” and one “observable” for each time step. In our case: the hidden variable holds the actual location at each time step; the observable holds the sampled data at each time step – i.e. the reported signal strength from every base-station. The HMM “tracks” the location of the mobile-station through time.

Using HMMs This method assumes good knowledge of the following probability functions: A(l,l') = P( location_t+1 = l' | location_t = l ) B(l,s) = P( sample_t = s | location_t = l ) Using these functions, we can easily compute the exact value of P( location_t = l | sample_1 ... sample_t ) - the probability of being in any of the reference locations at time t, given all the previous samples.
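The computation referred to above is the standard HMM forward (filtering) recursion; a minimal sketch, with the A and B functions passed in as callables:

```python
def forward_step(belief, A, B, sample):
    """One HMM filtering step:
    P(loc_t = l | s_1..s_t) is proportional to
    B(l, s_t) * sum over l' of A(l', l) * P(loc_{t-1} = l' | s_1..s_{t-1})."""
    locations = list(belief)
    new = {l: B(l, sample) * sum(A(lp, l) * belief[lp] for lp in locations)
           for l in locations}
    z = sum(new.values())  # normalize so the belief sums to 1
    return {l: p / z for l, p in new.items()}
```

Running `forward_step` once per incoming sample maintains the full posterior over reference locations; the most likely location at time t is simply the key with the largest belief.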

Previous Results Castro, Chiu, Kremenek, Muntz (2001): Physical Model. Ladd, Bekris, Marceau, Rudys, Wallach, Kavraki (IROS 2002): Physical Model + HMM. Haeberlen, Flannery, Ladd, Rudys, Wallach, Kavraki (MOBICOM 2004): Reference Measurement + HMM. Wallbaum, Wasch (WONS 2004): Physical Model + HMM.

Agenda Introduction Theory Our Algorithm Preliminary Results

Tying it all together Floor plan of Ross Building, Entrance Level:

Tying it all together Use the physical model to fill initial values in the signal strength database. Transition function: A(l,l') = P( location_t+1 = l' | location_t = l ) - ≈1 for staying in the same location; ≪1 for moving to an adjacent reference location; 0 otherwise. Emission function: B(l,s) = P( sample_t = s | location_t = l ) - depends on the distance of s from the l-th entry in the signal-strength database.
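The transition and emission functions described above might be built as follows; the slides give no concrete numbers, so `p_stay` and the Gaussian noise level `sigma` are assumed illustrative values.

```python
import math

def make_transition(adjacency, p_stay=0.9):
    """Transition function as described: high probability of staying put,
    the remainder split among adjacent reference locations, zero elsewhere."""
    def A(l, lp):
        if lp == l:
            return p_stay
        neighbors = adjacency[l]
        return (1 - p_stay) / len(neighbors) if lp in neighbors else 0.0
    return A

def make_emission(db, sigma=5.0):
    """Emission likelihood: Gaussian in the distance (dB) between the sample
    and the l-th database fingerprint. Unnormalized, which is fine for
    filtering since the belief is renormalized each step."""
    def B(l, sample):
        d2 = sum((sample[bs] - db[l][bs]) ** 2 for bs in db[l])
        return math.exp(-d2 / (2 * sigma ** 2))
    return B
```

The `adjacency` map (reference location to its set of neighbors) would be derived from the floor plan.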

Tying it all together To localize, for each sample we calculate: argmax_l P( location_t = l | sample_1 ... sample_t ) Previous results already achieve good localization – how can we do better?

The EM Algorithm The EM algorithm is an iterative method used to find the most likely model for a given sample. It has two steps: E - Estimate probabilities for each hidden variable at each time. M - Find new model which maximizes the likelihood of the samples.

Using EM to improve the model The model may include the signal-strength database, the transition function, plus all the physical model's unknowns. E step: find P( location_t = l ) for each t, l, using the signal-strength database and the HMM. M step: find new values for the signal-strength database: DB(l) = Σ_t [ P( loc_t = l ) · sample_t ] / Σ_t P( loc_t = l )
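The M-step update for the database follows directly from the formula above; here `posteriors` and `samples` are assumed to be lists of per-time-step dictionaries (the E-step posteriors P(loc_t = l) and the measured strengths, respectively).

```python
def m_step_db(posteriors, samples):
    """Re-estimate each fingerprint as the posterior-weighted average of
    the samples: DB(l) = sum_t P(loc_t=l) * sample_t / sum_t P(loc_t=l)."""
    locations = posteriors[0].keys()
    base_stations = samples[0].keys()
    db = {}
    for l in locations:
        w = sum(p[l] for p in posteriors)  # total posterior mass at l
        db[l] = {bs: sum(p[l] * s[bs] for p, s in zip(posteriors, samples)) / w
                 for bs in base_stations}
    return db
```

This is the Baum-Welch-style update restricted to the emission means; iterating E and M steps refines the database toward values that better explain the observed samples.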

Problems of EM EM finds the model which maximizes the likelihood of the sampled data. This does not guarantee correctness… EM has many local maxima, so it is better to start close to the correct solution.

Agenda Introduction Theory Our Algorithm Preliminary Results

Performed Simulation (1) Mobile-station performs a random-walk along the reference locations. “Measured” signal strength is the sum of: Expected signal strength, according to the Physical Model, Model Error, fixed for each location, modeling inaccuracies in the physical model, Sensor Error, modeling measurement errors. HMM is used to track the mobile-station’s location. Inferred path is compared to actual path.
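The simulation described above might be sketched as follows; the adjacency graph, predicted strengths, and error values are placeholders for the actual floor-plan data.

```python
import random

def simulate_walk(adjacency, predict, model_err, steps,
                  sensor_sigma=1.0, start=None):
    """Random walk over reference locations. Each measurement is the
    physical-model prediction plus a fixed per-location model error plus
    fresh Gaussian sensor noise, as in the simulation described above."""
    loc = start if start is not None else random.choice(sorted(adjacency))
    path, samples = [], []
    for _ in range(steps):
        path.append(loc)
        samples.append({bs: mu + model_err[loc][bs]
                            + random.gauss(0, sensor_sigma)
                        for bs, mu in predict[loc].items()})
        # Move to the current location or one of its neighbors.
        loc = random.choice(sorted(adjacency[loc] | {loc}))
    return path, samples
```

The returned `path` is the ground truth against which the HMM-inferred path is compared.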

Performed Simulation (2) The EM algorithm is used to learn a better model In our case - more accurate values for the signal-strength database. We will see results for two scenarios: Low model and sensor errors (~1dB) High model and sensor errors (~10dB)

Measures for Learning Quality Location Accuracy Could be misleading – depends on reference-point density. Localization Error Inferred path vs. actual path 1-hop localization error Signal-Strength Database Error Likelihood Could be used as a measure for convergence.

Localization Error (1-hop)

Signal-Strength Database Error

Likelihood

Future Plans Use better representations of model errors and sensor errors. Improve EM to learn more of the physical model's unknowns. Learn the transition function (the A matrix). Use multiple samples concurrently to improve learning quality and speed. Perform a “field test” with actual network data.

Questions?