Properties of Random Direction Models Philippe Nain, Don Towsley, Benyuan Liu, Zhen Liu.

Main mobility models
- Random Waypoint
- Random Direction

Random Waypoint
- Pick a location x at random
- Go to x at constant speed v
- The stationary distribution of node location is not uniform over the area

Random Direction
- Pick a direction θ at random
- Move in direction θ at constant speed v for time τ
- Upon hitting the boundary: reflection or wrap around
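The movement loop above can be sketched in a few lines. The unit square, the constant speed, and the exponential travel times below are illustrative assumptions for the sketch, not choices made in the talk:

```python
import math
import random

def random_direction_step(x, y, rng, speed=1.0, mean_travel=0.5):
    """One movement phase of the Random Direction model on the unit
    square with wrap around: pick a direction theta uniformly in
    [0, 2*pi), then travel at constant speed for a random time tau."""
    theta = rng.uniform(0.0, 2.0 * math.pi)   # direction, uniform on [0, 2*pi)
    tau = rng.expovariate(1.0 / mean_travel)  # travel time (illustrative distribution)
    x = (x + speed * tau * math.cos(theta)) % 1.0  # wrap around in x
    y = (y + speed * tau * math.sin(theta)) % 1.0  # wrap around in y
    return x, y

rng = random.Random(1)
x, y = 0.5, 0.5
for _ in range(1000):
    x, y = random_direction_step(x, y, rng)
print(0.0 <= x < 1.0 and 0.0 <= y < 1.0)  # True
```

Swapping the two `% 1.0` reductions for a folding map gives the reflection variant instead.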

Reflection in 2D

Wrap around in 2D
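The two boundary behaviors amount to different maps applied to the unconstrained coordinate: reduce modulo 1 for wrap around, fold with period 2 for reflection. A one-dimensional sketch (applied coordinate-wise in 2D):

```python
def wrap(z):
    """Wrap around (torus): reduce the unconstrained coordinate mod 1."""
    return z % 1.0

def reflect(z):
    """Billiard reflection on [0, 1]: the motion is periodic with period 2
    (out and back), so reduce mod 2 and fold the second half."""
    z = z % 2.0
    return z if z <= 1.0 else 2.0 - z

print(wrap(1.25))      # 0.25
print(reflect(1.25))   # 0.75
print(reflect(-0.25))  # 0.25
```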

Question: under what condition(s) is the stationary distribution of node location uniform over the area?

Notation
- T_j : beginning of the j-th movement
- τ_j = T_{j+1} − T_j : duration of the j-th movement
- s_j : speed during the j-th movement
- θ(t) : direction at time t
- θ_j = θ(T_j) : direction at the start of the j-th movement
- γ_j : relative direction
- (T_j, s_j, γ_j)_{j≥1} : movement pattern

γ_j = relative direction

1D: γ_j ∈ {−1, +1}
- θ_j = θ(T_j−) γ_j
- Wrap around: θ_j = θ_{j−1} γ_j

2D: γ_j ∈ [0, 2π)
- θ_j = θ(T_j−) + γ_j − 2π ⌊(θ(T_j−) + γ_j)/2π⌋
- Wrap around: θ_j = θ_{j−1} + γ_j − 2π ⌊(θ_{j−1} + γ_j)/2π⌋
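The 2D update is simply reduction modulo 2π; in code, the floor expression coincides with the float modulo operator. A small sketch:

```python
import math

def new_direction(theta_prev, gamma):
    """2D direction update: theta + gamma reduced to [0, 2*pi)
    via the floor expression from the slide."""
    s = theta_prev + gamma
    return s - 2.0 * math.pi * math.floor(s / (2.0 * math.pi))

# The floor expression agrees with Python's float modulo:
theta = new_direction(5.5, 2.0)
print(abs(theta - (5.5 + 2.0) % (2.0 * math.pi)) < 1e-12)  # True
```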

Result I (1D & 2D; reflection & wrap around): if location and direction are uniformly distributed at time t = 0, then they remain so at any time t > 0, under any movement pattern.

Proof (1D = [0,1) & wrap around)
Movement pattern (T_j, s_j, γ_j)_{j≥1} fixed.
Assumption: P(X(0) < x, θ(0) = θ) = x/2. Initial speed = s_0.
For 0 ≤ t < T_1: X(t) = X(0) + θ(0) s_0 t − ⌊X(0) + θ(0) s_0 t⌋
P(X(t) < x, θ(t) = θ) = (1/2) ∫_[0,1] 1{u + θ(0) s_0 t − ⌊u + θ(0) s_0 t⌋ < x} du = x/2
Hence (X(t), θ(t)) is uniformly distributed on [0,1) × {−1, +1} for 0 ≤ t < T_1.
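The invariance in this step can be checked numerically: start many nodes uniform on [0,1) with equally likely directions, move them for a fixed time under wrap around, and the empirical location distribution stays uniform. A seeded Monte Carlo sketch (sample size and tolerance are arbitrary choices):

```python
import random

rng = random.Random(0)
n = 200_000
xs = [rng.random() for _ in range(n)]           # X(0) uniform on [0,1)
dirs = [rng.choice((-1, 1)) for _ in range(n)]  # theta(0) uniform on {-1,+1}

s0, t = 0.7, 1.3  # one fixed speed and elapsed time, same for all nodes
xs = [(x + d * s0 * t) % 1.0 for x, d in zip(xs, dirs)]

# The empirical CDF should stay close to the uniform CDF.
for x in (0.25, 0.5, 0.75):
    frac = sum(1 for v in xs if v < x) / n
    assert abs(frac - x) < 0.01
print("still uniform")
```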

Proof (cont'd, 1D & wrap around)
For wrap around: θ(T_1) = θ(0) γ_1
X(T_1) = X(0) + θ(0) γ_1 s_0 T_1 − ⌊X(0) + θ(0) γ_1 s_0 T_1⌋
Conditioning on the initial location and direction yields (X(T_1), θ(T_1)) uniformly distributed on [0,1) × {−1, +1}.
The proof for [0,1) with wrap around is concluded by induction.

Proof (cont'd, 1D & reflection)
Lemma: take T_j^r = T_j^w, γ_j^r = γ_j^w, s_j^r = 2 s_j^w. If the relations
X^r(t) = 2 X^w(t) for 0 ≤ X^w(t) < 1/2, and X^r(t) = 2(1 − X^w(t)) for 1/2 ≤ X^w(t) < 1
θ^r(t) = θ^w(t) for 0 ≤ X^w(t) < 1/2, and θ^r(t) = −θ^w(t) for 1/2 ≤ X^w(t) < 1
hold at t = 0, then they hold for all t > 0.
Use the lemma and the result for wrap around to conclude the proof for 1D with reflection.
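The lemma's coupling can be verified numerically for a single constant-speed leg: folding a wrap-around trajectory at speed s reproduces a reflected trajectory at speed 2s. A sketch, assuming X^w(0) < 1/2 and θ = +1 so both processes start aligned:

```python
def wrap_pos(x0, s, t):
    """Wrap-around position on [0,1) after time t at speed s."""
    return (x0 + s * t) % 1.0

def refl_pos(x0, s, t):
    """Reflected (billiard) position on [0,1] after time t at speed s."""
    z = (x0 + s * t) % 2.0
    return z if z <= 1.0 else 2.0 - z

def fold(xw):
    """The lemma's map from a wrap-around state to a reflected state."""
    return 2.0 * xw if xw < 0.5 else 2.0 * (1.0 - xw)

x0, s = 0.2, 0.7  # X^w(0) = 0.2 < 1/2, so X^r(0) = 2 * 0.2 = 0.4
for t in (0.0, 0.3, 1.1, 2.6):
    xw = wrap_pos(x0, s, t)               # wrap around at speed s
    xr = refl_pos(2.0 * x0, 2.0 * s, t)   # reflection at speed 2s
    assert abs(fold(xw) - xr) < 1e-9
print("coupling holds")
```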

Proof (cont'd, 2D, wrap around & reflection)
Area: rectangle, disk, …
Wrap around: direct argument as in 1D.
Reflection: use the relation between wrap around and reflection (see the Infocom'05 paper).

Corollary
If N mobiles are uniformly distributed on [0,1] (or [0,1]^2) with equally likely orientations at t = 0, and the mobiles move independently of each other, then the mobiles remain uniformly distributed with equally likely orientations for all t > 0.

Remarks (1D models)
An additive relative direction also works:
θ_j = (θ_{j−1} + γ_j) mod 2, with γ_j ∈ {0, 1}
γ_j = 0 (resp. 1) if the direction at time T_j is not (resp. is) modified.
If instead θ_j = −1 with probability q and +1 with probability 1 − q, the stationary distribution is uniform iff q = 1/2.

How can mobiles reach uniform stationary distributions for location and orientation starting from any initial state?

Movement vector {y_j = (τ_j, s_j, γ_j, m_j)}_j
{m_j}_j : environment (finite-state Markov chain)
Assumptions: {y_j}_j is an aperiodic, Harris recurrent Markov chain with a unique invariant probability measure q.

{y_j}_j, with y_j ∈ Y, a Markov chain:
{y_j}_j is φ-irreducible if there exists a measure φ on B(Y) such that, whenever φ(A) > 0, P_y(return time to A < ∞) > 0 for all y ∈ Y.
{y_j}_j is Harris recurrent if it is φ-irreducible and P_y(Σ_{j≥1} 1{y_j ∈ A} = ∞) = 1 for all y ∈ Y and all A with φ(A) > 0.

Z(t) = (X(t), θ(t), Y(t)) : Markov process, where Y(t) = (R(t), S(t), γ(t), m(t)) with
- R(t) = remaining travel time at time t
- S(t) = speed at time t
- γ(t) = relative direction at time t
- m(t) = state of the environment at time t

Result II (1D, 2D: limiting distribution). If the expected travel time E[τ] is finite, then {Z(t)}_t has a unique invariant probability measure. In particular, the stationary location and direction are uniformly distributed.

Outline of proof (1D = [0,1])
{z_j}_j has a unique stationary distribution p.
Let A = [0, x) × {θ} × [0, τ) × S × {γ} × {m}, and let q be the stationary distribution of the movement vector {y_j}_j. Then
p(A) = (x/2) q([0, τ) × S × {γ} × {m}).
Palm formula:
lim_{t→∞} P(Z(t) ∈ A) = (1/E^0[T_2]) E^0[ ∫_[0,T_2] 1(Z(u) ∈ A) du ]
= (x/(2 E[τ])) ∫_[0,τ) (1 − q([0, u) × S × {γ} × {m})) du

Outline of proof (cont'd, 1D)
With S the set of speeds and A = [0, x) × {θ} × [0, ∞) × S × {γ} × {m} a Borel set:
lim_{t→∞} P(Z(t) ∈ A) = (x/(2 E[τ])) ∫_[0,∞) (1 − q([0, u) × S × {γ} × {m})) du
lim_{t→∞} P(X(t) < x, θ(t) = θ) = Σ_{γ,m} lim_{t→∞} P(Z(t) ∈ A) = x/2
for all 0 ≤ x < 1 and θ ∈ {−1, +1}.
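The final step rests on the identity ∫_0^∞ (1 − F(u)) du = E[τ] for a nonnegative travel time with CDF F, which makes the prefactor x/(2 E[τ]) integrate back to x/2. A numerical sketch with exponential travel times (an illustrative choice of distribution):

```python
import math

lam = 2.0              # rate of the exponential travel times
tau_bar = 1.0 / lam    # E[tau] for an exponential with rate lam

def F(u):
    """CDF of the travel time tau."""
    return 1.0 - math.exp(-lam * u)

# Trapezoid-rule integration of (1 - F(u)) over [0, 50] (tail is negligible).
h, total, u = 1e-3, 0.0, 0.0
while u < 50.0:
    total += h * 0.5 * ((1.0 - F(u)) + (1.0 - F(u + h)))
    u += h

x = 0.8
assert abs(total - tau_bar) < 1e-4                   # integral equals E[tau]
assert abs((x / (2 * tau_bar)) * total - x / 2) < 1e-4
print("limit is x/2")
```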

Outline of proof (cont'd, 2D = [0,1]^2)
Same proof as for 1D, except that the set of directions is now [0, 2π):
lim_{t→∞} P(X_1(t) < x_1, X_2(t) < x_2, θ(t) < θ) = x_1 x_2 θ/(2π)
for all 0 ≤ x_1, x_2 < 1 and θ ∈ [0, 2π).

The assumptions hold if, for instance:
- Speeds and relative directions are mutually independent renewal sequences, independent of the travel times and the environment {τ_j, m_j}_j.
- Travel times are modulated by {m_j}_j, m_j ∈ M: the sequences {τ_j(m)}_j, m ∈ M, are independent renewal sequences, independent of {s_j, γ_j, m_j}_j, with a density and finite expectation.

ns-2 module available from authors

Sorry, it’s finished