Presentation transcript: "Markov chains and processes: motivations"

Slide 1. Markov chains and processes: motivations
- Random walk
- One-dimensional walk: you can only move one step right or left every time unit
- Two-dimensional walk
[Figure: a number line from -3 to 3 with right-step probability p and left-step probability q, and a 2-D grid ("house") with directions N, S, E, W]

Slide 2. One-dimensional random walk with reflective barriers
Hypothesis:
- Probability(object moves to the right) = p
- Probability(object moves to the left) = q
Rule:
- An object at position 2 that takes a step to the right (resp. at position -2 that takes a step to the left) hits the reflective wall and bounces back to 2 (resp. -2)
[Figure: number line from -3 to 3 with step probabilities p and q and walls beyond -2 and 2]
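To make the reflection rule concrete, here is a minimal Monte Carlo sketch (not part of the slides; the value p = 0.6, the seed, and the step count are assumptions for illustration):

```python
import numpy as np

# Minimal sketch of the reflected random walk on {-2, ..., 2}.
# p = 0.6, the seed, and the step count are assumed for illustration.
rng = np.random.default_rng(0)
p = 0.6
x = 0                                    # start at the origin
steps = 100_000
visits = {s: 0 for s in range(-2, 3)}

for _ in range(steps):
    x += 1 if rng.random() < p else -1   # right with prob p, left with prob q
    x = max(-2, min(2, x))               # reflective walls bounce the walk back
    visits[x] += 1

print({s: round(v / steps, 3) for s, v in visits.items()})  # empirical occupancy
```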

Slide 3. Discrete state space and discrete time
Let X_t = r.v. indicating the position of the object at time t, where:
- t takes on multiples of the time unit value, i.e., t = 0, 1, 2, 3, ... => discrete time
- X_t belongs to {-2, -1, 0, 1, 2} => discrete state space
Discrete time + discrete state space + other conditions => Markov chain
Example: random walk

Slide 4. Discrete state space and continuous time
In this case:
- The state space is discrete
- But shifts from one value to another occur continuously in time
Example: the number of packets in the output buffer of a router
- The number of packets is discrete and changes whenever there is an arrival or a departure
All the queues studied so far fall under this category.
Discrete state space + continuous time + other conditions => Markov process (e.g., M/M queues)

Slide 5. Random walk: one-step transition probability
X_t = i => X_{t+1} = i +/- 1
The one-step transition probability P_ij(1) indicates where the object is going to be in one step.
Examples:
- P[X_{t+1} = 1 | X_t = 0] = p
- P[X_{t+1} = -1 | X_t = 0] = q
- P[X_{t+1} = 0 | X_t = 1] = q
- P[X_{t+1} = 2 | X_t = 1] = p

Slide 6. One-step transition matrix
One-step transition matrix P(1), with rows indexed by the state at time t and columns by the state at time t+1:

          -2   -1    0    1    2
    -2  [  q    p    0    0    0 ]
    -1  [  q    0    p    0    0 ]
     0  [  0    q    0    p    0 ]
     1  [  0    0    q    0    p ]
     2  [  0    0    0    q    p ]
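A quick way to sanity-check this matrix is to build it numerically; a sketch with numpy, assuming p = 0.6 for concreteness:

```python
import numpy as np

p = 0.6                 # assumed value for illustration; q = 1 - p
q = 1 - p

# Rows/columns ordered as states -2, -1, 0, 1, 2 (row = state at time t).
P = np.array([
    [q, p, 0, 0, 0],    # at -2: bounce back with q, step right with p
    [q, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, p],
    [0, 0, 0, q, p],    # at 2: step left with q, bounce back with p
])

assert np.allclose(P.sum(axis=1), 1.0)   # every row is a probability distribution
```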

Slide 7. 2-step transition probability
P_ij(2) = 2-step transition probability: given that at time t the object is in state i, with what probability does it get to j in exactly 2 steps?
P_ij(2) = P[X_{t+2} = j | X_t = i]
Examples:
- P[X_{t+2} = 2 | X_t = 0] = p^2
- P[X_{t+2} = 2 | X_t = 1] = p^2 (step right to the wall, then bounce back to 2)
- P[X_{t+2} = 0 | X_t = -1] = 0
Next, we will populate the 2-step transition matrix P(2).

Slide 8. 2-step transition matrix
2-step transition matrix P(2).
Observation: the 2-step transition matrix can be obtained by multiplying two 1-step transition matrices: P(2) = P(1) * P(1).
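The observation translates directly into one line of linear algebra; a sketch using the same assumed P as above:

```python
import numpy as np

p = 0.6; q = 1 - p      # assumed values, as in the earlier sketch
P = np.array([[q, p, 0, 0, 0], [q, 0, p, 0, 0], [0, q, 0, p, 0],
              [0, 0, q, 0, p], [0, 0, 0, q, p]])

P2 = P @ P              # 2-step matrix = product of two 1-step matrices
print(P2[2, 4])         # P[X_{t+2}=2 | X_t=0]: index 2 is state 0, index 4 is state 2
# prints 0.36, which is p**2
```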

Slide 9. 3-step transition probability
P_ij(3) may be derived as follows:
P_ij(3) = sum over k of P_ik(2) * P_kj(1), i.e., P(3) = P(2) * P(1) = P(1)^3

Slide 10. 3-step transition probability: example
For instance, once you construct P(3) (the 3-step transition matrix), any 3-step probability can be read off directly.
[Figure: example 3-step paths over states 0, 1, 2 with probabilities p, q, p^2]

Slide 11. Chapman-Kolmogorov equation
Let P_ij(n) be the n-step transition probability. It depends on:
- the probability of jumping to an intermediate state k in v steps,
- and then, in the remaining n - v steps, going from k to j:
P_ij(n) = sum over k of P_ik(v) * P_kj(n - v), for any 0 < v < n
In matrix form, the n-step transition matrix is P(n) = P(v) * P(n - v) = P(1)^n.
[Diagram: i to k in v steps, then k to j in n - v steps]
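The matrix form of the equation can be checked numerically; a small sketch with the same assumed P, picking n = 5 and v = 2:

```python
import numpy as np

p = 0.6; q = 1 - p      # assumed values, as before
P = np.array([[q, p, 0, 0, 0], [q, 0, p, 0, 0], [0, q, 0, p, 0],
              [0, 0, q, 0, p], [0, 0, 0, q, p]])

n, v = 5, 2
lhs = np.linalg.matrix_power(P, n)                               # P(n)
rhs = np.linalg.matrix_power(P, v) @ np.linalg.matrix_power(P, n - v)
assert np.allclose(lhs, rhs)     # Chapman-Kolmogorov: P(n) = P(v) P(n-v)
```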

Slide 12. Markov chain: main feature
A Markov chain has:
- a discrete state space
- a discrete time structure
Assumption (the Markov property):
P[X_{t+1} = j | X_0 = k, X_1 = k', ..., X_t = i] = P[X_{t+1} = j | X_t = i]
In other words, the probability that the object is in position j at time t+1, given that it was in position i at time t, is independent of the entire earlier history.

Slide 13. Markov chain: main objective
Objective: obtain the long-term probabilities, also called equilibrium or stationary probabilities.
In the case of a random walk: the probability of being at position i in the long run.
P_ij(n) becomes less dependent on i when n is very large; it will only depend on the destination state j.

Slide 14. n-step transition matrix: long run
π_j = Prob[system will be in state j in the long run, i.e., after a large number of transitions]
Equivalently, π_j = lim (n → ∞) P_ij(n), whatever the starting state i; every row of P(n) tends to the same vector π.

Slide 15. Random walk: application
Prob[at time 0, the object is in state i] = ? (the initial distribution)
[Figure: number line from -3 to 3 with step probabilities p and q]

Slide 16. Initial states are equiprobable
If all states are equiprobable at time 0, then each of the 5 states has initial probability 1/5:
π(0) = (1/5, 1/5, 1/5, 1/5, 1/5)

Slide 17. Object initially at a specific position
As you iterate, multiplying by P at each step, the distribution moves away from the original vector; in the long run the behavior is independent of the initial position.

Slide 18. The power method
Assume a Markov chain with m+1 states 0, 1, 2, ..., m.
Starting from any initial distribution π(0), repeatedly multiply by the one-step matrix: π(n) = π(n-1) P = π(0) P^n. As n grows, π(n) converges to the long-term vector π (a sketch follows).
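A minimal sketch of the power method on the reflected walk (p = 0.6 and the iteration count are assumptions):

```python
import numpy as np

p = 0.6; q = 1 - p      # assumed values, as in the earlier sketches
P = np.array([[q, p, 0, 0, 0], [q, 0, p, 0, 0], [0, q, 0, p, 0],
              [0, 0, q, 0, p], [0, 0, 0, q, p]])

pi = np.full(5, 1 / 5)  # any initial distribution works; equiprobable here
for _ in range(200):    # iterate pi <- pi P until it stops changing
    pi = pi @ P
print(pi)               # approximate long-term probabilities
```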

Slide 19. Long-term probabilities: system of equations
The stationary vector π = (π_0, ..., π_m) satisfies the balance equations
π = π P, i.e., π_j = sum over i of π_i P_ij for every j,
together with the normalizing equation: sum over j of π_j = 1.

Slide 20. Solving the system of equations
π = π P gives m+1 balance equations in the m+1 unknowns, but one of them is redundant. So you get rid of one of the balance equations while keeping the normalizing equation.

Slide 21. The long-term probabilities: solution
Application to the random walk: try to find the long-term probabilities (a worked numerical sketch follows).
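One way to carry this out, sketched in numpy with the assumed p = 0.6: replace one balance equation by the normalization and solve the resulting linear system.

```python
import numpy as np

p = 0.6; q = 1 - p
P = np.array([[q, p, 0, 0, 0], [q, 0, p, 0, 0], [0, q, 0, p, 0],
              [0, 0, q, 0, p], [0, 0, 0, q, p]])

# pi = pi P  <=>  (P^T - I) pi = 0; one equation is redundant,
# so overwrite it with the normalizing equation sum(pi) = 1.
A = P.T - np.eye(5)
A[-1, :] = 1.0
b = np.zeros(5); b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)   # matches the power-method result; grows geometrically with ratio p/q
            # (detailed balance of a birth-death chain)
```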

Slide 22. Markov process
- Discrete state space
- Continuous time structure
Example: the M/M/1 queue
X_t = number of customers in the queue at time t, X_t in {0, 1, ...}
p_ij(s, t) = P[X_t = j | X_s = i]; for a time-homogeneous process this depends only on the elapsed time ζ = t - s:
p_ij(ζ) = P[X_{t+ζ} = j | X_t = i]

Slide 23. Rate matrix
Stationary probability vector π = (π_0, π_1, π_2, ...).
π solves the balance equations π Q = 0 together with sum of π_n = 1, where Q is the rate (generator) matrix of the process.
For the M/M/1 queue with arrival rate λ and service rate μ (ρ = λ/μ < 1), the solution is π_n = (1 - ρ) ρ^n.
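A numerical sketch of this solution; the rates λ = 3, μ = 5 are assumed, and the infinite state space is truncated at N = 50 so the rate matrix is finite:

```python
import numpy as np

lam, mu = 3.0, 5.0               # assumed arrival and service rates (rho = 0.6)
N = 50                           # truncation level for the infinite state space

# Rate (generator) matrix Q of the truncated M/M/1 queue.
Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam        # arrival: n -> n + 1
    if n > 0:
        Q[n, n - 1] = mu         # departure: n -> n - 1
    Q[n, n] = -Q[n].sum()        # diagonal entry makes each row sum to 0

# Solve pi Q = 0 with sum(pi) = 1: transpose, replace one equation by normalization.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(N + 1); b[-1] = 1.0
pi = np.linalg.solve(A, b)

rho = lam / mu
print(pi[:4])                            # numerical stationary probabilities
print((1 - rho) * rho ** np.arange(4))   # closed form (1 - rho) * rho**n
```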

