11. Markov Chains (MCs) 2 Courtesy of J. Bard, L. Page, and J. Heyl.


1 11. Markov Chains (MCs) 2 Courtesy of J. Bard, L. Page, and J. Heyl

2 11.2.1 n-step transition probabilities (review)

3 Transition prob. matrix The n-step transition probability from state i to state j is p_ij(n) = P[X_{m+n} = j | X_m = i]. The n-step transition matrix (over all states) is then P(n) = P^n. For instance, the two-step transition matrix is P(2) = P·P = P^2.

4 Chapman-Kolmogorov equations The probability of going from state i at t=0, passing through state k at t=m, and ending at state j at t=m+n is p_ik(m) p_kj(n). Summing over all intermediate states k gives p_ij(m+n) = Σ_k p_ik(m) p_kj(n). In matrix notation, P(m+n) = P(m) P(n).
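The Chapman-Kolmogorov identity can be sanity-checked numerically. This is a sketch in Python/NumPy; the 2x2 matrix values are borrowed from the speech/silence example later in the deck, and the choice m=3, n=4 is arbitrary:

```python
import numpy as np

# Two-state transition matrix (rows sum to 1); same numbers as the
# silence/speech example in Ex 11.10.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

m, n = 3, 4
Pm = np.linalg.matrix_power(P, m)        # m-step transition matrix P(m)
Pn = np.linalg.matrix_power(P, n)        # n-step transition matrix P(n)
Pmn = np.linalg.matrix_power(P, m + n)   # (m+n)-step matrix P(m+n)

# Matrix form of Chapman-Kolmogorov: P(m+n) = P(m) P(n)
assert np.allclose(Pmn, Pm @ Pn)

# Entry-wise form: p_ij(m+n) = sum_k p_ik(m) p_kj(n), e.g. for i=0, j=1
lhs = Pmn[0, 1]
rhs = sum(Pm[0, k] * Pn[k, 1] for k in range(2))
print(abs(lhs - rhs) < 1e-12)  # True
```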

5 11.2.2 state probabilities

6 State probability (pmf of an RV!) Let p(n) = {p_j(n)} be the row vector of state probabilities at time n (the state probability vector). Then p(n) = p(n-1) P, and, unrolling back to the initial state, p(n) = p(0) P^n.

7 How an MC changes (Ex 11.10, 11.11) A two-state system: Silence (state 0) and Speech (state 1), with p_00 = 0.9, p_01 = 0.1, p_10 = 0.2, p_11 = 0.8.

Suppose p(0) = (0, 1). Then
p(1) = p(0)P = (0,1)P = (0.2, 0.8)
p(2) = (0.2, 0.8)P = (0,1)P^2 = (0.34, 0.66)
p(4) = (0,1)P^4 = (0.507, 0.493)
p(8) = (0,1)P^8 = (0.629, 0.371)
p(16) = (0,1)P^16 = (0.665, 0.335)
p(32) = (0,1)P^32 = (0.667, 0.333)
p(64) = (0,1)P^64 = (0.667, 0.333)

Suppose p(0) = (1, 0). Then
p(1) = p(0)P = (0.9, 0.1)
p(2) = (1,0)P^2 = (0.83, 0.17)
p(4) = (1,0)P^4 = (0.747, 0.253)
p(8) = (1,0)P^8 = (0.686, 0.314)
p(16) = (1,0)P^16 = (0.668, 0.332)
p(32) = (1,0)P^32 = (0.667, 0.333)
p(64) = (1,0)P^64 = (0.667, 0.333)
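The iterates on this slide can be reproduced with a few lines of NumPy (a sketch; the matrix comes from the slide's silence/speech diagram):

```python
import numpy as np

# Transition matrix of the silence/speech chain (Ex 11.10, 11.11).
P = np.array([[0.9, 0.1],   # from state 0 (silence)
              [0.2, 0.8]])  # from state 1 (speech)

def state_pmf(p0, n):
    """State probability vector p(n) = p(0) P^n."""
    return p0 @ np.linalg.matrix_power(P, n)

# Both initial conditions converge to roughly (0.667, 0.333) by n = 64.
for p0 in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
    print(p0, "->", np.round(state_pmf(p0, 64), 3))
```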

8 Independence of initial condition

9 The lesson to take away No matter what assumptions you make about the initial probability distribution, after a large number of steps the state probability distribution is approximately (2/3, 1/3). See pp. 666-667.

10 11.2.3 steady state probabilities

11 State probabilities (pmf) converge As n → ∞, the transition probability matrix P^n approaches a matrix whose rows all equal the same pmf: P^n → 1π, where 1 is a column vector of all 1's and π = (π_1, π_2, …). The convergence of P^n implies the convergence of the state pmfs: p(n) = p(0) P^n → p(0) 1π = π.

12 Steady state probability The system reaches "equilibrium" or "steady state": as n → ∞, p_j(n) → π_j and p_i(n-1) → π_i. In matrix notation, π = π P, where π is the stationary state pmf of the Markov chain. To solve for π, combine π = π P with the normalization condition Σ_j π_j = 1.

13 Speech activity system Solve for the steady state probabilities from π = π P:

(π_1, π_2) = (π_1, π_2) | 0.9 0.1 |
                        | 0.2 0.8 |

π_1 = 0.9 π_1 + 0.2 π_2
π_2 = 0.1 π_1 + 0.8 π_2
π_1 + π_2 = 1

which gives π_1 = 2/3 ≈ 0.667 and π_2 = 1/3 ≈ 0.333.
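The same system of equations can be solved mechanically: stack the rows of P^T - I with a normalization row and solve by least squares (a sketch in NumPy; the matrix is the speech chain from this slide):

```python
import numpy as np

# Solve pi = pi P together with sum(pi) = 1 for the speech chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = P.shape[0]

# (P^T - I) pi^T = 0 gives n equations; append the row sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)

# The system is overdetermined but consistent; lstsq recovers pi exactly.
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi, 3))  # → [0.667 0.333]
```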

14 Question 11-1: Alice, Bob, and Carol are playing Frisbee. Alice always throws to Carol. Bob always throws to Alice. Carol throws to Bob 2/3 of the time and to Alice 1/3 of the time. In the long run, what percentage of the time does each player have the Frisbee?
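One way to check an answer to Question 11-1 numerically (a sketch; the state numbering 0 = Alice, 1 = Bob, 2 = Carol is my own choice, not from the slide):

```python
import numpy as np

# States: 0 = Alice, 1 = Bob, 2 = Carol.
# Alice always throws to Carol; Bob always throws to Alice;
# Carol throws to Bob with prob 2/3 and to Alice with prob 1/3.
P = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0],
              [1/3, 2/3, 0.0]])

# The chain is irreducible and aperiodic (return loops of length 2 and 3
# exist, and gcd(2, 3) = 1), so every row of P^n converges to the
# stationary pmf; read it off a high power.
pi = np.linalg.matrix_power(P, 200)[0]
print(np.round(pi, 4))
```

The computed vector comes out as (3/8, 1/4, 3/8): Alice and Carol each hold the Frisbee 37.5% of the time and Bob 25%.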

15 11.3.1 classes of states; 11.3.2 recurrence properties

16 Why classification? The methods we have learned for steady state probabilities (Section 11.2.3) work only for regular Markov chains (MCs). A regular MC has a P^n whose entries are all non-zero for some integer n (what happens then as n → ∞?). There are non-regular MCs; how can we check whether a given chain is regular?
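The definition suggests a direct (if brute-force) check: raise P to successive powers and look for one with all entries strictly positive. A sketch, with a hypothetical cutoff on how many powers to try:

```python
import numpy as np

def is_regular(P, max_power=100):
    """Check whether some power P^n (n <= max_power) is entrywise positive."""
    Q = np.eye(P.shape[0])
    for _ in range(max_power):
        Q = Q @ P                # Q is now the next power of P
        if np.all(Q > 0):
            return True
    return False

# The two-state speech chain is regular (P^1 is already all positive).
print(is_regular(np.array([[0.9, 0.1], [0.2, 0.8]])))   # True

# A deterministic 2-cycle is not: its powers alternate between the
# identity and the swap matrix, so some entry is always zero.
print(is_regular(np.array([[0.0, 1.0], [1.0, 0.0]])))   # False
```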

17 Classification of States Accessible: it is possible to go from state i to state j (a path exists from i to j). Two states communicate if each is accessible from the other. A chain is irreducible if all states communicate. State i is recurrent if, after leaving it, the system is certain to return to it at some time in the future. A state that is not recurrent is transient.

18 Classification of States (cont'd) A state is periodic if it can return to itself only after a number of transitions that is a fixed integer greater than 1, or a multiple of such an integer. A state that is not periodic is aperiodic. In the diagram, each state is visited every 3 iterations. How about this one?

19 Classification of States (cont'd) Here each state is visited in multiples of 3 iterations. The period of state i is the smallest k > 1 such that all paths leading back to i have lengths that are a multiple of k; i.e., p_ii(n) = 0 unless n = k, 2k, 3k, ... If the gcd of all return-path lengths is 1, then state i is aperiodic. Periodicity is a class property: all states in a class have the same period.
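The gcd characterization translates directly into code: collect every n with p_ii(n) > 0 and take the gcd. A sketch (the cutoff on n and the two small example chains are my own):

```python
import numpy as np
from math import gcd
from functools import reduce

def period(P, i, max_power=60):
    """gcd of all n <= max_power with p_ii(n) > 0 (0 if no return is seen)."""
    return_lengths = []
    Q = np.eye(P.shape[0])
    for n in range(1, max_power + 1):
        Q = Q @ P
        if Q[i, i] > 0:
            return_lengths.append(n)
    return reduce(gcd, return_lengths, 0)

# Deterministic 3-cycle: every return to a state takes a multiple of 3 steps.
cycle3 = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]], dtype=float)
print(period(cycle3, 0))  # 3

# Adding a self-loop at state 0 makes it aperiodic: gcd({1, 2, 3, ...}) = 1.
loopy = np.array([[0.5, 0.5, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
print(period(loopy, 0))   # 1
```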

20 Classification of States (cont'd) An absorbing (trapping) state locks the system in once it is entered: an absorbing state is a state j with p_jj = 1. The diagram might represent the wealth of a gambler who begins with $2, makes a series of $1 wagers, and stops when his money reaches $4 or $0. Let a_i be the event of winning in state i and d_i the event of losing in state i. There are two absorbing states: 0 and 4.
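The gambler's chain is easy to build explicitly. A sketch, assuming a fair game (win probability p = 0.5; the slide does not fix p):

```python
import numpy as np

# Gambler's ruin on $0..$4 with $1 wagers and win probability p.
p = 0.5                              # hypothetical fair game
P = np.zeros((5, 5))
P[0, 0] = P[4, 4] = 1.0              # absorbing states: broke ($0), goal ($4)
for i in range(1, 4):
    P[i, i + 1] = p                  # win a $1 wager
    P[i, i - 1] = 1 - p              # lose a $1 wager

# From $2, a high power of P shows all probability mass piling up on
# the two absorbing states; for a fair game it splits 50/50.
start = 2
absorbed = np.linalg.matrix_power(P, 200)[start]
print(np.round(absorbed, 3))
```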

21 Classification of States (cont'd) Class: a set of states that communicate with each other. A chain is irreducible if there is only one class. A class is either all recurrent or all transient, and its states are either all periodic or all aperiodic. No arc can leave a recurrent class for a state outside it (otherwise its states could not be recurrent); arcs may enter it from outside, though. A transient class, in contrast, has arcs leaving it.

22 Illustration of Concepts, Example 1 Every pair of states communicates, forming a single recurrent class; moreover, the states are not periodic. Thus the stochastic process is aperiodic and irreducible. (* X is a probability with 0 < X ≤ 1.)

23 Illustration of Concepts, Example 2 States 0 and 1 communicate and form a recurrent class. States 3 and 4 form separate transient classes. State 2 is an absorbing state and forms its own recurrent class. (* X is a probability with 0 < X ≤ 1; in particular, p_2,2 = 1.)

24 Illustration of Concepts, Example 3 Every state communicates with every other state, so we have an irreducible stochastic process. Periodic? Yes, so this MC is irreducible and periodic. (* X is a probability with 0 < X ≤ 1.)

25 Classification of States, Example 4 A five-state chain, states 1-5, with transition probabilities shown in the diagram. (* Sometimes states are numbered starting from 1, not 0.)

26 Example 4 Review A state j is accessible from state i if p_ij(n) > 0 for some n > 0. In Example 4, state 2 is accessible from state 1, and states 3 and 4 are accessible from state 5, but state 3 is not accessible from state 2. States i and j communicate if i is accessible from j and j is accessible from i. States 1 and 2 communicate; states 3, 4, and 5 communicate; states 1, 2 do not communicate with states 3, 4, 5. Thus states 1 and 2 form one communicating class, and states 3, 4, and 5 form another.
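Communicating classes can be computed mechanically from reachability: j is accessible from i iff the transitive closure of the "one-step arc" relation connects i to j, and classes are the symmetric part of that relation. A sketch; the transition matrix below is hypothetical, chosen only to mirror Example 4's class structure ({0, 1} and {2, 3, 4}, 0-indexed, while the slide numbers states from 1):

```python
import numpy as np

def communicating_classes(P):
    """Group states into communicating classes via reachability."""
    n = P.shape[0]
    reach = np.eye(n, dtype=bool) | (P > 0)   # 0- or 1-step reachability
    for _ in range(n):                        # transitive closure by squaring
        reach = reach | ((reach.astype(int) @ reach.astype(int)) > 0)
    comm = reach & reach.T                    # i and j reach each other
    classes = []
    for i in range(n):
        members = frozenset(int(j) for j in np.flatnonzero(comm[i]))
        if members not in classes:
            classes.append(members)
    return classes

# Hypothetical 5-state matrix: {0, 1} form one class, {2, 3, 4} another.
P = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.3, 0.7],
              [0.0, 0.0, 0.6, 0.0, 0.4]])
print(communicating_classes(P))  # [frozenset({0, 1}), frozenset({2, 3, 4})]
```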

27 Recurrence properties (solve Ex 11.19) If all states in an MC communicate (i.e., all states are in the same class), then the chain is irreducible. Example 4 is not an irreducible MC. The gambler's example has 3 classes: {0}, {1, 2, 3}, and {4}. What about Example 3? Let f_i = the probability that the process will eventually return to state i, given that it starts in state i. If f_i = 1, state i is called recurrent. If f_i < 1, state i is called transient.

28 11.3.3 limiting probabilities

29 Transient vs. recurrent If a Markov chain has both transient and recurrent classes, the system will eventually enter and remain in one of the recurrent classes (Fig. 11.6(a)). Thus we can restrict attention to irreducible Markov chains. Suppose the system starts in a recurrent state i at time 0. Let T_i(k) be the time that elapses between the (k-1)-th and k-th returns to state i.

30 Fig 11.8: recurrence times T_i(k) The process returns to state i at times T_i(1), T_i(1)+T_i(2), T_i(1)+T_i(2)+T_i(3), …

31 Steady state prob. vs. recurrence time The proportion of time spent in state i after k returns to i is k / (T_i(1) + T_i(2) + … + T_i(k)). The T_i's form an iid sequence, since each recurrence time is independent of the previous ones. Because the state is recurrent, the process returns to state i an infinite number of times, and the law of large numbers implies that the long-run proportion of time in state i converges to π_i = 1 / E[T_i].

32 Recurrence times If E[T_i] < ∞, then state i is positive recurrent, which implies π_i = 1/E[T_i] > 0. If E[T_i] = ∞, then state i is null recurrent, which implies π_i = 0. But how can we obtain E[T_i]? Solve Ex 11.26. (Reminder: if f_i = 1, state i is recurrent; if f_i < 1, transient.)
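The relation π_i = 1/E[T_i] can be illustrated by simulation. A sketch using the silence/speech chain: state 0 has π_0 = 2/3, so its mean recurrence time should come out near 1/π_0 = 1.5 (the seed and step count are arbitrary choices):

```python
import random

# Simulate the silence/speech chain and estimate E[T_0], the mean
# recurrence time of state 0; it should approach 1/pi_0 = 1.5.
random.seed(1)
P = [[0.9, 0.1],    # from state 0: stay with 0.9, leave with 0.1
     [0.2, 0.8]]    # from state 1: go to 0 with 0.2, stay with 0.8

state, visits, steps = 0, 0, 0
for _ in range(200_000):
    state = 0 if random.random() < P[state][0] else 1
    steps += 1
    if state == 0:
        visits += 1          # one more completed return to state 0

# Total steps divided by number of visits to state 0 estimates E[T_0].
print(steps / visits)        # close to 1.5
```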

33 Question 11-2 Consider an MC with infinitely many states, where p_0,1 = 1 and, for every state i ≥ 1, the transition probabilities are as given in the diagram. Check whether state 0 is recurrent or transient. If recurrent, check whether it is positive or null recurrent.

34 Existence of Steady-State Probabilities A state is ergodic if it is aperiodic and positive recurrent. Once an MC enters an ergodic state, the process remains in that state's class forever. Moreover, the process visits all states in the class frequently enough that the long-term proportion of time spent in each state is non-zero. We mostly deal with MCs that have a single class consisting only of ergodic states; for such chains we can apply π = π P.

35 Regular vs. ergodic MC A regular MC has a P^n whose entries are all non-zero for some integer n. An ergodic MC has states that are all aperiodic and positive recurrent. In practice, the two notions almost coincide.

36 Not regular, but ergodic MC (see diagram)

