Markov Chains and Absorbing States


1 Markov Chains and Absorbing States
Nathan Hechtman
[Photo of Andrey Markov, captioned: "My beard is a Markov Chain"]

2 About Markov
Russian mathematician
Specialized in probability theory and stochastic processes
Helped prove and extend the central limit theorem

3 Transition Diagrams: Bayesian Probability Maps
Transition diagrams are conditional probability trees with one repeated process. That process is expressed as a network of conditional probabilities on the edges emerging from states (nodes). [Diagram: transition diagram; edge labels include 1.0]

4 Transition Diagrams as Matrices
The network corresponds to the matrix of transformation of the Markov chain. Entry (i, j) is the probability of going from node i to node j in a single step. [Diagram: transition diagram and its matrix; entries include 1, .2, .8, .1, .4, .5]
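As a sketch of the bookkeeping on this slide, a matrix of transformation can be stored as a NumPy array whose rows are current states and columns are next states. The 3-state matrix below is a made-up example (not the slides' diagram); the one property every valid transition matrix shares is that each row sums to 1.

```python
import numpy as np

# Hypothetical 3-state matrix of transformation (illustrative values only).
P = np.array([
    [0.2, 0.8, 0.0],
    [0.1, 0.4, 0.5],
    [0.0, 0.0, 1.0],
])

# Each row is a conditional probability distribution, so it must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# Entry (i, j): probability of going from node i to node j in one step.
p_12 = P[1, 2]
```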

5 Simple Markov Chains: initial state matrix, matrix of transformation, and powers
The final state matrix (the state matrix after n steps) is the initial state matrix [C0 C1 C2 C3 C4 C5 C6 C7] multiplied by the nth power of the matrix of transformation; because matrix powers can be computed by repeated squaring, this is very efficient. The matrix of transformation is always square. [Diagram: initial state matrix times matrix of transformation raised to the power n; matrix entries include 1, .2, .8, .1, .4, .5]
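The step described above, multiplying the initial state matrix by the nth power of the matrix of transformation, can be sketched with NumPy's `matrix_power`. The 2-state matrix below is the one that appears on slide 8; the initial state (all mass on state 0) is an assumption for illustration.

```python
import numpy as np
from numpy.linalg import matrix_power

# 2-state matrix of transformation (the example from slide 8).
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

v0 = np.array([1.0, 0.0])        # hypothetical initial state: start in state 0

n = 10
vn = v0 @ matrix_power(P, n)     # final state matrix after n steps

# The result is still a probability distribution over states.
assert np.isclose(vn.sum(), 1.0)
```

After only 10 steps this chain is already very close to its steady state [.75 .25], previewing slide 8.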

6 Ergodic/Irreducible Chains
Every node in the transition diagram leads to and from every other node with nonzero probability. It does so in a finite number of steps, but not necessarily one step. In this sense "ergodic" and "irreducible" are used interchangeably here. [Diagrams: Irreducible (ergodic); Irreducible (ergodic); Reducible (non-ergodic)]
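The reachability condition above can be checked mechanically: a chain on n states is irreducible iff the sum P + P² + … + Pⁿ has no zero entry, since entry (i, j) of Pᵏ is the probability of going from i to j in exactly k steps. A minimal sketch, with two made-up 2-state chains (not the slides' diagrams):

```python
import numpy as np

def is_irreducible(P):
    # Irreducible iff every node reaches every other node in at most n steps,
    # i.e. sum of P^1 .. P^n is strictly positive everywhere.
    n = P.shape[0]
    reach = np.zeros_like(P)
    Pk = np.eye(n)
    for _ in range(n):
        Pk = Pk @ P
        reach += Pk
    return bool(np.all(reach > 0))

# Hypothetical chain where each state reaches the other: irreducible.
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Hypothetical chain where state 1 never returns to state 0: reducible.
B = np.array([[0.5, 0.5],
              [0.0, 1.0]])
```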

7 Periodic Markov Chains
Periodic Markov chains return to each state only in cycles of length greater than one. Periodic chains can be irreducible, but they are not ergodic in the strict sense: ergodicity also requires aperiodicity. [Diagram: a two-state cycle; its matrix of transformation raised to the power 2n equals the identity]
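The cycling behavior can be seen in the simplest periodic chain, a two-state "flip" where each state moves to the other with probability 1. Its matrix of transformation raised to any even power is the identity, so the chain never settles to a steady state:

```python
import numpy as np
from numpy.linalg import matrix_power

# Two-state flip chain: period 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# P^(2n) = I for every n, so the chain returns to its start only on even steps.
even = matrix_power(P, 6)
odd = matrix_power(P, 7)
```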

8 Regular Markov Chains and Steady State
A chain is regular when its MOT raised to some power n has all positive entries. Regular chains converge to a steady-state matrix.
Finding the steady-state matrix (v = [C1 C2] is the state matrix, P is the MOT):
P = [.8 .2; .6 .4]
vP = v  ⟹  v(P − I) = 0
[C1 C2] [.8−1 .2; .6 .4−1] = [0 0]
[C1 C2] [−.2 .2; .6 −.6] = [0 0]
With the normalization C1 + C2 = 1, this gives v = [.75 .25].
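The derivation above, v(P − I) = 0 together with the normalization C1 + C2 = 1, is a small linear system; a sketch solving it for the slide's matrix:

```python
import numpy as np

# The slide's matrix of transformation.
P = np.array([[0.8, 0.2],
              [0.6, 0.4]])

# Stack the equations (P - I)^T v = 0 with the normalization sum(v) = 1,
# then solve the (overdetermined but consistent) system.
A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
v, *_ = np.linalg.lstsq(A, b, rcond=None)

# v is the steady state: it is unchanged by one more step of the chain.
assert np.allclose(v @ P, v)
```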

9 An Analogy for Absorbing States: Ford and the Bistro

10 Absorbing States An absorbing state is a node with no 'children' other than itself: its row in the matrix of transformation is all zeros except for a 1 on the diagonal. The chain is an absorbing Markov chain when every node has a pathway to at least one absorbing state. A chain with an absorbing state is necessarily reducible (not ergodic): nothing leads back out of the absorbing state. For example, S4 and S7. [Diagram: transition diagram and matrix; entries include 1, .2, .8, .1, .4, .5]
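The row test above (all zeros except a 1 on the diagonal) is easy to automate. The 4-state matrix below is hypothetical, with states 2 and 3 standing in for the slides' S4 and S7:

```python
import numpy as np

def absorbing_states(P):
    # A state is absorbing iff its diagonal entry is 1; since rows sum to 1
    # and entries are nonnegative, the rest of the row must then be zero.
    return [i for i in range(P.shape[0]) if P[i, i] == 1.0]

# Hypothetical 4-state absorbing chain (states 2 and 3 are absorbing).
P = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.4, 0.0, 0.0, 0.6],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
```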

11 The standard form of the transition matrix
An absorbing Markov chain can be expressed with a standard-form transition matrix. Absorbing states, like S4 and S7, are moved to the top and left. Recall: an absorbing state's row is all zeros except for a 1 on the diagonal. [Diagrams: transition matrix with rows S0–S7; standard-form transition matrix with rows reordered to S4, S7, S0, S1, S2, S3, S5, S6; entries include 1, .2, .8, .1, .4, .5]

12 The Standard Form continued
Four parts: I, 0, R, Q.
I (identity) and 0 (zero matrix): no chance of leaving absorbing states.
R: probabilities of entering absorbing states.
Q: probabilities of entering the other (pre-absorbing) states.
P^k asymptotically approaches P̄, the limiting matrix.
Are you absorbed, yet? [Diagram: standard-form transition matrix with rows S4, S7, S0, S1, S2, S3, S5, S6]
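The four-part layout can be sketched by slicing a standard-form matrix into blocks. The 4-state matrix below is a made-up gambler's-ruin-style chain (2 absorbing, 2 transient states), not the slides' 8-state example:

```python
import numpy as np

k = 2   # number of absorbing states (listed first in standard form)

# Hypothetical standard-form transition matrix.
P_std = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.5, 0.0, 0.0, 0.5],
    [0.0, 0.5, 0.5, 0.0],
])

I_blk = P_std[:k, :k]   # identity: absorbing states stay put
Z     = P_std[:k, k:]   # zero matrix: no chance of leaving absorbing states
R     = P_std[k:, :k]   # probabilities of entering absorbing states
Q     = P_std[k:, k:]   # probabilities among pre-absorbing states
```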

13 F, the fundamental matrix
F = (I − Q)⁻¹ is known as the fundamental matrix. [Worked example: Q taken from the standard-form matrix, then I − Q, then its inverse F; entries include 1, .2, .8, .32, .4, .5]
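Computing F is a single matrix inversion. A sketch using the same hypothetical 2-transient-state Q block as above (the slides' 6-transient-state Q is not recoverable from the transcript):

```python
import numpy as np

# Hypothetical transient-to-transient block Q.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

# The fundamental matrix F = (I - Q)^-1.
F = np.linalg.inv(np.eye(2) - Q)
```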

14 Property of F: expected time before absorption
F = (I − Q)⁻¹ gives the expected number of periods before entering an absorbing state (any absorbing state): the sum of row i of F is the expected number of steps before absorption when starting from transient state i. [Worked example: F with entries including 1, .2, .8, .32, .4, .5; row sums include 3.74, 2.74, 1, 1.9]
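The row-sum property can be checked on the same hypothetical chain. With Q = [0 .5; .5 0], a fair coin-flip walk between two transient states, symmetry suggests both starting states should take the same expected time to be absorbed:

```python
import numpy as np

# Hypothetical transient block, as on the previous slide.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
F = np.linalg.inv(np.eye(2) - Q)

# Row i of F sums to the expected number of steps before absorption
# when starting from transient state i.
expected_steps = F.sum(axis=1)
```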

15 P̄, the limiting matrix: finding FR
P̄ has the block form [I 0; FR 0]: identity and zero matrix on top, FR and a zero matrix below, with absorbing columns S4 and S7.
[Worked example: F multiplied by R gives FR with rows including [.28 .72] and [.1 .9], which fills in P̄]
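Assembling P̄ from the blocks is a matrix multiply plus a block stack. A sketch on the same hypothetical gambler's-ruin blocks (2 absorbing, 2 transient states), not the slides' 8-state chain:

```python
import numpy as np

# Hypothetical R (transient -> absorbing) and Q (transient -> transient).
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

F = np.linalg.inv(np.eye(2) - Q)
FR = F @ R   # entry (i, j): probability of being absorbed in state j from state i

# Limiting matrix P_bar = [[I, 0], [FR, 0]]: Q^k -> 0, so the transient
# block vanishes in the limit.
P_bar = np.block([[np.eye(2), np.zeros((2, 2))],
                  [FR,        np.zeros((2, 2))]])

# Absorption is certain: each row of FR sums to 1.
assert np.allclose(FR.sum(axis=1), 1.0)
```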

16 Interpreting P̄ Entry (i, j) is the probability of going from state i to state j after an infinite number of steps. Starting in state S0, there is a 72% chance of ending up in S7. Starting in state S2, there is a 100% chance of ending up in S4. There is no chance of ending up in a non-absorbing state. [Diagram: P̄ with rows S0, S1, S2, S3, S5, S6 and columns S4, S7; entries include 1, .28, .72, .1, .9]

17 Sources
MDPs: https://www.youtube.com/watch?v=i0o-ui1N35U
Feller, William. An Introduction to Probability Theory and Its Applications. Tokyo: C. E. Tuttle. Print.
Anderson, David. "Markov Chains." Interactive Markov Chains, Lecture Notes in Computer Science (2002). Web.
Wilde, Joshua. "Linear Algebra III: Eigenvalues and Markov Chains." Eigenvalues, Eigenvectors, and Diagonalizability (2002). Web.
"Andrey Andreyevich Markov | Russian Mathematician." Encyclopædia Britannica Online. Encyclopædia Britannica, n.d. Web. 24 Nov.

18 In Soviet Russia, questions ask you!

