Markov Chains and Absorbing States


Markov Chains and Absorbing States — Nathan Hechtman [Image: Andrey Markov, 1856-1922, captioned "My beard is a Markov Chain"]

About Markov: Russian mathematician. Extended the central limit theorem to dependent random variables. Specialized in stochastic processes and probability.

Transition Diagrams: Bayesian Probability Maps. Transition diagrams are conditional probability trees with one repeated process. That process is expressed as a network of conditional probabilities on edges leaving each state (node); the probabilities leaving any node sum to 1. [Diagram: transition network with labeled edge probabilities]

Transition Diagrams as Matrices. The network corresponds to the matrix of transformation of the Markov chain. Entry (i, j) is the probability of going from node i to node j in a single step. [Diagram: transition network with edge probabilities 1, .2, .8, .1, .4, .5, 1.0, 1.0 and its corresponding matrix]
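The correspondence can be sketched in code. The slide's full 8-state matrix is not recoverable from the transcript, so this uses a small hypothetical 3-state stand-in with the same row convention:

```python
import numpy as np

# Hypothetical 3-state transition matrix (a stand-in; the slide's own
# 8-state example is not fully recoverable from the transcript).
P = np.array([
    [0.0, 0.2, 0.8],
    [0.1, 0.4, 0.5],
    [0.0, 0.0, 1.0],  # state 2 returns to itself with probability 1
])

# Entry (i, j) is the probability of moving from state i to state j
# in a single step, so every row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
print(P[0, 2])  # probability of going from state 0 to state 2 in one step
```

Rows as "from" and columns as "to" matches the slide's (i, j) convention; some texts use the transpose.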

Simple Markov Chain: initial state matrix, matrix of transformation, and powers. The final state matrix (the distribution after n steps) is the initial state matrix [C0 C1 C2 C3 C4 C5 C6 C7] multiplied by the matrix of transformation raised to the nth power, which can be computed very efficiently. The matrix of transformation is always square.
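The n-step computation above can be sketched as follows, again using a hypothetical 3-state matrix rather than the slide's 8-state one:

```python
import numpy as np

# Hypothetical matrix of transformation (stand-in for the slide's example).
P = np.array([
    [0.0, 0.2, 0.8],
    [0.1, 0.4, 0.5],
    [0.0, 0.0, 1.0],
])

v0 = np.array([1.0, 0.0, 0.0])          # initial state matrix: start in state 0
n = 10
vn = v0 @ np.linalg.matrix_power(P, n)  # final state matrix after n steps
print(vn)
```

`matrix_power` uses repeated squaring, which is why raising the matrix to a power is efficient even for large n.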

Ergodic/Irreducible Chains. Every node in the transition diagram leads to and from every other node with nonzero probability. It does so in some finite number of steps, but not necessarily one step. Every irreducible chain on a finite state space has a unique stationary distribution. [Diagrams: two irreducible (ergodic) chains and one reducible (non-ergodic) chain]
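Irreducibility can be checked mechanically: every state reaches every other state iff (I + P)^(n-1) has all positive entries. A minimal sketch with two hypothetical chains:

```python
import numpy as np

def is_irreducible(P):
    """True iff every state can reach every other state.
    Equivalent test: (I + P)^(n-1) has all strictly positive entries."""
    n = P.shape[0]
    reach = np.linalg.matrix_power(np.eye(n) + P, n - 1)
    return bool((reach > 0).all())

flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])       # irreducible: the two states always swap
trap = np.array([[0.5, 0.5],
                 [0.0, 1.0]])       # reducible: state 1 can never reach state 0

print(is_irreducible(flip), is_irreducible(trap))
```

Adding I before taking powers lets paths of any length up to n-1 count, so a single matrix power covers all step counts.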

Periodic Markov Chains. Periodic Markov chains return to each state only in cycles of length greater than one. Periodic chains are a special case of irreducible chains. For a two-state chain that always swaps states, P^(2n) = I. [Diagram: two states connected by probability-1 edges in both directions]
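The P^(2n) = I identity on the slide is easy to verify for the two-state swap chain:

```python
import numpy as np

# A chain of period 2: each state moves to the other with probability 1.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Even powers return to the identity; odd powers equal P itself.
# Because of this oscillation, the powers of P never converge to a
# single steady-state matrix, even though the chain is irreducible.
assert np.allclose(np.linalg.matrix_power(P, 2), np.eye(2))
assert np.allclose(np.linalg.matrix_power(P, 7), P)
```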

Regular Markov Chains and Steady State. A chain is regular if the MOT raised to some power n has all positive entries; regular chains converge to a steady-state matrix. Finding the steady state (v = [C1 C2] is the steady-state row vector, P is the MOT): vP = v, so v(P - I) = 0 (equivalently v(I - P) = 0). For P = [[.8, .2], [.6, .4]]: [C1 C2][[.8-1, .2], [.6, .4-1]] = [0 0], i.e. [C1 C2][[-.2, .2], [.6, -.6]] = [0 0], which together with C1 + C2 = 1 gives v = [.75 .25].
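The slide's steady-state system can be solved numerically by stacking the normalization condition C1 + C2 = 1 onto v(P - I) = 0:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.6, 0.4]])  # the slide's 2x2 matrix of transformation

# Solve v P = v with v summing to 1: transpose v(P - I) = 0 into
# column form and append a row of ones for the normalization.
n = P.shape[0]
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
v, *_ = np.linalg.lstsq(A, b, rcond=None)
print(v)  # → [0.75 0.25], matching the slide
```

The least-squares solver handles the overdetermined (3 equations, 2 unknowns) but consistent system cleanly.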

An Analogy for Absorbing States: Ford and the Bistro

Absorbing States. An absorbing state is a node with no 'children': once entered, it is never left. In the matrix of transformation, an absorbing state appears as a row of all zeros except for a 1 on the diagonal. A chain is an absorbing chain iff it has at least one absorbing state and every node has a pathway to at least one absorbing state (so an absorbing chain is necessarily reducible). In the running example, S4 and S7 are absorbing. [Diagram: 8-state transition network with edge probabilities 1, .2, .8, .1, .4, .5, 1.0, 1.0]

The Standard Form of the Transition Matrix. An absorbing Markov chain can be expressed with a standard-form transition matrix: the absorbing states, here S4 and S7, are moved to the top rows and leftmost columns. Recall: an absorbing state's row is all zeros except for the 1 on its diagonal. [Matrices: the transition matrix over S0...S7 and its standard form with S4, S7 first]

The Standard Form, Continued. The standard form has four blocks: I, 0, R, Q. I and 0: no chance of leaving absorbing states. R: probabilities of transient states entering absorbing states. Q: probabilities of transient states entering other transient (pre-absorbing) states. P^k asymptotically approaches P̄, the limiting matrix. Are you absorbed yet? [Matrix: standard-form transition matrix over S4, S7, S0, S1, S2, S3, S5, S6]

F, the Fundamental Matrix. F = (I - Q)^-1 is known as the fundamental matrix, where Q is the transient-to-transient block of the standard form. [Matrices: Q, I - Q, and F = (I - Q)^-1 for the running example]

Property of F: Expected Time Before Absorption. Row i of F = (I - Q)^-1 sums to the expected number of periods before the chain, started in transient state i, enters an absorbing state (any absorbing state). For the running example, the row sums are 3.74, 2.74, 1, and 1.9.
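The fundamental matrix and its row sums can be sketched directly. The slide's 6x6 Q block is not recoverable from the transcript, so this uses a hypothetical standard-form matrix with one absorbing and two transient states:

```python
import numpy as np

# Hypothetical standard-form matrix: absorbing state first, then two
# transient states (a stand-in for the slide's 8-state example).
P = np.array([[1.0, 0.0, 0.0],
              [0.3, 0.5, 0.2],
              [0.4, 0.1, 0.5]])

Q = P[1:, 1:]                       # transient -> transient block
F = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix F = (I - Q)^-1

# Row i of F sums to the expected number of steps before absorption
# when the chain starts in transient state i.
expected_steps = F.sum(axis=1)
print(expected_steps)
```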

P̄, the Limiting Matrix. P̄ is found by computing FR and placing it in the standard-form layout: the I and 0 blocks stay in place, FR replaces R, and a zero block replaces Q. [Matrices: F, R, FR, and P̄ for the running example, with FR = [[.28, .72], [.1, .9], ...]]
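Computing FR is a one-liner once the blocks are sliced out. Again the slide's own matrix is not recoverable, so this uses a hypothetical standard form with two absorbing and two transient states:

```python
import numpy as np

# Hypothetical standard-form matrix: absorbing states first (rows 0-1),
# then transient states (rows 2-3). A stand-in for the slide's example.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.2, 0.1, 0.4, 0.3],
              [0.3, 0.2, 0.1, 0.4]])

R = P[2:, :2]                     # transient -> absorbing block
Q = P[2:, 2:]                     # transient -> transient block
F = np.linalg.inv(np.eye(2) - Q)
FR = F @ R                        # long-run absorption probabilities

# Each row of FR sums to 1: starting from any transient state,
# absorption in *some* absorbing state is certain.
print(FR)
```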

Interpreting P̄. Entry (i, j) of P̄ is the probability of ending in state j having started in state i, in the limit of infinitely many steps. Starting in state S0, there is a 72% chance of ending up in S7. Starting in state S2, there is a 100% chance of ending up in S4. There is no chance of ending up in a non-absorbing state. [Matrix: P̄ over S4, S7, S0, S1, S2, S3, S5, S6]

Sources
MDPs: https://www.youtube.com/watch?v=i0o-ui1N35U and https://www.youtube.com/watch?v=uvYTGEZQTEs
Feller, William. An Introduction to Probability Theory and Its Applications. Tokyo: C.E. Tuttle, 1957. 338-51. Print.
Anderson, David. "Markov Chains." Interactive Markov Chains, Lecture Notes in Computer Science (2002): 35-55. Web.
Wilde, Joshua. "Linear Algebra III: Eigenvalues and Markov Chains." Eigenvalues, Eigenvectors, and Diagonalizability. Web.
http://www.avcsl.com/large-yellow-jumbo-sponge-bone-shape.html
http://www.ssc.wisc.edu/~jmontgom/absorbingchains.pdf
"Andrey Andreyevich Markov | Russian Mathematician." Encyclopedia Britannica Online. Encyclopedia Britannica, n.d. Web. 24 Nov. 2015.

In Soviet Russia, questions ask you!