1 Markov Chains (covered in Sections 1.1, 1.6, 6.3, and 9.4)

2 Markov Chains Mathematical models for processes that evolve over time in a probabilistic manner are called stochastic processes. A special kind of stochastic process is the Markov chain. It is characterized by the property that probabilities involving how the process will evolve in the future depend only on the present state of the process, and so are independent of events in the past. Markov chains are used to analyze trends and predict the future (weather, stock market, genetics, product success, etc.).

3 Markov Chains A finite Markov chain consists of
- a set of objects,
- a set of consecutive time periods, and
- a finite set of different states, such that
(i) during any given time period, each object is in exactly one state (although different objects can be in different states);
(ii) the probability that an object moves from one state to another (or remains in the same state) over a time period depends only on the beginning and ending states.

4 Markov Chains A Markov chain can be represented by a matrix $P = [p_{ij}]$, where $p_{ij}$ is the probability of an object moving from state $i$ to state $j$ in one time period. Such a matrix is called a transition matrix.
- The transition matrix is square (by the nature of the Markov process).
- The probabilities in each row must sum to one.
A Markov chain can also be represented by a graph; an example follows on the next slide.
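As a minimal illustration, here is a NumPy sketch of representing a transition matrix and checking these two properties; the matrix values below are placeholders, not taken from the slides:

```python
import numpy as np

# Hypothetical 3-state transition matrix: entry [i, j] is the
# probability of moving from state i to state j in one time period.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# A transition matrix must be square...
n_rows, n_cols = P.shape
assert n_rows == n_cols, "transition matrix must be square"

# ...and each row of probabilities must sum to one.
assert np.allclose(P.sum(axis=1), 1.0), "each row must sum to 1"
```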

5 Markov Chains: Coke vs. Pepsi Example Given that a person's last cola purchase was Coke, there is a 90% chance that his next cola purchase will also be Coke. If a person's last cola purchase was Pepsi, there is an 80% chance that his next cola purchase will also be Pepsi. With the states ordered (Coke, Pepsi), the transition matrix is
$$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{pmatrix}.$$
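A short sketch (assuming the state ordering Coke = 0, Pepsi = 1) of encoding this chain and simulating a sequence of purchases from it; note that each draw depends only on the current state:

```python
import numpy as np

states = ["Coke", "Pepsi"]
P = np.array([[0.9, 0.1],   # last purchase Coke:  90% Coke, 10% Pepsi
              [0.2, 0.8]])  # last purchase Pepsi: 20% Coke, 80% Pepsi

rng = np.random.default_rng(0)
state = 0  # start as a Coke purchaser
purchases = []
for _ in range(10):
    # The next state depends only on the current state (Markov property).
    state = rng.choice(2, p=P[state])
    purchases.append(states[state])
print(purchases)
```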

6 Powers of Transition Matrices: Coke vs. Pepsi Example (cont.) Given that a person is currently a Pepsi purchaser, what is the probability that he will purchase Coke two purchases from now?
$$\Pr[\text{Pepsi} \to ? \to \text{Coke}] = \Pr[\text{Pepsi} \to \text{Coke} \to \text{Coke}] + \Pr[\text{Pepsi} \to \text{Pepsi} \to \text{Coke}] = 0.2 \cdot 0.9 + 0.8 \cdot 0.2 = 0.34$$
$P^2$ is the transition matrix after two time periods.
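The same answer can be read off the (Pepsi, Coke) entry of $P^2$; a quick check, reusing the matrix above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Square the transition matrix to get two-step transition probabilities.
P2 = np.linalg.matrix_power(P, 2)
print(P2[1, 0])  # Pepsi -> Coke in two steps: 0.34
```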

7 Powers of Transition Matrices: Coke vs. Pepsi Example (cont.) Given that a person is currently a Coke purchaser, what is the probability that he will purchase Pepsi three purchases from now? This is the (Coke, Pepsi) entry of $P^3$:
$$P^3 = \begin{pmatrix} 0.781 & 0.219 \\ 0.438 & 0.562 \end{pmatrix},$$
so the probability is $0.219$.
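Again this is just a matrix power; a sketch with the same matrix as above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Cube the transition matrix to get three-step transition probabilities.
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 1])  # Coke -> Pepsi in three steps: 0.219
```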

8 Distribution Row Vector A distribution row vector $d$ for an $N$-state Markov chain is an $N$-dimensional row vector having as its components, one for each state, the probabilities that an object in the system is in each of the respective states. Example (cont.): Suppose 60% of all people now drink Coke, and 40% drink Pepsi. Then the distribution vector is $d = (0.6, 0.4)$. Let $d^{(k)}$ denote the distribution vector for a Markov chain after $k$ time periods; thus $d^{(0)}$ represents the initial distribution. Then
$$d^{(k)} = d^{(0)} P^k.$$

9 Distribution Row Vector Example (cont.): Suppose 60% of all people now drink Coke, and 40% drink Pepsi. What fraction of people will be drinking Coke three weeks from now? Here $d^{(0)} = (0.6, 0.4)$ and $d^{(3)} = d^{(0)} P^3$, so
$$\Pr[X_3 = \text{Coke}] = 0.6 \cdot 0.781 + 0.4 \cdot 0.438 = 0.6438.$$
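The same computation as a one-liner (sketch, with the matrix and state ordering assumed above):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

d0 = np.array([0.6, 0.4])               # initial distribution (Coke, Pepsi)
d3 = d0 @ np.linalg.matrix_power(P, 3)  # d^(3) = d^(0) P^3
print(d3[0])  # Pr[X_3 = Coke] = 0.6438
```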

10 Regular Markov Chains A Markov chain is regular if some power of the transition matrix contains only positive elements. If the matrix itself contains only positive elements, then the power is one and the matrix is automatically regular. Transition matrices that are regular always have an eigenvalue of unity. They also have a limiting distribution vector $x^{(\infty)}$, where the $i$th component of $x^{(\infty)}$ represents the probability that an object is in state $i$ after a large number of time periods have elapsed.
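A brute-force sketch of the regularity test, checking successive powers of $P$ for strict positivity; the cutoff of 100 powers is an arbitrary choice for illustration:

```python
import numpy as np

def is_regular(P, max_power=100):
    """Return True if some power of P up to max_power is strictly positive."""
    Q = P.copy()
    for _ in range(max_power):
        if (Q > 0).all():  # every entry of this power is positive
            return True
        Q = Q @ P          # move on to the next power of P
    return False

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(is_regular(P))  # True: P itself is already strictly positive
```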

11 Limiting Distribution: Coke vs. Pepsi Example (cont.)
[Figure: $\Pr[X_i = \text{Coke}]$ plotted against week $i$; starting from either the Coke or the Pepsi state, the probability converges to the stationary value $2/3$.]
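A sketch reproducing this convergence numerically: iterate $d^{(k+1)} = d^{(k)} P$ from each pure starting state and observe that the Coke probability approaches $2/3$ either way:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

for start, d in [("Coke",  np.array([1.0, 0.0])),
                 ("Pepsi", np.array([0.0, 1.0]))]:
    for week in range(25):
        d = d @ P  # advance the distribution by one week
    print(start, "->", round(d[0], 4))  # both approach 2/3 ~ 0.6667
```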

12 Regular Markov Chains Definition: A nonzero vector $x$ is a left eigenvector of a matrix $A$ if there exists a scalar $\lambda$ such that $xA = \lambda x$. (Left and right eigenvectors are usually different; they coincide only for special types of matrices.) The limiting distribution $x^{(\infty)}$ is a left eigenvector of the transition matrix corresponding to the eigenvalue of unity, with the sum of its components equal to one. Examples on the board.
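A sketch of finding this eigenvector numerically: the left eigenvectors of $P$ are the right eigenvectors of $P^T$, so applying np.linalg.eig to the transpose does the job:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

eigvals, eigvecs = np.linalg.eig(P.T)    # right eigenvectors of P^T = left eigenvectors of P
idx = np.argmin(np.abs(eigvals - 1.0))   # locate the eigenvalue of unity
x = np.real(eigvecs[:, idx])
x = x / x.sum()                          # scale components to sum to one
print(x)  # [0.6667, 0.3333], i.e., the limiting distribution (2/3, 1/3)
```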