Presentation transcript:

11. Markov Chains (MCs)
Courtesy of J. Bard, L. Page, and J. Heyl

11.2.1 n-step transition probabilities (review)

Transition prob. matrix
The n-step transition prob. from state i to j is
  p_ij(n) = P[X_{m+n} = j | X_m = i]
The n-step transition matrix (for all states) is then
  P(n) = P^n
For instance, the two-step transition matrix is
  P(2) = P·P = P^2
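As a quick illustration (not from the original slides), the n-step matrix can be computed by repeated matrix multiplication. The sketch below assumes numpy and uses the silence/speech transition matrix that appears later in this section.

```python
import numpy as np

# One-step transition matrix (rows sum to 1); values taken from the
# silence/speech example used later in this section.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# n-step transition matrix: P(n) = P^n
P2 = np.linalg.matrix_power(P, 2)   # two-step transition matrix
print(P2)   # [[0.83 0.17]
            #  [0.34 0.66]]
```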

Chapman-Kolmogorov equations
The prob. of going from state i at t=0, passing through state k at t=m, and ending at state j at t=m+n is
  p_ij(m+n) = Σ_k p_ik(m) p_kj(n)
In matrix notation,
  P(m+n) = P(m) P(n)
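A small numerical check (a sketch, not part of the slides): the Chapman-Kolmogorov identity P(m+n) = P(m)P(n) holds for any split of the steps.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Chapman-Kolmogorov: P(m+n) = P(m) P(n), here with m = 2, n = 3
lhs = np.linalg.matrix_power(P, 5)
rhs = np.linalg.matrix_power(P, 2) @ np.linalg.matrix_power(P, 3)
assert np.allclose(lhs, rhs)
```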

11.2.2 state probabilities

State probability (pmf of an RV!)
Let p(n) = {p_j(n)}, for all j ∈ E, be the row vector of state probs. at time n (i.e., the state prob. vector)
Thus, p(n) is given by
  p_j(n) = Σ_i p_i(n-1) p_ij
From the initial state,
  p_j(n) = Σ_i p_i(0) p_ij(n)
In matrix notation,
  p(n) = p(n-1) P = p(0) P^n
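A minimal sketch (assuming the same two-state matrix as above) of how the state pmf evolves by right-multiplying the row vector p(n-1) by P:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

p = np.array([0.0, 1.0])     # p(0): start in state 1 (speech) with prob. 1
for n in range(1, 5):
    p = p @ P                # p(n) = p(n-1) P
    print(n, p)

# Equivalently, p(n) = p(0) P^n
print(np.array([0.0, 1.0]) @ np.linalg.matrix_power(P, 4))
```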

How an MC changes (Ex 11.10, 11.11)
A two-state system: Silence (state 0) and Speech (state 1), with transition probabilities
  P(silence→silence) = 0.9,  P(silence→speech) = 0.1
  P(speech→silence)  = 0.2,  P(speech→speech)  = 0.8

Suppose p(0) = (0,1):                      Suppose p(0) = (1,0):
  p(1)  = (0,1)P    = (0.2, 0.8)             p(1)  = (1,0)P    = (0.9, 0.1)
  p(2)  = (0,1)P^2  = (0.34, 0.66)           p(2)  = (1,0)P^2  = (0.83, 0.17)
  p(4)  = (0,1)P^4  = (0.507, 0.493)         p(4)  = (1,0)P^4  = (0.747, 0.253)
  p(8)  = (0,1)P^8  = (0.629, 0.371)         p(8)  = (1,0)P^8  = (0.686, 0.314)
  p(16) = (0,1)P^16 = (0.665, 0.335)         p(16) = (1,0)P^16 = (0.668, 0.332)
  p(32) = (0,1)P^32 = (0.667, 0.333)         p(32) = (1,0)P^32 = (0.667, 0.333)
  p(64) = (0,1)P^64 = (0.667, 0.333)         p(64) = (1,0)P^64 = (0.667, 0.333)
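The table above can be reproduced with a few lines of numpy (a sketch using the transition matrix from the slide):

```python
import numpy as np

P = np.array([[0.9, 0.1],     # from Silence (state 0)
              [0.2, 0.8]])    # from Speech  (state 1)

for p0 in (np.array([0.0, 1.0]), np.array([1.0, 0.0])):
    print("p(0) =", p0)
    for n in (1, 2, 4, 8, 16, 32, 64):
        pn = p0 @ np.linalg.matrix_power(P, n)
        print(f"  p({n}) = {np.round(pn, 3)}")
```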

Independence of initial condition

The lesson to take away
No matter what assumptions you make about the initial probability distribution, after a large number of steps the state probability distribution is approximately (2/3, 1/3). See pp. 666-667.

11.2.3 steady state probabilities

State probabilities (pmf) converge
As n → ∞, the transition prob. matrix P^n approaches a matrix whose rows are all equal to the same pmf. In matrix notation,
  lim_{n→∞} P^n = 1π
where 1 is a column vector of all 1's and π = (π_0, π_1, …)
The convergence of P^n implies the convergence of the state pmf's:
  lim_{n→∞} p(n) = π
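To see the convergence of P^n itself (a sketch, same matrix as above): for large n every row of P^n is approximately the same pmf.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

print(np.linalg.matrix_power(P, 64))
# Both rows are approximately (0.667, 0.333), i.e. the limiting pmf pi.
```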

Steady state probability
The system reaches "equilibrium" or "steady state", i.e., as n → ∞, p_j(n) → π_j and p_i(n-1) → π_i
In matrix notation,
  π = π P
where π is the stationary state pmf of the Markov chain
To solve this, use π = πP together with the normalization Σ_j π_j = 1

Speech activity system
From the steady state probabilities π = πP with
  P = [0.9 0.1; 0.2 0.8]
we have
  (π_1, π_2) = (π_1, π_2) P
  π_1 = 0.9π_1 + 0.2π_2
  π_2 = 0.1π_1 + 0.8π_2
  π_1 + π_2 = 1
which gives
  π_1 = 2/3 = 0.667
  π_2 = 1/3 = 0.333
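The same steady-state equations can also be solved numerically. Below is a minimal sketch (not from the slides) that stacks the balance equations π(P - I) = 0 with the normalization Σ_j π_j = 1 and solves the overdetermined but consistent system by least squares.

```python
import numpy as np

def stationary_pmf(P):
    """Solve pi = pi P together with sum(pi) = 1 for a finite-state chain."""
    n = P.shape[0]
    # Balance equations (P^T - I) pi^T = 0, plus the normalization sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(stationary_pmf(P))   # approximately [0.667, 0.333]
```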

Question 11-1: Alice, Bob, and Carol are playing Frisbee. Alice always throws to Carol. Bob always throws to Alice. Carol throws to Bob 2/3 of the time and to Alice 1/3 of the time. In the long run, what percentage of the time does each player have the Frisbee?
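Not part of the original slides, but as a hint on how one might check an answer numerically: treat (Alice, Bob, Carol) as the states of a Markov chain, encode the throw probabilities as a transition matrix, and reuse the stationary_pmf solver sketched above.

```python
import numpy as np

# States: 0 = Alice, 1 = Bob, 2 = Carol
P = np.array([
    [0.0, 0.0, 1.0],      # Alice always throws to Carol
    [1.0, 0.0, 0.0],      # Bob always throws to Alice
    [1/3, 2/3, 0.0],      # Carol: 1/3 to Alice, 2/3 to Bob
])

print(stationary_pmf(P))  # long-run fraction of time each player holds the Frisbee
```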