IERG5300 Tutorial 1 Discrete-time Markov Chain


IERG5300 Tutorial 1: Discrete-time Markov Chain. Peter Chen Peng. Adapted from Qiwen Wang's Tutorial Materials.

Outline
Course Information
Discrete-time Markov Chain: N-step transition probability, Chapman-Kolmogorov Equations, Limiting probabilities
Miscellaneous Materials
Summary, Q&A

Course Information
Lecture notes can be found at https://elearn.cuhk.edu.hk/webapps/portal/frameset.jsp or https://course.ie.cuhk.edu.hk/~ierg5300/
Grading: 10% homework, 30% mid-term exam, 60% final exam. Homework is coming soon.

Discrete-time Markov Chain - Definition
A sequence of discrete r.v. X0, X1, X2, ... forms a Markov chain if, at any discrete time t, the state of Xt+1 depends only on the state of Xt, i.e.
P(Xt+1 = j | Xt = i, Xt−1 = it−1, …, X0 = i0) = P(Xt+1 = j | Xt = i) = Pij.
Pij is the 1-step transition probability and it is time-invariant: Pij describes how likely state j will occur tomorrow if today's state is i.
∏ is the 1-step transition matrix comprising all Pij, with Pij ≥ 0 for all i, j and Σj Pij = 1 for all i.
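
A minimal sketch (my own illustration, not part of the original slides) of how such a matrix can be stored and validated in code; the 3-state matrix P below is a made-up example:

```python
import numpy as np

# Hypothetical 3-state transition matrix: entry [i, j] = P_ij.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])

# A valid 1-step transition matrix has nonnegative entries
# and every row summing to 1.
assert np.all(P >= 0)
assert np.allclose(P.sum(axis=1), 1.0)
```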

Markov Chain - Example
An individual possesses r umbrellas which he brings from his home to his office, and vice versa. If he is at home (the office) at the beginning (end) of a day and it is raining, then he will take an umbrella from home (the office) to the office (home), provided there is one to be taken. If it is not raining, then he never takes an umbrella. Assume that, independent of the past, it rains at the beginning (end) of a day with probability p.
Q: Define a Markov chain with r+1 states.
A: Let Xt be the number of umbrellas at home at the beginning of day t; then Xt ∈ {0, 1, …, r}.
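
A sketch (my own working, not taken from the slides) of how the transition probabilities of this chain could be written down in code. It assumes the reading of the example above: an umbrella moves home to office only if it rains in the morning and there is one at home, and moves office to home only if it rains in the evening and there is one at the office.

```python
import numpy as np

def umbrella_chain(r, p):
    """Transition matrix for X_t = number of umbrellas at home at the
    beginning of day t, states 0..r (a sketch, not verified against the
    lecture's own solution)."""
    P = np.zeros((r + 1, r + 1))
    for i in range(r + 1):
        for rain_am in (True, False):
            for rain_pm in (True, False):
                prob = (p if rain_am else 1 - p) * (p if rain_pm else 1 - p)
                home, office = i, r - i
                if rain_am and home > 0:        # carry one umbrella to the office
                    home, office = home - 1, office + 1
                if rain_pm and office > 0:      # carry one umbrella back home
                    home, office = home + 1, office - 1
                P[i, home] += prob
    return P

P = umbrella_chain(r=3, p=0.4)
assert np.allclose(P.sum(axis=1), 1.0)
```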

N-step transition probability
A natural question: given that the Markov chain is currently in state i, what is the chance of being in state j after n time periods?
n-step transition probability: Pij^(n) = P(Xt+n = j | Xt = i).
Correspondingly, P^(n) is the n-step transition matrix comprising all Pij^(n).

Chapman-Kolmogorov Equations
P^(n) = ∏^n. To compute Pij^(m+n), we can condition on the state at the intermediate time period t+m:
Pij^(m+n) = Σk Pik^(m) Pkj^(n), i.e. P^(m+n) = P^(m) P^(n).
At any time period t, Xt is a r.v. with distribution Vt = { …, P(Xt = 0), P(Xt = 1), P(Xt = 2), … }; then for n ≥ 0, Vt+n = Vt ∏^n.
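
A small sketch (my own illustration, reusing the hypothetical 3-state matrix from above) of the two identities on this slide: the n-step matrix is the n-th matrix power, and a distribution is propagated by right-multiplying with that power.

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])

n = 5
P_n = np.linalg.matrix_power(P, n)   # n-step transition matrix, P^(n) = P^n

v0 = np.array([1.0, 0.0, 0.0])       # start in state 0 with probability 1
v_n = v0 @ P_n                       # distribution after n steps: V_n = V_0 P^n

# Chapman-Kolmogorov: P^(m+n) = P^(m) P^(n)
m = 2
assert np.allclose(np.linalg.matrix_power(P, m + n),
                   np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n))
```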

C-K Equations - Example
Suppose that coins 1 and 2 have probability 0.7 and 0.6 of coming up heads, respectively. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow; if it comes up tails, then we select coin 2 to flip tomorrow. If the coin flipped on the first day is equally likely to be coin 1 or coin 2, what is the probability that the coin flipped on the third day is coin 1?
Define the state on a day to be the label of the coin flipped on that day. Thus P11 = 0.7, P12 = 0.3, P21 = 0.6, P22 = 0.4, and the first day has distribution V0 = (1/2, 1/2). We want the first component of V2 = V0 ∏².
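
A quick numeric check of this example (my own sketch, following the state and distribution conventions above); it gives approximately 0.665 for the probability that coin 1 is flipped on the third day.

```python
import numpy as np

P = np.array([[0.7, 0.3],    # today coin 1: heads -> coin 1, tails -> coin 2
              [0.6, 0.4]])   # today coin 2: heads -> coin 1, tails -> coin 2
v0 = np.array([0.5, 0.5])    # first day: coin 1 or coin 2 with equal probability

v2 = v0 @ np.linalg.matrix_power(P, 2)   # distribution on the third day
print(v2[0])                             # P(coin 1 on day 3) = 0.665
```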

Limiting Probabilities
Summary of theorems for finite-state Markov chains; each property below implies the one after it, but the reverse is not true.
Ergodicity // all entries in the matrix ∏^t are nonzero for some t ≥ 1.
The limiting distribution V∞ exists. // regardless of the initial distribution V0.
Eigenvalue 1 of the transition matrix with 1-dim eigenspace. // rank of the matrix ∏ − I is full minus 1 (dim of eigenspace = 1). // Long-run distr. = unique normalized eigenvector with eigenvalue 1.
Eigenvalue 1 of the transition matrix. // the matrix ∏ − I is singular: det(∏ − I) = 0.

Limiting Probabilities
Consider a Markov chain with discrete r.v. X0, X1, X2, … and transition matrix ∏. At any time t ≥ 0, Xt is a r.v. with distribution Vt = { …, P(Xt = 0), P(Xt = 1), P(Xt = 2), … }. Then Vt+n = Vt ∏^n for n ≥ 0. // Chapman-Kolmogorov Equation
If the limit V∞ = lim n→∞ Vn exists, V∞ = (π0, π1, π2, …) is referred to as the limiting distribution / stationary state of the Markov chain, and the πi are the limiting probabilities. To calculate the limiting probabilities, we need the two equations: V = V∏ and Σi πi = 1.
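
A sketch (my own, not from the slides) of solving V = V∏ together with the normalization Σi πi = 1 as a single linear system, again using the hypothetical 3-state matrix from earlier:

```python
import numpy as np

P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])
n = P.shape[0]

# V = V P  <=>  V (P - I) = 0; stack the normalization sum(V) = 1 on top
# and solve the resulting over-determined system in the least-squares sense.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # limiting probabilities; every row of P^n converges to this vector
```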

Limiting Probabilities - Ergodicity
Definition. A finite-state Markov chain is ergodic if all entries in the matrix ∏^t are nonzero for some t ≥ 1. // For some t, you can go from anywhere to anywhere in exactly t steps.
Theorem. If a finite-state Markov chain is ergodic, then the stationary state exists. // Ergodicity is a sufficient condition but not a necessary condition.
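
A brute-force way (my own sketch) to test this definition numerically: raise the matrix to successive powers and stop as soon as every entry is strictly positive. The cutoff (n − 1)² + 1 used below is an assumed sufficient bound for an n-state chain.

```python
import numpy as np

def is_ergodic(P, max_power=None):
    """Return True if some power of P has all entries strictly positive."""
    n = P.shape[0]
    if max_power is None:
        max_power = (n - 1) ** 2 + 1   # assumed sufficient cutoff for n states
    Q = np.eye(n)
    for _ in range(max_power):
        Q = Q @ P
        if np.all(Q > 0):
            return True
    return False

print(is_ergodic(np.array([[0.0, 1.0], [1.0, 0.0]])))   # False: periodic chain
print(is_ergodic(np.array([[0.5, 0.5], [1.0, 0.0]])))   # True
```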

Limiting Probabilities - Remarks V is regardless of V0, the initial distribution of the Markov chain. V is equal to ANY row in V may also exist if the Markov chain is not ergodic, e.g. V may not exist but satisfy V=V∏. V can NOT be regarded as a stationary state, e.g. ∏n 1 0.2 0.8 1

Limiting Probabilities - Remarks (cont'd)
As long as a distribution V satisfies V = V∏, its component πj can be interpreted as the long-run proportion of time that the Markov chain is in state j.
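
A small simulation (my own sketch) illustrating this interpretation: the empirical fraction of time the chain spends in each state approaches the solution of V = V∏, here for the hypothetical 3-state matrix used earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.0, 0.4, 0.6],
])

steps = 200_000
counts = np.zeros(3)
state = 0
for _ in range(steps):
    state = rng.choice(3, p=P[state])   # take one step of the chain
    counts[state] += 1

print(counts / steps)   # long-run proportions of time spent in states 0, 1, 2
```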

Limiting Probabilities ― Example 1
Each of two switches is either on or off during a day. On day n, each switch will independently be off with probability p = (1 + number of on switches during day n−1) / 4. (For instance, if both switches are on during day n−1, then each will independently be off during day n with probability 3/4.) What fraction of days are both switches on? What fraction are both off?

Limiting Probabilities ― Example 1 (cont'd)
How to solve:
1) Model the problem as a Markov chain: define the proper states and the corresponding transition probabilities. Define the state to be the number of on switches during a day, which gives a 3-state Markov chain with states {0, 1, 2}; given state i on day n−1, each switch is independently on during day n with probability (3 − i)/4, so the number of on switches during day n is Binomial(2, (3 − i)/4).

Limiting Probabilities ― Example 1 (cont'd)
How to solve:
2) Solve the linear equations V = V∏ and π0 + π1 + π2 = 1 for the limiting probabilities (π0, π1, π2); the answers to the two questions are then π2 (both switches on) and π0 (both switches off). A numeric sketch follows.
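
A sketch (my own working, not copied from the slides) that builds the 3-state transition matrix implied by the problem and solves for the limiting probabilities; it gives approximately (2/7, 3/7, 2/7), i.e. both switches are on about 2/7 of the days and both are off about 2/7 of the days.

```python
import numpy as np
from math import comb

# State i = number of on switches during a day (i = 0, 1, 2).
# Given i on switches yesterday, each switch is on today with prob q = (3 - i) / 4,
# so today's count of on switches is Binomial(2, q).
P = np.zeros((3, 3))
for i in range(3):
    q = (3 - i) / 4
    for j in range(3):
        P[i, j] = comb(2, j) * q**j * (1 - q)**(2 - j)

n = P.shape[0]
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # approximately [2/7, 3/7, 2/7]
```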

Geometric Random Variable
Consider an experiment with probability of success p. Two types:
(Starting at 1) N = the total number of experiments until a success appears: P(N = k) = (1 − p)^(k−1) p for k ≥ 1, and E[N] = 1/p.
(Starting at 0) N = the number of failed experiments until a success appears: P(N = k) = (1 − p)^k p for k ≥ 0, and E[N] = 1/p − 1.
The geometric r.v. is the only discrete one that satisfies the memoryless property: P(N = m + k | N > m) = P(N = k).
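
A quick numeric sanity check (my own sketch, for the starting-at-1 version): comparing P(N = m + k | N > m) with P(N = k) directly from the pmf, for a made-up choice of p and m.

```python
p, m = 0.3, 4

def pmf(k):
    """P(N = k) for the geometric r.v. starting at 1."""
    return (1 - p) ** (k - 1) * p

tail_m = (1 - p) ** m            # P(N > m)
for k in range(1, 6):
    lhs = pmf(m + k) / tail_m    # P(N = m + k | N > m)
    rhs = pmf(k)                 # P(N = k)
    print(k, round(lhs, 6), round(rhs, 6))   # the two columns agree exactly
```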

Summary
A Markov chain has two important properties: memoryless and time-invariant.
C-K Equation: Vt+n = Vt ∏^n.
When calculating the limiting probabilities of a Markov chain, check whether the stationary state exists (i.e. whether the finite-state Markov chain is ergodic), then use the two equations V = V∏ and Σi πi = 1.
The geometric r.v. is the only discrete distribution that satisfies the memoryless property.
Questions?

Extra
Is this a Markov chain? What kind of rule change would make it a Markov chain? What would the states be then?