Assignment 3. Chapter 3: Problems 7, 11, 14. Chapter 4: Problems 5, 6, 14. Due date: Monday, March 15, 2004.

Example. Inventory System: Inventory at a store is reviewed daily. If inventory drops below 3 units, an order is placed with the supplier and is delivered the next day. The order size should bring the inventory position up to 6 units. Daily demand $D$ is i.i.d. with distribution $P(D=0) = P(D=1) = P(D=2) = 1/3$. Let $X_n$ describe the inventory level on the $n$-th day. Is the process $\{X_n\}$ a Markov chain? Assume we start with 6 units.
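To see how the policy drives the dynamics, here is a minimal simulation sketch in Python. It assumes one particular reading of the timing (the overnight delivery arrives before the day's demand, and excess demand is lost); the policy parameters are the (3, 6) values from the example.

```python
import random

def simulate_inventory(days=10, start=6, reorder_point=3, order_up_to=6):
    """Simulate the daily inventory level X_n under the review policy above."""
    x = start
    levels = [x]
    for _ in range(days):
        # If yesterday's level fell below the reorder point, last night's
        # order (up to 6 units) arrives before today's demand.
        if x < reorder_point:
            x = order_up_to
        demand = random.choice([0, 1, 2])  # D is uniform on {0, 1, 2}
        x = max(x - demand, 0)             # demand beyond stock is lost
        levels.append(x)
    return levels

print(simulate_inventory())
```

Because tomorrow's level depends only on today's level and fresh (i.i.d.) demand, $\{X_n\}$ is indeed a Markov chain.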

Markov Chains

 $\{X_n : n = 0, 1, 2, \ldots\}$ is a discrete-time stochastic process.
 If $X_n = i$, the process is said to be in state $i$ at time $n$.
 $\{i : i = 0, 1, 2, \ldots\}$ is the state space.
 If $P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = P_{ij}$, the process is said to be a Discrete-Time Markov Chain (DTMC).
 $P_{ij}$ is the transition probability from state $i$ to state $j$.

$P = [P_{ij}]$ is the transition matrix. Row $i$ collects the transition probabilities out of state $i$, so $P_{ij} \geq 0$ and each row sums to one: $\sum_j P_{ij} = 1$ for every $i$.
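To make the definition concrete, here is a short, generic sketch (not from the slides) of sampling a DTMC path from a given transition matrix, using NumPy:

```python
import numpy as np

def simulate_dtmc(P, start, steps, seed=None):
    """Sample a path X_0, X_1, ..., X_steps from a DTMC with transition matrix P."""
    rng = np.random.default_rng(seed)
    P = np.asarray(P)
    assert np.allclose(P.sum(axis=1), 1.0), "every row of P must sum to 1"
    path = [start]
    for _ in range(steps):
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path
```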

 Example 1: The probability that it will rain tomorrow depends only on whether or not it rains today:
$P(\text{rain tomorrow} \mid \text{rain today}) = \alpha$
$P(\text{rain tomorrow} \mid \text{no rain today}) = \beta$
State 0 = rain, State 1 = no rain.
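In matrix form, with rows indexed by today's state and columns by tomorrow's, the two conditional probabilities above determine the transition matrix:

$$P = \begin{pmatrix} \alpha & 1-\alpha \\ \beta & 1-\beta \end{pmatrix}$$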

 Example 4: A gambler wins $1 with probability $p$ and loses $1 with probability $1-p$. She starts with $N$ dollars and quits if she reaches either $M$ dollars or $0$. $X_n$ is the amount of money the gambler has after playing $n$ rounds.
 $P(X_n = i+1 \mid X_{n-1} = i, X_{n-2} = i_{n-2}, \ldots, X_0 = N) = P(X_n = i+1 \mid X_{n-1} = i) = p$ for $i \neq 0, M$
 $P(X_n = i-1 \mid X_{n-1} = i, X_{n-2} = i_{n-2}, \ldots, X_0 = N) = P(X_n = i-1 \mid X_{n-1} = i) = 1-p$ for $i \neq 0, M$
 In transition-probability notation: $P_{i,i+1} = p$ and $P_{i,i-1} = 1-p$ for $i \neq 0, M$; $P_{0,0} = P_{M,M} = 1$ (0 and $M$ are called absorbing states); $P_{ij} = 0$ otherwise.
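As a sketch of what can be computed from this chain: the code below estimates the probability of reaching $M$ before $0$ by simulation and compares it with the classical gambler's-ruin formula (standard theory, not derived on these slides):

```python
import random

def ruin_mc(N, M, p, trials=100_000):
    """Estimate P(reach M before 0 | start with N) by Monte Carlo."""
    wins = 0
    for _ in range(trials):
        x = N
        while 0 < x < M:                       # play until absorbed at 0 or M
            x += 1 if random.random() < p else -1
        wins += (x == M)
    return wins / trials

def ruin_exact(N, M, p):
    """Closed-form probability of hitting M before 0, starting from N."""
    if p == 0.5:
        return N / M
    r = (1 - p) / p
    return (1 - r**N) / (1 - r**M)

print(ruin_mc(3, 10, 0.45), ruin_exact(3, 10, 0.45))  # the two should roughly agree
```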

 Random walk: A Markov chain whose state space is $0, \pm 1, \pm 2, \ldots$ and for which $P_{i,i+1} = p = 1 - P_{i,i-1}$ for all $i$, where $0 < p < 1$, is said to be a random walk.

Chapman-Kolmogorov Equations. Let $P^{(n)}_{ij} = P(X_{n+m} = j \mid X_m = i)$ denote the $n$-step transition probability. The Chapman-Kolmogorov equations state that $P^{(n+m)}_{ij} = \sum_k P^{(n)}_{ik} P^{(m)}_{kj}$ for all $n, m \geq 0$; in matrix form, the $n$-step transition matrix is the $n$-th power of the one-step matrix, $P^{(n)} = P^n$.
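Computationally, the matrix form reduces $n$-step probabilities to a matrix power; a one-line NumPy helper:

```python
import numpy as np

def n_step_matrix(P, n):
    """By Chapman-Kolmogorov, the n-step transition matrix is P raised to the n."""
    return np.linalg.matrix_power(np.asarray(P), n)
```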

 Example 1 (continued): $P(\text{rain tomorrow} \mid \text{rain today}) = \alpha$ and $P(\text{rain tomorrow} \mid \text{no rain today}) = \beta$. What is the probability that it will rain four days from today, given that it is raining today? Let $\alpha = 0.7$ and $\beta = 0.4$. State 0 = rain, State 1 = no rain.
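Numerically, using the n_step_matrix helper above, the answer is the $(0, 0)$ entry of $P^4$:

```python
P = [[0.7, 0.3],   # from state 0 (rain):    P(rain), P(no rain) tomorrow
     [0.4, 0.6]]   # from state 1 (no rain): P(rain), P(no rain) tomorrow

print(n_step_matrix(P, 4)[0, 0])   # 0.5749 = P(rain in 4 days | rain today)
```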

Unconditional probabilities. The $n$-step transition probabilities are conditional on the starting state; to obtain unconditional probabilities, specify the initial distribution. If $\alpha_i = P(X_0 = i)$, then $P(X_n = j) = \sum_i \alpha_i P^{(n)}_{ij}$.
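The same formula in code: the unconditional distribution of $X_n$ is the initial distribution (as a row vector) times the $n$-th matrix power. The initial distribution below is illustrative:

```python
import numpy as np

def distribution_at_time_n(alpha0, P, n):
    """Unconditional distribution of X_n: alpha0 (row vector) times P^n."""
    return np.asarray(alpha0) @ np.linalg.matrix_power(np.asarray(P), n)

# E.g., if today is rainy with probability 0.4:
print(distribution_at_time_n([0.4, 0.6], [[0.7, 0.3], [0.4, 0.6]], 4))
```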

Classification of States

Communicating states. State $j$ is accessible from state $i$ if $P^{(n)}_{ij} > 0$ for some $n \geq 0$. States $i$ and $j$ communicate, written $i \leftrightarrow j$, if each is accessible from the other.

Proof sketch that communication is an equivalence relation: it is reflexive since $P^{(0)}_{ii} = 1$, symmetric by definition, and transitive by the Chapman-Kolmogorov equations; hence it partitions the state space into (communicating) classes.

Classification of States (continued)

A Markov chain is irreducible if all of its states communicate, i.e., there is only one class. The Markov chain with the transition probability matrix $P$ in this example is irreducible.

The classes of this Markov chain are {0, 1}, {2}, and {3}.
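Communicating classes can be computed mechanically as the mutually-reachable groups of the directed graph with an edge $i \to j$ whenever $P_{ij} > 0$. A sketch; the matrix below is a hypothetical one chosen to be consistent with the classes {0, 1}, {2}, {3} (the slide's actual matrix is not reproduced in the transcript):

```python
import numpy as np

# Hypothetical transition matrix with classes {0, 1}, {2}, {3}:
P_example = [
    [0.5,  0.5,  0.0, 0.0],
    [0.5,  0.5,  0.0, 0.0],
    [0.25, 0.25, 0.5, 0.0],
    [0.0,  0.0,  0.0, 1.0],
]

def communicating_classes(P):
    """Partition states into communicating classes (i ~ j iff mutually reachable)."""
    P = np.asarray(P)
    n = len(P)
    reach = (P > 0) | np.eye(n, dtype=bool)    # reachable in >= 0 steps
    for k in range(n):                         # Warshall transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = sorted(j for j in range(n) if reach[i, j] and reach[j, i])
            seen.update(cls)
            classes.append(cls)
    return classes

print(communicating_classes(P_example))   # [[0, 1], [2], [3]]
```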

Recurrent and transient states. Let $f_i$ be the probability that, starting in state $i$, the process will eventually re-enter state $i$. State $i$ is recurrent if $f_i = 1$ and transient if $f_i < 1$. For a transient state, each visit is the last one with probability $1 - f_i$, independently of the past, so the number of periods the process spends in state $i$ is geometric: the probability that the process is in state $i$ for exactly $n$ periods is $f_i^{\,n-1}(1 - f_i)$, $n \geq 1$.
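Since "eventually re-enter" cannot be simulated exactly, $f_i$ can only be estimated with a truncation horizon; a rough Monte Carlo sketch under that assumption:

```python
import numpy as np

def estimate_return_probability(P, i, horizon=1_000, trials=20_000, seed=0):
    """Estimate f_i = P(ever return to i | start at i), truncated at `horizon` steps."""
    rng = np.random.default_rng(seed)
    P = np.asarray(P)
    returns = 0
    for _ in range(trials):
        state = i
        for _ in range(horizon):
            state = int(rng.choice(len(P), p=P[state]))
            if state == i:                 # first return to i
                returns += 1
                break
    return returns / trials
```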

Not all states of a finite Markov chain can be transient: a transient state is visited only finitely often, so if every state were transient the process would eventually run out of states to occupy, which is impossible.

If state $i$ is recurrent and state $i$ communicates with state $j$, then state $j$ is recurrent; recurrence is a class property. Likewise, if state $i$ is transient and communicates with state $j$, then state $j$ is transient; transience is also a class property.

Consequently, all states of a finite irreducible Markov chain are recurrent.

In this example all states communicate, so the chain is irreducible; since the chain is finite, all states are recurrent.

In this example there are three classes, {0, 1}, {2, 3}, and {4}. The first two are recurrent and the third is transient.
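For a finite chain this classification can be automated: a communicating class is recurrent exactly when it is closed (no positive-probability transition leaves it), and transient otherwise. A sketch reusing the communicating_classes helper and the hypothetical P_example from above:

```python
import numpy as np

def classify_classes(P, classes):
    """Label each communicating class of a finite chain.

    A class is recurrent iff it is closed: the total probability
    of leaving the class in one step is zero.
    """
    P = np.asarray(P)
    labels = {}
    for cls in classes:
        outside = [j for j in range(len(P)) if j not in cls]
        leaky = P[np.ix_(cls, outside)].sum() > 0
        labels[tuple(cls)] = "transient" if leaky else "recurrent"
    return labels

print(classify_classes(P_example, communicating_classes(P_example)))
# {(0, 1): 'recurrent', (2,): 'transient', (3,): 'recurrent'}
```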