

1 Introduction to Stochastic Models GSLM 54100

2 Outline
• transient behavior
• first passage time
• absorption probability
• limiting distribution
  • connectivity
  • types of states and of irreducible DTMCs
    • transient, recurrent, positive recurrent, null recurrent
    • periodicity
  • limiting behavior of irreducible chains

3 Example 4.12 of Ross
• the amount of money of a pensioner
• he receives 2 (thousand dollars) at the beginning of each month
• his expenses in a month equal i w.p. ¼, i = 1, 2, 3, 4
• if the money on hand is insufficient, he spends only what he has (his capital cannot go negative)
• any excess above 3 at the end of a month is disposed of
• at a particular month (the time reference), he has 5 after receiving his payment
• find P(the pensioner's capital is ever ≤ 1 within the following four months)

4 Example 4.12 of Ross
• X_n = the amount of money that the pensioner has at the end of month n
• X_{n+1} = min{[X_n + 2 − D_n]^+, 3}, where D_n ~ discrete uniform on {1, 2, 3, 4}
• starting with X_0 = 3, X_n ∈ {0, 1, 2, 3}

5 Example 4.12 of Ross
• starting with X_0 = 3, whether the chain has ever visited state 0 or 1 on or before month n depends only on the transitions within {2, 3}
• merge states 0 and 1 into a super state A
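With states 0 and 1 merged into an absorbing super state A, the required probability can be computed numerically. Below is a minimal Python sketch (the function name is illustrative; the transition probabilities are derived from the update rule X_{n+1} = min{[X_n + 2 − D_n]^+, 3} with D_n uniform on {1, 2, 3, 4}):

```python
from fractions import Fraction

F = Fraction
# W-chain for the pensioner example: states A (= merged {0, 1}), 2, 3.
# From 3: X + 2 = 5, so D = 1, 2 -> 3 (capped), D = 3 -> 2, D = 4 -> 1 (i.e., A).
# From 2: X + 2 = 4, so D = 1 -> 3, D = 2 -> 2, D = 3 or 4 -> A.
Q = {
    "A": {"A": F(1), "2": F(0), "3": F(0)},        # super state A is absorbing
    "2": {"A": F(1, 2), "2": F(1, 4), "3": F(1, 4)},
    "3": {"A": F(1, 4), "2": F(1, 4), "3": F(1, 2)},
}

def prob_ever_visit_A(start, n):
    """P(W_n = A | W_0 = start) = P(X_k in {0, 1} for some k <= n)."""
    dist = {s: F(int(s == start)) for s in Q}
    for _ in range(n):  # one matrix-vector multiplication per month
        dist = {j: sum(dist[i] * Q[i][j] for i in Q) for j in Q}
    return dist["A"]

print(prob_ever_visit_A("3", 4))  # 201/256
```

With this reading of the question (four transitions from X_0 = 3), the answer is 201/256 ≈ 0.785.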

6 Probability of Ever Visiting a Set of States by Period n
• a Markov chain [p_ij]
• A: a set of special states
• to find P(ever visiting states in A by period n | X_0 = i)
• define
  • a super state A, indicating that some state in A has been visited
  • the first visiting time of A, N = min{n: X_n ∈ A}
• a new Markov chain: W_n = X_n for n < N, and W_n = A for n ≥ N

7 Probability of Ever Visiting a Set of States by Period n
• transition probability matrix Q = [q_ij], where q_ij = p_ij for i, j ∉ A, q_iA = Σ_{j ∈ A} p_ij for i ∉ A, and q_AA = 1
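The construction of Q from P can be sketched in a few lines of Python (the function name `absorb` and the demo chain are illustrative, not from the slides):

```python
def absorb(P, A):
    """Merge the states in A into one absorbing super state "A", producing
    the transition matrix Q of the W-chain:
      q_ij = p_ij                for i, j outside A
      q_iA = sum_{j in A} p_ij   for i outside A
      q_AA = 1
    P is a dict of dicts: P[i][j] = p_ij."""
    keep = [s for s in P if s not in A]
    Q = {i: {j: P[i].get(j, 0) for j in keep} for i in keep}
    for i in keep:
        Q[i]["A"] = sum(P[i].get(j, 0) for j in A)
    Q["A"] = {j: 0 for j in keep}
    Q["A"]["A"] = 1
    return Q

# illustrative 4-state chain (not the one on the slides)
P = {
    "a": {"a": 0.2, "b": 0.3, "c": 0.4, "d": 0.1},
    "b": {"a": 0.5, "b": 0.0, "c": 0.25, "d": 0.25},
    "c": {"a": 0.1, "b": 0.2, "c": 0.3, "d": 0.4},
    "d": {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25},
}
Q = absorb(P, {"c", "d"})
print(Q["a"]["A"])  # 0.5
```

Powers of Q then answer both questions on the following slides: the A-entry of Q^n gives the probability of ever visiting A by period n, and the other entries give the probabilities of being in a given state at n without having touched A.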

8 Probability of Visiting a Particular State at Time n While Avoiding a Set of States at Times k ∈ {1, …, n − 1}
• P(X_n = j, X_k ∉ A, k = 1, …, n − 1 | X_0 = i) = P(W_n = j | X_0 = i) = P(W_n = j | W_0 = i), for j ∉ A

9 Example
• X_n, the weather on day n, a DTMC
• to find
  • P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
  • P(ever visits state r on or before n = 3 | X_0 = c)

10 Example
• claim: these probabilities can be found from a new DTMC {W_n}
• r: the special state, i.e., A = {r}
• N = min{n: X_n ∈ A} = min{n: X_n = r}
• define
  • state space of {W_n}: {A, c, w, s}
  • transition probability matrix of {W_n}

11 Example
• P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c) = P(W_3 = s, W_2 ≠ A, W_1 ≠ A | W_0 = c)
• P({X_n} ever visits state r on or before n = 3 | X_0 = c) = P({W_n} ever visits state A on or before n = 3 | W_0 = c)

12 Intuition
• P({X_n} ever visits state r on or before n = 3 | X_0 = c)
• the event {{X_n} ever visits state r on or before n = 3 | X_0 = c} is determined by what happens before visiting r
  • e.g., if the chain has not visited r up to X_2, whether X_3 = r depends only on the transition out of X_2, with nothing after r involved
• it does not depend on events after visiting r
• the transition probabilities of {X_n} before visiting r are the same as the transition probabilities of {W_n} before visiting A

13 Intuition
• P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
• the event {X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c} is again determined by what happens before visiting r

14 Example
• let (X_1, X_2) = (i, j)
• P(ever visits r in the first two periods | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s), (c, r), (w, r), (s, r) | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s) | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)
  = P(X_1 = r | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)
  = P(W_1 = A | W_0 = c) + P((c, A) | W_0 = c) + P((w, A) | W_0 = c) + P((s, A) | W_0 = c)
• reason for P(X_1 = r | X_0 = c) = P(W_1 = A | W_0 = c): the event {X_1 = r | X_0 = c} depends only on the transition out of state c, which is unaffected by merging state r into A; before visiting state r, {X_n} and {W_n} are the same

15 Example
• P(ever visits state r in the first two periods | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s), (c, r), (w, r), (s, r) | X_0 = c)
  = P((r, r) | X_0 = c) + P((r, c) | X_0 = c) + P((r, w) | X_0 = c) + P((r, s) | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)

16 Example
• repeat the process for n = 3, i.e., convince yourself that
  • P(ever visits state r on or before n = 3 | X_0 = c)
  • P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
  can both be found from events before visiting r
• before visiting r, {X_n} and {W_n} are exactly the same

17 First Passage Time

18 First Passage Times
• let T_ij be the first passage time from state i to state j
• T_ij = the number of transitions taken to visit state j for the first time given that X_0 = i
• T_ij = min{n: X_n = j, X_{n−1} ≠ j, ..., X_1 ≠ j | X_0 = i}
• T_ii = the recurrence time of state i
• simple formulas exist to calculate E(T_ij) for positive recurrent irreducible chains

19 Example of Note
• let μ_ij = E(T_ij)
• X_0 = 3; μ_30 = ?
• first-step analysis: conditioning on the first transition gives, for each state i ≠ 0, μ_i0 = 1 + Σ_{j≠0} p_ij μ_j0, yielding equations for μ_10, μ_20, μ_30
• 3 equations, 3 unknowns, solving for μ_30

20 Example of Note
• to find μ_00
• by the same conditioning, μ_00 = 1 + Σ_{j≠0} p_0j μ_j0
• a quick way to find μ_00 will be discussed soon
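The first-step equations form a small linear system that can be solved directly. Here is a Python sketch; since the slide's transition matrix is not reproduced in the transcript, the 3-state chain below is a hypothetical stand-in:

```python
def mean_first_passage_to(P, target):
    """Solve mu_i = 1 + sum_{j != target} P[i][j] * mu_j for all i != target
    (first-step analysis), then mu_tt = 1 + sum_{j != target} P[t][j] * mu_j."""
    states = [i for i in range(len(P)) if i != target]
    n = len(states)
    # system (I - B) mu = 1, where B is P with the target row/column removed
    A = [[(1.0 if a == b else 0.0) - P[states[a]][states[b]] for b in range(n)]
         for a in range(n)]
    b = [1.0] * n
    # Gaussian elimination with partial pivoting
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    mu = [0.0] * n
    for r in reversed(range(n)):  # back substitution
        mu[r] = (b[r] - sum(A[r][k] * mu[k] for k in range(r + 1, n))) / A[r][r]
    out = {states[r]: mu[r] for r in range(n)}
    # recurrence time of the target state
    out[target] = 1.0 + sum(P[target][j] * out[j] for j in states)
    return out

# hypothetical 3-state chain (the slide's matrix is not in the transcript)
P = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
mu = mean_first_passage_to(P, 0)
print(mu)  # {1: 6.0, 2: 8.0, 0: 4.0}
```

A useful sanity check for this chain: its stationary distribution has π_0 = 1/4, and indeed μ_00 = 1/π_0 = 4, which is the "quick way" the next slides allude to.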

21 Absorption States
• the gambler's ruin problem
• Sam & Peter, 4 dollars in total
• an infinite number of coin flips
• H: Peter gives Sam $1; otherwise Sam gives Peter $1
• P(H) = p and P(T) = 1 − p
• X_n: amount of money that Sam has after n rounds
• X_0 = 1
• P(Sam wins the game) = ?

22 Absorption States
• f_i: probability that Sam wins the game if he starts with i dollars, i = 1, 2, 3; f_0 = 0, f_4 = 1
• f_1 = p f_2
• f_2 = (1 − p) f_1 + p f_3
• f_3 = (1 − p) f_2 + p
• 3 equations, 3 unknowns, solving for f_i
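These three equations can be solved by hand via substitution; the following Python sketch does exactly that (the function name is illustrative), using exact rational arithmetic so the answers come out as clean fractions:

```python
from fractions import Fraction

def ruin_win_probs(p):
    """Gambler's ruin on {0, ..., 4}: f_i = P(Sam reaches 4 before 0 | X_0 = i).
    Solves f_1 = p f_2, f_2 = (1-p) f_1 + p f_3, f_3 = (1-p) f_2 + p
    by substituting the first and third equations into the second."""
    q = 1 - p
    f2 = p * p / (1 - 2 * p * q)  # from f_2 = q*(p*f_2) + p*(q*f_2 + p)
    f1 = p * f2
    f3 = q * f2 + p
    return f1, f2, f3

f1, f2, f3 = ruin_win_probs(Fraction(1, 2))
print(f1, f2, f3)  # 1/4 1/2 3/4
```

For a fair coin the win probability is simply proportional to Sam's starting capital; for p ≠ 1/2 the results agree with the classical formula f_i = (1 − (q/p)^i) / (1 − (q/p)^4).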

23 Example on Weather
• weather ∈ {r, c, w, s}
• X_0 = c
• find P(running into sunny before rainy)
• f_c = P(running into s before r | X_0 = c)
• f_w = P(running into s before r | X_0 = w)
• f_c = f_c/3 + f_w/6 + 1/4
• f_w = f_c/2 + f_w/4 + 1/8
• two equations, two unknowns
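The two equations above form a 2×2 linear system that Cramer's rule dispatches in a few lines; a Python sketch with exact arithmetic:

```python
from fractions import Fraction

F = Fraction
# The slide's equations:  f_c = f_c/3 + f_w/6 + 1/4,  f_w = f_c/2 + f_w/4 + 1/8.
# Rearranged into A x = b with x = (f_c, f_w):
#    (2/3) f_c - (1/6) f_w = 1/4
#   -(1/2) f_c + (3/4) f_w = 1/8
a11, a12, b1 = F(2, 3), F(-1, 6), F(1, 4)
a21, a22, b2 = F(-1, 2), F(3, 4), F(1, 8)
det = a11 * a22 - a12 * a21
f_c = (b1 * a22 - a12 * b2) / det   # Cramer's rule
f_w = (a11 * b2 - b1 * a21) / det
print(f_c, f_w)  # 1/2 1/2
```

So starting from either cloudy or windy, sunny and rainy are equally likely to be reached first.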