
1 Introduction to Stochastic Models GSLM 54100

2 Outline
- transient behavior
- first passage time
- absorption probability
- limiting distribution
- connectivity
- types of states and of irreducible DTMCs
  - transient, recurrent, positive recurrent, null recurrent
- periodicity
- limiting behavior of irreducible chains

3 Example 4.12 of Ross
- the amount of money of a pensioner
- he receives 2 (thousand dollars) at the beginning of each month
- his expenses in a month = i w.p. 1/4, i = 1, 2, 3, 4
- if there is not enough money on hand, he spends only what he has
- any amount above 3 at the end of a month is disposed of
- at a particular month (the time reference), he has 5 after receiving his payment
- find P(the pensioner's capital is ever 1 or less within the following four months)

4 Example 4.12 of Ross
- X_n = the amount of money that the pensioner has at the end of month n
- X_{n+1} = min{[X_n + 2 − D_n]^+, 3}, where D_n ~ discrete uniform on {1, 2, 3, 4}
- starting with X_0 = 3, X_n ∈ {0, 1, 2, 3}

5 Example 4.12 of Ross
- starting with X_0 = 3, whether the chain has ever visited state 0 or 1 on or before time n depends only on the transitions within {2, 3}
- merge states 0 and 1 into a super state A (a numerical sketch of this computation follows)
[Transition diagrams: the original chain on {0, 1, 2, 3} and the merged chain on {A, 2, 3} with A absorbing]
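As a concrete check of the merged-chain idea, the sketch below (Python with NumPy, written for these notes rather than taken from Ross) builds the transition matrix implied by the recursion X_{n+1} = min{[X_n + 2 − D_n]^+, 3}, lumps states 0 and 1 into an absorbing super state A, and reads off P(the capital has been 1 or less by month 4 | X_0 = 3) from the fourth power of the lumped matrix. The state ordering and variable names are choices made here.

```python
import numpy as np

# One-step transition matrix of X_n on {0, 1, 2, 3}, derived from
# X_{n+1} = min{[X_n + 2 - D_n]^+, 3} with D_n uniform on {1, 2, 3, 4}.
P = np.zeros((4, 4))
for i in range(4):
    for d in (1, 2, 3, 4):
        j = min(max(i + 2 - d, 0), 3)
        P[i, j] += 0.25

# Lump {0, 1} into an absorbing super state A; order the lumped states as (A, 2, 3).
Q = np.zeros((3, 3))
Q[0, 0] = 1.0                          # A is absorbing
for row, i in enumerate((2, 3), start=1):
    Q[row, 0] = P[i, 0] + P[i, 1]      # probability of falling into {0, 1}
    Q[row, 1] = P[i, 2]
    Q[row, 2] = P[i, 3]

# P(capital has ever been 1 or less within four months | X_0 = 3)
#   = P(W_4 = A | W_0 = 3) = (Q^4)[state 3, state A]
prob = np.linalg.matrix_power(Q, 4)[2, 0]
print(prob)   # roughly 0.785 under this construction of the chain
```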

6 Probability of Ever Visiting a Set of States by Period n
- a Markov chain [p_ij]
- A: a set of special states
- want P(ever visiting a state in A by period n | X_0 = i)
- define a super state A indicating that the chain has ever visited A
- the first visiting time of A: N = min{n: X_n ∈ A}
- a new Markov chain: W_n = X_n if n < N, and W_n = A if n ≥ N

7 Probability of Ever Visiting a Set of States by Period n
- transition probability matrix Q = [q_ij] of {W_n}: q_ij = p_ij for i, j ∉ A; q_iA = Σ_{j ∈ A} p_ij for i ∉ A; q_AA = 1

8 Probability of Visiting a Particular State at n and Skipping a Particular Set of States for k ∈ {1, …, n − 1}
- P(X_n = j, X_k ∉ A for k = 1, …, n − 1 | X_0 = i) = P(W_n = j | X_0 = i) = P(W_n = j | W_0 = i)
(a small generic routine implementing this construction is sketched below)
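The construction on the last three slides can be packaged as a small generic routine. The sketch below (a helper written for these notes, not code from the course) takes the matrix [p_ij], a list of special states A, and a horizon n, forms Q = [q_ij] with A lumped into an absorbing super state, and returns P(ever visit A by period n | X_0 = i) as an entry of Q^n.

```python
import numpy as np

def lump_absorbing(P, A):
    """Build Q = [q_ij] for the W-chain: states not in A keep their transition
    probabilities among themselves, all mass into A is sent to a single
    absorbing super state (placed first in the new ordering)."""
    A = set(A)
    others = [i for i in range(P.shape[0]) if i not in A]
    Q = np.zeros((len(others) + 1, len(others) + 1))
    Q[0, 0] = 1.0                                  # super state A is absorbing
    for r, i in enumerate(others, start=1):
        Q[r, 0] = sum(P[i, j] for j in A)          # q_iA = sum_{j in A} p_ij
        for c, j in enumerate(others, start=1):
            Q[r, c] = P[i, j]                      # q_ij = p_ij for i, j not in A
    return Q, others

def ever_visit_by_n(P, A, i, n):
    """P(chain started at i not in A visits some state of A on or before period n)."""
    Q, others = lump_absorbing(P, A)
    start = 1 + others.index(i)                    # position of i in the W-chain ordering
    return np.linalg.matrix_power(Q, n)[start, 0]  # = P(W_n = super state A | W_0 = i)
```

For j ∉ A, the same power Q^n also gives P(X_n = j, X_k ∉ A for k = 1, …, n − 1 | X_0 = i) as the entry of Q^n in row i and column j (both taken in the W-chain ordering).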

9 Example
- X_n, the weather of day n, a DTMC with states {r, c, w, s}
- to find:
  - P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
  - P(ever visits state r on or before n = 3 | X_0 = c)

10 Example
- claim: these probabilities can be found from a new DTMC {W_n}
- r: the special state, i.e., A = {r}
- N = min{n: X_n ∈ A} = min{n: X_n = r}
- define:
  - state space of {W_n}: {A, c, w, s}
  - transition probability matrix of {W_n}: obtained from that of {X_n} by making A absorbing

11 Example
- P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c) = P(W_3 = s, W_2 ≠ A, W_1 ≠ A | W_0 = c)
- P({X_n} ever visits state r on or before n = 3 | X_0 = c) = P({W_n} ever visits state A on or before n = 3 | W_0 = c)
(illustrated numerically below, with assumed values for the rows not given in these notes)
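To make the two equalities concrete, the sketch below computes both quantities from the W-chain. Only the c and w rows of the weather matrix can be read off these notes (from slide 23); the r and s rows used here are made-up placeholders, so the printed numbers are purely illustrative.

```python
import numpy as np

states = ["r", "c", "w", "s"]           # labels as in the notes

# One-step matrix of {X_n}; the c and w rows follow slide 23,
# the r and s rows are invented for illustration only.
P = np.array([
    [0.40,  0.30, 0.20, 0.10 ],         # r  (assumed)
    [0.25,  1/3,  1/6,  0.25 ],         # c  (from slide 23)
    [0.125, 0.50, 0.25, 0.125],         # w  (from slide 23)
    [0.10,  0.20, 0.30, 0.40 ],         # s  (assumed)
])

# W-chain on {A, c, w, s} with A = {r} absorbing.
Q = np.zeros((4, 4))
Q[0, 0] = 1.0
for r, i in enumerate([1, 2, 3], start=1):     # rows for c, w, s
    Q[r, 0] = P[i, 0]                          # mass into A = {r}
    Q[r, 1:] = P[i, [1, 2, 3]]
Q3 = np.linalg.matrix_power(Q, 3)

c, s = 1, 3                                    # positions of c and s in the W ordering
print("P(X3 = s, X2 != r, X1 != r | X0 = c) =", Q3[c, s])
print("P(ever visit r on or before n = 3 | X0 = c) =", Q3[c, 0])
```

Since A is absorbing in {W_n}, the event {W_3 = s} already forces W_1 ≠ A and W_2 ≠ A, which is why a single entry of Q^3 suffices for the first probability.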

12 Intuition
- P({X_n} ever visits state r on or before n = 3 | X_0 = c)
- the event {{X_n} ever visits state r on or before n = 3 | X_0 = c} is determined by events occurring before the first visit to r
  - e.g., if the chain has not visited r up to time 2, then whether X_3 = r depends only on the transition out of X_2, not on anything involving r itself
- it is not related to events after visiting r
- before visiting r, the transition probabilities of {X_n} are the same as the transition probabilities of {W_n} before visiting A

13 Intuition
- P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
- the event {X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c} is again determined by events occurring before visiting r

14 Example
- let (X_1, X_2) = (i, j)
- P(ever visits r in the first two periods | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s), (c, r), (w, r), (s, r) | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s) | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)
  = P(X_1 = r | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)
  = P(W_1 = A | W_0 = c) + P((c, A) | W_0 = c) + P((w, A) | W_0 = c) + P((s, A) | W_0 = c)
- reason for P(X_1 = r | X_0 = c) = P(W_1 = A | W_0 = c): the event {X_1 = r | X_0 = c} depends only on the transition out of state c, which does not involve state r; before visiting state r, {X_n} and {W_n} are the same.

15 Example
- P(ever visits state r on or before n = 2 | X_0 = c)
  = P((r, r), (r, c), (r, w), (r, s), (c, r), (w, r), (s, r) | X_0 = c)
  = P((r, r) | X_0 = c) + P((r, c) | X_0 = c) + P((r, w) | X_0 = c) + P((r, s) | X_0 = c) + P((c, r) | X_0 = c) + P((w, r) | X_0 = c) + P((s, r) | X_0 = c)

16 Example
- repeat the process for n = 3, i.e., convince yourself that
  - P(ever visits state r on or before n = 3 | X_0 = c)
  - P(X_3 = s, X_2 ≠ r, X_1 ≠ r | X_0 = c)
  can both be found from events occurring before the chain visits r
- before visiting r, {X_n} and {W_n} are exactly the same (a numerical check is sketched below)
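As one way to "convince yourself" numerically, the sketch below enumerates all length-3 paths of {X_n} directly, using the same illustrative matrix as above (c and w rows from slide 23, r and s rows invented), and computes both probabilities; they should match the corresponding entries of Q^3 for the lumped W-chain.

```python
import itertools
import numpy as np

# Same illustrative matrix as before: c and w rows from slide 23, r and s rows assumed.
P = np.array([
    [0.40,  0.30, 0.20, 0.10 ],   # r (assumed)
    [0.25,  1/3,  1/6,  0.25 ],   # c (from slide 23)
    [0.125, 0.50, 0.25, 0.125],   # w (from slide 23)
    [0.10,  0.20, 0.30, 0.40 ],   # s (assumed)
])
r, c, w, s = 0, 1, 2, 3

ever_r = 0.0        # P(visit r on or before n = 3 | X_0 = c)
skip_r_end_s = 0.0  # P(X_3 = s, X_2 != r, X_1 != r | X_0 = c)
for x1, x2, x3 in itertools.product(range(4), repeat=3):
    prob = P[c, x1] * P[x1, x2] * P[x2, x3]
    if r in (x1, x2, x3):
        ever_r += prob
    if x3 == s and x1 != r and x2 != r:
        skip_r_end_s += prob

print(ever_r, skip_r_end_s)   # should equal the Q^3 entries of the lumped W-chain
```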

17 First Passage Time

18 First Passage Times
- let T_ij be the first passage time from state i to state j
- T_ij = the number of transitions taken to visit state j for the first time, given that X_0 = i
- T_ij = min{n: X_n = j, X_k ≠ j for k = 1, …, n − 1}, given X_0 = i
- T_ii = the recurrence time of state i
- there are simple formulas to calculate E(T_ij) for positive recurrent irreducible chains

19 Example 4.7.1 of Note
- let μ_ij = E(T_ij)
- X_0 = 3; μ_30 = ?
- μ_10 = 1 + 0.368 μ_10
- μ_20 = 1 + 0.368 μ_10 + 0.368 μ_20
- μ_30 = 1 + 0.184 μ_10 + 0.368 μ_20 + 0.368 μ_30
- 3 equations, 3 unknowns; solve for μ_30
[Transition diagram for the four-state chain on {0, 1, 2, 3}]

20 Example 4.7.1 of Note
- to find μ_00:
- μ_00 = 1 + 0.184 μ_10 + 0.368 μ_20 + 0.368 μ_30
- a quick way to find μ_00 will be discussed soon (the system above is solved in the sketch below)
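The three recursions for μ_10, μ_20, μ_30 form a small linear system, so they can be solved mechanically, and μ_00 then follows from the equation on this slide. A minimal sketch, using only the coefficients shown on these two slides:

```python
import numpy as np

# mu = (mu_10, mu_20, mu_30); each equation on slide 19 has the form mu = 1 + A @ mu.
A = np.array([
    [0.368, 0.0,   0.0  ],   # mu_10 = 1 + 0.368 mu_10
    [0.368, 0.368, 0.0  ],   # mu_20 = 1 + 0.368 mu_10 + 0.368 mu_20
    [0.184, 0.368, 0.368],   # mu_30 = 1 + 0.184 mu_10 + 0.368 mu_20 + 0.368 mu_30
])
mu = np.linalg.solve(np.eye(3) - A, np.ones(3))
mu_10, mu_20, mu_30 = mu

# Slide 20: mu_00 in terms of the quantities just solved for.
mu_00 = 1 + 0.184 * mu_10 + 0.368 * mu_20 + 0.368 * mu_30
print(mu_30, mu_00)
```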

21 Absorption States
- the gambler's ruin problem
- Sam and Peter, 4 dollars in total
- the coin is flipped as many times as needed
- H: Peter gives Sam $1; otherwise Sam gives Peter $1
- P(H) = p and P(T) = 1 − p
- X_n: the amount of money that Sam has after n rounds
- X_0 = 1
- P(Sam wins the game) = ?
[Transition diagram: states 0 to 4, with 0 and 4 absorbing]

22 Absorption States
- f_i: the probability that Sam wins the game if he starts with i dollars, i = 1, 2, 3; f_0 = 0, f_4 = 1
- f_1 = p f_2
- f_2 = (1 − p) f_1 + p f_3
- f_3 = (1 − p) f_2 + p
- 3 equations, 3 unknowns; solve for the f_i (a sketch of the solution follows)
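Since f_0 = 0 and f_4 = 1, the three equations on this slide are a linear system in (f_1, f_2, f_3). A minimal sketch, with p left as a parameter (p = 0.5 below is just an example value):

```python
import numpy as np

def win_probabilities(p):
    """Solve f_1 = p f_2,  f_2 = (1-p) f_1 + p f_3,  f_3 = (1-p) f_2 + p."""
    q = 1 - p
    # Rewrite the system as M @ (f1, f2, f3) = b.
    M = np.array([
        [1.0, -p,  0.0],
        [-q,  1.0, -p ],
        [0.0, -q,  1.0],
    ])
    b = np.array([0.0, 0.0, p])
    return np.linalg.solve(M, b)

f1, f2, f3 = win_probabilities(0.5)
print(f1)   # Sam starts with X_0 = 1 dollar, so this is P(Sam wins); 0.25 for a fair coin
```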

23 Example on Weather
- weather ∈ {r, c, w, s}
- X_0 = c
- find P(running into sunny before rainy)
- f_c = P(running into s before r | X_0 = c)
- f_w = P(running into s before r | X_0 = w)
- f_c = f_c/3 + f_w/6 + 1/4
- f_w = f_c/2 + f_w/4 + 1/8
- two equations, two unknowns (solved in the sketch below)
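These two equations are again a small linear system, this time in (f_c, f_w). A minimal sketch using only the coefficients on this slide:

```python
import numpy as np

# f_c = f_c/3 + f_w/6 + 1/4  and  f_w = f_c/2 + f_w/4 + 1/8,
# rewritten as M @ (f_c, f_w) = b.
M = np.array([
    [1 - 1/3, -1/6    ],
    [-1/2,     1 - 1/4],
])
b = np.array([1/4, 1/8])
f_c, f_w = np.linalg.solve(M, b)
print(f_c, f_w)   # both come out to 0.5 with these coefficients
```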

