Presentation on theme: "Markov Chains X(t) is a Markov Process if, for arbitrary times t1 < t2 < . . . < tk < tk+1 If X(t) is discrete-valued If X(t) is continuous-valued i.e."— Presentation transcript:

1 Markov Chains
X(t) is a Markov process if, for arbitrary times t1 < t2 < . . . < tk < tk+1 :
If X(t) is discrete-valued: P( X(tk+1) = xk+1 | X(tk) = xk , . . . , X(t1) = x1 ) = P( X(tk+1) = xk+1 | X(tk) = xk )
If X(t) is continuous-valued: P( X(tk+1) ≤ xk+1 | X(tk) = xk , . . . , X(t1) = x1 ) = P( X(tk+1) ≤ xk+1 | X(tk) = xk )
i.e. the future of the process depends only on the present and not on the past.


5 Markov Chains
Integer-valued Markov processes are called Markov chains. Examples: sum process, counting process, random walk, Poisson process. Markov chains can be discrete-time or continuous-time.

6 Discrete-Time Markov Chains
Initial PMF: pj(0) ≡ P( X0 = j ) ; j = 0, 1, . . .
Transition Probability Matrix: P = [ pij ] , where pij = P( Xn+1 = j | Xn = i )
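As a minimal sketch of these two objects (the two-state chain and its numbers are illustrative, not from the slides), a TPM is just a matrix each of whose rows is itself a PMF:

```python
# Sketch: a discrete-time Markov chain as an initial PMF plus a TPM.
# The two-state chain and its probabilities are made up for illustration.

p0 = [0.5, 0.5]            # initial PMF: p_j(0) = P(X_0 = j)
P = [[0.9, 0.1],           # p_ij = P(X_{n+1} = j | X_n = i)
     [0.4, 0.6]]

# Every row of a valid TPM is non-negative and sums to 1.
for row in P:
    assert all(q >= 0 for q in row) and abs(sum(row) - 1.0) < 1e-12
print("valid TPM")
```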

7 e.g. Binomial counting process: Sn = Sn-1 + Xn , Xn ~ Bernoulli(p)
[State diagram: states 0, 1, 2, . . . , k, k+1, . . . ; each state moves to the next with probability p and stays put with probability 1 − p]

8 n-step Transition Probabilities:
pij(n) ≡ P( Xn = j | X0 = i ). e.g. pij(2) = Σk pik pkj , summing over the intermediate state k at time 1; in matrix form P(2) = P·P, and in general P(n) = P^n.
[Trellis diagram: paths from state i at time 0 through each intermediate state k at time 1 to state j at time 2]
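The identity P(n) = P^n can be checked directly with a small pure-Python sketch (the two-state chain is illustrative, not from the slides):

```python
# Sketch: n-step transition probabilities via matrix powers, P(n) = P^n.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def n_step(P, n):
    # P(0) = I; P(n) = P(n-1) @ P  (Chapman-Kolmogorov)
    Q = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        Q = matmul(Q, P)
    return Q

P = [[0.9, 0.1], [0.4, 0.6]]
P2 = n_step(P, 2)
# p_ij(2) = sum_k p_ik p_kj, e.g. p_00(2) = 0.9*0.9 + 0.1*0.4 = 0.85
print(P2[0])  # [0.85, 0.15]
```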

9 State Probabilities:
p(n) = p(0) P^n , where p(n) = [ p0(n), p1(n), . . . ] ⇒ the PMF at any time can be obtained from the initial PMF and the TPM.

10 Steady State Probabilities:
In some cases the probabilities pj(n) approach a fixed point as n → ∞ : pj(n) → πj. Not all Markov chains settle into steady state, e.g. binomial counting.
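A sketch of this settling behavior (illustrative two-state chain, not from the slides): iterate p(n+1) = p(n) P until the PMF stops changing.

```python
# Sketch: power iteration p(n+1) = p(n) P converging to a fixed point.
# For this chain the fixed point works out to [0.8, 0.2].

P = [[0.9, 0.1], [0.4, 0.6]]
p = [1.0, 0.0]                     # start in state 0 with certainty
for _ in range(200):
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]
print(p)   # approaches [0.8, 0.2]
```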

11 Classification of States (Discrete-Time Markov Chains)
* State j is accessible from state i if pij(n) > 0 for some n ≥ 0.
* States i and j communicate if i is accessible from j and j is accessible from i. This is denoted i ↔ j.
* i ↔ i (reflexivity).
* i ↔ j and j ↔ k ⇒ i ↔ k (transitivity).
Class: States i and j belong to the same class if i ↔ j. If S ≡ the set of states, then for any Markov chain S is the union of disjoint classes. If a Markov chain has only one class, it is called irreducible.

12 Recurrence Properties:
Let fi ≡ P( Xn ever returns to i | X0 = i ).
If fi = 1 , i is termed recurrent. If fi < 1 , i is termed transient.
If i is recurrent, X0 = i ⇒ an infinite # of returns to i.
If i is transient, X0 = i ⇒ a finite # of returns to i.


14 If i is recurrent and i ∈ Class k, then all j ∈ Class k are recurrent
If i is transient, all j in its class are transient, i.e. recurrence and transience are class properties. ⇒ The states of an irreducible Markov chain are either all transient or all recurrent. If the # of states < ∞ , all states cannot be transient ⇒ all states in a finite-state irreducible Markov chain are recurrent.
Periodicity: If for state i, pii(n) = 0 except when n is a multiple of d, where d is the largest such integer, i is said to have period d. Period is also a class property. An irreducible Markov chain is aperiodic if all of its states have period 1.
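Communicating classes can be found mechanically from the transition graph: i and j are in the same class when each is reachable from the other. A sketch (the 5-state chain below is illustrative, loosely mirroring a chain with a transient class and a closed recurrent class):

```python
# Sketch: splitting states into communicating classes by mutual reachability.
# States 0-2 can reach the closed pair {3, 4} but not return, so {0, 1, 2}
# is a (transient) class and {3, 4} is a (recurrent) class.

P = [[0.5, 0.5, 0.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0, 0.0],
     [0.3, 0.0, 0.0, 0.7, 0.0],
     [0.0, 0.0, 0.0, 0.2, 0.8],
     [0.0, 0.0, 0.0, 0.6, 0.4]]

def reachable(P, i):
    # all j with p_ij(n) > 0 for some n >= 0 (depth-first search)
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, p in enumerate(P[u]):
            if p > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

R = [reachable(P, i) for i in range(5)]
classes = {frozenset(j for j in range(5) if j in R[i] and i in R[j])
           for i in range(5)}
print(sorted(map(sorted, classes)))  # [[0, 1, 2], [3, 4]]
```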

15 [Two state diagrams: an irreducible Markov chain on states 1–5, in which all states form a single class; and a non-irreducible Markov chain on states 1, 2, 3, . . . , k, k+1, . . . whose states split into Class 1 (transient) and Class 2 (recurrent)]

16 A typical periodic MC
[State diagram: a 4-state chain on states 0, 1, 2, 3 with transition probabilities 1 and 1/2 arranged so that every return takes an even number of steps]
Recurrence times for states 0, 1 = { 2, 4, 6, 8, . . . }
Recurrence times for states 2, 3 = { 4, 6, 8, . . . } ⇒ period = 2

17 Let X0 = i , where i is a recurrent state.
Define Ti(k) ≡ the interval between the (k−1)th and kth returns to i. By the law of large numbers, the fraction of time spent in i → 1 / E(Ti) ≡ πi , where πi is the long-term fraction of time spent in state i.
i Positive Recurrent: E(Ti) < ∞ , πi > 0
i Null Recurrent: E(Ti) = ∞ , πi = 0 (e.g. all states in a random walk with p = 0.5)
i is Ergodic if it is positive recurrent and aperiodic.
Ergodic Markov Chain: an irreducible, aperiodic, positive recurrent MC.
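The relation "long-term fraction of time in i = 1 / E(Ti)" can be seen in simulation. A sketch with an illustrative two-state chain (not from the slides) whose state 0 is visited 80% of the time, so E(T0) = 1.25:

```python
import random
random.seed(0)

# Sketch: the long-run fraction of steps spent in a recurrent state i
# approaches 1/E(T_i). For this chain that fraction is 0.8 for state 0.

P = [[0.9, 0.1], [0.4, 0.6]]
def step(i):
    return 0 if random.random() < P[i][0] else 1

N = 200_000
state, visits0 = 0, 0
for _ in range(N):
    visits0 += (state == 0)
    state = step(state)

frac0 = visits0 / N
print(frac0)  # close to 0.8, i.e. 1/E(T_0) with E(T_0) = 1.25
```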

18 Limiting Probabilities:
The πj's satisfy the rule for the stationary state PMF (A): πj = Σi πi pij
This is because the long-term proportion of time in which j follows i = (long-term proportion of time in i) × P( i → j ) = πi pij , and the long-term proportion of time in j = Σi (long-term proportion of time in which j follows i) = Σi πi pij ≡ πj.

19 Theorem: For an irreducible, aperiodic, positive recurrent Markov chain, limn→∞ pij(n) = πj for all i,
where πj is the unique non-negative solution of (A) with Σj πj = 1. i.e. steady-state probability of j = stationary state PMF = long-term fraction of time in j ⇒ ergodicity.

20 Continuous-Time Markov Chains
Transition Probabilities: P( X(s+t) = j | X(s) = i ) = P( X(t) = j | X(0) = i ) ≡ pij(t) ∀ t ≥ 0
i.e. the transition probabilities depend only on t, not on s (time-invariant transition probabilities ⇒ homogeneous).
P(t) = TPM = matrix of pij(t) ∀ i, j. Clearly P(0) = I (the identity matrix).

21 Ex 8.12: Poisson Process
For a short interval δ: p_j,j+1(δ) = λδ + o(δ) and p_jj(δ) = 1 − λδ + o(δ). The process can only transition from j to j+1 or remain in j, because the probability of 2 or more transitions in δ is o(δ), i.e. negligibly small.
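The o(δ) claim can be checked numerically: for Poisson(λδ) arrivals, P(2 or more events in δ) = 1 − e^(−λδ)(1 + λδ) ≈ (λδ)²/2, so the ratio to δ vanishes as δ → 0. A sketch (λ = 3 is an illustrative rate):

```python
import math

# Sketch: P(2+ Poisson arrivals in delta) / delta -> 0, i.e. the probability
# of two or more transitions in one short step is o(delta).

lam = 3.0
ratios = []
for delta in (1e-2, 1e-3, 1e-4):
    p2plus = 1 - math.exp(-lam * delta) * (1 + lam * delta)
    ratios.append(p2plus / delta)
print(ratios)  # strictly decreasing toward 0
```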


23 State Occupancy Times:
By the Markov (memoryless) property, the time Ti spent in state i before a transition is exponentially distributed: Ti ~ exponential(νi) , with E(Ti) = 1/νi.

24 Embedded Markov Chains:
Consider a continuous-time Markov chain with state occupancy times Ti. The corresponding embedded Markov chain is a discrete-time MC with the same states as the original MC. Each time state i is entered, a Ti ~ exponential(νi) is chosen. After Ti has elapsed, a new state is transitioned to with probability qij , which depends on the transition rates of the original MC: qij = γij / νi , j ≠ i. This is very useful in generating Markov chains in simulations.
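A minimal simulation sketch of this recipe (the two-state rates are illustrative: ν = (2, 1), and with only two states the embedded chain always jumps to the other state):

```python
import random
random.seed(1)

# Sketch: simulating a CT Markov chain via its embedded chain:
# draw T_i ~ exponential(nu_i), then jump according to q_ij.

nu = [2.0, 1.0]          # total exit rate nu_i of each state
q = [[0.0, 1.0],         # q_ij = gamma_ij / nu_i (embedded-chain TPM)
     [1.0, 0.0]]

t, state, time_in = 0.0, 0, [0.0, 0.0]
while t < 10_000.0:
    hold = random.expovariate(nu[state])   # occupancy time T_i
    time_in[state] += hold
    t += hold
    state = 0 if random.random() < q[state][0] else 1

frac1 = time_in[1] / sum(time_in)
print(frac1)  # long-run fraction of time in state 1; balance gives 2/3
```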

25 Transition Rates:
γij ≡ limδ→0 pij(δ)/δ , j ≠ i (rate of transitions from i to j), and νi ≡ Σj≠i γij (total rate of leaving state i).

26 State Probabilities:
pj(t) ≡ P( X(t) = j ) = Σi pi(0) pij(t). Differentiating with respect to t yields a system of differential equations for the pj(t).

27 This is a system of Chapman–Kolmogorov equations:
dpj(t)/dt = −νj pj(t) + Σi≠j γij pi(t)
These are solved for each pj(t) using the initial PMF p(0) = [ p0(0) p1(0) p2(0) . . . ].
Note: If we start with pi(0) = 1 , pj(0) = 0 ∀ j ≠ i , then pj(t) ≡ pij(t) ⇒ the C-K equations can be used to find the TPM P(t).
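These ODEs can be integrated numerically with a small Euler step. A sketch with illustrative two-state rates γ01 = 2, γ10 = 1 (so ν = (2, 1), and the equilibrium PMF from balance is [1/3, 2/3]):

```python
# Sketch: Euler integration of dp_j/dt = -nu_j p_j + sum_{i!=j} gamma_ij p_i.
# Starting from p_0(0) = 1, the solution p_j(t) is the TPM row p_0j(t).

g = [[0.0, 2.0], [1.0, 0.0]]         # gamma_ij for i != j
nu = [sum(g[i]) for i in range(2)]   # nu_i = sum_{j != i} gamma_ij

p = [1.0, 0.0]
dt = 1e-4
for _ in range(int(20.0 / dt)):      # integrate out to t = 20
    dp = [-nu[j] * p[j] + sum(g[i][j] * p[i] for i in range(2) if i != j)
          for j in range(2)]
    p = [p[j] + dt * dp[j] for j in range(2)]

print(p)  # approaches the equilibrium PMF [1/3, 2/3]
```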

28 Steady State Probabilities:
If pj(t) → pj ∀ j as t → ∞ , the system reaches equilibrium (SS). Then dpj(t)/dt = 0 ⇒ the Global Balance Equations (GBE): νj pj = Σi≠j γij pi. Solve these equations ∀ j (with Σj pj = 1) to obtain the pj's, the equilibrium PMF. The GBE states that, at equilibrium, the rate of probability flow out of j (LHS) = the rate of probability flow into j (RHS).

29 Example: M/M/1 queue (Poisson arrivals / exponential service times / 1 server)
arrival rate = λ , service rate = μ
γi,i+1 = λ , i = 0, 1, 2, . . . ( i customers → i+1 customers )
γi,i-1 = μ , i = 1, 2, 3, . . . ( i customers → i−1 customers )
[State diagram: 0 ⇄ 1 ⇄ 2 ⇄ . . . ⇄ j ⇄ j+1 ⇄ . . . with rate λ on every rightward transition and μ on every leftward transition]

30 Ex: Birth-Death processes
λj = birth rate at state j , μj = death rate at state j
[State diagram: 0 ⇄ 1 ⇄ 2 ⇄ . . . ⇄ j ⇄ j+1 ⇄ . . . with rate λj on each transition j → j+1 and rate μj on each transition j → j−1]
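For birth-death chains, balance across each j / j+1 boundary gives the product-form solution pj = p0 · Π(λi / μi+1). A sketch with illustrative state-dependent rates (constant births, deaths proportional to j, giving a truncated Poisson shape):

```python
# Sketch: birth-death equilibrium via the product formula
# p_j = p_0 * prod_{i=0}^{j-1} (lambda_i / mu_{i+1}), then normalize.
# Rates are illustrative: lambda_j = 2, mu_j = j.

J = 30
lam = [2.0] * J                       # birth rate lambda_j, j = 0..J-1
mu = [1.0 * j for j in range(J + 1)]  # death rate mu_j = j

w = [1.0]                             # unnormalized weights, w[0] for state 0
for j in range(1, J + 1):
    w.append(w[-1] * lam[j - 1] / mu[j])
total = sum(w)
p = [x / total for x in w]

# Detailed balance: lambda_j p_j = mu_{j+1} p_{j+1}
for j in range(J):
    assert abs(lam[j] * p[j] - mu[j + 1] * p[j + 1]) < 1e-12
print(p[0], p[2])  # for these rates p_j is proportional to 2^j / j!
```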


32 Theorem: Given a CT MC X(t) with associated embedded MC [ qij ] with SS PMF πj : if [ qij ] is irreducible and positive recurrent, the long-term fraction of time spent by X(t) in state i is
pi = ( πi / νi ) / Σj ( πj / νj )
which is also the unique solution to the GBE's.
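A sketch of this formula with illustrative two-state rates ν = (2, 1); with two states the embedded chain simply alternates, so its stationary PMF is (1/2, 1/2):

```python
# Sketch: time-average occupancy from the embedded chain's stationary PMF:
# p_i = (pi_i / nu_i) / sum_j (pi_j / nu_j).

nu = [2.0, 1.0]             # exit rates of the CT chain
pi_embedded = [0.5, 0.5]    # stationary PMF of q = [[0, 1], [1, 0]]

w = [pi_embedded[i] / nu[i] for i in range(2)]  # time weight per visit
p = [x / sum(w) for x in w]
print(p)   # [1/3, 2/3], matching the global balance equations
```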

