Part III: Markov Chains & Queueing Systems (presentation transcript)

Slide 1: Part III Markov Chains & Queueing Systems
10. Discrete-Time Markov Chains
11. Stationary Distributions & Limiting Probabilities
12. State Classification
13. Limiting Theorems for Markov Chains
14. Continuous-Time Markov Chains

Slide 2: 10. Discrete-Time Markov Chain
Definition of Markov Processes: A random process is said to be a Markov process if, for any set of n time points t_1 < t_2 < … < t_n in the index set or time range of the process, the conditional distribution of X(t_n), given the values of X(t_1), X(t_2), …, X(t_{n-1}), depends only on the immediately preceding value; that is, for any real numbers x_1, x_2, …, x_n,
P[X(t_n) ≤ x_n | X(t_{n-1}) = x_{n-1}, …, X(t_1) = x_1] = P[X(t_n) ≤ x_n | X(t_{n-1}) = x_{n-1}].

Slide 3: Classification of Markov Processes

State Space \ Time Parameter | Discrete                     | Continuous
Discrete                     | Discrete-Time Markov Chain   | Continuous-Time Markov Chain
Continuous                   | Discrete-Time Markov Process | Continuous-Time Markov Process

Slide 4: Discrete-Time Markov Chain
Definition: A discrete-time Markov chain {X_n | n = 0, 1, 2, …} is a discrete-time, discrete-value random sequence such that, given X_0, X_1, …, X_n, the next random variable X_{n+1} depends only on X_n through the transition probability
P[X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0] = P[X_{n+1} = j | X_n = i] = p_ij.

Slide 5: Theorem 10.1
The transition probabilities p_ij of a Markov chain satisfy
p_ij ≥ 0 and Σ_j p_ij = 1.
Pf:

Slide 6: Example 10.1
The two-state Markov chain can be used to model a wide variety of systems that alternate between ON and OFF states. After each unit of time in the OFF state, the system turns ON with probability p. After each unit of time in the ON state, the system turns OFF with probability q. Using 0 and 1 to denote the OFF and ON states, what is the Markov chain for the system?
Sol:
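As a quick illustration of this two-state model, here is a minimal simulation sketch in Python. It is not part of the original slides, and the parameter values in the final line are hypothetical.

```python
# Simulate the two-state ON/OFF chain of Example 10.1.
# State 0 = OFF, state 1 = ON; p and q are the slide's parameters.
import random

def simulate_on_off(p, q, n_steps, state=0):
    """Return the sample path [X_0, X_1, ..., X_n]."""
    path = [state]
    for _ in range(n_steps):
        r = random.random()
        if state == 0:
            state = 1 if r < p else 0   # OFF -> ON with probability p
        else:
            state = 0 if r < q else 1   # ON -> OFF with probability q
        path.append(state)
    return path

# Hypothetical values, chosen only for the demo.
print(simulate_on_off(p=0.2, q=0.5, n_steps=10))
```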

Slide 7: Example 10.2
A packet voice communication system is in the talkspurt (state 1) or silent (state 0) state. The system decides whether the speaker is talking or silent every 10 ms (slot time). If the speaker is silent in a slot, then the speaker will be talking in the next slot with probability p = 1/140. If the speaker is talking in a slot, the speaker will be silent in the next slot with probability q = 1/100. Sketch the Markov chain of this system.
Sol:

Slide 8: Example 10.3
A computer disk drive can be in one of 3 possible states: 0 (IDLE), 1 (READ), or 2 (WRITE). A unit of time is required to read or write a sector on the disk. Sketch the Markov chain.
Sol:

Slide 9: Example 10.4
In a discrete random walk, a person's position is marked by an integer on the real line. Each unit of time, the person randomly moves one step, either to the right (with probability p) or to the left. Sketch the Markov chain.
Sol:

Slide 10: Example 10.5
What is the transition matrix of the two-state ON-OFF Markov chain of Example 10.1?
Sol:

Slide 11: n-Step Transition Probabilities
Definition: For a finite Markov chain, the n-step transition probabilities are given by the matrix P(n), which has (i, j)-th element
p_ij(n) = P[X_{n+m} = j | X_m = i].

Slide 12: Theorem 10.2 (Chapman-Kolmogorov Equations)
For a finite Markov chain with K states, the n-step transition probabilities satisfy
p_ij(n + m) = Σ_k p_ik(n) p_kj(m),
where the sum runs over all states k.
Pf:

Slide 13: Theorem 10.3
For a finite Markov chain with transition matrix P, the n-step transition matrix is
P(n) = P^n.
Pf:

Slide 14: Example 10.6
For the two-state Markov chain described in Example 10.1, find the n-step transition matrix P(n). Given the system is OFF at time 0, what is the probability the system is OFF at time n = 33?
Sol:
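Theorem 10.3 makes the second question a one-line computation. The sketch below (not from the slides) raises the Example 10.1 transition matrix to the 33rd power; the values of p and q are hypothetical stand-ins.

```python
# n-step transition matrix via Theorem 10.3: P(n) = P^n.
import numpy as np

p, q = 0.2, 0.5                          # hypothetical parameters
P = np.array([[1 - p, p],
              [q,     1 - q]])           # one-step transition matrix
P33 = np.linalg.matrix_power(P, 33)      # P(33)
print("P[OFF at n=33 | OFF at n=0] =", P33[0, 0])
```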

Slide 15: State Probability Vector
Definition: A vector p = [p_0 p_1 … p_K] is a state probability vector if Σ_{j=0}^{K} p_j = 1 and each element p_j is nonnegative.

Slide 16: Theorem 10.4
The state probabilities p_j(n) at time n can be found by either one iteration with the n-step transition probabilities,
p(n) = p(0) P(n),
or n iterations with the one-step transition probabilities,
p(n) = p(n-1) P.
Pf:

Slide 17: Example 10.7
For the two-state Markov chain described in Example 10.1 with initial state probabilities p(0) = [p_0 p_1], find the state probability vector p(n).
Sol:

Slide 18: Part III Markov Chains & Queueing Systems
10. Discrete-Time Markov Chains
11. Stationary Distributions & Limiting Probabilities
12. State Classification
13. Limiting Theorems for Markov Chains
14. Continuous-Time Markov Chains

Slide 19: 11. Stationary Distributions & Limiting Probabilities
Definition of Limiting State Probabilities: For a finite Markov chain with initial state probability vector p(0), the limiting state probabilities, when they exist, are defined to be the vector
π = lim_{n→∞} p(n).

Slide 20: Example 11.1
For the two-state packet voice system of Example 10.2, what is the limiting state probability vector π = lim_{n→∞} p(n)?
Sol:
[Figure: two-state chain; 0 → 1 with probability 1/140, 1 → 0 with probability 1/100, self-loops with probabilities 139/140 and 99/100.]
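One way to check the answer numerically is to solve π = πP together with the normalization Σ_j π_j = 1. This sketch (not from the slides) does so for the 1/140 and 1/100 rates above; the least-squares call is just a convenient way to solve the small overdetermined system.

```python
# Limiting/stationary probabilities of the packet voice chain
# (Example 11.1): solve pi = pi P with sum(pi) = 1.
import numpy as np

p, q = 1 / 140, 1 / 100
P = np.array([[1 - p, p],
              [q,     1 - q]])
A = np.vstack([P.T - np.eye(2), np.ones(2)])  # (P^T - I) pi = 0, plus sum = 1
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # approximately [7/12, 5/12] = [0.583, 0.417]
```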

Slide 21: Theorem 11.1
If a finite Markov chain with transition matrix P and initial state probability vector p(0) has limiting state probability vector π = lim_{n→∞} p(n), then
π = π P.
Pf:

Slide 22: Stationary Probability Vector
Definition: For a finite Markov chain with transition matrix P, a state probability vector π is stationary if
π = π P.

Slide 23: Theorem 11.2
If a finite Markov chain X_n with transition matrix P is initialized with the stationary probability vector p(0) = π, then p(n) = π for all n and the stochastic process X_n is stationary.
Pf:

Slide 24: Example 11.2
A queueing system is described by a Markov chain in which the state X_n is the number of customers in the queue at time n. The Markov chain has a unique stationary distribution π. The following questions are all equivalent.
(1) What is the steady-state probability of at least 10 customers in the system?
(2) If we inspect the queue in the distant future, what is the probability of at least 10 customers in the system?
(3) What is the stationary probability of at least 10 customers in the system?
(4) What is the limiting probability of at least 10 customers in the system?

Slide 25: Example 11.3
Consider the two-state Markov chain of Example 10.1 and Example 10.6. For what values of p and q does π = lim_{n→∞} p(n) (a) exist, independent of the initial state probability vector p(0); (b) exist, but depend on p(0); (c) not exist?
Sol:

Slide 26: Part III Markov Chains & Queueing Systems
10. Discrete-Time Markov Chains
11. Stationary Distributions & Limiting Probabilities
12. State Classification
13. Limiting Theorems for Markov Chains
14. Continuous-Time Markov Chains

Slide 27: 12. State Classification
Accessibility: State j is accessible from state i, written i → j, if p_ij(n) > 0 for some n > 0.
Communicating States: States i and j communicate, written i ↔ j, if i → j and j → i.
Communicating Class: A communicating class is a nonempty subset of states C such that if i ∈ C, then j ∈ C if and only if i ↔ j.
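These relations are purely graph-theoretic, so they can be computed by reachability over the positive-probability edges. Below is a small sketch of my own (assuming states numbered 0..K-1 and P given as a list of rows) that extracts the communicating classes.

```python
# Communicating classes from the accessibility relation:
# i -> j iff a positive-probability path leads from i to j.
def reachable(P, i):
    """All states accessible from i (including i itself)."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v, prob in enumerate(P[u]):
            if prob > 0 and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def communicating_classes(P):
    reach = [reachable(P, i) for i in range(len(P))]
    classes, assigned = [], set()
    for i in range(len(P)):
        if i not in assigned:
            cls = {j for j in reach[i] if i in reach[j]}  # i <-> j
            classes.append(cls)
            assigned |= cls
    return classes

# Two-state ON/OFF chain with p, q > 0: one class, {0, 1}.
print(communicating_classes([[0.5, 0.5], [0.3, 0.7]]))
```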

Slide 28: State Classification
Periodic and Aperiodic: State i has period d if d is the largest integer such that p_ii(n) = 0 whenever n is not divisible by d (p_ii(n) > 0 can occur only when n is divisible by d). If d = 1, then state i is called aperiodic.
Transient and Recurrent States: In a finite Markov chain, a state i is transient if there exists a state j such that i → j but j ↛ i; otherwise, if no such state j exists, then state i is recurrent.
Irreducible Markov Chain: A Markov chain is irreducible if there is only one communicating class.

Slide 29: Example 12.1
The Markov chain shown below has p_ij > 0 on each branch. How many communicating classes are there?
Sol:
[Figure: seven-state Markov chain on states 0-6.]

Slide 30: Example 12.2
Consider the five-state discrete random walk with the Markov chain shown below. What is the period of each state i?
Sol:
[Figure: five-state random walk on states 0-4; right steps with probability p, left steps with probability 1−p, with probability-1 transitions at the boundary states.]

Slide 31: Theorem 12.1
Communicating states have the same period.
Pf:

Slide 32: Example 12.3
The Markov chain shown below has p_ij > 0 on each branch. Find the periodicity of each communicating class.
Sol:
[Figure: seven-state Markov chain on states 0-6, as in Example 12.1.]

Slide 33: Theorem 12.2
If i is recurrent and i ↔ j, then j is recurrent.
Pf:

Slide 34: Example 12.4
The Markov chain shown below has p_ij > 0 on each branch. Identify each communicating class and indicate whether it is transient or recurrent.
Sol:
[Figure: six-state Markov chain on states 0-5.]

Slide 35: Theorem 12.3
If state i is transient, then N_i, the number of visits to state i over all time, has expected value E[N_i] < ∞.
Pf:

Slide 36: Theorem 12.4
A finite-state Markov chain always has a recurrent communicating class.
Pf:

Slide 37: Example 12.5
The Markov chain shown below has p_ij > 0 on each branch. If the system starts in a state j ∈ C_1 = {0, 1, 2}, the system never leaves C_1. If the system starts in communicating class C_3 = {4, 5}, the system never leaves C_3. If the system starts in the transient state 3, then in the first step there is a random transition to either state 2 or state 4, and the system then remains forever in the corresponding communicating class.
[Figure: six-state Markov chain on states 0-5 with classes C_1 = {0, 1, 2}, C_2 = {3}, and C_3 = {4, 5}.]

Slide 38: Part III Markov Chains & Queueing Systems
10. Discrete-Time Markov Chains
11. Stationary Distributions & Limiting Probabilities
12. State Classification
13. Limiting Theorems for Markov Chains
14. Continuous-Time Markov Chains

Slide 39: 13. Limiting Theorems for Markov Chains
For Markov chains with multiple recurrent classes, the limiting state probabilities depend on the initial state distribution. To understand a system with multiple communicating classes, we need to examine each recurrent class separately as an irreducible system consisting of just that class. In this part, we first focus on irreducible, aperiodic chains and their limiting state probabilities.

Slide 40: Theorem 13.1
For an irreducible, aperiodic, finite Markov chain with states {0, 1, 2, …, K}, the limiting n-step transition matrix is
lim_{n→∞} P(n) = 1 π,
where 1 is the all-ones column vector [1 1 … 1]^T and π = [π_0 π_1 … π_K] is the unique vector satisfying
π = π P,  Σ_j π_j = 1.

Slide 41: Theorem 13.1 (cont'd)
Pf:
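A quick numerical sanity check of Theorem 13.1 (my own sketch, with a made-up irreducible, aperiodic chain): raising P to a large power should produce a rank-one matrix whose rows all equal π.

```python
# Check that P^n converges to the matrix 1*pi (every row equal to pi).
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.4, 0.4, 0.2]])      # irreducible and aperiodic
Pn = np.linalg.matrix_power(P, 200)
print(Pn)                            # all rows are (approximately) pi
```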

Slide 42: Theorem 13.2
For an irreducible, aperiodic, finite Markov chain with transition matrix P and initial state probability vector p(0),
lim_{n→∞} p(n) = π.
Pf:

Slide 43: Example 13.1
For the packet voice communications system of Example 10.2, use Theorem 13.2 to calculate the stationary probabilities.
Sol:
[Figure: two-state chain; 0 → 1 with probability 1/140, 1 → 0 with probability 1/100, self-loops with probabilities 139/140 and 99/100.]

Slide 44: Example 13.2
A digital mobile phone transmits one packet in every 20-ms time slot over a wireless connection. A packet is received in error with probability p = 0.1, independent of other packets. Whenever 5 consecutive packets are received in error, the transmitter enters a timeout state. During the timeout state, the mobile terminal performs an independent Bernoulli trial with success probability q = 0.01 in every slot. When a success occurs, the mobile terminal starts transmitting in the next slot as though no packets had been in error. Construct a Markov chain for this system. What are the limiting state probabilities?

Slide 45: Example 13.2 (cont'd)
Sol:
[Figure: chain on states 0-5; states 0-4 count consecutive errors, with i → i+1 at probability p and i → 0 at probability 1−p; state 5 is the timeout state, left for state 0 with probability q and retained with probability 1−q.]
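To make the construction concrete, here is a sketch (mine, not the slides') that builds this six-state transition matrix and estimates the limiting probabilities by taking a large matrix power.

```python
# Example 13.2: state i in 0..4 counts consecutive packet errors,
# state 5 is the timeout state.
import numpy as np

p, q = 0.1, 0.01
P = np.zeros((6, 6))
for i in range(4):
    P[i, i + 1] = p          # one more consecutive error
    P[i, 0] = 1 - p          # packet received correctly: reset
P[4, 5] = p                  # fifth consecutive error: timeout
P[4, 0] = 1 - p
P[5, 0] = q                  # Bernoulli success ends the timeout
P[5, 5] = 1 - q

pi = np.linalg.matrix_power(P, 5000)[0]   # a row of lim P^n
print(pi.round(6))
```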

Slide 46: Theorem 13.3
Consider an irreducible, aperiodic, finite Markov chain with transition probabilities {p_ij} and stationary probabilities {π_i}. For any partition of the state space into disjoint subsets S and S',
Σ_{i∈S} Σ_{j∈S'} π_i p_ij = Σ_{j∈S'} Σ_{i∈S} π_j p_ji.
Pf:

Slide 47: Example 13.3
In each time slot, a router can either store an arriving data packet in its buffer or forward a stored packet (and remove that packet from its buffer). In each time slot, a new packet arrives with probability p, independent of arrivals in all other slots. The packet is stored as long as the router is storing fewer than c packets. If c packets are already buffered, then the new packet is discarded by the router. If no new packet arrives and n > 0 packets are buffered by the router, then the router will forward one buffered packet. That packet is then removed from the router. Let X_n denote the number of buffered packets at time n. Sketch the Markov chain for X_n and find the stationary probabilities.

Slide 48: Example 13.3 (cont'd)
Sol:
[Figure: birth-death chain on states 0, 1, …, c; each state moves right with probability p and left with probability 1−p, with the cut taken between S = {0, …, i} and S' = {i+1, …, c}.]
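Applying the Theorem 13.3 cut between {0, …, n} and {n+1, …, c} gives π_n p = π_{n+1} (1−p), so π_n = π_0 α^n with α = p/(1−p). A small sketch of the resulting computation (the values of p and c are hypothetical):

```python
# Stationary probabilities of the finite router buffer (Example 13.3)
# via the cut equations: pi_n = pi_0 * alpha**n, alpha = p/(1-p).
p, c = 0.4, 10
alpha = p / (1 - p)
weights = [alpha**n for n in range(c + 1)]
total = sum(weights)                        # normalization constant 1/pi_0
pi = [w / total for w in weights]
print(round(pi[0], 4), round(sum(pi), 4))   # pi_0, and the total (1.0)
```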

Slide 49: Periodic States and Multiple Communicating Classes
Theorem 13.4: For an irreducible, recurrent, periodic, finite Markov chain with transition matrix P, the stationary probability vector π is the unique nonnegative solution of
π = π P,  Σ_j π_j = 1.
Pf:

Slide 50: Example 13.4
Find the stationary probabilities for the Markov chain shown below.
Sol:
[Figure: three-state cycle 0 → 1 → 2 → 0, each transition with probability 1.]

Slide 51: Theorem 13.5
For a Markov chain with recurrent communicating classes C_1, …, C_m, let π_j^{(k)} denote the limiting state probabilities associated with class C_k. Given that the system starts in a transient state i, the limiting probability of state j is
lim_{n→∞} p_j(n) = Σ_{k=1}^{m} P[B_ik] π_j^{(k)},
where P[B_ik] is the conditional probability that the system enters class C_k.
Pf:

Slide 52: Example 13.5
For each possible starting state i ∈ {0, 1, …, 4}, find the limiting state probabilities for the following Markov chain.
Sol:
[Figure: five-state Markov chain with branch probabilities 1/4, 1/2, and 3/4.]

Slide 53: Countably Infinite Chains
A countably infinite Markov chain has the infinite state space {0, 1, 2, …}. We will consider only a single communicating class here (irreducible Markov chains). Multiple communicating classes represent distinct system modes that are coupled only through an initial transient phase that results in the system landing in one of the communicating classes.

Slide 54: Example 13.6
Suppose that the router in Example 13.3 has unlimited buffer space. In each time slot, the router either stores an arriving data packet in its buffer or forwards a stored packet (and removes that packet from its buffer). In each time slot, a new packet is stored with probability p, independent of arrivals in all other slots. If no new packet arrives, then one packet will be removed from the buffer and forwarded. Sketch the Markov chain for X_n, the number of buffered packets at time n.
Sol:
[Figure: birth-death chain on states 0, 1, 2, …; right steps with probability p, left steps with probability 1−p.]

Slide 55: Theorem 13.6 (Chapman-Kolmogorov Equations)
The n-step transition probabilities satisfy
p_ij(n + m) = Σ_{k=0}^{∞} p_ik(n) p_kj(m).
Pf: Omitted.

Slide 56: Theorem 13.7
The state probabilities p_j(n) at time n can be found by either one iteration with the n-step transition probabilities,
p_j(n) = Σ_i p_i(0) p_ij(n),
or n iterations with the one-step transition probabilities,
p_j(n) = Σ_i p_i(n-1) p_ij.
Pf: Omitted.

Slide 57: Visitation, First Return Time, Number of Returns
Definitions: Given that the system is in state i at an arbitrary time,
(a) E_ii is the event that the system eventually returns to visit state i;
(b) T_ii is the time (number of transitions) until the system first returns to state i;
(c) N_ii is the number of times (in the future) that the system returns to state i.
Definition: For a countably infinite Markov chain, state i is recurrent if P[E_ii] = 1; otherwise state i is transient.

Slide 58: Example 13.7
A system with states {0, 1, 2, …} has the Markov chain shown below. Note that for any state i > 0, p_i,0 = 1/(i+1) and p_i,i+1 = i/(i+1). Is state 0 transient or recurrent?
Sol:
[Figure: chain on {0, 1, 2, …} with p_01 = 1, p_i,0 = 1/(i+1), and p_i,i+1 = i/(i+1) for i ≥ 1.]

Slide 59: Theorem 13.8
The expected number of visits to state i over all time is
E[N_ii] = Σ_{n=1}^{∞} p_ii(n).
Pf:

Slide 60: Theorem 13.9
State i is recurrent if and only if E[N_ii] = ∞.
Pf:

Slide 61: Example 13.8
The discrete random walk introduced in Example 10.4 has state space {…, −1, 0, 1, …} and the Markov chain shown below. Is state 0 recurrent?
Sol:
[Figure: random walk on the integers; each step right with probability p, left with probability 1−p.]

Slide 62: Positive Recurrence and Null Recurrence
Definition: A recurrent state i is positive recurrent if E[T_ii] < ∞; otherwise, state i is null recurrent.
Example 13.9: In Example 13.7, we found that state 0 is recurrent. Is state 0 positive recurrent or null recurrent?
Sol:
[Figure: the Example 13.7 chain, with p_01 = 1, p_i,0 = 1/(i+1), and p_i,i+1 = i/(i+1).]

Slide 63: Theorem 13.10
For a communicating class of a Markov chain, one of the following must be true:
(a) All states are transient.
(b) All states are null recurrent.
(c) All states are positive recurrent.

Slide 64: Example 13.10
In the Markov chain of Examples 13.7 and 13.9, is state 33 positive recurrent, null recurrent, or transient?
Sol:

Slide 65: Stationary Probabilities of Infinite Chains
Theorem 13.11: For an irreducible, aperiodic, positive recurrent Markov chain with states {0, 1, …}, the limiting n-step transition probabilities are
lim_{n→∞} p_ij(n) = π_j,
where {π_j} are the unique state probabilities satisfying
π_j = Σ_i π_i p_ij,  Σ_j π_j = 1.

Slide 66: Example 13.11
Find the stationary probabilities of the router buffer described in Example 13.6. Make sure to identify for what values of p the stationary probabilities exist.
Sol:
[Figure: birth-death chain on states 0, 1, 2, …; right steps with probability p, left steps with probability 1−p.]
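The same cut argument as in Example 13.3 gives π_n = π_0 α^n with α = p/(1−p); the geometric series is summable, and hence the stationary probabilities exist, only when α < 1, that is, p < 1/2. A small sketch of the closed form:

```python
# Stationary probabilities of the infinite router buffer
# (Examples 13.6/13.11): geometric with ratio alpha = p/(1-p).
def stationary(p, n):
    assert p < 0.5, "stationary probabilities exist only for p < 1/2"
    alpha = p / (1 - p)
    return (1 - alpha) * alpha**n

print([round(stationary(0.3, n), 4) for n in range(5)])
```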

Slide 67: Part III Markov Chains & Queueing Systems
10. Discrete-Time Markov Chains
11. Stationary Distributions & Limiting Probabilities
12. State Classification
13. Limiting Theorems for Markov Chains
14. Continuous-Time Markov Chains

Slide 68: 14. Continuous-Time Markov Chains
Definition: A continuous-time Markov chain {X(t) | t ≥ 0} is a continuous-time, discrete-value random process such that, for an infinitesimal time step of size Δ,
P[X(t + Δ) = j | X(t) = i] = q_ij Δ for j ≠ i.
The continuous-time Markov chain is closely related to the Poisson process. The time until the next transition is an exponential R.V. with parameter
ν_i = Σ_{j≠i} q_ij.

Slide 69: Embedded Discrete-Time Markov Chain
Definition: For a continuous-time Markov chain with transition rates q_ij and state i departure rates ν_i, the embedded discrete-time Markov chain has transition probabilities p_ij = q_ij / ν_i for states i with ν_i > 0, and p_ii = 1 for states i with ν_i = 0.
Definition: The communicating classes of a continuous-time Markov chain are given by the communicating classes of its embedded discrete-time Markov chain.
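The embedded-chain view also suggests how to simulate a continuous-time chain: hold in state i for an exponential(ν_i) time, then jump to j with probability q_ij/ν_i. A self-contained sketch of mine (the rate matrix Q below is hypothetical):

```python
# Simulate a continuous-time Markov chain via its embedded chain.
import random

Q = {0: {1: 2.0},             # q_ij transition rates (not probabilities)
     1: {0: 1.0, 2: 1.0},
     2: {1: 3.0}}

def simulate_ctmc(Q, state, t_end):
    t, path = 0.0, [(0.0, state)]
    while True:
        rates = Q[state]
        nu = sum(rates.values())        # departure rate of current state
        t += random.expovariate(nu)     # exponential holding time
        if t > t_end:
            return path
        r = random.uniform(0.0, nu)     # pick next state w.p. q_ij / nu
        for j, qij in rates.items():
            r -= qij
            if r <= 0.0:
                state = j
                break
        path.append((t, state))

print(simulate_ctmc(Q, state=0, t_end=5.0))
```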

Slide 70: Irreducible Continuous-Time Markov Chain
Definition: A continuous-time Markov chain is irreducible if the embedded discrete-time Markov chain is irreducible.
Definition: An irreducible continuous-time Markov chain is positive recurrent if, for all states i, the time T_ii to return to state i satisfies E[T_ii] < ∞.

Slide 71: Example 14.1
In a continuous-time ON-OFF process, alternating OFF and ON (state 0 and state 1) periods have independent exponential durations. The average ON period lasts 1/μ seconds, while the average OFF period lasts 1/λ seconds. Sketch the continuous-time Markov chain.
Sol:
[Figure: two-state chain; 0 → 1 at rate λ, 1 → 0 at rate μ.]

Slide 72: Example 14.2
An air conditioner is in one of 3 possible states: OFF (0), Low (1), or High (2). Transitions from OFF to Low occur after an exponential time with mean 3 minutes. Transitions from Low to OFF or High are equally likely, and transitions out of the Low state occur at rate 0.5 per minute. When the system is in the High state, it makes a transition to Low with probability 2/3 or to the OFF state with probability 1/3. The time spent in the High state is an exponential(1/2) R.V. Model this air conditioning system using a continuous-time Markov chain.

Slide 73: Example 14.2 (cont'd)
Sol:
[Figure: three-state chain with rates q_01 = 1/3, q_10 = 1/4, q_12 = 1/4, q_21 = 1/3, q_20 = 1/6.]

Slide 74: Theorem 14.1
For a continuous-time Markov chain, the state probabilities p_j(t) evolve according to the differential equations
dp_j(t)/dt = Σ_{i≠j} q_ij p_i(t) − ν_j p_j(t),
where ν_j = Σ_{i≠j} q_ji.
Pf:

Slide 75: Theorem 14.2
For an irreducible, positive recurrent continuous-time Markov chain, the state probabilities satisfy
lim_{t→∞} p_j(t) = p_j,
where the limiting state probabilities {p_j} are the unique solution to
p_j Σ_{i≠j} q_ji = Σ_{i≠j} q_ij p_i,  Σ_j p_j = 1.
Pf: Flow conservation law: average rate in = average rate out.

Slide 76: Example 14.3
Calculate the limiting state probabilities for the ON-OFF system of Example 14.1.
Sol:
[Figure: two-state chain; 0 → 1 at rate λ, 1 → 0 at rate μ.]

Slide 77: Example 14.4
Find the stationary distribution for the Markov chain describing the air conditioning system of Example 14.2.
Sol:
[Figure: three-state chain with rates q_01 = 1/3, q_10 = 1/4, q_12 = 1/4, q_21 = 1/3, q_20 = 1/6.]
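A numerical sketch of the balance equations for this example (my own check, using the rates reconstructed above): write the generator Q with the rates off the diagonal and −ν_i on the diagonal, then solve p Q = 0 with Σ_j p_j = 1.

```python
# Stationary distribution of the air conditioner chain (Example 14.4).
import numpy as np

Q = np.array([[-1/3,  1/3,  0.0],    # OFF:  nu_0 = 1/3
              [ 1/4, -1/2,  1/4],    # Low:  nu_1 = 1/2
              [ 1/6,  1/3, -1/2]])   # High: nu_2 = 1/2
A = np.vstack([Q.T, np.ones(3)])     # p Q = 0, plus normalization
b = np.array([0.0, 0.0, 0.0, 1.0])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p.round(4))
```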

Slide 78: Birth-Death Process
Definition: A continuous-time Markov chain is a birth-death process if the transition rates satisfy q_ij = 0 for |i − j| > 1.
[Figure: birth-death chain on states 0, 1, 2, 3, …; birth rates λ_0, λ_1, λ_2, …, death rates μ_1, μ_2, μ_3, ….]

Slide 79: Theorem 14.3
For a birth-death queue with arrival rates λ_i and service rates μ_i, the stationary probabilities p_i satisfy
p_i λ_i = p_{i+1} μ_{i+1},  Σ_i p_i = 1.
Pf:

Slide 80: Theorem 14.4
For a birth-death queue with arrival rates λ_i and service rates μ_i, let ρ_i = λ_i / μ_{i+1}. The limiting state probabilities, if they exist, are
p_i = p_0 Π_{j=0}^{i−1} ρ_j,  p_0 = [1 + Σ_{i=1}^{∞} Π_{j=0}^{i−1} ρ_j]^{−1}.
Pf:
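A short sketch of this formula; truncating the chain to finitely many states is an assumption of the demo, not of the theorem, and the rates at the bottom are hypothetical.

```python
# Limiting probabilities of a birth-death chain via Theorem 14.4.
def birth_death_pi(lam, mu):
    """lam[i] is the rate i -> i+1; mu[i] is the rate i+1 -> i."""
    weights = [1.0]                           # unnormalized p_0
    for l, m in zip(lam, mu):
        weights.append(weights[-1] * l / m)   # p_{i+1} = p_i * rho_i
    total = sum(weights)
    return [w / total for w in weights]

# Constant birth rate 1.0 and death rate 2.0, truncated to 5 states.
print(birth_death_pi([1.0] * 4, [2.0] * 4))
```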

Slide 81: The M/M/1 Queue
The arrivals are a Poisson process of rate λ, independent of the service requirements of the customers. The service time of a customer is an exponential(μ) R.V., independent of the system state.
M/M/1: Markovian arrivals (Poisson process), Markovian service times (exponential R.V.), one server.

Slide 82: Theorem 14.5
The M/M/1 queue with arrival rate λ > 0 and service rate μ, μ > λ, has limiting state probabilities
p_n = (1 − ρ) ρ^n,  n = 0, 1, 2, …,
where ρ = λ/μ.
Pf:

Slide 83: Example 14.5
Cars arrive at an isolated toll booth as a Poisson process with arrival rate λ = 0.6 cars per minute. The service required by a customer is an exponential R.V. with expected value 1/μ = 0.3 minutes. What are the limiting state probabilities for N, the number of cars at the toll booth? What is the probability that the toll booth has zero cars some time in the distant future?
Sol:
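The numbers here follow directly from Theorem 14.5; the short sketch below works them out.

```python
# Example 14.5: M/M/1 toll booth with lam = 0.6 and E[service] = 0.3 min.
lam, mu = 0.6, 1 / 0.3
rho = lam / mu                       # 0.18 < 1, so the queue is stable
p = lambda n: (1 - rho) * rho**n     # Theorem 14.5
print(p(0))                          # P[N = 0] = 1 - rho = 0.82
```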

