# Introduction to Discrete-Time Semi-Markov Processes

Nur Aini Masruroh


## Recall: discrete-time Markov chain

In the DTMC, whenever the process enters a state i, we imagine that it immediately determines the next state j to which it will move, according to the transition probability p_ij.

## Discrete-time semi-Markov process

In a semi-Markov process, after the successor state j has been selected, but before making the transition from state i to state j, the process "holds" for a time t_ij in state i.

- The holding times t_ij are positive, integer-valued random variables, each governed by a probability mass function h_ij(·), called the holding-time mass function for a transition from state i to state j.
- After holding in state i for the holding time t_ij, the process makes the transition to state j and then immediately selects a new destination state k using the transition probabilities p_jk.
- It next chooses a holding time t_jk in state j according to the mass function h_jk(·) and makes its next transition at time t_jk after entering state j.
- The process continues in the same way.

## Discrete-time semi-Markov process (cont'd)

To describe a semi-Markov process completely, we need to define n² holding-time mass functions in addition to the transition probabilities.

The cumulative probability distribution of t_ij is defined as

≤h_ij(n) = Σ_{m=1..n} h_ij(m)

Suppose we know that the process has entered state i, but we do not know which successor was chosen. The pmf assigned to the time t_i spent in state i is defined as

w_i(m) = Σ_j p_ij · h_ij(m)

where w_i(m) is the probability that the system will spend m time units in state i.
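As a minimal sketch of the relation w_i(m) = Σ_j p_ij h_ij(m), the snippet below uses illustrative geometric holding times and transition probabilities out of a single state (the specific numbers are chosen for illustration only):

```python
# Transition probabilities out of state 1 (illustrative values)
p = {(1, 1): 0.8, (1, 2): 0.2}

# Geometric parameters a for the holding-time pmfs h_1j (illustrative values)
a = {(1, 1): 2/3, (1, 2): 5/6}

def h(i, j, m):
    """Geometric holding-time pmf h_ij(m) = (1-a)*a**(m-1), m = 1, 2, ..."""
    return (1 - a[(i, j)]) * a[(i, j)] ** (m - 1)

def w(i, m):
    """Waiting-time pmf: the holding time unconditional on the destination."""
    return sum(p[(i, j)] * h(i, j, m) for j in (1, 2))
```

Since each h_ij(·) is a pmf and the p_ij sum to one, w_i(·) is itself a valid pmf.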

## Discrete-time semi-Markov process (cont'd)

So t_i is the waiting time in state i, and w_i(·) is the waiting-time pmf.

- The waiting time is a holding time that is unconditional on the destination state.
- The mean waiting time is related to the mean holding times by

  τ̄_i = Σ_j p_ij · t̄_ij

- We compute the second moment of the waiting time from the second moments of the holding times:

  E[t_i²] = Σ_j p_ij · E[t_ij²]

- The variance of the waiting time is then

  Var(t_i) = E[t_i²] − τ̄_i²
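The three moment relations above can be sketched numerically; the p_ij and holding-time moments below are illustrative placeholders, not data from the text:

```python
# Illustrative inputs: p_i1, p_i2 and the first two moments of the holding times.
p_i = [0.8, 0.2]          # transition probabilities out of state i
mean_ij = [3.0, 6.0]      # mean holding times t_i1_bar, t_i2_bar
second_ij = [15.0, 66.0]  # second moments E[t_i1^2], E[t_i2^2]

# Mean waiting time: tau_i = sum_j p_ij * t_ij_bar
mean_i = sum(p * m for p, m in zip(p_i, mean_ij))

# Second moment of the waiting time from the holding-time second moments
second_i = sum(p * s for p, s in zip(p_i, second_ij))

# Variance of the waiting time: E[t_i^2] - tau_i^2
var_i = second_i - mean_i ** 2
```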

## Car rental example

A car rental company rents cars at two locations, town 1 and town 2. The company's experience shows that when a car is rented in town 1, there is a 0.8 probability that it will be returned to town 1 and a 0.2 probability that it will be returned to town 2. When a car is rented in town 2, there is a 0.7 probability that it will be returned to town 2 and a 0.3 probability that it will be returned to town 1. We assume that there are always many customers available at both towns and that cars are always rented at the town to which they were last returned.

Because of the nature of the trips involved, the length of time a car is rented depends on both where it is rented and where it is returned. The holding time t_ij is thus the length of time a car will be rented if it was rented in town i and returned to town j. From the company's records, the holding-time pmfs are geometric distributions:

- h_11(m) = (1/3)(2/3)^(m-1)
- h_12(m) = (1/6)(5/6)^(m-1)
- h_21(m) = (1/4)(3/4)^(m-1)
- h_22(m) = (1/12)(11/12)^(m-1)

## Car rental example: solution

The transition probability matrix is

P = | 0.8  0.2 |
    | 0.3  0.7 |

The holding-time distributions are all geometric. The general term of a geometric distribution is (1-a)a^(n-1), with mean 1/(1-a), second moment (1+a)/(1-a)², and variance a/(1-a)². Therefore the moments of the four holding times are:

| i → j | a     | mean t̄_ij | second moment | variance |
|-------|-------|-----------|---------------|----------|
| 1 → 1 | 2/3   | 3         | 15            | 6        |
| 1 → 2 | 5/6   | 6         | 66            | 30       |
| 2 → 1 | 3/4   | 4         | 28            | 12       |
| 2 → 2 | 11/12 | 12        | 276           | 132      |

These numbers indicate that cars rented in town 2 and returned to town 2 tend to have long rental periods.
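The moment formulas can be checked directly from the geometric parameters read off the four pmfs above:

```python
# Geometric parameters a for the four holding-time pmfs h_ij(m) = (1-a)*a**(m-1)
a = {(1, 1): 2/3, (1, 2): 5/6, (2, 1): 3/4, (2, 2): 11/12}

# mean = 1/(1-a), second moment = (1+a)/(1-a)**2, variance = a/(1-a)**2
moments = {
    ij: (1 / (1 - x), (1 + x) / (1 - x) ** 2, x / (1 - x) ** 2)
    for ij, x in a.items()
}
```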

## Car rental example: solution

A complete description of the semi-Markov process:

| transition | p_ij | h_ij(m)             | t̄_ij |
|------------|------|---------------------|------|
| 1 → 1      | 0.8  | (1/3)(2/3)^(m-1)    | 3    |
| 1 → 2      | 0.2  | (1/6)(5/6)^(m-1)    | 6    |
| 2 → 1      | 0.3  | (1/4)(3/4)^(m-1)    | 4    |
| 2 → 2      | 0.7  | (1/12)(11/12)^(m-1) | 12   |
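With the complete description in hand, the process can be simulated with the mechanism described earlier: pick a return town with probability p_ij, then hold for a geometric rental length. This is a minimal sketch using the example's parameters (function names are chosen for illustration):

```python
import random

# Transition probabilities and geometric holding-time parameters from the example
P = {1: {1: 0.8, 2: 0.2}, 2: {1: 0.3, 2: 0.7}}
A = {(1, 1): 2/3, (1, 2): 5/6, (2, 1): 3/4, (2, 2): 11/12}

def rent_once(i, rng):
    """Pick a return town j with probability p_ij, then a geometric rental length."""
    j = 1 if rng.random() < P[i][1] else 2
    m = 1                         # P(m=1) = 1-a, P(m=2) = a(1-a), ...
    while rng.random() < A[(i, j)]:
        m += 1
    return j, m

def simulate(start, n_rentals, seed=0):
    """Follow one car through n_rentals successive rentals."""
    rng = random.Random(seed)
    town, log = start, []
    for _ in range(n_rentals):
        town, length = rent_once(town, rng)
        log.append((town, length))
    return log
```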

## Car rental example: solution

If h_ij(m) is the geometric distribution (1-a)a^(m-1), m = 1, 2, 3, …, then the cumulative and complementary cumulative distributions are

≤h_ij(n) = 1 − a^n and >h_ij(n) = a^n

Therefore, the matrix forms of these distributions for the example are

≤H(n) = | 1−(2/3)^n   1−(5/6)^n   |     >H(n) = | (2/3)^n   (5/6)^n   |
        | 1−(3/4)^n   1−(11/12)^n |             | (3/4)^n   (11/12)^n |

The results show, for example, that the probability that a car rented in town 1 and returned to town 2 will be rented for n or fewer time periods is 1 − (5/6)^n, and a car rented in town 2 and returned to town 1 has probability (3/4)^n of being rented for more than n periods.
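The two closed forms can be expressed as a pair of one-line helpers (names are illustrative); note that they sum to one for any n:

```python
def cum_h(a, n):
    """P(geometric holding time <= n) = 1 - a**n."""
    return 1 - a ** n

def ccum_h(a, n):
    """P(geometric holding time > n) = a**n."""
    return a ** n
```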

## Car rental example: solution — waiting time

The mean time that a car rented in town 1 will be rented, destination unknown, is

τ̄_1 = 0.8(3) + 0.2(6) = 3.6 periods

If the car is rented in town 2, the mean is

τ̄_2 = 0.3(4) + 0.7(12) = 9.6 periods

The waiting-time pmfs (the probability that a car rented in each town will be rented for m periods, destination unknown) are:

w_1(m) = 0.8 · (1/3)(2/3)^(m-1) + 0.2 · (1/6)(5/6)^(m-1)
w_2(m) = 0.3 · (1/4)(3/4)^(m-1) + 0.7 · (1/12)(11/12)^(m-1)
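These mixtures and their means can be verified with a short sketch:

```python
def geom(a, m):
    """Geometric pmf (1-a)*a**(m-1)."""
    return (1 - a) * a ** (m - 1)

# Waiting-time pmfs for the example: mixtures of the holding-time pmfs
def w1(m):
    return 0.8 * geom(2/3, m) + 0.2 * geom(5/6, m)

def w2(m):
    return 0.3 * geom(3/4, m) + 0.7 * geom(11/12, m)

# Mean waiting times: tau_i = sum_j p_ij * t_ij_bar
mean_w1 = 0.8 * 3 + 0.2 * 6     # 3.6 periods
mean_w2 = 0.3 * 4 + 0.7 * 12    # 9.6 periods
```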

## Car rental example: solution

The cumulative and complementary cumulative distributions of the waiting times follow by mixing the corresponding holding-time distributions. For example,

>w_2(n) = 0.3(3/4)^n + 0.7(11/12)^n

is the probability that a car rented in town 2 will be rented for more than n periods when its destination is unknown.

## Interval transition probabilities, Φ_ij(n)

These correspond to the multistep transition probabilities of a Markov process. Φ_ij(n) is the probability that a discrete-time semi-Markov process will be in state j at time n, given that it entered state i at time zero: the interval transition probability from state i to state j over the interval (0, n).

- Note that an essential part of the definition is that the system *entered* state i at time zero, as opposed to simply *being* in state i at time zero.

## Limiting behavior

The chain structure of a semi-Markov process is the same as that of its imbedded Markov process. Here we deal with monodesmic semi-Markov processes.

- A monodesmic process is one whose limiting interval transition probability matrix Φ has equal rows, so the limiting probabilities do not depend on the starting state.
- Equivalently, there is only one subset of states (a single recurrent chain) that the process can occupy after infinitely many transitions.

## Limiting behavior (cont'd)

The limiting interval probabilities Φ_j for a monodesmic semi-Markov process are

Φ_j = π_j τ̄_j / Σ_k π_k τ̄_k

with:
- π_j: limiting state probability of the imbedded Markov process for state j
- τ̄_j: mean waiting time for state j

## Consider: car rental example

For the transition probability matrix of the example, the limiting state probabilities of the imbedded Markov chain are π_1 = 0.6, π_2 = 0.4.
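Combining π = (0.6, 0.4) with the mean waiting times τ̄_1 = 3.6 and τ̄_2 = 9.6 gives the limiting interval probabilities Φ_j = π_j τ̄_j / Σ_k π_k τ̄_k; a quick check:

```python
# Imbedded-chain limiting probabilities and mean waiting times from the example
pi = [0.6, 0.4]
tau = [3.6, 9.6]

# Phi_j = pi_j * tau_j / sum_k pi_k * tau_k
norm = sum(p * t for p, t in zip(pi, tau))      # 0.6*3.6 + 0.4*9.6 = 6.0
phi = [p * t / norm for p, t in zip(pi, tau)]   # [0.36, 0.64]
```

So in the long run a car spends 36% of its time rented out of town 1 and 64% out of town 2, even though the imbedded chain visits town 1 more often, because rentals from town 2 last much longer.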
