Markov Processes What is a Markov Process?

1 Markov Processes What is a Markov Process?
A stochastic process that has the Markov property. A process {Xt} has the Markov property if
P(Xt+1 = j | Xt = i, Xt-1 = kt-1, …, X1 = k1, X0 = k0) = P(Xt+1 = j | Xt = i)
for t = 0, 1, … and every sequence of states i, j, k0, k1, …, kt-1. In other words, the distribution of any future state depends only on the current state, not on the path taken to reach it.
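The Markov property can be illustrated by simulation: a minimal Python/NumPy sketch of a two-state chain (the matrix values are illustrative; they match the win/lose example on a later slide). The empirical transition frequencies out of each state converge to the rows of P, regardless of earlier history.

```python
import numpy as np

# Illustrative two-state transition matrix (assumed values for the demo).
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

rng = np.random.default_rng(0)
state = 0
counts = np.zeros((2, 2))  # counts[i, j] = observed transitions i -> j

for _ in range(100_000):
    nxt = rng.choice(2, p=P[state])  # next state depends only on the current state
    counts[state, nxt] += 1
    state = nxt

# Row-normalize the counts to get empirical one-step transition frequencies.
empirical = counts / counts.sum(axis=1, keepdims=True)
print(empirical)  # each row should be close to the corresponding row of P
```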

2 Markov Processes cont. The conditional probability
pij = P(Xt+1 = j | Xt = i)
is called the one-step transition probability. If P(Xt+1 = j | Xt = i) = P(X1 = j | X0 = i) for all t = 1, 2, …, then the one-step transition probabilities are said to be stationary and are therefore referred to as stationary transition probabilities.

3 Markov Processes cont. Arrange the pij by state (shown here for four states):

        state   0    1    2    3
          0    p00  p01  p02  p03
P =       1    p10  p11  p12  p13
          2    p20  p21  p22  p23
          3    p30  p31  p32  p33

P is referred to as the probability transition matrix.

4 Markov Processes cont. Suppose the probability you win is based on whether you won the last time you played some game. Say, if you won last time, there is a 70% chance of winning the next time. However, if you lost last time, there is a 60% chance you lose the next time. Can the process of winning and losing be modeled as a Markov process? Let state 0 be you win and state 1 be you lose; then:

        state   0     1
P =       0    .70   .30
          1    .40   .60

5 Markov Processes cont. See handout on n-step transition matrix.
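The n-step transition matrix P(n) is the n-th matrix power of P (by the Chapman–Kolmogorov equations, P(n) = P(m) P(n-m)). A short sketch using the win/lose matrix from slide 4:

```python
import numpy as np

# Win/lose matrix from slide 4: state 0 = win, state 1 = lose.
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

# n-step transition probabilities: P(n) = P^n.
P4 = np.linalg.matrix_power(P, 4)
print(P4)  # P4[i, j] = probability of being in j four plays after being in i
```

Each row of P(n) still sums to 1, and P(4) = P(2) P(2), matching Chapman–Kolmogorov.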

6 Markov Processes cont. As n grows large, every row of the n-step transition matrix approaches the same vector:

         state   0    1    2   …   N
           0    p0   p1   p2   …   pN
P(n) ≈     1    p0   p1   p2   …   pN
           2    p0   p1   p2   …   pN
           …
           N    p0   p1   p2   …   pN

Then p = [p0, p1, p2, …, pN] are the steady-state probabilities.

7 Markov Processes cont. Observing that P(n) = P(n-1)P, as n → ∞ this becomes p = pP:

                                              p00  p01  p02  …  p0N
[p0, p1, p2, …, pN] = [p0, p1, p2, …, pN]  ×  p10  p11  p12  …  p1N
                                              p20  p21  p22  …  p2N
                                              …
                                              pN0  pN1  pN2  …  pNN

This matrix equation yields N+1 equations in N+1 unknowns; however, the rank of the system is only N, so the equations are not independent. Adding the normalization condition p0 + p1 + p2 + … + pN = 1 restores a full set of N+1 independent equations in N+1 unknowns.


9 Markov Processes cont. Example of obtaining p = pP from the transition matrix:

        state   0     1
P =       0    .70   .30
          1    .40   .60

p0 = .70 p0 + .40 p1
p1 = .30 p0 + .60 p1
p0 + p1 = 1

The first equation gives .30 p0 = .40 p1, so p0 = (4/3) p1; substituting into the normalization condition yields p1 = 3/7 and p0 = 4/7.
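For the win/lose matrix from slide 4, the balance equations p = pP plus normalization give p0 = 4/7 and p1 = 3/7; a minimal numeric check:

```python
import numpy as np

# Win/lose matrix from slide 4: state 0 = win, state 1 = lose.
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])

# Balance equations from p = pP:
#   p0 = .70 p0 + .40 p1   ->   .30 p0 = .40 p1
#   p0 + p1 = 1            ->   p0 = 4/7, p1 = 3/7
pi = np.array([4/7, 3/7])
print(pi @ P)  # equals pi, confirming these are the steady-state probabilities
```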

10 Markov Processes cont. Break for Exercise

11 Markov Processes cont. State diagrams (for the win/lose chain):

        state   0     1
P =       0    .70   .30
          1    .40   .60

[State diagram: nodes 0 and 1, with arcs 0→0 (.70), 0→1 (.30), 1→0 (.40), 1→1 (.60).]

12 Markov Processes cont. State diagrams (four-state example):

        state   0    1    2    3
          0    .5   .5    0    0
P =       1     …
          2     …
          3     …

[State diagram for the four-state chain; rows 1–3 of P were not captured in the transcript.]

13 Markov Processes cont. Classification of States:
1. A state j is accessible from state i (i → j) if it is possible to transition from i to j in a finite number of steps.
2. States i and j communicate (i ↔ j) if i → j and j → i.
3. The communicating class of state i is the set C(i) = {j : i ↔ j}.
4. If the communicating class C(i) = ∅, then i is a non-return state.
5. A process is said to be irreducible if all states within the process communicate.
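Accessibility and communicating classes can be computed mechanically from the zero pattern of P: j is accessible from i when (I + A)^(n-1) has a positive (i, j) entry, where A is the adjacency pattern of P and n the number of states. A sketch on an assumed four-state matrix (only row 0 of slide 12's matrix survives in this transcript; the remaining rows here are illustrative):

```python
import numpy as np

# Assumed example with two closed communicating classes {0,1} and {2,3}.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.5, 0.5]])

n = P.shape[0]
# reach[i, j] is True iff j is accessible from i in at most n-1 steps.
reach = np.linalg.matrix_power(np.eye(n) + (P > 0), n - 1) > 0
communicates = reach & reach.T   # i <-> j

classes = {tuple(int(j) for j in np.flatnonzero(communicates[i]))
           for i in range(n)}
print(sorted(classes))  # the communicating classes: [(0, 1), (2, 3)]
```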

14 Markov Processes cont.
6. A closed communicating class is such that there is no escape from the class. Note: an ergodic process can have at most 1 closed communicating class.
7. If i is the only member of C(i) and no state j is accessible from i, then i is an absorbing or capturing state: pii = 1.
8. A return state may be revisited infinitely often (recurrent) or only finitely often (non-recurrent, or transient) in the long run.

15 Markov Processes cont. First Passage Times:
1. The first passage time from state i to j, Tij, is the number of transitions required to enter state j for the first time, given we start in state i.
2. The recurrence time from state i, Tii, is the number of transitions required to return to state i for the first time.
3. The first passage probability fij(n) is the probability that the first passage time from i to j equals n:
fij(n) = P[Tij = n]
fij(1) = pij
fij(2) = P[Xn+2 = j, Xn+1 ≠ j | Xn = i] = Σ (k ≠ j) pik pkj

16 Markov Processes cont. First Passage Times:
1. The mean first passage time: mij = E[Tij] = Σ (n = 1 to ∞) n fij(n). mii is the mean recurrence time.
   If mii = ∞, then i is null recurrent; if mii < ∞, then i is positive recurrent.
2. Probability of absorption: the probability of ever going from state i to k, fik = Σ (n = 1 to ∞) fik(n).
   Let fii = Σ (n = 1 to ∞) fii(n) for i = 0, 1, 2, …, M.
   If fii = 1, then i is recurrent; if fii < 1, then i is transient.

17 Markov Processes cont. Expected Average Value (Cost) per Unit Time:
How does one find the long-run average reward (cost) of a Markov process? Let V(Xt) be a function that represents the reward for being in state Xt. Then the long-run expected average reward per unit time is
lim (n → ∞) E[ (1/n) Σ (t = 1 to n) V(Xt) ] = Σ (j = 0 to N) pj V(j)
where [p0, p1, …, pN] are the steady-state probabilities.
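A minimal sketch of the long-run average reward for the win/lose chain, weighting a reward vector by the steady-state probabilities (the reward values V are assumptions for illustration, not from the slides):

```python
import numpy as np

# Win/lose matrix from slide 4 and its steady-state probabilities (4/7, 3/7).
P = np.array([[0.70, 0.30],
              [0.40, 0.60]])
pi = np.array([4/7, 3/7])

# Assumed rewards: winning pays 100, losing costs 50.
V = np.array([100.0, -50.0])

# Long-run average reward per play = sum_j pi_j * V(j).
avg_reward = pi @ V
print(avg_reward)  # (4/7)*100 + (3/7)*(-50) = 250/7 ≈ 35.71
```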

