1 Markov Chains (covered in Sections 1.1, 1.6, 6.3, and 9.4)

2 Markov Chains
Mathematical models for processes that evolve over time in a probabilistic manner are called stochastic processes. A special kind of stochastic process is a Markov chain, characterized by the property that probabilities involving how the process will evolve in the future depend only on the present state of the process, and so are independent of events in the past. Markov chains are used to analyze trends and predict the future (weather, stock market, genetics, product success, etc.).

3 Markov Chains
A finite Markov chain is
- a set of objects,
- a set of consecutive time periods, and
- a finite set of different states, such that
(i) during any given time period, each object is in only one state (although different objects can be in different states);
(ii) the probability that an object will move from one state to another state (or remain in the same state) over a time period depends only on the beginning and ending states.

4 Markov Chains
A Markov chain can be represented by a matrix P = [p_ij], where p_ij is the probability of an object moving from state i to state j in one time period. Such a matrix is called a transition matrix.
- The transition matrix is square (by the nature of the Markov process);
- the probabilities in each row must sum to one.
A Markov chain can also be represented by a graph. Example on the next slide.

5 Markov Chains: Coke vs. Pepsi Example
Given that a person's last cola purchase was Coke, there is a 90% chance that his next cola purchase will also be Coke. If a person's last cola purchase was Pepsi, there is an 80% chance that his next cola purchase will also be Pepsi.
[State diagram: Coke → Coke with probability 0.9, Coke → Pepsi with 0.1; Pepsi → Pepsi with 0.8, Pepsi → Coke with 0.2.]
Transition matrix (rows and columns ordered Coke, Pepsi):
P = | 0.9  0.1 |
    | 0.2  0.8 |
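A minimal sketch of this chain in code (Python with NumPy is an assumption; the slides themselves give no code):

```python
import numpy as np

# Transition matrix for the Coke vs. Pepsi chain.
# Rows and columns are ordered (Coke, Pepsi); P[i, j] is the
# probability of moving from state i to state j in one purchase.
P = np.array([
    [0.9, 0.1],   # last purchase Coke:  90% Coke, 10% Pepsi
    [0.2, 0.8],   # last purchase Pepsi: 20% Coke, 80% Pepsi
])

# Sanity check from slide 4: every row must sum to one.
assert np.allclose(P.sum(axis=1), 1.0)
```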

6 Powers of Transition Matrices: Coke vs. Pepsi Example (cont.)
Given that a person is currently a Pepsi purchaser, what is the probability that he will purchase Coke two purchases from now?
Pr[Pepsi → ? → Coke] = Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke]
= 0.2 · 0.9 + 0.8 · 0.2 = 0.34
P^2 is the transition matrix after two time periods.
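This is the (Pepsi, Coke) entry of P^2, which the sketch above can confirm directly:

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])  # Coke/Pepsi transition matrix from slide 5

P2 = P @ P  # transition matrix after two time periods

# Pr[Pepsi -> ? -> Coke] is row 1 (Pepsi), column 0 (Coke) of P^2.
print(P2[1, 0])  # 0.34, matching 0.2*0.9 + 0.8*0.2
```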

7 Powers of Transition Matrices: Coke vs. Pepsi Example (cont.)
Given that a person is currently a Coke purchaser, what is the probability that he will purchase Pepsi three purchases from now? The answer is the (Coke, Pepsi) entry of P^3, which equals 0.219.
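For higher powers, numpy.linalg.matrix_power avoids writing out the repeated multiplication; a sketch for the three-step question:

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])  # Coke/Pepsi transition matrix from slide 5

P3 = np.linalg.matrix_power(P, 3)       # transition matrix after three time periods

# Pr[Coke -> Pepsi in three purchases] is row 0 (Coke), column 1 (Pepsi) of P^3.
print(P3[0, 1])  # ~0.219
print(P3)        # approximately [[0.781, 0.219], [0.438, 0.562]]
```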

8 Distribution Row Vector
A distribution row vector d for an N-state Markov chain is an N-dimensional row vector with one component per state: the i-th component is the probability that an object in the system is in state i.
Example (cont.): Suppose 60% of all people now drink Coke and 40% drink Pepsi. Then the distribution vector is (0.6, 0.4).
Let d^(k) denote the distribution vector for a Markov chain after k time periods, so that d^(0) represents the initial distribution. Then
d^(k) = d^(0) P^k

9 Distribution Row Vector
Example (cont.): Suppose 60% of all people now drink Coke and 40% drink Pepsi. What fraction of people will be drinking Coke three weeks from now?
d^(0) = (0.6, 0.4), d^(3) = d^(0) P^3
Pr[X_3 = Coke] = 0.6 · 0.781 + 0.4 · 0.438 = 0.6438
(0.781 and 0.438 are the entries in the Coke column of P^3 from the previous slide.)
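The same number falls out of the distribution-vector formula; a sketch using the values on this slide:

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # Coke/Pepsi transition matrix from slide 5
d0 = np.array([0.6, 0.4])                # initial distribution: 60% Coke, 40% Pepsi

d3 = d0 @ np.linalg.matrix_power(P, 3)   # d^(3) = d^(0) P^3
print(d3[0])  # ~0.6438, the fraction drinking Coke three weeks from now
```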

10 Regular Markov Chains
A Markov chain is regular if some power of the transition matrix contains only positive elements. If the matrix itself contains only positive elements, then the power is one and the matrix is automatically regular.
Transition matrices that are regular always have an eigenvalue of unity. They also have a limiting distribution vector x^(∞), where the i-th component of x^(∞) is the probability that an object is in state i after a large number of time periods has elapsed.
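A brute-force regularity check is easy to sketch (the function name and the cap on the powers tried are choices made here, not from the slides):

```python
import numpy as np

def is_regular(P, max_power=100):
    """Return True if some power of P up to max_power has only positive entries."""
    Q = P.copy()
    for _ in range(max_power):
        if np.all(Q > 0):
            return True
        Q = Q @ P
    return False

P = np.array([[0.9, 0.1], [0.2, 0.8]])  # Coke/Pepsi transition matrix from slide 5
print(is_regular(P))  # True: P itself already has only positive entries
```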

11 Limiting Distribution: Coke vs. Pepsi Example (cont.)
[Plot: Pr[X_i = Coke] versus week i, converging to the stationary value 2/3. The state diagram (Coke: 0.9 stay, 0.1 to Pepsi; Pepsi: 0.8 stay, 0.2 to Coke) is repeated alongside.]

12 Regular Markov Chains
Definition: A nonzero vector x is a left eigenvector for a matrix A if there exists a scalar λ such that xA = λx. (Left and right eigenvectors are generally different; they coincide only for special classes of matrices, such as symmetric ones.)
The limiting distribution x^(∞) is a left eigenvector of the transition matrix corresponding to the eigenvalue of unity, normalized so that the sum of its components equals one.
Examples on the board.
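One way to carry out that computation numerically, sketched for the Coke vs. Pepsi chain (left eigenvectors of P are obtained here as right eigenvectors of P transpose):

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])  # Coke/Pepsi transition matrix from slide 5

# Left eigenvectors of P are right eigenvectors of P transpose.
eigvals, eigvecs = np.linalg.eig(P.T)

# Pick the eigenvector belonging to the eigenvalue closest to unity ...
k = np.argmin(np.abs(eigvals - 1.0))
x = np.real(eigvecs[:, k])

# ... and scale it so its components sum to one.
x = x / x.sum()
print(x)  # approximately [0.6667, 0.3333]: the stationary 2/3 Coke, 1/3 Pepsi split
```

This matches the 2/3 limiting value for Coke seen on slide 11.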

