
1 Markov Chain Nur Aini Masruroh

2 Discrete Time Markov Chains
Consider a stochastic process {X_n, n = 0, 1, 2, …} that takes on a finite or countable number of possible values; assume the set of possible values is the nonnegative integers {0, 1, 2, …}.
Let X_n = i denote that the process is in state i at time n.
{X_n, n = 0, 1, 2, …} describes the evolution of the process over time.
Define P_ij as the probability that the process will next be in state j given that it is currently in state i:
P_ij = P{X_{n+1} = j | X_n = i}
Such a stochastic process is known as a Discrete Time Markov Chain (DTMC).
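To make the definition concrete, here is a minimal Python sketch (the matrix values are hypothetical, chosen only for illustration) of a DTMC as a transition matrix plus a sampling rule in which the next state is drawn using only the current state:

```python
import numpy as np

# Hypothetical 2-state transition matrix: row i holds the probabilities P_ij.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

def simulate(P, start, n_steps, seed=0):
    """Sample X_0, X_1, ..., X_n: each next state is drawn using
    only the row of P for the current state (the Markov property)."""
    rng = np.random.default_rng(seed)
    states = [start]
    for _ in range(n_steps):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print(simulate(P, start=0, n_steps=10))
```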

3 Discrete Time Markov Chains
DTMCs can be used to model many real-life stochastic phenomena:
Example: X_n can be the inventory on hand of a warehouse at the nth period
Example: X_n can be the amount of money a taxi driver gets for his nth trip
Example: X_n can be the status of a machine on the nth day of operation
Example: X_n can be the weather (rain/shine) on the nth day

4 DTMC properties
Markov property: the conditional distribution of any future state X_{n+1}, given the past states X_0, X_1, …, X_{n-1} and the present state X_n, is independent of the past states and depends only on the present state:
P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_1 = i_1, X_0 = i_0} = P{X_{n+1} = j | X_n = i} = P_ij
P_ij represents the probability that the process, when in state i, will next make a transition into state j.
Since probabilities are nonnegative and the process must make a transition into some state, we have
P_ij ≥ 0 for all i, j ≥ 0, and Σ_j P_ij = 1 for each i = 0, 1, …

5 Short term analysis The N² transition probabilities can be represented by an N × N matrix P with elements p_ij, where 0 ≤ p_ij ≤ 1 and each row sums to 1.
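A quick sanity check of these two requirements can be coded in a few lines (a generic sketch, not tied to any example in the slides):

```python
import numpy as np

def is_stochastic(P, tol=1e-9):
    """Check the two requirements on a transition matrix:
    every entry lies in [0, 1] and every row sums to 1."""
    P = np.asarray(P, dtype=float)
    entries_ok = np.all((P >= 0) & (P <= 1))
    rows_ok = np.allclose(P.sum(axis=1), 1.0, atol=tol)
    return bool(entries_ok and rows_ok)

print(is_stochastic([[0.9, 0.1], [0.5, 0.5]]))   # True
print(is_stochastic([[0.9, 0.2], [0.5, 0.5]]))   # False: first row sums to 1.1
```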

6 Defining a DTMC
To define a DTMC:
Specify the state s, a variable describing the present situation of the system (e.g., pressure, temperature, etc.)
Demonstrate the Markov property
Find the transition probabilities

7 Example 1: rain Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today, and not the past weather conditions. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not rain. Model this as a DTMC!
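One way to write the model down, with α and β kept as parameters (the numeric values passed at the bottom are placeholders, not from the slide):

```python
import numpy as np

def rain_chain(alpha, beta):
    # State 0 = rain, state 1 = no rain.
    # Row 0: rains today -> rains tomorrow w.p. alpha.
    # Row 1: dry today   -> rains tomorrow w.p. beta.
    return np.array([[alpha, 1 - alpha],
                     [beta,  1 - beta]])

P = rain_chain(alpha=0.7, beta=0.4)  # placeholder values for illustration
print(P)
```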

8 Example 2: stock movement
Consider the following model for the value of a stock: at the end of a given day, the price is recorded. If the stock has gone up, the probability that it will go up tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is only 0.5 (treat the stock staying the same as a decrease). Model the price movement (up/down) as a DTMC.

9 Example 3: Mood On any given day Gary is either cheerful (C), so-so (S), or glum (G). If he is cheerful today, then he will be C, S or G tomorrow with respective probabilities 0.5, 0.4, 0.1. If he is so-so today, then he will be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today, then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5. Model this as a DTMC

10 Example 4: Gambler’s ruin
My initial fortune is $1 and my opponent’s fortune is $2. I win a play with probability p, in which case I receive $1 and my opponent loses $1; otherwise I lose $1 to my opponent. We play until one of us has fortune 0. Model this as a DTMC.
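A sketch of the implied transition matrix, using my fortune as the state: with a combined stake of $3, my fortune ranges over 0, 1, 2, 3, and states 0 and 3 are absorbing (the losing-a-play probability 1 − p is the complement stated above):

```python
import numpy as np

def gamblers_ruin(p, total=3):
    """States 0..total are my fortune; 0 and total are absorbing."""
    n = total + 1
    P = np.zeros((n, n))
    P[0, 0] = 1.0          # I am ruined: the game is over
    P[total, total] = 1.0  # my opponent is ruined: the game is over
    for i in range(1, total):
        P[i, i + 1] = p        # win a play: fortune goes up by $1
        P[i, i - 1] = 1 - p    # lose a play: fortune goes down by $1
    return P

print(gamblers_ruin(p=0.5))  # p is left as a parameter
```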

11 Example 5: beer branding
A leading brewery company in Singapore (label T) has asked its IE team to analyze its market position. It is particularly concerned about its major competitor (label A). It is believed (and somewhat verified by consumer surveys) that consumers choose their brand preference based on the brand of beer they are currently consuming. From market survey data collected monthly: 95% of the current consumers of label T will still prefer label T in the next month, while 3% will switch to label A and the remaining 2% to label C (all other foreign brands); 90% of consumers of label A will remain loyal to label A, while 8% will shift preferences to label T and the remaining 2% to label C; 80% of consumers of label C will still prefer label C, while 10% will shift preferences to label T and the remaining 10% to label A. Model the brand loyalty/switching of consumers as a DTMC.
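The survey figures translate directly into a 3×3 monthly transition matrix; the 2%, 2%, and 10% entries below are the remainders needed to make each row sum to 1:

```python
import numpy as np

brands = ["T", "A", "C"]  # state order: rows and columns follow this list
P = np.array([
    [0.95, 0.03, 0.02],  # current T drinkers: 95% stay, 3% to A, remaining 2% to C
    [0.08, 0.90, 0.02],  # current A drinkers: 90% stay, 8% to T, remaining 2% to C
    [0.10, 0.10, 0.80],  # current C drinkers: 80% stay, 10% to T, remaining 10% to A
])
assert np.allclose(P.sum(axis=1), 1.0)  # every row is a probability distribution
```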

12 State transition diagram
The transition matrix of a Markov chain can be represented by a state transition diagram (“bubble” diagram). In the diagram: circles = nodes = states, one for each element of the state space; arrows = arcs = transitions, one for each non-zero probability (each labeled with its probability). Diagram rule: the labels on the arrows leaving a state must sum to 1. Try to draw the state transition diagram for the Gambler’s ruin and the beer branding examples!

13 Assignment 2 Consider an experiment in which a rat is wandering inside a maze. The maze has six rooms, labeled F, 2, 3, 4, 5, and S. If a room has k doors, the probability that the rat selects a particular door is 1/k. However, if the rat reaches room F, which contains food, or room S, which gives it an electrical shock, then it is kept there and the experiment stops. Model the rat’s movement around the maze as a DTMC and draw the state transition diagram!

14 Markov process behavior
Multistep transition probabilities Φ_ij(n): the probability that the process occupies state j at time n given that it occupied state i at time 0 (the n-step transition probability from state i to state j):
Φ_ij(n) = P{s(n) = j | s(0) = i}, 1 ≤ i, j ≤ N, n = 0, 1, 2, …
The probability of being in state j after n + 1 transitions, P{s(n+1) = j | s(0) = i}, can be rewritten in terms of the joint probability that state j is occupied at time n + 1 and state k is occupied at time n:
Φ_ij(n+1) = P{s(n+1) = j | s(0) = i} = Σ_{k=1}^{N} P{s(n+1) = j, s(n) = k | s(0) = i}

15 Multistep transition probabilities
From the definition of conditional probability:
P{s(n+1) = j, s(n) = k | s(0) = i} = P{s(n) = k | s(0) = i} · P{s(n+1) = j | s(n) = k, s(0) = i}
If n ≥ 1, by the Markov assumption that the future trajectory depends only on the present state,
P{s(n+1) = j | s(n) = k, s(0) = i} = P{s(n+1) = j | s(n) = k} = p_kj

16 Multistep transition probabilities
Based on the Markov property, the conditional probability can therefore be rewritten, giving the recursion
Φ_ij(n+1) = Σ_{k=1}^{N} Φ_ik(n) p_kj, 1 ≤ i, j ≤ N, n = 0, 1, 2, …

17 Multistep transition probabilities
Matrix formulation:
Φ(n+1) = Φ(n)P
Φ(0) = I (the identity matrix)
Φ(1) = Φ(0)P = IP = P
Φ(2) = Φ(1)P = P^2
Φ(n) = P^n, n = 0, 1, 2, …
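The recursion and the closed form are easy to check against each other numerically; a sketch with an arbitrary stochastic matrix:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # any valid transition matrix works here

# Recursion: Phi(0) = I, Phi(n+1) = Phi(n) P
Phi = np.eye(2)
for _ in range(8):
    Phi = Phi @ P

# Closed form: Phi(n) = P^n
assert np.allclose(Phi, np.linalg.matrix_power(P, 8))
```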

18 Marketing example In a hypothetical market there are only 2 brands, A and B. A typical consumer in this market buys brand A with probability of 0.8 if his last purchase was brand A and with probability of 0.3 if his last purchase was brand B. How does the probability of the consumer’s purchasing each brand depend on the number of purchases he has made and on the brand he purchased initially?
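A sketch of the computation: row A of P^n gives the purchase probabilities after n purchases for a consumer whose first purchase was brand A, and row B for one who started with brand B (state 0 = last purchase was A, state 1 = last purchase was B):

```python
import numpy as np

# State 0 = last purchase was brand A, state 1 = last purchase was brand B.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

for n in (1, 2, 5, 20):
    print(n)
    print(np.linalg.matrix_power(P, n))
# Both rows approach [0.6, 0.4]: after many purchases, the brand
# probabilities no longer depend on the brand purchased initially.
```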

19 State probabilities Suppose the probability that state i is occupied at time n is given the symbol π_i(n), defined as π_i(n) = P{s(n) = i}, i = 1, 2, …, N, n = 0, 1, 2, … We have Φ_ij(n) = P{s(n) = j | s(0) = i}. If we multiply both sides by P{s(0) = i} and sum over i from 1 to N, we obtain
π_j(n) = Σ_{i=1}^{N} π_i(0) Φ_ij(n), j = 1, 2, …, N

20 State probabilities The state probabilities at time n can thus be determined by weighting the multistep transition probabilities by the probability of starting in each state and summing over all states. With π(n) = [π_1(n), π_2(n), …, π_N(n)], this becomes:
π(n) = π(0)Φ(n), n = 0, 1, 2, …
π(n) = π(0)P^n
π(n+1) = π(0)P^(n+1) = π(0)P^n P
π(n+1) = π(n)P, n = 0, 1, 2, …
The state probability vector at any time can be calculated by post-multiplying the state probability vector at the preceding time by P. Try the marketing example!

21 Asymptotic behavior of state probabilities
When the process is allowed to make a large number of transitions,
π(∞) = π(0)P^∞
If we define the limiting state probability vector as π, then π = π(0)Φ,
where Φ is the limiting multistep transition probability matrix, Φ = Φ(∞) = P^∞.

22 Long term behavior and analysis
In designing physical systems, there are often “start-up” effects that differ from what can be expected in the long run; a designer might be interested in the long-run behavior or operations.
The law of large numbers holds for iid random variables.
Question: do similar limiting results hold for a DTMC when n is large?
Limiting distributions
Long-term averages

23 Limiting probabilities
Monodesmic process: a Markov process whose limiting matrix Φ has equal rows.
Sufficient condition: it is possible to make a transition (in some number of steps) from every state to every other state.
Necessary condition: there exists only one subset of states that must be occupied after infinitely many transitions.
Recall that π(∞) = π(0)Φ. Because all rows of Φ are equal for a monodesmic process, each element Φ_ij is equal to a value Φ_j that depends only on the column index j. Then
π_j(∞) = Σ_i π_i(0) Φ_j = Φ_j Σ_i π_i(0) = Φ_j
so the limiting state probabilities are independent of how the process was started.

24 Direct solution for limiting state probabilities
If the state probability vector has attained its limiting value π, it must satisfy the equation π = πP. This implies the N simultaneous equations
π_j = Σ_{i=1}^{N} π_i p_ij, j = 1, 2, …, N
Since these N equations are linearly dependent, one of them is replaced by the normalization Σ_j π_j = 1. Try the marketing example!
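Numerically, the standard recipe (assuming a monodesmic process, so the limit is unique) is to rewrite π = πP as a homogeneous linear system and swap one dependent equation for the normalization; a sketch:

```python
import numpy as np

def limiting_probabilities(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    n = len(P)
    A = P.T - np.eye(n)   # row j of A encodes the balance equation for state j
    A[-1, :] = 1.0        # replace one dependent equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Marketing example from slide 18: the answer should be [0.6, 0.4].
print(limiting_probabilities(np.array([[0.8, 0.2], [0.3, 0.7]])))
```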

25 Example 1: rain Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today, and not the past weather conditions. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not rain. Find the limiting probabilities π_0 and π_1.
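For reference, substituting π_1 = 1 − π_0 into the balance equation π_0 = απ_0 + βπ_1 (the first equation of π = πP) gives a closed form:

```latex
\pi_0 = \alpha\pi_0 + \beta\pi_1, \qquad \pi_0 + \pi_1 = 1
\;\Longrightarrow\;
\pi_0 = \frac{\beta}{1-\alpha+\beta}, \qquad
\pi_1 = \frac{1-\alpha}{1-\alpha+\beta}
```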

26 Example 2: Tank warfare One theoretical model for tank warfare expresses the firing mechanism as a two-state Markov process in which the states are 0: a hit, 1: a miss. Thus X_n is 1 or 0 depending on whether the nth shot is a miss or a hit. Suppose the probability of the tank hitting on a certain shot after it had hit on the previous shot is ¾, and the probability of hitting on a certain shot after it had missed on the previous shot is ½.
Find the probability that the 11th shot fired from the tank (the 10th shot after the first) hits its target, given that the initial shot hit.
Suppose that a tank commander on first encountering an enemy fires his first round for ranging purposes, so it is fairly unlikely that the first round hits: specifically, suppose that the probability of a hit on the initial shot is ¼ and the probability of a miss on the initial shot is ¾. What is the probability that the fourth shot hits?
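A sketch of both computations: part 1 is the (0,0) entry of P^10, and part 2 propagates the ranging-shot distribution three transitions forward to the fourth shot.

```python
import numpy as np

# State 0 = hit, state 1 = miss.
P = np.array([[0.75, 0.25],   # after a hit:  hit again w.p. 3/4
              [0.50, 0.50]])  # after a miss: hit w.p. 1/2

# Part 1: P{11th shot hits | initial shot hit} = Phi_00(10).
print(np.linalg.matrix_power(P, 10)[0, 0])      # ~0.6667, essentially the limit 2/3

# Part 2: pi(0) = [1/4, 3/4]; the 4th shot is 3 transitions after the 1st.
pi0 = np.array([0.25, 0.75])
print((pi0 @ np.linalg.matrix_power(P, 3))[0])  # hit probability ~0.6602
```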

