Markov Chain Nur Aini Masruroh.


Discrete Time Markov Chains
Consider a stochastic process {Xn, n = 0, 1, 2, …} that takes on a finite or countable number of possible values – assume the set of possible values is the nonnegative integers {0, 1, 2, …}
• Xn = i denotes that the process is in state i at time n
• {Xn, n = 0, 1, 2, …} describes the evolution of the process over time
• Define Pij as the probability that the process will next be in state j given that it is currently in state i:
Pij = P{Xn+1 = j | Xn = i}
• Such a stochastic process is known as a Discrete Time Markov Chain (DTMC)

Discrete Time Markov Chains
DTMCs can be used to model many real-life stochastic phenomena:
– Example: Xn can be the inventory on hand of a warehouse at the nth period
– Example: Xn can be the amount of money a taxi driver gets for his nth trip
– Example: Xn can be the status of a machine on the nth day of operation
– Example: Xn can be the weather (rain/shine) on the nth day

DTMC properties
Markov property: the conditional distribution of any future state Xn+1, given the past states X0, X1, …, Xn-1 and the present state Xn, is independent of the past states and depends only on the present state:
P{Xn+1 = j | Xn = i, Xn-1 = in-1, …, X1 = i1, X0 = i0} = P{Xn+1 = j | Xn = i} = Pij
– Pij represents the probability that the process will, when in state i, next make a transition into state j
Since probabilities are nonnegative and since the process must make a transition into some state, we have
Pij ≥ 0 for i, j ≥ 0, and Σj Pij = 1 for i = 0, 1, …

Short term analysis
For an N-state chain, the N² transition probabilities can be represented by an N×N matrix P with elements pij, where 0 ≤ pij ≤ 1
The sum of each row must be 1
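As a quick numerical illustration (not part of the original slides), the two conditions on P can be checked directly; the 2-state matrix below is hypothetical:

```python
import numpy as np

# Hypothetical 2-state transition matrix; rows index the current state,
# columns the next state.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Check the two conditions from the slide: every entry lies in [0, 1]
# and every row sums to 1.
assert np.all((P >= 0) & (P <= 1))
assert np.allclose(P.sum(axis=1), 1.0)
```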

Defining a DTMC
Specify the state
– State, s: a variable describing the present situation of the system, e.g. pressure, temperature, etc.
Demonstrate the Markov property
Find the transition probabilities

Example 1: rain Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today, and not the past weather conditions. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not rain. Model this as a DTMC!
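One possible encoding, sketched in code: with state 0 = rain and state 1 = no rain, the rows of P are (α, 1 − α) and (β, 1 − β). The numeric values of α and β below are placeholders, since the example leaves them symbolic.

```python
import numpy as np

alpha, beta = 0.7, 0.4  # placeholder values; the example leaves α and β symbolic

# State 0 = rain, state 1 = no rain.
P = np.array([[alpha, 1 - alpha],   # it rains today
              [beta,  1 - beta]])   # it does not rain today
```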

Example 2: stock movement
Consider the following model for the value of a stock: At the end of a given day, the price is recorded. If the stock has gone up, the probability that it will go up tomorrow is 0.7. If the stock has gone down, the probability that it will go up tomorrow is only 0.5 (treat the stock staying the same as a decrease). Model the price movement (up/down) as a DTMC

Example 3: Mood On any given day Gary is either cheerful (C), so-so (S), or glum (G). If he is cheerful today, then he will be C, S or G tomorrow with respective probabilities 0.5, 0.4, 0.1. If he is so-so today, then he will be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today, then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5. Model this as a DTMC
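A sketch of the corresponding 3×3 transition matrix, together with a short simulation loop (the one-week horizon, the seed, and starting in state C are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed for reproducibility
states = ["C", "S", "G"]
P = np.array([[0.5, 0.4, 0.1],   # cheerful today
              [0.3, 0.4, 0.3],   # so-so today
              [0.2, 0.3, 0.5]])  # glum today

# Simulate a week of moods starting from cheerful (state 0).
x = 0
for _ in range(7):
    x = rng.choice(3, p=P[x])
    print(states[x], end=" ")
```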

Example 4: Gambler’s ruin
My initial fortune is $1 and my opponent’s fortune is $2. I win a play with probability p, in which case I receive $1 and my opponent loses $1. We play until one of us has fortune 0. Model this as a DTMC
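One way to set this up, as a sketch: take the state to be my current fortune, which ranges over {0, 1, 2, 3} since the combined capital is $3; states 0 and 3 are absorbing. The numeric value of p below is a placeholder.

```python
import numpy as np

p = 0.5  # placeholder; the example leaves the win probability p symbolic

# State = my current fortune in {0, 1, 2, 3}; 0 (my ruin) and 3 (opponent's ruin)
# are absorbing, so the game stops there.
P = np.array([[1.0,   0.0, 0.0, 0.0],
              [1 - p, 0.0, p,   0.0],
              [0.0, 1 - p, 0.0, p],
              [0.0,   0.0, 0.0, 1.0]])
```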

Example 5: beer branding
A leading brewery company in Singapore (label T) has asked its IE team to analyze its market position. It is particularly concerned about its major competitor (label A). It is believed (and somewhat verified by consumer surveys) that consumers choose their brand preference based on the brand of beer they are currently consuming. From market survey data collected monthly:
– 95% of the current consumers of label T will still prefer label T in the next month, while 3% will switch to label A and the remaining 2% to label C (all other foreign brands)
– 90% of consumers of label A will remain loyal to label A, while 8% will shift preferences to label T
– 80% of consumers of label C will prefer label C, while 10% will shift preferences to label T
Model the brand loyalty/switching of consumers as a DTMC
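A sketch of the monthly transition matrix with states ordered (T, A, C); the "remaining" shares are filled in so that each row sums to 1 (2%, 2%, and 10%, respectively):

```python
import numpy as np

# States ordered (T, A, C); transitions are month to month.
P = np.array([[0.95, 0.03, 0.02],   # currently drinking T
              [0.08, 0.90, 0.02],   # currently drinking A
              [0.10, 0.10, 0.80]])  # currently drinking C
assert np.allclose(P.sum(axis=1), 1.0)
```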

State transition diagram
The transition matrix of the Markov chain can be represented by a state transition diagram ("bubble" diagram)
In the diagram:
– circles = nodes = states: one for each element of the state space
– arrows = arcs = transitions: one for each non-zero probability (labeled with probabilities)
Diagram rule: the labels on the arrows out of a state must sum to 1
Try to draw the state transition diagram for the Gambler's ruin and the beer branding examples!

Assignment 2 Consider an experiment in which a rat is wandering inside a maze. The maze has six rooms labeled F, 2, 3, 4, 5 and S. If a room has k doors, the probability that the rat selects a particular door is 1/k. However, if the rat reaches room F, which contains food, or room S, which gives it an electrical shock, then it is kept there and the experiment stops. Model the rat’s movement around the maze as a DTMC Draw the state transition diagram!

Markov process behavior
Multistep transition probabilities
Φij(n): the probability that the process occupies state j at time n given that it occupied state i at time 0 (the n-step transition probability from state i to state j)
Φij(n) = P{s(n) = j | s(0) = i}, 1 ≤ i, j ≤ N, n = 0, 1, 2, …
The probability of being in state j after n + 1 transitions, P{s(n+1) = j | s(0) = i}, can be rewritten in terms of the joint probability that state j is occupied at time n + 1 and state k is occupied at time n:
Φij(n+1) = Σk P{s(n+1) = j, s(n) = k | s(0) = i}

Multistep transition probabilities
From the definition of conditional probability:
Φij(n+1) = Σk P{s(n) = k | s(0) = i} P{s(n+1) = j | s(n) = k, s(0) = i}
If n ≥ 1, by the Markov assumption the future trajectory depends only on the present state, so the conditional probability can be rewritten as
P{s(n+1) = j | s(n) = k, s(0) = i} = P{s(n+1) = j | s(n) = k} = pkj
and therefore
Φij(n+1) = Σk Φik(n) pkj, n = 0, 1, 2, …

Multistep transition probabilities
Matrix formulation:
Φ(n+1) = Φ(n)P
Φ(0) = I (identity matrix)
Φ(1) = Φ(0)P = IP = P
Φ(2) = Φ(1)P = P^2
Φ(n) = P^n, n = 0, 1, 2, …
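A minimal sketch of Φ(n) = P^n in code, using the 2-state matrix from the marketing example on the next slide purely for illustration:

```python
import numpy as np

# Rows: last purchase was brand A, last purchase was brand B (marketing example).
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])

# Phi(n) = P^n; for example, the 5-step transition probabilities.
Phi5 = np.linalg.matrix_power(P, 5)
print(Phi5)
```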

Marketing example
In a hypothetical market there are only 2 brands, A and B. A typical consumer in this market buys brand A with probability 0.8 if his last purchase was brand A, and with probability 0.3 if his last purchase was brand B. How does the probability of the consumer's purchasing each brand depend on the number of purchases he has made and on the brand he purchased initially?

State probabilities
Let πi(n) denote the probability that state i is occupied at time n:
πi(n) = P{s(n) = i}, i = 1, 2, …, N, n = 0, 1, 2, …
We have Φij(n) = P{s(n) = j | s(0) = i}
If we multiply both sides by P{s(0) = i} and sum over i from 1 to N, we obtain
πj(n) = Σi πi(0) Φij(n)

State probabilities
The state probabilities at time n can thus be determined by weighting the multistep transition probabilities by the probability of starting in each state and summing over all states
π(n) = [π1(n), π2(n), …, πN(n)]
With this definition,
π(n) = π(0)Φ(n), n = 0, 1, 2, …
π(n) = π(0)P^n
π(n+1) = π(0)P^(n+1) = π(0)P^nP
π(n+1) = π(n)P, n = 0, 1, 2, …
The state probability vector at any time can be calculated by post-multiplying the state probability vector at the preceding time by P
Try the marketing example!
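A sketch for the marketing example, assuming (as one illustrative case) that the initial purchase was brand A, i.e. π(0) = [1, 0]:

```python
import numpy as np

P = np.array([[0.8, 0.2],   # last purchase was brand A
              [0.3, 0.7]])  # last purchase was brand B

pi = np.array([1.0, 0.0])   # assume the initial purchase was brand A
for n in range(10):
    print(n, pi)
    pi = pi @ P             # pi(n+1) = pi(n) P
```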

Asymptotic behavior of state probabilities
When the process is allowed to make a large number of transitions,
π(∞) = π(0)P^∞
If we define the limiting state probability vector as π, then
π = π(0)Φ
where Φ is the limiting multistep transition probability matrix, Φ = Φ(∞) = P^∞

Long term behavior and analysis
• In designing physical systems, there are often "start-up" effects that differ from what can be expected in the long run
– A designer might be interested in the long-run behavior, or long-run operations
• The law of large numbers (LLN) holds for iid random variables
• Question: do similar limiting results hold for a DTMC when n is large?
– Limiting distributions
– Long term averages

Limiting probabilities
Monodesmic process: a Markov process whose limiting matrix Φ has equal rows
– Sufficient condition: the process is able to make a transition (in one or more steps) from every state to every other state
– Necessary condition: there exists only one subset of states that must be occupied after infinitely many transitions
Recall: because all rows of Φ are equal for a monodesmic process, each element Φij is equal to a value Φj that depends only on the column index j. Then
πj(∞) = Σi πi(0)Φij = Φj Σi πi(0) = Φj
so the limiting state probabilities are independent of the initial state probabilities

Direct solution for limiting state probabilities
If the state probability vector has attained its limiting value π, it must satisfy the equation
π = πP
This implies the N simultaneous equations
πj = Σi πi pij, j = 1, 2, …, N
of which only N − 1 are independent; adding the normalization Σj πj = 1 determines π uniquely
Try the marketing example!
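A sketch of the direct solution for the marketing example: the N equations π = πP (one of which is redundant) are stacked with the normalization Σj πj = 1 and solved as a linear system:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
N = P.shape[0]

# Stack (P^T - I) pi = 0 with the normalization sum(pi) = 1.
A = np.vstack([P.T - np.eye(N), np.ones((1, N))])
b = np.zeros(N + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # -> [0.6, 0.4] for this matrix
```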

Example 1: rain
Suppose the chance of rain tomorrow depends on previous weather conditions only through whether or not it rains today, and not the past weather conditions. Suppose also that if it rains today, then it will rain tomorrow with probability α; and if it does not rain today, then it will rain tomorrow with probability β. Let 0 be the state when it rains and 1 when it does not rain. Find the limiting probabilities π0 and π1.
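For reference, a worked solution (a standard computation, not reproduced from the slides):

```latex
% Solve \pi = \pi P for the rain chain, with the normalization \pi_0 + \pi_1 = 1:
% \pi_0 = \alpha \pi_0 + \beta \pi_1 \implies \pi_0 (1 - \alpha) = \beta (1 - \pi_0)
\pi_0 = \frac{\beta}{1 - \alpha + \beta},
\qquad
\pi_1 = \frac{1 - \alpha}{1 - \alpha + \beta}
```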

Example 2: Tank warfare
One theoretical model for tank warfare expresses the firing mechanism as a two-state Markov process in which the states are 0: a hit, 1: a miss. Thus Xn is 0 or 1 depending on whether the nth shot is a hit or a miss. Suppose the probability of the tank's hitting on a certain shot after it had hit on the previous shot is ¾, and the probability of the tank's hitting on a certain shot after it had missed on the previous shot is ½.
– Find the probability that the 11th shot fired from the tank (the 10th shot after the first) hits its target, given that the initial shot hit.
– Suppose that a tank commander on first encountering an enemy fires his first round for ranging purposes, so it is fairly unlikely that the first round hits: specifically, suppose that the probability of a hit on the initial shot is ¼ and the probability of a miss is ¾. What is the probability that the fourth shot hits?
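Both parts can be checked numerically; a sketch with states ordered hit = 0, miss = 1:

```python
import numpy as np

# States: 0 = hit, 1 = miss.
P = np.array([[0.75, 0.25],
              [0.50, 0.50]])

# Part 1: P{11th shot hits | 1st shot hit} = Phi_00(10).
Phi10 = np.linalg.matrix_power(P, 10)
print(Phi10[0, 0])

# Part 2: initial distribution (1/4 hit, 3/4 miss); the 4th shot comes
# 3 transitions after the first, so the answer is the "hit" entry of pi(0) P^3.
pi0 = np.array([0.25, 0.75])
pi3 = pi0 @ np.linalg.matrix_power(P, 3)
print(pi3[0])
```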