
1 Introduction to Discrete-Time Markov Chain

2 Motivation
- many dependent systems, e.g.,
  - inventory across periods
  - state of a machine
  - customers unserved in a distribution system
[figure: machine state over time, ranging over excellent / good / fair / bad]

3 Motivation
- any nice limiting results for dependent X_n's?
  - no such results for general dependent X_n's
  - nice results when the X_n's form a discrete-time Markov chain

4 Discrete-Time, Discrete-State Stochastic Process
- a stochastic process: a sequence of indexed random variables, e.g., {X_n}, {X(t)}
- a discrete-time stochastic process: {X_n}
- a discrete-state stochastic process, e.g.,
  - state ∈ {excellent, good, fair, bad}
  - set of states: {e, g, f, b}, {1, 2, 3, 4}, or {0, 1, 2, 3}
  - state to describe weather ∈ {windy, rainy, cloudy, sunny}

5 Markov Property
- a discrete-time, discrete-state stochastic process possesses the Markov property if
  P{X_{n+1} = j | X_n = i, X_{n−1} = i_{n−1}, …, X_1 = i_1, X_0 = i_0} = p_ij
  for all states i_0, i_1, …, i_{n−1}, i, j and all n ≥ 0
- time frame: present = n, future = n+1, past = {i_0, i_1, …, i_{n−1}}
- meaning of the statement: given the present, the past and the future are conditionally independent
- without conditioning on the present, the past and the future are in general dependent

6 One-Step Transition Probability Matrix
- P = [p_ij]: the matrix of one-step transition probabilities p_ij = P{X_{n+1} = j | X_n = i}
- p_ij ≥ 0 for all i, j ≥ 0, and Σ_j p_ij = 1 for every i (each row of P sums to one)
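A minimal sketch (assuming numpy; the helper name is_stochastic is ours) that checks both conditions on a candidate transition matrix:

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check the two conditions on a one-step transition matrix:
    nonnegative entries and every row summing to 1."""
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all() and np.allclose(P.sum(axis=1), 1.0, atol=tol))

# a small two-state example (values illustrative)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(is_stochastic(P))  # True
```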

7 Example 4-1 Forecasting the Weather
- state ∈ {rain, not rain}
- dynamics of the system:
  - rains today → rains tomorrow w.p. α
  - does not rain today → rains tomorrow w.p. β
- weather of the system across the days: {X_n}
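A short simulation sketch of this two-state weather chain; the function and the sample values α = 0.7, β = 0.4 are illustrative assumptions, not values from the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_weather(alpha, beta, n_days, rain_today=True):
    """Simulate {X_n} with state 1 = rain, state 0 = no rain:
    P(rain tomorrow | rain today) = alpha,
    P(rain tomorrow | no rain today) = beta."""
    states = [1 if rain_today else 0]
    for _ in range(n_days - 1):
        p_rain = alpha if states[-1] == 1 else beta
        states.append(1 if rng.random() < p_rain else 0)
    return states

print(simulate_weather(alpha=0.7, beta=0.4, n_days=10))
```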

8 Example 4-3 The Mood of a Person
- mood ∈ {cheerful (C), so-so (S), glum (G)}
- cheerful today → C, S, or G tomorrow w.p. 0.5, 0.4, 0.1
- so-so today → C, S, or G tomorrow w.p. 0.3, 0.4, 0.3
- glum today → C, S, or G tomorrow w.p. 0.2, 0.3, 0.5
- X_n: mood on the nth day, where mood ∈ {C, S, G}
- {X_n}: a 3-state Markov chain (state 0 = C, state 1 = S, state 2 = G)
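The three rows above form the transition matrix P, and powers of P give the n-step transition probabilities (Chapman–Kolmogorov). A quick sketch, assuming numpy:

```python
import numpy as np

# rows/columns ordered C, S, G (states 0, 1, 2)
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

P3 = np.linalg.matrix_power(P, 3)  # 3-step transition matrix P^3
print(P3[0, 2])                    # P{X_3 = G | X_0 = C}
```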

9 Example 4.5 A Random Walk Model
- a discrete-time Markov chain with state space {…, −2, −1, 0, 1, 2, …}
- random walk: for 0 < p < 1,
  p_{i,i+1} = p = 1 − p_{i,i−1}, i = 0, ±1, ±2, …
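A minimal simulation sketch of the random walk; the choice p = 0.5 is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_walk(p, n_steps, start=0):
    """Random walk on the integers: step +1 w.p. p, step -1 w.p. 1 - p."""
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return start + np.cumsum(steps)

print(random_walk(p=0.5, n_steps=10))
```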

10 Example 4.6 A Gambling Model
- on each play of a game, a gambler gains $1 w.p. p and loses $1 otherwise
- the game ends when the gambler either goes broke or accumulates a fortune of $N
- transition probabilities:
  p_{i,i+1} = p = 1 − p_{i,i−1}, i = 1, 2, …, N − 1; p_00 = p_NN = 1
- example for N = 4:
  - state: X_n, the gambler's fortune after the nth play, ∈ {0, 1, 2, 3, 4}
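A sketch that simulates the gambling chain until absorption and estimates the probability of reaching $N before going broke; N = 4 follows the example, while p = 0.5 and the starting fortune are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def play_until_absorption(p, N, start):
    """Run the chain from fortune `start` until it hits 0 or N;
    return the absorbing state reached."""
    x = start
    while 0 < x < N:
        x += 1 if rng.random() < p else -1
    return x

# estimate P(reach N before 0) starting from $2 with p = 0.5
wins = sum(play_until_absorption(0.5, 4, 2) == 4 for _ in range(10_000))
print(wins / 10_000)  # close to 0.5 by symmetry when p = 0.5
```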

11 Limiting Behavior of Irreducible Chains

12 Limiting Behavior of a Positive Irreducible Chain
- cost of a visit: state 1 = $5, state 2 = $8
- what is the long-run cost per period of this DTMC?

13 Limiting Behavior of a Positive Irreducible Chain
- π_j = long-run fraction of time spent in state j
- N: a very large positive integer, so # of periods in state j ≈ π_j N
- balance of flow: π_j N ≈ Σ_i (π_i N) p_ij ⟹ π_j = Σ_i π_i p_ij
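The fraction-of-time reading of π_j can be checked by simulation. A sketch that reuses the mood chain of Example 4-3 and reports the empirical visit fractions over a long run:

```python
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.5, 0.4, 0.1],   # mood chain from Example 4-3
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

n, x = 200_000, 0
visits = np.zeros(3)
for _ in range(n):
    visits[x] += 1
    x = rng.choice(3, p=P[x])    # one step of the chain

print(visits / n)  # empirical fractions, close to the stationary pi
```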

14 Limiting Behavior of a Positive Irreducible Chain
- π_j = fraction of time at state j, where π_j = Σ_i π_i p_ij
- for the two-state chain with p_11 = 0.9, p_12 = 0.1, p_21 = 0.2, p_22 = 0.8:
  π_1 = 0.9π_1 + 0.2π_2, π_2 = 0.1π_1 + 0.8π_2
- the two balance equations are linearly dependent, so add the normalization equation: π_1 + π_2 = 1
- solving: π_1 = 2/3, π_2 = 1/3
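A quick numerical check that (2/3, 1/3) satisfies both the balance and the normalization equations for the two-state matrix above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])

print(np.allclose(pi @ P, pi))    # balance: pi_j = sum_i pi_i p_ij
print(np.isclose(pi.sum(), 1.0))  # normalization
```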

15 Limiting Behavior of a Positive Irreducible Chain
- balance equations for a three-state chain:
  π_1 = 0.75…, π_3 = 0.25π_2
- normalization equation: π_1 + π_2 + π_3 = 1
- solving: π_1 = 301/801, π_2 = 400/801, π_3 = 100/801

16 Limiting Behavior of a Positive Irreducible Chain
- an irreducible DTMC {X_n} is positive ⟺ there exists a unique nonnegative solution to
  π_j = Σ_i π_i p_ij for all j, with Σ_j π_j = 1
- π_j: the stationary (steady-state) distribution of {X_n}
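The theorem suggests solving the balance equations together with normalization as one linear system. A generic solver sketch (the function name is ours), demonstrated on the mood chain of Example 4-3:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P with sum(pi) = 1: drop one (redundant) balance
    equation and replace it with the normalization equation."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
print(stationary_distribution(P))
```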

17 Limiting Behavior of a Positive Irreducible Chain
- π_j = fraction of time at state j = fraction of expected time at state j
- average cost per period:
  - deterministic cost c_j for each visit to state j: long-run average cost = Σ_j c_j π_j
  - random i.i.d. cost C_j for each visit to state j: long-run average cost = Σ_j E(C_j) π_j
- for an aperiodic chain: lim_{n→∞} P{X_n = j} = π_j

18 Limiting Behavior of a Positive Irreducible Chain
- π_1 = 301/801, π_2 = 400/801, π_3 = 100/801
- profit per visit: c_1 = 4, c_2 = 8, c_3 = −2
- average profit = Σ_j c_j π_j = [4(301) + 8(400) − 2(100)]/801 = 4204/801 ≈ 5.25
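The arithmetic as a one-line check:

```python
import numpy as np

pi = np.array([301, 400, 100]) / 801
c = np.array([4, 8, -2])
print(pi @ c)  # (4*301 + 8*400 - 2*100) / 801 = 4204/801 ≈ 5.2484
```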

19 Limiting Behavior of a Positive Irreducible Chain
- π_1 = 301/801, π_2 = 400/801, π_3 = 100/801
- C_1 ~ unif[0, 8], C_2 ~ Geometric(1/8), C_3 = −4 w.p. 0.5 and 0 w.p. 0.5
- E(C_1) = 4, E(C_2) = 8, E(C_3) = −2
- average profit = Σ_j E(C_j) π_j = [4(301) + 8(400) − 2(100)]/801 = 4204/801 ≈ 5.25
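With random i.i.d. per-visit profits, only the means matter in the long run. A simulation sketch that draws the C_j's as described; since the average depends only on how often each state is visited, it samples states directly from π as a shortcut rather than stepping the chain:

```python
import numpy as np

rng = np.random.default_rng(4)
pi = np.array([301, 400, 100]) / 801

def draw_profit(j):
    """C_1 ~ unif[0, 8]; C_2 ~ Geometric(1/8) with mean 8;
    C_3 = -4 or 0, each w.p. 0.5."""
    if j == 0:
        return rng.uniform(0, 8)
    if j == 1:
        return rng.geometric(1/8)
    return rng.choice([-4, 0])

states = rng.choice(3, size=100_000, p=pi)
print(np.mean([draw_profit(j) for j in states]))  # ≈ 4204/801 ≈ 5.25
```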