Markov Processes
MBAP 6100 & EMEN 5600 Survey of Operations Research
Professor Stephen Lawrence
Leeds School of Business, University of Colorado Boulder, CO

OR Course Outline
–Intro to OR
–Linear Programming
–Solving LPs
–LP Sensitivity/Duality
–Transport Problems
–Network Analysis
–Integer Programming
–Nonlinear Programming
–Dynamic Programming
–Game Theory
–Queueing Theory
–Markov Processes
–Decision Analysis
–Simulation

Whirlwind Tour of OR: Markov Analysis
Andrey A. Markov (born 1856) did early work in probability theory and extended the central limit theorem to sequences of dependent random variables.

Agenda for This Week
Markov Applications
–More Markov examples
–Markov decision processes
Markov Processes
–Stochastic processes
–Markov chains
–Future probabilities
–Steady-state probabilities
–Markov chain concepts

Stochastic Processes
A stochastic process is a series of random variables {X_t} indexed over a time set T.
Examples: X_1, X_2, …, X_t, …, X_T might represent
–monthly inventory levels
–daily closing price for a stock or index
–availability of a new technology
–market demand for a product

Markov Chains
The next state depends only on the present state X_t
–given the present state, previous states and events have no influence on future behavior
The process moves to other states with known transition probabilities
Transition probabilities are stationary
–probabilities do not change over time
There is a finite number of possible states
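To make the Markov property concrete, here is a minimal simulation sketch (an addition, not from the original slides). It uses the two-state matrix of the service-station example that follows; note that each step draws the next state from the current state's row only, never from the past trajectory.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Row i gives the probabilities of the next state, given current state i.
# These are the Petroco/Gasco numbers from the example below.
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

state = 0                 # start in state 0 (Petroco)
history = [state]
for _ in range(12):
    # Only P[state] matters here -- the Markov property in action.
    state = rng.choice(2, p=P[state])
    history.append(state)

print(history)
```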

An Example of a Markov Chain
A small community has two service stations: Petroco and Gasco. The marketing department of Petroco has found that customers switch between stations from month to month according to the following transition matrix (rows: this month's station; columns: next month's):
            Petroco  Gasco
Petroco       0.6     0.4
Gasco         0.2     0.8
Note: rows sum to 1.0!

Future State Probabilities
Probability that a customer buying from Petroco this month will buy from Petroco next month: 0.6
In two months: (0.6)(0.6) + (0.4)(0.2) = 0.44
From Gasco in two months: (0.6)(0.4) + (0.4)(0.8) = 0.56

Graphical Interpretation
[Tree diagram: starting from Petroco or Gasco in the first period, branches lead to Petroco or Gasco in the second period, with branch probabilities taken from the transition matrix.]

Chapman-Kolmogorov Equations
Let P be the transition matrix for a Markov process. Then the n-step transition probability matrices can be found from powers of P:
P^(2) = P·P
P^(3) = P·P·P
and in general P^(n) = P^n

CK Equations for Example
P^(1) = [0.6  0.4; 0.2  0.8]
P^(2) = P·P = [0.44  0.56; 0.28  0.72]
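As a quick check of these numbers, here is a short added sketch (not from the slides) that computes the n-step matrices as matrix powers:

```python
import numpy as np

P = np.array([[0.6, 0.4],   # from Petroco: stay 0.6, switch 0.4
              [0.2, 0.8]])  # from Gasco:  switch 0.2, stay 0.8

P2 = P @ P                            # two-step transition probabilities
print(P2)                             # [[0.44 0.56]
                                      #  [0.28 0.72]]
print(np.linalg.matrix_power(P, 3))   # three-step probabilities
```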

Starting States
In the current month, 70% of customers shop at Petroco and 30% at Gasco. What will the mix be in 2 months?
s_0 = [0.7  0.3]
s_n = s_0 P^(n)
s_2 = s_0 P^(2) = [0.7  0.3] [0.44  0.56; 0.28  0.72] = [0.392  0.608]
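The same mix can be verified numerically; this added sketch propagates the starting vector through two transitions:

```python
import numpy as np

P  = np.array([[0.6, 0.4],
               [0.2, 0.8]])
s0 = np.array([0.7, 0.3])    # 70% Petroco, 30% Gasco this month

s2 = s0 @ np.linalg.matrix_power(P, 2)   # s_2 = s_0 P^(2)
print(s2)                                # [0.392 0.608]
```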

CK Equations in Steady State
P^(1) = [0.6  0.4; 0.2  0.8]
P^(2) = [0.44  0.56; 0.28  0.72]
…
P^(9) ≈ [0.333  0.667; 0.333  0.667]
As n grows, every row of P^(n) approaches the same steady-state probabilities.

Convergence to Steady-State
[Plot: probability of buying at Petroco by period, converging to 0.33.]
If a customer buys at Petroco this month, what is the long-run probability that the customer will buy at Petroco during any month in the future?
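The plotted convergence is easy to reproduce; a small added sketch (not from the slides) that prints the n-step Petroco-to-Petroco probability:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

Pn = np.eye(2)
for n in range(1, 10):
    Pn = Pn @ P
    # Probability of being at Petroco after n months, starting at Petroco.
    print(n, round(Pn[0, 0], 4))   # 0.6, 0.44, 0.376, ... -> 0.3333
```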

Calculation of Steady State
Want outgoing state probabilities equal to incoming state probabilities
Let s = [s_1, s_2, …, s_n] be the vector of steady-state probabilities
Then we want s = sP
That is, the state probabilities do not change from transition to transition (i.e., steady state!)

Steady-State for Example
Let s = [p  g]
Require s = sP:
[p  g] = [p  g] [0.6  0.4; 0.2  0.8]
p = 0.6p + 0.2g
g = 0.4p + 0.8g
p + g = 1
Solving: p = 0.333, g = 0.667
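For larger chains, the same system s = sP with sum(s) = 1 can be solved numerically; an added sketch using least squares on the stacked equations:

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.2, 0.8]])
n = P.shape[0]

# s = sP  <=>  (P^T - I) s^T = 0; append the normalization sum(s) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
s, *_ = np.linalg.lstsq(A, b, rcond=None)
print(s)   # [0.3333... 0.6666...]
```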

Markov Chain Concepts
Steady-State Probabilities
–long-run probability that a process starting in state i will be found in state j
First-Passage Time
–length of time (number of steps) to go from state i to state j
Recurrence Time
–length of time (number of steps) to return to state i when starting in state i

Markov Chain Concepts (cont.)
Accessible States
–state j is accessible from state i if p_ij^(n) > 0 for some n
Communicating States
–states i and j are each accessible from the other
Irreducible Markov Chains
–all states communicate with one another

Markov Chain Concepts (cont.)
Recurrent State
–a state that will certainly return to itself (f_ii = 1)
Transient State
–a state that may never return to itself (f_ii < 1)
Absorbing State
–a state that never moves to another state (p_ii = 1)
–a “black hole”
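For small chains these definitions can be checked directly from the matrix; a rough added sketch (not from the slides; the closing comment uses the standard mean-recurrence-time identity for irreducible chains):

```python
import numpy as np

P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

# Absorbing states satisfy p_ii = 1: once entered, never left.
absorbing = [i for i in range(len(P)) if P[i, i] == 1.0]
print(absorbing)   # [] -- the service-station chain has no absorbing state

# For an irreducible chain, the mean recurrence time of state i equals
# 1 / (steady-state probability of i): here 1 / (1/3) = 3 months for Petroco.
```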

Markov Examples Markov Decision Processes

Matrix Multiplication
Matrix multiplication in Excel (e.g., with the MMULT array function)…

Machine Breakdown Example
A critical machine in a manufacturing operation breaks down with some frequency. The hourly up-down transition matrix for the machine is shown below. What percentage of the time is the machine operating (up)?
[Transition matrix over states Up and Down; values not legible in this transcript.]
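The slide's matrix did not survive in this transcript, so the numbers below are assumed for illustration only; the method — compute the steady-state probability of Up — is the point:

```python
import numpy as np

# ASSUMED values (the original slide's matrix is not in the transcript):
# Up stays Up with prob 0.9; Down returns to Up with prob 0.7 each hour.
P = np.array([[0.9, 0.1],    # Up   -> [Up, Down]
              [0.7, 0.3]])   # Down -> [Up, Down]

# Long-run fraction of hours the machine is up = steady-state prob of Up.
A = np.vstack([P.T - np.eye(2), np.ones(2)])
s, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)
print(s[0])   # 0.875 -> up 87.5% of the time under these assumed numbers
```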

Credit History Example
The Rifle, CO Mercantile Department Store wants to analyze the payment behavior of customers who have outstanding accounts. The store’s credit department has determined the following bill payment pattern from historical records:
[Transition matrix over states Pay and No Pay; values not legible in this transcript.]

Credit History Continued
Further analysis reveals the following credit transition matrix at the Rifle Mercantile:
[Transition matrix over states Pay, 1, 2, and Bad; values not legible in this transcript.]

University Graduation Example
Fort Lewis College in Durango has determined that students progress through the college according to the following transition matrix:
[Transition matrix over states F, So, J, Sr, D, and G — presumably freshman, sophomore, junior, senior, dropout, and graduate; values not legible in this transcript.]
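Both the credit and graduation examples are absorbing chains, and the standard analysis uses the fundamental matrix. Since neither slide's numbers survive in this transcript, the matrices below are hypothetical, chosen only to show the mechanics; the same code answers the credit question with its own Q and R.

```python
import numpy as np

# HYPOTHETICAL transition structure for the graduation example:
# transient states [F, So, J, Sr]; absorbing states [D, G].
Q = np.array([[0.1, 0.7, 0.0, 0.0],    # transient -> transient
              [0.0, 0.1, 0.8, 0.0],
              [0.0, 0.0, 0.1, 0.8],
              [0.0, 0.0, 0.0, 0.1]])
R = np.array([[0.2, 0.0],              # transient -> [D, G]
              [0.1, 0.0],
              [0.1, 0.0],
              [0.1, 0.8]])

N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix: expected visits
B = N @ R                          # absorption probabilities
print(B[0])   # [P(freshman eventually drops out), P(eventually graduates)]
```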