Chapter 9: Markov Processes


Chapter 9: Markov Processes
- Market Share Analysis
- Transition Probabilities
- Steady-State Probabilities
- Absorbing States
- Transition Matrix with Submatrices
- Fundamental Matrix

Markov Processes
Markov process models are useful in studying the evolution of systems over repeated trials or sequential time periods or stages. Examples include:
- the promotion of managers to various positions within an organization
- the migration of people into and out of various regions of the country
- the progression of students through the years of college, including eventually dropping out or graduating

Markov Processes
Markov processes have been used to describe the probability that:
- a machine that is functioning in one period will function or break down in the next period
- a consumer purchasing brand A in one period will purchase brand B in the next period

Example: Market Share Analysis Suppose we are interested in analyzing the market share and customer loyalty for Murphy’s Foodliner and Ashley’s Supermarket, the only two grocery stores in a small town. We focus on the sequence of shopping trips of one customer and assume that the customer makes one shopping trip each week to either Murphy’s Foodliner or Ashley’s Supermarket, but not both.

Example: Market Share Analysis
We refer to the weekly periods or shopping trips as the trials of the process. Thus, at each trial, the customer will shop at either Murphy's Foodliner or Ashley's Supermarket. The particular store selected in a given week is referred to as the state of the system in that period. Because the customer has two shopping alternatives at each trial, we say the system has two states.
State 1. The customer shops at Murphy's Foodliner.
State 2. The customer shops at Ashley's Supermarket.

Example: Market Share Analysis Suppose that, as part of a market research study, we collect data from 100 shoppers over a 10-week period. In reviewing the data, suppose that we find that of all customers who shopped at Murphy’s in a given week, 90% shopped at Murphy’s the following week while 10% switched to Ashley’s. Suppose that similar data for the customers who shopped at Ashley’s in a given week show that 80% shopped at Ashley’s the following week while 20% switched to Murphy’s.
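Percentages like these are estimated by counting week-to-week moves in the observed shopping sequences. A minimal sketch in Python (the visit sequence below is invented for illustration; it is not the study's data):

```python
from collections import Counter

# Hypothetical weekly store choices for one shopper: 'M' = Murphy's, 'A' = Ashley's.
visits = list("MMMMAMMAAMMMAAAAMMMA")

# Count transitions from each week's store to the next week's store.
pairs = Counter(zip(visits, visits[1:]))

for origin in "MA":
    total = sum(n for (i, _), n in pairs.items() if i == origin)
    for dest in "MA":
        p = pairs[(origin, dest)] / total if total else 0.0
        print(f"estimated P({origin} -> {dest}) = {p:.2f}")
```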

Transition Probabilities Transition probabilities govern the manner in which the state of the system changes from one stage to the next. These are often represented in a transition matrix.

Transition Probabilities
A system has a finite Markov chain with stationary transition probabilities if:
- there are a finite number of states,
- the transition probabilities remain constant from stage to stage, and
- the probability of the process being in a particular state at stage n+1 is completely determined by the state of the process at stage n (and not by the states at earlier stages). This is referred to as the memoryless property.
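A short simulation makes the memoryless property concrete: to generate next week's state, the code looks only at the current state, never at the history. (A sketch; the transition probabilities are taken from the grocery example that follows, and the seed is arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Rows: current state; columns: next state (state 0 = Murphy's, 1 = Ashley's).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

state = 0                      # start at Murphy's
path = []
for week in range(20):
    path.append(state)
    # The next state is drawn using only the current state's row of P.
    state = rng.choice(2, p=P[state])
print(path)
```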

Example: Market Share Analysis
Transition Probabilities
$p_{ij}$ = probability of making a transition from state $i$ in a given period to state $j$ in the next period

$$P = \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} = \begin{bmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{bmatrix}$$
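For the calculations that follow, it is convenient to have this matrix in code. A minimal sketch using Python with NumPy (our choice of tooling, not the slides'):

```python
import numpy as np

# Transition matrix: rows = this week's store, columns = next week's store.
# State 1 = Murphy's Foodliner, state 2 = Ashley's Supermarket.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Sanity check: every row of a transition matrix sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```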

Example: Market Share Analysis
State Probabilities (two-week probability tree for a customer who currently shops at Murphy's):
- Murphy's, then Murphy's: P = .9(.9) = .81
- Murphy's, then Ashley's: P = .9(.1) = .09
- Ashley's, then Murphy's: P = .1(.2) = .02
- Ashley's, then Ashley's: P = .1(.8) = .08
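Each branch probability is the product of the transition probabilities along the path, and summing the branches that end at the same store gives the two-step transition probabilities, i.e. the first row of P squared. A sketch:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
stores = ["Murphy's", "Ashley's"]

# Enumerate both weeks' branches for a customer currently at Murphy's (state 0).
for week1 in (0, 1):
    for week2 in (0, 1):
        prob = P[0, week1] * P[week1, week2]
        print(f"{stores[week1]}, then {stores[week2]}: P = {prob:.2f}")

# Grouping branches by final store: .81 + .02 = .83 and .09 + .08 = .17.
print((P @ P)[0])   # [0.83 0.17]
```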

Example: Market Share Analysis
State Probabilities for Future Periods
(Two tables: one beginning initially with a Murphy's customer, one beginning initially with an Ashley's customer.)
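Those tables can be regenerated by repeatedly multiplying the state-probability vector by P, once per future week. A sketch (the five-week horizon is an arbitrary choice):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

starts = {"a Murphy's customer": np.array([1.0, 0.0]),
          "an Ashley's customer": np.array([0.0, 1.0])}

for label, state in starts.items():
    print(f"Beginning initially with {label}:")
    for week in range(1, 6):
        state = state @ P   # push the probabilities one week forward
        print(f"  week {week}: P(Murphy's) = {state[0]:.3f}, "
              f"P(Ashley's) = {state[1]:.3f}")
```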

Steady-State Probabilities
The state probabilities at any stage of the process can be calculated recursively: the state probabilities at stage n+1 are obtained by multiplying the state probabilities at stage n by the transition probability matrix. The probability of the system being in a particular state after a large number of stages is called a steady-state probability.

Steady-State Probabilities
Steady-state probabilities can be found by solving the system of equations $\pi P = \pi$ together with the condition for probabilities that $\sum_i \pi_i = 1$.
- Matrix $P$ is the transition probability matrix.
- Vector $\pi$ is the vector of steady-state probabilities.
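Numerically, a common way to solve this system is to rewrite $\pi P = \pi$ as $\pi(P - I) = 0$ and replace one of the redundant balance equations with the normalization $\sum_i \pi_i = 1$. A sketch (the helper name steady_state is ours):

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi with the entries of pi summing to 1."""
    n = P.shape[0]
    A = (P - np.eye(n)).T   # pi (P - I) = 0, transposed to A pi = 0
    A[-1, :] = 1.0          # replace the last (redundant) row with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

print(steady_state(np.array([[0.9, 0.1],
                             [0.2, 0.8]])))   # ~ [0.667, 0.333]
```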

Example: Market Share Analysis
Steady-State Probabilities
Let π1 = long-run proportion of Murphy's visits and π2 = long-run proportion of Ashley's visits. Then,

$$[\pi_1 \;\; \pi_2]\begin{bmatrix} 0.9 & 0.1 \\ 0.2 & 0.8 \end{bmatrix} = [\pi_1 \;\; \pi_2]$$

Example: Market Share Analysis
Steady-State Probabilities

$$\begin{aligned} .9\pi_1 + .2\pi_2 &= \pi_1 \quad (1)\\ .1\pi_1 + .8\pi_2 &= \pi_2 \quad (2)\\ \pi_1 + \pi_2 &= 1 \quad (3)\end{aligned}$$

Substitute π2 = 1 − π1 into (1) to give π1 = .9π1 + .2(1 − π1), so π1 = 2/3 = .667. Substituting back into (3) gives π2 = 1/3 = .333.
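A quick numerical cross-check (one of several possible): for this chain, every row of $P^n$ converges to the steady-state vector as $n$ grows.

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Both rows approach [2/3, 1/3] = [0.667, 0.333].
print(np.linalg.matrix_power(P, 50))
```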

Example: Market Share Analysis
Steady-State Probabilities
Thus, if we have 1000 customers in the system, the Markov process model tells us that in the long run, with steady-state probabilities π1 ≈ .667 and π2 ≈ .333, 667 customers will be Murphy's and 333 customers will be Ashley's.

Example: Market Share Analysis Suppose Ashley’s Supermarket is contemplating an advertising campaign to attract more of Murphy’s customers to its store. Let us suppose further that Ashley’s believes this promotional strategy will increase the probability of a Murphy’s customer switching to Ashley’s from 0.10 to 0.15.

Example: Market Share Analysis
Revised Transition Probabilities

$$P = \begin{bmatrix} 0.85 & 0.15 \\ 0.20 & 0.80 \end{bmatrix}$$

Example: Market Share Analysis
Revised Steady-State Probabilities

$$\begin{aligned} .85\pi_1 + .20\pi_2 &= \pi_1 \quad (1)\\ .15\pi_1 + .80\pi_2 &= \pi_2 \quad (2)\\ \pi_1 + \pi_2 &= 1 \quad (3)\end{aligned}$$

Substitute π2 = 1 − π1 into (1) to give π1 = .85π1 + .20(1 − π1), so π1 = .57. Substituting back into (3) gives π2 = .43.
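The same elimination can be checked numerically; a sketch using the revised matrix and the same replace-one-equation method as before:

```python
import numpy as np

P_revised = np.array([[0.85, 0.15],
                      [0.20, 0.80]])

# Solve pi (P - I) = 0 with pi1 + pi2 = 1.
A = (P_revised - np.eye(2)).T
A[1, :] = 1.0                          # normalization row
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)                              # ~ [0.571, 0.429]
```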

Example: Market Share Analysis
Suppose that the total market consists of 6000 customers per week. The new promotional strategy would increase the number of customers doing their weekly shopping at Ashley's from 2000 (a 1/3 share) to 2580 (a .43 share). If the average weekly profit per customer is $10, the proposed promotional strategy can be expected to increase Ashley's profits by $5800 per week. If the cost of the promotional campaign is less than $5800 per week, Ashley's should consider implementing the strategy.
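The dollar figures follow directly from the share change; a sketch of the arithmetic:

```python
market = 6000              # total customers per week
profit_per_customer = 10   # dollars of weekly profit per customer

old_share = 1 / 3          # Ashley's original steady-state share (~2000 customers)
new_share = 0.43           # Ashley's revised steady-state share (~2580 customers)

extra_profit = market * (new_share - old_share) * profit_per_customer
print(f"expected profit increase: ${extra_profit:.0f} per week")   # $5800
```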