
Slide 1 (© 2005 Thomson/South-Western): Final Exam: December 6 (Tue). Due Date: December 12 (Mon). Exam Materials: all topics covered after the Midterm Exam.

Slide 2: EMGT 501 HW Solutions: Chapter 16, Self Test 9; Chapter 16, Self Test 35.

Slide 3: 16-9. Note: Results were obtained using the Forecasting module of The Management Scientist. a. Comparing the 3-quarter and 4-quarter moving averages, the 3-quarter moving average forecast is better because it has the smallest MSE.

Slide 4: b. Comparing exponential smoothing forecasts, the α = .5 smoothing constant is better because it has the smallest MSE. c. The exponential smoothing forecast is better than the moving-average forecast because it has the smallest MSE.

Slide 5: 16-35 a. Tabulated regression computations with columns xi, yi, xi yi, and xi^2 (used to obtain the least-squares estimates).

Slide 6: Substituting x = 50 into the estimated regression equation gives approximately 15 defective parts. b.

Slide 7: Chapter 17, Markov Processes
- Transition Probabilities
- Steady-State Probabilities
- Absorbing States
- Transition Matrix with Submatrices
- Fundamental Matrix

Slide 8: Markov Processes. Markov process models are useful in studying the evolution of systems over repeated trials or sequential time periods or stages. They have been used to describe the probability that:
- a machine that is functioning in one period will continue to function or break down in the next period;
- a consumer purchasing brand A in one period will purchase brand B in the next period.

Slide 9: Transition Probabilities. Transition probabilities govern the manner in which the state of the system changes from one stage to the next. They are often represented in a transition matrix.

Slide 10: Transition Probabilities. A system has a finite Markov chain with stationary transition probabilities if:
- there are a finite number of states,
- the transition probabilities remain constant from stage to stage, and
- the probability of the process being in a particular state at stage n+1 is completely determined by the state of the process at stage n (and not the state at stage n-1). This is referred to as the memory-less property.
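
A minimal simulation sketch of the memory-less property (Python with NumPy, both assumptions not in the original slides; the two-state matrix is hypothetical): each transition is drawn using only the current state's row of the transition matrix.

```python
import numpy as np

# Hypothetical two-state chain with stationary transition probabilities.
# Row i gives P(next state = j | current state = i); each row sums to 1.
P = np.array([[0.35, 0.65],
              [0.20, 0.80]])

rng = np.random.default_rng(seed=0)

def simulate(P, start, n_stages):
    """Simulate n_stages transitions; each step depends only on the current state."""
    state, path = start, [start]
    for _ in range(n_stages):
        state = rng.choice(len(P), p=P[state])  # memory-less: only P[state] matters
        path.append(int(state))
    return path

print(simulate(P, start=0, n_stages=10))
```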

Slide 11: Steady-State Probabilities. The state probabilities at any stage of the process can be calculated recursively by multiplying the state probabilities at the previous stage by the transition matrix. The probability of the system being in a particular state after a large number of stages is called a steady-state probability.
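
A short sketch of this recursion (assuming NumPy; the matrix is hypothetical): post-multiplying the state-probability vector by the transition matrix once per stage, the vector visibly settles toward the steady state.

```python
import numpy as np

P = np.array([[0.35, 0.65],    # hypothetical transition matrix
              [0.20, 0.80]])
pi = np.array([1.0, 0.0])      # initial state probabilities

for n in range(1, 11):
    pi = pi @ P                # state probabilities at stage n
    print(n, pi.round(5))
# The printed vectors converge toward the steady-state probabilities.
```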

Slide 12: Steady-State Probabilities. Steady-state probabilities can be found by solving the system of equations πP = π together with the condition for probabilities that Σπi = 1, where P is the transition probability matrix and π is the vector of steady-state probabilities.
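
A minimal sketch of this computation (assuming NumPy): since the equations in πP = π are linearly dependent, drop one of them and replace it with the normalization Σπi = 1, then solve the resulting square linear system.

```python
import numpy as np

def steady_state(P):
    """Solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([(P.T - np.eye(n))[:-1],  # (P^T - I) pi = 0, one row dropped
                   np.ones(n)])             # normalization: sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Example with a hypothetical two-state matrix:
print(steady_state(np.array([[0.35, 0.65],
                             [0.20, 0.80]])))   # [0.23529 0.76471]
```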

Slide 13: Absorbing States. An absorbing state is one in which the probability that the process remains in that state once it enters the state is 1. If there is more than one absorbing state, then a steady-state condition independent of initial state conditions does not exist.

Slide 14: Transition Matrix with Submatrices. If a Markov chain has both absorbing and nonabsorbing states, the states may be rearranged so that the transition matrix can be written as the following composition of four submatrices: I, 0, R, and Q:

    [ I  0 ]
    [ R  Q ]

Slide 15: Transition Matrix with Submatrices.
- I = an identity matrix, indicating one always remains in an absorbing state once it is reached
- 0 = a zero matrix, representing zero probability of transitioning from the absorbing states to the nonabsorbing states
- R = the transition probabilities from the nonabsorbing states to the absorbing states
- Q = the transition probabilities between the nonabsorbing states

Slide 16: Fundamental Matrix. The fundamental matrix, N, is the inverse of the difference between the identity matrix and the Q matrix:

    N = (I - Q)^-1

Slide 17: NR Matrix. The NR matrix is the product of the fundamental matrix N and the R matrix. It gives the probabilities of eventually moving from each nonabsorbing state to each absorbing state. Multiplying any vector of initial nonabsorbing state probabilities by NR gives the vector of probabilities for the process eventually reaching each of the absorbing states. Such computations enable economic analyses of systems and policies.
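
Putting the last three slides together, a sketch (assuming NumPy; the Q and R values below are hypothetical, chosen so each full row of [R Q] sums to 1):

```python
import numpy as np

Q = np.array([[0.5, 0.3],    # nonabsorbing -> nonabsorbing (hypothetical)
              [0.2, 0.4]])
R = np.array([[0.1, 0.1],    # nonabsorbing -> absorbing (hypothetical)
              [0.2, 0.2]])

N  = np.linalg.inv(np.eye(len(Q)) - Q)   # fundamental matrix N = (I - Q)^-1
NR = N @ R                               # eventual absorption probabilities
print(NR)            # each row sums to 1: the process is eventually absorbed
```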

Slide 18: Example: North's Hardware. Henry, a persistent salesman, calls North's Hardware Store once a week hoping to speak with the store's buying agent, Shirley. If Shirley does not accept Henry's call this week, the probability she will do the same next week is .35. On the other hand, if she accepts Henry's call this week, the probability she will not do so next week is .20.

Slide 19: Example: North's Hardware. Transition Matrix:

                             Next Week's Call
                             Refuses   Accepts
    This Week's   Refuses      .35       .65
    Call          Accepts      .20       .80

Slide 20: Example: North's Hardware. Steady-State Probabilities.
Question: How many times per year can Henry expect to talk to Shirley?
Answer: To find the expected number of accepted calls per year, find the long-run proportion (probability) of a call being accepted and multiply it by 52 weeks. (continued)

Slide 21: Example: North's Hardware. Steady-State Probabilities (continued).
Let π1 = long-run proportion of refused calls and π2 = long-run proportion of accepted calls. Then

    [π1  π2] [ .35  .65 ]  =  [π1  π2]
             [ .20  .80 ]

(continued)

Slide 22: Example: North's Hardware. Steady-State Probabilities (continued).

    .35π1 + .20π2 = π1    (1)
    .65π1 + .80π2 = π2    (2)
      π1 +    π2 = 1      (3)

Solve for π1 and π2. (continued)

Slide 23: Example: North's Hardware. Steady-State Probabilities (continued).
Solve using equations (2) and (3); equation (1) is redundant. Substituting π1 = 1 - π2 into (2) gives:

    .65(1 - π2) + .80π2 = π2

This gives π2 = .65/.85 = .76471. Substituting back into equation (3) gives π1 = .23529. Thus the expected number of accepted calls per year is (.76471)(52) = 39.76, or about 40.
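
The same answer can be reproduced numerically (a small check, assuming NumPy), solving rearranged equation (1) together with equation (3):

```python
import numpy as np

A = np.array([[0.65, -0.20],   # eq. (1) rearranged: .65*pi1 - .20*pi2 = 0
              [1.00,  1.00]])  # eq. (3): pi1 + pi2 = 1
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)            # [0.23529 0.76471]
print(pi[1] * 52)    # 39.76 accepted calls per year, about 40
```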

Slide 24: Example: North's Hardware. State Probability.
Question: What is the probability Shirley will accept Henry's next two calls if she does not accept his call this week?

Slide 25: Example: North's Hardware. State Probability.
Answer: Starting from Refuses this week, the four two-step paths and their probabilities are:

    Refuses -> Refuses -> Refuses:  P = .35(.35) = .1225
    Refuses -> Refuses -> Accepts:  P = .35(.65) = .2275
    Refuses -> Accepts -> Refuses:  P = .65(.20) = .1300
    Refuses -> Accepts -> Accepts:  P = .65(.80) = .5200

So the probability Shirley accepts the next two calls is .5200.
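
The tree can be reproduced by enumerating the four two-step paths (a sketch, assuming NumPy):

```python
import numpy as np
from itertools import product

P = np.array([[0.35, 0.65],    # state 0 = Refuses, state 1 = Accepts
              [0.20, 0.80]])

start = 0  # she refuses this week
for s1, s2 in product([0, 1], repeat=2):
    print((s1, s2), round(P[start, s1] * P[s1, s2], 4))
# (1, 1) -> 0.52: the probability she accepts the next two calls
```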

Slide 26: Example: North's Hardware. State Probability.
Question: What is the probability of Shirley accepting exactly one of Henry's next two calls if she accepts his call this week?

Slide 27: Example: North's Hardware. State Probability.
Answer: The probability of exactly one of the next two calls being accepted if this week's call is accepted can be found by adding the probabilities of (accept next week and refuse the following week) and (refuse next week and accept the following week):

    .80(.20) + .20(.65) = .16 + .13 = .29
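
The same sum, computed by filtering the two-step paths for exactly one acceptance (a sketch, assuming NumPy):

```python
import numpy as np
from itertools import product

P = np.array([[0.35, 0.65],    # state 0 = Refuses, state 1 = Accepts
              [0.20, 0.80]])

start = 1  # she accepts this week
prob = sum(P[start, s1] * P[s1, s2]
           for s1, s2 in product([0, 1], repeat=2)
           if s1 + s2 == 1)    # exactly one acceptance among the next two calls
print(round(prob, 2))          # 0.29
```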

Slide 28: Example: Jetair Aerospace. The vice president of personnel at Jetair Aerospace has noticed that yearly shifts in personnel can be modeled by a Markov process. The transition matrix is:

                                   Next Year
                    Same Pos.  Promotion  Retire  Quit  Fired
    Current Year
    Same Position      .55        .10       .05    .20    .10
    Promotion          .70        .20       .00    .10    .00
    Retire             .00        .00      1.00    .00    .00
    Quit               .00        .00       .00   1.00    .00
    Fired              .00        .00       .00    .00   1.00

Slide 29: Example: Jetair Aerospace. Transition Matrix, rearranged so the absorbing states come first:

                             Next Year
                  Retire  Quit  Fired  Same  Promotion
    Current Year
    Retire         1.00    .00   .00    .00     .00
    Quit            .00   1.00   .00    .00     .00
    Fired           .00    .00  1.00    .00     .00
    Same            .05    .20   .10    .55     .10
    Promotion       .00    .10   .00    .70     .20

Slide 30: Example: Jetair Aerospace. Fundamental Matrix:

    N = (I - Q)^-1 = ( [ 1  0 ]  -  [ .55  .10 ] )^-1  =  [  .45  -.10 ]^-1
                     ( [ 0  1 ]     [ .70  .20 ] )        [ -.70   .80 ]

Slide 31: Example: Jetair Aerospace. Fundamental Matrix (continued). The determinant is

    d = a11*a22 - a21*a12 = (.45)(.80) - (-.70)(-.10) = .29

Thus,

    N = [ .80/.29  .10/.29 ]  =  [ 2.76   .34 ]
        [ .70/.29  .45/.29 ]     [ 2.41  1.55 ]

Slide 32: Example: Jetair Aerospace. NR Matrix. The probabilities of eventually moving to the absorbing states from the nonabsorbing states are given by:

    NR = [ 2.76   .34 ]  x  [ .05  .20  .10 ]
         [ 2.41  1.55 ]     [ .00  .10  .00 ]

Slide 33: Example: Jetair Aerospace. NR Matrix (continued):

                      Retire  Quit  Fired
    NR = Same           .14    .59    .28
         Promotion      .12    .64    .24
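
These N and NR values can be checked numerically (assuming NumPy, with Q and R taken from the rearranged transition matrix on slide 29):

```python
import numpy as np

Q = np.array([[0.55, 0.10],         # Same, Promotion -> Same, Promotion
              [0.70, 0.20]])
R = np.array([[0.05, 0.20, 0.10],   # Same, Promotion -> Retire, Quit, Fired
              [0.00, 0.10, 0.00]])

N = np.linalg.inv(np.eye(2) - Q)
print(N.round(2))        # [[2.76 0.34]
                         #  [2.41 1.55]]
print((N @ R).round(2))  # [[0.14 0.59 0.28]
                         #  [0.12 0.64 0.24]]
```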

Slide 34: Example: Jetair Aerospace. Absorbing States.
Question: What is the probability of someone who was just promoted eventually retiring? ... quitting? ... being fired?

Slide 35: Example: Jetair Aerospace. Absorbing States (continued).
Answer: The answers are given by the bottom row of the NR matrix:
- Eventually Retiring = .12
- Eventually Quitting = .64
- Eventually Being Fired = .24

Slide 36: End of Chapter 17