ST3236: Stochastic Process Tutorial 6


TA: Mar Choong Hock
Email: g0301492@nus.edu.sg
Exercises: 7

Question 1
Consider the MC with transition probability matrix P [matrix shown on the original slide; not reproduced in this transcript]. Determine the limiting distribution.

Question 1
Let π = (π0, π1, π2) be the limiting distribution. It satisfies the balance equations πP = π together with the normalization π0 + π1 + π2 = 1. One of the three balance equations is redundant; deleting one of them and solving the remaining system gives
π0 = 0.4762, π1 = 0.2381, π2 = 0.2857.
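
The transition matrix on the original slide is an image and does not survive in this transcript, but the solution procedure (drop one redundant balance equation, add the normalization constraint, solve the linear system) can be sketched in code. A minimal sketch, using a hypothetical 3-state matrix P in place of the slide's:

```python
import numpy as np

def limiting_distribution(P):
    """Solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = P.T - np.eye(n)   # balance equations: (P^T - I) pi = 0
    A[-1, :] = 1.0        # replace one redundant equation with sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical matrix standing in for the one on the slide.
P = np.array([[0.5, 0.3, 0.2],
              [0.6, 0.1, 0.3],
              [0.4, 0.3, 0.3]])
print(limiting_distribution(P))  # [0.5, 0.25, 0.25] for this matrix
```

The same function applied to the slide's actual matrix would return the values quoted above.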

Question 2
Consider the MC with transition probability matrix P [matrix shown on the original slide; not reproduced in this transcript]. What fraction of time, in the long run, does the process spend in state 1?

Question 2
Let π = (π0, π1, π2) be the limiting distribution. Solving πP = π with π0 + π1 + π2 = 1 (again deleting one redundant balance equation) gives
π0 = 0.2308, π1 = 0.2308, π2 = 0.5385.

Question 2
In the long run, the process spends a fraction π1 = 0.2308 of the time in state 1.
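
This is the ergodic interpretation of the limiting distribution: the long-run fraction of periods spent in a state equals its stationary probability. A quick simulation check of that claim, again with a hypothetical matrix since the slide's is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matrix standing in for the one on the slide.
P = np.array([[0.5, 0.3, 0.2],
              [0.6, 0.1, 0.3],
              [0.4, 0.3, 0.3]])

state, visits, n_steps = 0, 0, 100_000
for _ in range(n_steps):
    state = rng.choice(3, p=P[state])  # one step of the chain
    visits += (state == 1)

# The visit frequency approaches pi_1 (0.25 for this hypothetical matrix).
print(visits / n_steps)
```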

Question 3
Consider the MC with transition probability matrix P [matrix shown on the original slide; not reproduced in this transcript]. Every period that the process spends in state 0 incurs a cost of $2; every period in state 1 incurs a cost of $5; and every period in state 2 incurs a cost of $3. What is the long-run average cost per period associated with this Markov chain?

Question 3
Let π = (π0, π1, π2) be the limiting distribution. Solving πP = π with π0 + π1 + π2 = 1 (deleting one redundant balance equation) gives
π0 = 0.4167, π1 = 0.1818, π2 = 0.4015.

Question 3
The long-run average cost per period associated with this Markov chain is
0.4167 × 2 + 0.1818 × 5 + 0.4015 × 3 = $2.9470.
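
Equivalently, the long-run average cost is the expectation of the per-period cost under the limiting distribution; a one-line check using the figures above:

```python
import numpy as np

pi = np.array([0.4167, 0.1818, 0.4015])  # limiting distribution from the slide
cost = np.array([2.0, 5.0, 3.0])         # cost per period in states 0, 1, 2

print(pi @ cost)  # 2.9469..., i.e. about $2.95 per period
```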

Question 4
Suppose that the social classes of successive generations in a family follow a Markov chain with transition probability matrix P [matrix shown on the original slide; not reproduced in this transcript]. What fraction of families are upper class in the long run?

Question 4
Let π = (πL, πM, πU) be the limiting distribution over the lower, middle, and upper classes. Solving πP = π with πL + πM + πU = 1 (deleting one redundant balance equation) gives
πL = 0.3529, πM = 0.4118, πU = 0.2353.

Question 4
The fraction of families that are upper class in the long run is πU = 0.2353.