Solutions to group exercises. 1. (a) Truncating the chain is equivalent to setting the transition probabilities into every state in $\{M+1, M+2, \ldots\}$ to zero. Renormalizing the transition matrix to make it stochastic, we get $\tilde p_{ij} = p_{ij} / \sum_{k \le M} p_{ik}$ for $i, j \le M$. (b) The original chain satisfies detailed balance, $\pi_i p_{ij} = \pi_j p_{ji}$, so with $\tilde\pi_i = c\,\pi_i \sum_{k \le M} p_{ik}$ we have $\tilde\pi_i \tilde p_{ij} = c\,\pi_i p_{ij} = c\,\pi_j p_{ji} = \tilde\pi_j \tilde p_{ji}$, yielding detailed balance for the truncated chain.
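
A small numerical sanity check (a sketch, not part of the original solution): the chain below is a hypothetical birth-death walk on $\{0, 1, \ldots\}$ with up-probability $p$ and a reflecting boundary at 0; the exercise's actual chain may differ. We truncate at $M$, renormalize, and verify detailed balance numerically.

```python
import numpy as np

p, M = 0.3, 5  # hypothetical up-probability; truncation level

# Rows 0..M of the transition matrix, with all mass to states > M set to zero.
P = np.zeros((M + 1, M + 1))
P[0, 0] = 1 - p                  # reflecting boundary at 0
for i in range(1, M + 1):
    P[i, i - 1] = 1 - p
for i in range(M):
    P[i, i + 1] = p              # the entry p_{M,M+1} has been dropped

# Renormalize each row so the truncated matrix is stochastic again.
P_trunc = P / P.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P_trunc for eigenvalue 1.
w, v = np.linalg.eig(P_trunc.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Detailed balance: pi_i * p~_ij should equal pi_j * p~_ji for all i, j.
flows = pi[:, None] * P_trunc
print(np.allclose(flows, flows.T))  # True
```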

2. Let $Z_n = (X_{2n}, X_{2n+1})$. Then $Z_n$ is a Markov chain on pairs, with transition probabilities $P(Z_{n+1} = (k,l) \mid Z_n = (i,j)) = p_{jk}\, p_{kl}$. Let $T_k = P(\text{hit } (0,1) \text{ before } (1,0) \mid \text{start at } k)$. First-step analysis yields a system of linear equations for the $T_k$.

(b) If $q_0 = p_1 = p$ we get $p_{01,01} = 1/2$: a fair coin. 3. (a) $P_0(t+h) = P_0(t)(1 - \lambda h) + P_1(t)\,\mu h + o(h)$, so $\frac{P_0(t+h) - P_0(t)}{h} = -\lambda P_0(t) + \mu P_1(t) + o(1)$, whence the differential equation follows by letting $h \downarrow 0$. (b) Since $P_1(t) = 1 - P_0(t)$ we get $P_0'(t) = \mu - (\lambda + \mu) P_0(t)$,

which can either be solved directly, or one can check that the given solution satisfies the differential equation. (c) Letting $t \to \infty$ we get $P_0(t) \to \mu/(\lambda+\mu)$. This value as a starting distribution also yields a marginal distribution that is free of $t$, so it behaves like a stationary distribution (which we will define later).
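
A quick numerical cross-check of (b) and (c), as a sketch; the rates $\lambda = 2$, $\mu = 3$ and the initial condition are arbitrary choices, not from the exercise.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, mu = 2.0, 3.0       # arbitrary example rates
p0_start = 1.0           # start in state 0

def closed_form(t):
    """P0(t) = mu/(lam+mu) + (P0(0) - mu/(lam+mu)) * exp(-(lam+mu)t)."""
    stat = mu / (lam + mu)
    return stat + (p0_start - stat) * np.exp(-(lam + mu) * t)

# Integrate P0'(t) = mu - (lam + mu) * P0(t) and compare.
sol = solve_ivp(lambda t, y: mu - (lam + mu) * y, (0, 5), [p0_start],
                dense_output=True, rtol=1e-9, atol=1e-11)
t = np.linspace(0, 5, 50)
print(np.allclose(sol.sol(t)[0], closed_form(t)))  # True
print(closed_form(100.0), mu / (lam + mu))         # (c): limit mu/(lam+mu)
```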

Announcement: MathAcrossCampus Colloquium. Evolutionary trees, coalescents, and gene trees: can mathematicians find the woods? Joe Felsenstein, Genome Sciences, UW. Thursday, November 13, 2008, 3:30, Kane Hall 210. Reception to follow.

The Markov property. $X(t)$ is a Markov process if for any $n$, $P(X(t) = j \mid X(t_n) = i_n, \ldots, X(t_0) = i_0) = P(X(t) = j \mid X(t_n) = i_n)$ for all $j, i_0, \ldots, i_n$ in $S$ and any $t_0 < t_1 < \cdots < t_n < t$. The transition probabilities $p_{ij}(s,t) = P(X(t) = j \mid X(s) = i)$ are homogeneous if $p_{ij}(s,t) = p_{ij}(0, t-s)$. We will usually assume this, and write $p_{ij}(t)$.

Semigroup property. Let $P_t = [p_{ij}(t)]$. Then $(P_t)$ is a substochastic semigroup, meaning that (a) $P_0 = I$; (b) $P_{s+t} = P_s P_t$; (c) $P_t$ is a substochastic matrix, i.e. has nonnegative entries with row sums at most 1.

Proof. (a) $p_{ij}(0) = P(X(0) = j \mid X(0) = i) = \delta_{ij}$. (b) Condition on the state at time $s$: $p_{ij}(s+t) = \sum_k P(X(s+t) = j \mid X(s) = k)\, p_{ik}(s) = \sum_k p_{ik}(s)\, p_{kj}(t)$, which is the Chapman-Kolmogorov equation. (c) $\sum_j p_{ij}(t) = P(X(t) \in S \mid X(0) = i) \le 1$.
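
A numerical illustration of (a)-(c) (a sketch; it borrows the matrix exponential $P_t = e^{tG}$ from the "formal solution" slide below, and the 3-state generator is an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],    # an arbitrary example generator:
              [0.5, -1.0, 0.5],    # nonnegative off-diagonal entries,
              [1.0, 2.0, -3.0]])   # rows summing to zero

P = lambda t: expm(t * G)          # transition semigroup
s, t = 0.3, 1.1
print(np.allclose(P(0), np.eye(3)))        # (a) P_0 = I
print(np.allclose(P(s + t), P(s) @ P(t)))  # (b) P_{s+t} = P_s P_t
print(np.allclose(P(t).sum(axis=1), 1.0))  # (c) row sums (here exactly 1)
```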

Standard semigroup. $P_t$, $t \ge 0$, is a standard semigroup if $P_t \to I$ as $t \downarrow 0$. Theorem: For a standard semigroup the transition probabilities are continuous in $t$. Proof: By Chapman-Kolmogorov, $p_{ij}(t+h) = \sum_k p_{ik}(h)\, p_{kj}(t) \to p_{ij}(t)$ as $h \downarrow 0$. Unless otherwise specified we will consider standard semigroups.

Infinitesimal generator. By continuity of the transition probabilities, Taylor expansion suggests $p_{ij}(h) = \delta_{ij} + g_{ij} h + o(h)$ as $h \downarrow 0$. We must have $g_{ij} \ge 0$ for $i \ne j$, and $g_{ii} \le 0$. Let $G = [g_{ij}]$. Then (under regularity conditions) $G = \lim_{h \downarrow 0} \frac{P_h - I}{h}$. $G$ is called the infinitesimal generator of $P_t$.
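
Numerically, the limit can be seen by computing $(P_h - I)/h$ for a small $h$ (same arbitrary example generator as above; a sketch):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator

h = 1e-6
G_est = (expm(h * G) - np.eye(3)) / h    # finite-difference generator
print(np.allclose(G_est, G, atol=1e-4))  # True: p_ij(h) ~ delta_ij + g_ij h
```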

Birth process. $G = \begin{pmatrix} g_{00} & \lambda_0 & 0 & 0 & \cdots \\ 0 & g_{11} & \lambda_1 & 0 & \cdots \\ 0 & 0 & g_{22} & \lambda_2 & \cdots \\ \vdots & & & \ddots & \ddots \end{pmatrix}$. Under regularity conditions we have $\sum_j g_{ij} = 0$ for each $i$, so we must have $g_{ii} = -\lambda_i$.

Forward equations. $p_{ij}(t+h) = \sum_k p_{ik}(t)\, p_{kj}(h)$, so $\frac{p_{ij}(t+h) - p_{ij}(t)}{h} = \sum_k p_{ik}(t)\, \frac{p_{kj}(h) - \delta_{kj}}{h} \to \sum_k p_{ik}(t)\, g_{kj}$, and $P_t' = P_t G$, or $p_{ij}'(t) = \sum_k p_{ik}(t)\, g_{kj}$.

Backward equations. Instead of looking at $(t, t+h]$, look at $(0, h]$: $p_{ij}(t+h) = \sum_k p_{ik}(h)\, p_{kj}(t)$, so $P_t' = G P_t$.
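
Both systems of ODEs can be integrated numerically from $P_0 = I$ and give the same semigroup; a sketch with the same arbitrary example generator:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator
n, t_end = 3, 1.5

def integrate(rhs):
    """Integrate a matrix ODE P' = rhs(P) from P_0 = I up to t_end."""
    sol = solve_ivp(rhs, (0, t_end), np.eye(n).ravel(),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(n, n)

forward = integrate(lambda t, y: (y.reshape(n, n) @ G).ravel())   # P' = P G
backward = integrate(lambda t, y: (G @ y.reshape(n, n)).ravel())  # P' = G P
print(np.allclose(forward, expm(t_end * G), atol=1e-8))   # True
print(np.allclose(backward, expm(t_end * G), atol=1e-8))  # True
```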

Formal solution. In many cases we can solve both these equations by $P_t = e^{tG} = \sum_{n=0}^{\infty} \frac{(tG)^n}{n!}$. But this can be difficult to actually calculate.
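
For a finite state space the series can be summed directly, though in practice library routines (which use more stable algorithms) are preferable; a sketch:

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator
t = 0.7

series = np.eye(3)
term = np.eye(3)
for k in range(1, 30):             # truncate sum_n (tG)^n / n!
    term = term @ (t * G) / k
    series += term
print(np.allclose(series, expm(t * G)))  # True
```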

The 0-1 case. Take $S = \{0, 1\}$ with $G = \begin{pmatrix} -\lambda & \lambda \\ \mu & -\mu \end{pmatrix}$. The forward equation for $p_{00}$ reads $p_{00}'(t) = -\lambda\, p_{00}(t) + \mu\, p_{01}(t)$, and since $p_{01}(t) = 1 - p_{00}(t)$, $p_{00}'(t) = \mu - (\lambda + \mu)\, p_{00}(t)$.

0-1 case, continued. Thus $p_{00}(t) = \frac{\mu}{\lambda+\mu} + \frac{\lambda}{\lambda+\mu}\, e^{-(\lambda+\mu)t}$, and, arguing the same way from state 1, $P_t = \frac{1}{\lambda+\mu} \begin{pmatrix} \mu + \lambda e^{-(\lambda+\mu)t} & \lambda - \lambda e^{-(\lambda+\mu)t} \\ \mu - \mu e^{-(\lambda+\mu)t} & \lambda + \mu e^{-(\lambda+\mu)t} \end{pmatrix}$.
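
The closed form can be checked against a numerical matrix exponential (a sketch; the rate values are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 2.0, 3.0                       # arbitrary example rates
G = np.array([[-lam, lam], [mu, -mu]])   # 0-1 generator

def P_closed(t):
    """Closed-form transition matrix of the 0-1 chain."""
    e = np.exp(-(lam + mu) * t)
    return np.array([[mu + lam * e, lam - lam * e],
                     [mu - mu * e,  lam + mu * e]]) / (lam + mu)

print(np.allclose(P_closed(0.9), expm(0.9 * G)))  # True
```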

Marginal distribution. Let $\pi_j(t) = P(X(t) = j)$ and $\pi(t) = (\pi_j(t))_{j \in S}$. Then for a starting distribution $\pi(0)$ we have $\pi(t) = \pi(0) P_t$. For the 0-1 process we get $\pi_1(t) = \frac{\lambda}{\lambda+\mu} + \left(\pi_1(0) - \frac{\lambda}{\lambda+\mu}\right) e^{-(\lambda+\mu)t}$.
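
A sketch of the marginal computation for the 0-1 process (arbitrary rates and starting distribution):

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 2.0, 3.0
G = np.array([[-lam, lam], [mu, -mu]])
pi0 = np.array([0.5, 0.5])          # an arbitrary starting distribution

pi_t = lambda t: pi0 @ expm(t * G)  # pi(t) = pi(0) P_t
print(pi_t(2.0))                    # transient marginal
print(pi_t(50.0), np.array([mu, lam]) / (lam + mu))  # -> (mu, lam)/(lam+mu)
```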

Exponential holding times. Suppose $X(t) = j$, and let $\tau$ be the time spent in $j$ until the next transition after time $t$. By the Markov property, P(stay in $j$ throughout $(u, u+v]$, given a stay of at least $u$) is precisely P(stay in $j$ for $v$ time units). Mathematically, $P(\tau > u+v \mid \tau > u) = P(\tau > v)$. Let $g(v) = P(\tau > v)$. Then we have $g(u+v) = g(u)\, g(v)$, and it follows that $g(u) = \exp(-\lambda u)$ for some $\lambda \ge 0$. By the backward equation and $P(\tau > v) = p_{jj}(v)$ for small $v$, we identify $\lambda = -g_{jj}$.
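
Numerically, $p_{jj}(v)$ and $e^{g_{jj} v}$ agree to first order for small $v$, consistent with the exponential holding time (same arbitrary example generator; a sketch):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator

for v in (0.1, 0.01, 0.001):
    p_jj = expm(v * G)[0, 0]             # probability of being in 0 at time v
    print(v, p_jj, np.exp(G[0, 0] * v))  # agreement improves as v shrinks
```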

Jump chain. Given that the chain jumps from $i$ at a particular time, the probability that it jumps to $j$ is $-g_{ij}/g_{ii}$. Here is why (roughly): suppose $t < \tau < t+h$, and that there is only one jump in $(t, t+h]$ (likely for small $h$). Then $P(\text{jump to } j \mid \text{one jump in } (t, t+h]) \approx \frac{p_{ij}(h)}{1 - p_{ii}(h)} \approx \frac{g_{ij}\, h}{-g_{ii}\, h} = -\frac{g_{ij}}{g_{ii}}$.
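
The same limit can be observed numerically: $p_{ij}(h)/(1 - p_{ii}(h))$ approaches $-g_{ij}/g_{ii}$ as $h \downarrow 0$ (a sketch, same arbitrary generator):

```python
import numpy as np
from scipy.linalg import expm

G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator

i = 0
for h in (0.1, 0.001):
    Ph = expm(h * G)
    jump_dist = np.delete(Ph[i], i) / (1 - Ph[i, i])
    print(h, jump_dist)            # -> -g_ij/g_ii = [0.5, 0.5]
```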

Construction. The way continuous-time Markov chains work is: (1) Draw an initial value $i_0$ from $\pi(0)$. (2) If $g_{i_0 i_0} < 0$, stay in $i_0$ for a random time which is $\mathrm{Exp}(-g_{i_0 i_0})$ distributed; if $g_{i_0 i_0} = 0$, stay in $i_0$ forever. (3) Draw a new state $j$ from the distribution $(r_{i_0 j})_{j \ne i_0}$, where $r_{ij} = -g_{ij}/g_{ii}$, and repeat from (2).
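
The construction translates directly into a simulator; a minimal sketch (the generator and starting distribution are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
G = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.0, 0.5],
              [1.0, 2.0, -3.0]])   # arbitrary example generator

def simulate(G, pi0, t_end):
    """Return (jump times, states) of one path up to time t_end."""
    n = G.shape[0]
    state = rng.choice(n, p=pi0)             # step (1): initial state
    t, times, states = 0.0, [0.0], [state]
    while True:
        rate = -G[state, state]
        if rate == 0:                        # absorbing state: stay forever
            break
        t += rng.exponential(1 / rate)       # step (2): Exp(-g_ii) holding time
        if t >= t_end:
            break
        r = np.delete(G[state], state) / rate  # step (3): r_ij = -g_ij/g_ii
        state = np.delete(np.arange(n), state)[rng.choice(n - 1, p=r)]
        times.append(t)
        states.append(state)
    return times, states

print(simulate(G, np.array([1.0, 0.0, 0.0]), t_end=5.0))
```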

Death process. Let $g_{i,i-1} = \mu_i = -g_{i,i}$, and consider the linear death process $\mu_i = i\mu$ started from $X(0) = N$. The forward equation is $p_{Nn}'(t) = (n+1)\mu\, p_{N,n+1}(t) - n\mu\, p_{Nn}(t)$. Write $\phi(s,t) = \sum_n p_{Nn}(t)\, s^n$. Then $\frac{\partial \phi}{\partial t} = \mu (1 - s)\, \frac{\partial \phi}{\partial s}$. This is a Lagrange equation with solution $\phi(s,t) = \left(1 + (s-1)e^{-\mu t}\right)^N$, or $p_{Nn}(t) = \binom{N}{n} e^{-n\mu t} \left(1 - e^{-\mu t}\right)^{N-n}$, i.e. $X(t) \sim \mathrm{Bin}(N, e^{-\mu t})$.
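
A check of the binomial solution against the matrix exponential of the death-process generator (a sketch; $N$, $\mu$, and $t$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import binom

N, mu, t = 6, 0.8, 1.3            # arbitrary example values
G = np.zeros((N + 1, N + 1))
for i in range(1, N + 1):
    G[i, i - 1] = i * mu          # g_{i,i-1} = i*mu
    G[i, i] = -i * mu             # g_{ii} = -i*mu

p_row = expm(t * G)[N]            # p_{N,n}(t) for n = 0, ..., N
print(np.allclose(p_row,
                  binom.pmf(np.arange(N + 1), N, np.exp(-mu * t))))  # True
```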