Problems 10/3 1. Ehrenfest's diffusion model: $n$ balls are distributed between two urns; at each step a ball is chosen uniformly at random and moved to the other urn. With $X_k$ the number of balls in the first urn, $p_{k,k-1} = k/n$ and $p_{k,k+1} = 1 - k/n$.
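One can check that the Ehrenfest chain has the Binomial($n$, 1/2) distribution as its stationary law. A minimal simulation sketch (plain Python, illustrative parameters) compares the long-run occupation frequencies against it:

```python
import random
from math import comb

# Ehrenfest urn chain: n balls in two urns; at each step a ball is chosen
# uniformly at random and moved to the other urn. X = balls in urn 1,
# so p(k, k-1) = k/n and p(k, k+1) = 1 - k/n.
n, steps = 10, 200_000
x = n // 2
counts = [0] * (n + 1)
for _ in range(steps):
    x += -1 if random.random() < x / n else 1
    counts[x] += 1

# Long-run occupation frequencies versus the Binomial(n, 1/2) law.
for k in range(n + 1):
    print(f"k={k:2d}  empirical={counts[k] / steps:.4f}"
          f"  binomial={comb(n, k) / 2 ** n:.4f}")
```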

Problems, cont. 2. Discrete uniform on {0,...,n}

Problems, cont. 3. where k=0?

Classification of states A state $i$ for a Markov chain $X_k$ is called persistent if $\mathbb{P}(X_n = i \text{ for some } n \ge 1 \mid X_0 = i) = 1$, and transient otherwise. Let $f_{ij}(n) = \mathbb{P}(X_1 \ne j, \ldots, X_{n-1} \ne j, X_n = j \mid X_0 = i)$ and $f_{ij} = \sum_{n=1}^{\infty} f_{ij}(n)$. $j$ is persistent iff $f_{jj} = 1$. Let $P_{ij}(s) = \sum_{n=0}^{\infty} s^n p_{ij}(n)$ and $F_{ij}(s) = \sum_{n=0}^{\infty} s^n f_{ij}(n)$, where $p_{ij}(0) = \delta_{ij}$ and $f_{ij}(0) = 0$.

Some results Theorem: (a) $P_{ii}(s) = 1 + F_{ii}(s) P_{ii}(s)$; (b) $P_{ij}(s) = F_{ij}(s) P_{jj}(s)$ for $i \ne j$. Proof: As for the random walk case we deduce from the Markov property that $p_{ij}(m) = \sum_{r=1}^{m} f_{ij}(r)\, p_{jj}(m-r)$. Multiply both sides by $s^m$ and sum over $m \ge 1$ to get the result.
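The convolution identity in the proof doubles as an algorithm: given the $n$-step probabilities $p_{jj}(n)$, the first-passage probabilities $f_{jj}(n)$ can be peeled off recursively. A minimal sketch (the small transition matrix is an arbitrary illustration, not from the slides):

```python
import numpy as np

# Recover first-passage probabilities f_jj(m) from p_jj(m) using
# p_jj(m) = sum_{r=1}^m f_jj(r) p_jj(m-r), with p_jj(0) = 1.
P = np.array([[0.5,  0.5,  0.0],    # an arbitrary small chain
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
j, N = 0, 100
pjj, Pn = [1.0], np.eye(len(P))
for _ in range(N):
    Pn = Pn @ P
    pjj.append(Pn[j, j])            # p_jj(n) = (P^n)_{jj}

f = [0.0] * (N + 1)
for m in range(1, N + 1):
    f[m] = pjj[m] - sum(f[r] * pjj[m - r] for r in range(1, m))

print("f_jj =", sum(f))             # close to 1: state j is persistent here
```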

Some results, cont. Corollary: (a) State $j$ is persistent iff $\sum_n p_{jj}(n) = \infty$, and then $\sum_n p_{ij}(n) = \infty$ for all $i$ with $f_{ij} > 0$. (b) State $j$ is transient iff $\sum_n p_{jj}(n) < \infty$, and then $\sum_n p_{ij}(n) < \infty$ for all $i$. Proof: Since $P_{jj}(s) = 1/(1 - F_{jj}(s))$, we see that $P_{jj}(s) \to \infty$ as $s \uparrow 1$ iff $F_{jj}(1) = f_{jj} = 1$. But $\sum_n p_{jj}(n) = \lim_{s \uparrow 1} P_{jj}(s)$ (by Abel's theorem).

A final consequence If $j$ is transient, then $p_{ij}(n) \to 0$ as $n \to \infty$ for all $i$. Why? (The terms of the convergent series $\sum_n p_{ij}(n)$ must go to zero.) Example: Branching process. What states are persistent? Transient? State 0 is called absorbing, since once the process reaches 0, it never leaves again.
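A quick simulation sketch of the branching-process example; the Poisson(0.9) offspring law is an assumed illustration (subcritical, so the chain is eventually absorbed at 0 with probability 1):

```python
import numpy as np

# Galton-Watson branching process: Z_{k+1} is a sum of Z_k iid offspring
# counts. Offspring ~ Poisson(0.9) is an illustrative assumption
# (subcritical, so absorption at 0, i.e. extinction, is certain).
rng = np.random.default_rng(1)
trials, extinct = 10_000, 0
for _ in range(trials):
    z = 1
    for _ in range(200):
        if z == 0:
            break                        # 0 is absorbing: nothing leaves
        z = rng.poisson(0.9, size=z).sum()
    extinct += (z == 0)
print("extinction frequency:", extinct / trials)   # close to 1
```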

Mean recurrence time Let $T_i = \min\{n > 0 : X_n = i\}$ and $\mu_i = E(T_i \mid X_0 = i)$. For a transient state $\mu_i = \infty$. For a persistent state $\mu_i = \sum_n n\, f_{ii}(n)$. We call a persistent state positive persistent if $\mu_i < \infty$, null persistent otherwise. Example: Simple random walk. Terminology: positive persistent = positive recurrent = non-null persistent.
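The symmetric simple random walk is the canonical null persistent example: return to 0 is certain, yet $\mu_0 = \infty$. A Monte Carlo sketch (the truncation caps are assumed for illustration) shows the sample mean of observed return times growing with the cap instead of stabilizing:

```python
import random

# Symmetric simple random walk on Z: persistent (returns are certain)
# but null persistent (E[T_0] = infinity). Truncating at a cap and
# averaging shows the sample mean growing with the cap.
def return_time(cap):
    x, t = 0, 0
    while t < cap:
        x += random.choice((-1, 1))
        t += 1
        if x == 0:
            return t
    return None                        # not back by the cap

for cap in (10**3, 10**5):
    obs = [return_time(cap) for _ in range(2_000)]
    hits = [t for t in obs if t is not None]
    print(f"cap={cap}: returned {len(hits) / len(obs):.0%},"
          f" mean observed T_0 = {sum(hits) / len(hits):.0f}")
```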

A model for radiation damage Initial damage from radiation can either heal or get worse until it is visible. States: 0 healthy organism (absorbing); 1 initial damage; 2 amplified damage; 3 visible damage (absorbing).

Radiation damage, cont. The recovery probability $\pi_0$ is the probability of reaching 0 before 3. The last step must go from 1 to 0, since 1 is the only state from which 0 can be entered. Thus $\pi_0 = p_{10} \sum_{n \ge 0} \mathbb{P}(X_n = 1,\ \text{not yet absorbed} \mid X_0 = 1) = p_{10} \times E(\text{number of visits to 1 before absorption})$.
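Numerically, the recovery probability solves the first-step (linear) equations restricted to the transient states. The sketch below uses hypothetical transition probabilities, since the slide's actual numbers did not survive transcription:

```python
import numpy as np

# First-step equations for the damage chain, solved on the transient
# states {1, 2}: h = r0 + Q h, i.e. (I - Q) h = r0. The transition
# probabilities are hypothetical placeholders.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # 0: healthy (absorbing)
    [0.3, 0.4, 0.3, 0.0],   # 1: initial damage
    [0.0, 0.2, 0.5, 0.3],   # 2: amplified damage
    [0.0, 0.0, 0.0, 1.0],   # 3: visible damage (absorbing)
])
Q = P[1:3, 1:3]             # transient -> transient block
r0 = P[1:3, 0]              # one-step probabilities of healing
h = np.linalg.solve(np.eye(2) - Q, r0)
print("P(reach 0 before 3 | start in 1, 2) =", h)
```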

Communication Two states $i$ and $j$ communicate, $i \to j$, if $p_{ij}(m) > 0$ for some $m$. $i$ and $j$ intercommunicate, $i \leftrightarrow j$, if $i \to j$ and $j \to i$. Theorem: $\leftrightarrow$ is an equivalence relation. What do we need to prove?

Equivalence classes of states Theorem: If $i \leftrightarrow j$ then (a) $i$ is transient iff $j$ is transient; (b) $i$ is persistent iff $j$ is persistent. Proof of (a): Since $i \leftrightarrow j$ there are $m, n$ with $p_{ij}(m) > 0$ and $p_{ji}(n) > 0$. By Chapman-Kolmogorov, $p_{ii}(m + r + n) \ge p_{ij}(m)\, p_{jj}(r)\, p_{ji}(n)$, so summing over $r$ we get $\sum_r p_{ii}(m + r + n) \ge p_{ij}(m)\, p_{ji}(n) \sum_r p_{jj}(r)$; hence if $\sum_r p_{jj}(r) = \infty$ then $\sum_r p_{ii}(r) = \infty$.

Closed and irreducible sets A set $C$ of states is closed if $p_{ij} = 0$ for all $i \in C$, $j \notin C$. $C$ is irreducible if $i \leftrightarrow j$ for all $i, j \in C$. Theorem: $S = T \cup C_1 \cup C_2 \cup \cdots$, where $T$ is the set of all transient states and the $C_i$ are irreducible disjoint closed sets of persistent states. Note: The $C_i$ are the equivalence classes for $\leftrightarrow$ among the persistent states.

Example $S = \{0,1,2,3,4,5\}$: $\{0,1\}$ and $\{4,5\}$ are closed, irreducible, and persistent; $\{2,3\}$ are transient. Why?
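The classification can be automated: in a finite chain a state is persistent exactly when every state it reaches can reach it back, i.e. its communicating class is closed. The matrix below is a hypothetical one chosen to reproduce the classes stated above (the slide's own matrix was lost):

```python
import numpy as np

# Classify states from the transition matrix. Hypothetical example
# consistent with the classes above.
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.7, 0.0, 0.0, 0.0, 0.0],
    [0.2, 0.0, 0.3, 0.3, 0.2, 0.0],
    [0.0, 0.2, 0.3, 0.3, 0.0, 0.2],
    [0.0, 0.0, 0.0, 0.0, 0.6, 0.4],
    [0.0, 0.0, 0.0, 0.0, 0.4, 0.6],
])
n = len(P)
reach = (P > 0) | np.eye(n, dtype=bool)
for k in range(n):                     # Floyd-Warshall transitive closure
    reach |= np.outer(reach[:, k], reach[k, :])

# Persistent iff every reachable state reaches back (finite chain).
persistent = [i for i in range(n)
              if all(reach[j, i] for j in np.flatnonzero(reach[i]))]
print("persistent:", persistent)       # [0, 1, 4, 5]
print("transient: ", [i for i in range(n) if i not in persistent])
```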

Long-term behavior Recall from the 0-1 process that $\mathbb{P}(X_n = 1) = \frac{p_{01}}{p_{01} + p_{10}} + (1 - p_{01} - p_{10})^n \left( \mathbb{P}(X_0 = 1) - \frac{p_{01}}{p_{01} + p_{10}} \right)$. When does this not depend on $n$? (a) $p_{01} = p_{11}$, i.e. $1 - p_{01} - p_{10} = 0$, so the rows of $P$ coincide and the $X_n$ are independent; (b) $\mathbb{P}(X_0 = 1) = \frac{p_{01}}{p_{01} + p_{10}}$, so the chain starts in this distribution; (c) in the limit as $n \to \infty$, provided $|1 - p_{01} - p_{10}| < 1$.
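A short numerical check of cases (b) and (c), with arbitrary illustrative parameters: starting from the stationary law the distribution is constant in $n$, while any other start converges geometrically:

```python
import numpy as np

# P(X_n = 1) for the 0-1 chain: constant in n when started from the
# stationary law (case b), convergent otherwise (case c).
p01, p10 = 0.3, 0.5
P = np.array([[1 - p01, p01], [p10, 1 - p10]])
stat = np.array([p10, p01]) / (p01 + p10)    # stationary distribution

for start in (np.array([1.0, 0.0]), stat):
    dist, probs = start, []
    for _ in range(6):
        dist = dist @ P
        probs.append(dist[1])                # P(X_n = 1)
    print(np.round(start, 3), "->", np.round(probs, 4))
```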

Stationary distribution Case (b) is the general one. Here is the idea: Recall that $\pi^{(n)} = \pi^{(0)} P^n$. In order to get the same distribution for all $n$, we use $\pi^{(0)} = \pi$ where $\pi$ solves $\pi P = \pi$. Then $\pi^{(1)} = \pi P = \pi$, $\pi^{(2)} = \pi P^2 = \pi P = \pi$, ..., $\pi^{(n)} = \pi P^n = \pi$.
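In practice $\pi$ is found by solving the singular linear system $\pi(P - I) = 0$ together with $\sum_i \pi_i = 1$; a minimal sketch:

```python
import numpy as np

# Solve pi P = pi with sum(pi) = 1 by appending the normalization
# row to the (singular) balance equations.
def stationary(P):
    n = len(P)
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

P = np.array([[0.5,  0.5,  0.0],
              [0.25, 0.5,  0.25],
              [0.0,  0.5,  0.5]])
pi = stationary(P)
print("pi   =", pi)            # [0.25 0.5 0.25]
print("pi P =", pi @ P)        # unchanged by P
```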

Snoqualmie Falls For the 0-1 precipitation chain (dry = 0, wet = 1), $\pi P = \pi$ gives $\pi_1 = \pi_0\, p_{01} + \pi_1 (1 - p_{10})$, so $\pi_1 = \frac{p_{01}}{p_{01} + p_{10}}$, or $\pi_0 = \frac{p_{10}}{p_{01} + p_{10}}$.
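As a usage note, with fitted values of $p_{01}$ and $p_{10}$ (the ones below are hypothetical stand-ins; the estimates shown on the slide were lost in transcription) the long-run proportions of dry and wet days follow directly:

```python
# Long-run fraction of wet days for the fitted 0-1 chain. The values of
# p01 and p10 are hypothetical stand-ins for the slide's estimates.
p01, p10 = 0.25, 0.40          # P(dry -> wet), P(wet -> dry)
pi1 = p01 / (p01 + p10)
print(f"pi = ({1 - pi1:.3f}, {pi1:.3f})")   # (dry, wet)
```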