Absorbing Markov Chains
Thrasyvoulos Spyropoulos / Eurecom, Sophia-Antipolis
15 October 2012

A mouse is trapped in a maze with 3 rooms and 1 exit (the maze figure is not reproduced here; rooms 1 and 3 each have a single door leading to room 2, while room 2 has three doors: to room 1, to room 3, and to the exit). When inside a room with x doors, the mouse chooses any of them with equal probability 1/x.
Q: How long will it take, on average, to exit the maze if it starts at room i?
Q: How long if it starts from a random room?

First-step analysis:
Def: T_i = expected time to leave the maze, starting from room i.
T_2 = (1/3)*1 + (1/3)*(1 + T_1) + (1/3)*(1 + T_3) = 1 + (1/3)*(T_1 + T_3)
T_1 = 1 + T_2
T_3 = 1 + T_2
Solving: T_2 = 5, T_1 = T_3 = 6.
Q: Could you have guessed this directly?
A: Yes: the number of times room 2 is visited before exiting is geometric(1/3), so on average the mouse picks a wrong door twice (each wrong pick costs two steps, into a side room and back) and exits on its 3rd visit to room 2, giving 2*2 + 1 = 5 steps from room 2.
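
(Not in the slides: a quick numerical check of the three equations, as a minimal Python/NumPy sketch; the variable names are mine.)

    import numpy as np

    # Rewrite the first-step equations as A @ T = b, with T = (T_1, T_2, T_3):
    #       T_1 -      T_2              = 1
    #  -(1/3) T_1 +    T_2 - (1/3) T_3  = 1
    #            -     T_2 +       T_3  = 1
    A = np.array([[ 1.0, -1.0,  0.0],
                  [-1/3,  1.0, -1/3],
                  [ 0.0, -1.0,  1.0]])
    b = np.ones(3)
    print(np.linalg.solve(A, b))   # [6. 5. 6.]  ->  T_1 = 6, T_2 = 5, T_3 = 6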

Hot Potato Routing:
A packet must be routed towards the destination over a network of routers (the topology figure is not reproduced here).
"Hot Potato Routing" works as follows: when a router receives a packet, it picks one of its outgoing links at random (the incoming link included) and sends the packet on immediately.
Q: How long does it take to deliver the packet?
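
(Not in the slides: a minimal simulation sketch of the protocol; the toy topology below is an assumption, wired like the maze example so the answer is known.)

    import random

    def hot_potato_steps(neighbors, source, dest):
        """Forward the packet over a random outgoing link (incoming one included)
        at every router until it reaches the destination; return the hop count."""
        node, steps = source, 0
        while node != dest:
            node = random.choice(neighbors[node])
            steps += 1
        return steps

    # Toy topology: routers 1 and 3 connect only to 2; router 2 connects to 1, 3 and D.
    neighbors = {1: [2], 2: [1, 3, "D"], 3: [2]}
    runs = [hot_potato_steps(neighbors, 1, "D") for _ in range(100_000)]
    print(sum(runs) / len(runs))   # ~6, matching T_1 = 6 from the maze analysis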

First-step analysis: we can still apply it!
But it is a bit more complicated: it gives a 9x9 system of linear equations, and the solution is not easy to guess either.
Instead, we will model this with a Markov chain.

The Markov chain model has:
9 transient states: routers 1-9.
1 absorbing state: A (the destination).
Q: Is this chain irreducible?
A: No! Once the chain enters A it never leaves, so not all states communicate.
Q: Hot Potato Routing delay = expected time to absorption?
(Figure: the 10-state transition diagram; each router's outgoing links are taken with equal probability 1, 1/2, 1/3 or 1/4, depending on its number of links.)

We can define the transition matrix P (10x10).
Q: What is P^(n) as n → ∞?
A: Every row converges to [0, 0, …, 1] (with the absorbing state A ordered last): wherever the packet starts, it eventually ends up at the destination.
Q: How can we get E[T_iA], the expected time to absorption starting from i?
Q: Could we just sum the powers P^(n) over n?
A: No, that sum goes to infinity (the entries in the absorbing column tend to 1), which is why the next slides restrict attention to the transient part of P.

The transition matrix can be written in canonical form: transient states written first, followed by the absorbing ones (see the canonical form written out below).
Calculate P^(n) using the canonical form.
Q: What is Q^n as n → ∞?
A: It goes to O, the zero matrix: the probability of still being in a transient state after n steps vanishes.
Q: Where does the (*) part of the matrix converge to, if there is only one absorbing state?
A: To a column vector of all 1s.
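
(The slide shows the canonical form only as a picture; written out in standard notation, with t transient and r absorbing states:)

\[
P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix},
\qquad
P^{(n)} = \begin{pmatrix} Q^{n} & * \\ 0 & I \end{pmatrix},
\]

where Q is the t x t block of transient-to-transient probabilities, R the t x r block of transient-to-absorbing probabilities, I the r x r identity, and (*) collects the n-step transient-to-absorbing probabilities.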

Theorem: The matrix (I - Q) has an inverse.
N = (I - Q)^{-1} is called the fundamental matrix.
N = I + Q + Q^2 + …
n_ik: the expected number of times the chain is in state k, starting from state i, before being absorbed.
Proof:
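
(The proof itself does not survive in the transcript; a standard argument, reconstructed:)

Since the chain is absorbing, every transient state can reach an absorbing state, so Q^n → O as n → ∞. Then

\[
(I - Q)\,(I + Q + \dots + Q^{n-1}) \;=\; I - Q^{n} \;\to\; I \quad (n \to \infty),
\]

so the series N = I + Q + Q^2 + … converges and equals (I - Q)^{-1}. Moreover, (Q^n)_{ik} is the probability that the chain is in transient state k after n steps, starting from i, so summing over n counts the expected number of visits before absorption: n_ik = Σ_{n≥0} (Q^n)_{ik} = N_ik.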

Theorem:
Let T_i be the expected number of steps before the chain is absorbed, given that the chain starts in state i, and let T be the column vector whose i-th entry is T_i. Then T = Nc, where c is a column vector all of whose entries are 1.
Proof:
Σ_k n_ik adds up all entries in the i-th row of N. This is the expected number of steps spent in any of the transient states, for starting state i, i.e. the expected time required before being absorbed. Hence T_i = Σ_k n_ik, which is exactly the i-th entry of Nc.
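
(As a concrete check, not on the slide: the maze from the earlier slides, with transient states = rooms 1, 2, 3 and Q read off the room connectivity.)

\[
Q = \begin{pmatrix} 0 & 1 & 0 \\ 1/3 & 0 & 1/3 \\ 0 & 1 & 0 \end{pmatrix},
\qquad
N = (I - Q)^{-1} = \begin{pmatrix} 2 & 3 & 1 \\ 1 & 3 & 1 \\ 1 & 3 & 2 \end{pmatrix},
\qquad
T = Nc = \begin{pmatrix} 6 \\ 5 \\ 6 \end{pmatrix},
\]

which matches T_1 = 6, T_2 = 5, T_3 = 6 from the first-step analysis; the middle column of N says room 2 is visited 3 times on average, consistent with the geometric(1/3) argument.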

Theorem:
b_ij: the probability that an absorbing chain will be absorbed in (absorbing) state j, if it starts in (transient) state i.
B: the (t-by-r) matrix with entries b_ij. Then B = NR, with R as in the canonical form.
Proof:
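
(Again the proof is not in the transcript; a standard derivation, reconstructed: absorption into j starting from i happens at some step n+1, from some transient state k reached after n steps and followed by a direct jump to j.)

\[
b_{ij} \;=\; \sum_{n \ge 0} \sum_{k} (Q^{n})_{ik}\, R_{kj}
\;=\; \Big( \sum_{n \ge 0} Q^{n} R \Big)_{ij}
\;=\; (NR)_{ij}.
\]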

Back to the Hot Potato Routing example: use Matlab to get the matrices for the 9-router chain.
Matrix N = (numerical values shown on the slide, not reproduced in the transcript)
Vector T = (numerical values shown on the slide, not reproduced in the transcript)
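
(Since the numbers above are only visible on the slide image, here is a minimal NumPy sketch of the same computation applied to the maze example instead; the slide itself uses Matlab, and the variable names are mine.)

    import numpy as np

    # Canonical-form blocks for the maze: rooms 1, 2, 3 are transient,
    # the exit is the single absorbing state.
    Q = np.array([[0.0, 1.0, 0.0],
                  [1/3, 0.0, 1/3],
                  [0.0, 1.0, 0.0]])
    R = np.array([[0.0],
                  [1/3],
                  [0.0]])

    N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix N = (I - Q)^{-1}
    T = N @ np.ones(3)                 # expected time to absorption, T = N c
    B = N @ R                          # absorption probabilities, B = N R

    print(N)   # [[2. 3. 1.] [1. 3. 1.] [1. 3. 2.]]
    print(T)   # [6. 5. 6.]
    print(B)   # [[1.] [1.] [1.]]  (only one absorbing state, so all 1s)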

Another application: a wireless path consisting of H hops (links), each with link success probability p.
A packet is (re-)transmitted up to M times on each link.
If all M attempts on a link fail, the packet is retransmitted from the source (end-to-end).
Q: How many transmissions does it take until end-to-end success?
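
(The slides stop at the question. Purely as a sketch of how the absorbing-chain machinery above could be applied, here is one possible model, with my own state encoding and assumed parameter values, in which every transmission attempt is one step of the chain.)

    import numpy as np

    def expected_transmissions(H, M, p):
        """Expected number of transmissions until end-to-end success.

        State (h, m): about to transmit on hop h (1..H), having already failed
        m times (0..M-1) on this hop. Success on hop H absorbs the chain, and
        each step of the chain is exactly one transmission.
        """
        states = [(h, m) for h in range(1, H + 1) for m in range(M)]
        idx = {s: i for i, s in enumerate(states)}
        n = len(states)
        Q = np.zeros((n, n))
        for (h, m), i in idx.items():
            if h < H:                        # success: move on to the next hop
                Q[i, idx[(h + 1, 0)]] += p
            if m + 1 < M:                    # failure: retry this hop...
                Q[i, idx[(h, m + 1)]] += 1 - p
            else:                            # ...or restart from the source after M failures
                Q[i, idx[(1, 0)]] += 1 - p
        N = np.linalg.inv(np.eye(n) - Q)     # fundamental matrix
        T = N @ np.ones(n)                   # expected steps to absorption
        return T[idx[(1, 0)]]                # start at hop 1 with no failures yet

    print(expected_transmissions(H=3, M=2, p=0.9))   # ~3.37 for these assumed values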