Kurtis Cahill, James Badal

- Introduction
- Model a Maze as a Markov Chain
- Assumptions
- First Approach and Example
- Second Approach and Example
- Experiment
- Results
- Conclusion

- Problem: find an efficient approach to computing the rate of visitation of each cell inside a large maze
- Application: finding the best possible place to intercept information

- Modeling the maze as a Markov chain allows stochastic principles to be applied to the problem
- Each maze cell is modeled as a state in the Markov chain (a construction sketch follows below)
- The Markov chain forms a single recurrent class
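A minimal construction sketch in Python, assuming the maze is supplied as an adjacency list; the helper name `transition_matrix` and the fully open 2x2 example layout are illustrative, not taken from the slides:

```python
import numpy as np

def transition_matrix(adjacency):
    """Transition matrix of an unbiased random walk on a maze.

    `adjacency` maps each cell index to the cells reachable through
    open walls. Each step moves to a uniformly chosen neighbor,
    matching the unbiased, always-moving walk assumed below.
    """
    n = len(adjacency)
    P = np.zeros((n, n))
    for cell, neighbors in adjacency.items():
        for nb in neighbors:
            P[cell, nb] = 1.0 / len(neighbors)
    return P

# Hypothetical fully open 2x2 maze: cells 0-1 and 2-3 are horizontal
# neighbors, cells 0-2 and 1-3 are vertical neighbors.
maze = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
P = transition_matrix(maze)
```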

To reduce the complexity of the problem and the simulation, the following assumptions apply:
1. Transitions to adjacent cells are unbiased (each open neighbor is equally likely)
2. The random walk cannot stay in place (the walker must move at every step)
3. There are no isolated cells inside the maze

- $r_i$: steady-state rate of the $i$th state of the Markov chain
- $p_{ji}$: probability of moving from state $j$ to state $i$ on the next step
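These rates satisfy the standard balance equations of a discrete-time Markov chain; the slides show the specific system for the example maze as an image, but the general form is:

```latex
r_i = \sum_j r_j \, p_{ji}, \qquad \sum_i r_i = 1.
```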

The transition matrix for the random walk on this maze
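The matrix itself appears only as an image in the original slides. As a hypothetical stand-in, for the fully open 2x2 maze from the sketch above, where every cell has exactly two neighbors, the transition matrix would be:

```latex
P = \begin{pmatrix}
0 & \tfrac{1}{2} & \tfrac{1}{2} & 0 \\
\tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\
\tfrac{1}{2} & 0 & 0 & \tfrac{1}{2} \\
0 & \tfrac{1}{2} & \tfrac{1}{2} & 0
\end{pmatrix}
```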

System of Steady State Rate Equations

Row Reduced System of Steady State Rate Equations
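A sketch of the first approach in Python: rewrite the balance equations as a dense linear system, replace one redundant row with the normalization constraint, and solve. The Gaussian elimination behind `np.linalg.solve` is what makes this approach O(n^3):

```python
import numpy as np

def steady_state_direct(P):
    """First approach: solve r = rP as a dense linear system.

    (P^T - I) r = 0 has rank n-1 for an irreducible chain, so the
    last row is replaced by the normalization sum(r) = 1 before
    solving. Cost is O(n^3) in the number of states.
    """
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0              # normalization row: r_1 + ... + r_n = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Transition matrix of the hypothetical 2x2 maze used earlier
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.5, 0.5, 0.0]])
print(steady_state_direct(P))   # -> [0.25 0.25 0.25 0.25]
```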

- $r_i$: steady-state rate of the $i$th state of the Markov chain
- $p$: proportionality constant
- $n_i$: number of connections to the $i$th cell
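This works because an unbiased walk on an undirected maze is reversible, so the steady-state rate of a cell is proportional to its number of connections; normalization then fixes the constant. The slides give the worked solution as an image, but the general relations are:

```latex
r_i = p \, n_i, \qquad p = \frac{1}{\sum_j n_j}.
```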

Solution to System of Steady State Rate Equations
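A sketch of the second approach in Python: a single pass over the adjacency list yields the rates, which is what makes this approach O(n). The `maze` layout is the same hypothetical 2x2 example as before:

```python
def steady_state_by_degree(adjacency):
    """Second approach: rates proportional to cell degree.

    r_i = p * n_i with p = 1 / sum_j n_j, so one pass over the
    cells suffices: O(n) in the number of cells.
    """
    total = sum(len(nbrs) for nbrs in adjacency.values())
    return {cell: len(nbrs) / total for cell, nbrs in adjacency.items()}

maze = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(steady_state_by_degree(maze))   # every cell: 2/8 = 0.25
```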

- The random walker starts at a chosen maze location and walks 10^8 steps
- At each step, the walker increments the visit count of the cell it has just entered
- The mean and standard deviation of the visit counts are measured at the end of the experiment
- The measured result is compared to the calculated result (a simulation sketch follows below)
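A minimal simulation sketch in Python following the procedure above; the step count is reduced from the slides' 10^8 so the example runs quickly, and the function name is illustrative:

```python
import random

def simulate_walk(adjacency, steps, start=0, seed=0):
    """Walk `steps` unbiased steps and return empirical visit rates.

    After each step, the visit count of the newly entered cell is
    incremented; dividing by the step count gives rates that can be
    compared against the calculated steady-state rates.
    """
    rng = random.Random(seed)
    counts = {cell: 0 for cell in adjacency}
    cell = start
    for _ in range(steps):
        cell = rng.choice(adjacency[cell])
        counts[cell] += 1
    return {c: k / steps for c, k in counts.items()}

maze = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(simulate_walk(maze, 10**6))   # each rate should be near 0.25
```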

Random walk result for a 2x2 maze

Random walk result for a 5x5 maze

Random walk result for a 10x10 maze

Random walk result for a 20x20 maze

Random walk result for a 40x40 maze

- Modeled the maze as a Markov chain
- Applied stochastic principles to the maze
- The first approach has O(n^3) complexity
- The second approach has O(n) complexity
- Compared the calculated results against the measured results