
Markov Models
CS6800 Advanced Theory of Computation, Fall 2012
Vinay B Gavirangaswamy

Introduction

Markov property: a process's future values are conditionally dependent only on the present state of the system, not on its past history.

Strong Markov property: similar to the Markov property, except that the conditioning is on a stopping time (a Markov time) rather than on a fixed present time.

Introduction (Contd.)

Markov process: a process that follows the Markov property. For example, a first-order Markov process on a probability space (Ω, F, P) with state space Ω = {S1, S2, S3, ..., Sn} satisfies

P(X_{t+1} = S_j | X_t = S_i, X_{t-1} = S_k, ...) = P(X_{t+1} = S_j | X_t = S_i), for t = 1, 2, 3, ...

where X_t = S_i denotes that the state at time t is S_i.

Markov model: models a stochastic system which assumes the Markov property. It is formally represented by a triplet M = (K, Π, A), where K is a finite set of states, Π is a vector of initial probabilities for each of the states, and A is a matrix representing transition probabilities. A Markov model is a nondeterministic finite state machine.
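The triplet M = (K, Π, A) maps directly onto a small data structure. Below is a minimal sketch, not from the original slides; the state names and probability values are illustrative (chosen to match the weather example that follows):

```python
import random

# A minimal Markov model M = (K, Pi, A): states, initial probabilities,
# and a row-stochastic transition matrix (each row sums to 1).
K = ["SUNNY", "RAINY", "CLOUDY"]   # illustrative states
Pi = [0.30, 0.30, 0.40]            # illustrative initial probabilities
A = [
    [0.42, 0.28, 0.30],   # P(next state | SUNNY)
    [0.20, 0.50, 0.30],   # P(next state | RAINY)
    [0.30, 0.30, 0.40],   # P(next state | CLOUDY)
]

def sample_chain(length):
    """Draw a state sequence of the given length from the model."""
    state = random.choices(range(len(K)), weights=Pi)[0]
    seq = [state]
    for _ in range(length - 1):
        state = random.choices(range(len(K)), weights=A[state])[0]
        seq.append(state)
    return [K[s] for s in seq]

print(sample_chain(5))   # e.g. ['SUNNY', 'SUNNY', 'CLOUDY', 'RAINY', 'RAINY']
```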

Simple Markov Model of Weather

A stochastic FSM over weather states, with edges labeled by transition probabilities. [The original slide shows the FSM diagram together with its transition matrix; neither is reproduced in this transcript.]

Stochastic matrix: each row sums up to 1.
Doubly stochastic matrix: each row and each column sums up to 1.

Prediction Based on State Transition Probability

Suppose we want the probability of the sequence SUNNY SUNNY SUNNY SUNNY SUNNY. Take the initial probability of a SUNNY day, i.e., the probability that any given day is SUNNY: 0.30. The probability of another SUNNY day following a SUNNY day is 0.42. So, using the Markov chain, the probability of 5 consecutive SUNNY days is

P = 0.30 × 0.42⁴ ≈ 0.0093.
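This chain-rule computation generalizes to any state sequence: multiply the initial probability of the first state by the transition probabilities along the sequence. A short sketch, reusing the illustrative K, Pi, A defined above:

```python
def sequence_probability(seq, K, Pi, A):
    """P(seq) = Pi[s0] * A[s0][s1] * A[s1][s2] * ... under a first-order chain."""
    idx = [K.index(s) for s in seq]
    p = Pi[idx[0]]
    for a, b in zip(idx, idx[1:]):
        p *= A[a][b]
    return p

# Probability of five consecutive SUNNY days: 0.30 * 0.42**4 ≈ 0.0093
print(sequence_probability(["SUNNY"] * 5, K, Pi, A))
```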

Types of Markov Model

                      | System state fully observable | System state partially observable
System is autonomous  | Markov chain                  | Hidden Markov model
System is controlled  | Markov decision process       | Partially observable Markov decision process

Types of Markov Model (Contd.)

Markov chain: the simplest Markov model; the system state at time t+1 depends only on the state at time t.
Hidden Markov model: similar to a Markov chain, but the system states are not directly observable.
Markov decision process: models a system where outcomes are partially random and partially under the control of a decision maker.

Types of Markov Model (Contd.)

Markov decision process: formally represented by a 4-tuple M = (K, Π, A, R), where
K is a finite set of states,
Π is a vector of initial probabilities for each of the states,
A is a matrix representing transition probabilities, and
R is the reward given for a transition from state St to state Sp, whose probability is given in matrix A.

Types of Markov Model (Contd.)

Partially observable Markov decision process: similar to a Markov decision process, but the state of the system is only partially observed. It is formally represented by a 5-tuple M = (K, Σ, O, A, Ω), where
K is a finite set of states,
Σ is a set of actions,
O is a set of observations,
A is a matrix representing transition probabilities, and
Ω is a set of conditional observation probabilities.

Markov Chain

A system that transitions from one state to another among a finite number of possible states. The next state depends only on the current state and not on its history; i.e., it follows the Markov property. Formally, for a stochastic process {X_t},

P{X_{t+1} = j | X_0 = k_0, ..., X_{t-1} = k_{t-1}, X_t = i} = P{X_{t+1} = j | X_t = i}

for every i, j, k_0, ..., k_{t-1} and for every t.

Stationary Markov chains: Pr{X_{t+1} = j | X_t = i} = Pr{X_1 = j | X_0 = i} for all t.

Transition Matrix

If the state space is S = {0, 1, ..., m−1}, then

Σ_j p_ij = 1 for all i (we must go somewhere), and
p_ij ≥ 0 for all i, j (each transition has probability ≥ 0),

where p_ij = Pr{X_1 = j | X_0 = i}.
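These two conditions are easy to check programmatically. A small sketch, using a plain list-of-lists matrix with illustrative values:

```python
def is_stochastic(A, tol=1e-9):
    """Check that A is a valid (row-)stochastic transition matrix."""
    return all(
        all(p >= 0 for p in row) and abs(sum(row) - 1.0) < tol
        for row in A
    )

print(is_stochastic([[0.42, 0.58], [0.30, 0.70]]))  # True
```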

Representation of a Markov Chain as a Digraph

Each directed edge A→B is associated with the positive transition probability from A to B. [The original slide shows a four-state digraph over states A, B, C, D with labeled edge probabilities, alongside the corresponding transition matrix; neither is reproduced in this transcript.]

Properties of Markov Chain States

States of a Markov chain are classified by the digraph representation (omitting the actual probability values).

In the slide's example, A, C, and D are recurrent states: they lie in strongly connected components which are sinks in the graph. B is not recurrent; it is a transient state.

Alternative definition: a state s is recurrent if it can be reached from every state reachable from s; otherwise it is transient.
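The sink-component criterion can be checked directly: condense the digraph into its strongly connected components and mark states in sink components as recurrent. A sketch assuming the networkx library is available; the edge set below is illustrative, reconstructing a graph where B is transient:

```python
import networkx as nx

# Illustrative digraph (probabilities omitted; only reachability matters here).
G = nx.DiGraph([("A", "A"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "C")])

# Condense into strongly connected components; sink SCCs have no outgoing edges.
C = nx.condensation(G)
recurrent = set()
for scc_id in C.nodes:
    if C.out_degree(scc_id) == 0:                 # sink component
        recurrent |= set(C.nodes[scc_id]["members"])

print(recurrent)                                  # {'A', 'C', 'D'}; B is transient
```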

Another Example of Recurrent and Transient States

A and B are transient states; C and D are recurrent states. Once the process moves from B to D, it will never come back. [The original slide's four-state digraph is not reproduced here.]

Irreducible Markov Chains

A Markov chain is irreducible if the corresponding graph is strongly connected (and thus all its states are recurrent). [The original slide shows two example digraphs, not reproduced here.]
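Irreducibility is exactly strong connectivity of the transition digraph, which networkx can test directly; the four-cycle graph below is illustrative:

```python
import networkx as nx

G = nx.DiGraph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")])
print(nx.is_strongly_connected(G))   # True: the chain is irreducible
```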

Periodic States

A state s has period k if k is the GCD of the lengths of all the cycles that pass via s (in the slide's example graph, the period of A is 2). A Markov chain is periodic if all the states in it have a period k > 1. It is aperiodic otherwise. [The original slide's five-state example digraph is not reproduced here.]
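Following the slide's definition, the period can be computed by enumerating the cycles through a state and taking the GCD of their lengths. A sketch assuming networkx; the graph below is illustrative, with cycles of lengths 2 and 4 through A, giving A period 2:

```python
from math import gcd
from functools import reduce
import networkx as nx

def period(G, s):
    """GCD of the lengths of all simple cycles passing through state s."""
    lengths = [len(c) for c in nx.simple_cycles(G) if s in c]
    return reduce(gcd, lengths) if lengths else 0

# Cycles through A: A->B->A (length 2) and A->B->C->D->A (length 4).
G = nx.DiGraph([("A", "B"), ("B", "A"), ("B", "C"), ("C", "D"), ("D", "A")])
print(period(G, "A"))   # gcd(2, 4) = 2
```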

Ergodic Markov Chains

A Markov chain is ergodic if:
the corresponding graph is strongly connected, and
it is not periodic.

Ergodic Markov chains are important since they guarantee that the corresponding Markovian process converges to a unique distribution, in which all states have strictly positive probability. [The original slide's example digraph is not reproduced here.]
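That unique limiting distribution can be found numerically by repeatedly multiplying a distribution by the transition matrix until it stops changing. A sketch with numpy; the matrix values are illustrative:

```python
import numpy as np

def stationary_distribution(A, tol=1e-12, max_iter=10_000):
    """Power iteration: iterate pi <- pi @ A until convergence (ergodic chains)."""
    A = np.asarray(A)
    pi = np.full(A.shape[0], 1.0 / A.shape[0])   # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ A
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

A = [[0.42, 0.58],
     [0.30, 0.70]]
print(stationary_distribution(A))   # satisfies pi = pi @ A
```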

Reversible Markov Chain

A Markov chain is reversible if it satisfies the detailed balance condition: π_i p_ij = π_j p_ji for all states i, j, where π is its stationary distribution. [The original slide presents this condition as an image, which did not survive in the transcript; the formula above is the standard statement.]
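Detailed balance is straightforward to verify once the stationary distribution is known. A sketch reusing the stationary_distribution helper above; the matrix is illustrative:

```python
import numpy as np

def is_reversible(A, tol=1e-9):
    """Check detailed balance: pi_i * p_ij == pi_j * p_ji for all i, j."""
    A = np.asarray(A)
    pi = stationary_distribution(A)
    flows = pi[:, None] * A                      # flows[i, j] = pi_i * p_ij
    return bool(np.allclose(flows, flows.T, atol=tol))

# Every two-state chain satisfies detailed balance, so this prints True.
print(is_reversible([[0.42, 0.58], [0.30, 0.70]]))
```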

Markov Model Applications

Physics, chemistry, testing, information sciences, queuing theory, Internet applications, statistics, economics and finance, social science, mathematical biology, games, and music.

"Wkipedia-Markov model," [Online]. Available: http://en. wikipedia "Wkipedia-Markov model," [Online]. Available: http://en.wikipedia.org/wiki/Markov_chain. [Accessed 28 10 2012]. P. Xinhui Zhang, "DTMarkovchains," [Online]. Available: http://www.wright.edu/~xinhui.zhang/. [Accessed 28 10 2012]. References