DynaTraffic – Models and mathematical prognosis


DynaTraffic – Models and mathematical prognosis Simulation of the distribution of traffic with the help of Markov chains

What is this about? Models of traffic situations. Graphs (graph theory): edges, vertices, matrix representation, vector representation. Markov chains: states, transition probabilities, special states (periodic, absorbing, or transient), steady-state distribution, matrix-vector multiplication. DynaTraffic helps to understand and learn these concepts.

The goal: analysis of a traffic system We are interested in this question: "How many cars are there on a given lane at a certain time?" In order to be able to make statements about the development of a system, we need a model. I.e., first we build a model, and then we control and observe this model.

Mathematical prognosis step 1 Build a model of an everyday situation

Photo from a side perspective

Photo of the layout © Google Imagery 2007

Model of the layout, with cars

Elements for the model without cars: nodes, arrows. What does your model look like?

Model of the layout, without cars: stop points → nodes, lanes → edges.

Representation in DynaTraffic: characters to label the lanes, colored arrows, and a slightly different arrangement.

Models Why does one build models? To better understand systems Models are a useful tool to examine systems Definition of a model: A simplified representation used to explain the workings of a real world system or event. (Source: http://en.wiktionary.org/wiki/model) Mathematical Models try to capture the relevant parameters of natural phenomena and to use these parameters for predictions in the observed system.

Mathematical prognosis step 2 Transformation of the model

Why a transformation method? We are concerned with traffic on single lanes and analyze the traffic with the help of a Markov model. → For that, lanes must be vertices → transformation of the situation graph!

Transformation recipe Transformation of the situation graph to a line graph: Each edge becomes a vertex. There is an edge between two vertices if one can change from one lane to the other in the traffic situation. Each vertex has an edge to itself. (A small code sketch of this recipe follows below.)
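The recipe can also be expressed as a short program. The following Python sketch builds the line graph from a list of lanes; the lane labels, stop-point numbers, and the "lane x ends where lane y starts" test are made-up illustration values, not data or code taken from DynaTraffic.

```python
# Sketch: turn a situation graph into its line graph (the Markov-chain graph).
# The lanes below are hypothetical labels, not DynaTraffic's actual data.

# Situation graph: each lane is a directed edge (from_stop_point, to_stop_point).
lanes = {
    "A": (1, 2),   # lane A leads from stop point 1 to stop point 2
    "B": (2, 3),
    "C": (2, 1),
}

# Step a) Each edge (lane) becomes a vertex of the line graph.
line_graph = {lane: set() for lane in lanes}

# Step b) Add an edge between two lanes if a car can change from one to the
#         other, i.e. the first lane ends where the second one starts.
for x, (_, end_x) in lanes.items():
    for y, (start_y, _) in lanes.items():
        if x != y and end_x == start_y:
            line_graph[x].add(y)

# Step c) Each vertex gets an edge to itself: a car may remain on its lane.
for lane in lanes:
    line_graph[lane].add(lane)

print(line_graph)
```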

Transformation step a) Each edge becomes a vertex.

Transformation step b) There is an edge between two vertices, if one can change from one lane to the other in the traffic situation.

Transformation step c) Each vertex has an edge to itself. I.e.: a car can remain on a lane!

The good news concerning this transformation: we do not need to do this transformation ourselves, it is already done in DynaTraffic. But we should understand it… (Shown: the situation graph and the line graph of the situation graph.)

Mathematical prognosis step 3 Define assumptions

Define the process Every 10 seconds, each car makes a decision with a certain probability (the so-called transition probability): "I change to another lane" or "I remain on this lane". The realization of these decisions is called a transition: cars change their state, if necessary.
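This decision process can be simulated directly. The sketch below follows a single car over a few transitions; the A → A and A → B probabilities match the slides, while the rows for lanes B and C are assumed example values.

```python
import random

# Transition probabilities per lane.
# A -> A (0.17) and A -> B (0.83) come from the slides;
# the entries for lanes B and C are assumed example values.
transitions = {
    "A": {"A": 0.17, "B": 0.83},
    "B": {"B": 0.5,  "C": 0.5},
    "C": {"C": 0.17, "A": 0.83},
}

def next_lane(current: str) -> str:
    """Draw the next lane according to the transition probabilities."""
    lanes = list(transitions[current].keys())
    probs = list(transitions[current].values())
    return random.choices(lanes, weights=probs)[0]

lane = "A"
for step in range(5):                  # five transitions = 50 seconds
    lane = next_lane(lane)
    print(f"after transition {step + 1}: the car is on lane {lane}")
```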

The transition graph The transition probabilities are entered into the line graph → transition graph (= Markov chain).

Our Markov chain The vertices represent possible states, i.e., lanes on which a car can be. The edges show to which other lanes a car can change from each lane.

Meaning of the transition probability? "If there is a car on lane A now, it will change to lane B in the next transition with a probability of 83%."

Alternative representation of the transition graph Transition graph Transition matrix

How to read the transition matrix: the columns give the lane a car comes from, the rows give the lane it goes to.

Empty entries in the transition matrix? If an edge does not exist, there is a 0 in the transition matrix at the corresponding entry.

Summary: our traffic model (overview diagram: photo, model with cars, model without cars, transition graph, transition matrix).

Summary: our traffic model. Step 1: build a model of an everyday situation. Step 2: transformation of the model. Step 3: define assumptions. (Overview diagram: photo, model with cars, model without cars, transition graph, transition matrix.)

Demo DynaTraffic

Our Markov chain again The vertices represent possible states, i.e., lanes on which a car can be. The edges show to which other vertices a car can change from each vertex, and with which probability this happens per transition.

"Do a transition"? To calculate how many cars there are going to be on a certain lane, one needs: the number of cars on the individual lanes, and the probabilities leading to that lane.

How many cars are on lane A after the next transition? Cars on the individual lanes: A: 3 cars, B: 4 cars, C: 7 cars. Probabilities leading to lane A: A → A: 0.17, C → A: 0.83. Calculation: 3 * 0.17 + 7 * 0.83 = 6.32 cars

Probabilities for transitions The required probabilities can be read directly from the transition matrix! 3 * 0.17 + 7 * 0.83 = 6.32 cars
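In Python, the same calculation is a short dot product of one matrix row with the state vector. The sketch below uses the values from the worked example; the probability from B to A is taken as 0 because it does not appear there.

```python
# The row of the transition matrix that leads *to* lane A.
# A -> A and C -> A come from the slide; B -> A is taken as 0
# because it does not occur in the worked example.
row_to_A = {"A": 0.17, "B": 0.0, "C": 0.83}

# Current number of cars per lane (from the slide).
cars = {"A": 3, "B": 4, "C": 7}

# New number of cars on lane A = dot product of this row with the state vector.
new_A = sum(row_to_A[lane] * cars[lane] for lane in cars)
print(new_A)   # 3 * 0.17 + 7 * 0.83 ≈ 6.32 cars
```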

State vector The number of cars per lane in the state vector notation

Calculate transitions As seen: multiplying the first row of the transition matrix with the current state vector (the number of cars on each lane) gives the new number of cars on the first lane.

Calculate transitions compactly With a matrix-vector multiplication, a transition can be calculated for all lanes at once!

Matrix-vector multiplication (the slide works the multiplication out entry by entry: each entry of the new state vector is the sum of the products of one matrix row with the current state vector).
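With NumPy, one transition is a single matrix-vector product. In the sketch below, the entries involving lane A (0.17 and 0.83) come from the slides; the remaining entries are assumed example values, chosen only so that every column sums to 1.

```python
import numpy as np

# Transition matrix T: rows = "to", columns = "from".
# Lane-A entries (0.17, 0.83) come from the slides; the other entries
# are assumed example values so that every column sums to 1.
T = np.array([
    [0.17, 0.0, 0.83],   # to A
    [0.83, 0.5, 0.0 ],   # to B
    [0.0,  0.5, 0.17],   # to C
])

x = np.array([3, 4, 7])  # current state vector: cars on lanes A, B, C

x_next = T @ x           # one transition for all lanes at once
print(x_next)            # approximately [6.32 4.49 3.19]; the 14 cars are preserved
```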

Properties of transition graphs Based on the transition probabilities, certain states of a transition graph can be classified. States can be absorbing, periodic, or transient. In addition, there are steady-state distributions and irreducible transition graphs.

Absorbing state A state with no outgoing transition of positive probability to another state → over time, all cars accumulate there! Where does this happen with real traffic? A junkyard; a dead-end one-way street ;-)

Periodic states States periodically take the same values: traffic oscillates between certain states. Where are such streets in everyday life? E.g., between work and home.

Transient state A state to which a car may never return. A state that is not transient is recurrent.

Irreducible transition graph Each state is reachable from every other state.

Is the following graph irreducible? No! (State D is not reachable from every other state!) Which transition probabilities could be changed in order to make this graph irreducible?
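Irreducibility can be checked mechanically: run a reachability search from every state and test whether it reaches all others. A minimal sketch, using a hypothetical 4-state matrix in which state D cannot be reached:

```python
from collections import deque

def reachable(T, start):
    """All states reachable from `start` along edges with positive probability.
    T[i][j] > 0 means there is an edge from state j to state i (columns = 'from')."""
    n = len(T)
    seen = {start}
    queue = deque([start])
    while queue:
        j = queue.popleft()
        for i in range(n):
            if T[i][j] > 0 and i not in seen:
                seen.add(i)
                queue.append(i)
    return seen

def is_irreducible(T):
    """Irreducible: every state is reachable from every other state."""
    n = len(T)
    return all(reachable(T, s) == set(range(n)) for s in range(n))

# Hypothetical example with states A, B, C, D (= 0, 1, 2, 3):
# no edge leads into D, so D is not reachable from the other states.
T = [
    [0.5, 0.2, 0.3, 0.0],
    [0.5, 0.3, 0.2, 0.5],
    [0.0, 0.5, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.0],
]
print(is_irreducible(T))   # False
```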

Properties of the transition matrix Column sum = 1: stochastic matrix. Column sum < 1: the total number of cars goes toward 0. Column sum > 1: the total number of cars grows infinitely.
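A quick way to verify this property is to check the column sums, for example with NumPy; the matrix is the hypothetical example used earlier.

```python
import numpy as np

def is_column_stochastic(T, tol=1e-9):
    """True if every column sums to 1 (within a small tolerance)."""
    return np.allclose(T.sum(axis=0), 1.0, atol=tol)

# Hypothetical transition matrix from the earlier examples.
T = np.array([
    [0.17, 0.0, 0.83],
    [0.83, 0.5, 0.0 ],
    [0.0,  0.5, 0.17],
])
print(is_column_stochastic(T))   # True: the total number of cars is preserved
```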

Steady-state distribution If a transition graph is irreducible and has no periodic states, then the system settles into a steady-state distribution, independently of the initial state.
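This settling can be observed by applying the transition matrix over and over until the state vector stops changing. A minimal sketch, using the hypothetical matrix from the earlier examples:

```python
import numpy as np

# Hypothetical transition matrix (columns = "from", each column sums to 1).
T = np.array([
    [0.17, 0.0, 0.83],
    [0.83, 0.5, 0.0 ],
    [0.0,  0.5, 0.17],
])

x = np.array([3.0, 4.0, 7.0])       # initial distribution of 14 cars

for _ in range(1000):               # repeat transitions until nothing changes
    x_next = T @ x
    if np.allclose(x_next, x, atol=1e-9):
        break
    x = x_next

print(x)   # steady-state distribution, independent of the initial state vector
```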

Notation of transition probabilities "The probability of changing from vertex A to vertex B is 10%": P(A, B) = 0.1

Summary Properties of states: Periodic: cars move to and fro. Absorbing: all cars eventually end up there. Transient: a car may never return there. Transition graphs can be irreducible: cars can change from every state to every other state. Distributions can be steady-state: the distribution into which the system settles.

Models and their limitations In our model, lanes can hold an infinitely large number of cars. This is not realistic! Therefore: the simulation stops if there are more than 2000 cars on a lane. In the upper-limit mode, an individual upper limit (< 2000) can be defined per lane.

The upper-limit mode Possible application: Different parking areas. Cars should fill the parking areas C, D, and E in this order.

Layout of parking areas in DynaTraffic Upper limits of lanes are displayed

Process upper-limit: lane C has reached its capacity Set all edges incident to C to 0 → no more cars should arrive. This is not a stochastic matrix any more! → The columns must be normalized.
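One possible reading of this locking step, as a sketch: zero the edges that lead from other lanes into the locked lane (keeping its self-loop) and renormalize the columns. The matrix values and the rows = "to", columns = "from" orientation are the assumptions used in the earlier examples, not DynaTraffic's actual implementation.

```python
import numpy as np

def lock_lane(T, locked):
    """Sketch: block arrivals on the locked lane.
    Zero the edges from other lanes into it (its row, keeping the self-loop),
    then renormalize every column so that it sums to 1 again."""
    T = T.copy()
    for j in range(T.shape[1]):
        if j != locked:
            T[locked, j] = 0.0                 # no more cars arrive from lane j
    return T / T.sum(axis=0, keepdims=True)    # make the matrix column-stochastic again

# Hypothetical transition matrix from the earlier examples (A, B, C = 0, 1, 2).
T = np.array([
    [0.17, 0.0, 0.83],
    [0.83, 0.5, 0.0 ],
    [0.0,  0.5, 0.17],
])

T_locked = lock_lane(T, locked=2)   # lane C has reached its capacity
print(T_locked)
```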

Process upper-limit: lane C is unlocked again The original row of the lane is re-established. Only outgoing edges are re-established; this is OK for all vertices. Normalize the column sums.

What is this about? Models of traffic situations. Graphs (graph theory): edges, vertices, matrix representation, vector representation. Markov chains: states, transition probabilities, special states (periodic, absorbing, or transient), steady-state distribution, matrix-vector multiplication. DynaTraffic helps to understand and learn these concepts.

Summary We model and analyze a traffic system with the help of Markov chains. How does the traffic distribution evolve? Does the system settle into a steady state? In this way we can make predictions about the system based on our Markov model! Markov and the board game Snakes and Ladders: http://www.ethbib.ethz.ch/exhibit/mathematik/leiterspiel.html Markov and Monopoly: http://www.bewersdorff-online.de/monopoly/

DynaTraffic user interface: situation graph, transition graph, state statistics, control of transitions, transition matrix & state vector.

Demo DynaTraffic