Stochastic Processes and Transition Probabilities
D Nagesh Kumar, IISc
Water Resources Planning and Management: M6L5 Stochastic Optimization

Presentation transcript:


Objectives

- To understand stochastic processes, Markov chains and Markov processes
- To discuss transition probabilities and steady-state probabilities

Stochastic Processes

Random hydrological variables (e.g. rainfall, streamflow) are obtained as a sequence of observations of the historic record, called a time series. In a time series, the observations are ordered with reference to time. These observations are often dependent, i.e., the value of the random variable at one time influences its values at later times. A time series is thus essentially a sequence of observations of a single random variable.

A random variable whose value changes with time according to some probabilistic laws is termed a stochastic process. A time series is one realization of a stochastic process, and a single observation at a particular time is one possible value the random variable can take.

Stationary process: the statistical properties of one realization are equal to those of any other realization; equivalently, the probability distribution of a stationary process does not vary with time.
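The distinction between a process and one of its realizations can be illustrated with a short sketch. This is purely illustrative and not from the lecture; the AR(1) process and its coefficient are assumed for demonstration. Each call to realize() produces a different time series, yet all are realizations of the same stationary process.

```python
import random

def realize(n=10, phi=0.5, seed=None):
    """One realization of a stationary AR(1) process X_t = phi*X_{t-1} + e_t.

    phi = 0.5 is an assumed illustrative coefficient (|phi| < 1 for stationarity).
    """
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)  # e_t ~ N(0, 1)
        series.append(round(x, 3))
    return series

# Two realizations of the same process: different numbers, same statistical laws.
print(realize(seed=1))
print(realize(seed=2))
```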

Markov Processes and Markov Chains

In water resources systems modeling, most stochastic processes are treated as Markov processes. A process is a (first-order) Markov process if the dependence of future values of the process on past values is completely determined by its dependence on the current value alone. A first-order Markov process has the property

P[X_{t+1} = x_{t+1} | X_t = x_t, X_{t-1} = x_{t-1}, ..., X_0 = x_0] = P[X_{t+1} = x_{t+1} | X_t = x_t]

Since the current value summarizes all past information relevant to the future, it is often referred to as the state.
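A minimal sketch of simulating a first-order Markov process over discrete states. The transition matrix P below is made up for illustration (the lecture's numerical values are not reproduced in this transcript); in the lecture's setting the states would be inflow class intervals. The point is that the next state is drawn from the current state alone; earlier states are never consulted.

```python
import random

# Assumed illustrative transition matrix: P[i][j] = prob. of moving from state i to j.
P = [[0.6, 0.3, 0.1],
     [0.3, 0.4, 0.3],
     [0.1, 0.4, 0.5]]

def step(state, rng):
    """Draw the next state using only the current state -- the Markov property."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P) - 1  # guard against floating-point round-off

rng = random.Random(0)
state, path = 0, [0]
for _ in range(10):
    state = step(state, rng)   # past states are never used
    path.append(state)
print(path)
```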

Markov Processes and Markov Chains…

A Markov process whose state X_t takes only discrete values is termed a Markov chain. The Markov chain assumption implies that the dependence of any hydrological variable in the next period on its values in the current and all previous periods is completely described by its dependence on the current period's value alone. The dependence of a random variable in the next period on its current value is expressed in terms of transition probabilities.

Transition Probabilities

Consider a stream whose inflow Q is a stationary random variable. Transition probabilities measure the dependence of the inflow during period t+1 on the inflow during period t. The transition probability P_ij^t is defined as the probability that the inflow during period t+1 will be in class interval j, given that the inflow during period t lies in class interval i:

P_ij^t = P[Q_{t+1} = j | Q_t = i]

where Q_t = i indicates that the inflow during period t belongs to the discrete class interval i.

The historical inflow data are first discretized into suitable classes, and each inflow value in the historical data set is assigned a class interval. P_ij^t is then estimated as the number of times an inflow in class i in period t was followed by an inflow in class j in period t+1, divided by the number of times the inflow belonged to class i in period t. A minimal sketch of this procedure is given below.
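The sketch assumes the inflows have already been discretized into integer class indices 0..k-1; the sequence below is hypothetical, not the lecture's data. Transitions between consecutive periods are counted and each row of counts is normalized.

```python
def transition_matrix(classes, k):
    """Estimate P_ij from a sequence of class indices in 0..k-1."""
    counts = [[0] * k for _ in range(k)]
    for i, j in zip(classes, classes[1:]):      # consecutive (t, t+1) pairs
        counts[i][j] += 1
    P = []
    for row in counts:
        total = sum(row)                        # times class i occurred in period t
        P.append([c / total if total else 0.0 for c in row])
    return P

# Hypothetical class-index sequence (0: 0-2, 1: 2-4, 2: 4-6); the lecture's
# 30 inflow values are not reproduced in this transcript.
seq = [1, 2, 2, 1, 0, 1, 1, 2, 0, 1, 2, 2, 1, 1, 0]
for row in transition_matrix(seq, 3):
    print([round(p, 2) for p in row])           # each row sums to 1
```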

Example

A sequence of inflows for 30 time periods is given below. Find the transition probabilities by discretizing the inflows into three intervals: 0-2, 2-4 and 4-6.

[Table of inflows Q_t for t = 1 to 30 — the numerical values are not reproduced in this transcript.]

Example…

Solution: From the data,
Probability of being in the first class (0-2):  PQ_1 = 7/30 = 0.23
Probability of being in the second class (2-4): PQ_2 = 12/30 = 0.40
Probability of being in the third class (4-6):  PQ_3 = 11/30 = 0.37

The number of times a flow in interval j followed a flow in interval i is shown in the form of a matrix (30 values yield only 29 transitions).

[Count matrix of transitions from interval i to interval j — the numerical values are not reproduced in this transcript.]

Example…

Transition probabilities P_ij are then calculated by dividing each entry of the count matrix by the sum of its row. Given an observed flow in interval i in period t, the probabilities of being in one of the possible intervals j in the next period t+1 must sum to 1, so each row of the resulting matrix sums to 1. Matrices of transition probabilities whose rows sum to 1 are also called stochastic matrices or first-order Markov chains.

[Transition probability matrix P_ij — the numerical values are not reproduced in this transcript.]

Steady State Probabilities

Using the transition probability matrix, one can compute the probability of observing a flow in any interval at any period in the future, given the present flow interval. For example, assume the flow in the current time period t = 1 is in interval i = 2. Following the above example, the probabilities PQ_{j,2} of being in each of the three intervals in the following time period t = 2 are the entries of the second row of the transition probability matrix. The probability of being in interval j in time period t = 3 is the sum over all intervals i of the joint probabilities of being in interval i in period t = 2 and making a transition to interval j in period t = 3, i.e.,

PQ_{j,t+1} = Σ_i PQ_{i,t} P_ij    for all intervals j and periods t.
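A sketch of this forward propagation, using an assumed transition matrix since the lecture's numerical matrix is not reproduced in this transcript. Starting from [0, 1, 0] (the flow is known to be in the second interval), the update PQ_{j,t+1} = Σ_i PQ_{i,t} P_ij is applied repeatedly.

```python
# Assumed illustrative transition matrix (same form as the earlier sketches).
P = [[0.6, 0.3, 0.1],
     [0.3, 0.4, 0.3],
     [0.1, 0.4, 0.5]]

pq = [0.0, 1.0, 0.0]          # current period: interval 2 with certainty
for t in range(7):
    # PQ_{j,t+1} = sum_i PQ_{i,t} * P_ij
    pq = [sum(pq[i] * P[i][j] for i in range(3)) for j in range(3)]
    print(t + 2, [round(p, 4) for p in pq])   # probabilities converge as t grows
```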

Steady State Probabilities…

This operation can be continued for N future time periods. After some number of periods, the flow interval probabilities calculated in this way converge to the unconditional probabilities (those calculated from the historical data). These are termed the steady-state probabilities. If the current period flow is in class i = 2, the initial probabilities are [0 1 0].

[Table of flow interval probabilities for seven future periods and the resulting steady-state probabilities — the numerical values are not reproduced in this transcript.]

Steady State Probabilities…

As the future time period t increases, the flow interval probabilities converge to the unconditional probabilities, i.e., PQ_{i,t} will equal PQ_{i,t+1} for each flow interval i. This indicates that as the number of time periods between the current period and a future period increases, the predicted probability of observing a future flow in any particular interval becomes less and less dependent on the current flow interval. The steady-state probabilities can be determined directly by solving

PQ_j = Σ_i PQ_i P_ij    for all intervals j,   together with   Σ_j PQ_j = 1.
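A sketch of solving these steady-state equations directly, again with an assumed matrix. One balance equation is replaced by the normalization constraint Σ_j PQ_j = 1 so that the linear system has a unique solution.

```python
import numpy as np

# Assumed illustrative transition matrix (not the lecture's values).
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.4, 0.5]])

k = P.shape[0]
# PQ_j = sum_i PQ_i * P_ij  <=>  (P^T - I) pq = 0; that system is rank-deficient,
# so one row is replaced by the normalization constraint sum_j PQ_j = 1.
A = P.T - np.eye(k)
A[-1, :] = 1.0
b = np.zeros(k)
b[-1] = 1.0
pq = np.linalg.solve(A, b)
print(pq)   # steady-state interval probabilities; matches the limit of forward propagation
```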

Thank You