1. Markov Process
2. States
3. Transition Matrix
4. Stochastic Matrix
5. Distribution Matrix
6. Distribution Matrix for n
7. Interpretation of the Entries of A^n

Suppose that we perform, one after the other, a sequence of experiments that have the same set of outcomes. If the probabilities of the various outcomes of the current experiment depend (at most) on the outcome of the preceding experiment, then we call the sequence a Markov process.

A particular utility stock is very stable and, in the short run, the probability that it increases or decreases in price depends only on the result of the preceding day's trading. The price of the stock is observed at 4 P.M. each day and is recorded as "increased," "decreased," or "unchanged." The sequence of observations forms a Markov process.

The experiments of a Markov process are performed at regular time intervals and have the same set of outcomes. These outcomes are called states, and the outcome of the current experiment is referred to as the current state of the process. The states are represented as column matrices.

The transition matrix records all the data about transitions from one state to another. In the general transition matrix, there is one column for each current state and one row for each next state: the entry in row i, column j is the probability of moving to state i when the process is currently in state j.

A stochastic matrix is any square matrix that satisfies the following two properties:
1. All entries are greater than or equal to 0;
2. The sum of the entries in each column is 1.
All transition matrices are stochastic matrices.
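These two properties are easy to check mechanically. Below is a minimal Python sketch; the helper name is_stochastic and the sample matrices are illustrative, not from the slides:

```python
def is_stochastic(matrix):
    """Return True iff matrix is square, nonnegative, and each column sums to 1."""
    n = len(matrix)
    if any(len(row) != n for row in matrix):
        return False                        # not square
    if any(entry < 0 for row in matrix for entry in row):
        return False                        # property 1 fails
    # Property 2: each column sums to 1 (allow tiny floating-point error).
    return all(abs(sum(matrix[i][j] for i in range(n)) - 1.0) < 1e-9
               for j in range(n))

print(is_stochastic([[0.6, 0.5],
                     [0.4, 0.5]]))   # True: both columns sum to 1
print(is_stochastic([[0.6, 0.5],
                     [0.3, 0.5]]))   # False: first column sums to 0.9
```

Note that the columns, not the rows, must sum to 1 here, matching this text's convention of recording transitions from state j in column j.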

For the utility stock of the previous example, if the stock increases one day, the probability that on the next day it increases is .3, remains unchanged is .2, and decreases is .5. If the stock is unchanged one day, the probability that on the next day it increases is .6, remains unchanged is .1, and decreases is .3. If the stock decreases one day, the probability that on the next day it increases is .3, remains unchanged is .4, and decreases is .3. Find the transition matrix.

The Markov process has three states: "increases," "unchanged," and "decreases." The transitions from the first state ("increases") to the other states are

  to "increases": .3
  to "unchanged": .2
  to "decreases": .5

The transitions from the other two states are

  from "unchanged": to "increases" .6, to "unchanged" .1, to "decreases" .3
  from "decreases": to "increases" .3, to "unchanged" .4, to "decreases" .3

Putting this information into a single matrix, so that each column records the transitions from one particular state, gives the transition matrix:

                 incr.  unch.  decr.
  increases   [   .3     .6     .3  ]
  unchanged   [   .2     .1     .4  ]
  decreases   [   .5     .3     .3  ]
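The construction above can be sanity-checked in a few lines of Python, using the probabilities stated in the example (the column order "increases," "unchanged," "decreases" is an assumption of this sketch):

```python
states = ["increases", "unchanged", "decreases"]

# Column j holds the probabilities of moving *from* state j.
A = [[0.3, 0.6, 0.3],   # to "increases"
     [0.2, 0.1, 0.4],   # to "unchanged"
     [0.5, 0.3, 0.3]]   # to "decreases"

# Every column of a transition matrix must sum to 1.
for j in range(len(states)):
    col_sum = sum(A[i][j] for i in range(len(states)))
    assert abs(col_sum - 1.0) < 1e-9, f"column for {states[j]} sums to {col_sum}"

print("all columns sum to 1")
```

If any stated probability had been mistyped, the assertion would flag the offending column immediately.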

The matrix that represents a particular state is called a distribution matrix. Whenever a Markov process applies to a group with members in r possible states, the distribution matrix for n is a column matrix whose entries give the percentages of members in each of the r states after n time periods.

Let A be the transition matrix for a Markov process with initial distribution matrix X0. Then the distribution matrix after n time periods is given by

  Xn = A^n X0,

that is, A^n times the initial distribution matrix.

Census studies from the 1960s reveal that in the US 80% of the daughters of working women also work and that 30% of daughters of nonworking women work. Assume that this trend remains unchanged from one generation to the next. If 40% of women worked in 1960, determine the percentage of working women in each of the next two generations.

There are two states, "work" and "don't work." The first column of the transition matrix corresponds to transitions from "work." The probability that a daughter from this state "works" is .8 and "doesn't work" is 1 − .8 = .2. Similarly, a daughter from the "don't work" state "works" with probability .3 and "doesn't work" with probability .7.

The transition matrix is

                 work  don't work
  work        [   .8       .3    ]
  don't work  [   .2       .7    ]

The initial distribution is

  [ .4 ]   (work)
  [ .6 ]   (don't work)

In one generation,

  [ .8  .3 ] [ .4 ]   [ .8(.4) + .3(.6) ]   [ .50 ]
  [ .2  .7 ] [ .6 ] = [ .2(.4) + .7(.6) ] = [ .50 ]

so 50% of women work and 50% don't work. For the second generation,

  [ .8  .3 ] [ .5 ]   [ .8(.5) + .3(.5) ]   [ .55 ]
  [ .2  .7 ] [ .5 ] = [ .2(.5) + .7(.5) ] = [ .45 ]

so 55% of women work and 45% don't work.
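The two generations above can be reproduced with a short matrix-times-vector sketch in Python (the helper name step is illustrative):

```python
def step(A, x):
    """One time period: multiply transition matrix A by distribution column x."""
    n = len(x)
    return [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

A = [[0.8, 0.3],   # column 1: from "work"; column 2: from "don't work"
     [0.2, 0.7]]
x0 = [0.4, 0.6]    # 40% working in 1960

x1 = step(A, x0)   # first generation
x2 = step(A, x1)   # second generation

print([round(v, 2) for v in x1])  # [0.5, 0.5]
print([round(v, 2) for v in x2])  # [0.55, 0.45]
```

Iterating step once per generation is exactly the repeated multiplication by A that the formula Xn = A^n X0 summarizes.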

The entry in the ith row and jth column of the matrix A^n is the probability of the transition from state j to state i after n periods.

Interpret A^2 from the last example. Multiplying A by itself gives

  A^2 = [ .70  .45 ]
        [ .30  .55 ]

If a woman works, the probability that her granddaughter will work is .70 and will not work is .30. If a woman does not work, the probability that her granddaughter will work is .45 and will not work is .55.

A Markov process is a sequence of experiments performed at regular time intervals involving states. As a result of each experiment, transitions between states occur with probabilities given by a matrix called the transition matrix. The ijth entry in the transition matrix is the conditional probability Pr(moving to state i | in state j).

A stochastic matrix is a square matrix for which every entry is greater than or equal to 0 and the sum of the entries in each column is 1. Every transition matrix is a stochastic matrix.

The nth distribution matrix gives the percentage of members in each state after n time periods.

A^n is obtained by multiplying together n copies of A. Its ijth entry is the conditional probability Pr(moving to state i after n time periods | in state j). Also, A^n times the initial distribution matrix gives the nth distribution matrix.
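This interpretation of the powers of A can be checked numerically. The sketch below squares the working-women transition matrix from the example (the helper name matmul is illustrative):

```python
def matmul(A, B):
    """Multiply two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.8, 0.3],
     [0.2, 0.7]]   # the working-women transition matrix

A2 = matmul(A, A)  # A^2: two-generation transition probabilities

# Entry (i, j) of A^2 is Pr(in state i after 2 periods | started in state j).
print([[round(v, 2) for v in row] for row in A2])  # [[0.7, 0.45], [0.3, 0.55]]
```

The entries agree with the granddaughter probabilities read off earlier: column 1 gives (.70, .30) for a working grandmother and column 2 gives (.45, .55) for a nonworking one.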