
1
Dynamic Programming Lecture 13 (5/21/2014)

2
- A Forest Thinning Example -
[Figure: a three-stage thinning network. Nodes are residual stand states at ages 10, 20, and 30 (Stages 1-3); arcs carry the timber values associated with each thinning decision.]
Source: Dykstra's Mathematical Programming for Natural Resource Management (1984)
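The thinning network can be solved by the backward recursion developed on the next slides. The sketch below uses a small stage-state network with hypothetical arc values (not Dykstra's actual data) and recursively computes the best total return from the initial state:

```python
# Backward recursion on a small stage-state network, in the spirit of the
# thinning example. States are residual stand conditions at each age; arc
# values (hypothetical, NOT Dykstra's data) are returns from each decision.
arcs = {
    # (stage, state) -> list of (next_state, return)
    (1, 'A'): [('B', 50), ('C', 100)],
    (2, 'B'): [('D', 150), ('E', 200)],
    (2, 'C'): [('D', 175), ('E', 75)],
    (3, 'D'): [('end', 600)],
    (3, 'E'): [('end', 500)],
}

def best_value(stage, state):
    """Maximum total return from (stage, state) to the end of the horizon."""
    if state == 'end':
        return 0
    return max(r + best_value(stage + 1, nxt)
               for nxt, r in arcs[(stage, state)])

print(best_value(1, 'A'))  # optimal value from the initial state: 875
```

Because each state's value is computed only from the values of states one stage later, the network is solved in a single backward sweep instead of by enumerating every path.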

3
Dynamic Programming Cont.
–Stages, states, and actions (decisions)
–Backward and forward recursion

4
Solving Dynamic Programs
Recursive relation at stage i (Bellman equation):
f_i(s) = opt_d [ r_i(s, d) + f_{i+1}(s') ],
where s is the state at stage i, d is the decision taken, r_i(s, d) is the immediate return, and s' is the state reached at stage i+1.

5
Dynamic Programming
Structural requirements of DP:
–The problem can be divided into stages
–Each stage has a finite set of associated states (discrete-state DP)
–The impact of a policy decision that transforms a state in a given stage into a state in the subsequent stage is deterministic
–Principle of optimality: given the current state of the system, the optimal policy for the remaining stages is independent of any prior policy adopted

6
Examples of DP
The Floyd-Warshall algorithm (used in the Bucket formulation of ARM), built on the recurrence
d_k(i, j) = min{ d_{k-1}(i, j), d_{k-1}(i, k) + d_{k-1}(k, j) },
where d_k(i, j) is the shortest i-to-j distance using only intermediate nodes 1, …, k.
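A minimal sketch of Floyd-Warshall on a small illustrative graph (the graph is an assumption, not from the lecture): the DP state is the set of allowed intermediate nodes, grown one node per outer iteration.

```python
# Floyd-Warshall all-pairs shortest paths via the DP recurrence
# d[i][j] = min(d[i][j], d[i][k] + d[k][j]) over intermediate nodes k.
INF = float('inf')

def floyd_warshall(dist):
    """dist: n x n matrix of direct arc lengths (INF where no arc).
    Returns the matrix of shortest-path lengths between all pairs."""
    n = len(dist)
    d = [row[:] for row in dist]          # don't modify the input
    for k in range(n):                    # allow node k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative 4-node graph (hypothetical arc lengths)
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))
```

The triple loop runs in O(n^3), independent of the number of arcs, which is why the algorithm is attractive for dense distance matrices.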

7
Examples of DP cont.
Minimizing the risk of losing an endangered species (non-linear DP):
[Figure: a three-stage decision network; the states at each stage are the budgets still available ($0, $1M, or $2M).]
Source: Buongiorno and Gilles' (2003) Decision Methods for Forest Resource Management

8
Minimizing the risk of losing an endangered species (DP example)
–Stages (t = 1, 2, 3) represent the $ allocation decisions for Projects 1, 2, and 3;
–States (i = 0, 1, 2) represent the budget (in $M) available at stage t;
–Decisions (j = 0, 1, 2) represent the budget carried forward to stage t+1;
–P_t(i, j) denotes the probability of failure of Project t if decision j is made (i.e., i − j is spent on Project t); and
–f_t(i) is the smallest probability of total failure from stage t onward, starting in state i and making the best decision j*.
Then, the recursive relation can be stated as:
f_t(i) = min_{j ≤ i} [ P_t(i, j) · f_{t+1}(j) ], with f_4(j) = 1 (empty product).
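The recursion above multiplies failure probabilities rather than adding costs, which is what makes this DP non-linear. A sketch with illustrative failure probabilities (these numbers are assumptions, not the values tabulated by Buongiorno and Gilles 2003):

```python
# Backward recursion for the budget-allocation DP.
# p[t][amount] = probability that Project t fails if `amount` ($M) is
# spent on it. Illustrative values only.
p = {
    1: {0: 0.60, 1: 0.40, 2: 0.25},
    2: {0: 0.70, 1: 0.50, 2: 0.30},
    3: {0: 0.80, 1: 0.55, 2: 0.35},
}

def f(t, i):
    """Smallest probability that all of Projects t..3 fail,
    given i $M still available (state i)."""
    if t > 3:
        return 1.0  # no projects left: empty product
    # decision j = budget carried to stage t+1; i - j is spent on Project t
    return min(p[t][i - j] * f(t + 1, j) for j in range(i + 1))

print(f(1, 2))  # minimum probability that all three projects fail, given $2M
```

With these numbers the best policy spends the whole $2M on Project 1, since its failure probability drops fastest with funding.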

9
Markov Chains (based on Buongiorno and Gilles 2003)
Transition probability matrix P; the state distribution evolves as p_{t+1} = p_t P.
The vector p_t converges to a vector of steady-state probabilities p*, and p* is independent of the initial distribution p_0!
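The independence of p* from p_0 can be seen by iterating p_{t+1} = p_t P from two different starting distributions; the 2-state matrix below is illustrative, not from the lecture:

```python
# Iterating p_{t+1} = p_t P from two different initial distributions:
# both converge to the same steady-state vector p*.
P = [[0.9, 0.1],
     [0.4, 0.6]]  # illustrative transition probability matrix

def step(p, P):
    """One transition: row vector p times matrix P."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

def steady(p0, P, iters=200):
    p = p0
    for _ in range(iters):
        p = step(p, P)
    return p

a = steady([1.0, 0.0], P)  # start entirely in state 1
b = steady([0.0, 1.0], P)  # start entirely in state 2
print(a, b)                # both ≈ [0.8, 0.2]
```

For this P the steady state solves p* = p* P, giving p* = [0.8, 0.2] regardless of where the chain starts.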

10
Markov Chains cont.
–Berger-Parker Landscape Index
–Mean Residence Times
–Mean Recurrence Times
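Two of the quantities listed above have simple closed forms for an ergodic chain: the mean residence time in state i is 1/(1 − p_ii), and the mean recurrence time of state i is 1/π_i, the reciprocal of its steady-state probability. A sketch on an illustrative 2-state chain (the matrix and its steady state are assumptions, not lecture data):

```python
# Mean residence and recurrence times for an illustrative 2-state chain.
P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = [0.8, 0.2]  # steady-state distribution of this P

# Expected consecutive periods spent in state i before leaving: 1/(1 - p_ii)
residence = [1.0 / (1.0 - P[i][i]) for i in range(2)]
# Expected periods between successive visits to state i: 1/pi_i
recurrence = [1.0 / pi[i] for i in range(2)]

print(residence)   # ≈ [10.0, 2.5]
print(recurrence)  # ≈ [1.25, 5.0]
```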

11
Markov Chains cont.
Forest dynamics: expected revenues or biodiversity with vs. without management

12
Markov Chains cont. Present value of expected returns
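With per-state returns r and discount factor a, the vector of present values of expected returns satisfies v = r + a P v, which can be solved by value iteration. The matrix, returns, and discount factor below are illustrative assumptions:

```python
# Present value of expected returns for a Markov chain: v = r + a P v,
# solved by value iteration. P, r, and a are illustrative only.
P = [[0.9, 0.1],
     [0.4, 0.6]]
r = [100.0, 20.0]   # expected immediate return in each state
a = 0.95            # one-period discount factor

v = [0.0, 0.0]
for _ in range(500):
    # v_new[i] = r[i] + a * sum_j P[i][j] * v[j]
    v = [r[i] + a * sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]
print(v)
```

The same fixed point could be obtained directly by solving the linear system (I − aP) v = r; iteration is shown here because it mirrors the backward recursions used throughout the lecture.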

13
Markov Decision Processes
