
BIOECONOMICS
Oscar Cacho, School of Economics, University of New England
AARES Pre-Conference Workshop, Queenstown, New Zealand, 13 February 2007

Outline
Definitions
General models
Solution techniques
Incorporating risk
Examples
Extensions
Useful literature

Bioeconomics - Definitions
In its original coinage, "bioeconomics" referred to the study of how organisms of all kinds earn their living in "nature's economy," with particular emphasis on co-operative interactions and the progressive elaboration of the division of labor (see Hermann Reinheimer, Evolution by Co-operation: A Study in Bioeconomics, 1913). Today the term is used in various ways, from Georgescu-Roegen's thermodynamic analyses to the work in ecological economics on the problems of fisheries management.
Corning (1996), Institute for the Study of Complex Systems, Palo Alto, CA

Bioeconomics - Definitions
Bioeconomics is what bioeconomists do. Bioeconomics aims at the integration or consilience (Wilson 1998) of two disciplines...for the purpose of enriching both disciplines by substantially enlarging the theoretical and empirical bases which ultimately contribute to the building of new hypotheses, theorems, theories and paradigms.
Landa (1999), Department of Economics, York University; Editor of the Journal of Bioeconomics

Bioeconomics - Definitions
The use of mathematical models to relate the biological performance of a production system to its economic and technical constraints. Allen et al. (1984)
The idea of maximizing net economic yield while maintaining sustainable yield. van der Ploeg et al. (1987)
The interrelations between the economic forces affecting the fishing industry and the biological factors that determine the production and supply of fish in the sea. Clark (1985) (more in line with how AARES members apply the term)

Journal of Bioeconomics: a sample of titles (with keywords)
The Bioeconomics of Cooperation (new institutional economics)
The Ecology of Trade (sustainability)
Surrender Value of Capital Assets: The Economics of Strategic Virginity Loss (love)
Making Good Decisions with Minimal Information: Simultaneous and Sequential Choice (ecological rationality)
Altruism and Spite in a Selfish Gene Model of Endogenous Preferences (evolution)
Evolutionary Theory and Economic Policy with Reference to Sustainability (behavioral economics)

Journal of Bioeconomics: a sample of titles in resource management
The Bioeconomics of Marine Sanctuaries
The Bioeconomics of the Spatial Distribution of an Endangered Species: The Case of the Swedish Wolf Population
Implementing a Stochastic Bioeconomic Model for the North-East Arctic Cod Fishery
Optimization of Harvesting Return from Age-Structured Population
Selective versus Random Moose Harvesting: Does it Pay to be a Prudent Predator?
Using Genetic Algorithms to Estimate and Validate Bioeconomic Models: The Case of the Ibero-atlantic Sardine Fishery

Bioeconomics of renewable resources
Populations of natural organisms can be viewed as stocks of capital assets which provide potential flows of services. The critical characteristics of capital are:
Durability: makes it necessary to apply intertemporal planning.
Adjustment costs: force the decision maker to consider the future in order to spread out the cost of altering the capital stock.
Types of decisions:
Timing problem: e.g. when to harvest a stand of trees.
Harvest problem: e.g. how much of a resource to harvest each year.
In both cases the flow of profits per time period depends upon the stock level (biomass) and the control variable (harvest). Wilen (1985)

Bioeconomics of renewable resources
In the simplest case the value derived from natural resources is related to consumptive use by harvesting. The flows are usually measured in terms of number of organisms or biomass (weight). In more complex cases the size of the stock may also have intrinsic value (e.g. the number of birds available for birdwatchers). Models can be extended to include externalities.
Wilen, J.E. (1985). Bioeconomics of renewable resource use. In Kneese, A.V. and Sweeney, J.L. (eds), Handbook of Natural Resource and Energy Economics, Vol. 1. North-Holland, Amsterdam, 61-124.

General model in continuous time
x(t) = state variable (resource stock); u(t) = control variable (harvest rate)
max over u(t):  J = ∫₀ᵀ R(x,u) dt + F(x(T))    (reward; final value)
subject to:
ẋ = f(x,u)    (equation of motion)
x(0) = x₀    (initial state)
The Hamiltonian is:
H = R(x,u) + λ(t)·f(x,u)

FOC in continuous time
∂H/∂u = 0    (maximum condition)
λ̇ = -∂H/∂x    (adjoint equation)
ẋ = f(x,u)    (equation of motion)
x(0) = x₀    (initial state)
λ(T) = F′(x(T))    (transversality condition)
This system is used to solve for the optimal trajectories u*(t), x*(t), λ*(t).

General model in discrete time
x_t = state variable (resource stock); u_t = control variable (harvest rate)
max over u_t:  J = Σ_{t=0}^{T-1} R(x_t,u_t) + F(x_T)    (reward; final value)
subject to:
x_{t+1} = x_t + f(x_t,u_t)    (state transition)
x_0 given    (initial state)
The Hamiltonian is:
H_t = R(x_t,u_t) + λ_{t+1}·f(x_t,u_t)

Interpretation
The Hamiltonian is the total rate of increase in the value of the asset (resource):
H_t = R(x_t,u_t) + λ_{t+1}·f(x_t,u_t)
where R(x_t,u_t) is the value of net returns at time t and λ_{t+1} is the shadow price of the state variable (x) at time t (user cost).

FOC in discrete time
∂H_t/∂u_t = 0,    t = 0,...,T-1    (maximum condition)
λ_t - λ_{t+1} = ∂H_t/∂x_t,    t = 1,...,T-1    (adjoint equation)
λ_T = F′(x_T)    (transversality condition)
x_{t+1} = x_t + f(x_t,u_t),    t = 0,...,T-1    (state transition)
x_0 given    (initial state)
This system has 3T+1 equations and 3T+1 unknowns:
u_t for t = 0,...,T-1
x_t for t = 0,...,T
λ_t for t = 1,...,T

Infinite horizon with discounting
max over u_t:  J = Σ_{t=0}^{∞} δᵗ R(x_t,u_t),    discount factor: δ = 1/(1+r)
subject to:
x_{t+1} = x_t + f(x_t,u_t)    (state transition)
x_0 given    (initial state)
The current-value Hamiltonian is:
H̃_t = R(x_t,u_t) + δλ_{t+1}·f(x_t,u_t)

FOC of infinite horizon problem
(1) ∂R/∂u + δλ_{t+1}·∂f/∂u = 0    (maximum condition)
(2) λ_t = ∂R/∂x + δλ_{t+1}·(1 + ∂f/∂x)    (adjoint equation)
(3) x_{t+1} = x_t + f(x_t,u_t)    (state transition)
(4) In steady state: u_{t+1} = u_t = u,  x_{t+1} = x_t = x,  λ_{t+1} = λ_t = λ
Solving (1) for λ at steady state, substituting into (2) and rearranging yields the optimal condition:
∂f/∂x − (∂f/∂u)·(∂R/∂x)/(∂R/∂u) = r
This can be used to solve for (x*, u*) given the steady-state condition from (3): f(x,u) = 0.
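The steady-state pair (x*, u*) can be found numerically from the optimal condition together with f(x,u) = 0. A minimal sketch in Python follows; the logistic growth function f(x,u) = g·x·(1 − x/K) − u, the stock-dependent net revenue R(x,u) = p·u − c·u/x, and all parameter values are illustrative assumptions, not part of the workshop material.

```python
# Hypothetical logistic-growth fishery used only to illustrate solving
# the steady-state condition  f_x - f_u * (R_x / R_u) = r  jointly with
# the steady-state requirement  f(x, u) = 0.  All numbers are illustrative.
g, K = 0.5, 100.0      # intrinsic growth rate, carrying capacity
p, c = 10.0, 50.0      # harvest price, stock-dependent cost parameter
r = 0.05               # discount rate

def f(x, u):
    """Net growth of the stock (equation of motion)."""
    return g * x * (1.0 - x / K) - u

def condition(x):
    """Optimal condition with u substituted at its steady-state value."""
    u = g * x * (1.0 - x / K)          # from f(x, u) = 0
    f_x = g * (1.0 - 2.0 * x / K)      # partial of f wrt x
    f_u = -1.0                         # partial of f wrt u
    R_x = c * u / x**2                 # partial of R = p*u - c*u/x wrt x
    R_u = p - c / x                    # partial of R wrt u
    return f_x - f_u * (R_x / R_u) - r

# Bisection on x in (c/p, K): R_u > 0 on this interval and the
# condition changes sign between the endpoints.
lo, hi = c / p + 1e-6, K - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if condition(lo) * condition(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_star = 0.5 * (lo + hi)
u_star = g * x_star * (1.0 - x_star / K)
print(x_star, u_star)
```

The bisection is deliberately simple; any scalar root-finder would do, since the steady-state condition reduces to one equation in x once u is eliminated.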

Summary of general model

Typical problems in bioeconomics

Solution techniques
Numerical solution of the optimal control model
Nonlinear programming
Dynamic programming
Here I will consider only optimal control and dynamic programming.

Numerical optimal control

Dynamic Programming (DP)
The recursive equation:
V_t(x_t) = max over u_t { R(x_t,u_t) + δ·V_{t+1}(x_{t+1}) }
subject to:
x_{t+1} = x_t + f(x_t,u_t)    (state transition)
To solve:
1. Set the terminal value V_T(x).
2. Solve by backward recursion for a finite set of state values.
3. Obtain the optimal (state-contingent) decision rule u*_t(x).
4. Use this decision rule to derive the optimal path for any initial state x_0.
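The four steps above can be sketched in Python. The stand-in problem below (harvesting a stock with logistic regrowth on coarse state and control grids) is an illustrative assumption, not the workshop's model.

```python
# Minimal deterministic DP by backward recursion on a grid (illustrative).
import numpy as np

X = np.linspace(0.0, 1.0, 51)          # state grid (stock level)
U = np.linspace(0.0, 0.3, 31)          # control grid (harvest)
delta, T = 0.95, 50                    # discount factor, horizon

def growth(x, u):
    # State transition x' = x + f(x, u), clipped to the grid range.
    return np.clip(x + 0.4 * x * (1.0 - x) - u, X[0], X[-1])

def reward(x, u):
    # Reward is the harvest; infeasible harvests (u > x) are penalised.
    return np.where(u <= x, u, -1.0)

V = np.zeros(len(X))                   # step 1: terminal value V_T = 0
policy = np.zeros((T, len(X)), dtype=int)
for t in range(T - 1, -1, -1):         # step 2: backward recursion
    xn = growth(X[:, None], U[None, :])            # next state, all (x, u)
    # Map x' to a grid index (grid point at or above; adequate for a sketch).
    idx = np.searchsorted(X, xn).clip(0, len(X) - 1)
    Q = reward(X[:, None], U[None, :]) + delta * V[idx]
    policy[t] = Q.argmax(axis=1)       # step 3: state-contingent rule
    V = Q.max(axis=1)

x = 0.2                                # step 4: follow the rule from any x_0
for t in range(T):
    i = min(np.searchsorted(X, x), len(X) - 1)
    x = growth(x, U[policy[t, i]])
print(round(float(x), 3))
```

The decision rule is stored for every period and state, so the optimal path can be regenerated from any initial condition without re-solving.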

Dynamic programming algorithm

Alternative DP solution techniques
Finite horizon models: backward recursion.
Infinite horizon models:
Function iteration (essentially the same as backward recursion, but stop when the change in V falls below a tolerance).
Policy iteration (the DP is converted into a root-finding problem; practical only for infinite-horizon problems).
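Function iteration can be sketched as follows on a tiny two-state, two-control problem; the transition probabilities and rewards are made-up numbers for illustration only.

```python
# Function iteration for an infinite-horizon DP: iterate the Bellman
# operator until the change in the value function is below a tolerance.
import numpy as np

delta, tol = 0.9, 1e-8
# P[i, j, k]: probability of moving from state i to state j under control k.
P = np.zeros((2, 2, 2))
P[:, :, 0] = [[0.9, 0.1], [0.2, 0.8]]
P[:, :, 1] = [[0.5, 0.5], [0.6, 0.4]]
R = np.array([[0.0, 1.0],              # R[i, k]: reward in state i, control k
              [2.0, 3.0]])

V = np.zeros(2)
while True:
    # Expected continuation value for every (state, control) pair.
    Q = R + delta * np.einsum('ijk,j->ik', P, V)
    V_new = Q.max(axis=1)
    if np.abs(V_new - V).max() < tol:  # stop when the change in V < tolerance
        break
    V = V_new
u_star = Q.argmax(axis=1)              # stationary optimal decision rule
print(V_new, u_star)
```

Because the Bellman operator is a contraction with modulus δ < 1, the iteration converges geometrically to the infinite-horizon value function.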

Introducing risk
Two general approaches exist for stochastic optimisation of bioeconomic models:
Stochastic differential equations (Ito calculus) for continuous models
Stochastic dynamic programming (SDP) for discrete models
Here I will only deal with SDP.

SDP Basics
As before, the state variable x_t can be observed before selecting a value for the control u_t, which results in a known reward R(x_t,u_t).
But now future returns are uncertain because the system is subject to stochastic influences:
x_{t+1} = x_t + f(x_t,u_t,ε_t)
where ε_t is a random variable with known probability distribution, assumed to be iid; the stochastic process {ε_t} is therefore stationary.
There is a fixed state set X and a fixed control set U, with n and m elements respectively.
The time horizon may be fixed at T or infinite.

The transition probability matrix (TPM)
Let the Markovian probability matrix P(u) denote the n × n matrix of state transition probabilities when policy u is followed:
P_ij(u) = Pr(x_{t+1} = x_j | x_t = x_i, u_t = u)
(the probability of jumping from state i to state j, given that action u is taken)
To solve the problem, first create an array P of dimensions n × n × m (P contains the m transition probability matrices P(u)).

SDP Algorithm
1. Set dimensions (n, m) and initialise the state set X and control set U.
2. Run Monte Carlo simulation to create the transition probability matrix P(n,n,m) and reward matrix R(n,m).
3. Initialise the terminal value vector V_T(x) and set t = T-1.
4. Recursion step; solve, for all x ∈ X:
V_t(x_i) = max over u { R_i(u) + δ·Σ_j P_ij(u)·V_{t+1}(x_j) }
5. Save the optimal decision rule.
6. Decrease the time counter and return to 4 until t = 0 or convergence is achieved.

SDP Algorithm (2)
7. For an infinite horizon problem, create the optimal transition probability matrix P*(n,n) by selecting the elements of the Markovian probability matrices that satisfy the optimal decision rule for the given state (a).
8. Simulate the optimal state path by performing Monte Carlo simulation for any initial state x_0.
(a) P*_ij is the probability of jumping from state i to state j in the following period given that the optimal policy u*(x_i) is followed.

Example: Weed Control
A weed can be viewed as a renewable resource with the seed bank representing the stock of this resource (x). The size of x changes through time due to depletion by weed management and new organisms being created via seed production. The change in the seed bank from one period to the next is represented by the state transition equation x_{t+1} - x_t = f(x_t,u_t). The seed bank can be regulated through control u by targeting reproduction and seed mortality. The objective is to determine the level of control (u) in each season that maximises profit over a period of T years.
Jones and Cacho (2000). A dynamic optimisation model of weed control. http://www.une.edu.au/economics/publications/gsare/index.php

Weed control model
subject to: the simulation model
The simulation model consists of a system of equations that represent the weed population dynamics, the effect of weed density on crop yields and the effect of herbicide on weed survival.
The reward is net revenue:
R(x_t,u_t) = p_y·y − p_u·u_t − c_y
where x_t = seed bank (seeds/m2), u_t = herbicide (l/ha), y = crop yield (t/ha), p_i = price of i ($/unit), c_y = cropping cost ($/ha).

Numerical optimal control (NOC)
The Hamiltonian:
H_t = R(x_t,u_t) + λ_{t+1}·f(x_t,u_t)
is the net profit obtained from the existing levels of x_t and u_t, plus the value of a unit change in x_t valued at price λ_{t+1}.
λ_{t+1}, the costate variable, represents the shadow price of a unit of the stock of the seed bank; its value is negative because the state variable is bad for profits.

NOC results
The Hamiltonian and its components (plot of H(x,u,λ) and its components R(x,u) and λ·f(x,u), in $/ha, against the control u, evaluated at x = 50, λ = -2).

NOC results
Optimal paths (plots of the optimal control u_t* and the optimal state x_t* against time t).

NOC results
The costate variable (plot of the shadow price of the seed bank along the optimal path u_t*).

SDP model
Use the simulation model to generate the transition probability matrix (P) and reward matrix (R):
1. Solve the simulation model: x_j = x_i + f(x_i, u, ε_l), for l = 1,…,k, where k is the number of Monte Carlo iterations, with each ε_l drawn from a lognormal distribution.
2. Use the results from 1 to estimate P_ij(u), the frequency with which state x_i jumps to state x_j under control u.
3. Calculate the reward R_i(u).
4. Repeat steps 1-3 for x_i = x_1,…,x_n to fill up the rows of P and R.
5. Repeat step 4 with u = u_1,…,u_m to fill up the columns of R and the 3rd dimension of P.
6. Perform backward recursion to solve the SDP.
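Steps 1-5 can be sketched in Python. The stochastic growth function, grids and reward used below are illustrative stand-ins, not the Jones and Cacho weed model.

```python
# Estimate a TPM array P(n, n, m) and reward matrix R(n, m) by Monte Carlo
# simulation of a stochastic transition with lognormal shocks (illustrative).
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(10.0, 100.0, 10)       # state grid (n = 10)
U = np.linspace(0.0, 3.0, 4)           # control grid (m = 4)
n, m, k = len(X), len(U), 2000         # k Monte Carlo draws per (state, control)

P = np.zeros((n, n, m))
R = np.zeros((n, m))
for a, u in enumerate(U):              # step 5: loop over controls
    for i, x in enumerate(X):          # step 4: loop over states
        # Step 1: simulate k stochastic transitions from state x under u.
        eps = rng.lognormal(mean=0.0, sigma=0.2, size=k)
        growth = 0.5 * x * (1.0 - x / 120.0) * eps
        x_next = np.clip(x + growth - 20.0 * u, X[0], X[-1])
        # Step 2: P_ij(u) as the frequency of landing nearest each grid point.
        j = np.abs(x_next[:, None] - X[None, :]).argmin(axis=1)
        P[i, :, a] = np.bincount(j, minlength=n) / k
        # Step 3: reward for this (state, control) pair (illustrative form).
        R[i, a] = 10.0 * u - 0.05 * x
print(P.shape, R.shape)
```

Each row of every TPM slice sums to one by construction, so the array can be passed straight to the backward-recursion step.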

SDP model: TPM matrix
The TPM for u = 1.0 (matrix of probabilities of moving from state x_t to state x_{t+1}).

TPM array
The stacked transition probability matrices for u = 1.0, u = 2.0 and u = 3.0.

SDP results
The optimal decision rule u*(x).

SDP results
The optimal state path x* over time t.

SDP results
The optimal state path x* over time t (Monte Carlo simulation).

The optimal transition probability matrix P* is created by selecting the elements of the Markovian probability matrices that satisfy the optimal decision rule for the given state. Optimal probability maps for any initial condition can then be generated for any future time period t by applying (P*)^t.
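As a sketch of this step, the probability map t periods ahead from initial state i is row i of the t-th matrix power of P*. The 3 × 3 matrix below is an illustrative stand-in, not the matrix estimated in the weed example.

```python
# Probability map after t periods: propagate an initial distribution
# through (P*)^t, where P* is the optimal TPM (illustrative matrix).
import numpy as np

P_star = np.array([[0.7, 0.3, 0.0],
                   [0.1, 0.6, 0.3],
                   [0.0, 0.2, 0.8]])
p0 = np.array([1.0, 0.0, 0.0])          # start with certainty in state 1

# Distribution over states 10 periods ahead.
p10 = p0 @ np.linalg.matrix_power(P_star, 10)
print(np.round(p10, 3))
```

Since P* is row-stochastic, every power of it is also row-stochastic, so each probability map is itself a valid distribution.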

Example with multiple outputs
Optimal control of Scotch broom (Cytisus scoparius) in the Barrington Tops National Park.
Odom et al. (2002). Ecol. Econ. 44: 119-135.

Integrated weed management
(diagram: each control package u_i is defined by a set of parameters)

Optimal state transition
(plot of x_{t+1} against x_t with the 45° line)

Optimal paths
(plots of the optimal state x*, sites invaded (%), over time, with a budget constraint and with no constraint)

Optimal paths and policy
Soil carbon content of an agroforestry system under optimal control (plots of optimal soil C (t/ha) over time, with and without C credits).
Wise and Cacho (2005). Dynamic optimisation of land-use systems in the presence of carbon payments: http://www.une.edu.au/carbon/wpapers.php

Extensions
Multiple state variables
Multiple control variables
Multiple outputs
Spatially-explicit models
Fish sanctuary models
Multiple species / interactions
Matrix population models
Metamodelling

Literature and Links
Conrad, J.M. (1999). Resource Economics. Cambridge University Press.
Conrad, J.M. and Clark, C.W. (1987). Natural Resource Economics: Notes and Problems. Cambridge University Press.
Fryer, M.J. and Greenman, J.V. (1987). Optimisation Theory: Applications in OR and Economics. Macmillan.
Judd, K.L. (1998). Numerical Methods in Economics. The MIT Press.
Miranda, M.J. and Fackler, P.L. (2002). Applied Computational Economics and Finance. The MIT Press.
NEOS Optimization Tree: http://www-fp.mcs.anl.gov/otc/Guide/OptWeb/
Optimization Software Guide: http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/index.html

Matlab optimal control model (WeedOC)
Call structure (set x_0 first):
Optimisation 1 (solves for λ_0):
  [Lx0] = TransvCond(x0,tmax,ubound,delta)
    Lx0 = fminbnd(@ObjFn)
      g = ObjFn(y)
        [xstar,ustar,Lxstar] = SolveWOC(x0,y,tmax,ubound,delta)
Optimisation 2 (solves for u*):
  [xstar,ustar,Lxstar] = SolveWOC(x0,Lx0,tmax,ubound,delta)
    ustar = fminbnd(@ObjFn)
      g = ObjFn(u)
        uopt = MaxHam(x,Lx,delta,ubound)
        [x1,density] = seedbank(x,u)
        [H] = HWeed(x,u,Lx,df)
        profit = gm(density,u)
        dHdx = dHweed(x,uopt,Lx,delta)

Matlab SDP model
CreateMatrix (calling [x1,weeds] = seedbank(x,u) and profit = gm(density,u)) generates the TPM array (YM) and the reward matrix (R). The SDP backward recursion is then:

for t=nt:-1:1                          % stage loop
  for i=1:nx                           % state loop
    vopt = -inf;
    for k=1:nu                         % control loop
      fval = YM(i,:,k) * v(:,t+1);     % expected v(t+1)
      vnow = R(i,k) + delta*fval;
      if (vnow > vopt)                 % keep best control
        vopt = vnow; uopt = k;
      end
    end
    v(i,t) = vopt;                     % value function
    ustar(i,t) = uopt;                 % decision rule
  end
end
