Evaluating Growth Models: A Case Study Using Prognosis BC
Peter Marshall, University of British Columbia, Vancouver, BC
Pablo Parysow, Northern Arizona University, Flagstaff, AZ
Shadrach Akindele, Federal University of Technology, Akure, Nigeria
Presented at the Third FVS Conference, Feb , Fort Collins, CO
Outline
- Background: Prognosis BC, validation, study area
- Testing against data
- Simulation testing
- Conclusions
All presented within a framework of general observations about validation techniques and processes.
Validation Observation #1 Validation is important … but it tends to be much more of interest to the person doing it than it is to the person hearing about it.
Prognosis BC
- An adaptation of the northern Idaho (NI) version of the original Prognosis model
- The architecture of the original model remains, but many of the internal equations have been reformulated and the remainder have been recalibrated
- Habitat types have been replaced with appropriate units within the BC Biogeoclimatic Ecosystem Classification (BEC) system
- All inputs and outputs are in metric units
- Different versions have been developed for various BEC subzones
- Additional information is available at the following URL:
Validation
There are various definitions of “validation” in use, and a growing literature on different approaches to validation. For the purposes of this presentation, I will define validation as: “The process of evaluating model outputs for consistency and usefulness.” Under this definition, validation is very much context dependent:
- which model outputs are being evaluated
- in what location(s)
- for what purposes
Study Area
[Map: study site location in the Interior Douglas-Fir Zone, British Columbia, Canada]
Validation Observation #2 Validation is most effective if several different approaches to validation are used – there are gains from added perspective.
Testing Against Independent Data
Data were from two research installations established in the late 1980s in stands that were uneven-aged and primarily interior Douglas-fir.
One installation, consisting of 6 plots measured on 4 occasions, was set up to follow stand dynamics under different structural conditions:
(1) predominance of large older trees (dbh > 30 cm)
(2) predominance of pole-sized trees (dbh 15–30 cm)
(3) predominance of saplings (dbh < 15 cm)
The second installation, consisting of 24 plots measured on 3 occasions, was set up as a precommercial thinning experiment in stands that were diameter-limit logged in the 1960s: 3 blocks, each consisting of three thinning treatments and a control, with two plots in each block/treatment combination.
Projections were made for 11 years to match one of the possible remeasurement intervals (the closest match to the 10-year projections of Prognosis BC).
Validation Observation #3 If at all possible, try to look at both individual tree projections as well as stand-level projections. Joint comparisons might well highlight issues that otherwise would not be apparent.
Validation Observation #4 Even apparently well-tested models may still contain hidden coding errors with subtle impacts. They are worth looking for carefully. This process is known by some as “verification”. (We found a few such errors by running various component equations both within and outside the model environment, looking for “oddities”.)
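The tactic described in Observation #4 can be sketched in code: evaluate a component equation through a standalone reimplementation built from its documented form, then through the path the model actually executes, and scan a grid of inputs for disagreements. The height-diameter equation below is a generic illustration only, not an actual Prognosis BC equation, and the "model" version carries a deliberately planted transcription bug of the kind such a scan would catch.

```python
# Hedged sketch of the "verification" tactic: compare a standalone
# reimplementation of a component equation against the version coded
# inside the model, over a grid of inputs. The equation is a generic
# height-diameter curve, NOT an actual Prognosis BC equation.
import math

def height_reference(dbh_cm):
    # Standalone reimplementation from the documented equation.
    return 1.3 + 30.0 * (1.0 - math.exp(-0.05 * dbh_cm))

def height_model(dbh_cm):
    # Stand-in for the equation as coded inside the model; it contains a
    # deliberate "hidden" transcription error for large trees.
    coef = 0.05 if dbh_cm < 80 else 0.5   # planted bug: wrong coefficient
    return 1.3 + 30.0 * (1.0 - math.exp(-coef * dbh_cm))

# Scan a grid of diameters and flag any dbh where the two paths disagree.
oddities = [d for d in range(5, 121, 5)
            if abs(height_model(d) - height_reference(d)) > 1e-6]
print("suspicious dbh values:", oddities)
```

Run against the real model, the analogous scan points directly at the input region where a coding error lurks, which is far easier to act on than a vague sense that large-tree projections look odd.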
Validation Observation #5 It is best to look for coding errors early on in the validation process. Otherwise, you may have to re-do some of your previous work.
Validation Observation #6 Individuals who are at arm’s length from the model development process are often more likely to spot errors, since they don’t usually assume that they “know” what is going on within the model.
Validation Observation #7 Regression-based equivalence tests (Robinson et al. 2005) provide a convenient means of examining model predictions versus observations. The routine for equivalence testing in R produces nice pictures. Robinson, A.P., R.A. Duursma, and J.D. Marshall. 2005. A regression-based equivalence test for model validation: shifting the burden of proof. Tree Physiology 25:
[Figure: Overall equivalence test for the tree-level DBH predictions.]
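The core idea of a regression-based equivalence test can be sketched as follows. This is a simplified, assumption-laden illustration in Python using fixed tolerance regions, not the Robinson et al. (2005) routine, which uses bootstrapped confidence bounds and is implemented in R; the tolerance values and the `equivalence_check` helper are hypothetical choices for the example.

```python
# Simplified sketch of the idea behind a regression-based equivalence test:
# regress observed on predicted values, then ask whether the intercept is
# "close enough" to 0 and the slope "close enough" to 1. Tolerances here
# are arbitrary illustrative choices, not the published method's bounds.
import numpy as np

def equivalence_check(observed, predicted, slope_tol=0.25, int_tol_frac=0.25):
    """Return (slope_ok, intercept_ok) under simple tolerance regions.

    slope_tol: equivalence region for the slope, 1 +/- slope_tol.
    int_tol_frac: intercept region as a fraction of the mean observation.
    """
    x = np.asarray(predicted, dtype=float)
    y = np.asarray(observed, dtype=float)
    b, a = np.polyfit(x, y, 1)        # OLS fit: y = a + b * x
    slope_ok = bool(abs(b - 1.0) < slope_tol)
    intercept_ok = bool(abs(a) < int_tol_frac * y.mean())
    return slope_ok, intercept_ok

# Toy data: unbiased predictions that track the observations closely.
rng = np.random.default_rng(42)
obs = rng.uniform(10, 60, size=200)            # "observed" DBH, cm
pred = obs + rng.normal(0.0, 1.5, size=200)    # unbiased predictions
print(equivalence_check(obs, pred))            # expect (True, True)
```

Note the shifted burden of proof that gives the method its name: the model is declared adequate only if the fitted line falls inside the pre-stated equivalence region, rather than being accepted by default when a lack-of-fit test fails to reject.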
Validation Observation #8 What you see depends on what you look at. For example, the relationship between observed and predicted DBH will appear considerably stronger than the relationship between observed and predicted DBH growth, which is actually what is estimated within Prognosis BC.
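Observation #8 can be demonstrated with a small simulation (illustrative numbers only, not Prognosis BC data): because observed and predicted end-of-period DBH both contain the same large starting diameters, their correlation is dominated by that shared component and looks far stronger than the correlation for the growth increment the model actually predicts.

```python
# Toy demonstration of Observation #8: the observed-vs-predicted correlation
# for a state variable (DBH) looks much stronger than for the increment
# (DBH growth) that the model actually estimates, because both DBH series
# share the same initial diameters. Numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 300
dbh0 = rng.uniform(10, 50, n)                      # initial DBH, cm
true_growth = rng.uniform(0.5, 3.0, n)             # actual periodic growth, cm
pred_growth = true_growth + rng.normal(0, 0.8, n)  # noisy modelled growth

obs_dbh = dbh0 + true_growth
pred_dbh = dbh0 + pred_growth

r_dbh = np.corrcoef(obs_dbh, pred_dbh)[0, 1]
r_growth = np.corrcoef(true_growth, pred_growth)[0, 1]
print(f"r (DBH)    = {r_dbh:.3f}")     # near 1: dominated by shared dbh0
print(f"r (growth) = {r_growth:.3f}")  # noticeably lower
```

The practical lesson is to validate on the quantity the model estimates (the increment), not only on the accumulated state, which flatters almost any model.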
Conclusions The validation exercise allowed us to identify and repair minor errors in the coding. Once these errors were fixed, the model performed well against data at both the single tree and stand level. The model produced results under a wide variety of stand structures which were consistent with our understanding of stand dynamics.
Validation Observation #9 When preparing validation reports, remember Observation #1: “Validation is important … but it tends to be much more of interest to the person doing it than it is to the person hearing about it.” It is easy to bury readers with an avalanche of results. However, syntheses and summaries are much more likely to be read and understood.
Acknowledgements
This project was funded by the BC Ministry of Forests and Range, using funds provided for continuing work on Prognosis BC by the Forest Investment Account (FIA). We are grateful for this support.