
1 Lecture 10 review
Spatial sampling design
–Systematic sampling is generally better than random sampling if the sampling universe has large-scale structure (gradients, etc.)
–Set up transects and grids to maximize variation, i.e., to cut across gradients
Crab and shrimp fisheries are good examples of where simple length-based assessment methods give misleading estimates of exploitation rate
–Length-based methods can grossly overestimate harvest rates
–The big problem is not estimation of local density (depletion experiments can be used), but rather estimation of the total area (sampling universe) to which the density estimates apply

2 Lecture 11: synthesis models
"Synthesis model" is a term coined by Methot for what had been called "statistical catch-at-age" (SCA) models; examples are SS2 and CASAL
The basic idea is to use an age-structured model to generate predictions of multiple types of observations
–Catch for multiple fleets with different age selectivities
–Length and age composition of the catch
–Multiple abundance trend indices
The basic aim is to reconstruct historical changes in stock size and recruitment
The main limitations are bad trend index data and complex temporal change in size-age selection patterns

3 Parameter estimation and state reconstruction for dynamic models
[Diagram: a state dynamics model (N_{t+1} = N_t − C_t, with parameter N_0 and process errors) feeds an observation model (predicted y_t = qN_t, with parameter q and observation errors); predicted y is compared with observed y (the data) through a statistical criterion, the log-likelihood function, which is used to estimate the parameters]
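The estimation scheme in this diagram can be sketched in a few lines. The sketch below uses made-up catch and index data generated from known values (N_0 = 1000, q = 0.001); it concentrates q out analytically and grid-searches N_0 by minimizing the sum of squared observation errors, which for normal errors is equivalent to maximizing the log-likelihood. The function names and data are illustrative, not from the lecture.

```python
def predict_N(N0, catches):
    # State dynamics model: N[t+1] = N[t] - C[t]
    N = [float(N0)]
    for C in catches[:-1]:
        N.append(N[-1] - C)
    return N  # one predicted N per observation year

def fit_N0(catches, y_obs, grid):
    # Observation model: y[t] = q * N[t].  For each trial N0, the
    # least-squares q is available in closed form, so only N0 is searched.
    best = None
    for N0 in grid:
        N = predict_N(N0, catches)
        q = sum(y * n for y, n in zip(y_obs, N)) / sum(n * n for n in N)
        ssq = sum((y - q * n) ** 2 for y, n in zip(y_obs, N))
        if best is None or ssq < best[0]:
            best = (ssq, N0, q)
    return best[1], best[2]

# Hypothetical, noise-free data generated from N0 = 1000, q = 0.001
catches = [100, 150, 120, 80]
y_obs = [0.001 * n for n in predict_N(1000, catches)]
N0_hat, q_hat = fit_N0(catches, y_obs, range(800, 1201, 10))
```

With noise-free data the grid search recovers the generating values exactly; real applications would add process and observation error terms as in the diagram.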

4 Catch-at-age table effects
[Diagram: a catch-at-age table showing year effects, age effects, and cohort effects (cohorts run along the diagonals of the table)]

5 And there can be so much nice data (as for Newfoundland cod)

6 Where do you get the C_{a,t}?
–Age composition sampling (best)
–Age composition by length, with expansion of length frequencies using that composition (OK if there are good composition data at all lengths)
–Age-length "key" (assign each length in the length frequency data to an age) (bad!)
–Predict length composition directly (synthesis-type models only) (very bad unless growth-type-group (GTG) accounting is used for size structure)

7 What’s the matter with good old catch curves?
[Figure: catch-curve regression, estimated Z = 0.52]
–They assume older ages are equally vulnerable (why are there fewer old fish?)
–They only use the information in the data from "fully recruited" (equally vulnerable) ages
–They ignore effects of changes in recruitment (again, why are there fewer old fish?)
–They assume the same harvest impact in all past years
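A catch curve is just a log-linear regression of numbers-at-age against age over the fully recruited ages, with Z as the negative slope. The sketch below uses fabricated counts generated from a constant Z of 0.52 (the value shown on the slide), so the regression recovers it exactly; all names and numbers are illustrative.

```python
import math

def catch_curve_Z(ages, counts):
    # Regress ln(count) on age by ordinary least squares;
    # total mortality Z is the negative of the slope.
    y = [math.log(c) for c in counts]
    n = len(ages)
    xbar, ybar = sum(ages) / n, sum(y) / n
    slope = (sum((x - xbar) * (yi - ybar) for x, yi in zip(ages, y))
             / sum((x - xbar) ** 2 for x in ages))
    return -slope

# Fully recruited ages only, with counts fabricated from Z = 0.52
ages = list(range(3, 10))
counts = [1000.0 * math.exp(-0.52 * a) for a in ages]
Z = catch_curve_Z(ages, counts)
```

The bullets above describe exactly what this calculation assumes away: equal vulnerability at older ages, constant recruitment, and constant historical harvest.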

8 Two ways to parameterize catch-at-age models (SCA vs VPA)
VPA: backward in time
N_{a,t} = N_{a+1,t+1}/S + C_{a,t}
Problem: how to get the U's along the edge of the table, to set N = C/U?
SCA: forward in time
N_{a+1,t+1} = N_{a,t}(1 − U_{a,t})S
C_{a,t} = U_t v_{a,t} N_{a,t}
Problem: too many parameters
[Diagram: the catch-at-age table, with the SCA parameters (recruitments R_1 … R_n and initial numbers N_2 … N_m) along the first row and column, and the VPA edge conditions N = C/U along the last row and column]
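The VPA backward recursion on this slide can be sketched directly: set the edge cells of the table to N = C/U using an assumed exploitation rate, then fill the rest of the table backward with N_{a,t} = N_{a+1,t+1}/S + C_{a,t}. The catch matrix, edge exploitation rate, and survival value below are made up for illustration.

```python
def vpa_backward(C, U_edge, S):
    # C[a][t]: catch-at-age matrix (ages a = 0..A-1, years t = 0..T-1)
    # U_edge: assumed exploitation rate along the last-year / oldest-age edge
    # S: annual survival rate from natural mortality
    A, T = len(C), len(C[0])
    N = [[0.0] * T for _ in range(A)]
    for a in range(A):                 # edge condition: last year
        N[a][T - 1] = C[a][T - 1] / U_edge
    for t in range(T):                 # edge condition: oldest age
        N[A - 1][t] = C[A - 1][t] / U_edge
    for t in range(T - 2, -1, -1):     # fill the table backward in time
        for a in range(A - 2, -1, -1):
            N[a][t] = N[a + 1][t + 1] / S + C[a][t]
    return N

# Hypothetical 3-age x 3-year catch table, U_edge = 0.5, S = 0.8
C = [[10, 12, 14],
     [8,   9, 10],
     [5,   6,  7]]
N = vpa_backward(C, 0.5, 0.8)
```

The sketch makes the slide's "problem" concrete: every reconstructed N depends on the assumed edge U's, which is exactly the quantity that is hardest to know.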

9 Deciding between SCA and VPA
–DON'T DECIDE. When you can, use both (VPA can only be used over periods where catch at age can be estimated for every historical year)
–SCA is more precise and accurate when the fishery has had a stable, simple logistic vulnerability-at-age pattern (SCA predicts catch at age, so it "sees" F effects in the data)
–But SCA can be badly biased when the vulnerability schedule has changed a lot and/or is dome-shaped (F effects in catch-at-age data are confounded with vulnerability changes)
–Use VPA whenever possible to check for complex, changing vulnerability patterns

10 Common causes of severe biases in estimated stock size and trend
–Bad abundance index data (especially hyperstable CPUEs)
–Rapid changes in size selectivity (especially targeting of small fish as the stock declines, which makes recruitment appear high)
–Inappropriate priors for recruitment variation when recruitment is "constrained" to vary around a stock-recruitment curve
–Dome-shaped vulnerability (makes F look too high when ignored, and too low when estimated but the estimates are confounded with effects of F or recruitment trends on proportions of older fish)

11 But sometimes they are still very useful, as for Vaughan's menhaden data
How could Z have decreased while effort was increasing?

12 Output control and age-structured assessments don't mix well?
Shelton, ICES J. Mar. Sci. (2007); note the increases in F (circles) as stock size declined in range-contracting stocks. Shelton notes that scientific advice is not being consistently followed, even when that advice is not biased by assessment problems for range-contracting stocks.

