
Slide 1: Lecture 7 review
- Physical habitat variables are poor predictors of distributions, even of resting fish (trophic factors like predation risk are at least as important)
- Ontogenetic changes in habitat use are critical in assessing "essential fish habitat"
- Always use likelihood functions for parameter estimation (efficient, combine data, allow use of prior information)
- Never use information-theoretic criteria (AIC, BIC, etc.) to compare alternative models for estimating policy parameters

Slide 2: Lecture 8: estimation of absolute abundance in fish populations
- First ask whether you need to do it in the first place, given that successful management requires regulation of exploitation rate, which is often possible to measure and control without ever knowing stock size
- When you need to know stock size
- Options for obtaining stock size estimates

Slide 3: When do you need to know stock size in the first place?
- When you are forced to manage by "output controls" (TACs, quotas, ITQs) and must set those output levels, or manage by "proven production potential": PPP = F x (proven stock)
- When there is a legal requirement to do so (e.g. the Endangered Species Act)
- When there is no other way to estimate exploitation rate except U = Catch/Stock
- When you are providing advice on the potential size of a new fishery, for economic planning purposes

Slide 4: Options for controlling exploitation rate

Slide 5: Options for estimating total stock size
- Direct census (visual, acoustic, etc.)
- Density expansion (time, area)
- Change-in-index methods (depletion, ratio)
- C/U methods (Gulland's old trick)
- Pcap methods using marked animals
- Bt/B0 methods using stock assessment models that estimate B0 as a leading parameter

Slide 6: Direct census (count them all)
- Used mainly where the stock is extremely concentrated, e.g. migrating salmon or herring spawning aggregations
- Typically involves a "visibility" or proportion-seen conversion factor (e.g. acoustic target count to numbers, egg count to spawner count using eggs/spawner)
- Can most often be replaced with a much cheaper density expansion method

Slide 7: Density expansion methods
- Decompose the estimate into two problems: stock size = (number per area) x (total area)
- In sampling theory, this means defining a way to estimate mean density, and carefully defining the "sampling universe" or "sampling frame" to which the densities are assumed to apply
- DO NOT begin design of a density expansion method with the usual biologist's whining about how variable the sample densities are likely to be, and calculating sample sizes accordingly
- Really big failures in density expansion methods (and sampling designs in general) are usually due to lack of care in the initial definition and careful description of the sampling universe; habitat mapping and habitat association analysis are typically critical here
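The decomposition above can be sketched in a few lines. All numbers here (densities, total area) are hypothetical, chosen only to illustrate the arithmetic of stock size = (mean density) x (total area):

```python
# Density expansion sketch: stock size = mean density x total area.
# The densities and area below are invented for illustration.
densities = [12.0, 8.0, 15.0, 5.0, 10.0]  # fish per hectare in sampled units
total_area_ha = 2000.0                    # area of the defined sampling universe

mean_density = sum(densities) / len(densities)  # fish per hectare
stock_estimate = mean_density * total_area_ha   # expanded stock size
print(stock_estimate)  # -> 20000.0
```

Note that everything hinges on `total_area_ha`: an error in defining the sampling universe propagates directly into the estimate, which is the slide's point about careful frame definition.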

Slide 8: Designing effective sampling programs
- DEFINE THE UNIVERSE OF POSSIBLE SAMPLES, NOT EXPECTING INTELLIGENT ADVICE FROM STATISTICIANS (WHO KNOW LITTLE ABOUT THE REAL UNIVERSE)
- STRATIFY the sampling universe (classify every possible sample unit), being careful to identify units that you know beforehand are almost certain to have zero abundance and those where your sampling gear won't work properly (you must still estimate mean density for such units, or else be content with a minimum N estimate)
- Consider your options for sample unit choice within each stratum: random vs. systematic, use of spatial statistical methods to interpolate unsampled densities

Slide 9: The sampling universe
- N is the sum over all units i of the abundance n_i in each unit: N = Σ n_i
- N is also the mean n_i times the number of sampling units in the universe
- [Diagram: each little box is a sampling unit with abundance n_i]

Slide 10: The sampling universe
- To estimate N, remember that you must somehow assign an abundance n_i to EVERY unit i, whether or not you could or did sample it
- Your options include:
  1. Assign the mean of sampled n_i to all unsampled units (assume your units are a random sample)
  2. Sample units at regular spacing (grid) so as to uncover any spatial structure that may be present (take a systematic sample, whose mean will have lower variance than a random sample if and only if there is large-scale structure)
  3. Assume structure in how the n_i vary over space (or time), and try to estimate that structure (assign n_i values to unsampled i) using spatial statistics models
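Option 1 above (assign the mean of the sampled n_i to every unsampled unit) reduces to the same expansion arithmetic. A minimal sketch with an invented universe of 100 units, of which only four were sampled:

```python
# Option 1 sketch: assign the mean of sampled n_i to all unsampled units.
# Unit indices and counts are invented for illustration.
universe_size = 100                        # total sampling units in the universe
sampled = {3: 7, 18: 11, 42: 4, 77: 10}    # unit index -> observed n_i

mean_n = sum(sampled.values()) / len(sampled)  # mean of sampled n_i
N_hat = mean_n * universe_size                 # N = mean n_i x number of units
print(N_hat)  # -> 800.0
```

This is unbiased only if the sampled units really behave like a random sample of the universe, which is exactly the assumption the later slides warn about.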

Slide 11: What to do when the n_i are estimated by fishers or biologists who don't know how to design spatial sampling programs (catch rates, expanded by estimates of area swept by each unit of effort)
- You still have to assign an n_i to every sampling unit in the universe
- It is foolish to assume that fishers have sampled the units at random (but that is what you are assuming when you use average CPUE)
- Filling in the missing n_i is called the "folly and fantasy" problem; options for doing it include:
  1. Spatial statistics methods (FishMap demo)
  2. Backfilling for each i using data from other times
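The backfilling option (2) can be sketched as follows. The unit-by-year table is invented; the idea is simply that a unit unfished in one year borrows its own density from an adjacent year rather than the fleet-wide average:

```python
# Backfilling sketch for the "folly and fantasy" problem: fill a unit's
# missing n_i in one year with that unit's value from an adjacent year.
# All data are hypothetical; None marks an unfished/unsampled unit-year.
n = {                          # n[year] = list of n_i, one per sampling unit
    2019: [5.0, 8.0, 3.0],
    2020: [6.0, None, 4.0],    # unit 1 was not fished in 2020
}
filled = [v if v is not None else n[2019][i]
          for i, v in enumerate(n[2020])]
N_2020 = sum(filled)           # 6 + 8 + 4
print(N_2020)  # -> 18.0
```

Contrast this with naive CPUE averaging, which would implicitly assign the mean of the fished units (5.0) to the unfished one and so mistake effort concentration for abundance.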

Slide 12: Change-in-index methods
- Here you estimate N by examining how much an index y_t = qN changes when known removal(s) C_t are taken
- y_t is usually either a relative abundance or a sex ratio
- Multiple relative abundance (y_t) and catch (C_t) observations lead to the "Leslie" and "DeLury" depletion models

Slide 13: Leslie depletion model
- State and observation dynamics:
  - Closed population: N_{t+1} = N_t - C_t, so N_t = N_0 - K_t, where K_t is the cumulative catch taken before time t
  - Linear observation process: y_t = q N_t
- Combining state and observation models gives y_t = q N_t = q(N_0 - K_t) = q N_0 - q K_t (linear), i.e. get q from the slope of y vs. K (slope = -q) and N_0 from the intercept (intercept = q N_0)
- Depletion estimates of N_0 are typically:
  - Biased downward by about 50% due to change in q over time (q is higher at first as you get the stupid ones)
  - Sensitive to the closure assumption (immigration/emigration cause upward bias in N_0)
  - Usually used to estimate local n_i in larger-area studies
- Note: "removal" need not mean "kill"; the method can be used in combination with mark-recapture experiments to get cross-validation
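A minimal sketch of the Leslie regression, using made-up values of q and N_0 and noiseless simulated data so the least-squares fit recovers them exactly (real depletion data would scatter around the line):

```python
# Leslie depletion sketch: simulate y_t = q*(N0 - K_t), then recover q and N0
# from the ordinary least-squares line of y on cumulative catch K.
# q, N0 and the catch series are invented for illustration.
q, N0 = 0.01, 1000.0
catches = [100.0, 100.0, 100.0, 100.0]   # removals each period

K, y = [], []
cum = 0.0
for c in catches:
    K.append(cum)                        # cumulative catch BEFORE this period
    y.append(q * (N0 - cum))             # index observed at start of period
    cum += c

# Ordinary least squares: y = intercept + slope*K
n = len(K)
Kbar, ybar = sum(K) / n, sum(y) / n
slope = (sum((k - Kbar) * (v - ybar) for k, v in zip(K, y))
         / sum((k - Kbar) ** 2 for k in K))
intercept = ybar - slope * Kbar

q_hat = -slope               # slope = -q
N0_hat = intercept / q_hat   # intercept = q*N0
print(q_hat, N0_hat)  # -> 0.01 1000.0
```

The sign convention is worth noting: the fitted slope is negative (the index declines as catch accumulates), so q is the negative of the slope and N_0 is the intercept divided by that q.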

Slide 14: C/u methods (exploited populations only)
- Get an estimate of total catch C, then assume C = uN and estimate N given u
- Methods for estimating u include:
  1. Mark-recapture: mark M animals before C is taken, get the number r of these that are recovered during the fishery; u = r/M (N.B. a big M doesn't help if tag loss and tag reporting rates are unknown!)
  2. Catch curves: get total mortality Z from the curve, natural mortality M from somewhere, then u = Z - M
  3. Swept area: measure effort E, the area "a" swept by an average unit of effort, and the total area A; then u = aE/A
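The C/u arithmetic, using route 1 above (mark-recapture estimate of u) with invented numbers:

```python
# C/u sketch: mark M animals before the fishery, count r of them in the
# catch, then u = r/M and N = C/u. All numbers are hypothetical.
M, r = 500, 50     # animals marked before the fishery; marked ones in catch
C = 2000.0         # estimated total catch

u = r / M          # exploitation rate estimate
N_hat = C / u      # stock size estimate
print(u, N_hat)  # -> 0.1 20000.0
```

As the slide cautions, the fragile quantities here are not M or r themselves but the unmodeled tag loss and tag reporting rates, which bias u (and hence N) directly.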

Slide 15: Mark-recapture experiments
- Mark M animals, recover n total animals of which r are marked ones
- The Pcap estimate is then r/M, and the total population estimate is N = n/Pcap = nM/r, i.e. you assume that n is the proportion Pcap of total N
- Critical rules for mark-recapture methods:
  1. NEVER use the same method for both marking and recapture (marking always changes behavior)
  2. Try to ensure the same probability of capture and recapture for all individuals in N (spread marking and recapture effort out over the population)
  3. Watch out for tag loss and tag-induced mortality, especially with spaghetti tags (use PIT or CWT tags when possible)
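The N = nM/r estimator on the slide (the Lincoln-Petersen form), with a made-up dataset:

```python
# Mark-recapture sketch: Pcap = r/M, N = n/Pcap = n*M/r.
# M, n, r are invented for illustration.
M = 200        # animals marked
n = 150        # total animals in the recapture sample
r = 30         # marked animals among the n recaptured

p_cap = r / M          # estimated capture probability
N_hat = n * M / r      # total population estimate
print(p_cap, N_hat)  # -> 0.15 1000.0
```

The estimate is only as good as rules 1-3 above: if marked animals are shyer (or bolder) at recapture, r is biased and N_hat inherits that bias.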

Slide 16: Open-population mark-recapture experiments (Jolly-Seber models)
- Mark M_i animals at several occasions i, assuming the number still alive decreases as M_it = M_i S_t, where S_t is the survival rate to the t-th recapture occasion
- Recover r_it animals from marking occasion i at each later occasion t
- Estimate the total marked animals at risk of capture at occasion i as TM_i = Σ_{j<i} M_j, giving the estimate Pcap_i = (Σ_{j<i} r_ji) / TM_i
- The total population estimate N_i at occasion i is then just N_i = TN_i / Pcap_i, where TN_i is the total catch at occasion i
- Estimate recruitment as R_i = N_i - S N_{i-1}, or use other, more elaborate assumptions
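A toy two-occasion version of the slide's simplified estimator (ignoring survival between occasions, which a real Jolly-Seber fit would not); every number is invented:

```python
# Simplified open-population sketch in the slide's notation, two occasions:
# marks at risk at occasion 2 are the occasion-1 releases, Pcap_2 is the
# marked-recapture fraction, and N_2 = total catch / Pcap_2.
# Survival between occasions is ignored here for simplicity.
M1 = 100       # animals marked and released at occasion 1
r12 = 10       # occasion-1 marks recaptured at occasion 2
TN2 = 300      # total catch at occasion 2

TM2 = M1               # marked animals at risk at occasion 2
pcap2 = r12 / TM2      # capture probability estimate at occasion 2
N2 = TN2 / pcap2       # population estimate at occasion 2
print(N2)  # -> 3000.0
```

With more occasions, TM_i accumulates all earlier releases (discounted by survival), which is where the Σ over j < i in the slide comes from.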

Slide 17: Integrated stock assessment models
- Depletion models with recruitment and mortality dynamics, fit to multiple data types
- Here is what you don't want to happen: [figure]
