
1 What have we learned from the John Day protocol comparison test? Brett Roper John Buffington

2 This was a group (PNAMP) effort; funding ($$) and support provided in part by Watershed Sciences.

3 Objectives
How consistent are measurements within a monitoring program?
How well do protocols detect environmental heterogeneity (signal-to-noise ratio)?
What are the relationships among different monitoring programs' measurements of an attribute, and between those measurements and more intensively measured values determined by a research team (can we share data)?
Goal: collect and use stream habitat data more efficiently.

4 Sample Design
7 monitoring programs, 3 crews, 3 channel types (12 streams):
Plane-bed (Tinker, Bridge, Camas, Potamus)
Pool-riffle (WF Lick, Crane, Trail, Big)
Step-pool (Whiskey, Myrtle, Indian, Crawfish)
[photos: plane-bed, pool-riffle, and step-pool channels]
Maximize variability so we can discern differences.

5 Review of Design at a Stream Site
Crews set a common begin point; end points differ depending upon protocol and crew. Fixed transects are used for selected attributes (bankfull width, bankfull depth, banks). Conduct surveys in late summer (base flow). [diagram: flow direction, begin point, protocol-dependent end points, fixed transects]

6 On top of this, “the truth” / “the gold standard”: an intensive survey of each reach. [map: survey points; pool, bar, and riffle units; contour interval = 10 cm]

7 Within a program, many attributes are consistently measured, some are less so. -Objective 1
[Table: within-program consistency by attribute (Gradient, Sinuosity, Bankfull, W/D, % Pool, Pool/km, MRPD, d50, % Fines, LWD) and program (AREMP, CFG, EMAP, NIFC, ODFW, PIBO, UC); the cell symbols marking more vs. less consistent attributes did not survive the transcript.]

8 Egg-to-fry survival rates from estimates of percent fines, from Potamus Creek (a) and WF Lick Creek (b), for two PIBO crews. SEF = [92.65 / (1 + e^(−3.994 + 0.1067 × Fines))] / 100. Al-Chokhachy and Roper, submitted.
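As a worked illustration, here is a minimal Python sketch of the survival curve above; the function name and example inputs are mine, while the coefficients come straight from the equation on the slide.

```python
import math

def egg_to_fry_survival(percent_fines):
    # SEF = [92.65 / (1 + e^(-3.994 + 0.1067 * Fines))] / 100
    # Input is percent fines (0-100); output is survival as a fraction.
    return (92.65 / (1.0 + math.exp(-3.994 + 0.1067 * percent_fines))) / 100.0

# More fines -> lower survival: ~0.88 at 10% fines, ~0.19 at 50% fines.
for fines in (10, 30, 50):
    print(f"{fines}% fines: SEF = {egg_to_fry_survival(fines):.2f}")
```

This makes the practical stake of crew-to-crew differences visible: a disagreement in measured percent fines translates directly into a different survival estimate.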

9 Within-Program Consistency
Most programs collect the majority of their attributes in a consistent manner.
When problems are identified within a protocol, they can often be quickly addressed through minor changes (additional training, clarifying protocols, increasing operational rule sets).
QA/QC is the only way to identify problems within a protocol.
Some sets of stream attributes (habitat units, sediment grain size) are harder to measure consistently; the problem is that these are often the most important to aquatic biota.
Consistency is affected (+ and -) by transformations.

10 Generally lower S:N than internal consistency; two exceptions, bankfull width and large wood. -Objective 2
[Table: signal-to-noise ratio by attribute (Gradient, Sinuosity, Bankfull, W/D, % Pool, Pool/km, MRPD, d50, % Fines, LWD) and program (AREMP, CFG, EMAP, NIFC, ODFW, PIBO, UC); cell values did not survive the transcript.]
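For context, signal-to-noise here follows the Kaufmann et al. (1999) idea of comparing among-site variance (signal) to among-crew variance within sites (noise). A minimal sketch of that calculation, assuming a balanced design where several crews measure each site; the function name and data layout are mine:

```python
import statistics

def signal_to_noise(measurements):
    # measurements: {site: [values from each crew at that site]}
    # Noise = pooled within-site (among-crew) variance.
    within = statistics.mean(statistics.variance(v) for v in measurements.values())
    site_means = [statistics.mean(v) for v in measurements.values()]
    n_crews = statistics.mean(len(v) for v in measurements.values())
    # The variance of site means carries within-site noise of size within/n;
    # subtract it to estimate the pure among-site (signal) variance.
    among = statistics.variance(site_means) - within / n_crews
    return max(among, 0.0) / within
```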

11 Detecting Environmental Variability
Within this sample of streams there may not be sufficient signal in some variables (sinuosity: true; width-to-depth: ??).
The focus on repeatability may reduce signal; it is hard for me to look at the photos of the sites and not see a lot of variability.
For attributes where signal can be highly variable (large wood), transformations will almost always improve signal and increase the ability to detect differences.

12 Even if you are measuring the same underlying attribute, the more noise (less signal), the weaker the estimate of the underlying relationship. Example: assume you knew the truth perfectly but compared it to an imperfect protocol; how strong could the relationship be? (Stoddard et al. 2008; Kaufmann et al. 1999)
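The arithmetic behind this ceiling can be sketched as follows. Under the usual attenuation argument (Kaufmann et al. 1999), a protocol with signal-to-noise ratio S:N can share at most S:N/(S:N + 1) of its variance with the truth, and two imperfect protocols can share at most the product of their two factors. The formula is my inference, but it reproduces the Max r² column on the next slide to within rounding of the displayed S:N values:

```python
def max_r2(sn1, sn2):
    # Attenuation ceiling on the r^2 attainable between two imperfect
    # protocols, each with its own signal-to-noise ratio.
    return (sn1 / (sn1 + 1.0)) * (sn2 / (sn2 + 1.0))

# Gradient, AREMP (S:N = 188.2) vs PIBO (S:N = 124.4):
print(round(max_r2(188.2, 124.4), 3))  # 0.987, matching the next slide
```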

13 Objective 3 - Sharing Data
What are the ranges of relationships between programs, given the signal-to-noise ratios?
Given some inherent variability in our measurements, are we measuring the same underlying attribute?

14 Attribute | High/Low | Group 1 | Group 2 | S:N 1 | S:N 2 | Max r²
Gradient | Highest | AREMP | PIBO | 188.2 | 124.4 | 0.987
Gradient | Lowest | ODFW | CFG | 5.6 | 4.9 | 0.704
W/D | Highest | NIFC | ODFW | 6.1 | 2.2 | 0.589
W/D | Lowest | UC | PIBO | 1.7 | 1.5 | 0.374
% Pools | Highest | NIFC | ODFW | 13.5 | 5.8 | 0.794
% Pools | Lowest | PIBO | CFG | 1.4 | 0.4 | 0.174
Pool Depth | Highest | UC | PIBO | 11.9 | 7.4 | 0.813
Pool Depth | Lowest | ODFW | CFG | 3.9 | 0.2 | 0.127
d50 | Highest | PIBO | UC | 6.0 | 3.6 | 0.671
d50 | Lowest | EMAP | AREMP | 1.0 | 2.4 | 0.353

15 To minimize the effect of observer variation we use the mean of means. So although there is variation among crews in measuring sediment, it appears the monitoring protocols are measuring the same underlying characteristic.
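A minimal sketch of the mean-of-means idea; the reach names come from slide 4, but the crew values are hypothetical and the helper is mine:

```python
import statistics

def mean_of_means(crew_values):
    # Average over crews within each reach first, so observer variation is
    # damped before any between-program comparison is made.
    return {reach: statistics.mean(vals) for reach, vals in crew_values.items()}

# Hypothetical sediment values from two crews per reach, for two programs.
prog_a = mean_of_means({"Tinker": [11.2, 12.1], "Crane": [8.3, 7.9], "Myrtle": [5.1, 5.6]})
prog_b = mean_of_means({"Tinker": [10.8, 11.5], "Crane": [8.0, 8.4], "Myrtle": [5.3, 5.0]})
reaches = sorted(prog_a)
r = statistics.correlation([prog_a[k] for k in reaches], [prog_b[k] for k in reaches])
print(f"between-program r^2 = {r**2:.2f}")
```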

16 In other cases it is clear programs are measuring different things, likely because they use different operational definitions.

17 You can then relate each program to “the gold standard”. These are coefficients of determination (r²) between intensively measured attributes and each program (mean of each reach). (NM = not measured; ? = cell lost in the transcript.)
Attribute | AREMP | CFG | EMAP | NIFC | ODFW | PIBO | UC
Gradient | 0.99 | 0.98 | 0.99 | NM | 0.97 | 0.99 | ?
Sinuosity | 0.93 | NM | 0.95 | NM | ? | 0.76 | 0.87
MBW | 0.59 | 0.63 | 0.73 | 0.57 | 0.65 | 0.59 | 0.51
W/D | 0.01 | ? | 0.12 | 0.33 | 0.49 | 0.34 | 0.03
Pool/km | 0.43 | 0.33 | 0.03 | 0.28 | 0.18 | 0.30 | 0.10
MRPD | 0.91 | 0.28 | 0.87 | 0.12 | 0.94 | 0.93 | 0.94
d50 | 0.79 | NM | 0.87 | NM | ? | 0.92 | 0.73
LWD | 0.43 | 0.44 | 0.76 | 0.85 | 0.76 | 0.58 | 0.65

18 What data could we share?
Probably: Gradient, Sinuosity, Median Particle Size
Mostly: Bankfull, Residual Depth, Large Wood
With Difficulty: Width-to-depth, Pools (%, /km), Percent Fines

19 Conclusions
Most groups do a decent job implementing their own protocol. Every group still has room for improvement through training, improved definitions, etc. QA/QC is key.
Groups seem to be forgoing some signal in order to minimize noise.
It is difficult to exchange one group's result with another for many attributes. Perhaps best handled as a block effect for those with no interaction.

20 Recommendations
We will never make progress on what is the right way without an improved understanding of the truth or agreed-upon criteria. How should we define a good protocol? Which protocols have the strongest relationship with the biota? Which best implies condition? Which is closest to the real truth (e.g., ground-based LiDAR)?

21 Issues for the paper: I am trying to incorporate all the final suggestions and should have it out for a quick review, then submission, right after the new year.

