TWG 2/27/03 # 1 A FEW THOUGHTS on MONITORING STUDIES for COLORADO RIVER in THE GRAND CANYON N. Scott Urquhart STARMAP Program Director Department of Statistics.

Presentation transcript:

TWG 2/27/03 # 1 A FEW THOUGHTS on MONITORING STUDIES for COLORADO RIVER in THE GRAND CANYON N. Scott Urquhart STARMAP Program Director Department of Statistics Colorado State University

TWG 2/27/03 # 2 MY PLAN versus REALITY
• I had given a talk to GCMRC in Flagstaff entitled “DESIGN and POWER of VEGETATION MONITORING STUDIES for THE RIPARIAN ZONE NEAR THE COLORADO RIVER in THE GRAND CANYON”

TWG 2/27/03 # 3 MY PLAN versus REALITY continued
• I agreed to give a management-oriented version of that technical talk here.
  - Interrupting my vacation!
• That talk was about 85% done when, late Tuesday night, I received a message from Dennis Kubly encouraging me to address a VERY different set of questions.
• This response cannot be well prepared, given my lack of opportunity to prepare it. -- Sorry

TWG 2/27/03 # 4 QUESTIONS DENNIS ASKED (abbreviated versions)
(1) What are the roles of scientists and managers in adaptive management programs?
(2) Prioritizing monitoring and research: what compromises follow from a declining budget and conflicting views?
(3) Evaluating the utility of gathered information: what are the measures of worth to managers of information gathered, analyzed, and interpreted by scientists?
(4) Does some information have higher value than other information?
(5) What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
(6) Risk assessment: what are the potential effects of reduced sampling intensity, and the consequent lower levels of detection of resource change, necessitated by funding or logistical limitations?
(7) When should we worry about Type II errors more than Type I errors? How do they differ?

TWG 2/27/03 # 5 MONITORING IS NOT RESEARCH impacts on answers to questions 1 – 4
• Research concerns how & why things happen.
  - It may need to be temporally intensive
• Monitoring concerns “What has happened?”
• Major differences: measures & temporal intensity
  - EX: Mike Kearsley’s vegetation index versus detailed stem counts
• Adaptive management may require some of both
• But managers need to look critically at the need for research, specifically its linkage to manageable actions

TWG 2/27/03 # 6 QUESTION 5: What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
• When you need to make a statement about an entire area, sample it.
• Biological investigators know far less about where various resources reside than they think they do. All kinds of things turn up where they “aren’t supposed to be.”
• Model development: targeted site selection is appropriate – even necessary
  - Pick a gradient of sites that will support estimation of the model components.
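The tradeoff on this slide can be sketched numerically. The example below is purely hypothetical (made-up site values, not Grand Canyon data): a probability sample supports design-based extrapolation to the whole sampling frame, while targeted selection of the "richest" sites summarizes only those sites.

```python
import random

# Hypothetical sampling frame: one value per site along a gradient
# (stand-ins only; not real Grand Canyon measurements).
frame = [i / 1000 for i in range(1000)]
true_mean = sum(frame) / len(frame)

# Simple random sample: every site can be selected, so the sample
# mean is a design-unbiased estimate of the frame-wide mean.
rng = random.Random(42)
srs = rng.sample(frame, 100)
srs_mean = sum(srs) / len(srs)

# Targeted selection: only the highest-valued sites are visited, so
# the mean describes those sites, not the entire frame.
targeted = sorted(frame)[-100:]
targeted_mean = sum(targeted) / len(targeted)
```

As the slide notes, targeted sites remain the right choice for model development; the point is only that their summary statistics do not extend to the whole canyon.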

TWG 2/27/03 # 7 QUESTION 7: When should we worry about Type II errors more than Type I errors? How do they differ?
• Illustration: Trial of an accused murderer.
  - H o : Accused is innocent of murder
  - H a : Accused is the murderer

TWG 2/27/03 # 8–17 (the same slide, with the decision table built up cell by cell):

                      TRUE SITUATION
ACTION                INNOCENT         GUILTY
ACQUIT (H o )         CORRECT ACTION   TYPE II ERROR
CONVICT (H a )        TYPE I ERROR     CORRECT ACTION
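The two error types in the table can be made concrete with a small Monte Carlo sketch: run a hypothetical one-sample z-test many times, once with H o true and once with H a true, and count how often each wrong decision occurs (α = 0.05 and all other numbers are illustrative assumptions, not from the talk).

```python
import math
import random

def z_test_rejects(sample, mu0=0.0, sigma=1.0, alpha=0.05):
    """Two-sided one-sample z-test: does it reject H_o: mean == mu0?"""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2)) < alpha  # p-value < alpha

def rejection_rate(true_mean, n=25, trials=4000, seed=1):
    """Fraction of simulated studies that reject H_o."""
    rng = random.Random(seed)
    hits = sum(
        z_test_rejects([rng.gauss(true_mean, 1.0) for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials

# H_o is true (mean really is 0): any rejection is a Type I error.
type_i = rejection_rate(true_mean=0.0)       # hovers near alpha
# H_a is true (mean is 0.7): failing to reject is a Type II error.
type_ii = 1 - rejection_rate(true_mean=0.7)  # beta; power = 1 - beta
```

Shrinking the sample size `n` raises `type_ii` while leaving `type_i` near α, which is exactly the risk in question 6: reduced sampling intensity lowers the chance of detecting real resource change.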

TWG 2/27/03 # 18 QUESTION 7: When should we worry about Type II errors more than Type I errors? How do they differ?
• Illustration: Trial of an accused murderer.
  - Type I Error = Convict an innocent person
  - Type II Error = Free a murderer
• Which is worse? It depends on your perspective
• From the view of the accused
  - Type I Error: I get jailed when I shouldn’t be!
  - Type II Error: I beat the system!
• From the view of society
  - Type I Error: A regrettable consequence of protecting society
  - Type II Error: A travesty! The criminal is free to assault society again!

TWG 2/27/03 # 19 AN ALTERNATIVE TO FIXED LEVEL TESTING
• Use the test statistic (like the t-value) to determine the marginal α at which the test would just barely reject, then
• Incorporate that result into the humans’ decision-making.
• FACT: Virtually every study has real limitations or failures of assumptions.
  - Most of the time investigators recognize these, and usually they can determine how those failures would influence the outcome.
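The "marginal α" on this slide is the p-value: the smallest fixed level at which the observed statistic would just barely reject. For a z statistic it can be computed with the standard library alone (a sketch; the slide's t-value case would need the t distribution instead, and the z = 2.17 below is an invented example).

```python
import math

def marginal_alpha(z):
    """Two-sided p-value: the smallest alpha at which |z| just rejects."""
    return math.erfc(abs(z) / math.sqrt(2))

# z = 2.17 rejects at alpha = 0.05 but not at alpha = 0.01; its
# marginal alpha is about 0.03, and the humans decide from there.
p = marginal_alpha(2.17)
```

Reporting `p` itself, rather than a reject/accept verdict at a fixed level, is what lets the decision-makers weigh the study's known limitations alongside the statistical evidence.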

TWG 2/27/03 # 20 QUESTIONS DENNIS ASKED
(1) What are the different roles and functions of scientists and managers in adaptive management programs? Where are they different and where do they merge? This distinction may seem trivial and intuitive, but our experience in working with program members shows that it is not.
(2) What are some relevant criteria for prioritizing monitoring and research in the face of a declining budget and conflicting views on management objectives?
(3) How is the utility of information gathered by research and monitoring to be evaluated, i.e., what are the measures of worth to managers of information gathered, analyzed, and interpreted by scientists?
(4) Are managers correct if they attribute a higher value to information gathered during core monitoring than to that gathered in research projects and experiments, i.e., is there an easy way to assess which is more or less important in any given situation? What is the relative value of the two approaches in assigning cause-and-effect relationships to changes in resource status?
(5) What are the tradeoffs between sampling designs that allow extrapolation to the entire Grand Canyon and those that do not? When is each most appropriate?
(6) How should managers assess the risk created by reduced sampling intensity and consequent lower levels of detection of resource change that are necessitated by funding or logistical limitations?
(7) When should we worry about Type II errors more than Type I errors? How do they differ?