Using Multiple Methods to Reduce Errors in Survey Estimation: The Case of US Farm Numbers
Jaki McCarthy, Denise Abreu, Mark Apodaca, and Leslee Lohrenz


Using Multiple Methods to Reduce Errors in Survey Estimation: The Case of US Farm Numbers
Jaki McCarthy, Denise Abreu, Mark Apodaca, and Leslee Lohrenz
National Agricultural Statistics Service, US Department of Agriculture
Paper presented at the International Total Survey Error Workshop, Stowe, VT, June 2010

What is the goal of reducing TSE?
– Surveys are used to estimate a construct
– The goal of TSE reduction is to estimate that construct more accurately in the survey
– Construct: number of farms in the US, measured in multiple ways:
– June Agricultural Survey
– Census of Agriculture

The Council of Advisors
– Multiple sources provide advice
– Each is likely biased in some direction
– We assume that the collective advice is better than any single source
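The "council of advisors" idea can be sketched numerically: when several estimates carry biases in partly offsetting directions, a simple combination can land closer to the truth than any one source. All numbers below are fabricated for illustration; they are not NASS figures.

```python
# Illustrative sketch (hypothetical numbers): combining several biased
# "advisors" can beat any single one when their biases partly offset.
true_value = 2_200_000  # hypothetical true number of US farms

# Three hypothetical advisors, each biased in a different direction.
advisors = {"JAS": 2_050_000, "COA": 2_280_000, "adjusted JAS": 2_260_000}

# Simplest possible combination: an unweighted average.
combined = sum(advisors.values()) / len(advisors)

errors = {name: abs(est - true_value) for name, est in advisors.items()}
combined_error = abs(combined - true_value)

# With offsetting biases, the combined error is smaller than the error
# of even the best single advisor.
assert combined_error < min(errors.values())
```

In practice the combination would weight sources by their presumed reliability rather than averaging equally, but the offsetting-bias intuition is the same.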

– Comparisons between "advisors" may uncover errors in either or both
– In most cases there is no 100% accurate source
– Each estimate makes different assumptions and uses different procedures

Farm Number Estimates
June Agricultural Survey (JAS)
– Purpose: direct estimates of acreage and measures of sampling coverage
– Area-frame based
– In-person data collection
– Sample survey
– Voluntary
– All non-response is manually estimated
Census of Agriculture (COA)
– Purpose: detailed county-level agricultural data on all commodities produced, plus expenses, income, and operator characteristics
– List based
– Primarily self-administered mail data collection
– Census
– Mandatory
– Non-response weighting adjustment
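The COA's "non-response weighting adjustment" can be illustrated with a minimal weighting-class sketch. The class sizes and response counts below are hypothetical, and the actual COA adjustment is more elaborate; this only shows the basic mechanics.

```python
# Minimal weighting-class nonresponse adjustment (hypothetical numbers):
# respondents' weights are inflated so they also represent the
# nonrespondents in their adjustment class.
def adjust_weight(base_weight: float, n_in_class: int, n_responded: int) -> float:
    """Inflate a respondent's weight by the inverse of the class response rate."""
    return base_weight * n_in_class / n_responded

# Hypothetical class: 100 operations, 80 responded, base weight 1.0.
w = adjust_weight(1.0, 100, 80)
# Each respondent now carries weight 1.25, covering 0.25 nonrespondents.
```

By contrast, the JAS bullet above notes that its non-response is manually estimated rather than handled by reweighting.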

Census Undercoverage and Misclassification
Historically, the JAS has served as a benchmark for the COA:
– The area frame has theoretically complete coverage
– It is NASS's flagship survey, with personal interviews
However, the Classification Error Survey uncovered errors in the identification of farms in both the JAS and the COA, with most in the JAS.

What perspective?
Advisor #1: JAS – primary objective is to produce acreage estimates; farm numbers are secondary
Advisor #2: COA – primary objective is to collect information on ALL farms

[Chart: US Farm Numbers]

So we begin with two independent estimates of farm numbers…
– To improve the JAS estimate, additional follow-up was conducted to estimate the number of farms in the subset originally estimated or classified as NOT farms
– Result: additional farms were missed (misclassified) in the JAS
– These can be added to the original JAS estimates
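The adjustment above is additive: the follow-up yields an estimate of farms hidden among the records classified as non-farms, and that count is added back to the original JAS total. A sketch with fabricated counts (these are not the published NASS figures):

```python
# Additive misclassification adjustment (hypothetical counts).
jas_estimate = 2_100_000            # original JAS farm-number estimate
found_in_followup = 90_000          # farms the follow-up found among records
                                    # originally classified as NOT farms

adjusted_jas = jas_estimate + found_in_followup
assert adjusted_jas == 2_190_000
```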

[Chart: US Farm Numbers, with follow-up adjustment]

ADD another advisor: another independent estimate of farms
– Assumption: operations reporting themselves as farms on the 2007 COA, but not on the JAS, were misclassified in the JAS
– Another regression estimate based on this assumption was applied to 2009 JAS survey data

[Chart: US Farm Numbers, all three estimates]

The Council of Advisors is used to set the "official" farm number
– Each of the methods measures the same construct
– Each of the methods is independent, has a different emphasis, and has its own unique errors
– The objective is not to "fix" an individual survey estimate or measure its errors
– The objective is to combine all of these estimates to produce the "best" number: reducing Total Construct Error

My questions to you:
– Do you use similar practices?
– How do you combine multiple sources of information?
– What is the best way to do this?
– How does this fit into the TSE context?