Curb-stoning, a Too Neglected and Very Embarrassing Survey Problem: Comments. Jaki S. McCarthy, Senior Cognitive Research Methodologist, US Department of Agriculture.

Similar presentations
SURVEY QUALITY CONTROL

MICS4 Survey Design Workshop Multiple Indicator Cluster Surveys Survey Design Workshop Survey Quality Control.
The Response Process Model as a Tool for Evaluating Business Surveys Deirdre Giesen Statistics Netherlands Montreal, June 20th 2007 ICES III.
Self-Administered Surveys: Mail Survey Methods ChihChien Chen Lauren Teffeau Week 10.
Keystroke analysis and implications for fieldwork WP3.3 – task 2 Johanna Bristle (SHARE, MEA) Verena Halbherr (ESS, GESIS) DASISH Final Conference – Gothenburg,
Preparing Data for Quantitative Analysis
Brian A. Harris-Kojetin, Ph.D. Statistical and Science Policy
1 Incorporating Statistical Process Control and Statistical Quality Control Techniques into a Quality Assurance Program Robyn Sirkis U.S. Census Bureau.
Missey Jackson, CPP Payroll Director Parallon/HCA.
Gathering Performance Information: Overview
Advanced Topics in Standard Setting. Methodology Implementation Validity of standard setting.
Workshop on Energy Statistics, China September 2012 Data Quality Assurance and Data Dissemination 1.
Multiple Indicator Cluster Surveys Survey Design Workshop
About ONLINE Industry leader for more than 50 years Headquartered in North Carolina Originally a small merchant credit bureau In 1997, focus shifted from.
Jaki S. McCarthy, Daniel G. Beckler, and Suzette M. Qualey. International Conference on Establishment Surveys III, Montreal, June 18-21,
Mining the Data Ira M. Schoenberger, FACHCA Senior Administrator 2011 AHCA/NCAL Quality Symposium Friday February 18, 2011.
Dr. Michael R. Hyman, NMSU Survey Error: Focus on Systematic Error.
© Copyright 2004 Echo TM The Ultimate Service Improvement System Let Your Customers Define Great Service.
Focus on Systematic Error
RESEARCH METHODS Lecture 19
Informing an Organization High-Tech Solutions for a Low-Tech Environment.
FINAL REPORT: OUTLINE & OVERVIEW OF SURVEY ERRORS
Exploring Marketing Research William G. Zikmund
1 1 Web and Paper Questionnaires seen from the Business Respondent’s Perspective Gustav Haraldsen, Statistics Norway Jacqui Jones, UK Office of National.
Fieldwork. The Nature of Fieldwork  Researchers have two major options for collecting data:  Developing their own organizations or  Contracting with.
Chapter Thirteen: Fieldwork. Harpreet RIMT-IMCT. Fieldwork/Data Collection Process: Selecting Field Workers, Training Field.
Learn Management the Easy Way with the Help of Downloadable Power-point Presentations - Learn at Your Own Pace. The Presentation contains Animation. To.
THINK BEYOND THE CURBSTONE TAKING FABRICATION DETECTION AND PREVENTION BEYOND THE INTERVIEWER LEVEL STEVE KOCZELA 12/2/2014, WSS.
Basic Terms Research—the process of finding information relevant to a particular topic Source—any medium that provides information relevant to a particular.
The Future of Statistical Data Collection? Challenges and Opportunities Johan Erikson (Statistics Sweden) Gustav Haraldsen (Statistics Norway) Ger Snijkers.
1 WRS Feedback Overview. 2 Agenda Introduction to WRS Assessment Feedback Report Developmental Planning Best Practices Summary/Wrap Up.
American Community Survey ACS Content Review Webinar State Data Centers and Census Information Centers Gary Chappell ACS Content Review Project Lead April.
Financial Literacy Trends, Problems, and Best Practices Presented By: Rob LaBreche.
Student Engagement Survey Results and Analysis June 2011.
Copyright © 2014, 2011 Pearson Education, Inc. 1 Chapter 14 Sampling Variation and Quality.
PMS Implementation. Implementing PMS This requires the involvement of lot of players Successful implementation requires a clear understanding of how the.
Evaluating a Research Report
Using Multiple Methods to Reduce Errors in Survey Estimation: The Case of US Farm Numbers Jaki McCarthy, Denise Abreu, Mark Apodaca, and Leslee Lohrenz.
Lecture 8 APPLYING MOTIVATION THEORIES: REWARDS AND PERFORMANCE MANAGEMENT.
Employee engagement Guide Global Human Resources June 2014.
Background Checks and Measurement Types MANA 4328 Dennis C. Veit
AADAPT Workshop South Asia Goa, December 17-21, 2009 Maria Isabel Beltran 1.
Survey Design and Data Collection Most Of the Error In Surveys Comes From Poorly Designed Questionnaires And Sloppy Data Collection Procedures.
Midterm Stats Min: 16/38 (42%) Max: 36.5/38 (96%) Average: 29.5/36 (78%)
Understanding the Respondent – a Key to Improve our Data Collection Strategies? Seminar on Statistical Data Collection 26 th September 2013 Anton Johansson,
The Challenge of Non-Response in Surveys. The Overall Response Rate The number of complete interviews divided by the number of eligible units in the.
Copyright 2010, The World Bank Group. All Rights Reserved. Reducing Non-Response Section A 1.
Copyright 2010, The World Bank Group. All Rights Reserved. Reducing Non-Response Section B 1.
1 Non-Response Non-response is the failure to obtain survey measures on a sample unit Non-response increases the potential for non-response bias. It occurs.
Identifying Sources of Error: the 2007 Classification Error Survey for the US Census of Agriculture Jaki McCarthy and Denise Abreu USDA’s National Agricultural.
CREATING FOR YOUR BUSINESS. What Are Buyer Personas? What Are Negative Personas?
CBS-SSB STATISTICS NETHERLANDS – STATISTICS NORWAY Work Session on Statistical Data Editing Oslo, Norway, September 2012 Jeroen Pannekoek and Li-Chun.
Educational Research: Competencies for Analysis and Application, 9th edition. Gay, Mills, & Airasian © 2009 Pearson Education, Inc. All rights reserved.
Workshop on Price Index Compilation Issues February 23-27, 2015 Data Collection Issues Gefinor Rotana Hotel, Beirut, Lebanon.
Information, Analysis, and Knowledge Management in the Baldrige Criteria Examines how an organization selects, gathers, analyzes, manages, and improves.
Data Collection: Enhancing Response Rates & Limiting Errors Chapter 10.
1 Quality Control for Field Operations. 2 Overview Goal To ensure the quality of survey field work Purpose To detect and deter interviewer errors and.
United Nations Oslo City Group on Energy Statistics OG7, Helsinki, Finland October 2012 ESCM Chapter 8: Data Quality and Meta Data 1.
Sampling Design and Analysis MTH 494 Ossam Chohan Assistant Professor CIIT Abbottabad.
The Importance of Control
Monitoring Activities Answer the Following Questions: What is being done? Scope How well is it being done? Quality Are we doing it on a large enough scale?
Descriptive Research & Questionnaire Design. Descriptive Research Survey versus Observation  Survey Primary data collection method based on communication.
Session 21-1 Session 44 The Verification Selection Process.
Managing Multi Mode Collection Instruments in the 2011 UK Census Frank Nolan, Heather Wagstaff, Ruth Wallis Office for National Statistics UK.
Collecting Information on Mental Factors The Mental Toughness Questionnaire.
Data Collection Methods Pros and Cons of Primary and Secondary Data.
McGraw-Hill/Irwin © 2003 The McGraw-Hill Companies, Inc., All Rights Reserved. Part Three SOURCES AND COLLECTION OF DATA.
Jonathan Eggleston and Lori Reeder
Nonresponse and Measurement Error in Employment Research
OBSERVER DATA MANAGEMENT PRINCIPLES AND BEST PRACTICE (Agenda Item 4)
Presentation transcript:

Curb-stoning, a Too Neglected and Very Embarrassing Survey Problem: Comments. Jaki S. McCarthy, Senior Cognitive Research Methodologist, US Department of Agriculture, National Agricultural Statistics Service. WSS Seminar, December 2, 2014

Another perspective on interviewer falsification

Why do interviewers falsify? Understanding why can: – Help identify relevant measures for models – Help develop strategies to prevent falsification

Falsifying to max $/min effort. Most indicators have this underlying assumption (e.g., easy answers, shorter interviews, rounding). Can we use data mining to identify other, less obvious indicators?
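Not part of the talk, but one hedged illustration of that question: an interviewer who fabricates answers tends to leave a distributional fingerprint. A minimal sketch, assuming a flat interview-level table with hypothetical column names (`interviewer_id` and a categorical item), that scores how far each interviewer's answer distribution diverges from everyone else's:

```python
# Hedged sketch, not from the presentation: flag interviewers whose
# answer distribution on a categorical item diverges from the pool.
# Column names (interviewer_id, the item column) are hypothetical.
import pandas as pd
from scipy.stats import chi2_contingency

def answer_divergence(interviews: pd.DataFrame, item: str) -> pd.Series:
    """Chi-square p-value per interviewer; small values mean the
    interviewer's answers on `item` look unlike everyone else's."""
    pvals = {}
    for intv, grp in interviews.groupby("interviewer_id"):
        rest = interviews.loc[interviews["interviewer_id"] != intv, item]
        # 2 x k contingency table: this interviewer vs. all others.
        table = pd.concat([grp[item].value_counts(), rest.value_counts()],
                          axis=1, keys=["this", "rest"]).fillna(0)
        _, p, _, _ = chi2_contingency(table.T.to_numpy())
        pvals[intv] = p
    return pd.Series(pvals).sort_values()
```

Items on which many interviewers diverge sharply would themselves be candidates for new, less obvious indicators.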

Falsifying to Meet Deadlines. Do the indicators change? Completion dates may be important here.

Other reasons to falsify? Deliberate fabrication/data misrepresentation; fatigue; perceived reduction in respondent burden. Would the indicators be the same?

Do these inform potential indicators/falsification model inputs? Speed indicators (length of interviews, completed interviews per day, etc.); item nonresponse rates; edit rates; contact histories; GPS tracking.
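A minimal sketch of rolling those inputs up to one row per interviewer for a falsification model. Every column name here is a hypothetical placeholder, not a NASS variable; contact-history and GPS features would join the same matrix in the same way.

```python
# Hypothetical sketch: aggregate the indicators above per interviewer.
# Input columns (interview_minutes, complete_date, n_items_missing,
# n_items, n_edits) are assumptions for illustration only.
import pandas as pd

def interviewer_features(interviews: pd.DataFrame) -> pd.DataFrame:
    """One row per interviewer, one column per indicator."""
    g = interviews.groupby("interviewer_id")
    return pd.DataFrame({
        # Speed indicators: unusually short interviews, high throughput.
        "median_minutes": g["interview_minutes"].median(),
        "completes_per_day": g.size() / g["complete_date"].nunique(),
        # Item nonresponse: share of administered items left blank.
        "item_nonresponse": g["n_items_missing"].sum() / g["n_items"].sum(),
        # Edit rate: post-collection edit flags per completed interview.
        "edit_rate": g["n_edits"].sum() / g.size(),
    })
```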

How do we prevent falsification? How do we change motivation? Intrinsic versus extrinsic motivation Employee (and respondent) engagement

Extrinsic Motivation Should we pay interviewers more? How much do you have to pay to ensure data won’t be falsified? “Because of what they are paying me, I’m going to collect the most accurate data possible.”

Extrinsic Motivation Interviewers may falsify because they think no one is checking and it doesn’t matter. Knowing that QA procedures are in place and work is monitored can help here. “Because I know they are checking my work, I’m going to collect the most accurate data possible.”

Intrinsic Motivation How else can we motivate interviewers? “Because _____________________________, I’m going to collect the most accurate data possible.”

How to get interviewers invested in the process Minimize Us versus Them – Supervisors/Monitors versus interviewers – HQ versus field – Data collectors versus data providers Value of the agency Value of the work “Because __________________, I’m going to collect the most accurate data possible.”

This extends to respondents too! Many of the indicators would flag poor quality data provided by respondents. Why do respondents want to provide good quality data? How can we improve the quality of respondents’ inputs? Will interviewers who are good at gaining cooperation (i.e., getting cooperation from “hard to reach” units) look like they are collecting lower quality data?

What can we do to get the right answer in that blank? Need to invest in – Training – Employee engagement – Communication up and down the chain Ultimate goal is to have only unintentional errors to detect

Comments on Winker’s paper

Curb-stoning as a Fraud Detection Problem. Advantages to this approach? Why not a classification problem? Why not score interviewers using an index of indicators? How about scoring interviews and verifying cases (not interviewers), or following up the interviewers with the highest percentage of suspicious records? Is this an outlier detection problem?
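A hedged sketch of two of those alternatives, as an illustration rather than Winker's method: score interviewers with a simple standardized index of indicators, and treat the same matrix as an outlier detection problem. `feats` is the hypothetical interviewer-by-indicator matrix sketched earlier, with columns oriented so that higher means more suspicious and with no missing values.

```python
# Illustration only, not Winker's method: (1) an index of indicators,
# (2) plain unsupervised outlier detection over the same matrix.
import pandas as pd
from sklearn.ensemble import IsolationForest

def suspicion_scores(feats: pd.DataFrame) -> pd.DataFrame:
    # (1) Index of indicators: z-score each column and average.
    z = (feats - feats.mean()) / feats.std(ddof=0)
    index_score = z.mean(axis=1)
    # (2) Outlier view: isolation forest; higher = more anomalous.
    iso = IsolationForest(random_state=0).fit(feats)
    outlier_score = -pd.Series(iso.score_samples(feats), index=feats.index)
    # Verify the top-ranked interviewers' cases first.
    return pd.DataFrame({"index_score": index_score,
                         "outlier_score": outlier_score}
                        ).sort_values("index_score", ascending=False)
```

Computing the same scores per interview rather than per interviewer would support the case-level verification and "highest percent suspicious" follow-ups suggested above.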

An objective way to narrow focus and target scarce resources. But as in other fraud detection problems, it likely doesn’t go far enough: falsification must be detected at much lower rates than 20% falsifiers with a 70% falsification rate (which would put fabricated records at 14% of all data; realistic rates are far lower).

How can this method be extended? Are there other variables beyond the indicators that might be useful in classifying falsifiers? – Data-relevant indicators (time stamps, edit rates) – Person-relevant indicators (interviewer characteristics – yes, I realize we are getting into dicey territory!) – This would require “real” data, i.e. it cannot be done with simulated data.
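To make that concrete, a purely hypothetical sketch of the supervised extension, assuming verified falsifier labels exist from real reinterview follow-ups; none of the feature or label names here come from Winker's paper.

```python
# Hypothetical sketch: classify verified falsifiers from combined
# data-relevant features (time stamps, edit rates) and person-relevant
# ones (e.g., tenure, workload). Requires real, labeled data.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def falsifier_model_auc(feats: pd.DataFrame, verified: pd.Series) -> float:
    """feats: one row per interviewer; verified: 1 = confirmed falsifier.
    Falsifiers are rare, so report ROC AUC rather than accuracy."""
    # A shallow tree keeps any learned indicator combinations readable.
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    return cross_val_score(clf, feats, verified, cv=5,
                           scoring="roc_auc").mean()
```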