Difference-in-Differences Models

Presentation transcript:

Difference-in-Differences Models
Gustavo Angeles
MEASURE Evaluation, University of North Carolina at Chapel Hill
Workshop on Impact Evaluation of Population, Health and Nutrition Programs
Accra, Ghana, July 18-29, 2016

I. Difference-in-differences: Basic set-up
2 groups: Program group ("with program") and Comparison group ("without program")
2 points in time: Baseline survey and Follow-up survey
Recommended: the follow-up survey is longitudinal at the individual, household, or locality level

Difference-in-Differences
[Figure, built up over several slides: Outcome on the vertical axis, Time (Baseline, Follow-up) on the horizontal axis. The Program Group moves from A at baseline to B at follow-up, a change of B-A. The Comparison Group moves from C at baseline to D at follow-up, a change of D-C.]

Impact = (B-A)-(D-C)
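As a quick worked example with hypothetical numbers (not from the slides): suppose A = 40, B = 60, C = 45, and D = 50. Then

\text{Impact} = (B - A) - (D - C) = (60 - 40) - (50 - 45) = 20 - 5 = 15

The program group improved by 20, but 5 points of that change would have happened anyway (judging by the comparison group), so the estimated impact is 15.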

Difference-in-Differences
Impact = (B-A)-(D-C)
[Same figure as above.]
Key condition: "Parallel trends assumption." The Program group would have had the same change as the Comparison group in the absence of the program.
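Stated formally, in potential-outcomes notation that is not on the slide, with Y(0) the outcome in the absence of the program, the assumption is:

E\big[Y(0)_{\text{follow-up}} - Y(0)_{\text{baseline}} \mid \text{Program group}\big] = E\big[Y(0)_{\text{follow-up}} - Y(0)_{\text{baseline}} \mid \text{Comparison group}\big]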

Difference-in-Differences
Impact = (B-A)-(D-C)
[Figure: the Program group's true counterfactual change differs from the Comparison group's change D-C; in the case drawn, diff-in-diff under-estimates program impact.]
Key condition: "Parallel trends assumption." The Program group would have had the same change as the Comparison group in the absence of the program.
Limitations:
- This is a strong assumption; the "true change" could have been different.
- It requires a "short" time interval between baseline and follow-up, but a short interval also reduces the magnitude of the impact available to estimate.

Difference-in-Differences
Impact = (B-A)-(D-C)
[Same figure as above.]
Key issue: selection of the Comparison Group.
Question: What is the best way to select the program and comparison groups so that the two groups will behave similarly and will have the same change in the absence of the program?

Difference-in-Differences: Testing the "Parallel trends assumption"
Impact = (B-A)-(D-C)
[Figure: a Pre-Baseline round is added. The Program Group moves from E at pre-baseline to A at baseline (change A-E) and on to B at follow-up; the Comparison Group moves from F to C (change C-F) and on to D.]
One way to test the assumption: you need Pre-Baseline data!
In this example, the "Parallel trends assumption" holds if (A-E) = (C-F).
Problems:
- Pre-baseline data are rarely available.
- Past behavior is only an indication of future behavior.
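A minimal sketch of this pre-trends check in Python; all variable names and numbers below are hypothetical illustrations, not from the slides.

# Pre-trends check: compare each group's pre-baseline-to-baseline change.
A, E = 42.0, 40.0  # program group means: baseline (A), pre-baseline (E); hypothetical
C, F = 37.0, 35.0  # comparison group means: baseline (C), pre-baseline (F); hypothetical

program_pretrend = A - E        # change in the program group before the program
comparison_pretrend = C - F     # change in the comparison group over the same period
placebo_did = program_pretrend - comparison_pretrend

# A placebo_did close to zero is consistent with parallel pre-trends;
# a large value is evidence against the assumption.
print(program_pretrend, comparison_pretrend, placebo_did)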

Difference-in-Differences: Not good if the true changes (trends) differ
Impact = (B-A)-(D-C)
[Figure: even with pre-baseline data in hand, after baseline the Program group's true counterfactual change differs from the Comparison group's change D-C; the gap between B and that counterfactual is the true impact.]
In this case diff-in-diff provides an incorrect estimate of program impact; in the example drawn, it under-estimates program impact.

Difference-in-Differences: Extensions (3 points in time)
[Figure: with a second follow-up round, the Program Group moves from A at baseline to B at Follow-up 1 (Impact 1) and on to G at Follow-up 2 (Impact 2); the Comparison Group moves from C to D and on to H.]
Key condition: the "Parallel trends assumption" holds for each time period.
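Read off the point labels in the figure, one natural way to write the two estimates (my reading of the build, not spelled out on the slide, with each impact measured relative to the common baseline) is:

\text{Impact}_1 = (B - A) - (D - C), \qquad \text{Impact}_2 = (G - A) - (H - C)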

The DID model
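The regression specification itself is not written out on this slide. A standard two-group, two-period DID regression, using the variable names that appear in Figure 1 on the next slide, would be:

Y_{it} = \beta_0 + \beta_1 P_i + \beta_2 T_t + \beta_3 (P_i \times T_t) + \gamma' x_{it} + \varepsilon_{it}

Here the coefficient \beta_3 on the interaction term PxT is the difference-in-differences estimate of program impact, and x_{it} collects the cluster, household, or individual characteristics (x1, x2, x3, ...).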

Figure 1. DID Model – Structure of the pooled data set

Variable names:
Iid: individual identifier number
Cluster ID: cluster identifier number
P: program dummy
T: time dummy
PxT: the interaction dummy
Y: the dependent variable
x1, x2, x3, ...: cluster, household, or individual characteristics

Baseline rows (T = 0):
Iid  Cluster ID  P  T  PxT  Y   x1   x2   x3 ...
1    1           1  0  0    0   ...  ...  ...
2    1           1  0  0    1   ...  ...  ...
3    1           1  0  0    1   ...  ...  ...
4    1           1  0  0    0   ...  ...  ...
1    2           0  0  0    1   ...  ...  ...
2    2           0  0  0    0   ...  ...  ...
3    2           0  0  0    1   ...  ...  ...
...
1    200         1  0  0    0   ...  ...  ...
2    200         1  0  0    1   ...  ...  ...
3    200         1  0  0    0   ...  ...  ...

Follow-up rows (T = 1):
Iid  Cluster ID  P  T  PxT  Y   x1   x2   x3 ...
1    1           1  1  1    1   ...  ...  ...
2    1           1  1  1    0   ...  ...  ...
3    1           1  1  1    1   ...  ...  ...
4    1           1  1  1    0   ...  ...  ...
1    2           0  1  0    0   ...  ...  ...
2    2           0  1  0    1   ...  ...  ...
3    2           0  1  0    1   ...  ...  ...
...
1    200         1  1  1    1   ...  ...  ...
2    200         1  1  1    0   ...  ...  ...
3    200         1  1  1    1   ...  ...  ...
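A minimal sketch of how a data set laid out as in Figure 1 could be analyzed, assuming Python with pandas and statsmodels; the file name, covariate list, and column spellings are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

# Pooled baseline + follow-up rows, one row per individual per round,
# with columns named as in Figure 1 (Iid, Cluster, P, T, PxT, Y, x1, ...).
df = pd.read_csv("pooled_baseline_followup.csv")  # hypothetical file name
df["PxT"] = df["P"] * df["T"]                     # interaction dummy

# OLS with standard errors clustered at the cluster level; the coefficient
# on PxT is the difference-in-differences estimate of program impact.
model = smf.ols("Y ~ P + T + PxT + x1 + x2 + x3", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["Cluster"]})
print(result.summary())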

Difference-in-differences
- A method widely used in program evaluation.
- The key is the "Parallel trends assumption."
- It works better if the program was randomly allocated between the program group and the comparison group: the two groups will then be "similar" in observed and unobserved characteristics.
- An alternative is to use matching procedures to find a matched Comparison Group before you implement the baseline; you can match on community characteristics.
- It works better if there is a "short" time interval between baseline and follow-up, but how "short" can the interval be and still measure impact? It depends on the outcome and on the selection of the comparison group.
- It can control for fixed unobserved characteristics that could otherwise be a source of biased estimates of program impact (endogeneity).
- It is better to have longitudinal surveys.

Thank you!

This presentation was produced with the support of the United States Agency for International Development (USAID) under the terms of MEASURE Evaluation cooperative agreement AID-OAA-L-14-00004. MEASURE Evaluation is implemented by the Carolina Population Center, University of North Carolina at Chapel Hill in partnership with ICF International; John Snow, Inc.; Management Sciences for Health; Palladium; and Tulane University. Views expressed are not necessarily those of USAID or the United States government. www.measureevaluation.org