Randomized Evaluation: Dos and Don'ts. An example from Peru. Tania Alfonso, Training Director, IPA.


Outline
– Design
– Implementation
– Analysis

Outline
– Design
  – Research question
  – Power
  – Randomization
  – Sampling
– Implementation
– Analysis

Research question
– Do make sure the research question is policy relevant
– Do make sure your indicators answer your research question

Power Don’t conduct an under-powered evaluation – What does it mean to be under-powered? – Sample size and power

Power
Do power calculations first
– Effect size
– Sample size
– Getting data
– (What will take-up be?)
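As a rough illustration of what "power calculations first" involves, here is a minimal sketch in Python using statsmodels. The effect size, take-up rate, and power target are assumed for illustration and are not numbers from the Peru study.

```python
# Minimal power-calculation sketch (illustrative numbers, not from the slides).
from statsmodels.stats.power import TTestIndPower

mde = 0.20      # minimum detectable effect, in standard deviations (assumed)
alpha = 0.05    # significance level
power = 0.80    # desired power
take_up = 0.60  # expected take-up in the treatment group (assumed)

solver = TTestIndPower()

# Sample size per arm if everyone assigned to treatment is treated.
n_per_arm = solver.solve_power(effect_size=mde, alpha=alpha, power=power, ratio=1.0)

# With imperfect take-up, the intention-to-treat effect is diluted by the take-up
# rate, so the detectable effect shrinks and the required sample grows by 1/take_up^2.
n_per_arm_itt = solver.solve_power(effect_size=mde * take_up, alpha=alpha,
                                   power=power, ratio=1.0)

print(f"n per arm (full compliance): {n_per_arm:,.0f}")
print(f"n per arm (60% take-up):     {n_per_arm_itt:,.0f}")
```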

Power
Do account for clustering when doing power calculations
– Bad example: two districts, 10,000 people (randomizing at the district level means the effective sample size is closer to 2 than to 10,000)
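To see why clustering matters, here is a short sketch of the standard design effect, 1 + (m - 1) * ICC, applied to the "two districts, 10,000 people" example; the intra-cluster correlation used below is an assumed value.

```python
# Design-effect sketch: why "two districts, 10,000 people" is under-powered.
# Illustrative numbers; the intra-cluster correlation (ICC) is an assumption.

def effective_sample_size(n_total, n_clusters, icc):
    """Effective sample size under the design effect 1 + (m - 1) * ICC,
    where m is the average cluster size."""
    m = n_total / n_clusters
    deff = 1 + (m - 1) * icc
    return n_total / deff

# 10,000 people spread over only 2 districts, with a modest ICC of 0.05:
print(effective_sample_size(10_000, 2, 0.05))    # about 40, not 10,000

# The same 10,000 people spread over 200 villages:
print(effective_sample_size(10_000, 200, 0.05))  # about 2,900

# And with only 2 randomization units, no sample size can separate the
# treatment effect from a district-level shock.
```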

Randomization
Do ensure balance
– Stratification
– Re-randomizing
– Costs and benefits
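A minimal sketch of stratified assignment followed by a simple balance check; the data frame, strata, and covariate names below are hypothetical.

```python
# Stratified random assignment with a simple balance check (hypothetical data).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(12345)

def stratified_assign(df, strata_cols):
    """Randomly assign half of each stratum to treatment."""
    df = df.copy()
    df["treatment"] = 0
    for _, idx in df.groupby(strata_cols).groups.items():
        idx = rng.permutation(idx.to_numpy())
        df.loc[idx[: len(idx) // 2], "treatment"] = 1
    return df

# Example data (hypothetical):
df = pd.DataFrame({
    "district": rng.choice(["A", "B", "C"], size=600),
    "baseline_income": rng.normal(100, 20, size=600),
})
df = stratified_assign(df, ["district"])

# Balance check on a baseline covariate: t-test of treatment vs control means.
t, p = stats.ttest_ind(df.loc[df.treatment == 1, "baseline_income"],
                       df.loc[df.treatment == 0, "baseline_income"])
print(f"baseline income balance: t = {t:.2f}, p = {p:.2f}")
```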

Sampling
Do make sure your sampling frame is as close to your target population as possible
– Effect size

Outline
– Design
– Implementation
  – Measurement
  – Monitoring
  – Attrition
– Analysis

Measurement Don’t collect data differently for treatment and control groups – Introducing bias

Measurement Don’t use as your primary indicator something that may change with the intervention, even when the outcome does not

Monitoring
Do monitor your intervention to ensure that treatment groups are receiving the treatment and control groups are not
– Contamination

Monitoring
Do collect process indicators to unpack the black box

Attrition
Do whatever it takes to minimize attrition
– Attrition bias

Outline
– Design
– Implementation
– Analysis
  – Treatment integrity
  – Attrition
  – Final outcomes
  – Subgroup analyses
  – Covariates

Integrity of design
"Once in treatment, always in treatment"
Don't switch treatment or control status based on compliance
– Intention to Treat (ITT)
– Treatment on the Treated (ToT)
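A minimal sketch of the distinction, analyzing by original assignment, with hypothetical variable names and a hypothetical endline.csv: the ITT compares groups as randomized, and the ToT rescales the ITT by the first-stage difference in take-up (the Wald/IV estimator).

```python
# Intention-to-treat (ITT) vs treatment-on-the-treated (ToT).
# Analyze by original assignment; never reclassify units based on compliance.
import pandas as pd

# Assumed columns: assigned (0/1 original random assignment),
# treated (0/1 actually received the program), outcome (endline measure).
df = pd.read_csv("endline.csv")  # hypothetical file

itt = (df.loc[df.assigned == 1, "outcome"].mean()
       - df.loc[df.assigned == 0, "outcome"].mean())

# First stage: how much did assignment move actual treatment?
take_up = (df.loc[df.assigned == 1, "treated"].mean()
           - df.loc[df.assigned == 0, "treated"].mean())

# Wald / IV estimate: effect on compliers (equals ToT under one-sided non-compliance).
tot = itt / take_up

print(f"ITT = {itt:.3f}, first stage = {take_up:.3f}, ToT = {tot:.3f}")
```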

Attrition “Once in sample, always in sample” Do not ignore “attritors” – Attrition bias

Attrition Don’t relax just because rates of attrition are the same in treatment and control groups – How do we test – How do we know

Final outcomes Don’t run regressions on 20 different outcomes and only report on 1 or 2 “significant impacts” Do report on all outcomes

Sub-group analysis Don’t run regressions on 20 different subgroups and only report on 1 or 2 “significant impacts”

Covariates
Do specify the regression(s) you plan to run beforehand
Do include covariates that you stratified on and those that help absorb variance
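A minimal sketch of such a pre-specified regression, using hypothetical variable names: a treatment dummy, fixed effects for the stratification variable, a baseline measure of the outcome to absorb variance, and standard errors clustered at the (assumed) unit of randomization.

```python
# Pre-specified estimating equation (hypothetical data and names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("endline.csv")  # hypothetical file

model = smf.ols(
    "outcome ~ treatment + C(stratum) + baseline_outcome",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["village_id"]})

print(model.summary().tables[1])  # coefficient table
```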

External validity
Do be modest about the external validity of your results
– Consider the context (needs assessment)
– Consider the process (process evaluation)

Cost effectiveness
Do have listened to Iqbal's lecture yesterday
– Not sure if he is presenting or covered this… just a guess…