Measuring Impact: Experiments


1 Measuring Impact: Experiments
Daniel Stein, Economist, DIME. Istanbul, May 12, 2015

2 Practical Challenge Find a very good counterfactual to tell us what would have happened… Non-experimental methods require many assumptions and very good data. Is there an easier way to go?

3 Experiments Other names: Randomized Controlled Trials (RCTs), randomization, or random assignment. Assignment to treatment and control is based on chance; it is random (like flipping a coin). This is the best way of recovering the counterfactual.

4 Randomization? That's Not For Me!
Randomization can't answer every question, but it can be a useful tool in many types of projects. Maybe you can't randomize trade laws, but you can experiment with: information, incentives to export, loans, etc. Randomization allows clear answers to YOUR questions!

5 Experiments: Plan Purpose of randomizing and how it works: intuition.
Important concepts. Set-up. Different ways of randomizing. Real threats to your experiment.

6 Purpose… Identify the causal impact of an intervention on some outcomes of interest. To do so, you need to: identify your targeted population (or eligible group); select two groups, treatment and control; and ensure the groups are, on average, identical in observed and unobserved characteristics. How to do it? You need random assignment (an experiment).

7 Basic intuition: random sampling ≠ random assignment
[Diagram: 1. Population → 2. Evaluation sample (random sampling → external validity) → 3. Randomization into Treatment and Comparison groups (random assignment → internal validity)]

8 More intuition… “A program is targeted to the firms most in need so they can catch up.” Automatically excluded: ineligibles. Automatically included. May or may not enter: eligibles.

9 More intuition… “A program is targeted to the firms most in need so they can catch up.” Automatically excluded: ineligibles. Automatically included: eligibles. May or may not enter: randomize here!

10 Set Up Example: the government of Eurasia wants to test the effect of a training program. It can be evaluated within or between firms, and you're hired to do an impact evaluation. Case 1: you first consider working within firms. A firm with 800 employees is selected, and it's believed that some of its workers could perform better if they received some training. What can be done to see if the training actually improves workers' productivity on the job?

11 1. Pure Randomization We don't know beforehand whether the program works, so it's decided to test a pilot. Sampling frame: 400 employees. Give all workers the same chance to participate in the pilot: a random number is assigned to each worker, the workers are ordered in ascending order by the number they were given, and the first 200 are selected for the program. This is an example of pure randomization in practice. A challenge you may face: unselected workers become unhappy and respond to the selection process by changing their behavior. Why would that be a problem?
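A minimal sketch of this lottery in Python with pandas (the worker IDs and seed are hypothetical): each worker gets a random number, workers are sorted by it, and the first 200 enter the pilot.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=12345)  # fix the seed so the lottery can be reproduced and audited

# Hypothetical sampling frame: 400 worker IDs
frame = pd.DataFrame({"worker_id": range(1, 401)})

# Assign each worker a random number, sort in ascending order, select the first 200
frame["draw"] = rng.random(len(frame))
frame = frame.sort_values("draw").reset_index(drop=True)
frame["treatment"] = (frame.index < 200).astype(int)  # 1 = selected for the pilot, 0 = not selected

print(frame["treatment"].value_counts())  # 200 treated, 200 comparison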

12 1. Pure Randomization Effect = (YT) - (YC)
Randomize treatment: 200 workers in the Treatment group and 200 in the Comparison group (in the slide diagram, the globe is your firm). Compare the average performance outcome at the end of the month in each group: Effect = (YT) - (YC).
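Continuing the lottery sketch above, the impact estimate is just the difference in group means. The end-of-month performance column is hypothetical data you would collect after the pilot:

```python
# "frame" comes from the lottery sketch above; "performance" is a hypothetical
# outcome column measured for every worker at the end of the month
y_t = frame.loc[frame["treatment"] == 1, "performance"].mean()  # average outcome, treated (YT)
y_c = frame.loc[frame["treatment"] == 0, "performance"].mean()  # average outcome, comparison (YC)
effect = y_t - y_c  # Effect = (YT) - (YC)
```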

13 Example of Spillover Effects
If the performance of the trained workers affects the behavior of the untrained in the same team, then…

14 Example of Spillover Effects
If the performance of the trained workers affects the behavior of the untrained in the same team, then… This is an example of spillover effects

15 1. Pure Randomization Suppose that workers are teamed up.
Work teams are randomly assigned to get trained instead. Advantage of working at the cluster level: it reduces the risk of spillovers within groups. [Diagram: work teams labeled T (treatment) and C (comparison)]

16 Important Concepts Pure randomization: unit of intervention = unit of analysis. You randomize workers and assess the workers' performance individually, or you randomize teams and assess the teams' performance. Cluster randomization: unit of intervention ≠ unit of analysis. You randomize firms and assess workers' performance individually.
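A minimal sketch of cluster randomization under the team set-up above (team sizes, IDs, and the seed are hypothetical): the lottery runs over teams, and every worker inherits their team's assignment, so the unit of intervention is the team while the analysis can stay at the worker level.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2015)

# Hypothetical: 400 workers grouped into 80 teams of 5
workers = pd.DataFrame({
    "worker_id": range(1, 401),
    "team_id": np.repeat(np.arange(1, 81), 5),
})

# Randomize at the cluster (team) level: half of the 80 teams are treated
teams = pd.DataFrame({"team_id": np.arange(1, 81)})
teams["draw"] = rng.random(len(teams))
teams["treatment"] = (teams["draw"].rank(method="first") <= 40).astype(int)

# Every worker inherits the assignment of their team
workers = workers.merge(teams[["team_id", "treatment"]], on="team_id")
```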

17 Important Concepts Unit of randomization: choose according to the type of program. As a rule of thumb, randomize at the smallest viable unit of implementation: individual/household/worker; firm/school/health clinic (clusters); block/village/community (clusters); ward/district/region (clusters).

18 Set Up Case 2: you decide to explore the intervention across firms, but you're worried about some practical issues such as limited capacity, low participation, and the absence of a pure control group. What can you do?

19 Opportunities for Randomization
Budget constraints prevent full coverage → random assignment (lottery) is fair and transparent. Limited implementation capacity → randomized phase-in gives everyone the same chance to go first. No evidence on which alternative is best → random variation in treatment, with equal ex ante chance of success. Take-up of an existing program is not complete → encouragement design: randomly provide information or an incentive for some to sign up.

20 Scenario The government of Eurasia has identified 200 firms that have export potential but are not accessing markets. It wants to implement a program to give them export advisory services. What types of randomized designs might be possible?

21 Budget Constraints → Pure Randomization
Imagine that the government only has the budget to deliver the service to 100 out of 200 firms. How to choose? This is an opportunity for pure randomized selection: randomization is fair, transparent, and allows a rigorous impact evaluation!

22 Capacity Constraints: Phase-in Design
The government only has the capacity to deliver the program to 100 of the 200 eligible firms in its first year. There is a waiting list, and the government wants to give all firms the same chance to participate first. You randomly assign 100 firms to participate now and treat the other 100 later, say, one year later. This solves your problem, but notice that as soon as the control firms enter the program you are no longer able to identify its impact: no long-term effects!
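A minimal sketch of the phase-in assignment (firm IDs and seed are hypothetical): all 200 firms are randomly ordered, the first 100 enter in year one, and the rest are phased in a year later, serving as the comparison group only until they enter.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2016)

firms = pd.DataFrame({"firm_id": range(1, 201)})
firms["draw"] = rng.random(len(firms))
firms = firms.sort_values("draw").reset_index(drop=True)

# First 100 firms enter the program now; the other 100 are phased in one year later
firms["phase"] = np.where(firms.index < 100, "year_1", "year_2")

# Impact can only be identified while the year_2 group is still untreated,
# so this design cannot measure long-term effects
```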

23 Not sure what works best: variation in treatment
Let’s say the government is not sure what to offer the firms: export subsidies or training. You can test which works better: randomly assign 100 firms to receive training and 100 to receive subsidies. Figure out which is more effective!
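A minimal sketch of the two-arm design (firm IDs, seed, and the outcome column are hypothetical): 100 firms are randomly assigned to training and 100 to subsidies, and their average outcomes are compared.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2017)

firms = pd.DataFrame({"firm_id": range(1, 201)})
firms["draw"] = rng.random(len(firms))
firms["arm"] = np.where(firms["draw"].rank(method="first") <= 100, "training", "subsidy")

# Later, with a hypothetical outcome column such as "exports" collected for every firm:
# firms.groupby("arm")["exports"].mean()  # compare which arm performs better
```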

24 Lack of Interest → Encouragement Design
Now suppose that the government does not want to exclude from the program anyone who really wants it. This is the opportunity for an encouragement design: you randomly select 100 of the 200 firms to get special encouragement (visits, phone calls, tax breaks) to take advantage of the program. The other 100 can still sign up. If your encouragement is effective, this still allows you to evaluate the program.
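A minimal sketch of the encouragement design and one common way to analyze it, the Wald (instrumental-variables) estimator: scale the difference in average outcomes between encouraged and non-encouraged firms by the difference in take-up. Firm IDs, the seed, and the `took_up` and `exports` columns are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2018)

# Randomly select 100 of the 200 firms to receive the encouragement
firms = pd.DataFrame({"firm_id": range(1, 201)})
firms["draw"] = rng.random(len(firms))
firms["encouraged"] = (firms["draw"].rank(method="first") <= 100).astype(int)

# After the program runs, "took_up" (did the firm sign up?) and "exports"
# (the outcome) would be collected for every firm -- both hypothetical here.
def wald_estimate(df: pd.DataFrame) -> float:
    """Difference in average outcomes divided by difference in take-up rates
    between encouraged and non-encouraged firms."""
    enc = df[df["encouraged"] == 1]
    ctl = df[df["encouraged"] == 0]
    outcome_gap = enc["exports"].mean() - ctl["exports"].mean()
    takeup_gap = enc["took_up"].mean() - ctl["took_up"].mean()
    return outcome_gap / takeup_gap  # only meaningful if the encouragement raises take-up
```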

25 Why randomize? Randomization is the “gold standard”.
Compared to other techniques, results from randomization carry more weight with donors, researchers, and policy makers. Go for the gold!

26 Thank you! WEB http://dime.worldbank.org facebook.com/ieKnow
#impacteval blogs.worldbank.org/impactevaluations microdata.worldbank.org/index.php/catalog/impact_evaluation

