
1 Research and Evaluation Center How Researchers Generate and Interpret Evidence. Jeffrey A. Butts, John Jay College of Criminal Justice, City University of New York. August 7, 2012.

2 Research and Evaluation Center THERE ARE MANY TYPES OF EVIDENCE
Stage of Development | Question to be Asked | Evaluation Function
1. Assessment of social problems and needs | To what extent are community needs and standards met? | Needs assessment; problem description
2. Determination of goals | What must be done to meet those needs and standards? | Needs assessment; service needs
3. Design of program alternatives | What services could be used to produce the desired changes? | Assessment of program logic or theory
4. Selection of alternative | Which of the possible program approaches is best? | Feasibility study; formative evaluation
5. Program implementation | How should the program be put into operation? | Implementation assessment
6. Program operation | Is the program operating as planned? | Process evaluation; program monitoring
7. Program outcomes | Is the program having the desired effects? | Outcome evaluation
8. Program efficiency | Are program effects attained at a reasonable cost? | Cost-benefit analysis; cost-effectiveness analysis
Focus of Evidence-Based Practices and Policy.
Source: Rossi, P., M. Lipsey and H. Freeman (2004). Evaluation: A Systematic Approach (7th Edition), p. 40. Sage Publications; adapted from Pancer & Westhues (1989).

3 Research and Evaluation Center THE ESSENTIAL QUESTION IN ALL EVALUATION RESEARCH IS… COMPARED TO WHAT?

4 Research and Evaluation Center CLASSIC EXPERIMENTAL, RANDOM-ASSIGNMENT DESIGN
Client referrals are screened for eligibility (determined by evaluators or using guidelines from evaluators) and then pass through a random assignment process. The treatment group begins services; the control group receives no services or different services. Both groups go through equivalent data collection at time points 1 through 6, and outcomes are analyzed at the end (differences, effect size, etc.).
Issues: Eligibility? How to randomize? No-services group? Equivalent data collection?
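The eligibility and assignment steps can be sketched in a few lines of code. The sketch below is illustrative only, assuming a simple coin-flip allocation of eligible referrals; all names (referrals, eligible, treatment, control) are hypothetical and not taken from the slide.

```python
# Minimal sketch of eligibility screening followed by random assignment.
# All data and names are hypothetical; a real trial would also log the
# assignment and enforce equivalent data collection for both groups.
import random

random.seed(42)  # fixed seed so the illustrative split is reproducible

# Hypothetical client referrals, each with an eligibility flag
referrals = [{"id": i, "eligible": random.random() < 0.8} for i in range(200)]

# 1. Eligibility determined by evaluators or by guidelines from evaluators
eligible = [r for r in referrals if r["eligible"]]

# 2. Random assignment: treatment group begins services, control group
#    receives no services or different services
treatment, control = [], []
for client in eligible:
    (treatment if random.random() < 0.5 else control).append(client)

print(f"treatment n={len(treatment)}, control n={len(control)}")

# 3. Both groups would then pass through the same data collection at time
#    points 1-6, and outcomes would be compared (differences, effect size).
```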

5 Research and Evaluation Center QUASI-EXPERIMENTAL DESIGN – MATCHED COMPARISON GROUPS
Client referrals form the treatment group. A comparison group is drawn from a pool of potential comparison cases through a matching process, matching according to sex, race, age, prior services, scope of problems, etc. Both groups go through the same data collection at time points 1 through 6, and outcomes are analyzed (differences, effect size, etc.).
Issues: Comparison cases? Matched on what? Control services to comparison cases? Equivalent data collection?
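A matching process like the one in this design can also be sketched. The example below assumes exact matching on sex and race and nearest-neighbor matching on age, drawing without replacement from the comparison pool; the function and field names are hypothetical, and real evaluations often match on more characteristics (prior services, scope of problems) or on a propensity score.

```python
# Illustrative matching sketch: exact on sex and race, nearest on age,
# drawing each comparison case from the pool without replacement.
def match_comparison(treated, pool):
    available = list(pool)
    matches = []
    for case in treated:
        candidates = [c for c in available
                      if c["sex"] == case["sex"] and c["race"] == case["race"]]
        if not candidates:
            continue  # no usable match; unmatched cases are a design issue
        best = min(candidates, key=lambda c: abs(c["age"] - case["age"]))
        matches.append(best)
        available.remove(best)
    return matches

treated = [{"id": 1, "sex": "F", "race": "B", "age": 16},
           {"id": 2, "sex": "M", "race": "W", "age": 15}]
pool = [{"id": 10, "sex": "F", "race": "B", "age": 17},
        {"id": 11, "sex": "M", "race": "W", "age": 14},
        {"id": 12, "sex": "M", "race": "W", "age": 15}]

print(match_comparison(treated, pool))  # -> cases 10 and 12
```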

6 Research and Evaluation Center SOME NON-TRADITIONAL DESIGNS CAN BE PERSUASIVE

7 Research and Evaluation Center QUASI-EXPERIMENTAL DESIGN – STAGGERED START
Client referrals are divided into three groups. Each group begins the intervention (marked X) at a different, staggered time point, while data are collected from all groups at time points 1 through 6. Outcomes are then compared across groups and across pre- and post-intervention periods.
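One way to picture the staggered-start logic is as a schedule in which each group switches from a pre-intervention phase to the intervention at a different time point. The sketch below is a hypothetical illustration; the group names and start times are invented, not taken from the slide.

```python
# Hypothetical staggered-start schedule: each group begins the intervention
# at a different time, so earlier periods act as within-group comparisons
# and groups that have not yet started act as between-group comparisons.
START_TIME = {"group_1": 2, "group_2": 3, "group_3": 4}  # invented start points
TIME_POINTS = range(1, 7)  # data collected at times 1-6 for every group

schedule = {
    group: {t: ("intervention" if t >= start else "pre-intervention")
            for t in TIME_POINTS}
    for group, start in START_TIME.items()
}

for group, phases in schedule.items():
    print(group, phases)
```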

8 Research and Evaluation Center MANY FACTORS ARE INVOLVED IN CHOOSING THE BEST DESIGN
- Each design has to be adapted to the unique context of its setting
- Experimental designs are always preferred, but rarely feasible
- The critical task of evaluation design is to choose the most rigorous, yet realistic design possible
- Key stakeholders should be involved in early deliberations over evaluation design, both to solicit their views and to gain their support for the eventual design
- Design criticisms should be anticipated and dealt with early

9 Research and Evaluation Center TWO TYPES OF THREATS TO VALIDITY
External: Something about the way the study is conducted makes it impossible to generalize the findings beyond this particular study. Can findings of effectiveness be transferred to other settings and other circumstances?
Internal: The study failed to establish credible evidence that the intervention (e.g., services, policy change) affected the outcomes in a demonstrable and causal way. Can we really say that A caused B?
From “Quasi-Experimental Evaluation.” Evaluation and Data Development, Strategic Policy, Human Resources Development Canada, January 1998, page 5 (SP-AH053E-01-98, see www.hrsdc.gc.ca).

10 Research and Evaluation Center THREATS TO INTERNAL VALIDITY (the intervention made a real difference in this study)
Threats generated by evaluators:
Testing: Effects of taking a pretest on subsequent post-tests. People might do better on the second test simply because they have already taken it. Taking a pretest may also sensitize participants to a program, and participants may perform better simply because they know they are being tested (the “Hawthorne effect”).
Instrumentation: Changes in the observers, scorers, or the measuring instrument used from one time to the next.
From “Quasi-Experimental Evaluation.” Evaluation and Data Development, Strategic Policy, Human Resources Development Canada, January 1998, page 5 (SP-AH053E-01-98, see www.hrsdc.gc.ca).

11 Research and Evaluation Center THREATS TO INTERNAL VALIDITY (the intervention made a real difference in this study)
Changes in the environment or in participants:
History: Changes in the environment that occur at the same time as the program and change the behavior of participants (e.g., a recession might make a good program look bad).
Maturation: Changes within individuals participating in the program resulting from natural biological or psychological development.
From “Quasi-Experimental Evaluation.” Evaluation and Data Development, Strategic Policy, Human Resources Development Canada, January 1998, page 5 (SP-AH053E-01-98, see www.hrsdc.gc.ca).

12 Research and Evaluation Center THREATS TO INTERNAL VALIDITY (the intervention made a real difference in this study)
Participants not representative of the population:
Selection: Assignment to participant or non-participant groups yields groups with different characteristics. Pre-program differences may be confused with program effects.
Attrition: Participants drop out of the program, and drop-outs may be different from those who stay.
Statistical Regression: The tendency for those scoring extremely high or low on a selection measure to be less extreme on the next test. For example, if only those who scored worst on a reading test are included in the literacy program, they are likely to do better on the next test regardless of the program, simply because the odds of scoring as poorly again are low.
From “Quasi-Experimental Evaluation.” Evaluation and Data Development, Strategic Policy, Human Resources Development Canada, January 1998, page 5 (SP-AH053E-01-98, see www.hrsdc.gc.ca).
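Statistical regression is easy to demonstrate with a small simulation. The sketch below assumes each observed score is a stable "true" reading level plus random noise; selecting the lowest pretest scorers then produces improvement on the next test even with no program at all. All numbers are invented.

```python
# Simulation of regression to the mean: observed score = true level + noise.
import random
import statistics

random.seed(1)
true_levels = [random.gauss(100, 10) for _ in range(5000)]
pretest = [level + random.gauss(0, 10) for level in true_levels]
next_test = [level + random.gauss(0, 10) for level in true_levels]

# Select the worst 10 percent on the pretest, as in the literacy example
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [i for i, score in enumerate(pretest) if score <= cutoff]

print("selected group, pretest mean:",
      round(statistics.mean(pretest[i] for i in selected), 1))
print("selected group, next test mean:",
      round(statistics.mean(next_test[i] for i in selected), 1))
# The second mean is noticeably higher even though nothing was done:
# extreme scores drift back toward the mean on retest.
```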

13 Research and Evaluation Center INTERPRETING EFFECTS
Two important concepts:
- Statistical Significance: How confident can we be that differences in outcome are really there and not just due to dumb luck?
- Effect Size: How meaningful are the differences in outcome? Differences can be statistically significant, but trivial in terms of their application and benefit in the real world.
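The distinction can be made concrete with a small calculation. The sketch below uses invented recidivism numbers, a standard two-proportion z-test for significance, and Cohen's h as an effect-size measure; it relies only on the Python standard library and is illustrative rather than a recommended analysis.

```python
# Statistical significance vs. effect size, with invented numbers.
import math

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p1 - p2, z, p_value

# With very large samples, a one-point drop in recidivism (49% vs. 50%)
# is statistically significant...
diff, z, p = two_proportion_test(49_000, 100_000, 50_000, 100_000)
print(f"difference={diff:+.3f}, z={z:.2f}, p={p:.5f}")

# ...but the effect size (Cohen's h) shows it is trivial in practical terms.
h = 2 * math.asin(math.sqrt(0.49)) - 2 * math.asin(math.sqrt(0.50))
print(f"Cohen's h = {h:+.3f} (far below the conventional 'small' benchmark of 0.2)")
```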

14 Research and Evaluation Center [Chart: percent change in recidivism, scale from -20% to +20%]

15 Research and Evaluation Center [Chart: percent change in recidivism, scale from -20% to +20%]

16 Research and Evaluation Center [Chart: percent change in recidivism, scale from -20% to +20%]

17 Research and Evaluation Center MUCH OF OUR REASONING COMES FROM KNOWLEDGE OF DISTRIBUTIONS
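One concrete way to build that knowledge of distributions is to simulate what differences would look like by chance alone. The sketch below is a hypothetical illustration, assuming a 50 percent base recidivism rate and 150 cases per group; every number in it is invented.

```python
# Simulated "no effect" distribution: how big do treatment-vs-control
# differences get when the program truly does nothing?
import random
import statistics

random.seed(7)
BASE_RATE, N = 0.50, 150  # invented recidivism rate and group size

def chance_difference():
    treat = sum(random.random() < BASE_RATE for _ in range(N)) / N
    control = sum(random.random() < BASE_RATE for _ in range(N)) / N
    return treat - control

null_differences = [chance_difference() for _ in range(10_000)]
print("typical chance difference (standard deviation):",
      round(statistics.stdev(null_differences), 3))
# An observed difference is persuasive only if it stands out against
# this distribution of differences produced by chance alone.
```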

18 Research and Evaluation Center ANOTHER WAY TO THINK ABOUT IT…
Evaluators must assess not only outcomes, but whether changes in outcomes are attributable to the program or policy.
- Outcome level is the status of an outcome at some point in time (e.g., the amount of smoking among teenagers)
- Outcome change is the difference between outcome levels at different points in time or between groups
- Program effect is the portion of a change in outcome that can be attributed uniquely to a program, as opposed to the influence of other factors
Rossi, P., M. Lipsey and H. Freeman (2004). Evaluation: A Systematic Approach (7th Edition), p. 208. Sage Publications.
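A tiny worked example may help separate the three concepts. The numbers below are invented, and the program effect is approximated by subtracting the comparison group's change from the program group's change (a difference-in-differences style calculation, not a method named on the slide).

```python
# Outcome level, outcome change, and program effect with invented numbers
# (teen smoking prevalence, in percentage points).
program_before, program_after = 30.0, 22.0        # outcome levels, program group
comparison_before, comparison_after = 30.0, 27.0  # outcome levels, comparison group

change_program = program_after - program_before            # outcome change: -8
change_comparison = comparison_after - comparison_before   # outcome change: -3

# Program effect: the portion of the change attributable uniquely to the
# program, approximated here by netting out the comparison group's change.
program_effect = change_program - change_comparison        # -5 points

print(f"outcome change, program group: {change_program:+.0f} points")
print(f"outcome change, comparison group: {change_comparison:+.0f} points")
print(f"program effect (net of other factors): {program_effect:+.0f} points")
```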

19 Research and Evaluation Center CONTACT INFORMATION
Jeffrey A. Butts, Ph.D.
Director, Research & Evaluation Center
John Jay College of Criminal Justice, City University of New York
http://about.me/jbutts
jbutts@jjay.cuny.edu

