Presentation transcript:

How Researchers Generate and Interpret Evidence
Jeffrey A. Butts
Research and Evaluation Center, John Jay College of Criminal Justice
City University of New York
August 7, 2012

THERE ARE MANY TYPES OF EVIDENCE

Stage of Development | Question to be Asked | Evaluation Function
1. Assessment of social problems and needs | To what extent are community needs and standards met? | Needs assessment; problem description
2. Determination of goals | What must be done to meet those needs and standards? | Needs assessment; service needs
3. Design of program alternatives | What services could be used to produce the desired changes? | Assessment of program logic or theory
4. Selection of alternative | Which of the possible program approaches is best? | Feasibility study; formative evaluation
5. Program implementation | How should the program be put into operation? | Implementation assessment
6. Program operation | Is the program operating as planned? | Process evaluation; program monitoring
7. Program outcomes | Is the program having the desired effects? | Outcome evaluation
8. Program efficiency | Are program effects attained at a reasonable cost? | Cost-benefit analysis; cost-effectiveness analysis

Focus of Evidence-Based Practices and Policy

Rossi, P., M. Lipsey and H. Freeman (2004). Evaluation: A Systematic Approach (7th Edition), p. 40. Sage Publications (adapted from Pancer & Westhues, 1989).

THE ESSENTIAL QUESTION IN ALL EVALUATION RESEARCH IS… COMPARED TO WHAT?

CLASSIC EXPERIMENTAL, RANDOM-ASSIGNMENT DESIGN

Flow diagram: Client referrals arrive, and eligibility for randomization is determined by evaluators or by using guidelines from evaluators. Eligible cases go through a random assignment process into a treatment group, which begins services, and a control group, which receives no services or different services. Data are collected from both groups over time, and outcomes are analyzed for differences, effect size, etc.

Issues: Eligibility? How to randomize? No-services group? Equivalent data collection?
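
To make the comparison concrete, here is a minimal sketch (not taken from the slides; the clients, outcome scores, and the built-in treatment benefit are all simulated) of random assignment followed by a simple comparison of group means and a standardized effect size:

```python
# Minimal sketch of a random-assignment comparison; clients and outcome
# scores are simulated, not real data.
import random
import statistics

random.seed(42)

referrals = [f"client_{i}" for i in range(200)]  # eligible client referrals
random.shuffle(referrals)
treatment = referrals[:100]   # begin services
control = referrals[100:]     # no services or different services

treated = set(treatment)
# Hypothetical outcome scores collected at follow-up; the treatment group
# is simulated with a modest built-in benefit.
outcome = {c: random.gauss(60, 10) + (5 if c in treated else 0) for c in referrals}

t_scores = [outcome[c] for c in treatment]
c_scores = [outcome[c] for c in control]

difference = statistics.mean(t_scores) - statistics.mean(c_scores)
pooled_sd = statistics.stdev(t_scores + c_scores)
print(f"difference in mean outcomes: {difference:.2f}")
print(f"standardized effect size:    {difference / pooled_sd:.2f}")
```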

QUASI-EXPERIMENTAL DESIGN – MATCHED COMPARISON GROUPS

Flow diagram: Client referrals form the treatment group. A comparison group is drawn from a pool of potential comparison cases through a matching process, matching according to sex, race, age, prior services, scope of problems, etc. Data are collected from both groups over time, and outcomes are analyzed for differences, effect size, etc.

Issues: Comparison cases? Matched on what? Control services to comparison cases? Equivalent data collection?
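
As a rough illustration of the matching process (the records and the matching rules below are hypothetical, not the procedure used in any particular study), one can exact-match on sex and take the nearest available neighbor on age from the comparison pool:

```python
# Minimal matching sketch with invented records: exact match on sex,
# nearest neighbor on age, each comparison case used at most once.
treatment_group = [
    {"id": "T1", "sex": "F", "age": 16},
    {"id": "T2", "sex": "M", "age": 14},
]
comparison_pool = [
    {"id": "C1", "sex": "F", "age": 17},
    {"id": "C2", "sex": "M", "age": 15},
    {"id": "C3", "sex": "M", "age": 13},
    {"id": "C4", "sex": "F", "age": 12},
]

matches = {}
used = set()
for t in treatment_group:
    candidates = [c for c in comparison_pool
                  if c["sex"] == t["sex"] and c["id"] not in used]
    if not candidates:
        continue  # no acceptable match; case left unmatched
    best = min(candidates, key=lambda c: abs(c["age"] - t["age"]))
    matches[t["id"]] = best["id"]
    used.add(best["id"])

print(matches)  # {'T1': 'C1', 'T2': 'C2'}
```

In practice the matching would cover all of the characteristics listed on the slide, but the logic is the same: each treatment case is paired with the most similar unused comparison case.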

SOME NON-TRADITIONAL DESIGNS CAN BE PERSUASIVE

QUASI-EXPERIMENTAL DESIGN – STAGGERED START

Flow diagram: Client referrals are divided into Group 1, Group 2, and Group 3. Each group receives the intervention (X) at a different point in time, with data collection points before and after each start. Outcomes are compared across groups and over time.
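
A small simulation can show the logic under assumed numbers: because the groups start at different times, groups that have not yet started can serve as the comparison for groups that have. The start periods, outcome scores, and analysis step below are invented for illustration, not drawn from the slides:

```python
# Minimal staggered-start sketch with simulated scores: at each data collection
# point, compare groups that have started the intervention with groups that
# have not yet started.
import random
import statistics

random.seed(1)
start_period = {"group_1": 2, "group_2": 4, "group_3": 6}  # hypothetical start times
periods = range(1, 9)                                      # data collection points

def simulated_outcome(group, period):
    """Made-up outcome score: a small benefit appears once the group has started."""
    effect = 5 if period >= start_period[group] else 0
    return random.gauss(50, 8) + effect

for period in periods:
    started, not_started = [], []
    for group in start_period:
        score = simulated_outcome(group, period)
        (started if period >= start_period[group] else not_started).append(score)
    if started and not_started:
        gap = statistics.mean(started) - statistics.mean(not_started)
        print(f"period {period}: started vs. not-yet-started gap = {gap:.1f}")
```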

MANY FACTORS ARE INVOLVED IN CHOOSING THE BEST DESIGN

- Each design has to be adapted to the unique context of its setting
- Experimental designs are always preferred, but rarely feasible
- The critical task of evaluation design is to choose the most rigorous, yet realistic, design possible
- Key stakeholders should be involved in early deliberations over evaluation design, both to solicit their views and to gain their support for the eventual design
- Design criticisms should be anticipated and dealt with early

TWO TYPES OF THREATS TO VALIDITY

External: Something about the way the study is conducted makes it impossible to generalize the findings beyond this particular study. Can findings of effectiveness be transferred to other settings and other circumstances?

Internal: The study failed to establish credible evidence that the intervention (e.g., services, policy change) affected the outcomes in a demonstrable and causal way. Can we really say that A caused B?

From "Quasi-Experimental Evaluation." Evaluation and Data Development, Strategic Policy, Human Resources Development Canada. January 1998, page 5 (SP-AH053E-01-98).

THREATS TO INTERNAL VALIDITY (did the intervention make a real difference in this study?)

Threats generated by evaluators

Testing: Effects of taking a pretest on subsequent post-tests. People might do better on the second test simply because they have already taken it. Also, taking a pretest may sensitize participants to a program. Participants may perform better simply because they know they are being tested — the "Hawthorne effect."

Instrumentation: Changes in the observers, scorers, or the measuring instrument used from one time to the next.

From "Quasi-Experimental Evaluation." Evaluation and Data Development, Strategic Policy, Human Resources Development Canada. January 1998, page 5 (SP-AH053E-01-98).

THREATS TO INTERNAL VALIDITY (did the intervention make a real difference in this study?)

Changes in the environment or in participants

History: Changes in the environment that occur at the same time as the program and that change the behavior of participants (e.g., a recession might make a good program look bad).

Maturation: Changes within individuals participating in the program resulting from natural biological or psychological development.

From "Quasi-Experimental Evaluation." Evaluation and Data Development, Strategic Policy, Human Resources Development Canada. January 1998, page 5 (SP-AH053E-01-98).

THREATS TO INTERNAL VALIDITY (did the intervention make a real difference in this study?)

Participants not representative of the population

Selection: Assignment to participant or non-participant groups yields groups with different characteristics. Pre-program differences may be confused with program effects.

Attrition: Participants drop out of the program. Drop-outs may be different from those who stay.

Statistical Regression: The tendency for those scoring extremely high or low on a selection measure to be less extreme on the next test. For example, if only those who scored worst on a reading test are included in the literacy program, they are likely to do better on the next test regardless of the program, simply because the odds of doing as poorly next time are low.

From "Quasi-Experimental Evaluation." Evaluation and Data Development, Strategic Policy, Human Resources Development Canada. January 1998, page 5 (SP-AH053E-01-98).
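
The regression-to-the-mean point is easy to demonstrate with simulated test scores (everything below is invented; there is no real program or test, and no treatment effect is built in):

```python
# Minimal regression-to-the-mean sketch: 1,000 simulated students take two
# noisy reading tests with NO program in between. The 100 students who scored
# worst on the pretest still improve, on average, at the second test.
import random
import statistics

random.seed(7)
ability  = [random.gauss(100, 10) for _ in range(1000)]  # true, stable skill
pretest  = [a + random.gauss(0, 10) for a in ability]    # noisy measurement 1
posttest = [a + random.gauss(0, 10) for a in ability]    # noisy measurement 2

# "Enroll" only the 100 lowest pretest scorers, as in the slide's example.
worst = sorted(range(1000), key=lambda i: pretest[i])[:100]
pre_mean = statistics.mean(pretest[i] for i in worst)
post_mean = statistics.mean(posttest[i] for i in worst)
print(f"pretest mean of worst scorers:  {pre_mean:.1f}")
print(f"posttest mean of same students: {post_mean:.1f}  (higher, with no program)")
```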

INTERPRETING EFFECTS

Two important concepts:

- Statistical Significance: How confident can we be that differences in outcome are really there and not just due to dumb luck?
- Effect Size: How meaningful are the differences in outcome? Differences can be statistically significant, but trivial in terms of their application and benefit in the real world.
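
A short simulation (with made-up scores) shows how a difference can clear the bar for statistical significance while remaining small in practical terms. The p-value here is estimated with a simple permutation test, and the effect size is the mean difference divided by the pooled standard deviation (roughly Cohen's d); neither is prescribed by the slides:

```python
# Minimal sketch: statistically significant but small. Scores are simulated
# with a tiny true benefit for the treatment group.
import random
import statistics

random.seed(3)
treatment = [random.gauss(50.0, 10.0) + 2.0 for _ in range(500)]
control   = [random.gauss(50.0, 10.0) for _ in range(500)]

observed = statistics.fmean(treatment) - statistics.fmean(control)

# Permutation test: shuffle group labels and see how often chance alone
# produces a difference at least as large as the one observed.
pooled = treatment + control
extreme = 0
for _ in range(2000):
    random.shuffle(pooled)
    diff = statistics.fmean(pooled[:500]) - statistics.fmean(pooled[500:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / 2000
effect_size = observed / statistics.stdev(treatment + control)
print(f"difference = {observed:.2f}, p ~ {p_value:.3f}, effect size ~ {effect_size:.2f}")
```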

[Charts: Percent Change in Recidivism, horizontal axis from -20% to +20%, shown across three slides]

MUCH OF OUR REASONING COMES FROM KNOWLEDGE OF DISTRIBUTIONS

ANOTHER WAY TO THINK ABOUT IT…

Evaluators must assess not only outcomes, but whether changing outcomes are attributable to the program or policy.

- Outcome level is the status of an outcome at some point in time (e.g., the amount of smoking among teenagers)
- Outcome change is the difference between outcome levels at different points in time or between groups
- Program effect is the portion of a change in outcome that can be attributed uniquely to a program, as opposed to the influence of other factors

Rossi, P., M. Lipsey and H. Freeman (2004). Evaluation: A Systematic Approach (7th Edition). Sage Publications.
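
With invented smoking rates, the three terms can be written out as a small calculation. The comparison-site adjustment below (a difference-in-differences step) is one illustrative way to isolate the program effect, not a method taken from the slides:

```python
# Minimal sketch with made-up numbers: outcome level vs. outcome change vs.
# program effect, using untreated comparison sites to net out other influences.
program_before, program_after = 0.30, 0.22        # teen smoking rate, program sites
comparison_before, comparison_after = 0.31, 0.27  # teen smoking rate, comparison sites

outcome_level  = program_after                           # status at one point in time
outcome_change = program_after - program_before          # change over time (-0.08)
other_factors  = comparison_after - comparison_before    # change without the program (-0.04)
program_effect = outcome_change - other_factors          # portion attributable to program (-0.04)

print(f"outcome level:  {outcome_level:.2f}")
print(f"outcome change: {outcome_change:+.2f}")
print(f"program effect: {program_effect:+.2f}")
```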

CONTACT INFORMATION

Jeffrey A. Butts, Ph.D.
Director, Research & Evaluation Center
John Jay College of Criminal Justice
City University of New York