Focus On… Thinking About Design
Presented by: Tom Chapel

Design Choice
Evaluation design is informed by the evaluation standards:
- Utility
- Feasibility
- Propriety
- Accuracy
Utility especially is key: what is the purpose/user/use of the evaluation?

Evaluation Purposes
- Accountability
- Prove success or failure of a program
- Determine potential for program implementation
- Prove causation or causal attribution
Ask yourself: Is proof a primary purpose of this evaluation? With what level of rigor do I need to prove causation or causal attribution?

What Do We Mean By an Experimental Model?
Requirements:
1. Experimental and control conditions. There must be at least two groups: one that gets the program of interest, and one that gets some other program.
2. Single experimental condition. The experimental group gets the activity or program; the other (control) group is only observed.
3. Random assignment to conditions. Participants are just as likely to be assigned to the experimental condition as to the control condition.
4. Pre- and post-program measurements. At a minimum, measures are taken from people in both conditions before the program begins and after it is over.
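To make the four requirements concrete, here is a minimal sketch of how they might look in code. It is an illustration only: the participant list, group sizes, and the measure() stand-in are all hypothetical.

# Minimal sketch of the four requirements of an experimental model.
import random

random.seed(0)
participants = [f"person_{i:03d}" for i in range(200)]  # hypothetical roster

# Requirement 3: random assignment to conditions.
random.shuffle(participants)

# Requirement 1: two conditions -- experimental and control.
mid = len(participants) // 2
experimental, control = participants[:mid], participants[mid:]

def measure(group):
    # Stand-in for a real pre/post instrument (hypothetical scores).
    return {p: random.gauss(50, 10) for p in group}

# Requirement 4: pre- and post-program measurements in BOTH conditions.
pre = {"experimental": measure(experimental), "control": measure(control)}
# ... program delivered to the experimental group only (requirement 2) ...
post = {"experimental": measure(experimental), "control": measure(control)}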

Proving Causation: A Continuum of Evaluation Designs
From strongest to weakest:
- Experimental design: subjects are randomly assigned to experimental or control groups.
- Quasi-experimental design: the experimental group is compared to another, similar group called the comparison group.
- Non-experimental design: only one group is evaluated.

What Do You Lose as You Move Away from the Experimental Model?
If you omit randomization… you may introduce selection bias: subjects may have something in common, or may even self-select into the program.
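A small simulation can make the selection-bias problem visible. In the sketch below, every number is invented: a "motivation" trait drives both who opts in and the outcome itself, so the self-selected comparison overstates the program's effect, while random assignment recovers it.

import random

random.seed(1)
TRUE_EFFECT = 2.0
people = [random.gauss(0, 1) for _ in range(10000)]  # each value = one person's motivation

def outcome(motivation, treated):
    # Motivation affects the outcome directly, independent of the program.
    return 10 + 3 * motivation + (TRUE_EFFECT if treated else 0) + random.gauss(0, 1)

# Self-selection: the motivated opt in, so the groups differ at baseline.
opt_in = [outcome(m, True) for m in people if m > 0]
opt_out = [outcome(m, False) for m in people if m <= 0]
naive = sum(opt_in) / len(opt_in) - sum(opt_out) / len(opt_out)

# Random assignment: a coin flip breaks the motivation/treatment link.
flips = [(m, random.random() < 0.5) for m in people]
treat = [outcome(m, True) for m, z in flips if z]
ctrl = [outcome(m, False) for m, z in flips if not z]
randomized = sum(treat) / len(treat) - sum(ctrl) / len(ctrl)

print(f"true: {TRUE_EFFECT:.1f}  self-selected: {naive:.1f}  randomized: {randomized:.1f}")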

What Do You Lose as You Move Away from the Experimental Model?
If you omit the control group… you may introduce confounders and secular trends. A comparison group can help rule these out.
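One common way a comparison group nets out a secular trend is a simple pre/post difference in differences. The sketch below walks the arithmetic with invented scores.

# All scores invented: a secular trend (everyone improving over time)
# masquerades as a program effect unless a comparison group nets it out.
program_pre, program_post = 40.0, 52.0        # group that got the program
comparison_pre, comparison_post = 41.0, 49.0  # similar group, no program

naive_effect = program_post - program_pre          # 12.0, trend included
secular_trend = comparison_post - comparison_pre   # 8.0, trend alone
net_effect = naive_effect - secular_trend          # 4.0, the program's share

print(f"naive: {naive_effect}  trend: {secular_trend}  net: {net_effect}")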

Experimental Model as Gold Standard
Sometimes an experimental model is fool's gold:
- Internal validity vs. external validity (i.e., generalizability)
- Community interventions
- Sometimes right, but hard to implement
- Sometimes easy to implement, but wrong

Beyond the Scientific Research Paradigm
"…the use of randomized control trials to evaluate health promotion initiatives is, in most cases, inappropriate, misleading, and unnecessarily expensive…"
WHO European Working Group on Health Promotion Evaluation

Beyond the Scientific Research Paradigm
"…requiring evidence from randomized studies as sole proof of effectiveness will likely exclude many potentially effective and worthwhile practices…"
GAO, November 2009

Or This…
"Parachutes reduce the risk of injury after gravitational challenge, but their effectiveness has not been proved with randomized controlled trials."
Smith GCS, Pell JP. BMJ, Vol. 327, December 2003.

Other Ways to Justify…
Other ways to justify that our intervention is having an effect:
- Proximity in time
- Accounting for/eliminating alternative explanations
- Similar effects observed in similar contexts
- Plausible mechanisms/program theory

Program Theory
If A → B, and B → C, and C → D,
Then… you can say that A is making a contribution to D.

Program Theory: Am I Making a Contribution?
If…
- your training → changing provider attitudes, and
- changing provider attitudes → changing standards of practice, and
- changing standards of practice → policy improvements,
Then… you can say that your training is making a contribution to policy improvements.
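One way to read this slide: the contribution claim holds only if every link in the chain holds. The short sketch below encodes that logic; the link labels and their truth values are hypothetical placeholders for evidence an evaluator would actually gather.

# Contribution argument as a chain of links (labels hypothetical).
chain = [
    ("training delivered", "provider attitudes changed", True),
    ("provider attitudes changed", "standards of practice changed", True),
    ("standards of practice changed", "policies improved", True),
]

if all(holds for _, _, holds in chain):
    print(f"Contribution claim: '{chain[0][0]}' -> '{chain[-1][1]}'")
else:
    first_break = next((a, b) for a, b, holds in chain if not holds)
    print(f"Chain breaks at: {first_break[0]} -> {first_break[1]}")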

In Short
The right design choice depends…
- There is no one right design.
- Purpose, user, and use are key.
- The other standards play a role, too.
- In some cases, an experimental design is not feasible or not accurate.

Remember…
- Cause or causal attribution is not always the purpose of our evaluations.
- Sometimes an experimental design is the best method.
- Sometimes an experimental design, while desirable, is not feasible.
- Sometimes an experimental design can lead us in the wrong direction.

End of "Thinking About Design"
Next: Webinar 4: Gathering Data, Developing Conclusions, and Putting Your Findings to Use