1 Presented by: Tom Chapel Focus On…Thinking About Design

2 Design Choice
Evaluation design is informed by the evaluation standards: Utility, Feasibility, Propriety, Accuracy.
Utility especially is key: what are the purpose, user, and use of the evaluation?

3 Evaluation Purposes
Accountability
Prove success or failure of a program
Determine potential for program implementation
Proof of causation or causal attribution
Ask yourself: Is proof a primary purpose of this evaluation? With what level of rigor do I need to prove causation or causal attribution?

4 What Do We Mean By An Experimental Model? Requirements:
1. Experimental and control conditions: There must be at least two groups: one that gets the program of interest and one that gets some other program.
2. Single experimental condition: The experimental group gets the activity or program; the other (comparison) group is only observed.
3. Random assignment to conditions: Participants are just as likely to be assigned to the experimental condition as to the control condition.
4. Pre- and post-program measurements: At a minimum, measures are taken from people in both conditions before the program begins and after it is over.
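
To make the four requirements concrete, here is a minimal sketch in Python. The sample size, outcome scale, and program effect are made-up assumptions for illustration; they are not from the webinar.

```python
# Minimal sketch of the four requirements, with made-up data.
import random

random.seed(1)

participants = list(range(100))
random.shuffle(participants)                      # 3. random assignment
experimental = set(participants[:50])             # 1. at least two conditions:
control = set(participants[50:])                  #    program vs. something else

def outcome(received_program):
    """Hypothetical outcome score; assume the program adds a small benefit."""
    return random.gauss(50, 10) + (5 if received_program else 0)

# 4. pre- and post-program measurements in both conditions
pre = {p: outcome(False) for p in participants}
post = {p: outcome(p in experimental) for p in participants}  # 2. only the
                                                              #    experimental
                                                              #    group gets it

def mean_change(group):
    return sum(post[p] - pre[p] for p in group) / len(group)

# Estimated program effect: difference in average pre-to-post change.
print("Estimated effect:", round(mean_change(experimental) - mean_change(control), 1))
```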

9 Proving Causation: Continuum of Evaluation Designs (Strongest to Weakest)
Experimental design: Subjects are randomly assigned to experimental or control groups.
Quasi-experimental design: The experimental group is compared to another, similar group called the comparison group.
Non-experimental design: Only one group is evaluated.

10 What Do You Lose as You Move Away from the Experimental Model?
If you omit randomization, you may introduce selection bias: subjects may have something in common or may even self-select.
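
A small simulation can show why this matters. In the sketch below (Python, with a made-up "motivation" variable and illustrative effect sizes), the program has no true effect, yet because more motivated people self-select into it, a naive comparison makes the program look effective.

```python
# Sketch of selection bias under self-selection; all numbers are illustrative.
import random

random.seed(2)

people = [{"motivation": random.random()} for _ in range(1000)]

for p in people:
    # Self-selection: more motivated people are more likely to join the program.
    p["joined"] = random.random() < p["motivation"]
    # Improvement depends on motivation only; the true program effect is zero.
    p["improvement"] = 10 * p["motivation"] + random.gauss(0, 1)

joined = [p["improvement"] for p in people if p["joined"]]
stayed_out = [p["improvement"] for p in people if not p["joined"]]

naive_effect = sum(joined) / len(joined) - sum(stayed_out) / len(stayed_out)
print("Apparent program effect:", round(naive_effect, 2))  # well above zero
```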

11 What Do You Lose as You Move Away from the Experimental Model?
If you omit the control group, you may introduce confounders and secular factors. A comparison group can help avoid this.

12 Experimental Model as Gold Standard
Sometimes an experimental model is fool's gold:
Internal validity vs. external validity (i.e., generalizability)
Community interventions: sometimes right but hard to implement; sometimes easy to implement but wrong

14 Beyond the Scientific Research Paradigm
…the use of randomized control trials to evaluate health promotion initiatives is, in most cases, inappropriate, misleading, and unnecessarily expensive…
WHO European Working Group on Health Promotion Evaluation

15 Beyond the Scientific Research Paradigm
…requiring evidence from randomized studies as sole proof of effectiveness will likely exclude many potentially effective and worthwhile practices…
GAO, November 2009

16 Or This…
Parachutes reduce the risk of injury after gravitational challenge, but their effectiveness has not been proved with randomized controlled trials.
Smith GCS, Pell JP. BMJ, Vol. 327, December 2003.

17 Other Ways to Justify…
Other ways to justify that our intervention is having an effect:
Proximity in time
Accounting for or eliminating alternative explanations
Similar effects observed in similar contexts
Plausible mechanisms / program theory

19 Program Theory
If A → B, and B → C, and C → D, then you can say that A is making a contribution to D.

20 Program Theory: Am I Making a Contribution?
If your training → changing provider attitudes, and changing provider attitudes → changing standards of practice, and changing standards of practice → policy improvements,
then you can say that your training is making a contribution to policy improvements.
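
One rough way to picture this reasoning is to treat the program theory as a chain of links and check the evidence for each link, as in the Python sketch below. The step names and evidence flags are hypothetical placeholders, not findings from the webinar.

```python
# Sketch: a contribution claim holds only if every link in the chain is supported.
logic_chain = [
    ("training delivered", "provider attitudes changed"),
    ("provider attitudes changed", "standards of practice changed"),
    ("standards of practice changed", "policy improvements"),
]

# Hypothetical evidence for each individual link (e.g., surveys, practice records).
evidence = {link: True for link in logic_chain}

if all(evidence[link] for link in logic_chain):
    print("The training is making a contribution to policy improvements.")
else:
    print("A link in the chain is unsupported; the contribution claim fails.")
```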

21 In Short
The right design choice depends: there is no one right design.
Purpose, user, and use are key; other standards play a role.
In some cases, an experimental design is not feasible or not accurate.

22 Remember…
Cause or causal attribution is not always the purpose of our evaluations.
Sometimes experimental design is the best method.
Sometimes experimental design, while desirable, is not feasible.
Sometimes experimental design can lead us in the wrong direction.

23 End: Thinking About Design
Webinar 4: Gathering Data, Developing Conclusions, and Putting Your Findings to Use
Return to Evaluation Webinars home page

