1 Program Evaluation

2 Program evaluation: the application of the methodological techniques of the social sciences to social policy and public welfare administration.

3 Evaluation. Formative – helps form the program; ongoing assessment to improve implementation. Outcome – assesses the program after the fact.

4 Types of evaluation: Needs Assessment, Program Theory Assessment, Process Evaluation, Outcome Evaluation, Efficiency Assessment.

5 Needs assessment: Who needs the program? How great is the need? What might work to meet the need? What resources are available?

6 “Evaluability” assessment: Is an evaluation feasible? How can stakeholders shape its usefulness?

7 Structured conceptualization: define the program or technology, define the target population, define the possible outcomes.

8 Process evaluation investigates the process of delivery and possible alternatives. Summative evaluation – summarizes the effects.

9 Implementation evaluation Monitors the fidelity of delivery

10 Outcome Evaluations Demonstrable effects on defined targets.

11 Impact evaluation: the net effects, intended and unintended, of the program as a whole.

12 Cost-effectiveness / cost-benefit analysis examines efficiency by standardizing outcomes in dollar costs and values.

13 Secondary analysis Examine existing data to address new questions or use different methods.

14 Meta-analysis integrates outcomes across studies to reach a summary judgment.

15 Meta-analysis: the analysis of analyses. Summarizes a body of work. Replication is good, but replications can yield inconsistent results.

16 Useful for: 1) clarifying inconsistencies, 2) program evaluation, 3) review work, 4) broadly framed questions.

17
    replication   treatment   control   diff
    Exp 1            22          19        3
    Exp 2            20          18        2
    Exp 3            23          17        6
    Exp 4            15          16       -1

18 Sampling error in measurement; systematic error. Summaries such as “3 in 4 studies show..” or “Mean difference = 2.5” (averaging out experimental errors….)
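The “Mean difference = 2.5” above is just the unweighted average of the four replication differences. A minimal Python sketch, using the data from the table on slide 17:

```python
# Minimal sketch: averaging out experimental errors across replications.
# Differences (treatment minus control) taken from the slide-17 table.
diffs = [3, 2, 6, -1]  # Exp 1..4

mean_diff = sum(diffs) / len(diffs)
print(f"Mean difference = {mean_diff}")  # 2.5
```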

19
    replication        treatment   control   diff
    Exp 1 (n = 10)        22          19        3
    Exp 2 (n = 10)        20          18        2
    Exp 3 (n = 15)        23          17        6
    Exp 4 (n = 1000)      15          16       -1

20
    replication   treatment   control   diff
    Exp 1            22          19        3    p < 0.05
    Exp 2            20          18        2    p < 0.05
    Exp 3            23          17        6    p < 0.05
    Exp 4            15          16       -1    p < 0.001

21 Pooled data: 35 people in 1000 show…. A large study can overpower the pooled data. Statistics based on large N tend to be more reliable – but only if the study is valid. Meta-analysis tends to decrease random and systematic errors.
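A minimal sketch of the overpowering problem, using the Ns and differences from the slide-19 table. Weighting by sample size is only one illustrative pooling rule, but it shows how Exp 4 (n = 1000) can swamp the other three studies:

```python
# Sketch: sample-size weighting lets one large study dominate the pool.
# Ns and treatment-minus-control differences from the slide-19 table.
ns    = [10, 10, 15, 1000]   # sample sizes of Exp 1..4
diffs = [3, 2, 6, -1]        # per-study differences

unweighted = sum(diffs) / len(diffs)
weighted   = sum(n * d for n, d in zip(ns, diffs)) / sum(ns)

print(f"Unweighted mean difference = {unweighted:+.2f}")  # +2.50
print(f"N-weighted mean difference = {weighted:+.2f}")    # -0.83
```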

22 What if studies are not replications but variations on a theme? E.g., Exp 1 uses a 500-point scale; Exp 2 uses a scale from 1–100.

    experiment   treatment   control   difference
    Exp 1           500         400       100
    Exp 2            24          22         2

    Average difference = 51 ???

23 Average difference = 51? Converting to effect sizes puts the studies on a common metric:

    experiment   treatment   control   difference   effect size d
    Exp 1           500         400       100            0.5
    Exp 2            24          22         2            0.67

    Average d = 0.58
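A short Python sketch of that conversion. The standard deviations are not given on the slide; the values below are hypothetical, chosen so the resulting d's match the 0.5 and 0.67 shown:

```python
# Sketch: converting raw mean differences on different scales into a
# common effect-size metric (Cohen's d = mean difference / SD).
# The SDs are NOT from the slide; they are assumed values that
# reproduce the d's shown there.
studies = [
    # (treatment mean, control mean, assumed pooled SD)
    (500, 400, 200),  # Exp 1 -> d = 100/200 = 0.50
    (24,  22,  3),    # Exp 2 -> d = 2/3    ~ 0.67
]

ds = [(t - c) / sd for t, c, sd in studies]
print([round(d, 2) for d in ds])    # [0.5, 0.67]
print(round(sum(ds) / len(ds), 2))  # 0.58 (average d)
```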

24 What is summarized? 1) Count studies for and against – does not give magnitude and has low power. 2) Combine significance levels. 3) Combine effect sizes (an effect size gives the magnitude of the relationship between two variables). Advantages: a) increases sample size and power; b) increases internal validity – the soundness of conclusions about the relationship; c) increases external validity – generalizability to other places, people, etc.; d) shows an effect, even a small one, if it is consistent.
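As a sketch of option 2, combining significance levels, the snippet below applies Stouffer's Z method. The p-values are hypothetical, and scipy is assumed to be available:

```python
# Sketch: Stouffer's Z method for combining significance levels.
# Convert each study's one-sided p to a z-score, sum the z's, and
# divide by sqrt(k). The p-values below are hypothetical.
from math import sqrt
from scipy.stats import norm

p_values = [0.04, 0.03, 0.01]               # one-sided p's, assumed
z_scores = [norm.isf(p) for p in p_values]  # map each p to upper-tail z

z_combined = sum(z_scores) / sqrt(len(z_scores))
p_combined = norm.sf(z_combined)
print(f"combined z = {z_combined:.2f}, combined p = {p_combined:.4f}")
```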

25 Synthesis gives a better estimate of the effect size. If the effect is real and consistent, it will be detected. BUT the synthesis is limited by the original studies.

26 Steps in meta-analysis: 1) formulate the question, 2) collect previous studies, 3) evaluate and code, 4) analyze and interpret, 5) presentation.

27 Overview: Data Sources, Study Selection, Data Abstraction, Statistical Analysis.

28 Data Sources: 1. computer searches, 2. cross-referencing, 3. hand-searching, 4. expert(s) to review the list.

29 Study Selection: 1. study designs, 2. subjects, 3. publication types, 4. languages, 5. interventions, 6. time frame.

30 Need to establish criteria for inclusion. E.g., if evaluating a reading program for schools, it may be effective only for younger children. Determine an acceptable age cut-off, or run separate analyses for the two groups, or use age as a moderating factor.

31 Data Abstraction: 1. number of items coded, 2. inter-coder bias, 3. items coded.

32 Coding… Are all studies the same? One has N = 10, another has N = 1000…. DV scales differ (a 1–5 scale vs. a 500-point scale). How flawed is acceptable? Do we include a study if we think it has a confound? Publication bias…

33 Statistical Analysis: 1. choice of metric, 2. choice of model / heterogeneity, 3. publication bias, 4. study quality, 5. moderator analysis.

34 Choice of metric: original units, or the standardized mean difference (mean difference / standard deviation). Choice of model / heterogeneity: fixed effects – inference applies only to the current group of studies; random effects – assumes these studies are a random sample from all possible studies.
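A minimal sketch of the fixed-effect vs. random-effects distinction, using inverse-variance weights and the DerSimonian-Laird estimate of the between-study variance tau². All effect sizes and variances below are hypothetical:

```python
# Sketch: fixed-effect vs. random-effects pooling (hypothetical data).
ds = [0.10, 0.90, 0.30, 0.80]   # per-study effect sizes (d)
vs = [0.04, 0.09, 0.02, 0.05]   # per-study sampling variances

# Fixed effect: weight each study by 1/variance; inference is about
# this group of studies only.
w_fe = [1 / v for v in vs]
d_fe = sum(w * d for w, d in zip(w_fe, ds)) / sum(w_fe)

# Random effects: studies are a random sample from all possible
# studies, so the between-study variance tau^2 (DerSimonian-Laird)
# is added to each study's variance before weighting.
q    = sum(w * (d - d_fe) ** 2 for w, d in zip(w_fe, ds))
c    = sum(w_fe) - sum(w ** 2 for w in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - (len(ds) - 1)) / c)
w_re = [1 / (v + tau2) for v in vs]
d_re = sum(w * d for w, d in zip(w_re, ds)) / sum(w_re)

print(f"fixed effect d = {d_fe:.3f}, random effects d = {d_re:.3f}")
```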

35 Publication bias: graphical methods; quantitative methods. Study quality: a) difficult to assess, b) interpret with caution, c) numerous scales and checklists available.
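A sketch of the graphical approach: a funnel plot of effect size against standard error, on simulated data (numpy and matplotlib assumed). An asymmetric funnel, e.g. a missing lower corner where small null or negative studies should sit, is the classic visual sign of publication bias:

```python
# Sketch: funnel plot for a graphical publication-bias check.
# Data are simulated around a true effect of d = 0.5.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
ses = rng.uniform(0.05, 0.4, size=40)  # per-study standard errors
ds  = 0.5 + rng.normal(0, ses)         # observed effects scatter with SE

plt.scatter(ds, ses)
plt.axvline(0.5, linestyle="--")       # pooled (true) effect
plt.gca().invert_yaxis()               # most precise studies at the top
plt.xlabel("Effect size (d)")
plt.ylabel("Standard error")
plt.title("Funnel plot (simulated data)")
plt.show()
```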

36 Moderator analysis: a) categorical analysis, b) regression analysis. Allows effects to be explained.
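A sketch of both moderator-analysis styles on hypothetical data: a categorical comparison of subgroup means, and a precision-weighted meta-regression of effect size on a continuous moderator (here an assumed "age" variable, echoing the reading-program example on slide 30):

```python
# Sketch: categorical and regression moderator analysis (made-up data).
import numpy as np

ds    = np.array([0.2, 0.3, 0.6, 0.7])     # per-study effect sizes
ses   = np.array([0.1, 0.2, 0.1, 0.15])    # per-study standard errors
age   = np.array([6, 8, 14, 16])           # continuous moderator
group = np.array(["child", "child", "teen", "teen"])

# (a) Categorical analysis: mean effect within each subgroup.
for g in np.unique(group):
    print(g, ds[group == g].mean())

# (b) Regression analysis: linear fit of d on age, weighting each
# study by its precision (1/SE).
slope, intercept = np.polyfit(age, ds, deg=1, w=1 / ses)
print(f"d ~ {intercept:.2f} + {slope:.3f} * age")
```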

37 Meta-analysis compared to a narrative review: objective or subjective?

38 The Contingent Smile: A Meta-Analysis of Sex Differences in Smiling. M. LaFrance, M. A. Hecht, & E. Levy Paluck. Psychological Bulletin, 2003, Vol. 129, No. 2, 305–334.

39 Based on 20 published studies, the effect size (d) reported was a moderate 0.63. In a follow-up report, J. A. Hall and Halberstadt (1986) added seven new cases and reported a somewhat lower weighted effect size of 0.42.

40 We included in our meta-analysis unpublished studies, such as conference papers and theses, as well as previously unanalyzed data that were not included in their prior meta-analysis. Second, we explored the influence of several moderators derived from work in other areas of sex-difference research.

41 The third goal of the present meta-analysis was to conduct a more fine-grained analysis of several moderators previously considered by J. A. Hall and Halberstadt (1986).

42 Method. Retrieval of Studies. We searched the empirical literature for studies that documented a quantitative relationship between sex and smiling, even if that relationship was not the central one of the investigation. Along with published articles, unpublished materials such as conference papers, theses, dissertations, and other unpublished papers were included. This was done to counter the publication bias toward positive results.
