
1 CRJS 4466 PROGRAM & POLICY EVALUATION, LECTURE #3
- Evaluation projects
- Resume preparation
- Job hunting
- Questions?
- In-class test #1 – next week!

2 16. Targets: be clear as to the appropriate ‘units of analysis’ - beware the ‘ecological fallacy’
- targets are the objects of a program intervention
- targets can be individuals, groups, organizations, political areas, physical units... almost anything!
- operationalization of the target definition is an important step in the program design
- targets can be direct (e.g. students) or indirect (e.g. ‘broken windows’ theory)

3
- the problem of specifying the location and boundaries of program targets (e.g. ‘ADD children’)
- the need for clear operational rules specifying who/what is or is not a target (e.g. ‘sexual offenders’ or ‘contingent workers’)
- target measures must be both inclusive and mutually exclusive

4 17. Key Concepts
- incidence - the number of new cases identified during a specified period in a specified area (e.g. annual incidence of prostate cancer in Canada)
- prevalence - the number of existing cases in a specified area during a specified time (e.g. the number of illiterate people residing in North Bay during 2000)
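One way to see how the two measures differ (a standard epidemiological approximation, not taken from the slides): in a roughly stable population, prevalence reflects both how often new cases arise and how long each case lasts:

    \text{prevalence} \approx \text{incidence rate} \times \text{average duration of the condition}

So a condition with low incidence but long duration (e.g. a chronic illness) can still show high prevalence.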

5
- population at risk - the segment of the population that is subject to developing a given condition - can be defined probabilistically (e.g. health screening programs)
- sensitivity - the likelihood of including the correct targets in the program (‘true positives’)
- specificity - the likelihood of correctly excluding targets who do not have the condition (‘true negatives’ rather than ‘false positives’)
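These two concepts can be written as standard screening formulas (a worked form, not reproduced from the slides), where TP/FP are true/false positives and TN/FN are true/false negatives:

    \text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}

A highly sensitive targeting rule misses few genuine targets; a highly specific one admits few people who do not actually have the condition.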

6
- need - a population of targets who currently manifest the condition that requires attention
- demand - the population of targets who are able or willing to participate in the program
- rates - the proportion of the population manifesting a condition - use of age/sex specific rates
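As an illustration of the rate idea (a standard form, not taken from the slides), a rate expresses cases relative to the population at risk over a defined period, scaled to a convenient base:

    \text{rate} = \frac{\text{number of cases in the period}}{\text{population at risk}} \times 1000 \; (\text{or } 100{,}000)

An age/sex-specific rate simply restricts both the numerator and the denominator to one age/sex group, which makes comparisons across populations with different age structures more meaningful.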

7 18. Program Logic Models as a diagnostic tool:
- what are the general goals of the program?
- what are the graduated steps (objectives) that must be accomplished to reach the goal, and how are these specified?
- what activities are performed as part of the program?
- what is the process through which resources and activities are converted to outcomes?
[Slide diagram: a Program Goal broken down into Objectives 1-3, Activities 1-6, and elements I1 to I12]

8
- program logic models outline the ideal model of the program’s operation – they are like causal models of how certain causal factors ‘X’ (inputs, activities) are presumed to lead to certain causal effects ‘Y’ (outputs)
- the program logic modeling exercise can serve to identify blockages and inefficiencies in program functioning
- the program logic model is a descriptive, diagnostic tool: “if you don’t know how the program operates, how can you tell if you are doing things the right way?”
- it can form the basis for development of a performance measurement system

9 Key concepts in program logic modeling:
- program inputs
- program components (activities)
- implementation objectives
- program outputs
- linking constructs
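To make these pieces concrete, here is a minimal sketch of how the components could be laid out for a hypothetical program; the program, activities and entries are invented for illustration and are not drawn from the McDavid and Hawthorn tables that follow.

    # Hypothetical logic model for an invented youth diversion program,
    # organized by the key concepts listed above.
    logic_model = {
        "inputs": ["program staff", "referral agreement with police", "annual budget"],
        "components": {                      # program components (activities)
            "intake": ["screen referrals", "assess eligibility"],
            "case work": ["develop a case plan", "weekly mentoring sessions"],
        },
        "implementation_objectives": ["enrol 100 eligible youth per year"],
        "outputs": ["number of youth enrolled", "number of sessions delivered"],
        "linking_constructs": ["sustained mentor contact is assumed to reduce reoffending"],
    }

    # Simple completeness check: every component should specify at least one activity.
    for component, activities in logic_model["components"].items():
        assert activities, f"component '{component}' lists no activities"

Laying the model out this way makes blockages easy to spot: a component with no activities, or an output with no linking construct, is a gap in the program theory.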

10 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 2-2: A Framework for Modeling Program Logics

11 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Figure 2-1: Income Self-Sufficiency: Logic Model

12 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Figure 2-2: Logic Model for the Alcohol and Drug Services Program

13 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Figure 2-3: Flow Chart for Fire Codes Inspection Program

14 Program Technologies
- the combination of knowledge, technique and experience an organization has available to accomplish objectives and goals
- what are the practices, the ‘best practices’ in use (technologies) that effect desired changes?
- note: in some areas (e.g. engineering) perfect technologies work perfectly every time; in other areas (e.g. social problems, crime) even perfect technologies will not work every time

15 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 2-3: Program Technologies and the Probability that Outcomes Will Be Achieved

16 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 2-1: Program Logic Model of Laurel House

17 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 2-4: Examples of Factors in the Environments of Programs that Can Offer Opportunities and Constraints to the Success of Programs

18 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Logic Model for Nova Scotia COMPASS Program

19 Research Designs in Evaluation
- use of both quantitative and qualitative research designs
- sometimes, though rarely now, it is possible to use a true experimental design to assess whether a program had a true effect or ‘impact’ in changing behaviour
- more typically, use of quasi-experimental research designs (comparison groups) and correlational (no comparison) designs, coupled with qualitative methods

20
- the strongest methodological approach to assessing the impact of a program is the use of the randomized experimental model:

    Exp:  R  O  X  O
    Con:  R  O     O

  (R = random assignment, O = observation/measurement, X = the program intervention)
- note variations – pre-test/post-test and post-test-only designs, also multifactorial designs
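To show how this design yields an impact estimate (standard pre-test/post-test arithmetic, not reproduced from the slides), let O1 and O2 be the pre- and post-program observations for each group:

    \text{program impact} = \left(O_{2}^{Exp} - O_{1}^{Exp}\right) - \left(O_{2}^{Con} - O_{1}^{Con}\right)

Because assignment is random, the control group’s change estimates what would have happened to the experimental group without the program, so the remaining difference can be attributed to X rather than to selection or history.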

21 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 3-1: Two Experimental Designs

22 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Table 3-2: Research Design for the Elmira Nurse Home Visitation Program

23 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Figure 3-5: The Four Kinds of Validity in Research Designs

24 Copyright Sage Publications, 2006. From Program Evaluation and Performance Measurement: An Introduction to Practice. James C. McDavid and Laura R.L. Hawthorn. Figure 3-9: Implementation and Withdrawal of Neighbourhood Watch and Team Policing

25 the three criteria of causality – and the experimental method:
1. correlation
2. temporal asymmetry
3. non-spuriousness
note the difficulty in demonstrating that a program intervention is the “cause” of a specific outcome:
- the issue of causation versus correlation
- bias in selection of targets
- “history”
- intervention (Hawthorne) effects
- poor measurement

26 Campbell versus Cronbach: perfect versus ‘good enough’ evaluation assessments – and the issue of the validity of the research design in use
gross versus net outcomes:

    Gross outcome = Effects of intervention (net effect) + Effects of other processes (extraneous factors) + Design effects
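Rearranged (my restatement of the identity above, not an additional formula from the slides), the net effect is what remains once the other components are removed:

    \text{net effect of intervention} = \text{gross outcome} - \text{effects of extraneous factors} - \text{design effects}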

27 Establishing validity of a research design:
- statistical conclusion validity
- internal validity of the design; threats include:
  - history
  - maturation
  - testing
  - instrumentation
  - statistical regression
  - selection
  - mortality
  - ambiguous temporal sequence
  - selection-based interactions

28 Quasi-experimental research designs: comparison group design

    Exp-Comp:  O  X  O
    Con-Comp:  O     O

note: no randomization of subjects takes place – comparison groups are constructed by matching

29 Quasi-experimental research designs:
- before-after design:  O  X  O
- single time series design:  O O O O O O  X  O O O O O O
- comparative time series design:
    O O O O O O  X  O O O O O O
    O O O O O O     O O O O O O
- case study design:  X  O

30 Construct validity: the fit between ‘measurement’ and ‘reality’
- the operationalization process as the key link to construct validity
- threats:
  - diffusion of treatments
  - compensatory equalization of treatments
  - compensatory rivalry
  - resentful demoralization
  - Hawthorne effect

31 External validity:
- interaction between causal results and specific groups of participants
- interaction between causal results and treatment variations
- interaction between causal results and outcome variations
- interaction between causal results and the setting
- context-dependent mediation

