
1 Evaluation Research – Dr. Guerette

2 Introduction Evaluation research – the purpose is to evaluate the impact of policies. Evidence-based policy analysis – used to help public officials examine and select from alternative actions.

3 Appropriate Topics Policy analysis and evaluation are used to develop justice policy and determine its impact. Policy analysis helps officials evaluate alternative actions, choose among them, and formulate practices for implementing policy. Program evaluation is conducted later than policy analysis, to determine whether policies are implemented as planned and whether they are achieving their goals.

4 Steps of Evaluation The initial step in evaluation research is to learn the program's goals. Evaluability assessment – a pre-evaluation in which a researcher determines whether the conditions necessary for conducting an evaluation are present.

5 Steps of Evaluation Problem formulation – identify and specify program goals in concrete, measurable form. Measurement – how well the program is meeting its goals. Specifying outcomes – program goals represent desired outcomes, while outcome measures are empirical indicators of whether those desired outcomes are achieved.
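A minimal sketch of what "concrete, measurable form" can look like, assuming a hypothetical goal of reducing recidivism operationalized as re-arrest within 12 months of release; all names and records below are illustrative, not from the slides:

```python
# Hypothetical example: the goal "reduce recidivism" operationalized as an
# outcome measure, defined here as re-arrest within 12 months of release.
from datetime import date
from typing import Optional

def rearrested_within_12_months(release: date, rearrest: Optional[date]) -> bool:
    """Empirical indicator of the desired outcome for one participant."""
    return rearrest is not None and (rearrest - release).days <= 365

# Illustrative participant records: (release date, re-arrest date or None).
records = [
    (date(2020, 1, 15), date(2020, 9, 1)),   # re-arrested within a year
    (date(2020, 3, 10), None),               # no re-arrest observed
    (date(2020, 6, 5), date(2022, 2, 20)),   # re-arrested, but after a year
]

rate = sum(rearrested_within_12_months(rel, rea) for rel, rea in records) / len(records)
print(f"12-month recidivism rate: {rate:.0%}")  # 33%
```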

6 Steps of Evaluation Measuring program contexts – measuring the context within which the program is conducted. Measuring program delivery – measuring both the dependent and independent variables is necessary.

7 Designs for Program Evaluation Randomized evaluation designs – may be limited by legal, ethical, and practical reasons. 1. Program and agency acceptance – it is necessary to explain to the agency why random assignment is vital for this type of research. 2. Minimize exceptions to random assignment – recognize that some exceptions are necessary, but too many exceptions threaten the statistical equivalence of the experimental and control groups.

8 Designs for Program Evaluation 3. Adequate case flow for sample size – the larger the sample, the more accurate the estimates of population characteristics, which reduces threats such as those to statistical conclusion validity. 4. Maintaining treatment integrity – it is important to maintain treatment consistency (homogeneity) because it affects measurement reliability.
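The random-assignment and case-flow points above lend themselves to a short illustration. The sketch below is hypothetical, not a procedure from the slides: it seeds a random generator so the assignment is reproducible and splits an intake of cases evenly into treatment and control groups:

```python
# Hypothetical sketch of random assignment with a seeded generator, so the
# assignment is reproducible and auditable.
import random

def randomly_assign(case_ids, seed=42):
    """Shuffle the intake and split it evenly into treatment and control."""
    rng = random.Random(seed)
    shuffled = list(case_ids)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

cases = range(1, 201)                 # an illustrative intake of 200 cases
treatment, control = randomly_assign(cases)
print(len(treatment), len(control))   # 100 100

# Each manual exception that moves a case between groups erodes the
# statistical equivalence randomization provides, so exceptions should be
# logged and kept rare; a larger case flow also shrinks sampling error.
```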

9 Designs for Program Evaluation Quasi-experimental designs – used when random assignment of subjects to experimental and control groups is not possible. 1. Ex post evaluations – done retrospectively, after an experimental program has gone into effect.

10 Designs for Program Evaluation 2. Full-coverage programs – usually national or statewide in nature, where it is not possible to identify subjects who are not exposed to the intervention or to randomly assign persons to receive or not receive treatment. 3. Larger treatment units – incorporating a great number of people, thus limiting the ability to use random assignment.

11 Designs for Program Evaluation 4. Non-equivalent groups design – treatment and control subjects are not statistically equivalent. 5. Time series designs – multiple measures at different points in time. 6. Other types of designs – often tailored to the practical constraints of the research circumstances.
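The time series design in item 5 can be made concrete with a small example. The sketch below uses hypothetical monthly offense counts and ordinary least squares with a pre/post-intervention dummy, one simple way to estimate a level shift in an interrupted time series:

```python
# Hypothetical interrupted-time-series sketch: regress monthly counts on an
# intercept and a pre/post-intervention dummy to estimate the level shift.
import numpy as np

monthly_counts = np.array(
    [50, 52, 49, 51, 53, 50,    # six months before the intervention
     44, 43, 45, 42, 44, 43],   # six months after
    dtype=float,
)
post = np.array([0] * 6 + [1] * 6, dtype=float)  # intervention dummy

X = np.column_stack([np.ones_like(post), post])  # intercept + dummy
coef, *_ = np.linalg.lstsq(X, monthly_counts, rcond=None)
print(f"pre-intervention mean:  {coef[0]:.1f}")   # ~50.8
print(f"estimated level change: {coef[1]:+.1f}")  # ~-7.3
```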

12 Policy Analysis & Scientific Realism Policy analysis coupled with scientific realism helps public officials use research to select and assess alternative courses of action. Modeling studies – the use of mathematical models to predict future events or numbers; prison population forecasting is an example.
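As one illustration of a modeling study, the sketch below fits a linear trend to hypothetical past prison population counts and extrapolates it forward; actual forecasting models are considerably richer (admissions, releases, and sentence-length flows):

```python
# Hypothetical modeling sketch: fit a linear trend to past prison population
# counts and extrapolate it three years forward. All numbers are invented.
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019, 2020], dtype=float)
population = np.array([14200, 14650, 15040, 15510, 15930, 16400], dtype=float)

t = years - years[0]  # center the time index to keep the fit well conditioned
slope, intercept = np.polyfit(t, population, deg=1)  # least-squares trend

for step, year in enumerate((2021, 2022, 2023), start=len(years)):
    print(f"{year}: forecast {slope * step + intercept:,.0f} inmates")
```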

13 Political Context of Applied Research Evaluation research and policy analysis are conducted in a political arena. Evaluation and stakeholders – recognize that many people have a direct or indirect interest in the program or its evaluation. Politics and objectivity – politics and ideology can color or impact evaluation research.

14 In-Class Exercise
