Presentation on theme: "Unit 8. • Program improvement or appraisal • Assessing the value of a program • Measuring the efficacy of particular components of a program • Meeting." — Presentation transcript:

1 Unit 8

2
• Program improvement or appraisal
• Assessing the value of a program
• Measuring the efficacy of particular components of a program
• Meeting accountability requirements
• Data are used for decisions about whether to maintain a program, advance it, introduce comparable programs elsewhere, allocate resources among rival programs, or accept or reject a program approach or hypothesis.

3
• Head Start or Title I (U.S. Department of Education)
• Math curriculum in one district
• Four Questions of Program Evaluation (Posavac & Carey, 1997)
  ◦ Needs: Is an agency or organization meeting the needs of the people it serves?
  ◦ Process: How is a program being implemented (is it going as planned)?
  ◦ Outcome: Has a program been effective in meeting its stated goals?
  ◦ Efficiency: Is a program cost-efficient relative to alternative programs?

4
• Evaluators use many of the same qualitative and quantitative methodologies used by researchers in other fields.
• The primary purpose of evaluation is to provide information for decision-making about particular programs, not to advance more wide-ranging knowledge or theory.
  ◦ Evaluation is more client-focused than traditional research: evaluators work closely with program staff to create and carry out an evaluation plan that attends to the particular needs of their program.

5
• Evaluation aids a program's development, execution, and improvement by examining its processes and/or results.
• Assessment measures an individual's or a group's performance on a variable of interest (e.g., reading comprehension, math skills, or social skills).

6
• An experiment is any study in which a treatment is introduced.
  ◦ e.g., a new method of teaching or a different behavioral intervention
• A non-experimental study does not introduce a treatment.
  ◦ e.g., comparing opinions from naturally occurring groups

7
• Any study in which a treatment is introduced is an experiment.
• Control: in an experiment, researchers investigate the effect of various factors one at a time.
• An experiment has at least one independent variable and at least one dependent variable.
• A true experiment involves random assignment of participants to treatment groups.

8
• An intervention or treatment is implemented.
• True experiments have a control group.
  ◦ The two groups receive the same treatment except for the independent variable of interest.
• In true experiments, confounding variables are well controlled by the experimenter.
  ◦ Random assignment (a minimal code sketch follows below)
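The following is a minimal sketch, not from the original slides, of how random assignment might be carried out in Python; the participant IDs, group split, and seed are hypothetical.

    import random

    def randomly_assign(participant_ids, seed=None):
        """Randomly split participants into treatment and control groups.

        With enough participants, random assignment tends to balance
        confounding variables (age, prior skill, motivation) across groups.
        """
        rng = random.Random(seed)
        shuffled = participant_ids[:]          # copy so the original list is untouched
        rng.shuffle(shuffled)
        midpoint = len(shuffled) // 2
        return {"treatment": shuffled[:midpoint], "control": shuffled[midpoint:]}

    # Hypothetical example: 8 participants identified by number
    groups = randomly_assign(list(range(1, 9)), seed=42)
    print(groups["treatment"], groups["control"])

Because the assignment depends only on chance, any pre-existing differences between participants are spread across both groups rather than piling up in one of them.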

9
• Experimental group: the group receiving the treatment
• Control group: the group not receiving the treatment
  ◦ Represents the expected results for the experimental group if no treatment were given (see the comparison sketch below)
  ◦ Represents the population before treatment, or in the absence of treatment
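As a rough illustration of the control group's role, the sketch below estimates a treatment effect as the difference between group means; the post-test scores are invented for this example and do not come from any study cited in the slides.

    # Hypothetical post-test scores (e.g., reading comprehension)
    experimental = [78, 85, 80, 90, 74, 88]   # received the new teaching method
    control      = [70, 75, 72, 81, 69, 77]   # received the usual instruction

    mean_exp = sum(experimental) / len(experimental)
    mean_ctl = sum(control) / len(control)

    # The control mean stands in for what the experimental group
    # would have scored without the treatment.
    estimated_effect = mean_exp - mean_ctl
    print(f"Estimated treatment effect: {estimated_effect:.1f} points")

Without the control group there would be no baseline, and any post-test score could reflect history, maturation, or practice rather than the treatment itself.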

10
• Concerns about the usefulness of results
  ◦ School board presidents and government and business leaders are hesitant to allow "poking around."
• Access to participants
  ◦ Wait lists
  ◦ Random assignment

11
• Confounds are potential alternative causes of a research finding.
• Researchers must rule out these alternative explanations.
• Eight categories of confounds ("threats to internal validity"):
  ◦ history
  ◦ maturation
  ◦ testing
  ◦ instrumentation
  ◦ regression (see the simulation sketch after this list)
  ◦ subject attrition (mortality)
  ◦ selection
  ◦ interactions with selection
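Statistical regression (regression to the mean) can be illustrated with a small simulation, sketched below under assumed parameters; the score distribution, sample size, and selection rule are hypothetical and not part of the original slides.

    import random

    random.seed(0)
    N = 1000              # hypothetical number of examinees
    ABILITY_SD = 10       # spread of stable ability around a mean of 100
    NOISE_SD = 10         # measurement error on any single test occasion

    # Each observed score = stable ability + random measurement error
    abilities = [random.gauss(100, ABILITY_SD) for _ in range(N)]
    pretest = [a + random.gauss(0, NOISE_SD) for a in abilities]
    posttest = [a + random.gauss(0, NOISE_SD) for a in abilities]   # no treatment given

    # Select the lowest-scoring 10% on the pretest, as a remedial program might
    cutoff = sorted(pretest)[N // 10]
    selected = [i for i in range(N) if pretest[i] <= cutoff]

    mean_pre = sum(pretest[i] for i in selected) / len(selected)
    mean_post = sum(posttest[i] for i in selected) / len(selected)

    # The selected group scores closer to the mean on the posttest purely by
    # chance, which could be mistaken for a program effect.
    print(f"pretest mean: {mean_pre:.1f}  posttest mean: {mean_post:.1f}")

Participants chosen because of extreme pretest scores tend to score closer to the average on retest even with no intervention at all, which is why regression must be ruled out before crediting a program.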

12
• When there is no comparison group in the study, the following threats to internal validity must be considered:
  ◦ history, maturation, testing, instrumentation, regression, subject mortality, selection
• When a comparison group is added, the following threats to internal validity must be considered:
  ◦ selection, interactions with selection

13
• Because of contamination, expectancy effects, and novelty effects, researchers may have difficulty concluding whether a treatment was effective.

14
• Contamination: occurs when groups of participants communicate with one another about the experiment.
• Three possible outcomes of contamination:
  ◦ Resentment: some participants' performance may worsen because they resent being in a less desirable condition.
  ◦ Rivalry: participants in a less desirable condition may boost their performance so they don't look bad.
  ◦ Diffusion of treatments: control participants learn about the treatment and apply it to themselves.

15
• Expectancy effects: the researcher unintentionally influences the results of an experiment.
  ◦ Researchers can make systematic errors in interpreting participants' performance based on their expectations.
  ◦ Researchers can make errors in recording data based on their expectations for participants' performance.

16
• Novelty effects: changes in people's behavior simply because an innovation (e.g., a treatment) produces excitement, energy, and enthusiasm.
  ◦ Hawthorne effect: performance changes when people know that "significant others" (e.g., researchers, company bosses) are interested in them or care about their living or working conditions.

17
• GOVERNMENT WARNING:
  ◦ (1) According to the Surgeon General, women should not drink alcoholic beverages during pregnancy because of the risk of birth defects.
  ◦ (2) Consumption of alcoholic beverages impairs your ability to drive a car or operate machinery, and may cause health problems.
• The U.S. National Institute on Alcohol Abuse and Alcoholism funded the Alcohol Research Group to conduct a series of cross-sectional surveys in the United States and Ontario, Canada (Greenfield et al., 1999).


