EVAL 6000: Foundations of Evaluation Dr. Chris L. S. Coryn Kristin A. Hobson Fall 2011.

1 EVAL 6000: Foundations of Evaluation Dr. Chris L. S. Coryn Kristin A. Hobson Fall 2011

2 Agenda
Stage One theories – Donald T. Campbell
Questions and discussion
Encyclopedia of Evaluation entries

3 “We would improve program evaluation if we were alert to opportunities to move closer to the experimental model” — Donald T. Campbell

4 Biographical Sketch
Born in 1916, died in 1996
Ph.D. in Psychology, University of California, Berkeley
Author of more than 235 publications
Recipient of numerous honorary degrees, awards, and prizes
Intellectual work included psychological theory, methods, sociology of science, and epistemology

5 Campbell’s View of Evaluation
Evaluation should be part of a rational society in which decisions depend on the results of rigorous tests of bold attempts to alleviate social problems
Evaluators should play a servant-methodologist role rather than an advisory role, commensurate with democratic values

6 Campbell’s Influence
Lionized as the father of scientific evaluation
Developed and legitimated scientific methods of evaluation
The utopian view of an ‘experimenting society’

7 Campbell’s Major Contributions
Evolutionary epistemology
Validity theory and threats to validity
Experimental and quasi-experimental methods
Open, mutually reinforcing but critical commentary on knowledge claims (a disputatious community of truth seekers)

8 Randomized Experiments
Provide the ‘best’ scientific evidence of cause-and-effect relationships
Premised on the expected equivalence of units achieved by randomly assigning units to two or more conditions (a minimal sketch of the assignment step follows below)
Priority is to reduce internal validity threats
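A minimal sketch of the random-assignment step in Python (the function name and unit labels are assumptions for illustration, not part of the slides): shuffling units and dealing them out across conditions is what yields groups that are equivalent in expectation.

```python
# Illustrative sketch only: random assignment of units to conditions.
# In expectation, the resulting groups are equivalent on all measured and
# unmeasured characteristics, which underpins the design's internal validity.
import random

def randomly_assign(units, conditions=("treatment", "control"), seed=None):
    """Shuffle the units and deal them out evenly across the conditions."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    return {unit: conditions[i % len(conditions)]
            for i, unit in enumerate(shuffled)}

if __name__ == "__main__":
    participants = [f"unit_{i}" for i in range(20)]
    print(randomly_assign(participants, seed=42))
```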

9 Validity
The approximate truthfulness or correctness of an inference or conclusion
– Supported by relevant evidence as being true or correct
– Such evidence comes from both empirical findings and the consistency of those findings with other sources of knowledge
– Is a human judgment and fallible
– Not an either-or claim but one of degree

10 Major Types of Validity
Internal validity: The validity of inferences about whether the relationship between two variables is causal
Construct validity: The degree to which inferences are warranted from the observed persons, settings, treatments, and cause-effect operations sampled within a study to the constructs that these samples represent
External validity: The validity of inferences about whether a causal relationship holds over variations in persons, settings, treatment variables, and measurement variables
Statistical conclusion validity: The validity of inferences about the covariation between two variables

11 Threats to Internal Validity
Ambiguous temporal precedence: Lack of clarity about which variable occurred first may yield confusion about which variable is the cause and which is the effect
Selection: Systematic differences over conditions in respondent characteristics that could also cause the observed effect
History: Events occurring concurrently with treatment that could cause the observed effect
Maturation: Naturally occurring changes over time that could be confused with a treatment effect

12 Threats to Internal Validity
Regression: When units are selected for their extreme scores, they will often have less extreme scores on other variables, an occurrence that can be confused with a treatment effect (a simulation sketch follows this list)
Attrition: Loss of respondents to treatment or to measurement can produce artifactual effects if that loss is systematically correlated with conditions
Testing: Exposure to a test can affect scores on subsequent exposures to that test, an occurrence that can be confused with a treatment effect
Instrumentation: The nature of a measure may change over time or conditions in a way that could be confused with a treatment effect
Additive and interactive threats: The impact of a threat can be additive to that of another threat or may depend on the level of another threat
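A minimal simulation of the regression threat in Python (synthetic data; nothing here comes from the slides): units selected for extremely low pretest scores score closer to the mean at posttest even though no treatment was given, which could be mistaken for a program effect.

```python
# Illustrative simulation only: regression toward the mean with no treatment.
import random

random.seed(1)
n = 10_000
true_score = [random.gauss(0, 1) for _ in range(n)]
# Two noisy measurements of the same underlying score; no intervention occurs.
pretest = [t + random.gauss(0, 1) for t in true_score]
posttest = [t + random.gauss(0, 1) for t in true_score]

# Select the 5% of units with the lowest pretest scores.
selected = sorted(range(n), key=lambda i: pretest[i])[: n // 20]

mean_pre = sum(pretest[i] for i in selected) / len(selected)
mean_post = sum(posttest[i] for i in selected) / len(selected)
print(f"Selected group pretest mean:  {mean_pre:.2f}")
print(f"Selected group posttest mean: {mean_post:.2f} (closer to the overall mean of 0)")
```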

13 Flow of units through a typical randomized experiment

14 Basic Design Notation
R: Random assignment
NR: Nonrandom assignment
O: Observation
X: Treatment
X (struck through): Removed treatment
X+: Treatment expected to produce an effect in one direction
X-: Conceptually opposite treatment expected to reverse an effect
C: Cutting score
- - -: Non-randomly formed groups
…: Cohort
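To illustrate how the symbols combine into a design diagram (an assumed example, not taken from the slides): a two-group randomized pretest-posttest design is written one group per row, with time running from left to right.

R  O  X  O
R  O     O

Each row is a group formed by random assignment (R); both groups are observed (O) before and after, and only the first group receives the treatment (X).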

15 Campbell’s Theory of Social Programming
Three worlds:
1. The current world: Client needs are not the driving force behind political and administrative behavior
2. The current world as it can be marginally modified: Improvement through demonstrations
3. The utopian world: Critical reality checks and the experimenting society

16 Campbell’s Theory of Knowledge Construction
Grounded in epistemological relativism (knowledge is impossible without active knowers)
Never knowing what is true and imperfectly knowing what is false
Evolutionary theory of knowledge growth
Not all methods yield equally strong inferences

17 Campbell’s Theory of Valuing
Valuing should be left to the political process, not researchers (descriptive valuing)
Evaluators are not the arrogant guardians of truth
Multidimensional measurement that is inclusive of democratic values

18 Campbell’s Theory of Knowledge Use
Use is the concern of the political process, not evaluators
Evaluations are only worth using if they have withstood the most rigorous tests
Most concerned with misuse
– Methodological biases
– Control of content or dissemination

19 Campbell’s Theory of Evaluation Practice
Application of experimental design to answer summative questions
Priority given to internal validity
Theoretical explanation is best left to basic researchers
Evaluation resources should be focused on pilot and demonstration projects

20 Encyclopedia Entries
Bias; Causation; Checklists; Chelimsky, Eleanor; Conflict of Interest; Countenance Model of Evaluation; Critical Theory Evaluation; Effectiveness; Efficiency; Empiricism; Independence; Evaluability Assessment; Evaluation Use; Fournier, Deborah; Positivism; Relativism; Responsive Evaluation; Stake, Robert; Thick Description; Utilization of Evaluation; Weiss, Carol; Wholey, Joseph

