Thinking About Program Evaluation HUS 3720 Instructor Terry Wimberley, Ph.D.

Evaluation Evaluation is the systematic assessment of the worth or merit of some object

Evaluation Evaluation is the systematic acquisition and assessment of information to provide useful feedback about some object.

Goals of Evaluation The goal of evaluation is to provide useful feedback to a variety of stakeholders, including sponsors, donors, client groups, administrators, staff, & other relevant constituencies

Goals of Evaluation The major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically driven feedback.

Evaluation Strategies Evaluation strategies are broad, overarching perspectives on evaluation, encompassing the most general groups or “camps” of evaluators (although the best evaluation work borrows from a variety of these “camps”).

Evaluation Strategies Scientific Experimental Models: prioritize impartiality, accuracy, objectivity, and the validity of the information generated.

Evaluation Strategies Scientific Experimental Models include: experimental and quasi-experimental designs; objective-based research (education); econometric models (cost-effectiveness, cost/benefit analysis); and theory-driven evaluation.

Evaluation Strategies Management-Oriented Models: emphasize comprehensiveness in evaluation, placing evaluation within the context of organizational activities. Examples include the Program Evaluation & Review Technique (PERT); the Critical Path Method (CPM); Units, Treatments, Observing Operations, and Settings (UTOS); and Context, Input, Process, Product (CIPP).

Evaluation Strategies Qualitative Models: emphasize the importance of observation, the need to attend to the evaluation context, and the value of human interpretation in the evaluation process.

Evaluation Strategies Qualitative Models include: naturalistic evaluation; critical theory and art criticism; and “grounded theory.”

Evaluation Strategies Participant-Oriented Approaches: emphasize the importance of participation in the evaluation by program stakeholders. Examples include Total Quality Management (TQM), the Learning Organization, and Covey approaches.

Types of Evaluation Formative: Seek to strengthen or improve the object being evaluated. Typically focus upon program delivery, quality, and organizational context (personnel, procedures, etc.)

Types of Evaluation Summative: examine the effects or outcomes of some object or objects; describe what happens subsequent to delivery of the program or technology; assess whether the object can be said to have caused the outcome; determine the overall impact of the causal factor beyond only the immediate target outcomes; and estimate the relative costs associated with the object.

Formative Evaluation Types: evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness

Formative Evaluation Types: needs assessment determines who needs the program, how great the need is, and what might work to meet the need

Formative Evaluation Types: structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes

Formative Evaluation Types: implementation evaluation monitors the fidelity of the program or technology delivery

Formative Evaluation Types: process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures

Summative Evaluation outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes

Summative Evaluation impact evaluation is broader and assesses the overall or net effects (intended or unintended) of the program or technology as a whole

Summative Evaluation cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
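As a minimal illustration (all figures below are hypothetical), cost-effectiveness expresses cost per unit of outcome, while cost-benefit analysis converts outcomes to dollar values so they can be compared with costs:

    # Hypothetical figures for illustration only.
    program_cost = 250_000.00          # total dollars spent on the program
    participants_served = 500          # units of outcome achieved
    monetized_benefit = 400_000.00     # dollar value assigned to the outcomes

    cost_per_outcome = program_cost / participants_served   # cost-effectiveness view
    net_benefit = monetized_benefit - program_cost          # cost-benefit view (net)
    benefit_cost_ratio = monetized_benefit / program_cost

    print(f"Cost per participant served: ${cost_per_outcome:,.2f}")
    print(f"Net benefit: ${net_benefit:,.2f}; benefit-cost ratio: {benefit_cost_ratio:.2f}")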

Summative Evaluation secondary analysis reexamines existing data to address new questions or use methods not previously employed

Summative Evaluation meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question
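Sketched below with hypothetical study results, one common way to combine such estimates is an inverse-variance (fixed-effect) weighted average:

    # Fixed-effect meta-analysis sketch; the effect sizes and standard errors are hypothetical.
    effects = [0.30, 0.45, 0.10]      # per-study effect estimates
    std_errors = [0.10, 0.15, 0.20]   # per-study standard errors

    weights = [1 / se ** 2 for se in std_errors]   # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")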

Formative Evaluation Questions

What is the definition and scope of the problem or issue, or what's the question? Formulating and conceptualizing methods might be used here, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping.

Where is the problem and how big or serious is it? The most common method used here is "needs assessment," which can include analysis of existing data sources as well as sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups.

How should the program or technology be delivered to address the problem? Some of the methods already listed apply here, as do detailing methodologies like simulation techniques, or multivariate methods like multi-attribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling.
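For the project-planning side, a bare-bones critical-path (CPM) calculation can be sketched as follows; the activity names, durations, and dependencies are made up for illustration:

    # Minimal CPM forward pass; activities, durations (weeks), and dependencies are hypothetical.
    durations = {"design": 5, "pilot": 3, "train_staff": 4, "deliver": 10}
    predecessors = {"design": [], "pilot": ["design"], "train_staff": ["design"],
                    "deliver": ["pilot", "train_staff"]}

    earliest_finish = {}
    for task in ["design", "pilot", "train_staff", "deliver"]:   # tasks listed in dependency order
        start = max((earliest_finish[p] for p in predecessors[task]), default=0)
        earliest_finish[task] = start + durations[task]

    print(earliest_finish)   # the final task's earliest finish (19 weeks here) is the project length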

How well is the program or technology delivered? Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here.

Summative Evaluation Questions

What type of evaluation is feasible? Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design.

What was the effectiveness of the program or technology? One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources.
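A bare-bones version of such a comparison, using made-up outcome scores and assuming scipy is available, is a treatment-versus-control difference in means with a significance test. Whether the difference can be attributed to the program still depends on the design (random assignment or a credible quasi-experimental comparison):

    # Hypothetical outcome scores for a treatment group and a comparison group.
    from scipy import stats

    treatment = [72, 78, 85, 90, 69, 81, 77, 88]
    control = [65, 70, 74, 68, 72, 66, 71, 75]

    mean_difference = sum(treatment) / len(treatment) - sum(control) / len(control)
    t_stat, p_value = stats.ttest_ind(treatment, control)   # independent-samples t-test

    print(f"Mean difference: {mean_difference:.1f}, t = {t_stat:.2f}, p = {p_value:.3f}")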

What is the net impact of the program? Econometric methods for assessing cost effectiveness and cost/benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts.

Key Concepts in Evaluation Research Policy Space: Issues and forces which define the range of pertinent dialogue that is possible on any particular “problem” recognized by a significant number of stakeholders.

Key Concepts Stakeholders: Persons, groups, agencies or interest groups that have a vested interest in the resolution of a social problem.

Program Effectiveness: Three Meanings Marginal Effectiveness: Program performance compared to some benchmark. Relative Effectiveness: Effectiveness of the program compared to no intervention or change. Cost Effectiveness: Comparisons made in terms of unit outcome per dollar spent.
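A toy example with hypothetical figures shows how the three senses differ:

    # Hypothetical figures illustrating the three senses of program effectiveness.
    program_outcome = 0.62      # e.g., share of clients employed after the program
    benchmark = 0.55            # agreed performance benchmark
    no_program = 0.40           # expected outcome with no intervention
    cost_per_client = 1_200.00  # dollars spent per client

    marginal = program_outcome - benchmark                   # vs. the benchmark
    relative = program_outcome - no_program                   # vs. no intervention
    outcome_per_1000 = relative / cost_per_client * 1_000     # outcome gained per $1,000 spent

    print(round(marginal, 2), round(relative, 2), round(outcome_per_1000, 3))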

Program Credibility: Validity Validity: The extent to which a variable measures what it is supposed to measure.

Internal Validity Internal Validity: The approximate truth about inferences regarding cause-effect or causal relationships.

External Validity External validity involves generalizing from your study context to other people, places or times, whereas construct validity involves generalizing from your program or measures to the concept of your program or measures.

Construct Validity Construct Validity refers to the degree to which inferences can legitimately be made from the operations in your study to the theoretical constructs on which those operations were conceptually based. Like external validity, construct validity is related to generalizing.

Chance & Construct Validity Statistical Conclusion Validity: asks whether “statistical inference” has been done properly.

Reliability Reliability: Achieving the same outcome when the intervention is performed repeatedly. Predictability!
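One common way to quantify this for a measure, sketched with hypothetical scores and assuming numpy is available, is the test-retest correlation between two administrations of the same instrument:

    # Test-retest reliability sketch; the two sets of scores are hypothetical.
    import numpy as np

    time_1 = np.array([12, 15, 9, 20, 14, 18, 11])
    time_2 = np.array([13, 14, 10, 19, 15, 17, 12])

    reliability = np.corrcoef(time_1, time_2)[0, 1]   # Pearson r between the two administrations
    print(f"Test-retest reliability: {reliability:.2f}")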

Measurement DEFINITION: Measurement consists of rules for assigning numbers to attributes of objects.

In mathematical terms, measurement is a functional mapping from the set of objects to the set of real numbers.
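In notation (a sketch; m stands for the measurement rule and O for the set of objects being measured, both symbols introduced here for illustration):

    m : O \to \mathbb{R}, \qquad o \mapsto m(o)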

Properties of Measurement Magnitude: The property of magnitude exists when an object that has more of the attribute than another object is given a bigger number by the rule system. This relationship must hold for all objects in the "real world".

Properties of Measurement Intervals: The property of intervals is concerned with the relationship of differences between objects. If a measurement system possesses the property of intervals, the unit of measurement means the same thing throughout the scale of numbers.

Properties of Measurement Rational Zero: A measurement system possesses a rational zero if an object with none of the attribute is assigned the number zero by the system of rules. The object does not need to actually exist in the "real world"; it is, after all, difficult to visualize a "man with no height."

Property of Rational Zero The property of rational zero is necessary for ratios between numbers to be meaningful. Only in a measurement system with a rational zero would it make sense to argue that a person with a score of 30 has twice as much of the attribute as a person with a score of 15. In many applications of statistics this property is not necessary to make meaningful inferences.

Data Types Nominal Scales: Nominal scales are measurement systems that possess none of the three properties discussed earlier. Nominal scales are subdivided into two groups: Renaming & Categorical

Data Types Nominal-Renaming occurs when each object in the set is assigned a different number, that is, renamed with a number. Examples of nominal-renaming are Social Security numbers or the numbers on baseball players' uniforms.

Data Types Nominal-categorical occurs when objects are grouped into subgroups and each object within a subgroup is given the same number. The subgroups must be mutually exclusive, that is, an object may not belong to more than one category or subgroup. An example of nominal-categorical measurement is grouping people into categories based upon stated political party preference (Republican, Democrat, or Other) or upon sex (Male or Female).

Data Types Ordinal Scales: Ordinal Scales are measurement systems that possess the property of magnitude, but not the property of intervals. The property of rational zero is not important if the property of intervals is not satisfied. Any time ordering, ranking, or rank ordering is involved, the possibility of an ordinal scale should be examined. As with a nominal scale, computation of most of the statistics is not appropriate when the scale type is ordinal.

Data Types Interval Scales: Interval scales are measurement systems that possess the properties of magnitude and intervals, but not the property of rational zero. It is appropriate to compute most standard statistics when the scale type is interval.

Data Types Ratio Scales: Ratio scales are measurement systems that possess all three properties: magnitude, intervals, & rational zero. The added power of a rational zero allows ratios of numbers to be meaningfully interpreted; e.g., the ratio of John's height to Mary's height is 1.32, a statement that is not meaningful with interval scales.
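The four scale types can be summarized in a short sketch of which comparisons each one supports (the example variables are illustrative):

    # Sketch of the four scale types and the comparisons each supports.
    scales = {
        "nominal":  {"example": "political party",   "order": False, "differences": False, "ratios": False},
        "ordinal":  {"example": "satisfaction rank", "order": True,  "differences": False, "ratios": False},
        "interval": {"example": "temperature (F)",   "order": True,  "differences": True,  "ratios": False},
        "ratio":    {"example": "height (cm)",       "order": True,  "differences": True,  "ratios": True},
    }

    for name, props in scales.items():
        print(f"{name:8s} e.g. {props['example']:17s} order={props['order']} "
              f"differences={props['differences']} ratios={props['ratios']}")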