Designing New Programs: Design & Chronological Perspectives (Presentation of Berk & Rossi’s Thinking About Program Evaluation, Sage Publications, 1990)


Research Design & Program Evaluation

Decisions in Creating a Research Design for Program Evaluation Which observation units will be selected? How will measurement be made? How will treatments (interventions) be delivered?

Decision One: Which Observation Units will be Selected? Random selection of a sample Non-random selection of a sample Selecting the entire population as one large sample.
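
The three options can be illustrated with a short sketch; the site names and sample size below are hypothetical, not from Berk & Rossi:

```python
import random

random.seed(0)
units = [f"site-{i:02d}" for i in range(1, 21)]  # hypothetical observation units

# Random selection: every unit has an equal chance of inclusion.
random_sample = random.sample(units, k=5)

# Non-random (e.g. convenience) selection: the first five sites enrolled.
convenience_sample = units[:5]

# Census: the entire population treated as one large "sample".
census = list(units)

print(len(random_sample), len(convenience_sample), len(census))
```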

Selecting Units: Repeatedly Drawing Samples from a Population Suppose many different samples of the same size are obtained by repeatedly sampling from a population. For each sample a mean is calculated, and a histogram of these mean values is drawn.

Selecting Units: Repeatedly Drawing Samples from a Population The shape of the histogram depends on the sample size n and approximates the sampling distribution of the mean.
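
The repeated-sampling procedure above can be sketched in a short simulation; the population of 10,000 uniform "program scores" is entirely hypothetical:

```python
import random
import statistics

random.seed(42)
# Hypothetical population: 10,000 program outcome scores on a 0-100 scale.
population = [random.uniform(0, 100) for _ in range(10_000)]

def sample_means(pop, n, draws=1_000):
    """Repeatedly draw samples of size n and record each sample's mean."""
    return [statistics.mean(random.sample(pop, n)) for _ in range(draws)]

means = sample_means(population, n=30)

# A histogram of `means` approximates the sampling distribution of the mean;
# the values cluster around the population mean (about 50 here).
print(round(statistics.mean(means), 1))
```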

Sampling Distribution

Central Limit Theorem Even if the distribution of the X values is not Normal, as n increases the distribution of the sample mean X̄ becomes more like the Normal distribution - this is the Central Limit Theorem
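
A quick way to see the theorem at work: start from a heavily skewed population and check that the sample means are far more symmetric. This is a sketch with a hypothetical exponential population; skewness is computed by hand since the standard library has no built-in for it:

```python
import random
import statistics

random.seed(7)

def skewness(xs):
    """Sample skewness: third standardized moment."""
    m = statistics.mean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

# An exponential population is far from Normal (theoretical skewness = 2).
population = [random.expovariate(1.0) for _ in range(5_000)]

# Means of samples of size 50 should be much closer to symmetric
# (theoretical skewness shrinks like 2 / sqrt(n), about 0.28 here).
means = [statistics.mean(random.sample(population, 50)) for _ in range(2_000)]

print(round(skewness(population), 1), round(abs(skewness(means)), 2))
```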

Properties of the Sampling Distribution As n increases, the distribution of the sample means becomes narrower - that is, they cluster more tightly around µ. In fact the variance of the sample mean is inversely proportional to n: Var(X̄) = σ²/n
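
The inverse-proportionality claim can be checked directly: quadrupling the sample size should roughly quarter the variance of the sample means. A minimal sketch with a hypothetical Normal population:

```python
import random
import statistics

random.seed(1)
# Hypothetical population of scores with mean 50 and standard deviation 10.
population = [random.gauss(50, 10) for _ in range(20_000)]

def var_of_means(n, draws=2_000):
    """Variance of the sample mean, estimated by repeated sampling."""
    means = [statistics.mean(random.sample(population, n)) for _ in range(draws)]
    return statistics.variance(means)

v10, v40 = var_of_means(10), var_of_means(40)

# With n quadrupled from 10 to 40, the ratio of variances should be near 4.
print(round(v10 / v40, 1))
```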

Sampling Distribution: Standard Error The standard error of the mean is the standard deviation of the sampling distribution of the mean: SE = σ/√n
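
In practice σ is unknown and the sample standard deviation s is substituted, giving SE = s/√n. A minimal sketch; the ten program scores are hypothetical:

```python
import math
import statistics

# A single hypothetical sample of ten program outcome scores.
sample = [62, 71, 58, 66, 74, 69, 60, 65, 70, 63]

n = len(sample)
s = statistics.stdev(sample)       # sample standard deviation (n - 1 denominator)
standard_error = s / math.sqrt(n)  # SE of the mean = s / sqrt(n)

print(round(standard_error, 2))  # → 1.63
```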

Decision Two: How Will Measurement be Undertaken? Identify the unit of data: e.g. arrest report, meal served, transportation miles, hours of service. Determine what type of data you are dealing with: Nominal, Ordinal, Interval, Ratio
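
The four measurement levels can be pinned down with a small mapping; the examples are illustrative, not from Berk & Rossi:

```python
# Illustrative mapping of program data to measurement levels (hypothetical examples).
measurement_levels = {
    "offense category on an arrest report": "nominal",    # named categories, no order
    "client rating of meals (poor/fair/good)": "ordinal", # ordered, spacing undefined
    "time of day service begins": "interval",             # equal spacing, no true zero
    "transportation miles driven": "ratio",               # true zero; ratios meaningful
}

print(sorted(set(measurement_levels.values())))
```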

Decision Three: How Will the Program Intervention be Delivered? Random Assignment Non-Random Assignment

Additional Considerations Relevance: Is the program intervention relevant to the needs of the stakeholders? Implementation: Can the program be effectively implemented?

Chronology of Program Development & Evaluation

Chronological Sequence of Program Evaluation Identification of Policy Issues (Problem) Formulation of Policy Responses Design of Programs Improvement of Programs Assessment of Program Impact Determination of Cost Effectiveness

Principles for Fitting the Evaluation Strategy to the Identified Problem There is no “one best plan”, only a best among alternatives (incremental planning) Methods depend upon the nature of the problem The problem should be broken down into component policy problems, and the evaluation should be linked to each separate policy problem.

Principles for Fitting the Evaluation Strategy to the Identified Problem Evaluation methods are tempered by available resources (time, money, staff, priorities). Evaluation costs should not exceed program costs or value. The evaluation strategy must be congruent with the importance of the problem being addressed.

Contexts for Evaluation

Evaluation Context One: Policy & Program Evaluation Questions are raised about the nature and amount of an identified problem, whether appropriate policy actions are feasible, & whether the options proposed are both appropriate & effective. RULE: Look to the future to inform the present

Evaluation Context Two: Examining Existing Policies & Programs Do extant programs & policies accomplish what they were intended to accomplish? RULE: Review the past to inform the future

Policies Change!

Policies Change Because… times change and new policy issues emerge resources change programs don’t work as expected stakeholders criticize and complain competition emerges politics change

Six Steps in Problem Identification & Resolution

Step One: Defining the Problem Social Problem as Social Construction Need to Define the Problem from Several Perspectives Need to Evaluate Each Problem Definition and Consider the Social and Methodological Implications of Each Alternative.

Step One: Defining the Problem (An Example) Drug Abuse Problem Definitions: Problem of Supply Problem of Use Problem of Poverty Problem of Racism Problem of Lack of Morality Problem of Legal Status of Drug Use

Step Two: Determining the Extent of the Problem Using Available Data to Determine the Extent of the Problem Creating New Data (preliminary assessment) to Determine How Widespread the Problem Is.

Step Two: Determining the Extent of the Problem Quality of Data: Some sources are more valid and reliable than others, & some studies are methodologically sounder than others Conflicting Data: Often data from various sources on the same subject will contradict one another (hint: look for points of agreement)

Step Two: Determining the Extent of the Problem Creating New Data: Methods of Needs Assessment Expert Informants Surveys Qualitative Approaches (e.g. participant observer)

Step Three: Determining Whether the Problem Can be Ameliorated Defining Remedies Basing Remedies Upon a Clear Problem Definition Basing Remedies in Theory Basing Remedies Upon the Current Realities of the Organizational, Social & Political Environment

Step Four: Translating Promising Ideas Into Promising Programs Programs are Essentially Activities Undertaken by Individuals & Organizations Understanding Stakeholders, Including their Interests, Actions & Activities is Essential to Effective Programming Translating Information Regarding Interests & Activities to Programming is as Much Art as Science

Step Four: Translating Promising Ideas Into Promising Programs Use of Pilot Studies: Some Fall Short of Full Scientific Rigor Alternatively, Some Meet Full Scientific Rigor Pre-testing in Pilot Studies is Essential Generalization to Non-Test Subjects Can be Problematic

Step Five: Can YOAA Do It? (Implementation) YOAA = Your Ordinary American Agency Need Details on How Programs are Implemented in Agencies Again, Pilot Programs or Demonstration Projects will Help Evaluators Understand How Agencies & Organizations Implement Your Particular Program

Step Six: Assuring Program Effectiveness Program Effectiveness is Difficult to Determine: Typically it is Difficult or Impossible to Distinguish Program-Related Outcomes from Other Causative Factors It is also Difficult to Distinguish Program-Related Outcomes from Chance

Step Six: Assuring Program Effectiveness Determining Program Effectiveness: Take Random Samples of Program Outcomes Look at the Relative Effectiveness of Several Interventions Ask Whether Opportunity Costs/Benefits are Greater than Program Outcome Costs/Benefits
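
Comparing the relative effectiveness of several interventions often reduces to a cost-effectiveness ratio: dollars spent per unit of outcome gained. A minimal sketch with entirely hypothetical intervention names, costs, and outcome gains:

```python
# Hypothetical interventions: (cost per participant, outcome gain per participant).
interventions = {
    "job training":    (1200, 0.30),
    "tutoring":        (400,  0.12),
    "case management": (900,  0.25),
}

# Cost-effectiveness ratio: cost per unit of outcome gained (lower is better).
ratios = {name: cost / gain for name, (cost, gain) in interventions.items()}

best = min(ratios, key=ratios.get)
print(best, round(ratios[best]))  # → tutoring 3333
```

The ranking can reverse if outcome gains are measured on a different scale, which is one reason a clear problem definition (Step One) has to precede the effectiveness comparison.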