
1 Health Program Effect Evaluation Questions and Data Collection Methods CHSC 433 Module 5/Chapter 9 L. Michele Issel, PhD UIC School of Public Health

2 Objectives
1. Develop appropriate effect evaluation questions
2. List pros and cons for various data collection methods
3. Distinguish between types of variables

3 Involve Evaluation Users so they can:
- Judge the utility of the design
- Know the strengths and weaknesses of the evaluation
- Identify differences in criteria for judging evaluation quality
- Learn about methods
- Debate the issues BEFORE the data are in

4 Terminology The following terms are used in reference to basically the same set of activities and for the same purpose:
- Impact evaluation
- Outcome evaluation
- Effectiveness evaluation
- Summative evaluation

5 Differences between Research and Evaluation
- Nature of problem addressed: new knowledge vs. assessing outcomes
- Goal of the research: new knowledge for prediction vs. social accounting
- Guiding theory: theory for hypothesis testing vs. theory for the problem
- Appropriate techniques: sampling, statistics, hypothesis testing, etc. vs. fit with the problem

6 Research-Evaluation Differences

Characteristic | Research | Evaluation
Goal or purpose | Generate new knowledge for prediction | Social accounting and program or policy decision making
The questions | Scientist's own questions | Derived from program goals and impact objectives
Nature of problem addressed | Areas where knowledge is lacking | Assess impacts and outcomes related to the program
Guiding theory | Theory used as base for hypothesis testing | Theory underlying the program interventions; theory of evaluation

7 Research-Evaluation Differences (continued)

Characteristic | Research | Evaluation
Appropriate techniques | Sampling, statistics, hypothesis testing, etc. | Whichever research techniques fit the problem
Setting | Anywhere appropriate to the question | Usually wherever the program recipients and non-recipient controls can be accessed
Dissemination | Scientific journals | Internally and externally viewed program reports; scientific journals
Allegiance | Scientific community | Funding source, policy preference, scientific community

8 Evaluation Questions...
- What questions do the stakeholders want answered by the evaluation?
- Do the questions link to the impact and outcome objectives?
- Do the questions link to the effect theory?

9 From Effect Theory to Effect Evaluation
- Consider the effect theory as a source of variables
- Consider the effect theory as guidance on design
- Consider the effect theory as informing the timing of data collection


11 From Effect Theory to Variables The next slide is an example of using the effect theory components to identify possible variables on which to collect evaluation data.

12 [Diagram: effect theory components mapped to possible evaluation variables]

13 Impact vs Outcome Evaluations
- Impact evaluation is more realistic because it focuses on the immediate effects, and participants are probably more accessible.
- Outcome evaluation is more policy-oriented, longitudinal, and population based, and therefore more difficult and costly. Also, causality (the conceptual hypothesis) is fuzzier.

14 Effect Evaluation Draws upon and uses what is known about how to conduct rigorous research:
- Design: the overall plan, such as experimental, quasi-experimental, longitudinal, qualitative
- Method: how data are collected, such as telephone survey, interview, observation

15 Methods --> Data Sources
- Observation --> logs, video
- Record review --> client records, patient charts
- Survey --> participants/non-participants, family
- Interview --> participants/non-participants
- Existing records --> birth & death certificates, police reports

16 Comparison of Data Collection Methods Characteristics of each method to be considered when choosing a method:
1. Cost
2. Amount of training required for data collectors
3. Completion time
4. Response rate

17 Validity and Reliability
- The method must use valid indicators/measures
- The method must use reliable processes for data collection
- The method must use reliable measures
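One common way to check the reliability of a measure is test-retest reliability: the same participants complete the same instrument twice, and a high correlation between the two administrations suggests the measure yields consistent scores. The sketch below is illustrative only (the score values are made up, not from the slides):

```python
# Hypothetical test-retest reliability check: the same 7 participants
# complete the same scale at two points in time.
time1 = [12, 15, 9, 20, 18, 11, 16]
time2 = [13, 14, 10, 19, 18, 12, 15]

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

r = pearson(time1, time2)
print(round(r, 2))  # a value close to 1.0 suggests good test-retest reliability
```

A conventional rule of thumb treats r above roughly 0.8 as acceptable test-retest reliability, though the threshold depends on the field and the stakes of the decision.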

18 Variables, Indicators, Measures
- Variable is the "thing" of interest
- Indicator is how that thing gets measured; some agencies use "indicator" to mean the number that indicates how well the program is doing
- Measure is the way that the variable is known
It's all just language... Stay focused on what is needed.

19 Levels of Measurement

Level | Examples | Advantage | Disadvantage
Nominal, categorical | Zip code, race, yes/no | Easy to understand |
Ordinal, rank | Social class, Likert scale, "top ten" list (worst to best) | | Limited information from the data
Interval/ratio, continuous | Temperature, IQ, distances, dollars, inches, dates of birth | Gives the most information; can collapse into nominal or ordinal categories; used as a continuous variable | Can be difficult to construct a valid and reliable interval variable
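The table's point that interval/ratio data can be collapsed into ordinal or nominal categories (but not the reverse) can be sketched in a few lines. The variable, cut points, and category names below are hypothetical, chosen only to illustrate the idea:

```python
# A continuous (ratio-level) variable: age in years.
ages = [8, 15, 23, 41, 67, 72]

def to_ordinal(age):
    # Collapsing to ordinal categories keeps the ordering
    # but discards the exact spacing between values.
    if age < 18:
        return "child"
    elif age < 65:
        return "adult"
    else:
        return "senior"

ordinal = [to_ordinal(a) for a in ages]
print(ordinal)  # ['child', 'child', 'adult', 'adult', 'senior', 'senior']
```

Going the other way is impossible: once the data are recorded only as "child"/"adult"/"senior", the original ages cannot be recovered, which is why collecting at the highest feasible level of measurement preserves the most analytic options.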

20 Types of Effects as Documented through Indicators
- Indicators of physical change
- Indicators of knowledge change
- Indicators of psychological change
- Indicators of behavioral change
- Indicators of resources change
- Indicators of social change

21 Advice It is more productive to focus on a few relevant variables than to go on a wide-ranging fishing expedition. Carol Weiss (1972)

22 Variables
- Intervening variable: any variable that forms a link between the independent and dependent variables, and without which the independent variable is not related to the dependent variable (outcome).

23 Variables
- Confounding variable: an extraneous variable that accounts for all or part of the effects on the dependent variable (outcome), masking the underlying true associations.
- A confounder must be associated with both the dependent variable AND the independent variable.
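The defining danger of a confounder is that it can manufacture an association between a program and an outcome even when the program has no effect at all. The simulation below is an illustrative sketch (not from the slides): a confounder Z drives both the exposure X and the outcome Y, while X has no causal effect on Y, yet X and Y end up correlated:

```python
import random

random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]   # confounder
x = [zi + random.gauss(0, 1) for zi in z]    # "exposure": depends only on Z
y = [zi + random.gauss(0, 1) for zi in z]    # "outcome": depends only on Z

def corr(a, b):
    """Pearson correlation between two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# X and Y are substantially correlated (around 0.5 in this setup)
# purely through their shared dependence on the confounder Z.
print(round(corr(x, y), 2))
```

This is why an evaluation design must either control confounders (randomization, matching) or measure them so they can be adjusted for in the analysis.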

24 Confounders
- Exogenous (outside of individuals) confounding factors are uncontrollable (selection bias, coverage bias).
- Endogenous (within individuals) confounding factors are equally important: secular drift in attitudes/knowledge, maturation (children or elderly), seasonality, interfering events that alter individuals.

25 Variable story... To get from Austin to San Antonio, there is one highway. Between Austin and San Antonio there is one town, San Marcos. San Marcos is the intervening variable because it is not possible to get to San Antonio from Austin without going through San Marcos. The highway is often congested, with construction and heavy traffic. Highway conditions are the confounding variable because they are associated with both the trip (my car, my state of mind) and with arriving (alive) in San Antonio.

26 Measure Program Impact Across the Pyramid

