2 Evaluation Designs: Quantitative versus Qualitative versus Combination
- Quantitative: deductive; a general principle is applied to a specific case; uses hard (numeric) data
- Qualitative: inductive; individual cases are studied to formulate a general principle; uses narrative data
- Combination: qualitative data can help support quantitative data, or you can start with qualitative information and then use quantitative methods. For example, in instrument development, content validity is established by sending the instrument to a panel of experts, getting feedback, and modifying it; quantitative data are then collected to validate it in other ways.
3 Categories of Research Designs
- Nonexperimental (pre-experimental) designs: one group, little validity control
- Quasi-experimental: experimental and comparison groups, but no random assignment or selection
- Experimental: random assignment to experimental and control groups
Experimental designs exert the greatest control over internal validity. Many exercise science studies are conducted this way, but very few health promotion studies are, since they often involve larger groups, sometimes in schools, etc.
4 Terminology
- Internal validity: the extent to which an observed outcome can be attributed to a planned intervention
- External validity: the extent to which an observed outcome can be attributed to a replicable intervention and generalized to other settings and populations
- Extraneous variables: did you control for all other variables that might have an impact on the outcome?
5 Internal Validity Threats
- History: an event that occurs during the intervention that could affect the results
- Maturation: bias from biological, natural, or social changes in subjects over time
Internal validity threats can bias results or their interpretation.
History example: running a smoking cessation program while the state imposes a tax increase on tobacco.
Maturation example: trying to decrease the weight of adolescents, who are maturing and changing rapidly and will naturally gain weight as they age. People can also become more skilled or educated about topics through health programs at school or work.
6 Internal Validity Threats
- Testing: taking a test might cue a person to change behavior, regardless of the program
- Instrumentation: bias in data collection instruments
Testing: people can also be cued in to your focus by a pre-test and react in a certain way.
Instrumentation: use the same instrument from pretest to post-test, and make sure your instruments are valid and reliable.
7 Internal Validity Threats
- Statistical regression: bias from selecting a group with unusually high or low scores on some measure
- Selection: comparison groups are unequal
Statistical regression: extreme scores (high or low) on a pretest move closer to the mean on the post-test. Think about educational settings: say you use a test to "diagnose" a learning disability and find that 10 students scored very low. You work with these students, and their mean score increases on the post-test. Can you think of something else that might have occurred? Some of the students might not have put forth effort on the first exam and were misdiagnosed, then put forth effort on the second, raising the mean. This was not a function of the program, but of the inaccuracy of the measurement.
Selection: nonequivalent groups.
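The diagnostic-test example above can be reproduced with a short simulation (a hypothetical sketch; the score model and all numbers are invented for illustration). Each observed score is modeled as a stable ability plus random measurement noise, so the students selected for the lowest pretest scores improve on the post-test even though no intervention occurs at all.

```python
import random

random.seed(1)

# Hypothetical model: observed score = true ability + random noise.
ability = [random.gauss(70, 5) for _ in range(1000)]
pretest = [a + random.gauss(0, 10) for a in ability]
posttest = [a + random.gauss(0, 10) for a in ability]  # no intervention at all

# Select the students with the lowest 10% of pretest scores.
cutoff = sorted(pretest)[len(pretest) // 10]
low = [i for i, p in enumerate(pretest) if p <= cutoff]

pre_mean = sum(pretest[i] for i in low) / len(low)
post_mean = sum(posttest[i] for i in low) / len(low)
print(f"pretest mean of low scorers:  {pre_mean:.1f}")
print(f"posttest mean of low scorers: {post_mean:.1f}")
```

The selected group's mean rises purely because their pretest scores included unusually bad luck (negative noise) that does not repeat on the post-test.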
8 Internal Validity Threats
- Attrition/subject mortality: dropout of subjects; with more than one group, unequal dropout between groups
- Interactive effects: combinations of the threats above
Attrition: why did people drop out? Are respondents different from non-respondents? Did you lose more people in your intervention group than in your control group?
Interactive effects: history and maturation together, etc.
9 Other Internal Validity Issues
- Diffusion: contamination of the comparison condition or the intervention condition
- Demoralization: subjects are upset that they are not receiving the other condition
If you have two classrooms of students, one an intervention group and one a control group, right next to each other, chances are the students will talk between the two conditions, and people in the intervention group might share what they are learning and doing. You are then contaminating your control group.
10 External Validity Threats
External validity is important for health promotion: we are ultimately trying to develop programs that work in different populations and settings.
Threats:
- Social desirability: a person or group tries to please the researcher and answers how they think they should answer
- Expectancy effect: attitudes projected by the researcher can influence participants
- Hawthorne effect: people react to the attention they are getting; if they are getting increased attention, they may act more favorably, regardless of the condition
- Placebo effect: behavior changes because people believe in the treatment (a placebo means there was no actual treatment)
- Novelty effect: people may initially react favorably to an innovation; this may wear off, and the program may not be as effective as originally thought
To control for these, blind the participants (they don't know which group they are in) and blind the researchers (they don't know which group each person is in).
11 Research Designs
Key to abbreviations:
- O = data collection (observation)
- X = treatment/intervention
- R = random assignment
- Solid line separating groups = equal groups
- Dashed line separating groups = unequal groups
This notation is taken from reading 8b in your packet.
12 Pre-experimental Designs
One group, pretest, post-test:
O X O
Good for pilot testing, but does not control for internal validity threats. Be certain to use valid and reliable instruments.
13 Pre-experimental Designs
One-shot case study:
X O
No control for validity threats and no pretest measures; perhaps the weakest of all designs.
14 Quasi-experimental Designs
Nonequivalent comparison group:
O X O
- - - - - (dashed line: nonequivalent groups)
O     O
A comparison group is added, but the groups are not equal. They may differ in size, demographics, or other variables, so you could be comparing two very different groups. Researchers often try to match comparison groups on as many factors as possible: age, gender, SES, and variables of interest.
15 Quasi-experimental Designs
Time series:
O O O O X O O O O
Several measures assess whether there is a trend; there is no comparison group. Sometimes you want to know whether a trend already exists. For example, smoking rates might already be decreasing before your program starts; if the decline was part of an existing trend, it is difficult to say your program was responsible for it.
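One way to use the repeated measurements is to fit a line to the pre-intervention observations and extrapolate it into the post period; if the observed values sit close to the extrapolated trend, the decline was likely already underway. A minimal sketch, with made-up smoking rates:

```python
# Hypothetical interrupted time-series check: if the outcome was already
# trending downward before the intervention, the post-intervention drop
# may just continue that trend. Data values are invented for illustration.
pre = [24.0, 23.1, 22.3, 21.4]   # yearly smoking rates before the program
post = [20.2, 19.5, 18.6, 17.8]  # rates after the program

# Fit a least-squares line to the pre-intervention points (t = 0..3).
n = len(pre)
t = list(range(n))
t_mean = sum(t) / n
y_mean = sum(pre) / n
slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, pre)) / \
        sum((ti - t_mean) ** 2 for ti in t)
intercept = y_mean - slope * t_mean

# Extrapolate the pre-trend into the post period (t = 4..7) and compare.
predicted = [intercept + slope * ti for ti in range(n, n + len(post))]
gaps = [p - q for p, q in zip(predicted, post)]
print("predicted from pre-trend:", [round(p, 1) for p in predicted])
print("observed post-program:   ", post)
print("program effect estimates:", [round(g, 1) for g in gaps])
```

With these invented numbers, the post-program values fall only slightly below the extrapolated pre-trend, so most of the apparent decline would be attributable to the existing trend rather than the program.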
16 Quasi-experimental Designs
Multiple time series:
O O O O X O O O O
- - - - - - - - - (dashed line: nonequivalent groups)
O O O O   O O O O
A comparison group is added to the time-series design.
17 Experimental Designs
Pretest, post-test, control group design:
R O X O
R O   O
Subjects are randomly assigned to groups. Random assignment makes the groups equal in the eyes of probability theory: everyone has an equal chance of being assigned to either group.
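Random assignment itself is simple to implement. A minimal sketch (the subject names are placeholders): shuffle the roster and split it in half, so every subject has an equal chance of landing in either group.

```python
import random

# Hypothetical roster; in practice this would be your enrolled subjects.
subjects = [f"subject_{i:02d}" for i in range(1, 21)]

random.seed(42)          # fix the seed so the assignment is reproducible
random.shuffle(subjects) # every subject has an equal chance of either group

half = len(subjects) // 2
treatment = subjects[:half]
control = subjects[half:]
print("treatment:", treatment)
print("control:  ", control)
```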
18 Experimental Designs
Post-test only, control group design:
R X O
R   O
Subjects are randomly assigned to groups. With a post-test only, how do you know the groups are equal at pretest? If you use random assignment, you can say that they are. Omitting the pretest also controls for the testing threat.
19 Experimental Designs
Solomon four-group design:
R O X O
R O   O
R   X O
R     O
Here you combine the two main experimental designs. This is one of the most rigorous designs for controlling internal validity. External validity, though, is another story.
20 Why do you care?
Having this knowledge will help you judge the quality of research studies, which will affect your conclusions about their results.
- Resources: you may have only a small amount of money allocated to the program, so you might not be able to have two groups, or enough personnel to collect all the data a time-series design requires.
- Intervention: in a school-based intervention, you might not have the option of random assignment!
- Statistical control: some statistical tests can control for pretest differences (e.g., ANCOVA, using pretest scores as a covariate).
- Also realize that not all comparison groups get nothing; sometimes they get something, or they are offered the intervention after the initial study period.
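The ANCOVA idea mentioned above, comparing post-test means after adjusting for pretest differences using a pooled within-group regression slope, can be sketched as follows (all scores are made up; a real analysis would use a statistics package and report significance tests):

```python
# Hypothetical pre/post scores for two nonequivalent groups; the control
# group starts lower at baseline, inflating the raw post-test difference.
pre_t, post_t = [55, 60, 62, 58, 65], [70, 74, 78, 72, 80]  # treatment
pre_c, post_c = [50, 54, 57, 52, 59], [58, 61, 65, 59, 67]  # control

def mean(xs):
    return sum(xs) / len(xs)

def within_dev(xs):
    m = mean(xs)
    return [x - m for x in xs]

# Pooled within-group slope of post-test on pretest (the ANCOVA covariate).
num = sum(dx * dy for dx, dy in zip(within_dev(pre_t), within_dev(post_t))) \
    + sum(dx * dy for dx, dy in zip(within_dev(pre_c), within_dev(post_c)))
den = sum(dx * dx for dx in within_dev(pre_t)) \
    + sum(dx * dx for dx in within_dev(pre_c))
slope = num / den

# Adjust each group's post-test mean to the grand pretest mean.
grand_pre = mean(pre_t + pre_c)
adj_t = mean(post_t) - slope * (mean(pre_t) - grand_pre)
adj_c = mean(post_c) - slope * (mean(pre_c) - grand_pre)

print(f"raw difference:      {mean(post_t) - mean(post_c):.2f}")
print(f"adjusted difference: {adj_t - adj_c:.2f}")
```

With these invented numbers, much of the raw 12.8-point difference reflects the baseline gap between the groups; after the covariate adjustment, the estimated program effect shrinks to roughly half that.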