Single Variable, Independent Groups Designs

Presentation on theme: "Single Variable, Independent Groups Designs"— Presentation transcript:

1 Single Variable, Independent Groups Designs
Importance of Planning: Planning is extensive, involving all aspects of experimentation: we must develop a clear experimental design at the outset. With planning we build solid controls. We plan: the sample, ethical considerations, selection and assignment of participants, observational procedures, selection of controls, and data analysis.

2 Single Variable, Independent Groups Designs
We must carry out the plan precisely. Variance: We must have variance. If there is no variance, there is nothing to study, nothing to predict or hypothesize about. With experiments we manipulate the IV. The idea is to introduce variance, experimental variance. Variance in the IV will generate variance in the DV. Random assignment equates the groups; manipulation of the IV will disrupt this equality, causing variation between the groups. But extraneous variance hurts us: it threatens internal validity by creating possible alternative explanations.

3 Single Variable, Independent Groups Designs
Hence, our confidence is lessened. Controlling extraneous variance is of utmost importance. Accomplished through experimental design. Sources and Forms of Variance: Systematic Between Groups Variance: This is systematic or planned variance. This is variance between the groups caused by manipulation of the IV. The variance that we predicted would occur and are testing for. We are looking for significant differences in the variance between groups. Meaning that the variability between the groups is larger than expected on the basis of sampling error or chance.

4 Single Variable, Independent Groups Designs
But keep in mind that the difference may be due to experimental variance or extraneous variance. Just because there is a difference doesn't mean the IV is responsible; it only indicates that systematic effects existed to create the difference. Two sources for systematic effects: Experimental variance – what we force into the system. Extraneous variance – from uncontrolled variables, confounds. Together these are known as systematic between groups variance. But there is also always the influence of sampling error.

5 Single Variable, Independent Groups Designs
Sampling error? Significant differences in group means indicate that the variability is larger than would be expected due to chance, that is, the natural variation that occurs when drawing samples from a population. Extraneous variance? Extraneous variables are uncontrolled, possible alternative explanations. Extraneous variance is the effect that these variables have on the results. Stats can only tell us if significant differences exist; they will not say whether the difference is due more to experimental or to extraneous variance.

6 Single Variable, Independent Groups Designs
Nonsystematic Within Groups Variance: Also known as error variance. This is due to random factors that affect participants differentially within the same group. Systematic variance, by contrast, reflects differences across groups. Nobody is the same, we are all snowflakes. Hence, natural variability will always exist between means, i.e. sampling error. This is true even if there are zero systematic effects. The means will never be exactly the same. Systematic variance, remember, increases between groups variance beyond the variability due to sampling error.

7 Single Variable, Independent Groups Designs
Again, nonsystematic within groups influences are random. Some participants will score lower and others higher than normal. Random influences will cancel each other out in the means, but they will affect variance; it will become larger (i.e. more platykurtic?). And yet again! Total between groups variance is a combination of both systematic between groups variance (experimental and extraneous) and nonsystematic variance due to sampling error. There will always be some differences due to sampling error. The ratio of these two types of variance defines the F ratio, our statistic.

8 Single Variable, Independent Groups Designs
Thus:

F = (Systematic Effects + Error Variance) / Error Variance

So, look at the ratio we are calculating. If there are no systematic effects in the between groups variance, then the only thing left is error variance, and the F ratio equals around 1.00. Thus, between groups variation is no more than would be expected from sampling error alone. But take another look: if there are systematic effects, then the ratio increases. However, we still don't know if they come from experimental or extraneous variance. You must use controls to figure that out.
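To make the ratio concrete, here is a minimal simulation sketch in Python (the scores are invented, not from any real study): with no systematic effect the F ratio tends to hover near 1.00, and adding a treatment effect inflates it.

```python
# Illustrative only: invented scores for a control group and two treatment groups,
# one with no systematic effect and one with a real shift in the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control        = rng.normal(loc=50, scale=10, size=30)   # error variance only
treatment_null = rng.normal(loc=50, scale=10, size=30)   # no systematic effect
treatment_real = rng.normal(loc=58, scale=10, size=30)   # systematic effect added

f_null, p_null = stats.f_oneway(control, treatment_null)
f_real, p_real = stats.f_oneway(control, treatment_real)
print(f"No systematic effect: F = {f_null:.2f}, p = {p_null:.3f}")   # F near 1
print(f"Real effect added:    F = {f_real:.2f}, p = {p_real:.3f}")   # F well above 1
```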

9 Single Variable, Independent Groups Designs
Controlling Variance in Research: The name of the game! Here is the key!!!!!!!!! We want to maximize experimental variance, control extraneous variance, and minimize error variance. Want to make causal inferences? Then must show that experimental variance is due to manipulation of the IV. Experimental variance must be high, and not washed out or distorted from too much extraneous variance or error variance. The more extraneous and/or error variance you have the more difficult it is to show the effects of systematic experimental variance.

10 Single Variable, Independent Groups Designs
Maximizing Experimental Variance: Experimental variance is due to the effects of the IV on the DV. We need to be sure our manipulation had its intended effects; we want to be sure the IV really varied. To do so, use a manipulation check. A manipulation check ensures that our manipulation created a difference, had its intended effect. A what-a-check? If interested in the psychophysiological effects of emotional arousal, say my infamous mirth study: did they really think it was funny? Angry? Who knows? You had better, that's who!
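As a sketch of what a manipulation check might look like in practice (the ratings and group labels below are invented, not from the mirth study), we can simply test whether the manipulated condition actually differed on the check measure:

```python
# Hypothetical manipulation check: did participants shown "funny" clips rate them
# as more amusing (1-7 scale) than participants shown neutral clips?
from scipy import stats

funny_ratings   = [6, 7, 5, 6, 7, 6, 5, 7]
neutral_ratings = [2, 3, 2, 4, 3, 2, 3, 2]

t, p = stats.ttest_ind(funny_ratings, neutral_ratings)
print(f"Manipulation check: t = {t:.2f}, p = {p:.4f}")
# A significant difference supports the claim that the IV manipulation "took."
```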

11 Single Variable, Independent Groups Designs
Controlling Extraneous Variance: Again, this is due to extraneous (confounding) variables. These are the between groups variables, other than our intended IV manipulation, that have effects on the groups. Two basic ideas here: the experimental and control groups need to be as similar as possible at the outset, and the groups are treated exactly the same (except for the manipulation, of course). The IV manipulation must be the only difference! How do we accomplish this? Plan and conduct a really good study. Haven't you been taking notes!

12 Single Variable, Independent Groups Designs
Seriously now, specific procedures: Reduce extraneous variance by ensuring that the manipulation is the only difference. Extraneous variance is nicely controlled by RATG (random assignment to groups); this makes the groups equitable, remember? Make the variables constant – make participants homogeneous. But this may limit generalizability. Build the confound into the study as another IV. This can only go so far; we don't want a 12-way interaction to interpret! Match the participants or use a within subjects design (we'll get to that in the next section). Run a tightly controlled study! This increases our power and confidence.
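A minimal sketch of RATG itself, assuming nothing more than a participant list and two conditions (the IDs below are hypothetical):

```python
# Random assignment to groups: shuffle the participant list and split it, so that
# unknown extraneous variables are, on average, equated across groups.
import numpy as np

participants = [f"P{i:02d}" for i in range(1, 21)]   # hypothetical participant IDs
rng = np.random.default_rng()

shuffled = rng.permutation(participants)
control_group   = shuffled[:10].tolist()
treatment_group = shuffled[10:].tolist()
print("Control:  ", control_group)
print("Treatment:", treatment_group)
```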

13 Single Variable, Independent Groups Designs
Minimizing Error Variance: Remember, error variance is due to chance factors, individual differences. There will always be some error variance, but too much will make it hard to detect differences due to the manipulation. A couple of sources: Measurement error – variations in the way participants respond; may come from unreliability of the instruments, for instance. Individual differences – remember the snowflake idea? What to do? Maintain a carefully controlled study, with controlled and reliable measurements.

14 Single Variable, Independent Groups Designs
Individual differences controlled through randomization; whenever possible, randomize. Also controlled through repeated measures designs, we'll get to that soon.

F = (Systematic Between Groups Variance + Error Variance) / Error Variance,
where Systematic Between Groups Variance = Experimental Variance + Extraneous Variance

15 Single Variable, Independent Groups Designs
Nonexperimental Approaches: I thought we were going to talk about true experimentation? Yes, we will. First, let’s cover nonexperimental approaches to understand the advantages of true experiments. Remember, we want to: Maximize experimental variance. Control extraneous variance. Minimize error variance. We can accomplish these with true experiments. Let’s see how well nonexperimental approaches do.

16 Single Variable, Independent Groups Designs
Ex Post Facto Studies: Observe present behavior and attempt to relate it to prior experiences. But we have little confidence in validity due to the lack of controls. There is no manipulation; hence, we are not creating systematic variance. As an example: As clinicians we see patients, and they often report experiencing sexual abuse in their past. So, sexual abuse leads to psychopathology? Remember the ex post facto fallacy? Can we then conclude that sexual abuse leads to psychopathology? Many individuals experience such abuse and never experience psychological problems.

17 Single Variable, Independent Groups Designs
We cannot control for possible confounds. Hence alternative hypotheses cannot be ruled out. Single Group Posttest Only Studies: Somewhat higher level of constraint here. Because there is manipulation of an IV. But, there is no control group, no comparison. There is also only one measurement taken after the manipulation. The procedure? Draw a sample. Conduct some manipulation. Then take a measurement. Hence, many factors left uncontrolled, including:

18 Single Variable, Independent Groups Designs
Placebo effect. History. Maturation. Regression to the mean. Single Group Pretest-Posttest Studies: Again, an improvement on the previous type of study. Here there is now a pretest taken prior to the manipulation. We can now assess, or verify, that a real change occurred. But, the same factors are still uncontrolled. Pretest-Posttest Natural Control Group Studies: Yet even higher constraint here. Now, we have added a no-treatment control group. A group that does not receive the manipulation. But, the control group is naturally occurring.

19 Single Variable, Independent Groups Designs
Hence, there is no RATG. Doesn’t this sound a little familiar? Differential? The problem? We cannot know whether the groups are equal at the outset. We could test on some variables to determine equality. But, we cannot possibly know all of the potentially confounding variables. This is why RATG is so important. It will equate the groups on all those unknown factors, the extraneous variables.

20 Single Variable, Independent Groups Designs
Experimental Approaches: What is different here? Inclusion of a control group. RATG. The combination of these controls many types of confounds. Control group helps control against: History. Maturation. Regression to the mean. Placebo effect. But, to be effective we need to randomly assign participants to either the control or to the treatment group. Whenever you can randomize, do it!

21 Single Variable, Independent Groups Designs
Randomized Posttest Only Control Group Design: This is the most basic level. It includes randomization and a control group. Subjects are randomly assigned to either a control group or a treatment group. Then a manipulation or treatment occurs, and a measurement is taken. The critical comparison is the posttest measurement between the two groups. We can have confidence the groups are equal at the outset since RATG is included. Hence, we know that the manipulation caused the change and not some other confounding variable. Threats to internal validity are controlled; they should be equal across groups thanks to RATG.

22 Single Variable, Independent Groups Designs
Randomized Pretest-Posttest Control Group Design: This is essentially the same as the pretest-posttest natural control group design. The difference? RATG. This is the classic. What to compare? The posttest measurements between the two groups. What about the pretest? With the pretest we can be assured that the groups were equal at the outset, giving us greater confidence. The posttest comparison is still the critical one.
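A minimal sketch of both checks, with invented scores: compare the pretests to verify that RATG left the groups roughly equal, then make the critical posttest comparison.

```python
# Hypothetical pretest/posttest scores for a treatment group and a control group.
import numpy as np
from scipy import stats

pre_tx,  post_tx  = np.array([20, 22, 19, 21, 23]), np.array([30, 31, 27, 29, 33])
pre_ctl, post_ctl = np.array([21, 20, 22, 19, 23]), np.array([22, 21, 24, 20, 24])

t_pre,  p_pre  = stats.ttest_ind(pre_tx, pre_ctl)     # expect nonsignificant: groups equal at outset
t_post, p_post = stats.ttest_ind(post_tx, post_ctl)   # the critical comparison
print(f"Pretest:  t = {t_pre:.2f}, p = {p_pre:.3f}")
print(f"Posttest: t = {t_post:.2f}, p = {p_post:.3f}")
```

The next slide's difference-score option would simply replace the posttest arrays with post minus pre for each group.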

23 Single Variable, Independent Groups Designs
You could calculate a difference score (posttest – pretest measurement). But if you compare changes as a function of time, then you've gone beyond a single variable design. This is why, at this level of single variable independent groups designs, you must make the comparison either for the posttest or for the difference score. Look at the paradigm, what other IV could we have? Thus, examination of the difference score or the posttest is the defining feature; otherwise it is a factorial design (we'll get to that later). Multilevel Completely Randomized Between Subjects Design: So far the designs have only had two levels of the IV. The multilevel completely randomized between subjects design is an extension of this.

24 Single Variable, Independent Groups Designs
Here, participants are randomly assigned to three or more conditions. So, we examine several different emotions, not just two. Solomon's Four Group Design: The pretest is a double-edged sword: it solves some problems but creates new ones. Specifically, the pretest may affect responses to the treatment or to the posttest, or there could be some interaction involved. Either way, the results may be confounded. The pretest may sensitize the subjects; it may affect later responses. The pretest may be a type of "pretreatment."

25 Single Variable, Independent Groups Designs
The Solomon design is a combination of the randomized pretest-posttest control group design and the posttest only control group design.

Group A: Pretest  Tx  Posttest
Group B: Pretest      Posttest
Group C:          Tx  Posttest
Group D:              Posttest

What comparisons to make? Group A vs Group B on the posttest measure to examine the effect of treatment. Group A vs Group C on the posttest measure to examine the effect of the pretest condition. Group D allows us to see how not giving a pretest or a manipulation can affect scores.

26 Single Variable, Independent Groups Designs
Statistical Analyses: The level of data on the scale of measurement will partly determine the type of statistic. Nominal: use chi-square. Ordinal: use the Mann-Whitney U. Interval and ratio (score) data: use a t-test or ANOVA. Assumptions must also be met. For ANOVA: normally distributed data and homogeneity of variance. What if these assumptions are not met? Use the Mann-Whitney U.
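As a hedged sketch of how those choices map onto standard library calls (all of the data below are invented), scipy covers each level of measurement:

```python
# Nominal (category counts) -> chi-square; ordinal (ranks/ratings) -> Mann-Whitney U;
# score (interval/ratio) data -> t-test for two groups or ANOVA for two or more.
import numpy as np
from scipy import stats

counts = np.array([[12, 8],
                   [5, 15]])                       # group x category frequency table
chi2, p_chi, dof, expected = stats.chi2_contingency(counts)

u, p_u = stats.mannwhitneyu([3, 5, 4, 2, 5], [1, 2, 2, 3, 1])

t, p_t = stats.ttest_ind([50, 55, 48, 60], [42, 40, 45, 39])
f, p_f = stats.f_oneway([50, 55, 48], [42, 40, 45], [60, 62, 58])

print(f"chi-square p={p_chi:.3f}, Mann-Whitney p={p_u:.3f}, t-test p={p_t:.3f}, ANOVA p={p_f:.3f}")
```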

27 Single Variable, Independent Groups Designs
May also use one of Winer’s (1971) transformations. For instance, log transformation of the data. He had several different versions. Select the one that reduces heterogeneity of variance the most. t-test: Examines differences between two means. Very commonly used. As either primary analysis. Or for follow-up analyses (multiple comparisons). Analysis of Variance (ANOVA): We often have more than two means to compare. Our IV may have more than two levels.
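The slide does not spell out Winer's specific procedures, so the following is only an illustrative sketch of the general idea: try a few common transformations and keep whichever best reduces heterogeneity of variance, checked here with Levene's test (invented, strictly positive data so log and square root are defined).

```python
import numpy as np
from scipy import stats

group1 = np.array([2.0, 3.1, 2.5, 4.0, 3.3])
group2 = np.array([10.0, 25.0, 14.0, 40.0, 22.0])   # much more variable than group1

for name, fn in [("raw", lambda x: x), ("log", np.log), ("sqrt", np.sqrt)]:
    w, p = stats.levene(fn(group1), fn(group2))
    print(f"{name:5s} Levene W = {w:6.2f}, p = {p:.3f}")
# A smaller W (larger p) after a transformation indicates more homogeneous variances.
```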

28 Single Variable, Independent Groups Designs
ANOVA can test for differences between a set of two or more means. A "one-way" ANOVA means that there is just one IV, but it can have multiple levels. What goes into the ANOVA? Within groups variance: a measure of nonsystematic variation within a group, error or chance variation, the average variability within the group. Between groups variance: represents how variable the group means are. Sum of Squares: each source of variance has a sum of squares.

29 Single Variable, Independent Groups Designs
Remember, this is the sum of squared deviations from the mean; this is how the variances are calculated. How is the ANOVA calculated? The sum of squares for each source of variance is calculated. But a raw sum of squares grows with the number of deviations summed, so each sum of squares is divided by its degrees of freedom (df). The result is the mean square (MS). The MS for Between Groups is divided by the MS for Within Groups. The result is the F ratio.
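Here is a step-by-step sketch of that calculation on invented scores, with scipy's built-in one-way ANOVA used only as a cross-check:

```python
# Sum of squares -> degrees of freedom -> mean squares -> F, by hand.
import numpy as np
from scipy import stats

groups = [np.array([23, 25, 21, 27]),
          np.array([30, 32, 29, 33]),
          np.array([24, 22, 26, 25])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within  = len(all_scores) - len(groups)

ms_between = ss_between / df_between
ms_within  = ss_within / df_within
F = ms_between / ms_within
p = stats.f.sf(F, df_between, df_within)   # probability of an F this large if H0 is true

print(f"F({df_between}, {df_within}) = {F:.2f}, p = {p:.4f}")
print(stats.f_oneway(*groups))             # should agree with the hand calculation
```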

30 Single Variable, Independent Groups Designs
Is it significant? A large F ratio means that the experimental manipulation may have had an effect. Remember, it could also be from extraneous variance. The p value indicates the probability of obtaining an F that big if there were no systematic effects – the probability of finding that F given that the null hypothesis is true. If significant, then we can conclude that at least one mean is different from at least one other mean. So we're not done yet. The F is significant. All that tells us is that a significant difference exists somewhere. We now have to find where.

31 Single Variable, Independent Groups Designs
Multiple Comparisons: We now have to find where the significance lies. Any of the means could be different. Which ones? Use t-tests to find out. Two classes of multiple comparisons: A priori comparisons are when we specified at the outset which comparisons we were interested in making. We then make these comparisons after finding a significant F ratio. Post hoc comparisons are when we did not specify which comparisons were of interest before the study; we just go in and make a bunch of comparisons.

32 Single Variable, Independent Groups Designs
But we must be mindful of the Type I error rate; we need to control the experiment-wise error rate. Each comparison we make adds another .05 of risk, so our chances of making a Type I error increase. We need to control for this. Common options: Tukey's Honestly Significant Difference (HSD), Fisher's Least Significant Difference (LSD), Newman-Keuls, Scheffé, and Bonferroni.
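As one minimal sketch of the idea (using the Bonferroni option from the list above; the scores are invented), pairwise t-tests can be run at a corrected alpha so the experiment-wise Type I error rate stays near .05:

```python
# Post hoc pairwise t-tests with a Bonferroni correction.
from itertools import combinations
from scipy import stats

groups = {"A": [23, 25, 21, 27], "B": [30, 32, 29, 33], "C": [24, 22, 26, 25]}
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)            # Bonferroni: split .05 across the comparisons

for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    verdict = "significant" if p < alpha else "n.s."
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f} ({verdict} at corrected alpha = {alpha:.4f})")
```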

