1 Chapter Twelve. Alternative Research Designs

4 Protecting Internal Validity Revisited Internal validity A type of evaluation of your experiment. It asks the question of whether your IV is the only possible explanation of the results shown for your DV.

6 Protecting Internal Validity Revisited Confounding Confounding is caused by an uncontrolled extraneous variable that varies systematically with the IV.

8 Protecting Internal Validity Revisited Extraneous variables Extraneous variables are variables that may unintentionally operate to influence the dependent variable.

10 Protecting Internal Validity Revisited Cause-and-effect relation A cause-and-effect relation occurs when we know that a particular IV (cause) leads to specific changes in a DV (effect).

12 Protecting Internal Validity Revisited Internal validity revolves around the question of whether your IV actually created any change that you observe in your DV. When you have an internally valid experiment, you are reasonably certain that your IV is responsible for the changes you observed in your DV.

14 Protecting Internal Validity Revisited Good experimental control leads to internally valid experiments. See the toothpaste test on pp. 300-301 of your text.

20 Protecting Internal Validity with Research Designs Experimental design recommendations from Campbell (1957) and Campbell and Stanley (1966): Random Assignment Experimental participants are distributed into various groups on a random (nonsystematic) basis. All participants have an equal chance of being assigned to any of our treatment groups. The only drawback to random assignment is that we cannot guarantee equality through its use. Random assignment is not the same as random selection (choosing participants from a population in such a way that all possible participants have an equal opportunity to be chosen).
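
As a rough illustration of this distinction, here is a minimal Python sketch (not from the text; the participant IDs and group sizes are hypothetical):

import random
# Random selection: choosing participants from a population so that every
# possible participant has an equal opportunity to be chosen for the study.
population = [f"P{i:03d}" for i in range(1, 101)]
participants = random.sample(population, k=20)
# Random assignment: distributing the participants we already have into
# treatment groups on a random (nonsystematic) basis, so that each one has
# an equal chance of ending up in any group.
random.shuffle(participants)
treatment_group = participants[:10]
control_group = participants[10:]
print("Treatment group:", treatment_group)
print("Control group:", control_group)

Random assignment makes the groups comparable on average but, as the slide notes, cannot guarantee that any particular pair of groups is equal.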

25 Protecting Internal Validity with Research Designs Campbell and Stanley (1966) recommended three experimental designs as being able to control the threats to internal validity we listed in Chapter 6. These designs are: The Pretest-Posttest Control Group Design The Solomon Four-Group Design The Posttest-Only Control-Group Design
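
For reference, in the conventional Campbell and Stanley notation (R = random assignment, O = observation or measurement of the DV, X = treatment), these three designs are usually diagrammed as follows:

The Pretest-Posttest Control Group Design
R   O   X   O
R   O       O

The Solomon Four-Group Design
R   O   X   O
R   O       O
R       X   O
R           O

The Posttest-Only Control-Group Design
R   X   O
R       O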

26 The Pretest-Posttest Control Group Design This design consists of two randomly assigned groups of participants, both of which are pretested, with one group receiving the IV.

27 The Pretest-Posttest Control Group Design The random assignment of participants to groups allows us to assume that the two groups are equated before the experiment, thus ruling out selection as a problem.

30 The Pretest-Posttest Control Group Design Selection A threat to internal validity. If we choose participants in such a way that our groups are not equal before the experiment, we cannot be certain that our IV caused any difference we observe after the experiment.

34 The Pretest-Posttest Control Group Design Using a pretest and posttest for both groups allows us to control the effects of history, maturation, and testing because they should affect both groups equally. If the control group shows a change between the pretests and posttests, then we know that some factor other than the IV is at work. Statistical regression is controlled as long as we assign our experimental and control groups from the same extreme pool of participants. If any of the interactions with selection occur, they should affect both groups equally, thus equalizing those effects on internal validity.

37 The Pretest-Posttest Control Group Design History A threat to internal validity. History refers to events that occur between the DV measurements in a repeated measures design.

41 The Pretest-Posttest Control Group Design Maturation A threat to internal validity. Maturation refers to changes in participants that occur over time during an experiment. These changes could include actual physical maturation or tiredness, boredom, hunger, and so on.

43 The Pretest-Posttest Control Group Design Testing A threat to internal validity that occurs because measuring the DV causes changes in the DV.

45 The Pretest-Posttest Control Group Design Statistical regression A threat to internal validity that occurs when low scorers improve or high scorers fall on a second administration of a test due solely to statistical reasons.
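
A small, hypothetical Python simulation (not from the text) makes this concrete: if we select the lowest scorers on a noisy pretest and simply retest them, their average moves back toward the population mean even though no treatment was applied.

import random
random.seed(1)
def observed_score(true_ability):
    # Observed score = stable ability plus random measurement error.
    return true_ability + random.gauss(0, 10)
abilities = [random.gauss(100, 15) for _ in range(1000)]
pretest = [observed_score(a) for a in abilities]
# Select the 100 lowest pretest scorers (an extreme group).
lowest = sorted(range(1000), key=lambda i: pretest[i])[:100]
retest = [observed_score(abilities[i]) for i in lowest]
print("Extreme group pretest mean:", sum(pretest[i] for i in lowest) / 100)
print("Extreme group retest mean:", sum(retest) / 100)
# The retest mean rises toward 100 for purely statistical reasons.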

47 The Pretest-Posttest Control Group Design Interactions with selection Threats to internal validity that can occur if there are systematic differences between or among selected treatment groups based on maturation, history, or instrumentation.

49 The Pretest-Posttest Control Group Design Instrumentation A threat to internal validity that occurs if the equipment or human measuring the DV changes its measuring criterion over time.

51 The Pretest-Posttest Control Group Design Diffusion or imitation of treatment A threat to internal validity that can occur if participants in one treatment group become familiar with the treatment of another group and copy that treatment.

55 The Solomon Four-Group Design This design is identical to the pretest-posttest control-group design with the first two groups but adds an additional two groups. Because the Solomon four-group design has the same two groups as the pretest-posttest control-group design, it has the same protection against the threats to internal validity. The main advantage gained by adding the two additional groups relates to external validity. One problem with the Solomon design is that there is no statistical test that can treat all six sets of data at the same time.

59 The Posttest-Only Control-Group Design The posttest-only control-group design is a copy of the pretest-posttest control-group design without the pretests; it duplicates the two added groups in the Solomon four-group design. Random assignment equates the two groups, and withholding the IV from one group to create a control group makes this a powerful experimental design that covers the threats to internal validity. The posttest-only control-group design can be extended by adding additional treatment groups as shown in Figure 12-6. Finally, we could create a factorial design from the posttest-only control-group design by combining two of these designs simultaneously so that we ended up with a block diagram similar to those from Chapter 11.

62 Conclusion How important is internal validity? It is the most important property of any experiment. If you do not concern yourself with the internal validity of your experiment, you are wasting your time.

66 Single-Case Experimental Designs Single-case experimental design An experiment that consists of one participant (also known as N = 1 designs). Includes controls just as in a typical experiment. Precautions are taken to ensure internal validity.

72 Single-Case Experimental Designs Case-study approach An observational technique in which we compile a record of observations about a single participant. Often used in clinical settings. A case study is merely a descriptive or observational approach. The researcher does not manipulate or control variables, but simply records observations.

76 History of Single-Case Experimental Designs In the 1860s Gustav Fechner explored sensory processes on an in-depth basis with a series of individuals. Wilhelm Wundt (founder of the first psychology laboratory) conducted his pioneering work on introspection with highly trained individual participants. Hermann Ebbinghaus pioneered research in verbal learning and memory using himself as the subject in a single-case design. Single-case designs declined in popularity as statistical tests for group experiments were developed in the 1920s.

78 Uses of Single-Case Experimental Designs There are still researchers who use single-case designs. Founded by B.F. Skinner, the experimental analysis of behavior approach continues to employ this technique.

82 Uses of Single-Case Experimental Designs Skinner (1966) summarized his philosophy in this manner: "Instead of studying a thousand rats for one hour each, or a hundred rats for ten hours each, the investigator is likely to study one rat for a thousand hours" (p. 21). The Society for the Experimental Analysis of Behavior was formed and began publishing its own journals: the Journal of the Experimental Analysis of Behavior (in 1958) and the Journal of Applied Behavior Analysis (in 1968).

84 Uses of Single-Case Experimental Designs Why use a single-case design in the first place? Dukes (1965) provided a number of convincing arguments for and situations that require single-case designs.

90 Uses of Single-Case Experimental Designs Dukes (1965) arguments: A sample of one is all you can manage if that sample exhausts the population. If you can assume perfect generalizability, then a sample of one is appropriate. A single-case design would be most appropriate when a single negative instance would refute a theory or an assumed universal relation. You may simply have limitations on your opportunity to observe a particular behavior (e.g., H. M.). When research is extremely time-consuming and expensive, requires extensive training, or has difficulties with control, an investigator may choose to study only one participant (e.g., ape language studies).

94 General Procedures of Single-Case Experimental Designs Hersen (1982) listed three procedures that are characteristic of single-case designs: Repeated measures Baseline measurement Changing one variable at a time

98 General Procedures of Single-Case Experimental Designs Repeated Measures When you are dealing with only one participant, it is important to make sure the behavior you are measuring is consistent. Therefore, you would repeatedly measure the participant’s behavior. Hersen and Barlow (1976) noted that the procedures for measurement “must be clearly specified, observable, public, and replicable in all respects” (p. 71). Repeated measurements “must be done under exacting and totally standardized conditions with respect to measurement devices used, personnel involved, time or times of day measurements are recorded, instructions to the subject, and the specific environmental conditions” (p. 71).

102 General Procedures of Single-Case Experimental Designs Baseline Measurement A measurement of behavior that is made under normal conditions (e.g., no IV is present); a control condition. Baseline measurement serves as the control condition against which to compare the behavior as affected by the IV. Barlow and Hersen (1973) recommend that you collect at least three observations during the baseline period in order to establish a trend in the data.
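
As a minimal sketch (with made-up behavior counts), establishing a trend from at least three baseline observations might look like this in Python:

# Hypothetical baseline data: target behaviors counted in each session.
baseline = [12, 11, 13]
n = len(baseline)
sessions = list(range(1, n + 1))
mean_x = sum(sessions) / n
mean_y = sum(baseline) / n
# Least-squares slope of behavior across sessions; a value near zero
# suggests a reasonably stable baseline before the IV is introduced.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sessions, baseline)) / sum((x - mean_x) ** 2 for x in sessions)
print("Baseline mean:", mean_y)
print("Baseline slope per session:", slope)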

106 General Procedures of Single-Case Experimental Designs Changing one variable at a time In a single-case design it is vital that, as the experimenter, you change only one variable at a time when you move from one phase of the experiment to the next. If you allow variables to change simultaneously, then you have a confounded experiment and cannot tell which variable has caused the change in behavior that you observe. If you record your baseline measurement, change several aspects of the participant’s environment, and then observe the behavior again, you have no way of knowing which changed aspect affected the behavior.

109 Statistics and Single-Case Experimental Designs Traditionally, researchers have not computed statistical analyses of results from single-case designs. The development of statistical tests for single-case designs has lagged behind multiple-case analyses. There is also controversy about whether statistical analyses of single-case designs are even appropriate (Kazdin, 1976).

113 Statistics and Single-Case Experimental Designs The case against statistical analysis The tradition has been to visually inspect (“eyeball”) the data to determine whether or not change has taken place. Statistical significance is not always the same as clinical significance. Even though statistical tests can help find effects that visual inspection cannot, such subtle effects may not be replicable (Kazdin, 1976).

117 Statistics and Single-Case Experimental Designs The case for statistical analysis The argument for using statistical analyses of single-case designs revolves primarily around increased accuracy of conclusions. Jones, Vaught, and Weinrott (1977) found that conclusions drawn from visually inspected data were sometimes correct and sometimes incorrect. Both Type I and Type II errors occurred. Kazdin (1976) pointed out that statistical analyses are particularly likely to uncover findings that do not show up in visual inspection when a stable baseline is not established, new areas of research are being investigated, or testing is done in the real world, which tends to increase extraneous variation.
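
One statistical option sometimes applied to single-case data, though not named in the text, is a randomization (permutation) test on the phase means. The Python sketch below uses hypothetical observations and assumes the data points can be treated as exchangeable; strongly autocorrelated session data can violate that assumption.

import random
random.seed(42)
# Hypothetical baseline (A) and treatment (B) observations.
a_phase = [12, 11, 13, 12, 14]
b_phase = [7, 6, 8, 5, 7]
observed = abs(sum(a_phase) / len(a_phase) - sum(b_phase) / len(b_phase))
pooled = a_phase + b_phase
count = 0
n_iter = 10000
for _ in range(n_iter):
    # Reshuffle all observations into two arbitrary "phases" of the same sizes.
    random.shuffle(pooled)
    a = pooled[:len(a_phase)]
    b = pooled[len(a_phase):]
    if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
        count += 1
print("Observed difference in phase means:", observed)
print("Approximate p-value:", count / n_iter)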

118 Representative Single-Case Experimental Designs Researchers use standard notation for single-case designs that makes the information easier to present and conceptualize.

121 Representative Single-Case Experimental Designs Standard Notation: A Refers to the baseline measurement in a single-case design.

123 Representative Single-Case Experimental Designs Standard Notation: B Refers to the outcome (treatment) measurement in a single-case design.

129 Representative Single-Case Experimental Designs The A-B design The A-B design is the simplest of the single-case designs. We make baseline measurements, apply a treatment, and then take a second set of measurements. We compare the B (treatment) measurement to the A (baseline) measurement in order to determine whether a change has occurred. In the A-B design, the participant’s A measurements serve as the control for the B measurements. The A-B design is poor for determining causality because of many of the threats to internal validity (e.g., history, maturation, and instrumentation).

135 Representative Single-Case Experimental Designs The A-B-A design In the A-B-A design, the treatment phase is followed by a return to the baseline condition. If a change in behavior during B is actually due to the experimental treatment, the change should disappear when B is removed and you return to the baseline condition. If, on the other hand, a change in B was due to some extraneous variable, the change will not disappear when B is removed. Thus, the A-B-A design allows a causal relation to be drawn. If you end your experiment on an A phase, this leaves the participant “hanging” without the treatment.

139 Representative Single-Case Experimental Designs The A-B-A-B design The A-B-A-B design begins with a baseline period followed by treatment, baseline, and treatment periods consecutively. This design adds a final treatment period to the A-B-A design, thereby completing the experimental cycle with the participant in a treatment phase. Hersen and Barlow (1976) point out that this design gives two transitions (B to A and A to B) that can demonstrate the effect of the treatment variable. Thus, our ability to draw a cause-and-effect conclusion is further strengthened.
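
A brief Python sketch (hypothetical data) shows how A-B-A-B phase data might be organized and summarized; the pattern that supports a treatment effect is a shift in behavior at every transition into or out of treatment.

# Hypothetical A-B-A-B data: target behaviors per session in each phase.
phases = {
    "A1 (baseline)": [12, 13, 11, 12],
    "B1 (treatment)": [7, 6, 5, 6],
    "A2 (baseline)": [11, 12, 12, 13],
    "B2 (treatment)": [5, 6, 4, 5],
}
for label, data in phases.items():
    print(f"{label}: mean = {sum(data) / len(data):.1f}")
# Behavior that drops in each B phase and recovers in the second A phase
# strengthens the case that the treatment, not an extraneous variable,
# produced the change.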

141 Representative Single-Case Experimental Designs Design and the real world From the preceding sections it should be clear that the A-B-A-B design is the preferred design for single-case research.

