1 Experimental and Single-Subject Design PSY440 May 27, 2008

2 Definitions: Consider each of the following terms and generate a definition for each: Research, Empirical, Data, Experiment, Qualitative Research

3 Definitions Research: 1. Scholarly or scientific investigation or inquiry. 2. Close and careful study (American Heritage Dictionary). 3. Systematic (step-by-step). 4. Purposeful (identify, describe, explain, predict).

4 Definitions Empirical: Relying upon or derived from observation or experiment; capable of proof or verification by means of observation or experiment. (American Heritage Dictionary) Data: Information; esp. information organized for analysis or used as the basis of a decision. (American Heritage Dictionary) Experiment: A method of testing a hypothesized causal relationship between two variables by manipulating one variable and observing the effect of the manipulation on the second variable.

5 Overview of Experimental Design Based on Alan E. Kazdin. (1982). Single-Case Research Designs: Methods for Clinical and Applied Settings. Chapter IV.

6 Independent & Dependent Variables The independent variable (IV) is the variable that is manipulated in an experiment. The dependent variable (DV) is the variable that is observed to assess the effect of the manipulation of the IV. What are some examples of IVs and DVs that might be studied experimentally?
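
To make the IV/DV distinction concrete, here is a minimal Python sketch of a simulated two-condition experiment (the numbers and the effect size are hypothetical, not from the course): the IV is the condition a participant is assigned to, and the DV is the score we observe.

```python
import random

random.seed(440)  # fixed seed so the illustration is reproducible

# IV: condition (0 = control, 1 = treatment); DV: the observed score.
# Hypothetical effect: treatment adds about 5 points on average.
def observe_dv(condition):
    individual_variation = random.gauss(50, 10)
    treatment_effect = 5 if condition == 1 else 0  # effect of the manipulated IV
    return individual_variation + treatment_effect

control = [observe_dv(0) for _ in range(30)]
treatment = [observe_dv(1) for _ in range(30)]

print(f"Control mean:   {sum(control) / len(control):.1f}")
print(f"Treatment mean: {sum(treatment) / len(treatment):.1f}")
```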

7 Internal and External Validity Internal validity refers to the extent to which a study is designed in a way that allows a causal relation to be inferred. Threats to internal validity raise questions about alternative explanations for an apparent association between the IV and DV. External validity refers to the generalizability of the findings beyond the experimental context (e.g., to other persons, settings, assessment devices, etc.).

8 Threats to Internal Validity History, Maturation, Testing, Instrumentation, Statistical Regression, Selection Biases, Attrition, Diffusion of Treatment

9 History Any event other than the intervention occurring at the time of the experiment that could influence the results. Example in intervention research: Participant is prescribed medication during the time frame of the psychosocial treatment Other examples? How can this threat be ruled out or reduced by the experimental design?

10 Maturation Any change over time that may result from processes within the subject (as opposed to the IV) Example: Client learns how to read more effectively, so starts behaving better during reading instruction. Other examples? How can this threat be ruled out or reduced by the experimental design?

11 Testing Any change that may be attributed to effects of repeated assessment Example: Client gets tired of filling out weekly symptom checklist measures, and just starts circling all 1’s or responding randomly. Other examples? How can this threat be ruled out or reduced by the experimental design?

12 Instrumentation Any change that takes place in the measuring instrument or assessment procedure over time. Example: Teacher’s report of number of disruptive incidents drifts over time, holding the student to a higher (or lower) standard than before. Other examples? How can this threat be ruled out or reduced by the experimental design?

13 Statistical Regression Any change from one assessment occasion to another that might be due to a reversion of scores toward the mean. Example: Clients are selected to be in a depression group based on high scores on a screening measure for depression. When their scores (on average) go down after the intervention, this could be due simply to statistical regression (more on this one later in the course). How can this threat be ruled out or reduced by the experimental design?
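
A short simulation can show why regression toward the mean happens even when no intervention occurs. In this hedged sketch (the scores, cutoff, and noise levels are all hypothetical), each person has a stable true depression level plus fresh measurement noise at each assessment; selecting people who screen high and retesting them, with no treatment at all, still pulls the group mean back toward the population mean.

```python
import random

random.seed(1)

N = 10_000
TRUE_MEAN, TRUE_SD, NOISE_SD = 20.0, 5.0, 5.0
CUTOFF = 28.0  # hypothetical screening cutoff for the "depressed" group

# Each person has a stable true score; each assessment adds new noise.
true_scores = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
screening = [t + random.gauss(0, NOISE_SD) for t in true_scores]
retest = [t + random.gauss(0, NOISE_SD) for t in true_scores]  # no treatment given

selected = [i for i, s in enumerate(screening) if s >= CUTOFF]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Selected group, screening mean: {mean([screening[i] for i in selected]):.1f}")
print(f"Selected group, retest mean:    {mean([retest[i] for i in selected]):.1f}")
# The retest mean falls back toward 20 even though nothing was done.
```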

14 Selection Biases Any differences between groups that are due to the differential selection or assignment of subjects to groups. Example: Teachers volunteer to have their classes get social skills lessons, and their students are compared to students in classrooms where the teachers did not volunteer (both teacher and student effects may be present). Other examples? How can this threat be ruled out or reduced by the experimental design?

15 Attrition Any change in overall scores between groups or in a given group over time that may be attributed to the loss of some of the participants. Example: Clients who drop out of treatment are not included in the posttest assessment, which may inflate the treatment group's posttest score. Other examples? How can this threat be ruled out or reduced by the experimental design?

16 Diffusion of Treatment The intervention is inadvertently provided to part or all of the control group, or during times when the treatment should not be in effect. Example: Teacher starts a token economy before finishing the collection of baseline data. Other examples? How can this threat be ruled out or reduced by the experimental design?

17 Internal validity and single-subject designs In single-subject research, the participant is compared to him/herself under different conditions (rather than comparing groups). The participant is his/her own control. Selection biases and attrition are automatically ruled out by these designs. Well-designed single-subject experiments can rule out (or reduce) history, maturation, testing, instrumentation, and statistical regression.

18 Threats to External Validity Generality Across Participants, Settings, Response Measures, Times, and Behavior Change Agents; Reactive Experimental Arrangements; Reactive Assessment; Pretest Sensitization; Multiple Treatment Interference

19 Generality Across Subjects, Settings, Responses, & Times Results do not extend to participants, settings, behavioral responses, and times other than those included in the investigation Example: Couple uses effective communication skills in session, but not at home. Other examples? How can this threat be ruled out or reduced by the experimental design?

20 Generality Across Behavior Change Agents Intervention results do not extend to other persons who can administer the intervention (special case of the previous item). Example: Parents are able to use behavior modification techniques successfully but child care providers are not (the child responds differently to different persons). Other examples? How can this kind of threat to external validity be ruled out or reduced?

21 Reactive Experimental Arrangements Participants may be influenced by their awareness that they are participating in an experiment or special program (demand characteristics) Example: Social validity of treatment is enhanced by the association with a university Other examples? How can this threat be ruled out or reduced by the experimental design?

22 Reactive Assessment Participants' awareness that their behavior is being assessed may influence how they respond (special case of reactive arrangements). Example: Child complies with adult commands when the experimenter is observing, but not at other times. Other examples? How can this threat be ruled out or reduced by the experimental design?

23 Pretest Sensitization Assessing participants before treatment sensitizes them to the intervention that follows, so they are affected differently by the intervention than persons not given the pretest. Example: A pretest administered before a parenting group leads participants to pay closer attention to the material and learn more. Other examples? How can this threat be ruled out or reduced by the experimental design?

24 Multiple Treatment Interference When the same participant(s) are exposed to more than one treatment, the conclusions reached about a particular treatment may be restricted. Example: Clients are getting pastoral counseling at church and CBT at a mental health center. Other examples? How can this threat be ruled out or reduced by the experimental design?

25 Evaluating a Research Design No study is perfect, but some studies are better than others. One way to evaluate an experimental design is to ask the question: How well does the design minimize threats to internal and external validity? (In other words, how strong a causal claim does it support, and how generalizable are the results?)

26 Internal/External Validity Trade-Off Many designs that are well controlled (good internal validity) are more prone to problems with external validity (generality across settings, behavioral responses, interventionists, etc. may be more limited).

27 Random selection/assignment Random: Chance of being selected is equal for each participant and not biased by any systematic factor Group designs can reduce many of the threats to internal validity by using random assignment of participants to conditions. They can (in theory) also limit some threats to external validity by using a truly random sample of participants (but how often do you actually see this?)
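
As a sketch of what random assignment can look like in practice (the participant IDs are hypothetical), shuffling the full pool and splitting it in half gives every participant the same chance of landing in either condition:

```python
import random

# Hypothetical participant IDs; in practice these come from the study roster.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)           # every ordering is equally likely
half = len(participants) // 2
treatment_group = participants[:half]  # assignment unbiased by any systematic factor
control_group = participants[half:]

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```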

28 Single-Subject Designs More modest in generalizability claims. Can be very strong (even perfect) in reducing threats to internal validity. Examples: Reversal (ABAB), Multiple Baseline, Changing Criterion, Multiple Treatment

29 Reversal Design (ABAB) A = Baseline; B = Treatment; A = Return to Baseline; B = Reintroduce Treatment

30 Reversal Design Baseline: A stable baseline allows stronger causal inferences to be drawn. Stability refers to a lack of slope (trend) and low variability (show examples on white board).

31 ABAB Design: Baseline If the trend is in the reverse direction from the expected intervention effect, that's OK. If the trend is not too steep and a very strong effect is expected, that may be OK. For a reversal design, relatively low variability makes it easier to draw strong conclusions.
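
One way to make these stability criteria concrete is to compute the baseline's slope and variability directly. The sketch below (illustrative data; Kazdin does not prescribe this exact procedure or any fixed thresholds) fits an ordinary least-squares line to the baseline sessions: a near-zero slope and a small standard deviation relative to the mean suggest a stable baseline.

```python
def baseline_stability(observations):
    """Return (slope, sd) for a series of equally spaced baseline observations."""
    n = len(observations)
    mean_x = (n - 1) / 2
    mean_y = sum(observations) / n
    # Ordinary least-squares slope: cov(x, y) / var(x), with session number as x.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(observations))
    var_x = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov_xy / var_x
    sd = (sum((y - mean_y) ** 2 for y in observations) / n) ** 0.5
    return slope, sd

# Hypothetical counts of disruptive incidents across 6 baseline sessions.
slope, sd = baseline_stability([7, 8, 6, 7, 8, 7])
print(f"slope = {slope:.2f} per session, sd = {sd:.2f}")  # flat, low variability
```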

32 Threats to internal validity? History & maturation (return to baseline rules these out); testing, instrumentation, and statistical regression (return to baseline rules these out, but use of reliable measures and trained & monitored observers strengthens the design); selection bias & attrition (not an issue with single-subject research!); diffusion of treatment (a concern, but can be controlled in some cases).

33 What about generalizability? Can't easily generalize beyond the case. Need to replicate under different circumstances and with different participants, or follow up with a group design.

34 Disadvantages of Reversal Design Diffusion of treatment: if the intervention "really worked," removal of the intervention should not result in a return to baseline behavior! Ethical concerns: if the behavior change is clinically significant, it may be unethical to remove it, even temporarily!

35 Multiple Baseline Design Collect baseline data on more than one dependent measure simultaneously. Introduce interventions targeting each dependent variable in succession.

36 Multiple Baseline Design Each baseline serves as a control for the other interventions being tested (each DV is a "mini" AB experiment built on the preceding baseline). May be done with one participant when multiple behaviors are targeted, or with multiple participants, each receiving the same intervention in succession.

37 Example with one participant Multiple baseline design across behaviors to increase assertive communication: increase eye contact, increase speech volume, increase number of requests made.

38 Example across participants More than one client receives the same intervention, with baseline data collection beginning for all at the same time and the intervention introduced in succession, so that each participant serves as a control for the others. Example from textbook: intervention to increase appropriate mealtime behavior in preschoolers.

39 Example across situations, settings, and time Measure the target behavior in multiple settings, and introduce the intervention into each setting in succession, while collecting baseline data in all settings.
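
A hedged simulation of the multiple baseline logic (the session counts, levels, and start points are all hypothetical): each series receives the same intervention, but at a staggered session. If each series changes only when its own intervention begins, history and maturation become implausible explanations.

```python
import random

random.seed(7)
SESSIONS = 15
# Hypothetical staggered start sessions, one per targeted behavior.
starts = {"eye contact": 4, "speech volume": 8, "requests made": 12}

def simulate(start):
    """Baseline level around 2 per session; intervention raises it to about 6."""
    return [
        max(0, round(random.gauss(6 if session >= start else 2, 0.8)))
        for session in range(SESSIONS)
    ]

for behavior, start in starts.items():
    series = simulate(start)
    print(f"{behavior:>13} (intervention at session {start:2d}):", series)
# Each series should jump only at its own start session; the still-untreated
# baselines act as controls for the one that just received the intervention.
```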

40 Advantages of Multiple Baseline + No need to remove the intervention. + Interventions can be added gradually (practical utility in clinical settings).

41 Disadvantages of Multiple Baseline - Interdependence of baselines (change in one may result in change in another). - Inconsistent effects of interventions (some are followed by changes in behavior and others are not). - Prolonged baselines (ethical & methodological concerns).

42 Changing Criterion Designs Intervention phase requires increasing levels of performance at specified times. If the performance level increases as expected over several intervals, this provides good evidence for a causal relationship between intervention and outcome.
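
As an illustrative sketch (the criterion schedule and scores are hypothetical), one way to summarize a changing criterion experiment is to compare each phase's mean performance to that phase's criterion; close tracking across several stepwise increases is the pattern that supports a causal inference.

```python
# Hypothetical changing-criterion data: (criterion level, observed performance).
phases = [
    (5, [5, 6, 5, 5]),
    (8, [7, 8, 8, 9]),
    (11, [11, 10, 12, 11]),
    (14, [14, 13, 14, 15]),
]

for criterion, scores in phases:
    phase_mean = sum(scores) / len(scores)
    tracks = abs(phase_mean - criterion) <= 1  # tolerance chosen for this sketch
    print(f"criterion = {criterion:2d}  phase mean = {phase_mean:5.2f}  tracks: {tracks}")
# Performance stepping up in lockstep with each new criterion is the
# evidence pattern for a causal relationship.
```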

43 Multiple Treatment Designs More than one treatment is implemented during the same phase, in a manner that allows the effects of the treatments to be compared to each other.
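
A minimal sketch of how such a comparison might be summarized (the treatment labels and scores are hypothetical): tag each session with the treatment in effect, then compare mean outcomes per treatment within the same participant.

```python
from collections import defaultdict

# Hypothetical alternating-treatments log: (treatment label, outcome score).
session_log = [
    ("A", 6), ("B", 4), ("A", 7), ("B", 5),
    ("B", 4), ("A", 8), ("A", 7), ("B", 6),
]

outcomes = defaultdict(list)
for treatment, score in session_log:
    outcomes[treatment].append(score)

for treatment in sorted(outcomes):
    scores = outcomes[treatment]
    print(f"Treatment {treatment}: mean outcome {sum(scores) / len(scores):.2f} "
          f"across {len(scores)} sessions")
# Both treatments go to the same participant, so multiple treatment
# interference (slide 24) still limits how far the conclusions extend.
```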

