WITHDRAWAL—THE PAIN, THE AGONY—THE TRIUMPH! Tristram Jones, Ph.D. PS 512 Unit III Kaplan University.



What the heck is WITHDRAWAL in research? To find out we must turn to the expert researchers! NO, NOT THOSE EXPERTS!! It’s often called a “REVERSAL” design, but “withdrawal” sounds sexier! It is a cross-disciplinary design applied as a basic experimental method for demonstrating treatment effects! That may sound a bit lame, but think of it as a good way to demonstrate that TREATMENT is what is actually affecting the DEPENDENT VARIABLE!

“Vait a minute, that sounds dull! Vott on earth is being WITHDRAWN??” Good question, Dr. Ruth! What is being withdrawn is TREATMENT! “But ach du lieber, I thought ziss was to proof treatment vorks!!???” Right! That’s exactly what it does! “Vell, dotz the most dummkopf ting I effer heard!” Well, maybe it would help if I explained! “Forget it, you are giffing me ein headache!!!”

Well, here’s the explanation anyway! The reason you can prove treatment works by withdrawing it is that repeatedly doing so tests the efficacy of the IV against all plausible extraneous variables! The simplest way to illustrate the concept of withdrawal is an ABA DESIGN! Here is what this looks like:
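As a minimal sketch with invented numbers (the behavior, phase lengths, and values are all hypothetical), an ABA record might look like this, and reading it is just a matter of comparing the level of the behavior in each phase:

```python
# Hypothetical ABA data: out-of-seat incidents per session.
# A1 = baseline, B = treatment applied, A2 = treatment withdrawn.
phases = {
    "A1 (baseline)":   [8, 9, 7, 8, 9],
    "B  (treatment)":  [3, 2, 2, 1, 2],
    "A2 (withdrawal)": [7, 8, 8, 9, 7],
}

def phase_mean(values):
    """Mean level of the behavior within one phase."""
    return sum(values) / len(values)

for label, values in phases.items():
    print(f"{label}: mean = {phase_mean(values):.1f}")
# Behavior drops during B and returns toward baseline during A2:
# no treatment, no effect.
```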

“Well, that doesn’t prove anything! You withdrew and the subject calmed down! Big deal! Happens all the time!” No, no, no, Dr. Kinsey, we withdrew TREATMENT and the chart returned to the baseline measurement—no treatment, no effect! Borrrr-RING! But even your dull, lackluster experiment could still have a zillion validity flaws! Maybe your subject just got bored! No, Dr. Kinsey! See, we keep repeating and repeating the process to see the effect! NOW you’re talking!

So, you can have an ABAB result that looks like this: No, I don’t know who Bob is, but whoever he is, the IV is getting him to remain in his seat! Get the idea?

To explain further: The beauty of the single-subject withdrawal design is that it rules out history and maturation as internal validity confounds! And the more times the design is repeated, the more certain the exclusion of those concerns becomes! You have probably done this yourself, in some respect!

Almost all single-subject research design (SSRD) has the inbuilt genius of using the subject as its own control, usually through repetition! In ABAB withdrawal designs the repetition comes from the continued application and withdrawal of the IV, until the certitude that the DV is responding to the IV (and to the absence of the IV) becomes unarguably greater than chance!
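Since the argument leans on the effect being "unarguably greater than chance," it may help to see one simple index single-case researchers actually use for this: Percentage of Nonoverlapping Data (PND), the share of treatment-phase points that fall beyond the most extreme baseline point. The function is a minimal sketch, and the ABAB numbers for "Bob" are invented for illustration:

```python
def pnd(baseline, treatment, goal="decrease"):
    """Percentage of Nonoverlapping Data: percent of treatment-phase
    points more extreme than the most extreme baseline point."""
    if goal == "decrease":
        threshold = min(baseline)
        nonoverlap = [x for x in treatment if x < threshold]
    else:
        threshold = max(baseline)
        nonoverlap = [x for x in treatment if x > threshold]
    return 100 * len(nonoverlap) / len(treatment)

# Hypothetical ABAB data: Bob's out-of-seat episodes per day.
a1, b1 = [8, 9, 7, 8], [3, 2, 2, 1]
a2, b2 = [7, 8, 8, 9], [2, 1, 2, 2]
print(pnd(a1, b1))  # every B1 point is below the lowest A1 point
print(pnd(a2, b2))  # and the effect replicates after the second withdrawal
```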

“Do you mean to tell us that WITHDRAWAL is foolproof? Everyone knows that’s just ridiculous!” That’s right, Masters and Johnson! Here are some definite drawbacks to withdrawal! THE ETHICAL OBJECTIONS CAN BE HUGE! AND THEN THERE IS RESENTFUL DEMORALIZATION! AND IT DOESN’T WORK IF THE TREATMENT EFFECT IS NATURALLY PROLONGED!

“Can we take that one example at a time?” Let’s start with ethics! THE ETHICAL OBJECTIONS CAN BE HUGE! MY PLAN IS SIMPLE—WE STOP THE TREATMENT AND SEE IF YOU DIE!

AND THEN THERE IS RESENTFUL DEMORALIZATION! FIRST THEY MAKE ME HAPPY, THEN THEY MAKE ME SAD, THEN THEY MAKE ME HAPPY, THEN THEY MAKE ME SAD…YOU KNOW WHAT? I’M ACTING BORED NO MATTER WHAT THEY DO!!!!

DOESN’T WORK IF TREATMENT EFFECT IS NATURALLY PROLONGED! I’M NOT LIKING THIS DOCTOR! CAN YOU WITHDRAW TREATMENT? NOT SO MUCH!

So what are maturation and history confounds? They are typical threats to INTERNAL VALIDITY! Internal validity essentially means that you are measuring what you think you’re measuring! WOW! THIS DIET SUCKS! A CLASSIC HISTORY CONFOUND!

Maturation confounds: Normal development throws your measurements off! The Hazards of Longitudinal IQ Testing!

INTERNAL VALIDITY The most common “Internal Validity” threats are:
- History
- Maturation
- Testing: scores improve on the posttest owing to the pretest
- Instrumentation: your instrument is crummy
- Selection: group differences exist prior to treatment
- Attrition: subjects drop out and representativeness erodes
- Statistical regression: extreme scores drift back toward the mean

Timing can also be a problem, and thus we have the MULTIPLE BASELINE DESIGN! If we’d only tested subject A, we might think the treatment effected a change. Subjects B and C, however, show that the treatment actually had no effect on the behavioral change. Rather, the behavior changed at the same time for each subject regardless of when we applied the IV! This time-lagged multiple baseline design makes it obvious, but if we had only done an AB test on subject A, we would think we were geniuses!
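The time-lag logic can be written as a tiny check. The subjects, sessions, and numbers below are hypothetical; the point is only that staggering the IV across subjects lets you see whether the behavior change tracks the IV or tracks the calendar:

```python
# Hypothetical multiple-baseline records (all numbers invented):
# the session at which the IV was introduced for each subject,
# and the session at which the behavior actually changed.
records = {
    "Subject A": {"iv_start": 5,  "change_at": 9},
    "Subject B": {"iv_start": 9,  "change_at": 9},
    "Subject C": {"iv_start": 13, "change_at": 9},
}

for name, r in records.items():
    if r["change_at"] < r["iv_start"]:
        verdict = "changed BEFORE treatment: history confound!"
    else:
        verdict = "change follows the IV"
    print(f"{name}: IV at session {r['iv_start']}, "
          f"change at session {r['change_at']} -> {verdict}")

# All three subjects changed at session 9 no matter when the IV arrived,
# so some outside event, not the treatment, moved the behavior.
```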

More to the point: the B-A-B design begins with the intervention and then introduces the withdrawal; this is useful when there is no ethical opportunity to collect baseline data.

Problems with BAB? While it can be (somewhat lamely) claimed that a BAB design is good because it ends with the treatment phase intact, it is a weak design because it proceeds without a pre-intervention assessment of the target behavior. Thus, its ethical superiority is its main asset!

What about when you want to try a lot of different stuff? There’s the ABC (changing conditions) design, which is crummy because you cannot really verify the effect of the IV on the DV, but good old Alberto and Troutman figured this out and came up with reinstating the baseline before each new treatment: sheer genius, right?

And when you just can’t decide… There is the A-B-A-C method!
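As a sketch of how an A-B-A-C record might be read (all values hypothetical), each treatment phase is compared against the baseline reinstated just before it, so treatments B and C each get their own fair, if separate, comparison:

```python
# Hypothetical A-B-A-C data: disruptive outbursts per session.
sequence = [
    ("A1", [9, 8, 9, 8]),   # baseline
    ("B",  [5, 4, 4, 5]),   # treatment 1
    ("A2", [8, 9, 8, 8]),   # baseline reinstated before the next treatment
    ("C",  [2, 1, 2, 1]),   # treatment 2
]

means = {label: sum(v) / len(v) for label, v in sequence}
for label, m in means.items():
    print(f"{label}: mean = {m:.2f}")

# Judge each treatment against the baseline immediately preceding it.
drop_b = means["A1"] - means["B"]
drop_c = means["A2"] - means["C"]
print(f"Drop under B: {drop_b:.2f}, drop under C: {drop_c:.2f}")
```

Note this still does not let you compare B and C against each other directly (order and carryover effects get in the way); it only shows each one against its own baseline.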

While we’re at it, what is EXTERNAL VALIDITY? The extent to which your findings will generalize to your target population! Obviously, screwed-up INTERNAL VALIDITY will often affect EXTERNAL VALIDITY. Sometimes adjustments (like sample homogeneity) that promote internal validity will mess up external validity! Don’t worry about this too much!

Horner et al. (2005) insist that: “External validity of results from single- subject research is enhanced through replication of the effects across different participants, different conditions, and/or different measures of the dependent variable.” (But not all that much!)

“And that’s all I know about withdrawal, Jenny!” Is it always manipulative—is it ever ethical?