Chapter 12 Single-Case Evaluation Designs PowerPoint presentation developed by: Sarah E. Bledsoe & Jennifer Manuel
Overview
Introduction to Single-Case Evaluation Designs
Logic of Single-Case Designs
Single-Case Designs in Social Work Practice
Single-Case Designs as Part of Evidence-Based Practice
Measurement Issues
Data Gathering
Alternative Single-Case Designs
Data Analysis
Introduction to Single-Case Evaluation Designs
Single-case evaluation is a type of time-series design applied to individual cases or systems
Identifying stable trends through repeated measures enhances internal validity
The researcher can identify when change in the dependent variable occurs and whether that change coincides with change in the independent variable
A pattern of coincidence may be established that makes alternative explanations unlikely
Logic of Single-Case Designs
Control phase: a baseline (repeated measures of the outcome) is obtained for the target problem
Experimental phase: an intervention is introduced and repeated outcome measures continue
Data from the control and experimental phases are examined for coinciding shifts and trends, supporting inferences about the effectiveness of the intervention
Single-Case Designs in Social Work
Distinguishing feature: a sample size of 1
Chief limitation: dubious external validity
High internal validity makes single-case designs a useful tool for identifying promising interventions for testing in subsequent studies
Replication results in the accumulation of evidence to support generalizability
Advances the scientific basis of an intervention
Useful in evaluating an agency or program
Use of Single-Case Designs as Part of Evidence-Based Practice
A rigorous method for implementing the evidence-based practice process
Allows the practitioner to monitor client progress
Obstacles:
Client crises may not allow practitioners to collect sufficient baseline data
Heavy caseloads increase the difficulty of collecting repeated measures
Peers and supervisors may not recognize the value of single-case research
Clients may resent extensive monitoring
Measurement Issues
Operationally defining target problems and goals
Choosing what to measure
Using triangulation of measurement
Operationally Defining Target Problems and Goals
Identify the specific indicators of the target problem
May require selecting and observing indirect indicators (e.g., symptoms of the target problem of depression)
Both negative and positive indicators can be observed
Goals should include the direction of change in indicators: to increase positive indicators and/or decrease negative indicators
What to Measure
Operational indicators should occur frequently enough to be measured on a regular basis
Infrequent events are not useful outcomes in single-case designs
Triangulation
Use of more than one imperfect data-collection alternative, each vulnerable to different potential sources of error
For example, instead of relying only on a client’s self-report of a particular target behavior, a significant other (teacher, cottage parent, and so on) is asked to monitor the behavior as well
Maximizes the chance that hypothesized changes in the dependent variable will be detected
Data Gathering
Potential data sources:
Available records
Interviews
Self-report scales
Direct observation of behavior
Who Should Measure?
Practitioner: risk of observer bias
Clients: risk of bias to please themselves, appear socially desirable, or avoid disappointing the practitioner
Significant others: neither objectivity nor commitment to the study is guaranteed
Triangulation provides the opportunity for assessing measurement reliability
Sources of Data
Available records: may be useful in obtaining a retrospective baseline, but their reliability may be questionable
Self-report scales: convenient and brief, and they ensure uniformity; however, clients may lose interest in repeating scales, and responses are vulnerable to social desirability bias
Reliability and Validity
Affected by differences between how instruments are tested and how they are used in single-case designs
In validity and reliability studies:
The respondent is anonymous, with no special, ongoing relationship with the researcher
The instrument is not completed more than once or twice
The score has no bearing on whether the respondent is benefiting from services
In single-case experiments:
The respondent is not anonymous and has a special relationship with the service provider
Answers may become less valid with repetition
Differences between the baseline and intervention phases may be apparent to the respondent
Direct Behavioral Observation
Time may be an obstacle for busy practitioners
Resources for an additional observer may be unavailable
Many target problems must be observed outside of visits with the practitioner
Client observation (self-monitoring) is vulnerable to measurement bias and research reactivity
Unobtrusive vs. Obtrusive Observation
Unobtrusive observation: the participant does not notice the observation and is less influenced to behave in socially desirable ways or in ways that meet experimenter expectancies
Obtrusive observation: the participant is keenly aware of being observed and may be predisposed to behave in socially desirable ways or in ways that meet experimenter expectancies
Data Quantification Procedures
Direct observation can be quantified in terms of frequency, duration, or magnitude
Interval recording
Spot-check recording
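The quantification options above can be illustrated with a short sketch. The session log, episode format, and interval width below are hypothetical illustrations, not part of the chapter; the partial-interval rule shown (score an interval 1 if the behavior occurred at any point during it) is one common way to implement interval recording.

```python
# Hypothetical session log of a target behavior: each episode is recorded
# as (start_minute, end_minute, magnitude).
episodes = [(2.0, 3.5, 4), (10.0, 10.5, 2), (27.0, 30.0, 5)]

frequency = len(episodes)                                  # how often it occurred
duration = sum(end - start for start, end, _ in episodes)  # total minutes
magnitude = sum(m for *_, m in episodes) / frequency       # average intensity

# Partial-interval recording: divide a 30-minute session into 5-minute
# intervals and score each interval 1 if the behavior occurred at any point.
def interval_record(episodes, session_len=30, width=5):
    scores = []
    for lo in range(0, session_len, width):
        hi = lo + width
        occurred = any(start < hi and end > lo for start, end, _ in episodes)
        scores.append(1 if occurred else 0)
    return scores

print(frequency, round(duration, 1), round(magnitude, 2))  # 3 5.0 3.67
print(interval_record(episodes))                           # [1, 0, 1, 0, 0, 1]
```

Frequency, duration, and magnitude answer different questions about the same behavior, which is why the choice among them depends on how the target problem was operationally defined.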
The Baseline Phase
Internal validity is enhanced when the baseline has enough measurement points to show a stable trend and to establish that extraneous events are unlikely to coincide with the onset of intervention
Five to ten measurements should be planned
The severity of the client's problem may interfere with collecting an adequate baseline
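One simple way to check whether a baseline is roughly flat is to fit a least-squares slope to the repeated measures: a slope near zero suggests a stable trend. This is a minimal sketch with hypothetical weekly counts, not a procedure prescribed by the chapter.

```python
def slope(ys):
    """Ordinary least-squares slope of the measurements against time 0..n-1."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical baseline: 7 weekly counts, within the 5-to-10 guideline.
baseline = [7, 8, 6, 7, 8, 7, 6]
print(round(slope(baseline), 2))  # -0.11 — close to zero, i.e. roughly flat
```

A markedly nonzero slope would signal an improving or deteriorating trend during baseline, which complicates attributing any later change to the intervention.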
Alternative Single-Case Designs
AB: the basic single-case design
ABAB: withdrawal/reversal design
Multiple-baseline design
Multiple-component design
AB: The Basic Single-Case Design
ABAB: Withdrawal/Reversal Design
Multiple-Baseline Designs
Multiple-Component Designs
Data Analysis
Three questions should be asked:
1. Is there a visual pattern that depicts a series of coincidences in which the frequency, level, or trend of the target problem changes only after the intervention is introduced or withdrawn?
2. What is the statistical probability that the data observed during the intervention phase(s) are part of the normal, chance fluctuations in the target problem?
3. If change in the target problem is associated with the tested intervention, is the amount of change important from a substantive, or clinical, standpoint?
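The second question can be sketched with one common single-case analysis technique, the two-standard-deviation-band approach: intervention-phase points falling outside mean ± 2 SD of the baseline are unlikely to be mere chance fluctuation. The data below are hypothetical, and this is only one of several analytic options.

```python
from statistics import mean, stdev

# Hypothetical weekly frequencies of the target problem.
baseline = [8, 7, 9, 8, 7, 8, 9]       # phase A (before intervention)
intervention = [6, 5, 4, 4, 3, 3, 2]   # phase B (after intervention begins)

# Band of normal, chance fluctuation: baseline mean plus or minus 2 SD.
m, sd = mean(baseline), stdev(baseline)
lower, upper = m - 2 * sd, m + 2 * sd

outside = [y for y in intervention if y < lower or y > upper]
print(f"band: {lower:.2f} to {upper:.2f}")
print(f"{len(outside)} of {len(intervention)} intervention points fall outside the band")
```

Here every intervention point falls below the band, which would support (but not by itself prove) the inference that the shift is more than chance fluctuation; the first and third questions (visual pattern, clinical significance) still have to be answered.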