
1 Modeling “The Cause”: Assessing Implementation Fidelity and Achieved Relative Strength in RCTs David S. Cordray Vanderbilt University IES/NCER Summer Research Training Institute, 2010

2 Overview
Define implementation fidelity and achieved relative strength
A 4-step approach to the assessment and analysis of implementation fidelity (IF) and achieved relative strength (ARS):
–Model(s)-based
–Quality measures of core causal components
–Creating indices
–Integrating implementation assessments with models of effects (next week)
Strategy:
–Describe each step
–Illustrate with an example
–Planned Q&A segments

3 Caveat and Precondition
Caveat:
–The black box (ITT model) is still the #1 priority; implementation fidelity and achieved relative strength are supplemental to ITT-based results.
Precondition:
–We consider implementation fidelity in RCTs that are conducted on mature (enough) interventions.
–That is, the intervention is stable enough to describe an underlying model/theory of change and operational (logic and context) models.

4 Dimensions of Intervention Fidelity
Operative definitions:
–True fidelity = adherence or compliance: program components are delivered/used/received as prescribed, with stated criteria for success or full adherence. The specification of these criteria is relatively rare.
–Intervention exposure: the amount of program content, processes, and activities delivered to/received by all participants (aka receipt, responsiveness). This notion is the most prevalent.
–Intervention differentiation: the unique features of the intervention are distinguishable from other programs, including the control condition. A unique application within RCTs.

5 Linking Intervention Fidelity Assessment to Contemporary Models of Causality
Rubin's Causal Model:
–The true causal effect of X for unit i is $Y_i^{Tx} - Y_i^{C}$.
–In RCTs, the difference between outcomes, on average, is the causal effect.
Fidelity assessment within RCTs also entails examining the difference between causal components in the intervention and control conditions.
Differencing causal conditions can be characterized as the achieved relative strength of the contrast:
–Achieved Relative Strength (ARS) = $t_{Tx} - t_{C}$
–ARS is a default index of fidelity.
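A compact restatement of the two contrasts on this slide; the symbol $\tau$ is introduced here only as shorthand for the causal effect and is not on the original slide:

```latex
% Unit-level causal effect (Rubin) and its average RCT estimate:
\tau_i = Y_i^{Tx} - Y_i^{C}, \qquad
\widehat{\tau} = \bar{Y}^{Tx} - \bar{Y}^{C}
% The same differencing, applied to the measured causal
% components t rather than the outcomes Y, gives the
% achieved relative strength of the contrast:
\mathrm{ARS} = t_{Tx} - t_{C}
```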

6 Achieved Relative Strength
[Figure: treatment strength (vertical axis, 0.00–0.45) plotted against outcomes (50–100). Expected relative strength = $T_{Tx} - T_{C}$ = (0.40 − 0.15) = 0.25; achieved relative strength = $t_{Tx} - t_{C}$ = 0.15. The shortfall is labeled "infidelity": (85) − (70) = 15.]

7 Why is this Important?
Statistical conclusion validity
Construct validity:
–Which is the cause? $(T_{Tx} - T_{C})$ or $(t_{Tx} - t_{C})$?
–Poor implementation: essential elements of the treatment are incompletely implemented.
–Contamination: essential elements of the treatment are found in the control condition (to varying degrees).
–Pre-existing similarities between T and C on intervention components.
External validity: generalization is about $(t_{Tx} - t_{C})$.
–This difference needs to be known for proper generalization and future specification of the intervention components.

8 So what is the cause? …The achieved relative difference in conditions across components
[Figure: component-level contrasts showing infidelity in the treatment group and augmentation of the control group. PD = professional development; Asmt = formative assessment; Diff Inst = differentiated instruction.]

9 Time-out for Questions

10 In Practice…
Step 1: Identify core components in the intervention group
–e.g., via a model of change
–Establish benchmarks (if possible) for $T_{Tx}$ and $T_{C}$
Step 2: Measure core components to derive $t_{Tx}$ and $t_{C}$
–e.g., via a "logic model" based on the model of change
Step 3: Derive indicators
Step 4: Incorporate indicators of IF and ARS into the analysis of effects

11 Focused assessment is needed
What are the options?
(1) Essential or core components (activities, processes);
(2) Necessary, but not unique, activities, processes, and structures (supporting the essential components of T); and
(3) Ordinary features of the setting (shared with the control group).
Focus on (1) and (2).

12 Step 1: Specifying Intervention Models
Simple version of the question: What was intended?
Interventions are generally multi-component sequences of actions.
Mature-enough interventions are specifiable as:
–A conceptual model of change
–An intervention-specific model
–A context-specific model
Start with a specific example → the MAP RCT

13 Example: The Measures of Academic Progress (MAP) RCT
The Northwest Evaluation Association (NWEA) developed the Measures of Academic Progress (MAP) program to enhance student achievement.
Used in 2,000+ school districts and 17,500 schools.
No evidence of efficacy or effectiveness.

14 Measures of Academic Progress (MAP): Model of Change
MAP intervention: 4 days of training; on-demand consultation; formative testing; student reports; on-line resources
Hypothesized chain: MAP intervention → formative assessment → differentiated instruction → achievement
Implementation issues:
–Delivery: NWEA trainers
–Receipt: teachers and school leaders
–Enactment: teachers
–Outcomes: students

15 Logic Model for MAP
Resources: testing system; multiple assessment reports; NWEA trainers; NWEA consultants; on-line teaching resources
Activities: 4 training sessions; follow-up consultation; access to resources
Outputs: use of formative assessment; differentiated instruction
Outcomes: state tests; MAP tests
Impacts: improved student achievement
Program-specific implementation fidelity assessments: MAP only.
Comparative implementation assessments: MAP and non-MAP classes.
These assessments are the focus of implementation fidelity and achieved relative strength.

16 Context-Specific Model: MAP
[Figure: academic-year calendar (Aug–May) showing when the major MAP program components and activities occur: four professional development sessions (PD1–PD4), each followed by consultation; data-system use; use of data; changes in differentiated instruction; and state testing in the spring.]
Two points:
1. This tells us when assessments should be undertaken; and
2. It provides a basis for determining the length of the intervention study and the ultimate RCT design.

17 Step 2: Quality Measures of Core Components
Measures of resources, activities, and outputs
Range from simple counts to sophisticated scaling of constructs
Generally involve multiple methods
Multiple indicators for each major component/activity
Reliable scales (3–4 items per sub-scale)

18 Measuring Program-Specific Components
MAP resources (testing system, multiple assessment reports, NWEA trainers, NWEA consultants, on-line teaching resources):
–Criterion: present or absent; source/method: MAP records
MAP activities (4 training sessions, follow-up consultation):
–Criterion: attendance; source/method: MAP records
MAP activities (access to on-line resources):
–Criterion: use; source/method: web records
MAP outputs: measured in both conditions (next slide)

19 Measuring Outputs: Both MAP and Non-MAP Conditions
Use of formative assessment data:
–Method: end-of-year teacher survey
Differentiated instruction:
–Methods: end-of-year teacher survey; observations (3); teacher logs (10)
Criterion: achieved relative strength
Indices:
–Difference in differentiated instruction (high vs. low readiness students)
–Proportion of observation segments with any differentiated instruction

20 Fidelity and ARS Assessment Plan for the MAP Program

21 Step 3: Indexing Fidelity and Achieved Relative Strength
True fidelity: relative to a benchmark
Intervention exposure: amount of sessions, time, frequency
Achieved Relative Strength (ARS) index:
–Standardized difference in the fidelity index across Tx and C
–Based on Hedges' g (Hedges, 2007)
–Corrected for clustering in the classroom
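A minimal sketch of this index, assuming equal cluster sizes and a known intraclass correlation; all names and the example values below are illustrative, not figures from the MAP study:

```python
import math

def ars_index(mean_t, mean_c, sd_pooled, n_t, n_c, avg_cluster_size, icc):
    """Achieved relative strength: standardized Tx-C difference
    in a fidelity measure, with the small-sample correction and
    the cluster adjustment described in Hedges (2007)."""
    n_total = n_t + n_c
    d = (mean_t - mean_c) / sd_pooled
    # Cluster adjustment: shrinks d as the intraclass correlation
    # (icc) and the average cluster size grow.
    cluster_adj = math.sqrt(
        1 - (2 * (avg_cluster_size - 1) * icc) / (n_total - 2))
    # Small-sample correction factor J (approximate form).
    df = n_total - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d * cluster_adj

# Hypothetical fidelity-scale scores for one core component:
ars = ars_index(mean_t=0.72, mean_c=0.41, sd_pooled=0.25,
                n_t=60, n_c=60, avg_cluster_size=20, icc=0.10)
print(f"Achieved relative strength index: {ars:.2f}")
```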

22 Calculating the ARS Index When There Are Multiple Components
[Figure: component-level ARS indices, showing infidelity in the treatment group and augmentation of the control group. PD = professional development; Asmt = formative assessment; Diff Inst = differentiated instruction.]

23 Weighted Achieved Relative Strength
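The slide's figure is not recoverable here, but the idea is a weighted combination of the component-level ARS indices. A minimal sketch of one such scheme; the component values and weights are invented for illustration and the weighting rule (judged importance of each core component) is an assumption, not the study's method:

```python
# Hypothetical component-level ARS indices with illustrative
# importance weights (assumptions, not MAP study values).
components = {
    "professional_development":   {"ars": 1.20, "weight": 0.25},
    "formative_assessment":       {"ars": 0.60, "weight": 0.40},
    "differentiated_instruction": {"ars": 0.05, "weight": 0.35},
}

# Weighted average of component ARS indices.
weighted_sum = sum(c["ars"] * c["weight"] for c in components.values())
total_weight = sum(c["weight"] for c in components.values())
print(f"Weighted ARS: {weighted_sum / total_weight:.2f}")
```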

24 Time-out for Questions

25 Some Program-Specific Results

26 Achieved Relative Strength: Some Results

27 Achieved Relative Strength: Teacher Classroom Behavior

28 Preliminary Conclusions for the MAP Implementation Assessment
The developer (NWEA):
–Complete implementation of resources, training, and consultation
Teachers: program-specific implementation outcomes:
–Variable attendance at, and use of, training sessions
–Moderate use of data and differentiation activities/services
–Training extended through May 2009
Teachers: achieved relative strength:
–No between-group differences in enactment of differentiated instruction

29 Step 4: Indexing the Cause-Effect Linkage
Analysis Type 1: congruity of cause and effect in ITT analyses
–Effect = average difference on outcomes → ES (effect size)
–Cause = average difference in causal components → ARS (achieved relative strength)
–Descriptive reporting of each, separately
Analysis Type 2: variation in implementation fidelity linked to variation in outcomes
–Hierarchy of approaches (ITT → LATE/CACE → regression → descriptive)
TO BE CONTINUED…
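A toy sketch of the lowest rungs of that hierarchy, a descriptive regression of outcomes on a fidelity index, using simulated data; the variable names and coefficients are invented stand-ins, not MAP results, and the regression deliberately ignores the clustering and selection problems that motivate the ITT and LATE/CACE rungs above it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated classroom-level data: a fidelity index in [0, 1]
# and an outcome with an assumed slope of 0.5 plus noise.
n = 200
fidelity = rng.uniform(0, 1, n)
outcome = 0.5 * fidelity + rng.normal(0, 1, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), fidelity])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated fidelity-outcome slope: {beta[1]:.2f}")
```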

30 Questions?

