
1 Achieved Relative Intervention Strength: Models and Methods Chris S. Hulleman David S. Cordray Presentation for the SREE Research Conference Washington, DC March 5, 2010

2 Overview
Conceptual Framework
– Definitions and Importance
– Indexing Fidelity as Achieved Relative Strength (ARS)
Three examples
– Lab and Field Experiments
– Reading First
Practical Considerations and Challenges
Questions and discussion

3 Definitions and Implications
Fidelity
– The extent to which the implemented Tx (t_Tx) was faithful to the intended Tx (T_Tx)
– Measure core intervention components
Achieved Relative Strength (ARS)
– The difference between the implemented causal components in the Tx and C conditions: t_Tx – t_C
– ARS is a default index of fidelity
Implications
– Infidelity reduces construct, external, and statistical conclusion validity
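Restating the definitions above as equations (the shortfall of the achieved difference relative to the expected one is what the next slide labels infidelity):

```latex
\begin{aligned}
\text{Expected Relative Strength:} \quad & \mathrm{ERS} = T_{Tx} - T_{C} \\
\text{Achieved Relative Strength:} \quad & \mathrm{ARS} = t_{Tx} - t_{C}
\end{aligned}
```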

4 Achieved Relative Strength = 0.15
[Figure: treatment strength (.00–.45) for the intended (T_Tx, T_C) and implemented (t_Tx, t_C) conditions plotted against the outcome scale (50–100); the gap between intended and implemented strength is labeled infidelity: (85) – (70) = 15.]
Expected Relative Strength = T_Tx – T_C = 0.40 – 0.15 = 0.25
Achieved Relative Strength = t_Tx – t_C = 0.15

5 Indexing Fidelity as Achieved Relative Strength
Intervention Strength = Treatment – Control
Achieved Relative Strength (ARS) Index
– Standardized difference in the fidelity index across Tx and C
– Based on Hedges' g (Hedges, 2007)
– Corrected for clustering in the classroom (ICCs from .01 to .08)
See Hulleman & Cordray (2009)
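A minimal sketch of how an index of this kind might be computed, assuming the total-variance clustering adjustment described in Hedges (2007); the function, data, and parameter values below are illustrative, not the authors' code:

```python
import numpy as np

def achieved_relative_strength(fid_tx, fid_c, icc=0.05, avg_cluster_size=20):
    """Standardized Tx-vs-C difference in fidelity scores (an ARS-style index).

    fid_tx, fid_c    : individual-level fidelity scores in the Tx and C conditions
    icc              : assumed intraclass correlation for classrooms (e.g., .01-.08)
    avg_cluster_size : assumed average number of students per classroom
    """
    fid_tx, fid_c = np.asarray(fid_tx, float), np.asarray(fid_c, float)
    n_tx, n_c = len(fid_tx), len(fid_c)
    n_total = n_tx + n_c

    # Mean difference standardized by the pooled SD, as in a plain Hedges' g
    pooled_var = ((n_tx - 1) * fid_tx.var(ddof=1) +
                  (n_c - 1) * fid_c.var(ddof=1)) / (n_total - 2)
    d = (fid_tx.mean() - fid_c.mean()) / np.sqrt(pooled_var)

    # Small-sample bias correction (the "g" in Hedges' g)
    j = 1 - 3 / (4 * (n_total - 2) - 1)

    # Clustering adjustment (Hedges, 2007): shrinks the estimate when
    # observations are nested within classrooms rather than independent.
    cluster_adj = np.sqrt(1 - (2 * (avg_cluster_size - 1) * icc) / (n_total - 2))

    return j * d * cluster_adj

# Hypothetical fidelity ratings on a 0-3 participant-responsiveness scale
rng = np.random.default_rng(0)
tx_scores = rng.uniform(0.5, 3.0, size=120)
c_scores = rng.uniform(0.0, 1.0, size=120)
print(achieved_relative_strength(tx_scores, c_scores, icc=0.05, avg_cluster_size=20))
```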

6 Indexing Fidelity
Average
– Mean levels of observed fidelity (t_Tx)
Absolute
– Compare observed fidelity (t_Tx) to the absolute or maximum level of fidelity (T_Tx)
Binary
– Yes/No treatment receipt based on fidelity scores
– Requires selection of a cut-off value
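The three indexing approaches can be summarized in a short sketch (hypothetical scores on the 0–3 responsiveness scale used in the examples that follow; the cut-off value and function name are illustrative):

```python
import numpy as np

def fidelity_indices(scores, max_fidelity=3.0, cutoff=2.0):
    """Summarize observed fidelity (t_Tx or t_C) three ways.

    scores       : individual fidelity scores, e.g., 0-3 responsiveness ratings
    max_fidelity : the intended/maximum level of fidelity (T_Tx)
    cutoff       : illustrative threshold for counting someone as "treated"
    """
    scores = np.asarray(scores, float)
    return {
        "average":  scores.mean(),                  # mean observed fidelity
        "absolute": scores.mean() / max_fidelity,   # observed relative to the maximum
        "binary":   (scores >= cutoff).mean(),      # share of yes/no treatment receipt
    }

print(fidelity_indices([3, 2, 1, 0, 2, 3]))  # average ~1.83, absolute ~0.61, binary ~0.67
```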

7 Assessing Implementation Fidelity in the Lab and in Classrooms: The Case of a Motivation Intervention Examples 1 and 2

8 The Theory of Change
[Path diagram: Manipulated Relevance → Perceived Utility Value → Interest and Performance. Model adapted from Eccles et al. (1983) and Hulleman et al. (2009).]
Fidelity measure: quality of participant responsiveness (0 to 3 scale)

9 Achieved Relative Strength Indices
Observed fidelity: Lab vs. Class contrasts

Index            Lab     Class   Lab – Class
Average   Tx     1.73    0.74
          C      0.00    0.04
          g      2.52    1.32    1.20
Absolute  Tx     0.58    0.25
          C      0.00    0.01
          g      1.72    0.80    0.92
Binary    Tx     0.65    0.15
          C      0.00    0.00
          g      1.88    0.80    1.08

10 Achieved Relative Strength = 1.32
[Figure: observed fidelity in the classroom study on the 0–3 responsiveness scale, with t_C and t_Tx marked against the intended treatment strength T_Tx; the gap between t_Tx and T_Tx represents infidelity.]
Average ARS index: (0.74) – (0.04) = 0.70 raw fidelity difference; standardized (g) = 1.32

11 Assessing Implementation Fidelity in a Large-Scale Policy Intervention: The Case of Reading First Example 3

12 In Education, Intervention Models are Multi-faceted (from Gamse et al., 2008)
Use of research-based reading programs, instructional materials, and assessment, as articulated in the LEA/school application
Teacher professional development in the use of materials and instructional approaches
1) Teacher use of instructional strategies and content based on the five essential components of reading instruction
2) Use of assessments to diagnose student needs and measure progress
3) Classroom organization and supplemental services and materials that support the five essential components

13 From Major Components to Indicators…
[Hierarchy diagram: Major Components → Sub-components → Facets → Indicators.
Major components: Professional Development, Reading Instruction, Support for Struggling Readers, Assessment.
Example branch: Reading Instruction → Instructional Time, Instructional Materials, Instructional Activities/Strategies; Instructional Time → Block, Actual Time; with indicators such as "Scheduled block?" and "Reported time".]

14 Reading First Implementation: Specifying Components and Operationalization

Components                            Sub-components                         Facets   Indicators (I/F)
Reading Instruction                   Instructional Time                     2        2 (1)
                                      Instructional Materials                4        12 (3)
                                      Instructional Activities/Strategies    8        28 (3.5)
Support for Struggling Readers (SR)   Intervention Services                  3        12 (4)
                                      Supports for Struggling Readers        2        16 (8)
                                      Supports for ELL/SPED                  2        5 (2.5)
Assessment                            Selection/Interpretation               5        12 (2.4)
                                      Types of Assessment                    3        9 (3)
                                      Use by Teachers                        1        7 (7)
Professional Development              Improved Reading Instruction           11       67 (6.1)
Total: 4 components                   10 sub-components                      41       170 (4)

Adapted from Moss et al. (2008)

15 Reading First Implementation: Some Results

                                                         Performance Levels (% of Absolute Standard)
Components                 Sub-components                RF           Non-RF       Absolute Standard   ARSI
Reading Instruction        Daily (min.)                  105 (117%)   87 (97%)     90                  0.63
                           Daily in 5 components (min.)  59           50.8         --                  0.35
                           Daily with High Quality
                             practice                    18.13        16.2         --                  0.11
Professional Development   Hours of PD                   25.8         13.7         --                  0.51
                           Five reading dimensions       4.3 (86%)    3.7 (74%)    5                   0.31
Average ARSI                                                                                           0.38

Adapted from Gamse et al. (2008) and Moss et al. (2008)

16 Linking Fidelity to Outcomes

17 ARS: How Big is Big Enough?

                       Effect Size
Study                  Fidelity (ARS)   Outcome
Motivation – Lab       1.88             0.83
Motivation – Field     0.80             0.33
Reading First*         0.35             0.05

*Averaged over 1st, 2nd, and 3rd grades (Gamse et al., 2008).

18 What Do I Do With Fidelity Indices?
Start with:
– Scale construction; aggregation over model sub-components and components
Use as:
– Descriptive analyses
– Causal analyses (Intent-to-Treat: ITT)
– Explanatory (AKA exploratory) analyses, e.g., LATE, instrumental variables, TOT (see the sketch below)
Except for descriptive analyses, most of these approaches are relatively new and not fully tested
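As one illustration of the explanatory analyses listed above, a binary fidelity index can feed a Bloom-style treatment-on-the-treated (TOT) adjustment, in which the ITT effect is rescaled by the achieved Tx–C difference in treatment receipt. This is a minimal sketch under the usual assumption that assignment has no effect on non-receivers; the names and numbers are illustrative:

```python
def treatment_on_treated(itt_effect, receipt_rate_tx, receipt_rate_c=0.0):
    """Bloom-style TOT estimate: scale the ITT effect by the achieved
    difference in treatment receipt (a binary fidelity index)."""
    compliance_gap = receipt_rate_tx - receipt_rate_c
    if compliance_gap <= 0:
        raise ValueError("No achieved difference in receipt; TOT is undefined.")
    return itt_effect / compliance_gap

# Hypothetical: an ITT effect of 0.10 SD with 65% receipt in Tx and 0% in C
# implies a TOT effect of roughly 0.15 SD.
print(treatment_on_treated(itt_effect=0.10, receipt_rate_tx=0.65))
```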

19 In Practice…
Identify core intervention components
– e.g., via a Model of Change
Establish benchmarks for T_Tx and T_C
Measurement
– Determine indicators of core components
– Derive t_Tx and t_C
– Develop scales
– Convert to ARS
Incorporate into intervention analyses
– Multi-level analyses (Justice, Mashburn, Pence, & Wiggins, 2008), as sketched below
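For the final step, a mixed-effects (multi-level) model with classrooms as the grouping factor is one common way to relate observed fidelity to outcomes while respecting the nesting of students in classrooms. A minimal sketch using statsmodels; the toy data frame, column names, and model formula are assumptions for illustration, not the specification used in the studies cited above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy student-level data: outcome scores, an observed fidelity score,
# and the classroom each student belongs to.
data = pd.DataFrame({
    "outcome":   [72, 75, 68, 81, 79, 70, 66, 84, 77, 73, 80, 69],
    "fidelity":  [2.5, 2.0, 1.5, 3.0, 2.5, 2.0, 0.0, 0.5, 0.0, 0.5, 1.0, 0.5],
    "classroom": ["a", "a", "b", "b", "c", "c", "d", "d", "e", "e", "f", "f"],
})

# Random intercept for classroom accounts for clustering; the fidelity
# coefficient describes how outcomes covary with implementation strength.
# (A Tx/C indicator or an ARS-based predictor could be added to the formula.)
model = smf.mixedlm("outcome ~ fidelity", data, groups=data["classroom"])
result = model.fit()
print(result.summary())
```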

20 Some Challenges
Intervention models
– Often unclear
– Scripted vs. unscripted
Measurement
– Novel constructs
– Multiple levels
– Aggregation (within and across levels)
Analyses
– Weighting of components
– Uncertainty about psychometric properties
– Functional form not always known

21 Summary of Key Points
Identify and measure core components
Fidelity assessment serves two roles:
– Estimating the average causal difference between conditions
– Using fidelity measures to assess the effects of variation in implementation on outcomes
Post-experimental (re)specification of the intervention
ARS: How much is enough?
– Need more data!

22 Thank You Questions and Discussion

