Laying the Foundation for Scaling Up During Development


Definition
Fidelity of implementation is:
- the extent to which a program (including its content and process) is implemented as designed;
- how it is implemented (by the teacher);
- how it is received (by the students);
- how long it takes to implement (duration); and
- what it looks like when it is implemented (quality).
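To make these dimensions concrete, here is a minimal sketch (not part of the original presentation) of how a single classroom observation might be recorded and rolled into one score. All field names, the 0-1 scales, and the equal-weight composite are illustrative assumptions, not a prescribed instrument.

```python
# Hypothetical record of the five fidelity dimensions for one classroom visit.
from dataclasses import dataclass

@dataclass
class FidelityRecord:
    adherence: float       # extent the program was implemented as designed (0-1)
    delivery: float        # how it was implemented by the teacher (0-1)
    responsiveness: float  # how it was received by the students (0-1)
    duration: float        # observed time / intended time, capped at 1
    quality: float         # rated quality of implementation (0-1)

    def composite(self) -> float:
        """Unweighted mean across the five dimensions (an assumed weighting)."""
        parts = [self.adherence, self.delivery, self.responsiveness,
                 self.duration, self.quality]
        return sum(parts) / len(parts)

record = FidelityRecord(adherence=0.8, delivery=0.7, responsiveness=0.9,
                        duration=1.0, quality=0.75)
print(f"Composite fidelity: {record.composite():.2f}")  # 0.83
```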

Motivation: What problems exist?
- Teachers have difficulty teaching with fidelity when creativity, variability, and local adaptations are encouraged.
- Developers often fail to identify the critical components of an intervention.
- Researchers often fail to measure whether components are delivered as intended.

What questions about fidelity of implementation are asked during Development studies?
- What are the critical components of the program?
- If the teacher skips part of the program, why does that happen, and what effect will it have on outcomes?
- Is the program feasible (practical) for a teacher to use? Is it usable (are the program goals clear)? If not, what changes should I make to the program?
- What programmatic support must be added?
- What ancillary components are part of the program (e.g., professional development) and must be scaled up with it?

What questions about fidelity of implementation are asked during Efficacy studies?
- How well is the program being implemented in comparison with the original program design?
- To what extent does the delivery of the intervention adhere to the program model originally developed?

What questions about fidelity of implementation are asked during Scale-up studies?
- How can effective programs be scaled up across many sites (i.e., if implementation is a moving target, generalizability of research may be imperiled)?
- How can we gain confidence that the observed student outcomes can be attributed to the program?
- Can we gauge the wide range of fidelity with which an intervention might be implemented?
Source: Lynch, O’Donnell, Ruiz-Primo, Lee, & Songer

Definition: Efficacy Study
Efficacy is the first stage of program research following development. Efficacy is defined as “the ability of an intervention to produce the desired beneficial effect in expert hands and under ideal circumstances” (RCTs) (Dorland’s Illustrated Medical Dictionary, 1994, p. 531). Failure to achieve desired outcomes in an efficacy study “give[s] evidence of theory failure, not implementation failure” (Raudenbush, 2003, p. 4).

Definition: Efficacy Studies
- Internal validity: fidelity in efficacy studies determines that the program will result in successful achievement of the instructional objectives, provided the program is “delivered effectively as designed” (Gagne et al., 2005, p. 354).
- Efficacy entails continuously monitoring and improving implementation to ensure the program is implemented with fidelity (Resnick et al., 2005).
- Fidelity data explain why innovations succeed or fail (Dusenbury et al., 2003) and help determine which features of the program are essential and require high fidelity, and which may be adapted or deleted (Mowbray et al., 2003).
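As an illustration of relating component-level fidelity to outcomes in order to flag which features appear essential, the sketch below computes a simple Pearson correlation between per-component fidelity and classroom outcomes. The data, component names, and choice of correlation are hypothetical; this is not the analysis used in any of the cited studies.

```python
# Hypothetical per-classroom fidelity (0-1) for two assumed components,
# plus a mean student outcome score for each classroom.
from statistics import mean

classrooms = [
    {"labs": 0.90, "homework": 0.4, "outcome": 82},
    {"labs": 0.70, "homework": 0.9, "outcome": 74},
    {"labs": 0.50, "homework": 0.8, "outcome": 65},
    {"labs": 0.95, "homework": 0.5, "outcome": 88},
]

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

outcomes = [c["outcome"] for c in classrooms]
for component in ("labs", "homework"):
    fidelity = [c[component] for c in classrooms]
    print(component, round(pearson(fidelity, outcomes), 2))
```

A component whose fidelity tracks outcomes closely would be a candidate "essential" feature; one with little or no association might be a candidate for adaptation, subject to the usual cautions about small samples and confounding.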

Definition: Scale-up Study
Interventions with demonstrated benefit in efficacy studies are then transferred into effectiveness studies. An effectiveness study is not simply a replication of an efficacy study with more subjects and more diverse outcome measures conducted in a naturalistic setting (Hohmann & Shear, 2002). Effectiveness is defined as “the ability of an intervention to produce the desired beneficial effect in actual use” under routine conditions (Dorland, 1994, p. 531), where mediating and moderating factors can be identified (Aron et al., 1997; Mihalic, 2002; Raudenbush, 2003; Summerfelt & Meltzer, 1998).

Definition: Scale-up Studies
- External validity: fidelity in effectiveness studies helps to generalize results and provides “adequate documentation and guidelines for replication projects adopting a given model” (Mowbray et al., 2003; Bybee, 2003; Raudenbush, 2003).
- The role of the developer and researcher is minimized.
- The focus is not on monitoring and controlling levels of fidelity; instead, variations in fidelity are measured in a natural setting and accounted for in outcomes.

O’Donnell (2008): Steps in Measuring Fidelity
1. Start with a curriculum profile or analysis; review program materials and consult with the developer. Determine the intervention’s program theory. What does it mean to teach it with fidelity?
2. Using the developer’s and past implementers’ input, outline the critical components of the intervention, divided by structure (adherence, duration) and process (quality of delivery, program differentiation, participant responsiveness), and outline the range of variations for acceptable use.
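One way to think about the output of step 2 is a small component specification with acceptable ranges. The sketch below is illustrative only: the component names, categories, and ranges are made up, not taken from O’Donnell (2008) or any particular program.

```python
# Hypothetical outline of critical components, split into structural and
# process dimensions, each with an assumed acceptable range of use.
CRITICAL_COMPONENTS = {
    "structure": {
        # component: (minimum acceptable, maximum acceptable)
        "lessons_delivered_per_week": (3, 5),     # adherence
        "minutes_per_lesson": (40, 60),           # duration
    },
    "process": {
        "quality_of_delivery_rating": (3, 5),     # 1-5 observer rubric
        "student_responsiveness_rating": (3, 5),  # 1-5 observer rubric
    },
}

def within_acceptable_range(category: str, component: str, observed: float) -> bool:
    """Check an observed value against the outlined range of acceptable use."""
    low, high = CRITICAL_COMPONENTS[category][component]
    return low <= observed <= high

print(within_acceptable_range("structure", "minutes_per_lesson", 45))  # True
```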

O’Donnell (2008): Steps in Measuring Fidelity
3. Develop checklists and other instruments to measure implementation of the components (in most cases the unit of analysis is the classroom).
4. Collect multi-dimensional data in both treatment and comparison conditions: questionnaires, classroom observations, self-reports, student artifacts, and interviews. Self-report data typically yield higher levels of fidelity than what is observed in the field.
5. Adjust outcomes if fidelity falls outside the acceptable range.
O’Donnell, C. L. (2008). Defining, conceptualizing, and measuring fidelity of implementation and its relationship to outcomes in K–12 curriculum intervention research. Review of Educational Research, 78, 33–84.
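A minimal sketch of steps 3-5 under assumed data: per-item observation checklists are averaged into a classroom-level fidelity score, and outcomes are then summarized separately for classrooms inside and outside an acceptable range. The checklist items, outcome scores, and the 0.70 threshold are all illustrative assumptions, not values from O’Donnell (2008).

```python
# Hypothetical checklist data and an assumed lower bound for acceptable fidelity.
from statistics import mean

ACCEPTABLE_FIDELITY = 0.70  # assumed lower bound from step 2's range of variations

# Each classroom: checklist of observed critical components (1 = delivered
# as intended, 0 = not), plus a mean student outcome score.
classrooms = {
    "A": {"checklist": [1, 1, 1, 0, 1], "outcome": 81.0},
    "B": {"checklist": [1, 0, 0, 1, 0], "outcome": 68.5},
    "C": {"checklist": [1, 1, 1, 1, 1], "outcome": 86.0},
}

# Step 3-4: collapse checklist items into a classroom-level fidelity score.
for name, data in classrooms.items():
    data["fidelity"] = mean(data["checklist"])

# Step 5: report outcomes separately for adequate- and low-fidelity classrooms.
adequate = [d["outcome"] for d in classrooms.values()
            if d["fidelity"] >= ACCEPTABLE_FIDELITY]
low = [d["outcome"] for d in classrooms.values()
       if d["fidelity"] < ACCEPTABLE_FIDELITY]

print("Mean outcome, adequate fidelity:", round(mean(adequate), 1))
print("Mean outcome, low fidelity:", round(mean(low), 1) if low else "n/a")
```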

Know when and how to use fidelity data
- Development: Use fidelity results to inform revisions. Decide now what components are required to deliver the intervention as intended when implemented at scale.
- Efficacy: Monitor fidelity and relate it to outcomes to gain confidence that outcomes are due to the program (internal validity).
- Replication: Determine whether levels of fidelity and program results under a specific structure replicate under other organizational structures.
- Scale-up: Understand the implementation conditions, tools, and processes needed to reproduce positive effects under routine practice on a large scale (external validity). Are methods for establishing high fidelity financially feasible?

Bottom Line: If the intervention can be implemented with adequate fidelity under conditions of routine practice and yields positive results, scale it up.
Source: O’Donnell, 2008