Open Forum: Scaling Up and Sustaining Interventions
Moderator: Carol O'Donnell, NCER, 202-208-3749

Presentation transcript:

Open Forum: Scaling Up and Sustaining Interventions
Moderator: Carol O'Donnell, NCER

Agenda
Introductions
Purpose of forum
Brief background from moderator and panel
– Definitions of "scale-up"
– Goal 4 vs. Goal 3
– Issues of scalability
– Factors that influence effectiveness at scale
– What does successful scale-up look like?
Open Forum: Questions from the Audience

Introductions: Quick Survey of Audience
How many Goal 2? Goal 3? Goal 4?
What do you hope to get out of this forum? (Record audience questions; the panel will address them in their comments; see the last slides.)

Purpose of Forum
A discussion of what it takes to scale up an intervention and to sustain the fidelity of its implementation.

Definitions
Scale-up is the transition from idiosyncratic adoption of interventions to broad, effective implementation across a large and diverse school system. Scale-up can be demonstrated by showing a plan for the gradual and systematic implementation of the intervention.

Definitions
Scale-up is the practice of introducing proven interventions into new settings with the goal of producing similarly positive effects in larger, more diverse populations. Scale-up research examines factors that influence the effectiveness of interventions as they are brought to scale across settings.

Goal 4: Scale-up Evaluations
Are fully developed interventions effective when they are implemented under conditions that would be typical if a school district or other education delivery setting were to implement them (i.e., without special support from the developer or the research team) across a variety of conditions (e.g., different student populations, different types of schools)?

Difference Between Goal 3 and Goal 4
Key differences have to do with delivery of the intervention ("at a distance" from the researcher or developer) and the diversity of the sample: scale-up evaluations require sufficient diversity in the sample of schools, classrooms, or students to ensure appropriate generalizability.

Difference Between Goal 3 and Goal 4
Small-scale efficacy studies provide examples of interventions that succeed for some groups. The scale-up question is: do interventions with strong prior evidence of efficacy produce a net positive increase in student learning, relative to the comparison group, under typical conditions in real, complex, and varied educational environments?

Theoretical, Empirical, and Practical Issues of Scalability
– What educationally meaningful effects are expected at scale?
– What is the intervention's theory of change?
– How might experience moderate outcomes? (See the sketch after this list.)
– Under what conditions will the intervention be implemented?
– How feasible is it to implement the critical components of the intervention with fidelity at scale?
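One way to probe the moderation question empirically is a multilevel model with a treatment-by-experience interaction. This is a minimal illustrative sketch, not part of the forum itself; the data file and column names (outcome, treat, years_experience, school) are hypothetical.

```python
# Minimal sketch: does experience moderate the treatment effect?
# Students are nested in schools, so we fit a random-intercept model.
# All file and column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("scaleup_outcomes.csv")  # hypothetical student-level file

# The treat:years_experience coefficient estimates how the treatment
# effect changes as schools gain experience with the intervention.
model = smf.mixedlm(
    "outcome ~ treat * years_experience",
    data,
    groups="school",  # random intercept per school
).fit()
print(model.summary())
```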

Theoretical, Empirical, and Practical Issues of Scalability
What are the practical concerns of measuring and observing fidelity of implementation in both the treatment and comparison groups at scale, in order to identify and document critical differences between the intervention and its counterfactual? (A minimal scoring sketch follows.)
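As a concrete, deliberately simplified illustration of that measurement task, the sketch below scores observation checklists into a single fidelity index and compares it across conditions. The component names, the 0-2 rating scale, and the file name are all hypothetical.

```python
# Minimal sketch: a fidelity-of-implementation index from observation
# checklists, computed for treatment AND comparison classrooms so the
# contrast with the counterfactual can be documented.
import pandas as pd

obs = pd.read_csv("observations.csv")  # one row per classroom observation
components = ["critical_component_1", "critical_component_2",
              "critical_component_3"]  # hypothetical critical components

# Fidelity index: mean rating across critical components, rescaled to 0-1
# (assumes each component is rated on a 0-2 scale).
obs["fidelity"] = obs[components].mean(axis=1) / 2.0

# Comparing the distributions shows how far the two conditions actually
# differed in practice; that gap is the scale-up concern raised above.
print(obs.groupby("condition")["fidelity"].describe())
```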

Theoretical, Empirical, and Practical Issues of Scalability
Is the intervention feasible, both practically and in terms of cost, for schools and other education entities to implement under normal conditions (i.e., without any support from the researchers or developers of the intervention that would not typically be available to entities wanting to implement the intervention outside of a research study)?

Theoretical, Empirical, and Practical Issues of Scalability
Interventions that are effective at scale are those that can produce the desired effects across a range of education contexts. Under what conditions will the intervention be implemented? What methods will be used to document conditions and critical variables that affect the success of a given intervention?

Factors That Influence Effectiveness of Interventions at Scale
Threats to validity:
– Diluted professional development
– Leakage
– Changes in the student population within a district over time
– Improved experimental procedures
– Maintaining fidelity of implementation over time

Factors That Influence Effectiveness of Interventions at Scale
Experience of the teacher, school, or district with the intervention and with the study:
– Does scaling up a unit incrementally (adding more schools each year) lead to increasing effects over time, or are the effects diluted?
– Ask not only "Is the intervention effective?" but "To what degree do refinements at the system and school levels, due to experience, produce better outcomes?"

Effects of Scale vs. Experience
The effects of scale are distinguished from the effects of experience. Effects of scale are assessed by comparing outcomes achieved in schools at different levels of implementation: 1st-year implementers vs. 2nd-year implementers during the same year of the study. Effects of experience are assessed by examining whether the intervention is effective in newly implementing schools as they go to scale: 1st-time implementers in Year 2 vs. Year 5 (GWU). The sketch below spells out both contrasts.
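A minimal sketch of the two contrasts, assuming a hypothetical school-level file with columns school, study_year, implementation_year, and mean_outcome; a real analysis would use proper models rather than raw means.

```python
# Minimal sketch of the scale vs. experience contrasts described above.
# File and column names are hypothetical; raw means stand in for what
# would normally be a modeled comparison.
import pandas as pd

df = pd.read_csv("school_outcomes.csv")

# Effects of scale: within one study year, compare schools in their
# 1st vs. 2nd year of implementation.
scale = (df[df["study_year"] == 3]
         .groupby("implementation_year")["mean_outcome"]
         .mean())

# Effects of experience: compare 1st-time implementers who entered in
# study Year 2 with 1st-time implementers who entered in Year 5.
first_timers = df[df["implementation_year"] == 1]
experience = (first_timers[first_timers["study_year"].isin([2, 5])]
              .groupby("study_year")["mean_outcome"]
              .mean())

print("Scale contrast:\n", scale)
print("Experience contrast:\n", experience)
```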

What does successful scale-up look like? (RAND, 2004)
– Widespread implementation
– Deep changes in classroom and school practices
– Sustainability: the intervention is sustained over time
– Sense of ownership of new practices and policies among teachers and school leaders
– Intervention is adaptable to local contexts

What does successful scale-up look like? (RAND, 2004)
– District and school-wide support provided for implementation
– Methods for ensuring high-quality implementation are in place (good PD)
– Financial support given (intervention is cost-feasible)
– Building of organizational capacity
– Marketing of curriculum materials so they are sustainable

Questions from the Participants
– What are the important characteristics of maintaining fidelity of implementation?
– How do you sustain the intervention after the research is finished?
– How many schools are needed in a scale-up study?
– How do I ensure power in an RCT? In a cluster randomized trial? (See the Optimal Design software; a back-of-the-envelope sketch follows this list.)
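Tools such as Optimal Design handle this properly; purely as rough intuition, the sketch below computes the standard design effect for a cluster randomized trial, showing why clustering shrinks the effective sample size. The values (rho = 0.15, m = 25, 60 schools) are hypothetical but in the range often seen in education studies.

```python
# Back-of-the-envelope sketch: why clustering matters for power.
# rho, m, and n_schools below are hypothetical illustrative values.
rho = 0.15        # intraclass correlation (outcome similarity within schools)
m = 25            # students sampled per school
n_schools = 60    # schools randomized

# Standard design-effect formula for cluster sampling:
# variance is inflated by a factor of 1 + (m - 1) * rho.
design_effect = 1 + (m - 1) * rho
effective_n = (n_schools * m) / design_effect

print(f"Design effect: {design_effect:.2f}")                  # 4.60
print(f"Nominal N: {n_schools * m}, effective N: {effective_n:.0f}")  # 1500 -> ~326
```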

Questions from the Participants
– How do you get teachers to implement with fidelity?
– How do you gain access to schools, especially high numbers of schools?
– How do you maintain relationships with schools over the course of the 5-year scale-up study?

Questions from the Participants
– What are some of the data collection strategies for studying fidelity of implementation across schools/districts and at a distance?
– What do we need to do in a Goal 2 to prepare for a Goal 3? (Identify critical components of the intervention by the end of your Goal 2.)
– What do we need to do in a Goal 3 to prepare for a Goal 4? (Building bridges.)

Questions from the Participants
– What does theory-driven implementation science look like?
– How do fidelity of implementation measures differ when studying whole-school programs versus interventions used in individual classrooms?
– What does successful scale-up look like?

Questions from the Participants
– What does professional development (PD) look like at full scale? Can PD stand on its own?
– Can we pay for PD in a Goal 4?
– Others?