
1 Leading Indicators: Evaluation for Site-Level Improvement and System-Level Planning
Samantha Sugar, Research Associate/Analyst, David P. Weikart Center for Youth Program Quality
Charles Smith, Vice President for Research, Forum for Youth Investment; Executive Director, David P. Weikart Center for Youth Program Quality
April 18, 2012 #readyby21

2 Agenda
- Welcome
- Opening Activity
- Important Questions: The Why, The What, The How
- Site-Level Improvement
- System-Level Planning: The Where
- Looking Forward/Next Steps
- Questions

3 Objectives
- Learn how collecting program data from staff, youth, and parents helps to tell the overall story about quality at a single site.
- See how these data can be used to identify low-capacity programs and support resource-targeting decisions.

4 Opening Activity: Optimal Characteristics of Performance Data
Effective performance data describes behaviors and conditions in a way that is:
a. Timely – Data is available in real time, as events occur or just after completion.
b. Objective – Data is focused on behaviors and conditions that can be identified through observation and easily named in relation to practice.
c. Reliable – Data is seen as precise and factual by all due to standardization of measures/methods.
d. Sensitive – Data describes behaviors and conditions that are likely to change in response to intervention, and change can be captured on the measures.
e. Valid – Data describes behaviors and conditions thought to be a link in a causal chain of events desired by the actors involved.
f. Feasible – The minimum data necessary are collected using typical community resources.
g. Multi-Purpose – As they occur, BOTH data collection and data interpretation processes promote learning and coordination among actors in the organization.
h. Multi-Level – Data designed for use by individual units (staff/sites) can be aggregated across individual units to assess collective performance.

5 Why Were the Leading Indicators Developed?
Quality of instruction (point of service setting), supported by youth voice and program governance structures, builds across four levels:
- Safe Environment – Physical and emotional safety is provided.
- Supportive Environment – Support through welcoming, conflict resolution, active learning, and skill building.
- Interaction – Peer interaction through grouping and cooperative learning.
- Engagement – Higher-order engagement through choice, planning, and reflection.

6 Why Were the Leading Indicators Developed?
Continuous improvement practices by site teams (organizational setting):
- Standardized assessment of instruction
- Team-based planning with data
- Coaching and performance feedback
- Training for instructional skills

7 Why Were the Leading Indicators Developed?
Between lower-stakes and higher-stakes accountabilities, an interpretive community carries out the improvement cycle:
- Team self-assessment
- Review of external scores
- Team planning and implementing: improvement planning, performance coaching

8 Theory of Change: Multiple Levels of Setting
- Point of Service Setting – Quality instruction & proximal child outcomes
- Organizational Setting – Continuous improvement practices for site teams
- Policy Setting – Low-stakes accountability and supports

9 Why Were the Leading Indicators Developed?
Optimal Characteristics of Performance Data (see slide 4).

10 Why Were the Leading Indicators Developed?
The 13 Leading Indicators map onto the multiple levels of setting in the theory of change (point of service: quality instruction & proximal child outcomes; organizational: continuous improvement practices for site teams; policy: low-stakes accountability and supports):
- 1.1 Staffing Model
- 1.2 Continuous Improvement
- 1.3 Youth Governance
- 1.4 Enrollment Policy
- 2.1 Academic Press
- 2.2 Engaging Instruction
- 3.1 System Norms
- 3.2 Family Engagement
- 3.3 School Alignment
- 3.4 Community Resources
- 4.1 Socioemotional Development
- 4.2 Academic Efficacy
- 5.1 Family Satisfaction

11 Isn’t this just more data?

12 What Are the Leading Indicators?
13 composite measures categorized into five different contexts:
- Organizational Context
- Instructional Context
- External Relationships
- Youth Characteristics
- Family Satisfaction
Where did they come from?
- Youth Program Quality Intervention (YPQI)
- California Outcomes Measures (Vandell)
- PPICS data
How do we measure them?
- Grantee Director/Site Coordinator surveys
- Afterschool Teacher/Youth Worker surveys
- Youth surveys (grades 4-12)
- Parent surveys
- PPICS data

13 What Are the Leading Indicators? Sample Report

14 Items are simply the questions that we ask on the surveys, plus demographic and enrollment data and Youth PQA scores. In Figure 1 below, Items correspond to letter A. Scales are made up of groupings of different Items that go together well; a Scale is designated by letter B in Figure 1. Leading Indicators are made up of groupings of different Scales, much as the Scales themselves are made up of Items. In the example below, the "Accountability" Scale, along with "Collaboration," makes up Leading Indicator 3.1 – System Norms, which is represented by letter C in Figure 2. Finally, all of the Leading Indicators are grouped into five overarching Domains based on the context that they represent. These Domains are color-coded for easy distinction: Organizational Context (red), Instructional Context (green), External Relationships (blue), Youth Characteristics (purple), and Parent Satisfaction (brown).
Figure 1
Figure 2
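To make the roll-up concrete, here is a minimal Python sketch of the Item → Scale → Leading Indicator aggregation described above. The item values, the 1-5 response range, and the use of simple unweighted means are illustrative assumptions, not the Weikart Center's published scoring rules.

```python
# Hypothetical sketch of the Item -> Scale -> Leading Indicator roll-up.
# Item values and unweighted-mean scoring are assumptions for illustration.
from statistics import mean

# Items (survey questions) grouped by the Scale they belong to; scores
# are imagined 1-5 survey responses averaged across respondents.
items_by_scale = {
    "Accountability": [4.0, 3.5, 4.5],
    "Collaboration": [3.0, 4.0, 3.5],
}

# A Scale score is taken here as the mean of its Items.
scale_scores = {scale: mean(vals) for scale, vals in items_by_scale.items()}

# A Leading Indicator score is taken as the mean of its Scales; per the
# slide, Accountability and Collaboration make up 3.1 - System Norms.
system_norms = mean(scale_scores[s] for s in ("Accountability", "Collaboration"))

print(scale_scores)            # {'Accountability': 4.0, 'Collaboration': 3.5}
print(round(system_norms, 2))  # 3.75
```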

15 What Are the Leading Indicators?
Optimal Characteristics of Performance Data (see slide 4).

16 How Have the Leading Indicators Been Used?
Oklahoma Exemplar – System Characteristics:
- History of the system
- Integration of QIS and required evaluation efforts
- 75 grantees in the first year, 77 this year
- Timeline
- Data collection methods
- Outputs

17 Site-Level Improvement

18 Site-Level Improvement
The report is…
- A tool to help you identify the strengths of your program
- A tool to help you identify the weaknesses of your program
The report is not…
- A mechanism to induce evaluative comparisons or competitions across grants
- Something to be scared of

19 Site-Level Improvement: How to Read and Interpret Your Report
- Get a feel for the layout of the report.
- Study the graphs: In what areas are you doing comparatively well? In what areas does it look like your site could improve?
- Celebrate your strengths.
- What could you work on? Do some thinking.
- Prepare to make a plan!

20 Site-Level Improvement: Priority Assessment Form (Leading Indicators)
1. Create the story of your data (column one)
   - What is the message or story of your data? What do the numbers tell you?
   - What's missing from the data? What important things about program quality do not come through?
   - Where are the gaps between what you want to provide and what the data says you're providing?
2. Brainstorm ideas for improvement (column two)

21

22 System-Level Planning
The profiles (clusters) in Figure A-8 may be interpreted as follows:
- Cluster 3: "High quality" – Thirty-three percent of grantees fall into Cluster 3, where programs show high quality in all areas.
- Cluster 2: "High, with low growth/mastery and family communication" – This cluster represents 24% of grantees. These programs show relatively high quality in most areas, but low school alignment and parent communication.
- Cluster 1: "Medium" – Cluster 1 represents 21% of grantees. These programs have medium levels of supervision quality, high academic press and school alignment, low program quality (growth and mastery), and low family communication.
- Cluster 4: "Low, with high school alignment and family communication" – Four percent of grantees fall into Cluster 4, where programs show low quality in supervision, growth and mastery, and academic press, but high school alignment and family communication.
- Cluster 5: "Low quality" – Eleven percent of grantees fall into Cluster 5, where programs appear to demonstrate low quality in all areas.
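For readers who want to reproduce a profile analysis like Figure A-8, the sketch below shows one plausible approach: k-means clustering over standardized site-level indicator scores, with five clusters to match the figure. The indicator names, the random data, and the choice of k-means itself are assumptions; the original analysis may have used a different method.

```python
# Hedged sketch of deriving quality profiles like Figure A-8 via k-means.
# Feature names and data are illustrative; the original method may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

features = ["supervision", "growth_mastery", "academic_press",
            "school_alignment", "family_communication"]

rng = np.random.default_rng(0)
scores = rng.uniform(1, 5, size=(75, len(features)))  # 75 grantees, 1-5 scale

# Standardize so no single indicator dominates the distance metric, then
# partition grantees into five profiles, matching the five clusters above.
X = StandardScaler().fit_transform(scores)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Characterize each profile by its mean raw score on each indicator.
for k in range(5):
    profile = scores[labels == k].mean(axis=0)
    print(f"Cluster {k}:", dict(zip(features, profile.round(2))))
```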

23 System-Level Planning: Intermediate & Academic Outcomes
Table A-8 provides means of the satisfaction variables across cluster groups. The omnibus ANOVA indicates that the clusters produce significantly different scores on the Intermediate Outcomes set (staff job satisfaction, parent satisfaction, and parent reports of the program supporting academics). The highest-quality group of grantees/sites (Cluster 3) produces the highest staff satisfaction, youth engagement, and academic efficacy, whereas the lowest-quality group of grantees/sites (Cluster 5) exhibits the lowest or nearly the lowest score in each area.

Table A-8. Mean Scores for Intermediate Outcomes by Level of Quality

Cluster                                       | Staff job satisfaction (S) | Parent satisfaction (P) | Youth engagement (Y) | Homework completion (Y) | Prog. supports academics (P)
3: High quality                               | 4.5    | 4.7  | 4.0 | 4.4  | 4.2
2: High with low                              | 4.2    | 4.7  | 3.8 | 4.0  | 3.9
1: Medium quality                             | 4.2    | 4.6  | 3.8 | 4.3  | 3.8
4: Low with high                              | 3.5    | 4.8  | 3.7 | 4.2  |
5: Low quality                                | 3.6    | 4.5  | 3.7 | 4.2  | 3.6
Omnibus difference across clusters (ANOVA F)  | 8.3*** | 2.6* | 1.4 | 2.3+ | 5.9***
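The omnibus test reported in Table A-8 can be illustrated with a one-way ANOVA per outcome, asking whether that outcome differs across the five clusters. The sketch below uses scipy with fabricated placeholder data centered near the table's staff-satisfaction means; it is not the study's actual analysis.

```python
# Minimal sketch of an omnibus one-way ANOVA like the F tests in Table A-8.
# Group data are placeholders drawn near the table's cluster means.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Hypothetical staff job satisfaction ratings for 15 sites per cluster,
# centered on the table's cluster means (4.5, 4.2, 4.2, 3.5, 3.6).
groups = [rng.normal(loc=m, scale=0.3, size=15)
          for m in (4.5, 4.2, 4.2, 3.5, 3.6)]

f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # large F -> clusters differ
```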

24 Where Have the Leading Indicators Been Used?

25 Looking Forward/Next Steps
- Further Validation Work
  - Exploration of Leading Indicators Framework
  - Theoretical and Statistical Expansion to New Networks
- Integration of Quality Improvement Systems

26 Questions?

27 Thank You!

