Gathering Feedback for Teaching: Combining High-Quality Observations with Student Surveys and Achievement Gains

Multiple Measures of Teaching Effectiveness

The MET project is unique:

- In the variety of indicators tested:
  - 5 instruments for classroom observations
  - Student surveys (Tripod Survey)
  - Value-added on state tests
- In its scale:
  - 3,000 teachers
  - 22,500 observation scores (7,500 lesson videos x 3 scores)
  - Trained observers
  - 44,500 students completing surveys and supplemental assessments
- And in the variety of student outcomes studied:
  - Gains on state math and ELA tests
  - Gains on supplemental tests (BAM & SAT9 OE)
  - Student-reported outcomes (effort and enjoyment in class)

Section 1: Reinventing Classroom Observations

Four Steps to High-Quality Classroom Observations

Step 1: Define Expectations. Framework for Teaching (Danielson), with actual scores for 7,500 lessons.

Step 2: Ensure Accuracy of Observers

Step 3: Monitor Reliability

Multiple Observations Result in Higher Reliability

NOTES: The numbers inside each circle are estimates of the percentage of total variance in FFT observation scores attributable to consistent aspects of teachers’ practice when one to four lessons were observed, each by a different observer. The total area of each circle represents the total variance in scores. These estimates are based on trained observers, with no prior exposure to the teachers’ students, watching digital videos. Reliabilities will differ in practice. See the research paper, Table 11, for reliabilities of other instruments.
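The note above describes reliability as the share of score variance attributable to stable teacher practice, and observes that averaging more lessons raises it. That relationship can be sketched with the classical Spearman-Brown formula; the 0.35 single-lesson reliability below is purely illustrative, not a MET estimate.

```python
def reliability_of_average(single_obs_reliability: float, n_lessons: int) -> float:
    """Reliability of the mean of n independently scored lessons
    (Spearman-Brown prophecy formula)."""
    r = single_obs_reliability
    return (n_lessons * r) / (1 + (n_lessons - 1) * r)

if __name__ == "__main__":
    # Illustrative single-lesson reliability of 0.35 (not a MET figure):
    for n in range(1, 5):
        print(f"{n} lesson(s): reliability = {reliability_of_average(0.35, n):.2f}")
```

Averaging lessons scored by different observers shrinks observer- and lesson-specific noise while the stable teacher component stays put, which is why the reliability climbs with each added observation.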

Step 4: Verify Alignment with Outcomes. Teachers with higher observation scores had students who learned more.

Section 2: What Do Students Say?

Students Distinguish Between Teachers: Percent of Students by Classroom Agreeing

Section 3: The “Dynamic Trio”: Classroom Observations, Student Feedback, and Student Achievement Gains

Three Criteria:

- Predictive power: Which measure could most accurately identify teachers likely to have large gains when working with another group of students?
- Reliability: Which measures were most stable from section to section or year to year for a given teacher?
- Potential for diagnostic insight: Which have the potential to help a teacher see areas of practice needing improvement? (We have not tested this yet.)

Measures have different strengths … and weaknesses.

Combining Measures Improved Reliability as well as Predictive Power

The Reliability and Predictive Power of Measures of Teaching: the chart compares five measures (Observation alone (FFT), Student survey alone, VA alone, Combined (Equal Weights), Combined (Criterion Weights)) on two dimensions: Difference in Math VA (Top 25% vs. Bottom 25%) and Reliability.

Note: For the equally weighted combination, we assigned a weight of 0.33 to each of the three measures. The criterion weights were chosen to maximize ability to predict a teacher’s value-added with other students. The next MET report will explore different weighting schemes.

Note: Table 16 of the research report. Reliability based on one course section, two observations.

Section 4: Better Information?

Compared to What? Compared to MA degrees and years of experience, the combined measure identifies larger differences … on state tests,

… and on low-stakes assessments,

… as well as on student-reported outcomes.

The MET project reporting timeline:

1. Student Perceptions (12/)
2. Classroom Observations (1/)
3. Weighting (mid-2012): rationale for different weighting schemes; consequences for predictive power and reliability
4. Final report using random assignment (mid-2012): Do “value-added” estimates control adequately for student characteristics? Do they predict outcomes following random assignment?

MET project reports are available online.

Resources:

1. Observation instruments (from developers)
2. Student surveys (MET version of Tripod survey)
3. Rater certification software (coming soon)
4. Reliability check methodology (see Research Report)

What the Participants Said

The MET Project is ultimately a research project. Nonetheless, participants frequently tell us they have grown professionally as a result of their involvement. Below is a sampling of comments we received.

From Teachers:

“The video-taping is what really drew me in, I wanted to see not only what I’m doing but what are my students doing. I thought I had a pretty good grasp of what I was doing as a teacher, but it is eye opening … I honestly felt like this is one of the best things that I have ever done to help me grow professionally. And my kids really benefited from it, so it was very exciting.”

“With the videos, you get to see yourself in a different way. Actually you never really get to see yourself until you see a video of yourself. I changed immediately certain things that I did that I didn’t like.”

“I realized I learned more about who I actually was as a teacher by looking at the video. I learned of the things that I do that I think that I’m great at I was not so great at after all.”

“Even the things I did well, I thought, ok that’s pretty good, why do I do that, and where could I put that to make it go farther. So it was a two-way road, seeing what you do well, and seeing the things that have become habits that you don’t even think about anymore.”

From Raters:

“Being a rater has been a positive experience for me. I find myself ‘watching’ my own teaching more and am more aware of the things I should be doing more of in my classroom.”

“I have to say, that as a teacher, even the training has helped me refine my work in the classroom. How wonderful!”

“I have loved observing teachers, reflecting on my own teaching and that of the teachers teaching in my school.”