
1 Final Reports from the Measures of Effective Teaching Project. Tom Kane, Harvard University; Steve Cantrell, Bill & Melinda Gates Foundation

2 The MET project is unique …
• in the variety of indicators tested: 5 instruments for classroom observations, student surveys (the Tripod Survey), and value-added on state tests;
• in its scale: 3,000 teachers; 22,500 observation scores (7,500 lesson videos x 3 scores); 900+ trained observers; 44,500 students completing surveys and supplemental assessments in year 1; 3,120 additional observations by principals/peer observers in Hillsborough County, FL;
• and in the variety of student outcomes studied: gains on state math and ELA tests, gains on supplemental tests (BAM & SAT9 OE), and student-reported outcomes (effort and enjoyment in class).

3 Two Past Reports: • Learning about Teaching (Student Surveys) • Gathering Feedback for Teaching (Classroom Observations)

4 Have we identified effective teachers or … teachers with exceptional students? To find out, we randomly assigned classrooms to 1,591 teachers.

5 Have We Identified Effective Teachers? KEY FINDINGS
• Following random assignment in Year 2, the teachers with greater measured effectiveness in Year 1 did produce higher student achievement.
• The magnitude of the impacts was consistent with predictions (illustrated in the sketch below).
• They also produced higher achievement on the supplemental assessments, with impacts about 70 percent as large as those on state tests.
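The following is a minimal sketch of the prediction-versus-outcome logic behind these findings, using entirely synthetic numbers rather than MET data: if the Year-1 measures yield a shrunken, unbiased prediction of each teacher's impact, then under random assignment the regression of realized Year-2 achievement on that prediction should have a slope near 1.

```python
# Illustrative sketch only (synthetic data, not the MET analysis).
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 1591                      # number of teachers randomized in MET

var_true, var_noise1, var_noise2 = 0.15**2, 0.10**2, 0.12**2
true_effect = rng.normal(0.0, np.sqrt(var_true), n_teachers)

# Year-1 estimate = true effect + estimation error, then shrunk toward the mean
year1_estimate = true_effect + rng.normal(0.0, np.sqrt(var_noise1), n_teachers)
prediction = year1_estimate * var_true / (var_true + var_noise1)

# Year-2 achievement for randomly assigned rosters = true effect + classroom noise
year2_actual = true_effect + rng.normal(0.0, np.sqrt(var_noise2), n_teachers)

slope, intercept = np.polyfit(prediction, year2_actual, 1)
print(f"slope = {slope:.2f}  (a slope near 1 means impact magnitudes match predictions)")
```

The shrinkage step is what keeps the expected slope at 1; regressing outcomes on an unshrunken, noisy Year-1 estimate would attenuate the slope below 1.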

8 Organizing Observations by School Personnel KEY FINDINGS
• Adding an observation by a second observer increases reliability twice as much as having the same observer score an additional lesson (see the sketch after this list).
• Short observations provide a time-efficient way to incorporate more than one observer per teacher.
• School administrators rate their own teachers higher than do outside observers. However, (1) they rank their teachers similarly to others and (2) they discern bigger differences between teachers than peers do (which increases reliability).
• Although average scores are higher across the board, letting teachers choose which lessons are observed produces similar rankings and slightly higher reliability.
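To make the intuition behind the first finding concrete, here is a minimal variance-components sketch with made-up variance shares (illustrative values, not MET estimates): averaging scores from two different observers cancels part of each observer's personal severity, whereas a second lesson scored by the same observer leaves that severity untouched.

```python
# Hypothetical variance components (illustrative values, not MET estimates).
VAR_TEACHER = 0.40    # stable differences between teachers
VAR_LESSON = 0.25     # lesson-to-lesson variation
VAR_OBSERVER = 0.25   # an observer's personal severity or leniency
VAR_RESIDUAL = 0.15   # everything else specific to one scored lesson

def reliability(n_scores: int, same_observer: bool) -> float:
    """Share of variance in the averaged score that reflects the teacher."""
    observer_error = VAR_OBSERVER if same_observer else VAR_OBSERVER / n_scores
    error = observer_error + (VAR_LESSON + VAR_RESIDUAL) / n_scores
    return VAR_TEACHER / (VAR_TEACHER + error)

print(f"one lesson, one observer:       {reliability(1, True):.2f}")
print(f"second lesson, same observer:   {reliability(2, True):.2f}")
print(f"second lesson, second observer: {reliability(2, False):.2f}")
```

With these invented components, adding a second observer improves reliability by roughly twice as much as adding a second lesson from the same observer, which is the pattern the finding describes.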

9 There are many roads to reliability.

10 Combining Measures Using Weights KEY FINDINGS
• The best way to identify teachers who produce large student achievement gains on state tests is to put 65 to 90 percent of the weight on a teacher's past history of gains on such tests. However, the resulting composite does not predict student achievement gains on more cognitively challenging assessments as well (see the weighting sketch after this list).
• Balanced weights have somewhat less predictive power with respect to state achievement gains, but they offer (1) better ability to predict other outcomes and (2) improved reliability (less volatility).
• It is possible to go too far: weighting state tests less than one-third results in (1) worse predictive power with respect to other outcomes and (2) less reliability.
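As a concrete illustration of what combining measures using weights means operationally, here is a minimal sketch; it is not the MET composite itself, and the measure names and weight values are placeholders.

```python
# Minimal weighted-composite sketch (placeholder weights, not MET's).
import numpy as np

def composite(value_added, observation, student_survey, weights=(0.50, 0.25, 0.25)):
    """Weighted average of z-scored measures, evaluated per teacher."""
    measures = np.vstack([value_added, observation, student_survey])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize so the weights sum to 1
    return w @ measures                  # one composite score per teacher

# Illustrative z-scores for three teachers
va  = np.array([0.8, -0.2, 0.1])
obs = np.array([0.3,  0.5, -0.4])
srv = np.array([0.1,  0.6, -0.1])

print(composite(va, obs, srv))                           # more balanced weights
print(composite(va, obs, srv, weights=(0.8, 0.1, 0.1)))  # state-test-heavy weights
```

Shifting weight among the measures is the lever these findings evaluate: heavier state-test weights predict state-test gains best, while more balanced weights trade some of that for better prediction of other outcomes and less year-to-year volatility.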

11 What’s the best we could do with master’s degrees and experience alone? Higher Order: .14; State: .13

13 Feedback for Better Teaching, January 2013. Steve Cantrell, Bill & Melinda Gates Foundation

14 “MOM P.”

15 • Monitor validity • Ensure reliability • Assure accuracy • Make meaningful distinctions • Prioritize support and feedback • Use data for decisions at all levels • Set expectations • Use multiple measures • Balance weights

16 • Set expectations • Use multiple measures • Balance weights

17 • Monitor validity • Set expectations • Use multiple measures • Balance weights

18 • Monitor validity • Ensure reliability • Set expectations • Use multiple measures • Balance weights

19 • Monitor validity • Ensure reliability • Assure accuracy • Set expectations • Use multiple measures • Balance weights

20 • Make meaningful distinctions • Prioritize support and feedback • Use data for decisions at all levels • Set expectations • Use multiple measures • Balance weights • Monitor validity • Ensure reliability • Assure accuracy

21 Actual scores for 7,500 lessons. Framework for Teaching (Danielson):
• Unsatisfactory: Yes/no questions posed in rapid succession; teacher asks all questions; same few students participate.
• Basic: Some questions ask for explanations; uneven attempts to engage all students.
• Proficient: Most questions ask for explanation; discussion develops and teacher steps aside; all students participate.
• Advanced: All questions high quality; students initiate some questions; students engage other students.

22 • Set expectations • Use multiple measures • Balance weights • Monitor validity • Ensure reliability • Assure accuracy • Make meaningful distinctions • Prioritize support and feedback • Use data for decisions at all levels

27-31 Achievement Gains / Student Achievement (chart slides; axis labels: 2009 Average Performance, Below/Above, and 2010 Predicted Performance, Below/Above; annotations: Very Low Prior Student Achievement; Almost All Performing at or above Prediction)

32 Classroom Observation

33 Student Surveys

35 • The Library of Practice • MET Longitudinal Database • Professional Development Studies • Working with Key Partners to Implement Feedback and Evaluation Systems • This Symposium and Your Good Work!

36 You can find this slide presentation, the current reports, and all past reports at www.metproject.org

