Teacher Evaluation & Music Education: What You Need to Know

Presentation on theme: "Teacher Evaluation & Music Education: What You Need to Know"— Presentation transcript:

1 Teacher Evaluation & Music Education: What You Need to Know
Strategies for Assessing Student Growth in the Ensemble Setting

2 Introduction Phillip Hash, Calvin College

3 Session Overview
Michigan Laws; Measurement Tools; Scheduling & Implementation; Questions
The purpose of our session today is to discuss the new teacher evaluation system in Michigan, particularly in relation to measuring student growth in the music classroom. We will:
Review MI laws and current trends in local districts regarding teacher evaluation
Discuss tools for measuring student growth in music
Examine how these tools might be used to meet individual needs
Consider the process and share resources for becoming informed, advocating for quality evaluations for music educators, and developing your own student assessment systems

4 Legislative Review: Talking Points
All Teachers Evaluated Annually; Percentage of Evaluation to Relate to Student Growth; National, State, and Local Assessments; Evaluations vs. Seniority in Personnel Decisions; Michigan Council on Educator Effectiveness
All teachers are now evaluated annually, including a classroom observation (the evaluator must review the lesson plan and the standards taught).
Part of the evaluation will be based on student academic growth: a "significant part" now, rising to 25%, then 40%, then 50%.
Evaluations result in ratings of highly effective, effective, minimally effective, or ineffective.
Stakes are high: districts must retain teachers based on their evaluation rating and student growth rather than seniority and tenure.
The Michigan Council on Educator Effectiveness (MCEE) was organized to facilitate the process. By April 30, 2012, it was to submit a report recommending a student growth and assessment tool, state evaluation tools for teachers and administrators, and parameters for effectiveness rating categories. BUT... this hasn't happened!

5 Pilot Programs
2012-13 pilot: 14 districts; 4 evaluation models; standardized tests; local measures for non-tested subjects; recommendations by the 2013-14 school year
Urgency for measures of student growth
The MCEE Interim Report (April 2012) recommended further research and pilot programs for 14 districts throughout Michigan. Each district is piloting 1 of 4 evaluation models: 5 Dimensions of Teaching & Learning; Charlotte Danielson's Framework for Teaching; the Marzano Teacher Evaluation Model; The Thoughtful Classroom. Each district is also exploring standardized and local approaches for measuring student growth in tested and non-tested grades and subjects. Final recommendations on statewide evaluation and student growth tools are planned prior to the 2013-14 school year.

6 Frameworks, Methods, Systems Used as part of Local Evaluation
Current trends in Michigan teacher evaluation: the information presented here is from a Michigan Department of Education (MDE)-developed survey regarding K-12 educator evaluation systems. The data represent 792 of 897 (88.3%) public school systems in Michigan (Nov MDE). This table indicates the evaluation systems currently in use. As you can see, over half of districts use an evaluation based on the Danielson Framework. "Other" mostly represents districts using a local system, or one of the frameworks listed combined with a local component.

7 Effectiveness Ratings & Percentage of Student Growth
% of growth in local evaluation systems: last year and this year, districts were allowed to define "significant part" when determining the percentage of the evaluation that would be based on student growth. The table above shows the percentage of growth that local districts are currently counting toward evaluation during the school year. The majority (nearly 575 districts) based evaluation on less than 30% student growth.

8 Current Trends: Effectiveness Ratings for 2011-12
Here is a breakdown of the effectiveness ratings issued for the 2011-12 academic year:
Ineffective: 0.8%
Minimally Effective: 2%
Effective: 75%
Highly Effective: 23%
This trend reflects the desire of legislators to differentiate teacher quality through teacher evaluations. No longer will most teachers receive the highest rating on their evaluations (e.g., excellent, outstanding); the new system is intended to identify the strongest and weakest teachers, and the new reporting system creates an incentive for districts to give the highest ratings sparingly.

9 Teacher Ratings & Student Growth
The table above breaks down the percentage of ineffective (I), minimally effective (ME), effective (E), and highly effective (HE) teachers for local districts counting <10%, 11-40%, and >40% student growth on evaluations. Notice that a higher percentage of growth in an evaluation system corresponds to fewer teachers rated E or HE, suggesting that fewer teachers will earn the top ratings as the percentage of growth increases over the next few years. There is currently no data that I am aware of comparing teachers of tested and non-tested subjects, but I think it will be problematic if teachers of non-tested subjects are consistently rated higher than those of tested subjects. Therefore, we will likely see similar distributions of ratings between these two groups.

10 Strategies Mitch Robinson, Michigan State University

11 Forms of Alternative Assessment
Performance-Based Assessment:
Student Auditions
Solo/Ensemble Festivals
Critiques of Student Compositions
Coaching Jazz Improvisation
Playing Checks
Student Writing

12 Rating Scales & (Criteria-Specific) Rubrics*
Two types: continuous rating scales; additive rating scales
Should include:
Points that are equidistant
Four or more rating points
Descriptors that are valid and reliable
*From: K. Dirth, Instituting Portfolio Assessment in Performing Ensembles, NYSSMA Winter Conference, Dec. 3, 1997.

13 Rating Scales Should Be:
Criteria-specific
Objective
Easy to use
Clear

14 Sample Rating Scale
National Standard #7: Evaluating music and music performances.

15 What Does A Rubric Look Like?
Beginning: Breathy; unclear; lacks focus; unsupported
Basic: Inconsistent; beginning to be centered and clear; breath support needs improvement
Proficient: Consistent breath support; centered and clear; beginning to be resonant
Advanced: Resonant; centered; vibrant; projecting
Features:
The scale includes (preferably) 4 rating points
The points of the scale are equidistant on a continuum
The highest point represents exemplary performance
Descriptors are provided for each level of student performance
Descriptors are valid (meaningful) and scores are reliable (consistent)
Scores are related to actual levels of student learning
Types include holistic (overall performance) and analytic (specific dimensions of performance); both are necessary for student assessment
Can be used by students for self-assessment and to assess the performance of other students
Adapted from: K. Dirth, Instituting Portfolio Assessment in Performing Ensembles, NYSSMA Winter Conference, Dec. 2, 1997.
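A 4-point analytic rubric like the tone-quality example above can be kept as a simple lookup table so that ratings stay tied to their descriptors. This is an illustrative sketch only; the names `TONE_RUBRIC`, `LEVEL_NAMES`, and `describe` are hypothetical, not part of any published rubric:

```python
# Illustrative sketch: a 4-point analytic rubric for tone quality.
# Levels are equidistant integers (1-4), each mapped to its descriptor.
TONE_RUBRIC = {
    1: "Beginning: breathy; unclear; lacks focus; unsupported",
    2: "Basic: inconsistent; beginning to be centered and clear",
    3: "Proficient: consistent breath support; centered and clear",
    4: "Advanced: resonant; centered; vibrant; projecting",
}

LEVEL_NAMES = {1: "Beginning", 2: "Basic", 3: "Proficient", 4: "Advanced"}

def describe(level: int) -> str:
    """Return the descriptor for a rating, validating the 4-point range."""
    if level not in TONE_RUBRIC:
        raise ValueError("rating must be 1-4")
    return TONE_RUBRIC[level]

print(LEVEL_NAMES[3], "->", describe(3))
```

Keeping descriptors with their numeric levels makes the scale easy to share with students for self-assessment and peer assessment.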

16 Rubrics (cont.)
Types include:
Holistic (overall performance)
Analytic (specific dimensions of performance)
Both are necessary for student assessment
Descriptors must be valid (meaningful)
Scores must be reliable (consistent) and should relate to actual levels of student learning
Can be used by students for self-assessment and to assess the performance of other students

17 Creating a Rubric – Why Bother?
Helps plan activities
Focuses your objectives
Aids in evaluation and grading
Improves instruction
Provides specific feedback to students

18 The Morning After...
Focuses student listening
Guides students to attend to musical aspects of performance
Can be done in groups
Encourages comparison and contrast judgments

19 Journal Keeping
Stenographer's notebooks work best
Younger students need more directed writing assignments
Try to avoid the "pizza & pop" syndrome
Teacher feedback is essential

20 Implementation Abby Butler, Wayne State University

21 Planning to Assess
What aspects of student learning do you want to measure?* Skills, knowledge, understanding
Decide which measurement tools are best suited for the outcomes to be measured
Obtain or develop measurement tools
*Consult the Michigan Merit Curriculum, available online from the MDE.
Choose criteria that provide a balanced picture of what your students are learning:
Skills: singing/playing, technique, sight-singing/sight-reading, reading/writing/notating
Knowledge: terminology, music history, music theory
Understanding: analyzing musical passages, making connections, justifying interpretations, evaluating performances

22 Planning for Assessment
Build assessment into rehearsals:
Develop activities as a context for measuring skills or knowledge
Include these activities in your lesson plans
Develop and use simple procedures for recording assessments:
Laminated seating charts
Electronic devices (iPads, tablets, smartphones, desktop computers)
Plan ahead for where and how this data will be stored (filing system)
In-class rehearsal assessment activities:
Sight-reading exercises (skill)
Playing/singing tests (skill)
Short written quizzes (knowledge)
Score marking (checklist, during or after rehearsal)
Storage of data:
Needs easy access
Filing system for physical and electronic data
Set up so it is easy to average scores or consolidate data (e.g., in Excel)

23 Schedule Assessments Measure each of the identified goals several times throughout the year Baseline measurements Intermediate measurements (formative) Measurement at the end Make the following decisions before the school year begins: Who will be assessed How often assessments will occur When assessments will occur Build these assessments into your year long plan Assessments at beginning of the year serve as baseline against which progress can be measured. Assessments throughout the year can show growth in smaller increments, identify plateaus or unexpected difficulties. This information can be used to inform instructional decisions during the course of the year. Assessments at the end of the year can be compared to baseline measurements to show growth. Who will be assessed – all students? Only certain grades? Will you rotate which grades will be assessed? (These decisions may need to be made with consultation of your principal or supervisor) Which skills, specific information, or understandings will be assessed and which measurement tools will you use to do so? When they will be assessed and how often – every marking period? At the beginning and end of the year? Will you rotate who will be assessed by marking period? 20

24 Working with Your Data
Decide how assessments will be recorded: numbers? Descriptive words?
Decide how you will report the results:
Percentage scores, with differences between beginning- and end-of-year assessments?
Percentage of students moving from one competency level to the next?
Graphic charts, spreadsheets?
Numbers can be counted and averaged for comparison by percentages. Descriptive words can be counted to produce numbers, which in turn can be compared by percentages. If crafting your own assessments, averaged scores will be the easiest to calculate. Consult with your principal to see if the data needs to be reported in a specific format.
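The counting-and-averaging approach described above can be sketched in a few lines: map descriptive rubric words to numbers, average them, and count how many students moved up a level between baseline and end-of-year assessments. The level names and helper functions here are hypothetical illustrations, not a prescribed system:

```python
# Hypothetical sketch: map descriptive rubric words to numbers so results
# can be averaged and compared as percentages across marking periods.
LEVELS = {"Basic": 1, "Developing": 2, "Proficient": 3, "Advanced": 4}

def average_score(ratings):
    """Convert word ratings to numbers and return the class average."""
    scores = [LEVELS[r] for r in ratings]
    return sum(scores) / len(scores)

def percent_improved(beginning, end):
    """Percent of students whose end-of-year level is above their baseline."""
    improved = sum(1 for b, e in zip(beginning, end) if LEVELS[e] > LEVELS[b])
    return 100.0 * improved / len(beginning)

# Invented example data: one rating per student, fall vs. spring.
fall   = ["Basic", "Basic", "Developing", "Proficient"]
spring = ["Developing", "Proficient", "Developing", "Advanced"]
# average rises from 1.75 to 2.75; 3 of 4 students moved up at least one level
```

Either number (the change in average, or the percent of students who improved) is a growth figure that can go straight into a spreadsheet or chart for an evaluation report.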

25 Ex: Comparison of Sight Reading Competency by Ensemble
                       Beginning Ensemble     Intermediate Ensemble    Advanced Ensemble
                       B    D    P    A       B    D    P    A         B    D    P    A
First Marking Period   55%  30%  15%  0%      25%  50%  10%  0%        20%  65%
Last Marking Period    5%   40%  70%

Key: B (Basic), D (Developing), P (Proficient), A (Advanced)

If you used the rubric Mitch presented, with the four levels representing a continuum of competency, you could keep track of the percentage of students in each category, noting changes over time. In order for this information to be meaningful, you would need to:
Describe how each skill set is being assessed (checklist, rating scale, rubric)
Explain the criteria for scores falling within each of the categories (basic, developing, proficient, advanced)
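A percentage table like the one above is just a count of students at each rubric level divided by ensemble size. As a sketch (the category codes follow the key above; the `distribution` helper and the roster data are invented for illustration):

```python
from collections import Counter

# Hypothetical sketch: given each student's rubric level for a marking
# period, compute the percentage of the ensemble at each level, as in
# the competency-by-ensemble comparison table.
CATEGORIES = ["B", "D", "P", "A"]  # Basic, Developing, Proficient, Advanced

def distribution(levels):
    """Return {category: percent of students} for one marking period."""
    counts = Counter(levels)
    total = len(levels)
    return {c: round(100.0 * counts[c] / total) for c in CATEGORIES}

# Invented roster of 20 students, first vs. last marking period.
first = ["B"] * 11 + ["D"] * 6 + ["P"] * 3
last  = ["B"] * 1 + ["D"] * 8 + ["P"] * 9 + ["A"] * 2
# distribution(first) -> {"B": 55, "D": 30, "P": 15, "A": 0}
```

Running `distribution` on the first and last marking periods gives the two rows of the table, so the shift toward higher categories is visible at a glance.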

26 Tips for Starting Out
Develop measurement tools over the summer
Start small by limiting:
The number of students, grade levels, or ensembles assessed
The number and frequency of assessments
Build assessment into your lesson activities
Simplify recording tasks
Choose a system for assigning scores that is easy to average
Consider the amount of time available and the number of students to be evaluated
Work with your administration to develop a plan that is feasible and that measures MEANINGFUL student learning

27 Conclusion Phillip Hash, Calvin College

28 Resources
MCEE website: http://www.mcede.org/
pmhmusic.weebly.com:
Legislative summary
Policy briefs on measuring student growth in non-tested subjects
NAfME and MISMTE position statements
MI GLCE - Music
Sample assessments in use today
This PPT
If you have assessments you are willing to share, please send them and I'll post them on my site. Don't be shy; I know there are a number of teachers designing their own assessments who would like to see what others are doing. I am also giving a workshop on teacher evaluation and measuring student growth at the Ottawa ISD in Holland on Thursday, Feb. 14, at 3:30pm.

29 Responsibilities & Considerations
Design, administer, and evaluate assessments. Growth data must ultimately be quantitative (Rubistar4teachers.org can help with rubric design).
Assessments of student growth used in evaluations need to be the same or very similar for every music teacher of the same specialization within the district; the data must be capable of effectively and fairly comparing one teacher to another.
Valid: the assessment measures what it says it measures and represents the intended learning in the classroom. Connect assessments to the district curriculum and the MI GLCE - Arts. A first step for many districts might be to determine exactly what students should know and be able to do at each grade level.
Reliable (consistent): establish the same testing and grading procedures for all teachers, and periodically grade each other's students to check for consistency.
Integrity of process and transparency: be prepared to demonstrate assessments for administration, other teachers, parents, and other stakeholders.
Record and archive all performance tests (Vocaroo.com), even if you never use them.
Hold regular music staff meetings to review assessments and discuss/resolve issues.
Not all assessment has to demonstrate student growth; growth measures require 2 points in time and must be comparable across classrooms.

30 Questions for the Panel

