
1 Principles of Program Evaluation A Workshop for the Southwest Virginia Professional Education Consortium Christopher R. Gareis, Ed.D. The College of William & Mary

2 Pre-Session Understandings

3 Perspective on Our Time Together Today – The Profession of Education: the regular and expert exercise of learned judgment and skills in service of one of an individual’s life needs and one of society’s greater purposes. Zen and the Art of Professional Development. The Principle of Collective Wisdom.

4 Let’s graph ‘em

5 Shared Understandings What is a “program”? A planned, multi-faceted series of activities leading to intended outcomes for individuals, groups, and/or a community. What is “evaluation”? The use of valid and reliable information/data to make judgments.

6 Definitions of “Program Evaluation” Judging the worth of a program. (Scriven, 1967) The process of systematically determining the quality of a school program and how the program can be improved. (Sanders, 2000) The deliberate process of making judgments and decisions about the effectiveness, direction, and value of educational programs in our schools. (me)

7 DO YOU KNOW OF…? …a program that you thought was effective but was discontinued without explanation? …a program that you believe is likely a waste of time, money, and effort, and yet it continues?

8 Random Acts of Improvement Our Intentions Our Efforts

9 Aligned Acts of Improvement Our Intentions Our Efforts

10 Perspective: Planning ↔ Evaluation

11 The Role of Evaluative Thinking: Planning → Implementing → Redesigning, with evaluative thinking throughout the cycle (Frechtling, 2007)

12 More than 20 major models of program evaluation exist. Seminal model: Daniel Stufflebeam’s CIPP model (1971). Our focus: creating a logic model; focusing the evaluation; identifying performance indicators; EVALUATIVE THINKING.

13 Our Objectives Today
1. Understand basic concepts and principles of program evaluation at the school level
2. Create a logic model that accurately depicts the essential components of and relationships among input, process, and outcome variables of a school-based educational program
3. Select the appropriate focus for the evaluation of a program
4. Identify appropriate data sources aligned to the intended educational outcomes of a selected school-level program
5. Appreciate the importance of engaging in intentional program evaluation as a teacher leader

14 Talk About Your Program
1. What is your program?
2. What does your program do?
3. Who participates in your program?
4. Whom does your program serve?
5. How do they benefit?
6. How do you know, or how would you know, that your program is a success?

15 Think Visually Sketch a visual metaphor of your program.

16 Every educational program:
– Uses resources, or INPUTS, such as materials, funds, teachers, and students
– Engages in activities, or PROCESSES, such as instruction and interventions
– Intends to produce certain results, or OUTCOMES, such as learning, achievement, engendering of certain attitudes or values, eligibility for a next step in life, or bringing about some change

17 Visualizing an Educational Program: The Logic Model. Inputs → Processes → Outcomes

18 What is a logic model? A diagram with text that illustrates and describes the reasoned relationships among program elements and intended outcomes to be attained. A visual theory of change: we use these resources… for these activities… so that these students/teachers… can produce these results… leading to these changes for the better. (Read left to right, the model answers How; read right to left, it answers Why.) Modified from a guest lecture by John A. McLaughlin (November 16, 2009) at The College of William & Mary

19 Simple Logic Model: Inputs → Processes → Initial Outcomes → Intermediate Outcomes → Ultimate Outcomes
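Though the workshop builds logic models on chart paper, the same left-to-right chain can be captured as a small data structure. A minimal sketch in Python (class and field names are my own, not from the workshop), using the everyday antihistamine example that follows:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A program's theory of change: inputs -> processes -> outcomes."""
    program: str
    inputs: list = field(default_factory=list)         # resources we use
    processes: list = field(default_factory=list)      # activities we run
    initial_outcomes: list = field(default_factory=list)
    intermediate_outcomes: list = field(default_factory=list)
    ultimate_outcomes: list = field(default_factory=list)

    def stages(self):
        """Yield (stage name, elements) left to right, as on chart paper."""
        yield "Inputs", self.inputs
        yield "Processes", self.processes
        yield "Initial Outcomes", self.initial_outcomes
        yield "Intermediate Outcomes", self.intermediate_outcomes
        yield "Ultimate Outcomes", self.ultimate_outcomes

# The everyday example from the next slide:
allergy = LogicModel(
    program="Everyday Logic",
    inputs=["Get an antihistamine"],
    processes=["Take the antihistamine"],
    initial_outcomes=["Feel better"],
)
for stage, elements in allergy.stages():
    if elements:
        print(f"{stage}: {', '.join(elements)}")
```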

20 Everyday Logic: Get an antihistamine → Take the antihistamine → Feel better

21 Try This: You’re hungry. Sketch a logic model to address your need. Oh, but one twist: you have no food in the house. (Key idea: Assumptions)

22 Greenlawn Middle School Common Math Planning Initiative
Inputs: Core teachers (math); resource teachers (special education, gifted education)
Processes: Common planning by teachers (frequency, enthusiasm)
Initial Outcomes: Student affect for instruction; student engagement in instruction
Intermediate Outcomes: Student achievement (quarterly class grades in math)
Ultimate Outcomes: Student achievement (6th grade Math SOL, 7th grade Math SOL)

23 Now Try This: You want to take a camping trip with your family to a state park. Sketch a logic model that depicts your intentions and actions. (Key idea: Purpose)

24 The Reach of Intended Outcomes
Time – Initial: immediately or very closely following program activity. Intermediate: end of a term or year, typically. Ultimate: end of a year, several years, or beyond formal K-12 schooling.
People – Initial/Intermediate: targeted individual students and/or aggregate groups. Ultimate: may have an organizational or societal impact.
Indicators – Initial: discrete/finite; measurable and/or observable. Intermediate: may be sets of discrete indicators; measurable and/or observable. Ultimate: may be sets of discrete indicators; may or may not be readily measured or observed.

25 Now Try This: The Great Recession has had a broad and, in many places, deep impact on Americans. One “solution” to our economic morass has been a call from policymakers for improved financial literacy of our citizens. K-12 schools are seen as the logical place to teach young citizens financial literacy. Sketch a logic model that depicts this “theory of change.” (Key idea: Causality)

26 Source: STEM Education Symposium (retrieved 9/16/12).

27

28 A Simple Logic Model: 9th Grade Transition Program
Inputs: 9th grade teachers, guidance counselors, student mentors, new students
Processes: Mentor training, summer orientation, mentor activities, milestone activities
Outcomes: Sense of belonging of 9th graders → academic success of 9th graders → promotion to 10th grade → reduced dropout rates → graduation rate of 9th grade cohorts

29

30 Millett, S. M., & Zelman, S. T. (2005). Scenario analysis and a logic model of public education in Ohio. Strategy & Leadership, 33(2).

31 Source: SUNY NGLC Grant Proposal (retrieved 9/16/12)

32 A Model for Improving Assessment Literacy
Context & Inputs: Experience & expertise of professional developers; experience & expertise of participants (pre-service teachers, in-service teachers, instructional leaders, administrators); psychometric principles translated into practical terms & skills for teachers; state standardized assessments (de facto curriculum?); state &/or school district curriculum; district goals, initiatives, or imperatives
Processes of Professional Development: Explore C=Ia=A alignment; unpack curriculum (content, cognitive levels); create a Table of Specifications (ToS), or “blueprint”; critique an assessment for validity & reliability; explore uses of a ToS (create an assessment, critique & improve an assessment, create a unit plan assessment, plan instruction, analyze assessment results); create good assessment items, prompts, assignments, & rubrics
Outcomes for Teachers: Understand role of C=Ia=A alignment; understand relationship between assessment & evaluation; understand & apply concepts of validity & reliability; use a ToS to create an assessment, critique & improve an assessment, create a unit plan assessment, plan instruction, and analyze assessment results; apply principles to the spectrum of assessment types & practices; create assessments that yield valid & reliable data; use assessment data to make decisions (about student learning, for student learning, about assessments, about instruction); contribute to the design & development of common assessments
Impact on Student Learning: “Opportunity to learn” through aligned instruction; exposure to the full curriculum (e.g., depth & skill development); increased student achievement; deeper, more meaningful learning

33 Program Action Logic Model: Inputs → Processes (Activities → Participation) → Outcomes (Short-term → Intermediate-term → Long-term)

34

35 Developing a Logic Model. Who: groups of 3-4. Materials: chart paper (in “landscape layout”), Post-It notes, & markers. Task: create a basic logic model for the AVID program.

36 Advancement Via Individual Determination (AVID)
The AVID Student: AVID targets students in the academic middle – B, C, and even D students – who have the desire to go to college and the willingness to work hard. These are students who are capable of completing a rigorous curriculum but are falling short of their potential. Occasionally, they will be the first in their families to attend college, and many are from low-income or minority families. AVID pulls these students out of their unchallenging courses and puts them on the college track: acceleration instead of remediation.
The AVID Elective: Students are enrolled not only in their school’s toughest classes, such as honors and Advanced Placement, but also in the AVID elective. For one period a day, they learn organizational and study skills, work on critical thinking and asking probing questions, get academic help from peers and college tutors, and participate in enrichment and motivational activities that make college seem attainable. Their self-images improve, and they become academically successful leaders and role models for other students.
Curriculum: The AVID curriculum, based on rigorous standards, was developed by middle and senior high school teachers in collaboration with college professors. It is driven by the WIC-R method, which stands for writing, inquiry, collaboration, and reading. AVID curriculum is used in AVID elective classes, in content-area classes in AVID schools, and even in schools where the AVID elective is not offered.
Results: A well-developed AVID program improves school-wide standardized test scores, increases enrollment in advanced, rigorous courses, and raises the number of students attending college.
Excerpted from

37 Why AVID Works
Between the remedial programs for students who lag far behind and the gifted-and-talented programs for a school’s brightest children lies the silent majority: average students, who do “okay” in ordinary classes but, because they don’t attract attention to themselves, are left alone. Many of these students hunger for more challenging coursework but fear failure. Their potential lies dormant, waiting to be recognized, encouraged, and supported.
First, AVID identifies these students. The selection criteria include:
– Ability: Are the students getting Cs and Bs but capable of more?
– Desire and determination: Do they want to attend college? Are they willing to work hard to get there?
– Membership in an underserved group: Are they in a low-income household? Will they be the first in their family to attend college?
The AVID program is tailored to the needs of this diverse group of students, and it works for them because it:
– Accelerates underachieving students into more rigorous courses.
– Offers the intensive support students need to succeed in rigorous courses.
– Uses Socratic methods and study groups that specifically target the needs of under-achieving students.
– Is a school-wide initiative, not a school within a school.
– Changes the belief system of an entire school by showing that students who are in the middle academically, as well as low-income and minority students, can achieve at the highest levels and attend college.
– Redefines the role of the teacher from lecturer to advocate and guide, and the role of the counselor from gate-keeper to facilitator.
– Creates site teams of administrators and educators from different content areas, encouraging communication and sharing among teachers, counselors, and principals.
– Is based on research on tracking – the process by which some children are channeled into challenging courses and others are relegated to remedial ones – and on peer influences in student achievement.

38 TRY THIS
Task: Create a logic model for your program
Materials: Chart paper (in “landscape layout”), Post-It notes, markers, and your answers to the earlier questions about your program

39 Gallery Walk (with Docents): Can you read the logic model (without the docent’s assistance)? What questions does the logic model raise for you? Feature(s) you want to “borrow”? Suggestions to strengthen the logic model?

40 Why bother with logic modeling? Seems like a lot of work. Logic models are too complex; how could I realistically ever know all the variables at play in a complex program? I don’t “think” this way. Even if we created a logic model, what would we do with it?!

41 Limits of Logic Models
– Represent intentions, not reality
– Focus on expected outcomes, which may mean missing beneficial unintended outcomes
– Causal relationships are challenging to know
– Do not address whether what we’re doing is right

42 Benefits of Logic Modeling (McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program’s performance story. Evaluation and Program Planning, 22.)
1. Builds a shared understanding of the program and expectations for inputs, processes, and outcomes
2. Helpful for program design (e.g., identifying critical elements for goal attainment and plausible linkages among elements)
3. Points to a balanced set of key performance indicators

43 Let’s Clarify: A Logic Model… IS / IS NOT

44 Are you going to New York or by train?

45 Your question IS your FOCUS
1. Implementation Fidelity – Question: “Are we implementing the program as designed?” Focus: Inputs & Processes
2. Goal Attainment – Question: “Are we seeing evidence that we are achieving our intended outcomes?” Focus: Initial, Intermediate, and/or Ultimate Outcomes

46 ESL Dual-Endorsement Program
Inputs: Approved teacher preparation programs; MDLL TESOL/ESL courses; VDOE regulations for ESL prep; ESL summer school in LEAs
Processes: Recruitment of candidates; scheduling MDLL courses; advising candidates; locating field sites; program approval; arranging field experiences (orientation, transportation, supervision); coursework; field experiences; Elem/Sec/SPED program
Outcomes: Dually-endorsed teachers; satisfaction with preparation; impact on student learning

47 Once you have a logic model and performance indicators, what do you do?
1. Determine the intent (and “audience”): formative evaluation (e.g., implementation fidelity and improvement, with an assumption of continuation) or summative evaluation (e.g., to continue or discontinue)
2. Articulate a limited number of relevant evaluation questions
3. Identify who is needed to conduct the evaluation
4. Determine how the evaluation will be conducted (time, money, data collection & analysis, compilation & reporting)
… Where would Implementation Fidelity (IF) evaluation occur, and where would Goal Attainment (GA) evaluation occur?

48 What is the focus of the program evaluation that would answer each of the following questions? (i.e., “Implementation Fidelity” or “Goal Attainment”)
a. Do methods used by teachers at Smith High School correspond to the principles of block scheduling introduced last fall?
b. Did the new reading curriculum result in improved reading abilities among 2nd graders? Increased interest in reading?
c. How consistently are Instructional Teams developing standards-aligned units of instruction for each subject and grade level (IIA01)?
d. Which recruitment strategies (of the three that we used) were most effective in involving fathers in Early Head Start?
e. How effective was the character education program in our middle school?
f. Did our new guidance program help new ESL students transition socially to our school?
g. How many AVID teachers have we trained during the past three years?
Adapted from Fitzpatrick, Sanders, & Worthen. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Pearson.

49 What Information Do You Use to Answer Your Questions?

50 Examples of Assessment Sources in Schools: student attendance rates; teacher attendance; PTA membership; achievement gap among subgroups; inequity of class enrollment (e.g., proportionally fewer minority students in AP classes); special education referral rates; behavior referrals; reading levels; SOL scores; PALS scores; DIBELS scores; benchmark test scores; SOL recovery data; SAT scores; AP scores; surveys; class grades; grade distributions; Algebra I enrollment/completion; college acceptances; summer school rates; dropout rates; retention rates; acceleration rates; identification for gifted services; athletic championships; debate team competitions; student demographics; staff demographics (e.g., years of experience, licensure status); family demographics (e.g., income, educational levels); financial resources (budget); per-pupil costs

51 What assessment sources could you use to gather relevant, accurate, dependable information/data to answer each question?
a. Do methods used by teachers at Smith High School correspond to the principles of block scheduling introduced last fall?
b. Did the new reading curriculum result in improved reading abilities among 2nd graders? Increased interest in reading?
c. How consistently are Instructional Teams developing standards-aligned units of instruction for each subject and grade level (IIA01)?
d. Which recruitment strategies (of the three that we used) were most effective in involving fathers in Early Head Start?
e. How effective was the character education program in our middle school?
f. Did our new guidance program help new ESL students transition socially to our school?
g. How many AVID teachers have we trained during the past three years?
Adapted from Fitzpatrick, Sanders, & Worthen. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Pearson.

52 Avoid DRIP (data rich, information poor): align your data sources to your logic model. For each component – Inputs, Processes, Initial Outcomes, Long-term Outcomes – identify the data source(s) to assess attainment or completion.
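One way to read this slide: every logic-model component should map to at least one data source, and any component without one is a DRIP risk. A minimal sketch of that alignment check (Python; the components and sources are hypothetical examples, not prescribed by the workshop):

```python
# A minimal alignment check, assuming hypothetical component and source names.
# Each logic-model component should have at least one data source; any
# component with an empty list is a gap in the evaluation plan.
alignment = {
    "Inputs: volunteer teachers recruited": ["staffing roster"],
    "Processes: 8 half-hour review sessions held": ["session attendance logs"],
    "Initial outcome: students' sense of efficacy": ["post-program survey"],
    "Ultimate outcome: SOL pass rates": ["state assessment reports"],
    "Ultimate outcome: prepared for high school": [],  # no source yet
}

gaps = [component for component, sources in alignment.items() if not sources]
for component in gaps:
    print(f"Unaligned (DRIP risk): {component}")
```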

53 Aligning Focus and Assessment Sources: Process (P) or Outcome (O) Indicator?
– Using a classroom observation protocol to determine % of student engagement
– Advanced pass rate on high school end-of-course SOL test
– Questionnaire of 4th grade math students to determine degree of “mathphobia”
– Review of teacher-made lesson plans over the course of a 9-week period to determine % including explicit phonics instruction
– Graduation rates
– Number of “leveled” books available in the media center, tracked over a 3-year period
– Survey of students’ attitudes about participating in an after-school tutorial program (e.g., “On a scale of 1 to 5, how much did you enjoy attending the after-school program? How helpful was your after-school tutor?”)
– 3rd grade SOL pass rates
– % enrollment of minority students in foreign language courses in the high school
– Implementation of an advisory period at the middle school level
– Change from a standard 7-period bell schedule to an alternating-day block schedule
– Average AP scores
– Student attendance rates
– Teacher attendance rates
– Review of committee meeting agendas to determine % that focus on discussion of achievement data
– Budget allocation per student
– Grade point averages

54 AVID: Advancement Via Individual Determination

55 (Retrieved 9/16/12)

56 TRY THIS
Tasks:
1. Imagine undertaking a Goal Attainment evaluation of your program.
a. Articulate at least 2 evaluation questions
b. Identify 1-2 performance indicators (a.k.a. data sources) that could help answer each question
2. If time allows, try doing the same for an Implementation Fidelity evaluation.
Materials: your logic model

57 Sage Advice: Remember that not everything that matters can be measured, and not everything that can be measured matters. Know your means from your ends.

58 To make “evidence-based decisions,” you must have data that are:
Valid – necessary for drawing appropriate inferences
Reliable – necessary for avoiding erroneous judgments
Triangulated – necessary to confirm data
Longitudinal – necessary to determine trends and patterns
Disaggregated – necessary to provide “fine-grain” analysis, rather than over-generalizing
How do SOL results hold up as a data source?
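To make the last two rows concrete: a disaggregated, longitudinal view can reveal patterns that a single overall figure hides. A small illustration with pandas, using invented records (the data and subgroup labels are hypothetical):

```python
import pandas as pd

# Hypothetical SOL records: one row per student per year.
records = pd.DataFrame({
    "year":     [2011, 2011, 2011, 2011, 2012, 2012, 2012, 2012],
    "subgroup": ["A", "A", "B", "B", "A", "A", "B", "B"],
    "passed":   [1, 1, 0, 1, 1, 1, 1, 1],
})

# Over-generalized view: one pass rate per year.
print(records.groupby("year")["passed"].mean())

# Disaggregated, longitudinal view: pass rate by subgroup over time,
# which can reveal an achievement gap the overall rate conceals.
print(records.groupby(["year", "subgroup"])["passed"].mean().unstack())
```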

59 “Without evaluation, change is blind and must be taken on faith.” – Sanders (2000)

60 Managing Complex Change (adapted from Knoster, T. (1991), presentation at TASH Conference, Washington, DC)
Vision + Skills + Incentives + Resources + Action Plan = CHANGE
Missing Vision → CONFUSION
Missing Skills → ANXIETY
Missing Incentives → RESISTANCE
Missing Resources → FRUSTRATION
Missing Action Plan → TREADMILL
Vision + Skills + Incentives + Resources + Action Plan + Evaluative Feedback = CHANGE for the better
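Read as a lookup, the matrix says that whichever element is missing predicts the state a change effort lands in. A toy sketch of that lookup (Python; the mapping follows the matrix above as reconstructed from Knoster’s model):

```python
# The Knoster matrix as a lookup: the element missing from a change effort
# predicts the state the organization lands in (per the slide above).
ELEMENTS = ["Vision", "Skills", "Incentives", "Resources", "Action Plan"]
MISSING_TO_STATE = {
    "Vision": "Confusion",
    "Skills": "Anxiety",
    "Incentives": "Resistance",
    "Resources": "Frustration",
    "Action Plan": "Treadmill",
}

def diagnose(present: set) -> str:
    missing = [e for e in ELEMENTS if e not in present]
    if not missing:
        return "Change"
    # With several elements missing, the first gap (in the slide's order) is
    # reported; real change efforts may exhibit several states at once.
    return MISSING_TO_STATE[missing[0]]

print(diagnose({"Vision", "Skills", "Incentives", "Resources"}))  # Treadmill
```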

61 Strengthening a Program Evaluation Plan
Jane Jackson is a curriculum coordinator for Greenlawn County Public Schools and has been working closely with the district’s middle school faculty to implement interdisciplinary instruction as a means to improving students’ academic engagement and achievement. Within each grade, a team of teachers cooperates to develop lessons that are interdisciplinary. Individual members of the team have been assigned responsibility for the areas of English, mathematics, science, and social studies. The program has been implemented for two full years. Anecdotally, the 7th grade Language Arts teachers (there are four of them, as well as collaboration teachers for both special and gifted education) appear to be engaging in interdisciplinary planning and instruction most enthusiastically and most regularly. Working with the school’s assistant principal, Lynnell Perkins, and the School Improvement Team chair, Archie Craun, Ms. Jackson has decided to evaluate the 7th grade Language Arts program. Her evaluation tentatively includes the following:
a) The means of the 6th grade Writing and Language Arts SOL tests from last year compared to the means of the 7th grade Writing and Language Arts SOL tests from this year.
b) Monthly interviews of a random sample of 10% of the 7th grade student population to assess students’ reactions to the Language Arts portion of the instructional program.
c) A comparison of the mean, median, mode, and range of quarterly grades in 7th grade Language Arts for this year’s 7th grade students, last year’s 7th grade students (current 8th graders), and the 7th grade students for the two years immediately prior to implementation of the program (current 9th and 10th graders).
d) Observations by an outside observer twice a month, using a scale she has devised to record pupil interaction during class discussions and activities.
e) A weekly tracking of teacher lesson plans for evidence of planned interdisciplinary instruction.
Given what you know about program evaluation, what are some of the strengths of Ms. Jackson’s tentative evaluation plan? What would you advise her to change? What would you advise her to add?
Adapted from Fitzpatrick, Sanders, & Worthen. (2004). Program evaluation: Alternative approaches and practical guidelines (3rd ed.). Boston: Pearson.

62 Greenlawn Middle School Interdisciplinary Instruction Initiative
Inputs: Core teachers (English/LA, math, science, history); resource teachers (special education, gifted education)
Processes: Interdisciplinary planning (lesson plans: frequency, enthusiasm)
Initial Outcomes: Student affect (interview); student engagement (observation protocol)
Intermediate Outcomes: Student achievement (quarterly class grades in E/LA: mean, median, mode, & range)
Ultimate Outcomes: Student achievement (6th grade E/LA SOL, 7th grade E/LA SOL)

63 Time to Evaluate
1. Understand basic concepts and principles of program evaluation at the school level
2. Create a logic model that accurately depicts the essential components of and relationships among input, process, and outcome variables of a school-based educational program
3. Select the appropriate focus for the evaluation of a program
4. Identify appropriate data sources aligned to the intended educational outcomes of a selected school-level program
5. Appreciate the importance of engaging in intentional program evaluation as a teacher leader

64 Empirical Research ≠ Program Evaluation. How program evaluation differs:
– Identifying focus: organizational need
– Hypotheses testing: comparison of outcomes to intended program outcomes
– Value judgments: contextually driven
– Replication of results: low likelihood
– Data collection: broad and heavily dependent upon feasibility
– Control of variables: little to none
– Generalizability of results: focus on dependent variables (i.e., results) more so than on independent variables (i.e., the cause)

65 Investigating the Efficacy of a Clinical Faculty Program Christopher R. Gareis, Ed.D. Leslie W. Grant, Ph.D. The College of William & Mary

66

67 What we already know

68

69 Indicators of Improved Effectiveness of Recruitment, Selection, & Training [chart: currently active clinical faculty]

70 Percentage of Placements with CT vs CF

71 Indicators of Perceived Effectiveness of Training
Coherence between field and university: “I have had student teachers in the past with absolutely no guidance or direction from the college. I’m not sure how effective I was. This class has been so beneficial, and I feel I am more confident in mentoring this time! Thank you!”
Skill development as a mentor: “I gained awareness in many of the philosophies behind coaching. As I learned techniques, I grew as a mentor. This program is essential for bringing professionalism to the training of student teachers. Mentoring does not always come naturally, and having techniques and strategies to address problems and guide students is invaluable.”
Outcome orientation: “I am prepared to mentor aspiring teachers. I am prepared to help them become independent and effective teachers.”
Professional growth as a teacher: “I always feel very good about what I’m doing when I leave class. I like having the professional growth and being able to take a good look at my teaching. I have really enjoyed it. What a great program. I didn’t realize how much I didn’t know about being a CT.”

72 Overall Evaluation of Experiences in the Clinical Faculty Training (scale: 1 = Poor, 5 = Excellent)

73 What We Want to Know: Cooperating Teachers vs. Clinical Faculty

74

75 Research Questions
1. To what degree do CF differ from CTs in their sense of self-efficacy for the roles of a cooperating teacher?
2. To what degree do mid-term evaluations of student teachers placed with CF differ from those placed with CTs?
3. To what degree do final evaluations of student teachers placed with CF differ from those placed with CTs?
4. To what degree do new teachers who had been placed with CF differ from those who had been placed with CTs with regard to sense of efficacy for teaching, perceived impact on student learning, and intent to remain in the profession?

76

77 Research Methods – Data Collection
Surveys:
– Clinical Faculty & Cooperating Teachers (RQ1): 1998-2011; n = 101; response rate 37.0%
– Graduates of the School of Education (a.k.a. new teachers) (RQ4): 2005-2010; n = 94; response rate 44.76%
Mid-term & Final Student Teaching Evaluations (RQ2 & RQ3): 2008-2011 (n = 319); student teacher self-evaluations, CT/CF evaluations, university supervisor evaluations

78 Research Methods – Data Analysis
Unit of analysis: cooperating teacher designation
– Clinical Faculty (CF): trained through the School of Education’s Clinical Faculty Program
– Cooperating Teacher (CT): not trained through the School of Education’s Clinical Faculty Program
Statistical analyses: t-tests for significant differences according to CT/CF designation
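For the statistical step, the two t-test variants reported later (equal variances assumed and not assumed) correspond to Student’s and Welch’s tests. A minimal sketch with SciPy, using invented ratings (the study’s actual data are not reproduced here):

```python
from scipy import stats

# Hypothetical efficacy ratings for one survey item.
cf_ratings = [4.5, 4.0, 5.0, 4.5, 4.0, 4.5]   # Clinical Faculty (trained)
ct_ratings = [3.5, 4.0, 3.0, 4.0, 3.5]        # Cooperating Teachers

# Student's t-test: equal variances assumed.
t_eq, p_eq = stats.ttest_ind(cf_ratings, ct_ratings, equal_var=True)

# Welch's t-test: equal variances not assumed.
t_w, p_w = stats.ttest_ind(cf_ratings, ct_ratings, equal_var=False)

print(f"equal variances assumed:     t={t_eq:.2f}, p={p_eq:.3f}")
print(f"equal variances not assumed: t={t_w:.2f}, p={p_w:.3f}")
```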

79 RQ 4

80 Research Questions 2 & 3
2. To what degree do mid-term evaluations of student teachers placed with CF differ from those placed with CTs?
3. To what degree do final evaluations of student teachers placed with CF differ from those placed with CTs?

81

82 Statistically Significant Differences in Mid-Term Ratings (RQ2): 4 out of 30 competencies – Understands subject matter and pedagogy; Implements assessments for learning; Participates in and applies professional development; Demonstrates potential for teacher leadership [table: CF vs. CT ratings]

83 Differences in Mid-Term Ratings (RQ2)

84 Statistically Significant Differences in Final Ratings (RQ3): 1 out of 30 competencies – 16. Creates and selects appropriate assessments for learning [table: CF vs. CT ratings]

85 Differences in Final Ratings (RQ3)

86 “Overall Teaching Effectiveness” Ratings (RQ2 & RQ3) [table: mid-term and final ratings for CF vs. CT from student teacher self-evaluations, cooperating teacher evaluations, and university supervisor evaluations]

87 Overall Rating Comparison [chart: student teacher, CF/CT, and university supervisor ratings]

88 Research Question 1 (RQ1): To what degree do CF differ from CTs in their sense of self-efficacy for the roles of a cooperating teacher?

89 Sense of Efficacy for the Role of CF/CT (RQ1): Clinical Faculty (trained, n = 76) vs. Cooperating Teachers (non-trained, n = 25); t-tests run with equal variances assumed and not assumed. Items (*significance < .05; **significance < .10):
– Convey role as CF/CT*
– Four fundamental roles of mentoring*
– Foster relationship
– Effectively observe*
– Use a variety of supervision strategies**
– Effectively conference
– Summatively evaluate ST*
– Impact professional abilities of ST
– Provide high-quality field experience
– Likelihood ST positively affects pupil learning
– Likelihood ST enters teaching
– Likelihood ST remains in teaching
– Likelihood ST positively impacts pupil learning in first year
– Likelihood ST emerges as a teacher leader

90 To what experiences do CF and CT attribute their mentoring acumen?
– Cooperating Teachers: experience as a classroom teacher (28.0% indicating most important experience)
– Clinical Faculty: Clinical Faculty training* (38.7%)
* 20% of CF selected “experience as a classroom teacher” as most important.

91 RQ 2 RQ 3 RQ 1

92 What could explain…
…differences in student teaching outcomes associated with Clinical Faculty?
…the lack of differences between CF and CTs in both intermediate and long-term outcomes?

93

94 Should we continue, discontinue, or change the W&M Clinical Faculty Program?

95

96 Program Evaluation in Use: SOL Jam & Cram, Berkeley Middle School, Spring 1999

97 SOL Jam & Cram
Inputs: Analyzed SOL results and class performance; selected students (C or B avg., LPT, 7th gr. SOL); invited 175 “in the middle” students; motivational “recruitment”
Processes/Outputs: Analyzed SOL test blueprints; targeted specific SOLs for focused review; review & enrichment activities; high-activity, novel instructional strategies; 8 sessions (1/2 hour each); timely (week prior to SOLs); volunteer teachers (but paid); collaboratively planned lessons; lunch & door prizes provided
Initial Outcomes: Sense of “efficacy” for SOL success (survey of students)
Intermediate Outcomes: SOL assessment results
Ultimate Outcomes: Academically prepared for success in high school; independent thinker; responsible citizen; lifelong learner

98 Why Did Students Attend? (7th / 8th)
– 50% / 32%: It is important to me to do my best in school.
– 33% / 65%: I want to avoid having to go to summer school.
– 10% / 3%: My parents made me sign up.
– 7% / –: Some of my friends signed up, so I did, too.

99 Why Did Students NOT Attend? (7th / 8th)
– 4% / –: I do not enjoy learning.
– – / 6%: I already know that I will have to attend summer school.
– 4% / –: I don’t believe that Jam & Cram would help me on any of the SOL tests.
– 4% / 6%: I do not feel that I fit into the group of students who were selected to attend.
– 52% / 56%: I have a conflict on the day of Jam & Cram, so I wouldn’t be able to attend.
– 26% / 25%: I forgot to return the slip on time.
– 9% / –: I can study fine on my own.
– – / 6%: I didn’t want to go. (“It was really stupid, dumb.”)
– – / 3%: I don’t like school.

100 Post-Survey
Participants – To what degree do you agree or disagree with the following statement: “Jam & Cram helped me to be better prepared for the SOL tests.” [4-pt. scale] 7th Grade: – ; 8th Grade: 3.25
Non-Participants (7th / 8th):
– 35% / 22%: I wish I had attended.
– 45% / 56%: I’m glad that I didn’t attend because I think I did well anyway.
– 21% / 22%: Other (mostly explanations of why they didn’t attend)

101 Percentage of Students Passing SOL Tests Berkeley Middle School Grade 8

102

103

104 An Analysis of SOL Data: What We Can Learn
1. We did great last year, with gains of percentage points in all areas!
2. Science is our strongest area: highest pass rate, highest advanced pass rate, and relatively low disparity.
3. Writing is a strong area: high pass rate and relatively low disparity… but very few advanced passes.
4. English/Reading is a relatively strong area, with a high pass rate and a very high advanced pass rate.
5. Computer Technology is a consistently strong area… but has the second-highest subgroup disparity.
6. Math 8 is conquerable… but the disparity is wide, and our overall pass rate is still in shaky territory.
7. History/Social Studies remains our greatest challenge in terms of overall pass rate and disparity by both ethnicity and gender… BUT we had whopping gains last year!
8. Our year 2002 class (last year’s 6th grade) is academically strong. Our 2003 class (this year’s 6th grade) is even stronger!
9. Jam & Cram was a successful strategy by all anecdotal accounts, but was not effective statistically speaking. We may want to repeat the program, but with definitive subgroups of students and not in a single cram session.
10. SOL Resource class appears to have been a success based on pass rates alone (nearly 100%), although it has not been compared to control data.
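The readings above rest on two recurring computations: year-over-year gains in percentage points and the disparity between highest- and lowest-performing subgroups. A short sketch with invented numbers (none of these figures are the school’s actual results):

```python
# Hypothetical pass rates (%) by subject, this year vs. last year,
# plus subgroup rates for a disparity check.
pass_rates = {
    "Science": {"last": 78, "this": 91},
    "Writing": {"last": 80, "this": 88},
    "Math 8":  {"last": 55, "this": 71},
    "History": {"last": 42, "this": 63},
}

for subject, r in pass_rates.items():
    gain = r["this"] - r["last"]  # gain in percentage points
    print(f"{subject}: {r['this']}% pass rate ({gain:+d} points)")

# Disparity: gap between highest- and lowest-performing subgroups.
subgroup_rates = {"Group A": 84, "Group B": 61, "Group C": 73}
disparity = max(subgroup_rates.values()) - min(subgroup_rates.values())
print(f"Subgroup disparity: {disparity} percentage points")
```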

105

