
Working toward robust knowledge of experimental design: Development of the TED Tutor
Mari Strand Cary, Stephanie Siler, Cressida Magaro & David Klahr
Carnegie Mellon University

A need for instruction

Experimental design and inference are critical skills for scientific thinking. Teachers emphasize them by asking students to complete science fair projects, and standardized tests include related items. Although the ability to infer causal mechanisms is present to some degree in infancy (Schulz & Sommerville, 2006), many middle school children cannot design experiments from which to draw unambiguous causal inferences (Chen & Klahr, 1999; Klahr & Nigam, 2004).

Scaling up our instruction

Our prior work demonstrates an effective method for teaching the Control of Variables Strategy (CVS), a means of designing simple experiments, to students as young as third grade (Strand Cary & Klahr, to appear in a special issue of Cognitive Development). Though promising, this brief, one-size-fits-all instruction has several disadvantages:
- It does not reach all students. In particular, students in low-income schools with large minority populations require explicit scaffolding, repetition, and instruction in multiple domains to master the procedures and concepts of CVS (Li, Klahr, & Siler, 2006). Such adaptations are labor intensive and still fail to reach some students.
- Many students have beliefs and misconceptions that are very resistant to change.
- Even when they seem to understand CVS thoroughly, some students fail to transfer it to new domains or question types.

Thus we are in the process of creating the TED (Training in Experimental Design) computer tutor to provide individualized instruction.
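Individualized instruction of this kind typically rests on a running estimate of each student's mastery of a skill. As a rough illustration of the knowledge-tracing idea in the spirit of Corbett and Anderson (1995), here is a minimal Bayesian knowledge-tracing update; the parameter values and function names are illustrative assumptions, not taken from the TED tutor.

```python
# Minimal sketch of Bayesian knowledge tracing (after Corbett & Anderson, 1995).
# Parameter values are illustrative, not those used in TED.
def bkt_update(p_known, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """Update P(student knows the skill) after one observed response.

    correct   -- True if the response was correct
    p_slip    -- P(wrong answer | skill known)
    p_guess   -- P(right answer | skill unknown)
    p_transit -- P(learning the skill between opportunities)
    """
    if correct:
        # Bayes rule: P(known | correct response)
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # Bayes rule: P(known | incorrect response)
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_slip))
    # Account for learning that may occur before the next opportunity.
    return posterior + (1 - posterior) * p_transit

# Trace a hypothetical student across four practice opportunities.
p = 0.3  # prior probability the skill is already known
for response in [True, True, False, True]:
    p = bkt_update(p, response)
```

A tutor can compare the running estimate against a mastery threshold to decide when to fade scaffolding or move on; correct responses push the estimate up, errors push it down.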
Most students do not attain experimental design and inference skills on their own or in their K-8 classes (see Klahr & Li, 2005, for a review). Prior research (Strand Cary & Klahr, to appear in a special issue of Cognitive Development; Chen & Klahr, 1999) has produced a promising technique for teaching experimental design skills; however, some students are left behind.

This research was supported by the Department of Education, IES (#R305H060034). Questions? Comments? Please contact the authors. CDS 2007

Training in Experimental Design (TED) Tutor

What is TED?
- An experimental design computer tutor used individually by 4th-8th grade students in a full-class setting
- Used before, during, or after teacher instruction
- Individualized, adaptive instruction in designing, evaluating, and interpreting simple experiments
- Efficient instruction that will be effective for all students
- Multifaceted instruction leading to long-lasting, generalized knowledge demonstrable in new domains and question formats

How will it help all students learn?

Instruction, practice, and assessment will be based on each individual's knowledge and mastery in real time. Ongoing recording and analysis of student actions and formative assessments will enable the tutor to perform "knowledge tracing" and "model tracing" and to adapt to the current "student model" (Anderson, Boyle, Corbett, & Lewis, 1990; Corbett & Anderson, 1995) while selecting instruction. Teachers can use student performance information to decide which students to spend more time with, or to guide additional full-class instruction (e.g., if common errors arise).

How will the tutor adapt to each student?
- Build on student strengths and current knowledge
- Focus on the student's weak or missing knowledge
- Provide "just-in-time" and "just enough" feedback
- Ask a range of question types
- Balance domain-specific and domain-general instruction
- Adjust pacing and coverage
- "Fade" scaffolding
- Introduce new domains (to renew motivation, test understanding, and promote generalization)
- Require forward and backward reasoning (e.g., design an experiment to answer a specific question; determine the experimental question by examining a well-designed experiment)

How do students incorrectly design simple experiments?

Does the SURFACE affect how far balls roll?

Variable            Ramp 1    Ramp 2
Slope               Steep     Not steep
Ball                Red       Yellow
Surface             Smooth    Rough
Starting position   Top       Middle

Does WHERE YOU STUDY affect your grades?

Variable            Group 1   Group 2
Studying location   Desk
Lighting            Well lit  Dim
Noise level         Loud      Quiet

Typical flawed designs:
- Completely confounded experiment
- Two identical set-ups
- Holding the target variable constant while varying the other variables
- Partially confounded experiment: choosing combinations of variables so that each set-up is individually "optimally designed"
- Unconfounded experiment for the wrong target variable

What is behind these errors?
- Guessing
- Carelessness
- Different goals
- Different or incorrect experimental logic
- Misuse of visual representations

For example, identifying and fixing problematic experiments, as often required by confusing standardized test items, seems to be more difficult than designing experiments from scratch. A sample item: "Some fourth-grade students were doing a project for their science class. They were trying to find the answer to the question 'Do beetles choose to live in bright light or in the shade?' The picture below shows how a student set up the experiment to find out if beetles choose to live in bright light or in the shade. Is this a good way to set up the experiment? Why or why not?"
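The error taxonomy above can be stated as a simple rule: a two-condition experiment is unconfounded for a target variable exactly when that variable differs between the set-ups and every other variable is held constant. A minimal sketch of such a classifier follows; the function and variable names are hypothetical, not taken from the TED materials.

```python
# Classify a two-condition experiment against the Control of Variables
# Strategy (CVS). Each set-up is a dict mapping variable names to values.
# Labels mirror the error taxonomy above; names here are illustrative.
def classify_design(setup_a, setup_b, target):
    """Return a CVS diagnosis for a contrast between two set-ups."""
    differing = {v for v in setup_a if setup_a[v] != setup_b[v]}
    if not differing:
        return "identical set-ups"            # nothing varies at all
    if differing == {target}:
        return "unconfounded"                 # valid CVS design
    if target in differing:
        return "confounded"                   # target varies, but so do others
    if len(differing) == 1:
        return "unconfounded for wrong target variable"
    return "target held constant, other variables varied"

# The confounded ramp comparison from the table above: every variable differs.
ramp_a = {"slope": "steep", "ball": "red", "surface": "smooth", "start": "top"}
ramp_b = {"slope": "not steep", "ball": "yellow", "surface": "rough", "start": "middle"}
print(classify_design(ramp_a, ramp_b, "surface"))  # confounded
```

Fixing the design means holding slope, ball, and starting position equal across ramps and varying only the surface, which the same check then labels "unconfounded".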
A second sample item: "To find out whether seeds grow better in the light or dark, you could put some seeds on pieces of damp paper and
A. keep them in a warm, dark place
B. keep one group in a light place and another in a dark place
C. keep them in a warm, light place
D. put them in a light or dark place that is cool"

TED's Iterative Design Process

- Version n
- Pilot testing and classroom implementation (plus pre-, post-, and formative assessments)
- Human tutoring (plus delayed post-assessment)
- Revise the model of student thinking, the instruction, and the interface

Teaching CVS with ramps (screen shots taken from the current version, which involves small-group instruction by researchers): ramp familiarization; reviewing CVS; seeking a reminder during experiment design; explaining the visual representations (good/bad).

References

Anderson, J. R., Boyle, C. F., Corbett, A. T., & Lewis, M. W. (1990). Cognitive modeling and intelligent tutoring. Artificial Intelligence, 42.
Chen, Z., & Klahr, D. (1999). All other things being equal: Children's acquisition of the control of variables strategy. Child Development, 70(5).
Corbett, A. T., & Anderson, J. R. (1995). Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling and User-Adapted Interaction.
Klahr, D., & Li, J. (2005). Cognitive research and elementary science instruction: From the laboratory, to the classroom, and back. Journal of Science Education and Technology, 4.
Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15(10).
Li, J., Klahr, D., & Siler, S. (2006). What lies beneath the science achievement gap? The challenges of aligning science instruction with standards and tests. Science Educator, 15, 1-12.
Schulz, L., & Sommerville, J. (2006). God does not play dice: Causal determinism and preschoolers' causal inferences. Child Development, 77(2).
Strand Cary, M., & Klahr, D. (to appear). Special issue of Cognitive Development.