Web-Homework Platforms: Measuring Student Efforts in Relationship to Achievement

Michael J. Krause

INTRODUCTION

Since the 2007 fall semester, I have used web-based accounting textbook homework platforms with two different elementary accounting books and with two different Intermediate Accounting textbooks. Because of class size, I have had more viable measurement opportunities with beginning-level students to analyze the effectiveness of these web-based systems. My definition of "effectiveness" centers on the ability of the web-based system to link a measurement of student effort with exam performance outcomes. Additionally, when a text's test bank provides Bloom's Taxonomy designations, I can differentiate performance outcomes between higher and lower levels of learning.

I categorized class performance on each unit exam by subdividing the population into thirds: top, middle, and bottom achievers. I then tried to link test performance to the students' efforts to prepare for the exam. The artifact available to reveal this link between effort and outcome is generated by the web-homework platform itself: after grading homework and extra review assignments against my stated parameters, the platform awards points. These platform points have no intuitive meaning on their own, since they vary with the number of questions assigned in anticipation of a particular unit exam. However, when compared within population sub-groups, the platform points should give insight into the fundamental factors behind student achievement, or the lack of it.

The observations presented here were made during the 2010-2011 academic year. I taught four sections of Financial Accounting, two sections per semester, and measured student performance on two unit exams, using the same exams each semester. The fall 2010 enrollment fell from 79 to 73 across the two exams, while the spring 2011 enrollment started at 67 and ended at 58. Each exam had thirty multiple-choice questions, each categorized as either "knowledge or comprehension" (lower levels of learning) or "application or above" (higher levels), in keeping with the Bloom's model. An initial analysis of each exam generated an estimate of which students used the web system: web users were projected to be the top two thirds ranked by platform unit score, which therefore truncated the population by a third.

My past efforts to link Bloom's Taxonomy with outcome measurements have produced a consistent finding. When using multiple-choice questions designed to measure knowledge, comprehension, and application, an error pattern emerges relative to the question population. In short, I compare each question category's error rate against its existence rate, that is, the portion of the exam's questions that the category represents. The error rate for "knowledge" questions appears to be less than that category's existence rate, while the error rate for "application" questions appears to be greater than its existence rate. For "comprehension" questions, the error rate and the existence rate appear to be about the same. This study seeks to replicate those prior findings, but in a different manner: "knowledge" questions are combined with "comprehension" questions, and "application" questions are grouped with higher levels of learning such as "analysis." The text's test bank provided the designation of the learning level each question measures.
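To make the error-rate-versus-existence-rate comparison concrete, here is a minimal Python sketch of the computation as the exhibits below apply it. The record layout (one tagged answer per student per question) is an assumption for illustration, not the actual test-bank or gradebook format.

```python
from collections import Counter

# One record per student per exam question, tagged with the question's
# Bloom's grouping: "K or C" (knowledge/comprehension) or
# "A or A" (application/analysis). Invented sample data.
answers = [
    {"category": "K or C", "correct": True},
    {"category": "K or C", "correct": False},
    {"category": "A or A", "correct": False},
    {"category": "A or A", "correct": True},
]

def error_vs_existence(answers):
    total = len(answers)
    exist = Counter(a["category"] for a in answers)                  # answers per category
    wrong = Counter(a["category"] for a in answers if not a["correct"])
    for cat, count in exist.items():
        error_rate = wrong[cat] / count      # share of this category answered incorrectly
        existence_rate = count / total       # this category's share of the whole exam
        print(f"{cat}: error rate {error_rate:.0%} vs. existence rate {existence_rate:.0%}")

error_vs_existence(answers)
```

Under this reading, a category "performs to expectations" when its error rate roughly matches its existence rate, and under- or over-performs when the error rate falls below or rises above it.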
GENERAL OBSERVATIONS

EXHIBIT 1 – Exam #1 Test Score vs. Web-Platform Unit Score
[Mean raw scores for the Top 3rd, Middle 3rd, Bottom 3rd, and Whole Class; columns: Fall 2010 Test Score (Max = 100), Fall 2010 Platform Score (Max = 165), Spring 2011 Test Score (Max = 100), Spring 2011 Platform Score (Max = 258); N = 79 fall, 67 spring.]

EXHIBIT 2 – Exam #1 Test Score vs. Web-Platform Unit Score (Common-Sized)
Population: N = 79 fall, N = 67 spring

               Fall 2010     Fall 2010        Spring 2011   Spring 2011
               Test Score    Platform Score   Test Score    Platform Score
               (Max = 100)   (Max = 165)      (Max = 100)   (Max = 258)
Top 3rd        100.00%       100.00%          100.00%       100.00%
Middle 3rd      77.34%        74.65%           74.87%        96.89%
Bottom 3rd      60.43%        46.10%           57.18%        65.56%
Whole Class     79.23%        73.60%           77.31%        87.62%

EXHIBIT 3 – Exam #2 Test Score vs. Web-Platform Unit Score
[Mean raw scores for the Top 3rd, Middle 3rd, Bottom 3rd, and Whole Class; columns: Fall 2010 Test Score (Max = 100), Fall 2010 Platform Score (Max = 155), Spring 2011 Test Score (Max = 100), Spring 2011 Platform Score (Max = 230); N = 73 fall, 58 spring.]

EXHIBIT 4 – Exam #2 Test Score vs. Web-Platform Unit Score (Common-Sized)
Population: N = 73 fall, N = 58 spring

               Fall 2010     Fall 2010        Spring 2011   Spring 2011
               Test Score    Platform Score   Test Score    Platform Score
               (Max = 100)   (Max = 155)      (Max = 100)   (Max = 230)
Top 3rd        100.00%       100.00%          100.00%       100.00%
Middle 3rd      78.82%        74.08%           73.35%        78.57%
Bottom 3rd      56.24%        33.23%           55.26%        48.60%
Whole Class     78.35%        69.17%           76.16%        75.78%

BLOOM'S TAXONOMY OBSERVATIONS

K or C = Knowledge or Comprehension; A or A = Application or Analysis

EXHIBIT 5 – Exam #1, Fall 2010 Analysis (Error Rate vs. Existence Rate)

                 Top 3rd   Middle 3rd   Bottom 3rd   Total Class   Total Answers   Exist. Rate
Error, K or C      24%        42%          55%          40%            1580            67%
Error, A or A      31%        53%          64%          50%             790            33%
Error, All         26%        46%          58%          43%            2370           100%

EXHIBIT 6 – Exam #2, Fall 2010 Analysis (Error Rate vs. Existence Rate)

                 Top 3rd   Middle 3rd   Bottom 3rd   Total Class   Total Answers   Exist. Rate
Error, K or C      25%        38%          47%          37%            1095            50%
Error, A or A      31%        50%          65%          49%            1095            50%
Error, All         28%        44%          56%          43%            2190           100%

WEB-PLATFORM SPECIFIC OBSERVATIONS

EXHIBIT 7 – Test Scores of Exams #1 & #2 (All Results vs. Web-Platform Users)
Only = Used Web
[Mean test scores by third (Top, Mid, Bot.), All vs. Only, for Exam #1: Fall 2010 (N = 79/53) and Spring 2011 (N = 67/45); and for Exam #2: Fall 2010 (N = 73/49) and Spring 2011 (N = 58/39).]

CONCLUSIONS

1. Web users of all talent levels consistently outperformed those who did not use the platform.
2. The bottom third of the class fell significantly below the class average in both test score and web use.
3. The bottom third performed below expectations at Bloom's lowest levels of learning.
4. The top third performed above expectations at Bloom's highest levels of learning.
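The common-sized exhibits lend themselves to a short computational illustration. The Python sketch below expresses each third's mean score as a percentage of the top third's mean, which is my reading of "common-sized" here (by analogy to common-size financial statements, with the top third as the base, hence its 100% row); the data layout and sample scores are invented for illustration.

```python
def common_size(scores):
    """scores: list of (test_score, platform_score) tuples, one per student."""
    ranked = sorted(scores, key=lambda s: s[0], reverse=True)  # rank by test score
    n = len(ranked)
    groups = {
        "Top 3rd": ranked[: n // 3],
        "Middle 3rd": ranked[n // 3 : 2 * n // 3],
        "Bottom 3rd": ranked[2 * n // 3 :],
        "Whole Class": ranked,
    }

    def means(group):
        tests, platforms = zip(*group)
        return sum(tests) / len(group), sum(platforms) / len(group)

    base_test, base_platform = means(groups["Top 3rd"])  # common-size base
    for label, group in groups.items():
        t, p = means(group)
        print(f"{label}: test {t / base_test:.2%}, platform {p / base_platform:.2%}")

# Example with invented scores:
common_size([(95, 160), (88, 150), (76, 120), (71, 130), (60, 80), (55, 40)])
```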
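Exhibit 7's all-versus-users comparison can be sketched the same way: project the web users ("Only") as the top two thirds ranked by platform score, then recompute each test-score third's mean over the projected users alone. Again, the data structures are assumptions for illustration, not the actual platform export.

```python
def users_vs_all(scores):
    """scores: list of (test_score, platform_score) tuples, one per student.
    "Only" = projected web users: the top two thirds ranked by platform score."""
    n = len(scores)
    by_platform = sorted(range(n), key=lambda i: scores[i][1], reverse=True)
    users = set(by_platform[: 2 * n // 3])          # projected web users

    by_test = sorted(range(n), key=lambda i: scores[i][0], reverse=True)
    thirds = [("Top", by_test[: n // 3]),
              ("Mid", by_test[n // 3 : 2 * n // 3]),
              ("Bot.", by_test[2 * n // 3 :])]

    for label, idx in thirds:
        all_mean = sum(scores[i][0] for i in idx) / len(idx)
        only = [i for i in idx if i in users]
        only_mean = sum(scores[i][0] for i in only) / len(only) if only else float("nan")
        print(f"{label}: All {all_mean:.1f} (n={len(idx)}) vs. Only {only_mean:.1f} (n={len(only)})")

users_vs_all([(95, 160), (88, 150), (76, 120), (71, 130), (60, 80), (55, 40)])
```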