Presentation transcript: "New Models of Technology Sensitive Evaluation: Giving Up Old Program Evaluation Ideas," Center for the Study of Evaluation, National Center for Research on Evaluation, Standards, and Student Testing (CRESST), UCLA

Slide 1: New Models of Technology Sensitive Evaluation: Giving Up Old Program Evaluation Ideas
Eva L. Baker and Joan L. Herman
Center for the Study of Evaluation
National Center for Research on Evaluation, Standards, and Student Testing (CRESST)
SRI, February 25, 2000

Slide 2: Goals of Presentation
- Outline Purposes and Challenges of Technology Evaluation
- Describe Present Limitations of Technology Evaluation
- Suggest Improvements

Slide 3: Purposes of Technology Evaluation
- Soothe Anxiety
- Justify Expenditures
- Judge Impact
- Identify Shortfalls
- Improve Outcomes
- Shore Up Managers' Images
- Demonstrate Technology Use in Evaluation

Slide 4: Limitations of Current Approaches to the Evaluation of Technology
- Conception of Evaluation
- Designs
- Measures
- Validity of Results
- That About Covers It

Slide 5: Limitations: Concept of Evaluation
- The Scholarly Overlay of "Formative" and "Summative" Evaluation Makes Little Sense in Education in General and No Sense in Technology Implementations
- Focus on "Value Added" Using New Outcomes Instead of Limiting Measures to the Lowest Common Denominator
- Evaluation Should Match the Cycle Time of Technology, e.g., No Five-Year Studies

Slide 6: Limitations: Designs
- Existing Designs Are Usually Messy, Volunteer Studies of Available Classrooms
- Randomized Treatment Allocations Are Possible, but Compensation from Home and Other Environments, as Well as Pressures for Equal Access, Make Them Impractical in the Long Run Without Creative Strategies
- Treatments Need to Be Reconceptualized in Terms of Control and Uniformity

Slide 7: Limitations: Design/Procedures
- Need for Collective Bargaining Agreements for Participation in Evaluation: Data Provision, Types of Information, Ability to Monitor Children and Adults
- Human Subjects and Informed Consent

Slide 8: Limitations: Measures of Technology Effects
- Opinion, Implementation, Smile Tests
- Student Performance Measures Insensitive to Implementations
  - Mismatch Between Desired Goals and Measures
  - Standardized Measures
  - Mom-and-Pop Measures Lacking Technical and Credible Qualities
  - Masked by "Standards-Based" Rhetoric

Slide 9: Families of Cognitive Demands
- Self-Regulation
- Communication
- Content Understanding
- Problem Solving
- Teamwork and Collaboration
- Learning

Slide 10: Cross-Walk to Guide the Simultaneous Design of Assessment and Instruction
- Cognitive Models (Task Specifications, Scoring Guides) Become Implemented in Different Subject Matters
- Domain-Independent and Domain-Dependent Components
- Used for Design and/or Administration and/or Scoring
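
The cross-walk idea can be pictured as a small data structure: a domain-independent cognitive model (a family of cognitive demand plus generic scoring dimensions) reused across domain-dependent task specifications in different subject matters. The sketch below is illustrative only; the class names (CognitiveModel, TaskSpecification) and fields are hypothetical and are not taken from any CRESST authoring tool.

```python
# Illustrative sketch only: hypothetical names, not a CRESST specification.
from dataclasses import dataclass

@dataclass
class CognitiveModel:
    """Domain-independent component: a family of cognitive demand
    plus the generic dimensions of its scoring guide."""
    demand: str
    scoring_dimensions: list

@dataclass
class TaskSpecification:
    """Domain-dependent component: one subject-matter instantiation
    of a cognitive model, usable for design, administration, or scoring."""
    subject: str
    prompt: str
    model: CognitiveModel

# One domain-independent model...
problem_solving = CognitiveModel(
    demand="problem solving",
    scoring_dimensions=["strategy", "accuracy", "explanation"],
)

# ...implemented in two different subject matters.
science_task = TaskSpecification(
    subject="science",
    prompt="Design an experiment to find which insulator keeps water warm longest.",
    model=problem_solving,
)
history_task = TaskSpecification(
    subject="history",
    prompt="Use the documents to explain two causes of the 1929 crash.",
    model=problem_solving,
)

print(science_task.model is history_task.model)  # True: the same model is reused
```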

Slide 11: Next Generation: Authoring Systems for Multiple Purposes
- Not an Item Bank
- Capture Content, Task, Process
- Instant Scoring and Feedback
- Expert-Based
- Beyond the Screen

Slide 12: Limitations: Validity of Results
- Sources: Inappropriate Measures, Unclear Assignment, Treatment Vagaries
- Even in the Best Case: Generalizing to What? By the Time We Complete a Study, the Treatments Are Obsolete

Slide 13: Suggestions for Improvement
- Defining the Objectives of Evaluation
- Fixing Measures
- Addressing the Concept of Evaluation

Slide 14: Distributed Evaluation: Characteristics and Functions
- Conceptually Congruent with Distribution and Decentralization
- Provides Information for Users and Program Managers
- Allows Flexibility in Implementation Measures, e.g., Measures of Engagement, While Raising the Standards for Validity

Slide 15: An Indicators Approach
- Flexibility
- Longitudinal
- Data Capture
- Disaggregation and Aggregation
- Data Export/Import
- Local Engagement and Feedback
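
As a rough illustration of what "data capture, disaggregation and aggregation" could look like in practice, the sketch below stores longitudinal indicator records and either rolls them up to the site level or breaks them out by subgroup. All names (IndicatorRecord, aggregate, disaggregate) are hypothetical; this is not code from the Quality School Portfolio or any other CRESST system.

```python
# Hypothetical sketch of an indicators data store; names are illustrative only.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class IndicatorRecord:
    site: str          # e.g., a school
    subgroup: str      # e.g., grade level or program status
    occasion: str      # e.g., "1999-fall"; repeated occasions give the longitudinal view
    indicator: str     # e.g., "engagement", "reading_score"
    value: float

def aggregate(records, indicator, occasion):
    """Roll all subgroups up to one site-level mean for an indicator and occasion."""
    by_site = defaultdict(list)
    for r in records:
        if r.indicator == indicator and r.occasion == occasion:
            by_site[r.site].append(r.value)
    return {site: mean(values) for site, values in by_site.items()}

def disaggregate(records, indicator, occasion):
    """Break the same indicator out by subgroup within each site."""
    by_group = defaultdict(list)
    for r in records:
        if r.indicator == indicator and r.occasion == occasion:
            by_group[(r.site, r.subgroup)].append(r.value)
    return {key: mean(values) for key, values in by_group.items()}

records = [
    IndicatorRecord("School A", "grade 4", "1999-fall", "engagement", 3.2),
    IndicatorRecord("School A", "grade 5", "1999-fall", "engagement", 2.8),
    IndicatorRecord("School B", "grade 4", "1999-fall", "engagement", 3.6),
]
print(aggregate(records, "engagement", "1999-fall"))     # site-level view
print(disaggregate(records, "engagement", "1999-fall"))  # subgroup view
```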

Slide 16: An Indicators Approach (continued)
- Report Generation to Appropriate Constituencies
- Updatable
- Operational
- Creates the Right "Management, High Tech" Patina

Slide 17: Quality School Portfolio
- Longitudinal Database
- Standards-Based
- Multi-Purpose
- Multi-User
- Multiple Occasions
- Local Goals
- Automated Reports

Slide 18: Key Attributes of Distributed Evaluation
- Measures: Fixed and Flexible
  - Some Common Performance Measures
  - An "Indicator" Mentality for Outcome Measures from Archival Sources
  - Common and Site-Specific Implementation Measures
  - Fixed and Flexible Data Collection Schedule
  - Feedback Is a Feature in Every Design
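
One way to picture the "fixed and flexible" attribute is a per-site data-collection plan that merges a common core of measures and occasions with site-specific additions. The sketch below is a hypothetical illustration under that assumption; the measure names, function, and fields are invented for this example and do not come from the presentation.

```python
# Hypothetical sketch of a fixed-plus-flexible measurement plan; names are invented.
COMMON_MEASURES = ["content_understanding", "problem_solving"]  # fixed core
COMMON_SCHEDULE = ["fall", "spring"]                            # fixed occasions

def build_site_plan(site_name, local_measures=(), extra_occasions=()):
    """Merge the fixed core with a site's flexible additions."""
    return {
        "site": site_name,
        "measures": COMMON_MEASURES + list(local_measures),
        "schedule": COMMON_SCHEDULE + list(extra_occasions),
        "feedback": True,  # feedback to the site is a feature of every design
    }

plan = build_site_plan(
    "School A",
    local_measures=["technology_engagement"],  # site-specific implementation measure
    extra_occasions=["winter"],                # flexible addition to the schedule
)
print(plan)
```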

Slide 19: More Characteristics
- Local Site Is a Participant in, Rather Than a Recipient of, Evaluation
- Software-Based for Easy Data Entry, Feedback, and Update
- Tailored Reports
- Design Independent

