
1 Benchmark Advisory Test (BAT) Update BILC Conference Athens, Greece Dr. Ray Clifford and Dr. Martha Herzog 22-26 June 2008

2 This Update Has Four Parts Why we began the BAT project. The role of proficiency standards. Why the BAT follows a construct-based, evidence-centered test design. When the BAT will be available.

3 Why we began the BAT project. In 2006, a survey was conducted on the desirability of a BILC-sponsored, “benchmark” test with advisory ratings.

4 Participation in the Survey 16 countries responded to the survey: Austria, Bulgaria, Canada, Denmark, Estonia, Finland, Germany, Hungary, Italy, Latvia, Lithuania, Poland, Romania, Spain, Sweden, Turkey

5 Survey Results 1. Would your country use a Benchmark Test if one were available? Definitely yes: 8; Probably yes: 5; Perhaps: 2; Most likely not: 0; Definitely not: 1

6 Survey Results 2. Does your country use “plus levels” when assigning STANAG ratings? Definitely yes: 3; Probably yes: 0; Perhaps: 1; Most likely not: 1; Definitely not: 11

7 Survey Results 3. Would you like to have plus levels incorporated into a Benchmark Test? Definitely yes: 5; Probably yes: 5; Perhaps: 2; Most likely not: 2; Definitely not: 2

8 Conclusions A “benchmark” test would be welcomed by most countries. (The scores should be advisory in nature.) Providing “plus” level ratings would allow greater fidelity in making comparisons. BILC should proceed with plans to: – Develop a benchmark STANAG test of reading comprehension. – Explore internet delivery options.

9 The Role of Proficiency Standards Dr. Martha Herzog BILC Athens, Greece 22-26 June 2008

10 TRAINING IN TEST DESIGN LANGUAGE TESTING SEMINAR – 20 iterations – 265 participants – 38 nations – 4 NATO officers – Facilitators from 10 nations

11 BENCHMARK TESTS Tests of all four skills. Measures Levels 1 through 3.

12 STANDARDS All standards have three components: – Content – Tasks – Accuracy

13 TEAMWORK The Working Group functions as a team: – 13 members from 8 nations – Contributions from many other nations

14 Summary STANDARDS TRAINING IN TEST DESIGN TEAMWORK TECHNOLOGY

15 Why does the BAT follow a construct-based, evidence-centered test design?

16 Because a CBT, ECD design solves a major problem encountered when testing proficiency in the receptive skills, i.e., in testing Reading and Listening.

17 In contrast to traditional test development procedures, CBT allows direct (rather than indirect) application of the STANAG 6001 Proficiency Scales to the development and scoring of Reading and Listening Proficiency Tests.

18 Test Development Procedures: Norm-Referenced Tests Create a table of test specifications. Train item writers in item-writing techniques. Develop items. Test the items for difficulty, discrimination, and reliability by administering them to several hundred learners. Use statistics to eliminate “bad” items. Administer the resulting test. Report results compared to other students.
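To make the statistical screening step concrete, here is a minimal sketch of the classical item statistics this procedure relies on: difficulty as the proportion answering correctly, and discrimination as the point-biserial correlation between an item and the total score. The function names, example data, and flagging thresholds are illustrative assumptions, not part of the BAT.

```python
# Illustrative sketch of classical item screening (not from the slides).
from math import sqrt

def item_statistics(responses):
    """responses: one list per examinee of 0/1 item scores."""
    n = len(responses)
    totals = [sum(r) for r in responses]
    mean_t = sum(totals) / n
    sd_t = sqrt(sum((t - mean_t) ** 2 for t in totals) / n)
    stats = []
    for i in range(len(responses[0])):
        right = [t for r, t in zip(responses, totals) if r[i] == 1]
        p = len(right) / n                      # difficulty: proportion correct
        if 0 < p < 1 and sd_t > 0:
            # Point-biserial: do high scorers tend to get this item right?
            r_pb = (sum(right) / len(right) - mean_t) / sd_t * sqrt(p / (1 - p))
        else:
            r_pb = 0.0                          # degenerate item: everyone or no one
        stats.append((i, p, r_pb))
    return stats

# Items with extreme difficulty or low discrimination are the "bad" items;
# these cut-offs are common rules of thumb, not the BAT's actual criteria.
bad = [i for i, p, r in item_statistics([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]])
       if p < 0.2 or p > 0.9 or r < 0.2]
```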

19 Test Development Procedures: Norm-Referenced Tests (cont.) Each test administration yields a total score. However, setting “cut” scores or “passing” scores on norm-referenced tests is a major challenge. And relating scores on norm-referenced tests to a polytomous set of criteria (such as levels in the STANAG 6001 or other proficiency scales) is even more problematic.

20 A Traditional Method Of Setting Cut Scores [Chart: a test to be calibrated is administered to Level 1, Level 2, and Level 3 groups of “known” ability, with scores plotted on a 0-100 scale]

21 The Results One Hopes For: [Chart: distinct “cut” scores separating the score ranges of the groups of “known” ability]

22 The Results One Always Gets: [Chart: bands of overlapping test scores across the groups of “known” ability]

23 No matter where the cut scores are set, they are wrong for someone. [Chart: where in the overlapping range should the cut score be set?]

24 Why is this “overlap” in scores always present? A single test score on a multi-level test… – Gives equal credit for every right answer regardless of its proficiency level. – Camouflages by-level abilities. – Is a “compensatory” score. Proficiency abilities… – Are by definition “non-compensatory”. – Require demonstration of sustained ability at each level.
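A toy contrast of the two scoring philosophies, using per-level percentage scores like those in the worked example that follows. Everything here is invented for the sketch; the 70% “sustained” threshold is borrowed from the scale on slide 26.

```python
# Compensatory total vs. non-compensatory proficiency rating (illustrative).

def compensatory_total(level_scores):
    # One pooled number: a right answer at Level 3 offsets a miss at Level 1.
    return sum(level_scores)

def noncompensatory_rating(level_scores, sustained=70):
    # Credit a level only while ability is sustained at it and all levels below.
    rating = 0
    for level, score in enumerate(level_scores, start=1):
        if score < sustained:
            break
        rating = level
    return rating

profile_a = [85, 70, 40]   # sustained through Level 2
profile_b = [65, 70, 60]   # same total, but Level 1 is not sustained
print(compensatory_total(profile_a), compensatory_total(profile_b))          # 195 195
print(noncompensatory_rating(profile_a), noncompensatory_rating(profile_b))  # 2 0
```

The identical totals hide entirely different ability profiles, which is exactly the overlap problem described above.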

25 A Better Test Design: Construct-Based Proficiency Testing Uses a “floor” and “ceiling” approach similar to that used in Speaking and Writing tests. The proficiency rating is assigned based on two separate scores: – A “floor” proficiency level of sustained ability across a range of tasks and contexts specific to that level. – A “ceiling” proficiency level of non-sustained ability at the next higher proficiency level.

26 Therefore Construct-Based Testing Tests each proficiency level separately. – Three tests for levels 1 through 3. – Or three subtests within a longer test. Rates each level-specific test separately. Applies the “floor” and “ceiling” criteria used in rating productive skills, using a scale such as: – Sustained (consistent evidence) = 70% to 100% – Developing (present, inconsistent) = 55% to 65% – Emerging (some limited evidence) = 40% to 50% – Random (no visible evidence) = 0% to 35%
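A minimal sketch of this band-and-floor/ceiling logic, assuming per-level percentage scores and the cut-offs listed above. The slide's ranges assume scores in 5-point steps, so the sketch closes the gaps at each band's lower bound; the function names and return format are assumptions.

```python
# Floor/ceiling rating from level-specific subtest scores (illustrative).

def band(score):
    if score >= 70: return "Sustained"    # consistent evidence
    if score >= 55: return "Developing"   # present, but inconsistent
    if score >= 40: return "Emerging"     # some limited evidence
    return "Random"                       # no visible evidence

def floor_and_ceiling(level_scores):
    """level_scores: percentages on the Level 1, 2, and 3 subtests, in order."""
    floor = 0
    for level, score in enumerate(level_scores, start=1):
        if band(score) != "Sustained":
            break                         # the floor stops at the first non-sustained level
        floor = level
    # Ceiling: how much (non-sustained) ability shows at the next level up.
    ceiling = band(level_scores[floor]) if floor < len(level_scores) else None
    return floor, ceiling

print(floor_and_ceiling([85, 70, 40]))   # (2, 'Emerging')
print(floor_and_ceiling([90, 60, 45]))   # (1, 'Developing')
```

Run on the profiles in the example that follows, this reproduces the ratings on slides 31 and 38: a floor of 2 with emerging ability at 3, and a floor of 1 with developing ability at 2.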

27 Does it make a difference? Consider the following example.

28 A Total Score (where 195 = Level 1) Versus Construct-Based Scoring
All three learners earn the same total score:

Learner | Results @ Level 1 | Results @ Level 2 | Results @ Level 3 | Total Score | True Level
Alice | – | – | – | 195 | ?
Bob | – | – | – | 195 | ?
Carol | – | – | – | 195 | ?

29-38 A Total Score (where 195 = Level 1) Versus Construct-Based Scoring
Scoring and rating each level separately reveals three different profiles behind the identical totals:

Learner | Results @ Level 1 | Results @ Level 2 | Results @ Level 3 | Total Score | True Level
Alice | 85 Sustained | 70 Sustained | 40 Emerging | 195 | 2 (Barely)
Bob | 90 Sustained | 85 Sustained | 20 Random | 195 | 2 (Clearly)
Carol | 90 Sustained | 60 Developing | 45 Emerging | 195 | 1 (Clearly, + developing ability @ 2)

39 Scores on Construct-Based Tests are valid, easily explained, and informative! But how is a CBT developed?

40 Test Development Procedures: Construct-Based Proficiency Tests 1. Define each proficiency level as a construct to be tested. 2. Follow a construct-based, evidence-centered test design. 3. Train item writers: – In the proficiency scales. – In matching text types to the tasks in the scales. – In item writing.

41 Test Development Procedures: Construct-Based Proficiency Tests 4. Develop items that exactly match all of the specifications for each level in the proficiency scale, with… – Examinee task aligned with the author’s [or the speaker’s] purpose. – Level-appropriate topics and contexts.
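One way to make step 4's specifications checkable is to attach a small record to every item; this structure and its field names are hypothetical, not the Working Group's actual format.

```python
# Hypothetical item-specification record: each item carries the level
# features it must match, so the later reviews can verify alignment.
from dataclasses import dataclass

@dataclass
class ItemSpec:
    level: int             # targeted STANAG 6001 level (1-3)
    author_purpose: str    # why the author wrote the text
    text_type: str         # level-appropriate text type for that purpose
    examinee_task: str     # what the test taker must do with the text
    topic: str             # level-appropriate topic and context

def task_matches_purpose(spec: ItemSpec, allowed_tasks: dict) -> bool:
    # Alignment requirement from this slide: the examinee task must
    # correspond to the author's purpose for that level.
    return spec.examinee_task in allowed_tasks.get(spec.author_purpose, set())
```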

42 Test Development Procedures: Construct-Based Proficiency Tests 5. Use “alignment”, “bracketing”, and “modified Angoff” item review and quality control procedures. – A specifications review to ensure alignment of author purpose, text type, and reader task. – A bracketing review to check the adequacy of the item’s response options for test takers at higher and at lower proficiency levels. – Modified Angoff ratings of item difficulty for “at-level” test takers to set passing levels.
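Of the three reviews, the modified Angoff step is the most numerical. A hedged sketch of how judges' estimates could be pooled into a passing level; the numbers and names are invented for illustration.

```python
# Modified Angoff sketch: each judge estimates the probability that a
# borderline "at-level" reader answers each item correctly; averaging per
# item and summing yields an expected score for that borderline reader.

def angoff_cut(judge_estimates):
    """judge_estimates: one list of per-judge probabilities for each item."""
    item_means = [sum(ratings) / len(ratings) for ratings in judge_estimates]
    return sum(item_means)   # expected raw score of a borderline examinee

# Three judges rate four items on one level's subtest (numbers invented):
estimates = [
    [0.80, 0.70, 0.75],
    [0.60, 0.65, 0.70],
    [0.90, 0.85, 0.80],
    [0.50, 0.55, 0.60],
]
cut = angoff_cut(estimates)   # 2.8 of 4 items, i.e. 70% -- the kind of
print(round(cut / 4 * 100))   # figure that can anchor a "sustained" band
```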

43 Test Development Procedures: Construct-Based Proficiency Tests 6. Use data from the Angoff reviews to define “sustained ability” for each level of the test. 7. Assemble the “good” items into level-specific tests or subtests. 8. Do validation testing. 9. Use statistical analyses to confirm reviewer ratings.

44 Test Development Procedures: Construct-Based Proficiency Tests 10. Replace items that do not “cluster” or act like the other items at each level. 11. Score and report results for each level using “sustained” proficiency criteria. 12. Continue to build the item databases to enable: – Random selection of test items for multiple forms. – Computer adaptive testing.
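For step 10, one common way to check whether an item "clusters" with its level-mates is a corrected item-total correlation within the level; this sketch and its 0.2 threshold are illustrative assumptions, not the BAT's procedure.

```python
# Flag items that do not cluster with the other items at their level: a low
# corrected item-total correlation suggests the item measures something
# different from its level-mates.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

def non_clustering_items(responses, threshold=0.2):
    """responses: per-examinee 0/1 vectors for one level's items."""
    flagged = []
    for i in range(len(responses[0])):
        item = [r[i] for r in responses]
        rest = [sum(r) - r[i] for r in responses]   # level total minus this item
        if pearson(item, rest) < threshold:
            flagged.append(i)                       # candidate for replacement
    return flagged
```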

45 What do the results of a CBT Reading proficiency test look like? Here are some initial results from the BAT English Reading Proficiency Test.

46 [Charts: results on the Level 1, Level 2, and Level 3 tests]

47 When will the BAT be available? Funds have been set aside for the administration and scoring of 200 free advisory tests. – All four skills will be tested. – The BAT Reading, Listening, and Writing tests will be delivered online. – The Speaking test will be conducted over the telephone. These tests are to be used in test norming or calibration studies.

48 When will the BAT be available? We anticipate the following timeline: – About October 2008: Directions on how to apply will be sent out. – About November 2008: Applications will be submitted. – About December 2008: Applications will be reviewed and decisions made about how the 200 tests will be allocated. – Between February and June: The first round of advisory testing will be conducted.

49 When will the BAT be available? More specific information will be sent out after consultation with ACT.

50 Are there any questions?

