
1 Investigation of the Effects of Automatic Scoring Technology on Human Raters' Performances in L2 Speech Proficiency Assessment
Dean Luo, Wentao Gu, Ruxin Luo and Lixin Wang

2 Background
- English speaking tests have become mandatory in college and senior high school entrance examinations in many cities in China
- Most of them are assessed manually, which costs a lot of time and effort, and it is hard to recruit enough qualified experts
- Recent advances in automatic scoring based on ASR: used in high-stakes tests, with performance comparable to human raters
- Many educators remain skeptical about the technology

3 Objectives of this research
This research tries to answer the following questions:
1) How different are non-expert teachers' performances compared to experts'?
2) Will showing them the 'facts' about different aspects of fluency, based on acoustic features and experts' judgments, change their minds?
3) How can we better utilize automatic scoring technology to assist human raters instead of replacing them?

4 Experiments
- Examined how experts and non-experts perform in assessing real speaking tests
- Extracted acoustic features and conducted automatic scoring on the same data
- Presented the results of multi-dimensional automatic scores on different aspects of pronunciation fluency to the non-expert teachers when assessing an utterance, and examined how that might change their judgments

5 Speech data
- Recordings from the English speaking test of the Shenzhen High School Unified Examination
- Task: repeating a one-minute-long video clip
- 300 utterances, 50 from each of the 6 proficiency-level groups
- Development set: 150; test set: 150

6 Human Assessment
Participants:
- 2 phonetically trained experts
- 14 non-expert high school English teachers
- 10 college students majoring in English education
Results:
- The correlation between the two experts is
- The 24 non-experts were clustered into 4 groups (A-D) according to the similarity of their scores; their correlations with each expert's scores:

           Group A   Group B   Group C   Group D
Expert A    0.801     0.775     0.743     0.734
Expert B    0.810     0.769     0.751     0.725
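The slides do not specify how the 24 non-experts were clustered into the four groups; as an illustrative sketch only, one common choice is hierarchical clustering on pairwise rater-score correlations (all names and data below are placeholders, not the authors' code):

```python
# Sketch: group raters by the similarity of their scores using hierarchical
# clustering on pairwise Pearson correlations (method and data are assumptions).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# scores: one row per rater, one column per utterance (e.g. 24 x 150)
scores = np.random.rand(24, 150)                  # placeholder data

corr = np.corrcoef(scores)                        # 24 x 24 rater-rater correlations
dist = 1.0 - corr                                 # turn similarity into a distance
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
groups = fcluster(Z, t=4, criterion="maxclust")   # 4 rater groups (A-D)
print(groups)
```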

7 Expert Annotation
Perceptual dimensions annotated by an experienced expert:
1) Intelligibility: understanding of what has been said (0: very poor, 5: excellent)
2) Fluency: indicates the level of interruptions, hesitations, and filled pauses (0: very poor, 5: excellent)
3) Correctness: indicates whether all the phonemes have been correctly pronounced (0: very poor, 5: excellent)
4) Intonation: indicates to what extent the pitch and stress patterns resemble those of English (0: unnatural, 5: natural)
5) Rhythm: indicates to what extent the timing resembles that of English (0: unnatural, 5: natural)
60 utterances (10 from each proficiency-level group) from the development data were annotated

8 Acoustic Models
- Data from the Wall Street Journal CSR Corpus and TIMIT were used to train CD-DNN-HMM and CD-GMM-HMM models
- The DNN training in this study follows the procedure described in (G. E. Dahl et al., 2012), using Kaldi
- A word error rate reduction similar to that reported in (W. Hu et al., 2013) was achieved on the test set of the WSJ corpus

9 GOP (Goodness of Pronunciation) Scores
The GOP score of a phone p is defined as its normalized log posterior probability given the speech observation O:

GOP(p) = \frac{1}{NF(p)} \log P(p \mid O), \quad P(p \mid O) = \frac{P(O \mid p)\,P(p)}{\sum_{q \in Q} P(O \mid q)\,P(q)}

where P(p | O) is the posterior probability that the speaker uttered phoneme p given speech observation O, Q is the full set of phonemes, and NF(p) is the number of frames in the phone segment.

W. Hu et al. proposed a better implementation of GOP that averages the frame posteriors of a phone using the output of the DNN model:

GOP(p) = \frac{1}{e_p - s_p + 1} \sum_{t=s_p}^{e_p} \log P(p \mid o_t)

where P(p | o_t) is an output of the DNN, and s_p and e_p are the start and end frames of phone p.
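A minimal sketch of the DNN-based GOP above, assuming frame-level phone posteriors from the DNN and phone boundaries from forced alignment (array shapes and names are illustrative, not the authors' code):

```python
# Sketch: DNN-based GOP = average log phone posterior over the frames
# aligned to that phone.
import numpy as np

def gop(post, start, end, phone_id):
    """post: (num_frames, num_phones) frame-level phone posteriors from the DNN
       start, end: first and last frame of the phone (inclusive)
       phone_id: index of the canonical phone in the phone set Q"""
    frames = post[start:end + 1, phone_id]
    return np.mean(np.log(np.maximum(frames, 1e-10)))   # floor to avoid log(0)

# Example: posteriors for a 12-frame segment over a 40-phone set (random placeholders)
posteriors = np.random.dirichlet(np.ones(40), size=12)
print(gop(posteriors, start=3, end=9, phone_id=17))
```

If the DNN outputs senone rather than phone posteriors, the senone posteriors belonging to each phone would first be summed per frame.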

10 Other feature scores
- Word and Phone Correctness
- Pitch and Energy Features: the Euclidean distances of the F0 and energy contours between the student's speech and correct models
- Timing Features: rate of speech (ROS), phoneme duration, pauses
- Unsupervised Clustering: starting from each frame of the acoustic features, adjacent frames that are similar to each other are merged into a cluster; if an utterance is distinctly pronounced, a sentence will contain more clusters than one that is not clearly pronounced (see the sketch after this list)
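A minimal sketch of the unsupervised clustering feature as described above; the distance measure, threshold, and feature choice are not specified in the slides and are placeholders here:

```python
# Sketch: merge adjacent frames whose feature vectors are close, then count
# the resulting clusters for one utterance.
import numpy as np

def count_clusters(features, threshold=1.0):
    """features: (num_frames, dim) acoustic feature matrix for one utterance."""
    clusters = 1
    for prev, cur in zip(features[:-1], features[1:]):
        if np.linalg.norm(cur - prev) > threshold:   # frames differ -> start a new cluster
            clusters += 1
    return clusters

# A distinctly pronounced utterance should yield more clusters.
frames = np.random.randn(200, 13)                    # placeholder MFCC-like frames
print(count_clusters(frames, threshold=4.0))
```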

11 Correlations between Feature Scores and the Average of Experts’ Scores
Feature             Correlation
Average GOP         0.79
Word_Acc            0.74
Phone_Acc           0.60
Pitch distance      0.51
Energy distance     0.55
Clustering          0.58
ROS                 0.39
Phoneme duration    0.42
Pause duration      0.57
Linear Regression   0.80
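The "Linear Regression" row combines the individual feature scores into a single predictor of the experts' average score; a sketch of how such a combination could be fit and evaluated (data shapes and split are assumptions, not the authors' code):

```python
# Sketch: fit a linear combination of feature scores on the development set,
# then measure its correlation with the expert scores on the test set.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import pearsonr

X_dev = np.random.rand(150, 9)     # 9 feature scores per development utterance (placeholders)
y_dev = np.random.rand(150)        # average expert score per utterance
X_test = np.random.rand(150, 9)
y_test = np.random.rand(150)

model = LinearRegression().fit(X_dev, y_dev)
pred = model.predict(X_test)
r, _ = pearsonr(pred, y_test)      # correlation with expert scores on the test set
print(f"correlation: {r:.2f}")
```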

12 Human-machine Hybrid Scoring
- Examine whether non-experts' performance would change when multi-dimensional automatic scores are presented during assessment
- Radar Chart Analysis: a Gnuplot script generates a 10-point radar chart for each utterance of the development and test data (a rough matplotlib equivalent is sketched below)
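The authors used a Gnuplot script; the following Python/matplotlib sketch produces a comparable radar chart for one utterance (dimension names and values are placeholders, not data from the paper):

```python
# Sketch: plot one utterance's multi-dimensional feature scores as a radar chart.
import numpy as np
import matplotlib.pyplot as plt

dims = ["GOP", "Word_Acc", "Phone_Acc", "Pitch", "Energy",
        "Clustering", "ROS", "Phone dur.", "Pause dur."]
values = [0.8, 0.7, 0.6, 0.5, 0.6, 0.55, 0.4, 0.45, 0.6]   # placeholder scores

angles = np.linspace(0, 2 * np.pi, len(dims), endpoint=False).tolist()
values = values + values[:1]                               # close the polygon
angles = angles + angles[:1]

ax = plt.subplot(polar=True)
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dims)
plt.savefig("radar_utt001.png")
```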

13 Scoring Procedure
Training:
- Raters can view radar chart plots of any utterance from the development data set; the reference score is presented
- They can listen to the utterance to check the pronunciation
- Participants can view different shapes of radar charts from the same proficiency group, or compare radar charts from different proficiency-level groups
Assessment:
- The radar charts of the utterances from the test set are presented in random order, together with a link to the corresponding utterance file
- Raters are instructed to first look at the chart and then click on the link to check the audio before making the final decision
- They are required to give an overall fluency score for the utterance

14 Results
Correlations between non-experts' and experts' scores in human-machine hybrid scoring:

                Group A   Group B   Group C   Group D
Expert A         0.811     0.805     0.810     0.802
Expert B         0.821     0.814     0.820     0.817

Rates of agreement with the experts in human-only rating and human-machine hybrid rating:

                Group A   Group B   Group C   Group D
Human only       80.5%     73.5%     72.2%     71.3%
Hybrid rating    87.0%     85.4%     87.5%     86.4%

15 Conclusion
- Investigated how non-expert and expert human raters perform in the assessment of speaking tests
- Found inconsistencies in non-experts' ratings compared with the experts'
- Proposed a radar-chart-based multi-dimensional automatic scoring method to assist non-expert human raters
- Experimental results show that presenting the automatic analysis of different fluency aspects can affect human raters' judgment
- The proposed human-machine hybrid scoring system can help human raters give more consistent and reliable assessments

