
Detecting and Adapting to Student Uncertainty in a Spoken Tutorial Dialogue System
Diane Litman
Computer Science Department & Learning Research & Development Center, University of Pittsburgh, USA
Currently Leverhulme Visiting Professor, School of Informatics, University of Edinburgh, UK

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

Overview: Towards Adaptive Spoken Dialogue Systems
User Affect Detection
 Promising across affective states and applications, e.g.:
 – Craig et al., 2006
 – Litman & Forbes-Riley, 2006
 – Lee & Narayanan, 2005
 – Vidrascu & Devillers, 2005
 – Batliner et al., 2003
Adaptation
 Sparse; it can be difficult to show that adaptation improves performance
 Some systems used basic adaptations and showed increases in likeability
 For other performance metrics, the right basic adaptation is not clear a priori

Application | User Affect | System Adaptation
Health Assessment | Stress | Empathy [Liu & Picard 2005]
Gaming | Frustration | Apology [Klein et al. 2002]
Tutoring | ? | ?

Monitoring Student Affect (Motivation)
Tutor26: Let me ask my question in a more general way. What happens to the velocity of a body when there is no force acting on it?
Student27: dammit (ASR: it is)
Tutor28: Could you please repeat that?
Student29: same (ASR: i same)
Tutor30: Right. And what happens to the HORIZONTAL velocity of a body when there is no HORIZONTAL force acting on it?
Student31: zero (ASR: the zero)
Tutor32: Well... OK, let's consider how forces in the vertical direction affect the motion of a body in a horizontal direction in a different scenario
Student33: oh gravity you already said this (ASR: beats gravity you're exert less)
Tutor34: Fine. Are there any other forces acting on the apple as it falls?
Student35: no why are you doing this again (ASR: no y and to it yes)
Tutor36: Correct. The only force acting on the apple as it falls to the earth is the earth's gravitational force. In which direction does gravitational force act?
Student37: downward you computer (ASR: downward you computer)

Affective Tutorial Dialogue Systems  Opportunity –Affective spoken dialogue system technology can improve student learning and other measures of performance [Aist et al. 2002; Pon-Barry et al. 2006]  Challenges – What to detect? – How to respond? – Evaluation?

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

ITSPOKE: Motivation  Current learning gap between human and computer tutors –Humans: learning increases of up to 2 standard deviations [Bloom 1984] –Computers: learning increases of only 1 standard deviation [Anderson 1995, VanLehn 2006]  How to bridge this gap? –Currently only humans use full-fledged natural language dialogue –ITSPOKE: a platform for investigating the role of speech and affect in tutorial dialogue systems

Back-end is Why2-Atlas system [VanLehn, Jordan, Rose et al. 2002] Sphinx2 speech recognition and Cepstral text-to-speech

Two Types of Tutoring Corpora  Human Tutoring –14 students / 128 dialogues (physics problems) –5948 student turns, 5505 tutor turns  Computer Tutoring –ITSPOKE v1 »20 students / 100 dialogues »2445 student turns, 2967 tutor turns –ITSPOKE v2 » 57 students / 285 dialogues » both synthesized and pre-recorded tutor voices

ITSPOKE Experimental Procedure  College students without physics –Read a small background document –Took a multiple-choice Pretest –Worked 5 problems (dialogues) with ITSPOKE –Took an isomorphic Posttest  Goal was to optimize Learning Gain – e.g., Posttest – Pretest
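
A minimal sketch of the learning-gain computation named above; the student records and score scale below are invented for illustration, and the slides define gain simply as Posttest minus Pretest.

    # Minimal sketch: learning gain per student, defined as posttest minus pretest.
    # The records below are invented; real scores come from the multiple-choice tests.
    def learning_gain(pretest: float, posttest: float) -> float:
        return posttest - pretest

    students = [
        {"id": "s01", "pretest": 0.45, "posttest": 0.70},
        {"id": "s02", "pretest": 0.60, "posttest": 0.75},
    ]
    gains = [learning_gain(s["pretest"], s["posttest"]) for s in students]
    print(sum(gains) / len(gains))  # mean learning gain across students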

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

Affect Detection in Spoken Dialogue: Empirical Methodology  Manual Annotation of Affect and Attitudes –Naturally-occurring spoken dialogue data –[Ang et al. 2002; Lee et al. 2002; Batliner et al. 2003; Devillers et al. 2003; Shafran et al. 2003; Liscombe et al. 2005]  Prediction via Machine Learning –Automatically extract features from user turns –Use different feature sets (e.g. prosodic, lexical) to predict affect –Significant reduction of baseline error  Analytical approaches also possible

What to Annotate?  Communicator and Customer Care Systems –Negative: Angry, Annoyed, Frustrated, Tired –Positive/Neutral: Amused, Cheerful, Delighted, Happy, Serious [Ang et al. 2002; Shafran et al. 2003; Lee and Narayanan 2005; Liscombe et al. 2005]

What to Annotate?  Communicator and Customer Care Systems –Negative: Angry, Annoyed, Frustrated, Tired –Positive/Neutral: Amused, Cheerful, Delighted, Happy, Serious [Ang et al. 2002; Shafran et al. 2003; Lee and Narayanan 2005; Liscombe et al. 2005]  Tutorial Dialogue Systems –Negative: Angry, Annoyed, Frustrated, Bored, Confused, Uncertain, Contempt, Disgusted, Sad –Positive/Neutral: Certain, Curious, Enthusiastic, Eureka [Litman and Forbes-Riley 2006, D’Mello et al. 2006]

Example Student Affect in ITSPOKE
ITSPOKE: What else do you need to know to find the box's acceleration?
Student: the direction [NEGATIVE: UNCERTAIN]
ITSPOKE: If you see a body accelerate, what caused that acceleration?
Student: force [POSITIVE: CERTAIN]
ITSPOKE: Good job. Say there is only one force acting on the box. How is this force, the box's mass, and its acceleration related?
Student: velocity [NEGATIVE: UNCERTAIN]
ITSPOKE: Could you please repeat that?
Student: velocity [NEGATIVE: ANNOYED]

How to Annotate?  Trained Judges versus Self-Report  Offline versus Online Coding  Evaluation

How to Annotate?  Trained Judges versus Self-Report  Offline versus Online Coding  Evaluation –Kappas reported in [Ang et al. 2002; Narayanan 2002; Shafran et al. 2003; Litman and Forbes-Riley 2004]
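
As a concrete illustration of the Kappa evaluation mentioned above, here is a minimal sketch, assuming two trained judges' per-turn affect labels are available as parallel lists; the labels below are invented, and scikit-learn's cohen_kappa_score stands in for whatever tooling was actually used.

    # Sketch: chance-corrected inter-annotator agreement (Cohen's kappa)
    # between two judges' per-turn affect labels. Labels are invented.
    from sklearn.metrics import cohen_kappa_score

    judge1 = ["uncertain", "certain", "neutral", "uncertain", "certain", "neutral"]
    judge2 = ["uncertain", "neutral", "neutral", "uncertain", "certain", "certain"]

    print(cohen_kappa_score(judge1, judge2))  # 1.0 = perfect agreement, 0.0 = chance level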

Prediction via Machine Learning  Multiple feature types per student turn, e.g. –Acoustic-prosodic –Lexical –Identifiers –System and student performance  Sample research questions –Relative utility of feature types –Impact of speech recognition –Speaker and task dependence –Impact of learning algorithm, amount of training data
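
To make the prediction setup concrete, here is a hedged sketch, assuming each student turn has been reduced to a recognized-text string, a small vector of acoustic-prosodic measurements, and a manual affect label; the feature values, the labels, and the choice of logistic regression are illustrative, not the actual ITSPOKE pipeline.

    # Sketch (not the ITSPOKE implementation): per-turn affect prediction from
    # lexical (bag-of-words) plus acoustic-prosodic features, with cross-validation.
    import numpy as np
    from scipy.sparse import csr_matrix, hstack
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    turns = ["the direction", "force", "velocity", "i do not know", "gravity", "nineteen point six"]
    prosody = np.array([[0.8, 210.0], [0.3, 180.0], [0.9, 230.0],   # invented per-turn values:
                        [1.2, 205.0], [0.4, 175.0], [1.0, 220.0]])  # [duration, mean pitch]
    labels = ["uncertain", "certain", "uncertain", "uncertain", "certain", "certain"]

    lexical = CountVectorizer().fit_transform(turns)            # lexical features
    features = hstack([lexical, csr_matrix(prosody)]).tocsr()   # combine feature types

    scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=2)
    print(scores.mean())  # compare against the majority-class baseline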

Detecting Neg/Pos/Neu in ITSPOKE - Baseline Accuracy via Majority Class Prediction
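
The baseline referred to here is simple majority-class prediction; below is a minimal sketch with an invented label distribution, using scikit-learn's DummyClassifier.

    # Majority-class baseline: always predict the most frequent affect label.
    # The label distribution below is invented for illustration.
    import numpy as np
    from sklearn.dummy import DummyClassifier

    labels = ["neutral"] * 6 + ["negative"] * 3 + ["positive"] * 1
    X = np.zeros((len(labels), 1))  # features are ignored by this baseline

    baseline = DummyClassifier(strategy="most_frequent").fit(X, labels)
    print(baseline.score(X, labels))  # 0.6, i.e., the majority class frequency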

Detecting Neg/Pos/Neu in ITSPOKE - Use of prosodic (sp), recognized (asr) and/or actual (lex) lexical features outperforms baseline

Detecting Neg/Pos/Neu in ITSPOKE - As with other applications, highest predictive accuracies are obtained by combining multiple feature types [Litman and Forbes-Riley, Speech Communication 2006]

Detecting Neg/Pos/Neu in ITSPOKE - However, relative feature utility differs in tutoring (e.g., for speech features: temporal > energy > pitch)

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

System Adaptation: How to Respond?  Our initial focus: responding to student uncertainty –Most frequent affective state in our data –Focus of other studies [VanLehn et al. 2003; Craig et al. 2006; Porayska-Pomsta et al. 2007; Pon-Barry et al. 2006] –Annotated with an inter-annotator agreement of Kappa = .62  Approaches to adaptive system design –Theory-based –Data-driven

Theory-Based Adaptation  In tutoring, not all negatively valenced affect is bad –Such states often represent learning opportunities  E.g., uncertainty represents one type of learning impasse [VanLehn et al. 2003] –An impasse motivates a student to take an active role in constructing a better understanding of the principle  Hypothesis: The system should adapt to student uncertainty in the same way it responds to other impasses

Example from ITSPOKE
ITSPOKE: What is the net force acting on the truck equal to?
Student: I don't know [Uncertain, Frustrated]
ITSPOKE: Since net force is defined as the vector sum of all the forces exerted on an object, let's first find the forces acting on the truck. Try to name them.
Student: The impact force, the normal force, and gravity [Certain, Neutral]
ITSPOKE: Yep.

Data-Driven Adaptation: How Do Human Tutors Respond?  An empirical method for designing dialogue systems adaptive to student affect –extraction of "dialogue bigrams" from annotated human tutoring corpora –χ2 analysis to identify dependent bigrams –generalizable to any domain with corpora labeled for user state and system response –[Forbes-Riley and Litman, SIGDIAL 2005; Forbes-Riley and Litman, ACII 2007; Forbes-Riley et al., NAACL-HLT 2007]
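
A minimal sketch of the dependency test described above, assuming the annotated corpus has already been reduced to a contingency table of (student certainness, tutor response) bigram counts; the counts below are invented, and scipy's chi2_contingency stands in for whatever statistics package was actually used.

    # Sketch: chi-square test of independence between student certainness and
    # whether the tutor's following turn includes positive feedback.
    # Counts are invented placeholders, not the corpus data.
    from scipy.stats import chi2_contingency

    # rows: neutral, certain, uncertain, mixed; columns: [includes Pos, omits Pos]
    observed = [
        [120, 300],
        [180, 220],
        [ 90, 260],
        [ 40,  60],
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(chi2, p, dof)  # dof = 3; critical chi2 at p = .001 is 16.27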

Example Human Tutoring Excerpt
S: So the- when you throw it up the acceleration will stay the same? [Uncertain]
T: Acceleration uh will always be the same because there is- that is being caused by force of gravity which is not changing. [Restatement, Expansion]
S: mm-k. [Neutral]
T: Acceleration is- it is in- what is the direction uh of this acceleration- acceleration due to gravity? [Short Answer Question]
S: It's- the direction- it's downward. [Certain]
T: Yes, it's vertically down. [Positive Feedback, Restatement]

Bigram Dependency Analysis: "Student Certainness – Tutor Positive Feedback" bigrams
[Expected vs. observed counts of tutor turns that include vs. omit Positive Feedback, following neutral, certain, uncertain, and mixed student turns; the cell values did not survive transcription.]
χ2 exceeds the critical χ2 value at p = .001 (16.27)

Bigram Dependency Analysis (cont.)
[Expected vs. observed counts for neutral student turns; the cell values did not survive transcription.]
- Less Tutor Positive Feedback after Student Neutral turns

Bigram Dependency Analysis (cont.)
[Expected vs. observed counts for neutral, certain, uncertain, and mixed student turns; the cell values did not survive transcription.]
- Less Tutor Positive Feedback after Student Neutral turns
- More Tutor Positive Feedback after "Emotional" (non-neutral) turns

Findings  Statistically significant dependencies exist between students’ state of certainty and the responses of an expert human tutor –After uncertain, tutor Bottoms Out & avoids expansions –After certain, tutor Restates –After mixed, tutor Hints –After any emotion, tutor increases Feedback  Dependencies suggest adaptive strategies for implementation in computer tutoring systems
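
One way to read the last bullet: the dependencies translate directly into a lookup from detected student state to candidate tutor moves. The sketch below simply restates the findings above in code form; the strategy names come from the slide, while the function and how a dialogue manager would consume it are illustrative assumptions.

    # Sketch: the observed human-tutor dependencies expressed as a
    # state -> candidate-response mapping a dialogue manager could consult.
    ADAPTIVE_STRATEGIES = {
        "uncertain": ["bottom_out", "avoid_expansion"],
        "certain":   ["restatement"],
        "mixed":     ["hint"],
    }
    EMOTIONAL_STATES = {"uncertain", "certain", "mixed"}  # i.e., non-neutral

    def candidate_moves(student_state: str) -> list[str]:
        moves = list(ADAPTIVE_STRATEGIES.get(student_state, []))
        if student_state in EMOTIONAL_STATES:
            moves.append("increase_feedback")  # more feedback after any emotional turn
        return moves

    print(candidate_moves("uncertain"))  # ['bottom_out', 'avoid_expansion', 'increase_feedback']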

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

Approaches to Evaluation  “Correlational” Studies, e.g. –Student uncertainty positively correlates with learning [Craig et al. 2004] –Adding uncertainty and frustration metrics to regression models increases model fit [Forbes-Riley et al. 2008]  “Causal” Studies, e.g. –Adding human-provided emotional scaffolding to a reading tutor increases student persistence [Aist et al. 2002] –Experimentally manipulate tutor responses to student uncertainty and investigate impact on learning [Pon-Barry et al. 2006]
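
For the second correlational example, here is a hedged sketch of what "adding an uncertainty metric increases model fit" could look like in practice; the data are simulated, and statsmodels' OLS is only one of many ways to fit such a model.

    # Sketch (simulated data): does adding an uncertainty metric to a
    # learning regression improve model fit (R^2)?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    pretest = rng.uniform(0.0, 1.0, 40)
    uncertainty_rate = rng.uniform(0.0, 1.0, 40)     # fraction of uncertain student turns
    posttest = 0.5 * pretest - 0.2 * uncertainty_rate + rng.normal(0.0, 0.1, 40)

    base = sm.OLS(posttest, sm.add_constant(pretest)).fit()
    full = sm.OLS(posttest, sm.add_constant(np.column_stack([pretest, uncertainty_rate]))).fit()
    print(base.rsquared, full.rsquared)              # fit without vs. with the uncertainty metric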

Adaptation to Student Uncertainty in ITSPOKE: A First Causal Evaluation  Theoretically-motivated uncertainty adaptation –Most systems respond only to (in)correctness –Recall that literature suggests uncertain as well as incorrect student answers signal learning impasses  Proposed Adaptation: Treat uncertain+correct student answers as if they were incorrect

Platform: Adaptive WOZ-TUT System  Modified version of ITSPOKE –Dialogue manager adapts to uncertainty »system responses based on combined uncertainty and correctness –Full automation replaced by Wizard of Oz (WOZ) components »human wizard recognizes student speech »human also annotates both uncertainty and correctness

WOZ-TUT Screenshot

Experimental Design  3 conditions (parameterized versions of WOZ-TUT): –Experimental: treat all uncertain+correct turns as incorrect –First Control: ignore uncertainty (but log it) –Second Control: ignore uncertainty (but log it), and treat a random percentage of correct answers as incorrect
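
A hedged sketch of the three condition policies as a single decision function; the condition names, the random-remediation rate, and the function itself are illustrative assumptions, not the WOZ-TUT implementation.

    # Sketch of the per-answer policy in the three conditions (not the actual system).
    import random

    def should_remediate(correct: bool, uncertain: bool, condition: str,
                         random_rate: float = 0.0) -> bool:
        """Return True if the tutor should treat this answer as incorrect (remediate)."""
        if not correct:
            return True                           # incorrect answers are always remediated
        if condition == "experimental":
            return uncertain                      # correct+uncertain treated as incorrect
        if condition == "control2":
            return random.random() < random_rate  # some random correct answers remediated
        return False                              # control1: uncertainty only logged

    # e.g., a correct but uncertain answer in the experimental condition is remediated:
    print(should_remediate(correct=True, uncertain=True, condition="experimental"))  # True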

Treatments in Different Conditions
TUTOR: What will the velocity of the object be a second after that (where the initial velocity is 9.8 m/s and the acceleration is 9.8 m/s²)?
STUDENT: Nineteen point six meters per second?? [uncertain+correct]
TUTOR in First Control Condition moves on: Good. So at every point in time during the fall of the man and his keys, how do their velocities compare with each other?
TUTOR in Experimental Condition remediates: Okay. As we have seen, if a falling object has an acceleration of 9.8 m/s², its velocity changes by 9.8 m/s every second. So if a second after it began falling its velocity is 9.8 m/s, a second later its velocity will be 9.8 m/s + 9.8 m/s = 19.6 m/s. So what will its velocity be a second after that?

Experimental Procedure  60 subjects randomly assigned to 3 conditions (gender-balanced) –Native English speakers with no college physics –Procedure: 1) read background material, 2) took pretest, 3) worked training problem with WOZ-TUT, 4) took posttest, 5) worked isomorphic test problem with non-adaptive WOZ-TUT

Resulting Corpus
 120 dialogues from 60 students (.ogg format, 20 hours)
 Student turns manually transcribed
 Tutor turns and Wizard annotations in log files
 Available through the Pittsburgh Science of Learning Center
 [Forbes-Riley et al., LREC 2008]
[Table of corpus statistics by speaker (Student vs. Tutor): Total Turns, Total Uncertain Turns (Student: 796; Tutor: –), Total Words, Average Words per Turn; the remaining counts did not survive transcription.]

Evaluation Results (1)  Learning Gains – short answer pre and posttests – no significant differences across conditions  Dialogue-Based –incorrect and/or uncertain answers in the isomorphic test problem –no significant differences across conditions overall –however ….

Evaluation Results (2)
Comparing questions originally answered Correct+Uncertain (CU) with the answer to the same question in the isomorphic test dialogue, by condition: EXP (contingent adaptation), CTRL1 (original system), CTRL2 (random adaptation)
[Table rows: Total CU -> C, Total CU -> nonU, Total CU -> CnonU; the per-condition counts did not survive transcription. Significant differences and trends are relative to CTRL1.]

Evaluation Results (3)  Summary of Findings [Forbes-Riley et al., Intelligent Tutoring Systems 2008] –Correct+Uncertain answers are more likely to stay correct if they receive the uncertainty adaptation(s) –The uncertainty adaptations also reduce uncertainty –However, results are stronger in the random condition  Problems with Experimental Design –One (rather than five) training problems –Use of vague “Okay” for positive feedback

Outline  Affective Spoken Dialogue Systems  The ITSPOKE System and Corpora  Research Opportunities and Challenges – Affect Detection – System Adaptation – Experimental Evaluation  Summing Up

Summing Up  Affective Systems are receiving increasing attention in Spoken (Tutorial) Dialogue Research  Many opportunities and challenges remain – Affect Detection – System Adaptation – Experimental Evaluation

Current Directions in ITSPOKE  Affect Detection – bootstrapping approaches to annotation – further development of features  System Adaptation – new methods for learning data-driven strategies – responding with tutor affect  Experimental Evaluation – new WOZ-TUT experiment »5 problems, clearer feedback, new empirical adaptation – future ITSPOKE (fully automated) experiment

Acknowledgements  Kate Forbes-Riley  ITSPOKE group –Hua Ai, Alison Huettner, Beatriz Maeireizo-Tokeshi, Greg Nicholas, Amruta Purandare, Mihai Rotaru, Scott Silliman, Joel Tetrault, Art Ward –Columbia Collaborators: Julia Hirschberg, Jackson Liscombe, Jennifer Venditti  Why2-Atlas and Human Tutoring groups

Thank You! Questions?  Further Information –  Annotated WOZ-TUT Corpus –

Corpus Description by Condition
 One-way ANOVAs showed no significant differences in:
 – number of correct, uncertain, or uncertain+correct turns
 – number of adapted-to turns (EXP vs. CTRL2)
[Table of per-dialogue averages for the training problem by condition (EXP, CTRL1, CTRL2): Ave Turns, Ave Correct Turns, Ave Uncertain Turns, Ave Uncertain+Correct Turns, Ave Adapted-To Turns; the values did not survive transcription. Ave Uncertain+Correct and Adapted-To Turns: 100% (EXP), 0% (CTRL1), 36% (CTRL2).]