
1 User Simulation for Spoken Dialogue Systems Diane Litman Computer Science Department & Learning Research and Development Center University of Pittsburgh (Currently Leverhulme Visiting Professor, University of Edinburgh) Joint work with Hua Ai Intelligent Systems Program, University of Pittsburgh

2–5 User Simulation: The Big Picture (roadmap diagram, built up across slides 2–5)
Motivation: empirical research requires dialogue corpora, and simulated users are less expensive, more efficient, and may yield more (and better?) data than human users.
How realistic? (Dialogue System Evaluation)
–Power of evaluation measures: discriminative ability [AAAI WS, 2006]
–Impact of the source corpus: subjects vs. real users [SIGDial, 2007]
–Human assessment: validation of the evaluation measures [ACL, 2008]
–More realistic models via knowledge consistency [Interspeech, 2007] *
How useful? (Task dependent: Dialogue Strategy Learning)
–Utility of realistic vs. exploratory models for reinforcement learning [NAACL, 2007] *
(* = topics covered in this talk)

6 Outline User Simulation Models –Previous work –Our initial models Are more realistic models always “better”? Developing more realistic models via knowledge consistency Summary and Current Work

7 User Simulation Models Simulate user dialogue behaviors in simple (or, not too complicated) ways How to simulate –Various strategies: random, statistical, analytical What to simulate –Model dialogue behaviors on different levels: acoustic, lexical, semantic / intentional

8 Previous Work Most models simulate on the intentional level, and are statistically trained from human user corpora Bigram Models –P(next user action | previous system action) [Eckert et al., 1997] –Only accept the expected dialogue acts [Levin et al., 2000] Goal-Based Models –Hard-coded fixed goal structures [Scheffler, 2002] –P(next user action | previous system action, user goal) [Pietquin, 2004] –Goal and agenda-based models [Schatzmann et al., 2007]
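To make the bigram idea concrete, here is a minimal sketch (not code from the cited work) of a statistically trained bigram user simulator; the corpus format, class name, and dialogue-act labels are hypothetical.

```python
import random
from collections import Counter, defaultdict

class BigramUserSim:
    """Minimal bigram user simulator: P(user act | previous system act)."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, dialogues):
        # dialogues: list of [(system_act, user_act), ...] pairs from a human corpus
        for dialogue in dialogues:
            for system_act, user_act in dialogue:
                self.counts[system_act][user_act] += 1

    def respond(self, system_act):
        # Sample a user act from the conditional distribution for this system act
        acts = self.counts.get(system_act)
        if not acts:
            # Unseen system act: back off to the global user-act distribution
            acts = Counter()
            for c in self.counts.values():
                acts.update(c)
        actions, weights = zip(*acts.items())
        return random.choices(actions, weights=weights, k=1)[0]

# Toy usage
corpus = [[("ask_law", "answer_correct"), ("confirm", "acknowledge")],
          [("ask_law", "answer_incorrect"), ("explain", "acknowledge")]]
sim = BigramUserSim()
sim.train(corpus)
print(sim.respond("ask_law"))
```

The Levin et al. variant described above would additionally reject sampled acts that are not among the dialogue acts the system expects at that point.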

9 Previous Work (continued) Models that exploit user state commonalities –Linear combinations of shared features [Georgila et al., 2005] –Clustering [Rieser et al., 2006] Improve speech recognizer and understanding components –Word-level simulation [Chung, 2004]

10 Our Domain: Tutoring ITSpoke: Intelligent Tutoring Spoken Dialogue System –Back-end is Why2-Atlas system [VanLehn et al., 2002] –Sphinx2 speech recognition and Cepstral text-to-speech The system initiates a tutoring conversation with the student to correct misconceptions and to elicit explanations Student answers: correct, incorrect

11–13 (image-only slides; no transcript text)

14 ITSpoke Corpora Two different student groups in f03 and s05 Systems have minor variations (e.g., voice, slightly different language models)
Corpus     Student Population   System Voice   Number of Dialogues
f03        2003                 Synthesized    100
s05-syn    2005                 Synthesized    136
s05-pre    2005                 Pre-recorded   135

15 Our Simulation Approach Simulate on the word level –We use the answers from the real student answer sets as candidate answers for simulated students First step – basic simulation models –A random model Gives random answers –A probabilistic model Answers a question with the same correctness rate as our real students

16 The Random Model A unigram model –Randomly pick a student answer from all utterances, neglecting the tutor question Example dialogue
ITSpoke: The best law of motion to use is Newton’s third law. Do you recall what it says?
Student: Down.
ITSpoke: Newton’s third law says…
…
ITSpoke: Do you recall what Newton’s third law says?
Student: More.
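A minimal sketch of the Random model just described, assuming the candidate answers are simply a list of real-student utterances (function and variable names are hypothetical):

```python
import random

def random_model(candidate_answers, tutor_question=None):
    # Unigram model: the tutor question is ignored entirely,
    # so an answer like "Down." can follow any question.
    return random.choice(candidate_answers)

answers = ["Down.", "More.", "For every action there is an equal and opposite reaction."]
print(random_model(answers, "Do you recall what Newton's third law says?"))
```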

17 The ProbCorrect Model A bigram model –P(Student Answer | Tutor Question) –Gives correct/incorrect answers with the same probability as the real students Example dialogue
ITSpoke: The best law of motion to use is Newton’s third law. Do you recall what it says?
Student: Yes, for every action, there is an equal and opposite reaction.
ITSpoke: This is correct!
…
ITSpoke: Do you recall what Newton’s third law says?
Student: No.
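A corresponding sketch of the ProbCorrect model, assuming per-question correctness rates and correctness-labeled real-student answers have been tabulated from the corpus (all names are hypothetical):

```python
import random

def probcorrect_model(question, correctness_rate, answer_bank):
    """Bigram-style model: answer each question with the real students'
    correctness rate for that question.

    correctness_rate: dict mapping question id -> P(correct answer)
    answer_bank: dict mapping (question id, 'correct'/'incorrect') -> list of
                 real-student utterances with that correctness label
    """
    label = "correct" if random.random() < correctness_rate[question] else "incorrect"
    return random.choice(answer_bank[(question, label)]), label

rates = {"q_newton3": 0.6}
bank = {("q_newton3", "correct"): ["For every action there is an equal and opposite reaction."],
        ("q_newton3", "incorrect"): ["No.", "Down."]}
print(probcorrect_model("q_newton3", rates, bank))
```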

18 Outline User Simulation Models –Previous work –Our initial models Are more realistic models always “better”? –Task: Dialogue Strategy Learning Developing more realistic models via knowledge consistency Summary and Current Work

19 Learning Task ITSpoke can only respond to student (in)correctness, but student (un)certainty is also believed to be relevant Goal: Learn how to manipulate the strength of tutor feedback, in order to maximize student certainty

20 Corpus Part of the s05 data (with annotations) –26 human subjects, 130 dialogues Automatically logged –Correctness (c, ic); percent incorrectness (ic%) –Kappa (automatic/manual) = 0.79 Human annotated –Certainty (cert, ncert) –Kappa (two annotators) = 0.68
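The kappa figures above are standard agreement statistics (automatic vs. manual, or annotator vs. annotator); for reference, a minimal sketch of Cohen's kappa over two label sequences:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two labelings of the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each labeler's marginal distribution
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

print(cohens_kappa(["cert", "ncert", "cert", "cert"],
                   ["cert", "ncert", "ncert", "cert"]))   # -> 0.5
```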

21 Sample Coded Dialogue
ITSpoke: Which law of motion would you use?
Student: Newton’s second law. [ic, ic%=100, ncert]
ITSpoke: Well… The best law to use is Newton’s third law. Do you recall what it says?
Student: For every action there is an equal and opposite reaction. [c, ic%=50, ncert]

22 Markov Decision Processes (MDPs) and Reinforcement Learning What is the best action for an agent to take at any state to maximize reward? MDP Representation –States, Actions, Transition Probabilities –Reward Learned Policy –Optimal action to take for each state

23 MDPs in Spoken Dialogue (diagram: Dialogue System, User Simulator, Human User, Training data, MDP, Policy)
–Training data from the dialogue system's interactions is used to build an MDP, from which a policy for the system is learned
–The MDP can be created offline, while the system's interactions with a user simulator or a human user work online

24 Our MDP Action Choices Tutor feedback –Strong Feedback (SF) “This is great!” –Weak Feedback (WF) “Well…”, doesn’t comment on the correctness Strength of tutor’s feedback is strongly related to the percentage of student certainty (chi-square, p<0.01)

25 Our MDP States and Rewards State features are derived from Certainty and Correctness Annotations Reward is based on the percentage of Certain student utterances during the dialogue

26 Our MDP Configuration States –Representation 1: c + ic% –Representation 2: c + ic% + cert Actions –Strong Feedback, Weak Feedback Reward –+100 (high certainty), -100 (low certainty)

27 Our Reinforcement Learning Goal Learn an optimal policy using simulated dialogue corpora Example Learned Policy –Give Strong Feedback when the current student answer is Incorrect and the percentage of Incorrect answers is greater than 50% –Otherwise give Weak Feedback Research Question: what is the impact of using different simulation models?
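Putting slides 22–27 together, a hedged sketch of the learning step: estimate transition probabilities and average rewards for each (state, action) pair from a simulated corpus, then run value iteration to read off a feedback policy. The state encoding, discount factor, and data format below are illustrative assumptions, not the exact configuration used in the original experiments.

```python
from collections import defaultdict

ACTIONS = ["SF", "WF"]   # strong vs. weak tutor feedback
GAMMA = 0.9              # discount factor (assumed; not specified on the slides)

def estimate_mdp(episodes):
    """episodes: list of dialogues, each a list of (state, action, reward, next_state),
    e.g. state = ('c', 'ic%<=50', 'ncert'); reward is +/-100 on the final turn, else 0."""
    trans = defaultdict(lambda: defaultdict(float))   # (s, a) -> {s': count}
    rew, counts = defaultdict(float), defaultdict(int)
    for dialogue in episodes:
        for s, a, r, s_next in dialogue:
            trans[(s, a)][s_next] += 1
            rew[(s, a)] += r
            counts[(s, a)] += 1
    for sa in trans:                                  # normalize counts to probabilities
        trans[sa] = {s2: c / counts[sa] for s2, c in trans[sa].items()}
        rew[sa] /= counts[sa]
    return trans, rew

def q_value(s, a, trans, rew, V):
    return rew[(s, a)] + GAMMA * sum(p * V[s2] for s2, p in trans.get((s, a), {}).items())

def value_iteration(trans, rew, n_iter=200):
    states = {sa[0] for sa in trans}
    V = defaultdict(float)
    for _ in range(n_iter):
        for s in states:
            V[s] = max(q_value(s, a, trans, rew, V) for a in ACTIONS)
    # Learned policy: the feedback action with the highest Q-value in each state
    return {s: max(ACTIONS, key=lambda a: q_value(s, a, trans, rew, V)) for s in states}
```

Because the reward only arrives at the end of a dialogue, value iteration propagates it back through the estimated transitions so that earlier feedback decisions are credited as well.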

28 Probabilistic Simulation Model Capture realistic student behavior in a probabilistic way
For each question:
                  c+cert   c+ncert   ic+cert   ic+ncert
Strong Feedback   (5)      (1)       (2)       (3)
Weak Feedback     (4)      (4)       (1)       (3)

29 Total Random Simulation Model Explore all possible dialogue states Ignores what the current question is or what feedback is given Randomly picks one utterance from the candidate answer set

30 Restricted Random Model Compromise between the exploration of the dialogue state space and the realism of the generated user behaviors
For each question:
                  c+cert   c+ncert   ic+cert   ic+ncert
Strong Feedback   (1)      (1)       (1)       (1)
Weak Feedback     (1)      (1)       (1)       (1)

31 Methodology (diagram) The old system interacts with each of the three simulation models (Probabilistic, Total Random, Restricted Random) to generate a training corpus of 40,000 dialogues per model (Corpus 1–3); an MDP is estimated from each corpus and a policy is learned (Policy 1–3), yielding three new systems (Sys 1–3). Each new system is then evaluated by running 500 dialogues against the Probabilistic model.

32 Methodology (continued) For each configuration, we run the simulation models until the learned policies do not change anymore Evaluation measure –number of dialogues that would be assigned reward +100 using the old median split –Baseline = 250
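A small sketch of that evaluation measure, assuming each of the 500 test dialogues has a logged percent-certain value and the median split is reused from the original corpus (hypothetical names):

```python
def count_rewarded_dialogues(test_percent_certain, old_median):
    # A dialogue earns reward +100 if its percentage of certain student turns
    # falls above the median split computed on the original corpus.
    return sum(1 for pc in test_percent_certain if pc > old_median)

# With 500 test dialogues and a median split, ~250 is the chance-level baseline.
```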

33 Evaluation Results
Simulation Model      State Rep. 1   State Rep. 2
Probabilistic         222            217
Total Random          192            211
Restricted Random     390            368
Restricted Random (highlighted in blue on the original slide) significantly outperforms the other two models; underlining on the original slide marks learned policies that significantly outperform the baseline of 250.
NB: Results are similar with other reward functions and evaluation metrics

34 Discussion We suspect that the performance of the Probabilistic Model is harmed by data sparsity in the real corpus –In State Representation 1, 25.8% of the possible states do not exist in the real corpus –Of the most frequent states in State Representation 1: 70.1% are seen frequently in the Probabilistic training corpus; 76.3% are seen frequently in the Restricted Random corpus; 65.2% are seen frequently in the Total Random corpus

35 In Sum When using simulation models for MDP policy training –Hypothesis confirmed: when training on a sparse data set, it may be better to use a Restricted Random Model than a more realistic Probabilistic Model or a more exploratory Total Random Model Next steps: –Test the learned policies with human subjects to validate the learning process –What about cases where we do need a realistic simulation model?

36 Outline User Simulation Models –Previous work –Our initial models Are more realistic models always “better”? Developing more realistic models via knowledge consistency Summary and Current Work

37 A New Model & A New Measure
A new simulation model, based on knowledge consistency rather than goal consistency: a student’s knowledge during a tutoring session is consistent, so if the student answers a question correctly, the student is more likely to answer a similar question correctly later.
A new evaluation measure: knowledge consistency can be measured using learning curves; if a simulated student behaves similarly to a real student, we should see a similar learning curve in the simulated data.

38 The Cluster Model Model student learning –P(Student Answer | Cluster of Tutor Questions, last Student Correctness) Example dialogue
ITSpoke: The best law of motion to use is Newton’s third law. Do you recall what it says?
Student: Yes, for every action, there is an equal reaction.
ITSpoke: This is almost right… there is an equal and opposite reaction.
…
ITSpoke: Do you recall what Newton’s third law says?
Student: Yes, for every action, there is an equal and opposite reaction.
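A hedged sketch of the Cluster model: the simulated student remembers its last correctness on each knowledge component (question cluster) and answers the next question in that cluster accordingly. The table formats and class name are hypothetical.

```python
import random

class ClusterModel:
    """Sketch of P(student answer | knowledge component, last correctness on that KC)."""

    def __init__(self, answer_bank, correct_rate):
        # answer_bank[(kc, 'correct'/'incorrect')] -> list of real-student utterances
        # correct_rate[(kc, last_label)] -> P(correct this time), estimated from the corpus
        self.answer_bank = answer_bank
        self.correct_rate = correct_rate
        self.last_label = {}                 # kc -> correctness of the last answer on that kc

    def respond(self, kc):
        last = self.last_label.get(kc, "unseen")
        p_correct = self.correct_rate.get((kc, last), 0.5)   # 0.5 fallback is an assumption
        label = "correct" if random.random() < p_correct else "incorrect"
        self.last_label[kc] = label
        return random.choice(self.answer_bank[(kc, label)]), label

bank = {("3rdLaw", "correct"): ["For every action there is an equal and opposite reaction."],
        ("3rdLaw", "incorrect"): ["No.", "Down."]}
rates = {("3rdLaw", "unseen"): 0.5, ("3rdLaw", "correct"): 0.8, ("3rdLaw", "incorrect"): 0.4}
student = ClusterModel(bank, rates)
print(student.respond("3rdLaw"))
print(student.respond("3rdLaw"))    # depends on the correctness of the first answer
```

The per-KC memory is what produces learning-curve behavior: a correct 3rdLaw answer raises the probability of answering the next 3rdLaw question correctly.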

39 Knowledge Component Representation Knowledge component – “concepts” discussed by the tutor The choice of grain size is determined by the instructional objectives of the designers A domain expert manually clustered the 210 tutor questions into 20 knowledge components (f03 data) –E.g., 3rdLaw, acceleration, etc.

40 Sample Coded Dialogue
ITSpoke: Do you recall what Newton’s third law says? [3rdLaw]
Student: No. [incorrect]
ITSpoke: Newton’s third law says… If you hit the wall harder, is the force of your fist acting on the wall greater or less? [3rdLaw]
Student: Greater. [correct]

41 Evaluation: Learning Curves (1) Learning effect – the student performs better after practicing more We can visualize the learning effect by plotting an exponentially decreasing learning curve [PSLC, http://learnlab.web.cmu.edu/mhci_2005/documentation/design2d.html ]

42 Learning Curves (2) (example learning-curve plot) Among all the students, 36.5% made at least one error at their 2nd opportunity to practice

43 Learning Curves (3) Standard way to plot a learning curve –First compute a separate learning curve for each knowledge component, then average them to get an overall learning curve We only see smooth learning curves among high learners –High/Low learners: median split based on normalized learning gain –Learning curve: mathematical representation (sketched below)
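The "mathematical representation" referred to above is, in PSLC-style learning-curve work, usually a simple decreasing function of the practice opportunity; the slide itself describes the curve only as exponentially decreasing, so both common parametric forms are sketched here.

```latex
% Error rate E(n) at the n-th opportunity to practice a knowledge component,
% fit per knowledge component and then averaged:
%   exponential form:  E(n) = a e^{-bn}
%   power-law form:    E(n) = a n^{-b}
% with a > 0 the initial error rate and b > 0 the learning rate.
\[
  E(n) = a\,e^{-bn} \quad\text{or}\quad E(n) = a\,n^{-b}, \qquad a, b > 0.
\]
```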

44 Experiments (1) Simulation Models –ProbCorrect Model: P(A | Q), where A = Student Answer and Q = Tutor Question –Cluster Model: P(A | KC, C), where A = Student Answer, KC = Knowledge Component, and C = correctness of the student’s answer to the previous question requiring the same KC

45 Experiments (2) Evaluation Measures: compare simulated user dialogues to human user dialogues using automatic measures New measure, based on knowledge consistency –R-squared: how well the simulated learning curve fits the learning curve observed in the real student data Prior measures: high-level dialogue features [Schatzmann et al., 2005]
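A minimal sketch of that new measure: treat the real students' learning-curve points as the reference and compute R² (and adjusted R², which penalizes the number of fitted parameters) for the simulated curve. The exact regression setup in the original work may differ; all values below are illustrative.

```python
def r_squared(observed, simulated, n_params=2):
    """R^2 of the simulated learning-curve points against the real-student
    curve, plus adjusted R^2 penalizing the number of fitted parameters."""
    n = len(observed)
    mean_sim = sum(simulated) / n
    ss_res = sum((s - o) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((s - mean_sim) ** 2 for s in simulated)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
    return r2, adj_r2

real_curve = [0.40, 0.30, 0.24, 0.20, 0.17]   # error rate per practice opportunity
sim_curve = [0.42, 0.33, 0.22, 0.21, 0.15]
print(r_squared(real_curve, sim_curve))
```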

46–47 Prior Evaluation Measures (Schatzmann et al. measure | our measure | abbreviation)
High-level dialogue features:
–Dialogue Length (number of turns) | number of student/tutor turns | Sturn, Tturn
–Turn Length (number of actions per turn) | total words per student/tutor turn | Swordrate, Twordrate
–Participant Activity (ratio of system/user actions per dialogue) | ratio of system/user words per dialogue | wordRatio
Learning feature:
–% of correct answers | CRate
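For completeness, a sketch of how the high-level measures in this table (and CRate) could be computed from a turn-annotated dialogue; the turn format below is hypothetical.

```python
def dialogue_measures(dialogue):
    """dialogue: list of (speaker, text, correctness) turns, where speaker is
    'student' or 'tutor' and correctness is 'c'/'ic' (None for tutor turns)."""
    s_turns = [t for t in dialogue if t[0] == "student"]
    t_turns = [t for t in dialogue if t[0] == "tutor"]
    s_words = sum(len(t[1].split()) for t in s_turns)
    t_words = sum(len(t[1].split()) for t in t_turns)
    return {
        "Sturn": len(s_turns),
        "Tturn": len(t_turns),
        "Swordrate": s_words / max(len(s_turns), 1),
        "Twordrate": t_words / max(len(t_turns), 1),
        "wordRatio": t_words / max(s_words, 1),
        "CRate": sum(t[2] == "c" for t in s_turns) / max(len(s_turns), 1),
    }
```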

48 Experiments (3) Simulation Models –The ProbCorrect Model: P(A | Q) –The Cluster Model: P(A | KC, C) Evaluation Measures –Previously proposed evaluation measures –Knowledge consistency measures Each simulation model interacts with the system, generating 500 dialogues per model

49 Results: Prior Measures Neither model differs significantly from the real students on any of the prior evaluation measures Thus, both models can simulate realistic high-level dialogue behaviors

50–52 Results: New Measures
Model          probCorrect   Cluster
R²             0.252         0.564
Adjusted R²    0.102         0.477
The Cluster model outperforms the ProbCorrect simulation model with respect to learning curves
Model ranking also validated by human judges [Ai and Litman, 2008]

53 In Sum Recall the goal: simulate consistent user behaviors based on user knowledge consistency rather than fixed user goals The knowledge-consistent model outperforms the probabilistic (ProbCorrect) model when measured by knowledge consistency measures –The models do not differ on high-level dialogue measures –A similar approach should be applicable to other temporal user processes (e.g., forgetting)

54 Conclusions: The Big Picture (same roadmap diagram as slides 2–5)
Motivation: empirical research requires dialogue corpora, and simulated users are less expensive, more efficient, and may yield more (and better?) data than human users.
How realistic? (Dialogue System Evaluation)
–Power of evaluation measures: discriminative ability [AAAI WS, 2006]
–Impact of the source corpus: subjects vs. real users [SIGDial, 2007]
–Human assessment: validation of the evaluation measures [ACL, 2008]
–More realistic models via knowledge consistency [Interspeech, 2007] *
How useful? (Task dependent: Dialogue Strategy Learning)
–Utility of realistic vs. exploratory models for reinforcement learning [NAACL, 2007] *
(* = topics covered in this talk)

55 Other ITSpoke Research Affect detection and adaptation in dialogue systems –Annotated ITSpoke corpus now available: https://learnlab.web.cmu.edu/datashop/index.jsp Reinforcement learning Using NLP and psycholinguistics to predict learning –Cohesion, alignment/convergence, semantics More details: http://www.cs.pitt.edu/~litman/itspoke.html

56 Questions? Thank You!

57 (image-only slide; no transcript text)

