
1 Evaluating Question Answering Validation. Anselmo Peñas (and Álvaro Rodrigo), NLP & IR group, UNED, nlp.uned.es. Information Science Institute, Marina del Rey, December 11, 2009.

2 UNED nlp.uned.es Old friends. Question Answering: nothing other than answering a question. Natural Language Understanding: something is there if you are able to answer a question. QA serves as an extrinsic evaluation for NLU. Suddenly… (see the track?) …the QA Track at TREC.

3 UNED nlp.uned.es Question Answering at TREC became an object of evaluation in itself. It was redefined (roughly speaking) as a highly precision-oriented IR task where NLP was necessary, especially for Answer Extraction.

4 UNED nlp.uned.es What's this story about? QA tasks at CLEF, 2003-2010: Multiple Language QA Main Task and ResPubliQA; temporal restrictions and lists; Answer Validation Exercise (AVE); GikiCLEF; Real Time QA over Speech Transcriptions (QAST); WiQA; WSD QA.

5 UNED nlp.uned.es Outline 1. Motivation and goals 2. Definition and general framework 3. AVE 2006 4. AVE 2007 & 2008 5. QA 2009

6 UNED nlp.uned.es Outline of the evaluation cycle (short cycle / long cycle): 1. Analysis of current systems' performance. 2. Mid-term goals and strategy. 3. Evaluation task definition. 4. Analysis of the evaluation cycle: result analysis, methodology analysis, generation of methodology and evaluation resources, task activation and development.

7 UNED nlp.uned.es Systems' performance 2003-2006 (Spanish): overall, the best result was below 60%; on definition questions, the best result was above 80%, and NOT with an IR approach.

8 UNED nlp.uned.es Pipeline upper bounds. SOMETHING is needed to break the pipeline: Question → Question analysis → Passage Retrieval → Answer Extraction → Answer Ranking → Answer. Stage accuracies multiply: 1.0 × 0.8 × 0.8 = 0.64. Not enough evidence.
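
The arithmetic of the upper bound, as a minimal sketch; the slide shows the factors 1.0 and 0.8 with the product 0.64, so the split into two 0.8 stages is an assumption:

```python
# Sketch: the best case for a pipeline is the product of its stage
# accuracies, since an error at any stage cannot be recovered downstream.
stage_accuracies = {
    "question_analysis": 1.0,  # assumed perfect on the slide
    "passage_retrieval": 0.8,
    "answer_extraction": 0.8,  # assumption: the slide's 0.64 = 1.0 * 0.8 * 0.8
}

upper_bound = 1.0
for accuracy in stage_accuracies.values():
    upper_bound *= accuracy

print(f"Pipeline upper bound: {upper_bound:.2f}")  # 0.64
```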

9 UNED nlp.uned.es Results in CLEF-QA 2006 (Spanish): the perfect combination reaches 81%, while the best single system reaches 52.5%. Different systems were best with ORGANIZATION, with PERSON, and with TIME questions.

10 UNED nlp.uned.es Collaborative architectures. Different systems answer different types of questions better: specialization plus collaboration. QA sys 1 … QA sys n each take the question and produce candidate answers; SOMETHING then combines / selects the final answer.

11 UNED nlp.uned.es Collaborative architectures. How to select the right answer? Redundancy, voting, confidence scores, performance history (a sketch of these shallow criteria follows). Why not deeper content analysis?
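
A minimal sketch of the shallow selection criteria the slide lists (redundancy, confidence, performance history); the weighting scheme and the tuple format are illustrative assumptions, not a system from the track:

```python
from collections import defaultdict

def select_answer(candidates):
    """Pick one answer from several QA streams.

    candidates: list of (answer, confidence, history) tuples, where
    history is the stream's past accuracy. Hypothetical scoring:
    redundancy (votes) weighted by self-reported confidence and past
    performance -- shallow criteria, no deeper content analysis.
    """
    scores = defaultdict(float)
    for answer, confidence, history in candidates:
        scores[answer.strip().lower()] += confidence * history  # weighted vote
    return max(scores, key=scores.get)

# Usage: three streams, two of them agree on the same answer.
print(select_answer([
    ("Rome", 0.9, 0.5),
    ("rome", 0.6, 0.7),
    ("Paris", 0.8, 0.4),
]))  # -> 'rome' (normalized), chosen by weighted redundancy
```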

12 UNED nlp.uned.es Mid-term goal. Goal: improve QA systems' performance. New mid-term goal: improve the devices for rejecting / accepting / selecting answers. The new task (2006): validate the correctness of the answers given by real QA systems… the participants at CLEF QA.

13 UNED nlp.uned.es Outline 1. Motivation and goals 2. Definition and general framework 3. AVE 2006 4. AVE 2007 & 2008 5. QA 2009

14 UNED nlp.uned.es Define Answer Validation: decide whether an answer is correct or not. More precisely, the task: given a Question, an Answer, and a Supporting Text, decide if the answer is correct according to the supporting text. Let's call it the Answer Validation Exercise (AVE).
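
As an interface, the exercise is a binary decision over a (question, answer, supporting text) triple. A hypothetical signature, with the "accept all answers" baseline (used later in AVE) as its body:

```python
def validate(question: str, answer: str, supporting_text: str) -> bool:
    """AVE decision: True = ACCEPT (the answer is correct according to
    the supporting text), False = REJECT.

    Hypothetical interface; the exercise fixes only inputs and output.
    Any validation strategy (overlap, entailment, logic) plugs in here.
    The body below is the trivial 'accept all answers' baseline.
    """
    return True
```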

15 UNED nlp.uned.es Wish list: a test collection (questions, answers, supporting texts, human assessments), evaluation measures, and participants.

16 UNED nlp.uned.es Evaluation linked to the main QA task. The Question Answering Track supplies the questions, the systems' answers, and the systems' supporting texts to the Answer Validation Exercise, which outputs ACCEPT / REJECT decisions. The QA Track's human judgements (R, W, X, U) are mapped to ACCEPT / REJECT, so the human assessments are reused to evaluate the AVE track results.

17 UNED nlp.uned.es Answer Validation: given a Question, a Candidate answer, and a Supporting Text, decide whether the answer is correct, or the answer is not correct or there is not enough evidence. In the Answer Validation Exercise (AVE), AVE 2006 cast this as Textual Entailment, with automatic generation of the Hypothesis; AVE 2007-2008 validated directly, without explicit text-hypothesis pairs.

18 UNED nlp.uned.es Outline: Motivation and goals. Definition and general framework. AVE 2006 (underlying architecture: pipeline; evaluating the validation; as an RTE exercise: text-hypothesis pairs). AVE 2007 & 2008. QA 2009.

19 UNED nlp.uned.es AVE 2006: an RTE exercise. If the text semantically entails the hypothesis, then the answer is expected to be correct. The QA system's question, supporting snippet, and exact answer are turned into a Text-Hypothesis pair: entailment? Is this true? Yes, for 95% of answers with current QA systems (J LOG COMP 2009).
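
A minimal sketch of this reduction, assuming a toy template-based hypothesis generator and an arbitrary RTE system behind an entails(text=..., hypothesis=...) callback (both hypothetical; the real AVE 2006 pairs were built with more careful templates):

```python
def build_hypothesis(question: str, answer: str) -> str:
    """Turn a question/answer pair into a declarative hypothesis.
    Toy pattern-based version for illustration only.
    """
    q = question.rstrip("?").strip()
    for wh in ("What is", "Who is", "Where is"):
        if q.startswith(wh):
            return f"{q[len(wh):].strip()} is {answer}"
    return f"{q} {answer}"

def answer_is_correct(question, answer, snippet, entails) -> bool:
    # If the supporting text entails the hypothesis, the answer is
    # expected to be correct (true for ~95% of current QA systems'
    # answers, per the slide).
    return entails(text=snippet, hypothesis=build_hypothesis(question, answer))

# Usage, with any RTE system exposed as entails(text=..., hypothesis=...):
# answer_is_correct("What is Zanussi?",
#                   "an Italian producer of home appliances",
#                   snippet, entails=my_rte_system)
```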

20 UNED nlp.uned.es Collections AVE 2006, available at nlp.uned.es/clef-qa/ave/ (pairs, with % of positive entailment):

  Language    Testing          Training
  English     2088 (10% YES)   2870 (15% YES)
  Spanish     2369 (28% YES)   2905 (22% YES)
  German      1443 (25% YES)
  French      3266 (22% YES)
  Italian     1140 (16% YES)
  Dutch        807 (10% YES)
  Portuguese  1324 (14% YES)

21 UNED nlp.uned.es Evaluating the validation. Decide whether each candidate answer is correct: YES | NO. The collections are not balanced, so the approach is to detect whether there is enough evidence to accept an answer. Measures: precision, recall and F over correct answers. Baseline system: accept all answers.

22 UNED nlp.uned.es Evaluating the validation, contingency counts:

                     Correct answer   Incorrect answer
  Answer accepted    n_CA             n_WA
  Answer rejected    n_WR             n_CR
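
A sketch of the measures over these counts; mapping the four cells to precision and recall over correct answers follows from the definitions above, and the example counts are hypothetical:

```python
def validation_scores(n_ca, n_wa, n_wr, n_cr):
    """Precision, recall and F computed over *correct* answers only,
    as in AVE (the collections are not balanced, so plain accuracy
    would reward the 'accept all' baseline too much).

    n_ca: correct answers accepted    n_wa: incorrect answers accepted
    n_wr: correct answers rejected    n_cr: incorrect answers rejected
    """
    precision = n_ca / (n_ca + n_wa) if n_ca + n_wa else 0.0
    recall = n_ca / (n_ca + n_wr) if n_ca + n_wr else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# The 'accept all answers' baseline: recall is 1.0 and precision equals
# the proportion of correct answers in the collection (counts made up).
print(validation_scores(n_ca=209, n_wa=791, n_wr=0, n_cr=0))
```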

23 UNED nlp.uned.es Results AVE 2006:

  Language    Baseline (F)  Best system (F)  Reported techniques
  English     .27           .44              Logic
  Spanish     .45           .61              Logic
  German      .39           .54              Lexical, syntax, semantics, logic, corpus
  French      .37           .47              Overlapping, learning
  Dutch       .19           .39              Syntax, learning
  Portuguese  .38           .35              Overlapping
  Italian     .29           .41              Overlapping, learning

24 UNED nlp.uned.es Outline: Motivation and goals. Definition and general framework. AVE 2006. AVE 2007 & 2008 (underlying architecture: multi-stream; quantify the potential benefit of AV in QA; evaluating the correct selection of one answer; evaluating the correct rejection of all answers). QA 2009.

25 UNED nlp.uned.es Evaluation of Answer Validation & Selection, AVE 2007 & 2008: QA sys 1 … QA sys n (the participant systems in a CLEF QA edition) receive the question and produce candidate answers plus supporting texts; the Answer Validation & Selection module then picks the final answer.

26 UNED nlp.uned.es Collections. Example question: What is Zanussi? Candidate answers with their supporting texts:
  1. "was an Italian producer of home appliances", supported by: "Zanussi. For the Polish film director, see Krzysztof Zanussi. For the hot-air balloon, see Zanussi (balloon). Zanussi was an Italian producer of home appliances that in 1984 was bought…"
  2. "who had also been in Cassibile since August 31", supported by: "Only after the signing had taken place was Giuseppe Castellano informed of the additional clauses that had been presented by general Ronald Campbell to another Italian general, Zanussi, who had also been in Cassibile since August 31."
  3. "3 (1985)", supported by: "3 Out of 5 Live (1985) What Is This?"

27 UNED nlp.uned.es Evaluating the Selection Goals Quantify the potential gain of Answer Validation in Question Answering Compare AV systems with QA systems Develop measures more comparable to QA accuracy

28 UNED nlp.uned.es Evaluating the selection. Given a question with several candidate answers, there are two options. Selection: select an answer, i.e. try to answer the question; the selection is correct if the selected answer was correct, incorrect otherwise. Rejection: reject all candidate answers, i.e. leave the question unanswered; the rejection is correct if all candidate answers were incorrect, incorrect if not all of them were.

29 UNED nlp.uned.es Evaluating the selection over n questions, with n = n_CA + n_WA + n_WS + n_WR + n_CR:

                                          With correct answer   Without correct answer
  Answered correctly (one selected)       n_CA                  -
  Answered incorrectly                    n_WA                  n_WS
  Unanswered (all answers rejected)       n_WR                  n_CR

30 UNED nlp.uned.es Evaluating the selection. A first measure rewards rejection (the collections are not balanced). Interpretation for QA: all questions correctly rejected by AV are counted as if they would be answered correctly.

31 UNED nlp.uned.es Evaluating the selection. A second measure: questions correctly rejected are credited as if they were answered correctly, but only in qa_accuracy proportion. (A reconstruction of both measures follows.)
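
The measure formulas themselves were figures on the original slides; the sketch below reconstructs both from the two interpretations just described, and should be read as an assumption rather than the official AVE 2007/2008 definitions:

```python
def selection_measures(n_ca, n_wa, n_ws, n_wr, n_cr):
    """Reconstruction of the two selection measures described above.

    n_ca: correct selections          n_wa, n_ws: wrong selections
    n_wr: wrong rejections            n_cr: correct rejections
    """
    n = n_ca + n_wa + n_ws + n_wr + n_cr
    qa_accuracy = n_ca / n
    # Slide 30: every correctly rejected question counted as if it
    # would be answered correctly -- rewards rejection.
    accuracy_max = (n_ca + n_cr) / n
    # Slide 31: correct rejections credited only in qa_accuracy
    # proportion, i.e. at the rate the system answers correctly.
    estimated_qa = (n_ca + qa_accuracy * n_cr) / n
    return qa_accuracy, accuracy_max, estimated_qa
```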

32 UNED nlp.uned.es Analysis and discussion (AVE 2007 Spanish) Validation Selection Comparing AV & QA

33 UNED nlp.uned.es Techniques in AVE 2007 (number of systems using each): Generates hypotheses: 6. Wordnet: 3. Chunking: 3. n-grams, longest common subsequences: 5. Phrase transformations: 2. NER: 5. Numeric expressions: 6. Temporal expressions: 4. Coreference resolution: 2. Dependency analysis: 3. Syntactic similarity: 4. Functions (subj, obj, etc.): 3. Syntactic transformations: 1. Word-sense disambiguation: 2. Semantic parsing: 4. Semantic role labeling: 2. First-order logic representation: 3. Theorem prover: 3. Semantic similarity: 2.

34 UNED nlp.uned.es Conclusion of AVE. Answer Validation before: it was assumed to be a QA module, with no space for its own development. The new devices should help to improve QA: they introduce more content analysis, use Machine Learning techniques, and are able to break pipelines or combine streams. Let's transfer them to the QA main task.

35 UNED nlp.uned.es Outline Motivation and goals Definition and general framework AVE 2006 AVE 2007 & 2008 QA 2009

36 UNED nlp.uned.es CLEF QA 2009 campaign ResPubliQA: QA on European Legislation GikiCLEF: QA requiring geographical reasoning on Wikipedia QAST: QA on Speech Transcriptions of European Parliament Plenary sessions

37 UNED nlp.uned.es CLEF QA 2009 campaign:

  Task        Registered groups   Participant groups  Submitted runs           Organizing people
  ResPubliQA  20                  11                  28 + 16 (baseline runs)  9
  GikiCLEF    27                  8                   17 runs                  2
  QAST        12                  4                   86 (5 subtasks)          8
  Total       59 showed interest  23 groups           147 runs evaluated       19 + additional assessors

38 ResPubliQA 2009: QA on European Legislation. Organizers: Anselmo Peñas, Pamela Forner, Richard Sutcliffe, Álvaro Rodrigo, Corina Forascu, Iñaki Alegria, Danilo Giampiccolo, Nicolas Moreau, Petya Osenova. Additional assessors: Fernando Luis Costa, Anna Kampchen, Julia Kramme, Cosmina Croitoru. Advisory board: Donna Harman, Maarten de Rijke, Dominique Laurent.

39 UNED nlp.uned.es Evolution of the task, 2003 to 2009:
  Target languages: 3 (2003), 7, 8, 9, 10, 11, 8 (2009)
  Collections: news 1994, + news 1995, + Wikipedia Nov. 2006, then European Legislation
  Number of questions: 200, later 500
  Type of questions: 200 factoid, + temporal restrictions, + definitions; - type of question, + lists, + linked questions, + closed lists; - linked, + reason, + purpose, + procedure
  Supporting information: document, then snippet, then paragraph
  Size of answer: snippet, then exact, then paragraph

40 UNED nlp.uned.es Collection: a subset of JRC-Acquis (10,700 docs per language), parallel at document level. EU treaties, EU legislation, agreements and resolutions: economy, health, law, food, … Between 1950 and 2006.

41 UNED nlp.uned.es 500 questions. REASON: Why did a commission expert conduct an inspection visit to Uruguay? PURPOSE/OBJECTIVE: What is the overall objective of the eco-label? PROCEDURE: How are stable conditions in the natural rubber trade achieved? In general, any question that can be answered in a paragraph.

42 UNED nlp.uned.es 500 questions. Also FACTOID: In how many languages is the Official Journal of the Community published? And DEFINITION: What is meant by "whole milk"? No NIL questions.

43 UNED nlp.uned.es Systems' response. No Answer ≠ Wrong Answer. 1. Decide whether to answer or not [YES | NO]: a classification problem (Machine Learning, provers, Textual Entailment, etc.). 2. Provide the paragraph (ID + text) that answers the question. Aim: leaving a question unanswered has more value than giving a wrong answer. (A sketch of the response format follows.)
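
A sketch of one response record under this two-step scheme; the field names and the paragraph ID are hypothetical, not the official run format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    """One system response, ResPubliQA 2009 style: first a binary
    decision on whether to answer at all, then (if answering) the
    supporting paragraph. Illustrative field names only.
    """
    answered: bool                         # step 1: YES/NO classification
    paragraph_id: Optional[str] = None     # step 2: paragraph ID...
    paragraph_text: Optional[str] = None   # ...and its text

no_answer = Response(answered=False)                 # NoA != wrong answer
hit = Response(True, "jrc31979R0461-p12", "...")     # hypothetical ID
```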

44 UNED nlp.uned.es Assessments. R: the question is answered correctly. W: the question is answered incorrectly. NoA: the question is not answered. NoA R: NoA, but the candidate answer was correct. NoA W: NoA, and the candidate answer was incorrect. NoA Empty: NoA, and no candidate answer was given. Evaluation measure: c@1, an extension of traditional accuracy (the proportion of questions correctly answered) that takes unanswered questions into account.

45 UNED nlp.uned.es Evaluation measure: c@1 = (n_R + n_U · (n_R / n)) / n, where n is the number of questions, n_R the number of correctly answered questions, and n_U the number of unanswered questions.

46 UNED nlp.uned.es Evaluation measure properties: if n_U = 0 then c@1 = n_R/n (accuracy); if n_R = 0 then c@1 = 0; if n_U = n then c@1 = 0. Leaving a question unanswered adds value only when it avoids returning a wrong answer: the added value is the performance shown on the answered questions, i.e. their accuracy.
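
A direct implementation of the measure as defined above, with the three limit cases from this slide as checks:

```python
def c_at_1(n_r: int, n_u: int, n: int) -> float:
    """c@1 = (n_r + n_u * (n_r / n)) / n

    n: total questions, n_r: correctly answered, n_u: unanswered.
    Unanswered questions are credited at the accuracy rate shown on
    the answered ones, so abstaining only pays off when it replaces
    wrong answers.
    """
    return (n_r + n_u * (n_r / n)) / n

assert c_at_1(n_r=250, n_u=0, n=500) == 250 / 500  # no abstention: accuracy
assert c_at_1(n_r=0, n_u=100, n=500) == 0.0        # nothing correct: 0
assert c_at_1(n_r=0, n_u=500, n=500) == 0.0        # everything unanswered: 0
```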

47 UNED nlp.uned.es List of participants:

  System  Team
  elix    ELHUYAR-IXA, Spain
  icia    RACAI, Romania
  iiit    Search & Info Extraction Lab, India
  iles    LIMSI-CNRS-2, France
  isik    ISI-Kolkata, India
  loga    U. Koblenz-Landau, Germany
  mira    MIRACLE, Spain
  nlel    U. Politecnica Valencia, Spain
  syna    Synapse Développement, France
  uaic    Al.I.Cuza U. of Iasi, Romania
  uned    UNED, Spain

48 UNED nlp.uned.es Value of reducing wrong answers (Romanian runs):

  System       c@1   Accuracy  #R   #W   #NoA  #NoA R  #NoA W  #NoA empty
  combination  0.76  0.76      381  119  0     0       0       0
  icia092roro  0.68  0.52      260  84   156   0       0       156
  icia091roro  0.58  0.47      237  156  107   0       0       107
  UAIC092roro  0.47  0.47      236  264  0     0       0       0
  UAIC091roro  0.45  0.45      227  273  0     0       0       0
  base092roro  0.44  0.44      220  280  0     0       0       0
  base091roro  0.37  0.37      185  315  0     0       0       0
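
As a worked check, the icia092roro row follows from the c@1 definition: answering 260 of 500 questions correctly and leaving 156 unanswered scores well above the raw 0.52 accuracy:

```python
n, n_r, n_u = 500, 260, 156           # icia092roro: 260 R, 84 W, 156 NoA
accuracy = n_r / n                     # 0.52
c1 = (n_r + n_u * accuracy) / n        # (260 + 81.12) / 500 = 0.68
print(f"accuracy={accuracy:.2f}  c@1={c1:.2f}")
```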

49 UNED nlp.uned.es Detecting wrong answers (German runs):

  System       c@1   Accuracy  #R   #W   #NoA  #NoA R  #NoA W  #NoA empty
  combination  0.56  0.56      278  222  0     0       0       0
  loga091dede  0.44  0.41      186  221  93    16      68      9
  loga092dede  0.44  0.41      187  230  83    12      62      9
  base092dede  0.38  0.38      189  311  0     0       0       0
  base091dede  0.35  0.35      174  326  0     0       0       0

While maintaining the number of correct answers, the candidate answer was indeed incorrect for 83% of the unanswered questions: a very good step towards improving the system.

50 UNED nlp.uned.es IR is important, but not enough (English runs):

  System       c@1   Accuracy  #R   #W   #NoA  #NoA R  #NoA W  #NoA empty
  combination  0.9   0.9       451  49   0     0       0       0
  uned092enen  0.61  0.61      288  184  28    15      12      1
  uned091enen  0.60  0.59      282  190  28    15      13      0
  nlel091enen  0.58  0.57      287  211  2     0       0       2
  uaic092enen  0.54  0.52      243  204  53    18      35      0
  base092enen  0.53  0.53      263  236  1     1       0       0
  base091enen  0.51  0.51      256  243  1     0       1       0
  elix092enen  0.48  0.48      240  260  0     0       0       0
  uaic091enen  0.44  0.42      200  253  47    11      36      0
  elix091enen  0.42  0.42      211  289  0     0       0       0
  syna091enen  0.28  0.28      141  359  0     0       0       0
  isik091enen  0.25  0.25      126  374  0     0       0       0
  iiit091enen  0.2   0.11      54   37   409   0       11      398
  elix092euen  0.18  0.18      91   409  0     0       0       0
  elix091euen  0.16  0.16      78   422  0     0       0       0

An achievable task: the perfect combination is 50% better than the best system, and many systems fall below the IR baselines.

51 UNED nlp.uned.es Outline Motivation and goals Definition and general framework AVE 2006 AVE 2007 & 2008 QA 2009 Conclusion

52 UNED nlp.uned.es Conclusion. A new QA evaluation setting, assuming that leaving a question unanswered has more value than giving a wrong answer. This assumption gives space for the further development of QA systems, and hopefully improves their performance.

53 Thanks! http://nlp.uned.es/clef-qa/ave http://www.clef-campaign.org Acknowledgement: EU project T-CLEF (ICT-1-4-1 215231)

