
1 Question Answering Avishek Gosh (07305048) Abhishek Agarkar (07305053) Nikhilesh Sharma (07305045) Harshada Gune (07305904)

2 Roadmap Definition Motivation How is it different from a search engine? Generic QA architecture START QA system OpenEphyra QA system Joint answer ranking system Conclusion References

3 What is Question Answering? A type of information retrieval. Given a collection of documents, the system should be able to retrieve answers to questions posed in natural language. – Wikipedia

4 Motivation Google Search

5 START search

6 QA Different From Search
Input: Search – query containing keywords; QA – natural language question
Output: Search – list of documents; QA – concise answer
Uses: Search – extensive research; QA – quick references

7 Issues to be Considered Question Classes Question Processing Data Sources Answer Extraction Answer Formulation Advanced Reasoning for QA

8 Generic QA Architecture

9 START (SynTactic Analysis using Reversible Transformations)

10 Facts START was developed by Boris Katz at MIT's Artificial Intelligence Laboratory. It was first connected to the World Wide Web in December 1993. Its key technique is "natural language annotation".

11 Two basic foundations Sentence-level natural language processing capabilities Natural language annotations

12 T-expressions (Ternary expressions) Example sentence: "Bill surprised Hillary with his answer" T-expression: <<Bill surprise Hillary> with answer> History: contains other parameters (adverbs, tense, etc.) attached to the expression. The knowledge base is searched for the T-expression.

13 Solution by T-expression Q: Whom did Bill surprise with his answer? Step 1: Bill surprised whom with his answer Step 2: <<Bill surprise whom> with answer> Step 3: It matches the earlier T-expression Step 4: Produce the answer A: Bill surprised Hillary with his answer.
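A minimal Python sketch of how T-expressions might be stored and matched, in the spirit of slides 12–13 (this is illustrative only, not START's implementation; the parsing step is done by hand and the wildcard handling of the question word is an assumption):

# Illustrative sketch of T-expression storage and matching (not START's code).
# A T-expression is represented here as a nested 3-tuple:
#   <<Bill surprise Hillary> with answer> -> (("Bill", "surprise", "Hillary"), "with", "answer")

knowledge_base = [
    (("Bill", "surprise", "Hillary"), "with", "answer"),
]

WILDCARD = "whom"   # the question word acts as a variable during matching (assumption)

def matches(query, fact):
    """Recursively match a query T-expression against a stored fact,
    treating the question word as a wildcard that binds to anything."""
    if isinstance(query, tuple) and isinstance(fact, tuple):
        return len(query) == len(fact) and all(
            matches(q, f) for q, f in zip(query, fact))
    return query == WILDCARD or query == fact

# "Whom did Bill surprise with his answer?" -> "Bill surprised whom with his answer"
query = (("Bill", "surprise", "whom"), "with", "answer")

for fact in knowledge_base:
    if matches(query, fact):
        print("Matched:", fact)   # the binding whom <- Hillary yields the answer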

14 S-rules Q: Whose answer surprised Hillary? T-expression: <answer surprise Hillary> There is no match for this T-expression in the knowledge base.

15 S-rules (Cont.) Need a rule like: A surprised B with C => A's C surprised B. Surprise S-rule: If <<subject surprise object1> with object2> Then <object2 surprise object1>

16 S-rules (Cont.) Should be generalised to verbs similar to surprise, like anger, disappoint, please, etc. (the emotional-reaction class). Property-factoring S-rule: If <<subject verb object1> with object2> Then <object2 verb object1> Provided verb IN emotional-reaction class
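A small sketch of how such a property-factoring S-rule could be applied to a stored T-expression; the rule shape follows the slide's "A surprised B with C => A's C surprised B", and everything else is illustrative:

# Sketch of a property-factoring S-rule over the emotional-reaction verb class.
# If <<subj verb obj1> with obj2> holds and verb is in the class, also derive
# <obj2 verb obj1> so that "Whose answer surprised Hillary?" can be answered.

EMOTIONAL_REACTION = {"surprise", "anger", "disappoint", "please"}

def apply_property_factoring(fact):
    """Return the factored T-expression, or None if the rule does not apply."""
    inner, prep, obj2 = fact
    subj, verb, obj1 = inner
    if prep == "with" and verb in EMOTIONAL_REACTION:
        return (obj2, verb, obj1)        # e.g. <answer surprise Hillary>
    return None

fact = (("Bill", "surprise", "Hillary"), "with", "answer")
print(apply_property_factoring(fact))    # ('answer', 'surprise', 'Hillary')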

17 Natural Language Annotations Computer-analyzable collections of natural language sentences and phrases that describe the contents of various information segments. A pointer is associated with each sentence, pointing to the information segment summarized by the annotation. Instead of producing just the sentence as the answer, START follows the pointer and returns the text segment together with the answer sentence.
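A toy sketch of the annotation idea: each annotation sentence carries a pointer to the segment it summarizes, and answering means matching the question against the annotation and following the pointer (the annotations and URLs below are made up):

# Toy annotation store: natural language sentences plus pointers to segments.
annotations = [
    {"sentence": "START was developed by Boris Katz at MIT",
     "segment": "http://example.org/start-history"},      # hypothetical pointer
    {"sentence": "Omnibase provides uniform access to structured data sources",
     "segment": "http://example.org/omnibase"},
]

def answer(question_keywords):
    """Match question keywords against annotations, then follow the pointer."""
    for entry in annotations:
        text = entry["sentence"].lower()
        if all(k.lower() in text for k in question_keywords):
            # Return both the annotation sentence and the segment it points to.
            return entry["sentence"], entry["segment"]
    return None

print(answer(["developed", "START"]))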

18 Omnibase assists START Q: "Who directed Gone with the Wind?" START should know that "Gone with the Wind" is an entity (a movie name). Omnibase can help with this information. It also tells START where to find the information (here, in a movie data source). Now the question becomes: "Who directed X?"
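A sketch of the Omnibase-style lookup described above: once "Gone with the Wind" is recognized as a movie, the question reduces to a (data source, object, property) lookup. The tables below are illustrative stand-ins for real data sources, not the Omnibase API:

# Illustrative Omnibase-style lookup (toy data).
catalog = {
    # entity -> (data source, class): tells START where to look
    "Gone with the Wind": ("movie-db", "movie"),
}

movie_db = {
    ("Gone with the Wind", "director"): "Victor Fleming",
}

def lookup(entity, prop):
    source, _cls = catalog[entity]
    if source == "movie-db":
        return movie_db[(entity, prop)]
    raise KeyError("unknown data source: " + source)

# "Who directed Gone with the Wind?" -> "Who directed X?" with X = the movie entity
print(lookup("Gone with the Wind", "director"))   # Victor Fleming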

19 OpenEphyra

20 Ephyra Introduction Open-domain question answering system. Has a very modular architecture and can be adapted to other languages. Uses semantic role labeling and pattern matching. Time-dependent questions are not taken into consideration. Like other systems, it is composed mainly of three modules: query formulation, search, and answer selection.

21 Process of Question Interpretation Coreference resolution: resolves personal, possessive and demonstrative pronouns. Question normalization: drops punctuation and quotation marks, stems verbs and nouns. Semantic parsing of questions: the question is transformed into a statement and semantic role labeling is applied. Query generation and expansion: keyword queries, term queries, predicate queries, reformulation queries. Example question: In what year was the CMU campus at the west coast established?
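A rough sketch of the kinds of queries the query generation step might produce for the example question; the phrase detection and reformulation here are hand-written stand-ins for what OpenEphyra derives with its NLP components:

# Hand-written illustration of keyword, term and reformulation queries
# for the example question (simplified; not OpenEphyra's actual output).

question = "In what year was the CMU campus at the west coast established?"

stopwords = {"in", "what", "was", "the", "at"}
tokens = [t.strip("?").lower() for t in question.split()]

# Keyword query: content words only.
keyword_query = " ".join(t for t in tokens if t not in stopwords)

# Term query: multi-word names kept as phrases (phrases chosen by hand here).
term_query = '"CMU campus" "west coast" year established'

# Reformulation query: the question turned into an expected answer statement.
reformulation_query = '"the CMU campus at the west coast was established in"'

print(keyword_query)        # year cmu campus west coast established
print(term_query)
print(reformulation_query)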

22 Search Module The search module consists of two kinds of searchers: knowledge miners, which search unstructured data, and knowledge annotators, which provide semi-structured information (e.g. a gazetteer web service). Both produce answer candidates.

23 Answer Extraction Two ways to extract answers: answer type analysis and a pattern learning approach. Answer type analysis: consists of a set of 154 answer types. Uses features to classify answers: lexical features (UNIGRAM), syntactic features (MAIN_VERB, WH_WORD), semantic features (FOCUS_TYPE). Patterns are associated with the types. Example named entity type: Weekday. Example pattern: (what|which|name) (.*)?(day of (the)?week|weekday). Has high precision but fails for answers that cannot be tagged.
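The weekday pattern from the slide can be tried directly with Python's re module (the whitespace in the pattern is adjusted slightly so that "day of the week" matches):

import re

# Question pattern for the answer type "Weekday" (whitespace adjusted).
pattern = re.compile(r"(what|which|name) (.*)?(day of (the )?week|weekday)",
                     re.IGNORECASE)

questions = [
    "What day of the week was July 4, 1776?",
    "Which weekday does the meeting fall on?",
    "Who directed Gone with the Wind?",
]

for q in questions:
    answer_type = "Weekday" if pattern.search(q) else "unknown"
    print(answer_type, "<-", q)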

24 Semantic Role Based Extraction of Candidate Answers Initial pruning: the number of tokens is within a limit; the predicate verb or a semantically related verb must be present; the candidate must contain an entity of the same type as that of the question; if the type is not known, it must contain at least one semantically similar term. Similarity score components: term similarity, expanded term similarity, verb similarity, argument similarity, predicate similarity.
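A hedged sketch of the initial pruning checks listed above; the token limit, the related-verb table and the entity typing are simple stand-ins for what the system derives with lexical expansion and a named entity tagger:

# Simplified candidate pruning (illustrative thresholds and lookups).
MAX_TOKENS = 60                                    # assumed limit

related_verbs = {"establish": {"establish", "established", "found", "founded"}}

def keep_candidate(sentence, question_verb, expected_type, entity_types):
    tokens = sentence.lower().split()
    if len(tokens) > MAX_TOKENS:                   # length limit
        return False
    verbs = related_verbs.get(question_verb, {question_verb})
    if not any(v in tokens for v in verbs):        # predicate or related verb present
        return False
    # must contain an entity of the same type as the expected answer type
    return any(entity_types.get(t) == expected_type for t in tokens)

entity_types = {"2002": "year", "1900": "year"}
print(keep_candidate("the west coast campus was founded in 2002",
                     "establish", "year", entity_types))   # True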

25 Pattern Learning Approach Answer patterns can be learned using question-answer pairs as training data. Learning algorithm: tuples containing target, context and answer are fed to the searchers. Example tuple: (calories, Big Mac, 560). All named entities are replaced by their types. Some generic patterns are used to learn property-specific patterns. Generalization is done by keeping the NE types and making other tokens optional. Answer ranking is done by adding the confidence values. Answer evaluation: Confidence = correct / (correct + incorrect); Support = correct / snippets
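The evaluation arithmetic on the slide is simple enough to show directly; the counts in the example below are made up:

# Pattern evaluation: confidence = correct / (correct + incorrect),
#                     support    = correct / number of retrieved snippets.
def evaluate_pattern(correct, incorrect, snippets):
    confidence = correct / (correct + incorrect) if (correct + incorrect) else 0.0
    support = correct / snippets if snippets else 0.0
    return confidence, support

# e.g. a learned pattern for the (calories, Big Mac, 560) style of tuple that
# produced 8 correct and 2 incorrect extractions over 50 snippets (made-up counts):
confidence, support = evaluate_pattern(correct=8, incorrect=2, snippets=50)
print(confidence, support)   # 0.8 0.16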

26 A Joint Answer Ranking Model

27 Joint Answer Ranking A graphical model based method. Estimates the correctness of individual answers. Captures the correlation between answers.

28 Candidate Generation Document retrieval Question analysis Answer extraction

29 Issues with Candidates Answer Relevance Answer Similarity

30 The model Estimate the joint probability of all the answer candidates. Infer the probability of individual candidates from the above.

31 The problem is modeled as an undirected graphical model.

32 Algorithm Create an empty answer pool. Estimate the joint probability of all answer candidates. Calculate the marginal probability that an individual answer candidate is correct. Choose the answer candidate whose marginal probability is highest, and move it to the answer pool.

33 For the remaining answer candidates: Calculate the conditional probability of individual answers given the chosen answer(s). Calculate the score of each answer candidate from the marginal and conditional probabilities. Choose the answer whose Score(Aj) is maximum, and move it to the answer pool.
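A sketch of the greedy selection loop from slides 32 and 33. The marginal and conditional probabilities are taken as given inputs here (the model estimates them from the undirected graphical model); the way Score(Aj) combines them, the stop threshold and all numeric values are assumptions for illustration:

# Greedy answer-pool construction (illustrative only).
marginal = {                       # P(correct(A)) -- made-up values
    "William J. Clinton": 0.76,
    "Bill Clinton": 0.755,
    "George W. Bush": 0.617,
    "Al Gore": 0.20,
}

conditional_table = {              # P(correct(A) | correct(B)) -- made-up values
    ("Bill Clinton", "William J. Clinton"): 0.30,    # near-duplicate answer
    ("George W. Bush", "William J. Clinton"): 0.617,
    ("Al Gore", "William J. Clinton"): 0.15,
}

def conditional(a, pool):
    """P(correct(a) | answers already chosen); falls back to the marginal."""
    if not pool:
        return marginal[a]
    return min(conditional_table.get((a, b), marginal[a]) for b in pool)

THRESHOLD = 0.55                   # assumed cut-off for adding further answers
pool, remaining = [], set(marginal)

while remaining:
    # Score(Aj): here a simple average of marginal and conditional probability.
    score = {a: 0.5 * marginal[a] + 0.5 * conditional(a, pool) for a in remaining}
    best = max(score, key=score.get)
    if score[best] < THRESHOLD:
        break
    pool.append(best)
    remaining.remove(best)

print(pool)    # ['William J. Clinton', 'George W. Bush']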

34 Example Question: Who have been the U.S. presidents since 1993? Answers: P(correct(William J. Clinton)) = P(correct(Bill Clinton)) = 0.755, P(correct(George W. Bush)) = 0.617

35 P(correct(Bill Clinton)|correct(William J. Clinton)) = P(correct(George W. Bush)|correct(William J. Clinton)) = 0.617

36 Evaluation TREC (Text REtrieval Conference) FIRE (Forum for Information Retrieval Evaluation) CLEF (Cross Language Evaluation Forum)

37 Conclusion Question Answering is an exciting area of research. A real-world application of NLP technologies. Lies at the intersection of information retrieval and natural language processing. The dream: a vast repository of knowledge we can "talk to". We are a long way from there...

38 References Issues, Tasks and Program Structures to Roadmap Research in Question & Answering (Q&A), 2002. M. W. Bilotti. Query expansion techniques for question answering. Master's thesis, Massachusetts Institute of Technology. Hoa Trang Dang, Jimmy Lin, and Diane Kelly. Overview of the TREC 2006 Question Answering Track. In Proceedings of the Fifteenth Text REtrieval Conference (TREC 2006).

39 References Jeongwoo Ko, Luo Si, Eric Nyberg. A Probabilistic Graphical Model for Joint Answer Ranking in Question Answering. In Proceedings of SIGIR 2007. Nico Schlaefer, Jeongwoo Ko, Justin Betteridge, Guido Sautter, Manas Pathak, Eric Nyberg. Semantic Extensions of the Ephyra QA System for TREC 2007. In Proceedings of the Sixteenth Text REtrieval Conference (TREC), 2007. Nico Schlaefer, Petra Gieselmann, Guido Sautter. The Ephyra QA System at TREC 2006. In Proceedings of the Fifteenth Text REtrieval Conference (TREC), 2006. Nico Schlaefer, Petra Gieselmann, Thomas Schaaf, Alex Waibel. A Pattern Learning Approach to Question Answering within the Ephyra Framework. In Proceedings of the Ninth International Conference on TEXT, SPEECH and DIALOGUE (TSD), 2006.

40 References (contd.) Katz, B. "Annotating the World Wide Web using Natural Language." In Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet (RIAO '97), 1997. Boris Katz, Gary Borchardt and Sue Felshin. Natural Language Annotations for Question Answering. In Proceedings of the 19th International FLAIRS Conference (FLAIRS 2006), May 2006, Melbourne Beach, FL. Boris Katz, Sue Felshin, Deniz Yuret, Ali Ibrahim, Jimmy Lin, Gregory Marton, Alton Jerome McFarland, and Baris Temelkurane. "Omnibase: Uniform Access to Heterogeneous Data for Question Answering." In Proceedings of the 7th International Workshop on Applications of Natural Language to Information Systems (NLDB 2002), June 2002, Stockholm, Sweden.

