Question Answering Gideon Mann Johns Hopkins University


Information Retrieval Tasks • Retired General Wesley Clark • How old is General Clark? • How long did Clark serve in the military? • Will Clark run for President?

Ad-Hoc Queries • Prior work has been concerned mainly with answering ad-hoc queries: General Clark • Typically a few words long, not an entire question • What is desired is general information about the subject in question

Answering Ad-Hoc Queries • Main focus of Information Retrieval for the past 2-3 decades • Solutions: – Vector-based methods – SVD, query expansion, language modeling – Return a page as an answer • Resulting systems are extremely useful – Google, AltaVista

Traditional IR (diagram): Query + Document Collection → Document Retrieval → Document Ranking

But not all queries are Ad-Hoc! How old is General Clark? • Does not fit well into an ad-hoc paradigm – “How” and “is” are not relevant for appropriate retrieval – Potentially useful cues in the question are ignored in a traditional ad-hoc retrieval system

Documents are not Facts • Traditional IR systems return pages – Useful when only a vague information need has been identified • Insufficient when a fact is desired: – How old is General Clark? 58 – How long did Clark serve in the military? 36 years – Will Clark run for president? Maybe

Question Answering as Retrieval Given a document collection and a question: a question answering system should retrieve a short snippet of text which exactly answers the question asked.

Question Answering (diagram): Query + Document Collection → Document Retrieval → Document Ranking → Answer Extraction (sentence ranking) → Ranked Answers
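Read as a pipeline, the diagram above can be sketched roughly as follows. Every component here (retrieve_documents, rank_sentences, extract_answers) is a hypothetical, simplistic stand-in for illustration, not the system described in the lecture.

```python
# Minimal sketch of the QA pipeline in the diagram: retrieval, then sentence
# ranking, then answer extraction. All components are simplistic stand-ins.

def retrieve_documents(query, collection, k=50):
    """Keep the k documents sharing the most words with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.lower().split())), doc) for doc in collection]
    return [doc for score, doc in sorted(scored, reverse=True)[:k] if score > 0]

def rank_sentences(question, documents):
    """Split documents into sentences and rank them by keyword overlap."""
    q_terms = set(question.lower().split())
    sentences = [s.strip() for doc in documents for s in doc.split(".") if s.strip()]
    return sorted(sentences, key=lambda s: len(q_terms & set(s.lower().split())), reverse=True)

def extract_answers(question, sentences, n=3):
    """Stand-in answer extraction: just return the top-ranked snippets."""
    return sentences[:n]

def answer(question, collection):
    docs = retrieve_documents(question, collection)
    return extract_answers(question, rank_sentences(question, docs))
```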

QA as a Comprehension Task • For perfect recall, the answer only has to appear once in the collection. • In essence, this forces the QA system to function as a text understanding system • Thus QA may be interesting, not only for retrieval, but also to test understanding

QA as a stepping stone • Current QA is focused on fact extraction – Answers appear verbatim in text: How old is General Clark? • How can we answer questions whose answers don’t appear verbatim in the text? How long has Clark been in the military? Will Clark run for President? • Maybe build on low-level QA extracted facts

QA Methods Two Main Categories of Methods for Question Answering –Answer Preference Matching –Answer Context Matching

Lecture Outline 1. Answer Preferences • Question Analysis • Type Identification • Learning Answer Typing 2. Answer Context Learning • Context Similarity • Alignment • Surface Text Patterns

Answer Type Identification From the question itself, infer the likely type of the answer: How old is General Clark? → How old → ? When did Clark retire? → When → ? Who is the NBC war correspondent? → Who → ?

NASSLI! • April 12 deadline – Could be extended… – Mail to ask for more

Answer Type Identification From the question itself, infer the likely type of the answer: How old is General Clark? → How old → Age; When did Clark retire? → When → Date; Who is the NBC war correspondent? → Correspondent → Person

Wh-Words: Who → Person, Organization, Location | When → Date, Year | Where → Location | In What → Location | What → ??
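The table amounts to a small lookup from the wh-word to a set of candidate answer types. A minimal sketch (the function and the handling of the two-word cue are illustrative choices, not part of the lecture):

```python
# Candidate answer types suggested by the wh-word alone, per the table above.
WH_TYPES = {
    "who":     ["PERSON", "ORGANIZATION", "LOCATION"],
    "when":    ["DATE", "YEAR"],
    "where":   ["LOCATION"],
    "in what": ["LOCATION"],
    "what":    ["??"],  # underconstrained: 'what' alone says almost nothing
}

def wh_word_types(question):
    q = question.lower()
    # Check the two-word cue first so "in what" is not swallowed by "what".
    for cue in ("in what", "who", "when", "where", "what"):
        if cue in q:
            return WH_TYPES[cue]
    return ["??"]

print(wh_word_types("Who is the NBC war correspondent?"))  # ['PERSON', 'ORGANIZATION', 'LOCATION']
```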

Difficult to Enumerate All Possibilities Though: What is the service ceiling for a PAC750?

WordNet (fragment of the noun hierarchy, diagram): wingspan, length, diameter, radius, altitude, ceiling

WordNet For Answer Typing (hierarchy diagram): wingspan, length, diameter, radius, altitude, ceiling → NUMBER. “What is the service ceiling for a PAC750?”
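A sketch of this idea using NLTK's WordNet interface is below (it requires the NLTK WordNet data). The anchor synsets that stand in for answer types are illustrative assumptions, not the mapping used in the lecture.

```python
# Map a question focus word (e.g. "ceiling") to a coarse answer type by walking
# up the WordNet hypernym hierarchy, as in the diagram above.
from nltk.corpus import wordnet as wn  # run nltk.download('wordnet') once if missing

# Illustrative anchor synsets; a real system would tune this mapping.
TYPE_ANCHORS = {
    "magnitude.n.01":   "NUMBER",
    "measure.n.02":     "NUMBER",
    "person.n.01":      "PERSON",
    "location.n.01":    "LOCATION",
    "time_period.n.01": "DATE",
}

def wordnet_answer_type(focus_word):
    for synset in wn.synsets(focus_word, pos=wn.NOUN):
        # All hypernyms reachable from this sense of the word.
        ancestors = {s.name() for s in synset.closure(lambda s: s.hypernyms())}
        for anchor, answer_type in TYPE_ANCHORS.items():
            if anchor in ancestors:
                return answer_type
    return None

# e.g. NUMBER, if some sense of "ceiling" reaches a quantity-like ancestor
print(wordnet_answer_type("ceiling"))
```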

Lecture Outline 1. Answer Preferences • Question Analysis • Type Identification • Learning Answer Typing 2. Answer Context Learning • Context Similarity • Alignment • Surface Text Patterns

Answer Typing gives the Preference… • From Answer Typing, we have the preferences imposed by the question • But in order to use those preferences, we must have a way to detect potential candidate answers

Some are Simple… • Number → [0-9]+ • Date → ($month) ($day) ($year) • Age → 0–100
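These simple preferences map directly onto patterns. The regular expressions below are rough illustrations, not a complete number or date grammar:

```python
import re

# Rough taggers for the simple answer types above.
NUMBER_RE = re.compile(r"\b[0-9]+\b")
DATE_RE = re.compile(
    r"\b(January|February|March|April|May|June|July|August|"
    r"September|October|November|December)\s+\d{1,2},?\s+\d{4}\b")

def tag_numbers(text):
    return NUMBER_RE.findall(text)

def tag_ages(text):
    # Per the slide, treat any bare number in 0..100 as a potential age.
    return [n for n in tag_numbers(text) if 0 <= int(n) <= 100]

sentence = "General Clark, from Little Rock, Arkansas, turns 58 after serving 36 years."
print(tag_ages(sentence))  # ['58', '36'] -- both in the age range; context must disambiguate
```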

… Others are Complicated • Who shot Martin Luther King? – Person preference • Requires a Named Entity Identifier • Who saved Chrysler from bankruptcy? – Not just confined to people… – Need a tagger to identify appropriate candidates

Use WordNet for Type Identification (diagram): “What 20th century poet wrote Howl?” WordNet hierarchy: communicator → writer → poet; candidate set: Ginsberg, Frost, Rilke

Simple Answer Extraction (diagram): How old is General Clark? → Age. The Age Tagger is run over the sentence “General Clark, from Little Rock, Arkansas, turns 58 after serving 36 years in the service, this December 23,” marking “58” as the candidate answer of type Age.

Lecture Outline 1. Answer Preferences • Question Analysis • Type Identification • Learning Answer Typing 2. Answer Context Learning • Context Similarity • Alignment • Surface Text Patterns

Learning Answer Typing • What is desired is a model which predicts P(type | question) • Usually a variety of possible types – Who → Person (“Who shot Kennedy?” Oswald), Organization (“Who rescued Chrysler from bankruptcy?” The Government), Location (“Who won the Superbowl?” New England)

What training data? • Annotated questions – “Who shot Kennedy” [PERSON] • Problems: – Expensive to annotate – Must be redone every time the tag set is revised

Trivia Questions! • Alternatively, use unannotated trivia questions – Q: “Who shot Kennedy” – A: Lee Harvey Oswald • Run your type tagger over the answers to get tags – A: Lee Harvey Oswald [PERSON]

MI Model • From tags, you can build an MI model – Predict from the question head word • MI(Question Head Word, Type Tag) = P(Type Tag | Question Head Word) / P(Type Tag) – From this you can judge the fit of a question/word pair – (Mann 2001)
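As a rough illustration, such a model can be estimated from counts over (head word, type tag) pairs produced by tagging the trivia answers. The tiny set of pairs below is invented for the example:

```python
from collections import Counter

# Toy (question head word, tagged answer type) pairs, standing in for the
# output of a type tagger run over trivia-question answers.
pairs = [("who", "PERSON"), ("who", "PERSON"), ("who", "ORGANIZATION"),
         ("when", "DATE"), ("year", "DATE"), ("who", "LOCATION")]

pair_counts = Counter(pairs)
head_counts = Counter(h for h, _ in pairs)
type_counts = Counter(t for _, t in pairs)
total = len(pairs)

def mi_score(head_word, type_tag):
    """P(type | head word) / P(type), as in the formula on the slide."""
    p_type_given_head = pair_counts[(head_word, type_tag)] / head_counts[head_word]
    p_type = type_counts[type_tag] / total
    return p_type_given_head / p_type

print(mi_score("who", "PERSON"))  # > 1 means 'who' prefers PERSON more than chance
```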

MaxEnt Model • Rather than just using the head word alone, train on the entire set of words, and build a Maximum Entropy model to combine features suggested by the entire phrase: “What was the year in which Castro was born?” (Ittycheriah et al. 2001)
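One common way to realize a maximum entropy classifier over all the words of the question is multinomial logistic regression on bag-of-words features. The sketch below uses scikit-learn with a toy training set invented for illustration; it is not the model or data of Ittycheriah et al.

```python
# MaxEnt-style answer-type classifier over all question words, approximated
# here with logistic regression on bag-of-words features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "Who shot Kennedy?",
    "Who rescued Chrysler from bankruptcy?",
    "When did Clark retire?",
    "What was the year in which Castro was born?",
    "Where is the Eiffel Tower?",
]
types = ["PERSON", "ORGANIZATION", "DATE", "DATE", "LOCATION"]

model = make_pipeline(CountVectorizer(lowercase=True), LogisticRegression(max_iter=1000))
model.fit(questions, types)

print(model.predict(["What year did the war end?"]))  # likely DATE, driven by the word 'year'
```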

Maybe you don’t even need training data! • Looking at occurrences of words in text, look at what types occur next to them • Use these co-occurrence statistics to determine the appropriate type of answer • (Prager et al. 2002)

Lecture Outline 1. Answer Preferences • Question Analysis • Type Identification • Learning Answer Typing 2. Answer Context Learning • Context Similarity • Alignment • Surface Text Patterns

Is Answer Typing Enough? • Even when you’ve found the correct sentence and know the type of the answer, a lot of ambiguity in the answer still remains • Some experiments show that a sentence which answers a question contains around 2-3 candidates of the appropriate type • For high-precision systems, this is unacceptable

Answer Context (diagram): Who shot Martin Luther King? — “Who” carries the answer preference; “shot Martin Luther King” provides the answer context

Using Context • Many systems simply look for an answer of the correct type in a context which seems appropriate – Many matching keywords – Perhaps using query expansion
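A minimal sketch of this kind of keyword-overlap scoring; the stopword list and example sentences are made up for illustration:

```python
# Score candidate sentences by how many (non-stopword) question keywords they
# contain; answer candidates of the preferred type are then taken from the
# best-scoring sentences.
STOPWORDS = {"how", "is", "the", "a", "an", "of", "who", "what", "when", "where", "did"}

def keyword_overlap(question, sentence):
    q = {w.lower().strip("?.,") for w in question.split()} - STOPWORDS
    s = {w.lower().strip("?.,") for w in sentence.split()}
    return len(q & s)

question = "How old is General Clark?"
sentences = [
    "General Clark turns 58 this December.",
    "The old store on Main Street closed.",
]
best = max(sentences, key=lambda s: keyword_overlap(question, s))
print(best)  # the Clark sentence wins: it shares 'general' and 'clark'
```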

Another alternative • If the question is “Who shot Kennedy” • Search for all exact phrase matches: “X shot Kennedy” • And simple alternations: “Kennedy was shot by X” • (Brill et al. 2001)
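In the spirit of this idea, though not the actual rewrite rules of Brill et al. (2001), one can turn the question into exact-phrase search strings whose missing slot X is the answer. The sketch below handles only simple "Who <verb> <object>?" questions:

```python
# Generate exact-phrase query rewrites for a simple "Who <verb> <object>?" question.
def phrase_rewrites(question):
    words = question.rstrip("?").split()
    if words and words[0].lower() == "who" and len(words) >= 3:
        verb, rest = words[1], " ".join(words[2:])
        return [
            f'"X {verb} {rest}"',         # active:  "X shot Kennedy"
            f'"{rest} was {verb} by X"',  # passive: "Kennedy was shot by X" (naive verb handling)
        ]
    return []

print(phrase_rewrites("Who shot Kennedy?"))
```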

Beyond… • The first step beyond simple keyword matching is to use relative position information • One way of doing this is to use alignment information

Lecture Outline 1. Answer Preferences • Question Analysis • Type Identification • Learning Answer Typing 2. Answer Context Learning • Context Similarity • Alignment • Surface Text Patterns

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Three Potential Candidates by type

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Matching Context Question Head word

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Anchor word

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Potential alignments

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. One Alignment. Three Alignment Features:

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. One Alignment. Three Alignment Features: 1. Dws: distance between the question head word and the anchor in the sentence (the value shown in the diagram is 2)

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Three Alignment Features: 2. Dwq: distance between the question head word and the anchor in the question (the value shown in the diagram is 1)

Local Alignment Who shot Kennedy? Jack assassinated Oswald, the man who shot Kennedy, and was Mrs. Ruby’s Husband. Three Alignment Features: 3. R: has the head word changed position? → here, the head word position flipped

Build a Statistical Model • Pr(answer | question, sentence) = Pr(Dws | answer, question, sentence) · Pr(Dwq | answer, question, sentence) · Pr(R | answer, question, sentence), and if unsure about the type preference, a term for it can be added as well
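One way to read these feature definitions as a scoring function is sketched below. Identifying the candidate's position in the sentence with the question head word, and the hand-set probability functions, are assumptions of this sketch, not the trained model from the lecture.

```python
# Score each candidate answer by the three alignment features (Dws, Dwq, R)
# and a product of made-up feature probabilities.

def alignment_features(q_words, s_words, head_idx, anchor_word, candidate):
    cand_idx = s_words.index(candidate)        # candidate stands in for the head word
    s_anchor = s_words.index(anchor_word)
    q_anchor = q_words.index(anchor_word)
    dws = abs(cand_idx - s_anchor)             # head-anchor distance in the sentence
    dwq = abs(head_idx - q_anchor)             # head-anchor distance in the question
    flipped = (cand_idx < s_anchor) != (head_idx < q_anchor)  # did the relative order flip?
    return dws, dwq, flipped

def p_distance(d):                             # hand-set: closer alignments preferred
    return 1.0 / (1 + d)

def p_flip(flipped):                           # hand-set: unflipped order preferred
    return 0.3 if flipped else 0.7

def score(q_words, s_words, head_idx, anchor_word, candidate):
    dws, dwq, flipped = alignment_features(q_words, s_words, head_idx, anchor_word, candidate)
    return p_distance(dws) * p_distance(dwq) * p_flip(flipped)

question = "Who shot Kennedy ?".split()
sentence = "Jack assassinated Oswald , the man who shot Kennedy , and was Mrs. Ruby 's husband .".split()
for cand in ["Jack", "Oswald", "Ruby"]:
    print(cand, round(score(question, sentence, 0, "shot", cand), 4))  # Oswald scores highest
```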

• In essence, this local alignment model gives a robust method for using the context of the question to pick out the correct answer from a given sentence containing an answer

Surface Text Patterns • Categorize the question by what kind of data it is looking for • Use templates to build specialized models • Use the resulting “surface text patterns” for searching

Birthday Templates: W. A. Mozart – 1756; I. Newton – 1642; M. Gandhi – 1869; V. S. Naipaul – 1932; Bill Gates – 1951

Web Search to Generate Patterns (diagram): web pages with “Mozart” and “1756” → sentences with “Mozart” and “1756” → substrings with “Mozart” and “1756”

How can we pick good patterns? • Frequent ones may be too general • Infrequent ones are not that useful • Want precise, specific ones • Use held-out templates to evaluate patterns
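Putting the last three slides together: learn name/year patterns from the seed templates, then keep the ones that stay precise on held-out templates. In the sketch below, the hard-coded sentences stand in for web search results, and the pattern generalization is deliberately simplistic.

```python
import re

train_pairs   = [("Mozart", "1756"), ("Newton", "1642")]
heldout_pairs = [("Gandhi", "1869"), ("Naipaul", "1932")]

# Stand-ins for sentences retrieved by web search for each pair.
sentences = [
    "Mozart (1756-1791) was a prolific composer.",
    "Mozart was born in 1756 in Salzburg.",
    "Newton (1642-1727) formulated the laws of motion.",
    "Gandhi (1869-1948) led the Indian independence movement.",
    "In 1932 Naipaul was born in Trinidad.",
]

def learn_patterns(pairs, sentences, max_gap=15):
    """Keep the short substring spanning NAME and YEAR, with both generalized."""
    patterns = set()
    for name, year in pairs:
        for s in sentences:
            if name in s and year in s:
                g = s.replace(name, "<NAME>").replace(year, "<YEAR>")
                m = re.search(r"<NAME>.{0,%d}<YEAR>" % max_gap, g)  # name-before-year only, for brevity
                if m:
                    patterns.add(m.group(0))
    return patterns

def heldout_precision(pattern, pairs, sentences):
    """When the pattern fires for a held-out NAME, how often is the YEAR right? (0.0 if it never fires)"""
    correct = fired = 0
    for name, year in pairs:
        regex = (re.escape(pattern)
                 .replace(re.escape("<NAME>"), re.escape(name))
                 .replace(re.escape("<YEAR>"), r"(\d{4})"))
        for s in sentences:
            m = re.search(regex, s)
            if m:
                fired += 1
                correct += (m.group(1) == year)
    return correct / fired if fired else 0.0

for p in learn_patterns(train_pairs, sentences):
    print(repr(p), heldout_precision(p, heldout_pairs, sentences))
```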