
1 Relevance Models and Answer Granularity for Question Answering W. Bruce Croft and James Allan CIIR University of Massachusetts, Amherst

2 Issues
- Formal basis for QA
- Question modeling
- Answer granularity
- Semi-structured data
- Answer updating

3 Formal basis for QA
- Problem: QA systems tend to be ad hoc combinations of NLP, IR, and other techniques
  - performance can be fragile
  - requires considerable knowledge engineering
  - difficult to transition between question and answer types: some questions can be answered with “facts”, others require more
- The process of determining the probability of a correct answer is very similar to probabilistic IR models
  - i.e. satisfying the information need: better answers → better IR

4 QA Assumptions?
- IR first, then QA
- Single answer vs. ranked list
- Answer is transformed text vs. collection of source texts
  - who does the inference?
  - analyst’s current environment
- Questions are questions vs. “queries”
  - experience at WESTLAW
  - spoken questions?
- Answers are text extracts vs. people

5 Basic Approach
- Use language models as the basis for a QA system
  - build on recent success in IR
  - test limits in QA
- Try to capture important aspects of QA
  - question types
  - answer granularity
  - multiple answers
  - feedback
- Develop learning methodologies for QA
- Currently more Qs than As

6 LM for QA (QALM)
- View question text as being generated by a mixture of a relevance model and a question model (formalized below)
  - the relevance model is related to the topic of the question
  - the question model is related to the form of the question
- Question models are associated with answer models
- Answers are document text generated by a mixture of a relevance model and an answer model
- TREC-style QA corresponds to a specific set of question and answer models
- Default question and answer models lead to the usual IR process
  - e.g. “what are the causes of asthma?”
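A minimal formalization of the mixture view above, assuming unigram models and interpolation weights that the slide does not specify:

```latex
% Assumed notation: R is the relevance model, Q_c the question model for
% category c, A_c the associated answer model; \lambda and \mu are assumed
% interpolation weights (not given on the slide).
P(q_1,\dots,q_n) = \prod_{i=1}^{n}\big[\lambda\,P(q_i \mid R) + (1-\lambda)\,P(q_i \mid Q_c)\big]
P(a_1,\dots,a_m) = \prod_{j=1}^{m}\big[\mu\,P(a_j \mid R) + (1-\mu)\,P(a_j \mid A_c)\big]
```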

7 Basic Approach
[Diagram: question text is used to estimate the question model and the relevance model; the relevance model and answer model generate answer texts. Estimate the language models, then rank answers by probability.]

8 Relevance Models: idea
- For every topic, there is an underlying Relevance Model R
- Queries and relevant documents are samples from R
- Simple case: R is a distribution over the vocabulary (unigram)
  - estimation would be straightforward if we had examples of relevant docs
  - use the fact that the query is a sample from R to estimate parameters
  - use word co-occurrence statistics
[Diagram: the Relevance Model generating both the Query and the Relevant Documents]

9 Relevance Model: Estimation with no training data
[Diagram: query words q1 = “israeli”, q2 = “palestinian”, q3 = “raids” are samples from the relevance model R; the task is to estimate the probability of an unseen word w under R.]
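The estimate sketched here is, from context, the relevance-model construction of Lavrenko and Croft: treat the question words as a sample from R and estimate the probability of any other word w by marginalizing over document models M:

```latex
P(w \mid R) \approx P(w \mid q_1,\dots,q_k) = \frac{P(w, q_1,\dots,q_k)}{P(q_1,\dots,q_k)}
% with the joint probability estimated over a set of document models M:
P(w, q_1,\dots,q_k) = \sum_{M} P(M)\,P(w \mid M) \prod_{i=1}^{k} P(q_i \mid M)
```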

10 Relevance Model: Putting things together
[Diagram: sampling probabilities for “israeli”, “palestinian”, “raids”, and the unknown word w are combined across multiple document models M to estimate the Relevance Model.]
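A minimal Python sketch of this estimate over a set of top-ranked documents; the function names, the uniform prior P(M), and the Dirichlet smoothing constant are illustrative assumptions, not details from the slides:

```python
from collections import Counter, defaultdict

MU = 2000.0  # Dirichlet smoothing parameter (assumed value)

def doc_lm(doc, collection_probs):
    """P(w|M): Dirichlet-smoothed unigram model for one document (token list)."""
    counts, n = Counter(doc), len(doc)
    return lambda w: (counts[w] + MU * collection_probs.get(w, 1e-9)) / (n + MU)

def relevance_model(query, docs, collection_probs):
    """Estimate P(w|R) = sum_M P(M) P(w|M) prod_i P(q_i|M), with uniform P(M)."""
    p_wr = defaultdict(float)
    for doc in docs:
        p_w_given_m = doc_lm(doc, collection_probs)
        q_lik = 1.0
        for q in query:  # query likelihood under this document model
            q_lik *= p_w_given_m(q)
        for w in set(doc):
            p_wr[w] += q_lik * p_w_given_m(w)
    total = sum(p_wr.values())
    return {w: p / total for w, p in p_wr.items()} if total else {}
```

Here `docs` would be the top-ranked documents retrieved for the query, and `collection_probs` the background unigram distribution over the collection.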

11 Retrieval using Relevance Models
- Probability Ranking Principle:
  - P(w|R) is estimated by P(w|Q) (the Relevance Model)
  - P(w|N) is estimated by collection probabilities P(w)
- Document-likelihood vs. query-likelihood approach
  - query expansion vs. document smoothing
  - can be applied to multiple granularities (sketch below)
  - i.e. sentence, passage, document, multi-document
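One way to realize the document-likelihood view at any granularity is to score each candidate text by its log-likelihood ratio under the relevance model versus the collection model; a sketch under the same assumed names as above:

```python
import math
from collections import Counter

def rank_by_relevance_model(texts, p_wr, collection_probs):
    """Rank candidate texts (sentence, passage, document, ...) by
    sum over words of tf * [log P(w|R) - log P(w|N)]."""
    scored = []
    for text in texts:  # each text is a token list
        score = sum(
            tf * (math.log(p_wr.get(w, 1e-12))
                  - math.log(collection_probs.get(w, 1e-12)))
            for w, tf in Counter(text).items()
        )
        scored.append((score, text))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```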

12 Question Models
- Language modeling (or other techniques) can be used to classify questions based on predefined categories (sketch below)
  - the best features for each category need to be determined
- What is the best process for estimating the relevance and question models?
- How are new question models trained?
  - how many question models are enough?
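A minimal sketch of question classification with per-category unigram language models; the training-data format, category labels, add-one smoothing, and uniform prior are assumptions, not details from the slide:

```python
import math
from collections import Counter

def train_question_models(labeled_questions):
    """Build a unigram LM per category from (tokens, category) pairs."""
    counts, totals, vocab = {}, Counter(), set()
    for tokens, cat in labeled_questions:
        counts.setdefault(cat, Counter()).update(tokens)
        totals[cat] += len(tokens)
        vocab.update(tokens)
    return {cat: (c, totals[cat], len(vocab)) for cat, c in counts.items()}

def classify_question(tokens, models):
    """Pick the category whose (add-one smoothed) LM gives the question
    the highest likelihood; uniform prior over categories."""
    def log_lik(cat):
        c, n, V = models[cat]
        return sum(math.log((c[w] + 1) / (n + V)) for w in tokens)
    return max(models, key=log_lik)
```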

13 Answer Models and Granularity
- Answer models are associated with question models
  - the best features need to be determined
- What is the best process for estimating the probability of answer texts given the answer and relevance models?
- How is answer granularity modeled?
  - learn a default granularity for each question category and back off to other granularities (sketch below)
  - is there an optimal granularity for a particular question/database?
- How are answer models trained?
  - a relevance feedback approach is a possibility
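One plausible reading of the learn-and-backoff idea; the category names, granularity orders, and threshold below are invented for illustration:

```python
# Illustrative defaults: a learned granularity order per question category.
GRANULARITY_BACKOFF = {
    "factoid":    ["sentence", "passage"],
    "definition": ["sentence", "passage", "document"],
    "biography":  ["passage", "document", "multi-document"],
}

def answer_with_backoff(category, score_fn, threshold=0.5):
    """Try granularities in the learned order and return the first answer
    whose score clears the (assumed) threshold."""
    for granularity in GRANULARITY_BACKOFF.get(category, ["passage"]):
        answer, score = score_fn(granularity)
        if score >= threshold:
            return answer, granularity
    return None, None
```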

14 Semi-Structured Data
- How can structure and metadata indicated by markup be used to improve QA effectiveness?
  - examples: tables, passages in Web pages
- Answer models will include markup and metadata features
  - could be viewed as an indexing problem: what is an answer passage? contiguous text is too limited
  - construct “virtual documents” (sketch below)
- Current demo: Quasm
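One plausible way to construct such “virtual documents” from a table, pairing each cell with its caption, row label, and column header so a non-contiguous answer becomes an indexable text unit; the function and its output format are assumptions:

```python
def virtual_documents_from_table(caption, headers, rows):
    """Turn each table cell into an indexable 'virtual document' carrying
    its caption, row label, and column header as context."""
    docs = []
    for row in rows:
        row_label = row[0]
        for header, cell in zip(headers[1:], row[1:]):
            docs.append(f"{caption}. {row_label} {header}: {cell}")
    return docs

# virtual_documents_from_table("City populations", ["City", "Population"],
#                              [["Amherst", "35000"]])
# -> ["City populations. Amherst Population: 35000"]
```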

15 Answer Updating
- Answers can change or evolve over time
- Dealing with multiple answers and with answers arriving in streams of information are similar problems
  - novelty detection in the TDT task (sketch below)
- Evidence combination and answer granularity will be important components of a solution
- Filtering thresholds are also important
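A minimal novelty-detection sketch in the TDT spirit: an arriving answer is kept only if its maximum similarity to previously accepted answers falls below a filtering threshold; the cosine measure and the threshold value are assumptions:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def update_answers(seen_answers, new_answer, novelty_threshold=0.6):
    """Accept the new answer only if it is novel relative to all answers
    seen so far (threshold value is assumed)."""
    if all(cosine(new_answer, old) < novelty_threshold for old in seen_answers):
        seen_answers.append(new_answer)
        return True
    return False
```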

16 Evaluation
- The CMU/UMass LEMUR toolkit will be used for development
- TREC QA data will be used for initial tests of the general approach
- Web-based data will be used for evaluation of semi-structured data
- A modified TDT environment will be used for answer updating experiments
- Others…

