Presentation on theme: "Other IR Models Non-Overlapping Lists Proximal Nodes Structured Models Retrieval: Adhoc Filtering Browsing U s e r T a s k Classic Models boolean vector."— Presentation transcript:

1 Other IR Models A taxonomy of IR models:
User Task: Retrieval (Ad hoc, Filtering), Browsing.
Classic Models: boolean, vector, probabilistic.
Set Theoretic: Fuzzy, Extended Boolean.
Algebraic: Generalized Vector, Latent Semantic Indexing, Neural Networks.
Probabilistic: Inference Network, Belief Network.
Structured Models: Non-Overlapping Lists, Proximal Nodes.
Browsing: Flat, Structure Guided, Hypertext.

2 Another Vector Model: Motivation 1. Index terms have synonyms. [Use thesauri?] 2. Index terms have multiple meanings (polysemy). [Use restricted vocabularies or more precise queries?] 3. Index terms are not independent; think “phrases”. [Use combinations of terms?]

3 Latent Semantic Indexing/Analysis Basic Idea: Keywords in a query are just one way of specifying the information need. One really wants to specify the key concepts rather than words. Assume a latent semantic structure underlying the term-document data that is partially obscured by exact word choice.

4 LSI In Brief Map terms into a lower-dimensional space (via SVD) to remove "noise" and force clustering of similar words. Pre-process the corpus to create the reduced vector space; then match queries to docs in the reduced space.

5 SVD for Term-Doc Matrix Decompose the t x d term-document matrix X as X = T0 S0 D0^T, where T0 is t x m, S0 is the m x m diagonal matrix of singular values, and D0^T is m x d. Here m is the rank of X (<= min(t, d)), T0 is the orthonormal matrix of eigenvectors of the term-term correlation matrix X X^T, and D0 is the orthonormal matrix of eigenvectors of the doc-doc correlation matrix X^T X.

6 Reducing Dimensionality Order the singular values in S0 by size, keep the k largest, and delete the other rows/columns of S0, T0 and D0. The resulting approximate model is the rank-k model with the best possible least-squares fit to X. Pick k large enough to capture the real structure, but small enough to eliminate noise – usually ~100-300.
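The decomposition and truncation on slides 5-6 can be sketched in a few lines of numpy. The 5x4 matrix X and the choice k = 2 are toy assumptions for illustration, not values from the slides:

```python
# Rank-k LSI approximation of a toy term-document matrix X (slides 5-6).
import numpy as np

# Toy 5-term x 4-document count matrix X (illustrative values).
X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)

# Economy SVD: X = T0 @ diag(S0) @ D0t, with T0 and D0t orthonormal
# and S0 the singular values in decreasing order.
T0, S0, D0t = np.linalg.svd(X, full_matrices=False)

# Keep only the k largest singular values and the matching rows/columns.
k = 2
Tk, Sk, Dkt = T0[:, :k], S0[:k], D0t[:k, :]

# X_hat is the best rank-k least-squares approximation of X.
X_hat = Tk @ np.diag(Sk) @ Dkt
err = np.linalg.norm(X - X_hat)   # Frobenius reconstruction error
print(X_hat.shape, round(err, 3))
```

Any rank-k matrix other than X_hat has a larger Frobenius error, which is why truncating the SVD is the natural way to "delete" the small singular values.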

7 Computing Similarities in LSI How similar are two terms? The dot product between the corresponding two row vectors of the reduced matrix X_k. How similar are two documents? The dot product between the corresponding two column vectors of X_k. How similar are a term and a document? The value of the corresponding individual cell of X_k.

8 Query Retrieval As before, treat the query as a short document: make it column 0 of the term-document matrix C. The first row of the resulting document-document similarity matrix then provides the rank of the docs with respect to the query.
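Instead of literally appending the query as column 0, a common equivalent formulation "folds" the query into the reduced space as q_hat = inv(Sk) Tk^T q and ranks documents by cosine similarity there. The matrix X and query vector below are toy assumptions:

```python
# Folding a query into the reduced LSI space and ranking docs (slide 8).
import numpy as np

X = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)   # toy 5 terms x 4 docs

T0, S0, D0t = np.linalg.svd(X, full_matrices=False)
k = 2
Tk, Sk, Dk = T0[:, :k], np.diag(S0[:k]), D0t[:k, :].T   # Dk: 4 docs x k

q = np.array([1, 0, 0, 0, 1], dtype=float)  # query uses terms 0 and 4
q_hat = np.linalg.inv(Sk) @ Tk.T @ q        # project query into k-space

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = np.array([cos(q_hat, Dk[j]) for j in range(Dk.shape[0])])
ranking = np.argsort(-scores)               # best-matching docs first
print(ranking, np.round(scores, 3))
```

Dividing by the singular values when projecting the query is what makes q_hat comparable to the document rows of Dk.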

9 LSI Issues Requires access to the corpus to compute the SVD. How to compute it efficiently for the Web? What is the right value of k? Can LSI be used for cross-language retrieval? The size of the corpus is limited: "one student's reading through high school" (Landauer 2002).

10 Another Vector Model: Neural Network Basic idea: a 3-layer neural net with layers for query terms, document terms, and documents. Signal propagation is based on the classic similarity computation. Tune the weights.

11 Neural Network Diagram from Wilkinson and Hingston, SIGIR 1991. [Figure: three layers of nodes – query terms k_a, k_b, k_c; document terms k_1 ... k_t; documents d_1 ... d_N – with edges from query-term nodes to document-term nodes and from document-term nodes to document nodes.]

12 Computing Document Rank Weight from query term to document term: W_iq = w_iq / sqrt(sum_i w_iq^2). Weight from document term to document: W_ij = w_ij / sqrt(sum_i w_ij^2).
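A minimal sketch of one propagation pass through the three layers, assuming toy term weights w_iq and w_ij (not values from the slides). Normalizing each weight vector by its norm makes the first pass reproduce the classic cosine similarity:

```python
# Three-layer propagation sketch for the neural-net model (slides 10-12).
import math

w_q = [1.0, 0.0, 2.0]             # query-term weights w_iq (toy values)
docs = [[1.0, 1.0, 0.0],          # w_ij for document d1
        [0.0, 2.0, 1.0],          # d2
        [2.0, 0.0, 1.0]]          # d3

def normalize(v):
    # W_i = w_i / sqrt(sum_i w_i^2)
    n = math.sqrt(sum(x * x for x in v))
    return [x / n if n else 0.0 for x in v]

Wq = normalize(w_q)
ranks = []
for w_d in docs:
    Wd = normalize(w_d)
    # one pass: query terms -> document terms -> document node
    ranks.append(sum(a * b for a, b in zip(Wq, Wd)))
print([round(r, 3) for r in ranks])   # -> [0.316, 0.4, 0.8]
```

Further iterations of signal propagation (documents activating their terms, which activate other documents) are what distinguish the model from a plain vector-space ranking.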

13 Probabilistic Models Principle: Given a user query q and a document d in the collection, estimate the probability that the user will find d relevant. (How?) User rates a retrieved subset. System uses rating to refine the subset. Over time, retrieved subset should converge on relevant set.

14 Computing Similarity I sim(dj, q) = P(R | dj) / P(not-R | dj), where P(R | dj) is the probability that document dj is relevant to query q and P(not-R | dj) the probability that dj is non-relevant to q. By Bayes' rule, sim(dj, q) ~ P(dj | R) P(R) / (P(dj | not-R) P(not-R)), where P(dj | R) is the probability of randomly selecting dj from the relevant set R and P(R) is the probability that a randomly selected document is relevant.

15 Computing Similarity II Let P(ki | R) be the probability that index term ki is present in a document randomly selected from R. Assuming independence of index terms, P(dj | R) is the product of P(ki | R) over the terms present in dj times the product of (1 - P(ki | R)) over the terms absent from dj (and similarly for not-R).

16 Initializing Probabilities Assume constant probabilities for index terms: P(ki | R) = 0.5. Assume the distribution of index terms in non-relevant documents matches the overall distribution: P(ki | not-R) = ni / N, where ni is the number of documents containing ki and N is the collection size.

17 Improving Probabilities Assumptions: approximate the probability given relevant as the fraction of the V docs retrieved so far that contain index term ki: P(ki | R) = Vi / V. Approximate the probabilities given non-relevant by assuming the docs not retrieved are non-relevant: P(ki | not-R) = (ni - Vi) / (N - V).
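The estimation steps on slides 16-17 can be sketched as follows. The collection statistics (N, ni, V, Vi) are toy assumptions; the term-weight formula is the classic log-odds contribution of a matching term to sim(dj, q):

```python
# Probabilistic-model estimation sketch (slides 14-17): initialize term
# probabilities, then refine them from a retrieved subset V.
import math

N = 1000                              # toy collection size
n = {"ir": 50, "models": 200}         # n_i: docs containing term i
V = 20                                # docs retrieved so far
Vi = {"ir": 15, "models": 8}          # retrieved docs containing term i

def term_weight(p_rel, p_nonrel):
    # log-odds contribution of a matching index term to sim(dj, q)
    return (math.log(p_rel / (1 - p_rel))
            + math.log((1 - p_nonrel) / p_nonrel))

# Initial estimates: P(ki|R) = 0.5, P(ki|not-R) = n_i / N.
init = {t: term_weight(0.5, n[t] / N) for t in n}

# After feedback: P(ki|R) = V_i / V, P(ki|not-R) = (n_i - V_i) / (N - V).
refined = {t: term_weight(Vi[t] / V, (n[t] - Vi[t]) / (N - V)) for t in n}

for t in n:
    print(t, round(init[t], 2), round(refined[t], 2))
```

In this toy run "ir" occurs in most retrieved docs, so its weight grows after feedback, while "models" occurs in few of them and loses weight – exactly the convergence behaviour slide 13 describes.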

18 Classic Probabilistic Model Summary Pros: ranking based on assessed probability can be approximated without user intervention Cons: really need user to determine set V ignores term frequency assumes independence of terms

19 Probabilistic Alternative: Bayesian (Belief) Networks A graphical structure representing the dependence between variables, in which: 1. each node is a random variable; 2. nodes are connected by a set of directed links; 3. each node has a conditional probability table indicating its relationship with its parents; 4. the graph is directed and acyclic.

20 Belief Network Example (from Russell & Norvig) Nodes: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls. Priors: P(B) = .001, P(E) = .002. Alarm CPT, P(A | B, E): B=T, E=T: .95; B=T, E=F: .94; B=F, E=T: .29; B=F, E=F: .001. P(J | A): A=T: .90; A=F: .05. P(M | A): A=T: .70; A=F: .01.

21 Belief Network Example (cont.) Same network and CPTs as the previous slide. Probability of a false notification – the alarm sounded and both people called, but there was no burglary or earthquake: P(J, M, A, not-B, not-E) = P(J | A) P(M | A) P(A | not-B, not-E) P(not-B) P(not-E) = .90 x .70 x .001 x .999 x .998 ≈ .00062.
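The false-notification query factors along the network's edges, so it is a straight product of CPT entries from the slide:

```python
# Joint probability for the false-notification query (slides 20-21):
# P(J, M, A, ~B, ~E) = P(J|A) P(M|A) P(A|~B,~E) P(~B) P(~E).
p_b, p_e = 0.001, 0.002
p_a_given = {(True, True): 0.95, (True, False): 0.94,
             (False, True): 0.29, (False, False): 0.001}
p_j_given_a = {True: 0.90, False: 0.05}
p_m_given_a = {True: 0.70, False: 0.01}

p = (p_j_given_a[True] * p_m_given_a[True]
     * p_a_given[(False, False)] * (1 - p_b) * (1 - p_e))
print(round(p, 6))   # ~0.00062: false notifications are very unlikely
```

The factorization into per-node conditional probabilities, rather than one huge joint table, is the whole point of the directed-acyclic-graph structure on slide 19.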

22 Inference Networks for IR Random variables are associated with documents, index terms and queries. Edges from a document node to its term nodes increase belief in those terms.

23 Computing Rank in Inference Networks for IR q is the keyword query, q1 is a Boolean query, and I is the information need. The rank of document dj is computed as P(q ∧ dj).

24 Where do the probabilities come from? (Boolean Model) Uniform priors on documents; only the terms in a document are active; the query is matched to keywords à la the Boolean model.

25 Belief Network Formulation Uses a different network topology, does not consider each document individually, and adopts a set-theoretic view.

