Introduction to Digital Libraries Searching

Presentation on theme: "Introduction to Digital Libraries Searching"— Presentation transcript:

1 Introduction to Digital Libraries Searching

2 Technical View: Retrieval as Matching Documents to Queries
[Diagram: a document space is sampled into surrogates (terms); queries arrive in alternative forms (Query Form A, Query Form B) as sample vectors; an algorithm matches the two. No user, no feedback.]
Retrieval is algorithmic. Evaluation is typically a binary decision for each pairwise match and one or more aggregate values for a set of matches (e.g., recall and precision).

3 Human View: Information-Seeking Process
[Diagram: a problem gives rise to perceived needs, which become queries posed through a physical interface against indexes and data, producing results that drive further actions.]
Information seeking is an active, iterative process controlled by a human who changes throughout the process. Evaluation is relative to human needs.

4 IR Models
User task: retrieval (ad hoc, filtering) and browsing.
Classic models: Boolean, vector, probabilistic.
Set theoretic: fuzzy, extended Boolean.
Algebraic: generalized vector, latent semantic indexing, neural networks.
Probabilistic: inference network, belief network.
Structured models: non-overlapping lists, proximal nodes.
Browsing: flat, structure guided, hypertext.

5 “Classic” Retrieval Models
Boolean: documents and queries are sets of index terms.
Vector: documents and queries are vectors in N-dimensional space.
Probabilistic: based on probability theory.

6 Boolean Searching
Exactly what you would expect: and, or, not operations are defined; it requires an exact match; it is based on an inverted file.
Example: (computer and science) and (not(animals)) would prevent a document with “use of computers in animal science research” from being retrieved.

7 Boolean ‘AND’
[Venn diagram: the circles for “Information” and “Retrieval” overlap; Information AND Retrieval is their intersection.]

8 Example
Draw a Venn diagram for: care and feeding and (cats or dogs)
What is the meaning of: information and retrieval and performance or evaluation

9 Exercise
D1 = “computer information retrieval”
D2 = “computer retrieval”
D3 = “information”
D4 = “computer information”
Q1 = “information ∧ retrieval”
Q2 = “information ∧ ¬computer”
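The exercise above can be answered mechanically by treating each document as a set of index terms. A minimal Python sketch (the variable names are my own, not from the slides):

```python
# Boolean retrieval: each document is modeled as the set of terms it contains.
docs = {
    "D1": {"computer", "information", "retrieval"},
    "D2": {"computer", "retrieval"},
    "D3": {"information"},
    "D4": {"computer", "information"},
}

# Q1: information AND retrieval
q1 = [d for d, terms in docs.items() if "information" in terms and "retrieval" in terms]

# Q2: information AND NOT computer
q2 = [d for d, terms in docs.items() if "information" in terms and "computer" not in terms]

print(q1)  # ['D1']
print(q2)  # ['D3']
```

Only D1 satisfies Q1, and only D3 satisfies Q2; Boolean matching is exact, with no ranking among the matches.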

10 Boolean-based Matching
Exact-match systems separate the documents containing a given term from those that do not.
Terms: Mediterranean, adventure, agriculture, cathedrals, horticulture, scholarships, disasters, bridge, leprosy, recipes, flags, tennis, Venus.
Queries: flags AND tennis; leprosy AND tennis; Venus OR (tennis AND flags); (bridge OR flags) AND tennis.
[The slide's term-document incidence table is not preserved in the transcript.]

11 Exercise
Documents 1 through 15 are indexed by the author terms Swift, Shakespeare, Milton, and Chaucer (the slide's incidence table showing which documents contain which terms is not preserved in the transcript). Evaluate:
((chaucer OR milton) AND (NOT swift)) OR ((NOT chaucer) AND (swift OR shakespeare))

12 Boolean Features
Order dependency (precedence) of operators: ( ), NOT, AND, OR (DIALOG); precedence may differ between systems.
Nesting of search terms: nutrition and (fast or junk) and food.

13 Boolean Limitations
Searches can become complex for the average user; too much ANDing can clobber recall.
Tricky syntax, frequently seen in NTRS logs: the following are all different:
“research AND NOT computer science”
“research AND NOT (computer science)” (implicit OR)
“research AND NOT (computer AND science)”

14 Vector Model
Also called the ‘vector space model’. Calculates a degree of similarity between document and query, and produces ranked output by sorting the similarity values.
Imagine your documents as N-dimensional vectors (where N = number of words). The “closeness” of two documents can be expressed as the cosine of the angle between their vectors.
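The cosine measure described above is straightforward to compute. A minimal sketch (a generic implementation, not code from the slides):

```python
import math

def cosine(a, b):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Vectors pointing the same way score 1.0 (up to float rounding);
# orthogonal vectors (no shared terms) score 0.0.
print(cosine([1, 2, 0], [2, 4, 0]))
print(cosine([1, 0], [0, 1]))
```

Because the cosine depends only on the angle, a long document and a short one with the same term proportions score identically, which is exactly the length-normalization the vector model wants.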

15 Vector Space Model
Documents and queries are points in N-dimensional space (where N is the number of unique index terms in the data collection).
[Figure: a query point Q and a document point D plotted in the term space.]

16 Vector Space Model with Term Weights
Assume document terms have different values for retrieval; therefore assign a weight to each term in each document.
Example: proportional to the frequency of the term in the document, and inversely proportional to the frequency of the term in the collection.

17 Graphic Representation
Example:
D1 = 2T1 + 3T2 + 5T3
D2 = 3T1 + 7T2 + T3
Q = 0T1 + 0T2 + 2T3
Is D1 or D2 more similar to Q? How do we measure the degree of similarity: distance, angle, or projection?
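Using the angle (cosine) as the measure, the example above can be worked through directly. A small Python sketch (the helper function is my own):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

D1 = (2, 3, 5)   # 2T1 + 3T2 + 5T3
D2 = (3, 7, 1)   # 3T1 + 7T2 + T3
Q  = (0, 0, 2)   # 0T1 + 0T2 + 2T3

print(round(cosine(D1, Q), 3))  # 0.811
print(round(cosine(D2, Q), 3))  # 0.130
```

D1 is far more similar to Q by angle: most of D1's weight sits on T3, the only term in the query, whereas D2's weight is concentrated on T2.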

18 Document and Query Vectors
Documents and queries are vectors of terms. Vectors can use binary keyword weights (0 or 1) or real-valued weights in [0, 1] (e.g., normalized term frequencies).
Example terms: “dog”, “cat”, “house”, “sink”, “road”, “car”
Binary: (1,1,0,0,0,0), (0,0,1,1,0,0)
Weighted: (0.01, 0.01, 0.002, 0.0, 0.0, 0.0)

19 Document Collection Representation
A collection of n documents can be represented in the vector space model by a term-document matrix. An entry in the matrix corresponds to the “weight” of a term in the document; zero means the term has no significance in the document, or simply does not occur in it.

     T1   T2   ...  Tt
D1   w11  w21  ...  wt1
D2   w12  w22  ...  wt2
:    :    :         :
Dn   w1n  w2n  ...  wtn

20 Inner Product: Example 1
[The worked example, a small term-document table over terms k1, k2, k3, is not preserved in the transcript.]

21 Vector Space Example
Indexed words: factors, information, help, human, operation, retrieval, systems.
Query: “human factors in information retrieval systems” → vector (1, 1, 0, 1, 0, 1, 1)
Record 1 contains human, factors, information, retrieval → vector (1, 1, 0, 1, 0, 1, 0)
Record 2 contains human, factors, help, systems → vector (1, 0, 1, 1, 0, 0, 1)
Record 3 contains factors, operation, systems → vector (1, 0, 0, 0, 1, 0, 1)
Simple (binary) match: Rec1 = 4, Rec2 = 3, Rec3 = 2.
Weighted match: Rec1 = 13, Rec2 = 8, Rec3 = 3 (the term weights used on the slide are not preserved in the transcript).
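The simple-match scores (Rec1 = 4, Rec2 = 3, Rec3 = 2) can be verified by building the binary vectors and taking inner products with the query. A Python sketch (variable names are my own):

```python
# Fixed term order for all vectors
terms = ["factors", "information", "help", "human", "operation", "retrieval", "systems"]

def vec(words):
    """Binary term vector: 1 if the term occurs in the record, else 0."""
    return [1 if t in words else 0 for t in terms]

query = vec({"human", "factors", "information", "retrieval", "systems"})
recs = {
    "Rec1": vec({"human", "factors", "information", "retrieval"}),
    "Rec2": vec({"human", "factors", "help", "systems"}),
    "Rec3": vec({"factors", "operation", "systems"}),
}

scores = {name: sum(q * d for q, d in zip(query, r)) for name, r in recs.items()}
print(scores)  # {'Rec1': 4, 'Rec2': 3, 'Rec3': 2}
```

On binary vectors the inner product simply counts shared terms, which is why it is called a simple match.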

22 Term Weights: Term Frequency
More frequent terms in a document are more important, i.e., more indicative of its topic.
f_ij = frequency of term i in document j
May want to normalize term frequency (tf) by the most frequent term in the document: tf_ij = f_ij / max_i{f_ij}
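The normalization tf_ij = f_ij / max_i{f_ij} divides each raw count by the count of the document's most frequent term, so tf values fall in (0, 1]. A short sketch (function name is my own):

```python
from collections import Counter

def normalized_tf(tokens):
    """tf_ij = f_ij / max_i{f_ij}: raw term count divided by the count
    of the most frequent term in the same document."""
    counts = Counter(tokens)
    peak = max(counts.values())
    return {term: count / peak for term, count in counts.items()}

doc = "golf golf golf delta alpha".split()
print(normalized_tf(doc))  # golf -> 1.0; delta and alpha -> 1/3
```

The most frequent term always gets tf = 1.0, which keeps long and short documents on a comparable scale.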

23 Some Formulas for Sim
With document vector D and query vector Q (weights w_di and w_qi for term i):
Dot product: sim(D, Q) = Σ_i w_di · w_qi
Cosine: sim(D, Q) = Σ_i w_di · w_qi / (√(Σ_i w_di²) · √(Σ_i w_qi²))
Dice: sim(D, Q) = 2 · Σ_i w_di · w_qi / (Σ_i w_di² + Σ_i w_qi²)
Jaccard: sim(D, Q) = Σ_i w_di · w_qi / (Σ_i w_di² + Σ_i w_qi² − Σ_i w_di · w_qi)
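The four similarity measures share the same building block, the inner product, and differ only in normalization. A sketch comparing them on one pair of binary vectors (the standard generalized forms of these coefficients, since the slide's own formulas were lost in the transcript):

```python
import math

def dot(d, q):
    return sum(x * y for x, y in zip(d, q))

def cosine(d, q):
    return dot(d, q) / (math.sqrt(dot(d, d)) * math.sqrt(dot(q, q)))

def dice(d, q):
    return 2 * dot(d, q) / (dot(d, d) + dot(q, q))

def jaccard(d, q):
    return dot(d, q) / (dot(d, d) + dot(q, q) - dot(d, q))

D, Q = (1, 1, 0), (1, 0, 1)  # one shared term out of two in each vector
print(dot(D, Q))                 # 1
print(round(dice(D, Q), 3))      # 0.5
print(round(jaccard(D, Q), 3))   # 0.333
```

Jaccard penalizes non-shared terms more heavily than Dice, which is why it scores lower here; the raw dot product applies no normalization at all.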

24 Example
Documents: Austen's Sense and Sensibility (SAS) and Pride and Prejudice (PAP); Brontë's Wuthering Heights (WH).
cos(SAS, PAP) = 0.999; cos(SAS, WH) = 0.929 (the term-by-term products on the slide are not preserved in the transcript).
SAS is much closer to PAP than to WH, matching the intuition that novels by the same author resemble each other.

25 Extended Boolean Model
The Boolean model is simple and elegant, but makes no provision for ranking.
As with the fuzzy model, a ranking can be obtained by relaxing the condition on set membership.
Extend the Boolean model with the notions of partial matching and term weighting; combine characteristics of the vector model with properties of Boolean algebra.

26 The Idea
q_or = k_x ∨ k_y, with w_xj = x and w_yj = y.
[Figure: document d_j plotted at (x, y) in the (k_x, k_y) plane; (1,1) is the ideal point for the OR query.]
For an OR query we want a document to be as far as possible from (0,0):
sim(q_or, d_j) = sqrt((x² + y²) / 2)
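The OR similarity above (the p = 2 case of the extended Boolean model) measures normalized distance from (0,0), so partial matches still earn a score. A minimal sketch:

```python
import math

def sim_or(x, y):
    """Extended Boolean OR similarity (p = 2): normalized distance of the
    document point (x, y) from the origin, so the score stays in [0, 1]."""
    return math.sqrt((x ** 2 + y ** 2) / 2)

print(round(sim_or(1, 1), 3))  # 1.0   both terms fully present
print(round(sim_or(1, 0), 3))  # 0.707 one term present still scores
print(round(sim_or(0, 0), 3))  # 0.0   neither term present
```

Unlike strict Boolean OR, a document containing only one of the two terms gets a nonzero, intermediate score, which is exactly the partial matching the model was designed to add.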

27 Fuzzy Set Model
Queries and docs are represented by sets of index terms: matching is approximate from the start.
This vagueness can be modeled using a fuzzy framework, as follows: with each term is associated a fuzzy set, and each doc has a degree of membership in this fuzzy set.
This interpretation provides the foundation for many models for IR based on fuzzy theory.

28 Probabilistic Model
Views retrieval as an attempt to answer a basic question: “What is the probability that this document is relevant to this query?”
Expressed as P(REL|D): the probability of relevance given a particular document D.

29 Probabilistic Model
An initial set of documents is retrieved somehow.
The user inspects these docs looking for the relevant ones (in truth, only the top-ranked ones need to be inspected).
The system uses this information to refine its description of the ideal answer set.
By repeating this process, it is expected that the description of the ideal answer set will improve.
Keep in mind the need to guess, at the very beginning, the description of the ideal answer set; this description is modeled in probabilistic terms.

30 Recombination after Dimensionality Reduction
[The slide's figure is not preserved in the transcript.]

31 Classic IR Models: Vector vs. Probabilistic
“Numerous experiments demonstrate that probabilistic retrieval procedures yield good results. However, the results have not been sufficiently better than those obtained using Boolean or vector techniques to convince system developers to move heavily in this direction.”

32 Example
Build the inverted file for the following documents:
F1 = {Written Quiz for Algorithms and Techniques of Information Retrieval}
F2 = {Program Quiz for Algorithms and Techniques of Web Search}
F3 = {Search on the Web for Information on Algorithms}
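An inverted file maps each term to the list of documents containing it. A minimal sketch for the three documents above (lowercasing everything and, for simplicity, keeping stop words):

```python
docs = {
    "F1": "Written Quiz for Algorithms and Techniques of Information Retrieval",
    "F2": "Program Quiz for Algorithms and Techniques of Web Search",
    "F3": "Search on the Web for Information on Algorithms",
}

# term -> sorted list of document IDs containing that term (the posting list)
inverted = {}
for doc_id, text in docs.items():
    for term in set(text.lower().split()):   # set(): record each doc once per term
        inverted.setdefault(term, []).append(doc_id)

print(sorted(inverted["algorithms"]))  # ['F1', 'F2', 'F3']
print(sorted(inverted["web"]))         # ['F2', 'F3']
```

Boolean queries then reduce to set operations on posting lists, e.g. AND is their intersection, which is what makes exact-match retrieval fast.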

33 Example
You have a collection of documents that contain the following index terms:
D1: alpha bravo charlie delta echo foxtrot golf
D2: golf golf golf delta alpha
D3: bravo charlie bravo echo foxtrot bravo
D4: foxtrot alpha alpha golf golf delta
Use a frequency matrix of terms to calculate a similarity matrix for these documents, with weights proportional to the term frequency and inversely proportional to the document frequency.
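One way to work this exercise: build the term-frequency matrix, apply tf-idf weights, and fill the similarity matrix with pairwise cosines. The slide does not fix a formula, so the sketch below assumes the common variant weight = tf × log(N / df):

```python
import math
from collections import Counter

docs = {
    "D1": "alpha bravo charlie delta echo foxtrot golf".split(),
    "D2": "golf golf golf delta alpha".split(),
    "D3": "bravo charlie bravo echo foxtrot bravo".split(),
    "D4": "foxtrot alpha alpha golf golf delta".split(),
}

vocab = sorted({t for tokens in docs.values() for t in tokens})
n = len(docs)
df = {t: sum(1 for tokens in docs.values() if t in tokens) for t in vocab}

def tfidf(tokens):
    """Assumed weighting: raw tf times log(N / df)."""
    tf = Counter(tokens)
    return [tf[t] * math.log(n / df[t]) for t in vocab]

vecs = {name: tfidf(tokens) for name, tokens in docs.items()}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

for a in docs:  # print the 4x4 similarity matrix, one row per document
    print(a, [round(cosine(vecs[a], vecs[b]), 2) for b in docs])
```

The diagonal is 1.0 and the matrix is symmetric; D2 and D4 come out most similar, since both are dominated by golf, alpha, and delta.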

34 Exercise
[Term-document matrix over documents c1..c5 and m1..m4, with terms: human, interface, computer, user, system, response, time, EPS, survey, trees, graph, minors; the matrix entries are not preserved in the transcript.]
Give the scores of the 9 documents for the query “trees, minors” using Boolean search.
Give the scores of the 9 documents for the query “trees, minors” using the vector model.

