Presentation on theme: "CSE3201/4500 Information Retrieval Systems"— Presentation transcript:
1 CSE3201/4500 Information Retrieval Systems: Term Weighting
2 Weighting Terms
Having decided on a set of terms for indexing, we need to consider whether all terms should be given the same significance. If not, how should we decide on their significance?
3 Weighting Terms - tf
Let tf_ij be the term frequency of term i in document j. The more often a term appears in a document, the more likely it is to be a highly significant index term.
4 Weighting Terms - df & idf
Let df_i be the document frequency of the i-th term, i.e. the number of documents in which term i appears. Since significance increases as document frequency decreases, we use the inverse document frequency:
idf_i = log_e(N / df_i)
where N is the number of documents in the database and log_e is the natural logarithm (ln on a calculator).
5 Weighting Terms - tf.idf
The above two indicators are very often multiplied together to form the "tf.idf" weight:
w_ij = tf_ij × idf_i
or, as is now more popular:
w_ij = log_e(1 + tf_ij) × (1 + idf_i)
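As a minimal sketch, both weighting formulas can be written as small Python functions (the function names are illustrative; the natural log is used, as on the slide):

```python
import math

def tf_idf(tf, df, N):
    """Plain tf.idf weight: term frequency times inverse document frequency."""
    return tf * math.log(N / df)              # idf_i = log_e(N / df_i)

def tf_idf_log(tf, df, N):
    """The dampened variant from the slide: log_e(1 + tf) * (1 + idf)."""
    return math.log(1 + tf) * (1 + math.log(N / df))
```

For example, a term with tf = 2 and df = 2 in a collection of N = 5 documents gets a plain tf.idf weight of 2 × ln(2.5) ≈ 1.83 (the later slides round idf to 0.91 first, giving 1.82).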
6 Example
Consider a 5-document collection:
D1 = "Dogs eat the same things that cats eat"
D2 = "No dog is a mouse"
D3 = "Mice eat little things"
D4 = "Cats often play with rats and mice"
D5 = "Cats often play, but not with other cats"
7 Example - Cont.
We might generate the following index sets:
V1 = (dog, eat, cat)
V2 = (dog, mouse)
V3 = (mouse, eat)
V4 = (cat, play, rat, mouse)
V5 = (cat, play)
System dictionary: (cat, dog, eat, mouse, play, rat)
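A short sketch of deriving the system dictionary and each term's document frequency from these index sets (Python; the variable names are illustrative):

```python
index_sets = {
    "V1": {"dog", "eat", "cat"},
    "V2": {"dog", "mouse"},
    "V3": {"mouse", "eat"},
    "V4": {"cat", "play", "rat", "mouse"},
    "V5": {"cat", "play"},
}

# System dictionary: the union of all index sets, sorted for display
dictionary = sorted(set().union(*index_sets.values()))

# Document frequency df_i: how many index sets contain term i
df = {t: sum(t in v for v in index_sets.values()) for t in dictionary}
```

This reproduces the dictionary (cat, dog, eat, mouse, play, rat), with e.g. df = 3 for "cat" and df = 1 for "rat".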
13 A Larger Example
Doc 1: The problem of how to describe documents for retrieval is called indexing.
Doc 2: It is possible to use a document as its own index.
Doc 3: The problem is that a document will exactly match only one query, namely the document itself.
Doc 4: The purpose of indexing then is to provide a description of a document so that it can be retrieved with queries that concern the same subject as the document.
Doc 5: It must be a sufficiently specific description so that the document will not be returned for queries unrelated to the document.
14 A Larger Example
Doc 6: A simple way of indexing a document is to give a single code from a predefined set.
Doc 7: We have the task of describing how we are going to match queries against documents.
Doc 8: The vector space model creates a space in which both documents and queries are represented by vectors.
Doc 9: A vector is obtained for each document and query from sets of index terms with associated weights.
Doc 10: In order to compare the similarity of these vectors, we may measure the angle between them.
15 A Larger Example
If we index these documents using all words not on a stop list, we might obtain:
D1 - problem, describe, documents, retrieval, called, indexing
D2 - possible, document, own, index
D3 - problem, document (*), exactly, match, one, query, namely
D4 - purpose, indexing, provide, description, document (*), retrieved, queries, concern, subject
D5 - sufficiently, specific, description, document (*), returned, queries, unrelated
16 A Larger Example
If we index these documents using all words not on a stop list, we might obtain:
D6 - simple, way, indexing, document, give, single, code, predefined, list
D7 - task, describing, going, match, queries, against, documents
D8 - vector (*), space (*), model, creates, documents, queries, represented
D9 - vector, obtained, document, query, sets, index, terms, associated, weights
D10 - order, compare, similarity, vectors, measure, angle
17 A Larger Example
We may now choose to stem the terms, which may leave us with:
D1 - problem, describ, docu, retriev, call, index
D2 - possibl, docu, own, index
D3 - problem, docu (*), exact, match, on, quer, name
D4 - purpos, index, provid, descript, docu (*), retriev, quer, concern, subject
D5 - suffic, specif, descript, docu (*), return, quer, unrelat
18 A Larger Example
We may now choose to stem the terms, which may leave us with:
D6 - simpl, way, index, docu, giv, singl, cod, predefin, list
D7 - task, describ, go, match, quer, against, docu
D8 - vect (*), spac (*), model, creat, docu, quer, represent
D9 - vect, obtain, docu, quer, set, index, terms, associat, weight
D10 - order, compar, similarit, vect, measur, angle
20 A Larger Example
We can now calculate the weights of the terms of one of the documents. For document 8, using the tf.idf formula, we give the terms the following weights:
vect (2.41), spac (4.60), model (2.30), creat (2.30), docu (0.22), quer (0.51), represent (2.30)
21 CSE3201/4500 Information Retrieval Systems: Retrieval Model
23 Retrieval Paradigms
How do we match?
- Produce non-ranked output: Boolean retrieval
- Produce ranked output: vector space model, probabilistic retrieval
24 Advantages of Ranking
- Good control over how many documents are viewed by a user.
- Good control over the order in which documents are viewed by a user.
- The first documents viewed may help modify the order in which later documents are viewed.
The main disadvantage is computational cost.
25 Boolean Retrieval
A query is a set of terms combined by the Boolean connectives "and", "or" and "not", e.g.
FIND (document OR information) AND retrieval AND (NOT (information AND systems))
Each document is matched against this query and either matches (TRUE) or does not (FALSE).
26 Systems Provide
Most systems provide match information, such as:
FIND (document OR information)
  1,000 records found
FIND (document OR information) AND retrieval
  40 records found
FIND (document OR information) AND retrieval AND (NOT (information AND systems))
  10 records found
SHOW
27 An Example
Consider the following document collection:
D1 = "Dogs eat the same things that cats eat"
D2 = "No dog is a mouse"
D3 = "Mice eat little things"
D4 = "Cats often play with rats and mice"
D5 = "Cats often play, but not with other cats"
indexed by:
D1 = dog, eat, cat
D2 = dog, mouse
D3 = mouse, eat
D4 = cat, play, rat, mouse
D5 = cat, play
28 An Example
The Boolean query (cat AND dog) returns D1.
The query (cat OR (dog AND eat)) returns D1, D4, D5.
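Boolean retrieval over these index sets reduces to set operations, as this sketch shows (Python; the index sets are copied from the slide and the helper name is illustrative):

```python
docs = {
    "D1": {"dog", "eat", "cat"},
    "D2": {"dog", "mouse"},
    "D3": {"mouse", "eat"},
    "D4": {"cat", "play", "rat", "mouse"},
    "D5": {"cat", "play"},
}

def having(term):
    """Set of document ids whose index set contains the term."""
    return {d for d, terms in docs.items() if term in terms}

# AND maps to set intersection, OR to set union
cat_and_dog = having("cat") & having("dog")                  # {'D1'}
cat_or_dog_eat = having("cat") | (having("dog") & having("eat"))  # {'D1','D4','D5'}
```

NOT would map to set difference against the whole collection, e.g. set(docs) - having("mouse").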
29 Problem with Boolean
No ranking:
- users must fuss with retrieved-set size and structural reformulation
- users must scan the entire retrieved set
No weights on query terms:
- users cannot give more importance to some terms, e.g. retrieval:2 AND system:1
- users cannot give more importance to some clauses, e.g. retrieval:1 AND (system OR model):2
30 Problem with Boolean
No weights on document terms:
- no use can be made of the importance of a term in a document (i.e. that it occurs frequently)
- no use can be made of the importance of a term in the collection (i.e. that it occurs rarely)
31 Any Good News for Boolean?
Yes. Advantages:
- conceptually simple
- computationally inexpensive
- commercially available
32 Introduction to Vectors
For vectors A = (a1, a2, a3, ..., an) and B = (b1, b2, b3, ..., bn):
A.B = |A||B| cos θ, where θ is the angle between A and B
A.B = a1b1 + a2b2 + a3b3 + ... + anbn
The magnitude of a vector A is defined as |A| = sqrt(a1^2 + a2^2 + ... + an^2)
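These two definitions translate directly into code (a Python sketch; function names are illustrative):

```python
import math

def dot(a, b):
    """Inner product A.B = a1*b1 + a2*b2 + ... + an*bn."""
    return sum(x * y for x, y in zip(a, b))

def magnitude(a):
    """|A| = sqrt(a1^2 + a2^2 + ... + an^2)."""
    return math.sqrt(sum(x * x for x in a))
```

For example, dot((1, 2, 3), (4, 5, 6)) is 32, and magnitude((3, 4)) is 5.0.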
34 The Vector Space Model
Each document and query is represented by a vector, obtained from its set of index terms with associated weights. The document and query representatives are considered as vectors in n-dimensional space, where n is the number of unique terms in the dictionary/document collection. Vector similarity is measured by:
- the inner product
- the value of the cosine of the angle between the two vectors
35 Vector Space
Assume that the document is represented by vector D and the query by vector Q. The total number of terms in the dictionary is n. The similarity between D and Q is measured by the angle θ between them.
37 Cosine
The similarity between D and Q can be written as:
cos θ = (D.Q) / (|D| |Q|)
Using the weights of the terms as the components of D and Q:
cos θ = (d1q1 + d2q2 + ... + dnqn) / (sqrt(d1^2 + ... + dn^2) × sqrt(q1^2 + ... + qn^2))
38 Simple Example (1)
Assume there are 2 terms in the dictionary (t1, t2).
- Doc-1 contains t1 and t2, with weights 0.5 and 0.3 respectively.
- Doc-2 contains t1 with weight 0.6.
- Doc-3 contains t2 with weight 0.4.
- The query contains t2 with weight 0.5.
39 Simple Example (2)
The vectors for the query and documents:
Doc#   w_t1   w_t2
1      0.5    0.3
2      0.6    0.0
3      0.0    0.4
Doc-1 = (0.5, 0.3)
Doc-2 = (0.6, 0)
Doc-3 = (0, 0.4)
Query = (0, 0.5)
41 Simple Example - Cosine
Similarity measured between the Query (Q) and each document:
cos(Q, Doc-1) = (0×0.5 + 0.5×0.3) / (0.5 × sqrt(0.5^2 + 0.3^2)) ≈ 0.51
cos(Q, Doc-2) = 0
cos(Q, Doc-3) = (0.5×0.4) / (0.5 × 0.4) = 1.0
Ranked output: D3, D1, D2
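This ranking can be reproduced with a short, self-contained sketch (Python; the vectors are copied from the previous slide):

```python
import math

def cosine(d, q):
    """Cosine of the angle between vectors d and q (0.0 if either is zero)."""
    num = sum(x * y for x, y in zip(d, q))
    den = math.sqrt(sum(x * x for x in d)) * math.sqrt(sum(y * y for y in q))
    return num / den if den else 0.0

query = (0.0, 0.5)
vectors = {"D1": (0.5, 0.3), "D2": (0.6, 0.0), "D3": (0.0, 0.4)}

sims = {d: cosine(v, query) for d, v in vectors.items()}
ranking = sorted(sims, key=sims.get, reverse=True)   # ['D3', 'D1', 'D2']
```

Doc-3 scores a perfect 1.0 because its vector points in exactly the same direction as the query, despite their different magnitudes.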
42 Large Example (1)
Consider the same five-document collection:
D1 = "Dogs eat the same things that cats eat"
D2 = "No dog is a mouse"
D3 = "Mice eat little things"
D4 = "Cats often play with rats and mice"
D5 = "Cats often play, but not with other cats"
indexed by:
V1 = (dog, eat, cat)
V2 = (dog, mouse)
V3 = (mouse, eat)
V4 = (cat, play, rat, mouse)
V5 = (cat, play)
43 Large Example (2)
The set of all terms (dictionary): (cat, dog, eat, mouse, play, rat)
Using tf.idf weights, we obtain:
v1 = (cat (0.51), eat (1.82), dog (0.91))
v2 = (dog (0.91), mouse (0.51))
v3 = (mouse (0.51), eat (0.91))
v4 = (cat (0.51), play (0.91), rat (1.61), mouse (0.51))
v5 = (cat (1.02), play (0.91))
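These weights can be reproduced from per-document term counts (a Python sketch; the counts are transcribed from the collection, e.g. "eat" occurs twice in D1 and "cat" twice in D5, and the results match the slide after rounding to two decimals):

```python
import math

counts = {
    "D1": {"dog": 1, "eat": 2, "cat": 1},
    "D2": {"dog": 1, "mouse": 1},
    "D3": {"mouse": 1, "eat": 1},
    "D4": {"cat": 1, "play": 1, "rat": 1, "mouse": 1},
    "D5": {"cat": 2, "play": 1},
}
N = len(counts)

# Document frequency of each term
df = {}
for terms in counts.values():
    for t in terms:
        df[t] = df.get(t, 0) + 1

# tf.idf weight w_ij = tf_ij * log_e(N / df_i)
weights = {
    d: {t: tf * math.log(N / df[t]) for t, tf in terms.items()}
    for d, terms in counts.items()
}
```

For instance, rat appears in only one document, so its weight in D4 is ln(5/1) ≈ 1.61, the largest single-occurrence weight in the collection.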
44 Large Example (3)
In the vector space model, ordering the dimensions as (cat, dog, eat, mouse, play, rat), we obtain the vectors:
D1 = (0.51, 0.91, 1.82, 0.00, 0.00, 0.00)
D2 = (0.00, 0.91, 0.00, 0.51, 0.00, 0.00)
D3 = (0.00, 0.00, 0.91, 0.51, 0.00, 0.00)
D4 = (0.51, 0.00, 0.00, 0.51, 0.91, 1.61)
D5 = (1.02, 0.00, 0.00, 0.00, 0.91, 0.00)
This is a 6-dimensional space for 6 terms.
45 Inner-Product
Query: "what do cats play with?" forms the query vector (0.51, 0.00, 0.00, 0.00, 0.91, 0.00).
D1 = 0.51×0.51 + 0.91×0.00 + 1.82×0.00 + 0.00×0.00 + 0.00×0.91 + 0.00×0.00 = 0.2601
D2 = 0.00×0.51 + 0.91×0.00 + 0.00×0.00 + 0.51×0.00 + 0.00×0.91 + 0.00×0.00 = 0
D3 = 0.00×0.51 + 0.00×0.00 + 0.91×0.00 + 0.51×0.00 + 0.00×0.91 + 0.00×0.00 = 0
D4 = 0.51×0.51 + 0.00×0.00 + 0.00×0.00 + 0.51×0.00 + 0.91×0.91 + 1.61×0.00 = 1.0882
D5 = 1.02×0.51 + 0.00×0.00 + 0.00×0.00 + 0.00×0.00 + 0.91×0.91 + 0.00×0.00 = 1.3483
Ranking: D5, D4, D1, D2, D3
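The inner-product scores can be checked in a few lines (Python; the vectors are copied from the previous slides):

```python
vectors = {
    "D1": (0.51, 0.91, 1.82, 0.00, 0.00, 0.00),
    "D2": (0.00, 0.91, 0.00, 0.51, 0.00, 0.00),
    "D3": (0.00, 0.00, 0.91, 0.51, 0.00, 0.00),
    "D4": (0.51, 0.00, 0.00, 0.51, 0.91, 1.61),
    "D5": (1.02, 0.00, 0.00, 0.00, 0.91, 0.00),
}
query = (0.51, 0.00, 0.00, 0.00, 0.91, 0.00)

# Inner product of each document vector with the query vector
scores = {d: sum(x * y for x, y in zip(v, query)) for d, v in vectors.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
```

This reproduces the ranking D5, D4, D1 ahead of the zero-scoring D2 and D3.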
46 Cosine Similarity
Query: "what do cats play with?" forms the query vector (0.51, 0.00, 0.00, 0.00, 0.91, 0.00).
Using the cosine measure (cm), we obtain the following similarity measures:
D1 = 0.51^2 / [(0.51^2 + 0.91^2 + 1.82^2)^0.5 × (0.51^2 + 0.91^2)^0.5] ≈ 0.12
D2 = 0.0
D3 = 0.0
D4 = (0.51^2 + 0.91^2) / [(0.51^2 + 0.51^2 + 0.91^2 + 1.61^2)^0.5 × (0.51^2 + 0.91^2)^0.5] ≈ 0.53
D5 = (0.51×1.02 + 0.91^2) / [(1.02^2 + 0.91^2)^0.5 × (0.51^2 + 0.91^2)^0.5] ≈ 0.95
Thus we obtain the ranking: D5, D4, D1, D2, D3 (D2 and D3 tie at 0, so D3, D2 is equally valid).
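The cosine measures can likewise be checked with a self-contained sketch (Python; same vectors as the inner-product slide):

```python
import math

vectors = {
    "D1": (0.51, 0.91, 1.82, 0.00, 0.00, 0.00),
    "D2": (0.00, 0.91, 0.00, 0.51, 0.00, 0.00),
    "D3": (0.00, 0.00, 0.91, 0.51, 0.00, 0.00),
    "D4": (0.51, 0.00, 0.00, 0.51, 0.91, 1.61),
    "D5": (1.02, 0.00, 0.00, 0.00, 0.91, 0.00),
}
query = (0.51, 0.00, 0.00, 0.00, 0.91, 0.00)

def cosine(d, q):
    """Cosine of the angle between d and q: (d.q) / (|d||q|)."""
    num = sum(x * y for x, y in zip(d, q))
    den = math.sqrt(sum(x * x for x in d)) * math.sqrt(sum(y * y for y in q))
    return num / den if den else 0.0

sims = {d: cosine(v, query) for d, v in vectors.items()}
ranking = sorted(sims, key=sims.get, reverse=True)
```

Note that the cosine measure and the plain inner product agree on the top of the ranking here (D5, D4, D1), but the cosine normalises away document length, which matters when documents differ greatly in size.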