
1 CSCI 5417 Information Retrieval Systems Jim Martin Lecture 20 11/3/2011

2 Today Finish PageRank, HITS, and start ML-based ranking

3 PageRank Sketch The PageRank of a page is based, roughly, on the PageRank of the pages that point at it.

4 PageRank scoring Imagine a browser doing a random walk on web pages: start at a random page; at each step, go out of the current page along one of the links on that page, chosen equiprobably. "In the steady state" each page has a long-term visit rate; use this as the page's score. Pages with low rank are pages rarely visited during a random walk.

5 Not quite enough The web is full of dead ends: pages that are pointed to but have no outgoing links. A random walk can get stuck in such dead ends, so it makes no sense to talk about long-term visit rates in their presence.

6 Teleporting At a dead end, jump to a random web page. At any non-dead end, with probability 10%, jump to a random web page; with the remaining probability (90%), go out on a random link. The 10% is a parameter (call it alpha).

7 Result of teleporting Now you can't get stuck locally. There is a long-term rate at which any page is visited. How do we compute this visit rate? We can't directly use the random walk metaphor.

8 State Transition Probabilities We're going to use the notion of a transition probability: if we're in some particular state, what is the probability of going to some other particular state from there? If there are n states (pages), then we need an n x n table of probabilities.
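Below is a minimal sketch (mine, not the lecture's code) of building exactly this n x n table in Python, using the teleporting rule from slide 6. Here links maps each page to the pages it links to, and alpha is the teleport probability; with alpha = 0.5 and the three-page graph of the next example, it reproduces the matrix shown there.

import numpy as np

def transition_matrix(links, alpha=0.1):
    pages = sorted(links)                       # fix an ordering of the n states
    n = len(pages)
    index = {p: i for i, p in enumerate(pages)}
    P = np.zeros((n, n))
    for p in pages:
        i = index[p]
        out = links[p]
        if not out:                             # dead end: jump to a random page
            P[i, :] = 1.0 / n
        else:                                   # teleport with prob. alpha, else follow a random outlink
            P[i, :] = alpha / n
            for q in out:
                P[i, index[q]] += (1 - alpha) / len(out)
    return pages, P

# Example use: pages, P = transition_matrix({1: [2], 2: [1, 3], 3: [2]}, alpha=0.5)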

9 Markov Chains So if I'm in a particular state (say the start of a random walk), and I know the whole n x n table, then I can compute the probability distribution over the states I might be in at the next step of the walk... and at the step after that, and the step after that.

10-15 Example: say alpha = 0.5 The slide figure shows three pages: page 1 links to 2, page 2 links to 1 and 3, and page 3 links to 2. With alpha = 0.5, P(3 -> 2) = 0.5 * 1 + 0.5 * (1/3) = 2/3, and the full row for state 3 is P(3 -> *) = (1/6, 2/3, 1/6). The complete transition matrix is:

         to 1    to 2    to 3
from 1    1/6     2/3     1/6
from 2   5/12     1/6    5/12
from 3    1/6     2/3     1/6

Assume we start a walk in state 1 at time T0. Then what should we believe about the state of affairs at T1? What should we believe about things at T2?
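A quick sketch (my own, not from the slides) answering that question: start the walk in state 1 at T0 and push the distribution forward with the transition matrix above.

import numpy as np

P = np.array([[1/6,  2/3,  1/6],
              [5/12, 1/6,  5/12],
              [1/6,  2/3,  1/6]])

x0 = np.array([1.0, 0.0, 0.0])   # at T0 we are certainly in state 1
x1 = x0 @ P                      # belief at T1: (1/6, 2/3, 1/6)
x2 = x1 @ P                      # belief at T2: (1/3, 1/3, 1/3)
print(x1, x2)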

16 Example (figure: a small web graph annotated with its PageRank values)

17 More Formally A probability (row) vector x = (x1, ..., xN) tells us where the random walk is at any point. Example: (0 0 0 ... 1 ... 0 0 0), i.e., the walk is on page i with probability 1. More generally, the random walk is on page i with probability xi, with the xi summing to 1. Example: (0.05 0.01 0.0 ... 0.2 ... 0.01 0.05 0.03).

18-20 Change in probability vector If the probability vector is x = (x1, ..., xN) at this step, what is it at the next step? Recall that row i of the transition probability matrix P tells us where we go next from state i. So from x, our next state is distributed as xP.

21-25 Steady state in vector notation The steady state in vector notation is simply a vector pi = (pi1, pi2, ..., piN) of probabilities. (We use pi to distinguish it from the notation for the probability vector x.) pi_i is the long-term visit rate (or PageRank) of page i. So we can think of PageRank as a very long vector, with one entry per page.

26-32 Steady-state distribution: Example What is the PageRank / steady state in this example? Two pages d1 and d2, with transition probabilities P11 = 0.25, P12 = 0.75, P21 = 0.25, P22 = 0.75. The update equations are
Pt(d1) = Pt-1(d1) * P11 + Pt-1(d2) * P21
Pt(d2) = Pt-1(d1) * P12 + Pt-1(d2) * P22

       x1 = Pt(d1)   x2 = Pt(d2)
t0        0.25          0.75
t1        0.25          0.75      (convergence)

PageRank vector = pi = (pi1, pi2) = (0.25, 0.75)
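As a sanity check (my own sketch, not from the slides): applying the update equations once to (0.25, 0.75) leaves it unchanged, confirming it is the steady state of this chain.

import numpy as np

P = np.array([[0.25, 0.75],    # row for d1: P11, P12
              [0.25, 0.75]])   # row for d2: P21, P22

pi = np.array([0.25, 0.75])
print(pi @ P)                  # [0.25 0.75] -- unchanged, so pi = pi P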

33-41 How do we compute the steady state vector? In other words: how do we compute PageRank? Recall: pi = (pi1, pi2, ..., piN) is the PageRank vector, the vector of steady-state probabilities... and if the distribution in this step is x, then the distribution in the next step is xP. But pi is the steady state! So: pi = pi P. Solving this matrix equation gives us pi. pi is the principal left eigenvector of P; that is, the left eigenvector with the largest eigenvalue.
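A hedged sketch (not from the lecture) of computing pi directly via that eigenvector characterization, here for the 2 x 2 example chain above; left eigenvectors of P are right eigenvectors of its transpose.

import numpy as np

P = np.array([[0.25, 0.75],
              [0.25, 0.75]])

eigenvalues, eigenvectors = np.linalg.eig(P.T)            # right eigenvectors of P^T = left eigenvectors of P
principal = eigenvectors[:, np.argmax(eigenvalues.real)].real
pi = principal / principal.sum()                          # normalize to a probability distribution
print(pi)                                                 # [0.25 0.75]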

42-50 One way of computing the PageRank pi Start with any distribution x, e.g., the uniform distribution. After one step, we're at xP. After two steps, we're at xP^2. After k steps, we're at xP^k. Algorithm: multiply x by increasing powers of P until convergence. This is called the power method. Recall: regardless of where we start, we eventually reach the steady state pi. Thus: we will eventually reach the steady state.
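A minimal power-method sketch (my own, assuming a row-stochastic transition matrix P with teleportation already applied):

import numpy as np

def pagerank_power(P, tol=1e-10, max_iter=1000):
    n = P.shape[0]
    x = np.full(n, 1.0 / n)            # start from the uniform distribution
    for _ in range(max_iter):
        x_next = x @ P                 # one step: x -> xP
        if np.abs(x_next - x).sum() < tol:
            break                      # converged: x_next is (approximately) pi
        x = x_next
    return x_next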

51-52 Power method: Example What is the PageRank / steady state in this example?

53-65 Computing PageRank: Power Example Two pages d1 and d2 with P11 = 0.1, P12 = 0.9, P21 = 0.3, P22 = 0.7. Start from x = (0, 1) and repeatedly apply
Pt(d1) = Pt-1(d1) * P11 + Pt-1(d2) * P21
Pt(d2) = Pt-1(d1) * P12 + Pt-1(d2) * P22

       Pt(d1)   Pt(d2)
t0     0        1         -> xP     = (0.3,    0.7)
t1     0.3      0.7       -> xP^2   = (0.24,   0.76)
t2     0.24     0.76      -> xP^3   = (0.252,  0.748)
t3     0.252    0.748     -> xP^4   = (0.2496, 0.7504)
...
t-inf  0.25     0.75      -> xP^inf = (0.25,   0.75)
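The same iterations as the table above, in a short sketch of my own:

import numpy as np

P = np.array([[0.1, 0.9],
              [0.3, 0.7]])

x = np.array([0.0, 1.0])              # t0: start entirely on d2, as on the slide
for t in range(1, 5):
    x = x @ P
    print(f"t{t}: {x}")               # (0.3, 0.7), (0.24, 0.76), (0.252, 0.748), (0.2496, 0.7504)

for _ in range(100):                  # keep iterating: converges to (0.25, 0.75)
    x = x @ P
print("t-infinity:", x)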

66 Power method: Example What is the PageRank / steady state in this example? The steady state distribution (= the PageRanks) in this example is 0.25 for d1 and 0.75 for d2.

67 PageRank summary Preprocessing: given the graph of links, build the matrix P; apply teleportation; from the modified matrix, compute pi. pi_i is the PageRank of page i. Query processing: retrieve pages satisfying the query; rank them by their PageRank; return the reranked list to the user.

68 PageRank issues Real surfers are not random surfers. Examples of nonrandom surfing: back button, bookmarks, directories, tabs, search, interruptions. So the Markov model is not a good model of surfing, but it's good enough for our purposes. Simple PageRank ranking produces bad results for many pages.

69 How important is PageRank? Frequent claim: PageRank is the most important component of Google's web ranking. The reality: there are several components that are at least as important, e.g., anchor text, phrases, proximity, tiered indexes... Rumor has it that PageRank in its original form (as presented here) now has a negligible impact on ranking. However, variants of a page's PageRank are still an essential part of ranking. Addressing link spam is difficult and crucial.

70 Break Today's colloquium is relevant to the current material.

71 Machine Learning for ad hoc IR We've looked at methods for ranking documents in IR using factors like cosine similarity, inverse document frequency, pivoted document length normalization, PageRank, etc. We've also looked at methods for classifying documents using supervised machine learning classifiers: Naïve Bayes, kNN, SVMs. Surely we can also use such machine learning to rank the documents displayed in search results? (Sec. 15.4)

72 Why is There a Need for ML? Traditional ranking functions in IR used a very small number of features: term frequency, inverse document frequency, document length. It was easy to tune the weighting coefficients by hand, and people did. But you saw how "easy" it was on HW1.

73 Why is There a Need for ML? Modern systems, especially on the Web, use a large number of features: log frequency of query word in anchor text, query term proximity, query word in color on page?, # of images on page, # of (out) links on page, PageRank of page?, URL length?, URL contains "~"?, page edit recency?, page length? The New York Times (2008-06-03) quoted Amit Singhal as saying Google was using over 200 such features.

74 Using ML for ad hoc IR Well, classification seems like a good place to start: take an object and put it in a class, with some confidence. What do we have to work with in terms of training data? Documents, queries, and relevance judgements.

75 Using Classification for ad hoc IR Collect a training corpus of (q, d, r) triples, where relevance r is binary. Documents are represented by a feature vector; say 2 features just to keep it simple: the cosine similarity score between doc and query (note this hides a bunch of "features" inside the cosine: tf, idf, etc.) and the minimum window size around the query words in the doc. Train a machine learning model to predict the class r (relevant / non-relevant) of each document-query pair. Then use classifier confidence to generate a ranking.

76 Training data (figure: example training triples with their feature values)

77 Using classification for ad hoc IR A linear scoring function on these two features is then Score(d, q) = Score(α, ω) = aα + bω + c, and the linear classifier decides relevant if Score(d, q) > θ... just like when we were doing text classification. (Sec. 15.4.1)
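A hedged sketch (my own; the toy numbers and names like cosine score and window size are illustrative, not from the lecture) of the setup on slides 75-77: learn a linear scorer aα + bω + c from (q, d, r) training triples with two features, then rank candidate documents by the classifier's confidence.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is (cosine score alpha, query-term window size omega);
# each label is the binary relevance r.
X_train = np.array([[0.04, 2], [0.03, 4], [0.01, 5], [0.05, 3], [0.02, 5]])
y_train = np.array([1, 1, 0, 1, 0])

clf = LogisticRegression().fit(X_train, y_train)
a, b = clf.coef_[0]              # learned feature weights
c = clf.intercept_[0]            # learned bias; decide "relevant" if a*alpha + b*omega + c > 0

# Rank candidate documents for a query by the classifier's confidence score.
X_candidates = np.array([[0.045, 2], [0.015, 5], [0.03, 3]])
scores = clf.decision_function(X_candidates)    # = a*alpha + b*omega + c for each candidate
ranking = np.argsort(-scores)
print(ranking, scores)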

78 Using classification for ad hoc IR (figure: training examples plotted by cosine score vs. term proximity, with relevant (R) and non-relevant (N) documents separated by a linear decision surface; Sec. 15.4.1)

79 More Complex Cases We can generalize this to classifier functions over more features, and we can use any method we have for learning the linear classifier weights.

80 An SVM Classifier for IR [Nallapati 2004] Experiments: 4 TREC data sets. Comparisons done with Lemur, another state-of-the-art open source IR engine (LM). 6 features, all variants of tf, idf, and tf.idf scores. A linear kernel was normally best or almost as good as a quadratic kernel.

81 An SVM Classifier for IR [Nallapati 2004]

Train \ Test          Disk 3    Disk 4-5   WT10G (web)
Disk 3        LM      0.1785    0.2503     0.2666
              SVM     0.1728    0.2432     0.2750
Disk 4-5      LM      0.1773    0.2516     0.2656
              SVM     0.1646    0.2355     0.2675

At best, the results are about equal to LM; actually a little bit below.

82 An SVM Classifier for IR [Nallapati 2004] Paper's advertisement: it is easy to add more features, especially for specialized tasks. Homepage finding task on WT10G: baseline LM 52% success@10, baseline SVM 58%; SVM with URL-depth and in-link features: 78% S@10.

83 Problem The ranking in this approach is based on the classifier's confidence in its judgment. It's not clear that that should directly determine a ranking between two documents. That is, it gives a ranking of confidence, not a ranking of relevance. Maybe they correlate, maybe not.

84 Learning to Rank Maybe classification isn't the right way to think about approaching ad hoc IR via ML. Background ML: classification problems map to a discrete unordered set of classes; regression problems map to a real value; ordinal regression problems map to an ordered set of classes.

85 Learning to Rank Assume documents can be totally ordered by relevance given a query; these are totally ordered: d1 < d2 < ... < dJ. This is the ordinal regression setup. Assume training data is available consisting of document-query pairs represented as feature vectors ψi and a relevance ranking between them. Such an ordering can be cast as a set of pair-wise judgements, where the input is a pair of results for a single query, and the class is the relevance ordering relationship between them.
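A minimal sketch (my own, not from the lecture) of that pair-wise casting, in the style of RankSVM: for two results of the same query, the input is the difference of their feature vectors ψi - ψj and the class is +1 or -1 depending on which is ranked more relevant.

import numpy as np

def pairwise_examples(psis, relevance):
    """psis: feature vectors for the results of ONE query; relevance: their relevance grades."""
    X, y = [], []
    for i in range(len(psis)):
        for j in range(len(psis)):
            if relevance[i] > relevance[j]:        # i should be ranked above j
                X.append(psis[i] - psis[j])
                y.append(+1)
                X.append(psis[j] - psis[i])        # and the mirrored pair
                y.append(-1)
    return np.array(X), np.array(y)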

86 Learning to Rank But assuming a total ordering across all docs is a lot to expect (think of all the training data). So instead assume a smaller number of categories C of relevance exists. These are totally ordered: c1 < c2 < ... < cJ (definitely relevant, relevant, partially relevant, not relevant, really really not relevant, etc.); we are indifferent to differences within a category. Assume training data is available consisting of document-query pairs represented as feature vectors ψi and relevance rankings based on the categories C.

87 Experiments Based on the LETOR test collection (Cao et al.), an openly available standard test collection with pregenerated features, baselines, and research results for learning to rank. OHSUMED, a MEDLINE subcollection for IR: 350,000 articles, 106 queries, 16,140 query-document pairs, 3-class judgments: Definitely Relevant (DR), Partially Relevant (PR), Non-Relevant (NR).

88 Experiments OHSUMED (from LETOR) Features: 6 that represent versions of tf, idf, and tf.idf factors, plus the BM25 score (IIR Sec. 11.4.3), a scoring function derived from a probabilistic approach to IR which has traditionally done well in TREC evaluations, etc.
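A hedged sketch of the BM25 feature mentioned above, in one common formulation (the parameters k1 and b and the smoothed idf variant are my assumed defaults, not from the lecture): a saturated tf weight times an idf weight, normalized by document length relative to the average.

import math

def bm25_score(query_terms, doc_terms, df, N, avgdl, k1=1.2, b=0.75):
    """df: term -> document frequency; N: number of docs; avgdl: average doc length."""
    dl = len(doc_terms)
    score = 0.0
    for t in query_terms:
        tf = doc_terms.count(t)
        if tf == 0 or t not in df:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)   # smoothed idf weight
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * dl / avgdl))
    return score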

89 Experimental Results (OHSUMED) (figure: results chart)

90 MSN Search Second experiment with MSN search: a collection of 2198 queries, with 6 relevance levels rated:

Definitive     8,990
Excellent      4,403
Good           3,735
Fair          20,463
Bad           36,375
Detrimental      310

91 Experimental Results (MSN search) (figure: results chart)

92 Limitations of Machine Learning Everything that we have looked at (and most work in this area) produces linear models of features by weighting different base features. This contrasts with most of the clever ideas of traditional IR, which are nonlinear scalings and combinations of basic measurements: log term frequency, idf, pivoted length normalization. At present, ML is good at weighting features, but not at coming up with nonlinear scalings. Designing the basic features that give good signals for ranking remains the domain of human creativity.
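A small illustration (my own sketch) of the kind of hand-designed nonlinear scalings the slide refers to, which are then fed to a linear learner as features; the pivoted-normalization slope s is an assumed value.

import math

def sublinear_tf(tf):
    return 1 + math.log(tf) if tf > 0 else 0.0       # log term frequency

def idf(N, df):
    return math.log(N / df)                           # inverse document frequency

def pivoted_length_norm(doc_len, avg_len, s=0.2):
    return 1 - s + s * (doc_len / avg_len)            # pivoted length normalization factor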

93 Summary Machine-learned ranking over many features now easily beats traditional hand-designed ranking functions in comparative evaluations, and there is every reason to think that the importance of machine learning in IR will only increase in the future.

